
Recent Posts

Customer Stories

Thymos Intelligence Selects Oracle Cloud Infrastructure as HPC Cloud Service Provider

Thymos Intelligence Providing an Environment in the Cloud to Run HPC Applications

Tokyo, Japan - 2018/07/26

Oracle Corporation Japan announced today that Thymos Intelligence Corporation has selected Oracle Cloud Infrastructure as the cloud infrastructure for its high-performance computing (HPC) cloud service, iHAB CLUSTER.

Thymos Intelligence advocates the iHAB concept as the future of cloud computing: an environment that users can consume without having to think about whether resources are located on-premises or in the cloud. Under iHAB, Thymos provides three services: iHAB CLUSTER, iHAB Storage, and iHAB DC. iHAB CLUSTER provides the computing resources required for Computer Aided Engineering (CAE) and deep learning.

Thymos Intelligence required a public cloud that could meet iHAB CLUSTER customers' demands: the ability to absorb sudden increases in computing resource demand, and a stable, high-performance environment for running CAE and AI workloads. To make its selection, Thymos Intelligence ran tests with a CAE application used in actual customer workloads and chose Oracle Cloud Infrastructure on the strength of its performance.

The selection points are as follows:

Excellent performance: Bare metal instances provide higher computing performance than virtual machines, and high storage IOPS from NVM Express (NVMe) local and remote block storage enables HPC workloads to run at high performance. The latest NVIDIA Tesla V100 GPUs also deliver excellent performance for deep learning and AI workloads.

Fast and stable network: A low-latency, high-bandwidth (25 Gbps x 2), nonblocking network enables high-performance, stable internode and storage access.

High price-performance: Bare metal instances achieve higher performance than comparably shaped virtual machines, and the low-latency, high-bandwidth network is available without additional fees, so customers get higher performance at a lower cost. In addition, outbound data transfer is free of charge up to 10 TB, so even large data downloads remain inexpensive.

Naohiro Saso, Sales & Marketing Manager at Thymos Intelligence Corporation, commented: "The high-performance cloud computing service iHAB CLUSTER provides the latest computing resources on demand, mainly for analytics workloads in the manufacturing industry. iHAB CLUSTER is configured specially for each customer to match their environment and requirements. It is provided on the high-performance cluster C540, which has the latest CPUs and is located in a data center in the Tokyo metropolitan area, so customers can use the computing resources as if they were their own on-premises resources. To expand its resources, we have added Oracle Cloud Infrastructure to the lineup of iHAB CLUSTER service platforms because of its rapid adoption of new technology and its cost performance. We expect that continued development of Oracle Cloud Infrastructure for HPC will lead to the expansion of the iHAB CLUSTER service."

Reference Information
Thymos Intelligence Co., Ltd.: iHAB CLUSTER
Oracle Cloud Infrastructure

About Oracle Japan
Oracle Corporation Japan is the Japanese subsidiary of Oracle Corporation.
With the slogan "beyond your cloud> commit;", the company provides cloud services that maximize the value of information through a data-driven approach, spanning a broad and deeply integrated set of cloud applications and cloud platform services, together with a range of services that support their use. The company was listed on the First Section of the Tokyo Stock Exchange in 2000 (Securities Code: 4716). URL: www.oracle.com/en

About Oracle
In addition to a wide range of SaaS applications covering ERP, HCM, and Customer Experience (CX), Oracle Cloud offers Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), including the industry's best database, delivered from data centers across the Americas, Europe, and Asia. For more information about Oracle (NYSE: ORCL), please visit www.oracle.com.

* Oracle and Java are registered trademarks of Oracle Corporation and/or its affiliates in the United States and other countries. Company and product names mentioned in this text may be trademarks or registered trademarks of their respective owners. This document is provided for information purposes only, and its contents may not be incorporated into any contract.


Interconnecting Clouds with Oracle Cloud Infrastructure

A multicloud architecture uses more than one cloud service provider. Companies have more than one cloud provider for many reasons: to provide resiliency, to plan for disaster recovery, to increase performance, and to save costs. When companies want to migrate cloud resources from one cloud provider to another, cloud-to-cloud access and networking is required. Oracle Cloud Infrastructure provides the internet gateway (IGW), dynamic routing gateway (DRG), and service gateway options for connecting an Oracle Cloud Infrastructure virtual cloud network (VCN) with the internet, on-premises data centers, or other cloud providers. This post describes the connectivity service options that are available to help you plan your network connectivity to the Oracle Cloud in general, and it discusses connectivity options between cloud providers.

Connectivity Option Overview

All major cloud service providers (CSPs) offer three distinct network connectivity service options:

- Public internet
- IPSec VPN
- Dedicated connections (Oracle's service is called Oracle Cloud Infrastructure FastConnect)

Depending on the workloads and the amount of data that must be transferred, one, two, or all three network connectivity service options are required.

                | Max (Mb/s) | Latency     | Jitter      | Cost        | Secure
Public internet | < 10,000   | Variable    | Variable    | Variable    | No
IPSec VPN       | < 250      | Variable    | Variable    | Variable    | Yes
FastConnect     | < 100,000  | Predictable | Predictable | Predictable | Yes

Public internet provides accessibility from any internet-connected device. IPSec VPN is a secure, encrypted network that provides access by extending your network into the cloud. FastConnect provides dedicated connectivity and offers an alternative to internet connectivity. Because of the exclusive nature of this service, it is more reliable and offers low latency, dedicated bandwidth, and secure access. FastConnect offers the following connectivity models:

- Connectivity via an Oracle network provider or exchange partner
- Connectivity via direct peering within the data center
- Connectivity via dedicated circuits from a third-party network

Connectivity Option Details

Following are the optimal connectivity options. To compare the options based on speed, cost, and time, see the next section, "Choosing Your Connectivity Option."

Option 1: Connecting via an IPSec VPN

IPSec VPN provides added security by encrypting data traffic. The achievable bandwidth over a VPN is limited to 250 Mbps, so multiple VPN tunnels might be required depending on the total amount of data to transfer and the required transfer rate. Step-by-step instructions for creating a secure connection between Oracle Cloud Infrastructure and other cloud providers are available in Secure Connection between Oracle and Other Cloud Providers.

Option 2: Connecting via a Cloud Exchange

Exchange providers can provide connectivity to a large ecosystem of cloud providers over the same dedicated physical connection between on-premises and the exchange provider. Some available providers are Megaport, Equinix, and Digital Realty. To route between the clouds, you have the following options:

- Use the virtual router service from the exchange provider, for example, Megaport Cloud Router (MCR).
- Colocate a physical customer edge (CE) device with the exchange provider.
The following table shows the pros and cons of using a virtual router service versus colocating a physical router with the exchange provider:

                                  | Pros                                                                                 | Cons
Using a virtual router service    | Easy to deploy; provides bandwidth on demand; cost-effective to deploy and maintain | Flexibility to make routing changes is limited to the scope of support from the cloud exchange; no public IP communication
Using a dedicated physical router | Provides flexibility in managing routing functions; lets you deploy your choice of hardware | Long deployment times; scaling limitations; hardware maintenance and associated monetary costs

Although the scope of this blog is to provide optimal connectivity options with a partner-agnostic approach, we are using the Megaport Cloud Router (MCR) option as an example because it's easy to deploy and provides a virtual router service. We are also using Amazon Web Services (AWS) as our example cloud provider connection, although Megaport supports connectivity to many cloud providers, including Azure and Google Cloud Platform.

Setting up the connectivity involves the following steps:

1. Connect FastConnect with Megaport through the Oracle Cloud Infrastructure Console.
2. Connect AWS Direct Connect with Megaport through the AWS console.
3. Create the MCR:
   - Create a Virtual Cross Connect (VXC) connection to FastConnect from the MCR.
   - Create a VXC connection to the connecting cloud provider (for example, AWS Direct Connect) from the MCR.

After you set up FastConnect, the MCR, and the connection with the cloud provider (for example, AWS Direct Connect, Azure ExpressRoute, or Google Cloud Platform), you can access the resources by their private IP addresses, and the traffic is routed via the high-bandwidth, low-latency connection.

Choosing Your Connectivity Option

Use the following high-level information to help you choose your connectivity option. However, be aware that the best connectivity option varies for different use cases. Information is given for AWS Direct Connect as an example.

Speed

FastConnect offers 1G and 10G port speeds. Direct Connect offers port speeds of 50M, 100M, 200M, 300M, 400M, 500M, 1G, and 10G. IPSec VPN speeds are limited to under 500 Mb/s in most cases.

Cost

Oracle FastConnect charges a flat port-hour fee, and there are no charges for data transfer. For more information, see Oracle FastConnect Pricing. The Oracle IPSec VPN service does not charge for inbound data transfer; outbound data transfer is free up to 10 TB, and there is a small fee after the 10-TB limit is exceeded. For more information, see Oracle IPSec VPN Pricing. Amazon pricing has a port fee and a data transfer charge: inbound data is not metered, but outbound data is metered and charged. For more information, see Amazon Direct Connect Pricing. Megaport pricing is based on the rate limit that you choose when you create the MCR. The options available are 100 Mbps, 500 Mbps, and 1, 2, 3, 4, and 5 Gbps. Monthly charging rates are displayed at the time of deployment, based on where you are deploying the MCR and the regions that your connection spans.

Time

Data transfer times depend on the speed choices made at each hop. Comparing dedicated connectivity and IPSec VPN, dedicated connectivity provides a deterministic timeframe because the connectivity uses a private medium and is more reliable and consistent.
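Before looking at concrete numbers, here is a quick back-of-the-envelope sketch in Python for estimating idealized transfer times. It assumes decimal units (1 TB = 10^12 bytes, 1 Gb/s = 10^9 bits per second) and zero protocol overhead, so real transfers take somewhat longer; its output matches the table below to within a few seconds of rounding.

```python
def transfer_time(data_tb: float, rate_gbps: float) -> str:
    """Idealized transfer time for data_tb terabytes at rate_gbps gigabits/second."""
    bits = data_tb * 1e12 * 8                  # terabytes -> bits (decimal units)
    seconds = int(bits / (rate_gbps * 1e9))    # line rate, no protocol overhead
    days, rem = divmod(seconds, 86_400)
    hours, rem = divmod(rem, 3_600)
    minutes, secs = divmod(rem, 60)
    prefix = f"{days}d" if days else ""
    return f"{prefix}{hours}h{minutes}m{secs}s"

for data_tb in (10, 100, 1_000, 10_000):
    print(f"{data_tb:>6} TB @ 1 Gb/s -> {transfer_time(data_tb, 1)}")
# e.g., 10 TB @ 1 Gb/s -> 22h13m20s, in line with the table's first row
```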
The following table shows hypothetical transfer times, based on bandwidth, for moving data from AWS to Oracle Cloud Infrastructure:

Rate (Gb/s) | 10 TB     | 100 TB     | 1,000 TB     | 10,000 TB
1           | 22h13m12s | 9d6h13m12s | 92d14h13m12s | 925d22h13m12s
10          | 2h13m12s  | 22h13m12s  | 9d6h13m12s   | 92d14h13m12s
100         | 13m12s    | 2h13m12s   | 22h13m12s    | 9d6h13m12s

Summary

This post discusses the intercloud connectivity options that are available in general and how multicloud access can be implemented with Oracle Cloud Infrastructure. It provides high-level indicators that can help you define your connectivity path, and it compares the available connectivity options to help you choose the optimum connectivity for your use case. For more information and a detailed step-by-step guide for connectivity, see the Migrating Oracle Databases from Amazon Web Services to Oracle Cloud Infrastructure Database white paper.


Oracle Cloud Infrastructure

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Jean Rodrigues

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Jean Rodrigues of Oracle. Jean is a Principal IT Consultant working in Oracle's Managed Cloud Services group, a global team that implements, runs, and maintains services for customers who have their workloads fully managed by Oracle. His role includes providing technical leadership and architecting customers' Oracle Cloud Infrastructure and Cloud at Customer workloads.

Greg: Jean, how did you prepare for the certification?

Jean: It was an exciting journey. I've been working in cloud for a while, and I have followed the development of Oracle Cloud Infrastructure because I truly believe it is a great offering from Oracle that will benefit many enterprise customers. When the Oracle Cloud Infrastructure Architect Associate certification launched, I immediately started preparing by following the learning path published on the exam page. I took the training, went over the documentation, did hands-on exercises, and took the practice exam. Additionally, I attended Oracle Training, which greatly helped me prepare. The instructor explained the concepts very well and provided valuable real-world examples. I highly recommend that training.

Greg: How long did it take you to prepare for the exam?

Jean: I took around two months to prepare, spending around one hour a day reading and practicing in the environment. I booked the exam through Pearson VUE, showed up 15 minutes early, and everything went smoothly.

Greg: How is life after getting certified?

Jean: I received great feedback from management and coworkers on this accomplishment, and I was glad to see that some of them were inspired to prepare to take the exam as well. I've helped some of my colleagues with their preparation, and I am pretty sure that soon we will have more Oracle Cloud Infrastructure Architect Associates within the team. Preparing for this exam helped me acquire a huge amount of knowledge in advanced cloud topologies, mainly around networking, distributed computing, and cloud native. It's just awesome to see how microservices architectures, Docker, Kubernetes, and other cutting-edge patterns and technologies can help customers innovate. Today I feel confident helping customers design a highly available, high-performance, and cost-effective architecture in Oracle Cloud Infrastructure.

Greg: Any other advice you'd like to share?

Jean: Stay focused and have fun. As I like to say, it is not about the credential you earn, it is about all the learning and expertise you will acquire down the road. Hands-on practice using a trial account helps tremendously.

If you want to follow Jean's advice, go to the Oracle Cloud Infrastructure 2018 Architect Associate page to learn more about training materials and courses, and to register for your exam.
Greg Hyman
Principal Program Manager, Oracle Cloud Infrastructure Certification
greg.hyman@oracle.com
Twitter: @GregoryHyman
LinkedIn: GregoryRHyman

Associated links:
- Oracle Cloud Infrastructure 2018 Architect Associate exam
- Oracle Cloud Infrastructure 2018 Architect Associate study guide
- Oracle Cloud Infrastructure 2018 Architect Associate practice test
- Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam

Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series: Umair Siddiqui, Nitin Vengurlekar, Rajib Kundu, Miranda Swenson, Robby Robertson, Chris Riggin, Anuj Gulati


Performance

Oracle Tests Better in Performance than Amazon Web Services

Independent testing by StorageReview shows that Oracle Cloud Infrastructure Compute bare metal instances have a 2X-5X performance advantage, with comparable or dramatically lower pricing, compared to similar configurations from Amazon Web Services (AWS) across a wide range of workloads.

The Testing: End-to-End Workload Performance

In March 2018, StorageReview gave Oracle an Editor's Choice award for the performance and innovation that they saw when testing Oracle Cloud Infrastructure bare metal and virtual machine instances. At the time, Oracle Cloud Infrastructure was the only cloud that they had tested, but the results compared favorably to on-premises configurations running the same workloads. In August 2018, StorageReview tested AWS i3.metal bare metal instances across the same range of workloads they had run for Oracle previously, and the results were a strong validation of the Oracle Cloud Infrastructure performance proposition for customers.

The testing done in the StorageReview lab covers more than storage. It is end-to-end workload performance testing, and it measures all the components that make up the user's experience on the tested platforms. The results provide an aggregate measurement of performance across compute, storage, and network components, and are about as close as a lab can get to estimating the performance that a user is likely to see.

The Results: Oracle Is Up to 5X Faster than AWS

In the testing, Oracle demonstrated up to 5X the performance when running on remote block storage, and double the performance when running workloads on local SSD storage. Every workload tested, including Oracle Database, Microsoft SQL Server, 4K random read and random write, 64K sequential read and sequential write, and a variety of virtual desktop workloads, showed a similar performance advantage for Oracle Cloud Infrastructure in comparison with the results for AWS. Additionally, the latency recorded at peak performance was far lower on Oracle, and the percentage of recorded performance with latency below 1 ms, the common threshold for application usability, was far higher. Latency has a powerful impact on variability of performance. Customers running performance-sensitive systems of record need performance consistency, one of the key design points of Oracle Cloud Infrastructure, and these results show that Oracle can deliver a higher level of consistency than AWS in addition to a higher level of performance.

Superior Oracle Database Workload Performance

When we designed Oracle Cloud Infrastructure, we knew that a primary use case for our customers would be Oracle Database and the critical business applications that run on top of our database, so we knew we had to deliver exceptional results for these demanding workloads. The results showed that we hit the mark. For performance-intensive database workloads, Oracle Cloud Infrastructure offers performance results that are head and shoulders above the capabilities offered by AWS. The results with a configuration that uses remote block storage, network-connected to bare metal instances on both clouds, show the most dramatic advantage for Oracle. Oracle provides 5X the performance, as seen here:

How does Oracle get such a big advantage over AWS? With the remote block storage configuration, the answer comes down to the unique cloud architecture we've built to address the needs of enterprise users, and more specifically, how we built our network and our block storage service.
Oracle has a next-generation cloud network that connects our cloud components, including the links between servers and the block storage subsystems. The network has no resource oversubscription, so performance doesn't get compromised when the network gets busy. Further, we used a flat network topology, which reduces the number of hops and the associated latency between any two devices. Off-box network virtualization offloads the effort from the server, which reduces the performance tax that customers would otherwise see. Finally, storage traffic uses the full 25-Gbps pipe to the server, while AWS confines storage traffic to its EBS-optimized link, which is limited to 10 Gbps on its bare metal instance.

The Oracle Block Volume service is designed for maximum performance, with all-SSD capacity, and delivers the highest IOPS-per-GB and IOPS-per-instance metrics of any block storage service in the cloud. One of the key things that you can see in the performance comparison for remote block storage is that a higher percentage of the IOPS Oracle delivers is usable, with latency below 1 ms, the common threshold for application latency tolerance. In this graph, the percentage of unusable IOPS of the peak recorded for Oracle is 10%, while Amazon records 25% of its peak IOPS at unusable latency levels, both represented by the hashed bars at the top of the peak IOPS levels. Higher levels of latency contribute to variability of performance at high levels of performance. Part of Oracle's design point in cloud is to cap performance before latency becomes a major issue, making the performance we deliver less variable and delivering better results for critical workloads that need consistency as much as they need high performance.

With the local SSD configuration, the Oracle performance advantage for Oracle Database workloads is slimmer, but still significant. In this case, Oracle provides double the performance, but also gives customers more than 3X the local storage capacity, making this extremely high-performance configuration far more usable for workloads that need to scale capacity over time. The comparative performance for local storage configurations can be seen here:

Fewer factors go into the performance difference when local NVMe SSD storage is used. Both vendors use a similar media type, and there is no network connection between server and storage to affect performance, because the storage sits on board the bare metal server. In this case, the Oracle advantage comes from the SSD drive itself, which has built-in cache that increases performance enough to drive the 2X performance benefit demonstrated. In addition to twice the performance recorded when running on SSD, Oracle offers 51 TB of SSD on our bare metal instances, while AWS offers just 15 TB, meaning that customers are far more likely to be able to accommodate large-scale applications, as well as the capacity needed for data redundancy and ongoing data growth, on local SSD with Oracle than with AWS.

Superior Performance for SQL Server, Virtual Desktop, and General Workloads

While we built Oracle Cloud Infrastructure to be optimized for Oracle Database, the enterprise-optimized infrastructure we built also has significant performance advantages over AWS for all the other workloads that StorageReview tested. Customers with demanding performance requirements for any category of workloads will clearly find a good home with Oracle Cloud.
Here are the results for running Microsoft SQL Server, with Oracle delivering double the performance on local SSD and more than 5X with remote block storage, along with far better usable IOPS:

Here's what StorageReview measured for a 4K random write workload, with Oracle showing more than double the performance on local SSD and just under 5X on remote block storage:

And finally, this is how it broke down for a virtual desktop infrastructure (VDI) workload, a test of initial login, with Oracle showing 2.6X the performance on local SSD and almost 5X with remote block storage:

Price for Oracle Block Storage Is 19X Lower for Up to 5X Higher Performance

The last thing is price. Although Oracle delivers a huge performance advantage, the cost is lower than AWS in most cases, as has been validated in other independent analyses. For block storage, StorageReview built the highest-performance configuration possible on AWS so that it would compare as favorably as possible. The problem with that, however, is that AWS makes customers pay for the amount of input/output performance that they consume, which drives up the cost dramatically. For this series of testing, Oracle delivers 4-5X more performance at 19X lower cost. In the configuration that StorageReview tested, the total cost for the AWS solution was $69,794 per month, driven largely by the cost of storage performance, which customers must forecast and pay for on Amazon's high-performance storage offering, Elastic Block Storage Provisioned IOPS. The Oracle Cloud Infrastructure configuration, with higher performance across all workloads, cost $3,697 per month, with Oracle's Block Volume service delivering superior performance without charges for IOPS consumption.

In the local storage configuration, Oracle costs slightly more than AWS, by about 25%. However, Oracle also offers double the performance, more memory, and 3.4X the local storage capacity, meaning that we can run bigger workloads and accommodate more workload growth over time. For customers that care about performance, this is an equation that delivers tremendous value.

We built Oracle Cloud Infrastructure to deliver consistent high performance for demanding enterprise workloads of all kinds, and we're thrilled to see the advantages of our design demonstrated so clearly. We invite users to try Oracle Cloud to see how it can help them solve their biggest business challenges with the confidence of industry-leading performance that doesn't break the bank.


Product News

Taking a Look at the Oracle Cloud Infrastructure Storage Gateway

Object storage is great for managing unstructured data at scale, but often it's not that easy to use with existing applications because you need to modify the applications and learn new APIs. Or, perhaps you simply want to work with file systems because that's what you're used to. In these cases, a storage gateway is what you need. Oracle Cloud Infrastructure Storage Gateway makes Oracle Cloud Infrastructure Object Storage appear like a NAS, providing seamless, no-fuss access to the cloud for businesses with file-based workflows. There's no need to re-architect legacy applications or to disrupt users' access to the files they're working with.

Top 5 Features of Oracle Storage Gateway

Here are my top 5 reasons why Storage Gateway is great for your cloud data use cases.

1. Removes Data Lock-In: Data Is Accessible in Native Format
Any file that you write to a Storage Gateway file system is written as an object with the same name in its associated Oracle Cloud Infrastructure Object Storage bucket (with its file attributes stored as object metadata). This means that you don't need the gateway to read back your data; you can access your files directly from the bucket by using Oracle APIs, SDKs, the HDFS connector, third-party tools, the CLI, and the Console. A Refresh operation in Storage Gateway lets you read back, as files, any objects that were added directly to the Object Storage bucket by other applications. Your data is now available in the same format both on-premises and from within Oracle Cloud Infrastructure.

2. No Cost, Easy to Set Up
Storage Gateway runs as a Linux Docker instance on a local host with local disk storage used for caching, or it can run in an Oracle Cloud Infrastructure Compute instance with attached block storage.

3. Storage Cache for High Performance to the Cloud
Configure the cache storage to be large enough to hold your largest data set or the files you want low-latency, local access to. Then, any files written into file systems that you create on your local gateway are written asynchronously and efficiently over the WAN to the cloud. When this data becomes active again, it can be brought back into the local Storage Gateway cache.

4. Keep Files You Need Fast Access to Pinned to Local Storage
Files that you know you'll want high-speed access to can be pinned to remain in the cache while you need them, eliminating undesirable latency between your users and data in the cloud.

5. Capacity Without Limit
Adding Storage Gateway to your existing storage environment means that you can take advantage of the durability and massive scale of Object Storage. Your data sets can expand and contract without the expense of provisioning new hardware. Grow as fast and as large as you need to while paying only for the storage that you consume.

Store Data Where It Makes the Best Sense for Your Business

The gateway effectively expands your storage footprint to leverage the price-performance advantage of the highly durable and secure Object Storage. Moving less-frequently accessed data to the cloud frees up expensive on-premises storage and helps reduce NAS sprawl.

Top 5 Problems, Solved!

Here are my top 5 choices for business problems that Storage Gateway addresses today:

1. Migrating Data to the Cloud
When you decide to move data into the cloud, the initial data migration often becomes an obstacle because of limited upload bandwidth over your WAN or just sheer data volume. In these cases, the new Oracle Data Transfer Service makes sense.
When network speed isn't the issue, Storage Gateway is a great choice. Start writing the data that you need in the cloud to your storage gateway, and your data is asynchronously and efficiently written to your storage bucket in Object Storage. After your initial data is in Object Storage, it's easy to incrementally add new or modified files by using your on-premises storage gateway.

2. Hybrid Cloud Workloads and Data Processing
If you're considering or already running applications and big data services in Oracle Cloud, Storage Gateway makes it easy to upload local files to one or more Object Storage buckets for them to use. For cloud-native applications and services, you can access this data directly from the bucket. For file-based applications, you launch a Compute instance in the cloud, install a storage gateway on it, and then use it to read and write your data. After running applications in the cloud, you can write the results back to local storage via the gateway.

3. Nearline Content Repositories and Data Distribution
When you end a project, you often need to keep some files available on less expensive, nearline cloud storage so that they are more readily sharable for reuse. Using Storage Gateway to migrate these assets from expensive NAS to a cooler tier of cloud storage shifts the storage costs from a capital expense to an operational budget and provides always-on access to and reuse of these assets across geographies and organizations.

4. Back Up and Archive with 3-2-1 Data Protection
Many institutions store backups on local NAS systems or tape. Based on business policies, these full or partial backups might be kept for just a few weeks or for several months or years. Being able to tier older backups to the cloud and keep just the most recent backup in local cache can offer tremendous space and cost savings and let you meet backup and recovery SLAs. Using Storage Gateway as an on-ramp to the cloud makes it easy to adhere to the 3-2-1 best-practice rule for backup and recovery:

- Have at least 3 copies of data. (Move 1 or both backup copies into the cloud, keeping the original onsite.)
- Use 2 different storage types. (Cloud counts as a different storage type.)
- Keep at least 1 copy of data offsite. (Select your object storage cloud region.)

5. Tiered Storage and NAS Capacity Expansion
Storage Gateway essentially expands your on-premises storage to include Oracle Cloud Infrastructure Object Storage. The Storage Gateway cache lets you tier data by asynchronously moving colder, tier-2 data to the cloud while keeping it readily accessible. Data you might once have considered moving to tape to free up more expensive online local storage can now be tiered off to Object Storage, where it can still be accessed as needed. By adding Storage Gateway to your existing NAS environment, you can take advantage of Object Storage durability, massive scale, and pay-as-you-grow pricing while ensuring low-latency access to recently accessed (or pinned) data.

A Final Thought

Storage Gateway is the evolution of the Storage Software Appliance gateway product. If you're using Oracle Cloud Infrastructure Object Storage, you'll want to use Storage Gateway with its enhanced file-to-object transparency and other sophisticated features. Over the coming months, we're adding more features and explaining more use cases, so please stay tuned for more!
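As a concrete footnote to feature 1 above (data accessible in native format): because every file written through the gateway lands as a same-named object, you can inspect your data with any Object Storage client. Here is a minimal sketch using the OCI Python SDK; the bucket name is a placeholder for the bucket backing your own gateway file system.

```python
import oci

# Credentials are read from ~/.oci/config (DEFAULT profile assumed).
config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)

namespace = object_storage.get_namespace().data
bucket = "my-gateway-bucket"  # placeholder: the bucket behind your gateway file system

# Each file written through Storage Gateway appears here as a same-named object.
listing = object_storage.list_objects(namespace, bucket, fields="name,size,timeCreated")
for obj in listing.data.objects:
    print(obj.name, obj.size, obj.time_created)
```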


Product News

Introducing Updateable Instance Metadata

Some of our most security-conscious customers are governments. In discussions with several of these customers, the idea of a secure compute enclave was raised. They described it as an environment where highly sensitive data can be used while not requiring, or even allowing, inbound connectivity. Starting today, customers can update instance metadata on all OCI instances via the OCI API, SDKs, and the CLI. Updateable Instance Metadata enables an atypical, secure communications channel to compute instances that does not require any externally accessible services. Customers can now more easily build secure compute enclaves for highly sensitive workloads.

Instance metadata and cloud-init are two of the little pieces of magic that make IaaS so compelling. Instance metadata has always been leveraged at initial launch by customers who rely on cloud-init (or cloudbase-init for Windows) to configure an instance. That configuration could be a simple `yum update`, or it could install an Oracle Management Cloud agent for advanced monitoring and management. Installing and configuring Chef or Puppet agents, joining an Active Directory domain, and much more are all simple to automate with instance metadata. Here's what some of the metadata on an instance looks like:

$ curl http://169.254.169.254/opc/v1/instance/
{
  "availabilityDomain" : "Uocm:PHX-AD-2",
  "faultDomain" : "FAULT-DOMAIN-1",
  "compartmentId" : "ocid1.compartment.oc1..aaaaaaaay4bxm4m5k7ii7oqyygolnuyozt5tyb5ufsl2jgcehm4hl4fslrwa",
  "displayName" : "updateable_metadata",
  "id" : "ocid1.instance.oc1.phx.abyhqljrrtcvkpxo33brxsfpykyrfg2n5r6owmyncywppxmt75ou2ap2n2xa",
  "image" : "ocid1.image.oc1.phx.aaaaaaaasez4lk2lucxcm52nslj5nhkvbvjtfies4yopwoy4b3vysg5iwjra",
  "metadata" : {
    "ssh_authorized_keys" : "ssh-rsa AAAAB3NzaC...4cON",
    "user_data" : "V2UncmUgaGlyaW5nLCBnZXQgaW4gdG91Y2ghIGNyYWlnLmNhcmxAb3JhY2xlLmNvbQ=="
  },
  "region" : "phx",
  "canonicalRegionName" : "us-phoenix-1",
  "shape" : "VM.Standard2.1",
  "state" : "Running",
  "timeCreated" : 1536284426464
}

Because instance metadata and cloud-init work so well together, we often think about them as a single thing. They aren't. Cloud-init is an application that runs the first time an instance is launched; it gets a document from the instance metadata service and processes it per the documentation. When we decouple instance metadata from cloud-init, it becomes obvious that instance metadata can be leveraged as an atypical communications channel.

Traditionally, we interact with compute instances by connecting to services running on the instance that accept inbound connections; SSH and HTTP are two common channels. These services introduce security risks: they can contain bugs, they can be misconfigured, and they need to be regularly and carefully updated. The same applies to any application on an instance that accepts an inbound connection; they all create risk. What we need is a secure channel to communicate with a compute resource that doesn't require any services that listen for external connections. Updateable Instance Metadata gives us this channel. It eliminates the need for listening services on the compute instance and allows us to leverage the strong OCI IAM permissions and policy features to secure it.

Let's imagine a dataset that is always encrypted in transit and at rest. Unfortunately, it's still difficult to do useful work against encrypted data; it must be decrypted first. Decrypting the data increases the risk of losing control over it.
Updateable Instance Metadata enables us to use the data and collect the results from a compute enclave that doesn't accept any inbound connections. This is a significant security advantage. There are multiple pieces to this solution:

- A custom image that includes the analytics software plus a small application that polls the instance metadata. SSH and other services should be disabled, and the firewall should be configured to deny all inbound connections. Set the GRUB menu timeout to 0. The custom image should also include a temporary key encryption key (KEK).
- A VCN with a private subnet and a service gateway. The private subnet isolates the instances from the internet, and the service gateway allows outbound access to the OCI Object Store without allowing access elsewhere.
- A bucket in the OCI Object Store. This will contain the encrypted dataset(s) as well as the results of the analysis.
- A dynamic group, matching rule, and IAM policy. These authorize the instance to GET the data from, and PUT the results to, the object store.

Now we can launch any number of instances; we'll call them workers. When there is a dataset that needs to be processed, we use the OCI API to update the instance metadata on a worker with two key:value pairs: "object":"<path to object>" and "DEK":"<data encryption key>". The DEK should be unique to each individual unit of work. An application on the instance gets the object, decrypts the DEK, and then decrypts the dataset. When the analysis is complete, the results can be encrypted with the DEK and PUT to the object store.

The OCI API defines two metadata keys for an instance, `metadata` and `extendedMetadata`. The contents of `metadata` and `extendedMetadata` PUT via the API are merged into the `metadata` key on the instance. Updating the `metadata` key via the API is subject to multiple limitations, so let's focus on `extendedMetadata`. The maximum size of the combined metadata, including user data and SSH keys, is 31.25 kibibytes. To update the metadata on our instance with our two new keys, we first need to define them.
Passing complex JSON on the CLI is difficult, so we source it from a file:

$ cat extended-md.json
{
  "object": "https://objectstorage.us-phoenix-1.oraclecloud.com/p/7GWMRaWucZ-dqIgocR9OVc6dUGiB5QwHX4V-QISkbCI/n/myns/b/money/o/someencypteddata",
  "DEK": "some DEK"
}

To apply the update:

$ oci compute instance update --instance-id ocid1.instance.oc1.phx.abyhqljr…n2xa --extended-metadata file://./extended-md.json

When we check the metadata on the instance again, we can see our update:

[opc@updateable-metadata ~]$ curl http://169.254.169.254/opc/v1/instance/
{
  "availabilityDomain" : "Uocm:PHX-AD-2",
  "faultDomain" : "FAULT-DOMAIN-1",
  "compartmentId" : "ocid1.compartment.oc1..aaaaaaaay4bxm4m5k7ii7oqyygolnuyozt5tyb5ufsl2jgcehm4hl4fslrwa",
  "displayName" : "updateable_metadata",
  "id" : "ocid1.instance.oc1.phx.abyhqljrrtcvkpxo33brxsfpykyrfg2n5r6owmyncywppxmt75ou2ap2n2xa",
  "image" : "ocid1.image.oc1.phx.aaaaaaaasez4lk2lucxcm52nslj5nhkvbvjtfies4yopwoy4b3vysg5iwjra",
  "metadata" : {
    "DEK" : "some DEK",
    "user_data" : "V2UncmUgaGlyaW5nLCBnZXQgaW4gdG91Y2ghIGNyYWlnLmNhcmxAb3JhY2xlLmNvbQ==",
    "object" : "https://objectstorage.us-phoenix-1.oraclecloud.com/p/7GWMRaWucZ-dqIgocR9OVc6dUGiB5QwHX4V-QISkbCI/n/myns/b/money/o/someencypteddata",
    "ssh_authorized_keys" : "ssh-rsa AAAAB3NzaC...4cON"
  },
  "region" : "phx",
  "canonicalRegionName" : "us-phoenix-1",
  "shape" : "VM.Standard2.1",
  "state" : "Running",
  "timeCreated" : 1536284426464
}

Updateable Instance Metadata provides a highly secure, out-of-band communications channel that can be leveraged to build a secure compute enclave for highly sensitive workloads. I'm excited to see what you build with Updateable Instance Metadata; please let me know! To get started with Updateable Instance Metadata on OCI, visit https://cloud.oracle.com. Updateable Instance Metadata is available at no additional cost in all public OCI regions and ADs. For more information, see the Oracle Cloud Infrastructure Getting Started guide, the Compute service overview, and the Updateable Instance Metadata documentation.

Craig
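To tie the pieces together, here is a minimal sketch of the worker's polling agent described above. It is an illustration only, not shipped code: the endpoint is the documented instance metadata URL, but the key names (`object`, `DEK`), the poll interval, and the follow-on decryption steps are this post's example conventions.

```python
import time

import requests  # third-party HTTP client, assumed installed on the custom image

METADATA_URL = "http://169.254.169.254/opc/v1/instance/"


def wait_for_work(poll_seconds: int = 30) -> dict:
    """Poll instance metadata until the operator pushes 'object' and 'DEK' keys."""
    while True:
        metadata = requests.get(METADATA_URL, timeout=5).json()["metadata"]
        if "object" in metadata and "DEK" in metadata:
            return {"object_url": metadata["object"], "dek": metadata["DEK"]}
        time.sleep(poll_seconds)


work = wait_for_work()
print("received work item:", work["object_url"])
# Next steps (omitted here): GET the object, decrypt the DEK with the baked-in KEK,
# decrypt and process the dataset, then encrypt the results with the DEK and PUT
# them back to the Object Storage bucket via the service gateway.
```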


Product News

Gartner Names Oracle a "Visionary" in New Magic Quadrant for Web Application Firewalls

You can't have a secure cloud without a secure edge. The internet and corporate networks are distributed systems, and web-based attacks take advantage of that. They target IoT devices, web servers, and other endpoints, seeking access to your data and infrastructure. The only way to stop them is to prevent that malicious traffic from reaching your endpoints in the first place.

That's why we acquired Zenedge in March. Its technologies, now available in the Oracle Dyn Web Application Security suite and coming to Oracle Cloud Infrastructure soon, enable organizations to protect against web server vulnerability exploits, DDoS attacks, bad bots, and other threats, both on-premises and in the cloud. This expertise is invaluable in an evolving threat landscape; we know where the hackers are going next, and we're always working to meet them there. But don't take my word for it: Gartner has named Oracle a "Visionary" in its latest Magic Quadrant for Web Application Firewalls (WAFs).

A cloud-based, globally deployed WAF is the cornerstone of any cloud edge security strategy. It sits in front of a web server, inspects traffic, and identifies and mitigates threats, both incoming (such as DDoS attacks) and outgoing (such as data breaches). Those capabilities are fairly standard across the WAF market, but they're not enough these days. There are too many types of web attacks, they are constantly evolving, and new threats are always emerging. The Oracle WAF stands out from the crowd with its use of machine learning: a supervised machine learning engine analyzes traffic queries and assigns each a score based on its potential risk. The WAF can then respond to threats by automatically blocking them or alerting security operations center analysts for further investigation. These risk scores are a valuable differentiator. Our customers told Gartner that the scores help them improve their WAF configuration and enable their security teams to focus on addressing the most important, complex threats.

Oracle is committed to enterprise security as a pillar of its cloud platform, and the emergence of the Oracle WAF as a Visionary in the market is just the tip of the iceberg. Oracle Cloud Infrastructure embraces the hybrid and multicloud approach that customers demand. This approach provides needed flexibility and scalability, but it also makes the corporate network even more distributed than it already is. A comprehensive edge security strategy, including the use of a cutting-edge WAF, is necessary to protect your business in this environment. Paired with the industry's best data and insights on internet performance, availability, and security via our Internet Intelligence program, this strategy gives the market a trusted enterprise cloud for the future.

Gartner, Magic Quadrant for Web Application Firewalls, Jeremy D'Hoinne, Adam Hils, Ayal Tirosh, Claudio Neiva, 29 August 2018. Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Events

General Availability of Virtual Machines with NVIDIA GPUs on Oracle Cloud Infrastructure

A few weeks ago, we announced preview availability of our virtual machine instances with NVIDIA Tesla Volta GPUs at the ISC conference in Germany. Customers have been using GPUs on Oracle Cloud Infrastructure for use cases ranging from engineering simulations and medical research to modern workloads such as machine learning training with frameworks like TensorFlow. This week we're at NVIDIA's GPU Technology Conference in Tokyo, and I'm excited to announce general availability of these virtual machines with NVIDIA Tesla V100 GPUs in our London (UK) and Ashburn (US) regions. You can log in and launch these instances the same way you normally launch instances on Oracle Cloud Infrastructure.

These virtual machines join the bare metal compute instance we launched earlier in the year, which provides you the entire server for very computationally intensive and accelerated workloads such as DNN training, or for traditional high-performance computing (HPC) applications such as GROMACS or NAMD. Finally, we're also making our Pascal-generation GPU instances available as virtual machines in our Ashburn (US) and Frankfurt (Germany) regions as a new cost-effective GPU option. Data scientists, researchers, engineers, and developers now have access to a portfolio of options ranging from a single P100 virtual machine to the cutting-edge 8-way bare metal instance with V100 Tesla GPUs. There's something here for everyone!

Instance Shape | GPU Type          | GPU(s) | Core(s) | Memory (GB) | Interconnect    | Price (GPU/Hr)
BM.GPU3.8      | Tesla V100 (SXM2) | 8      | 52      | 768         | P2P over NVLINK | $2.25
VM.GPU3.4      | Tesla V100 (SXM2) | 4      | 24      | 356         | P2P over NVLINK | $2.25
VM.GPU3.2      | Tesla V100 (SXM2) | 2      | 12      | 178         | P2P over NVLINK | $2.25
VM.GPU3.1      | Tesla V100 (SXM2) | 1      | 6       | 86          | N/A             | $2.25
BM.GPU2.2      | Tesla P100        | 2      | 28      | 192         | N/A             | $1.275
VM.GPU2.1      | Tesla P100        | 1      | 12      | 104         | N/A             | $1.275

You can additionally use NVIDIA GPU Cloud to launch HPC or AI application containers by simply deploying our pre-configured images along with NGC credentials. You can follow detailed step-by-step instructions here or visit our GPU product page for more information. Finally, visit us this week in the exhibition hall at NVIDIA's GTC conference to talk to the engineering teams about Oracle Cloud Infrastructure, or attend the breakout session on September 13 at 12:10 p.m. to learn more: https://www.nvidia.com/ja-jp/gtc/sessions/?sid=2018-1105. We hope to see you there!

Karan
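If you prefer automation to the Console, here is a minimal sketch of launching one of the shapes above with the OCI Python SDK. All OCIDs, the availability domain, and the display name are placeholders; substitute values from your own tenancy.

```python
import oci

# Reads credentials from ~/.oci/config (DEFAULT profile assumed).
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="Uocm:PHX-AD-1",               # placeholder AD
    compartment_id="ocid1.compartment.oc1..example",   # placeholder OCID
    shape="VM.GPU3.1",                                 # one Tesla V100, per the table above
    image_id="ocid1.image.oc1.phx.example",            # e.g., a GPU-ready OS image
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1.phx.example",      # placeholder subnet
    ),
    display_name="v100-training-vm",
)

instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)  # starts in PROVISIONING
```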


Strategy

Openness at Oracle Cloud Infrastructure

Co-authored by Bob Quillin, VP of Developer Relations, Oracle Cloud Infrastructure, and Jason Suplizio, Principal Member of Technical Staff, Oracle Cloud Infrastructure.

Oracle is committed to creating a public cloud that embraces Open Source Software (OSS) technologies and their supporting communities. With the strong shift to cloud native technologies and DevOps methodologies, organizations are seeking an open, cloud-neutral technology stack that avoids cloud lock-in and allows them to run the same stack in any cloud or on-premises. As a participant in this competitive public-cloud ecosystem, Oracle Cloud Infrastructure respects this freedom to choose and provides the flexibility to run where the business or workloads require. Openness and OSS are cornerstones of the Oracle Cloud Infrastructure strategy, with contributions, support of open source foundations, community engagement, partnerships, and OSS-based services at the core of its efforts.

Developer ecosystems grow and thrive in a vibrant and supported community, something Oracle believes in and actively supports. Oracle is one of the largest producers of open source software in the world, developing and providing contributions and other resources for projects including Apache NetBeans, Berkeley DB, Eclipse Jakarta, GraalVM, Kubernetes, Linux, MySQL, OpenJDK, PHP, VirtualBox, and Xen. This commitment naturally extends into public cloud computing, giving cloud customers the confidence to migrate their workloads with minimal impact to their business, code, and runtime. Oracle Cloud Infrastructure core services are built on open source technologies to support workloads for cloud native applications, data streams, eventing, and data transformation and processing.

Support for Open Source Communities

"Oracle supports the cloud native community by, among other things, engaging at the highest level of membership with the Cloud Native Computing Foundation (CNCF). Their commitment to openness and interoperability is demonstrated by their support for the Certified Kubernetes conformance program and their continuing certification of Oracle Linux Container Services." —Dan Kohn, Executive Director of the Cloud Native Computing Foundation (CNCF)

Oracle is an active member of several foundations committed to creating sustainable open source ecosystems and open governance. As a platinum member of the Linux Foundation since 2008, Oracle participates in a number of its projects, including the Cloud Native Computing Foundation (CNCF), the Open Container Initiative (OCI), the Xen Project, Hyperledger, Automotive Grade Linux, and the R Consortium. Since Oracle joined the CNCF as a platinum member in 2017, Oracle Cloud Infrastructure engineering leadership has sat on the CNCF Governing Board, and Oracle continues to commit to a number of CNCF technologies, Kubernetes in particular.

The Oracle Cloud Infrastructure Container Engine for Kubernetes, for example, leverages standard upstream Kubernetes, validated against the CNCF Kubernetes Software Conformance program, to help ensure portability across clouds and on-premises. As part of the first group of vendors certified under the Certified Kubernetes Conformance Program, Oracle works closely with CNCF working groups and committees to further the adoption of Kubernetes and related OSS across the industry. Oracle's strategy is to deliver open source-based container orchestration capabilities by offering a complete, integrated, and open service.
To that end, Container Engine for Kubernetes leverages Docker for container runtimes, Helm for package management, and standard Kubernetes for container orchestration. In addition to Kubernetes, Oracle works closely with CNCF teams on many of their other projects and working groups, including Prometheus, Envoy, OpenTracing, gRPC, serverless, service mesh, federation, and the Open Container Initiative.

Oracle joined the Open Container Initiative to promote and achieve the initiative's primary goal, "to host an open source, technical community and build a vendor-neutral, portable and open specification and runtime for container-based solutions." In accordance with that mission, Oracle developed the railcar project, an implementation of the Open Container Initiative's runtime spec. In further support of the container ecosystem, Oracle collaborates with Docker, Inc., to release Oracle's flagship databases, middleware, and developer tools into the Docker Store marketplace via the Docker Certification Program. Open, conformant container technologies have become the tools of the trade for developers who need to move fast and build for the cloud. These developers rely on open, cloud-neutral, container-native software stacks that enable them to avoid lock-in and to run anywhere.

Built on Open Source

"We believe that embracing openness creates trust, choice, and portability for our customers. In addition to being platinum members in several Open Source Software foundations, we've also dedicated top engineering talent to contribute their leadership and software." —Rahul Patil, Vice President, Software Development, Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is built on, and retains compatibility with, the most advanced and prominent OSS technologies. Oracle Linux, the operating system that Oracle Cloud Infrastructure runs on, is an excellent case in point. Furthermore, we try to use open source software, wherever possible, without modification. The reality, however, is that introducing innovative products to the market sometimes requires making enhancements to the underlying OSS code base. Under these circumstances, Oracle Cloud Infrastructure works to contribute those changes back to the open source community.

Chef and Ansible

Customers who use Chef can also use the open source Chef Knife plugin for Oracle Cloud Infrastructure. For customers who use Ansible, Oracle Cloud Infrastructure recently announced the availability of Ansible modules for orchestration, provisioning, and configuration management tasks (available on GitHub). These modules make it easy to author Ansible playbooks to automate the provisioning and configuration of Oracle Cloud Infrastructure services and resources, such as Compute, Load Balancing, and Database.

Fn Project

Developers who are building cloud native applications will find a portable, open, container-native serverless solution for their development needs in Oracle's recently open-sourced Fn Project. The Fn Project can run on any cloud or on a developer's laptop. This open source serverless solution provides polyglot language support (including Java, Go, Ruby, Python, PHP, Rust, .NET Core, and Node.js, with AWS Lambda compatibility) and will be offered as a fully managed functions-as-a-service (FaaS) offering on Oracle Cloud Infrastructure.
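To give a flavor of what an Fn function looks like, here is a minimal sketch of a Python handler written against the Fn Project's Python FDK. It is illustrative only; the greeting payload and default values are invented for the example.

```python
import io
import json

from fdk import response  # Fn Project's Python function development kit


def handler(ctx, data: io.BytesIO = None):
    """Echo a greeting; Fn invokes this handler for each function call."""
    name = "world"
    try:
        body = json.loads(data.getvalue())  # request payload, if any
        name = body.get("name", name)
    except (ValueError, AttributeError):
        pass  # no body or malformed JSON: fall back to the default greeting
    return response.Response(
        ctx,
        response_data=json.dumps({"message": f"Hello, {name}!"}),
        headers={"Content-Type": "application/json"},
    )
```

Packaged and deployed with the Fn CLI, the same function image runs unchanged on a laptop, a self-hosted Fn server, or a managed FaaS platform, which is the portability point the project is making.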
Additionally, Oracle Cloud Infrastructure will be releasing a real-time event management service that implements the CNCF's CloudEvents specification for a common, vendor-neutral format for event data. Once released, the combination of the event management service and the Fn Project will be the only open source, standards-based serverless and eventing platform available among the public cloud providers.

GraphPipe

Oracle recently announced the availability of GraphPipe, a new open source project that makes it easier for enterprises to deploy and query machine learning models from any framework. GraphPipe provides a standard, high-performance protocol for transmitting tensor data over the network, along with simple implementations of clients and servers. GraphPipe's efficient servers can serve models built in TensorFlow, PyTorch, MXNet, CNTK, or Caffe2. All of GraphPipe's source code, documentation, and getting-started examples are available on GitHub today.

Kubernetes

Through its work in the CNCF and elsewhere, the Oracle Cloud Infrastructure team has invested deeply in Kubernetes. Because manually managing and maintaining a production Kubernetes cluster and its associated resources can require significant effort, the team created the Oracle Container Engine for Kubernetes as part of that investment. Using standard, upstream Kubernetes, it creates and manages clusters for secure, high-performance, high-availability container deployments using Oracle Cloud Infrastructure's networking, compute, and storage resources, including bare metal instance types. The Oracle Cloud Infrastructure engineering team has also contributed many of its Kubernetes projects to the open source community, such as the JenkinsX supported cloud provider (OKE), the flexvolume driver, the volume provisioner, the cloud controller manager, the Terraform Kubernetes installer, crashcart, and smith (read more about these projects here).

Terraform

Terraform is a popular infrastructure as code (IaC) solution that aims to provide a consistent workflow for provisioning infrastructure from any provider, and a self-service workflow for publishing and consuming modules. Following the release of its Terraform provider, Oracle Cloud Infrastructure is increasing its investment in Terraform with the upcoming release of a fully managed service that uses Terraform to manage infrastructure resources. That release will be accompanied by a group of open source Terraform modules for easy provisioning of Oracle Cloud Infrastructure services, and of many other popular OSS technologies, onto Oracle Cloud Infrastructure.

"We put our customers first in everything we do, and our customers tell us which OSS technology they want to use on Oracle Cloud Infrastructure. There are many more open source repositories which our customers use frequently, which we will support as first-class citizens over time. If you wish to see support of a specific OSS technology on Oracle Cloud Infrastructure, feel free to reach out to us or comment on this blog." —Vinay Kumar, Vice President of Product Management, Oracle Cloud Infrastructure

There is a lot of history and momentum behind Oracle's commitment to OSS, and Oracle Cloud Infrastructure is making rapid progress in building out a truly open public cloud platform. See it for yourself: get started with Oracle Cloud Platform, with up to 3,500 free hours, by creating a free account.

Co-authored by: Bob Quillin, VP of Developer Relations, Oracle Cloud Infrastructure, and Jason Suplizio, Principal Member of Technical Staff, Oracle Cloud Infrastructure.

Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Anuj Gulati

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Anuj Gulati of IBM. Anuj works as a Technical Lead at IBM India. He has over nine years of experience managing database systems (RDBMS and non-RDBMS), ERPs, job schedulers, and web servers, and he has sound knowledge of Oracle Cloud Infrastructure concepts, including Ravello. He is certified as both an Oracle Cloud Infrastructure (OCI) Architect Associate and an OCI Classic Architect Associate.

Greg: Anuj, how did you prepare for the certification?

Anuj: I already had a fair understanding of Oracle Cloud Infrastructure (OCI) because I was already certified in OCI Classic. To prepare for the OCI Architect Associate exam, the first step I took was to focus on understanding the business drivers that led to the new OCI offering. This helped me understand the cloud in more detail. Understanding the technical aspects is one thing, but understanding the reasoning for developing Oracle's next-generation cloud was very beneficial. I also signed up for the 30-day trial, which I found to be most beneficial. Getting my hands on OCI services greatly helped me understand the concepts. I reviewed all the use cases I could find and set these up on the trial account. And the documents found on docs.oracle.com contained almost everything that I needed to work with the Oracle Cloud. In addition, I've been following a lot of Oracle management on LinkedIn, and whenever they posted an update, I tested it out to familiarize myself with it. I also compared Oracle Cloud to the clouds offered by other vendors. I reviewed the technical aspects, which helped me better appreciate the offerings in Oracle Cloud that are unavailable in the other vendors' clouds. I would say that preparing this way did take longer, but I still feel it was the best way for me to not only pass the exam but to truly understand the Oracle Cloud Infrastructure offering.

Greg: Did being part of the reference program help you prepare for the exam?

Anuj: Yes. We received some customized videos specifically for the OCI exam. I found these to be very helpful, and they assisted my overall understanding of OCI.

Greg: How long did it take you to prepare for the exam?

Anuj: It took me about three months to prepare for the exam. It took longer than I had hoped due to my job responsibilities. For someone who has experience with other clouds, I think it would only take about one month to prepare for the exam.

Greg: How is life after getting certified?

Anuj: I shared the digital badge for my OCI certification on LinkedIn, and it received many views, which I was very pleased about. Passing this exam has given me a sense of confidence, a sense of pride. I feel like I am part of an elite group that has earned this certification. Many colleagues have reached out to me for advice on how to prepare for the exam and about the exam structure. From a technical perspective, it has helped me understand a lot of cloud concepts in general and some of the Oracle concepts in particular.

Greg: Any other advice you’d like to share?

Anuj: If you have the right skills and understanding, then this exam should not be too difficult for you. Go through the videos and documents that are available for free. You definitely need to create a trial account and work your way through it.
Please subscribe to this page for more posts to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam.

Greg Hyman
Principal Program Manager, Oracle Cloud Infrastructure Certification
greg.hyman@oracle.com
Twitter: @GregoryHyman
LinkedIn: GregoryRHyman

Associated links:
Oracle Cloud Infrastructure 2018 Architect Associate exam
Oracle Cloud Infrastructure 2018 Architect Associate study guide
Oracle Cloud Infrastructure 2018 Architect Associate practice test
Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam

Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series: Umair Siddiqui, Nitin Vengurlekar, Rajib Kundu, Miranda Swenson, Robby Robertson, Chris Riggin, Anuj Gulati


Solutions

Microsoft SQL Server Running on Linux Using Oracle Cloud Infrastructure

Microsoft SQL Server on Linux removes the barrier for organizations that prefer the Linux operating system over Microsoft Windows. It’s the same SQL Server database engine with many similar features; the only difference is the operating system. Currently, Microsoft supports the Linux version of SQL Server on Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu. You can also run SQL Server in a Docker container. You install, update, and remove SQL Server from the command line. This post describes how to deploy a SQL Server database running on an Ubuntu Linux server on a single Oracle Cloud Infrastructure Compute VM. It also describes how you can use Oracle Cloud Infrastructure Block Volumes storage to store the SQL Server database files and transaction log files.

Before You Begin
Before you install SQL Server on Linux, consider the following prerequisites:
Identify your IOPS or I/O throughput requirements. Check the SQL Server documentation for resource requirements.
Choose an appropriate Oracle Cloud Infrastructure Compute VM shape (OCPU, memory, and storage).
Create a secured network on Oracle Cloud Infrastructure to access the SQL Server database.
Choose and install a supported Linux server version and its command-line tools.
Identify the required SQL Server services that must be installed.
Generate the SSH key pair, and secure the SSH private key and public key.

Choose the Oracle Cloud Infrastructure VM Shape and OS
You can choose the Linux image (Ubuntu) from the Oracle Cloud Infrastructure repository, or you can bring your own Linux image to deploy on the VM. We strongly recommend that you check the Linux server version support on Oracle Cloud Infrastructure before you start deploying. For this post, we chose Ubuntu 16.04 with Debian packages from the Oracle Cloud Infrastructure image repository, and the VM.Standard2.4 shape.

Configure Network Access
Before installing SQL Server, you must create an Oracle Cloud Infrastructure virtual cloud network (VCN) and choose the appropriate availability domain, subnet, and other components for your Linux server. In addition to the existing ingress stateful security rules in your VCN, you might need to add ingress security rules to allow remote SSH (secure shell) access to the Linux server. Ensure that the internet gateway route rules are enabled for internet access, which allows you to access the Linux host over the public network. The following images show the security rule added and the route rules enabled to allow SSH access to the Linux host over the public network.
Security list rule:
Route table rule:
For more information about working with security rules and route rules, see the Networking service documentation.

Provision and Connect to the Linux Server
When you provision the Linux (Ubuntu) server, you provide the SSH public key. After the server is provisioned, the following page is displayed in the Console, showing the public IP address to use to access the Linux host. Use SSH to connect to the Linux host, using the username ubuntu and the private key of the SSH key pair.

Create a Block Storage Volume on Oracle Cloud Infrastructure
We installed the operating system, OS command-line tools, SQL Server binary, and all the required SQL Server tools on the local boot volume. However, we stored the SQL Server database on a block storage volume. The following image illustrates creating a block storage volume on Oracle Cloud Infrastructure and choosing the appropriate backup option (Bronze) for the volume.
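After you attach the new volume to the instance, the Console's iSCSI information dialog lists the exact attach commands for your volume. As a hedged illustration only, the typical sequence looks like the following; the IQN suffix, portal IP address, and device name are placeholders, so copy the real values from the Console:

# Register, enable, and log in to the volume's iSCSI target (placeholder values)
sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:<volume_id> -p 169.254.2.2:3260
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:<volume_id> -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:<volume_id> -p 169.254.2.2:3260 -l

# Then partition the new device (assumed here to appear as /dev/sdb), create a file system, and mount it
sudo fdisk /dev/sdb        # interactively create a partition such as /dev/sdb1
sudo mkfs.xfs /dev/sdb1
sudo mkdir -p /mssql
sudo mount /dev/sdb1 /mssql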
Run commands like those sketched above on the Ubuntu Linux server as the root user to add the iSCSI target for this block storage volume at the operating-system level, and then partition the newly added iSCSI storage and create the file system (xfs and ext3 are both supported by SQL Server). The following image shows the mount point of the block storage volume after creating the partition, creating the appropriate file system, and mounting the partition.

Install SQL Server on Linux
Follow these steps to install SQL Server on an Ubuntu Linux operating system, running the commands in a bash shell in the Ubuntu Linux terminal to install the mssql-server package.
Import the public repository GPG keys:
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
Register the SQL Server Ubuntu repository:
sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list)"
Install SQL Server on Ubuntu Linux:
sudo apt-get update
sudo apt-get install -y mssql-server
Run mssql-conf setup, and then set the SA password and choose the SQL Server edition:
sudo /opt/mssql/bin/mssql-conf setup
Verify the SQL Server service:
systemctl status mssql-server
Connect to the SQL Server instance. To create the database, first access the database and connect with a tool that can run Transact-SQL statements on the SQL Server. You might need to install SQL Server Operations Studio, which is a cross-platform GUI database management utility, to manage your MS-SQL Server database.

Conclusion
This post demonstrated how to deploy Microsoft SQL Server on Ubuntu Linux using Oracle Cloud Infrastructure, and discussed how to use Oracle Cloud Infrastructure Block Volumes to store the MS-SQL Server database to achieve higher performance and better manageability.


Security

Installing the Check Point CloudGuard Virtual Firewall Appliance on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure offers a native firewall service in which the customer can create Security Lists with stateful rules for packet inspection, using IP addresses as source and destination with TCP and UDP ports. But customers also have the option to install and deploy other third-party firewall products to satisfy additional requirements:
To comply with their existing or required InfoSec policy
To leverage existing operational knowledge
To add security features that are not available with Security Lists, like IDS/IPS
In this blog post, we feature Check Point because many of our existing customers use Check Point firewall products on-premises, and they have enterprise licenses that they can use on Oracle IaaS as part of the "bring your own license" (BYOL) scheme. The Check Point CloudGuard family of security products can be deployed as virtual appliances to protect enterprise workloads running on cloud infrastructures (IaaS) or software services and applications (SaaS) against generation V cyberattacks. This post describes the general workflow and provides some associated steps for installing the Check Point CloudGuard IaaS virtual appliance on Oracle Cloud Infrastructure. For general guidance, see the How to Deploy a Virtual Firewall Appliance on Oracle Cloud Infrastructure blog post.

Prerequisites
To perform the steps in this post, you must meet the following prerequisites:
You have an Oracle Cloud Infrastructure tenancy.
You have access to the Oracle Cloud Marketplace to download the Check Point CloudGuard IaaS Security Gateway. Optionally, you can store the image in your Object Storage (for example, in us-ashburn-1).
You are familiar with the following Oracle Cloud Infrastructure terms: availability domain, bucket, compartment, image, instance, key pair, region, shape, tenancy, and VCN. For definitions, see the documentation glossary.

Sizing
The example in this post uses the VM.Standard2.4 compute shape. For a list of Oracle Compute shapes and pricing information, see the Compute pricing page.

Architecture Diagram
In this example, CloudGuard is deployed in a single-gateway configuration, with three VNICs: one for the public internet-facing traffic, the second for the DMZ, and the third for internal workloads. The internet and DMZ zones are on public subnets, and the internal zone is on a private subnet.

Interface
The following table lists the interface properties as shown in the architecture diagram (the IP addresses depend on your subnets):
Zone      Subnet    IP Address    VNIC
Internet  Public                  VNIC 1
DMZ       Public                  VNIC 2
Intranet  Private                 VNIC 3

Step 1: Create the VCN
Using the Oracle Cloud Infrastructure Console, create a virtual cloud network (VCN) and its associated resources for the CloudGuard security zones. The following images show examples of the resources in the Console: the VCN, internet gateway, subnets, security lists with ingress and egress rules, route table, and route rule.

Step 2: Import the CloudGuard Image as a Custom Image
Import the image from Object Storage and create a custom image. If you want to create the CloudGuard gateway in another region (for example, uk-london-1), you must preauthenticate the image from Object Storage. Then, create the custom image.

Step 3: Launch an Instance from the Custom Image
Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
Find the custom image that you want to use.
Click the Actions icon (three dots), and then click Launch Instance.
Provide additional launch options as described in Creating an Instance.
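If you prefer scripting Steps 2 and 3, the OCI CLI can perform both. The following is a minimal sketch under stated assumptions: every OCID, the namespace, the bucket name, and the image object name are placeholders, and the shape and display name are illustrative choices only.

# Import the CloudGuard image from Object Storage as a custom image (placeholder values)
oci compute image import from-object \
  --compartment-id ocid1.compartment.oc1..<unique_id> \
  --namespace <object_storage_namespace> \
  --bucket-name <bucket_with_cloudguard_image> \
  --name <cloudguard_image_object_name>

# Launch an instance from the resulting custom image
oci compute instance launch \
  --compartment-id ocid1.compartment.oc1..<unique_id> \
  --availability-domain <AD_name> \
  --shape VM.Standard2.4 \
  --image-id ocid1.image.oc1..<unique_id> \
  --subnet-id ocid1.subnet.oc1..<unique_id> \
  --display-name cloudguard-gw-1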
Step 4: Add More VNICs (for the DMZ Security Zone)
You can create additional VNICs while the first instance is running. To complete the additional VNIC configuration, you have to reboot the instance.
Double-click the instance. In the left-side menu, click Attached VNICs.
Click Create VNIC.
Enter a name.
For Virtual Cloud Network, select the VCN.
For Subnet, select a private subnet.
Select the Skip Source/Destination Check check box.
Click Create VNIC.

Step 5: Create a Serial Console Connection to the Running Instance
Create a serial console connection to the running instance by following the instructions at Instance Console Connections.

Step 6: Configure CloudGuard
Configure the gateway by using the Check Point Gaia Portal or the SmartConsole. You can manage your Check Point Security Gateway in the following ways:
Standalone configuration: CloudGuard acts as its own Security Management Server and Security Gateway.
Centrally managed: The management server can be in the same virtual network as the gateway or outside it: on-premises, from a different cloud, from another Oracle Cloud Infrastructure VCN or region, or from a different tenant in Oracle Cloud.

Configure the Gateway from the Gaia Portal
Open an SSH client and set the user for the administrator: enter set user admin password, set the password, and then enter save config.
Go to the Gaia Portal at https://<IP_address>. The First Time Configuration Wizard is displayed. Perform the following steps to configure your system. When you get to the Installation Type page, you select the specific deployment of your system.
On the Deployment Options page, select Setup, Install, or Recovery.
On the Management Connection page, configure your system.
On the Internet Connection page, configure the interface to connect to the internet.
On the Device Information page, configure the DNS and proxy settings.
On the Date and Time Settings page, set the time manually, or use the Network Time Protocol (NTP).
On the Installation Type page, configure the system for your needs.

Configure the Gateway from the SmartConsole
Open the SmartConsole and go to the Gateways & Servers view.
Click the new icon, and then select Gateway. The Check Point Security Gateway Creation window is displayed.
Select Wizard Mode.
Enter values on the General Properties page.
Initiate secure internal communications.
Click Finish. The Check Point Gateway General Properties window is displayed.
Configure the gateway.

Please refer to the Check Point CloudGuard documentation for the step-by-step configuration: https://cloudmarketplace.oracle.com/marketplace/en_US/listing/37604515
In the next blog we will tackle high availability options for CloudGuard on OCI in a multi-VCN configuration. Please stay tuned!


Developer Tools

Deploy Kubeflow with Oracle Cloud Infrastructure Container Engine for Kubernetes

This post provides detailed instructions on how to deploy Kubeflow on Oracle Cloud Infrastructure Container Engine for Kubernetes. Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. You can use this service when your development team wants to reliably build, deploy, and manage cloud native applications. You just specify the compute resources that your applications require, and Container Engine for Kubernetes provisions them on Oracle Cloud Infrastructure automatically.

Kubeflow is an open source project that makes the deployment and management of machine learning workflows on Kubernetes easy, portable, and scalable. Kubeflow automates the deployment of TensorFlow on Kubernetes. TensorFlow provides a state-of-the-art machine learning framework, and Kubernetes automates the deployment and management of containerized applications.

Step 1: Create a Kubernetes Cluster
Create a Kubernetes cluster with Container Engine for Kubernetes. You can create this cluster manually by using the Oracle Cloud Infrastructure Console or automatically by using Terraform and the SDK. For better performance, we recommend using a bare metal compute shape to create the nodes in your node pools. Choose the right compute shape and number of nodes in the node pools, depending on the size of your data set and on the compute capacity needed for your model training. As an example, the following node pool was created with the BM.DenseIO1.36 shape, which has 36 OCPUs and 512 GB of memory. Container Engine for Kubernetes creates a Kubernetes "kubeconfig" configuration file that you use to access the cluster with kubectl and the Kubernetes Dashboard.

Step 2: Download the Kubernetes Configuration File
Download the Kubernetes configuration file of the cluster that you just created. This configuration file is commonly known as a kubeconfig file for the cluster. At this point, you can use kubectl or the Kubernetes Dashboard to access the cluster. Note that after you run the kubectl proxy command, you need to use the following URL to access the Kubernetes Dashboard:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Step 3: Deploy Kubeflow
After the Kubernetes cluster is created, you can deploy Kubeflow. In this post, we deploy Kubeflow with ksonnet, a framework for writing, sharing, and deploying Kubernetes manifests that helps simplify Kubernetes deployments. Check whether ksonnet is installed on your local system. If it is not, install ksonnet before proceeding. Now you can deploy Kubeflow by using the following commands, provided in the Kubeflow documentation:
export KUBEFLOW_VERSION=0.2.2
curl https://raw.githubusercontent.com/kubeflow/kubeflow/v${KUBEFLOW_VERSION}/scripts/deploy.sh | bash
Note: The preceding command enables the collection of anonymous user data to help improve Kubeflow. If you don’t want data to be collected, you can explicitly disable it. For instructions, see the Kubeflow Usage Reporting guide.
During the Kubeflow deployment, you might encounter the following error:
"jupyter-role" is forbidden: attempt to grant extra privileges:
To work around this error, grant your own user a role-based access control (RBAC) role that allows it to create or edit other RBAC roles, by running the following command:
$ kubectl create clusterrolebinding default-admin --clusterrole=cluster-admin --user=ocid.user.oc1..aaaaa....
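Before moving on, it's worth confirming that the Kubeflow components came up. A minimal check, assuming the deploy script used its default kubeflow namespace, looks like this:

# List the Kubeflow pods and services; wait until the pods report Running
$ kubectl get pods -n kubeflow
$ kubectl get svc -n kubeflow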
Step 4: Access Notebook
Now you are ready to access Jupyter Notebook and start building your ML/AI models with your data sets. To connect to your notebook locally, you can run the following commands:
$ kubectl get pods --selector="app=tf-hub" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"
$ kubectl port-forward tf-hub-0 8000:8000

Summary
With OCI Container Engine for Kubernetes and Kubeflow, you can easily set up a flexible and scalable machine learning and AI platform for your projects. You can focus more on building and training your models rather than on managing the underlying infrastructure.


Solutions

Oracle Database Offerings in Oracle Cloud Infrastructure

Oracle offers multiple cloud-based database options to meet a wide variety of use cases. The Oracle Cloud Infrastructure-based databases are available on bare metal machines, virtual machines (VMs), and Exadata in different sizes. These offerings come with different levels of managed services, features, and price points, which makes it easy to find an option that meets your specific requirements. A 100 percent compatibility design ensures that all of Oracle’s database solutions use the same architecture and software, which enables you to leverage the same skills and support, whether you deploy the solutions on-premises, in a private cloud implementation, or in Oracle Cloud. Oracle offers Maximum Availability Architecture (MAA) guidelines and associated software and tools for high availability, disaster recovery, and data protection. All these technologies, like Real Application Clusters (RAC), Data Guard, and GoldenGate, and MAA best practices are also available for Oracle Cloud databases. All cloud-based options are available in pay-as-you-go and monthly flex pricing options, and they allow you to leverage your existing licenses in a Bring Your Own License (BYOL) model. For detailed information about the included Oracle Database features, options, and packs, see the Permitted Features section of the Oracle Database Licensing Information User Manual. In this post, I discuss the key features of the different managed Oracle Database options for Oracle Cloud Infrastructure and compare them on the basis of performance, management, high availability, scalability, and cost. I also provide some prescriptive guidance to help you decide which option is a good choice for your use case.

Scope
Oracle provides a wide range of industry-leading on-premises and cloud-based solutions to meet the data management requirements of small- and medium-sized businesses as well as large global enterprises. This post covers only managed Oracle Database offerings for Oracle Cloud Infrastructure. It does not cover installing and operating Oracle (and other) databases directly on Oracle Cloud Infrastructure Compute instances, or Oracle Exadata Cloud at Customer for on-premises deployments. Oracle Cloud Infrastructure Autonomous Transaction Processing and Oracle Cloud Infrastructure Autonomous Data Warehouse are also not discussed here; they will be covered in a separate post. This post also does not cover other database options, such as Oracle Database Schema Cloud Service, Oracle NoSQL Database, or Oracle MySQL. You can find more information about these offerings and others in the Database documentation.

Hardware Options for Oracle Database in Oracle Cloud Infrastructure
Oracle Cloud Infrastructure supports several types of database (DB) systems that range in size, price, and performance. One way of classifying the systems is on the basis of their underlying compute options. You can provision databases in Oracle Cloud Infrastructure on Exadata machines, as well as on bare metal and virtual machine compute shapes.
Exadata DB systems consist of a quarter rack, half rack, or full rack of compute nodes and storage servers, tied together by a high-speed, low-latency InfiniBand network. Exadata DB systems are available on X6 and X7 machines.
Bare metal DB systems consist of a single bare metal server running on your choice of bare metal shapes. Locally attached NVMe storage is used for BM.DenseIO shapes.
Virtual machine DB systems are available on your choice of VM.Standard shapes.
A virtual machine DB system database uses Oracle Cloud Infrastructure Block Volume storage instead of local storage. You specify a storage size when you launch the DB system, and you can scale up the storage as needed at any time.

Managed Oracle Database Offerings in Oracle Cloud Infrastructure
Oracle offers the following managed database services running in Oracle Cloud Infrastructure:
Oracle Exadata Cloud Service
Oracle Cloud Infrastructure Database
Oracle Database Cloud Service

Oracle Exadata Cloud Service
This service offers Oracle Databases hosted on Oracle Exadata Database machines. Exadata Cloud Service configurations were first offered on Oracle Exadata X5 systems. More recent Exadata Cloud Service configurations are based on Oracle Exadata X6 or X7 systems, which are the two currently available options in Oracle Cloud Infrastructure. You can choose from quarter-rack, half-rack, and full-rack system configurations. With Exadata X7 shapes in Oracle Cloud Infrastructure, you can get up to 8 DB nodes with 720 GB RAM per node, up to 368 OCPUs, and 1440 TB of raw storage or 414 TB of usable storage with unlimited I/Os. Each Exadata Cloud Service instance is configured such that each database server of the Exadata system contains a single virtual machine (VM), called the domU, which is owned by the customer. Customers have root privileges for the Exadata database server domU and DBA privileges on the Oracle databases. Customers can configure the system as they like, and load additional agent software on the Exadata database servers to conform to business standards or security monitoring requirements. All of Oracle’s industry-leading capabilities are included with Exadata Cloud Service, such as Database In-Memory, Real Application Clusters (RAC), Active Data Guard, Partitioning, Advanced Compression, Advanced Security, Database Vault, OLAP, and Spatial and Graph. Also included is Oracle Multitenant, which enables high consolidation density, rapid provisioning and cloning, efficient patching and upgrades, and significantly simplified database management. In Oracle Cloud Infrastructure, you can launch DB systems in different availability domains and configure Active Data Guard between them, along with using RAC for improved availability. Exadata Cloud Service is available through the Oracle Cloud My Services portal and the Oracle Cloud Infrastructure Console.
Performance: Highest-performance managed Oracle Database offering in the cloud.
Management: Best management features, including deployment, patching, backups, and upgrading, with rolling updates for multiple nodes.
High availability: Best HA, with support for 8-node RAC-based database clustering.
Scalability: Best scale-out option.
Cost: Exadata Cloud Service shapes are charged a minimum of 744 hours for the first month of the cloud service, whether or not you are actively using it, and whether or not you terminate that cloud service prior to using the entire 744 hours. For ongoing use of the same instance after the first month, you are charged for all active hours. Additional OCPUs are billed for active hours for the first month and for ongoing use. This is generally the costliest managed DB option in Oracle Cloud Infrastructure, although for higher-end bare metal shapes with similar resources, the pricing is not far apart. When evaluated in terms of the price/performance ratio, Exadata excels.
More information: Features, Pricing, Documentation
Guidance: Exadata Cloud Service is the most powerful Oracle Database offering, with all of the options, features, and Enterprise Manager Database Packs. Offering the highest performance, high availability, and scalability, this option is a great match for mission-critical and production applications. It is engineered to support OLTP, data warehouse, real-time analytic, and mixed database workloads at scale. It also typically costs more than other Oracle Cloud database options, but if you calculate in terms of the price/performance ratio (as you should), the value it provides exceeds the other alternatives. With the introduction of X7-based options, you can now start with or scale down to zero cores, which makes the entry price point of Exadata Cloud Service lower than previous Exadata options. If your database needs to scale beyond 2 nodes, Exadata Cloud Service, which offers up to 8 nodes, is recommended. Another good use case is consolidating many databases on Exadata Cloud Service rather than deploying them on virtual machines. Other managed database offerings have limitations in terms of I/O throughput and storage capacity, which makes Exadata Cloud Service a good option when higher performance or capacity is required.
Note: Two additional Exadata services are not available on Oracle Cloud Infrastructure but are relevant for several use cases:
Exadata Cloud at Customer is similar to Oracle’s Exadata Cloud Service but is located in customers’ own data centers and managed by Oracle Cloud experts. This service enables a consistent Exadata cloud experience for customers, whether on-premises or in Oracle Cloud Infrastructure data centers. It enables customers to use Exadata in their own data centers and behind their own firewalls for reasons such as data sovereignty issues; legal, regulatory, privacy, or compliance requirements; sensitive data; custom security standards; extremely high SLAs; or near-zero latency requirements.
Oracle Database Exadata Express Cloud Service is a good entry-level service for running Oracle Database in Oracle Cloud. It delivers an affordable and fully managed Oracle Database 12c Release 2 experience, with enterprise options, running on Oracle Exadata. It’s generally a good match for running line-of-business or SMB production apps. It’s also great for rapidly provisioning dev, test, and quality assurance databases, and for quickly standing up multi-purpose sandbox environments.

Oracle Cloud Infrastructure Database
The Oracle Cloud Infrastructure Database service is managed by the Database Control Plane running in Oracle Cloud Infrastructure and uses the platform’s native APIs. It is available through the Oracle Cloud Infrastructure Console and integrates natively with all the Oracle Cloud Infrastructure platform features and services, such as compartments, audit, tagging, search, Identity and Access Management (IAM), Block Volume, and Object Storage. The Database service offers 1-node DB systems on either bare metal or virtual machines, and 2-node RAC DB systems on virtual machines. You choose the shape when you launch a DB system.

Bare Metal Shapes
Bare metal DB systems consist of a single bare metal server with locally attached NVMe storage. Each DB system can have multiple database homes, which can be different versions. Each database home can have only one database, which is the same version as the database home.
BM.DenseIO1.36: Provides a 1-node DB system (one bare metal server), with up to 36 CPU cores, 512 GB memory, and nine 3.2 TB (28.8 TB total) locally attached NVMe drives.
BM.DenseIO2.52: Provides a 1-node DB system (one bare metal server), with up to 52 CPU cores, 768 GB memory, and eight 6.4 TB (51.2 TB total) locally attached NVMe drives.

Virtual Machine Shapes
You can provision a 1-node DB system on one virtual machine or a 2-node DB system with RAC on two virtual machines. Unlike a bare metal DB system, a virtual machine DB system can have only a single database home. The database home has a single database, which is the same version as the database home. A virtual machine DB system database uses Oracle Cloud Infrastructure block storage instead of local storage. The number of CPU cores on an existing virtual machine DB system cannot be changed.
VM.Standard1 virtual machines: Provisioned on X5 machines. Five VM options are available with 1 to 16 CPU cores and 7 GB to 112 GB memory.
VM.Standard2 virtual machines: Provisioned on X7 machines. Six VM options are available with 1 to 24 CPU cores and 15 GB to 320 GB memory.
Performance: High performance with the bare metal option, and good performance with virtual machine shapes.
Management: Very good management features, including deployment and backups.
High availability: Offers 2-node RAC-based database clustering. Data Guard is also available.
Scalability: Very good scalability with CPU and storage scaling in the bare metal option. Good scalability with storage scaling in the virtual machine option.
Cost: The virtual machine option is available at a very good price point. The bare metal option is more expensive than the virtual machine option but generally less expensive than Exadata Cloud Service, depending on the shape and number of cores chosen.
More information: Features, Pricing, Documentation
Guidance: If you are just starting with Oracle Cloud and plan to mainly use Oracle Cloud Infrastructure services, you will find it easier to use the OCI Database service because it natively integrates with the rest of the Oracle Cloud Infrastructure features. If you want to use RAC, the Database service is a good option because Oracle Database Cloud Service does not yet offer RAC for the databases that it deploys in Oracle Cloud Infrastructure. The maximum storage available on a virtual machine database in this option is 40 TB of remote NVMe SSD block volumes. For bare metal, it is 51.2 TB NVMe SSD raw, ~16 TB with two-way mirroring, and ~9 TB with three-way mirroring. Using mirroring with the bare metal option is a best practice and highly recommended for any production workloads. If your storage needs are bigger than these options and you want a managed database offering without the need for techniques like sharding, Exadata, with up to 1440 TB of raw storage, becomes a good option.

Oracle Database Cloud Service
Oracle Database Cloud Service can deploy databases on Oracle Cloud Infrastructure, Oracle Cloud Infrastructure Classic, and Oracle Cloud at Customer. As I mentioned before, I am focusing only on Oracle Cloud Infrastructure-based offerings. Database Cloud Service relies on an underlying component of Oracle Cloud named Platform Service Manager (PSM) to provide its service console and its REST API.
As a result, the Database Cloud Service console has the same look and feel as the service consoles for other platform services like Oracle GoldenGate Cloud Service and Oracle Java Cloud Service, and the endpoint structure and feature set of the Database Cloud Service REST API are similar to those of the REST APIs for other platform services. Database Cloud Service also integrates nicely with Identity Cloud Service for authentication and authorization. Database Cloud Service is available through the Oracle Cloud My Services portal. With Database Cloud Service on Oracle Cloud Infrastructure, you can provision two types of databases:
Single instance: A single Oracle Database instance and database data store hosted on one compute node.
Single instance with Data Guard standby: Two single-instance databases, one acting as the primary database and one acting as the standby database in an Oracle Data Guard configuration.
Outside of Oracle Cloud Infrastructure, Database Cloud Service can also provision 2-node clusters with RAC, two 2-node RAC clusters with one acting as a standby in a Data Guard configuration, and a 1-node database configured as a Data Guard standby. You can find more information about all possible Database Cloud Service configurations here. You must choose one of the following shapes when you use Database Cloud Service to launch a DB system in Oracle Cloud Infrastructure:

Bare Metal Shapes
Bare metal DB systems consist of a single bare metal server with remote block volumes.
BM.Standard1.36: Provides a 1-node DB system (one bare metal server), with up to 36 CPU cores, 256 GB memory, and up to 1 PB of remote block volumes.
BM.Standard2.52: Provides a 1-node DB system (one bare metal server), with up to 52 CPU cores, 768 GB memory, and up to 1 PB of remote block volumes.

Virtual Machine Shapes
You can provision a 1-node DB system on one virtual machine or a 2-node DB system with RAC on two virtual machines. Unlike a bare metal DB system, a virtual machine DB system can have only a single database home. The database home has a single database, which is the same version as the database home. A virtual machine DB system database uses Oracle Cloud Infrastructure block storage instead of local storage. The number of CPU cores on an existing virtual machine DB system cannot be changed.
VM.Standard1 virtual machines: Provisioned on X5 machines. Five VM options are available with 1 to 16 CPU cores and 7 GB to 112 GB memory.
VM.Standard2 virtual machines: Provisioned on X7 machines. Six VM options are available with 1 to 24 CPU cores and 15 GB to 320 GB memory.
Performance: High performance with the bare metal option, and good performance with the virtual machine shapes.
Management: Best management features, including deployment, patching, backups, and upgrading.
High availability: A Data Guard-based standby option is available. RAC-based database clustering is not yet available via Database Cloud Service on Oracle Cloud Infrastructure.
Scalability: Very good scalability with CPU and storage scaling in the bare metal option. Good scalability with storage scaling in the virtual machine option.
Cost: The virtual machine option is available at a very good price point. The bare metal option is more expensive than the virtual machine option but generally less expensive than Exadata Cloud Service, depending on the shape and number of cores chosen.
More information: Features, Pricing, Documentation
Guidance: If you are currently using Database Cloud Service with Oracle Cloud Infrastructure Classic and are migrating workloads from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure, then continuing to use Database Cloud Service will be the easier path for migrating to Oracle Cloud Infrastructure, and using the databases will feel familiar. It also offers more integrated management of existing PaaS services through the Oracle Cloud My Services portal. If you want to use RAC in Oracle Cloud Infrastructure, then Exadata Cloud Service and the Oracle Cloud Infrastructure Database service are good options, as discussed earlier. By extension, if you want nondisruptive rolling updates, then RAC or Exadata enables that, because one node at a time can be updated in those options. The maximum storage available on a virtual machine database in this option is 40 TB of remote NVMe SSD block volumes. For bare metal, depending on the machine type, storage is up to 51.2 TB NVMe SSD raw, ~16 TB with two-way mirroring, and ~9 TB with three-way mirroring. Using mirroring with the bare metal option is a best practice and highly recommended for any production workloads. If your storage needs are bigger than these options and you want a managed database offering without the need for sharding, Exadata, with up to 1440 TB of raw storage, becomes a good option.

Summary
In this post, I provide a high-level overview of the three managed Oracle Database offerings in Oracle Cloud Infrastructure: Oracle Exadata Cloud Service, Oracle Cloud Infrastructure Database, and Oracle Database Cloud Service. I discuss the key features of these three options and compare them on the basis of performance, management, high availability, scalability, and cost. I also provide some prescriptive guidance to help you decide which option is a good choice for your use case. For more customized guidance, and for help with any Oracle products and offerings, contact your Oracle representative. Contact information is also available on this site.
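Finally, for readers who want to try the Oracle Cloud Infrastructure Database service from the command line, here is a minimal, hedged sketch of launching a 1-node virtual machine DB system with the OCI CLI. Every OCID, name, and password below is a placeholder, and the available parameters can vary by CLI version, so treat this as an illustrative outline rather than the documented procedure:

# Launch a 1-node VM DB system (placeholder values throughout)
oci db system launch \
  --compartment-id ocid1.compartment.oc1..<unique_id> \
  --availability-domain <AD_name> \
  --shape VM.Standard2.4 \
  --cpu-core-count 4 \
  --database-edition ENTERPRISE_EDITION \
  --db-name testdb \
  --admin-password '<strong_password>' \
  --subnet-id ocid1.subnet.oc1..<unique_id> \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub \
  --hostname dbhost1 \
  --initial-data-storage-size-in-gb 256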


Oracle Cloud Infrastructure

Configuring a Custom DNS Resolver and the Native DNS Resolver in the Same VCN

One of the main objectives of the Oracle Cloud Infrastructure Blog is to serve as a forum for Cloud Solutions Architects and Product Managers to provide best practices, introduce new enhancements, and offer tips and tricks for migrating and running your most important workloads in the Oracle Cloud. I'm a Solutions Architect myself, and my job is to engage with customers from the design phase all the way through to implementation. And because I've had the privilege of working on so many customer deployments, I have visibility into issues and needs that span multiple accounts. The joy in this customer-vendor feedback loop comes in finding repeatable ways to solve issues, address needs, and improve our service offerings. In this blog post, I'll address a common issue that we've seen across a few customer accounts. This issue was caused by a configuration of the custom DNS resolver option in Oracle Cloud Infrastructure virtual cloud network (VCN) settings. This post explains the issue and how to resolve it. I want to acknowledge the contributions of the following team members from our Cloud Support and Operations teams for the speedy resolution of these support requests:
Ankita Singh, Associate Solution Engineer
Saulo Cruz, Principal Member of Technical Staff

Issue
When customers configure a subnet within a VCN, they can choose Internet and VCN Resolver or Custom Resolver when configuring the DHCP options. The default is Internet and VCN Resolver. If customers want to use their on-premises DNS servers (typically Microsoft Active Directory) across FastConnect or an IPSec VPN, they can select Custom Resolver. (For more information about the options, see the Networking documentation.) Generally, most enterprise customers put a DNS relay in the VCN within a shared services subnet, and typically the subnets within the VCN reflect this configuration. This works great for the applications. However, the issue starts when customers try to provision an Oracle Database Cloud Service (DBCS) instance by using a prebuilt Oracle Database image on a subnet that is using the Custom Resolver DHCP option. The typical error message is as follows:
InvalidParameter - VCN RESOLVER FOR DNS AND DNS LABEL must be enabled for all subnets used to launch the specified shape
This message goes away when the customer changes the DNS in the DHCP options to Internet and VCN Resolver. But this change breaks other applications. This issue happens because of the recursive nature of the native VCN resolver.

Workaround
We have found a workaround for this problem when the customer is using prebuilt DB images for DBCS. The following diagram describes the architecture. To implement this workaround, perform the following steps:
Use Terraform to create the VCN and required subnets. For instructions, see the VCN Overview and Deployment white paper.
Select the VCN in which the Database instance is required to be launched. Select the Internet and VCN Resolver DHCP option (which is the default option).
Launch the Database instance and make the required configuration for the instance.
After the Database instance is launched, go to the DHCP options, select Custom Resolver, and enter the customer’s DNS server IP address.
Instantiate the DNS relay server (or Microsoft Active Directory) in the shared resources subnet (referred to in the preceding diagram as the shared subnet). Keep the DHCP option as Internet and VCN Resolver (the default).
In all other application subnets, select the Custom Resolver DHCP option and enter the customer’s DNS server IP address. Note: Ensure that there is connectivity back to the customer DNS server or servers from the Oracle Cloud. Also ensure that you populate the DNS Label field when creating the VCN, or it will take the default value. This configuration also works across VCNs in the same region or across regions. For more information, see the Automate Oracle Cloud Infrastructure VCN Peering with Terraform blog post. Hopefully this post will help you avoid the rework involved in tearing down VCNs and subnets and re-creating them. If you want more information about integration with Microsoft Active Directory, Infoblox, or Bluecat, please leave a comment.


Customer Stories

Image Recognition Software Startup Takes on Big Players with Oracle Cloud Infrastructure

Image recognition software provider Netra is a fairly small player in the artificial intelligence (AI) market, but the company is using a high-performance, multicloud computing strategy to take on big players such as Google Cloud Vision and Amazon Rekognition. Netra helps businesses make sense of the tsunami of digital imagery on the internet, said CEO and founder Richard Lee, who shared his company's story on stage at the O'Reilly Velocity Conference in San Jose, California. Specifically, Netra uses computer vision, AI, and deep learning to help brands and agencies reach and better understand their ideal target audiences. The company's image recognition software analyzes billions of consumer photos to identify interests, life events, demographics, and brand preferences. "We provide image recognition as a service to our customers, and we deliver that through an API that gives access to our deep learning models, which are trained up on over 7,500 classifiers today," Lee said. "So, this is a little bit more complex than Hot Dog or Not Hot Dog." For those who don't watch HBO's Silicon Valley, this refers to an app on the show that identifies whether an image is of a hot dog or not. The deep learning models are deployed on Oracle Cloud Infrastructure and built on top of Apache Kafka, the open source stream-processing software, plus Docker and Kubernetes. Netra's technology works by identifying objects of interest and looking for pattern matches around specific clusters of pixels. "For example, our humans model may detect [a human face] and then send it to our humans daemon, which then classifies age, gender, and ethnicity," Lee explained. "Likewise, our brands model looks for the presence of a logo. … And then lastly, our context and object model detects and classifies what else is in the image." The image recognition software accomplishes all this in about 200 milliseconds.

Why Oracle Cloud Infrastructure?
Netra's customer base has recently grown to include large enterprises, and with that comes a higher volume of images and videos to analyze, as well as more demanding service-level agreements. The Boston-based company is counting on Oracle Cloud Infrastructure to help it meet these increasing demands. "Fundamentally, [Oracle Cloud Infrastructure] gives a startup like us access to machines that would cost us thousands to purchase on our own, as well as the flexibility to scale up and down as needed," Lee said. "Oracle gives us really strong value in terms of pricing and performance." Lee said he likes the flexibility that Oracle Cloud Infrastructure provides, especially when there is a spike in demand for his company's services. "If we get hit with a couple million images … we're able to spin up a new instance almost within minutes, to be able to work the queue down," Lee added. "Once the queue gets below a certain threshold, we're able to spin that down to manage our costs." The deep learning models that Netra deploys in the Oracle cloud are very complex, and the amount of compute power it takes to process photo and video is "pretty intense," Lee said. "We are always waiting for the next-generation GPU chips to be released," he said. "We're constantly pushing the envelope on the processing side, and we're always looking for the highest-performance hardware available. And from what we've seen, Oracle Cloud Infrastructure is the best price/performance on the bare metal side so far."
Oracle gives startups such as Netra the computing horsepower necessary to train deep learning models and compete with some of the biggest players out there. Running AI models in the cloud also gives Netra more bandwidth to focus on its core value proposition. "With Oracle Cloud Infrastructure, it's not a matter of how big your capital budget is, because it's kind of democratized for everybody," Lee said. "Now it's more about: How good are your computer vision models? What kind of solutions can they build? In that case, it's a much fairer fight against competitors, and we're excited to be able to participate. That would have been impossible before the advent of cloud and really the cost/performance that Oracle Cloud Infrastructure has provided to us."

Accelerating to the Cloud
Netra also takes part in the Oracle Cloud Startup Accelerator program, which helps startups get up and running in a short period of time. Program participants can take advantage of several benefits, including free credits for Oracle Cloud Infrastructure, world-class mentoring and consulting, state-of-the-art cloud technology, coworking spaces, and access to Oracle customers and partners. Lee especially likes the fact that his company can now get noticed by hundreds of thousands of Oracle customers—and the free credits certainly don't hurt. "It's like nondilutive venture capital," he said.

Sage Advice
Lee advised that other startups considering a move to an enterprise cloud platform should take advantage of the free credits that cloud providers offer. "There is a lot of money to get started and build apps and to actually run high-performance services that are effectively free funding right now," he said. "So as a startup, you can really extend your runway with these credits. But in order to do that, you have to be smart about your architecture and how you deploy it. For example, we've used Docker containers or Kubernetes to be agile to be able to deploy across multiple providers and services." And don't forget to look for the best solutions in terms of pricing and performance. "I think it's an amazing time to start a company," he said. "You need fewer resources than ever before, and you can scale faster than ever before through a lot of these startup-type programs."


Oracle Cloud Infrastructure

PCI Compliance on Oracle Cloud Infrastructure is EASY!

Oracle Cloud Infrastructure services have the PCI DSS Attestation of Compliance. The services covered are Compute, Networking, Load Balancing, Block Volumes, Object Storage, Archive Storage, File Storage, Data Transfer Service, Database, Exadata, Container Engine for Kubernetes, Container Registry, FastConnect, and Governance. In this blog post, we discuss the guidelines that help Oracle Cloud Infrastructure customers achieve PCI compliance for workloads running on Oracle IaaS.

Background
Our guidelines for achieving PCI compliance fall on the shared-responsibility spectrum of the cloud security continuum. The following diagram describes the separation between responsibility for security "of" the cloud and security "on" the cloud. As a customer, you are responsible for securing your workloads on Oracle Cloud Infrastructure. In some cases, you need to configure the services that Oracle provides. The responsibility is shared: Oracle maintains the services infrastructure, and the customer consumes the services and configures the controls according to their security and compliance requirements. The following picture from the International Information System Security Certification Consortium (ISC2) clarifies the areas of responsibility for IaaS, PaaS, and SaaS. We follow Oracle's 7 Pillars of Trusted Secure Enterprise Platform to develop solutions that meet the customer’s security and compliance requirements. We will discuss this more in our next blog post, on Security Solutions Architecture. For now, let’s focus on PCI on Oracle Cloud Infrastructure.

Recommended High-Level Solutions for PCI Compliance on Oracle Cloud Infrastructure
We follow the latest official publication from the PCI Security Standards Council®, Requirements and Security Assessment Procedures version 3.2.1 (May 2018). As per the document, there are 12 detailed requirements across 6 sections that cover how to:
Build and Maintain a Secure Network and System
Protect Cardholder Data
Maintain a Vulnerability Management Program
Implement Strong Access Control Measures
Regularly Monitor and Test Networks
Maintain an Information Security Policy
There are additional requirements for shared hosting providers like Oracle, and we have already met those requirements through our attestation. Let's dive into the solutions.

Section 1: Build and Maintain a Secure Network and System
Requirement 1: Install and maintain a firewall configuration to protect cardholder data.
Solution: Use Oracle Cloud Infrastructure security lists (Oracle Cloud Infrastructure-managed, subnet-specific firewall rules). In addition, you can download Fortinet or Check Point firewall images from our Marketplace and provision firewall appliances on Oracle Cloud Infrastructure.
Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters.
Solution: Review the guidance in the PCI document. In addition, we have detailed documentation on how to manage user credentials on Oracle Cloud Infrastructure.

Section 2: Protect Cardholder Data
Requirement 3: Protect stored cardholder data.
Solution: This involves protecting data at rest. By default, Oracle Cloud Infrastructure Block and Object Storage are encrypted.
Additionally, with our upcoming KMS, or any supported HSM, Oracle Wallet, Oracle Key Vault, and third-party vault offerings, we give you unprecedented flexibility around key and secret management. For data security, we provide Transparent Data Encryption (TDE) and column-level encryption.
Requirement 4: Encrypt transmission of cardholder data across open, public networks.
Solution: All our control and management plane communications are protected with TLS, which is necessary for the PCI DSS attestation. We also recommend using TLS (not SSL) and front-ending the application with our load balancers, as and when required. Use of SSH and IPSec VPN along with FastConnect is highly recommended.

Section 3: Maintain a Vulnerability Management Program
Requirement 5: Protect all systems against malware and regularly update antivirus software or programs.
Solution: Use our Dyn Malware Protection service to block malware at the edge of your logical network before it can infect web applications running on Oracle Cloud Infrastructure. Additionally, ensure that antivirus software is deployed at the OS level.
Requirement 6: Develop and maintain secure systems and applications.
Solution: We have many recommendations for developing and maintaining secure systems. Have a patch management policy in place, and consider using a managed cloud service provider for this purpose. If you're looking for a managed cloud service provider, Oracle Managed Cloud Services is an option, along with many of our Oracle Cloud Infrastructure MSP partners.

Section 4: Implement Strong Access Control Measures
Requirement 7: Restrict access to cardholder data by business need-to-know.
Requirement 8: Identify and authenticate access to system components.
Solution: Review the documentation on IAM access controls (compartments and policies). In addition, we suggest using Oracle CASB and Oracle IDCS for further security controls around access policies. For Oracle Container Engine for Kubernetes, our solution is to use Kubernetes role-based access control in addition to IAM. Look out for a future blog post on Kubernetes security on Oracle Cloud Infrastructure.
Requirement 9: Restrict physical access to cardholder data.
Solution: This is covered under our physical security controls for the data center at the availability domain and region level. We have ISO 27001 certification as well as SOC 1, SOC 2, and SOC 3 attestations, which provide the basis for control testing relevant to our PCI DSS Attestation of Compliance.

Section 5: Regularly Monitor and Test Networks
Requirement 10: Track and monitor all access to network resources and cardholder data.
Requirement 11: Regularly test security systems and processes.
Solution: Use Oracle CASB and Oracle Cloud Infrastructure Audit services for monitoring. Integrate CASB and audit logs with existing SIEM solutions. In addition, schedule regular penetration testing of environments based on Oracle Cloud Infrastructure, using the following links: Pen Testing on OCI, Schedule Pen Test via UI. More telemetry and monitoring features are coming, and our teams are working on an automated OpenVAS solution.

Section 6: Maintain an Information Security Policy
Requirement 12: Maintain a policy that addresses information security for all personnel.
Solution: While customers are responsible for their security policies, we are happy to help in any way we can. Most customers have existing security policies, and our team can help with cloud-specific (IaaS, PaaS, or SaaS) perspectives.
Here is a list of security policy templates per industry vertical from the SANS Institute. In conclusion, I hope these steps simplify the road to PCI compliance for your environments on Oracle Cloud Infrastructure. Look out for more blogs, white papers, and Infrastructure Security as Code (ISaC) for security and compliance on the cloud to ease your migration to Oracle Cloud.
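As a concrete illustration of the Requirement 1 security-list guidance above, here is a minimal, hedged sketch using the OCI CLI. The security list OCID and source CIDR are placeholders, and note that updating a security list replaces its entire ingress rule set, so in practice you would merge this with the rules you want to keep.

# Hypothetical sketch: allow HTTPS only from one trusted CIDR block.
# The OCID and CIDR are placeholders; --force skips the confirmation prompt.
# Caution: update replaces the full ingress rule list, so include any
# existing rules you intend to keep.
oci network security-list update \
  --security-list-id ocid1.securitylist.oc1..exampleuniqueID \
  --ingress-security-rules '[{"protocol": "6", "source": "203.0.113.0/24", "tcpOptions": {"destinationPortRange": {"min": 443, "max": 443}}}]' \
  --force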


Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Chris Riggin

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Chris Riggin of Verizon. Chris is the lead Oracle Cloud Infrastructure Certified Cloud Architect for Verizon. He has been with Verizon since 1999 in several IT engineering and architecture capacities, but he has focused on cloud design since 2012. Chris holds a patent for designing and implementing a first-ever cloud management system for heterogeneous platforms and services. He regularly presents at several events and technology summits, including more than 10 speaking engagements at Oracle Open World. His work on Oracle Cloud Infrastructure technology played a key role in establishing it as a highly cost-competitive, scalable, and stable environment for his organization. Today, Chris continues to expand Oracle Cloud Infrastructure deliverables to keep up with business demands and future trending technologies, always maintaining an ambitious three-to-five-year road map. Greg: Chris, how did you prepare for the certification? Chris: I went through the training curriculum posted in Oracle University and followed the posted path. Following the path and attending some of the instructor-led courses helped me gain, or in some cases reinforce, at least 85% of the knowledge I needed to pass the exam. Also, working with a live Oracle Cloud Infrastructure (OCI) tenancy helped me identify any gaps I may have had in my skill set, as I was able to test many of the features within that tenancy. Greg: How long did it take you to prepare for the exam? Chris: Fortunately, my job was 100% OCI at the time, but I still needed at least two weeks where I was able to focus solely on exam preparations and make sure that I had the knowledge and skills necessary for the exam. Unfortunately, life got in the way and prevented me from putting in as much time and effort as I had hoped. I didn’t feel I was as prepared as I would have liked, so a day before the exam, I tried to reschedule. Unfortunately, when I called Pearson VUE, because I was within 24 hours of the exam delivery, I was not able to change the appointment. I literally was forced to cram several missed days of studies into the very last day before the exam! Turns out it was enough, or I just had plenty of experience, because I passed! The moral of this story is that you cannot change your exam appointment within 24 hours of when it’s scheduled. Greg: How is life after getting certified? Chris: As the lead architect, earning the certification has reinforced my position as the subject matter expert. Now when I speak about OCI, I speak with authority. Before receiving my certification, there were many different opinions on how to proceed, and it seemed no one had the credentials to lead the discussion. After it became known I had earned the certification, people immediately began to listen to what I had to say. Since I’ve posted the digital badge in my signature, more than half the folks involved with OCI have gained an interest in taking the exam. They continually reach out to me for assistance, asking to be pointed in the right direction as to what to study, and even go so far as to ask for help after hours to prepare them for the exam. Greg: Any other advice you’d like to share? Chris: Do not focus solely on infrastructure. 
Make sure you are aware of and understand all the service offerings across the overall environment, and have a strong knowledge of cloud technologies and concepts outside of OCI. You should understand databases, not necessarily at an expert level, but you should understand some of the inherent services and service levels provided by Oracle. Learn about the OCI PaaS and SaaS offerings that are available. Understand DNS, connecting to the gateways, networking, and don’t forget Terraform! Finally, I would strongly suggest that you certify as soon as possible! The exam is only going to get more difficult as OCI continues to grow and mature. Please subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam. Greg Hyman Principal Program Manager, Oracle Cloud Infrastructure Certification greg.hyman@oracle.com Twitter: @GregoryHyman LinkedIn: GregoryRHyman Associated links: Oracle Cloud Infrastructure 2018 Architect Associate exam Oracle Cloud Infrastructure 2018 Architect Associate study guide Oracle Cloud Infrastructure 2018 Architect Associate practice test Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam Other blog posts in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series: Umair Siddiqui Nitin Vengurlekar Rajib Kundu Miranda Swenson Robby Robertson Chris Riggin Anuj Gulati


Oracle Cloud Infrastructure

Customize Block Volume Backups with the Oracle Cloud Infrastructure CLI

It is a common IT operations practice to manage the data protection of compute instances through the command line and scripts. This post provides detailed instructions on how to customize your application compute instance's block volume backup by using the Oracle Cloud Infrastructure Command Line Interface (CLI). With the CLI, you can perform a block volume backup based on your schedule and remove old backups based on your retention period. Environment You can run this customized block volume backup task from a centralized system or inside the application compute instance itself. In the example in this post, the task runs inside a compute instance and is created as a bash shell script. Because this task runs inside a compute instance, we recommend using the instance principal feature to avoid storing user credentials locally. Volume Group For this customized volume backup script, we recommend using the volume group feature to create block volume backups. This feature enables you to group multiple block volumes into a collection from which you can create consistent volume backups and clones. You can restore an entire group of volumes from a volume group backup. Customized Volume Backup Script Before you can run this customized script, you need to install the CLI on your compute instance. Detailed instructions for installing the CLI are located in the documentation. Step 1 The first step of this script gets required information about where this compute instance is located, such as the availability domain, compartment OCID, and instance OCID. You can get this information through the metadata of the compute instance.

# Get availability domain
AD=$(curl -s http://169.254.169.254/opc/v1/instance/ | grep availabilityDomain | awk '{print $3;}' | awk -F\" '{print $2;}')
echo "AD=$AD"

# Get compartment OCID
COMPARTMENT_ID=$(curl -s http://169.254.169.254/opc/v1/instance/ | grep compartmentId | awk '{print $3;}' | awk -F\" '{print $2;}')
echo "COMPARTMENT_ID=$COMPARTMENT_ID"

# Get instance OCID
INSTANCE_ID=$(curl -s http://169.254.169.254/opc/v1/instance/ | grep ocid1.instance | awk '{print $3;}' | awk -F\" '{print $2;}')
echo "INSTANCE_ID=$INSTANCE_ID"

Step 2 The second step of the script gets the tagging information from the boot volume of the compute instance. Then you can use the same tagging information to create the volume group and its backups. With the same tags, you can easily sort or filter your volumes and their backups.
# Get tags of the boot volume of this instance.
# We will use these tags for the volume group created for this instance's boot volume and other attached volumes.

# Get boot volume defined tags
BOOTVOLUME_DEFINED_TAGS=$(oci compute boot-volume-attachment list --compartment-id=$COMPARTMENT_ID --availability-domain=$AD --instance-id=$INSTANCE_ID --auth instance_principal | jq '.data[] | ."defined-tags"')

# Get boot volume freeform tags
BOOTVOLUME_FREEFORM_TAGS=$(oci compute boot-volume-attachment list --compartment-id=$COMPARTMENT_ID --availability-domain=$AD --instance-id=$INSTANCE_ID --auth instance_principal | jq '.data[] | ."freeform-tags"')

Note: The jq command is very useful for parsing the JSON output from the CLI. Step 3 The third step of the script gets the boot volume OCID and a list of the attached block volumes' OCIDs for the compute instance. These OCIDs are used to construct the JSON data for the volume group creation command.

# Get boot volume OCID
BOOTVOLUME_ID=$(oci compute boot-volume-attachment list --compartment-id=$COMPARTMENT_ID --availability-domain=$AD --instance-id=$INSTANCE_ID --auth instance_principal | grep boot-volume-id | awk '{print $2;}' | awk -F\" '{print $2;}')
echo $BOOTVOLUME_ID

# Get a list of attached block volumes
BLOCKVOLUME_LIST=($(oci compute volume-attachment list --compartment-id=$COMPARTMENT_ID --availability-domain=$AD --instance-id=$INSTANCE_ID --auth instance_principal | grep volume-id | awk '{print $2;}' | awk -F\" '{print $2;}'))

# Construct JSON for the volume group create command
LIST="[\"$BOOTVOLUME_ID\""
for volume in ${BLOCKVOLUME_LIST[*]}
do
   LIST="${LIST}, \"${volume}\""
done
LIST="${LIST}]"
SOURCE_DETAILS_JSON="{\"type\": \"volumeIds\", \"volumeIds\": $LIST}"

Step 4 The fourth step of the script checks whether an existing volume group has already been created by the script. If there is no existing volume group, the script creates the volume group based on the information from the previous steps, such as the OCIDs of the boot volume and all the attached block volumes. If there is an existing volume group, the script checks whether there are any changes to the member volumes inside the volume group; for example, new block volumes attached to the compute instance. If there are changes, the script updates the volume group with the latest volumes.

# Check whether there is an existing available volume group created by the script
VOLUME_GROUP_NAME="volume-group-$INSTANCE_ID"
VOLUME_GROUP_ID=$(oci bv volume-group list --compartment-id $COMPARTMENT_ID --availability-domain $AD --display-name $VOLUME_GROUP_NAME --auth instance_principal | jq '.data[] | select(."lifecycle-state" == "AVAILABLE") | .id' | awk -F\" '{print $2;}')

echo "VOLUME_GROUP_ID=$VOLUME_GROUP_ID"

# If the volume group does not exist, create a new volume group
if [ -z "$VOLUME_GROUP_ID" ]; then

# Create volume group
VOLUME_GROUP_ID=$(oci bv volume-group create --compartment-id $COMPARTMENT_ID --availability-domain $AD --source-details "$SOURCE_DETAILS_JSON" --defined-tags="$BOOTVOLUME_DEFINED_TAGS" --freeform-tags="$BOOTVOLUME_FREEFORM_TAGS" --display-name=$VOLUME_GROUP_NAME --wait-for-state AVAILABLE --max-wait-seconds 24000 --auth instance_principal | grep ocid1.volumegroup | awk '{print $2;}' | awk -F\" '{print $2;}')

echo "VOLUME_GROUP_ID=$VOLUME_GROUP_ID"

else
# The volume group exists; check whether there are any changes to the attached block volumes
VOLUME_LIST_IN_VOLUME_GROUP=$(oci bv volume-group get --volume-group-id $VOLUME_GROUP_ID --auth instance_principal | jq '.data | ."volume-ids"' | grep ocid1.volume | awk -F\" '{print $2;}')
# Compare against the boot volume plus the attached block volume list.
# The boot volume is included because the volume group contains it;
# omitting it would make the comparison always report a difference.
LIST3=$(echo $BOOTVOLUME_ID ${BLOCKVOLUME_LIST[*]} $VOLUME_LIST_IN_VOLUME_GROUP | tr ' ' '\n' | sort | uniq -u)
if [ -z "$LIST3" ]; then
    echo "no change for volume group"
else
    # Update the volume group with the updated volume IDs list
    VOLUME_GROUP_ID=$(oci bv volume-group update --volume-group-id $VOLUME_GROUP_ID --volume-ids "$LIST" --defined-tags="$BOOTVOLUME_DEFINED_TAGS" --freeform-tags="$BOOTVOLUME_FREEFORM_TAGS" --display-name=$VOLUME_GROUP_NAME --wait-for-state AVAILABLE --max-wait-seconds 24000 --auth instance_principal | grep ocid1.volumegroup | awk '{print $2;}' | awk -F\" '{print $2;}')
fi
fi

Step 5 The last step of the script creates the backup for this volume group. The script uses the same tags, defined-tags and freeform-tags, from the boot volume of the compute instance. However, you can define your own customized tags as needed.

# Create backup
VOLUME_GROUP_BACKUP_NAME="Volume-group-backup-$VOLUME_GROUP_ID"

VOLUME_GROUP_BACKUP_ID=$(oci bv volume-group-backup create --volume-group-id $VOLUME_GROUP_ID --defined-tags="$BOOTVOLUME_DEFINED_TAGS" --freeform-tags="$BOOTVOLUME_FREEFORM_TAGS" --display-name=$VOLUME_GROUP_BACKUP_NAME --wait-for-state AVAILABLE --max-wait-seconds 24000 --auth instance_principal | grep ocid1.volumegroupbackup | awk '{print $2;}' | awk -F\" '{print $2;}')

echo "VOLUME_GROUP_BACKUP_ID=$VOLUME_GROUP_BACKUP_ID"
echo "VOLUME_GROUP_BACKUP_NAME=$VOLUME_GROUP_BACKUP_NAME"

You can configure a cron job to run this customized volume backup script according to your backup schedule. Volume Backup Retention Script Based on your requirements, you might need to define a customized and flexible retention period for your volume backups. For example, say you want the retention period of the volume backups to be 14 days. The following example script checks the creation times of your volume backups and then deletes the old backups beyond the retention period. You can configure and run this script in your cron job based on how often you want to conduct a backup retention check.
# Get all the volume group backups older than the retention period.
# The retention window is passed to jq with --argjson so that it stays in sync with RETENTION_DAYS.
RETENTION_DAYS=14
VOLUME_GROUP_BACKUP_LIST=$(oci bv volume-group-backup list --compartment-id $COMPARTMENT_ID --volume-group-id $VOLUME_GROUP_ID --display-name=$VOLUME_GROUP_BACKUP_NAME --auth instance_principal | jq -r --argjson days "$RETENTION_DAYS" '.data[] | select(."time-created" | sub("\\.[0-9]+[+][0-9]+[:][0-9]+$"; "Z") | fromdateiso8601 < ((now | floor) - ($days * 86400))) | .id')

echo $VOLUME_GROUP_BACKUP_LIST
for backup in $VOLUME_GROUP_BACKUP_LIST
do
   DELETED_VOLUME_GROUP_BACKUP_ID=$(oci bv volume-group-backup delete --volume-group-backup-id ${backup} --force --wait-for-state TERMINATED --max-wait-seconds 24000 --auth instance_principal | grep ocid1.volumegroupbackup | awk '{print $2;}' | awk -F\" '{print $2;}')
   echo $DELETED_VOLUME_GROUP_BACKUP_ID
done
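To tie both scripts to a schedule, a minimal crontab sketch follows. The script paths and log file locations are assumptions for illustration; adjust them to wherever you saved the scripts.

# Edit the crontab with: crontab -e
# Paths below are placeholders for your own script and log locations.
# Run the volume group backup script every night at 01:30
30 1 * * * /home/opc/scripts/volume_group_backup.sh >> /var/log/volume_group_backup.log 2>&1
# Run the retention cleanup every Sunday at 03:00
0 3 * * 0 /home/opc/scripts/volume_backup_retention.sh >> /var/log/volume_backup_retention.log 2>&1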


Events

Oracle Cloud Infrastructure Makes Debut on Gartner's IaaS Scorecard

Recently Gartner published their latest round of In Depth Assessments, which are a series of scorecards for the major IaaS vendors - Amazon Web Services, Microsoft Azure, Google Cloud Platform, and now Oracle Cloud Infrastructure. We're excited to have taken part in this comprehensive evaluation as one of the Big 4 IaaS players. Gartner In Depth Assessments evaluate cloud vendors' ability to address Gartner's list of required, preferred, and optional criteria for production workloads running in the cloud. This year, their evaluation was based on 263 criteria points spanning everything from core computing, storage, and networking capabilities to integration with what are traditionally considered PaaS capabilities, like database and data warehousing. Read Gartner's blog post to find out how the vendors scored. Our inclusion in these assessments shows strong validation from top industry analysts that Oracle is firmly established among the leading hyperscale IaaS players, and that we're being recognized for our rapid pace of adding key new services while delivering the best price/performance equation in the industry. This aligns well with a recent RedMonk report that showed how Oracle offers the most compute and memory, dollar for dollar, when compared with other clouds. Gartner will dive further into the results of these In Depth Assessments during their popular Cloud War sessions at the Gartner Catalyst Conference later this month in San Diego. If you're attending, be sure to check out the bake-off on Sunday, August 19, where Oracle solutions architects will demonstrate how you can deploy a 3-tier, highly available application environment on our cloud in 10 minutes. In addition, on Tuesday, August 21 at 10:45 AM PT, Kash Iftikhar, our VP of Product Management and Strategy, will be joined on stage by Sherri Hammons, CTO of Beeline, to discuss how they have been able to optimize their critical applications in the Oracle Cloud.


Oracle Cloud Infrastructure

Why I'm Betting on Oracle Cloud Infrastructure

Oracle? Seriously? That's the question people asked when I told them my destination after leaving IBM. A few months back, when I started looking for a career change, some good opportunities came my way. Some required me to move across the country to Seattle. Some required me to move to Silicon Valley. A few good local opportunities in the Boston area also came up. I had to make a hard choice. What do I want? Money? Respect? An important title? A strong company culture? After a lot of thought, I chose Oracle. Seriously. I have joined the company to lead strategy, vision, innovation, and evangelism in cloud infrastructure, edge services, and emerging technologies. Let me tell you why. About three years ago, when I was leading emerging tech strategy for IBM, we were working on technology to make Internet of Things (IoT) and edge devices collect, procure, analyze, share, decide, and act on data in a secure, autonomous, and automated fashion. One of the companies that my then-boss (and still-mentor) asked me to look at was Dyn. I argued with him, saying, “Dyn is a DNS resolution company. What value are they going to add to our mission and vision?” He said, “Trust me.” I still remember driving up to Manchester, New Hampshire, thinking, "Why am I going there?" But I also remember thinking about the fact that Dyn made a $100-million business out of DNS resolution! I at least had to learn about their go-to-market brilliance. As I became familiar with the company, I learned that Dyn is more than just DNS (more on that later). Their business acumen made me like them, and their integrity and culture made me like them even more. Integrity Dyn was facing tough competition from niche players who were offering DNS resolution services for almost free. They had to differentiate their value proposition to offer more to their customers than those competitors did, and they were consistently winning those battles. On Oct. 21, 2016, everything changed. A massive, worldwide distributed denial of service (DDoS) attack was launched against Dyn's DNS resolution service, temporarily disrupting access to much of the internet, including major sites such as Twitter, Amazon, Netflix, Spotify, PayPal, Salesforce and GitHub. This was not your garden-variety DDoS attack; it relied on tens of millions of IoT devices compromised via the infamous Mirai botnet, and it was only the second known attack of this kind. The first attack took down the blog of my favorite investigative security writer, Brian Krebs. When the hackers took him on, Akamai decided to stop hosting his blog, because it was disrupting their other customers. Everyone was watching closely to see how Dyn would respond to this new attack. The company could have surrendered to the hackers and asked for mercy. Instead, it fought back. Essentially, there were three attacks that day. Dyn mitigated the first in a couple of hours, the second in less than an hour, and the third before it happened. After that, the hackers decided to move on. The incident happened while Dyn was being acquired by Oracle. Considering the risk, Oracle could have just walked away. The fact that it didn't demonstrated its character and the value of Dyn. And unlike some major corporations who have tried to sweep security breaches under the rug, Dyn talked openly about the attack. That transparency helped other major companies prepare for future attacks and helped Dyn's reputation not only survive but thrive in the aftermath. 
As the Chinese proverb says, "Failure is not about falling down, but refusing to get back up." Culture Whenever I visited Dyn's Manchester office, everyone seemed to be having fun. The main attraction of the office was the slide (yes, a slide, like kids use in the park). I slid down that slide (in a suit!) the very first time I visited the office, and I still have videos to prove it. When I sent those videos to my kids, they asked, “What are you waiting for? When are you starting there?” In addition to the slide, the office had beer taps with rotating selections from local microbreweries, a big gong hanging in front of the slide, and a bunch of great restaurants within walking distance. But above all, the things that really stood out to me were the respect that Dyn employees had for others and their willingness to always learn. Vision When I was looking for a new career opportunity, I dove deeper into Oracle Dyn. It's part of the Seattle-based Oracle Cloud Infrastructure unit, which has developed an identity and culture similar to that of the original Dyn. The internet has become the most essential utility. Almost all major corporations use the internet to move their major, sensitive, and mission-critical workloads. For that to happen, every enterprise needs efficient and secure connectivity, plus full visibility into internet performance. When you are building an enterprise-grade cloud, consider the following questions: Cloud is not just about compute. It is about data. Is your cloud equipped to support enterprise data? Is your cloud provider flexible enough to allow you to build truly cloud native applications, regardless of your cloud deployment model? Is your provider secure from the edge to the core so that you can confidently send highly sensitive workloads to the cloud for processing? Can your provider support bare metal, virtual machines, serverless, Functions as a Service, containers, and a flexible orchestration system? Does your provider offer complete visibility into the internet portion of your network? Oracle Cloud Infrastructure does all these things, helping customers redefine what an enterprise version of the internet truly is. That's why I'm excited to join the team. And yes, we are hiring. Big time! Reach out to me either on LinkedIn or Twitter if you want to immerse yourself in this journey.


Oracle Cloud Infrastructure

Making It Easier for Organizations to Move Oracle-Based SAP Applications to the Cloud

For decades, Oracle has provided a robust, scalable, and reliable infrastructure for SAP applications and customers. For over 30 years, SAP and Oracle have worked closely to optimize Oracle technologies with SAP applications to give customers the best possible experience and performance. The most recent certification of SAP Business Applications on Oracle Cloud Infrastructure makes sense within the context of this long-standing partnership. As this blog post outlines, SAP NetWeaver® Application Server ABAP/Java is the latest SAP offering to be certified on Oracle Cloud Infrastructure, providing customers with better performance and security for their most demanding workloads, at a lower cost. Extreme Performance, Availability, and Security for SAP Business Suite Applications Oracle works with SAP to certify and support SAP NetWeaver® applications on Oracle Cloud Infrastructure, which makes it easier for organizations to move Oracle-based SAP applications to the cloud. Oracle Cloud enables customers to run the same Oracle Database and SAP applications, preserving their existing investments while reducing costs and improving agility. Unlike products from first-generation cloud providers, Oracle Cloud Infrastructure is uniquely architected to support enterprise workloads. It is designed to provide the performance, predictability, isolation, security, governance, and transparency required for your SAP enterprise applications. And it is the only cloud optimized for Oracle Database. Run your Oracle-based SAP applications in the cloud with the same control and capabilities as in your data center. There is no need to retrain your teams. Take advantage of performance and availability equal to or better than on-premises. Deploy your highest-performance applications (those that require millions of consistent IOPS and millisecond latency) on elastic resources with pay-as-you-go pricing. Benefit from simple, predictable, and flexible pricing with universal credits. Manage your resources, access, and auditing across complex organizations. Compartmentalize shared cloud resources by using simple policy language to provide self-service access with centralized governance and visibility. Run your Oracle-based SAP applications faster and at lower cost. Moving SAP Workloads: Use Cases There are a number of different editions and deployment options for SAP Business Suite applications. As guidance, we are focusing on the following use cases: Develop and test in the cloud Test new customizations or new versions Validate patches Perform upgrades and point releases Backup and disaster recovery in the cloud Independent data center for high availability and disaster recovery Duplicated environment in the cloud for applications and databases Extend the data center to the cloud Transient workloads (training, demos) Rapid implementation for an acquired subsidiary, geographic expansion, or separate lines of business Production in the cloud Reduce reliance on or eliminate on-premises data centers Focus on strategic priorities and differentiation, not managing infrastructure Oracle Cloud Regions Today we have four Oracle Cloud Infrastructure regions, along with a number of Oracle Cloud Infrastructure Classic regions, and we’ve announced that we’re introducing 12 additional regions in the coming months. This provides the global coverage that enterprises need. 
SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure Oracle Cloud Infrastructure offers hourly and monthly metered bare metal and virtual machine compute instances with up to 51.2 TB of locally attached NVMe SSD storage or up to 1 PB (petabyte) of iSCSI-attached block storage. A bare metal instance with 51.2 TB of local NVMe flash storage is capable of around 5.5 million 4K IOPS at less than 1 ms latency, making it an ideal platform for an SAP NetWeaver® workload using an Oracle Database. Get 60 IOPS per GB, up to a maximum of 25,000 IOPS per block volume, backed by Oracle's industry-first performance SLA. Instances in Oracle Cloud Infrastructure are attached using a 25 Gbps nonblocking network with no oversubscription. While each compute instance running on bare metal has access to the full performance of the interface, virtual machine servers can rely on guaranteed network bandwidths and latencies; there are no “noisy neighbors” to share resources or network bandwidth with. Compute instances in the same region are always less than 1 ms away from each other, which means that your SAP application transactions will be processed in less time, and at a lower cost, than with any other IaaS provider. To support highly available SAP deployments, Oracle Cloud Infrastructure builds regions with at least three availability domains. Each availability domain is a fully independent data center with no fault domains shared across availability domains. An SAP NetWeaver® Application Server ABAP/Java landscape can span multiple availability domains. Planning Your SAP NetWeaver® Implementation For detailed information about deploying SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure, see the SAP NetWeaver Application Server ABAP/Java on Oracle Cloud Infrastructure white paper. This document also provides platform best practices and details about combining parts of Oracle Cloud Infrastructure, Oracle Linux, Oracle Database instances, and SAP application instances to run software products based on SAP NetWeaver® Application Server ABAP/Java in Oracle Cloud Infrastructure. Topologies of SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure There are various installation options for SAP NetWeaver® Application Server ABAP/Java. You can place one complete SAP application layer and the Oracle Database on a single compute instance (two-tier SAP deployment). You can install the SAP application layer instance and the database instance on two different compute instances (three-tier SAP deployment). Based on the sizing of your SAP systems, you can deploy multiple SAP systems on one compute instance in a two-tier way or distribute them across multiple compute instances in two-tier or three-tier configurations. To scale a single SAP system, you can configure additional SAP dialog instances (DI) on additional compute instances. Recommended Instances for SAP NetWeaver® Application Server ABAP/Java Installation You can use the following Oracle Cloud Infrastructure Compute instance shapes to run the SAP application and database tiers. Bare Metal Compute BM.Standard1.36 BM.DenseIO1.36 BM.Standard2.52 BM.DenseIO2.52 Virtual Machine Compute VM.Standard2.1 VM.Standard2.2 VM.Standard2.4 VM.Standard2.8 VM.Standard2.16 VM.DenseIO2.8 VM.DenseIO2.16 For additional details, review the white paper referenced in the "Planning Your SAP NetWeaver® Implementation" section. 
Technical Components An SAP system consists of several application server instances and one database system. In addition to multiple dialog instances, the System Central Services (SCS) instance for AS Java and the ABAP System Central Services (ASCS) instance for AS ABAP provide the message server and enqueue server for both stacks. The following graphic gives an overview of the components of the SAP NetWeaver® Application Server: Conclusion This post provides some guidance about the main benefits of using Oracle Cloud Infrastructure for SAP NetWeaver® workloads, along with the topologies, main use cases, installation, and migration process. For more information, review the following additional resources. Additional Resources SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure white paper Oracle Cloud Infrastructure technical documentation Oracle Cloud for SAP Overview SAP Solutions Portal SAP on Oracle Community High Performance X7 Compute Service Review and Analysis


Oracle Cloud Infrastructure

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Robby Robertson

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we interviewed Robby Robertson of Accenture. Robby has been with Accenture for over 18 years and has worked within the Oracle space during most of his time there. Robby was one of the first people to earn the Oracle Cloud Infrastructure Classic Architect Associate certification in 2016, and he recently earned the Oracle Cloud Infrastructure 2018 Architect Associate certification. Greg:  Robby, how did you prepare for the certification? Robby: I found the white papers to be amazingly helpful. They really forced me to try and duplicate what they’ve done. I also found the eLearning series to be an extremely good overview. Even stuff like IAM; I didn’t know much about the home region as I just never had to read up about it. The introductory video forced me to research some of the topics further, which helped me prepare. Most beneficial was working with the hands-on labs. They were key to passing the exam. I installed the CLI on my laptop to test out the features and functions. I set up Terraform to see exactly how it works. This, along with walking through the white papers and trying to replicate the environments, was critical to my preparation. Greg:  How is life after getting certified? Robby: After earning the certification, I posted the digital badge on LinkedIn. I think that’s the most that I’ve ever had a post viewed in my entire life. This was beneficial in making connections with others in the industry and building my network around Oracle Cloud. While I already had a robust network within Oracle, this helped me meet others within the Oracle Cloud team. By following these individuals on social media, I learned more about the latest OCI (Oracle Cloud Infrastructure) capabilities, features, and benefits. For my job as a Solution Architect, the OCI certification gives me the credentials I need. I’m viewed as a subject matter expert, and earning this certification helps support my status as a SME. Greg:  Any other advice you’d like to share? Robby: I’m telling my colleagues who are preparing for the exam not to take it lightly. The test is meant to be challenging. Do a little research and get a trial account to help reinforce your knowledge. The practice exam is extremely useful and right on point. It helps people understand what they are missing. Please subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam. Greg Hyman Principal Program Manager, Oracle Cloud Infrastructure Certification greg.hyman@oracle.com Twitter: @GregoryHyman LinkedIn: GregoryRHyman Associated links: Oracle Cloud Infrastructure 2018 Architect Associate exam Oracle Cloud Infrastructure 2018 Architect Associate study guide Oracle Cloud Infrastructure 2018 Architect Associate practice test Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam Other posts in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam blog series: Umair Siddiqui Nitin Vengurlekar Rajib Kundu Miranda Swenson Robby Robertson Chris Riggin Anuj Gulati


Oracle Cloud Infrastructure

Introducing Fault Domains for Virtual Machine and Bare Metal Instances

We are excited to introduce fault domains, a new way to manage and improve availability for Oracle Cloud Infrastructure virtual machine and bare metal compute instances within an availability domain. Today you can use availability domains to help ensure high availability for your applications by distributing virtual machine (VM) and bare metal instances across multiple availability domains within a single region. Availability domains are physically isolated and do not share resources (power, cooling, network), which means the likelihood of multiple availability domains within a region failing is very small. The use of multiple availability domains ensures high availability because a failure in any one availability domain won't impact the resources running in the others. If you want more granular control of application availability within a single availability domain, you can now achieve that by using fault domains. Fault domains enable you to distribute your compute instances so that they are not on the same physical hardware within a single availability domain, thereby introducing another layer of fault tolerance. Fault domains can protect your application against unexpected hardware failures or outages caused by maintenance on the underlying compute hardware. Additionally, you can launch instances of all shapes within a fault domain. Oracle Cloud Infrastructure is typically designed with three availability domains per region, and each availability domain has three fault domains. When carrying out maintenance on the underlying compute hardware, Oracle Cloud Infrastructure ensures that only a single fault domain is impacted at one time, to guarantee availability of your instances in the remaining fault domains. Getting started is easy. When you create a new compute instance using the API, CLI, or Console, you can specify the fault domain in which to place the instance (see the CLI sketch after this post). If you don’t specify a fault domain, the instance is placed automatically in one of the three fault domains within that availability domain. To modify the fault domain after an instance has been created, you must terminate and re-create the instance. All existing VM and bare metal instances have been distributed automatically among the three fault domains in their availability domain. The instance details page shows the fault domain information along with other metadata about the instance. To get started with fault domains on Oracle Cloud Infrastructure, visit https://cloud.oracle.com. Fault domains are available at no additional cost in all public regions. For more information, see the Oracle Cloud Infrastructure Getting Started guide, Compute service overview, Compute FAQ, and Fault Domains documentation. Sanjay Pillai
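As a hedged illustration of specifying a fault domain at launch time, the following OCI CLI sketch shows the general shape of the command. All OCIDs, the availability domain name, and the display name are placeholders, and the exact required flags may vary with your CLI version and image source.

# Hypothetical values; replace the OCIDs, AD name, and subnet with your own.
oci compute instance launch \
  --availability-domain "Uocm:PHX-AD-1" \
  --fault-domain "FAULT-DOMAIN-2" \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --shape VM.Standard2.1 \
  --subnet-id ocid1.subnet.oc1.phx.exampleuniqueID \
  --image-id ocid1.image.oc1.phx.exampleuniqueID \
  --display-name fd-demo-instance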


Product News

Announcing NFS Export Options for File Storage

Hi, I am Mona Khabazan, Product Manager for Oracle Cloud Infrastructure File Storage. At the beginning of this year we launched File Storage, a brand-new service at an extremely high scale, to support enterprise cloud strategies. File Storage provides persistent shared file systems in the cloud that are highly available, highly durable, and fully managed. With File Storage, you can start small and grow to 8 exabytes per file system without any upfront provisioning or allocation. File Storage is needed by nearly every enterprise application that wants to move its workloads into the cloud. We built this service on a distributed architecture to provide full elasticity in the cloud and give you a competitive advantage. You don't have to worry about storage maintenance and capacity management; instead, you can focus on your business needs and simplify your operations by leveraging the File Storage service. NFS Export Options We understood your need for more granular access and security controls on a per-file-system basis to enable multi-tenant environments. So, we are now announcing NFS Export Options, which enable you to set permissions on your file systems for Read or Read/Write access, limit root user access, require connection from a privileged port, or completely deny access to some clients. How it works When you create a file system and associated mount target, the export options for that file system are set to the following defaults: Source: 0.0.0.0/0 (All) Require Privileged Source Port: false Access: Read_Write Identity Squash: None The default settings allow full access for all NFS client source connections. These defaults can be changed for more granular access control, even though mount targets in File Storage are not accessible from the internet. By default, your file system is visible to all the hosts that are in the mount target's virtual cloud network (VCN) or peered to that VCN. Additionally, VCN security rules apply another layer of control. Now, by using NFS Export Options, you can set additional limits on clients' ability to connect to your file systems to view or write data, based on the clients’ IP addresses. Managing which clients have access to your file systems is straightforward. For each file system, simply set the Source parameter to define which clients should access which file systems. Clients that are not listed do not have visibility into your file systems. Try It for Yourself Let’s say that you have three clients that are sharing one mount target, but each client has its own file system. In this scenario, you want to set them up so that they can’t access each other's data, as follows: Client A is assigned to CIDR block 10.0.0.0/24 and should have Read/Write access to file system A but not file system B. Client B is assigned to CIDR block 10.1.1.0/24 and should have Read/Write access to file system B but not file system A. Client C is assigned to CIDR block 10.2.2.0/24 and should not have access to either file system A or B. Because Client A and Client B access the mount target from different CIDR blocks, you can set the client options for both file system exports to allow access to only a single CIDR block. To create this access: Set file system A to allow Read/Write access only to Client A, who is assigned to CIDR block 10.0.0.0/24. Because neither Client B nor Client C is included in this CIDR block, they cannot access file system A. 
oci fs export update --export-id <File_system_A_export_ID> --export-options '[{"source":"10.0.0.0/24","require-privileged-source-port":"true","access":"READ_WRITE","identity-squash":"NONE","anonymous-uid":"65534","anonymous-gid":"65534"}]'

Next, set file system B to allow Read/Write access only to Client B, who is assigned to CIDR block 10.1.1.0/24. Because neither Client A nor Client C is included in this CIDR block, they cannot access file system B.

oci fs export update --export-id <File_system_B_export_ID> --export-options '[{"source":"10.1.1.0/24","require-privileged-source-port":"true","access":"READ_WRITE","identity-squash":"NONE","anonymous-uid":"65534","anonymous-gid":"65534"}]'

Because you did not include Client C's CIDR block in any of these export options, neither file system A nor file system B is visible to Client C. Now, let’s say in a different scenario, to increase security you want to limit the root user's privileges when connecting to file system D. Use the Identity Squash option to remap root users to UID and GID 65534. In UNIX-like systems, this combination is reserved for 'nobody', a user with no system privileges.

oci fs export update --export-id <File_System_D_export_OCID> --export-options '[{"source":"0.0.0.0/0","require-privileged-source-port":"true","access":"READ_WRITE","identity-squash":"ROOT","anonymous-uid":"65534","anonymous-gid":"65534"}]'

CLI, SDK, or Terraform Here I have demonstrated just two scenarios using the CLI. For more scenarios and instructions on how to achieve the same control with the SDK or Terraform, see Working with NFS Export Options. For more information about how different types of security work together in your file system, see About Security. We continue to strive to find areas of differentiation in storage technology that enterprises need most to give you a competitive advantage. Bring your storage-hungry workloads, and send me your thoughts on how we can continue to improve File Storage. There is ample opportunity ahead of us; we’re just getting started. Mona Khabazan
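As a quick client-side sanity check for the three-client scenario above, a hedged sketch follows. The mount target IP address (10.0.0.100) and export path (/fs-a) are placeholders for your own values.

# On Client A (in 10.0.0.0/24): mount the export and confirm read/write access.
sudo mkdir -p /mnt/fs-a
sudo mount -t nfs -o nfsvers=3 10.0.0.100:/fs-a /mnt/fs-a
sudo touch /mnt/fs-a/write-test && echo "read/write OK"

# On Client C (in 10.2.2.0/24), the same mount should be refused,
# because its CIDR block is not listed in any export option.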


Developer Tools

Resilient IP-Based Connectivity Between IoT Sensors and Diverse Oracle Cloud Infrastructure Regions

This blog post specifically explores how to use Border Gateway Protocol (BGP) for resiliency and high availability for IP-based applications (not DNS-enabled) hosted in diverse Oracle Cloud Infrastructure regions. The scope is limited to IPv4 addresses, but the solution presented also works for IPv6 services with some additional configuration. Most of these applications fall in the IoT application domain. Because of the implementation of ubiquitous connectivity for the Internet of Things (IoT), devices like sensors and gateways communicate back to central processors hosted in cloud data centers. I have used this solution as a way to achieve resiliency between IoT endpoints and diverse Oracle Cloud Infrastructure regions. Although Oracle Cloud Infrastructure provides computation and data storage resources for IoT workflows across regional availability domains, resiliency or high availability for the connectivity from sensor edge services to the Oracle Cloud Infrastructure regions is always a challenge. Usually, IoT devices use IPv6, while the computation applications in cloud datacenters are only IPv4 aware. Another limiting factor is that most sensors can’t use DNS for the services running in cloud datacenters because of the low buffer space of the IoT devices. This negates any DNS-based high-availability solution. Services Used The following Oracle Cloud Infrastructure services and open-source software are used in this solution: Oracle Cloud Infrastructure Block Storage Oracle Cloud Infrastructure Compute Oracle Cloud Infrastructure FastConnect Oracle Cloud Infrastructure Object Storage Oracle Cloud Infrastructure Networking, including the following components: Virtual cloud network (VCN) Dynamic routing gateway Local peering gateway Remote peering gateway Internet gateway Subnet security list Software-defined networking (SDN) routing application Open-source routing engines For information about configuring the VCNs, subnets, and other Oracle Cloud Infrastructure constructs needed for this solution, see the following resources: https://cloud.oracle.com/opc/iaas/whitepapers/OCI_WhitePaper_VCN_v1.0_LL.pdf https://blogs.oracle.com/cloud-infrastructure/automate-application-deployment-across-availability-domains-on-oracle-cloud-infrastructure-with-terraform https://blogs.oracle.com/cloud-infrastructure/automate-oracle-cloud-infrastructure-vcn-peering-with-terraform Solution Overview This solution focuses on the following components: FastConnect deployment between the local point-of-presence (PoP) and the customer IoT VCN in the Oracle Cloud Infrastructure regional datacenter BGP configurations on the collocated SDN routers and Oracle Cloud Infrastructure dynamic routing gateways (DRGs) Peering configurations between local and remote DRGs Note: This solution excludes the details of IoT-workflow-related compute and storage handling of the data collectors and analytics applications. This solution also doesn’t examine the detailed architecture of the IoT edge services. The IoT application for this use case comprises sensors installed at gas pumps to measure oil surface temperatures and to detect any significant spill. The data is uploaded to the edge services for normalization before transmission to the Oracle Cloud Infrastructure region for processing, where the IoT processing and analytics applications are running. The edge services can run in the customer’s on-premises datacenters, in a colocation datacenter, or in the Oracle IoT Cloud. 
The focus of this solution is how to design the connectivity from the customer’s on-premises or colocation datacenter to dual Oracle Cloud Infrastructure regions, such as Phoenix and Ashburn. Network Architecture Overview Connectivity from the edge services datacenters can use private, dedicated circuits (including IPSec VPNs) or public connections using internet IPv4 space. A pair of SDN routers is used at the FastConnect colocation for IPv6-to-IPv4 translation or IPSec termination before peering with the FastConnect edge routers. Both regions are connected by means of Oracle Cloud Infrastructure inter-region backbones for disaster recovery (DR) replication, using a DRG at each end for remote peering. The DRGs are inherently highly available and configured in active-active mode at each regional end. The estimated throughput for each DRG per customer VCN is around 7 Gbps. If more bandwidth is required, multiple VCNs and DRGs can be deployed. The latency between regions over the backbone is around 60 ms. Customers can deploy traffic accelerators, such as Riverbed virtual appliances, in their VCNs at either end for caching. Logical View The logical view depicts the pair of redundant routers running in each of the Oracle Cloud Infrastructure PoPs. These routers are managed by the customer network teams or the Oracle Managed Cloud Services team. This is the control plane for the data path resiliency and high availability from the IoT sensors in the field to the IoT applications running across the Oracle Cloud Infrastructure regions. Region Design Customers should provision dual circuits or IPSec VPNs using SDN routers on each of the transit PoPs. On the backend, the Oracle Cloud Infrastructure team establishes connectivity from the customer routers to the Oracle Cloud Infrastructure PoP routers by using cross-connects or peering points. Each transit PoP is connected to all three availability domains (datacenters) in the region. There are multiple FastConnect transit PoPs (ingress/egress) for a region and multiple FastConnect routers per PoP. Each transit PoP has access to each of the availability domains. All the connections from PoPs to the availability domains (ADs) are provisioned and managed by Oracle Cloud Infrastructure teams. Apart from planning and ordering connections, the following are some of the follow-up tasks: Set up DRGs in the respective Oracle Cloud Infrastructure regions Set up customer cross-connect groups and cross-connects Set up cabling in the FastConnect location Check light levels for each physical connection Confirm that all the interfaces are up Activate the cross-connects Set up virtual circuits Configure your edge Confirm that the BGP session is established The next section discusses one of the two options for connecting the edge services to the Oracle Cloud Infrastructure regions. Direct Cross-Connect: Colocation In this scenario, the pair of SDN routers is placed in the same colocation facility that serves as the FastConnect PoP. The routers establish external BGP (eBGP) peering relationships with the other edge datacenter routers and the Oracle Cloud Infrastructure DRGs. For DRG configuration guidance, see https://docs.cloud.oracle.com/iaas/Content/Network/Tasks/managingDRGs.htm. Information about BGP configuration is provided later in this post. Overview The customer routers are placed in the customer cage in the FastConnect colocation. 
Crossover cables are provisioned between the customer routers in the customer cage and OCI equipment in the OCI FastConnect cage. Both sets of equipment are configured for high availability at Layer 2 and Layer 3. The following graphic shows a logical view of the configuration: FastConnect configuration information for setting up the circuit is located at https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/fastconnectprovider.htm. Peering Oracle Cloud Infrastructure supports only IPv4 peering, and Oracle Cloud Infrastructure regions support both public and private peering. Public Peering Connect edge service resources via FastConnect to access public services in Oracle Cloud Infrastructure without using the internet (for example, Object Storage, the Oracle Cloud Infrastructure Console and APIs, or public load balancers in your VCN). Communication across the connection is with IPv4 public IP addresses. Without FastConnect, the traffic destined for public IP addresses would be routed over the internet. With FastConnect, that traffic goes over your private physical connection. Private Peering Connect IoT edge services infrastructure to a VCN in Oracle Cloud Infrastructure. Communication across the connection is with IPv4 private addresses (typically RFC 1918). BGP Configuration on the Customer Colocated Routers Following is a sample BGP configuration. The scenario has been simplified by representing the customer router pair at the Oracle Cloud Infrastructure PoP (colocation) as a single router, focusing on the eBGP configuration for path resiliency. As depicted in the picture, to add resiliency to the edge services in case of a region failure, use AS path prepending. AS path prepending artificially lengthens the AS path that is advertised to a neighbor to make the neighbor think that the path is much longer than it actually is; a minimal open-source-router sketch follows at the end of this post. For step-by-step configuration guidance for collocated routers, see the following resources: Cisco: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_bgp/configuration/xe-3se/3850/irg-xe-3se-3850-book/irg-prefix-filter.html Juniper: https://www.juniper.net/documentation/en_US/junos/topics/example/routing-policy-security-routing-policy-to-prepend-to-as-path-configuring.html As a result of this configuration, if there is an outage in the first (preferred) region, the IoT sensor network or the edge network follows the next best path advertised through BGP and reaches the second region. Note: All the IP addresses and ASNs mentioned here are for testing purposes only. Oracle Cloud Infrastructure uses the same ASN (31898) for all of its regions.
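Because the solution lists open-source routing engines as a component, here is a minimal, hedged sketch of AS path prepending using an FRR-style configuration, written as a shell snippet that appends to the router's config file. The local ASN 64512, neighbor address 192.0.2.1, prefix 10.10.0.0/16, and file path are all placeholders; only the Oracle-side ASN 31898 comes from the note above. Treat this as an illustration of the technique, not a definitive configuration.

# Hypothetical sketch: append an FRR-style BGP fragment that prepends our
# AS path toward the less-preferred region. All values are placeholders.
sudo tee -a /etc/frr/frr.conf <<'EOF'
route-map PREPEND-BACKUP permit 10
 ! Make the path toward the backup region look three hops longer
 set as-path prepend 64512 64512 64512
!
router bgp 64512
 neighbor 192.0.2.1 remote-as 31898
 address-family ipv4 unicast
  network 10.10.0.0/16
  ! Apply prepending only on the session facing the backup region
  neighbor 192.0.2.1 route-map PREPEND-BACKUP out
EOF

With something like this in place, neighbors see a longer AS path through the backup region, so traffic prefers the primary region until it becomes unavailable.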


Developer Tools

Creating a Secure SSL VPN Connection Between Oracle Cloud Infrastructure and a Remote User

Companies have increasingly mobile workforces and therefore need to be able to provide their employees with convenient and secure access to their networks. A VPN allows users to connect securely to their networks over the public internet, which is a convenient way to support mobility. An IPSec VPN can be used to provide a dedicated connection to remote locations. IPSec is used with network access control to make sure that only approved users can connect to the enterprise. The other type of VPN is an SSL VPN, which uses Secure Sockets Layer protocols. An SSL VPN provides more granular access control than IPSec: it allows companies to control the types of resources a user can access through the VPN. This blog post explains how to create a secure SSL VPN connection between Oracle Cloud Infrastructure and remote users by using OpenVPN. At a high level, these are the steps required to create an SSL tunnel between Oracle Cloud Infrastructure and the OpenVPN client: Configure Oracle Cloud Infrastructure for OpenVPN Install and configure the OpenVPN server Install the OpenVPN client Configuration Diagram The following diagram shows the high-level architecture of the proposed setup. The diagram shows a VCN with two subnets: Public (10.0.1.0/24) - a public subnet with access to the internet through an internet gateway. Private (10.0.2.0/24) - a private subnet with no access to the internet. 1. Configure Oracle Cloud Infrastructure for OpenVPN The following steps outline how to create and prepare an Oracle Cloud Infrastructure VCN for OpenVPN. Create a VCN Create a VCN with two subnets in an availability domain to house the OpenVPN server and a Linux host. For more information on how to create a VCN and associated best practices, see the VCN Overview and Deployment Guide. Public Subnet Configuration The public subnet's route table has a route rule in which the internet gateway is configured as the route target for all traffic (0.0.0.0/0). For the subnet's default security list, create an egress rule to allow traffic to all destinations. Create ingress rules that allow access on: TCP port 22 for SSH TCP port 443 for the OpenVPN TCP connection TCP port 943 for the OpenVPN web UI UDP port 1194 for the OpenVPN UDP connection For details about how to create subnets, see VCNs and Subnets. Launch an Instance Launch an instance in the newly created public subnet. In this case, we are using a VM.Standard2.1 shape running CentOS 7. Use this instance to install the OpenVPN server. For details, see Launching an Instance. Private Subnet Configuration The private subnet's route table (Private RT) has a routing rule in which the OpenVPN server (10.0.1.9) is configured as the route target for all traffic (0.0.0.0/0). The security list has an egress rule to allow traffic to all destinations. Ingress rules allow only specific address ranges (like the on-premises network or any other private subnets in the VCN). 2. Install and Configure the OpenVPN Server After the new instance starts, connect to it through SSH and install the OpenVPN package. You can download the software package for your OS platform from the OpenVPN website. Use the RPM command to install the package (a hedged sketch appears at the end of this post). Note: Make sure that you change the password by using the “passwd openvpn” command. Connect to the Admin UI address (https://public-ip:943/admin) using the password for the OpenVPN user. Once you are logged in, click Network Settings and replace the hostname or IP address with the public IP of the OpenVPN server instance. 
Next, click VPN Settings and add the private subnet address range in the Routing section. In the Routing section, ensure that the option Should client Internet traffic be routed through the VPN? is set to Yes. Under Have clients use these DNS servers, manually set the DNS resolvers that will be used by your VPN client machines.

Inter-Client Communication
In the Advanced VPN section, ensure that the option Should clients be able to communicate with each other on the VPN IP Network? is set to Yes. After you've applied your changes, click Save Settings. You are prompted to Update Running Server to push your new configuration to the OpenVPN server.

3. Install the OpenVPN Client
Connect to the OpenVPN Access Server Client UI at https://Public-IP-OpenVPN-VM:943 and download the OpenVPN client for your platform. After the installation process completes, you see an OpenVPN icon in your OS taskbar. Right-click this icon to bring up the context menu and start your OpenVPN connection. Clicking Connect brings up a window that asks for the OpenVPN username and password. Enter the credentials for your OpenVPN user and click Connect to establish a VPN tunnel.

Verification
Launch a host instance with any operating system in the private subnet. Open a terminal window on your laptop and connect to the host by using its private IP, as shown in the sketch after this post.

Conclusion
This blog post discusses how to create a secure, encrypted SSL VPN tunnel between Oracle Cloud Infrastructure and a remote user, allowing the user to access resources in a private subnet of Oracle Cloud Infrastructure.
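As a quick check of the Verification step, here is a minimal sketch, assuming the tunnel is up and a hypothetical private IP of 10.0.2.5 for the instance in the private subnet:

# The private IP is illustrative; use the one shown on your instance's details page
ssh opc@10.0.2.5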


Oracle Cloud Infrastructure

Oracle Cloud Adoption Best Practices: Digital Transformations

This post is the first in a series that discusses best practices and provides practical advice for planning, implementing, operating, and evolving in the Oracle Cloud. This post covers the following topics:
Digital transformations and the importance of determining the right business drivers and success criteria
Defining a cloud strategy and understanding how the strategy impacts transformations
The framework of people, process, and technology that is necessary for successful cloud adoption and transformations

Business Transformations Powered by the Cloud
Much has been said and written about the role of the cloud in digital disruption and how the cloud is powering digital transformation for enterprises. A vast majority of companies say that the cloud is an important or critical part of their digital transformation strategy, and analysts agree that enterprises will spend trillions of dollars on these business transformations. The only disagreement is about how many trillions will be spent and over how many years. I’m not going to cover the basics of digital transformations here, but let me provide a couple of links with good insights for you to explore: Oracle CEO Safra Catz shares her thoughts about digital transformation and how to manage your business through the change. You can also read this insider’s take on Oracle’s cloud transformation, in which Mark Sunday, the CIO of Oracle, provides some key insights about Oracle’s own transformation journey.

Digital Transformation: State of Affairs
“There is a difference between knowing the path and walking the path.” – Morpheus, The Matrix
A recent MIT Sloan Management Review and Deloitte Digital report shows that companies are making slow progress in their digital transformation initiatives. The number of companies reporting that their digital transformation projects are at a mature stage rose by five percentage points last year, which is the first meaningful uptick in the four years of the study. But about 70 percent of companies are still in the early or developing stages of their digital transformation journey. This study and others like it show that progress is slow and we are still scratching the surface; a lot of transformation work still needs to be done across a lot of enterprises.

Start with Why…
“He who has a Why to live can bear with almost any How” – Friedrich Nietzsche
Why do you want to transform your business? What are your specific reasons? Start with those reasons and tie them to your business goals as much as possible. Say you want to reduce your technical debt. That’s great, but to sustain the initiative and drive it to conclusion, you need to figure out how it would benefit the business. How will you measure success? For example, to reduce technical debt you can start participating in the latest open source projects, refactor your code, and set up R&D and dev teams to contribute to and use the latest open source code. But do these activities align with your long-term business strategy? Are they part of your core competency? Do they add value to your products and services in a way that benefits your customers? In this example, whether or not leveraging open source aligns with your business objectives, you can still use Oracle Cloud to execute the strategy. The chances of your project being successful, however, will be largely determined by how closely aligned it is with business outcomes.
Business Drivers for Transformation
The business drivers for digital transformation are as varied as the organizations making the investments. For many enterprises, transformations are about becoming more responsive to customer needs and preferences. For others, they are about becoming more agile in response to more nimble competition disrupting their business. Some have compliance needs and strive to implement security controls for global expansion or in response to mandates like GDPR. Others want to focus on innovation as their core competency instead of mundane and undifferentiated work that doesn’t add any direct value to their customers. For some, the main driver is cost savings and replacing capex with opex. Increasing experimentation and reducing the risk of failure are also important drivers. Other drivers include higher revenue, better ROI, decluttering, rationalization, consolidation, modernization, higher employee productivity, and collaboration. After you determine your business drivers, you need to define and quantify what success looks like.

Defining Your Cloud Strategy
Your business drivers will have a major impact on your cloud strategy, enterprise architecture, and solution design. For example, projects driven by cost savings or increased efficiency will likely have a return-on-investment target expressed as expense reductions. In this case, a common approach is to increase asset utilization through consolidation of workloads onto less costly virtual machines (VMs). In Oracle Cloud Infrastructure, using VM instances for compute, containers through Container Engine for Kubernetes, or both will likely be suitable choices, with applications consolidated on shared infrastructure. On the other hand, mandates focused on business agility, like acceleration of product development and faster response to market conditions, are more likely to introduce higher levels of automation early in the project. Oracle Cloud adoption strategies for your application portfolio include retire, rehost (IaaS), replatform (PaaS), replace (SaaS), and rebuild (cloud native). I’ll cover this topic in detail in another post. The bottom line is that for your digital transformation initiative to be successful, you need to clearly articulate your reasons, business drivers, success criteria, and cloud strategy. Otherwise, your digital transformation initiative runs the risk of being just a buzzword and a one-off innovation project that fizzles out without tangible outcomes.

Foundation for Successful Cloud Adoption
Digital transformations are about more than just adopting the latest technology. To execute a digital transformation successfully, you need to address several important factors, including employee skills and learning, company culture and readiness for change, and commitment to updating old processes and leveraging the latest technologies. Businesses have to want to change and have to commit to doing so in an effective way: by bringing in new skills, adapting roles, encouraging innovation, and instilling confidence in new business models. They must also have the technology and the infrastructure to enable change to happen. I think there are three essential pillars of any successful digital transformation and cloud adoption initiative: people, process, and technology. A good example of these three elements at play is embracing the DevOps method.
While adopting the cloud, most enterprises realize that the traditional distinction between application developers and IT operations is often replaced by a practical division of responsibilities that is more situational and less rigid. A DevOps approach that integrates development and operations into a single role, or as a shared responsibility, makes a lot of sense in the cloud. The transformation to a DevOps approach involves developing skills, possibly restructuring organizational boundaries, updating processes for implementation and operations, and retooling to a common set of tools. In essence, you need to transform people, process, and technology, and you need to be effective with all three elements to be successful. Let’s look at all three in more detail.

People
The people are all the stakeholders, including employees, leaders, users, and customers. This pillar also includes the company culture and its appetite for change. It is critical for all stakeholders to be on board, enabled, and aligned, and for the company culture to be conducive to transformation. The first group of people I want to highlight is the employees. You can empower employees with the agility, scale, and global reach of the cloud to improve their productivity and their impact. The cloud can reduce repetitive work such as racking and stacking servers, provisioning, and patching and backing up databases. You need to enable employees to gain new skills and refocus their time on differentiated work and problem solving. The cloud requires new skills, for which your employees need training and enablement; Oracle University offers good resources for Oracle Cloud training and certification. Digital transformations need new digital leaders who are cloud savvy. Developing or hiring effective and experienced leaders who can successfully lead such initiatives takes time and must be prioritized. Closely related is developing a culture with a growth mindset, continuous learning, experimenting, and iterating. Finally, the most important group of people you need to focus on is your end users and customers. You need to seek continuous feedback to improve how well and how quickly you meet your customers' needs. Many enterprises have started following, with success, the approach of building minimum viable products and either dropping them or iterating on them based on user feedback. This approach aligns well with the agile method, and the cloud, with its pay-as-you-go pricing model, ability to scale quickly, and elastic resources, is an excellent way to execute this strategy. In fact, most cloud services are built this way.

Process
The cloud works very well with newer paradigms for developing, deploying, and managing applications. For example, there is more focus on microservices, APIs, serverless, agile, and DevOps. Leveraging these relatively new paradigms requires changes to the dev, test, integration, deployment, operations, and incident management processes that many enterprises still use. Continuous learning, experimentation, automation, and agility should be part of the processes used to determine, implement, and operate new products and services. Security and compliance processes also need to be updated. Oracle Cloud infrastructure and platform services operate under a shared responsibility model, where Oracle is responsible for the security of the underlying cloud infrastructure and you are responsible for securing your workloads.
Governance, auditing, pen testing, incident management, and response processes need to be updated as well. You also need to update your procurement process for the cloud. The cloud offers usage-based metering, so monthly bills might vary. Licensing models are typically different in the cloud, with new pricing and service-level options available. Oracle Cloud provides a flexible buying and usage model for Oracle Cloud services, called Universal Credits. When you sign up for an Oracle Cloud account, you have unlimited access to all eligible IaaS and PaaS services. You can sign up for a pay-as-you-go subscription, or you can save money and pay in advance for a year based on your estimated monthly usage, which is the Monthly Flex plan. Bring Your Own License (BYOL), metered, and non-metered options are also available. For successful transformations, you should also re-evaluate your current vendors and partners, and determine which partners have the cloud skills and experience to help you accelerate and be successful with your transformation initiative.

Technology
Many of the latest breakthroughs and innovations in technology are being delivered primarily through the cloud. Autonomous services, blockchain, artificial intelligence, Internet of Things (IoT), and microservices are a few good examples. You can use the cloud to leverage these latest technologies. Your tried and trusted technology stacks are also available on Oracle Cloud. As a result, Oracle Cloud enables you to transform both your internal IT and your customer-facing products and services. Oracle Cloud is the industry's broadest and most integrated cloud, with deployment options ranging from the public cloud to your data center. You can leverage your existing infrastructure investments by implementing hybrid architectures using services like FastConnect. For data sovereignty or compliance reasons, you can also leverage Oracle Cloud at Customer to run Oracle Cloud in your own data centers. Oracle Cloud offers best-in-class services across software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Use Oracle Cloud Infrastructure (IaaS) offerings to quickly set up the compute, storage, networking, and database capabilities that you need to run just about any kind of workload; your infrastructure is managed, hosted, and supported by Oracle. Use Oracle Cloud Platform (PaaS) offerings to provision ready-to-use environments for your enterprise IT and development teams, so they can build and deploy applications based on proven Oracle databases and application servers. Use Oracle Cloud Applications (SaaS) offerings to run your business from the cloud; Oracle offers cloud-based solutions for Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and many other applications, all managed, hosted, and supported by Oracle.

Conclusion
Most enterprises pursue their digital transformations and cloud strategies in tandem. In this post, I covered this topic with a focus on Oracle Cloud offerings, and offered a framework based on people, process, and technology to help execute a transformation initiative in Oracle Cloud. The focus of this post was on the why and the what. In the next posts in this series, I’ll cover the how.


Oracle Cloud Infrastructure

Deploy HA Availability Domain Spanning Cloudera Enterprise Data Hub Clusters on Oracle Cloud Infrastructure

Hello, my name is Zachary Smith, and I'm a Solutions Architect working on Big Data for Oracle Cloud Infrastructure. We're proud to announce that availability domain spanning Terraform automation is now available for use with Cloudera Enterprise Data Hub deployments on Oracle Cloud Infrastructure. This deployment architecture includes enhanced security and fault tolerance, while maintaining performance.

Cloudera Enterprise Data Hub: Availability Domain Spanning
Availability domain spanning is ideal for customers who want to maintain the performance of Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure while leveraging cloud constructs to enhance fault tolerance and high availability. Cloudera Enterprise Data Hub cluster hosts are deployed across all three availability domains in a region, and ZooKeeper, NameNode, and HDFS services are distributed across the nodes in each availability domain.

Cloudera Cluster Hosts on a Private Subnet
With our continued focus on enabling enterprise customers to deploy secure environments in the cloud, this architecture deploys the master and worker cluster hosts on a private subnet that is not directly accessible from the internet. To achieve this, the bastion host in the deployment is set up as a NAT gateway, which hosts on the private subnet use to route internet-destined traffic to the internet gateway. This architecture provides enhanced security without sacrificing cluster performance.

Performance Testing
To test the performance of Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure, we chose Terasort as a benchmark (a sketch of such a run appears after this post). This benchmark is a standard for Hadoop because it tests the I/O of all elements involved in a Hadoop deployment: compute, memory, storage, and network. The following graph compares a 10-TB Terasort run across two cluster types on each deployment architecture. The first cluster type is virtual machines using six 1.5-TB block volumes for HDFS. The second cluster type is bare metal using local NVMe for HDFS. The cluster topology is the same for both architectures: five worker nodes, one Cloudera Manager node, two master nodes for cluster services, and one bastion host. Not only are the results extremely fast for sorting 10 TB with five workers, but the sort times are extremely close when comparing the single availability domain architecture with the availability domain spanning architecture. These tests were run multiple times in a row and returned almost identical results regardless of the time of day that the job ran. This is a great example of Oracle’s industry-leading SLA for cloud. We have more improvements coming in this space, along with a white paper that details a reference architecture for Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure and the use of these Terraform templates. Have questions or want to learn more? Join us at the Cloudera Now Virtual Event Booth on August 2 from 9 a.m. to 1 p.m. PDT. Register Now. We hope you will be as excited as we are about the improvements we’re making to the Cloudera plus Oracle solution. Let us know what you think!

Zachary Smith
Senior Member of Technical Staff
https://www.linkedin.com/in/zachary-c-smith/
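For readers who want to run a comparable benchmark on their own cluster, here is a minimal sketch of a Terasort run; the examples jar path and the HDFS directories are illustrative, and a 10-TB sort corresponds to 100 billion 100-byte rows:

# Generate 10 TB of input (100,000,000,000 rows x 100 bytes each)
hadoop jar hadoop-mapreduce-examples.jar teragen 100000000000 /benchmarks/terasort-input
# Sort the generated data
hadoop jar hadoop-mapreduce-examples.jar terasort /benchmarks/terasort-input /benchmarks/terasort-output
# Validate that the output is correctly sorted
hadoop jar hadoop-mapreduce-examples.jar teravalidate /benchmarks/terasort-output /benchmarks/terasort-validate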


Oracle Cloud Infrastructure

Foundational Oracle Cloud Infrastructure IAM Policies for Managed Service Providers

This post describes some Identity and Access Management (IAM) policies that Oracle Cloud Infrastructure partners and managed service providers (MSPs) can use as a foundation for managing Oracle Cloud Infrastructure services on behalf of their end customers. In particular, we focus on the initial IAM policy use cases that MSPs can leverage to manage overall end-customer tenancies and to provision entitlements for the various customer administrator groups so that they can manage their respective compartments themselves. For information about Oracle Cloud Infrastructure IAM best practices, read the blog post and white paper created by fellow blogger Changbin Gong.

Use Case Overview
This post illustrates the following IAM use cases:

As A Tenant Admin, the MSP Wants To manage all the Oracle Cloud Infrastructure assets of its tenant (customer enterprise) So That the MSP can create compartments (aligned to the requirements of the customer) and troubleshoot any issues escalated from the customer administrator groups.

As A Tenant Admin, the MSP Wants To delegate the administration of the non-root compartments to the corresponding customer administrators So That the customer administrators have the entitlements for the resources in their respective compartments.

As A Tenant Admin, the MSP Wants To create role-specific entitlements for the tenant So That the MSP administrator groups have a clear separation of duties. For example, specific roles such as server administrators have entitlements for compute-related services, and network administrators have entitlements for the network resources across compartments in the customer tenancy.

As An Operations (OPS) Admin, the OPS team Wants To create and manage users and groups, but Should Not be able to modify the Administrators group, which has unrestricted access.

Requirements
The MSP creates the tenancy and the compartments according to customer requirements. For this example:
The MSP is ACME_Cloud_provider (or ACP for short), the tenancy is ACP_Tenant, and the compartments are Root, ACP_Client_Prod, and ACP_Client_Dev.
The MSP administrator groups are ACP_OPS_Admin, ACP_Server_Admin, and ACP_Network_Admin.
The customer administrator groups are ACP_Prod_Admin and ACP_Dev_Admin.
The customer administrator for user provisioning, if required, is ACP_Customer_Admin.
The policies are ACP_Tenant_Policy, ACP_Prod_Policy, ACP_Dev_Policy, and ACP_Customer_Policy.

Steps
For each use case, you create the necessary groups, add users to the groups, and create the policies by performing the following steps in the Oracle Cloud Infrastructure Console. Links to detailed instructions in the IAM documentation are provided. (A hedged CLI sketch of these steps appears after Use Case 1.)
Create the groups. See “To create a group” in Managing Groups.
Add users to the groups. See “To add a user to a group” in Managing Users.
Add the policies. See “To create a policy” in Managing Policies.

Use Case 1
As A Tenant Admin, the MSP Wants To manage all the Oracle Cloud Infrastructure assets of its tenant (customer enterprise) So That the MSP can create compartments (aligned to the requirements of the customer) and troubleshoot any issues escalated from the customer administrator groups.
Key Policy:
ALLOW GROUP ACP_OPS_Admin to manage all-resources IN TENANCY
Note: This policy is for the MSP operations team, which might require the same access as the Administrators group.
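As referenced in the Steps section, here is a hedged sketch of performing the group setup with the OCI CLI instead of the Console; the OCIDs are placeholders, and the group description is illustrative:

# Create the MSP operations admin group from the example above
oci iam group create --name ACP_OPS_Admin --description "MSP operations administrators"
# Add a user to the group (substitute real user and group OCIDs)
oci iam group add-user --group-id ocid1.group.oc1..aaaa... --user-id ocid1.user.oc1..aaaa...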
Use Case 2
As A Tenant Admin, the MSP Wants To delegate the administration of the non-root compartments to the corresponding customer administrators So That the customer administrators have the entitlements for the resources in their respective compartments. In this use case example, the MSP creates policies for the client's production and dev compartments, granting access to the customer administrator groups defined in the requirements.
Key Policy for the Prod Compartment:
Allow group ACP_Prod_Admin to manage all-resources in compartment ACP_Client_Prod
Key Policy for the Dev Compartment:
Allow group ACP_Dev_Admin to manage all-resources in compartment ACP_Client_Dev

Use Case 3
As A Tenant Admin, the MSP Wants To create role-specific entitlements for the tenant So That the MSP administrator groups have a clear separation of duties, such as server administrators having entitlements for compute-related services and network administrators having entitlements for the network resources across compartments in the customer tenancy.
Key Policies for Network Administrators:
Allow group ACP_Network_Admin to manage virtual-network-family in tenancy
Allow group ACP_Network_Admin to manage load-balancers in tenancy
Allow group ACP_Network_Admin to read instances in tenancy
Allow group ACP_Network_Admin to read audit-events in tenancy
Key Policies for Server Administrators:
Allow group ACP_Server_Admin to manage instance-family in tenancy
Allow group ACP_Server_Admin to manage volume-family in tenancy
Allow group ACP_Server_Admin to use virtual-network-family in tenancy
Allow group ACP_Server_Admin to read instances in tenancy
Allow group ACP_Server_Admin to read audit-events in tenancy
Key Policies for Security Administrators:
Allow group ACP_Security_Admin to read instances in tenancy
Allow group ACP_Security_Admin to read audit-events in tenancy
Key Policies for Database Administrators:
Allow group ACP_DB_Admin to manage database-family in compartment ACP_Client_Prod
Allow group ACP_DB_Admin to manage database-family in compartment ACP_Client_Dev
Allow group ACP_DB_Admin to read instances in tenancy

Use Case 4
As An OPS Admin, the OPS team Wants To create and manage users and groups, but Should Not be able to modify the Administrators group, which has unrestricted access.
Key Policies:
Allow group ACP_OPS_Admin to use users in tenancy where target.group.name != 'Administrators'
Allow group ACP_OPS_Admin to use groups in tenancy where target.group.name != 'Administrators'
Note: The order of IAM verbs, from more granular to less granular (more restrictive to less restrictive), is: inspect, read, use, manage.

We will continue to add more blogs and white papers that highlight Oracle Cloud Infrastructure IAM policies for managed service providers. For more information about IAM, see the IAM documentation.
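As a companion to the policies above, here is a hedged sketch of creating the Use Case 4 policy with the OCI CLI; the tenancy OCID and the policy name ACP_OPS_Policy are placeholders, and oci iam policy create expects the statements as a JSON array:

# Create the policy at the tenancy (root compartment) level
oci iam policy create --compartment-id ocid1.tenancy.oc1..aaaa... --name ACP_OPS_Policy --description "OPS admins manage users and groups except Administrators" --statements "[\"Allow group ACP_OPS_Admin to use users in tenancy where target.group.name != 'Administrators'\", \"Allow group ACP_OPS_Admin to use groups in tenancy where target.group.name != 'Administrators'\"]"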


Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Miranda Swenson

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Miranda Swenson of Cintra Software and Services. Miranda is a long-time techie with a passion for using technology to solve business challenges. She's worked in IT for the past 20 years as a technical consultant, presales engineer, and solution architect. Over the past three years, she has focused on cloud technology and hybrid cloud solution architecture. Miranda is currently working as a Principal Solution Architect at Cintra Software and Services. In her spare time, she enjoys playing with her pets, learning Spanish, traveling, and hula hooping! Here, Miranda shares some of her key learnings and tips.

Greg: How did you prepare for the exam?
Miranda: Part of my role at Cintra is building customer workshops so we can show our customers how Oracle Cloud Infrastructure (OCI) works. I had been putting together labs that included a lot of the topics I found on the exam, such as the grand tour of the console and how the networking works. Putting together labs based on the GitHub examples, and actually using OCI and learning it well enough that I could share it with other people, really helped prepare me for the certification. Reading the online documentation about Terraform, the GitHub material, and the cloud documentation also helped me prepare. I found that working with the environment was extremely beneficial.

Greg: How is life after getting certified?
Miranda: I’ve had a whole lot of people checking out my profile on LinkedIn. My company is happy because getting certified has helped with our partner recognition. Taking the exam helped reinforce what I knew and also helped identify where my gaps are. I wanted to get 100% on the exam, but I still have some things to learn. I found it to be a good way to see what you know and what you don’t. My certification helps when working with customers. It shows that I can bring solid solutions and know some of the “gotchas” that can prevent a smooth implementation. It helps demonstrate my level of knowledge. And being introduced as a certified architect builds my credibility.

Greg: Any other advice you’d like to share?
Miranda: When a lot of people think about Oracle, they think database. This is NOT a database exam; it’s an infrastructure exam. If you’re coming from an Oracle software perspective, whether it’s middleware or database, you're going to have to know things you never thought you’d need to know. You’re going to have to know networking, hardware, and storage. Networking is a huge component. You also have to know the orchestration tools, such as Terraform. Get in and play with it. Get a trial account. In general, I felt that this was a good exam. It felt meaningful and tested the things you need to know.

Subscribe to this page for help preparing for the Oracle Cloud Infrastructure 2018 Architect Associate exam.
Greg Hyman
Principal Program Manager, Oracle Cloud Infrastructure Certification
greg.hyman@oracle.com
Twitter: @GregoryHyman
LinkedIn: GregoryRHyman

Associated links:
Oracle Cloud Infrastructure 2018 Architect Associate exam
Oracle Cloud Infrastructure 2018 Architect Associate study guide
Oracle Cloud Infrastructure 2018 Architect Associate practice test
Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam

Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series:
Umair Siddiqui
Nitin Vengurlekar
Rajib Kundu
Miranda Swenson
Robby Robertson
Chris Riggin
Anuj Gulati


Oracle Cloud Infrastructure

Get to the Bottom of Website Performance Issues

It's a familiar scenario: A person clicks a link to your business's website, types in the URL, or opens the mobile app, and then the waiting begins. If it takes more than a few seconds for the website or app to load, chances are strong that the user will move on to the next activity. The result? You just lost a potential customer, and they probably blame your business for the poor experience. But here's the thing: website performance issues might not be your fault. The internet today is an extension of your corporate network and cloud environment. It's a big place, and latency problems can stem from several factors. The slowness could be the result of problems with the internet service provider, the infrastructure platform the service is hosted on, or the Software as a Service platform that delivers it. Or maybe there's a problem with the route the internet traffic is taking to access your services. What's clear is that you need to quickly identify the cause of the performance problems and take steps to mitigate the latency before the business loses any more revenue (or brand reputation, for that matter). Time to put on the Sherlock Holmes hat. Here are some straightforward steps you can take to determine the cause of website performance issues.

1. Make sure that the problem isn't on your end
If you're hosting the servers, or if they are hosted in the cloud, the first thing to do is consult performance monitoring tools to make sure that the problem isn't onsite in one of your data centers or in your cloud infrastructure. Monitoring tools can tell you whether the latency is caused by a runaway process or by a problem with a database application, for example. It's also a good idea to check any third-party scripts embedded in the services to see if they're the culprits. Depending on how the site is architected, with dozens of objects on the page, slowness could result from problems with ad servers, JavaScript components, tracking pixels, fonts, and other components outside your control. When you're certain the website performance issues aren't inside your servers or in the application code, it's time to look outside your immediate environment.

2. Run traceroutes
When latencies begin to creep up, and users start complaining about site or app slowness, it's important to look at the path that internet traffic is taking to access your services. You can accomplish this by running traceroutes. Traceroute is a utility that displays the route from a user's device through the internet to a specified endpoint, such as your site. Traceroute shows the routers encountered at each hop and displays the amount of time that each hop takes. If you run a traceroute and determine, for example, that your internet service provider is taking your traffic across the ocean and back for no discernible reason, you'd better pick up the phone. Find out what the provider is doing and why they're doing it. (A minimal example appears after this post.)

3. Consult the Internet Intelligence Map
Another step you can take to gauge the health of the global internet is to consult Oracle's Internet Intelligence Map. The map is a free resource that lets users know how things like natural disasters, government-imposed internet shutdowns, and fiber-optic cable cuts affect internet traffic across the globe. If you notice that users from a particular country are complaining about latency problems, you can look at the Internet Intelligence Map to see if an issue with internet connectivity in that country has been identified.
You can also drill down a little deeper to examine latency and connectivity trends for individual network service providers in that country. The online resource is broken up into two sections: Country Statistics and Traffic Shifts.

The Country Statistics section reports any potential internet disruptions seen during the past week, highlighting any that have occurred over the previous 48 hours. Disruption severity is based on three primary measures of internet connectivity in that country: border gateway protocol (BGP) routing information, traceroutes to responding hosts, and DNS queries from that country received by Oracle Dyn's authoritative DNS servers.

The Traffic Shifts section is based on traceroute data and illustrates changes in how traffic is reaching target networks, as well as associated changes in latency. As an example, the following Internet Intelligence Map image clearly depicts a network connectivity dip in Iraq on June 21. This particular dip occurred as the result of a government-imposed internet shutdown that was enacted to deter students from cheating during high school exams.

It's important to work with cloud infrastructure providers who offer visibility into internet traffic patterns. This provides added peace of mind as your business migrates to the cloud, builds cloud-native applications, and troubleshoots website performance issues. The internet is the world's most important network, but it's incredibly volatile. Disruptions on the internet can affect your business in profound ways. That's why today's businesses need better visibility into the health of the global internet. Once you have these insights, you can find ways to reroute traffic and work around outages and latency issues. The result is improved overall website and application performance and, more importantly, happier customers.
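Returning to step 2, here is a minimal traceroute invocation from a Linux or macOS shell; the hostname is a placeholder, and on Windows the equivalent command is tracert:

# Trace the path to your site, waiting at most 2 seconds per probe
traceroute -w 2 www.example.com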


Oracle Cloud Infrastructure

Announcing SAP NetWeaver® Support for VM Shapes on Oracle Cloud Infrastructure

The industry’s broadest and most integrated public cloud, Oracle Cloud Infrastructure, offers best-in-class services for Infrastructure as a Service (IaaS), with deployment options ranging from the public cloud to the ability to consume cloud services in your own data center. By reducing IT complexity, Oracle Cloud helps organizations increase agility, drive innovation, and transform businesses. Starting in June 2018, Oracle Cloud Infrastructure virtual machine shapes are supported with SAP NetWeaver-based applications as well. These new shapes expand the instance options beyond the already-supported bare metal instances for SAP NetWeaver. With this step, we offer more flexibility and a broader portfolio to SAP customers.

Extreme Performance, Availability, and Security for SAP Business Suite Applications
Oracle works with SAP to certify and support SAP NetWeaver applications on Oracle Cloud Infrastructure, making it easier for organizations to move Oracle-based SAP applications to the cloud. Oracle Cloud enables customers to run the same Oracle Database and SAP applications, preserving their existing investments while reducing costs and improving agility. Unlike products from first-generation cloud providers, Oracle Cloud Infrastructure is uniquely architected to support enterprise workloads, and it is the only cloud optimized for Oracle Database. Oracle Cloud Infrastructure is also designed to provide the performance predictability, isolation, security, governance, and transparency required for SAP and other enterprise workloads. With this announcement, you can run Oracle-based SAP applications in the cloud with the same control and capabilities as in your data center, and there is no need to retrain your teams. You can take advantage of performance and availability equal to or better than on-premises, while gaining the ability to deploy your highest-performance applications (ones that require millions of consistent IOPS and millisecond latency) on elastic resources with pay-as-you-go flexibility. This means that you can run your Oracle-based SAP applications faster and at lower cost in the cloud! What's more, you can benefit from simple, predictable, and flexible pricing with universal credits. And when it comes to governance, you can compartmentalize shared cloud resources using simple policy language to provide self-service access while still maintaining centralized governance and visibility, even across complex organizations.

Multiple Options Available
Oracle offers various shapes and grades, both bare metal and virtual, on Oracle Cloud Infrastructure. These offerings enable more customers to deploy and access Oracle Database applications in the cloud with performance, security, and availability equal to or better than on-premises systems. You’ll gain performance that scales with ease. Oracle and SAP have certified SAP NetWeaver and SAP NetWeaver Business Warehouse-based applications to run on Oracle Cloud Infrastructure and Exadata Cloud Service. SAP BusinessObjects based on 4.2 SP level 5 and above is supported as well. SAP Hybris is supported on Oracle Cloud, provided the requirements on the SAP Hybris Help Portal are met. Read more in the Oracle Cloud for SAP public portal.


Oracle Cloud Infrastructure

Protecting Yourself from Email Imposters

Having spent my entire career in technology, I feel like I am pretty savvy about email scams. They used to be fairly obvious, and I know better than to try to help a Nigerian prince get his fortune back so that he can share it with me. But as we have all become more savvy, unfortunately, so have the threat actors. There are three primary categories of email-based advanced threats: impersonation, imposters, and malicious URLs and attachments. URL and attachment scams rely on someone clicking a URL or attachment that performs an action. You can follow best practices like opening attachments and URLs only from trusted sources, but having a tool like FireEye helps ensure that mistakes don’t happen. I think the scariest threats are impersonations and imposters. Once a threat actor has convinced a person that the threat actor is someone else, the imposter is able to convince even the most well-informed end users to provide all the access and information they request. For example, if my executive is Mike Smith and he sent me an urgent message to take care of a payment, I would fulfill his request. In this example, the email address is clearly not my executive’s email because it was sent from a personal account. This is easier to avoid. In the following example, the threat actor is getting savvier. If you look closely, observe that the email address has an extra “l” in the domain name. It can be tricky to identify this as an email scam when reading emails quickly. What's more, like most of us, I am busy and read many of my emails on my mobile device, where I no longer get the visual hint that something is off about the email. Now, the likelihood of action being taken on this email has increased. As threat actors continue to manipulate the visual appearance of emails, I no longer feel confident that I can protect myself and my company from email threats on my own. For organizations to protect themselves, it is critical to use tools that help identify these threats before they reach employees. To protect against malicious emails, organizations simply route messages to FireEye’s Email Security, which first analyzes the emails for spam and known viruses. It then uses the signatureless detonation chamber, the MVX engine, to analyze every attachment and URL for threats and stop advanced attacks in real time. To identify imposters, FireEye’s Email Security also looks for:
Newly Registered Domains
Looks-Like & Sounds-Like Domains
Reply-to-Address & Message Header Analysis
Friendly Display Name & Username Matching
CEO Fraud Algorithms
Keeping in mind that email volume is inconsistent, FireEye is able to scale effectively because they have built their product on Oracle Cloud Infrastructure. They can move suspicious emails into separate VMs and can burst up because threat actors are unpredictable. See our relationship in action by watching the Oracle Cloud Infrastructure and FireEye webinar, or experience our joint offering immediately through FireEye’s free Jump Start demo lab environment. In this Jump Start lab, you can follow a step-by-step guide and experience FireEye’s Email Security offering.


Oracle Cloud Infrastructure

Windows Custom Startup Scripts and Cloud-Init on Oracle Cloud Infrastructure

We are excited to announce an easy way to configure and customize Microsoft Windows Server compute instances on Oracle Cloud Infrastructure using Cloudbase-Init, the Windows equivalent of Linux cloud-init. With the new integrated cloud-init experience for Windows Server, you can easily bootstrap an instance with additional applications, host configurations, and custom setups. This capability is provided by a cloud-init custom user data startup script, a feature that is now available on Oracle Cloud Infrastructure compute instances running either Linux or Windows Server.

What is User Data?
User data is a mechanism to inject a script or custom metadata when a compute instance is initializing on Oracle Cloud Infrastructure. This data is passed to the instance at provisioning time to customize the instance as needed. Instance user data can be implemented using a variety of scripting languages. See Windows Cloudbase-Init for more information.

Windows Instance User Data Startup Script
The Windows Cloudbase-Init experience is available for bare metal and virtual machine Windows Server compute instances, across all regions. There is no additional cost for this feature, and all Windows Server OS images now come with Cloudbase-Init installed by default. Cloudbase-Init also comes with a feature that fully automates the Windows Remote Management (WinRM) configuration, without any manual user setup.

Getting Started
The first step is to create your user data script. The following content-type formats are supported: PEM Certificate / Batch / PowerShell / Bash / Python / EC2 Format / Cloud config. For more detailed information, see Cloudbase-Init user data. The following example is a simple PowerShell script that changes the hostname and writes output to a custom file on the local boot volume. The Sysnative parameter is required and must be on the first line. For PowerShell, use: #ps1_sysnative

Copy the following script and save it as a .ps1 file. (This script changes the computer name to ‘WIN_OCI_INSTANCE_AD1_FE1’.)

#ps1_sysnative
function Get-TimeStamp {
    return "[{0:MM/dd/yy} {0:HH:mm:ss}]" -f (Get-Date)
}
$computerName='WIN_OCI_INSTANCE_AD1_FE1'
$path = $env:SystemRoot + "\Temp\"
$logFile = $path + "CloudInit_$(get-date -f yyyy-MM-dd).log"
Write-Host -fore Green "Creating Log File"
New-Item $logFile -ItemType file
Write-Output "$(Get-TimeStamp) Logfile created..." | Out-File -FilePath $logFile -Append
Write-Host -fore yellow "Changing ComputerName"
Rename-Computer -NewName $computerName
Write-Host -fore green "Changed ComputerName"
Write-Output "$(Get-TimeStamp) Changed ComputerName" | Out-File -FilePath $logFile -Append

The custom user data startup script is implemented as part of the Create Instance setup, via either the Console or the CLI (command line interface).

Steps via Console
1. Log in to the Oracle Cloud Infrastructure Console.
2. Select Menu, then Compute, followed by Instances.
3. Click Create Instance and complete the required instance section fields.
4. Under Show Advanced Options, find the Startup Script option and browse for the .ps1 script that you created earlier.
5. Complete the Networking section and click Create Instance.
After your instance is provisioned, Cloudbase-Init executes your script and configures WinRM automatically.

Steps via CLI
The CLI provides the same functionality as the Console. To install the CLI, follow these installation options.
First, obtain the values for the required parameters by using the CLI commands in the following list (run from a PowerShell command line):

--compartment-id [CompartmentOCID]: ./oci iam compartment list
  $C = 'ocid1.compartment.oc1..aaaaaaaa....'
--availability-domain [ADName]: ./oci iam availability-domain list
--shape [ShapeName]: ./oci compute shape list --compartment-id $C
--image-id [ImageOCID]: ./oci compute image list -c $C | ConvertFrom-Json | ForEach-Object{$_.data} | where -Property display-name -Match 'Windows-Server-2016' | fl -Property display-name, id
--subnet-id [SubnetOCID]: ./oci network vcn list -c $C, then select the subnet OCID that matches the chosen AD: ./oci network subnet list -c $C --vcn-id ocid1.vcn.oc1.iad.aaaaaaa….
--user-data-file [filename]: enter the path and file name of the user data startup script
--display-name [StringInstanceName]: enter a free-form instance display name
--assign-public-ip true

Syntax to launch a compute instance:
./oci compute instance launch --availability-domain [ADName] --compartment-id [CompartmentOCID] --shape [ShapeName] --subnet-id [SubnetOCID] --user-data-file [filename] --display-name [StringInstanceName] --assign-public-ip true

Example:
./oci compute instance launch --availability-domain mgRc:US-ASHBURN-AD-3 --compartment-id $C --shape VM.Standard2.1 --image-id ocid1.image.oc1.iad.aaaaaaaag.... --subnet-id ocid1.subnet.oc1.iad.aaaaaaaar.... --user-data-file PScloudbaseinit1.ps1 --display-name MyCloudInitInstance

To query the instance state, take the instance ID from the successful output of the previous command:
./oci compute instance get --instance-id ocid1.instance.oc1.iad.abuwcljr32gb5....

Typical User Data Custom Script Use Cases
Update server host configuration, including the registry
Enable GPU support: a custom script to install the GPU driver
Add and change local user accounts
Join the instance to a domain controller
Install certificates into the certificate store
Enable more Windows features, like IIS
Copy required application workload files from Object Storage directly to the local instance
Download and install client agents, like Chef, Puppet, or SCOM agents

WinRM
Windows Remote Management (WinRM) is a native Windows alternative to SSH that provides the capability to remotely manage a Windows host. The Windows PowerShell command line has the benefit of integrated WinRM cmdlets, which provides full functionality via a single tool for all Windows management tasks.

How to use WinRM on an Oracle Cloud Infrastructure Windows instance
1. Open the Console.
2. Add an ingress rule to the VCN security list used by the instance.
   a. In the Console, navigate to the newly launched instance with the startup script to view the instance details.
   b. Under Subnet Settings, click the subnet name.
   c. Under Resources, navigate to Security Lists and open the security list.
   d. Click Edit All Rules.
   e. Under Allow Rules for Ingress, click Add Rule:
      i. Destination Port Range: 5986
      ii. Source Port Range: All
      iii. IP Protocol: TCP
      iv. Source CIDR: 0.0.0.0/0 (we recommend restricting the source to your authorized CIDR block)
      v. Source Type: CIDR
   f. Save the security list rules.
3. Get the public IP of your instance from the instance details screen.
4. On your Windows client, open a PowerShell command window.
Use the following PowerShell snippet to connect to your instance:

# Get the public IP from your OCI running Windows instance
$ComputerName = "USE PUBLIC IP OF INSTANCE"
# Store your username and password credentials (default username is opc)
$c = Get-Credential
# Options
$opt = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
# Create new PSSession (prerequisite: ensure the security list has an ingress rule for port 5986)
$PSSession = New-PSSession -ComputerName $ComputerName -UseSSL -SessionOption $opt -Authentication Basic -Credential $c
# Connect to the instance PSSession
Enter-PSSession $PSSession
# To close the connection, use: Exit-PSSession

You can now remotely manage your Windows Server compute instance from your local PowerShell client. Windows Server users now have two great options to set up a custom compute instance, and they also benefit from being able to use WinRM to remotely manage and securely access a Windows instance. For more information, see the following documentation:
Custom User Data Startup Script on Windows Images
CLI reference to launch an instance with user data
(Additional documented script examples will be published in the future.)


Developer Tools

Deploying Microsoft SQL Server on Oracle Cloud Infrastructure

Introduction
Many databases and applications run on Oracle Cloud Infrastructure. Among them, Microsoft SQL Server is a relational database system widely used for online transaction processing and decision support systems. This blog post describes how to deploy a Microsoft SQL Server database running on Microsoft Windows Server on a single Oracle Cloud Infrastructure virtual machine (VM). The Microsoft SQL Server installation wizard allows you to choose the different SQL Server components to be installed, such as the database engine, analysis services, reporting services, integration services, master data services, data quality services, and connectivity components. Starting with SQL Server 2016 (13.x), SQL Server Management Tools is no longer installed from the main feature tree. You may need to manually download and install the SQL Server Management Tools on the Windows server to access and manage the Microsoft SQL Server database through the graphical user interface (GUI).

Before You Start
Before you start the installation of the Microsoft SQL Server database, consider the following:
Identify IOPS or I/O throughput requirements.
Choose the appropriate Oracle Cloud Infrastructure VM shape (OCPU, memory, and storage).
Create a secured network on Oracle Cloud Infrastructure to access the MS SQL Server database.
Choose and install a supported Windows Server version.
Identify the required MS SQL Server services to be installed.

Choose the VM Shape and Install Windows Server
1. Before installing Windows Server, create an Oracle Cloud Infrastructure VCN (virtual cloud network) and choose the appropriate availability domain, subnet, and so on to build your Windows server. You can choose a Windows image from the Oracle Cloud Infrastructure repository, or you can bring your own Windows image to deploy on the virtual machine. We strongly recommend checking Windows Server version support on Oracle Cloud Infrastructure before you start deploying. Here, we choose the Windows Server 2012 R2 Standard edition from the image repository and the VM.Standard2.8 shape.
2. In addition to the existing stateful ingress security rules, you may need to add an ingress security rule to allow RDP (Remote Desktop) access to the Windows server. The following screenshot shows the security rule added to the list to allow RDP access.
3. Once the Windows server is provisioned, you see the following screen, which shows the username and initial temporary password. Log in to the Windows server with the username “opc” and the initial temporary password through Remote Desktop. Change the password after you first access the Windows server.
4. Use the local boot volume to install the Windows Server and SQL Server binaries and all the required supporting tools. However, use a block storage volume to store the SQL Server database. The following screen shows the block storage volume added to the Windows server. (A hedged CLI sketch of creating and attaching the volume appears at the end of this post.)
5. Run the iSCSI attach commands shown in the Console's volume attachment details, using Windows Server PowerShell as an administrator, to connect the Windows operating system to this block volume.
6. After you run those commands, you may need to format and label the disk by using Computer Management and Disk Management on the Windows server. Microsoft recommends using the NTFS file system format for better performance.

Install MS SQL Server
1. Download the appropriate SQL Server version from Microsoft. If you have already downloaded SQL Server, copy it to the Windows server.
Run the installer file to install Microsoft SQL Server, and choose the required tools to be installed on the Windows server.
2. By default, MS SQL Server creates system databases such as master, model, msdb, and tempdb. You may need to create application/user databases to store application/user data. You can access your MS SQL Server database either from the command line or through the Microsoft SQL Server Management Studio user interface.
3. You can store the application database's data file and log file in the block storage that is already mounted and labeled on the Windows server. In this blog post, we use a block storage volume and attach it to the Windows server. Format and label the new disk as “D”. Now, we use the “D” drive to store the data file and log file of the newly created application database.

Conclusion
In this blog post, you learned how to deploy a Microsoft SQL Server database on Oracle Cloud Infrastructure in a Windows Server environment. We also discussed storing the application data on Oracle Cloud Infrastructure block storage to achieve higher performance.
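As referenced in step 4 of the Windows Server setup above, here is a hedged sketch of creating and attaching the block volume with the OCI CLI instead of the Console; the availability domain, OCIDs, display name, and size are placeholders:

# Create a block volume in the instance's availability domain
oci bv volume create --availability-domain mgRc:US-ASHBURN-AD-1 --compartment-id ocid1.compartment.oc1..aaaa... --display-name SQLDataVolume --size-in-gbs 512
# Attach it to the Windows instance as an iSCSI volume
oci compute volume-attachment attach --type iscsi --instance-id ocid1.instance.oc1.iad.aaaa... --volume-id ocid1.volume.oc1.iad.aaaa...
# The response includes the iSCSI IQN, IP address, and port that the Windows iSCSI initiator uses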


Oracle Cloud Infrastructure

Introducing Oracle Cloud Infrastructure Data Transfer Appliance

Migrating data is often the first step toward adopting the cloud. However, when uploading data to the cloud, sometimes even the fastest available public internet connections fall short. For example, on a leased T3 line, migrating 100 TB of data can take up to 8 months, an untenable situation! Oracle Cloud FastConnect offers a great alternative for quickly uploading data to the cloud. But it's understandable that using FastConnect may not always be feasible for you, especially when you don't expect to upload data frequently or when the data migration is part of an effort to retire your on-premises data center. A few short months ago, when we announced the availability of Data Transfer Disk, we promised that there was more to come. Today, I am excited to announce the general availability of Oracle Cloud Infrastructure Data Transfer Appliance.

Oracle Cloud Infrastructure Data Transfer Appliance is a PB-scale offline data transfer service. You can now use an Oracle-branded, purpose-built storage appliance to cost-effectively and easily migrate your data to the cloud. Each transfer appliance supports migrating up to 150 TB of data; to migrate PB-scale data sets, you can simply order multiple transfer appliances. The best part is that we charge you exactly $0 to use the service. That's right: Oracle Cloud customers can use the Data Transfer Appliance for free, and we even pay the cost of shipping the appliance. From the time you receive the transfer appliance, you have up to 30 days to copy your data and ship the appliance back to the nearest Oracle data transfer site. When we receive the data transfer appliance, we upload the data to your Oracle Cloud Object Storage or Archive Storage using high-speed internet connections. Large datasets that would have taken weeks or months to upload can now be uploaded in a fraction of the time.

The data transfer appliance is a 2U device that can rest standalone on a desk or fit in a standard rack. Weighing just 38 pounds, the appliance is easily handled by one person. The appliance was built with safety at the forefront: it's tamper resistant and tamper evident, and only the serial port and the network ports are exposed. Any attempt to access the transfer appliance hardware in non-standard ways is detected. All the data copied to the transfer appliance is encrypted by default, and the encryption passphrase is stored separately, never on the device with the data. The transfer appliance is shipped to you in a ruggedized case to shield it from the G-forces of transportation. You must ship the transfer appliance back to Oracle in the same shipping case.

Oracle Cloud Infrastructure Data Transfer Appliance Shipping Case

How It Works
Order the Data Transfer Service
To use the data transfer appliance to ship your data, place an order for the desired quantity of data transfer appliances. Your Oracle sales rep can help you with the order. Make sure that you have also purchased sufficient Oracle Cloud credits so that we can upload your data to your Oracle Cloud tenancy. Placing an order for the data transfer service entitles you to the use of this service.

Requesting the Transfer Appliance
To request an appliance, log in to the Oracle Cloud Infrastructure Console and create a transfer job of the type Appliance, in a region of your choice. While creating the transfer job, you must also specify the bucket to which the data must be uploaded. Currently, all data from a single transfer appliance can be uploaded to only one bucket.
Next, select the transfer job that you created, click the Request Transfer Appliance button, and specify the address to which the appliance must be shipped. A transfer appliance label is generated with the status Requested, which indicates that Oracle has received your request. When the status of the appliance changes from Requested to Oracle Preparing, your request has been accepted, and the transfer appliance you requested will be shipped shortly. If you are requesting more than one transfer appliance, you can request that the appliances be shipped to multiple locations.

Preparing the Transfer Appliance
The data transfer appliance arrives with a security tag that has a unique number engraved on it. Verify that the tag label matches the number posted in the Oracle Cloud Console. If the number matches, retrieve the transfer appliance from the case, plug it into your network, and assign an IP address to it through the serial console. You can use the provided USB-to-serial cable and your favorite terminal emulator to access the serial console. You need to unlock the transfer appliance before you can use it: download the Data Transfer Utility on a Linux host and follow the instructions to prepare the transfer appliance. Retrieve the encryption passphrase using the Data Transfer Utility; this passphrase is used to encrypt the data on the transfer appliance. When the transfer appliance is unlocked and ready for use, create a dataset. A dataset is essentially an NFSv3 mount point. Currently, we support creating one dataset per transfer appliance. That's it! You just configured the Data Transfer Appliance as an NFS filer.

Copying Data to the Data Transfer Appliance
Mount the NFSv3 dataset on any Linux-compatible host of your choice and copy data to it using regular file system commands (see the sketch after this section). We preserve the source file/folder hierarchy by storing each file under a flattened object name. For example, a file in the folder hierarchy Logs->July2018->DBLog001.txt is stored as an object named /Logs/July2018/DBLog001.txt, which simulates a virtual folder hierarchy in Oracle Object or Archive Storage. Once you have copied all the data to the transfer appliance, seal the dataset. Sealing the dataset creates a manifest file that contains an index of all the files copied, including the file MD5 hashes, which are used to verify the integrity of the data as we upload it to your Oracle cloud tenancy. Finally, finalize the transfer appliance. At this point, you can no longer access the appliance for dataset operations, and the transfer appliance is ready to be shipped back to the Oracle transfer site.

Shipping the Appliance Back to Oracle
When we ship you the data transfer appliance, the shipping case includes a return shipping label, which you must use to ship the transfer appliance back to the nearest Oracle data transfer site. If you misplace the return shipping label, reach out to us and we will be happy to provide a copy. Make sure that you return the transfer appliance within the allocated 30-day period. If you need more time, request an extension by creating a support request (SR).

Chain of Custody
Using the Oracle Cloud Console or the Data Transfer Utility, you can track the status of the data transfer process throughout its lifecycle, from the time you requested the appliance to the time the data is uploaded to your Oracle cloud tenancy.
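To make the copy step concrete, here is a minimal sketch of mounting the dataset and copying files from a Linux host. The appliance IP address (10.0.0.50) and dataset name (dataset1) are illustrative assumptions; use the values reported by the Data Transfer Utility for your appliance.

$ # Mount the appliance's NFSv3 dataset (IP and export name are placeholders).
$ sudo mkdir -p /mnt/dta
$ sudo mount -t nfs -o vers=3 10.0.0.50:/dataset1 /mnt/dta

$ # Copy data with ordinary file system tools; rsync preserves the folder
$ # hierarchy that is later flattened into object names.
$ rsync -av /data/Logs /mnt/dta/

$ # Unmount before sealing and finalizing the dataset.
$ sudo umount /mnt/dta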
Confirmation that Data Was Uploaded to Your Oracle Cloud Tenancy
When Oracle processes your transfer appliance and uploads the data to your Oracle cloud tenancy, a data upload summary is posted to the same bucket where the data was uploaded. The upload summary provides both a summary and a detailed view of the successful and unsuccessful file uploads, including information on why some files were skipped so that you can take the necessary corrective action. Before you delete the primary copy of the data, it's important that you review the upload summary and verify the content in your Object Storage bucket. Once the upload process is complete, your transfer appliance status changes to Complete. Once the transfer job is complete, you must close it out. Closing a transfer job requires that every associated transfer appliance is in a completed state.

Getting Support
If you need help, reach out to the Oracle support channels. The Data Transfer Appliance service is available for use in the US regions (Phoenix and Ashburn), and we will be rolling out the service to other Oracle Cloud Infrastructure regions soon. For more information, please refer to the FAQs and the Data Transfer Appliance product documentation.


Oracle Cloud Infrastructure

Migrate Servers to Oracle Cloud using PlateSpin Migrate

We are pleased to announce the availability of PlateSpin Migrate support for Oracle Cloud Infrastructure. Micro Focus offers PlateSpin Migrate, an industry-proven workload migration solution that enables customers to migrate their servers to Oracle cloud over the network. Here is a quick overview from Micro Focus on migrating servers to Oracle cloud with PlateSpin Migrate. To read the full instructions on the migration process, download the best practices white paper from PlateSpin Migrate here.

PlateSpin Migrate
PlateSpin Migrate is a powerful server portability solution that automates the process of migrating servers over the network between physical machines, virtual hosts, and enterprise cloud platforms, all from a single point of control. PlateSpin Migrate refers to the servers being migrated as "workloads." A workload in this context is the aggregation of the software stack installed on the server: the operating system, applications and middleware, and any data that resides on the server volumes. PlateSpin Migrate provides enterprises and service providers with a mature, proven solution for migrating, testing, and rebalancing workloads across infrastructure boundaries. PlateSpin Migrate scales horizontally, supporting up to 40 concurrently active migrations per PlateSpin Migrate server.

Overview of the Migration Process and Prerequisites
PlateSpin Migrate can replicate machines to Oracle Cloud Infrastructure Compute. At the moment, only the full migration process, which replicates the entire volume data from source to target, is available. Ensure that the applications on the source machine are not in use for the duration of the full migration; otherwise, changes made during the migration will not be replicated to the target. Once the full migration is complete, the source is powered down and the target is brought online.

Migration to Oracle cloud using PlateSpin Migrate includes the following steps:
Install the Migrate server and Migrate client. The Migrate server runs on Windows OS and can be installed either at the source machine location (see the following diagram) or inside Oracle Cloud Infrastructure. The PlateSpin Migrate client is the graphical user interface; it can be installed either on the PlateSpin Migrate server or on a separate machine.
Using the Migrate client, discover the source machine that needs to be migrated to Oracle Cloud Infrastructure Compute.
Create the target VM instance in Oracle Cloud Infrastructure manually. It must be launched from the PlateSpin custom image. Once the target instance is launched, provide details to register it to the Migrate server.
Set up a migration job between the source machine and the registered target machine using the PlateSpin Migrate client. The Migrate server orchestrates the migration process. The source machine transfers data directly to the target instance, and the data can be encrypted during transfer.

Additional Resources
To evaluate PlateSpin Migrate, download a free trial here. Read documentation from PlateSpin Migrate. The PlateSpin Migrate listing on Oracle Marketplace can be found here.


Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Rajib Kundu

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Rajib Kundu of SmartDog Services. Rajib is a Database Architect and SQL Server evangelist. His primary passion is performance tuning, and he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. He has worked for two years as a Cloud Architect, where he has supervised and participated in the implementation of technologies and platforms supporting global, 24x7 internet applications.

Greg: How did you prepare for the certification?

Rajib: My focus was on the Networking service, the Database service, and OCI (Oracle Cloud Infrastructure) fundamentals, especially Identity and Access Management, high availability solutions, and public and private subnets. These were the types of things that I focused on first. I also got quite familiar with Terraform. You need to familiarize yourself with the exam topics. I also reviewed the videos that are posted, and different documentation and blogs. I also signed up for the free account to test scenarios. Working with the console helped me improve my confidence with OCI and helped me learn how to create and configure resources. I also enrolled in the available training and reviewed the OCI user guide.

Greg: How is life after getting certified?

Rajib: I have always considered myself a SQL Server guy, but once I earned the OCI certification, I felt very good about myself! I updated my Facebook and LinkedIn and received a lot of positive responses from coworkers. I've found that when I'm demonstrating in front of a client, having the certification reinforces their trust in my abilities. I've included the digital badge on my business card as well, and this always gets the attention of the client.

Greg: Any other advice you'd like to share?

Rajib: I must suggest to everyone: once you complete the training, PLEASE do the practice test! Do it at least once or twice to ensure that you are ready for the exam.

Rajib's blog: https://rajibsqldba.wordpress.com

Subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam.

Greg Hyman
Principal Program Manager, Oracle Cloud Infrastructure Certification
greg.hyman@oracle.com
Twitter: @GregoryHyman
LinkedIn: GregoryRHyman

Associated links: Oracle Cloud Infrastructure 2018 Architect Associate exam, Oracle Cloud Infrastructure 2018 Architect Associate study guide, Oracle Cloud Infrastructure 2018 Architect Associate practice test, Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam.

Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series: Umair Siddiqui, Nitin Vengurlekar, Rajib Kundu, Miranda Swenson, Robby Robertson, Chris Riggin, Anuj Gulati


Oracle Cloud Infrastructure

Read the RedMonk report on getting the most for your IaaS dollar

Analysts have a tough job when comparing and analyzing IaaS options for their customers. Cloud service providers offer different services with different SLAs, based on hardware that you really shouldn't have to worry about, and all with varying pricing models. That's why Oracle appreciates that the developer-focused analyst firm RedMonk took the time to dig into the details in a recent article, IaaS Pricing Patterns and Trends 2018. The report highlights the providers that offer the most compute, disk, and memory at various pricing levels and list prices. Here are two highlights:

"Oracle in particular is pricing aggressively on this front, offering more memory per dollar across all of their instances. They offer roughly 2.5x more memory/dollar in their VM.Standard2.24 instance as compared to their next nearest competitor."

"According to Oracle, 1 of their OCPUs is equivalent to 2 vCPUs, and on that basis Oracle emerges as the pricing leader for compute, offering the highest amount of vCPU compute capacity per dollar spent. Other providers are clustered with no clear competitive standouts."

The report is based on prices in the lowest-cost US-based region, with no special pricing or discounts. With Oracle, it just gets better from there. Oracle provides discounts through a Universal Credits model instead of requiring a commitment to specific reserved instances (with a predefined region, size, or OS) that limit your flexibility. Oracle provides discounts based on committed spend and the length of commitment, with the flexibility to use any IaaS or PaaS service. Mix, match, and change freely between resources or regions at any time. Any overages also receive the same discount. Take a moment to read RedMonk's article. Then, give Oracle Cloud a try.


Oracle Cloud Infrastructure

Does Your Cloud Provider Really Understand Enterprise Applications?

When people talk about an enterprise cloud migration, they usually focus on the infrastructure. After all, it is called infrastructure as a service. Servers, storage, networking, and compute form the backbone, and cloud computing would not be possible without them. An often-overlooked aspect of enterprise cloud migration is the actual enterprise applications that run these businesses, specifically, how a cloud provider supports, manages, and secures these applications after the migration is complete. This aspect is especially important in scenarios where a business migrates multiple critical applications to the cloud and finds itself sharing its network operations center with its cloud vendor.

Why is this aspect so important? Because no application exists in a vacuum. On-premises enterprise applications rely on connections to other software and backend systems, such as an Oracle database, to help users get work done. These interdependencies are complex, and they become even more so in hybrid and multi-cloud scenarios with workloads living across on-premises data centers and public clouds. Further, in the cloud, enterprises do not have the same level of control, either real or perceived, over the underlying infrastructure. The increasing reliance on APIs to tie cloud applications to third-party data repositories and other external systems adds even more complexity to the equation.

Enterprises have decades of application lifecycle management and security tooling, and they expect the same from their cloud vendor. When they choose a provider that does not have experience managing these enterprise applications and their interdependencies, they are opening themselves up to the potential for data loss, other security breaches, and performance problems.

Oracle is one of the world's leading providers of enterprise applications and has been for more than four decades. Our database, ERP, HCM, CRM, supply chain management, and other software drive business for 430,000 customers worldwide. We have more than 1,000 software as a service (SaaS) applications, all running in our cloud, and many of our cloud competitors rely on our software to exist at all. Nineteen of the top 20 cloud providers run on Oracle. We have a long history of supporting, managing, and securing enterprise applications, ensuring that our customers' data is safe.

We built Oracle Cloud Infrastructure (OCI), which supports bare metal, virtual machine, and GPU instances, plus containers and serverless computing, on that same foundation. OCI is optimized for Oracle Database, which is auto-configured with encryption, and is the only cloud that supports Exadata and RAC. It also uses local NVMe flash storage and all flash-based block volumes and avoids network oversubscription to ensure enterprise applications perform to the best of their abilities. And we back it up with performance-based service-level agreements. Our approach enables all of your software, not just Oracle applications or your front-end website, to run better and more securely after an enterprise cloud migration.


Developer Tools

How to Deploy a Virtual Firewall Appliance on Oracle Cloud Infrastructure

Although Oracle Cloud Infrastructure includes firewall capabilities, some customers prefer to run their own custom firewalls. This post describes how to deploy vSRX Virtual Firewall, a Juniper virtual security appliance that provides security and networking services for virtualized private or public cloud environments. In a public cloud environment, vSRX provides benefits like stateful firewall protection, plus application and content security features like IPS, antivirus, web filtering, and antispam.

This post covers the following topics:
Configuring Oracle Cloud Infrastructure for vSRX
Launching a vSRX instance in a virtual cloud network (VCN)
Configuring vSRX

Configuration Diagram
The following diagram shows a high-level architecture of the proposed setup: a VCN with three subnets:
Public (10.0.1.0/24), for management interfaces with access to the internet through an internet gateway
Public (10.0.2.0/24), for revenue (data) interfaces with access to the internet through an internet gateway
Private (10.0.3.0/24), a private subnet with no access to the internet

1. Configuring Oracle Cloud Infrastructure for vSRX
The following procedures outline how to create and prepare an Oracle Cloud Infrastructure VCN for vSRX.

Create a VCN
In the Oracle Cloud Infrastructure Console, create a VCN without any resources. The VCN will have a default empty route table, a default security list, and DHCP options. In this example, the VCN is called DataCenter-1. For information about how to create a VCN, see the VCN Overview and Deployment Guide. Create an internet gateway and assign it the name IGW.

Create Subnets for vSRX
vSRX requires two public subnets and one or more private subnets for each individual instance group. One public subnet is for the management interface (fxp0), and the other is for a revenue (data) interface. The private subnets, connected to the other vSRX interfaces, ensure that all traffic between applications on the private subnets and the internet passes through the vSRX instance.

Configure the Public Subnet (Management Interface)
Create this public subnet, and define a route rule for the route table Default Route Table in which the internet gateway is configured as the route target for all traffic (0.0.0.0/0). For details about how to create subnets, see VCNs and Subnets. For the subnet's security list Default Security List, create an egress rule to allow traffic to all destinations. Create ingress rules that allow access on TCP port 22 from the public internet and on TCP ports 80/443 for accessing the web application from the public internet.

Configure the Public Subnet (Revenue Interface)
Create this public subnet, and define a route rule for the route table Public RT in which the internet gateway is configured as the route target for all traffic (0.0.0.0/0). For the subnet's security list Public Subnet SL, create an egress rule to allow traffic to all destinations. Create ingress rules that allow access on TCP ports 80/443 for accessing the web application from the public internet, and on ICMP if needed to check connectivity.

Configure the Private Subnet
Create this private subnet, and define a route rule for the route table Private RT in which the private IP address of the vSRX's second VNIC (10.0.3.3) is configured as the route target for all traffic (0.0.0.0/0). Note: Configure the route rule after you create and attach the secondary VNICs. For the subnet's security list Private Subnet SL, create an egress rule to allow traffic to all destinations.
Create ingress rules that allow only specific address ranges (like an on-premises network or other private subnets in the VCN).

Import the Image by Using the Console
The next step is to upload the vSRX image file to Oracle Cloud Infrastructure Object Storage and import the image by using the Console. For information about how to import custom images, see the Deploying Custom Operating System Images white paper.

2. Launching a vSRX Instance in a VCN
Launch the vSRX instance in the management subnet (public subnet). This example uses the VM.Standard1.8 shape. For details, see Launching an Instance. After the instance is provisioned, details about it appear in the Instance list.

Create and Attach Secondary VNICs
Create two VNICs. Deploy one in the public subnet (revenue data interface) and the other in the private subnet. For details about how to create and attach a VNIC, see Virtual Network Interface Cards (VNICs). After the VNIC is created and attached, details about it appear in the VNICs list.

Create a Console Connection
To access the vSRX instance, create a console connection. For more information, see Instance Console Connections.

3. Configuring vSRX
Through the console connection, connect to the vSRX instance and perform the following steps (a configuration sketch follows this section):
Configure the management interface, the SSH password, and the SSH RSA key, and enable root authentication.
Assign IP addresses to the revenue public and private interfaces that you created using VNICs.
Configure routing to add a separate virtual router and routing option for the public and private interfaces. Note: We recommend putting the revenue (data) interfaces in routing instances to avoid asymmetric traffic/routing.
Set up the trust zone, and configure the revenue private interface in the trust zone.
Set up the untrust zone, and configure the revenue public interface in the untrust zone.
Set up security policies.
Configure NAT.

Verification
Launch a host instance using any operating system in the private subnet. It can connect to the internet without a public IP address assigned, and no connections originating on the internet can reach your server directly. For more information about vSRX, see the Juniper website.

Conclusion
This post explained how to launch a virtual firewall appliance on Oracle Cloud Infrastructure that provides benefits such as antivirus, web filtering, and antispam.
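The following is a minimal, illustrative Junos configuration sketch for the revenue interfaces, virtual router, zones, policy, and NAT described above. The interface names (ge-0/0/0 for the public revenue VNIC, ge-0/0/1 for the private one), the virtual-router name, and the next-hop address are assumptions for this topology; adapt them to your environment and consult Juniper's vSRX documentation for the authoritative syntax.

set interfaces ge-0/0/0 unit 0 family inet address 10.0.2.2/24
set interfaces ge-0/0/1 unit 0 family inet address 10.0.3.3/24

set routing-instances data-vr instance-type virtual-router
set routing-instances data-vr interface ge-0/0/0.0
set routing-instances data-vr interface ge-0/0/1.0
set routing-instances data-vr routing-options static route 0.0.0.0/0 next-hop 10.0.2.1

set security zones security-zone untrust interfaces ge-0/0/0.0
set security zones security-zone trust interfaces ge-0/0/1.0

set security policies from-zone trust to-zone untrust policy allow-out match source-address any destination-address any application any
set security policies from-zone trust to-zone untrust policy allow-out then permit

set security nat source rule-set trust-to-untrust from zone trust
set security nat source rule-set trust-to-untrust to zone untrust
set security nat source rule-set trust-to-untrust rule nat-all match source-address 10.0.3.0/24
set security nat source rule-set trust-to-untrust rule nat-all then source-nat interface
commit

This sketch places the revenue interfaces in a separate virtual router (avoiding asymmetric routing), routes the private subnet's default traffic out of the public revenue interface, and source-NATs it to that interface's address.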


Recovering opc user SSH Key on Oracle Cloud Infrastructure

Imagine a situation in which you are trying to connect to your Oracle Cloud Infrastructure instance, but either you forgot which key you used or, for some unknown reason, your opc user SSH key got corrupted or deleted. It might be scary at first, but the process to recover an opc user SSH key on Oracle Cloud Infrastructure is easier than you think. So if you get a "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)" error when trying to connect via SSH, follow this process to recover your key. You can also boot into maintenance mode through the Oracle Cloud Infrastructure serial console as an alternative way to recover the opc SSH key; see "Troubleshooting Instances from Instance Console Connections" in the public documentation for more details.

Summary
Stop the instance that you can't log in to.
Detach the boot volume.
Attach the boot volume to a running Linux instance.
Run the iSCSI commands to attach the device and make it visible to the local operating system.
Fix the authorized_keys file.
Unmount the device and detach it by running the iSCSI commands.
Attach the boot volume to the original instance and start it.

Process
Stop the instance that you can't connect to. In the Oracle Cloud Infrastructure Console, go to the details page for the instance and click Stop. See the public documentation for more details.

Detach the boot volume. In the Boot Volume section, click the Actions icon and choose Detach. See "Detaching a Boot Volume" for additional details if needed.

Attach the boot volume to another Linux instance by going to the details page of a different VM, clicking Attach Block Volume, and then selecting the boot volume that you just detached. Be sure to select Read/Write access. For additional details, see "Attaching a Boot Volume" in the public documentation portal.

After the boot volume attachment is complete (the boot volume icon is green), connect through SSH to the running VM and run the iSCSI commands to make the new disk available and visible to the OS (see the sketch after this section). Your boot volume should appear as /dev/sdb.

Mount /dev/sdb3, which is the root (/) partition where you can recover the opc SSH key file, by using the mount command. Be sure to use the -o nouuid option; otherwise, you will see the "mount: wrong fs type, bad option, bad superblock on /dev/sdb3" error message.
$ sudo mount -o nouuid /dev/sdb3 /mnt

Fix the opc SSH key by editing the /mnt/home/opc/.ssh/authorized_keys file and adding your SSH public key.
$ sudo vi /mnt/home/opc/.ssh/authorized_keys

After you add or change the SSH public key you need to use, save and exit. Then unmount the volume:
$ sudo umount /mnt

Detach the iSCSI boot volume by running the detach iSCSI commands. Ensure that the /dev/sdb disk is no longer available or visible through the SSH connection, and then detach the volume in the Console.

Reattach the boot volume to the instance where you want to recover the SSH key, wait for it to become operational (green icon), and start the instance.

That's it. You recovered your opc user SSH key, and you can now log back in to the instance. You can also use this process for troubleshooting the root (/) partition.
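For reference, here is a hedged sketch of the iSCSI attach and detach step. The Console shows the exact commands, including the real IQN and IP address, for your volume attachment under iSCSI Commands & Information; the IQN below is a placeholder.

$ # Attach: register the target, enable automatic login, and log in.
$ sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:<volume-id> -p 169.254.2.2:3260
$ sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:<volume-id> -n node.startup -v automatic
$ sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:<volume-id> -p 169.254.2.2:3260 -l

$ # ...mount, edit authorized_keys, and unmount as described above, then log out:
$ sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:<volume-id> -p 169.254.2.2:3260 -u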


Developer Tools

Announcing Oracle Cloud Infrastructure Ansible Modules

We are happy to announce that Oracle Cloud Infrastructure Ansible modules are now available! As more customers deploy applications to Oracle Cloud Infrastructure, we are observing an increased need for DevOps capabilities to automate these operations. Ansible addresses this need by allowing you to provision and configure your resources using automation. The new Oracle Cloud Infrastructure Ansible modules provide the capabilities to provision and configure your Oracle Cloud Infrastructure resources, with the current release supporting all core services (more services will be covered in future releases). Also, Ansible contains a toolbox of built-in modules that you can use together with the Oracle Cloud Infrastructure Ansible modules to meet your end-to-end needs.

Ansible doesn't require setting up complex agents, customized security, or centralized configuration servers. All you need to do is describe your automation jobs. To provision and configure Oracle Cloud Infrastructure resources, you just declare your desired state by using Ansible playbooks. When these playbooks are executed by Ansible, your Oracle Cloud Infrastructure resources are provisioned and configured according to your requirements. The Oracle Cloud Infrastructure Ansible modules are located in the oci-ansible-modules GitHub repo, and you can refer to the documentation for a list of supported modules. We would like your feedback on these modules and any future improvements; tell us what you think on the oci-ansible-modules GitHub issues page.

Getting Started
Install the Python SDK:
$ pip install oci
Configure the SDK with your Oracle Cloud Infrastructure credentials.
Install Ansible by following the Ansible Installation Guide.
Clone the Ansible modules repository:
$ git clone https://github.com/oracle/oci-ansible-modules.git
$ cd oci-ansible-modules
Install the Ansible modules by running one of the following commands:
If Ansible is installed as a user: $ ./install.py
If Ansible is installed as root: $ sudo ./install.py
Write a sample playbook (for example, list_buckets) and run it (a sketch follows below):
$ ansible-playbook list_buckets.yml

To learn more, see the following resources: Oracle Cloud Infrastructure Ansible modules documentation and getting started, Ansible Modules Samples, Getting started with Oracle Cloud Infrastructure, and Try for free with credits.

If you need help, use the following channels: the oci-ansible-modules GitHub issues page; Stack Overflow, using the oracle-cloud-infrastructure and oci-ansible-modules tags in your post; the Developer Tools section of the Oracle Cloud forums; and My Oracle Support.
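For illustration, here is a minimal playbook sketch in the spirit of the repo's list_buckets sample. The module name and options shown (oci_bucket_facts with namespace_name and compartment_id, mirroring the Object Storage ListBuckets API) are assumptions; check the modules documentation for the exact names in your release.

---
- name: List Object Storage buckets in a compartment
  hosts: localhost
  tasks:
    - name: Gather bucket facts (module name is an assumption; see the docs)
      oci_bucket_facts:
        namespace_name: "my-namespace"                      # your Object Storage namespace
        compartment_id: "ocid1.compartment.oc1..example"    # target compartment OCID
      register: result

    - name: Print the bucket names
      debug:
        msg: "{{ result.buckets | map(attribute='name') | list }}"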


Developer Tools

Openswan on Oracle Cloud Infrastructure

Users who migrate or integrate on-premises services with a cloud provider like Oracle Cloud Infrastructure usually use IP Security (IPSec) technology to create an encrypted tunnel between environments to transfer data or integrate applications. Using IPSec technology, such as Openswan, enables users to avoid exposing data and applications to the public internet. The goal of this post is to clarify the relationship between the Openswan and Libreswan IPSec implementations.

Openswan is a well-known IPSec implementation for Linux. It began as a fork of the now-defunct FreeS/WAN project in 2003. Unlike the FreeS/WAN project, it didn't exclusively target the GNU/Linux operating system, but expanded its usability to other operating systems. In 2012, the project renamed itself The Libreswan Project because of a lawsuit over the trademark of the name openswan. As a result, when you try to install or query the Openswan package on Oracle Linux, the Libreswan package is installed or shown instead by default. The following yum search output illustrates this behavior:

$ sudo yum search openswan
Loaded plugins: langpacks, ulninfo
Matched: openswan
=============================================================================
NetworkManager-libreswan.x86_64 : NetworkManager VPN plug-in for libreswan
NetworkManager-libreswan-gnome.x86_64 : NetworkManager VPN plugin for libreswan - GNOME files
libreswan.x86_64 : IPsec implementation with IKEv1 and IKEv2 keying protocols

Libreswan is maintained by The Libreswan Project and has been under active development for over 15 years, going back to the FreeS/WAN Project. For more information, see the project's history.

Having a secure, encrypted, point-to-point channel through which your data can travel from a specific location to the cloud contributes to a safer solution that helps avoid breaches and data loss. If you want to create an IPSec point-to-point, encrypted tunnel between Oracle Cloud Infrastructure and a different cloud provider, an on-premises environment, or both, see the following blog post, which describes how to accomplish this by using Libreswan: Creating a Secure Connection Between Oracle Cloud Infrastructure and Other Cloud Providers
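As a quick sanity check of the naming behavior described above, the following sketch installs "openswan" on Oracle Linux, confirms that libreswan is what actually lands on the system, and starts the IPSec service:

$ sudo yum install -y openswan      # pulls in the libreswan package
$ rpm -q libreswan                  # confirm the installed package
$ sudo systemctl enable --now ipsec # start the IPSec service
$ sudo ipsec verify                 # libreswan's built-in system check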


Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam - Nitin Vengurlekar

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Nitin Vengurlekar of Viscosity North America (Viscosity). Nitin is the cofounder and CTO of Viscosity, responsible for partner relationships and end-to-end solution implementation. With numerous successful cloud implementations under his belt, Nitin has been working on cloud projects for five years, and specifically with Oracle Cloud Infrastructure for 10 months. As an Oracle ACE Director, Nitin is a well-known Oracle technologist and speaker in the areas of storage, high availability, Oracle RAC, and database cloud. He is also the author of the Oracle Automatic Storage Management Guide, the Exadata Handbook, and the Data Guard Handbook. For a full list of Nitin's books, click here.

Greg: How did you prepare for the certification?

Nitin: I attended several different Oracle conferences, and these Oracle sessions touched on the basics of terminology, network connectivity, storage, and compute. Oracle conferences do a good job of educating at a very high level. After that, it's really up to you to get onto the system and see how it works. The tests are geared for individuals with practical knowledge, with very situational and practical questions. You have to know how to work with it hands-on. I did read quite a bit and practiced quite a bit online. Viscosity closed one of the largest cloud deals last year, where I gained a lot of hands-on experience. That deal provided me the opportunity to work in a real-world environment. The Oracle Cloud Infrastructure User's Guide is one of the best documents I've seen. I read it cover to cover, even the appendix. It's worth its weight in gold as a primer, and it's kept current. Read it online, since it's a live document that is constantly being updated.

Greg: How is life after getting certified?

Nitin: I posted the digital badge on LinkedIn. My colleagues were thrilled and happy for me. People realize how hard these tests are; they are not simple tests that you can breeze through. Actually, they are very practical and detailed. Being certified helps a lot when doing presentations. Not only are we a solutions integrator and implementation team, we do quite a few Oracle Cloud presentations ourselves. Viscosity partners with Oracle, as well as working independently, to promote the virtues of cloud. It helps to have that certification behind me, validating our skills.

Greg: Any other advice you'd like to share?

Nitin: Apart from having the hands-on skills, I should impart to readers that there are quite a few questions on Terraform and Load Balancer as a Service (LBaaS). Understanding how load balancing, Kubernetes, and Terraform work will be a big help. The exam gives you 105 minutes; I suggest you use the entire time allotted. As you go through the test, mark the questions that you're not certain about and go back to them. Sometimes a question in another section may trigger an answer for a previous "marked" question.

Please subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam.
Greg Hyman Principal Program Manager, Oracle Cloud Infrastructure Certification greg.hyman@oracle.com Twitter: @GregoryHyman LinkedIn: GregoryRHyman Associated links: Oracle Cloud Infrastructure 2018 Architect Associate exam Oracle Cloud Infrastructure 2018 Architect Associate study guide Oracle Cloud Infrastructure 2018 Architect Associate practice test Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series: Umair Siddiqui Nitin Vengurlekar Rajib Kundu Miranda Swenson Robby Robertson Chris Riggin Anuj Gulati


Oracle Cloud Infrastructure

Automate Application Deployment Across Availability Domains on Oracle Cloud Infrastructure with Terraform

The goal of this blog post is to provide some tips on how to automate application deployment across multiple availability domains on Oracle Cloud Infrastructure by using Terraform. Oracle Cloud Infrastructure regions contain multiple availability domains. These availability domains are isolated from each other, fault tolerant, and unlikely to fail simultaneously or be impacted by the failure of another availability domain. To ensure high availability and to protect against resource failure, we recommend deploying your application across multiple availability domains.

To illustrate how to automate this deployment with Terraform, I am using a sample cluster application that consists of bastion, public agent, master, and worker nodes. These nodes need to be deployed across multiple availability domains to ensure high availability. The bastion and public agent nodes are deployed in public subnets, and the master and worker nodes are deployed in private subnets.

Create Subnets in Each Availability Domain
To achieve high availability and redundancy for this cluster deployment, you need to create three subnets (bastion, public, and private) in each availability domain, and then deploy the corresponding cluster nodes into these subnets. In Terraform, you can use the count variable to create these subnets in each availability domain rather than creating each of them individually. The tip here is to use count.index to refer to each of the availability domains when creating these subnets. For example, the following code creates a private subnet in each of three availability domains.

data "oci_identity_availability_domains" "ADs" {
  compartment_id = "${var.tenancy_ocid}"
}

resource "oci_core_subnet" "private" {
  count               = "3"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[count.index],"name")}"
  cidr_block          = "${var.sampleapp_cidr[count.index]}"
  display_name        = "private_ad${count.index}"
  compartment_id      = "${var.compartment_ocid}"
  vcn_id              = "${oci_core_virtual_network.sampleapp_vcn.id}"
  route_table_id      = "${oci_core_route_table.sampleapp.id}"
  security_list_ids   = ["${oci_core_security_list.PrivateSubnet.id}"]
  dhcp_options_id     = "${oci_core_virtual_network.sampleapp_vcn.default_dhcp_options_id}"
  dns_label           = "private${count.index}"
}

Deploy Cluster Nodes Across Availability Domains
With the same approach, you can provision and deploy each of the cluster nodes into a corresponding availability domain. The trick here is to use the count variable and the mod operator (%) so that you can easily distribute these nodes across the availability domains. For example, you can use count.index%3 to determine which availability domain to deploy to and to get the subnet_id from the list of subnets created in the preceding section. The following example code creates the number of worker nodes specified by the user and deploys them across the availability domains.
resource "oci_core_instance" "WorkerNode" {
  count               = "${var.worker_node_count}"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[count.index%3],"name")}"
  compartment_id      = "${var.compartment_ocid}"
  display_name        = "Worker ${format("%01d", count.index+1)}"
  hostname_label      = "Worker-${format("%01d", count.index+1)}"
  shape               = "${var.WorkerInstanceShape}"
  subnet_id           = "${oci_core_subnet.private.*.id[count.index%3]}"

  source_details {
    source_type             = "image"
    source_id               = "${var.image_ocid}"
    boot_volume_size_in_gbs = "${var.boot_volume_size}"
  }

  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
  }

  timeouts {
    create = "30m"
  }
}

Attach Block Volumes to Cluster Nodes
When creating and attaching block volumes to cluster nodes that are distributed across availability domains, you can use a similar approach and perform the mod (%) operation on the count variable. For example, the following code creates block volumes and attaches them to the corresponding master nodes in each availability domain.

resource "oci_core_volume" "MasterVolume" {
  count               = "${var.MasterNodeCount}"
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[count.index%3],"name")}"
  compartment_id      = "${var.compartment_ocid}"
  display_name        = "Master ${format("%01d", count.index+1)} Volume"
  size_in_gbs         = "${var.blocksize_in_gbs}"
}

resource "oci_core_volume_attachment" "MasterAttachment" {
  count           = "${var.MasterNodeCount}"
  attachment_type = "iscsi"
  compartment_id  = "${var.compartment_ocid}"
  instance_id     = "${oci_core_instance.MasterNode.*.id[count.index]}"
  volume_id       = "${oci_core_volume.MasterVolume.*.id[count.index]}"
}

I hope that this blog post made it simple to automate your application deployment across multiple availability domains on Oracle Cloud Infrastructure.
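To exercise this configuration, you might run something like the following sketch; the variable name matches the snippets above, and the node count is just an example:

$ terraform init
$ terraform plan -var "worker_node_count=6"
$ terraform apply -var "worker_node_count=6"

With worker_node_count=6 and count.index%3, workers 1 through 6 land in availability domains 1, 2, 3, 1, 2, 3, giving an even spread across the three domains.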


Oracle Cloud Infrastructure

Deploy JD Edwards on Oracle Cloud Infrastructure in Just Three Steps

Hello everyone, I'm Brian Casper, a Solution Architect on the Oracle Cloud Infrastructure team working to bring Oracle Applications to Oracle Cloud Infrastructure. I recently looked at the automation tools that Oracle provides for quickly deploying JD Edwards on Oracle Cloud Infrastructure, and in this post I'm going to share three basic steps that you can follow to set up an environment for development, test, or production. The intent of this post is to describe the process at a high level; a deeper dive with more detailed steps is available in the resources provided at the end of the post.

Oracle Cloud Infrastructure provides all of the essential elements for a secure JD Edwards operating environment. To get going, you install Terraform and a set of Oracle-provided Terraform sample scripts. You then create a configuration file to point to the tenancy where you want the JD Edwards environment to be provisioned. You can customize the number of servers to deploy and details like hostnames and account credentials used during the deployment. The automation creates private subnets and security lists to allow only necessary communications between tiers, provisions the required storage and compute instances, installs the Oracle WebLogic and JD Edwards EnterpriseOne software, and provisions a JD Edwards database instance by using the Oracle Cloud Infrastructure Database service.

JD Edwards is a highly customizable platform that can meet a wide variety of business needs. You establish a base configuration by using the Oracle One-Click Provisioning tool. After the Terraform step is completed, you set up the One-Click software on the Server Manager Console host. Finally, you launch and connect to the provisioning console, provide the required JD Edwards configuration and orchestration details, and then sit back and watch as the environment is fully deployed. The following sections walk through these steps in more detail.

Step 1: Set Up and Run Terraform
For simplicity, I created a "bootstrap" host for the Terraform components by setting up an Oracle Linux instance. To prevent conflicts later, set up this instance in a different compartment from the one in which you intend to deploy the JD Edwards environment. I used a 200-GB boot volume to ensure enough room to download and stage all of the software for the installation. After the bootstrap instance has started, download the Terraform binaries and the latest Oracle Cloud Infrastructure provider and install them. Finally, unpack the sample Terraform scripts for JD Edwards (available on request). For more information, see Getting Started with the Terraform Provider.

Before you can run Terraform, you must set up SSH and PEM keys, configure your environment variables in the env-vars file, and configure settings in the variables.tf file. More information about the JD Edwards Terraform configuration is available in the documentation that accompanies the package. The env-vars file contains all of the information that Terraform needs to connect to the cloud, including OCIDs for your compartment, tenancy, and a user with sufficient privileges to perform the required tasks, as well as encryption key information. Following is an example env-vars file:
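A minimal sketch of such a file, assuming the Terraform provider's usual TF_VAR_* naming conventions; every value below is a placeholder:

# Hypothetical env-vars sketch; replace each value with your own.
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..example"
export TF_VAR_user_ocid="ocid1.user.oc1..example"
export TF_VAR_compartment_ocid="ocid1.compartment.oc1..example"
export TF_VAR_fingerprint="12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef"
export TF_VAR_private_key_path="${HOME}/.oci/oci_api_key.pem"
export TF_VAR_region="us-phoenix-1"
export TF_VAR_ssh_public_key="$(cat ${HOME}/.ssh/id_rsa.pub)"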
The variables.tf file is the key to the Terraform deployment. There are many configurable options, but the following ones are the important ones to set:
region = the region where you want to deploy (for example, phoenix or ashburn)
jde_ent_count = the number of enterprise servers to be provisioned
jde_web_count = the number of web servers to be provisioned
jde_smc_count = the number of Server Manager Console servers to be provisioned
JDK_INSTALL_BINARY_NAME = the name of the Java JDK installation tar file (for example, jdk-8u161-linux-x64.tar.gz)
WLS_INSTALL_BINARY_NAME = the name of the WebLogic installation jar file (for example, fmw_12.2.1.2.0_wls.jar)

Note any passwords that are used in the variables.tf file because you will need them later. The Terraform scripts completely build the JD Edwards infrastructure, including installing and configuring the WebLogic servers. You must download the installation binaries from the Oracle Technology Network and stage them so that the Terraform scripts can use them:
Download the WLS 12.1.3/12.2.0 generic jar files and place them in the /u01/jde/automation-oci-v2/web/wlsbinary directory.
http://download.oracle.com/otn/nt/middleware/12c/wls/1213/fmw_12.1.3.0.0_wls.jar
http://download.oracle.com/otn/nt/middleware/12c/12212/fmw_12.2.1.2.0_wls_Disk1_1of1.zip
Download the JDK for the Linux-x64 platform and place it in the /u01/jde/automation-oci-v2/web/wlsbinary directory.
http://download.oracle.com/otn-pub/java/jdk/8u161-b12/2f38c3b165be4555a1fa6e98c45e0808/jdk-8u161-linux-x64.tar.gz

Now Terraform can complete the provisioning. Load the environment into the shell by running the source env-vars command, initialize Terraform by running the terraform init and terraform plan commands, and then run the terraform apply command (the console output of these commands is omitted for brevity). A final output that describes the environment that was configured is displayed. Note this output for use in Step 2 to set up One-Click.

The terraform apply command performs all of the steps that are required for manual provisioning on all of the Linux hosts, and no further interaction is required. However, an additional step is required on the Windows hosts. Terraform creates a PowerShell script on the bootstrap host in the /u01/jde/automation-oci-v2/jdedeploy directory. Copy the jdewin_pre.ps1 script to the Windows hosts and execute it as administrator in a command prompt window. This action completes the configuration of the Windows hosts so that they are prepared for the One-Click installation. Now the Terraform step is complete, and the environment is ready for the One-Click step of the deployment.

Step 2: Set Up One-Click
This step installs and launches the One-Click Provisioning tool. Log in to the Oracle Software Delivery Cloud, search for the JD Edwards One-Click Provisioning 3.1 for Apps 9.2 Tools 9.2.2.4 software package, and add it to the cart. Download all of the parts by using the download manager or the wget.sh script. After the parts are downloaded, copy them to the Server Manager Console host and unzip them. This action expands the zip files into the DiskPart files that need to be reassembled, a checksum file, and a reassembly script. Ensure that the rebuild script is executable and then run the script.
The script performs the following actions:
Ensures that adequate space exists in the /u01 directory to perform the extraction
Combines the unzipped archive files into a single tar file
Verifies the recombined file by using the checksum file
Moves the JD Edwards packages to the /u01 directory

When the rebuild script is finished, run the setupPr.sh script to install and launch the One-Click Provisioning software on the Server Manager Console host. The One-Click Provisioning tool is now ready to begin the orchestration and deployment of JD Edwards.

Step 3: Deploy JD Edwards with the One-Click Provisioning Tool
Launch the One-Click Provisioning tool and connect to the interface at https://<public_IP_address>:3000, where <public_IP_address> is the public IP address of the One-Click Provisioning server instance running on the Server Manager Console host. Perform the following steps to complete the deployment.

Click the Configure box and enter the Server Manager details. Next, click the Orchestrate box and choose a Quick Start deployment. You are prompted for details about the environment that was built during the Terraform step and details about the upcoming deployment, such as user IDs, hostnames, path names, and so on. The inputs are validated on each screen. When the Quick Start is complete, review or edit the inputs by again clicking the Orchestrate box. Walk through the server details in the Advanced settings, where you can configure more complicated environments by using multiple server configurations. When the orchestration step is complete, click the Deploy box to begin the installation.

This step completes the installation and configuration of the JD Edwards environment from start to finish without any further interaction. Allow a few hours for the deployment to complete; in the meantime, you can monitor progress on the Deployment Status screen. When the deployment is complete, the JD Edwards environment is fully available.

From this point, you can customize the JD Edwards application environment to suit your business needs by using the same JD Edwards management tools that are familiar from non-cloud deployments. You might want to lift and shift the entire operational environment from your existing JD Edwards installation by using the JD Edwards Migration Utilities to export your database and configuration settings and then import them into the newly provisioned environment on Oracle Cloud Infrastructure. To change the configuration after the JD Edwards instance is up and running, revisit both Terraform and One-Click Provisioning to modify the configuration. For example, to scale the environment by adding or removing instances and services, go back to the configuration steps, make your changes, and then rerun the automation to orchestrate them.

For more information about JD Edwards in the cloud, visit these additional resources:
Preparing for a Deployment of JD Edwards EnterpriseOne on Oracle Cloud Infrastructure on Linux
Administering Your JD Edwards EnterpriseOne Release 9.2 One-Click Deployment
Migrating JD Edwards EnterpriseOne Release 9.2 to Oracle Cloud for Linux
JD Edwards in the Cloud: Oracle Apps on Oracle Cloud Infrastructure

I hope you found this post informative. Check back for updates as new features or enhancements to the automation tools become available.


Oracle Cloud Infrastructure

Migrating an Oracle Database to an Oracle Cloud Infrastructure Database Service Virtual Machine

This blog post outlines the process of migrating a single-instance, version 12.2 Oracle Database from on-premises, Amazon Web Services (AWS), or an instance in Oracle Cloud Infrastructure to an Oracle Cloud Infrastructure Database service virtual machine (VM) instance. If your source database is on a Linux operating system, you can take a backup of the source database by using Oracle Recovery Manager (RMAN) and restore it to the Database service VM instance on Oracle Cloud Infrastructure.

Before You Start
Before you perform the migration, consider the following:
Identify the CPU, memory, storage, IOPS, and I/O throughput requirements for your database instance, and provision a Database service VM instance on Oracle Cloud Infrastructure that is large enough to handle those requirements.
The Database service VM instance that you provision should have the same database name as your source database (for example, prddb). The DB_UNIQUE_NAME and the SERVICE_NAME can differ from those of the source; during the migration, the DB_UNIQUE_NAME and SERVICE_NAME of the target database are retained on the Database service VM.
This example assumes that the datafiles are created as Oracle Managed Files.
If the tablespaces in the source database are not encrypted, consider encrypting them before moving the database to Oracle Cloud Infrastructure. Alternatively, you can convert the unencrypted tablespaces to Transparent Data Encryption (TDE) after the migration. The procedures outlined in this post assume that your source database is not using TDE.
Identify the patch level of the source database, including the bundle patch levels (CPU, PSU, RU, RUR) and any critical one-off patches that you might need. The database home on the Database service instance is usually patched up to the latest version of the bundle patches. If possible, apply the bundle patch that matches the Database service instance on the source database. Alternatively, you can run datapatch.sql to patch the database objects after the database migration is complete. Validate the list of one-off patches with Oracle Support, identify the ones you need, and get one-off patches issued that can coexist with the bundle patch.
Determine the destination for the RMAN backups taken during the migration. You can place the RMAN backup files either on a file system or directly in Oracle Cloud Infrastructure Object Storage. In this example, the files are placed in Object Storage.
Ensure that the source database is in ARCHIVELOG mode.

Install the Oracle Database Cloud Backup Module
You can use the Oracle Database Cloud Backup Module with RMAN to back up the source database directly to Oracle Cloud Infrastructure Object Storage. After the database is backed up to Object Storage, you can use RMAN on the target database instance to restore the backup from Object Storage directly to the target host on Oracle Cloud Infrastructure.
Using the Oracle Cloud Infrastructure Console, create a new Object Storage bucket to hold the backup. In this example, this bucket is called prdbkup.
In the Console, navigate to the user settings and create a new authorization token. Note this token string; you use it when configuring the backup module.
Download the Oracle Database Cloud Backup Module, and upload it to the /tmp directory of both the source and target database instances.
Log in as the oracle user, change the directory to /tmp, unzip opc_installer.zip, and run the following command to install the backup module:

$ORACLE_HOME/jdk/bin/java -jar opc_install.jar -opcId <user_id> -opcPass '<auth_token>' -container <bucket_name> -walletDir ~/hsbtwallet/ -libDir ~/lib/ -configfile ~/config -host https://swiftobjectstorage.<region>.oraclecloud.com/v1/<tenant>

Perform a Backup
In this example, you take a full backup of the database and ARCHIVELOG files, and transfer the backup set to the target database instance.

Note the DBID of the source database. You need it to perform the restore on the target database instance.
sqlplus / as sysdba
SQL> select dbid from v$database;

Note the file names for the online redo logs. You will rename these redo log files on the Database service instance to use Oracle Automatic Storage Management (ASM) disk groups.
sqlplus / as sysdba
SQL> select member from v$logfile;

In the Oracle Cloud Infrastructure Console, go to the target database instance and note the database's unique name. Also note the host domain name of the target database system.

Set up a few initialization parameters in the source database's server parameter file (spfile) so that you don't have to change them on the target after the migration (the <db_name>, <db_unique_name>, and <host_domain_name> placeholders stand for the values noted above):
sqlplus / as sysdba
SQL> alter system set audit_file_dest='/u01/app/oracle/admin/<db_name>/adump' scope=spfile;
SQL> alter system set service_names='<db_unique_name>.<host_domain_name>' scope=spfile;
SQL> alter system set db_unique_name=<db_unique_name> scope=spfile;

Use RMAN to take a full backup of the database and ARCHIVELOG files. This example uses encryptit as the password to encrypt the backup, but you can change it to any string you want.

CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)' FORMAT "BACKUP_%U";
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F';
CONFIGURE ENCRYPTION FOR DATABASE ON;
set encryption identified by encryptit only;
run {
  backup as compressed backupset incremental level 0 SECTION SIZE=512M DATABASE PLUS ARCHIVELOG TAG='$tag';
}

Restore and Recover the Database
When you created the Database service VM instance, a database was created for you. Before you restore the files from the source database, you need to delete the files that pertain to that database. You do not have to re-create all the configurations (such as the cluster registry, storage locations, or dbcli metadata). This section describes the following steps:
Restore the server parameter file (spfile).
Restore the database controlfiles.
Restore the database files and ARCHIVELOG files, and recover the database.
Implement TDE and encrypt the tablespaces.
Clean up.

Note: If the source database is encrypted using TDE, copy the wallet file from the source database environment and create an autologin wallet before proceeding with the restore.

Restore the Server Parameter File
Before you start this step, shut down the database and delete all the files belonging to it. Log in as the oracle user and shut down the database. Then, log in as the grid user and use asmcmd to locate and delete all the files under the +DATA/<db_unique_name> and +RECO/<db_unique_name> directories. In this step, you configure the RMAN parameters to restore the spfile from a tape device, which is configured to point to Oracle Cloud Infrastructure Object Storage.
Next, configure the RMAN parameters to restore the spfile from a tape device that points to Oracle Cloud Infrastructure Object Storage. Use the DBID that you noted from the source database, and use the same password (in set DECRYPTION) that you used to encrypt the backup on the source instance.

rman target /
set dbid 3114159043;
startup force nomount;
set DECRYPTION identified by encryptit;
run {
  SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT TO '%F';
  allocate channel c1 device type sbt PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)';
  RESTORE SPFILE to '/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/spfileprddb.ora' FROM AUTOBACKUP;
}

Set up the database initialization file to point to the newly restored spfile, so that all subsequent database startups use the spfile restored from the source database:

echo SPFILE=/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/spfileprddb.ora > $ORACLE_HOME/dbs/initprddb.ora

sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup nomount;

Restore the Database Control Files

Before restoring the controlfiles, set a few initialization parameters in the spfile so that the controlfiles and the database files are restored to the correct new locations. On the source database, the controlfiles and database files were on a file system. In the Oracle Cloud Infrastructure Database service, they are placed on ASM disk groups. Set the db_create_file_dest parameter to +DATA so that the database files are restored to the +DATA disk group, and set the control_files parameter so that the controlfile is created on an ASM disk group (in this example, +RECO):

sqlplus / as sysdba
SQL> alter system set db_create_file_dest='+DATA' scope=spfile;
SQL> alter system set db_recovery_file_dest='+RECO' scope=spfile;
SQL> alter system set db_recovery_file_dest_size=4385144832 scope=spfile;
SQL> alter system set control_files='+RECO' scope=spfile;
SQL> shutdown immediate;
SQL> startup nomount;

Restore the controlfile:

rman target /
set dbid 3114159043;
set DECRYPTION identified by encryptit;
run {
  SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT TO '%F';
  allocate channel c1 device type sbt PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)';
  RESTORE CONTROLFILE FROM AUTOBACKUP;
}
alter database mount;

Restore and Recover the Database Files

Now you are ready to restore and recover the database files.
rman target /
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)';
CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F';
CONFIGURE ENCRYPTION FOR DATABASE ON;
run {
  set ARCHIVELOG DESTINATION to '+RECO';
  set NEWNAME for database to '+DATA';
  allocate channel c1 device type sbt PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)' FORMAT "BACKUP_%U";
  SQL "ALTER DATABASE RENAME FILE ''/u01/oradata/prddb/redo01.log'' TO ''+RECO/<db_unique_name>/ONLINELOG/redo01.log'' ";
  SQL "ALTER DATABASE RENAME FILE ''/u01/oradata/prddb/redo02.log'' TO ''+RECO/<db_unique_name>/ONLINELOG/redo02.log'' ";
  SQL "ALTER DATABASE RENAME FILE ''/u01/oradata/prddb/redo03.log'' TO ''+RECO/<db_unique_name>/ONLINELOG/redo03.log'' ";
  restore database;
  switch datafile all;
  restore archivelog all;
  recover database;
}
alter database open resetlogs;

Convert Tablespaces to Use TDE

Often the on-premises databases that are migrated to the Database service don't have their tablespaces encrypted. If your tablespaces aren't already encrypted, after the migration is complete, use the following high-level steps to encrypt them with TDE:

1. Add the master key for the container database (CDB) and the pluggable databases (PDBs) to the wallet.
2. Encrypt the tablespaces.

When the original Database service instance was created using the Oracle Cloud Infrastructure Console, the Oracle wallet location was already set up in the sqlnet.ora file, and the master keys for the original database were already present in the wallet. A password-based wallet and an autologin wallet were created in the /opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>/ directory. You will use this same wallet and add your new keys to it, using the following process. If the source database was already configured to use TDE, and you copied the wallet before performing the restore, you can skip this step.

Remove the Autologin Wallet

To add the master keys for the database to the wallet, you need to use the password-based wallet. Remove the autologin wallet:

cd /opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>/
mv cwallet.sso cwallet.sso-orig

Add the Master Key for the CDB to the Wallet

export ORACLE_UNQNAME=<db_unique_name>
sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY <wallet_password>;
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY <wallet_password> WITH BACKUP;

The wallet password is the same password that you specified as the admin password during the target instance creation on Oracle Cloud Infrastructure via the Console. Now you can encrypt the user-created tablespaces in the CDB. The following command performs an online conversion of the USERS tablespace:

SQL> ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES192' ENCRYPT;

Add the Master Key for the PDBs to the Wallet

Run the following steps for each of the PDBs in the CDB. The commands for opening the wallet and adding the master key work for both 12.1 and 12.2 databases; for the equivalent commands for 11.2.0.4 databases, refer to the white paper Convert to Transparent Database Encryption (http://www.oracle.com/technetwork/database/availability/tde-conversion-dg-3045460.pdf).

SQL> alter pluggable database <pdbname> open;
SQL> alter session set container=<pdbname>;
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY <wallet_password>;
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY <wallet_password> WITH BACKUP;
SQL> ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES192' ENCRYPT;
SQL> ALTER TABLESPACE hrts ENCRYPTION ONLINE USING 'AES192' ENCRYPT;
SQL> ALTER TABLESPACE oets ENCRYPTION ONLINE USING 'AES192' ENCRYPT;
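To confirm the conversions, you can query the data dictionary for the encryption status of each tablespace; a quick verification sketch, run in the root container and in each PDB:

sqlplus / as sysdba
SQL> select tablespace_name, encrypted from dba_tablespaces;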
Re-Create the Autologin Wallet

sqlplus / as sysdba
SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>' IDENTIFIED BY <wallet_password>;

This command creates the single sign-on wallet file, cwallet.sso, in the /opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name> directory. The single sign-on wallet now contains the new master keys that you created.

Cleanup Steps

Perform these steps to finalize the migration of the database.

Reset the spfile Location

During the database restore process, the database spfile was temporarily restored to the $ORACLE_HOME/dbs directory. Now move this spfile to an ASM disk group and update its location in the cluster registry. Log in as the grid user and run the following commands to copy the spfile to ASM:

export ORACLE_HOME=/u01/app/12.2.0.1/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export ORACLE_SID=+ASM1
asmcmd
ASMCMD> cp '/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/spfileprddb.ora' '+DATA/<db_unique_name>/PARAMETERFILE/spfileprddb.ora'

Log in as the oracle user and run the following command to set the spfile for the database to point to the location of the file on ASM:

srvctl modify database -d <db_unique_name> -p +DATA/<db_unique_name>/PARAMETERFILE/spfileprddb.ora

Reset RMAN Configuration Parameters

Run the following commands to reset the configuration parameters that were changed during the database restore phase. This step ensures that the changed parameters don't negatively impact the automatic backups scheduled for the database.

rman target /
RMAN> configure controlfile autobackup clear;
RMAN> configure channel device type 'SBT_TAPE' clear;
RMAN> configure backup optimization clear;
RMAN> configure default device type clear;
RMAN> configure encryption for database clear;

You have now completed the migration of the database to an Oracle Cloud Infrastructure Database service VM instance, and you are ready to connect to the database and use it.
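As noted in the prerequisites, if the bundle patch level of the target database home is newer than that of the source database, run datapatch to apply the SQL portion of the patches. A minimal sketch, assuming the standard OPatch location in the database home:

cd $ORACLE_HOME/OPatch
./datapatch -verbose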


Oracle Cloud Infrastructure

Why Cloud Compliance Requires Web Application Security

If regulatory compliance isn't a top priority as you move infrastructure and applications to the cloud, it should be. Organizations of all sizes and across all verticals are subject to various industry and government regulations. Cloud infrastructure migration and cloud-native application development complicate compliance because they open up protected data to be handled by more parties and stored in more locations. The top cause of data breaches is attacks on websites and applications, so web application security must be a part of any cloud compliance strategy.

The Many Facets of Cloud Compliance

Web application attacks caused 21% of all data breaches that occurred in 2017, up from 10% the year before, according to the Verizon Data Breach Investigations Report. And 23% of organizations fell victim to at least one of these attacks in the past year, according to a Spiceworks survey on web application security. Web application servers are appealing targets because they may contain valuable customer data, including medical records and credit card information, both of which have regulations (HIPAA and PCI DSS, respectively) that govern their handling and protection. Effective cloud compliance depends on technology that prevents unauthorized access to these types of data, and a cloud-based web application firewall helps do just that. Web application security alone, however, can't completely address cloud compliance. Startups and enterprises alike are building more cloud-native applications, which often rely on third-party APIs and other components that are not directly controlled or managed by the organization itself. That's why these companies need an enterprise-grade cloud that provides a secure and compliant infrastructure and application platform.

Oracle's Approach to Compliance in the Cloud

Oracle's offerings address not only the web application security aspect of cloud compliance, but also the broader concerns around Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). The Oracle Dyn Web Application Firewall hides the origin of the web server, making it harder for attackers to reach. It also inspects all incoming traffic and blocks malicious requests, and it inspects outgoing traffic to protect against breaches of regulated data. The service has prebuilt rule sets for PCI DSS and other regulations that check whether the web server is attempting to transmit data in a manner that would result in a compliance violation—and if so, it blocks that traffic. And because, as I mentioned, the increasing use of APIs creates a larger attack surface, we offer Oracle Dyn API Protection to prevent malicious calls from reaching your network. In addition, Oracle Cloud Infrastructure, an IaaS and PaaS offering, holds a PCI DSS Attestation of Compliance for more than a dozen infrastructure and application services. Oracle Cloud Infrastructure also holds an attestation for HIPAA's rules around security, breach notifications, and, where applicable, privacy.

Your customers' data is your biggest asset. It can also be your biggest liability. Web application security, combined with the right cloud platform, can help keep it safe and compliant.

Kyle York
VP of Product Strategy, Oracle Cloud Infrastructure and GM, Oracle Dyn Global Business Unit


Oracle Cloud Infrastructure

Connect Private Instances with Oracle Services Through an Oracle Cloud Infrastructure Service Gateway

If you're a typical Oracle Cloud Infrastructure customer, you may have resources in your virtual cloud network (VCN) that need to access the Oracle Cloud Infrastructure Object Storage service, which has publicly addressable endpoints. Until now, you could use either public subnets or a NAT instance with an internet gateway in your VCN to access the service. However, you might not have wanted to use these options because of privacy, security, or operational concerns. We are happy to announce the availability of the service gateway, which addresses those concerns by enabling the following functions:

Private connectivity between your VCNs and Object Storage: You can add a service gateway to a VCN and use the VCN's private address space to access Object Storage without exposing the instances to the public internet. You don't need a public subnet, NAT instance, or internet gateway in your VCN.

Enhanced security for your Object Storage buckets: You can limit access to Object Storage buckets to an authorized VCN or to a specific range of IP addresses within the subnet. You can add conditional references to VCNs and IP addresses in IAM policies, which are satisfied only when you initiate connections through a service gateway.

Accessing Object Storage Through a Service Gateway

You might have a private instance in your VCN that accesses an Object Storage bucket through a NAT instance and internet gateway. This section walks through that scenario and then through an example in which you enable the same private instance to access the same bucket privately and securely through a service gateway.

NAT Instance and Internet Gateway

First consider a typical scenario in which private instances in the VCN access the Object Storage bucket through a NAT instance and internet gateway. Your VCN has one public subnet and one private subnet, with their associated route tables, security lists, and DHCP options created based on steps 1-3 described in this blog post. You also have an Object Storage bucket. You can use the Oracle Cloud Infrastructure Command Line Interface (CLI) to access the Object Storage bucket from the instances. The bucket is accessible from the instance in the private subnet through the NAT instance and the internet gateway.

Service Gateway

Now create a service gateway in the VCN and enable private connectivity between the private subnet and the Object Storage endpoint. You create a service gateway as a resource in the VCN, just as you did with the internet gateway. For traffic to be routed from a subnet in your VCN to the service gateway, add a route rule accordingly in the private subnet's route table.

Note: With the launch of the service gateway, we have introduced Service CIDR labels, which can be used in place of a CIDR block in route rules and security rules. A label maps to all IP addresses of the service within the region, so you don't have to know the specific CIDR blocks for the service's public endpoints, which could change over time. When you add the route rule, you can choose Service Gateway as the Target Type and provide the label of the OCI Object Storage service.

Optionally, you can secure your private instance by setting up egress security rules in the subnet's security list for the traffic to the Object Storage service. As with route rules, you can specify the Object Storage service as the destination service in the security rule.
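If you prefer to script these steps, the following is a minimal sketch using the OCI CLI; the command names reflect the CLI at the time of writing, and the OCIDs are placeholders:

# Find the Object Storage service OCID and its Service CIDR label
oci network service list

# Create a service gateway in the VCN, enabled for the Object Storage service
oci network service-gateway create \
  --compartment-id ocid1.compartment.oc1..<unique_id> \
  --vcn-id ocid1.vcn.oc1..<unique_id> \
  --services '[{"serviceId": "<object_storage_service_ocid>"}]'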
You have now removed the dependency on the NAT instance and internet gateway for Object Storage access. The Object Storage bucket is now accessible from the instance in the private subnet through the service gateway. Voila! You have successfully introduced a service gateway to your VCN to establish private connectivity between the private instance and an Object Storage endpoint.

Note: If your private instances need internet access for software updates or other functions, you can still use the internet gateway and NAT instance in the VCN. In this case, you need one route rule for the Object Storage CIDR blocks through the service gateway and an additional default route rule in the private subnet's route table that directs all other traffic through the NAT instance.

Let's take a minute to look at other features of the service gateway:

You can block or allow traffic through the service gateway. Blocking the service gateway stops all traffic through it, regardless of any existing route rules or security lists in your VCN. This flexibility provides administrative-level control at a single point in the network, without having to update security or route rules in different subnets.

You can also lock access to an Object Storage bucket down to specific VCNs by using IAM policies, providing enhanced security for your objects. Following is a sample of the kind of IAM policy that makes an Object Storage bucket accessible only from a specific VCN; the group and compartment names and the VCN OCID are placeholders:

Allow group <group_name> to read buckets in compartment <compartment_name> where request.vcn.id = '<vcn_ocid>'

We recommend that you use the service gateway for all your Object Storage access needs. You can find more information about the service gateway in the Networking documentation. Thank you for reading this post. Your feedback and recommendations are most welcome.

Vijay Arumugam Kannan
Principal Product Manager, Oracle Cloud Infrastructure, Networking


Oracle Cloud Infrastructure

ANSYS and Oracle: ANSYS Fluent on Bare Metal IaaS

If you've ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, you've likely used a product in whose creation ANSYS software played a critical role. ANSYS is a global leader in engineering simulation, and Oracle is pleased to announce its partnership with ANSYS. Oracle Cloud Infrastructure bare metal compute instances enable you to run ANSYS in the cloud with the same performance that you would see in your on-premises data center.

Why Bare Metal Is Better for HPC

Oracle Cloud Infrastructure continues to invest in HPC, and nothing beats the performance of bare metal. The virtualized, multi-tenant platforms common to most public clouds are subject to performance overhead. Traditional cloud offerings require a hypervisor to enable the management capabilities that are required to run multiple virtual machines on a single physical server. Hardware manufacturers have demonstrated that this additional overhead significantly affects performance [i]. Bare metal servers, without a hypervisor, deliver uncompromising and consistent performance for high performance computing (HPC). Instances with the latest-generation NVMe SSDs, providing millions of IOPS at very low latency, combined with Oracle Cloud Infrastructure's managed POSIX file system, ensure that Oracle Cloud Infrastructure supports the most demanding HPC workloads. Our bare metal compute instances are powered by the latest Intel Xeon processors and secured by the most advanced network and data center architecture, yet they are available in minutes when you need them—in the same data centers, on the same networks, and accessible through the same portals and APIs as other IaaS resources. With Oracle Cloud Infrastructure's GPU instances, you also have a high-performance graphical interface for pre- and post-processing ANSYS simulations.

ANSYS Performance on Bare Metal OCI Instances

The performance of ANSYS Fluent software on Oracle Cloud Infrastructure bare metal instances meets, and in some cases exceeds, the raw performance of on-premises HPC clusters, demonstrating that HPC can run well in the cloud. Additionally, consistent results demonstrate the predictable performance and reliability of bare metal instances. The following chart shows the raw performance data of the ANSYS Fluent f1_racecar_140m benchmark, a 140-million-cell CFD model, on Oracle Cloud Infrastructure's Skylake and Haswell compute instances. Visit the ANSYS benchmark database to see how Oracle Cloud Infrastructure compares favorably to on-premises clusters.

Figure 1: ANSYS Fluent Rating on Oracle Cloud Infrastructure Instances

Installation and configuration of ANSYS Fluent on Oracle Cloud Infrastructure is simple, and the experience is identical to the on-premises installation process. Bare metal enables easy migration of HPC applications; no additional work is required for compiling, installing specialized virtual machine drivers, or logging utilities. Although the performance is equal to an on-premises HPC cluster, the pricing is not. You can easily spend $120,000 or more on a 128-core HPC cluster [ii], and that's just for the hardware; that number doesn't include power, cooling, and administration. That same cluster costs just $8 per hour on Oracle Cloud Infrastructure; at that rate, the $120,000 hardware price alone buys 15,000 cluster-hours, or more than 20 months of around-the-clock use. That's an operating expense you're paying only when you use it, not a large capital expense that you have to try to "right-size" and keep constantly in use to get the best ROI.
Running on Oracle Cloud Infrastructure means that you can budget ANSYS Fluent jobs precisely, in advance, and the elastic capacity of the cloud means that you never have to wait in a queue.

Scaling Is Consistent with On-Premises Environments

When virtualized in your data center, CPU-intensive tasks that require little system interaction normally experience very little impact or CPU overhead [iii]. However, virtualized environments in the cloud include monitoring, which adds significant overhead to each node. This virtualization overhead is not synchronized across the cluster, which creates problems for MPI jobs such as ANSYS Fluent, which effectively must wait for the slowest node in the cluster to return data before advancing to the next simulation iteration. You're only as fast as your slowest node, noisiest neighbor, or overburdened network. With Oracle Cloud Infrastructure's bare metal environment, no hypervisor or monitoring software runs on your compute instance. With limited overhead, ANSYS Fluent scales across multiple nodes just as well as it would in your data center. Our flat, non-oversubscribed network virtualizes network I/O on the core network, instead of depending on a hypervisor and consuming resources on your compute instance. The two 25 Gb network interfaces on each node guarantee low latency and high throughput between nodes. As shown in the following chart, many ANSYS Fluent models scale well across the network.

Figure 2: ANSYS Fluent Scaling on an Oracle Cloud Infrastructure Instance

The following chart illustrates greater than 100% efficiency with respect to a single core, from 400,000 cells per core down to below 50,000 cells per core.

Figure 3: Efficiency Remains at 100% Even as Cells Per Core Drop

Serious HPC Simulations in the Cloud

Oracle Cloud Infrastructure has partnered with ANSYS to provide leading HPC engineering software on high-performance bare metal instances so that you can take advantage of cloud economics and scale for your HPC workloads. Our performance and scaling with ANSYS matches on-premises clusters. It's easy to create your own HPC cluster, and the cost is predictable and consistent. No more waiting for the queue to clear for your high-priority ANSYS Fluent job, and no more over-provisioning hardware. Sign up for 24 free hours of a 208-core cluster or learn more about Oracle Cloud Infrastructure's HPC offerings.

[i] http://en.community.dell.com/techcenter/high-performance-computing/b/general_hpc/archive/2014/11/04/containers-docker-virtual-machines-and-hpc
[ii] Example price card: https://www.hawaii.edu/its/ci/price-card/
[iii] https://personal.denison.edu/~bressoud/barceloleggbressoudmcurcsm2.pdf


Events

Oracle’s Continued HPC Investments at ISC High Performance 2018

Over the last year we've continued to make Oracle Cloud one of the best platforms for high-performance computing (HPC) workloads. We designed our cloud with HPC as one of the core use cases, and this influenced everything from our choice of server hardware to our data center design, with a nonblocking network to ensure low-latency and high-bandwidth connectivity between compute nodes. Our new managed File Storage service is built with performance as one of its most important characteristics, enabling you to offload the management of a high-performance clustered file system. We also introduced instances powered by NVIDIA's Tesla V100 GPUs, released at NVIDIA's GPU Technology Conference in March. Initially these instances were generally available only in our Ashburn, VA, data center as bare metal instance types. Today, we're excited to announce the expansion of these instances to our London region, along with the preview availability of virtual machine shapes, allowing customers to get instances based on Intel Xeon Scalable Processors with 1, 2, or 4 NVIDIA Tesla V100 Tensor Core GPUs. You can find more information on our NVIDIA GPU microsite.

We understand how important the ISV and partner ecosystem is to the HPC community. We've been making steady progress through our previously announced partnerships with important global ISVs such as Altair, Citrix, and Teradici. Today we're excited to announce a partnership with one of the global leaders in engineering simulation, ANSYS. ANSYS offers a portfolio of engineering simulation products to help customers solve the most complex design challenges and engineer products limited only by imagination. "The partnership with Oracle Cloud Infrastructure enables our customers to run simulations in a bare metal environment with the same world-class experience and consistency that they have on-premises," said Wim Slagter, Director HPC & Cloud Alliances at ANSYS. "The partnership expands on ANSYS' Open Cloud Strategy—empowering customers with flexibility to run simulations on their cloud platform of preference." Read more about our work with ANSYS, including initial benchmarks and performance testing on Oracle Cloud Infrastructure. Over the course of the year, you'll see benchmarks, easy-to-deploy templates, and other fantastic tools to make it easy to deploy and run HPC software on Oracle Cloud Infrastructure.

Future HPC Investments

Through our 20-year collaborative and impactful partnership with Intel, we're continuing to accelerate our HPC efforts. Our recent work on X7 Intel Xeon Scalable Platinum instances was released at the last Oracle OpenWorld, and longer-term work is occurring on projects such as Oracle in-memory databases and Oracle Exadata. "Intel's innovative HPC foundation, based on the Intel Xeon Scalable processors, includes critical platform innovations in memory, storage, and acceleration technologies to address the complex spectrum of diverse HPC workload requirements. We'll continue to work with Intel over the course of the year with some exciting announcements relating to HPC later in the year," said Vinay Kumar, Vice President of Product Management, Oracle Cloud Infrastructure.

HPC is a major area of investment for us. We're trying to solve the challenges that customers face with both cloud- and on-premises-based deployments, in which compute and network performance is a major challenge for tightly coupled or MPI workloads. "Intel has a long history working with Oracle on offerings that are optimized for enterprise workloads.
We're excited about our collaboration with Oracle Cloud Infrastructure, which started with the initial launch of X5 Compute Instances in 2016, and then the launch of X7 Compute Instances at the end of 2017. Intel continues to work closely with Oracle Cloud Infrastructure on further collaborations on HPC, where users will experience significant performance improvements in workloads compared to previous-generation hardware," said Jeff Wittich, Director, Business Strategy Platform Enablement, Intel.

Enterprises today are using our HPC capabilities in our various regions around the globe. Customers such as YellowDog and Zenotech are already benefiting from these supercomputing capabilities. YellowDog provides a platform that enables animation studios and VFX facilities to access tens of thousands of cores or GPUs to deliver intensive rendering workloads within seemingly impossible deadlines. Zenotech provides a simulation-as-a-service platform that enables customers to run computational fluid dynamics. Customers find us to be a consistently faster, cheaper, and more efficient option compared to other cloud providers or even on-premises clusters. We'll be demonstrating and talking about some of these HPC use cases in our sessions next week, as well as at our session Run Cloud HPC and GPU Applications Without a Virtual Layer on Wednesday, June 27, from 3 p.m. to 3:20 p.m. at booth N-210, and at a quick vendor showdown session. You can also visit our booth at G-822 to meet our engineering teams, learn more, and get free credits to run your own HPC jobs and workloads on Oracle. Looking forward to seeing you all there!

Karan


Oracle Cloud Infrastructure

Announcing Boot Volume Backups and Clones for Application Protection and Lifecycle Management

We are excited to announce that you can now back up and clone your boot volumes online, without any downtime, on Oracle Cloud Infrastructure. Put all your worries about application protection and lifecycle management behind you! We continue to invest heavily in adding comprehensive application and data protection solutions to our cloud offering.

Boot volumes provide remote boot disks that are encrypted by default and offer high performance, fast launch times, and durability for your bare metal and virtual machine (VM) instances. By using the boot volume backup and clone capabilities, combined with the recently announced volume groups feature, you can easily create point-in-time consistent backups and clones of your running enterprise applications that span multiple storage volumes across one or more compute instances, while they are online, without any downtime. These capabilities, available only from Oracle Cloud Infrastructure, expand the breadth of built-in application, data management, and protection capabilities that you should expect from a cloud provider.

Backing up a boot volume enables you to preserve the entire state of your running operating system as a backup. All of the block volume backup capabilities also apply to boot volumes: you can configure policy-based automated and scheduled backups, you can choose full or incremental backups, the backup completes in about a minute, and a restored boot volume becomes available to use within a matter of seconds. Cloning a boot volume allows you to quickly provision an exact, isolated copy of a running instance without going through a backup and restore process. A clone of a boot volume becomes available for use within a matter of seconds, making it trivial to spin off new environments for scale-up, development, QA, UAT, and troubleshooting. You have a choice: either back up your instance and keep the backup for a future restore, or clone it and use the clone immediately. These new capabilities are provided at no additional cost to Oracle Cloud Infrastructure customers beyond the cost of the consumed block and object storage.

The rest of this post walks through creating a boot volume backup and clone in the Console.

Back Up a Boot Volume

On the Boot Volume details page, click Backups and then click Create Backup. Specify a name for the backup. The backup of the boot volume becomes available in a few seconds.

Assign a Backup Policy to a Boot Volume

You can easily assign a backup policy to a boot volume so that backups happen automatically on a schedule. On the volume details page, in the Backup Policy field, click Assign, and select one of the three predefined policies. Backups of the boot volume then happen automatically and are retained based on the policy that you selected.

Clone a Boot Volume

Similarly, you can clone a boot volume in a few clicks. On the Boot Volume details page, click Clones and then click Create Clone. Specify a name for the clone. Your cloned boot volume is available to use immediately. A CLI sketch of both operations follows.
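For reference, both operations can also be scripted. A minimal sketch using the OCI CLI; the command names reflect the CLI at the time of writing, parameters may vary by version, and the OCIDs are placeholders:

# Create an on-demand backup of a boot volume
oci bv boot-volume-backup create \
  --boot-volume-id ocid1.bootvolume.oc1..<unique_id> \
  --display-name my-boot-backup

# Clone a boot volume by creating a new boot volume from a source boot volume
oci bv boot-volume create \
  --source-boot-volume-id ocid1.bootvolume.oc1..<unique_id> \
  --display-name my-boot-clone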
We want you to experience these new features and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It's easy to try them out with $300 of free credit. For more information, see the Oracle Cloud Infrastructure Getting Started guide, Block Volume service overview, and FAQ. Watch for announcements about additional features and capabilities in this space. We value your feedback as we continue to make our service the best in the industry. Send me your thoughts on how we can continue to improve, or let me know if you want more details on any topic.

Max Verun


Oracle Cloud Infrastructure

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam

To help our customers become cloud ready and validate their cloud skills, we launched the first-ever Oracle Cloud Infrastructure Architect Associate certification in January 2018. In May, we released a study guide and a practice exam to help our customers prepare for the certification. Hundreds of customers are embarking on the certification journey, and the feedback has been great. To help you with this journey, we are introducing a series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam. These interviews will provide strategies to help you prepare for the exam.

To kick off the series, I spoke with Umair Siddiqui, who has over 17 years of industry experience spanning cloud computing, network function virtualization, software-defined networking, customer enablement, professional services, and product management. Currently, Umair works for Oracle as part of the Product Management team focused on customer enablement.

Greg: Umair, how did you prepare for the certification?

Umair: While I already had cloud experience, I had to ramp up my knowledge on OCI (Oracle Cloud Infrastructure). I watched the available eLearnings, Service Intro, Just in Time, I Built It in the Cloud videos, Business Essentials videos, and Fundamentals training. I watched all of the videos for a high-level overview and then read up on the OCI documentation. I highly recommend that you get the $300 free promo account. That is the first thing I did. Using the promo account, I set my own learning objectives, such as creating and deploying an application using load balancing. I practiced through the actual deployment. I also read the documentation at docs.oracle.com, which provided me with more focused information on topics such as Terraform and Database as a Service.

Greg: How is life after getting certified?

Umair: One thing I like about OCI certification is that I got the digital badge almost instantaneously after clearing the certification. I displayed my digital badge on various social media platforms, where I received "a lot of congrats, a lot of thumbs-up." Having passed the exam not only helped me validate my own skills on OCI but has also established me as an OCI practitioner in the industry. I have observed that people come to me with their questions about OCI and for advice on how to prepare for the certification. This has increased my desire to dive deeper into learning more about Oracle's cloud offerings.

Please subscribe to this page for more help preparing for the Oracle Cloud Infrastructure 2018 Architect Associate exam.

Greg Hyman
Principal Program Manager, Oracle Cloud Infrastructure Certification
greg.hyman@oracle.com
Twitter: @GregoryHyman
LinkedIn: GregoryRHyman

Associated links:
Oracle Cloud Infrastructure 2018 Architect Associate exam
Oracle Cloud Infrastructure 2018 Architect Associate study guide
Oracle Cloud Infrastructure 2018 Architect Associate practice test
Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam

Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series: Umair Siddiqui, Nitin Vengurlekar, Rajib Kundu, Miranda Swenson, Robby Robertson, Chris Riggin, Anuj Gulati


Oracle Cloud Infrastructure

New Oracle Linux KVM Image for Oracle Cloud Infrastructure

Easy deployment of virtual machines is important to most cloud deployments. The Oracle Linux KVM image simplifies the deployment of VMs by integrating with services such as block storage and virtual network interfaces through the use of scripted tools, including oci-utils. These tools make it easy to define VM guest domains, allocate specific block volumes or VNICs, and launch or remove VMs on Oracle Cloud Infrastructure. We've just updated the Oracle Linux KVM image for Oracle Cloud Infrastructure with the following enhancements:

Support for VNIC creation through the oci-utils script oci-network-config --create-vnic. For example, the following command creates a VNIC and assigns a public IP address:

$ sudo oci-network-config --create-vnic --vnic-name vnic-guest3 --assign-public-ip

Support for block volume creation through the oci-utils script oci-iscsi-config --create-volume. For example, the following command creates a 100 GB volume and attaches the iSCSI device:

$ sudo oci-iscsi-config --create-volume 100 --volume-name vol-guest3

Full configuration of Virtual Function network interfaces using the native Oracle Linux systemd LSB networking (ifcfg network configuration files)

Updated to oci-utils version 0.6

Updated to the Oracle Linux 2018-05-08 base image

To deploy the new KVM image on Oracle Cloud Infrastructure, import it by using the URL on this page, and create the instance by using the custom image; a CLI sketch of the import follows the links below. For more information, visit the following pages:

Oracle Linux KVM Image for Oracle Cloud Infrastructure
Getting Started: Oracle Linux KVM Image for Oracle Cloud Infrastructure
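If you prefer to script the import step, the following is a minimal sketch using the OCI CLI; the command group and parameters are assumptions based on the CLI at the time of writing, and the URL and OCID are placeholders:

# Import the published KVM image into your tenancy as a custom image
oci compute image import from-object-uri \
  --compartment-id ocid1.compartment.oc1..<unique_id> \
  --uri <published_image_url> \
  --display-name OracleLinux-KVM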


Developer Tools

Deploy Jenkins on Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)

In my previous blog post, I demonstrated how to deploy Jenkins in a master/slave architecture on Oracle Cloud Infrastructure by using the Oracle Cloud Infrastructure Compute plugin for Jenkins. That plugin enables you to spin up virtual machine (VM) or bare metal instances as slaves/agents on demand within Oracle Cloud Infrastructure and then tear them down automatically after the job is complete. By spinning up a large number of agents, Jenkins can run many jobs in parallel.

As an alternative to using the VM-based plugin, you can instead create container-based Jenkins agents, which can be spun up more quickly than VMs (seconds versus minutes) and torn down quickly after the build job is complete. Jenkins container-based agents are provisioned from a Docker container image with all the tools and environment settings that you need. In this blog post, I'll demonstrate how to set up Jenkins agents as Docker containers and deploy them within a Kubernetes cluster in a few steps. I'll use a different plugin for Jenkins to accomplish this: the Kubernetes plugin, with Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE). OKE delivers secure, highly available Kubernetes clusters and manages containerized applications on Oracle Cloud Infrastructure. Following the steps in this blog post, you can create a Jenkins deployment as shown in the following figure:

Prerequisites

A Jenkins server (master), as described in Deploy Jenkins on Oracle Cloud Infrastructure.
A Kubernetes cluster already deployed in Oracle Cloud Infrastructure. For information about how to create a Kubernetes cluster, see the service documentation.

Step 1: Install the Kubernetes Plugin for Jenkins

The Kubernetes plugin for Jenkins is used to run dynamic Jenkins agents in a Kubernetes cluster. On the Jenkins Dashboard, click Manage Jenkins and then click Manage Plugins. On the Available tab, search for the Kubernetes plugin and install it without restarting.

Step 2: Configure the Kubernetes Plugin with OKE

In the Jenkins Dashboard, click Manage Jenkins and then click Configure System. Click Add a new cloud, and choose Kubernetes. Go to the Kubernetes cloud configuration section and enter the following information:

Name: The name of the Kubernetes cloud configuration.
Kubernetes URL: The OKE cluster endpoint. Assuming you already downloaded the kubeconfig file, you can obtain the URL with, for example, kubectl cluster-info.
Kubernetes server certificate key: The X509 PEM encoded certificate. This parameter is optional if you select Disable https certificate check. You can obtain the certificate from your kubeconfig file; be sure to base64 decode it.
Kubernetes namespace: The namespace of the Kubernetes cluster.
Credentials: The secret text that stores your Kubernetes secret token. Assuming you already downloaded the kubeconfig file, you can obtain the token of the default service account with, for example:
kubectl get secret $(kubectl get sa default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode

Configure Kubernetes role-based access control (RBAC) to enable the default service account token to interact with the cluster by giving it the admin role, for example:

kubectl create clusterrolebinding default-admin --clusterrole=cluster-admin --serviceaccount=default:default

Click Test Connection and ensure that the response is "Connection test successful," as shown in the following screenshot.

Note: Occasionally, an error message stating "No valid crumb was included in the request" is displayed. This error is a bug in the Jenkins Kubernetes plugin. To work around it, go back to the previous page and retry.

Configure the Jenkins URL by entering your Jenkins Master URL.
You can use the default values for the rest of the fields. Configure the Kubernetes Pod Template as shown in the following screenshot. Remember the label that you set for Jenkins agents, because you will need it later when running the build jobs. Configure the Container Template as shown in the following screenshot. For the purpose of this blog post, we are using Oracle Container Registry as the source to pull a custom Jenkins jnlp-slave Docker image. Ensure that the Jenkins jnlp-slave Docker image is already available in the registry. You can also use the public Docker Hub registry to pull the Jenkins jnlp-slave image, in which case you enter jenkins/jnlp-slave in the Docker image field. Save the configuration.

Notes:
I marked the repository in Oracle Container Registry as public to be able to pull the custom jnlp-slave Docker image. If you are using a private repository, configure Jenkins with the right credentials to access the repository.
If you are using the public Docker Hub registry to pull the jnlp-slave image, be sure to enter jenkins/jnlp-slave. Many online resources say to enter jenkinsci/jnlp-slave, but that image is being deprecated.

Step 3: Test the Kubernetes Plugin with OKE

Create a simple project in Jenkins by clicking New and selecting Freestyle project. Name the project testOKE, and use the default values. In the Label Expression field, enter k8s, which we used in the preceding configuration. Build the project. After the build starts, the jnlp-slave container is provisioned and appears on the Jenkins Dashboard. You can check its status with, for example, kubectl get pods. This completes the deployment of Jenkins agents on Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE).

Extending the Deployment

In the preceding deployment, we ran the Jenkins master as a VM and scaled out Jenkins agents/slaves as Docker containers on Container Engine for Kubernetes (OKE). We can extend this deployment by deploying the Jenkins master inside the Kubernetes cluster, alongside the Jenkins slaves. This configuration provides fault tolerance for Jenkins containers (both master and slaves), service resiliency, and better resource utilization. Let's look at how to achieve this in a few steps. The deployment looks similar to the setup shown in the following figure:

Prerequisites

A Kubernetes cluster already deployed in Oracle Cloud Infrastructure.

Deployment

The deployment of the Jenkins master in Kubernetes includes the following steps:
1. Prepare the Kubernetes manifest files for Jenkins.
2. Deploy the Jenkins master along with a persistent volume claim (PVC) on Oracle Cloud Infrastructure Block Volume.
3. Expose the Jenkins service through a load balancer.
4. Configure the Jenkins master.

Step 1: Prepare the Kubernetes Manifests for Jenkins

The jenkins-master.yaml manifest file contains the deployment configuration for the Jenkins master, which creates a single replica. We'll use the latest Jenkins image in this setup while exposing ports 8080 and 50000 on the Jenkins master containers. The jenkins-dir volume mount is associated with the PVC called jenkins-master-pvc. The jenkins-pvc.yaml file consists of a PVC configuration that uses Oracle Cloud Infrastructure block volumes. We'll reserve a 50-GB block volume, which will be used to store Jenkins build files and artifacts, if needed. A sketch of both manifests follows.
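The following is a minimal sketch of the two manifests; the image tag, labels, and storage class are assumptions based on the description above, so adjust them to your environment:

# jenkins-master.yaml: single-replica Jenkins master exposing the web UI and agent ports
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-master
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080    # web UI
        - containerPort: 50000   # JNLP agent connections
        volumeMounts:
        - name: jenkins-dir
          mountPath: /var/jenkins_home   # Jenkins home, build files, and artifacts
      volumes:
      - name: jenkins-dir
        persistentVolumeClaim:
          claimName: jenkins-master-pvc

# jenkins-pvc.yaml: reserves a 50-GB Oracle Cloud Infrastructure block volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-master-pvc
spec:
  storageClassName: "oci"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi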
Step 2: Deploy to the Kubernetes Cluster

Store the manifest files in a directory (for example, jenkins-k8s) and create the Jenkins master deployment with its PVC (for example, kubectl create -f jenkins-k8s/). Verify that the deployment and pod are created (kubectl get deployments, kubectl get pods). Verify that the PVC is created, either by running kubectl get pvc or by going to the Block Volumes section of the Oracle Cloud Infrastructure Console.

Step 3: Expose the Deployment via a Load Balancer

Now that the deployment is created, we can create a service and expose it via an Oracle Cloud Infrastructure load balancer on port 80 while setting the target port to 8080 (the Jenkins master listens on 8080 by default), for example:

kubectl expose deployment jenkins-master --type=LoadBalancer --port=80 --target-port=8080

You can see the load balancer being provisioned in the Oracle Cloud Infrastructure Console. After it is provisioned, it has a public IP address exposed with port 80 as the listener. You can verify the public IP address of your service by running kubectl get svc.

Step 4: Configure the Jenkins Master

Access the Jenkins dashboard at the public IP address that we obtained in step 3, on port 80. You should see the following screen. To get the initial admin password for Jenkins, read it from the master pod (for example, kubectl exec <jenkins-master-pod> -- cat /var/jenkins_home/secrets/initialAdminPassword). After this, the process of configuring the Jenkins master is similar to what I illustrated in my previous blog post on deploying Jenkins on Oracle Cloud Infrastructure. After configuring the master, you can install the Kubernetes plugin and scale out the Jenkins slaves as illustrated in the first part of this post.

Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) offers seamless integration with the Jenkins Kubernetes plugin. Configuring OKE with the Jenkins Kubernetes plugin is similar to configuring other Kubernetes engines. OKE delivers secure, highly available Kubernetes clusters and manages containerized applications on Oracle Cloud Infrastructure.

Abhiram Annangi | Twitter  LinkedIn


Oracle Cloud Infrastructure

Cloudera Enterprise Data Hub now Validated on Oracle Cloud Infrastructure

We are proud to announce a validated reference architecture for Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure. Starting today, you can deploy Cloudera's industry-leading big data technology on Oracle's high-performance cloud with full Cloudera support. The Cloudera and Oracle partnership allows customers to deploy comprehensive data strategies, from business operations to data warehousing, data science, data engineering, streaming, and real-time analytics, all on a unified enterprise cloud platform with unmatched performance, security, and availability.

Cloudera Enterprise Data Hub Provides Comprehensive Big Data Capabilities

Cloudera Enterprise Data Hub brings together the best big data technologies from the Apache Hadoop ecosystem, including HDFS, HBase, Hive, Spark, Impala, Solr, and Kudu, and adds consistent security, granular governance, and full support. It is the fastest, most secure, and easiest to use big data software available. Cloudera is a great choice for a variety of big data use cases, including:

Growth: Customer 360-degree view
Connect: Internet of Things (IoT), prescriptive and predictive analytics
Protection: Fraud prevention, compliance, GDPR

Learn more about Cloudera Enterprise Data Hub.

Oracle Cloud Infrastructure Adds Big Data Flexibility and Performance

Cloudera on Oracle Cloud Infrastructure is a joint solution that combines the flexibility and performance of Oracle Cloud Infrastructure with the scalable data management of Cloudera Enterprise Data Hub. Our solution enables customers to realize their data strategies, from operational to analytics, with amazing performance, an unmatched data ecosystem, and the inherent benefits of moving from on-premises fixed infrastructure to elastic cloud infrastructure.

Blazing Fast Big Data Performance

Oracle offers the most powerful bare metal compute instances with local flash storage in the industry. In a TeraSort benchmark test, sorting 10 terabytes of data using 10 worker nodes on Oracle Cloud can be done in about 45 minutes. Although this scale is only a fraction of what is possible, this graph of the benchmark shows the impact of both bare metal versus VM and of local storage versus block storage. Only Oracle offers this big-data-ready local storage, based on advanced NVMe SSD technology and backed by a storage performance SLA. The bare metal compute instances are connected in clusters to a non-oversubscribed 25-gigabit network infrastructure, guaranteeing extremely low latency and very high throughput, which is a key requirement for high-performance big data workloads. In fact, Oracle Cloud Infrastructure is the only cloud with a network throughput performance SLA.

Unmatched Data Ecosystem

Cloudera clusters that are spun up in the cloud can sit right next to Exadata or Oracle Database environments over private networks, allowing easy data sharing for analytics purposes. Gartner regards Oracle as one of the top three vendors in the data management storage analytics space, making Cloudera on Oracle Cloud Infrastructure a great choice for running analytics workloads.

Right-Size Your Infrastructure in the Cloud

Cloud infrastructure enables you to deploy the optimal amount of infrastructure to meet your demands. No more under-utilization of too much infrastructure, or long queues due to under-forecasting.
In addition, Oracle offers:

The lowest compute pricing from a pay-as-you-go (PAYG) perspective
Additional discounts available from a sales perspective for critical partners like Cloudera
The lowest network egress costs in the industry
Reduced complexity and risk of migration from on-premises with bare metal

Deploying Cloudera Enterprise Data Hub

You can easily deploy Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure by using Terraform automation. There are multiple Terraform templates for deploying a fully configured Cloudera Enterprise Data Hub instance or cluster on Oracle Cloud Infrastructure. Currently you can choose Sandbox, Development, Production Starter, and N-Node (which is configurable for clusters of any scale). For details about the Terraform templates, see the Readme.md file. For more information about installing and using Terraform on Oracle Cloud Infrastructure, see Terraform on Oracle Cloud Infrastructure. A white paper that details a reference architecture for Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure, and the use of these Terraform templates, is located at Cloudera Enterprise Data Hub Reference Architecture for Oracle Cloud Infrastructure Deployments.

Have questions or want to learn more? Join our free webinar on 6/12, Faster Insights with Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure. Register now.

Let Us Know What You Think

We hope you will be as excited as we are about the Cloudera plus Oracle solution. Let us know what you think!

Mahesh Thiagarajan
Director, Product Management
https://www.linkedin.com/in/mthiagarajan/


Announcing New Autonomous Cloud Services on Oracle Cloud Infrastructure

We're excited to announce the availability of four new autonomous cloud services now deployable on Oracle Cloud Infrastructure, one of the best-performing infrastructure platforms in the industry:  Oracle Autonomous Content and Experience Cloud: A cloud-based content hub that drives omnichannel content management and accelerates experience delivery. Learn more.  Oracle Autonomous Mobile Cloud Enterprise: Build and deploy mobile apps and intelligent chatbots that connect to any backend system quickly, securely, and easily. Learn more. Oracle Autonomous API Platform Cloud: An end-to-end platform for designing, prototyping, documenting, testing, and managing the proliferation of business-critical APIs. Learn more.  Oracle Autonomous Data Integration Platform Cloud: Migrate and extract value from data by bringing together capabilities of a complete data integration, data quality, and data governance solution powered by machine intelligence and artificial intelligence. Learn more.  These services add to a growing family of services that take advantage of Oracle Cloud Infrastructure, including Java Cloud Service, SOA Cloud Service, Big Data Cloud Service, Autonomous Analytics Cloud, Autonomous Integration Cloud, and Autonomous Visual Builder Cloud.  We firmly believe that the underlying infrastructure on which your workloads run matters. Oracle Cloud Infrastructure's enhanced regions provide a highly available infrastructure that is ideal for enterprise applications, along with the choice of deploying those applications on either bare metal or virtual machine compute instances. Stay tuned for more exciting updates coming soon.  Justin Smith, Principal Product Manager


Oracle Cloud Infrastructure

Kubernetes: A Cloud (and Data Center) Operating System?

As Kubernetes adoption grows across major cloud providers, it's interesting to compare Kubernetes itself to the concept of an operating system. According to Wikipedia, an operating system is defined as "system software that manages computer hardware and software resources and provides common services for computer programs." Abstractly, this isn't so different from the current model of Kubernetes running on top of a cloud provider, servicing applications that are built to run on top. If we start to think about Kubernetes in this context, what can we learn about where Kubernetes has been and where it is heading?

Kubernetes Value Proposition

Those who operate data centers have long understood the value of standardizing on a smaller set of underlying components, including the operating system, to minimize operating costs and overhead. Customers and vendors alike have rallied around Kubernetes ahead of the alternatives, recognizing the value of an open (albeit somewhat complex) standard for container orchestration. As enterprises have adopted container technology, they too have recognized the opportunity to build on this open Kubernetes platform as a way to ease their transition from on-premises applications to the cloud, avoid lock-in across cloud providers, and provide the future fabric for hybrid deployments and even serverless applications.

Cloud OS - Present and Future

We typically think about an operating system as part of a "sandwich": the layer between the (hardware and software) resources below it and the applications running on top. In the context of our Kubernetes analogy, a cloud provider (or on-premises data center) is underneath, and business applications are on top. In general, the job of the operating system layer here is to abstract away the complexity of interacting with the underlying resources and make it easier for applications to be built and run. Of course, not all providers are created equal here. Just as I can run Linux on a Raspberry Pi or on a high-end bare metal server, I can run Kubernetes on clouds with varying degrees of sophistication. The right cloud fabric—with high, predictable performance from the underlying compute, storage, and network, as well as security, governance, and control interfaces—is crucial to enabling enterprise-grade Kubernetes and the applications that use it.

Just as Linux has expanded way beyond the kernel, the "Cloud OS" of the future will go beyond base Kubernetes to include what are generally thought of today as "Kubernetes add-ons" but are really necessary enabling components of a cloud (or data center) OS. Relevant examples are service meshes (Istio and Linkerd), serverless functions (the Fn project), monitoring and logging add-ons, and Kubernetes "operators"—a framework that enables (stateful) applications on Kubernetes (for example, a WebLogic Operator and a MySQL Operator, and potentially even operators for Kubernetes itself). Cloud providers will move to package all these components into managed Cloud OSs, which can shield their users, developers, and enterprises from the complexities of managing their own container infrastructure, particularly in high-availability contexts, and ensure the ongoing integrity of the OS and the compatibility with the service layers underneath.
Announcing Availability

This is what we are working towards at Oracle Cloud Infrastructure: an open, standards-based Cloud OS that is based on unmodified, upstream, open source projects, managed on an enterprise-grade cloud infrastructure with superior performance, availability, and security. Our customers will be able to use it to run their business applications with confidence, and with the freedom to move them between data centers and clouds. Two key pieces of this Cloud OS are now generally available from Oracle: Oracle Cloud Infrastructure Registry and Container Engine for Kubernetes. Registry is a highly available, private container registry service for storing and sharing container images within the same regions as the deployments. Container Engine for Kubernetes is a fully managed, enterprise-ready container service that combines the production-grade container orchestration of standard upstream Kubernetes with the control, security, and high, predictable performance of Oracle's next-generation cloud infrastructure. Where are you on the journey towards a Cloud OS? We'd welcome the opportunity to talk to you about your current container strategy, see if we can help you, and get your feedback about our plans.
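To give a feel for how Registry fits a standard container workflow, pushing an image uses the ordinary Docker CLI against a regional endpoint. A minimal sketch, using the Ashburn endpoint and an illustrative image name (the username format is tenancy-name/oci-username, with an auth token as the password):

docker login iad.ocir.io
docker tag myapp:latest iad.ocir.io/<tenancy-name>/myapp:latest
docker push iad.ocir.io/<tenancy-name>/myapp:latest

Jonathan Reeve
Sr. Director, Product Management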


Oracle Cloud Infrastructure

Oracle Shatters Cloud Storage Limits: 32 TB Volumes, 1 PB per Instance with the Best Performance!

We are excited to announce that the Oracle Cloud Infrastructure Block Volume service now supports volumes of up to 32 TB, double the prior limit of 16 TB per volume. With 32 attachments on a compute instance, you can now have a total of 1 petabyte of high-performance block storage per instance, twice as much as before. All our storage uses best-in-class NVMe SSDs, and its data plane availability, control plane availability, and performance are backed by the Oracle Cloud Infrastructure SLA. Now is the time to bring your storage-hungry workloads, both Oracle and non-Oracle, to our cloud platform.

With this announcement, Oracle Cloud Infrastructure further strengthens its position as the best public cloud for storage-intensive applications, as highlighted by the first Editor's Choice Award granted to a cloud provider by StorageReview. Compared to virtualized all-flash storage array solutions, Oracle Cloud Infrastructure block volumes have better peak performance and more usable IOPS at peak.

Rest assured that the performance you expect remains the same with these new limits. We continue to offer predictable, consistent, and linearly scaling performance regardless of volume size. Each block volume gets 60 IOPS/GB and 480 KB/s per GB of throughput, up to a maximum of 25,000 IOPS and 320 MB/s of throughput per volume. Just provision the capacity that you need, and the performance is there.

The 32 TB volumes are available to all Oracle Cloud Infrastructure customers, subject to each account's block storage capacity limits. These larger volumes continue to be billed at the same rate of $0.0425 per GB per month, and they support all of the same functionality as before, such as policy-based automated and scheduled backups and deep disk-to-disk clones that complete in seconds. To compare pricing: the Dell EMC Unity 350F All-Flash Storage array has a starting price of $33,685 for 4 TB of storage, about five times the cost of three years of amortized usage of the same 4 TB starter capacity on the Oracle Cloud Infrastructure Block Volume service.

Oracle Siebel, PeopleSoft, and E-Business Suite environments, and other large enterprise databases, can greatly benefit from this massive increase in high-performance storage capacity and its price advantage. There is no longer any reason or excuse to stay behind with on-premises storage solutions. Watch this page for announcements about additional features and capabilities. We value your feedback as we continue to make our service the best in the industry. Send me your thoughts on how we can continue to improve or if you want more details on any topic.
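As a quick illustration, provisioning a maximum-size volume is a single CLI call. This is a sketch that assumes a CLI version accepting the --size-in-gbs option; the availability domain, compartment, and display name placeholders are illustrative:

oci bv volume create --availability-domain <AD_name> --compartment-id <compartment_OCID> --display-name large-volume --size-in-gbs 32768

At that size, throughput and IOPS are governed by the per-volume maximums quoted above (25,000 IOPS and 320 MB/s).

Max Verun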


Introducing Volume Groups: Enabling Coordinated Backups and Clones for Application Protection and Lifecycle Management

We are excited to introduce a new feature in the Oracle Cloud Infrastructure Block Volume service: volume groups! Volume groups enable you to group together multiple block storage volumes and boot volumes (that is, system boot disks that are backed by the Block Volume service) and perform crash-consistent, point-in-time, coordinated backups and clones across all the volumes in the group.

Enterprise applications typically require multiple volumes across multiple compute instances in order to function: boot volumes that power the system disks of the compute instances, and block volumes for the web, app, and database tiers, each with different capacities and deployment patterns. For example, application tiers that require multi-disk RAID configurations for high availability and scale, Oracle Cloud Infrastructure services such as Database, and Oracle enterprise applications such as E-Business Suite all involve large-scale deployments that span multiple block storage volumes. Managing data protection across even a single such application is a chore and a challenge, let alone across multiple applications and instances with boot disks and storage throughout your enterprise. A few on-premises storage solutions address this, such as NetApp consistency groups. At Oracle Cloud Infrastructure, we are introducing an innovative new solution and data management capability among the public cloud providers to help.

With volume groups, you can create point-in-time consistent and coordinated backups and clones of running enterprise applications that span multiple boot volumes and storage volumes across one or more compute instances. These capabilities, combined with our upcoming enhancements, bring data management and protection capabilities to Oracle Cloud Infrastructure that until now have existed only in on-premises storage solutions.

Coordinated backups provide a solution for creating, managing, and restoring backups for applications by leveraging and extending the existing single-volume backup and restore features that are already available for block storage and boot volumes. Similarly, the deep disk-to-disk clone capability that we announced earlier is now extended across multiple volumes via volume groups. A deep, fully isolated clone of a volume group becomes available for use within a matter of seconds, making it trivial and fast to spin off new environments for development, QA, UAT, and troubleshooting. These new capabilities are provided at no additional cost to Oracle Cloud Infrastructure customers beyond the cost of the consumed block and object storage. Volume groups and coordinated backups and clones are generally available now via the CLI and SDK, with Console support coming soon.
Following are a few sample commands for creating and managing volume groups and coordinated backups, restores, and clones:

##### get supported operations #####
oci bv volume-group --help
oci bv volume-group-backup --help

##### get help for a specific operation #####
oci bv volume-group <operation_name> --help
-- example: oci bv volume-group list --help
oci bv volume-group-backup <operation_name> --help
-- example: oci bv volume-group-backup create --help

##### list volume groups #####
oci bv volume-group list --compartment-id <compartment_ID>
-- example: oci bv volume-group list --compartment-id ocid1.compartment.oc1..exampleaakjghfkjahdfkhadkfjhakdfhkjashfkja

#### create volume group from existing volumes ####
oci bv volume-group create --compartment-id <compartment_ID> --availability-domain <availability_domain> --source-details <JSON_input_specifying_source_details>
-- example: oci bv volume-group create --compartment-id ocid1.compartment.oc1..exampleaakjghfkjahdfkhadkfjhakdfhkjashfkja --availability-domain ABbv:PHX-AD-1 --source-details '{"type": "volumeIds", "volumeIds": ["ocid1.volume.oc1.phx.exampler6wero24cdyx5bia36ikdo6w2wxmsylqkytpj37wwud3iyt43ud4q", "ocid1.volume.oc1.phx.exampler4uzq4v2pq6tm3vc4aaaerp5a2qml4iebhar4l3glprbc52awcmtq"]}'

#### create volume group from another volume group (clone) ####
oci bv volume-group create --compartment-id <compartment_ID> --availability-domain <availability_domain> --source-details <JSON_input_specifying_source_details>
-- example: oci bv volume-group create --compartment-id ocid1.compartment.oc1..exampleaakjghfkjahdfkhadkfjhakdfhkjashfkja --availability-domain ABbv:PHX-AD-1 --source-details '{"type": "volumeGroupId", "volumeGroupId": "ocid1.volumegroup.oc1.phx.examplerypkk7wjmkpzufhuohong2um6unl6cplq2mrfnnanja3fsam2i3ra"}'

#### create volume group from a volume group backup (restore) ####
oci bv volume-group create --compartment-id <compartment_ID> --availability-domain <availability_domain> --source-details <JSON_input_specifying_source_details>
-- example: oci bv volume-group create --compartment-id ocid1.compartment.oc1..exampleaakjghfkjahdfkhadkfjhakdfhkjashfkja --availability-domain ABbv:PHX-AD-1 --source-details '{"type": "volumeGroupBackupId", "volumeGroupBackupId": "ocid1.volumegroupbackup.oc1.sea.examplerqxknyke4gwwobd5rny65dwshwwbzht5wididrqhlkrqs2w2m2llq"}'

#### get volume group ####
oci bv volume-group get --volume-group-id <volume_group_ID>
-- example: oci bv volume-group get --volume-group-id ocid1.volumegroup.oc1.phx.examplerypkk7wjmkpzufhuohong2um6unl6cplq2mrfnnanja3fsam2i3ra

#### update volume group (for example, to add a new volume) ####
oci bv volume-group update --volume-group-id <volume_group_ID> --volume-ids <JSON_document_representing_volume_IDs> --display-name <new_display_name> (optional)
-- example: oci bv volume-group update --volume-group-id ocid1.volumegroup.oc1.phx.examplerypkk7wjmkpzufhuohong2um6unl6cplq2mrfnnanja3fsam2i3ra --volume-ids '["ocid1.volume.oc1.phx.exampler2tnxuof4j5nuumcaz4r7ngndya4qxknw7tdlrnhlz4b2wg2syihq","ocid1.volume.oc1.phx.examplerdln3yob366mra5a3rnu3trfcqj45uuzeaarybswlcpbdcp3ko5ra"]' --display-name "new display name"

#### delete volume group ####
oci bv volume-group delete --volume-group-id <volume_group_ID>
-- example: oci bv volume-group delete --volume-group-id ocid1.volumegroup.oc1.phx.examplerypkk7wjmkpzufhuohong2um6unl6cplq2mrfnnanja3fsam2i3ra

##### list volume group backups #####
oci bv volume-group-backup list --compartment-id <compartment_ID>
-- example: oci bv volume-group-backup list --compartment-id ocid1.compartment.oc1..exampleaakjghfkjahdfkhadkfjhakdfhkjashfkja

##### create volume group backup #####
oci bv volume-group-backup create --volume-group-id <volume_group_ID>
-- example: oci bv volume-group-backup create --volume-group-id ocid1.volumegroup.oc1.phx.examplerypkk7wjmkpzufhuohong2um6unl6cplq2mrfnnanja3fsam2i3ra

#### get volume group backup ####
oci bv volume-group-backup get --volume-group-backup-id <volume_group_backup_ID>
-- example: oci bv volume-group-backup get --volume-group-backup-id ocid1.volumegroupbackup.oc1.phx.examplerqleqyex626sbvc5v7ccpfcjzivkcoytkigzqycmc6deasmnmlypa

#### update volume group backup (change display name) ####
oci bv volume-group-backup update --volume-group-backup-id <volume_group_backup_ID> --display-name <new_display_name>
-- example: oci bv volume-group-backup update --volume-group-backup-id ocid1.volumegroupbackup.oc1.phx.examplerqleqyex626sbvc5v7ccpfcjzivkcoytkigzqycmc6deasmnmlypa --display-name "new display name"

#### delete volume group backup ####
oci bv volume-group-backup delete --volume-group-backup-id <volume_group_backup_ID>
-- example: oci bv volume-group-backup delete --volume-group-backup-id ocid1.volumegroupbackup.oc1.phx.examplerqleqyex626sbvc5v7ccpfcjzivkcoytkigzqycmc6deasmnmlypa

We want you to experience these new block storage volume features and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It's easy to take advantage of these capabilities with $300 of free credit at the Oracle Store. For more information, see the Oracle Cloud Infrastructure Getting Started guide, Block Volume service overview, and FAQ. Watch for announcements about additional features and capabilities in this space. We value your feedback as we continue to make our service the best in the industry. Send me your thoughts on how we can continue to improve or if you want more details on any topic.

Max Verun


Oracle Cloud Infrastructure

Oracle Announces HIPAA Attestation for Oracle Cloud Infrastructure

Enterprises must continue to improve their security posture to meet strict compliance requirements and protect their businesses. Oracle Cloud Infrastructure is continuing to invest in services that help our customers more easily meet their security and compliance needs. We recently announced ISO/IEC 27001:2013 certification, Service Organization Controls (SOC) 1 Type 2, SOC 2 Type 2, and SOC 3 attestations, and a Payment Card Industry Data Security Standard (PCI DSS) Attestation of Compliance covering Oracle Cloud Infrastructure services. Now we are pleased to announce that, for the period of November 1, 2017 through March 31, 2018, Oracle has received an attestation performed in accordance with the American Institute of Certified Public Accountants (AICPA) Statement on Standards for Attestation Engagements (SSAE) 18, AT-C sections 105 and 205, covering controls aligned with the requirements of the Health Insurance Portability and Accountability Act (HIPAA) Security Rule, Breach Notification Rule, and the applicable parts of the Privacy Rule.

The Security Rule establishes national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. It requires appropriate administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and security of protected health information (PHI). The Breach Notification Rule requires covered entities and their business associates to provide notification following a breach of unsecured PHI. By law, the Privacy Rule applies only to covered entities (for example, health plans, health care clearinghouses, and certain health care providers); however, parts may be applicable to business associates.

Oracle Cloud Infrastructure is categorized as a “no-view cloud service provider” and can support customers who are in scope for HIPAA by entering into a Business Associate Agreement (BAA). The BAA is required for identifying and establishing the respective responsibilities of Oracle Cloud Infrastructure and the customer for appropriately safeguarding PHI in accordance with HIPAA and any amending legislation.

Performed by Ernst & Young LLP, our HIPAA attestation provides reasonable assurance that Oracle Cloud Infrastructure has designed and implemented administrative, physical, and technical safeguards relevant to the HIPAA Security Rule, Breach Notification Rule, and the applicable parts of the Privacy Rule. Oracle Cloud Infrastructure services covered in our HIPAA attestation include Compute, Networking, Load Balancing, Block Volume, Object Storage, Archive Storage, File Storage, Data Transfer, Database, Exadata, FastConnect, and Governance. The development, deployment, configuration, and management of the underlying services, infrastructure, and systems are the responsibility of Oracle Cloud Infrastructure. Customers are responsible for maintaining and managing their HIPAA compliance with respect to the applications and workloads that they run on Oracle Cloud Infrastructure. For details about Oracle Cloud Infrastructure security capabilities, see the Oracle Cloud Infrastructure Security white paper and other security and compliance resources.


Oracle Cloud Infrastructure

Best Practices for Identity and Access Management Service on Oracle Cloud Infrastructure

The Oracle Cloud Infrastructure Identity and Access Management (IAM) service lets you control who has access to your cloud resources, and what type of access a group of users has to which specific resources. The service enables you to enforce the security principle of least privilege by default: new users are not allowed to perform any actions on any resources until they are granted appropriate permissions. With the IAM service, you can use a single model for authentication and authorization across all Oracle Cloud Infrastructure services. IAM makes it easy to manage access for organizations of all sizes, from one person working on a single project to large companies with many groups working on many projects at the same time, all within a single account. I recently published a white paper with a list of best practices for the IAM service on Oracle Cloud Infrastructure. In this blog post, I want to highlight a couple of those best practices.

Compartment and Group Policy Design Strategy

Compartments are the primary building blocks that you use to organize and isolate resources, which makes it easier to manage and secure access to them. Compartments provide the flexibility and granularity to separate resources for the purposes of measuring usage and billing, access control, and isolation. This is a unique and useful feature of Oracle Cloud Infrastructure for meeting customers' security and governance requirements. When you start working with Oracle Cloud Infrastructure, you must carefully consider how you want to use compartments to organize and isolate your cloud resources; it is important to settle on a compartment design for your organization before you implement anything. Consider the following aspects when you start working with compartments:

When you create a resource (for example, a compute instance, block storage volume, VCN, or subnet), you must place it in a compartment.

Compartments are logical, not physical, so related resource components can be placed in different compartments. For example, your cloud network subnets with access to an internet gateway can be secured in a separate compartment from other subnets in the same cloud network.

After a resource is created, it can't be moved to another compartment.

When you write a policy rule to grant a group of users access to a resource, specify the compartment that you want the access rule to apply to. If you distribute resources across compartments, you must provide the appropriate permissions for each compartment for users who need access to those resources.

Compartments can't be deleted, so do not create multiple "test" compartments with the intent to delete them later.

When planning compartments, consider how you want to aggregate usage and auditing data, which might be a consideration for your company in the future.

Your compartment design depends on your use cases and how you want to organize and isolate your resources. The following scenario illustrates how to design your compartments and define related policies. Company ACME wants three dedicated environments for their workload management: one dedicated to network management, in which all network resources are managed; another for production workloads; and the last for non-production workloads. ACME has multiple types of administrators: DBAs, network admins, storage admins, and security admins. The DBAs manage the databases of the production and non-production environments respectively.
Network admins, storage admins, and security admins need to access and manage the corresponding network, storage, and security-related resources in the tenancy. To accommodate these needs, create three compartments to align with ACME's three environments; for instance, define "network", "production", and "non-production" compartments. Then, define groups that map to each type of administrator; for instance, a "Network_Admin" group dedicated to network administrators, who have full rights to manage all network-related resources. Finally, define policies to control who can access which resources (a sketch of such policy statements appears later in this post). The following diagram illustrates a possible compartment and policy design for this scenario:

Federation

Oracle Cloud Infrastructure IAM supports federation with Oracle Identity Cloud Service (IDCS), Microsoft Active Directory Federation Services (ADFS), and other Security Assertion Markup Language (SAML) 2.0-compliant identity providers (IdPs). When you sign up for Oracle Cloud Infrastructure, your tenant administrator account is automatically federated with Oracle Identity Cloud Service. Federating Oracle Cloud Infrastructure with Oracle Identity Cloud Service gives you a seamless connection between services without having to create a separate username and password for each one. We recommend that customers federate their preferred IdP with IDCS, which automatically provides federation for all Oracle Cloud offerings, including Oracle Cloud Infrastructure.
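As referenced above, the policy statements for the ACME design might look like the following sketch. The verbs and resource families are standard IAM policy syntax; the group names other than Network_Admin are illustrative assumptions:

Allow group Network_Admin to manage virtual-network-family in compartment network
Allow group Storage_Admin to manage volume-family in compartment production
Allow group DBA_Prod to manage database-family in compartment production
Allow group DBA_NonProd to manage database-family in compartment non-production

Check out our white paper for more best practices on how to securely manage and control access to your cloud resources. You can also learn more about security best practices for Oracle Cloud Infrastructure's Identity and Access Management (IAM) service. Please take a look at these resources and share your feedback.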


Oracle Cloud Infrastructure

Build a Continuous Integration Pipeline Using GitHub, Docker, and Jenkins on Oracle Cloud Infrastructure

In my previous blog post, we discussed how to deploy Jenkins on Oracle Cloud Infrastructure and dynamically scale it by leveraging the master/slave architecture of Jenkins, using the Oracle Cloud Infrastructure Compute plugin. In this post, let's look at how to set up a continuous integration pipeline on Oracle Cloud Infrastructure (OCI), utilizing the Jenkins setup we already created.

The most important step toward continuous delivery of software is Continuous Integration (CI). CI is a development practice in which developers commit their code changes (usually small and incremental) to a centralized source repository, which in turn kicks off a set of automated builds and tests. This gives them an opportunity to catch bugs early and automatically, before passing them on to production. A continuous integration pipeline usually involves a series of steps, starting from the code commit, through basic automated linting and static analysis, to capturing dependencies and finally building the software, along with some basic unit tests, before creating a build artifact. Source code management systems like GitHub and GitLab offer webhook integrations to which CI tools like Jenkins can subscribe, so that automated builds and tests run after each code check-in. In this tutorial, let's look at how to run a continuous integration pipeline using Jenkins on OCI.

Prerequisites

To run through this tutorial, you will need the following:

Jenkins installed and configured as discussed in this blog post
A GitHub account
Access to Oracle Cloud Infrastructure Registry

Overview

Jenkins, like a few other CI/CD tools, gives us the flexibility to define the entire build pipeline (build/test/deploy) programmatically. This is called pipeline as code. Pipelines in Jenkins are defined in a Groovy DSL in a special file called a Jenkinsfile. In this tutorial, utilizing the Jenkinsfile, we create an automated continuous integration (CI) pipeline in which a code commit or a pull request to GitHub triggers the following pipeline jobs in Jenkins and returns the status to GitHub, indicating whether it failed or succeeded:

Check out the source code from GitHub
Fetch the necessary code dependencies
Build a Docker image
Perform a set of unit tests
Push the Docker image to a private Docker registry - Oracle Cloud Infrastructure Registry

Oracle Cloud Infrastructure Registry is an Oracle-managed registry that enables you to simplify your development-to-production workflow. We will be using it as a private registry to push Docker images. Note: We already set up Jenkins to operate in master/slave mode, where the master instance plays a purely management role in the build process, while the entire build and test runs on the Jenkins slave node(s).

Let's look at how to create this Jenkins CI build pipeline on Oracle Cloud Infrastructure, in a few easy steps:

Step 1: Configuring Jenkins slaves

Log in to the Jenkins master instance and navigate to the Cloud config section under the Manage Jenkins > Configure System menu. Under the Advanced section in Instance Templates, edit the init script to include the following. This init script installs and runs Java, Git, and Docker Engine; we will be using Docker to build and push Docker images.

sudo yum update -y
sudo yum install -y java git docker-engine
sudo systemctl start docker
sudo systemctl enable docker

Make sure the Labels field under the same Instance Templates section has jenkinslave as its value. We will be using this label in our Jenkinsfile.
The above init script installs the Git executable at /usr/bin/git on the slave nodes. Go to Manage Jenkins > Global Tool Configuration and edit the Path to Git executable so that Jenkins can locate it. Finally, navigate to the Manage Jenkins > Manage Plugins section and, under the Available tab, search for the Blue Ocean plugin and the GitHub Pipeline for Blue Ocean plugin. Install them without restart. The Blue Ocean plugin creates a more sophisticated visualization of the build pipeline on Jenkins, and it also makes it easier to integrate with SCMs like GitHub. We will be utilizing these plugins in steps 4 and 5.

Step 2: Configuring Oracle Cloud Infrastructure Registry

Our Jenkins pipeline pushes the final Docker build artifacts to Oracle Cloud Infrastructure Registry, and we need an auth token to access it. If you already have an auth token available, you can skip this step. If not, refer to this documentation to generate an auth token in a few easy steps. Keep this token handy; we will be using it later to configure our pipeline.

Step 3: Configuring the Jenkinsfile and Dockerfile

Let's configure the Jenkinsfile for running the build pipeline, and the Dockerfile to build the resulting Docker image. The Jenkinsfile for our setup specifies an agent with the label jenkinslave, which tells Jenkins to run the build jobs on the slave node instead of on the master (since we configured the slave nodes with the label jenkinslave in step 1). The stages of the build pipeline are 1) Fetch Dependencies, 2) Build Docker image, 3) Test image, and 4) Push image to Registry (a sketch of such a Jenkinsfile appears below). Note: For the purposes of this demo, we are just running shell commands with credentials in clear text to push the Docker image. The ideal way would be to install the CloudBees Docker Build and Publish plugin and use its wrappers within the Jenkinsfile.

Update the Jenkinsfile with your OCI Registry credentials. The username is "OCI tenancy name/OCI username" (for example: foo/abhiram.annangi@oracle.com). The password is the OCI auth token you generated in step 2. In this example, I used the Registry in Ashburn (iad.ocir.io); if your tenancy is in a different region, use the appropriate region-specific Registry name (for example, the Registry in Phoenix is phx.ocir.io). In the Dockerfile, we just set the maintainer and health checks and expose an arbitrary port (a sketch also appears below).

Step 4: Integrating Jenkins with GitHub

Clone the GitHub repository used in this tutorial to your personal GitHub. If you have not already created a GitHub access token, create one; you will need it for Jenkins to scan through your private repositories. Go to Open Blue Ocean in your Jenkins main dashboard and create a new pipeline by using the steps listed in this post. Note: The above integration with GitHub creates an on-demand build pipeline. To auto-trigger builds for any changes in your GitHub repository, you should subscribe to a GitHub webhook. The official documentation from CloudBees for GitHub integration with Jenkins using webhooks can be found here.

Once you successfully link GitHub with Jenkins, the build pipeline is automatically triggered. This takes a few minutes, as the Jenkins master dynamically launches a Jenkins slave instance to run the build process. The Jenkins job stays queued until the Jenkins slave agent with the label jenkinslave is provisioned. If you go to the OCI console, you should see a Jenkins slave instance being provisioned.
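While the slave instance provisions, here is a minimal sketch of the shape of the Jenkinsfile described in Step 3. The actual file lives in the tutorial's repository; the dependency and test commands (make targets here) and the image name are illustrative assumptions:

pipeline {
  agent { label 'jenkinslave' }
  stages {
    stage('Fetch Dependencies') {
      // hypothetical: substitute your project's dependency step
      steps { sh 'make deps' }
    }
    stage('Build Docker image') {
      steps { sh 'docker build -t iad.ocir.io/<tenancy-name>/myapp:${BUILD_NUMBER} .' }
    }
    stage('Test image') {
      // hypothetical smoke test against the freshly built image
      steps { sh 'docker run --rm iad.ocir.io/<tenancy-name>/myapp:${BUILD_NUMBER} make test' }
    }
    stage('Push image to Registry') {
      steps {
        // demo only: clear-text credentials, as noted in Step 3
        sh 'docker login iad.ocir.io -u <tenancy-name>/<oci-username> -p <auth-token>'
        sh 'docker push iad.ocir.io/<tenancy-name>/myapp:${BUILD_NUMBER}'
      }
    }
  }
}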
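And a minimal Dockerfile matching the description above (a maintainer, a health check, and an arbitrary exposed port); the base image, port, and health-check command are assumptions, not the tutorial's actual file:

# hypothetical base image
FROM oraclelinux:7-slim
LABEL maintainer="you@example.com"
EXPOSE 8080
# assumes curl is available in the image
HEALTHCHECK --interval=30s CMD curl --fail http://localhost:8080/ || exit 1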
The build begins as soon as the slave instance is provisioned and the init scripts have executed.

Step 5: Build execution

Once the Jenkins slave instance is provisioned, you should see the entire build pipeline go through. This is how it should look on the Blue Ocean dashboard. The Docker image is finally pushed to Oracle Cloud Infrastructure Registry; if you log in to your OCI console, under Containers > Registry you will see the image. This concludes this tutorial on setting up a continuous integration pipeline using Jenkins on Oracle Cloud Infrastructure.

Suggested Enhancements

Configuring Jenkins with SSL using a reverse proxy - By default, Jenkins ships with an embedded Winstone server, which enables Jenkins to run as a standalone application. Winstone supports basic web server functionality; however, if you are planning a production deployment of Jenkins, we recommend putting a reverse proxy in front of your Jenkins installation. This helps secure Jenkins with SSL to protect passwords and other sensitive data. You can use an Nginx or Squid reverse proxy, or even the OCI Load Balancing service, to terminate SSL. Here is a blog post on configuring Squid on Oracle Cloud Infrastructure.

Containerizing the Jenkins master - In this tutorial, we ran the Jenkins master as a virtual machine and scaled out Jenkins slave virtual machine instances dynamically and on demand. Because the Jenkins master is mostly involved in directing traffic to slave instances for build jobs, it can just as well run as a Docker container instead of a full-fledged virtual machine, while still utilizing the Oracle Cloud Infrastructure Compute plugin for Jenkins. This gives better resource efficiency.

In my next blog post, we complete the continuous delivery story by setting up a Jenkins continuous deployment (CD) pipeline to pull the Docker image from our private registry and deploy it into a Kubernetes cluster, using Oracle Container Engine for Kubernetes (OKE).

Abhiram Annangi | Twitter  LinkedIn


Developer Tools

Automate Oracle Cloud Infrastructure VCN Peering with Terraform

The goal of this blog post is to show how to automate Oracle Cloud Infrastructure VCN peering with Terraform. VCN peering is the process of connecting multiple virtual cloud networks (VCNs) so that traffic between them is routed privately, without traversing the internet. Setting up VCN peering manually requires multiple steps and the creation of several different resources, so automating it with Terraform pays off quickly.

The recommended Terraform and Oracle Cloud Infrastructure Terraform provider versions are:

Terraform: v0.11.7
OCI Terraform Provider: v2.1.8 (this version also supports remote VCN peering)

Local VCN peering across different tenancies is one of the most challenging setups, because it involves two tenancies and fairly complex group policies, so I use it here as the example for automating the peering setup with Terraform. For local VCN peering, a key resource is the "oci_core_local_peering_gateway" (LPG). To establish a local VCN peering connection, the LPG in one tenancy acts as a requestor and initiates the peering request toward an LPG in the other tenancy, which acts as the acceptor. Because the LPG resources span tenancies, we need to define two different "oci" Terraform providers: one as the requestor, the other as the acceptor.

provider "oci" {
  alias            = "requestor"
  region           = "${var.requestor_region}"
  tenancy_ocid     = "${var.requestor_tenancy_ocid}"
  user_ocid        = "${var.requestor_user_ocid}"
  fingerprint      = "${var.requestor_fingerprint}"
  private_key_path = "${var.requestor_private_key_path}"
}

provider "oci" {
  alias            = "acceptor"
  region           = "${var.acceptor_region}"
  tenancy_ocid     = "${var.acceptor_tenancy_ocid}"
  user_ocid        = "${var.acceptor_user_ocid}"
  fingerprint      = "${var.acceptor_fingerprint}"
  private_key_path = "${var.acceptor_private_key_path}"
}

To create resources in each tenancy, you must use the corresponding "oci" provider. For instance, to create an LPG resource in the requestor tenancy, specify:

resource "oci_core_local_peering_gateway" "requestor" {
  depends_on     = ["oci_identity_policy.requestor_policy"]
  provider       = "oci.requestor"
  compartment_id = "${var.requestor_compartment_ocid}"
  vcn_id         = "${oci_core_vcn.requestor_vcn1.id}"
  display_name   = "requestor_localPeeringGateway"
  peer_id        = "${oci_core_local_peering_gateway.acceptor.id}"
}

Peering between two VCNs requires explicit agreement from both parties in the form of Oracle Cloud Infrastructure Identity and Access Management (IAM) policies that each party implements for their own VCN's compartment or tenancy. In our example, where the VCNs are in different tenancies, each administrator must provide their tenancy OCID and specify special policy statements to enable the peering.
For instance, the requestor policies are defined as:

resource "oci_identity_policy" "requestor_policy" {
  provider       = "oci.requestor"
  name           = "requestorPolicy"
  description    = "Requestor policy"
  compartment_id = "${var.requestor_tenancy_ocid}"
  statements     = [
    "Define tenancy Acceptor as ${var.acceptor_tenancy_ocid}",
    "Endorse group Administrators to manage local-peering-to in tenancy Acceptor",
    "Endorse group Administrators to associate local-peering-gateways in compartment ${var.requestor_compartment_name} with local-peering-gateways in tenancy Acceptor"
  ]
}

The acceptor policies are defined as:

resource "oci_identity_policy" "acceptor_policy" {
  provider       = "oci.acceptor"
  name           = "acceptorPolicy"
  description    = "Acceptor policy"
  compartment_id = "${var.acceptor_tenancy_ocid}"
  statements     = [
    "Define tenancy Requestor as ${var.requestor_tenancy_ocid}",
    "Define group RequestorGrp as ${var.requestor_administrators_group_ocid}",
    "Admit group RequestorGrp of tenancy Requestor to manage local-peering-to in compartment ${var.acceptor_compartment_name}",
    "Admit group RequestorGrp of tenancy Requestor to associate local-peering-gateways in tenancy Requestor with local-peering-gateways in compartment ${var.acceptor_compartment_name}"
  ]
}

Note that the "oci_identity_policy" resource depends on the home region: make sure that the region of the "oci" provider used for each "oci_identity_policy" is the home region of the corresponding tenancy.

Finally, to enable communication between the VCNs through the local peering connection, you need to update each VCN's routing. The route rule specifies the destination traffic's CIDR and your LPG as the target. Your LPG routes traffic that matches that rule to the other LPG, which in turn routes the traffic to the next hop in the other VCN. For instance, the "oci_core_route_table" resource for the requestor is defined as:

resource "oci_core_route_table" "requestor_route_table" {
  depends_on     = ["oci_identity_policy.requestor_policy"]
  provider       = "oci.requestor"
  compartment_id = "${var.requestor_compartment_ocid}"
  vcn_id         = "${oci_core_vcn.requestor_vcn1.id}"
  display_name   = "requestorRouteTable"
  route_rules {
    cidr_block        = "${var.acceptor_cidr}"
    network_entity_id = "${oci_core_local_peering_gateway.requestor.id}"
  }
}
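With the providers, gateways, policies, and route tables in place, running the configuration is the standard Terraform workflow:

terraform init     # downloads the oci provider plugin
terraform plan     # previews the LPGs, policies, and route tables in both tenancies
terraform apply    # creates the resources and establishes the peering

We hope that this blog post made it simple to automate the VCN peering connection setup with Terraform on Oracle Cloud Infrastructure.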


Oracle Cloud Infrastructure

New Autonomous Platform Services, Powered by Infrastructure

All clouds aren't equal. And the inequality goes beyond one cloud application having better or different features than another. You know how browsing feels different on your latest phone versus how it felt two phones ago? The underlying platform has a big impact.

Why Cloud Platform Customers Should Care About Infrastructure

Just like with apps on your phone, new hardware, faster networking, and updated software infrastructure make a big difference in your experience. In the enterprise cloud, such updates make analysis move faster, applications run more reliably, and integrations have more robust shapes to do their job. Although it might be more obvious in some services than in others, our core infrastructure values often shine through:

Superior and predictable performance: Oracle is still the only cloud to offer bare metal compute, as well as options for substantial amounts of local NVMe SSD storage with multiple times the performance of other clouds. We're so confident, we're the only IaaS to offer performance SLAs for storage and even networking.

Layers of resilience: Oracle offers multiple layers of availability and protection, including unique capabilities like RAC for Oracle Database and policy-based backups for block storage, and easy-to-use availability domains that enable fast multisite redundancy.

Superior scale: Not only do we support the largest Oracle Databases in the cloud, offering 268 terabytes of usable capacity, but all of our processing and persistence capabilities are similarly high-scale across compute, file storage, block storage, and object storage.

All services that run on top of our infrastructure can take advantage of these values, and your business can benefit.

Platform Gets Turbocharged

We're excited to support more and more cloud services with our next-generation infrastructure. The latest services include:

Oracle Autonomous Analytics Cloud: Faster time-to-insights with automated data discovery and analysis. Check out this popular service with a trial, or learn more.

Oracle Autonomous Integration Cloud: Faster time-to-service with predefined integrations and flows. Integrate Oracle and third-party services, both on-premises and cloud-based. Learn more.

Oracle Autonomous Visual Builder Cloud: Faster time-to-market for new applications with deployment automation. Build web and mobile JavaScript applications with visual tools or code. Learn more.

These services add to the many services that already leverage Oracle Cloud Infrastructure, including Java Cloud Service, SOA Cloud Service, and Big Data Cloud Service. And there's so much more to come.

- Leo Leung, Senior Director of Product Management


Oracle Cloud Infrastructure

Oracle Announces PCI DSS Attestation of Compliance (AoC) for Oracle Cloud Infrastructure

Data security has never been more important, and enterprises must continue to improve their security posture to meet strict compliance requirements and protect their businesses. Oracle Cloud Infrastructure is continuing to invest in services that help our customers more easily meet their security and compliance needs. We recently announced ISO/IEC 27001 certification and the availability of Service Organization Controls (SOC) 1, 2, and 3 reports, and we are pleased to announce that, effective May 1, 2018, Oracle has received a Payment Card Industry Data Security Standard (PCI DSS) Attestation of Compliance (AoC) covering Oracle Cloud Infrastructure services.

Oracle Cloud Infrastructure provides Infrastructure as a Service (IaaS) that enables customers to build, deploy, and maintain reliable, secure, scalable environments. Because Oracle is a PCI Level 1 Service Provider, customers can now use these services for workloads that store, process, or transmit cardholder data. Conducted by the independent third party Schellman & Company, LLC, Oracle Cloud Infrastructure's AoC demonstrates compliance with all PCI DSS requirements applicable to a service provider and enables customers to run payment-card-related applications and workloads on Oracle's PCI-compliant cloud infrastructure services.

Oracle Cloud Infrastructure services covered in our AoC include Compute, Networking, Load Balancing, Block Volumes, Object Storage, Archive Storage, File Storage, Data Transfer Service, Database, Exadata, Container Engine for Kubernetes, Registry, FastConnect, and Governance services. The development, deployment, configuration, and management of the underlying services, infrastructure, and systems are the responsibility of Oracle Cloud Infrastructure. Customers are responsible for maintaining and managing their PCI DSS compliance with respect to the applications and workloads they run on Oracle Cloud Infrastructure. For details about Oracle Cloud Infrastructure security capabilities, see the Oracle Cloud Infrastructure Security white paper and other security and compliance resources.

PCI DSS is a globally recognized security standard for payment workloads, including the storage, processing, and transmission of cardholder data. The issuance of Oracle Cloud Infrastructure's PCI DSS AoC reaffirms our commitment to security and data protection. Customers may use this AoC to assess how Oracle's cloud services can meet their payment-card-related compliance needs.


Oracle Cloud Infrastructure

Be Cloud Ready! Get the Oracle Cloud Infrastructure Architect Associate Certification

In January 2018, we announced the Oracle Cloud Infrastructure 2018 Architect Associate certification (see Kash Iftikhar's launch blog here). Since then, the momentum has been great, and hundreds of individuals have taken and passed this certification. The Oracle Cloud Infrastructure team takes pride in being customer obsessed: listening to our customers and acting on feedback to make our offerings better. Based on your feedback, we have added two new training assets to help you prepare for your journey to Oracle Cloud Infrastructure certification:

A Study Guide, which contains all the information you need before taking the Oracle Cloud Infrastructure Architect Associate certification. This guide consolidates all the links to the recommended training and documentation, the topics covered in the exam, and the steps to register for the exam.

A Practice Exam, to help you gain confidence as you prepare for the certification.

I am sure that the study guide and practice exam will help you as you embark on your journey to being certified. If you are a new customer, make sure to attend our Oracle Cloud Onboarding Session, hosted on the second and fourth Tuesday of every month. These hands-on sessions give you all the information you need to get started on Oracle Cloud Infrastructure, and they also provide a brief overview of the certification. Happy learning, and wishing you all the best!

Rashim Mogha
Senior Director, OCI Product Management
rashim.mogha@oracle.com
Twitter: @rmogha
LinkedIn: https://www.linkedin.com/in/rashimmogha/


Developer Tools

Deploy Jenkins on Oracle Cloud Infrastructure

Faster software development has become a competitive advantage for companies. The automation of software development processes facilitates speed and consistency, which has led to the rise of Continuous Integration (CI) and Continuous Delivery and Deployment (CD) pipelines. Jenkins, a very popular product among Oracle Cloud Infrastructure customers, can automate all of the phases of CI and CD. You can host Jenkins on Oracle Cloud Infrastructure to centralize your build automation and scale your deployment as the needs of your software projects grow. This is the first in a series of blog posts on how to set up a CI/CD build pipeline on Oracle Cloud Infrastructure using Jenkins.

Jenkins is extensible by design via plugins, which give it the flexibility to automate a wide range of processes on diverse platforms. Without delving too deeply into the architecture of Jenkins, let's quickly understand the concept of master/slave in Jenkins. Jenkins supports a master and slave/agent mode, in which the workload of building projects is delegated by the master to multiple agent nodes, allowing a single Jenkins installation to host multiple projects or to provide the different environments needed for builds and tests. A master operating by itself is the basic installation of Jenkins; in this configuration, the master handles all the tasks for your build system. If you start to use Jenkins frequently with just a master, it's common to find that you run out of resources (memory, CPU, and so on). At that point, you can either upgrade your master or set up agents to pick up the load. Alternatively, in a scenario where you need several different environments to test your builds, using an agent to represent each required environment can be a better solution. An agent is a computer that is set up to offload build projects from the master; once the agent has been set up, this distribution of tasks is fairly automatic.

In this tutorial, we demonstrate how to create a Jenkins master/slave architecture on Oracle Cloud Infrastructure using the Jenkins Oracle Cloud Infrastructure Compute plugin. When installed on the Jenkins master, the plugin allows you to spin up instances (slaves/agents) on demand within Oracle Cloud Infrastructure, and remove instances or free resources automatically once the build job completes. Let's look at how to launch a Jenkins master/slave deployment on Oracle Cloud Infrastructure, in a few easy steps:

Step 1 - VCN setup and Jenkins installation

Create a VCN with a single subnet in an availability domain to house the Jenkins master and agent nodes. In this tutorial, we create both master and agents in the same subnet (this is not mandatory). Launch an instance in the newly created subnet; in this case, we are using a VMStandard1.1 shape running Oracle Linux 7.5. We use this instance to run our Jenkins master node. A good practice is to select a slightly smaller instance shape for the Jenkins master and larger shapes for the agent nodes, because the heavy lifting of running the actual builds is done by the agents. Log in to this instance using its public IP address and the associated public key.
As Jenkins runs on Java, update the yum packages and install Java 8 on this instance by issuing the following commands:

sudo yum -y update
sudo yum -y install java

Now that the dependencies have been installed, go ahead and install Jenkins using these commands:

sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum -y install jenkins

Start the Jenkins service:

sudo service jenkins start

If successful, the command output looks similar to this:

By default, Jenkins listens on TCP port 8080. Open this port in the instance firewall:

sudo firewall-cmd --zone=public --permanent --add-port=8080/tcp
sudo firewall-cmd --reload

Jenkins is now configured on the instance, which acts as the Jenkins master node. To access this node from the internet, open TCP port 8080 in the security list for the subnet that houses the Jenkins master instance.

Step 2 - Configuring the Jenkins master

In a web browser, access the Jenkins dashboard using the public IP address of the instance and port 8080. You will see the Unlock Jenkins screen, which displays the location of the initial password. In the terminal window, use the cat command to display the password:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Copy the 32-character alphanumeric password from the terminal and paste it into the Administrator password field, then click Continue. The next screen presents the option of installing the suggested plugins or selecting specific plugins; go ahead and install the suggested plugins. When the installation is complete, you are prompted to set up the first administrative user. You can skip this step and continue as admin using the initial password from the previous step; here, we proceed as admin. You should see a "Jenkins is ready!" confirmation screen. Click Start using Jenkins to visit the main Jenkins dashboard. At this point, the basic configuration of the Jenkins master is complete. Next, install the Oracle Cloud Infrastructure Compute plugin, which allows us to launch slave nodes. To do this, go to Manage Jenkins and click Manage Plugins.

Step 3 - Installing and configuring the Jenkins Oracle Cloud Infrastructure Compute plugin

In the Manage Plugins section, under the Available tab, search for the Oracle Cloud Infrastructure Compute plugin and perform Install without restart on it. Go back to the Manage Jenkins page and go to Configure System. Scroll all the way to the bottom and click Add a new cloud, then click Oracle Cloud Infrastructure Compute. Populate the fields with your API fingerprint, API key, user OCID, and tenancy OCID, which you can locate in the Oracle Cloud Infrastructure console. For more information on how to locate these values, refer to https://github.com/oracle/oci-compute-jenkins-plugin. Click Test Connection. If all the information you entered is correct, the dialog displays "Successfully connected to Oracle Cloud Infrastructure Compute". Now scroll down and click Instance Templates, where you specify the shape and subnets of your Jenkins agent nodes. We suggest selecting larger shapes for the slaves than for the master, as these run the actual builds. In this tutorial, we selected a VMStandard2.1 instance running Oracle Linux 7.5 in the same availability domain and subnet as the Jenkins master node. Click the Advanced button and configure an init script for the agent nodes.
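Agent nodes require Java to be installed before they can communicate with the master, so specify that in the init script. A minimal sketch, assuming Oracle Linux agents (the same commands used on the master):

sudo yum -y update
sudo yum -y install java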
Step 4 - Launching Jenkins agent nodes

The previous step completes the configuration of the Jenkins agent nodes, so go ahead and save the template. Navigate to Build Executor Status and click it; you will see the Oracle Cloud Infrastructure Compute plugin listed there. Finally, launch the agent node as shown. You can launch multiple agent nodes by repeating the same operation. Once you launch, the console displays:

This takes a couple of minutes. If you go back to the Oracle Cloud Infrastructure console, you see the slave instance being provisioned. Once the instance is fully provisioned, it is also listed in the Jenkins dashboard, as indicated in the following figure:

Conclusion

This concludes this tutorial on setting up Jenkins master/slave nodes on Oracle Cloud Infrastructure (OCI). There are other ways to deploy Jenkins on OCI that are also quite suitable for enterprise deployments, and we will demonstrate these deployment strategies in subsequent blog posts:

1) Traditional deployment - This involves manually launching and configuring the agent nodes using just the Jenkins SSH slave plugins, not the Oracle Cloud Infrastructure Compute plugin. A setup like this involves a lot of manual configuration, but in return it gives much more granular control over how Jenkins agents are launched. Currently, the Oracle Cloud Infrastructure Compute plugin launches agent nodes within a single availability domain; if you plan to launch agent nodes across multiple availability domains, we recommend this approach.

2) Containerized deployment - This involves deploying and running the Jenkins master and the worker nodes as Docker containers. Running Jenkins in Docker containers lets you use the servers running Jenkins agent nodes more efficiently, and it simplifies the configuration of the agent node servers. Using containers to manage builds allows the underlying servers to be pooled into a cluster; the Jenkins agent containers can then run and execute a build on any of the servers with resources available to support it. This ability to run multiple builds independently of each other on a server improves its utilization.

In my next blog post, we demonstrate how to create a Jenkins build pipeline on Oracle Cloud Infrastructure using the setup we just created.

Abhiram Annangi | Twitter  LinkedIn


Developer Tools

Automating Infrastructure Provisioning and Deployment of a Three-Tier App on Oracle Cloud Infrastructure with Terraform and Chef

I am Upendra Vellanki, Principal Technologist in the Platform Technology Solutions group of Oracle Product Development. Oracle Cloud Infrastructure combines the elasticity and agility of public cloud with the granular control, security, and predictability of on-premises infrastructure to deliver high-performance, highly available, and cost-effective infrastructure services. Because cloud infrastructure encompasses so many areas of IT and cloud engineering, it's essential to have automation tools that can help cloud infrastructure engineers, IT professionals, and sysadmins in nearly every area of the field. This post describes several cloud infrastructure automation tools, both open source and enterprise, that perform tasks ranging from automatically provisioning the required infrastructure to deploying a three-tier application with a single click (command).

In this post, a three-tier application is used to show how an application can be deployed with Chef cookbooks and how a data source can be configured dynamically using the cloud-init feature. Three-tier applications are the most common type of application and are a good example of mixing infrastructure deployment, software deployment, and database connectivity. The process outlined here consists of the following high-level tasks:

Create a MySQL private image with the required database schema.
Provision the required number of VM instances on Oracle Cloud Infrastructure by using Terraform.
Deploy a web application by using cloud-init and Chef cookbooks.
Access the web app, which internally calls the MySQL database and displays dynamic content.

Prerequisites

Install Terraform.
Install a Chef cluster and create the required cookbooks on your organization's central Chef server. The java, tomcat, and webapp cookbooks are used in this example.

Architecture

Terraform provisions one instance with MySQL and two instances with Tomcat. A custom image is used to create the MySQL instance. The Terraform script has the user data attribute; the cloud-init script is passed in YAML format to the user data, and this script installs Tomcat by using Chef cookbooks.

Step 1: Create a MySQL Private Image with the Required Database Schema

Log in to the Oracle Cloud Infrastructure Console. Create a CentOS VM instance (for instructions, see Launching an Instance). Install the MySQL database by using these instructions. Create the custom image by using the Create Custom Image option from the VM; for detailed instructions, see Managing Custom Images.

Step 2: Provision the Required Number of VM Instances by Using Terraform

Ensure that your Oracle Cloud Infrastructure credentials are either set as environment variables or provided in the tfvars file in the Terraform scripts. Create the required script files to create the network, storage, instances, and so on. Sample script files are provided for reference, and you can download them here. The following sample Terraform script creates the MySQL and Tomcat instances:

Before you run the scripts, ensure that the image ID for creating the MySQL database instance is your custom image. If you want this image to be available in another tenancy, copy the image across tenancies by following these instructions. To validate your scripts, run the terraform plan command before running the terraform apply command. The following plan output shows that nine resources will be added and one will be updated:

Run the terraform apply command. The script displays the public IP address of the MySQL instance.
This IP address is dynamically added to create a data source in Tomcat. You can see the MySQL instance and the two Tomcat instances in the Oracle Cloud Infrastructure Console.

Step 3: Deploy a Web Application by Using cloud-init and Chef Cookbooks

The user data attribute that's part of the Terraform scripts runs Chef cookbooks to install the Tomcat server and the sample application. The script performs these actions:

- Installs the Tomcat server by using the Chef cookbook, and deploys the sample application, which is customized as part of the tomcat cookbook. This cookbook customization can be modified to deploy a WAR or EAR file for any enterprise application. The cookbooks are downloaded from the Chef server, which is configured in the cloud-init user data file.
- Downloads the mysql-java driver and copies the JAR file to the Tomcat server.
- Creates a MySQL data source by capturing the IP address of the MySQL VM instance dynamically, using the metadata service (a minimal sketch of this lookup appears at the end of this post).

Step 4: Access the Application

1. From the Oracle Cloud Infrastructure Console, note the public IP address of the Tomcat app instance.
2. Open a browser and access the application by using the following URL: http://<tomcat-instance-public-ip>:8080/MyApp/welcome.jsp. A Hello World page is displayed. This page is static (no database interaction).
3. Now enter the following URL: http://<tomcat-instance-public-ip>:8080/MyApp/products.jsp. A product catalogue page is displayed. This page has dynamic content that is fetched from the MySQL database.

Conclusion

This post has shown how to provision the required compute instances and network resources, and how to install the required software, such as the Tomcat server and the MySQL database. It also covered how a data source can be dynamically configured by using cloud-init.

Further Scope

- The MySQL data source in Tomcat can take on more dynamic properties, such as credentials, the number of database connections, and so on.
- WAR or EAR files can be deployed for complex enterprise applications.
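The original post did not reproduce the exact lookup used in Step 3. The following is a minimal sketch, assuming the Terraform metadata map publishes the MySQL address under a hypothetical mysql_ip key; 169.254.169.254 is the Oracle Cloud Infrastructure instance metadata endpoint, and the context.xml path is illustrative.

# Sketch: "mysql_ip" is a hypothetical metadata key set by the Terraform script.
MYSQL_IP=$(curl -s http://169.254.169.254/opc/v1/instance/metadata/mysql_ip)
# Substitute the address into the Tomcat data source definition (path illustrative).
sudo sed -i "s/MYSQL_HOST/${MYSQL_IP}/" /usr/share/tomcat/conf/context.xml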


Oracle Cloud Infrastructure

Oracle Cloud Infrastructure Backbone

I am Monika Machado, the Director of Border and Backbone Engineering on the Oracle Cloud Infrastructure Backbone team, and this is the first in a series of blog posts about the Oracle Cloud network backbone.

Oracle Cloud Infrastructure regions rely on state-of-the-art network technology and design. Each region is composed of availability domains that are interconnected by a metro-area network, and the regions themselves are interconnected by a dedicated, custom-built network: the Oracle Cloud Infrastructure backbone. This post provides a high-level overview of the backbone, the use cases that it serves, and how it can enable your enterprise workloads in Oracle Cloud Infrastructure. Future articles will focus on specific use cases and offer additional insights into the technologies used to realize our high-performance backbone.

The OCI backbone was designed to enable customer workloads by providing high-performance, reliable, and scalable transport. The backbone is a dedicated, secure network for interconnecting Oracle Cloud Infrastructure regions in diverse geographic locations. It provides privately routed inter-region connectivity with more consistent bandwidth, latency, and jitter than the public internet. This enables enterprise workloads that typically don't work well over the internet, including disaster recovery, real-time replication, clustering, and other scenarios. We designed and built a modern, smart backbone from the ground up to enable the enterprise use cases that our customers demand. The backbone interconnects all Oracle Cloud Infrastructure data centers and provides high-performance, reliable connectivity between them. As of May 1, 2018, all Oracle Cloud Infrastructure regions are connected to the backbone, and you can utilize it for your applications' needs.

Security First

All communications traversing the backbone are secured by industry-standard encryption protocols, which ensures confidentiality for all transactions between the Oracle data centers. With this level of encryption, you can be assured that your data is protected while in transit on the backbone.

High Availability

Because any physical infrastructure is subject to some type of failure, the backbone is designed to route around failures to maintain the availability of the connections between the data centers. In case of failure, traffic from applications that use the backbone is rerouted onto new paths without disruption.

Automation

Our engineering teams put a lot of effort into making sure that automation was in place for all aspects of building the backbone, including planning of fiber routes, device deployment, and maintenance and operations. We are making every effort to remove human error from the engineering process.

Use Cases

The Oracle backbone enables you to create solutions that span multiple regions, such as multi-region VCN peering, high-availability applications, and seamless replication of your data. The backbone also provides transit between a backbone point-of-presence (POP) location and the rest of the Oracle Cloud Infrastructure data centers to extend connectivity for Oracle Cloud Infrastructure FastConnect customers. My team and I look forward to continuing the dialogue in the next post in the series.


Oracle Cloud Infrastructure

Reduce Cold Data Footprint with Commvault and Oracle Cloud Infrastructure Archive Storage

Authored by Khye Wei, Product Manager, Oracle; Anoop Srivastava, Product Manager, Oracle; and Mark Rytwinski, Senior Director, Product Management, Commvault.

Nearly 80% of stored data is unused after 90 days! Whether you need a cost-effective cloud storage tier to store large amounts of infrequently accessed data for archiving purposes, or you want to seamlessly extend your on-premises data center to the cloud for inactive data and fixed content, Commvault and Oracle have partnered to meet your needs. The integration between Commvault and Oracle Cloud Infrastructure Archive Storage enables you to meet governance, compliance, and preservation requirements in the cloud, while achieving your SLAs and budgetary requirements. At $0.0026 per GB currently, Archive Storage offers a compelling online alternative to offline vaulted storage. Archive Storage is ideal for compliance and audit mandates, log data for analysis or debugging purposes, historical or infrequently accessed content repository data, and application-generated data retained for future analysis or legal purposes.

Starting in Commvault v11 SP11, Commvault is integrated with both Oracle Cloud Infrastructure Object Storage and Archive Storage. Commvault seamlessly moves and manages data across on-premises environments, private clouds, Object Storage, and Archive Storage while providing data compression, deduplication, and encryption. In addition, Commvault delivers integrated alerting, reporting, and data verification. With Commvault's content-indexing functionality, admins can service eDiscovery requests from the Commvault console regardless of where the data lives (on-premises or in the cloud).

With the integrated capabilities of Commvault and Oracle Cloud Infrastructure, businesses can now effectively tier storage costs according to SLA profile by automatically tiering data to Oracle Cloud Infrastructure Object Storage and Archive Storage. This significantly reduces the footprint of secondary backup and archive data, reducing overall infrastructure and storage costs while improving SLAs.

For more details, see the following resources:

- Archive Storage blog
- Archive Storage technical documentation
- Oracle Cloud Infrastructure Data Protection solutions
- Commvault v11 documentation
- Commvault Cloud Storage Getting Started Guide


Easily Deploy a KVM Host Environment Using Oracle Linux on Oracle Cloud Infrastructure

KVM is built into the Unbreakable Enterprise Kernel (UEK) for Oracle Linux by default, and it enables you to use the UEK as a hypervisor. Deploying the Oracle Linux KVM host environment on Oracle Cloud Infrastructure gives you full control and flexibility to configure and manage your virtual machines (VMs) within a bare metal instance.

We've now made it easy to deploy an Oracle Linux KVM host and guest VMs on Oracle Cloud Infrastructure. Tools packaged in the new Oracle Linux KVM image for Oracle Cloud Infrastructure automate the guest VM creation process and make it simple to create and delete VMs and to allocate Oracle Cloud Infrastructure resources such as block storage devices and VNICs. This post tells you how to get started. For detailed information and instructions, see Getting Started: Oracle Linux KVM Image for Oracle Cloud Infrastructure.

Deploy an Oracle Linux KVM Host

To use the Oracle Linux KVM image, you deploy it on an Oracle Cloud Infrastructure Compute instance. First, use the custom Image Import feature in the Oracle Cloud Infrastructure Console to import the image. Then, launch the Oracle Linux KVM instance on one of the following supported Oracle Cloud Infrastructure Compute bare metal shapes:

- BM.Standard1.36
- BM.Standard2.52

To create your guest VM, you need to configure a dedicated block storage device and VNIC for your KVM instance in the Oracle Cloud Infrastructure Console. You can use the oci-iscsi-config --show utility to display the details of all the storage devices attached to your KVM instance.

Create a VM

The oci-kvm tool provided with the Oracle Linux KVM image uses the virt-install command-line tool to create new KVM guests through the libvirt hypervisor management library. It allows you to create and configure KVM guests on Compute instances that use Oracle Cloud Infrastructure resources such as block storage volumes and VNICs. Following is an example invocation of oci-kvm to create a guest VM:

# oci-kvm create -D my_guest -V --vcpus 4 --memory 8192 --boot cdrom,hd --location /mnt/OracleLinux-R7-U4-Server-x86_64-dvd.iso --nographics --console pty,target_type=serial --console pty,target_type=virtio --noautoconsole --os-variant=rhel7 --extra-args "console=tty0 console=ttyS0,115200n8 serial"

This example creates an Oracle Linux 7.4 guest that is configured to use a serial console for console output. If you want to use a particular block storage device, specify -d/--disk with the path to the device. If you want to use a particular VNIC, specify -n/--net with its private IP address. You can also pass arguments directly to virt-install by using the -V option. (A short sketch combining these options appears after the resource list below.)

Delete a VM

The oci-kvm tool can also remove and unconfigure all the system resources assigned to a guest VM and make them available for reuse. Following is an example of how to delete a guest VM:

# oci-kvm destroy -D my_guest

For detailed information, see the following resources:

- Oracle Linux KVM Image for Oracle Cloud
- Getting Started: Oracle Linux KVM Image for Oracle Cloud Infrastructure
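As mentioned above, -d/--disk and -n/--net pin a guest to a specific block device and VNIC. The following is a minimal sketch, not from the original post; the device path and private IP address are placeholders for your own block volume attachment and secondary VNIC:

# Sketch only: /dev/sdb and 10.0.0.5 are placeholders for your own resources.
# oci-kvm create -D my_guest2 -d /dev/sdb -n 10.0.0.5 -V --vcpus 2 --memory 4096 --location /mnt/OracleLinux-R7-U4-Server-x86_64-dvd.iso --os-variant=rhel7 --noautoconsole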


Creating a Secure Connection Between Oracle Cloud Infrastructure and Other Cloud Providers

In today's world, having a secure, encrypted, point-to-point channel through which your data can travel from a specific location to the cloud contributes to a safer solution if you want to avoid breaches and data loss. There are different ways to allow multiple locations to establish secure connections with each other over a public network such as the internet: user-based-authentication VPNs, IPSec site-to-site technologies, and other third-party application options. This blog post explains how to create a secure, encrypted IPSec site-to-site tunnel between Oracle Cloud Infrastructure and a third-party cloud provider like Amazon Web Services (AWS) by using Libreswan. The same process can be used with Oracle Cloud Infrastructure Classic and other cloud platforms like Microsoft Azure and Google Cloud, or to connect to your own on-premises data centers. Minor adjustments to the steps below might be required for other cloud providers.

Getting Started

The following components are the main requirements for enabling a secure channel between Oracle Cloud Infrastructure and an external third-party network:

- Oracle Cloud Infrastructure dynamic routing gateways (DRGs) and customer-premises equipment (CPE), provisioned and configured through the dashboard
- A Libreswan instance in the third-party cloud

No additional hardware is needed.

Configuration Process

At a high level, these are the steps required to create a secure and encrypted IPSec site-to-site tunnel between Oracle Cloud Infrastructure and another third-party cloud provider by using Libreswan. Our example uses AWS as the third-party cloud provider:

1. Review the architecture.
2. Provision an AWS Libreswan VM.
3. Start the Libreswan configuration.
4. Configure the AWS network rules.
5. Configure the Oracle Cloud Infrastructure DRG and CPE.
6. Configure the Oracle Cloud Infrastructure network rules.
7. Configure the Oracle Cloud Infrastructure route information.
8. Finish the AWS Libreswan configuration by using Oracle Cloud Infrastructure information.
9. Test the IPSec communication between Oracle Cloud Infrastructure and AWS.

Architecture

Libreswan uses the terms "left" and "right" to describe endpoints. In this example, Oracle Cloud Infrastructure represents the left side, and AWS represents the right side. The following table shows how these components are set up:

Left side: Oracle Cloud Infrastructure DRG/CPE
- VCN: 172.0.0.0/16
- DRG Public IP: 129.146.13.53
- Location: Oracle Cloud Infrastructure - US-Phoenix-1

Right side: Third-party cloud provider Libreswan VM
- VPC: 10.0.0.0/16
- Public IP / Instance ID: 34.200.255.174 / i-016ab864b43cb368e
- CPE Internal IP: 10.0.0.10
- Location: Amazon Web Services (AWS) - US East (N. Virginia)

Configure Libreswan on the Third-Party Cloud Provider (AWS, Right Side)

1. Create a Libreswan VM on AWS by using its provisioning process. Use Oracle Linux, CentOS, or Red Hat as the main operating system. After the new instance starts, connect to it through SSH and install the libreswan package.

$ sudo yum -y install libreswan

2. Disable source and destination checks on the Libreswan VM instance by right-clicking the instance in the console, selecting Networking, selecting Change Source/Dest. Check, and then clicking Yes, Disable.

3. Configure IP forwarding to allow AWS clients to send and receive traffic through the Libreswan VM. In the /etc/sysctl.conf file, set the following values and apply the updates with sudo sysctl -p.
net.ipv4.ip_forward=1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.eth0.send_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.eth0.accept_redirects = 0

4. Edit your AWS route table to add a rule that has the Oracle Cloud Infrastructure subnet CIDR (172.0.0.0/16) as the destination and the AWS Libreswan instance ID (i-016ab864b43cb368e) as the target.

5. Edit the AWS security groups and network ACLs, and open TCP/UDP ports 4500 and 500 to allow Oracle Cloud Infrastructure DRG/CPE IPSec communication with the AWS Libreswan VM (the source can be the Oracle Cloud Infrastructure public IP instead of 0.0.0.0/0).

6. Create the Libreswan IPSec configuration file:

$ sudo mv /etc/ipsec.conf /etc/ipsec.conf.bck
$ sudo vi /etc/ipsec.conf

and include the following:

config setup
include /etc/ipsec.d/*.conf

Configure the DRG and CPE on Oracle Cloud Infrastructure (Left Side)

1. Create a customer-premises equipment (CPE) object that points to the Libreswan AWS instance public IP address (34.200.255.174).
2. Create a DRG and attach it to the local Oracle Cloud Infrastructure VCN (172.0.0.0/16).
3. Create an IPSec connection and point it to the AWS VPC CIDR (10.0.0.0/16). Initially, the IPSec tunnel will be in the DOWN state (offline) because some additional configuration still needs to be done on the AWS Libreswan VM.
4. Open ports 500 and 4500 (TCP/UDP) in the Oracle Cloud Infrastructure security list for 0.0.0.0/0, as you did with the AWS security groups and network ACLs. You can use the AWS Libreswan VM public IP address instead of 0.0.0.0/0 if it's a persistent public IP. In addition, open the required ports/protocols for the AWS CIDR (10.0.0.0/16).
5. Add a route rule to the AWS VPC network (10.0.0.0/16) using the DRG and CPE that you just created.

Finish the Third-Party Cloud Provider (AWS) Libreswan Configuration

1. Connect through SSH to the AWS Libreswan instance and create the Libreswan IPSec connection file:

$ sudo vi /etc/ipsec.d/oci.conf

and include the following options:

conn oci1
  authby=secret
  auto=start
  pfs=yes
  salifetime=2500s
  leftid=129.146.13.53 #OCI DRG IPSec public IP
  left=129.146.13.53 #OCI DRG IPSec public IP
  leftsubnets=172.0.0.0/16 #OCI VCN CIDR
  right=10.0.0.10 #AWS Libreswan local VPC internal address
  rightid=34.200.255.174 #AWS Libreswan public IP address
  rightsubnet=10.0.0.0/16 #AWS VPC CIDR

2. For authentication, use the pre-shared key (PSK) option to create a secrets file with a format similar to the following:

$ sudo vi /etc/ipsec.secrets

#OCI_DRG-Public-IP AWS_Libreswan-Public-IP : PSK "DRG Secret Key"
129.146.13.53 34.200.255.174 : PSK "OCI DRG IPSec Secret Key"

3. Run sudo service ipsec restart to start IPsec, and run sudo ipsec auto --status | grep "===" to verify that the tunnels were started correctly:

[centos@ip-10-0-0-10 ~]$ sudo ipsec auto --status | grep "==="
000 "oci1/1x0": 10.0.0.0/16===10.0.0.10<10.0.0.10>[34.200.255.174]...129.146.13.53<129.146.13.53>===172.0.0.0/16; erouted; eroute owner: #7
000 "v6neighbor-hole-in": ::/0===::1<::1>:58/34560...%any:58/34816===::/0; prospective erouted; eroute owner: #0
000 "v6neighbor-hole-out": ::/0===::1<::1>:58/34816...%any:58/34560===::/0; prospective erouted; eroute owner: #0

The configuration is complete, and in the Oracle Cloud Infrastructure Console, the IPSec tunnel should be in the UP state.
Quick IPSec Communication Test Between Oracle Cloud Infrastructure and AWS

The setup is now finalized, so it's time to validate the configuration and check whether an OCI VM (left side) can communicate through the IPSec tunnel with an AWS VM (right side). One easy way to check the communication is with the ping command. The following table shows the configuration of both sides:

Left side: Oracle Cloud Infrastructure VM
- Public IP: 129.146.74.114
- VCN Local IP: 172.0.0.10

Right side: AWS VM
- Public IP: 34.201.24.5
- VPC Local IP: 10.0.0.11

Both cloud providers' VMs can then "talk" to each other over their private addresses (a quick test is sketched after the resource list below).

Conclusion

This blog explained how to create a secure and encrypted site-to-site IPSec tunnel between Oracle and Amazon environments, allowing the VMs to communicate with each other through their private IP addresses as if they were in the same network segment.

Additional Resources

- Oracle Cloud Infrastructure Dynamic Routing Gateways (DRGs)
- Oracle Cloud Infrastructure Security List
- Overview of Networking
- Libreswan Portal
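The original post showed the connectivity check as a screenshot. An equivalent check from the shell, using the private addresses from the table above, looks like this. From the Oracle Cloud Infrastructure VM (172.0.0.10), ping the AWS VM's private address:

$ ping -c 3 10.0.0.11

And from the AWS VM (10.0.0.11), ping the Oracle Cloud Infrastructure VM's private address:

$ ping -c 3 172.0.0.10

If replies come back in both directions, the tunnel is passing traffic.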


Oracle Cloud Infrastructure

Easily Connect Isolated Networks using Oracle Cloud Infrastructure's VCN Peering Solution: Part 2

In my last post, we covered an overview of Oracle Cloud Infrastructure's Local VCN Peering solution. In this post, I will walk you through a Local VCN peering scenario in which three Spoke consumer VCNs are enabled to access shared resources on a Hub VCN. Two Spoke VCNs are in the same tenancy and compartment as the Hub VCN; one Spoke VCN is in a different tenancy.

Local VCN Peering Scenario

Let us discuss the following example of a "hub-and-spoke" model in detail:

1. Create four VCNs, their corresponding subnets, and the associated default route tables, security lists, and DHCP options based on steps 1-3 described here. Create the Hub VCN and two Spoke VCNs in the first tenancy, and create the Spoke3 VCN in a different tenancy. Create subnets in the Hub, Spoke1, Spoke2, and Spoke3 VCNs based on the appropriate CIDR details.

2. Create peering gateways on the Hub VCN and Spoke1 VCN, and then establish a peering connection between them:
- Create a Local Peering Gateway (LPG) in the Hub VCN.
- Create a Local Peering Gateway (LPG) in the Spoke1 VCN.
- Establish a peering connection between the two VCNs by using the two peering gateways created in the previous two steps. In this case, the LPG in the Hub VCN accepts the connection, and the LPG in the Spoke1 VCN initiates it. The established connection enables advertisement of the CIDRs to the peering gateways.

3. Similarly, enable another pair of peering gateways for communication between the Spoke2 VCN and the Hub VCN:
- Create another Local Peering Gateway (LPG) in the Hub VCN.
- Create a Local Peering Gateway (LPG) in the Spoke2 VCN.
- Establish a peering connection between the two VCNs by using the two peering gateways created in the previous two steps. In this case, the LPG in the Hub VCN accepts the connection, and the LPG in the Spoke2 VCN initiates it. The established connection enables advertisement of the CIDRs to the peering gateways.

4. Enable another pair of peering gateways, along with IAM policies, to realize cross-tenancy peering between the Spoke3 VCN and the Hub VCN:
- Create another Local Peering Gateway (LPG) in the Hub VCN.
- Create a Local Peering Gateway (LPG) in the Spoke3 VCN.
- Establish a peering connection between the two VCNs, across the two tenancies, by using the two peering gateways created in the previous two steps. In this case, the LPG in the Hub VCN accepts the connection, and the LPG in the Spoke3 VCN initiates it.
- The administrator of the Spoke3 VCN shares the OCIDs of its tenancy and user group. The administrator of the Hub VCN uses this information to set up IAM policies that facilitate the peering connection.
- The administrator of the Hub VCN shares the OCIDs of its tenancy and LPG with the administrator of the Spoke3 VCN. The administrator of the Spoke3 VCN uses the tenancy OCID to set up IAM policies that facilitate the peering connection.
- The LPG in the Spoke3 VCN initiates the connection, and the corresponding LPGs in the Spoke3 VCN and Hub VCN move to a peered state by establishing a cross-tenancy peering connection.

The peering gateway details after the hub-and-spoke communication model is established are listed below.
(The original post showed screenshots of the peering gateway details in the Hub, Spoke1, Spoke2, and Spoke3 VCNs.)

Local VCN peering is presented to customers as a direct route target in their VCN: subnet-level route rules send the traffic destined for a peered CIDR directly to the corresponding local peering gateway. (The original post also showed screenshots of the route table entries in the Hub, Spoke1, Spoke2, and Spoke3 VCNs.)

As a final step, evaluate the security rules associated with the subnets and update them to ensure that all inbound and outbound traffic you permit is defined as intended. In the simplest case, add a security rule in the Hub VCN subnets to accept ICMP traffic from the Spoke VCNs.

With these steps, you now have three Spoke VCNs peered with the Hub VCN from the networking standpoint. Customers can deploy shared services on the instances attached to the Hub VCN, and the peered Spoke VCNs can access those shared services by using private IP addresses. (For readers who script their infrastructure, a hedged Terraform sketch of one Hub-Spoke pair follows this post.)

I hope you enjoyed using the Local VCN Peering feature in Oracle Cloud Infrastructure. We'd love to hear any feedback you have.

Vijay Arumugam Kannan
Principal Product Manager, Oracle Cloud Infrastructure
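For readers automating this topology, the peering objects map onto Terraform resources in the OCI provider. The following is a minimal sketch of a single Hub-Spoke1 pair, not code from the original post: the compartment and VCN references and the CIDR are placeholders, and attribute names follow a recent version of the provider, in which setting peer_id on one gateway initiates the peering.

# Sketch only: compartment/VCN references and the CIDR are placeholders.
resource "oci_core_local_peering_gateway" "hub_to_spoke1" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.hub.id
}

resource "oci_core_local_peering_gateway" "spoke1_to_hub" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.spoke1.id
  # Setting peer_id here makes this LPG initiate; the Hub LPG accepts.
  peer_id        = oci_core_local_peering_gateway.hub_to_spoke1.id
}

# Route Spoke1 traffic destined for the Hub CIDR through Spoke1's LPG.
resource "oci_core_route_table" "spoke1_rt" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.spoke1.id
  route_rules {
    destination       = "192.168.0.0/16"   # Hub VCN CIDR (placeholder)
    network_entity_id = oci_core_local_peering_gateway.spoke1_to_hub.id
  }
}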


Oracle Cloud Infrastructure

Easily Connect Isolated Networks using Oracle Cloud Infrastructure's VCN Peering Solution: Part 1

This is part one of a two-part series on Oracle Cloud Infrastructure's Local VCN peering solution. You can find part two of the series here.

A Virtual Cloud Network (VCN) is a customizable private network in Oracle Cloud Infrastructure. Oracle Cloud Infrastructure now enables customers to connect two virtual cloud networks by using our VCN peering solution. The Oracle Cloud Infrastructure VCN peering solution falls into two major categories:

- Local VCN peering (or intra-region peering), which refers to connecting two VCNs within the same region. Oracle Cloud Infrastructure supports peering between two VCNs that are in the same tenancy (whether or not they are in the same compartment) or in different tenancies.
- Remote VCN peering (or inter-region VCN peering), which refers to connecting two VCNs across two different regions.

This blog describes the Local VCN peering solution in detail. Oracle Cloud Infrastructure's Local VCN peering solution offers customers many benefits:

- A no-cost, reliable alternative to connectivity models such as VPN, eliminating internet gateways, encryption overhead, and performance bottlenecks.
- Easy enablement of peering between Oracle VCNs with no scheduled downtime.
- Private connectivity for resources in peered virtual cloud networks, using the Oracle Cloud Infrastructure fabric's highly redundant links with predictable bandwidth and latency.

In some cases, the VCNs are owned by the same company but by different departments. In other cases, the two VCNs belong to different companies (a service-provider model). Our Local VCN peering solution supports many use cases by allowing customers to deploy multiple VCNs (within their governance boundaries) and providing private connectivity across the VCNs through the appropriate use of policies such as security rules and routing rules.

Access to peer resources: Traditionally, large enterprises have one or more virtual cloud networks aligned with their operational isolation and business goals. Each business unit can deploy and operate resources independently. In this scenario, resources that are required by two business units (such as forecast tracking, budgetary information, and employee data) tend to get duplicated in both virtual network boundaries. This often results in increased compute expenses and triggers additional cost to keep the versions of these applications and data in sync. Local VCN peering provides private access to the resources in the peered VCN, eliminating duplicate resources and reducing OPEX.

Access to centralized resources (hub-and-spoke model): Customers can realize significant TCO benefits from the Local VCN peering solution by deploying all shared resources (such as logging servers, DNS servers, and Active Directory) on a single Hub VCN. Other VCNs (spokes) are then allowed to peer with the shared (Hub) VCN.

Setting Up VCN Local Peering

In reality, the two VCNs can be in different tenancies or compartments and can have different administrators.

1. Administrators create a Local Peering Gateway (LPG) on each VCN.
2. Administrators of these VCNs provide explicit agreement to enable the peering relationship. They share information by using out-of-band mechanisms and set up Identity and Access Management (IAM) policies for their own VCN's compartment and tenancy.
- If the two VCNs are in the same tenancy and the same compartment, the same network administrator will most likely have access to information from both VCNs and can proceed to create a peering between them.
- If the two VCNs are in the same tenancy but in different compartments: the administrator of the VCN that is accepting the peering connection shares information such as the VCN's name, compartment name, and LPG name; the administrator of the VCN that is initiating the peering connection shares information such as the name of the IAM group.
- If the two VCNs are in different tenancies: the administrator of the VCN that is accepting the peering connection shares their tenancy OCID (Oracle Cloud Identifier) and the LPG's OCID, and sets up special policy statements to accept the peering connection; the administrator of the VCN that is initiating the peering connection shares the tenancy and administrator group OCIDs, and sets up policy statements to initiate the peering connection. (A hedged sketch of such policy statements follows this post.)

3. Administrators establish a peering relationship between the VCNs. This involves creating a peering connection between the two LPGs. The peering connection indicates permission to exchange route advertisements and the willingness of each VCN to accept packets from the other VCN.
4. Administrators update their VCNs' route tables and security rules to enable traffic between the peered VCNs as desired.

I hope you enjoyed learning about the Local VCN Peering feature in Oracle Cloud Infrastructure. In my next post, I will walk you through a Local VCN peering scenario where three Spoke consumer VCNs are enabled to access shared resources on the Hub VCN. Stay tuned...

Vijay Arumugam Kannan
Principal Product Manager, Oracle Cloud Infrastructure, Networking
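For illustration only, the policy statements exchanged in the cross-tenancy case look roughly like the following. This is a sketch: the group and compartment names and the OCIDs are placeholders, and the resource-type names (local-peering-from, local-peering-to) should be verified against the current IAM policy reference for VCN peering.

On the initiating (requestor) side:

Define tenancy Acceptor as <acceptor_tenancy_OCID>
Endorse group RequestorAdmins to manage local-peering-from in tenancy Acceptor

On the accepting side:

Define tenancy Requestor as <requestor_tenancy_OCID>
Define group RequestorAdmins as <requestor_group_OCID>
Admit group RequestorAdmins of tenancy Requestor to manage local-peering-to in compartment NetworkCompartment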


Events

Join Oracle Cloud Infrastructure at COLLABORATE 18 to Hear the Latest

As Oracle's cloud solutions continue to expand across SaaS, IaaS, and PaaS, customers are eagerly evaluating how these offerings can help transform how they run their businesses. Whether users are looking to modernize their business and optimize with new cloud investments, integrate and extend an existing hybrid environment with on-premises systems, or build a personalized path to the cloud, COLLABORATE is the annual Oracle user conference where attendees can learn how to accelerate business innovation and digital transformation with Oracle Cloud. At this year's program, nearly 50% of the 1,200+ sessions will focus on cloud, developer, and emerging technologies that complement Oracle's on-premises solutions. Here's a preview of what you can expect at COLLABORATE.

The Oracle Keynote: How to Build Your Own Personalized Path to Cloud

In the Oracle keynote session on Monday, April 23 at 2:30 p.m., Steve Daheb, Senior Vice President for Oracle Cloud, will illuminate how the Oracle Cloud Platform makes it possible for organizations to develop their own unique path to the cloud from wherever they choose (SaaS, PaaS, or IaaS) and share how organizations have designed their unique journeys.

Attend the Oracle Cloud Infrastructure Sessions

With the introduction of the world's first autonomous database, COLLABORATE attendees will also hear about exciting developments, get a sneak peek into the Oracle Autonomous Database Cloud, and see how Oracle is integrating AI and machine learning into its suite of cloud services to make them fully autonomous and cognitive. These sessions will explore how organizations can benefit from more autonomy in their software, from business users to app developers to DBAs. Additionally, more than 500 sessions span Oracle's SaaS, IaaS, and PaaS solutions, where attendees can learn how our cloud offerings can accelerate business transformation, increase agility, and optimize security with their existing solutions. Some of these sessions include:

- Oracle Cloud Infrastructure: The Best Place to Run Your Oracle Applications [Session ID: 107800]
- JD Edwards on Oracle Cloud Service [Session ID: 111370]
- Advanced Architectures for Deploying Oracle Applications on Oracle Cloud Infrastructure [Session ID: 107820]
- Give Us Your Most Challenging Workloads and Migrate Them to the Cloud! [Session ID: 108590]
- Advanced Practices for Databases with Oracle Cloud Infrastructure [Session ID: 108520]
- Oracle Cloud Infrastructure - The Best of On-Premises and Cloud in a Single Infrastructure Solution [Session ID: 112020]
- Oracle Managed Cloud Services for Your PeopleSoft on Cloud [Session ID: 110690]
- Oracle Real Application Clusters (RAC) in the Oracle Cloud [Session ID: 1432]
- Hands-On Lab: Lift and Shift to Oracle Cloud for Oracle E-Business Suite [Session IDs: 10583, 10584, 10585]

Join the Conversation at Demo Pods 5 & 6

At COLLABORATE, you can connect with the Oracle Cloud Infrastructure team and experience our solutions through demos. COLLABORATE is the largest annual technology and applications forum for the Oracle user community in North America. Taking place on April 22-26 in Las Vegas, Nevada, and hosted by three Oracle user groups (IOUG, OAUG, and Quest International Users Group), the five-day conference will host more than 5,000 attendees in keynotes, sessions, workshops, networking events, and an exhibitor showcase with 200+ vendors. See what COLLABORATE 18 has to offer.
You can also view the full agenda and search by keyword, education track, product line, or business goal.

Get Started Today with Jump Start Demo Labs

Oracle Cloud Jump Start allows you to try preconfigured solutions running on Oracle Cloud Infrastructure, for free. A demo lab launches in minutes, enabling you to start learning about these innovative solutions from Oracle's consulting and technology partners. To give it a try, visit http://cloud.oracle.com/JumpStart.

Stay Connected

Stay connected with Oracle Cloud Infrastructure by following us on Twitter @OracleIaaS. We hope to see you there!


Bring Your Own Image of Windows Server to Oracle Cloud Infrastructure

We continue to extend our lift-and-shift capabilities with the Bring Your Own Image (BYOI) feature in Oracle Cloud Infrastructure. We already provide BYOI for UNIX VM operating systems (OSs), and now we are closing the gap for customers with Windows VM workloads. Our latest release supports the following virtualized Windows Server images in emulation mode and is generally available today:

- Windows Server 2016 Datacenter
- Windows Server 2016 Standard
- Windows Server 2012 R2 Datacenter
- Windows Server 2012 R2 Standard
- Windows Server 2012 Datacenter
- Windows Server 2012 Standard
- Windows Server 2008 R2 Datacenter
- Windows Server 2008 R2 Enterprise
- Windows Server 2008 R2 Standard

The QCOW2 and VMDK image formats are supported. We also handle the licensing: imported VMs are metered for Windows usage based on the pricing for Oracle Cloud Infrastructure Compute Windows Server OSs, $0.0204 per OCPU per hour.

Simplified Import Experience

The Oracle Cloud Infrastructure Console provides a simplified experience for bringing your own image by importing it into the Oracle Cloud Infrastructure Object Storage service and launching it. You can also use the CLI to import and launch the image.

Getting Started

The import capability uses hardware emulation to launch existing on-premises Windows Server VM images in QCOW2 or VMDK format. It takes only a few steps to get your own image up and running on Oracle Cloud Infrastructure:

1. Prepare the image by removing computer-specific information, such as installed drivers and the computer security identifier (SID). This is called image generalization.
2. Upload your image to the Object Storage service by dragging it into the Oracle Cloud Infrastructure Console.
3. In the Console, click Compute, then click Images, and then click Import Image. Enter the Object Storage location and the OS version, and then click Import.
4. After the image is imported, from the Images page, click the Actions menu and select Create Instance.

That's it! This simplified import process reduces the overhead of moving your Windows workloads into Oracle Cloud Infrastructure. As part of this release, emulation mode is supported only on X5 shapes. Further updates will be posted when X7 support is available.

For detailed information about and steps for this process, see the following topics:

- Bring Your Own Custom Image for Emulation Mode Virtual Machines
- Preparing a Custom Windows Image for Emulation Mode
- Importing Custom Images for Emulation Mode
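The Console flow above can also be scripted with the OCI CLI. A minimal sketch, assuming the generalized image has already been uploaded to an Object Storage bucket; the compartment OCID, namespace, bucket, and object names are placeholders, and the exact parameters should be confirmed with oci compute image import from-object --help:

$ oci compute image import from-object \
    --compartment-id ocid1.compartment.oc1..example \
    --namespace my_namespace \
    --bucket-name imported-images \
    --name ws2012r2-std.qcow2 \
    --source-image-type QCOW2 \
    --launch-mode EMULATED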


Oracle Cloud Infrastructure

FireEye Email Security Powered by Oracle Cloud

Nothing compromises trust in an organization more than a data breach. A data breach potentially places an organization's customers, their information, and their data at risk. Such breaches also disrupt daily business and tarnish the organization's reputation. Email remains the primary vector for initiating an advanced attack or delivering ransomware because it can be targeted and personalized, which increases the odds of a threat's success. Having an email security solution is critical for any organization.

Oracle is excited to be partnering with FireEye, an industry leader with a comprehensive portfolio of solutions that combine best-of-breed technologies with 360-degree threat intelligence and expertise. To prevent spam campaigns, ransomware, spear-phishing, and impersonation attacks, an email security solution needs to evolve quickly to adapt to the threat landscape. It must provide threat protection that meets the following requirements:

- Detects without relying on signatures
- Identifies critical threats with minimal false positives
- Blocks inline to keep threats such as ransomware out of the environment
- Uses cyber threat intelligence gained from the front lines to respond quickly to protect the organization

FireEye meets all these requirements. It collects extensive threat intelligence on adversaries, conducting first-hand breach investigations through millions of sensor feeds on the internet. FireEye Email Security draws on real evidence and contextual intelligence about attacks and attackers to prioritize alerts and block threats in real time, before they hit your inbox. It delivers dynamic defense to detect attacks from the first time they're seen, and it blocks the most dangerous cyber threats, including malware-laden attachments and URLs, credential-phishing sites, and business email compromise attacks.

FireEye Email Security customers can now experience the benefits of FireEye and the power of Oracle Cloud together. Oracle Cloud Infrastructure was created to provide an infrastructure that matches and surpasses the performance, security, control, and governance of enterprise data centers, while delivering the scale, elasticity, and cost savings of public clouds. As a result, Oracle Cloud Infrastructure is built from the ground up to be an enterprise cloud easily capable of running traditional multi-tiered enterprise applications and high-performance workloads like FireEye's Email Security offering.

See our relationship in action at RSA on Wednesday, April 17, from 12-2 p.m. at the St. Regis Grand Ballroom, or experience our joint offering immediately via FireEye's free Jump Start lab environment. In this Jump Start lab, users can follow a step-by-step guide and experience a sample of FireEye's Email Security offering.


Oracle Cloud Infrastructure

ServiceNow and Oracle Cloud Infrastructure Integration for Enhanced Cloud Operations & Management

Continuing our expansion of the Oracle Cloud Infrastructure partner ecosystem, we are pleased to announce ServiceNow configuration management database (CMDB) support for Oracle Cloud Infrastructure. ServiceNow CMDB provides a repository for discovered Oracle Cloud Infrastructure resources that can be used to understand service health and improve availability for IT service, operations, and support management for enterprise customers. This integration enables joint Oracle and ServiceNow customers to maintain a single repository of Oracle Cloud Infrastructure resource inventories, build a relationship map in the CMDB, and summarize resource usage across different applications, projects, business services, cost centers, and users. ServiceNow provides process and service automation with orchestration, approvals, and service catalog capabilities. It also packages and delivers Oracle Cloud Infrastructure resource elements such as compute, network, and storage through the service catalog. This integration of ServiceNow for Oracle Cloud Infrastructure was developed by MapleLabs (part of Xoriant Corporation).

ServiceNow CMDB

ServiceNow is an enterprise-as-a-service CMDB platform that provides security, real-time analytics, IT service, and operations management solutions. It is a single platform for automating business processes across the enterprise using a single data model, intelligent automation, and a modern user experience. It provides visibility into both virtualized and bare metal resources. It also provides dynamic service maps that help keep your automation up to date and can help you maintain regulatory compliance.

Oracle Cloud Infrastructure: Compute, Network, and Storage Resources

Your Oracle tenancy can be thought of, roughly speaking, as your "account," in which you create and administer cloud resources. To enable you to partition, organize, and control access to subsets of resources, Oracle Cloud Infrastructure also provides compartments: logical groups of resources. All of the compute, network, and storage resources you use exist in one of these compartments. You can assign users rights to access resources in one compartment while denying them access to another. While a simple deployment might require only one compartment, more complex environments benefit from the organization and access control that compartments provide.

The ServiceNow CMDB integration and discovery application uses the Oracle Cloud Infrastructure IaaS REST APIs to retrieve the resources managed by Oracle Cloud Infrastructure (an illustrative query appears after this post). These resources are imported into the ServiceNow CMDB. The application builds a relationship map to enable easy identification of dependencies among resources, which helps accelerate inventory, event, change, and configuration management. Resources can also be assigned user-defined tags to view the usage of applications, projects, business services, cost centers, and users. A dashboard view of resource usage is provided to administrators, operators, and users. The ServiceNow portal for Oracle Cloud Infrastructure is an intuitive UI dashboard that exposes the inventory assets from Oracle Cloud Infrastructure and provides fine-grained, role-based permissions for users to gain visibility into Oracle cloud-based inventory and usage control.

ServiceNow CMDB can help enterprises:

- Create a consistent single system of record for Oracle Cloud Infrastructure.
- Bring existing Oracle Cloud Infrastructure resources under management by using service-aware discovery.
- Assist in change management planning by using Configuration Items (CIs) discovered from Oracle Cloud Infrastructure.
- Gain visibility into resources provisioned and used by cloud administrators and users, using role-based permissions.

The Oracle and ServiceNow partnership helps enterprises take full advantage of Oracle Cloud Infrastructure's advanced capabilities while maintaining full visibility and control of their cloud resources. To get started, take a look at the ServiceNow listing in the Oracle Cloud Marketplace: https://cloudmarketplace.oracle.com/marketplace/en_US/listing/19672546

The ServiceNow CMDB integration for Oracle Cloud Infrastructure is available on the ServiceNow Appstore at no additional cost: https://store.servicenow.com/sn_appstore_store.do#!/store/application/443b8f450f3c72009ba9adabe1050e8d/3.0.0?referer=sn_appstore_store.do%23!%2Fstore%2Fsearch%3Fq%3DOracle
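As a rough illustration of the kind of inventory data the discovery application retrieves over the REST APIs, the equivalent queries with the OCI CLI look like this; the compartment OCID is a placeholder, and this is not part of the ServiceNow application itself:

$ oci compute instance list --compartment-id ocid1.compartment.oc1..example
$ oci network vcn list --compartment-id ocid1.compartment.oc1..example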


Oracle Cloud Infrastructure

Oracle Cloud Infrastructure and the GDPR

I'm Yuecel Karabulut, a Director of Product Management for the Oracle Cloud Infrastructure Security & Compliance team. I want to tell you about the work that the Oracle Cloud Infrastructure team is doing to help customers with the General Data Protection Regulation (GDPR), as part of our continued commitment to helping ensure that they can comply with European Union (EU) data protection requirements.

The EU GDPR is a new, comprehensive data protection law that goes into effect on May 25, 2018. It applies broadly to organizations based in the EU and elsewhere that collect and process the personal information of individuals residing in the EU. Oracle Cloud Infrastructure is an Infrastructure as a Service (IaaS) product in which responsibility for security is shared between Oracle Cloud Infrastructure and the customer. For details, see the Oracle Cloud Infrastructure Security white paper.

Enterprises need scalable, hybrid cloud solutions that meet all their security, data protection, and compliance requirements. To meet this need, Oracle developed Oracle Cloud Infrastructure, which offers customers a virtual data center in the cloud that allows enterprises to have complete control with unmatched security. Oracle Cloud Infrastructure offers best-in-class security technology and operational processes to secure its enterprise cloud services. However, for customers to securely run their workloads in Oracle Cloud Infrastructure, they must be aware of their security and compliance responsibilities. By design, Oracle provides security of the cloud infrastructure and operations (cloud operator access controls, infrastructure security patching, and so on), and customers are responsible for securely configuring their cloud resources. Security in the cloud is a shared responsibility between the customer and Oracle. Likewise, privacy compliance is also a shared responsibility between Oracle and the customer. We've recently published a GDPR white paper that explains some of this shared responsibility in the context of the GDPR and Oracle Cloud Infrastructure.

The GDPR defines three key actors:

- Data subject: An individual whose personal data is gathered and processed by the controller
- Controller: An entity that determines the purposes and means by which the data is processed
- Processor: An entity that only processes data at the controller's command

Generally speaking, Oracle Cloud Infrastructure handles two types of data in the context of its interactions with its customers:

- Customer account information: Information needed to operate the customer's Oracle Cloud Infrastructure account. This information is primarily used to contact and bill the customer. The use of any personal information that Oracle gathers from the customer for purposes of account management is governed by the Oracle Privacy Policy. With customer account information, Oracle Cloud Infrastructure acts as a controller in this narrow instance.
- Customer services data: Data that customers choose to store within Oracle Cloud Infrastructure, which may include personal information gathered from data subject users. Oracle does not have insight into the contents of this data or the customer's decisions regarding its collection and use. Additionally, it is important to note that Oracle does not have a direct relationship with the data subject users. In this situation, the customer is the controller and manages the data; Oracle Cloud Infrastructure is the processor that acts on the commands of the customer.

The Oracle Cloud Infrastructure GDPR white paper focuses on customer services data and any personal information that it may contain from the customer's data subject users. GDPR Article 5 defines "principles related to processing of personal data." In this regard, personal data must be:

- Processed lawfully, fairly, and transparently
- Collected and processed for a limited purpose (purpose limitation)
- The minimum amount necessary for the purpose (data minimization)
- Accurate
- Stored only as long as necessary (storage limitation)
- Processed securely (integrity and confidentiality)

The Oracle Cloud Infrastructure GDPR white paper outlines how Oracle Cloud Infrastructure and its customers allocate or share the responsibilities for some of these principles. More specifically, the paper explains how customers can use Oracle Cloud Infrastructure security processes, services, and features to meet the requirements of the GDPR, including services for auditing, authentication, administrative access controls, network security controls, isolation, high availability, and encryption. Oracle's mission is to build cloud infrastructure and platform services where Oracle customers have effective and manageable security to run their mission-critical workloads and store their data with confidence while meeting their regulatory requirements. As we head toward May 2018, we will continue to assist our customers in answering their GDPR-related questions and help them comply with the GDPR.


Oracle Cloud Infrastructure

Launching Oracle Cloud Infrastructure in the United Kingdom

I'm James Stanbridge, VP and lead product manager for Oracle Cloud Infrastructure (OCI) in Europe and Asia. We're pleased to announce the general availability of infrastructure services from our UK-London data center. The UK region uses Oracle's modern IaaS architecture and will be of particular interest to those looking for service locality and the lowest possible latency.

Oracle has already established a strong base of customers in the UK, helping them run their applications with greater agility and efficiency and enabling the launch of brand-new services with organizations like YellowDog and Interactive Scientific. The UK region provides bare metal and VM compute instances; file, block, object, and archive storage; Oracle Database Cloud on VMs or bare metal; Exadata Cloud Services; and truly unique features like Oracle RAC, all running on the same 25 GbE, highly configurable Virtual Cloud Network. For more details on the services, please visit cloud.oracle.com.

The UK region is a great choice for businesses serving internal or external customers in the United Kingdom. It takes advantage of Oracle's next-generation IaaS architecture, including enterprise-optimized hardware and network technology, to offer the highest performance for UK customers. Whether you're moving current enterprise applications like Oracle E-Business Suite or developing new cloud-native services, Oracle Cloud Infrastructure is designed to provide exceptional performance, availability, and manageability, backed by the industry's first end-to-end SLA.

Like the enhanced US and EU Germany regions, this new data center region has three fault-decorrelated Availability Domains with high-speed, low-latency interconnects. This architecture enables you to build and run highly resilient applications; both traditional n-tier and distributed scale-out applications can benefit. Oracle Cloud Infrastructure also offers the disaster recovery, data replication, and redundancy options that enterprises expect. For example, Oracle Database customers can follow the Oracle Maximum Availability Architecture (MAA) guidelines and leverage Data Guard for asynchronous replication and disaster protection.

We invite you to try Oracle's cloud infrastructure. Reach out to your local Oracle representative to discuss your use cases, or try our service for free online. My team and I are excited to work with you.


Oracle Cloud Infrastructure

Deployment of a Highly Available Memcached Cluster on Oracle Cloud Infrastructure using Terraform

Caching is one of the most effective techniques for speeding up a website and has become a staple of modern web architectures. An effective caching strategy lets you get the most out of your website, eases the pressure on your database, and offers a better experience for users; it is perhaps the single biggest factor in creating an application that performs well at scale. Let's look at the in-memory caching solution offered by the most widely adopted in-memory cache, Memcached.

Memcached is an open source, high-performance, distributed object caching system. It is simply a distributed key-value store that keeps its objects in memory, and it can be thought of as a standalone distributed hash table or dictionary. A typical Memcached system comprises four elements:

- Memcached client software, which resides on the application servers that make calls to the Memcached servers. Client libraries are available for many languages.
- Memcached server software, which runs on the cache servers and stores the key-value pairs.
- A client-based hashing strategy to distribute the keys across servers.
- A cache eviction strategy on the server.

In a typical Memcached setup, the servers are disconnected from each other and are usually unaware of each other; there is no communication of any kind between Memcached servers, such as synchronization or broadcasting. If you are running low on resources on a Memcached server, you can add another server, and you can continue adding servers as your data volume grows, which makes it easy to scale out the Memcached tier. Cached items drop out of Memcached as the cache becomes full, which is called cache eviction; in Memcached, the least recently used (LRU) objects are dropped from the cache to create room for newer entries. The most common way to use Memcached is as a demand-filled, look-aside cache. For more information on caching strategies, refer to this white paper.

In this blog post, we discuss how to deploy a simple LAMP stack, consisting of Ubuntu Linux, the Apache 2 web server, a Python Flask application, and a MySQL database, with Memcached on Oracle Cloud Infrastructure. Flask is a BSD-licensed microframework for Python, based on the Werkzeug and Jinja 2 extensions. Subsequently, we scale the application and Memcached instances across multiple availability domains. At each step, we'll give you the necessary Terraform code to automate the deployment. For the purposes of this blog post, we use the following Oracle Cloud Infrastructure instance shapes and services (for more information on instance shape selection for Memcached, please refer to this white paper):

- Instance shapes: VM.Standard2.4 (application servers), VM.Standard2.2 (Memcached), BM.Standard2.1 (database)
- Services: VCN, internet gateway, route tables, security lists, public load balancer pair
- Operating system: Ubuntu Linux 16.04
- Application server: Python Flask
- Memcached client library: python-memcached
- Database: MySQL Community Edition version 5.7.x

Scenario 1: Single-Instance LAMP Application in a Single AD

In this scenario, we start by creating a simple LAMP stack with one instance each of the Apache 2 web server, the Memcached server, and a MySQL database, in separate subnets within a single availability domain. This is the simplest scenario to start with, and it is suitable when the traffic volume is low and predictable, or for a typical dev/test environment.
In this blog post, we discuss how to deploy a simple LAMP stack involving Ubuntu Linux, the Apache 2 web server, a Python Flask application, and a MySQL database, with Memcached, on Oracle Cloud Infrastructure. Flask is a BSD-licensed microframework for Python, based on Werkzeug and Jinja 2. Subsequently, we scale the application and Memcached instances across multiple availability domains. At each step, we'll give you the necessary Terraform code to automate the deployment. For the purposes of this blog post, we will use the following OCI instance shapes and services. For more information on instance shape selection for Memcached, please refer to this white paper. Oracle Cloud Infrastructure instance shapes: VM.Standard2.4 (application servers), VM.Standard2.2 (Memcached), BM.Standard2.1 (database). Oracle Cloud Infrastructure services: VCN, Internet Gateway, Route Tables, Security Lists, Public Load Balancer pair. Operating system: Ubuntu Linux 16.04. Application server: Python Flask. Memcached client library: python-memcached. Database: MySQL Community Edition 5.7.x.
Scenario 1: Single-instance LAMP application in a single AD
In this scenario, we start by creating a simple LAMP stack with one instance each of the Apache 2 web server, the Memcached server, and a MySQL database, within separate subnets in a single availability domain. This is the simplest scenario to start with, and it is suitable when the traffic volume is low and predictable, or for a typical Dev/Test environment. This setup not only emphasizes the practice of starting with a simple design and not scaling prematurely, but also lays a good foundation for scaling out instances independently in each tier as traffic grows. When traffic grows, you can add more Memcached instances to handle read-heavy data and more MySQL instances for write-heavy data, thereby scaling out each data tier independently. The high-level deployment architecture is illustrated below. Let's go ahead and set up this scenario. Create a VCN with 4 subnets to house a bastion server, web server, Memcached server, and database server. Make sure the VCN CIDR is big enough to accommodate more subnets in the future. Refer to the VCN Overview and Deployment Guide for more information on how to create a VCN and the associated best practices. Create the bastion and web server in separate public subnets, and the Memcached and MySQL database in separate private subnets. By doing this, the cache and database servers are secured from public access, and only the application and bastion servers are publicly reachable. Here is the VCN with 4 subnets:
BastionSubnet (Public): (10.0.0.0/24), a public subnet that can be used as a jump box to access the instances in the private subnets.
WebSubnet (Public): (10.0.1.0/24), with access to the internet through an internet gateway.
CacheSubnet (Private): (10.0.2.0/24), a private subnet with no access to the internet, where cache instances reside.
DBSubnet (Private): (10.0.3.0/24), a private subnet with no access to the internet, where database instances reside.
Attach the following security lists to each subnet to restrict access further. Security list rules are stateful by default, so use the following stateful rules:
Security list for Bastion subnet: Allow ingress on TCP port 22 from the public internet, for SSH access to the bastion host. Allow egress of all protocols.
Security list for App subnet: Allow ingress on TCP ports 80/443 for accessing the web application from the public internet. Also allow ingress on TCP port 22 for SSH access to the application server, from the BastionSubnet private IP address range only. Allow egress of all protocols.
Security list for Memcached subnet: Allow ingress on TCP port 11211 for accessing the Memcached instance from the AppSubnet only, because no other instance needs to access the cache directly. Also allow ingress on TCP port 22 for SSH access from the BastionSubnet private IP address range. Allow egress of all protocols.
Security list for DB subnet: Allow ingress on TCP port 3306 for accessing the MySQL instance from the AppSubnet. Also allow ingress on TCP port 22 for SSH access from the BastionSubnet private IP address range. Allow egress of all protocols.
Note: In this setup, since the private instances do not have internet access, commands that update apt-get repositories and download the Memcached and MySQL packages will fail. To work around this, create NAT instances in a public subnet and route internet-bound traffic through them. Refer to this blog post for more information on how to set up NAT instances and automate the NAT instance deployment using Terraform. We will now configure the individual instances, starting with the web server.
Configuring the web server instance
Install the Apache2 web server. The server starts listening on port 80 soon after installation.
If you would like Apache2 to also listen on port 443, include it in the ports.conf file.
sudo apt-get -y update
sudo apt-get -y install apache2
Allow Apache2 (HTTP and HTTPS) through the instance firewall:
sudo apt-get install firewalld -y
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --reload
Next, install the Python client libraries for Memcached and MySQL, so that the web server can interact with the Memcached and MySQL instances.
sudo apt-get -y install python-pip
sudo -H pip install --upgrade pip
sudo pip install python-memcached
sudo apt-get -y install python-mysqldb
Configuring the Memcached instance
Let's proceed with configuring the Memcached instance. Update the package manager and allow ingress connections through the instance firewall:
sudo apt-get -y update
sudo apt-get install firewalld -y
sudo firewall-cmd --permanent --add-port=11211/tcp
sudo firewall-cmd --permanent --add-port=11211/udp
sudo firewall-cmd --reload
Install the Memcached server. The service starts automatically after installation and listens on port 11211 by default. You can also launch multiple Memcached threads by specifying the "-t" parameter when starting Memcached.
sudo apt-get -y install memcached
Configuring the MySQL database instance
Update the package manager and allow ingress connections through the instance firewall:
sudo apt-get -y update
sudo apt-get install firewalld -y
sudo firewall-cmd --permanent --add-port=3306/tcp
sudo firewall-cmd --reload
Let's go ahead with installing and starting the MySQL server. In this scenario, install MySQL version 5.7.x. The installation steps are different for MySQL versions 5.5.x and 5.6.x; for more information, please refer to MySQL's official documentation.
sudo apt-get install mysql-server -y
The MySQL server automatically starts listening on port 3306 after installation. Next, run the security script provided by MySQL, which changes some of the less secure default options for things like remote logins and sample users:
sudo mysql_secure_installation
The entire deployment highlighted in this scenario can be automated using the following Terraform code. https://github.com/abannang/OCI/tree/master/Memcached-OCI/Memcache%20TF%20scenario-1 The Terraform code also contains a sample application (a Python script named scenario-1.py) that can be used to interact with the Memcached and MySQL instances. Upon successful execution, the script should return: Success! Connected to Memcached instances and MySQL DB. Please note that since we did not set up NAT, the Terraform code demonstrated here deploys the instances in public subnets. As discussed earlier, you can change this behavior by installing NAT instances and configuring the subnets to be private-only. A sketch of the calls the script makes to Memcached and the MySQL database is shown after the note below. Note: When you create a MySQL database, you can sign in to it as the root user. By default and by design, remote access to the MySQL database is not permitted. To enable remote access, so that the web server can interact with the MySQL database, create a separate user and grant it the right privileges. When making calls from the web server, use the same user name that you created in the MySQL database. Here is a sketch of how to do it.
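The following statements, run from the mysql client as root, are a minimal sketch; the user name, password, database name, and the AppSubnet range (10.0.1.%) are illustrative assumptions, not values from the actual Terraform code:

CREATE USER 'webapp'@'10.0.1.%' IDENTIFIED BY 'a_strong_password';
GRANT ALL PRIVILEGES ON webappdb.* TO 'webapp'@'10.0.1.%';
FLUSH PRIVILEGES;

You may also need to point the bind-address setting in the MySQL configuration at the instance's private IP so that the server accepts remote connections. And here is a minimal sketch of the look-aside calls from the web server, assuming the python-memcached and python-mysqldb libraries installed earlier; the IP addresses, credentials, and table are illustrative, not taken from scenario-1.py:

import memcache
import MySQLdb

# Hypothetical private IPs of the cache and database instances.
memc = memcache.Client(['10.0.2.10:11211'])
db = MySQLdb.connect(host='10.0.3.10', user='webapp',
                     passwd='a_strong_password', db='webappdb')

def get_movie_title(movie_id):
    key = 'movie:%s' % movie_id
    title = memc.get(key)              # try the cache first
    if title is None:                  # cache miss: fall back to MySQL
        cursor = db.cursor()
        cursor.execute('SELECT title FROM movies WHERE id = %s', (movie_id,))
        row = cursor.fetchone()
        if row:
            title = row[0]
            memc.set(key, title, time=300)   # populate the cache for next time
    return title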
Scenario 2: Scaling Memcached - Application with multiple cache instances
Scenario 1 laid a good foundation for scaling the Memcached instances as the traffic volume for our web application grows. We do this by adding an extra Memcached instance in the Memcached subnet and updating the application server's config file so it can locate the new cache instance. We can also configure the Memcached client library on the application server to partition the key space using consistent hashing. This setup also load balances across the cache instances and provides a degree of high availability for the cache: if one of the cache instances goes down, only a subset of the data is lost, which might temporarily put load on the back-end database. The situation can be quickly recovered by bringing up another cache instance. The high-level deployment architecture is illustrated below. Note: The Memcached servers are discovered by adding the private IP of the second cache server in the scenario-2.py file. You can separate the config from your application code by using a separate config file to hold the IPs of the Memcached instances as you scale.
memc = memcache.Client(['10.0.2.1:11211', '10.0.2.2:11211'], debug=1)
By default, the hashing mechanism used to divide the keys among multiple servers is crc32. To change the function used, set the value of memcache.serverHashFunction to the alternate function to use. For example:
from zlib import adler32
memcache.serverHashFunction = adler32
If you are interested in using consistent hashing, install the Python module python_ketama or hash_ring; a sketch using hash_ring follows.
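Here is a minimal sketch of consistent hashing with the hash_ring module, assuming two hypothetical cache servers; with a consistent hash, adding or removing a server remaps only a fraction of the keys, so most of the cache stays warm while you scale.

from hash_ring import HashRing

servers = ['10.0.2.1:11211', '10.0.2.2:11211']
ring = HashRing(servers)

node = ring.get_node('movie:42')   # the same key always maps to the same server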
The deployment in this scenario can be automated using this Terraform script. Since we did not set up NAT, the Terraform code deploys the instances in public subnets. You can change this behavior by installing NAT instances and changing the deployment to private subnets. https://github.com/abannang/OCI/tree/master/Memcached-OCI/Memcache%20TF%20scenario-2
Scenario 3: Highly available LAMP application
In this deployment, we set up a highly available LAMP application by scaling out each tier of our stack across two availability domains. We install the Python Flask application on our web server instances, create two of them across two availability domains, and use Oracle Cloud Infrastructure's public load balancer to spread the inbound traffic across both application servers. In the data tiers, we scale out our Memcached instances across two availability domains and configure our application servers to consistently hash the keys across the Memcached servers in both availability domains. For the database, we configure the MySQL primary in one availability domain and the secondary in the other, to act as an Active/Standby pair. No extra configuration is required to enable communication between instances in different availability domains; this works out of the box over Oracle Cloud Infrastructure's built-in SDN network. You can also scale the instances across three availability domains and have the public load balancer spread the web traffic across application servers in all three; the same applies to the instances in the cache and DB tiers. Note: Since subnets cannot span availability domains, we create separate subnets for the application, cache, and database tiers in the second availability domain. Attach the same security lists we used in Scenario 1 to these subnets. The additional subnets in this setup are:
AppSubnet2 (Public): (10.0.4.0/24), with access to the internet through an internet gateway.
CacheSubnet2 (Private): (10.0.5.0/24), a private subnet with no access to the internet, where cache instances reside.
DBSubnet2 (Private): (10.0.6.0/24), a private subnet with no access to the internet, where DB instances reside.
BastionSubnet2 (Public): (10.0.7.0/24), a public subnet that can be used as a jump box to access the instances in the private subnets.
Install and start the Python Flask server on both web instances in the application subnets:
sudo apt-get -y install python-pip
sudo pip install flask
Now you can configure your own Flask application or use the Flask application given in this blog post. Deploy the public load balancer pair in BastionSubnet and BastionSubnet2 to load balance traffic to the application servers. We have configured the Flask application server to listen on port 8080. To allow 8080 into our application instances, we need to edit the security list rules and the instance firewalls.
Security list for App subnet: Allow ingress on TCP port 8080 for accessing the Flask application from the public internet.
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
The high-level deployment architecture is illustrated below. Note: The Flask application in this example runs on port 8080 and is served by Flask's built-in web server. We are not using Apache2 to proxy traffic to Flask because that adds extra configuration overhead, but if you would like to do so, please refer to this documentation. We also need to add a listener for TCP port 8080 on Oracle Cloud Infrastructure's load balancer to route traffic to the Flask application instances. The sample application in this scenario does the following: When invoked the first time, it loads data from the MySQL database and populates the cache. The MySQL database in this example holds a pre-populated collection of movies, which get loaded into the cache. When invoked the second and subsequent times, the items are fetched directly from the cache instead of requiring a database lookup. The deployment in this scenario, including running the Flask application server on the web instances, can be automated using this Terraform script. The Flask application resides in the scenario-3.py file. https://github.com/abannang/OCI/tree/master/Memcached-OCI/Memcache%20TF%20scenario-3
There is one more step required before we start testing our application and Memcached deployment: downloading the sample dataset of movies used to populate the MySQL database. Here are the steps. Log in to your primary MySQL instance in AD1 and download the dataset:
curl -L http://downloads.mysql.com/docs/sakila-db.tar.gz | tar -xz
Once downloaded, you can feed the dataset into the MySQL instance by running:
mysql -u root -p < sakila-schema.sql
mysql -u root -p < sakila-data.sql
Now run mysql -u root -p and enter your password. This takes you into the MySQL shell with the default database. To use the downloaded dataset (sakila), enter use sakila; To see the tables in the sakila dataset, enter show tables; Now we are all set. We loaded the movies dataset into our MySQL database, and we can proceed with testing. Upon accessing the Flask application from the internet for the first time, using the load balancer's listener IP address, you should see the following displayed in your web browser.
Updated Memcached with MySQL data
This is when the data is loaded from the MySQL database and stored in the Memcached server. Upon accessing it a second time, you should see the items retrieved from Memcached, and the following should be displayed in the browser.
Loaded data from Memcached
2, ACE GOLDFINGER
7, AIRPLANE SIERRA
8, AIRPORT POLLOCK
10, ALADDIN CALENDAR
13, ALI FOREVER
This concludes our discussion of how to deploy and scale Memcached in a LAMP stack on Oracle Cloud Infrastructure.
Future extensions
We looked at how to deploy and scale Memcached instances on Oracle Cloud Infrastructure. There are many topics and services that were not covered in this blog post which can be particularly helpful for a large-scale deployment. Auto service discovery – Currently, when we create new Memcached instances, there is no way for the application servers to automatically detect the private IP addresses of the new cache instances and start sending traffic to them; this has to be updated manually in the application servers' config file. To alleviate this, we can use a centralized service for maintaining configuration information and providing distributed synchronization. There are various open source services available for this, such as ZooKeeper, etcd, and Consul. With such a centralized service, you can automatically scale the Memcached instances by registering their IPs with it, and the application servers no longer have to track them manually. Containerization – We used virtual machines (VMs) as our fundamental unit of deployment. Instead, we could use Docker containers, deploying our application, Memcached, and MySQL instances as containers. This has many benefits: platform independence (build it once, run it anywhere), better resource efficiency and density than VMs, and improved development velocity. Container orchestration – Once you have containerized Docker images of your LAMP application, you can leverage container orchestration services such as Kubernetes or Apache Mesos to deploy and manage these containers. This brings benefits like autoscaling of containers, dynamic resource scheduling, and centralized service discovery, and it is the ideal way to build an application that is fully cloud native and easy to deploy and scale.
In my next blog posts, I will demonstrate how to deploy and scale Redis on OCI, and subsequently how to containerize your applications and orchestrate them automatically using Kubernetes.
Abhiram Annangi | Twitter  LinkedIn


Oracle Cloud Infrastructure

Deploying, Securing, and Scaling Redis on Oracle Cloud Infrastructure

Redis (REmote DIctionary Server) is a popular open-source, in-memory data store that supports a wide array of data structures in addition to simple key-value pairs. It is a key-value database where values can contain complex data types, such as strings, hashes, lists, sets, sorted sets, bitmaps, and hyperloglogs, with atomic operations defined on those data types. Redis combines in-memory caching with built-in replication, persistence, sharding, and the master-slave architecture of a traditional database. Given the rich features offered by Redis out of the box, a wide variety of deployment options are available. First, let's go over a few important Redis constructs.
Single-instance architecture: Redis runs as a single-threaded application, called the Redis server. The Redis server is responsible for storing data in memory; it handles all management of that data and forms the major part of the architecture. A Redis client can be the Redis console client or any application that uses the Redis API.
Persistence: Redis stores everything in primary memory. Because primary memory is volatile, you lose all your stored data when you restart your Redis server, so you need a way for the data to persist. Redis can persist data by using a Redis Database File (RDB) or an Append Only File (AOF). RDB is a snapshot-style persistence format, which copies all the data in memory and stores the copies in secondary storage. This happens at specified intervals, so you could lose data that is set after RDB's last snapshot. AOF is a change-log-style persistence format, which logs all the write operations received by the server, so every operation is persisted. The problem with AOF is that it writes to disk for every operation, which is expensive, and the AOF file is also larger than the RDB file.
Backup and recovery: Redis doesn't provide any mechanism for data store backup and recovery. Therefore, if there's a hard disk crash or any other kind of disaster, all data is lost. You can, however, use RDB snapshots or AOF logs and store them in Oracle Cloud Infrastructure Object Storage, which provides durable storage. I'll discuss this more when we start architecting a Redis cluster on Oracle Cloud Infrastructure.
Partial high availability: Redis supports replication, both for high availability and to separate read workloads from write workloads. Redis asynchronously replicates its data to one or more nodes, called read replicas. This is similar to a master-slave architecture, with the Redis primary node being the master, which handles both reads and writes, and the read replicas (slaves) handling only reads. All the slaves contain exactly the same data as the master. If the master node fails (a crash of the master with loss of data on disk), Redis gives you the ability to convert a slave into a master. Many monitoring solutions can perform this action; the most commonly used is Redis Sentinel, which can handle service discovery and automatic failover of Redis instances.
Maximum high availability: Clustering, although complicated, provides the highest level of availability for Redis instances. Redis achieves clustering by partitioning and replicating data. Partitioning involves sharding your data into multiple Redis instances so that every instance contains only a subset of the keys. Sharding helps by taking some of the load off a particular instance as the data volume grows, and it also reduces the impact when a node fails.
Redis supports primitive types of partitioning, such as range partitioning and hash partitioning. It doesn't natively support consistent hashing, because its data structures (such as multidimensional sets, lists, and hashes) can't be horizontally sharded. But if you are using Redis just to store simple key-value pairs, you can leverage consistent hashing. This blog post starts with a simple walkthrough of installing and securing a Redis instance on Oracle Cloud Infrastructure. Subsequently, we scale the Redis instances across multiple availability domains to demonstrate Redis replication and clustering, which provide high availability for Redis instances. This post uses the following setup. For more information about Oracle Cloud Infrastructure instance shapes for Redis, see this white paper. Oracle Cloud Infrastructure instance shapes: VM.Standard2.4 (application servers), VM.Standard2.2 (Redis), BM.Standard2.1 (database). Oracle Cloud Infrastructure services: Networking (VCN, internet gateway, route tables, security lists), Load Balancing. Operating system: Oracle Linux running on a KVM hypervisor (Oracle Cloud Infrastructure provided image "Oracle-Linux-7.4-2017.12.18-0"). Third-party service: Redis.
Scenario 1: Installing and Securing a Single Redis Instance on Oracle Cloud Infrastructure
This scenario demonstrates how to install and secure a standalone Redis instance running on Oracle Linux 7. Redis was designed for use by trusted clients in a trusted environment, and it has no robust security features of its own. It does, however, have a few security features, including a basic unencrypted password and command renaming and disabling. This scenario provides instructions on how to configure these security features, and also covers a few other settings that can boost the security of a standalone Redis installation on Oracle Linux 7 and secure access to your Redis instance. The following diagram shows the high-level deployment architecture.
Create a VCN and Subnets
Create a VCN with two subnets to house a bastion server and the Redis server. Ensure that the VCN CIDR is big enough to accommodate more subnets if you plan to expand your deployment in the future by introducing Redis clustering and adding application servers or database servers. For more information about how to create a VCN and associated best practices, see the VCN Overview and Deployment Guide. Build the bastion server in a public subnet and the Redis server in a private subnet. By doing this, you restrict public access to the Redis instance and allow access only via the bastion server. Following are the two subnets:
BastionSubnet: (10.0.0.0/24), a public subnet that can be used as a jump box to access the instances in the private subnets
CacheSubnet (Private): (10.0.1.0/24), a private subnet with no access to the internet, where cache instances reside
Attach the following stateful security lists to each subnet to restrict access further:
Security list for Bastion subnet: Allow ingress access on TCP port 22 from the public internet, to allow SSH access to the bastion host. Allow egress of all protocols.
Security list for Redis subnet: The Redis server listens on TCP port 6379 by default. Allow ingress access on TCP port 6379 for accessing the Redis instance from the bastion server only. Also allow ingress access on TCP port 22 for SSH access from the BastionSubnet private IP address range. Allow egress access to all protocols.
Note: Because the Redis instances are in a private subnet, they will not have internet access, so running commands that update yum repositories and download Redis binaries will fail. To work around this issue, create NAT instances in the same public subnet as the bastion host (or a different public subnet) and route the internet-bound traffic through the NAT instances. For the sake of brevity, this post doesn't show how to do that, but you can read another blog post for more information about how to set up NAT instances and automate their deployment by using Terraform.
Install Redis
Before you can install Redis, you must first add an Extra Packages for Enterprise Linux (EPEL) repository to the server's package lists. EPEL is a package repository that contains a number of open-source add-on software packages, most of which are maintained by the Fedora Project.
sudo yum install epel-release
After the EPEL installation has finished, install Redis:
sudo yum install redis -y
After the installation completes, start the Redis service:
sudo systemctl start redis.service
If you want Redis to start on boot, you can enable it with the enable command:
sudo systemctl enable redis
Check the status of Redis by running the following command; the output should show the service as active (running):
sudo systemctl status redis.service
After you confirm that Redis is running, test the setup with this command:
redis-cli ping
If PONG is the response, Redis is running on your server, and you can begin configuring it to enhance its security.
Secure the Redis Instance
An effective way to safeguard Redis is to secure the server on which it's running. You can do this by ensuring that Redis is bound only to localhost or to a private IP address, and that the server has a firewall up and running. In this case, because we plan on setting up a Redis cluster to interact with other Redis instances and application servers in the subsequent scenarios, we'll bind the Redis server to accept connections on the private IP address of the instance. Locate the private IP address of your instance and update the bind directive in the /etc/redis.conf file, as in the sketch below.
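A minimal sketch of the change, assuming the instance's private IP address is 10.0.1.10 (substitute the address shown for your instance in the console):

# /etc/redis.conf
bind 10.0.1.10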
Update the firewall rules on the instance to allow inbound access to the Redis server, which by default listens on TCP port 6379. You should allow access to your Redis server only from your own hosts, by using their private IP addresses, in order to limit the number of hosts your service is exposed to.
sudo firewall-cmd --permanent --new-zone=redis
sudo firewall-cmd --permanent --zone=redis --add-port=6379/tcp
sudo firewall-cmd --permanent --zone=redis --add-source=<client_server_private_IP>
sudo firewall-cmd --reload
The preceding steps ensure security at the instance level by limiting access to the Redis instance. You can further secure it by adding a password for authentication, which requires clients to authenticate before being allowed access to the Redis store. Like the bind setting, the password is configured directly in the Redis configuration file, /etc/redis.conf. In that file, scroll to the SECURITY section and look for the commented directive requirepass foobared. Uncomment it and change foobared to a very strong password of your choosing. Rather than making up a password yourself, you can use a tool like apg or pwgen to generate one. If you don't want to install an application just to generate a password, you can use the following command instead. Note that entering this command as written generates the same password every time; to create a different password, change the word in quotation marks to any other word or phrase.
echo "OCI" | sha256sum
Although the generated password is not pronounceable, it is very strong and very long, which is exactly the type of password required for Redis. After copying and pasting the output of that command as the new value for requirepass, that section should look as follows:
/etc/redis.conf
requirepass password_copied_from_output
After setting the password, save and close the file, then restart Redis:
sudo systemctl restart redis.service
Note: Redis also gives you the option to rename or completely disable certain commands that are considered dangerous. When run by unauthorized users, such commands can be used to reconfigure, destroy, or otherwise wipe your data. See Redis Security for more information about this feature.
Scenario 2: Scaling Redis by Using Replication (Master-Slave Model)
Scenario 1 laid a solid foundation for launching a single Redis instance on Oracle Cloud Infrastructure. This scenario describes how you can scale Redis, which is especially useful for easing the pressure on a single instance as the traffic volume increases, while providing high availability. It demonstrates master-slave replication with three Redis instances spread across two availability domains. One of the nodes is the primary, or master, and the other two are read replicas, or slaves. By designing the architecture this way, you split your database read traffic across the read replicas while the master node handles all the write traffic. Spreading the nodes across two availability domains also provides high availability if an entire availability domain fails. If the availability domain that contains the master node fails, you can quickly elect one of the slave instances to be the new master, either manually or automatically using Redis Sentinel (discussed briefly below). Note: If you plan to use Redis Sentinel to monitor your instances, open TCP port 26379 both on the instance firewall and in the subnet's security lists. Redis Sentinel requires at least three Redis nodes running. For more information, see the official documentation on Redis Sentinel. The following diagram shows the high-level deployment architecture.
Create a VCN and Subnets
Create a VCN with three subnets to house a bastion server and the Redis instances. Ensure that the VCN CIDR is big enough to accommodate more subnets if you plan to expand your deployment in the future by introducing Redis clustering and adding application servers or database servers. For more information about how to create a VCN and associated best practices, see the VCN Overview and Deployment Guide. As in Scenario 1, build the bastion server in a public subnet and the Redis instances in private subnets.
BastionSubnet: (10.0.0.0/24), a public subnet in AD1 that can be used as a jump box to access the instances in the private subnets
CacheSubnet-1 (Private): (10.0.1.0/24), a private subnet in AD1 with no access to the internet, where the Redis master and one read replica reside
CacheSubnet-2 (Private): (10.0.2.0/24), a private subnet in AD2 where the second read replica resides
Attach the following stateful security lists to each subnet to restrict access further:
Security list for Bastion subnet: Allow ingress access on TCP port 22 from the public internet, to allow SSH access to the bastion host. Allow egress of all protocols.
Security list for Redis subnets: Allow ingress access on TCP port 6379 for accessing the Redis instance from the other Redis subnets only; this facilitates communication between the Redis instances. Also allow ingress access on TCP port 22 for SSH access from the BastionSubnet private IP address range. Allow egress access to all protocols.
Install Redis
Install Redis on the three instances as described in Scenario 1. Then you can configure replication on the Redis instances.
Configure Replication on the Master Node
Configuring a Redis primary node (master) is not very different from configuring a standalone Redis instance as described in Scenario 1. There are a few things worth noting:
Secure the master by using a strong password, as discussed earlier.
Specify your cache eviction policies, or just use the defaults.
Specify a sensible value for the TCP keepalive timeout. This ensures that the master node stays connected with its clients on a periodic basis.
Bind the server to the private IP address of the instance.
Redis provides backup of the data by persisting it to disk. You can specify the backup type that you need and the file used to back up this data. For more information about backups, see this white paper.
You can edit all of the preceding settings in the /etc/redis.conf file. Be sure to restart the Redis server after making changes to the config file. You can test the newly created master by running the redis-cli info replication command (shown in the sketch below); the output should indicate that no slave nodes are connected to it yet.
Configure Replication on the Slave Nodes
You need to make a few changes on the slave instances to allow communication with the master:
Bind the server to the private IP address of the instance.
Secure your slave instance by using a strong password.
Uncomment the following line in the /etc/redis.conf file and indicate the IP address where the master server can be reached, followed by the port set on that machine (6379 by default):
slaveof <your_redis_master_ip> 6379
Uncomment the masterauth line and provide the password/passphrase that you set up earlier on the master server:
masterauth <your_redis_master_password>
Restart the service as you did on the master server, to reinitialize Redis and load the modified config file. Then verify the configuration with the Redis info command, which reports information about replication; a sketch of what to expect on the slave, and of the mirrored view on the master, follows.
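Here is a sketch of the verification, assuming the password set earlier; the exact fields vary by Redis version, but the role and connection fields are the ones to check:

redis-cli -a <your_redis_password> info replication

# On a slave, the output should contain lines like:
#   role:slave
#   master_host:<your_redis_master_ip>
#   master_link_status:up

# On the master, it should contain lines like:
#   role:master
#   connected_slaves:1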
As you can see, the master and slave servers correctly identify one another in their defined relationship. You can follow the same approach to set up the second slave in the second availability domain; the process is exactly the same. After replication is fully set up, you can manually promote a slave to master to test failover. We recommend using Redis Sentinel to do this automatically, but the deployment of Redis Sentinel is beyond the scope of this post.
Scenario 3: Scaling Redis by Using Clustering
This scenario discusses clustering in Redis, which involves sharding of data in addition to some other constructs provided by Redis. According to the official Redis documentation, Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes. Redis Cluster also provides some degree of availability during partitions: in practical terms, the ability to continue operations when some nodes fail or are unable to communicate. So, in practical terms, what do you get with Redis Cluster?
The ability to automatically split your dataset among multiple nodes.
The ability to continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster.
Notes: Every Redis Cluster node requires two open TCP connections: the normal Redis TCP port used to serve clients (for example, 6379), plus the port obtained by adding 10000 to the data port (for example, 16379). This second, high port is used for the cluster bus, a node-to-node communication channel that uses a binary protocol. This means you should also open the additional TCP port 16379 in the security lists on all the Redis instance subnets. Only Redis 3.0 and later support clustering. The Redis documentation strongly encourages a minimum cluster of six Redis nodes, with three masters. The following diagram shows the high-level deployment architecture. Use the following table to track the Redis instances you plan to have and their roles:
Availability Domain / Role / Private IP Address
AD1 / Master1 / 10.0.1.66
AD1 / Master3 / 10.0.1.68
AD2 / Master2 / 10.0.2.66
AD1, AD1, AD2 / Slave1, Slave2, Slave3 / 10.0.1.69, 10.0.1.67, 10.0.2.69
Create a VCN and Subnets
Create a VCN with three subnets to house a bastion server and the Redis instances. As in the preceding scenarios, build the bastion server in a public subnet and the Redis instances in private subnets.
BastionSubnet: (10.0.0.0/24), a public subnet in AD1 that can be used as a jump box to access the instances in the private subnets
CacheSubnet-1 (Private): (10.0.1.0/24), a private subnet in AD1 with no access to the internet, where one Redis master and one read replica reside
CacheSubnet-2 (Private): (10.0.2.0/24), a private subnet in AD2 where the second Redis master and read replica reside
Attach the following stateful security lists to each subnet to restrict access further:
Security list for Bastion subnet: Allow ingress access on TCP port 22 from the public internet, to allow SSH access to the bastion host. Allow egress of all protocols.
Security list for Redis subnets: Allow ingress access on TCP ports 6379 and 16379 for communication between a Redis instance and the other Redis subnets only. Also allow ingress access on TCP port 22 for SSH access from the BastionSubnet private IP address range. Allow egress access to all protocols.
Install and Secure Redis
Install and secure the Redis instances as described in Scenario 1.
Enable Cluster Communication
Enable cluster mode on all the Redis instances, add some other information for enabling cluster communication, and then restart the Redis server. To do this, edit the following lines in /etc/redis.conf:
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
Create and Test the Cluster
Now that you have a number of instances running, you can create the cluster by writing some meaningful configuration to the nodes. This is very easy to do with the Redis Cluster command-line utility called redis-trib, a Ruby program that executes special commands on instances in order to create new clusters, check or reshard an existing cluster, and so on. The redis-trib utility is in the src directory of the Redis source code distribution. You need to install the Redis gem to be able to run redis-trib.
Install Ruby on your machine prior to running the following command:
gem install redis
To create your cluster, enter the following command, listing all six instances from the table; the first three become the masters, and with the --replicas 1 option each master gets one slave:
./redis-trib.rb create --replicas 1 10.0.1.66:6379 10.0.1.68:6379 10.0.2.66:6379 \
10.0.1.69:6379 10.0.1.67:6379 10.0.2.69:6379
The --replicas 1 option means that you want a slave for every master created. The other arguments are the list of addresses of the instances you want to use to create the new cluster. The cluster is configured and joined, which means that the instances are bootstrapped into talking with each other. If everything went well, redis-trib reports success with a message ending in [OK] All 16384 slots covered. Test your cluster by running the following command on any cluster node:
redis-cli -h 10.0.1.66 cluster nodes
The output should list all the nodes in the cluster, with their roles and the hash slots assigned to each master. This is the simplest way to deploy a Redis cluster on Oracle Cloud Infrastructure. Clustering in Redis is fairly complex; for more information, see Redis Clustering. In this blog post, I discussed how to deploy and scale Redis instances on Oracle Cloud Infrastructure. There can be many enhancements to this deployment, like Dockerizing the Redis instances and orchestrating them using Kubernetes. I'll cover those enhancements in my next blog post.
Abhiram Annangi | Twitter  LinkedIn


Developer Tools

IP Failover Using Python SDK, Instance Principals, and Reserved Public IPs

Building resilient architecture is much easier in the cloud than in a private data center, and the Oracle Cloud Infrastructure team is constantly working to make it even easier. In the last few weeks, our Python SDK added support for many features, including these key capabilities:
Instance Principals for IAM: Instance Principals provide automatic authentication for API calls made from an instance, without having to place an API key on the machine. The authentication certificates are automatically generated, provided through the metadata server, and rotated on a regular basis.
Reserved Public IPs: Reserved public IP addresses can be assigned to any compute instance private IP address, can float between instances, and are reserved for their tenancy until explicitly deleted.
We can use these features together to simplify failover between compute instances. You can read more about the differences between ephemeral and reserved IP addresses in the Reserved Public IPs article, but the important factor to note is that although ephemeral addresses are allocated within an availability domain, reserved ones can be moved within a region. This allows for easy regional, multiple-availability-domain redundancy without the need to change route tables or manage DNS records - just reassign a reserved IP address from an instance in one availability domain to an instance in another. Now I'll show you how you can fail over IP addresses and use Instance Principals in your Python code. First, create resources by using the CLI. If you're unfamiliar with the oci-cli utility, review the documentation: https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/cliconcepts.htm
oci network public-ip create --compartment-id ocid1.compartment.oc1..aaaaaaaa --lifetime RESERVED
Now create a dynamic group and policy for the virtual machines:
oci iam dynamic-group create --compartment-id ocid1.tenancy.oc1..aaaaaaaa --description FailoverGroup --matching-rule "instance.compartment.id = ocid1.compartment.oc1..aaaaaaa" --name FailoverGroup
oci iam policy create --compartment-id ocid1.tenancy.oc1..aaaaaaaas --name FailoverPolicy --description "Failover management policy" --statements '["allow dynamic-group FailoverGroup to manage public-ips in compartment id ocid1.compartment.oc1..aaaaaaaae", "allow dynamic-group FailoverGroup to manage route-tables in compartment id ocid1.compartment.oc1..aaaaaaaa ", "allow dynamic-group FailoverGroup to manage private-ips in compartment id ocid1.compartment.oc1..aaaaaaaa"]'
Then install the Python SDK on the machine that will perform the failover. Good options are an external monitoring instance running Monit, or Corosync and Pacemaker installed directly on the instances:
sudo yum install -y python2-pip
sudo pip install oci
Using Instance Principals in Python Code
To use the Instance Principals functionality, instead of providing configuration to the Signer, you can leverage oci.auth.signers.InstancePrincipalsSecurityTokenSigner(). Defining the signer and then passing it to the API client are the only required steps in the program:
signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
network = oci.core.virtual_network_client.VirtualNetworkClient(config={}, signer=signer)
It's that simple!
Failover
Examine the following example script (failover.py), and use it to fail over a public IP address between instances. Note the public IP OCID that you want to reassign and the private IP OCID that you want to associate with the public IP.
Run the following command:
python failover.py -u <public_ip_ocid> -p <private_ip_ocid>
Now you can leverage this script or a similar one with your Keepalived or Corosync/Pacemaker configuration. For a great example, see Automatic Virtual IP Failover on Oracle Cloud Infrastructure Looks Hard, But it isn't by Gilson Melo.
Failover.py
#!/usr/bin/python
'''
Author: Marcin Zablocki
Description: Example script to move a public IP to a private IP for
failover purposes, using Instance Principals.
'''
import argparse
import sys
import oci

# configuration parameters
SIGNER = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
NETWORK = oci.core.virtual_network_client.VirtualNetworkClient(config={}, signer=SIGNER)
COMPUTE = oci.core.compute_client.ComputeClient(config={}, signer=SIGNER)

PARSER = argparse.ArgumentParser()
PARSER.add_argument("-u", "--public", help="public IP OCID", required=True)
PARSER.add_argument("-p", "--private",
                    help="private IP OCID. If not specified public IP will be detached")
PARSER.add_argument("-r", "--rt_id", help="route table OCID")
ARGS = PARSER.parse_args()

PRIVATE = ARGS.private
PUBLIC = ARGS.public
ROUTE_TABLE_ID = ARGS.rt_id

if not PRIVATE:
    PRIVATE = str("")


def update_default_route(route_table_id, private):
    """
    Update the specified route table, replacing its rules with a single
    0.0.0.0/0 rule that points at the given private IP. Route rules are a
    list of RouteRule objects; updating a route table uses an
    UpdateRouteTableDetails object containing the RouteRules.
    """
    route_table_details = oci.core.models.UpdateRouteTableDetails()
    route_rules = []
    route_rules.append(
        oci.core.models.RouteRule(
            cidr_block='0.0.0.0/0',
            network_entity_id=private)
    )
    route_table_details.route_rules = route_rules
    try:
        request = NETWORK.update_route_table(route_table_id, route_table_details).data
        print "Route table " + str(request.display_name) + " updated"
    except Exception as exception_message:
        print("Failed to update the route rule. Exception: ")
        print(exception_message)
        sys.exit(1)


def activate(private, public):
    '''
    Map the public IP to the private IP, or detach the public IP if no
    private IP is specified.
    '''
    if private:
        if not ROUTE_TABLE_ID:
            print "route table OCID not found. Updating only IP association"
        else:
            update_default_route(ROUTE_TABLE_ID, private)
    ip_details = oci.core.models.UpdatePublicIpDetails()
    ip_details.private_ip_id = private
    public_ip_details = NETWORK.get_public_ip(public).data
    if public_ip_details.private_ip_id == private:
        private_ip_details = NETWORK.get_private_ip(private).data
        print("IP " + str(public_ip_details.ip_address) +
              " already assigned to private IP " +
              str(private_ip_details.ip_address))
        sys.exit(1)
    try:
        NETWORK.update_public_ip(public, ip_details)
        if not private:
            print("IP unassigned")
        else:
            print("IP assigned")
    except Exception as exception_message:
        print("Failed to update Public IP. Exception:")
        print(exception_message)


if __name__ == "__main__":
    activate(PRIVATE, PUBLIC)
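Because the script also accepts a route-table OCID (the -r option handled by update_default_route above), you can repoint a private route table's default route at the new node as part of the same failover. A usage sketch, with placeholder OCIDs:

python failover.py -u <public_ip_ocid> -p <private_ip_ocid> -r <route_table_ocid>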


Events

NVIDIA Tesla GPUs Based on Volta Architecture Generally Available in Oracle Cloud Infrastructure

Just six months ago at Oracle's OpenWorld last October, we announced a collaboration with NVIDIA and released our first bare-metal GPU offering based on the company's Pascal architecture. This offering saw customers running production workloads with use cases in both traditional HPC and AI and deep learning. We also made a promise to our customers to continue to innovate on our offerings and be at the cutting edge of accelerated hardware in the cloud. Hence, we're excited to announce the general availability of NVIDIA's Tesla GPUs, based on the Volta architecture, as a new Oracle Cloud Infrastructure compute instance offering. Today, you can launch a compute instance with eight NVIDIA Tesla V100 GPUs with NVLINK on our high-performance cloud, which provides an industry-leading networking stack and NVMe block storage. This new compute instance is available today in our US Ashburn region, with global expansion in the near future. This offering joins our previously released instances that provide up to two Pascal-based Tesla GPUs, available in both US and Europe regions. This new offering makes Oracle Cloud Infrastructure the best price-performance option compared to other public cloud providers. Along with the bare metal offering available today, we will be releasing virtual machine support for NVIDIA Tesla Volta in the coming weeks, allowing customers to launch VMs with 1, 2, or 4 GPUs per VM. We're also going to innovate further and continue our collaboration with NVIDIA by providing customers the newly announced Tesla V100 32GB GPUs. We are excited to offer customers these GPUs, with double the memory capacity, as part of our future offering.
Expanded Access to Deep Learning and HPC Tools, On-Demand
Additionally, we're excited to announce limited availability of NVIDIA GPU Cloud (NGC) on Oracle Cloud Infrastructure for both Pascal- and Volta-based compute offerings. With NVIDIA GPU Cloud, researchers and data scientists get access to a wide range of GPU-optimized software tools for deep learning and HPC. It also includes tuned, tested, and maintained containers for top deep learning frameworks. You can launch the NGC image in the Oracle Cloud Infrastructure (OCI) web console, and find more information at https://cloud.oracle.com/iaas/gpu. "AI is a strategic imperative for every industry. With the availability of Tesla V100 in OCI, researchers and developers can tap into the world's fastest accelerators to fuel faster discoveries and insights," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. "The integration of NVIDIA GPU Cloud's software containers optimized to fully leverage the Tesla V100 will ensure that enterprises around the world can access the technology they need to accelerate their AI research and deliver powerful new AI products and services."
We're Not Done Yet! Now, Design and Engineering Apps on the Oracle Cloud
We are also launching limited availability of NVIDIA GRID for GPU-accelerated graphics on Oracle Cloud Infrastructure. With it, creative and technical professionals can maximize their productivity from anywhere by accessing the most demanding professional design and engineering applications from the cloud. We're partnering with leading providers Citrix and Teradici to enable a more complete experience for customers. Customers can run Citrix's XenApp and XenDesktop, powered by Citrix HDX technologies.
HDX 3D Pro technologies optimize the performance of graphics-intensive 3D professional applications for Windows and Linux virtual desktops. "The availability of NVIDIA's Tesla GPUs on Oracle Cloud Infrastructure is a compelling offering for any customers that need to provide access to 3D graphics applications," said Sridhar Mullapudi, vice president, product management, Citrix. "By combining the power of Oracle Cloud Infrastructure, NVIDIA Tesla GPUs, and Citrix HDX 3D Pro, organizations can provide premium graphics performance to engineers, designers, and other professionals without worrying about the cost or the limitations of requiring expensive graphics workstations. It's a particularly compelling solution for businesses that need to provide remote access to graphics-intensive applications." Oracle and Teradici are also making Teradici Cloud Access Software, powered by the PCoIP protocol, available to try on Oracle's GPU instances. Whether it's a specialized tool for analyzing seismic models or an entire workspace, Teradici Cloud Access Software is simple to deploy and allows users to securely access the same applications and workflows they are already familiar with, from anywhere and at any time. You can start testing Teradici Cloud Access Software today through the Oracle Cloud Infrastructure Jump Start program. "Teradici is thrilled to work with Oracle to enable customers to easily try Cloud Access Software on Oracle Cloud Infrastructure. NVIDIA Tesla GPU instances on Oracle Cloud Infrastructure with Teradici Cloud Access Software deliver a highly responsive user experience for graphics-intensive applications in industries such as Media & Entertainment, Design Manufacturing, Architecture, Engineering & Construction, and Oil & Gas," said Ziad Lammam, vice president, product management and marketing, Teradici. You'll have a chance to see and try out these offerings live in person this week at NVIDIA's GPU Technology Conference (GTC). Come by Oracle's booth #822 this week to chat with our engineering and product teams. Additionally, mark your calendars to attend our technical sessions this week as well:
"Advantages of a Bare-Metal Cloud for GPU Workloads", Tuesday, Mar 27, 3:30 PM - 4:20 PM at Marriott Ballroom 2
"Compute Engineering Simulation Processing in Oracle Cloud Infrastructure", Wednesday, Mar 28, 2:30 PM - 2:55 PM at Marriott Ballroom 2
We are certain that after today's announcements, Oracle Cloud Infrastructure is the best place for all things accelerated. Whether you are running compute-intensive workloads or graphics-intensive applications, Oracle Cloud Infrastructure has it all! For more information on any of today's announcements, please visit https://cloud.oracle.com/iaas/gpu. To learn more about high-performance computing on Oracle Cloud Infrastructure, please visit https://cloud.oracle.com/iaas/hpc.


Deploying Highly Available DC/OS on Oracle Cloud Infrastructure with Terraform

Introduction
This post provides a Terraform template to automatically deploy DC/OS on Oracle Cloud Infrastructure. DC/OS is an open-source, distributed data center operating system based on the Apache Mesos distributed system kernel. It manages multiple systems from a single interface and enables the deployment of containers, distributed services, and applications into these systems. DC/OS consists of a group of master and agent nodes that form a cluster. The Terraform template automatically deploys this DC/OS cluster on Oracle Cloud Infrastructure. The template consists of a set of Terraform modules and an example base configuration that is used to provision and configure the resources needed to run a highly available and configurable DC/OS cluster on Oracle Cloud Infrastructure.
Oracle Cloud Infrastructure Environment
You can deploy the DC/OS cluster to any region within Oracle Cloud Infrastructure. For high availability, we recommend deploying the DC/OS master and agent nodes across multiple availability domains of an Oracle Cloud Infrastructure region. The following diagram illustrates an example deployment of a DC/OS cluster with master, public agent, GPU agent, and regular private agent nodes. The Terraform template provisions the following compute instances:
Bootstrap node
Master node
Public agent node
Agent node
GPU agent node
The template accepts several input variables to choose instance shapes (including GPU shapes), the number of these instances, and how these instances are placed across multiple availability domains. If your requirements extend beyond the base configuration, you can customize the related modules to form your own configuration.
Deploy DC/OS Cluster
Prerequisites
Download and install Terraform (v0.10.3 or later).
Download and install the Oracle Cloud Infrastructure Terraform Provider (v2.0.0 or later).
Download the Oracle Cloud Infrastructure Terraform DC/OS Installer template. This template is in the process of being uploaded to Oracle's public GitHub; please reach out to me to get it.
Quick Start
Customize your configuration. Open the "env-vars" file in the project root, which specifies your configuration. Set the mandatory Oracle Cloud Infrastructure input variables related to your tenancy, user, and compartment:
Tenancy OCID
User OCID
API fingerprint
API private key
Compartment OCID
Public and private key pairs for SSH access to DC/OS instances
Check the key default input variables and update them accordingly:
dcos_installer_url: The URL to get the DC/OS code. The default value is set to the early access release. You can change it to the stable release: https://downloads.dcos.io/dcos/stable/dcos_generate_config.sh
dcos_master_ad1_count, dcos_master_ad2_count, dcos_master_ad3_count: The number of master nodes in each availability domain.
dcos_agent_ad1_count, dcos_agent_ad2_count, dcos_agent_ad3_count: The number of agent nodes in each availability domain.
dcos_public_agent_ad1_count, dcos_public_agent_ad2_count, dcos_public_agent_ad3_count: The number of public agent nodes in each availability domain.
dcos_gpu_agent_ad1_count, dcos_gpu_agent_ad2_count, dcos_gpu_agent_ad3_count: The number of GPU agent nodes in each availability domain.
A minimal sketch of what "env-vars" might contain is shown below.
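The variable names here are illustrative assumptions (check the file shipped with the template for the exact names); Terraform picks up anything exported with the TF_VAR_ prefix.

export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa..."
export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaaa..."
export TF_VAR_fingerprint="xx:xx:xx:..."
export TF_VAR_private_key_path="$HOME/.oci/oci_api_key.pem"
export TF_VAR_compartment_ocid="ocid1.compartment.oc1..aaaaaaaa..."
export TF_VAR_region="us-phoenix-1"
export TF_VAR_ssh_public_key="$(cat $HOME/.ssh/id_rsa.pub)"
export TF_VAR_ssh_private_key="$(cat $HOME/.ssh/id_rsa)"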
Source the "env-vars" file: `$ . env-vars`
Initialize Terraform: `terraform init`
View what Terraform plans to do before actually doing it: `terraform plan`
Provision the DC/OS resources and set up the DC/OS cluster on Oracle Cloud Infrastructure: `terraform apply`
When resource provisioning is complete, the Terraform template shows the following output:
Outputs:
master_private_ips = [
  xx.xx.xx.xx,
  xx.xx.xx.xx,
  xx.xx.xx.xx
]
master_public_ips = [
  xx.xx.xx.xx,
  xx.xx.xx.xx,
  xx.xx.xx.xx
]
Notes: The deployment of the agent, public agent, and GPU agent nodes depends on the deployment of the bootstrap node. However, with Terraform modules there is no mechanism to explicitly set this dependency as is typically done between Terraform resources; the keyword "depends_on" does not work in Terraform modules. To work around this, we create an implicit dependency by using variables. For instance, in the module "dcos_agent_ad1", a variable "dcos_bootstrap_instance_id" is defined, which depends on the instance ID created by the "dcos_bootstrap" module.
module "dcos_agent_ad1" {
    source              = "./instances/agent"
    count               = "${var.dcos_agent_ad1_count}"
    availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[0],"name")}"
    compartment_ocid    = "${var.compartment_ocid}"
    tenancy_ocid        = "${var.compartment_ocid}"
    dcos_cluster_name   = "${var.dcos_cluster_name}"
    image               = "${var.InstanceImageOCID[var.region]}"
    shape               = "${var.AgentInstanceShape}"
    subnet_id           = "${module.vcn.subnet_ad1_id}"
    ssh_public_key      = "${var.ssh_public_key}"
    ssh_private_key     = "${var.ssh_private_key}"
    display_name_prefix = "ad1"
    dcos_bootstrap_instance_id = "${module.dcos_bootstrap.instance_id}"
}
Access DC/OS Cluster
Once the DC/OS deployment is complete, you can access the DC/OS dashboard at http://<master_public_ip>/
Next Steps
You can scale the number of agent, public agent, or GPU agent nodes up and down by using the input variables. For instance, you can scale up the number of agent nodes as follows:
$ terraform apply -var "dcos_agent_ad1_count=xx"
Conclusion
Oracle Cloud Infrastructure provides a high-performance and flexible environment to run DC/OS and its services. With this Terraform DC/OS installer, you can automate the deployment of a DC/OS cluster on OCI.


Oracle Cloud Infrastructure

PeopleSoft Modernization and Benchmarking on Oracle Cloud Infrastructure

Processing payroll is challenging because it has the highest visibility if things go wrong. Time and Labor has volatile cycles that demand out-of-cycle testing to assure success. Human Capital Management is a top-line expense that interfaces with all enterprise business systems. Creating paychecks has hundreds of variables beyond managing standard deductions, such as raises, new hires, benefits enrollments, leaves of absence, holidays, and bargaining units (unions). Despite these challenges, employees need to be paid on a regular cadence. Many organizations rely on PeopleSoft to help with these challenges.

In the past decade, there have been industry-wide initiatives within IT organizations to move traditional commodity infrastructure workloads from data centers to cloud service providers. One of the challenges customers face when adopting cloud technologies is quality assurance. Adopters face many choices for sizing their core business technologies and applications across software, platform, and infrastructure cloud services. They must be assured that the cloud service provides performance, stability, agility, and operability at a value for their investment.

Oracle has decades of experience provisioning and running Oracle products like PeopleSoft. Our hardware and software choices, staff expertise, and long-honed IT processes are all best-of-breed for managing Oracle workloads. Our cloud is designed with enterprise applications in mind. This makes Oracle Cloud Infrastructure the best place to run PeopleSoft.

Oracle Cloud Jump Start allows users to try pre-configured solutions running on Oracle Cloud Infrastructure, for free. Within minutes, a Demo Lab is launched, enabling users to start learning about these innovative solutions from Oracle's consulting and technology partners. Users can register with the partner and try any Jump Start Lab for a few hours, for free. One of the great benefits of Jump Start is that customers become familiar with the level of effort and capacity needed to deploy application environments on Oracle Cloud Infrastructure. They can envision rapidly building out development and test environments to forecast production needs.

In Mythics' Demo Lab, you can see how modern and scalable infrastructure improves PeopleSoft performance under demanding, variable loads. The Mythics Jump Start Demo Lab provides hands-on experience building, running, and testing a freshly installed PeopleSoft 9.2 HCM instance. The Demo Lab consists of two instances deployed on some of the smallest (and cheapest) virtual machine shapes available in Oracle Cloud Infrastructure. The Jump Start provides a User Guide that contains a test script for processing a full payroll cycle against demo data. During the Jump Start, users have the opportunity to run it against both instances. Your instance is monitored by Oracle Management Cloud (OMC) to provide performance analytics that assist with sizing user testing.

The Mythics Jump Start Demo Lab showcases the value-added resources of Oracle Cloud Infrastructure. Mythics uses Terraform to showcase automation and orchestration for all of its virtual and bare metal deployments. Infrastructure as a Service (IaaS) makes it easy to spin up environments and scale them up and out at the snap of a finger: environments come up in a matter of minutes and hours, not days and weeks.

Mythics is a multiple Oracle Partner of the Year winner with certifications in 36 technologies and a 16-year track record of serving government and private sector customers.
They focus on helping customers to select, acquire, and deploy the right Oracle solutions to meet their needs. Ready to test-drive Oracle Applications on our Cloud for free? Hop in and give the Mythics Jump Start demo lab a try.


Oracle Cloud Infrastructure

Silver Peak WAN Optimization in Oracle Cloud Infrastructure

We are pleased to announce the availability of Unity EdgeConnect, a software-defined wide area network (SD-WAN) solution by Silver Peak. With Unity EdgeConnect, companies can connect their branch offices to applications on Oracle Cloud Infrastructure (OCI) over broadband internet, providing higher availability, better and more predictable bandwidth, lower latency, better application performance, and much lower costs than traditional dedicated WAN links. New branch offices can be added to the SD-WAN in minutes, as opposed to the lengthy provisioning cycles of a traditional WAN. Employees at the branch offices enjoy faster and more reliable access to their business applications.

Unity EdgeConnect

Unity EdgeConnect includes embedded WAN optimization and provides secure and reliable virtual overlays to connect enterprise branch locations to workloads in the Oracle Cloud. It pools and optimizes any combination of connectivity types (standard internet, FastConnect, MPLS, LTE, and so on) into high-performance virtual WANs to deliver an unmatched user experience from branch to Oracle Cloud. Silver Peak software uses real-time optimization techniques to maximize available bandwidth and accelerate data mobility over the WAN, ensuring that enterprise applications in the Oracle Cloud perform extremely fast.

Oracle Cloud Infrastructure Regions

This blog focuses on the integration of Silver Peak EdgeConnect (virtual) with Oracle Cloud Infrastructure to accelerate customer applications running in Oracle Cloud Infrastructure. To provide data availability and durability, Oracle Cloud Infrastructure enables customers to select from infrastructure with distinct geographic and threat profiles. A region is the top-level component of the infrastructure. Each region is a separate geographic area with multiple, fault-isolated locations called availability domains. An availability domain is the sub-component of a region and is designed to be independent and highly reliable. Availability domains within the same region are connected by a secure, high-speed, low-latency network, which allows customers to build and run highly reliable applications and workloads with minimal impact to application latency and performance. Each region has at least three availability domains, which allows customers to deploy highly available applications.

Silver Peak's Unity EdgeConnect portfolio creates an SD-WAN fabric that provides secure, high-performance connectivity between enterprise branch locations and public clouds like Oracle Cloud, private clouds, and service provider hosted services. EdgeConnect instances are placed at enterprise branch locations as a physical or virtual appliance, and in the OCI Virtual Cloud Network as a virtual appliance. Silver Peak's state-of-the-art WAN optimization technology is embedded within all EdgeConnect appliances. The entire EdgeConnect solution is centrally managed through Silver Peak's Unity Orchestrator.

Here are some key use cases for Silver Peak on Oracle Cloud:
- Distributed enterprises migrating workloads to Oracle Cloud IaaS (OCI or OCI-Classic) and PaaS
- High-performance, high-availability access to Oracle Cloud and hybrid cloud

Silver Peak's value proposition for enterprises in the Oracle Cloud is as follows:
- Cost Savings – Silver Peak makes accessing enterprise applications running in Oracle Cloud Infrastructure even faster. Faster access translates into increased productivity, quicker response and resolution times, and cost savings for the end customer.
- Performance – With Silver Peak, high throughput and low latency can be achieved in remote branch offices for applications and workloads running in Oracle Cloud Infrastructure.
- Simplicity – A Silver Peak EdgeConnect Virtual (EC-V) appliance can be deployed easily in Oracle Cloud Infrastructure (OCI) to establish and enhance WAN connectivity, and to accelerate the migration of data from branch offices and data centers to OCI.

Oracle and Silver Peak's partnership provides a seamless experience to run your geographically distributed applications extremely fast in Oracle Cloud Infrastructure. To get started, please have a look at the joint collateral and videos available here:
Silver Peak EdgeConnect Solution Overview
Silver Peak Oracle Cloud Marketplace Application


Oracle Cloud Infrastructure

Oracle Database Service Now Available on the High Performance X7 Platform

Run larger Oracle Databases in the cloud, faster, more securely, and at lower cost than ever before. Last quarter we announced the availability of new compute shapes based on the high-performance X7 platform. These shapes feature the latest generation of Skylake Intel CPUs, up to dual 25 Gbps non-oversubscribed network connections, and storage on the highest performance NVMe SSDs. Today, we are happy to announce that the Oracle Cloud Infrastructure Database Service is now available on these new compute shapes, on both bare metal and virtual machines.

Advantages

With these new powerful shapes, more database-intensive applications can be migrated to the cloud at a lower cost-per-performance ratio. They offer Oracle Database users key advantages such as:

More database storage capacity: With the X7 platform, users can now get increased storage, which allows consolidation of more databases on a single database server and enables Oracle enterprise application "lift and shift" scenarios like EBS, as these have larger storage requirements. The new bare metal shape comes with local NVMe SSD storage for ultimate performance, and provides up to ~16 TB of usable data storage for dev/test databases (2-way mirroring with ~4 TB RECO) and up to ~9.5 TB of usable data storage for production databases (3-way mirroring with ~2.3 TB RECO). The new virtual machines provide up to 40 TB of usable storage on block volumes, with ~8 TB of RECO.

More memory: The new bare metal shape offers 768 GB of RAM, which will appeal to users of our in-memory database.

Increased network bandwidth: Users have up to 50 Gbps of bandwidth for the three types of database-related traffic on the instance: data, Data Guard, and backup traffic. There is also increased network bandwidth for all VM shapes, for example, 25 Gbps for VM.Standard2.24.

Increased number of cores and memory for VMs: The new VM shapes (Standard2) double the available memory compared to Standard1. There is also a new 24-core VM shape available.

Specifications

With the X7 platform, we are adding BM.DenseIO2.52 with 52 cores to the DenseIO family and the VM.Standard2 shapes with 1 to 24 cores to the VM family. There is increased memory and network bandwidth for both.

Product                                       Shape            Core(s)  Memory (GB)  Usable Storage     Network
Standard Virtual Machine (with Block Volume)  VM.Standard2.1   1        15           Block up to 40 TB  1 Gbps
                                              VM.Standard2.2   2        30           Block up to 40 TB  2 Gbps
                                              VM.Standard2.4   4        60           Block up to 40 TB  4 Gbps
                                              VM.Standard2.8   8        120          Block up to 40 TB  8 Gbps
                                              VM.Standard2.16  16       240          Block up to 40 TB  16 Gbps
                                              VM.Standard2.24  24       320          Block up to 40 TB  25 Gbps
DenseIO - Bare Metal                          BM.DenseIO2.52   2 - 52   768          Local 51.2 TB      2 x 25 Gbps

Features

With the new shapes, the following features remain unchanged:
- Support for all four database editions: Standard, Enterprise, Enterprise Edition High Performance, and Enterprise Edition Extreme Performance
- 2-node RAC configuration (VMs only), supported for Enterprise Edition Extreme Performance
- All current database versions: 11.2, 12.1, 12.2, and 18.1
- 40 TB of usable storage, with scale-out of storage without any downtime

Experience

To provision new database instances, go to the Oracle Cloud Infrastructure Console and launch a DB System under Database. To launch the new bare metal shape, select BM.DenseIO2.52. To launch a new virtual machine instance, select VM.Standard2.x (where x is the number of OCPUs: 1, 2, 4, 8, 16, or 24).

Note: You can select any virtual machine shape with more than 2 OCPUs to build a RAC database by specifying a node count of 2.
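Provisioning is also scriptable. As a hedged sketch (every value below is a placeholder, and the flag names should be verified against `oci db system launch --help` for your CLI version), launching a 2-node RAC DB system on one of the new VM shapes from the OCI CLI might look like this:

# Hypothetical example -- replace all OCIDs and names with your own values
$ oci db system launch \
    --compartment-id ocid1.compartment.oc1..<unique_id> \
    --availability-domain <AD_name> \
    --subnet-id ocid1.subnet.oc1..<unique_id> \
    --shape VM.Standard2.8 \
    --cpu-core-count 8 \
    --node-count 2 \
    --database-edition ENTERPRISE_EDITION_EXTREME_PERFORMANCE \
    --db-name ORCL \
    --db-version 12.2.0.1 \
    --admin-password '<strong_password>' \
    --hostname dbhost \
    --ssh-authorized-keys-file ~/.ssh/id_rsa.pub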
Starting today, all new X7 shapes are available in all OCI regions and are accessible via Console, API, CLI, SDKs, and Terraform. Try out the new shapes today and discover higher performance for your database workloads.


Developer Tools

Using the Multi-Attach Block Volume Feature to Create a Shared File System on Oracle Cloud Infrastructure

Having a shared file system is a very common requirement, for example to allow multiple applications to access the same data or to allow multiple users to access the same information at the same time. On-premises this is an easy task to achieve using NAS or SAN devices, but how can it be done in the cloud?

There are different technologies, such as iSCSI, NFS, SMB, and DRBD, that allow you to share a block device between two or more cloud instances, but you still need to configure those services, and on top of that you also need a cluster file system, such as OCFS2 or GlusterFS, that allows your users to read and write simultaneously.

With Oracle Cloud Infrastructure you have the multi-attach block device option, which allows you to attach the same block device to two or more cloud instances. This feature is under Limited Availability, which means your tenancy needs to be enabled before you can use it. It allows customers to easily connect the same block storage volume(s) to all the instances that need access to the same data. It basically acts as a NAS device in the cloud.

As of today, the process is done through a preview version of the OCI CLI, which needs to be requested from Oracle. Once you get access to that OCI CLI version and your tenancy has been enabled for the feature, you can run the OCI command line to attach a block device to the cloud instances that will hold your cluster file system. Here is an example:

$ oci compute volume-attachment attach --instance-id ocid1.instance.oc1.OCID --type iscsi --volume-id ocid1.volume.oc1.REGION.OCID --is-shareable true
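Because the volume is attached with --type iscsi, each instance must also log in to the iSCSI target before the device shows up in the OS (for example, as /dev/sdb). The commands below are the standard iscsiadm invocations used for OCI block volumes; the IQN and portal IP are placeholders for the values reported in the attachment details of your volume, so adjust them accordingly. Run them on every instance that shares the volume:

# <volume_IQN> and <portal_IP> come from the volume attachment details
$ sudo iscsiadm -m node -o new -T <volume_IQN> -p <portal_IP>:3260
$ sudo iscsiadm -m node -o update -T <volume_IQN> -n node.startup -v automatic   # reattach automatically at boot
$ sudo iscsiadm -m node -T <volume_IQN> -p <portal_IP>:3260 -l                   # log in to the target now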
Now that you have your block volume attached to all the instances you need, the next step is creating a file system that is cluster aware. For this blog we will use OCFS2 (Oracle Cluster File System), as the following diagram illustrates.

Why OCFS2?

Oracle Cluster File System version 2 (OCFS2) is a general-purpose shared-disk file system intended for use in clusters to increase storage performance and availability. Almost any application can use OCFS2 because it provides local file-system semantics. Applications that are cluster-aware can use cache-coherent parallel I/O from multiple cluster nodes to balance activity across the cluster, or they can use the available file-system functionality to fail over and run on another node in the event that a node fails.

OCFS2 has a large number of features that make it suitable for deployment in an enterprise-level computing environment:
- Support for ordered and write-back data journaling that provides file system consistency in the event of power failure or system crash.
- Block sizes ranging from 512 bytes to 4 KB, and file-system cluster sizes ranging from 4 KB to 1 MB (both in increments of powers of 2). The maximum supported volume size is 16 TB, which corresponds to a cluster size of 4 KB. A volume size as large as 4 PB is theoretically possible for a cluster size of 1 MB, although this limit has not been tested.
- Extent-based allocations for efficient storage of very large files.
- Optimized allocation support for sparse files, inline-data, unwritten extents, hole punching, reflinks, and allocation reservation for high performance and efficient storage.
- Indexing of directories to allow efficient access to a directory even if it contains millions of objects.
- Metadata checksums for the detection of corrupted inodes and directories.
- Extended attributes to allow an unlimited number of name:value pairs to be attached to file system objects such as regular files, directories, and symbolic links.
- Advanced security support for POSIX ACLs and SELinux in addition to the traditional file-access permission model.
- Support for user and group quotas.
- Support for heterogeneous clusters of nodes with a mixture of 32-bit and 64-bit, little-endian (x86, x86_64, ia64) and big-endian (ppc64) architectures.
- An easy-to-configure, in-kernel cluster stack (O2CB) with a distributed lock manager (DLM), which manages concurrent access from the cluster nodes.
- Support for buffered, direct, asynchronous, splice and memory-mapped I/O.
- A tool set that uses similar parameters to the ext3 file system.

Getting Started

Below is a summary of the configuration steps required for this architecture:
1. Attach your Oracle Cloud Infrastructure block device(s) using the OCI CLI as explained above.
2. Set up your OCFS2/O2CB cluster nodes.
3. Create your OCFS2 file system and mount point.

You also need to open ports 7777 and 3260 in the Oracle Cloud Infrastructure dashboard. Edit the VCN security list and either open all ports for your tenancy's internal network (NOT the public network), as shown below for network 172.0.0.0/16:

Source: 172.0.0.0/16
IP Protocol: All Protocols
Allows: all traffic for all ports

or open only the required ports 7777 and 3260 for the internal network. Here is an example for port 7777:

Source: 172.0.0.0/16
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 7777
Allows: TCP traffic for ports: 7777

NOTE: Ports 7777 and 3260 need to be opened in the local OS firewall as well, as shown below:
$ sudo firewall-cmd --zone=public --permanent --add-port=7777/tcp
$ sudo firewall-cmd --zone=public --permanent --add-port=3260/tcp
$ sudo firewall-cmd --complete-reload

Make sure DNS is working properly and your bare metal instances can communicate properly across your tenancy availability domains (ADs). Here is a quick example of /etc/resolv.conf based on this setup:

$ cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search baremetal.oraclevcn.com publicsubnetad3.baremetal.oraclevcn.com publicsubnetad1.baremetal.oraclevcn.com publicsubnetad2.baremetal.oraclevcn.com
nameserver 169.254.169.254

As you can see above, the DNS entries for all ADs are available in that resolv.conf file.

Environment

ROLE         INSTANCE                                       IP          OS
OCFS2 Node1  node1.publicsubnetad1.baremetal.oraclevcn.com  172.0.0.41  Oracle Linux 7.4 x86_64
OCFS2 Node2  node2.publicsubnetad2.baremetal.oraclevcn.com  172.0.1.42  Oracle Linux 7.4 x86_64

OCFS2: Creating the Configuration File for the Cluster Stack

Install the required OCFS2 packages:
$ sudo yum install ocfs2-tools-devel ocfs2-tools -y

Now, create the configuration file by using the o2cb command or a text editor. Let's use the following command to create a cluster definition:
$ sudo o2cb add-cluster ociocfs2

The above command creates the configuration file /etc/ocfs2/cluster.conf if it does not already exist. For each node, use the following command to define the node:
$ sudo o2cb add-node ociocfs2 node1 --ip 172.0.0.41
$ sudo o2cb add-node ociocfs2 node2 --ip 172.0.1.42

NOTE: The name of the node must be the same as the value of the system's HOSTNAME that is configured in /etc/sysconfig/network, and the IP address is the one that the node will use for private communication in the cluster. You need to copy the cluster configuration file /etc/ocfs2/cluster.conf to each node in the cluster.
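For example, assuming node1 is where you ran the o2cb commands and that it can reach node2 over SSH as the default opc user (an assumption based on standard Oracle Linux images; adjust the user and host to your environment), you could copy the file like this:

# Copy cluster.conf from node1 to node2 via a staging location, then move it into place
$ scp /etc/ocfs2/cluster.conf opc@node2:/tmp/cluster.conf
$ ssh opc@node2 'sudo mkdir -p /etc/ocfs2 && sudo mv /tmp/cluster.conf /etc/ocfs2/cluster.conf'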
Any changes that you make to the cluster configuration file do not take effect until you restart the cluster stack. The following /etc/ocfs2/cluster.conf configuration file defines a 2-node cluster named ociocfs2 with a local heartbeat, which is the configuration used for this tutorial:

$ sudo cat /etc/ocfs2/cluster.conf
cluster:
        heartbeat_mode = local
        node_count = 2
        name = ociocfs2

node:
        number = 0
        cluster = ociocfs2
        ip_port = 7777
        ip_address = 172.0.0.41
        name = node1

node:
        number = 1
        cluster = ociocfs2
        ip_port = 7777
        ip_address = 172.0.1.42
        name = node2

Configuring the Cluster Stack

Run the following command on each node of the cluster:

$ sudo /sbin/o2cb.init configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]:
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ociocfs2
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
checking debugfs...
Setting cluster stack "o2cb": OK
Registering O2CB cluster "ociocfs2": OK
Setting O2CB cluster timeouts : OK
Starting global heartbeat for cluster "ociocfs2": OK

An explanation of the above options can be found in the OCFS2 public documentation.

To verify the settings for the cluster stack, enter the /sbin/o2cb.init status command:

$ sudo /sbin/o2cb.init status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "ociocfs2": Online
  Heartbeat dead threshold: 31
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
  Heartbeat mode: Local
Checking O2CB heartbeat: Active
Debug file system at /sys/kernel/debug: mounted

In this example, the cluster is online and is using local heartbeat mode. If no volumes have been configured, the O2CB heartbeat is shown as Not Active rather than Active.

Configure the o2cb and ocfs2 services so that they start at boot time after networking is enabled:

$ sudo systemctl enable o2cb
$ sudo systemctl enable ocfs2

These settings allow the node to mount OCFS2 volumes automatically when the system starts.

Configuring the Kernel for Cluster Operation

For the correct operation of the cluster, you must configure the kernel settings shown in the following table:

KERNEL SETTING  DESCRIPTION
panic           Specifies the number of seconds after a panic before a system automatically resets itself. If the value is 0, the system hangs, which allows you to collect detailed information about the panic for troubleshooting; this is the default value. To enable automatic reset, set a non-zero value. If you require a memory image (vmcore), allow enough time for Kdump to create this image; the suggested value is 30 seconds, although large systems will require a longer time.
panic_on_oops   Specifies that the system must panic if a kernel oops occurs. If a kernel thread required for cluster operation crashes, the system must reset itself. Otherwise, another node might not be able to tell whether a node is slow to respond or unable to respond, causing cluster operations to hang.
On each node, enter the following commands to set the recommended values for panic and panic_on_oops:

$ sudo sysctl kernel.panic=30
$ sudo sysctl kernel.panic_on_oops=1

To make the change persist across reboots, add the following entries to the /etc/sysctl.conf file:

# Define panic and panic_on_oops for cluster operation
kernel.panic=30
kernel.panic_on_oops=1

Starting and Stopping the Cluster Stack

The following table shows the commands that you can use to perform various operations on the cluster stack:

COMMAND                  DESCRIPTION
/sbin/o2cb.init status   Check the status of the cluster stack.
/sbin/o2cb.init online   Start the cluster stack.
/sbin/o2cb.init offline  Stop the cluster stack.
/sbin/o2cb.init unload   Unload the cluster stack.

Creating OCFS2 Volumes

Use the mkfs.ocfs2 command to create an OCFS2 volume on a device. If you want to label the volume and mount it by specifying the label, the device must correspond to a partition. You cannot mount an unpartitioned disk device by specifying a label.

$ sudo mkfs.ocfs2 -L "ocfs2" /dev/sdb
mkfs.ocfs2 1.8.6
Cluster stack: classic o2cb
Label: ocfs2
Features: sparse extended-slotmap backup-super unwritten inline-data strict-journal-super xattr indexed-dirs refcount discontig-bg
Block size: 4096 (12 bits)
Cluster size: 4096 (12 bits)
Volume size: 12455405158400 (3040870400 clusters) (3040870400 blocks)
Cluster groups: 94274 (tail covers 512 clusters, rest cover 32256 clusters)
Extent allocator size: 780140544 (186 groups)
Journal size: 268435456
Node slots: 16
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 6 block(s)
Formatting Journals: done
Growing extent allocator: done
Formatting slot map: done
Formatting quota files: done
Writing lost+found: done
mkfs.ocfs2 successful

Mounting OCFS2 Volumes

As shown in the following example, specify the "_netdev" and "nofail" options in /etc/fstab if you want the system to mount an OCFS2 volume at boot time after networking is started, and to unmount the file system before networking is stopped.

$ sudo mkdir /ocfs2
$ sudo vi /etc/fstab
# Include the line below to mount your OCFS2 volume after a restart
/dev/sdb /ocfs2 ocfs2 _netdev,nofail,defaults 0 0

Run "mount -a" to mount the OCFS2 partition based on the fstab entry you created above, and the setup is concluded. You should now have a cluster file system mounted on /ocfs2 on both the node1 and node2 Oracle Linux 7.4 servers.

Finally, you're finished! Your applications can now use this storage as they would any local file storage. Planning your environment thoughtfully and making use of availability domains and capabilities such as Oracle Cluster File System can help you increase the performance and availability of the solutions you build on Oracle Cloud Infrastructure.


Oracle Cloud Infrastructure

Migration and Disaster Recovery in the Oracle Cloud with Rackware

We are pleased to announce the availability of RackWare RMM on Oracle Cloud Infrastructure for live migration and disaster recovery. Intellectual property, financial transactions, and business data are among the most valuable assets of any organization, and protecting them is a paramount responsibility of customers in the cloud. Many enterprise customers need migration and disaster recovery solutions with a cost-effective, agile approach to protect and optimize applications running in Oracle Cloud Infrastructure.

RMM

The RackWare RMM platform provides a flexible and all-encompassing solution for migration and disaster recovery. RackWare helps enterprises and large organizations take advantage of the agility promised by Oracle Cloud Infrastructure. RackWare's platform eliminates the complexity of protecting, moving, and managing large-scale applications, including critical business applications and their workloads, into the Oracle Cloud. It is now possible for enterprise customers to forgo the upfront purchase of duplicate recovery hardware, and the cost of setting up, configuring, and maintaining that hardware, by leveraging Oracle Cloud Infrastructure.

OCI Regions and Availability Domains

This blog focuses on the integration of RackWare with Oracle Cloud Infrastructure for migration and DR use cases. Oracle Cloud Infrastructure is hosted in regions and availability domains. A region is a localized geographic area, and an availability domain is one or more data centers located within a region. A region is composed of three availability domains. Availability domains are isolated from each other, fault tolerant, and very unlikely to fail simultaneously. Availability domains do not share infrastructure such as power or cooling, or the internal availability domain network. All the availability domains in a region are connected to each other by a low-latency, high-bandwidth network, which makes it possible to provide highly available connectivity to the internet and customer premises, and to build replicated systems in multiple availability domains for both high availability and disaster recovery.

Enterprises can use the RackWare RMM platform to achieve live migration and disaster recovery using Oracle Cloud Infrastructure regions and availability domains. The RackWare RMM migration/DR platform is a non-intrusive, agentless technology with pre- and post-migration configuration capabilities that is easy to set up and configure for complicated enterprise environments and applications. RackWare RMM supports both Linux- and Windows-based workloads for migration to Oracle Cloud Infrastructure.

RackWare RMM's value proposition for enterprises in the Oracle Cloud is as follows:
- Non-disruptive, live captures – No agents installed; safe and secure replication of your production environments.
- Network and application discovery – Automatically discover network configurations and applications, allowing you to reconfigure them in the OCI environment during migration.
- Universal DR protection – RackWare support spans all physical and virtual environments, even complex ones with large SQL clusters and network-attached storage.
- Seamless failback – To physical and virtual environments, for simple disaster recovery drills.
- Cost reduction – An orchestration engine for multiple policies of RPOs and RTOs based on tolerance, to reduce costs with less expensive compute, network, and storage utilization.

The Oracle and RackWare partnership provides a seamless experience to migrate to Oracle Cloud Infrastructure and secure customer workloads with dynamic provisioning and disaster recovery. To get started, please have a look at the joint collateral and videos available here:
http://www.rackwareinc.com/partner-oracle/
https://cloudmarketplace.oracle.com/marketplace/en_US/listing/7459350
https://cloudmarketplace.oracle.com/marketplace/en_US/listing/29367738


Oracle Cloud Infrastructure

Denovo Helps Customers Get More From Oracle E-Business (EBS) with Oracle Cloud Infrastructure

Oracle E-Business Suite is an enterprise resource planning application suite covering everything from customer relationship management to supply chain management. E-Business Suite customers manage the complexities of modern businesses, make better decisions, reduce cost, and improve performance with its suite of global business applications.

There are many challenges that customers face when managing an EBS environment that runs on premises. IT managers want to deploy solutions rapidly to meet changing business and technical needs, and they want to keep costs down while balancing the ability to drive innovation. Developers, however, are constrained by the capacity limitations of their on-premises environment. When you manage EBS on premises, you have to deal with procurement lags and additional costs for dev, test, and training. Without the ability to quickly expand on-premises environments, it is difficult for developers and IT managers to provide end users with innovative solutions.

Moving your EBS environment to the cloud eliminates these common problems. OCI's dynamic and flexible model allows development teams to spin up environments in minutes, whether for development, testing, or training. IT managers benefit by paying only for the capacity that is actually used. This allows organizations to provide faster service and greater value to their lines of business. When you move EBS to the cloud, you no longer have to refresh your hardware, and you have the flexibility to expand your environment when you have needs like testing or training. Because EBS licenses transfer seamlessly to OCI, you can leverage the licenses your organization has already purchased and pay only for the capacity you use. This freedom allows organizations to focus their time on innovation.

Denovo has created an Oracle EBS Jump Start Lab that shows how to effectively run EBS on OCI. This Jump Start Lab allows you to freely navigate EBS Vision on OCI and validate the user experience and performance. Denovo's EBS Jump Start allows you to try this pre-configured solution running on Oracle Cloud Infrastructure, for free! Within minutes, a Demo Lab will be available so you can start learning about EBS on OCI.

Denovo is an Oracle Platinum partner with expertise in Oracle E-Business Suite and Oracle Cloud Infrastructure. They are experts in helping customers easily transition to the cloud, and they provide consulting services that help customers get more out of their investment in EBS and OCI, allowing customers to focus on their business and industry-specific software solutions. Try their Jump Start Demo Lab today!


Developer Tools

Introducing the Go SDK for Oracle Cloud Infrastructure

Today, we are happy to announce that the Go SDK for Oracle Cloud Infrastructure is now available! The Go SDK supports all Oracle Cloud Infrastructure services and will continue to support the same set of features as the Java, Python, and Ruby SDKs. Like our other open source developer tools and SDKs, the Go SDK is available in the GitHub oci-go-sdk repo.

The Go programming language attracts developers with its ease of use and simplicity. It is particularly popular when it comes to cloud development, including the use of Kubernetes and Terraform. To provide developers with a similar experience, we are launching the Go SDK for Oracle Cloud Infrastructure. The Oracle Cloud Infrastructure Terraform Provider also uses the latest Go SDK; you can refer to the details about the Terraform Provider here.

Along with launching the first version of the Go SDK for Oracle Cloud Infrastructure, we are looking for your feedback on future improvements. We'd love to hear from you on the GitHub issues page.

Here is how you can get started:

1. Install the Go SDK:
go get -u github.com/oracle/oci-go-sdk

2. Configure the SDK with your Oracle Cloud Infrastructure credentials (a sketch of this configuration appears at the end of this post).

3. Start making API calls, such as this example:

package example

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/example/helpers"
	"github.com/oracle/oci-go-sdk/identity"
)

// ExampleListAvailabilityDomains lists the availability domains in your tenancy.
// Specify the OCID of either the tenancy or another of your compartments as
// the value for the compartment ID (remember that the tenancy is simply the root compartment).
func ExampleListAvailabilityDomains() {
	c, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
	helpers.LogIfError(err)

	// The OCID of the tenancy containing the compartment.
	tenancyID, err := common.DefaultConfigProvider().TenancyOCID()
	helpers.LogIfError(err)

	request := identity.ListAvailabilityDomainsRequest{
		CompartmentId: &tenancyID,
	}

	r, err := c.ListAvailabilityDomains(context.Background(), request)
	helpers.LogIfError(err)

	log.Printf("list of available domains: %v", r.Items)
	fmt.Println("list available domains completed")

	// Output:
	// list available domains completed
}

To learn more, you can also refer to:
- Oracle Cloud Infrastructure Go SDK documentation
- Full documentation on the godocs site
- Code samples to get started
- Getting started with Oracle Cloud Infrastructure
- Try for free with credits

If you need help, you can use these channels:
- Stack Overflow: use the oracle-cloud-infrastructure and oci-go-sdk tags in your post
- The Developer Tools section of the Oracle Cloud forums
- My Oracle Support
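As a reference for the configuration step above: common.DefaultConfigProvider() reads credentials from a file at ~/.oci/config. Here is a minimal, hypothetical sketch of creating one; all of the OCID, fingerprint, and key values are placeholders that you must replace with your own:

# Create a minimal ~/.oci/config read by common.DefaultConfigProvider()
# All values below are placeholders.
mkdir -p ~/.oci
cat > ~/.oci/config <<'EOF'
[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=<your_api_key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=us-phoenix-1
EOF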


Oracle Cloud Infrastructure

High Performance Boot Disk Now Up to 16 TB!

You want a compute instance with a high-performance system boot disk of up to 16 TB? You now have it on Oracle Cloud Infrastructure! This is enabled for you immediately as a seamless service update to the recently announced boot volumes feature.

Our customers demand 300 GB or larger system disks, especially for Microsoft Windows operating systems. Some customer legacy applications are deployed to the system drive, taking significantly more space than our default operating system images provide. The large system boot disks we are announcing address this need, and continue to be provided by secure, durable, and high-performance Oracle Cloud Infrastructure Block Volumes boot volumes.

When you launch a compute instance on Oracle Cloud Infrastructure, you now have the option to customize its boot volume size to be equal to or larger than the size of the selected operating system image, up to a maximum of 16 TB, in 1 GB increments. Note that 16 TB is the current maximum block volume size, and we continue to raise it based on your feedback.

Larger boot volumes come with predictable, linearly scaling performance based on size, with better performance than smaller volumes, just like block volumes. All boot volumes continue to be NVMe SSD based, with best-in-class block storage performance that is backed by the Oracle Cloud Infrastructure SLA at the competitive Block Volumes pricing. For example, with current pricing and performance characteristics as of the time of this writing, a 1/2 TB boot volume is guaranteed to deliver 25,000 IOPS with less than 1 ms 99th-percentile latency for all types of workloads, and it will cost you about $21 per month at the $0.0425 per GB-month rate (500 GB x $0.0425 ≈ $21). Boot volume consumption is metered and included in the block storage allocation for your tenancy, making it easy to plan and manage all your block storage needs with a simple model.

Aligned with our stance on simplicity, configuring a large boot volume takes only a few clicks when you launch an instance. The following example shows how to launch a compute instance with a large system boot disk. Larger boot volume sizes can be customized for the following operating system image selections:
- Oracle Operating System Image
- Custom Image
- Image OCID

Stay tuned for updates on additional features and capabilities. The Oracle team and I value your feedback as we continue to make our service the best in the industry. Send me your thoughts as we continue to partner in your cloud journey, or if you want more details on any topic.

Max Verun
Principal Product Manager, Oracle Cloud Infrastructure

