
Recent Posts

Events

Inside NVIDIA and Oracle's Partnership on AI and HPC in the Cloud

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. Oracle is now offering NVIDIA's unified artificial intelligence (AI) and high performance computing (HPC) platform on Oracle Cloud Infrastructure. I recently caught up with Karan Batta, who manages HPC for Oracle Cloud Infrastructure, to find out what this partnership means for Oracle customers who run performance-intensive workloads and are looking to move to the cloud. He also explains how Oracle makes it easy for customers to transfer NVIDIA HPC workloads to the cloud. A condensed version of our conversation follows.

Why is the partnership between Oracle and NVIDIA such a big deal?

Karan Batta: It's a big deal in part because we are the first public cloud provider to support NVIDIA HGX-2, the company's unified AI and HPC platform. But let's talk about the GPU market for a minute. The GPU-accelerated market is going to be a huge portion of the future market. Obviously, it doesn't make sense to move everything to a GPU. But a lot of computationally intensive tasks, such as risk modeling, DNA sequencing, and real-time analysis, make sense for GPUs. The big use cases today are things like AI and ML, and in the future they will be things like autonomous driving and weather simulation. Many tasks can benefit from GPUs.

Why did Oracle choose to partner with NVIDIA?

Batta: NVIDIA is the global leader right now, not just in GPU hardware but in the software ecosystem as well. They've done a fantastic job of growing their ecosystem around CUDA and open source libraries such as cuDNN and cuML. What we're trying to do at Oracle Cloud Infrastructure is enable that entire ecosystem on our platform. We're not going to tell people to rip up their applications and use our APIs instead of anybody else's, as other cloud providers do. If you're already invested in the ecosystem, you want to come to Oracle. Not only do we offer the best GPU infrastructure, you also get the ecosystem along with it. As part of that effort, we also announced that we've integrated the NVIDIA GPU Cloud (NGC) container registry. NVIDIA builds, manages, qualifies, certifies, benchmarks, tests, and publishes many containers for deep learning, ML, AI, and HPC, and now they're moving into data analytics as well. We support all of that in our public cloud.

Is that offering certified?

Batta: Yes. Right now, we're the only public cloud with RAPIDS available and certified through NGC. RAPIDS is a suite of open source software libraries for executing data science training pipelines entirely on NVIDIA GPUs. It's generally available, and you can find documentation on NVIDIA's and Oracle's websites.

What does Oracle offer to make it easier for customers to transfer NVIDIA HPC workloads to Oracle Cloud Infrastructure?

Batta: We've made it much easier for customers to use the NVIDIA stack on top of Oracle, and I think that's one of the biggest things people are starting to notice. You can take any framework or application that already runs on GPUs and quickly run it on Oracle Cloud Infrastructure without changing the image or anything else. That's true even if you have an on-premises image. You can run it paravirtualized on Oracle Cloud Infrastructure and it just works.
On top of that, we're co-building this hardware with NVIDIA. We're doing special things in how we build that hardware, and especially in how we spec it for different markets, whether that's AI or a legacy HPC workload.

How many Oracle Cloud Infrastructure regions have these capabilities right now, and what are the future plans?

Batta: This is available today in all of our regions. We have four major regions today: Virginia, Phoenix, London, and Frankfurt. And we've announced numerous new regions that will come online in the next 12 months in places like Korea, Japan, and India. We're also going to have quite a few government regions, along with additional regions in Europe and Asia-Pacific, so we are in this for the long term. All of these capabilities are going to be uniform across all of our regions.

Okay, I'm sold. I want to take this for a test drive. How do I try it out?

Batta: We offer $300 in free credits, so you can go to our website and try it out. If you have additional questions or want to try something different, feel free to reach out to me and my team. We'd be more than happy to guide you and make sure that you're successful on Oracle Cloud Infrastructure.
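To make the NGC integration Batta describes concrete, here is a minimal sketch of pulling and running an NGC deep learning container on an Oracle Cloud Infrastructure GPU instance. The image tag and the API-key placeholder are illustrative assumptions, not values from the interview:

# Log in to the NGC registry (the API key comes from your NGC account;
# '$oauthtoken' is the literal NGC username):
docker login nvcr.io -u '$oauthtoken' -p <NGC_API_KEY>

# Pull an NGC container image (the tag is an example):
docker pull nvcr.io/nvidia/tensorflow:18.11-py3

# Run it with GPU access using the nvidia-docker2 runtime:
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tensorflow:18.11-py3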


Breaking Down the HPC Barriers

In my last post, I discussed some reasons why most enterprise workloads still haven't moved to the cloud. One of those reasons is that mission-critical applications require levels of performance and reliability that earlier-generation clouds simply haven't offered. This has been especially true for high-performance computing (HPC) applications. Oracle Cloud Infrastructure's wide variety of compute options—combined with our networking innovations, robust security features, and industry partnerships—makes us uniquely suited for these data-intensive workloads. (And our prowess in database for the past four decades doesn’t hurt!)

At Oracle OpenWorld last month, I announced that Oracle Cloud Infrastructure is ready for any and all workloads. Case in point: our news at Supercomputing 2018 this week. We've launched bare metal compute instances that enable organizations to run HPC workloads in the cloud with the same levels of performance that they get on premises. I envision on-premises server farms sitting idle, with millions of dollars of unused hardware and gear waiting for the next simulation!

Networking Innovations

These bare metal cloud instances are powered by a new feature called clustered networking, which removes the need for enterprises to run specialized networking equipment on premises for their HPC workloads. Additionally, Oracle Cloud Infrastructure is the first and only public bare metal cloud with a high-bandwidth, low-latency RDMA network. For more information about what we're doing with networking, watch the video featuring Karan Batta, Senior Principal Product Manager, and Jag Brar, Network Architect.

Robust Security

HPC applications are addressing some of the biggest problems in the world. They help develop new cancer treatments, improve weather prediction, and make cars safer. Being able to run these workloads in the cloud without a performance hit is a major advancement, but it's not enough to clear all cloud migration roadblocks. For the organizations running these workloads, keeping them safe is the top priority. These workloads are as mission critical as it gets, and we take the responsibility to run them effectively in the cloud seriously, as backed by our industry-leading service-level agreements.

Security is a fully integrated pillar of Oracle Cloud Infrastructure. Oracle has protected enterprises' mission-critical data for more than 40 years, and that experience drove how we built our cloud. As Larry Ellison, CTO and Executive Chairman, explained at OpenWorld, we can't see customer data and customers can't access our control plane. Plus, we're using the latest in artificial intelligence and machine learning to protect against increasingly advanced attackers.

Industry Partnerships

Oracle is able to provide a secure, high-performance cloud, in part, because we collaborate with other innovators and leaders in the market. Our new HPC bare metal instances are powered by Intel Xeon Scalable processors, with RDMA functionality provided by Mellanox Technologies. We also offer AMD EPYC instances and GPU instances powered by NVIDIA. No other cloud provider offers this combination of hardware choice, networking prowess, and commitment to security. It's what makes Oracle Cloud Infrastructure the only enterprise-grade cloud—especially for HPC workloads.

Today is the last day to see us on the exhibit floor at SC18 in Dallas. Stop by booth #2806, say hello, and learn more!


Security

Mapping the Future of Security at Oracle Cloud Infrastructure

"Make it easy for people to do the right thing, and they tend to." Those words were first spoken to me by a very wise and talented Chief Information Officer (CIO) who mentored me early in my career. The quote really stuck with me, and it came to define my overall approach to leading security teams at the companies I've worked at over the last 15 years. It's certainly on my mind as I move into the new role of Chief Security Officer (CSO) for Oracle Cloud Infrastructure. Whether it's securing a public cloud or securing personal computers at home, most people want to do what's right. But if the process of enabling security is overly difficult— if it requires too many steps, takes too much time, or is impossible to understand—people tend to procrastinate, and that can lead to gaps or weakness in their security posture. If security is simple enough and features are built directly into processes whenever possible, people will do the right thing. I'm beyond thrilled to be the new CSO for Oracle Cloud Infrastructure. Moving forward, I'll always be asking questions like these: How do we continue to make it easier to scan code for typical coding errors? Are we constantly integrating security features into day-to-day standups and weekly sprints in keeping with Oracle tradition? How do we consistently reinforce and uphold Oracle's long held commitment to highly secure systems as defined by our core security pillars? I'm very much looking forward to the challenge. After serving as Chief Information Security Officer (CISO) at companies like PricewaterhouseCoopers, Google, and startup Jet.com, where I was also a cloud customer, it's a challenge I'm ready to meet. Why I Chose Oracle Cloud Infrastructure Early in my career, I never thought I'd end up working at Oracle. Back then, I didn't know much about the company other than the fact that the world's largest enterprises depended on its relational database technologies. The first thing I noticed when I met the Oracle team is that the atmosphere in the Oracle Cloud Infrastructure organization is much like that of a startup—but a startup that's backed by resources that only a large, successful software company can provide. Things move fast in the Oracle Cloud Infrastructure team. Innovation and new ideas aren't just encouraged, they're mandatory. I became absolutely certain I wanted the position when I realized that Oracle Cloud Infrastructure exceeded my two main criteria for selecting a company to work at. First, I knew the job would be fascinating—that I would have the opportunity to solve complex problems that others hadn’t solved before. I like the sound of that. Second, it was clear that I was going to enjoy working with the Oracle Cloud Infrastructure team. Everyone here has been amazing. I also liked the fact that Oracle had taken on the colossal challenge of entering a crowded market and building a public cloud from scratch. And it's not just any cloud. It's a cloud designed for large and small enterprises that truly care about security features. It's for government agencies and other organizations that need to run highly secure workloads. Oracle is doing cloud better than the competition, and I'm proud to be part of the team that's making it happen. I've learned a lot in the short time since I joined the team. Oracle's devotion to enabling security features is evident in every corner of the organization. It's evident in the design of Oracle's enterprise resource planning, human capital management, and other business applications. 
And it's evident in the architecture of Oracle Cloud Infrastructure, where database systems are deployed into a virtual cloud network by default. This allows a high level of security and privacy and gives users control over the networking environment.

Closing Thoughts from a Former Cloud Customer

One other thing will significantly inform my approach as CSO for Oracle Cloud Infrastructure: the time I spent using a competing cloud when I was CISO at another company. Cloud providers offer access to certain systems via user interfaces (UIs) and application program interfaces (APIs). As a cloud customer, I found that some of those UIs and APIs didn't adequately enable security teams to perform anomaly detection, incident response, and forensics. As CSO, I will ensure that my team upholds Oracle's commitment to giving customers access to the right systems. A great example of this is Oracle's bare metal offerings, where customers can directly access hardware, memory, storage, and other systems with no need for virtualization.

As a CSO, I have strong demands before I'll allow sensitive data to be stored in our cloud. As a former cloud user, I can put myself in the customers' place and understand the true impact of our security decisions. I'm excited to use those skills and experiences as my team builds the security roadmap and the future of Oracle Cloud Infrastructure.

"Make it easy for people to do the right thing, and they tend to." Those words were first spoken to me by a very wise and talented Chief Information Officer (CIO) who mentored me early in my...

Product News

Bring Your Own Custom Image in Paravirtualized Mode for Improved Performance

We are excited to announce the availability of a new way to move and improve existing workloads with Oracle Cloud Infrastructure. You can now import a range of new and legacy operating systems using paravirtualized hardware mode. VMs using paravirtualized devices provide much faster performance compared to running in emulated mode, with at least six times faster disk I/O performance.

To get started, you simply export VMs from your existing virtualization environment and import them directly to Oracle Cloud Infrastructure as custom images. You can import images in either QCOW2 or VMDK format. The Import Image dialog box in the Console lets you choose to import your image in paravirtualized mode. After the image is imported successfully, you can launch new VM instances with this image to run your workloads.

Paravirtualized mode is available on X5 and X7 VM instances running a Linux OS that includes the KVM virtio drivers. Linux kernel versions 3.4 and higher include the drivers by default. This includes the following Linux OSs: Oracle Linux 6.9 and later, Red Hat Enterprise Linux 7.0 and later, CentOS 7.0 and later, and Ubuntu 14.04 and later. We recommend using emulated mode to import older Linux OS and Windows OS images.

If your image supports paravirtualized drivers, it is easy to convert your existing emulated mode instances into paravirtualized instances: create a custom image of your instance, export it to Object Storage, and re-import it in paravirtualized mode. Bring your own custom image in paravirtualized mode is offered in all regions at no extra cost. Now you have another way to bring existing workloads to Oracle Cloud Infrastructure, with improved performance!

Bringing existing OS images:

[NEW] Bring your own custom image to Oracle Cloud Infrastructure VMs using paravirtualized mode: Import existing Linux OS images in VMDK or QCOW2 format and run them in paravirtualized mode VMs for improved performance. For details, see Bring Your Own Custom Image for Paravirtualized Mode Virtual Machines.

Bring your own custom image to Oracle Cloud Infrastructure VMs using emulation mode: Import existing Linux OS images in VMDK or QCOW2 format and run them in emulation mode VMs. For details, see Bring Your Own Custom Image for Emulation Mode Virtual Machines.

Bring your own KVM: Move an entire virtualized workload to Oracle Cloud Infrastructure by using your existing hypervisor, management tools, and processes. Images of older OSs, such as Ubuntu 6.x, Red Hat Enterprise Linux 3.x, or CentOS 5.4, can use KVM on Oracle Cloud Infrastructure bare metal instances. For detailed instructions, see Bring Your Own KVM and Bring Your Own Nested KVM on VM Shapes.

Bring your own Oracle VM: For details, see Oracle VM on Oracle Cloud Infrastructure.

You can also build on new OS images:

Oracle Cloud Infrastructure published OS images: Oracle provides prebuilt images for Oracle Linux, Microsoft Windows, Ubuntu, and CentOS. For details, see the complete list of Oracle-provided images.

Red Hat Enterprise Linux 7.4 on Oracle Cloud Infrastructure bare metal and VM instances: You can generate a Red Hat Enterprise Linux 7.4 image for bare metal and VM instances by using a Terraform template available from the Terraform provider.

To learn more about bringing your workload to Oracle Cloud Infrastructure, including custom images, see Bring Your Own Image.
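If you prefer the CLI to the Console dialog, the import step looks roughly like this sketch. It assumes the exported image has already been uploaded to an Object Storage bucket; all angle-bracket values are placeholders:

# Import a QCOW2 image from Object Storage in paravirtualized launch mode:
oci compute image import from-object --namespace <tenancy_namespace> --bucket-name <bucket_name> --name <object_name> --compartment-id <compartment_ID> --display-name <display_name> --launch-mode PARAVIRTUALIZED --source-image-type QCOW2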


Customer Stories

Why you need to bet on Oracle Cloud Infrastructure as THE Cloud for your HPC needs

You’d imagine, with the growth of the public cloud, that the majority of HPC workloads and applications would have transitioned to the cloud; however, almost all enterprise HPC workloads are still running in on-premises datacenters. This means millions of mission-critical use cases, such as engineering crash simulations, cancer research, visual effects, and cutting-edge workloads like deep learning in Artificial Intelligence (AI), are still constrained by on-premises environments.

What’s stopping these HPC workloads from moving to the cloud? Simply, bad or incomplete cloud infrastructure solutions: inconsistent performance, no flexibility, high costs, and no integration. If cloud infrastructure were as good as it needed to be, all these workloads would already be in the cloud. But they’re not. There is still clearly a lot of innovation to be done to move entire HPC and AI workloads and applications to the cloud.

Enterprise HPC workloads have specialized needs, and traditional cloud providers don’t support them. If you want to run the most demanding HPC, AI, or database workloads, you need clusters of servers working as a single piece of infrastructure. Most cloud providers see this as a hard problem. Oracle solved these challenges on-premises 10 years ago with Exadata. What made Exadata so good? We built a clustered network, connected high-speed compute and storage, and wrote software to optimize it end-to-end for performance and security. Today, we’re going to solve this problem for customers in the cloud.

First, we’re announcing a brand-new capability called “Clustered Networking.” Clusters seem like an old idea, but everyone still runs them on-premises for their tough workloads: HPC clusters, AI research GPU clusters, simulation clusters, and so on. With Oracle Cloud, customers no longer need expensive, specialized networking gear on-premises. Customers can now get single-digit microsecond latency and 100 Gbps of bandwidth from the first and only public cloud provider with a bare metal RDMA capability. You can now migrate workloads into Oracle Cloud with better performance than on-premises or any other cloud provider. None of our competitors offer anything close; a cloud provider like Microsoft Azure offers a more expensive and niche solution with its H-Series instances. You don’t have to compromise anymore!

As part of the Clustered Networking capability, we are announcing a new set of HPC instances, available in preview today in our London (UK) and Ashburn (US) regions, with expansion into other regions in the future. These new HPC instances are powered by Intel® Xeon® processors with a 3.7 GHz all-core frequency. To support local data checkpointing for MPI workloads or local file access for cutting-edge deep learning workloads, these instances also contain local NVMe SSD storage for predictable, high-performance IO. Additionally, we’ve worked with Mellanox to deliver 100 Gbps RDMA capability with ultra-low latency for MPI workloads, supporting all market-leading MPI frameworks, including Intel MPI, Open MPI, and Platform MPI. This is truly ground-breaking innovation that no other cloud provider has delivered at this scale.

“As organizations look to ensure they stay ahead of the competition, they are looking for more efficient services to enable higher performing workloads. This requires fast data communication between CPUs, GPUs and storage, in the cloud,” said Michael Kagan, CTO, Mellanox Technologies.
“Over the past 10 years we have provided advanced RDMA enabled networking solutions to Oracle for a variety of its products and are pleased to extend this to Oracle Cloud Infrastructure to help maximize performance and efficiency in the cloud.”

Finally, we’re excited to offer these new instances in the cloud at a leading on-demand cost of $0.075 (7.5 cents) per core hour. You no longer need to spend hundreds of millions of dollars on purpose-built supercomputers like Cray when you can have on-demand HPC clusters in Oracle Cloud Infrastructure for a couple of dollars an hour!

Further innovation and commitment to Artificial Intelligence

If you’re a data scientist or an AI developer, we’ve got great news. You will be able to use our RDMA Clustered Network along with new GPU instances based on the HGX-2 architecture, providing over 1 petaflop of performance. With these new instances, Oracle Cloud becomes the first cloud provider with 32 GB Tesla Volta GPUs and the new NVSwitch-based architecture. We’ve plugged these GPUs into our Clustered Network as well, so customers can launch GPUs with a single click and run workloads that use RDMA across thousands of GPUs. These new instances will be available in 2019 in our major regions globally at launch.

Our HPC ISV Ecosystem

At the recent Altair Global Conference, we also announced a new collaboration with Altair to offer HyperWorks CFD Unlimited, a ground-breaking engineering simulation service on Oracle Cloud Infrastructure. This new service offers computational fluid dynamics (CFD) solvers as a service on Oracle. Advanced CFD solvers such as Altair ultraFluidX™ and Altair nanoFluidX™ are optimized on Oracle to provide overnight simulation results for the most complex cases on a single server. You can find more information on this service at https://www.altair.com/oracle.

“We are excited to expand our relationship with Oracle,” said Sam Mahalingam, Chief Technical Officer for Enterprise Solutions at Altair. “We find that access to GPU compute resources can be challenging for our customers. The integration with Oracle’s cloud platform addresses this challenge, and provides customers the ability to use GPU-based solvers in the cloud for accelerated performance without the need to purchase expensive hardware. Ultimately this leads to improved productivity, optimized resource utilization, and faster time to market.”

Come see us at Supercomputing Conference 2018

We’re extremely proud and excited to be showcasing these new capabilities this week at Supercomputing Conference in Dallas with our partners and customers in full force. Come see us at booth #2806 to talk to our engineering and product teams, get free credits, and get hands-on demos. Some of the other activities you should check out:

Oracle + Altair Happy Hour on Tuesday at 5:00 p.m.

HPC instance demos in Intel’s booth #3223, plus a tech talk on November 15 at 12:00 p.m.

AMD instance demos at the AMD booth #2824, including a presentation on November 13 at 11:00 a.m.

Altair HyperWorks CFD Unlimited presentation at booth #2833 on Tuesday, November 13 at 11:30 a.m.

NVIDIA’s theater (booth #2417, Hall D) on Wednesday, November 14 at 3:00 p.m.

HPC and AI at your fingertips

Most cloud infrastructure is just a set of unrelated and commodity parts. It’s the enterprise’s responsibility to figure out which parts will work, and which portions of the application need to be rebuilt for the cloud.
Oracle Cloud enables customers to run new AI workloads, next to HPC workloads, next to database workloads, next to traditional applications. We’ve figured out how to run the hardest pieces of your applications, so you don’t have to. We provide the performance that enterprises need, with guarantees you require. We’re enabling HPC in a way that no other cloud can match. And we’re charging less for it. See you in Dallas!
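For a sense of what running MPI workloads on these RDMA-connected instances looks like in practice, here is a hypothetical smoke test using Open MPI and the OSU latency micro-benchmark; the hostnames and the benchmark binary path are assumptions, not details from the announcement:

# Measure point-to-point latency between two HPC nodes over the cluster network:
mpirun --host hpc-node-1,hpc-node-2 -np 2 ./osu_latency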


Events

On-Premises HPC Performance with Oracle Cloud Infrastructure

In conjunction with Supercomputing 2018, we are announcing the availability of one of the fastest high performance cloud computing offerings. Oracle Cloud Infrastructure now offers the BM.HPC2.36 shape, which provides the same HPC performance you see on-premises. This new shape strengthens the end-to-end HPC experience on Oracle Cloud Infrastructure; read more about how Oracle is addressing HPC here.

Migrating HPC workloads to the cloud involves surmounting several challenges, not the least of which is ensuring that you have the same levels of performance, security, and control as your on-premises infrastructure. With this new bare metal compute instance, it is possible. Built on Intel's 6154 processor, this new bare metal compute instance offers an all-core turbo clock speed of 3.7 GHz, and because it's bare metal, there’s no virtualization performance penalty. In addition to the 6.7 TB local NVMe drive and the 384 GB of dual-rank memory, Oracle Cloud Infrastructure's new HPC shape provides the world's first public cloud bare metal RDMA network, enabled by a Mellanox 100 Gbps network card, in addition to the 25 Gbps network card for standard traffic. No virtualization means no jitter and no bulky, unnecessary cloud monitoring agents. Run any MPI or HPC workload in the cloud with performance similar to your on-premises infrastructure.

We're going to share a lot of data in this blog, and we encourage you to take a free HPC test drive to validate it for yourself. You can deploy a 1,000-core cluster for a few hours for free, and you can access these clusters in our Ashburn, VA datacenter or our London datacenter.

Raw Performance

First, let's look at raw performance. The BM.HPC2.36 shape has two 18-core Intel Xeon Gold 6154 processors. Intel integrates world-class compute with powerful fabric, memory, storage, and acceleration, so you can move your research and innovation forward faster to solve some of the world’s most complex challenges. Working with leading HPC hardware providers like Intel and Mellanox ensures that Oracle Cloud Infrastructure customers get access to on-premises levels of performance with cloud flexibility. HPC applications perform the same on Oracle Cloud Infrastructure as they do on-premises, for both large and small models.

A common benchmark for compute-intensive workloads comes from the Standard Performance Evaluation Corporation (SPEC). SPEC has designed test suites to provide a comparative measure of compute-intensive performance across the widest practical range of hardware, using workloads developed from real user applications. Publicly available results for some on-premises clusters compared to BM.HPC2.36 are shown below. Cloud vendors are typically hesitant to share their numbers because virtualized environments do not perform nearly as well as on-premises environments; we are happy to share our results.

Scaling

Oracle Cloud Infrastructure scales HPC workloads efficiently. Some cloud vendors have typically expected you to pay for poor single-node performance and to overlook their lack of scaling. We invite you to bring your workload to OCI and let us show you that you can run your MPI, compiler, and application workloads on bare metal. HPC applications do not handle virtualization well; on-premises HPC vendors have shown the significant negative impact that virtualization has on HPC workloads. The performance hit you take when running on some cloud vendors compounds when you run an HPC cluster.
Cloud monitoring agents run frequently and are not synchronized across a cloud cluster. With bare metal, you have complete control over the servers in your cluster, and that makes a huge performance difference. Running RDMA in a virtualized environment undercuts its value; to get the best performance from RDMA, it must run on bare metal.

When running simulation applications across an HPC cluster, the ability to scale efficiently at high node counts is important. It guarantees predictability of the simulation and increases the return on investment for expensive application licenses. In a CFD simulation, BM.HPC2.36 scales at over 100% efficiency from 450,000 cells per core down to below 6,000 cells per core, consistently, the same performance that you see with on-premises clusters.

Price

With true HPC performance, all of the cost and flexibility benefits of the cloud can now be applied to HPC workloads. Our customers are seeing a significant advantage in terms of simulation time, cost per job, and capacity. Additionally, in the cloud, the concept of "one user, one cluster" means no queue times. Many HPC customers are able to attach a per-job cost to their jobs. It is very easy to optimize per-job cost in the cloud; in fact, if the job utilizes RDMA, the cost of the job remains the same independent of the speed at which it completes. When a customer is able to specify the number of jobs that they burst per month or per year, the value of high performance cloud computing becomes clear. Even with conservative numbers for an on-premises HPC cluster, running in the cloud can help customers save money in the short and long term.

Oracle Cloud Infrastructure enables ad hoc, on-demand HPC clusters. This means that each user can spin up a cluster as needed. There is no need to support hundreds of users and a massive file server for your HPC cluster. You can size your HPC cluster specifically for the workload and stop paying for it when you are done with your job. In addition to the performance, scalability, and price performance, Oracle Cloud Infrastructure provides an end-to-end HPC experience with GPUs, Intel and AMD bare metal processors, high-performance block storage, and a full POSIX File Storage Service.

Conclusion

You can now run any HPC workload on Oracle Cloud with the same predictable performance as your on-premises HPC infrastructure. With fast Intel processors and RDMA technology, jobs will scale efficiently. At 7.5 cents per core hour, Oracle Cloud Infrastructure's HPC offering provides among the most FLOPS per penny in the cloud. Navigate to https://cloud.oracle.com/iaas/hpc to test drive an HPC cluster for yourself, or sign up for our free HPC benchmarking service. Come talk with us about HPC on Oracle Cloud Infrastructure at SC18 in Dallas next week in booth #2806.
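To make the per-job math concrete, here is a hypothetical back-of-the-envelope calculation at the 7.5-cents-per-core-hour rate quoted above; the node count is an assumption chosen to approximate the 1,000-core cluster mentioned earlier:

# 28 BM.HPC2.36 nodes x 36 cores = 1,008 cores, roughly a 1,000-core cluster:
echo "28 * 36 * 0.075" | bc        # about 75.60 USD per hour
echo "28 * 36 * 0.075 * 3" | bc    # a 3-hour job: about 226.80 USD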


Oracle Cloud Infrastructure

End of Sale of First-Generation X5 Compute Instances

One of the great things about using cloud infrastructure is that you don't have to upgrade hardware. Over a year ago, Oracle Cloud Infrastructure introduced new X7 options for its Compute service. Today, we are announcing the end of sale for older X5 options to manage the capacity of this older hardware. As of November 9, 2018, we are restricting the following X5 options (SKUs):

Oracle Cloud Infrastructure – Compute – Bare Metal Standard – X5 (BM.Standard1.36)

Oracle Cloud Infrastructure – Compute – Bare Metal Dense I/O – X5 (BM.DenseIO1.36)

Oracle Cloud Infrastructure – Compute – Virtual Machine Standard – X5 (VM.Standard1.x) – all shapes: 1, 2, 4, 8, and 16 OCPUs

Oracle Cloud Infrastructure – Compute – Virtual Machine Dense I/O – X5 (VM.DenseIO1.x) – all shapes: 4, 8, and 16 OCPUs

Oracle Cloud Infrastructure – Database – Virtual Machine – X5 Standard Capacity – Bring Your Own License (BYOL)

Oracle Cloud Infrastructure – Database – Bare Metal – X5 – Dense I/O Capacity – Bring Your Own License (BYOL)

Oracle Cloud Infrastructure – Database – Virtual Machine – X5 Standard Capacity – License Included (non-BYOL): Database Standard Edition, Database Enterprise Edition, Database Enterprise Edition High Performance, and Database Enterprise Edition Extreme Performance

Oracle Cloud Infrastructure – Database – Bare Metal – X5 – Dense I/O Capacity – License Included (non-BYOL): Database Standard Edition, Database Enterprise Edition, Database Enterprise Edition High Performance, and Database Enterprise Edition Extreme Performance

For current monthly universal credit customers, we will continue to support the use of these options in the three regions in which they were offered—Phoenix (PHX), Ashburn (IAD), and Frankfurt (FRA). However, the availability of these options is limited, and we recommend that you use X7 shapes or AMD shapes as your deployment grows. Starting November 9, 2018, new monthly universal credit customers will have access only to the X7 and AMD shapes.

We will set the service limits for these restricted options to zero for Pay-As-You-Go customers, unless they requested a limit increase before November 9. Pay-As-You-Go customers will be able to launch only new X7 or AMD instances, although any X5 instances that are currently being used will continue to work.

The following table lists the comparable, recommended X7 or AMD shape for each X5 shape. These options generally provide increased resources compared to the older instances. X7 compute instance shapes are priced at the same cost per OCPU per hour as the X5 shapes, and AMD EPYC processor-based instances cost less.
Compute Recommendations

Bare Metal Standard – X5
  X5 shape: BM.Standard1.36 – 36 OCPUs, 512 GB memory, 10 Gbps network bandwidth
  Recommended alternatives:
    X7 BM.Standard2.52 – 52 OCPUs, 768 GB memory, 2x25 Gbps network bandwidth
    AMD BM.Standard.E2.64 – 64 OCPUs, 512 GB memory, 2x25 Gbps network bandwidth
    X7 VM.Standard2.24 – 24 OCPUs, 320 GB memory, 24.6 Gbps network bandwidth

Bare Metal Dense I/O – X5
  X5 shape: BM.DenseIO1.36 – 36 OCPUs, 512 GB memory, 28.8 TB NVMe SSD local disk, 10 Gbps network bandwidth
  Recommended alternatives:
    X7 BM.DenseIO2.52 – 52 OCPUs, 768 GB memory, 51.2 TB NVMe SSD local disk, 2x25 Gbps network bandwidth
    X7 VM.DenseIO2.24 – 24 OCPUs, 320 GB memory, 25.6 TB NVMe SSD local disk, 24.6 Gbps network bandwidth

Virtual Machine Standard – X5
  VM.Standard1.1 – 1 OCPU, 7 GB memory, up to 600 Mbps network bandwidth
    Recommended: X7 VM.Standard2.1 – 1 OCPU, 15 GB memory, 1 Gbps network bandwidth; or AMD VM.Standard.E2.1 – 1 OCPU, 8 GB memory, 0.7 Gbps network bandwidth
  VM.Standard1.2 – 2 OCPUs, 14 GB memory, up to 1.2 Gbps network bandwidth
    Recommended: X7 VM.Standard2.2 – 2 OCPUs, 30 GB memory, 2 Gbps network bandwidth; or AMD VM.Standard.E2.2 – 2 OCPUs, 16 GB memory, 1.4 Gbps network bandwidth
  VM.Standard1.4 – 4 OCPUs, 28 GB memory, 1.2 Gbps network bandwidth
    Recommended: X7 VM.Standard2.4 – 4 OCPUs, 60 GB memory, 4.1 Gbps network bandwidth; or AMD VM.Standard.E2.4 – 4 OCPUs, 32 GB memory, 2.8 Gbps network bandwidth
  VM.Standard1.8 – 8 OCPUs, 56 GB memory, 2.4 Gbps network bandwidth
    Recommended: X7 VM.Standard2.8 – 8 OCPUs, 120 GB memory, 8.2 Gbps network bandwidth; or AMD VM.Standard.E2.8 – 8 OCPUs, 64 GB memory, 5.6 Gbps network bandwidth
  VM.Standard1.16 – 16 OCPUs, 112 GB memory, 4.8 Gbps network bandwidth
    Recommended: X7 VM.Standard2.16 – 16 OCPUs, 240 GB memory, 16.4 Gbps network bandwidth

Virtual Machine Dense I/O – X5
  VM.DenseIO1.4 – 4 OCPUs, 60 GB memory, 3.2 TB NVMe SSD local storage, 1.2 Gbps network bandwidth
    Recommended: X7 VM.DenseIO2.8 – 8 OCPUs, 120 GB memory, 6.4 TB NVMe SSD local storage, 8.2 Gbps network bandwidth
  VM.DenseIO1.8 – 8 OCPUs, 120 GB memory, 6.4 TB NVMe SSD local storage, 2.4 Gbps network bandwidth
    Recommended: X7 VM.DenseIO2.8 – 8 OCPUs, 120 GB memory, 6.4 TB NVMe SSD local storage, 8.2 Gbps network bandwidth
  VM.DenseIO1.16 – 16 OCPUs, 120 GB memory, 12.8 TB NVMe SSD local storage, 4.8 Gbps network bandwidth
    Recommended: X7 VM.DenseIO2.16 – 16 OCPUs, 240 GB memory, 12.8 TB NVMe SSD local storage, 16.4 Gbps network bandwidth

Database Recommendations

For customers using the BM.DenseIO1.36 shape (BYOL or non-BYOL), we recommend upgrading to the X7 bare metal BM.DenseIO2.52 shape. It provides newer Intel Skylake processors, a higher OCPU count, and more memory (768 GB of RAM). Additionally, the BM.DenseIO2.52 shape offers higher network bandwidth: 2x25 Gbps network connections versus a single 10 Gbps network connection for bare metal X5 Dense I/O. For customers using the VM.Standard1.N virtual machine shapes, we recommend upgrading to X7 Standard virtual machine instances with newer Intel Skylake processors and higher network bandwidth.
Bare Metal – X5 Dense I/O – Standard Edition
  X5: Bare Metal X5 Dense I/O – 2 OCPUs enabled, up to 6 additional OCPUs (purchased separately), 512 GB memory, 28.8 TB NVMe SSD raw storage (~9.4 TB with two-way mirroring, ~5.4 TB with three-way mirroring), 10 Gbps network bandwidth
  Recommended: Bare Metal X7 Dense I/O – 2 OCPUs enabled, up to 6 additional OCPUs (purchased separately), 768 GB memory, 51.2 TB NVMe SSD raw storage (~16 TB with two-way mirroring, ~9 TB with three-way mirroring), 2x25 Gbps network bandwidth

Bare Metal – X5 Dense I/O – Enterprise Editions (Enterprise Edition, High Performance, Extreme Performance)
  X5: Bare Metal X5 Dense I/O – 2 OCPUs enabled, up to 34 additional OCPUs (purchased separately), 512 GB memory, 28.8 TB NVMe SSD raw storage (~9.4 TB with two-way mirroring, ~5.4 TB with three-way mirroring), 10 Gbps network bandwidth
  Recommended: Bare Metal X7 Dense I/O – 2 OCPUs enabled, up to 50 additional OCPUs (purchased separately), 768 GB memory, 51.2 TB NVMe SSD raw storage (~16 TB with two-way mirroring, ~9 TB with three-way mirroring), 2x25 Gbps network bandwidth

Bare Metal – X5 Dense I/O – BYOL
  X5: Bare Metal X5 Dense I/O – 2 OCPUs enabled, up to 6 additional OCPUs (purchased separately) for Standard Edition or up to 34 for Enterprise Edition, 512 GB memory, 28.8 TB NVMe SSD raw storage (~9.4 TB with two-way mirroring, ~5.4 TB with three-way mirroring), 10 Gbps network bandwidth
  Recommended: Bare Metal X7 Dense I/O – 2 OCPUs enabled, up to 6 additional OCPUs (purchased separately) for Standard Edition or up to 50 for Enterprise Edition, 768 GB memory, 51.2 TB NVMe SSD raw storage (~16 TB with two-way mirroring, ~9 TB with three-way mirroring), 2x25 Gbps network bandwidth

Virtual Machine Standard – X5 – all editions (Standard, Enterprise, High Performance, Extreme Performance, BYOL)
  VM.Standard1.1 – 1 OCPU, 7 GB memory, up to 600 Mbps network bandwidth
    Recommended: X7 VM.Standard2.1 – 1 OCPU, 15 GB memory, 1 Gbps network bandwidth
  VM.Standard1.2 – 2 OCPUs, 14 GB memory, up to 1.2 Gbps network bandwidth
    Recommended: X7 VM.Standard2.2 – 2 OCPUs, 30 GB memory, 2 Gbps network bandwidth
  VM.Standard1.4 – 4 OCPUs, 28 GB memory, 1.2 Gbps network bandwidth
    Recommended: X7 VM.Standard2.4 – 4 OCPUs, 60 GB memory, 4.1 Gbps network bandwidth
  VM.Standard1.8 – 8 OCPUs, 56 GB memory, 2.4 Gbps network bandwidth
    Recommended: X7 VM.Standard2.8 – 8 OCPUs, 120 GB memory, 8.2 Gbps network bandwidth
  VM.Standard1.16 – 16 OCPUs, 112 GB memory, 4.8 Gbps network bandwidth
    Recommended: X7 VM.Standard2.16 – 16 OCPUs, 240 GB memory, 16.4 Gbps network bandwidth

For more information, see the Database Shape Details in the service documentation, or contact your Oracle Cloud Infrastructure CSM.
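If you're recreating an X5 VM on one of the recommended shapes, the launch step with the OCI CLI looks roughly like this sketch; all angle-bracket values are placeholders, and the shape shown is just one of the recommendations above:

# Launch a replacement instance on a recommended X7 shape:
oci compute instance launch --availability-domain <availability_domain> --compartment-id <compartment_ID> --shape VM.Standard2.1 --subnet-id <subnet_ID> --image-id <image_ID> --display-name <display_name>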


Events

Bare Metal vs. Virtual Machines: Which is Best for HPC in the Cloud?

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. Companies that want to run high performance computing (HPC) workloads in the cloud can get a significant performance boost by choosing bare metal servers over virtual machines (VMs)—and nobody does bare metal like Oracle Cloud Infrastructure. I recently sat down with Karan Batta, who manages HPC for Oracle Cloud Infrastructure, to discuss several HPC topics, including the key differences between running HPC workloads on bare metal and running them on VMs. We also talk about Oracle's approach to bare metal cloud and how it differs significantly from the competition. A condensed version of our conversation follows.

You often speak about the concept of bare metal cloud. Can you explain why HPC workloads are some of the best types of workloads to run in a bare metal cloud environment?

Karan Batta: Certainly. But first, let's take a step back. A lot of cloud providers have tried bare metal, but they haven't done it the way we have. With them, bare metal cloud always comes with an "if" or a "but"; there is always a catch. They say things like: "You want bare metal? Great. Tell us how many servers you need. We'll go buy them and provision them manually, and you can come back in three months." For us, bare metal is all about providing performance consistent with your on-premises cluster or data center, but with the added benefits and flexibility of the cloud. That's really what we've enabled here. Our bare metal offering is a fully multi-tenant environment where any customer can come in and spin up an instance that looks just like any other instance. It just so happens that there is no Oracle software running on it, there is no hypervisor running on it, and you get better performance for what you pay. This is really what it means to be running on a bare metal cloud. The reason HPC workloads are well suited for bare metal is the great performance boost that bare metal provides.

You mentioned that there is no hypervisor. But Oracle Cloud Infrastructure offers virtual machines (VMs) as well, correct?

Batta: Yes, definitely. We were initially called Bare Metal Cloud, but we've rebranded as Oracle Cloud Infrastructure because we offer VMs as well. So, if you want to run some test and dev workloads on a VM and then move them to bare metal, you can absolutely do that.

Why would an organization avoid running HPC workloads in cloud-based VMs?

Batta: When you use a hypervisor, you're essentially looking at anywhere from a 10-15 percent performance tax. That's a rough idea of how much performance you're going to lose because you're adding overhead on top of your server. If I'm already paying $3, $4, or $5 per hour for an instance and losing 10-15 percent of performance, that kind of defeats the purpose of running HPC in the cloud. We've tried to make sure that when we talk about HPC, we mean that we're going to match your on-premises performance and give you an amazing price for it.

You mentioned that bare metal cloud offers a 10-15 percent performance boost over virtualized cloud environments. What does that mean for our customers?

Batta: It means that customers can reduce the time that workloads take from days to hours to minutes.
Some people might say a 10-15 percent performance boost is not a big deal. But for anyone who runs resource-intensive HPC workloads, that is not the case. For them, 10 percent could translate to hours. If you're running, for example, a machine learning or artificial intelligence job, or a distributed deep learning training job for image recognition or voice translation, those types of jobs can take 16-20 hours. In some of the bigger cases, like search engine optimization, those things take weeks to run. So, a 10 percent performance boost there could mean that you're reducing the job by hours, if not days. So, I think there is a huge difference between bare metal and VMs.

Suppose an enterprise wants to run a combined HPC workload, with some parts on bare metal and some in a virtualized environment simultaneously. Is it possible to run that and scale up and down?

Batta: Yes, you could do that today on Oracle Cloud Infrastructure. And the great thing is you can scale this up, down, left, right—you name it, we can do it. With Oracle Cloud Infrastructure, you get performance and flexibility side by side. If you're running an HPC job and you just want to quickly test it, you can spin up a couple of VMs with one core, or even a fraction of a core. Then you can move to a full bare metal instance with something like 52 physical cores—the largest bare metal instance you can find on any cloud—and run your production workloads. The other thing bare metal provides is flexibility. Not only do you have the ability to run HPC on our VMs, but you can move your entire virtualized environment and we will paravirtualize it on top of our bare metal nodes.

Come talk with Karan and the rest of the team about HPC on Oracle Cloud Infrastructure at SC18 in Dallas next week in booth #2806.


Copying Instances or Images Across Regions

Sometimes you need to move the data that is stored on a block volume between Oracle Cloud Infrastructure regions. In this case, you can use cross-region block volume backup copy. Unfortunately, you can’t use this method with boot volumes. If you need to transfer your instance or image between regions, use one of the methods outlined in this post, depending on the configuration of your instance. In this post, I assume that you're familiar with the Oracle Cloud Infrastructure CLI and have it set up. If not, use our quick start guide to help you do that.

Custom Image for Boot Volumes Smaller Than 50 GB

By default, every Linux instance launched in Oracle Cloud Infrastructure is created with a 50 GB boot volume. This value can be altered during instance creation, but that might lead to some limitations, which I’ll explain in the next section. This scenario assumes that the size of your boot volume is 50 GB or less.

1. Create a custom image of the instance. The instance reboots during the process, so ensure that it’s not running a production workload.

oci compute image create --display-name <display_name> --instance-id <instance_ID> --compartment-id <compartment_ID> --wait-for-state AVAILABLE

2. Export the custom image to an Object Storage bucket.

oci compute image export to-object --image-id <custom_image_ID> --namespace <tenancy_namespace> --bucket-name <object_storage_bucket> --name <object_name>

3. Wait for the process to complete. You can monitor the lifecycle status by using the following command. When the lifecycle state changes from EXPORTING to AVAILABLE, the process is complete.

oci compute image get --image-id <image_ID>

4. Copy the object to the new region. Ensure that you have the relevant permissions and policies applied, as outlined at https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/copyingobjects.htm. Cross-region copy lets you asynchronously copy objects to other buckets in the same region, to buckets in other regions, or to buckets in other tenancies within the same region or in other regions. When copying the objects, you can keep the same name or modify the object name. The object copied to the destination bucket is considered a new object with unique ETag values and MD5 hashes. Before you start the operation, ensure that the destination bucket exists, or create one by using the oci os bucket create command; otherwise, the operation fails.

oci os object copy --bucket-name <bucket_name> --source-object-name <object_name> --destination-region <destination_region_name> --destination-bucket <destination_bucket_name>

5. Create a pre-authenticated request for the object. The resulting URL is objectstorage.<region>.oraclecloud.com/<preauth_path>.

oci --region <new_region_name> os preauth-request create --bucket-name <bucket_name> --name <object_name> --access-type ObjectRead --time-expires <date_time>

6. Import the custom image into the new region by using the Object Storage URL.

oci --region <region_name> compute image import from-object-uri --uri <URI> --compartment-id <compartment> --display-name <display_name> --launch-mode NATIVE --source-image-type QCOW2

After the image is imported, you should see it in the custom images list and be able to launch an instance by using the Console or the CLI.

Custom Image for Boot Volumes Larger Than 50 GB

If the image is created from an instance with a boot volume larger than 50 GB, the process might fail because of a limitation on the pre-authenticated request object size.
As a workaround, we clone the boot volume, use a disposable instance to create an image, and then upload and import the image. You can also use this method with a regular-sized image to avoid restarting the original instance. Before you start, ensure that you have the required permissions in the IAM policies, that you have an API key that you can apply on the remote machine, and that the Oracle Cloud Infrastructure CLI is installed locally.

1. Create a boot volume clone. This process is almost instantaneous.

oci bv boot-volume create --source-boot-volume-id <boot_volume_ID> --display-name <new_boot_volume_display_name>

2. Create a block volume in the same availability domain that is big enough to store the temporary image.

oci bv volume create --availability-domain <availability_domain> --size-in-gbs 1024

3. Launch a new Oracle Linux 7.5 instance in the same availability domain. This is your disposable instance.

4. Attach the block volume and the cloned boot volume.

oci compute volume-attachment attach --instance-id <instance_ID> --volume-id <block_volume_ID> --type ISCSI
oci compute volume-attachment attach --instance-id <instance_ID> --volume-id <cloned_boot_volume_ID> --type ISCSI

5. Use SSH to connect to the instance, and install the required packages.

sudo yum -y install qemu-img pv python-pip
sudo pip install oci-cli

6. Configure the CLI on the remote system, following the procedure at https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm.

7. Configure the disks on the temporary system and format the partition.

sudo oci-iscsi-config
sudo parted -s -a optimal /dev/sdb mklabel gpt
sudo parted -s -a optimal /dev/sdb mkpart primary 0% 100%
sudo mkfs.xfs /dev/sdb1
sudo mount /dev/sdb1 /media

8. Create an image from the boot volume. This process might take a long time, even hours, depending on the disk size. The result is an image.qcow2 file that contains a QCOW2-formatted disk image of the boot volume, which you can upload to Object Storage and import into your new region.

sudo qemu-img convert -p -S1M /dev/sdc -O qcow2 /media/image.qcow2

9. Upload the file to the Object Storage bucket.

oci --region <region_name> os object put -ns <namespace> -bn <bucket_name> --file /media/image.qcow2 --name image.qcow2

10. Import the image from the bucket.

oci --region <region_name> compute image import from-object --namespace <ns> --bucket-name <bucket_name> --name image.qcow2 --compartment-id <compartment> --display-name <display_name> --launch-mode NATIVE --source-image-type QCOW2

After the image is imported, you should see it in the custom images list and be able to launch an instance by using the Console or the CLI. Now you can terminate the temporary instance and delete the block volumes that you created during the process, as sketched below. I hope this helps!
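A hypothetical cleanup sketch for that last step, assuming the IDs from the steps above; double-check the OCIDs before running, because these deletions are irreversible:

# Terminate the disposable instance (by default its boot volume is deleted with it):
oci compute instance terminate --instance-id <disposable_instance_ID> --force

# Delete the temporary block volume and the cloned boot volume:
oci bv volume delete --volume-id <block_volume_ID> --force
oci bv boot-volume delete --boot-volume-id <cloned_boot_volume_ID> --force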


Oracle Cloud Infrastructure

Enhanced Compute Instance Management on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure has released two new features that augment compute instance management: Instance Configurations and Instance Pools. While the cloud provides a lot of useful standards - standard OS images, standard shape configurations, and so on - there is still additional overhead in provisioning and attaching resources like volumes and VNICs. Provisioning at scale and managing instance setup has been difficult, until now. The Instance Configurations feature simplifies the provisioning of an instance and all of its required resources with a single API call. This simplification then extends to Instance Pools, where Instance Configurations are used to create a logical grouping of many identical compute instances that are automatically launched at scale.

What are Instance Configurations?

An Instance Configuration is a template that defines the set of required and optional parameters needed to create a compute instance on Oracle Cloud Infrastructure - including the OS image, shape, and resources such as attached block volumes - as a single configuration entity. You can create an Instance Configuration from an existing running instance or construct a custom Instance Configuration via the CLI. When boot or data storage volumes do not already exist, these resources are created automatically for you when you launch an instance. With one single action, you can launch an instance while we create the storage volumes, attach the VNICs, and stripe the set number of instances evenly across the desired availability domains (ADs) for you. This would normally require manual provisioning of each individual resource on the platform.

Creating an Instance Configuration

Create an instance configuration from an existing running instance with the new Create Instance Configuration button. Select a compartment and give your configuration a name. All the metadata from the instance is then captured for you. On the left menu, go to Compute, select Instances, and then click Instance Details. To view the saved configuration, go to Compute, and then Instance Configuration.

How Instance Pools Work

Oracle Cloud Infrastructure has created a powerful new approach that launches and manages identical VM instances in a logical group called an Instance Pool. The pool automatically provisions a horizontally scalable set of VM instances. An Instance Pool uses an instance configuration template that contains all the settings for how you want an instance created, and it manages the launching of identical instances based on that template. The pool maintains your configured instance count and can be updated to scale on demand. The Instance Pool constantly monitors its own health state to ensure that all instances are in a running state. In the event of any instance failure, the pool automatically self-heals and takes corrective action to bring itself back to a healthy state.

Easily Create and Launch a New Instance Pool

Create an instance pool in less than 30 seconds:

1. Go to Compute, and select Instance Pools.
2. Click the Create Instance Pool button, and enter the number of instances you want for the pool.
3. Select the instance configuration that you created previously.
4. Select the availability domains for the desired resiliency. (Instances are distributed evenly across the selected ADs.)
5. Select the primary VNIC and subnet.

Provisioning the Instance Pool launches the configured instances.
After the Instance Pool is running, you can perform power actions on the pool. The Edit button allows you to update the pool size with the number of instances (0-50). Stopping the pool stops all instances in the pool, Reboot restarts all instances, and Terminate destroys all the instances and the pool itself.

Common Use Cases

Instance Configurations:
Clone an instance and save it to a configuration file.
Create standardized baseline instance templates.
Easily deploy instances from the CLI with a single configuration file.
Automate the provisioning of many instances and their resources, and handle the attachments.

Instance Pools:
Centrally manage a group of instance workloads that all share a consistent configuration.
Scale out instances on demand by increasing the size of the pool.
Update a large number of instances with a single instance configuration change.
Maintain high availability and distribute instances across availability domains within a region.
Scale up the VM size within a pool by updating the instance configuration with a larger shape.
Enable automatic self-healing within the pool to maintain pool size and availability.
Keep up with customer demand with large-scale support for hundreds of custom VM images.

Together, Instance Configurations and Instance Pools remove the complexity of deploying and managing hundreds of VMs on Oracle Cloud Infrastructure. There is no additional cost for Instance Configurations and Instance Pools; you pay only for the resources consumed by a launched VM instance.

Next Steps

Learn more about how to get started with Oracle Cloud Infrastructure Instance Configurations and Instance Pools in our Managing Compute Instances documentation.
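The pool actions described above can also be scripted. This is a minimal sketch assuming the OCI Python SDK's ComputeManagementClient; the pool OCID is a placeholder:

    import oci

    config = oci.config.from_file()
    compute_mgmt = oci.core.ComputeManagementClient(config)
    POOL_ID = "ocid1.instancepool.oc1..example"  # placeholder

    # Scale the pool on demand by updating its size (the Edit action).
    compute_mgmt.update_instance_pool(
        POOL_ID, oci.core.models.UpdateInstancePoolDetails(size=20)
    )

    # Power actions mirror the console buttons.
    compute_mgmt.stop_instance_pool(POOL_ID)       # stop every instance in the pool
    compute_mgmt.start_instance_pool(POOL_ID)      # start them again
    compute_mgmt.reset_instance_pool(POOL_ID)      # reboot all instances
    compute_mgmt.terminate_instance_pool(POOL_ID)  # destroy instances and the pool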


Events

What is HPC in the Cloud? Exploring the Need for Speed

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. High Performance Computing (HPC) refers to the practice of aggregating computing power in a way that delivers much higher horsepower than traditional computers and servers. HPC is used to solve complex, performance-intensive problems—and organizations are increasingly moving HPC workloads to the cloud. HPC in the cloud is changing the economics of product development and research because it requires fewer prototypes, accelerates testing, and decreases time to market. I recently sat down with Karan Batta, who manages HPC for Oracle Cloud Infrastructure, to discuss how HPC in the cloud is changing the way that organizations new and old develop products and conduct cutting-edge scientific research. We talked about a range of topics, including the key differences between legacy on-premises HPC workloads and newer HPC workloads that were born in the cloud. Listen to our conversation here and read a condensed version below.

Let's start with a basic definition. What is HPC and why is everyone talking about it?

Karan Batta: HPC stands for High Performance Computing—and people tend to bucket a lot of stuff into the HPC category. For example, artificial intelligence (AI) and machine learning (ML) are a bucket of HPC. And if you're doing anything beyond building a website—anything that is dynamic—it's generally going to be high performance. From a traditional perspective, HPC is very research-oriented, or scientifically oriented. It's also focused on product development. For example, think about engineers at a big automotive company making a new car. The likelihood is that the engineers will bucket all of that development—all of the crash testing analysis, all of the modeling of that car—into what's now called HPC. The reason the term HPC exists is because it's very specialized. You may need special networking gear, special compute gear, and high-performance storage, whereas less dynamic business and IT applications may not require that stuff.

Why should people care about HPC in the cloud?

Batta: People and businesses should care because it really is all about product development. It's about the value that manufacturers and other businesses provide to their customers. Many businesses now care about it because they've moved some of their IT into the cloud. And now they're actually moving stuff into the cloud that is more mission-critical for them—things like product development. For example, building a truck, building a car, building the next generation of DNA sequencing for cancer research, and things like that.

Legacy HPC workloads include things like risk analysis modeling and Monte Carlo simulation, and now there are newer kinds of HPC workloads like AI and deep learning. When it comes to doing actual computing, are they all the same or are these older and newer workloads significantly different?

Batta: At the end of the day, they all use computers and servers and network and storage. The concepts from legacy workloads have been transitioned into some of these modern cloud-native type workloads like AI and ML. Now, what this really means is that some of these performance-sensitive workloads like AI and deep learning were born in the cloud when cloud was already taking off.
It just so happened that they could use legacy HPC primitives and performance to help accelerate those workloads. And then people started saying, "Okay, then why can't I move my legacy HPC workloads into the cloud, too?" So, at the end of the day, these workloads all use the same stuff. But I think that how they were born and how they made their way to the cloud is different.

What percentage of new HPC workloads coming into the cloud are legacy, and what percentage are newer workloads like AI and deep learning? Which type is easier to move to the cloud?

Batta: Most of the newer workloads like AI, ML, containers, and serverless were born in the cloud, so there are already ecosystems available to support them in the cloud. Rather than look at it percentage-wise, I would suggest thinking about it in terms of opportunity. Most HPC workloads that are in the cloud are in the research and product development phase. Cutting-edge startups are already doing that. But the big opportunity is going to be in legacy HPC workloads moving into the cloud. I'm talking about really big workloads—think about Pfizer, GE, and all these big monolithic companies that are running production HPC workloads on their on-premises clusters. These things have been running for 30 or 40 years and they haven't changed.

Is it possible to run the newer HPC workloads in my old HPC environment if I already have it set up? Can companies that have invested heavily in on-premises HPC just stay on the same trajectory?

Batta: A lot of the latest HPC workloads are the more cutting-edge workloads that were born in the cloud. You can absolutely run those on old HPC hardware. But they're generally cloud-first, meaning that they have been integrated with graphics processing units (GPUs). NVIDIA, for example, is doing a great job of making sure any new workloads that pop up are already hardware accelerated. In terms of general-purpose legacy workloads, a lot of that stuff is not GPU accelerated. If you think about crash testing, for example, that's still not completely prevalent on GPUs. Even though you could run it on GPUs if you wanted, there's still a long-term timeline for those applications to move over. So, yes, you can run new stuff on the old HPC hardware. But the likelihood is that those newer workloads have already been accelerated by other means, and so it becomes a bit of a wash.

In other words, these newer workloads are built cloud-native, so trying to run them on premises on legacy hardware is a bit like trying to put a square peg in a round hole. Is that correct?

Batta: Exactly. And you know, somebody may do that, because they've already invested in a big data center on premises and it makes sense. But I think over time this is going to be the case less and less.

Come talk with Karan and others about HPC on Oracle Cloud Infrastructure at SC18 in Dallas next week in booth #2806.


Oracle Cloud Infrastructure

Part 2 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform

This is the second part of our blog series in which we take a deep dive into the Oracle Cloud Infrastructure security approach. As a recap, we design our security architecture and build security solutions based on seven core pillars. Under each of these pillars, we focus on delivering solutions and capabilities that help our customers improve the security posture of their overall cloud infrastructure. In the first post, we discussed how we enable customers to achieve isolation and encrypt their data. In this post, we dig into our third and fourth pillars, and discuss how you can obtain the security controls and visibility needed for your cloud environment.

3. Security Controls

Security controls offer customers effective and easy-to-use security management. The solutions that we offer allow you to control access to your services and segregate operational responsibilities to reduce the risk associated with malicious and accidental user actions.

User authentication and authorization-based security controls: Each user has one or more of the following credentials to authenticate to Oracle Cloud Infrastructure. Users can generate and rotate their own credentials. In addition, a tenancy security administrator can reset credentials for any user within their tenancy.

Console password: Used to authenticate a user to the Oracle Cloud Infrastructure Console.

API key: All API calls are signed using a user-specific 2048-bit RSA private key. The user creates a key pair and uploads the public key in the Console.

SSH key pair: A user can access an instance via SSH, which requires that the user has an SSH key pair.

Swift password: Used by Recovery Manager (RMAN) to access the Object Storage service for database backups. To ensure sufficient complexity, the IAM service creates the password and the customer cannot provide it.

Customer secret key: Used by Amazon S3 clients to access the Object Storage service's S3-compatible API. To ensure sufficient complexity, the IAM service creates the key and the customer cannot provide it.

Instances: Instances are a new principal type in IAM. Customers no longer need to configure user credentials on the services running on their compute instances or rotate those credentials. Each compute instance has its own identity, and it authenticates using the certificates that are added to the instance by instance principals. Because these certificates are automatically created, assigned to instances, and rotated, customers do not need to distribute credentials to their hosts or rotate them. You can group instances in logical groups called dynamic groups, and you can define IAM policies for these groups. Dynamic groups allow you to group Oracle Cloud Infrastructure instances as principal actors, similar to user groups. You can then create policies to permit instances in these groups to make API calls against Oracle Cloud Infrastructure services. Membership in the group is determined by a set of matching rules.

Federated users: Federated users who attempt to authenticate to the Oracle Cloud Infrastructure graphical administration console are redirected to the configured identity provider, after which they can manage Oracle Cloud Infrastructure resources in the console just like a native IAM user. Currently, Oracle Cloud Infrastructure supports Oracle Identity Cloud Service and Microsoft Active Directory Federation Services (ADFS) as identity providers. Federated groups can be mapped to native IAM groups to define which policies apply to a federated user.
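To make the API key flow concrete, the following minimal sketch generates the user-specific 2048-bit RSA pair with recent versions of the Python cryptography library (openssl works just as well). You keep the private key and upload the public PEM in the Console:

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the user-specific 2048-bit RSA key used to sign API calls.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Private half: keep this safe; it signs every API request.
    private_pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),  # encrypt this in real use
    )

    # Public half: this is what you upload in the Console.
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )

    print(public_pem.decode())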
Security Lists: Oracle IaaS also provides a native firewall-as-a-service in the form of security lists, which are applied at the subnet level. For example, the security list rules for a database subnet can restrict it to connecting from and to the web server's subnet, while the security list for the web server subnet allows all outgoing connections and restricts incoming connections. A security list provides a virtual firewall for an instance, with ingress and egress rules that specify the types of traffic allowed in and out. Each security list is enforced at the instance level. However, you configure your security lists at the subnet level, which means that all instances in a given subnet are subject to the same set of rules. The security lists apply to a given instance whether it's talking to another instance in the VCN or a host outside the VCN. When you create a security list rule, you choose whether it's stateful or stateless.

Stateful: If you add a stateful rule to a security list, you indicate that you want to use connection tracking for any traffic that matches that rule (for instances in the subnet that the security list is associated with). This means that when an instance receives traffic matching the stateful ingress rule, the response is tracked and automatically allowed back to the originating host, regardless of any egress rules applicable to the instance. When an instance sends traffic that matches a stateful egress rule, the incoming response is automatically allowed, regardless of any ingress rules.

Stateless: If you add a stateless rule to a security list, you indicate that you do not want to use connection tracking for any traffic that matches that rule (for instances in the subnet that the security list is associated with). This means that response traffic is not automatically allowed. To allow the response traffic for a stateless ingress rule, you must create a corresponding stateless egress rule.

Containers: For containers, the Kubernetes RBAC Authorizer can enforce more fine-grained access control for users on specific clusters via Kubernetes RBAC roles and clusterroles. A Kubernetes RBAC role is a collection of permissions. For example, a role might include read permission on pods and list permission for pods. A Kubernetes RBAC clusterrole is just like a role, but it can be used anywhere in the cluster. A Kubernetes RBAC rolebinding maps a role to a user or set of users, granting that role's permissions to those users for resources in that namespace. Similarly, a Kubernetes RBAC clusterrolebinding maps a clusterrole to a user or set of users, granting that clusterrole's permissions to those users across the entire cluster. IAM and the Kubernetes RBAC Authorizer work together to enable users who have been successfully authorized by at least one of them to complete the requested Kubernetes operation. When a user attempts to perform any operation on a cluster (except for create role and create clusterrole operations), IAM first determines whether the group that the user belongs to has the appropriate permissions. If so, the operation succeeds. If the attempted operation also requires additional permissions granted through a Kubernetes RBAC role or clusterrole, the Kubernetes RBAC Authorizer then determines whether the user has been granted the appropriate Kubernetes role or clusterrole. By default, users are not assigned any Kubernetes RBAC roles (or clusterroles).
So before attempting to create a new role (or clusterrole), users must be assigned an appropriately privileged role (or clusterrole). You can connect to worker nodes using SSH. If you provided a public SSH key when creating the node pool in a cluster, the public key is installed on all worker nodes in the cluster. On UNIX and UNIX-like platforms (including Solaris and Linux), you can then connect to the worker nodes through SSH using the SSH utility (an SSH client) to perform administrative tasks. Before you can connect to a worker node using SSH, you must define a security ingress rule in the security list for the worker node subnet to allow SSH access.

4. Visibility

To give you the visibility you need over your cloud infrastructure, Oracle offers comprehensive log data and security analytics that you can use to audit and monitor actions on your resources. This allows you to meet your audit requirements and reduce security and operational risk. The Oracle Cloud Infrastructure Audit service records all API calls to resources in a customer's tenancy, as well as login activity from the graphical management console. Using the Audit service, customers can achieve their own security and compliance goals by monitoring all user activity within their tenancy. Because all Console, SDK, and command line (CLI) calls go through our APIs, all activities from those sources are included. Audit records are available through an authenticated, filterable query API, or they can be retrieved as batched files from Oracle Cloud Infrastructure Object Storage. You can also search for API calls via the Console. Audit log contents include what activity occurred, the user who initiated it, the date and time of the request, as well as the source IP, user agent, and HTTP headers of the request. New activities are usually appended to the audit logs within 15 minutes of occurrence. By default, audit logs are retained for 90 days, but you can configure retention for up to 365 days.

In addition to the Audit service, Oracle CASB-based security monitoring performs Oracle Cloud Infrastructure resource activity configuration checks, IAM user behavior analysis, and IP reputation analysis. Examples of CASB Oracle Cloud Infrastructure security checks:

Publicly accessible object store buckets
Open VCN security lists (0.0.0.0/0)
VCN accessible to the internet
IAM user password not rotated for more than 90 days
IAM user API keys not rotated for more than 90 days
IAM user password complexity checks
MFA not enabled on admin account

In my next blog post, I will cover the next two pillars: secure hybrid cloud and high availability. In the meantime, use these resources to learn more about Oracle Cloud Infrastructure security:

• Oracle Cloud Infrastructure Security White Paper
• Oracle Cloud Infrastructure GDPR White Paper
• Oracle Cloud Infrastructure Security Best Practices Guide
• Services Security Documentation

Blogs:

Part 1 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform
Guidance for PCI Compliance
Guidance for cSOC
Guidance for third-party firewall installation on Oracle Cloud Infrastructure - Check Point, vSRX
Guidance for IAM configuration for MSPs
Guidance for IAM Best Practices
Guidance for Migration and DR using Rackware
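Returning to the Audit service's filterable query API described above, here is a minimal sketch using the OCI Python SDK's AuditClient. The client and the event field names reflect the SDK at the time of writing and may differ across versions, so treat them as assumptions:

    import datetime
    import oci

    config = oci.config.from_file()  # assumes a standard ~/.oci/config profile
    audit = oci.audit.AuditClient(config)

    # Query the last 24 hours of audit events for the root compartment (tenancy).
    end_time = datetime.datetime.utcnow()
    start_time = end_time - datetime.timedelta(hours=24)

    events = audit.list_events(
        compartment_id=config["tenancy"],
        start_time=start_time,
        end_time=end_time,
    ).data

    for event in events:
        # Each record captures what happened, where it came from, and when.
        print(event.event_time, event.event_type, event.event_source)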


Strategy

Enterprise Cloud Infrastructure Shouldn't Be a Commodity

Oracle entered the infrastructure as a service (IaaS) market for two main reasons. From an internal perspective, we're a pioneer and leader in databases. Today's databases need the performance and scalability of cloud infrastructure to meet the demands of enterprise customers. That means we needed a cloud to support our business and our customers' businesses. Even more importantly, we believe IaaS is not and should not be a commodity. There are significant opportunities to innovate and improve cloud migration, integration, and performance. We're capitalizing on these opportunities to provide a truly enterprise-grade cloud to businesses of all sizes.

Enterprise Migration Obstacles

Most enterprise workloads still haven't moved to the cloud, and those that remain on-premises are typically the most mission-critical. Why? There are several obstacles to the large-scale migration of enterprise workloads:

Many organizations have serious security concerns—rightfully so, given how some cloud providers handle customer tenancy, keys, and data.
Many applications simply can't run in earlier-generation clouds without significant re-architecture, because of their existing hardware and software dependencies.
Enterprises must be able to run their entire businesses in the cloud. Unfortunately, they haven't had that option for much of the past decade.
Mission-critical applications require high, consistent performance and reliability. Customers haven't found this, or the proper service levels, in earlier-generation clouds.
The migration process itself is often risky, from downtime to the complexity of translating on-premises security to the cloud.

Purpose-Built for the Enterprise

In a real enterprise-grade cloud, large businesses can easily and securely migrate entire systems, even those that rely on multiple technologies from multiple vendors. An on-premises Oracle application, for example, might run on Exadata and use Real Application Clusters (RAC), all in a virtualized environment, protected by several different monitoring and security tools. Oracle Cloud Infrastructure enables enterprises to move these integrated systems to the cloud all at once. Our approach ensures the integrity of customers' existing security capabilities and significantly reduces the risk of migration. It's not enough to enable seamless cloud migration, however. Enterprise-grade clouds must also provide room to grow, from scaling existing systems to supporting and integrating with new technologies. At Oracle Cloud Infrastructure, we're committed to openness and we embrace transformative new technologies and methodologies, including DevOps, containers, serverless, Kubernetes, and Terraform. Other clouds support these too, but it's often through a hodgepodge of services that don't integrate well with each other or with existing systems. We're doing cloud infrastructure better because we're purpose-built for the enterprise of today, and the enterprise of tomorrow.


Solutions

Oracle Jump Start Learning: Introducing Self-Paced Hands-On Labs

I remember my early days of trying out a cloud platform to create a couple of virtual machines. I got “ready” by reading documentation and watching quite a few online tutorials. But when I first logged into the platform, I was lost. I had to revisit the documents and tutorials to navigate my way around the platform. After I had some hands-on experience, everything became a lot easier. Moral of the story: hands-on experience beats theoretical knowledge.

Oracle Cloud Infrastructure provides unparalleled price and performance for customers deploying workloads in a cloud environment. We recognize that our existing and future customers have a varied skill set. It's important for us to provide our customers and partners with tools and solutions to bridge any skill gap so that they can successfully use and deploy solutions on Oracle Cloud Infrastructure. In our endeavor to enable, empower, and expedite our customers, we are delighted to introduce Jump Start Learning. These self-paced, hands-on labs provide a live environment with step-by-step instructions for performing different tasks in Oracle Cloud Infrastructure. Best of all, the instructions and the Oracle Cloud Infrastructure Console are visible in a single split screen. No more switching back and forth between browser windows to read instructions. Want to create a virtual cloud network and deploy a compute instance on it? There's a lab for that. Want to learn how to use Terraform to deploy infrastructure as code? There's a lab for that. Want to deploy and configure Oracle Autonomous Data Warehouse? Well, there's a lab for that, too.

Following are some of the key advantages and features of the labs:

Five beginner-level labs are free. There is minimal cost for the rest of the labs.
Step-by-step instructions and access to the Console in a single browser screen.
Labs based on skill level. Start as a beginner, learn the basics, work your way to advanced, and then use the experience for a production rollout.
Configure and deploy the latest features, such as a service gateway and Autonomous Data Warehouse.
No need to install any tools on your laptop; all necessary tools are built in.
Best hands-on experience, period!

Start taking the labs today by registering your account at https://ocitraining.qloudable.com/. Remember to rate the lab at the end and provide your feedback. We are always listening to our customers and, more importantly, acting on your feedback, so let us know if you want to see a specific lab. Happy learning!


Security

Part 1 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform

Oracle Cloud Infrastructure’s security approach is based on seven core pillars. Each pillar has multiple solutions designed to maximize the security and compliance of the platform. You can read more about Oracle Cloud Infrastructure's security approach here. The seven core pillars of a trusted enterprise cloud platform are:

Customer Isolation
Data Encryption
Security Controls
Visibility
Secure Hybrid Cloud
High Availability
Verifiably Secure Infrastructure

Oracle employs some of the world’s foremost security experts in information, database, application, infrastructure, and network security. By using Oracle Cloud Infrastructure, our customers directly benefit from Oracle’s deep expertise and continuous investments in security. In this blog (Part 1), I explain how Oracle Cloud Infrastructure security services map to our first two pillars: Customer Isolation and Data Encryption. In the next blog (Part 2), I will cover the next two pillars.

1. Customer Isolation

Customer isolation allows customers to deploy application and data assets in an environment that provides full isolation from other tenants and from Oracle’s staff. Let's dive into how we offer isolation at different resource levels.

Compute

At the compute level, we offer two types of instance isolation. Bare metal instances offer complete workload and data isolation. Customers have full control of these instances. Every bare metal instance is a single-tenant solution: Oracle personnel have no access to memory or local storage while the instance is running, and there is no Oracle-managed hypervisor on bare metal instances. Virtual machine instances are a multi-tenant solution. VM instances run on an Oracle-managed hypervisor and come with strong isolation controls. Both instance types offer strong security controls. Customers who want higher-performance instances and complete workload and data isolation often prefer bare metal instances.

Networking

Next, the Oracle Cloud Infrastructure Networking service offers customers a customizable private network (a VCN, or virtual cloud network). VCNs enforce the logical isolation of a customer's Oracle Cloud Infrastructure resources. Oracle’s VCN gives you the complete set of network services you need in the cloud, with the same network flexibility you have today on-premises. You can build an isolated virtual network with granular controls, including subnets and security lists. We provide secure and dedicated connectivity from your data center to the cloud through FastConnect with multiple providers, such as Equinix and Megaport. You can give end customers high-performance and predictable access to your applications with services like provisioned-bandwidth load balancing. All networking services are API-driven and programmable for more automated management and application control. As with an on-premises network in a data center, customers can set up a VCN with hosts and private IP addresses, subnets, route tables, and gateways. The VCN can be configured for internet connectivity using an Internet Gateway, or connected to the customer's private data center through an IPSec VPN gateway or FastConnect. FastConnect offers a private connection between an existing network's edge router and dynamic routing gateways, so traffic does not traverse the internet. Subnets, the primary subdivision of a VCN, are specific to an availability domain. They can be marked as private upon creation, which prevents instances launched in that subnet from having public IP addresses.
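To ground the networking description above, here is a minimal sketch that creates a VCN and a private subnet with the OCI Python SDK. The compartment OCID and availability domain name are placeholders, and prohibit_public_ip_on_vnic is the flag that marks a subnet private, per the SDK reference at the time of writing:

    import oci

    config = oci.config.from_file()
    vcn_client = oci.core.VirtualNetworkClient(config)
    COMPARTMENT_ID = "ocid1.compartment.oc1..example"  # placeholder

    # Create an isolated virtual cloud network.
    vcn = vcn_client.create_vcn(
        oci.core.models.CreateVcnDetails(
            cidr_block="10.0.0.0/16",
            compartment_id=COMPARTMENT_ID,
            display_name="isolated-vcn",
        )
    ).data

    # Create a private subnet: instances launched here cannot get public IPs.
    subnet = vcn_client.create_subnet(
        oci.core.models.CreateSubnetDetails(
            compartment_id=COMPARTMENT_ID,
            vcn_id=vcn.id,
            cidr_block="10.0.1.0/24",
            availability_domain="Uocm:PHX-AD-1",  # placeholder AD name
            prohibit_public_ip_on_vnic=True,      # marks the subnet private
            display_name="private-subnet",
        )
    ).data
    print("Subnet:", subnet.id, subnet.lifecycle_state)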
Compartments and Policies

From an authorization perspective, Identity and Access Management (IAM) compartments can be used for isolation. A compartment is a heterogeneous collection of resources for the purposes of security isolation and access control. All end-user calls to access Oracle Cloud Infrastructure resources are first authenticated by the IAM service and then authorized based on IAM policies. A customer can create a policy that gives a specific set of users permission to access the infrastructure resources (network, compute, storage, and so on) within a compartment in the tenancy. These policies are flexible and are written in a human-readable form that is easy to understand and audit. The syntax includes verbs that define the level of access given to end users. For example, a statement like "Allow group NetworkAdmins to manage virtual-network-family in compartment ProjectA" (group and compartment names are illustrative) grants one group administrative control over the networking resources in a single compartment.

2. Data Encryption

Our second core security pillar, data encryption, protects customer data at rest and in transit in a way that allows customers to meet security and compliance requirements with respect to cryptographic algorithms and key management.

Block Volume Encryption

The Oracle Cloud Infrastructure Block Volumes service provides persistent storage that can be attached to compute instances using the iSCSI protocol. The volumes are stored on high-performance network storage and support automated backup and snapshot capabilities. Volumes and their backups are accessible only from within a customer's VCN and are encrypted at rest using unique keys. For additional security, iSCSI CHAP authentication can be required on a per-volume basis.

Object Storage Encryption

The Oracle Cloud Infrastructure Object Storage service provides highly scalable, strongly consistent, and durable storage for objects, ideal for media archives, data lakes, and data protection applications like backup and restore. API calls over HTTPS provide high-throughput access to data. All objects are encrypted at rest using unique keys. Objects are organized by bucket, and, by default, access to buckets and the objects within them requires authentication. You can use IAM security policies to grant users and groups access privileges to buckets. To allow bucket access by users who do not have IAM credentials, the bucket owner (or a user with the necessary privileges) can create pre-authenticated requests that allow authorized actions on buckets or objects for a specified duration. Alternatively, buckets can be made public, which allows unauthenticated and anonymous access. Given the security risk of inadvertent information disclosure, Oracle highly recommends carefully considering the business case for making buckets public. Object Storage enables you to verify that an object was not unintentionally corrupted by allowing an MD5 hash to be sent with the object (or with each part, for multipart uploads) and returned upon successful upload. This hash can be used to validate the integrity of the object.

In addition to its native API, the Object Storage service supports Amazon S3 compatible APIs. Using the Amazon S3 Compatibility API, customers can continue to use existing S3 tools (for example, SDK clients), and partners can modify their applications to work with Object Storage with minimal changes. The native API can coexist with the Amazon S3 Compatibility API, which supports CRUD operations. Before customers can use the Amazon S3 Compatibility API, they must create an S3 Compatibility API key.
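To make that concrete, here is a hedged sketch pointing the standard boto3 S3 client at Object Storage's S3-compatible endpoint. The namespace, region, bucket name, and key pair are placeholders, and the endpoint format follows the Object Storage documentation:

    import boto3

    # Placeholders: your tenancy's Object Storage namespace, a region, and the
    # customer secret key pair generated for the S3 Compatibility API.
    NAMESPACE = "mynamespace"
    REGION = "us-ashburn-1"

    s3 = boto3.client(
        "s3",
        region_name=REGION,
        endpoint_url=f"https://{NAMESPACE}.compat.objectstorage.{REGION}.oraclecloud.com",
        aws_access_key_id="<access key id from the customer secret key>",
        aws_secret_access_key="<secret key from the customer secret key>",
    )

    # Existing S3 tooling works unchanged: list buckets, upload an object, and so on.
    print(s3.list_buckets()["Buckets"])
    s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello from the S3 API")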
After generating the necessary key, customers can use the Amazon S3 Compatibility API to access Object Storage in Oracle Cloud Infrastructure, as the sketch above illustrates.

Key Management Service

In addition, Oracle provides an enterprise-grade Key Management service with the following characteristics:

Backed by FIPS 140-2 Level 3 HSMs
Tightly integrated with Oracle Block Volumes and Object Storage
Full control of key creation and lifecycle (with automatic rotation options)
Full audit of key usage (with signed attestation by the HSM vendor)
Choice of key shape via Advanced Encryption Standard (AES) keys with three key lengths: AES-128, AES-192, and AES-256

Load Balancer

For data in transit, applications should use TLS-based certificates and encryption. Oracle IaaS load balancer services support customer-provided TLS certificates. In addition, the Load Balancing service supports TLS 1.2 by default and prioritizes the following forward-secrecy ciphers in the TLS cipher suite:

ECDHE-RSA-AES256-GCM-SHA384
ECDHE-RSA-AES256-SHA384
ECDHE-RSA-AES128-GCM-SHA256
ECDHE-RSA-AES128-SHA256
DHE-RSA-AES256-GCM-SHA384
DHE-RSA-AES256-SHA256
DHE-RSA-AES128-GCM-SHA256
DHE-RSA-AES128-SHA256

Many customers prefer EC cipher suites for their high performance. However, customers can request weaker cipher suites via a support ticket if their legacy clients need them.

Database Encryption

Database encryption is achieved by using Transparent Data Encryption (TDE).

In the second part, I will cover the next two pillars. In the meantime, please use these resources to learn more about Oracle Cloud Infrastructure security:

• Oracle Cloud Infrastructure Security White Paper
• Oracle Cloud Infrastructure GDPR White Paper
• Oracle Cloud Infrastructure Security Best Practices Guide
• Services Security Documentation

Blogs:

Guidance for PCI Compliance
Guidance for cSOC
Guidance for Security Checklist for Application Migration
Guidance for third-party firewall installation on Oracle Cloud Infrastructure - Check Point, vSRX
Guidance for IAM configuration for MSPs
Guidance for IAM Best Practices
Guidance for Migration and DR using Rackware


So the Internet is Your Corporate Network. Now What?

Almost every major enterprise is considering, or is in the process of, moving its sensitive workloads to the cloud. Within the next 18 months, the amount of enterprise workloads running in the cloud will surpass those living on premises for the first time ever, according to IDG, increasing from 47% to a whopping 69%. This means the internet will be part of the enterprise network.

To be completely confident about this “new network,” enterprises need complete visibility into the internet -- or at least the portion through which their messages pass. The internet is a core component of the cloud's physical infrastructure, with its own nuances around reliability, performance, volatility, and security. You can’t really fix how the overall internet behaves, but you should be aware of these nuances and how they affect your network.

When enterprises built their networks, they always assumed that everything inside the firewall was secure, controlled, monitored, measured, and metered, and that the entire network was visible all the time. But when sales, marketing, and support organizations tried to move their applications to the cloud, more often than not, IT had neither the inclination nor the bandwidth to support them. This gave birth to the shadow IT concept. Those organizations had money but no patience or time to waste waiting for the enterprise IT department to come around. They tried to replicate corporate security, compliance, and privacy standards as best they could. Today, with the need for faster innovation, combined with the cost, scalability, and availability benefits of the cloud, enterprises are building cloud-native applications that are either cloud-first or cloud-only. More and more workloads will live entirely in the cloud, without ever touching the corporate network. In these scenarios, organizations need complete visibility from edge to core. But how?

Enterprise IT, security, and compliance teams have a choice to make: either completely trust the standards and services supported by their cloud provider of choice at the infrastructure level, or build everything from the ground up in the cloud to mimic the standards they are used to. Most cloud providers offer basic infrastructure performance details but not broader insights into internet performance, largely because they are not measuring, collecting, or analyzing the relevant metrics. Not only should this insight be made available, it should be available in real time via a dashboard or APIs. In addition, these insights should enable performance optimization of cloud workloads through integration with other tools. Key insights and services that your cloud provider should offer include:

Internet telemetry: Understanding historical internet performance between multiple worldwide locations and your cloud infrastructure, and being able to test that performance in real time, can help ensure that your users are experiencing the best possible application performance. In addition, this performance data can be used for dynamic steering of inbound traffic, enabling you to direct or balance incoming traffic across multiple cloud locations. The Oracle Internet Intelligence dashboard shows historical latency to multiple global markets within the Oracle Cloud Infrastructure customer console.

Security: It is critical to incorporate the same security measures that your enterprise IT built over the years when shifting workloads to the cloud.
Your cloud provider should support tightly integrated web application firewall, DDoS mitigation, bot management, and API protection services. Insights from these services should be easily accessible through the customer console, and the underlying data should be available for security information and event management ingestion and security operations center analysis.

Routing: Monitoring and alerting on routing announcements associated with your organization’s address space enables immediate responses to accidental leaks or malicious hijacks, limiting the effects and lowering the risk that user traffic is sent to fake sites or applications designed to steal information. If you are using a cloud provider, confirm that they are taking these steps for the address space in which their critical service platforms (compute, storage, DNS, etc.) reside, and that they have a plan for immediately addressing issues should they arise. Additionally, your cloud provider should have visibility into network paths to and from their regions to ensure traffic isn’t taking circuitous routes, increasing latency, or transiting through hostile nation-states.

Performance: Identifying performance issues in real time with measurements from a global network of vantage points.

Moving to the cloud is a big deal for most organizations. When choosing a cloud provider, ask some tough questions about whether they will give you visibility into both their infrastructure and the performance of the broader internet. Oracle Cloud Infrastructure offers better performance at a lower price point than competing infrastructure as a service providers. And because it can isolate customers from one another using bare metal compute, it avoids the risk and exposure that come with shared instances. In other words, Oracle Cloud Infrastructure is built for the enterprise from the ground up, with security from edge to core. As you move your enterprise network to the cloud, make sure your provider is enterprise grade.


Events

Cloud Transformation: Recapping Oracle OpenWorld 2018

We had a number of exciting announcements at Oracle OpenWorld 2018, but I'll summarize the conference through the eyes of our customers and partners, and what they shared throughout the week.

Customers Are Ready to Transform Their Mission-Critical Applications

We've long said that our focus is building a true enterprise cloud, one that can handle tough, mission-critical database workloads. All week, in customer briefings, in roundtable discussions, and in the expo hall, customers told us about these workloads. They want to move and improve E-Business Suite, PeopleSoft, JD Edwards, EPM, Cognos, and many more. It was exciting to see so many customers eager to begin their cloud transformation. It was equally great to hear about the successful transformations of companies like Covanta Energy, HID Global, 7-Eleven, and Allianz.

Allianz is one of the largest insurance companies in the world. They specifically chose a mission-critical business-intelligence workload as their first project on Oracle Cloud Infrastructure, not only because it would help them meet delivery timelines, but also, maybe more importantly, to accelerate their people's cloud transformation. Lessons and best practices learned from moving SAS and MicroStrategy to the cloud have convinced Allianz to form a cloud operations practice to operate the new environment and drive additional projects throughout this 140,000-employee, multinational enterprise.

Allianz describes their use of @OracleIaaS #oow18 pic.twitter.com/UWLPnQ0Yz4 — Rex Wang (@wrecks47) October 23, 2018

External customers aren't the only ones moving mission-critical applications; Oracle is drinking our own champagne. Oracle NetSuite, which serves tens of thousands of businesses as a large SaaS provider, is integrated with Oracle Cloud Infrastructure and will be provisioning new customers on the new infrastructure starting next year. Brian Chess, the EVP of Infrastructure, Security, and Compliance for NetSuite, has a great video about the benefits of running on Oracle Cloud Infrastructure. Our roadmap for region expansion, particularly into Asia, is important for growing the NetSuite business.

@NetSuite + Oracle Cloud Infrastructure means the utmost reliability! 👍🏻👍🏻 #suiteconnect @OracleIaaS pic.twitter.com/QwIuJoyMDQ — Danielle Tarp (@danielletarp) October 25, 2018

High Performance Computing Applications Are Also Transforming

Many categories of applications have never run on the cloud, often because most cloud infrastructure vendors have been unable to meet performance and other requirements. Product engineering is one of these categories. Altair is one of the key software vendors in the product engineering space, which has largely transformed to completely digital designs and simulations. Altair software has helped companies design everything from planes, trains, and automobiles to medical devices and buildings, from improving aerodynamics to reducing product weight. This type of software has been stuck on-premises because the cloud hasn't been able to meet the high and predictable performance required.

Excellent presentation from @Altair_US CTO Sam Mahalingam on their use of @OracleIaaS and new Altair product announcements based on it! #oow18 https://t.co/kDmd7brecX — Phil Francisco (@frisco0303) October 23, 2018

So there was a large opportunity to broaden the market for product engineering software by moving it into the cloud.
Altair chose to run their new Hyperworks CFD Unlimited cloud service on Oracle Cloud Infrastructure because of the unique performance capabilities of our bare metal instances and nonblocking network, and our significantly superior price performance. Our announcements around new lower-cost AMD EPYC based compute instances and RDMA-powered cluster networking will further benefit HPC customers and partners like Altair.

Transforming Newer Real-Time and Big Data Apps

More and more companies require massive amounts of real-time data processing for business analytics and use cases like security. Use cases for IoT and streaming data, as well as more traditional Hadoop, were actually fairly common in my conversations with customers. Like HPC, these applications have traditionally run on-premises, in custom environments built by enterprises and software vendors. Cisco, which is investing heavily in the software and security markets, chose to build a SaaS version of their Cisco Tetration product on Oracle Cloud Infrastructure. This application ingests and processes millions of events per second at their current scale, and is growing with each end customer. Cisco went from inception to production on Oracle Cloud Infrastructure in only two months, achieving significantly better performance than on-premises or other cloud providers, and lowering costs.

Cisco Tetration moves to OCI, lower costs and 60x perf improvement vs other cloud provider. Praises OCI agility. #OOW18 pic.twitter.com/j7FjF0XMDN — blaine noel (@blainenoel) October 23, 2018

If Cisco can build a big data security product on Oracle Cloud, it's certainly an interesting option for other software vendors and enterprises. I engaged in a number of interesting discussions with customers after they heard the Cisco story. This deployment was also a strong vote of confidence for Oracle's core security architecture and continued efforts around core (Key Management, Cloud Access) and edge security.

It's About People Transformation, Too

Mark Hurd predicted that 60 percent of the IT jobs of 2025 haven't been created yet. That provoked a reaction among attendees and analysts, but there's no denying the continued, accelerating change in skills required for IT success.

.@MarkVHurd: 60% of IT jobs (in 2025) have not been created yet. I love the spirit of this but people are quite slow to change. Note, 5.7 million people work in enterprise IT in the USA Today. That’s a lot of retraining in just 7 years. #OOW18 — Matt Eastwood (@matteastwood) October 23, 2018

At OpenWorld, we were excited to work with customers and partners to teach them more about cloud operations with technologies like Terraform and Kubernetes; to give them the basics on Autonomous Data Warehouse, machine learning, and Big Data; and to help certify some of them on our platform. We heard repeatedly how "Peopleware" is critical to cloud transformation. While customers were attending sessions to learn more about how autonomous databases would make their day-to-day administration much simpler, the discussions about skills gaps were ever-present. Increasing the skills of internal IT is important, but expert partners can accelerate time to market. Throughout the week, Oracle partners like Astute Associates, Velocity Technology, Accenture, DXC, and Viscosity shared insights and best practices, in presentations and interactive sessions, on how to succeed in the cloud.
It's never easy to make big changes in technology infrastructure, but it was encouraging to see a dramatic rise in the level of expertise and experience this year from the partner ecosystem.

What Did You Experience at This Year's OpenWorld?

The level of real change and success felt different this year. Some interesting innovations were revealed. What was your experience at OpenWorld 2018? We'd be excited to continue the conversation at our official handle (@OracleIaaS) or my personal one (@lleung).

Leo Leung
Senior Director, Product Management, Oracle Cloud Infrastructure


Product News

Tracking Costs with Oracle Cloud Infrastructure Tagging

We understand the importance of being able to attribute the right Oracle Cloud Infrastructure usage costs to the right department or cost center in your organization. Oracle Cloud Infrastructure enables you to track costs at the service or compartment level by using the My Services dashboard, but our users also need the flexibility to track costs for projects that have resources across multiple compartments or that share a compartment with other projects. With that in mind, I'm pleased to announce that we are introducing cost tracking tags, which allow you to tag resources by user, project, department, or any other metadata that you choose for billing purposes. A cost tracking tag is, in essence, a type of defined tag that is sent to our billing system and shows up on your online statement in the My Services dashboard.

This feature builds on our easy-to-control, schema-based defined tagging approach. While other clouds support free-form tags, Oracle Cloud Infrastructure offers better control by providing defined tags. Defined tags support a schema to help you control tagging, ensure consistency, and prevent tag spam; all critical attributes when it comes to ensuring proper usage and billing management. Read my prior blog post as a primer for setting up defined tags.

Creating Cost Tracking Tags

Let's explore how you can create a cost tracking tag and how it flows through the system so that you can attribute costs. We'll start by looking at a tag namespace that I defined in the Oracle Cloud Infrastructure Console. Note the new field, Number of Cost-tracking Tags. This value shows the number of cost tracking tag definitions in the tag namespace. The number is important to know because you can have a maximum of 10 cost tracking tag definitions at any given time.

Now, let's see how I set up my cost tracking tags. I need to track my costs along four separate dimensions, so I set up four cost tracking tags:

CostCenter is the internal department to which these costs are attributed.
Project groups customers together inside a single product offering.
Customer is the customer to which the usage is billed.
Customer_Job is the actual job that is running on the Compute instance.

Note that three of these tags already show Cost-tracking set to Yes, which indicates that they are sent to Oracle's billing system. Customer_Job has Cost-tracking set to No, which is an error, so I need to convert this defined tag to a cost tracking tag. To do that, I open the Customer_Job tag key definition, click the pencil icon next to Cost-tracking: No, and select the Cost-tracking check box. Now that these tag key definitions are set up as cost tracking, the tags are included in the usage data sent to My Services. When a tag is marked as cost tracking, it can take from two to four hours before it's processed by My Services and included in the online statement.

Viewing Cost Tracking Tags in My Services

You can now view these tags in the My Services dashboard. After logging in to My Services, click Account Management and select a filter based on a cost tracking tag. You can filter your costs based on the cost tracking tags that you define and determine how much cost a particular cost center (for instance, a Finance department) has incurred. For example, you can view the costs associated with a database running with the tag Finance:CostCenter=w1234.
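Before looking at the API view of this data, it is worth noting that the tag setup itself can be scripted rather than clicked through the Console. This is a hedged sketch assuming the OCI Python SDK's IdentityClient and its is_cost_tracking flag (per the SDK reference at the time of writing); the namespace OCID is a placeholder:

    import oci

    config = oci.config.from_file()
    identity = oci.identity.IdentityClient(config)
    TAG_NAMESPACE_ID = "ocid1.tagnamespace.oc1..example"  # placeholder

    # Create a new defined tag that is cost tracking from the start.
    identity.create_tag(
        TAG_NAMESPACE_ID,
        oci.identity.models.CreateTagDetails(
            name="CostCenter",
            description="Internal department to which costs are attributed",
            is_cost_tracking=True,
        ),
    )

    # Convert an existing defined tag (like Customer_Job) to cost tracking.
    identity.update_tag(
        TAG_NAMESPACE_ID,
        "Customer_Job",
        oci.identity.models.UpdateTagDetails(is_cost_tracking=True),
    )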
Not only can you see this information in the My Services dashboard, but you can also download the results into a CSV file, which is ideal for analyzing in Excel or other tools. If you want to automate the process of gathering cost tracking data by tag by using the API, you can do that as well. You can use the API documentation to get started, but the following is an explicit example. This is a sample URL for the Metering API service:

    https://itra.oraclecloud.com/metering/api/v1/usagecost/cacct-{your caact}/tagged?startTime=2018-09-01T00:00:00.000&endTime=2018-10-04T00:00:00.000&computeTypeEnabled=Y&tags=CostTracking%3ACostCenter%3Dw1234&timeZone=America/Los_Angeles&usageType=DAILY&rollupLevel=RESOURCE

The URL includes /tagged to indicate that you are filtering for a particular tag. The tag field must be URL encoded, which means that you must convert the colon (:) to %3A and the equal sign (=) to %3D. In this example, I used CostTracking:CostCenter=w1234, which URL encoded is CostTracking%3ACostCenter%3Dw1234. The following response shows the costs associated with a database I was running with the tag Finance:CostCenter=w1234.

    {
        "accountId": "cacct-your caact",
        "canonicalLink": "/metering/api/v1/usagecost/cacct-caact /tagged?timeZone=America%2FLos_Angeles&startTime=2018-09-01T00%3A00%3A00.000&endTime=2018-10-04T00%3A00%3A00.000&computeTypeEnabled=Y&tags=Finance%3ACostCenter%3Dw1234&usageType=DAILY&rollupLevel=RESOURCE",
        "items": [
            {
                "costs": [
                    {
                        "computedAmount": 19.44,
                        "computedQuantity": 48.0,
                        "overagesFlag": "N"
                    }
                ],
                "currency": "USD",
                "endTimeUtc": "2018-09-19T17:00:00.000",
                "gsiProductId": "B88331",
                "resourceDisplayName": "Database Standard Added CPUs",
                "resourceName": "PIC_DATABASE_STANDARD_ADDITIONAL_CAPACITY",
                "startTimeUtc": "2018-09-18T17:00:00.000",
                "tag": "Finance:CostCenter=w1234"
            },
            ...

Please add your comments and questions in the comments section below if you want to know more about how cost tracking tags can benefit your organization.
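One last practical note on the URL-encoding step described above: Python's standard library performs the conversion for you, as this small snippet shows.

    from urllib.parse import quote

    tag_filter = "CostTracking:CostCenter=w1234"
    # quote() with safe="" encodes ':' as %3A and '=' as %3D.
    print(quote(tag_filter, safe=""))  # CostTracking%3ACostCenter%3Dw1234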


Events

Oracle Cloud Infrastructure Is Ready for Any and All Workloads

At Oracle OpenWorld this week, we have one clear message: Oracle Cloud Infrastructure is ready for any and all workloads. For over 40 years, Oracle has provided the information technology that powers the world’s best enterprises. Over the years, this technology has come in different packages: databases, middleware, applications, and hardware. Today, it is also delivered via the cloud, which gives customers flexibility. Oracle Cloud Infrastructure, which launched at Oracle OpenWorld 2016, is the underlying platform for Oracle’s applications and autonomous database. It enables companies of any size to run even their most mission-critical, high-volume, high-performance applications and databases. It was built by a team of cloud industry veterans to uniquely meet enterprise-grade computing requirements—and also to live up to the cloud's promises of competitive costs, rapid provisioning, and nearly limitless scale. Helping our existing customer base modernize and extend their businesses to the cloud remains a major priority. We are also focused on extending enterprise-grade capabilities to developers, startups, and small- and medium-size businesses. Oracle Cloud Infrastructure runs production workloads for large companies such as Verizon and for startups such as Snap Tech, which makes an innovative visual search tool. To achieve these ambitious goals, we continue to make strategic investments. Let me share some of our latest announcements in greater detail.

Databases and Applications

As I mentioned in a recent interview with Seeking Alpha, Oracle Cloud’s product strategy has two key areas of focus:

Cloud applications: We are quickly becoming the world’s leading cloud applications provider, with unparalleled innovation and an expanding SaaS portfolio.
Oracle Cloud Infrastructure: Our consolidated platform in the hyperscale infrastructure as a service (IaaS) market is the underlying platform for Oracle’s applications portfolio and autonomous database.

Oracle Autonomous Database and our leading suite of applications run better on Oracle Cloud Infrastructure than on any other cloud. The networking architecture of Oracle Cloud Infrastructure is designed to support optimal performance of Oracle Database and the applications that depend on it. We accomplish this by using direct point-to-point connections between compute and database instances running within Oracle Cloud Infrastructure. Those point-to-point connections translate to low latency and superior application performance. Read our exciting new announcement about Autonomous Database.

Security Enhancements

Security is a pillar of everything we do, from deploying data centers and architecting networks to monitoring and scaling services. Oracle Cloud Infrastructure helps secure the most mission-critical, hardened applications and databases on the planet. Our security capabilities are designed to protect applications and services whether they live within Oracle Cloud Infrastructure, in other clouds, or on premises. This hybrid and multicloud approach differentiates Oracle from other cloud platforms. We’re able to do all of this because we think about security from the core of the infrastructure to the edge of the cloud. We’re extending our commitment to security with several announcements, which you can read about in our press release. We aren’t talking about security as a standalone market, but as a fully integrated pillar of our cloud.

Making Collaboration Easier

The internet infrastructure industry is a collaborative one.
Customers simply want to solve problems. This is why we're continuing to build a robust ecosystem to support Oracle Cloud Infrastructure. Oracle announced a new integrated experience for partners and customers that makes it easier for them to publish and deploy business applications from Oracle Cloud Marketplace on Oracle Cloud Infrastructure.

Cloud Performance

We have long touted that Oracle Cloud Infrastructure outperforms the competition. Our price and performance advantages continue to be well documented. The market is taking note, and the media and third-party analysts are strongly validating our leadership position. StorageReview gave Oracle an Editor's Choice award for the performance and innovation that they saw when testing Oracle Cloud Infrastructure bare metal and virtual machine instances. Gartner published its updated IaaS scorecards and included Oracle Cloud Infrastructure as one of the four hyperscale cloud providers that it reviewed.

Global Footprint

This news and industry recognition is important only because it helps our customers. We are proud of the growing customer base using Oracle Cloud Infrastructure to expand their business. Two examples are IdentityMind, which offers a widely used RegTech SaaS platform that builds, maintains, and analyzes digital identities worldwide, and FICO, which helps lenders make accurate, reliable, and fast credit-risk decisions across the customer life cycle. The expansion of our business is also important from a network-design standpoint. We need to be where our customers' customers are. Read more about our cloud region roadmap and our high-capacity edge network.

The Oracle Cloud Infrastructure Edge Network

The cloud edge is the point where people and devices connect to the network, making it both a crucial point for users' interactions with applications in the cloud and a potential launch point for attacks. Our cloud edge is mature, proven, and fully scaled. The Oracle Cloud Infrastructure edge network is built to deliver the following advantages in a multicloud environment:

Ensure high-speed web traffic with minimal latency

Defend against targeted application-layer attacks

Protect against volumetric attacks on network infrastructure

Community Involvement

Products can help, but we believe we must also tackle the security problem at the macro level. We want to be part of the global conversations happening around the internet. We are happy to further our commitment to the internet infrastructure community. That's why today we're announcing partnerships with both the Internet Society, a global non-profit organization dedicated to the open development, evolution, and use of the internet, and the Internet Infrastructure Coalition (i2Coalition), which ensures that those who build the infrastructure of the internet have a voice in global public policy. We're also recommitting Oracle to the Cloud Security Alliance through a revised engagement.

"We are excited to be partnering with Oracle Cloud Infrastructure," said Andrew Sullivan, CEO of the Internet Society. "Their Internet Intelligence team does deep analysis of the internet and its many nuances and is developing routing security tools that can aid our efforts of making the internet more secure."

"As a leader in the cloud industry, Oracle Cloud Infrastructure recognizes the importance of Internet innovation, and we look forward to working with them on important public policy and Internet governance issues," said Hillary Osborne, membership director, i2Coalition.
Read more about our relationship with the Internet Society and with the i2Coalition. The cloud holds the promise of accelerating innovation and simplifying operations. But that can’t come at the expense of performance, security, or manageability. That’s why the mission of Oracle Cloud is to enable our customers to run any and every enterprise application and workload securely in the cloud. And with today's news, they can—far more confidently than ever before.


Customer Stories

Altair Engineering Brings the Power of Supercomputing to CFD Engineers with the Help of Oracle Cloud Infrastructure

At Oracle Cloud, we want to bring the power of supercomputing to every engineer and scientist. To deliver on this vision, we strive to achieve the best performance in the cloud for our high-performance computing (HPC) customers, investing in technologies like bare metal compute, high-performance networking, and NVMe SSD-based high-performance storage. These core Oracle Cloud technologies and cutting-edge offerings, like our bare metal GPU instances with 8x NVIDIA Tesla Volta V100s, enable us to deliver predictable performance for applications like engineering simulation, AI/ML, seismic processing, and reservoir modeling. Our customers can realize the potential of these technologies only when a rich ISV ecosystem of applications is running on our platform.

After collaborating for over a year, we are excited to announce our work with Altair Engineering. Together, Altair and Oracle will better serve customers globally. Altair provides enterprise-class engineering software that enables innovation, reduces development times, and lowers costs through the entire product lifecycle, from concept design to in-service operation. Altair's simulation-driven approach to innovation is powered by their integrated suite of software, which optimizes design performance across multiple disciplines encompassing structures, motion, fluids, thermal management, electromagnetics, system modeling, and embedded systems, while also providing data analytics and true-to-life visualization and rendering.

Today Altair is announcing the availability of HyperWorks CFD Unlimited on Oracle Cloud Infrastructure, which offers computational fluid dynamics (CFD) solvers as a service in the Oracle Cloud. Advanced CFD solvers such as Altair ultraFluidX™ and Altair nanoFluidX™ are optimized on the Oracle Cloud to provide overnight simulation results for the most complex cases on a single server. ultraFluidX provides fast prediction of the aerodynamic properties of passenger and heavy-duty vehicles, buildings, and other environmental use cases. nanoFluidX predicts the flow in complex geometries with complex motion, such as oiling in powertrain systems with rotating gears and shafts, using the Smoothed-Particle Hydrodynamics (SPH) simulation method. Both solvers are now available on Oracle Cloud Infrastructure and can leverage GPU instances, bringing the power of HPC to advanced CFD simulation.

"The combination of Oracle's HPC capabilities, such as our cutting-edge bare metal GPU infrastructure, including the recently announced GPUs, our new leading low-latency RDMA network, and high-performance storage options, combined with Altair's market-leading CFD solvers makes this collaboration extremely compelling for large enterprises looking to optimize their product development," said Vinay Kumar, Vice President, Product Management, Oracle Cloud Infrastructure. "We're working together with Altair to truly define what it means to run HPC workloads in the cloud, and today's availability of HyperWorks CFD Unlimited proves this."

Both solvers leverage our GPU offerings powered by 8x Tesla V100 GPUs and 2x Tesla P100 GPUs. With the launch of a service like HyperWorks CFD Unlimited on Oracle Cloud Infrastructure from Altair, you can truly bring the power of supercomputing to CFD engineers' fingertips.

"We are excited to expand our relationship with Oracle," said Sam Mahalingam, Chief Technical Officer for Enterprise Solutions at Altair.
"We find that access to GPU compute resources can be challenging for our customers. The integration with Oracle's cloud platform addresses this challenge, and provides customers the ability to use GPU-based solvers in the cloud for accelerated performance without the need to purchase expensive hardware. Ultimately this leads to improved productivity, optimized resource utilization, and faster time to market."

We can't wait to see what customers do with this service. To find out more about HyperWorks CFD Unlimited and to test the service, visit www.altair.com/oracle. You can also find out more about Oracle Cloud Infrastructure's GPU offerings at https://cloud.oracle.com/iaas/gpu or HPC offerings at https://cloud.oracle.com/iaas/hpc.


Product News

Announcing the Launch of AMD EPYC Instances

As the world of computing continues to evolve, you require a diverse set of hardware and software tools to tackle your workloads in the cloud. With this in mind, I am excited to share that today, at Oracle OpenWorld 2018, we announced a collaboration with AMD to provide a new "E" series of compute instances on Oracle Cloud Infrastructure. The "E" series compute instances showcase the higher core count, memory bandwidth, I/O bandwidth, advanced security features, and value of AMD EPYC processors.

Today, we are announcing the general availability of the Compute Standard E2 platform, the first addition to the E series. The Compute Standard E2 platform is available as a bare metal shape and as 1-, 2-, 4-, and 8-core VM shapes. With the launch of Compute Standard E2 instances, Oracle Cloud Infrastructure becomes the first public cloud to have a generally available AMD EPYC processor-based compute instance. With 64 cores per server, Oracle has the largest core count instance available in production in the public cloud. With 33 percent more memory channels than comparable x86 instances, this new instance provides more than 269 GB/s of memory bandwidth, the highest recorded by any instance in the public cloud. Additionally, AMD EPYC processors are not affected by the Meltdown and Foreshadow security vulnerabilities.

You get all of this for $0.03 per core hour, which is 66 percent less than general-purpose instances offered by other clouds, 53 percent lower than Oracle's other compute instances, and the lowest price offered by any non-burstable compute instance in the public cloud. Initial capacity is available in the Ashburn (IAD) region, expanding to other regions soon, for bare metal compute instances and 1-, 2-, 4-, and 8-core VM compute instances. The 16- and 24-core shapes will be offered in the first half of 2019.

Launching these instances through the Oracle Cloud Infrastructure Console or by using tools such as Terraform is the same as launching other x86 instances on Oracle Cloud Infrastructure (see the CLI sketch at the end of this post). At launch, all of the images except the bare metal Windows image are available.

Key Use Cases

AMD EPYC-based instances are ideal for general-purpose workloads where you want to maximize price performance. On low-level CPU benchmarks, namely SPECint and SPECfp, the AMD instance performs on par with the comparable x86 instance, at a lower cost. Oracle applications, including E-Business Suite, JD Edwards, and PeopleSoft, are supported on any Oracle Cloud Infrastructure x86 compute instances of appropriate size, including AMD EPYC-based instances.

AMD EPYC-based instances are ideally suited for Big Data analytics workloads that rely on higher core counts and are hungry for memory bandwidth. AMD has a partnership with, and is certified to run software from, leading ISVs that are part of the Hadoop ecosystem, including Cloudera, Hortonworks, MapR, and Transwarp. On a 10-TB full TeraSort benchmark, including TeraGen, TeraSort, and TeraValidate, the AMD system demonstrated a 40 percent reduction in cost per OCPU compared to the other x86 alternatives, with only a very slight increase in run times.

AMD EPYC-based instances are also ideally suited to certain high-performance computing (HPC) workloads that rely on memory bandwidth, like computational fluid dynamics (CFD).
On a 4-node, 14M cell Fluent CFD simulation of an aircraft wing, the AMD EPYC-based instance demonstrated a 30 percent reduction in cost, along with a slight reduction in overall run times, as compared to an x86 alternative.

Performance Numbers

We compared the AMD EPYC-based instances to our current x86 standard alternatives. The following table shows detailed configurations.

           AMD EPYC System                             x86 Alternative System
CPU        2 x AMD EPYC 7551, 32 cores per socket      2 x x86 processor, 26 cores per socket
           @ 2.0 GHz                                   @ 2.0 GHz
Memory     512 GB DDR4                                 768 GB DDR4
Network    2 x 25 Gbps                                 2 x 25 Gbps

We ran performance tests to exercise the CPU performance, memory subsystem performance, floating point compute power, and performance of server-side Java with emphasis on the middle tier. All of the tests were run with vendor-recommended proprietary compilers. The tests were run a number of times, and the results were averaged.

Test                            Benchmark Target
SPECrate 2017 Integer           Integer performance
SPECrate 2017 Floating Point    Floating point performance
STREAM                          Memory subsystem performance
SPECjbb2015                     Middle-tier performance

The following figures show how the AMD system compared against the x86 alternative. Figure 1 shows a normalized bare-metal-to-bare-metal comparison at the system level. Figure 2 shows a normalized performance-per-core comparison. Figure 3 shows a normalized performance-per-dollar-per-core comparison. The AMD system fared well in basic CPU and memory benchmarks, which can be attributed to the increased number of cores and the higher number of memory channels in the AMD system.

Figure 1: Bare Metal Comparison of AMD EPYC and x86 Standard System
Figure 2: Performance/Core Comparison of AMD EPYC and x86 Standard System
Figure 3: Performance/Dollar/Core Hour Comparison of AMD EPYC and x86 Standard System

At Oracle OpenWorld on October 24 in Moscone South, Room 154, from 12:30–1:15 p.m., we'll be presenting a session with AMD about these new compute instances. We'll also be showcasing AMD EPYC-based compute instances at SC18 in Dallas on November 12–15.

Thanks to the Compute team, our friends at OHD, and the rest of the Oracle Cloud Infrastructure team that worked day and night to launch the AMD EPYC offering. If you have any questions, feel free to reach out.

Rajan Panchapakesan
Principal PM, Compute and HPC
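The launch workflow referenced above is identical across shapes. As a minimal sketch (not an official walkthrough), launching a 1-core E2 VM with the OCI CLI might look like the following; the OCIDs, availability domain, and image are placeholders you would replace with your tenancy's values:

# Sketch: launch a 1-core AMD EPYC VM with the OCI CLI.
# All OCIDs and the availability domain below are placeholders.
oci compute instance launch \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --availability-domain "Uocm:IAD-AD-1" \
  --shape "VM.Standard.E2.1" \
  --image-id ocid1.image.oc1..exampleuniqueid \
  --subnet-id ocid1.subnet.oc1..exampleuniqueid \
  --display-name "epyc-e2-demo"

Aside from the --shape value, this is the same command you would use for any other x86 VM shape.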


Product News

Announcing Oracle Cloud Infrastructure Key Management

Customers of Oracle Cloud Infrastructure moved their workloads to the cloud knowing that their data would be protected by encryption keys that are securely stored and controlled by Oracle. However, some customers, especially those operating in regulated industries, asked Oracle to help them verify their security governance, regulatory compliance, and consistent encryption of their data wherever it is stored.

Effective immediately, Oracle Cloud Infrastructure Key Management is available to customers in all Oracle Cloud Infrastructure regions. Key Management is a managed service that enables you to encrypt your data using keys that you control. Key Management durably stores your keys in key vaults that use FIPS 140-2 Level 3 certified hardware security modules (HSMs) to protect the security of your keys. You can use the Key Management service through the Console, API, or CLI to create, use, rotate, enable, and disable Advanced Encryption Standard (AES) symmetric keys. As a managed service, Key Management lets you focus on your data encryption needs without requiring you to worry about procuring, provisioning, configuring, updating, and maintaining HSMs and key management software or appliances.

Integration with Oracle Cloud Infrastructure Block Volumes, Oracle Cloud Infrastructure Compute boot volumes, and Oracle Cloud Infrastructure Object Storage means that encrypting your data with keys that you control is as straightforward as selecting a key from the Key Management service when you create or update a block volume or bucket.

Example: Creating a block volume using keys from Key Management
Example: Editing or unassigning a previously assigned key from a block volume

Integration with Oracle Cloud Infrastructure Identity and Access Management (IAM) and Oracle Cloud Infrastructure Audit lets you control the permissions on individual keys and key vaults, and monitor their life cycles.

Example: Enabling block and boot volume encryption using Key Management

Learn more about how to get started with Oracle Cloud Infrastructure Key Management in our documentation and our FAQs.

This post was written by guest blogger Ulf Schoo, a consulting member of the technical staff on the Oracle Cloud Infrastructure team.
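For readers who prefer the CLI, here is a minimal sketch of the create-and-use flow described above, assuming an existing key vault; the management endpoint and OCIDs are placeholders, and exact parameter spellings may vary slightly across CLI versions:

# Sketch: create a 256-bit AES key in an existing key vault.
# The endpoint and OCIDs are placeholders for your own values.
oci kms management key create \
  --endpoint https://example-management.kms.us-ashburn-1.oraclecloud.com \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --display-name "my-volume-key" \
  --key-shape '{"algorithm": "AES", "length": 32}'

# Sketch: create a block volume encrypted with that key.
oci bv volume create \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --availability-domain "Uocm:IAD-AD-1" \
  --size-in-gbs 100 \
  --kms-key-id ocid1.key.oc1..exampleuniqueid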


Product News

Oracle CASB Enables Security Monitoring for Oracle Cloud Infrastructure

At Oracle Cloud Infrastructure, customer security is of paramount importance. We understand that enterprises of all industries and sizes require comprehensive visibility, security, and compliance monitoring over their cloud resources. Oracle Cloud Infrastructure provides maximum visibility into the actions taken on customers' cloud resources through various logs, including those from the Oracle Cloud Infrastructure Audit service, which tracks all actions taken on Oracle Cloud Infrastructure tenancy resources. Oracle Cloud Access Security Broker (CASB) Cloud Service takes security a step further by providing automated capabilities for customers to monitor the security of their cloud infrastructure resources. Additionally, Oracle CASB supports monitoring of Oracle Cloud Applications (SaaS), Oracle Cloud Platform (PaaS), and other public clouds, including AWS, Azure, Office 365, and Salesforce. The solution helps customers with heterogeneous multiple-cloud deployments achieve better security postures for their cloud resources.

Security Monitoring Use Cases

Oracle CASB monitors the security of Oracle Cloud Infrastructure deployments through a combination of predefined Oracle Cloud Infrastructure-specific security controls and policies, customer-configurable security controls and policies, and advanced security analytics that use machine learning to detect anomalies. Oracle CASB performs the following types of security monitoring:

Security misconfiguration of Oracle Cloud Infrastructure resources: Oracle CASB monitors configurations of Oracle Cloud Infrastructure compute, virtual cloud networks (VCNs), and storage, based on Oracle Cloud Infrastructure security best practices. For example, Oracle CASB can alert administrators about Oracle Cloud Infrastructure Object Storage buckets that are made public.

Monitoring of credentials, roles, and privileges: Oracle Cloud Infrastructure Identity and Access Management (IAM) security policies assign various privileges (inspect, read, use, and manage) to IAM groups. Oracle CASB monitors IAM users and groups for excessive privileges and for changes to administrator groups. For example, Oracle CASB monitors the use and age of IAM credentials that are used to authenticate users, such as console passwords and API keys. Any deviations from the acceptable standards can result in alerts.

User behavior analysis (UBA) for anomalous user actions: User logins and access patterns are analyzed to establish expected behavior, and deviations from expected baselines are detected with advanced analytics based on machine-learning (ML) algorithms. UBA generates risk scores for events, and customers have options to configure security alerts based on risk-score thresholds.

Risk events from threat analytics: Oracle CASB is integrated with third-party threat intelligence feeds, and it uses them to analyze access events to customer Oracle Cloud Infrastructure tenancies in order to detect potential security threats, such as access to Oracle Cloud Infrastructure resources from suspicious IP addresses or anomalous patterns of IP address use.

Register Your Tenancy with Oracle CASB

This section provides an overview of how to register your Oracle Cloud Infrastructure tenancy with Oracle CASB and how to view security alerts.
To enable CASB monitoring, you create an Oracle Cloud Infrastructure application instance with Oracle CASB and provision it by using the API key credentials of a least-privilege IAM user that is authorized to get configuration information and audit logs from your Oracle Cloud Infrastructure tenancy (a sketch of such a policy appears at the end of this post). The registration page (Figure 1) is where you provide the tenancy OCID, the IAM user OCID, the public key fingerprint of the IAM user API key, and the private key of the IAM user API key to register an Oracle Cloud Infrastructure application instance.

Figure 1. Oracle Cloud Infrastructure Application Instance Registration

Oracle CASB has preconfigured security controls and prebuilt policy controls for Oracle Cloud Infrastructure security monitoring. Examples include checking for public buckets and open (0.0.0.0/0) VCN security lists, monitoring privileges granted through IAM policies, and more. Figure 2 shows predefined Oracle Cloud Infrastructure security controls that you can enable.

Figure 2. Oracle Cloud Infrastructure Security Controls

At this point, Oracle CASB is ready to get Oracle Cloud Infrastructure audit logs and configuration information from your tenancy to conduct security monitoring based on security and policy controls. Figure 3 shows the dashboard with Oracle Cloud Infrastructure security alerts generated by Oracle CASB.

Figure 3. Oracle Cloud Infrastructure Security Alerts

To recap, Oracle CASB provides comprehensive security monitoring for customer Oracle Cloud Infrastructure tenancies and generates security alerts with actionable remediation steps to triage the issues. What's more, Oracle CASB enables you to get going quickly because it doesn't require installation of any software agent and uses customer-provided privileges to get the security configuration information and logs required for analytics. For more information about how to configure Oracle CASB for use with Oracle Cloud Infrastructure, see the Using Oracle CASB Cloud Service documentation.

Oracle CASB is already used by Oracle Cloud Infrastructure customers, including large enterprises, whose feedback is integrated into the product, enabling us to continue to improve security and user experience. As new Oracle Cloud Infrastructure services and features are released, Oracle CASB will transparently offer corresponding security checks to Oracle Cloud Infrastructure customers. Oracle CASB provides comprehensive Oracle Cloud Infrastructure security monitoring for customers, with a relatively low total cost of ownership (TCO). And our Universal Credits Model (UCM) covers Oracle CASB, so you can pay by consumption for CASB security monitoring.

For more information about Oracle CASB and Oracle Cloud Infrastructure-specific security checks, see the following documentation:

Oracle CASB Cloud Service documentation
Viewing Key Security Indicators and Reports for OCI

This post was written by guest blogger Nachiketh Potlapally, a consulting member of the technical staff on the Oracle Cloud Infrastructure team.
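As referenced in the registration steps above, here is a hedged sketch of creating a least-privilege policy for the monitoring user with the OCI CLI; the group name and the single read-only statement are illustrative assumptions, so consult the CASB documentation for the exact privileges it requires:

# Sketch: a read-only policy for a hypothetical CASB monitoring group.
# The group name and statement are illustrative, not CASB's documented
# requirements.
oci iam policy create \
  --compartment-id ocid1.tenancy.oc1..exampleuniqueid \
  --name "casb-monitoring-policy" \
  --description "Read-only access for CASB security monitoring" \
  --statements '["Allow group CASBMonitors to read all-resources in tenancy"]'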


Product News

Introducing the Generation 2 Cloud at Oracle OpenWorld 2018

Oracle built its Generation 2 Cloud from the ground up to provide businesses with better performance, pricing, and—above all else—security. That was the message from founder and CTO Larry Ellison during his opening keynote at Oracle OpenWorld 2018, where he announced new security features and explained the overall benefits of Oracle Cloud Infrastructure.

"Other clouds have been around for a long time, and they were not designed for the enterprise," Ellison said.

Security First

The Oracle Cloud is a secure, unified architecture for all applications, from the Oracle Autonomous Database and SaaS applications to enterprise and cloud native applications. Generation 1 clouds place user code and data on the same computers as the cloud control code, with shared CPU, memory, and storage. That means cloud providers can see customer data, and it enables customer code to access cloud control code, which can lead to breaches and cyberattacks, Ellison said. Oracle's Generation 2 Cloud, on the other hand, puts customer code, data, and resources on a bare metal computer, while cloud control code lives on a separate computer with a different architecture. With this approach, Oracle cannot see customer data, and there is no user access to the cloud control code.

"We will never put our cloud control code in the same computer that has customer code," Ellison said.

"We at @forrester have talked about this for years. Security and privacy done right engenders customer trust and creates competitive differentiation. @oracle announced today that it's Gen 2 cloud was built from the ground up for security. Security is its design point. #oow18 https://t.co/5uhlRAhh5l" — Stephanie Balaouras (@sbalaouras) October 22, 2018

Oracle's Generation 2 Cloud also uses the latest artificial intelligence and machine learning technologies to level the security playing field, because malicious hackers are using these same technologies. "It's their robots versus your people," Ellison said. "Who do you think is faster? Who do you think's going to win?"

Ellison also announced four new Oracle Cloud Infrastructure security features: a web application firewall, DDoS protection, cloud access security broker support, and a key management service.

Price and Performance

Security was the primary reason that Oracle Cloud Infrastructure was built from the ground up, Ellison said. Other major drivers were the opportunity to improve the cloud migration process and to provide greater performance and better pricing to customers who make the move.

"By the by, if anyone is wondering, Oracle Cloud IaaS is legit, Oracle is a hyperscale provider and I called it 3 years ago. Might share some scratch math to explain tomorrow. #OOW18" — Carl Brooks (@eekygeeky) October 22, 2018

If you run an enterprise application in a Generation 1 cloud, it usually costs more to run than it did on-premises, but that's not the case on Oracle Cloud Infrastructure, Ellison said. He also provided benchmarks that showed significant price and performance benefits over Amazon Web Services (AWS).

"Ellison shows Amazon network charges 100x to move data out of their cloud compared to moving data out of Oracle cloud. The @awscloud comparisons continue, including compute, block storage, network costs where Oracle Cloud is significantly less. #OOW18" — Hyoun Park (박현경) 🏳️‍🌈 (@hyounpark) October 22, 2018

To stay on top of all the Oracle Cloud Infrastructure news at OpenWorld 2018, follow @OracleIaaS on Twitter and follow the #oow18 hashtag.


Partners

Migration to Oracle Cloud Infrastructure with Deloitte ATADATA™

Oracle and Diamond Partner Deloitte Consulting LLP Jointly Validate Enterprise Workload Migration to the Cloud with ATADATA

Oracle Cloud Infrastructure provides a true enterprise cloud, with the consistent high performance and predictable low pricing that enterprises require to consider moving their most critical workloads to the cloud. We are seeing many enterprise customers leverage the advantages of Oracle Cloud to fuel innovation in their businesses, build environments more securely, avoid the cost and risk of hardware refresh, and realize significant cost savings. This has led to significant interest in automated workload migration to accelerate the move to Oracle Cloud. Customers frequently look to their System Integrator (SI) partners to facilitate the migration of key workloads and reduce risk through proven experience, streamlined operations, and automated capabilities.

Oracle applications play a central role in complex enterprise workflows. Varied infrastructure types and inherent application complexity make it challenging to move applications and associated points of integration to the cloud without unintended disruption to functionality. Effective cloud migration requires investigation, data collection, compatibility assessment, financial measurement, and lockstep coordination between the target cloud platform and migration partners to ensure a smooth and efficient transition. With Deloitte ATADATA, the effort and risk of moving critical workloads from traditional environments to Oracle Cloud Infrastructure is dramatically reduced through all phases of migration, from discovery and planning all the way to the physical migration to Oracle Cloud Infrastructure and validation after completion. We're proud of this partnership as a key enabler to realizing cloud benefits without being overwhelmed by a migration process that is cost-prohibitive and risky.

In July 2018, Oracle started a joint validation initiative with Deloitte ATADATA for automated cloud migration. For the evaluation, use cases were designed to validate the ATAVision discovery and ATAMotion migration modules across the four phases that are typical when migrating an application to Oracle Cloud Infrastructure:

Discovery of a source environment across all infrastructure elements

Provisioning of compute instances, storage, and network connections in the target Oracle Cloud Infrastructure environment

Migration of VMs and data from the source to Oracle Cloud Infrastructure

Validation of migrated data and application configurations

ATAVision Discovery Overview

Although most organizations possess basic inventory and utilization data, the level of detail and accuracy is often not sufficient to address the needs of a successful migration project. The ATAVision module collects all the data required to develop a comprehensive migration plan, including, but not limited to, infrastructure details, affinity relationships, compatibility issues, and software dependencies. The discovery software is agentless: no installation or reboots are required on the source candidate servers, and the discovery process doesn't impact system performance. By combining these elements, ATAVision's automated move group engine creates a detailed migration plan based on a full view of the environment. ATADATA software can be installed anywhere, provided it has access to the source environment. ATAVision can collect data on physical servers, on-premises VM servers, VMware clusters, or across any hypervisor or competitive cloud platform.
ATAMotion Migration Overview

A significant benefit of Deloitte ATADATA products is their automated integration capability. The ATAVision module combines dependent servers into migration units called move groups. Move groups are imported into an integrated migration module, ATAMotion, and can be migrated independently or combined into a larger orchestrated migration wave plan. Consequently, servers with a high affinity relationship are moved together without omitting critical pieces of a complex application architecture.

The ATAMotion migration technology orchestrates provisioning through integration with Oracle Cloud Infrastructure APIs. When a migration job is created, all volumes can be migrated as a set, or specific volumes can be selected for migration. Although ATAMotion can use all available bandwidth, throttling is supported to minimize disruption to production workloads. Once migration is initiated, Oracle Cloud Infrastructure APIs are leveraged to provision cloud resources. After the target is up and running, the agentless ATAMotion software is deployed at the target. The target then communicates back to the source server to enable data transfer over a secure connection (using either secure cipher key encoding or AES encryption). The direct connectivity between source and target is the key to migrating data and workloads at scale.

Final Thoughts

The Oracle team finds ATADATA tools to be easy to use and effective. By deeply integrating with the Oracle Cloud Infrastructure APIs, ATADATA has differentiated its offering from the competition and enabled customers to accomplish comprehensive migrations quickly and successfully. Specifically, the team has noted innovative approaches to automatic provisioning of cloud compute instances based on configuration schemas and seamless migration of VMware servers to Oracle Cloud Infrastructure. Based on our evaluation, ATADATA's capabilities and integration with Oracle Cloud APIs are "best in class" for migrating enterprise applications smoothly and effectively.

For additional questions, please contact Donald Schmidt Jr., Managing Director, Deloitte Consulting, at doschmidt@deloitte.com.

Co-authored by: Donald Schmidt Jr., Managing Director, Deloitte Consulting LLP; Manoj Mehta, Director of Product Management, OCI Development; and Andrew Reichman, Sr. Director, OCI Development


Developer Tools

Cloud-Native Technologies and Solutions Make a Strong Showing at Oracle OpenWorld 2018

If you're one of the more than 60,000 attendees at Oracle OpenWorld next week, you'll have a dizzying array of choices from thousands of sessions, hands-on labs, birds-of-a-feather sessions, case-study presentations, meetings with experts and peers, and parties! No matter what you choose, you're certain to hear a lot from us about the role of cloud computing and how we think it will transform the way in which you create applications, manage your IT infrastructure, and conduct your business. We'll cover why and how you move workloads from your data centers to the cloud, which we refer to as "move and improve." But we'll also extensively cover new technologies and solutions that are "cloud native"—cloud-based tools and services that you can use to develop, deploy, and manage your cloud-based applications. Here are some of the sessions about cloud-native technologies and solutions that I plan to attend. I hope this selection will be helpful to you as you create your calendar for the coming week.

Managing the Transformation

Moving to cloud native is about more than deploying in the cloud and using new tools. These sessions will inform you about new cloud technologies, how they impact your business, and the best practices for adopting them.

Your Cloud Transformation Roadmap on Oracle Cloud Infrastructure (PKN6351): Clay Magouyrk (Senior Vice President, Software Development, Oracle) and Rahul Patil (Vice President, Software Development, Oracle) will review the developments in Oracle Cloud Infrastructure and what they are working on. A great session for getting the big picture.

Making Cloud Native Universal and Sustainable (KEY6962): Dee Kumar, vice president at the Cloud Native Computing Foundation (CNCF), a division of the Linux Foundation that plays an important role in the ecosystem by incubating many of the leading open source cloud-native technologies, delivers a keynote session.

Cloud Native Architectures on Oracle Cloud Infrastructure with Linkd (MYC6865): Linkd (formerly Wireflare) chose to run its performance-sensitive MEAN stack application on Oracle Cloud Infrastructure. The company's CTO and founder discusses the decision process and experience. Learn about the performance requirements and the benefits achieved in comparison to alternative cloud providers, and gain insight into cloud selection and results in a cloud-native application environment.

Cloud Native Developer Panel: Innovative Startup Use Cases (DEV5600): Startup development teams are pushing the limits with novel use cases and advanced architectures—from Kubernetes to AI/ML workloads and serverless microservice deployments. Representatives of startups in this panel discussion will walk through how they are using open source technologies on top of a high-performance cloud, lessons learned, and what's on the horizon.

DevOps

It is in the cloud that DevOps reaches its full potential. With resources that can be described and versioned as code, provisioned and scaled on demand, and with automation for every step of the application life cycle, Oracle Cloud Infrastructure dramatically increases the productivity of development teams and the quality of their applications.

DevOps on Oracle Cloud Infrastructure (FLP6872): Oracle Cloud Infrastructure provides DevOps practitioners with the services required to automate the deployment of large, complex distributed systems while giving engineers the flexibility to choose the languages and tools of their preference.
In this session, explore DevOps solutions, including expanded support for popular tools and languages. Learn how to solve problems with this suite of offerings, how it's differentiated from other options on the market, and how the product team made these choices.

Introducing DevOps Solutions on Oracle Cloud Infrastructure (THT6958): Join this session to learn about the services Oracle Cloud Infrastructure provides DevOps practitioners, and get a sneak peek at upcoming solutions such as monitoring services, expanded support for popular tools and languages, integrated development environments, continuous integration/continuous delivery, and collaboration tools such as ChatOps. Learn how to deploy Oracle Cloud Infrastructure resources using Terraform, including a fully managed service and a group of open source Terraform modules.

Using Ansible, Terraform, and Jenkins on Oracle Cloud Infrastructure (DEV5582): DevOps teams need the right tools and technologies to safely and reliably build and support large, complex cloud systems. This session explores an example architecture and then walks through building it out using Terraform to define infrastructure as code, Ansible for configuration management, HashiCorp's Vault for secret management, and Jenkins for continuous integration/continuous delivery.

Containers

A key characteristic of cloud-native applications is that they benefit from the distributed nature, resilience, and elastic scalability of cloud infrastructure. In most cases, that means these applications are deployed as loosely coupled, containerized components that can be scaled up and down with ease.

Container Registry 2.0: Enabling Enterprise Container Deployments (DEV5604): Container registries are evolving as container workloads move to production. This session explores some of the new must-have requirements for security, policy, automation, and additional artifact storage. It also examines how registries work closely with coupled Kubernetes deployments and presents best practices for building container-native deployment strategies.

A Guide to Enterprise Kubernetes: Journeys to Production (DEV5623): This session presents a guide for enterprises looking to move to production with Kubernetes. You'll hear from customers who've made the journey and their stories of operationalizing Kubernetes. The presentation covers best practices and lessons learned across areas such as network and storage integration, scaling, monitoring, logging, and deploying across multiple regions.

Kubernetes in an Oracle Hybrid Cloud (BUS5722): Are you moving to the cloud? Looking at containers? Keeping some workloads on-premises? Shifting workloads from on-premises to the cloud, and from the cloud to on-premises? Maybe splitting workloads between the two into a hybrid cloud? If you answered "yes" to any of these questions, you are not alone. In this session, learn how Kubernetes can be used to run cloud-native and existing workloads both in Oracle Cloud and on-premises on Oracle Cloud at Customer, and see how customers are handling both use cases.

Kube me this! Kubernetes Ideas and Best Practices (DEV5369): This session covers best practices you'll want to consider when making a shift from deploying applications to web servers to moving to a microservices model and Kubernetes. You'll learn about topics you should consider while moving to Kubernetes and the principles you should follow when building out your Kubernetes-based applications or infrastructure.
You'll leave the session with best practices to implement in your own organization when it comes to Kubernetes.

Serverless

Serverless approaches take cloud-native principles an abstraction step further than containers. Forget about provisioning infrastructure for your applications—Oracle Cloud Infrastructure will provide it when you need it and make it go away when you don't, so you pay only for what you actually use.

Bringing Serverless to Your Enterprise with the Fn Project (PRO4600): Serverless computing is one of the hottest trends in computing because of its simplicity and cost-efficiency. Oracle recently open-sourced a new project that enables developers to run their own serverless infrastructure anywhere. In this session, learn how to use the functions platform with a demo, how to deploy functions in multiple languages, the benefits of bringing serverless to your organization, how to identify low-hanging-fruit projects, and best practices.

Serverless Java: Challenges and Triumphs (DEV5525): This session examines the challenges of using Java for serverless functions and the latest Java platform features that address them. It also digs into the open source Fn project's unparalleled Java support, which makes it possible to build, test, and scale out Java-based functions applications.

Hands-on Labs

Learn more about Terraform, Kubernetes, Big Data, AI/ML, and HPC in these instructor-led classes: Oracle Cloud Infrastructure Hands-on Labs. Bring your own laptop!

Open Source Technologies

Oracle has made a strategic commitment to open source and standards for Oracle Cloud Infrastructure. It builds its services on unforked, supported, open source projects. It ensures that it's as easy to bring workloads to its cloud as it is to take them elsewhere. And Oracle's developer teams are very active participants in the Cloud Native Computing Foundation and many of the projects in that ecosystem. Check out some of these sessions that discuss exciting new open source projects that can be deployed on Oracle Cloud Infrastructure:

Using Terraform with Oracle Cloud Infrastructure (HOL6376)
GraphPipe: Blazingly Fast Machine Learning Inference (DEV5593)
Istio and Envoy: Enabling Sidecars for Microservices (BOF5714)
Istio, Service Mesh Patterns on Container Engine for Kubernetes (DEV6078)
Building a Stateful Interaction with Stateless FaaS with Redis (THT6878)
Serverless Kotlin in Action: A Black/Silver Combo? (DEV5695)


Oracle Cloud Infrastructure

Deploying Elasticsearch on Oracle Cloud Infrastructure Using a Terraform Template

We are proud to announce a reference architecture for Elasticsearch on Oracle Cloud Infrastructure. Starting today, you can deploy Elasticsearch, an open source, distributed, RESTful search and analytics engine, on Oracle's high-performance cloud by using Terraform templates. With this announcement, Oracle Cloud Infrastructure enhances its Big Data ISV ecosystem of partners.

To get started with Elasticsearch on Oracle Cloud Infrastructure, you can use the Terraform automated deployment template. The template performs the steps to deploy and configure Elasticsearch and Kibana: it provisions instances and storage, deploys and configures the software, sets up networking and a load balancer, and starts it all.

Deployment Architecture

The deployment consists of the following components:

Bastion host: A bastion host is used as a NAT instance for Elasticsearch master and data nodes to update and install software from the internet.

Load balancer: Oracle Cloud Infrastructure Load Balancing is used to load balance index operations onto the data nodes and Kibana access to master nodes. It uses two listeners, one for Kibana and one for index data access, backed by backend sets with master node backends and data node backends. The load balancer is launched into a public subnet with a public IP address, but you can modify this by changing lbaas.tf to make it a private load balancer.

Elasticsearch master nodes: Master nodes perform cluster management tasks like creating new indexes and rebalancing shards. They don't store data. Three master nodes (recommended for bigger clusters) are deployed across three availability domains to ensure high availability.

Elasticsearch data nodes: Four data nodes are deployed across two availability domains (two nodes in each availability domain) for high availability. Memory-optimized compute instances are recommended because Elasticsearch is dependent on the amount of memory available. A complete list of compute instance shapes to select from is available here. Each data node is configured with 200 GiB of block storage. In addition to VMs, Oracle Cloud Infrastructure offers powerful bare metal instances that are connected in clusters to a non-oversubscribed 25-gigabit network infrastructure. This configuration guarantees low latency and high throughput, which is a key requirement for high-performance distributed streaming workloads. Oracle Cloud Infrastructure is the only cloud with a network throughput performance SLA.

Kibana: Like the master nodes, Kibana has relatively light resource requirements because most computations are pushed to Elasticsearch. In this deployment, Kibana runs on the master nodes.

To customize your Terraform deployment, you can perform the following actions (see the sketch near the end of this post):

Choose the shapes for master node and data node instances.

Specify the storage capacity for data node instances.

Change CIDR block sizes for the virtual cloud network and subnets, and other configuration settings.

For details about the Terraform templates, see the Readme.md file.

What's Next?

If you don't have an Oracle Cloud Infrastructure account yet, you can sign up for a 30-day free trial account. Follow the instructions on the GitHub oci-elasticsearch page to install an Elasticsearch cluster on Oracle Cloud Infrastructure. Come and meet us at the Oracle OpenWorld booth #OCI-A01 to learn more about our Big Data ecosystem offerings. We hope you are as excited as we are about Elasticsearch on Oracle Cloud Infrastructure.
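To illustrate the customization options listed above, one common pattern is to override the template's variables from the environment before applying it. This is a sketch only: the variable names below are assumptions for illustration, and the template's Readme.md lists the real ones.

# Sketch: override template defaults via Terraform environment variables.
# The TF_VAR_ names are hypothetical; check the template's Readme.md.
export TF_VAR_master_node_shape="VM.Standard2.4"
export TF_VAR_data_node_shape="VM.Standard2.8"
export TF_VAR_data_node_storage_gb=200

terraform init
terraform plan
terraform apply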
Let us know what you think!

Pinkesh Valdria
Principal Solutions Architect, Big Data
https://www.linkedin.com/in/pinkesh-valdria/


Improved Availability of Your Instances with Customer-Managed VM Maintenance

We are excited to announce customer-managed virtual machine (VM) maintenance, a major step in Oracle Cloud Infrastructure's ongoing effort to improve the availability of your VM instances. You can now easily reboot your instances and avoid scheduled downtime for planned infrastructure maintenance.

What is customer-managed VM maintenance?

Today, when an underlying infrastructure component needs to undergo maintenance, we notify you in advance of the planned maintenance downtime. To avoid this planned downtime, you can opt to terminate and redeploy your instances prior to the planned maintenance. With the introduction of customer-managed VM maintenance, we give you another option. Instead of terminating and redeploying your instance manually, you can now reboot your instance from the Console, API, or CLI. This new experience makes it easy for you to control your instance downtime during the notification period.

The reboot or restart of a VM instance during the notification period is different from a normal reboot. The reboot or stop/start workflow stops your instance on the existing VM host that needs maintenance and starts it on a healthy VM host. Customer-managed VM maintenance makes it easier for you to avoid the planned maintenance downtime. If you choose not to reboot during the notification period, Oracle Cloud Infrastructure will reboot your instance for you before we proceed with the planned infrastructure maintenance.

How do I get started?

Getting started is easy. When there is a maintenance event, Oracle Cloud Infrastructure notifies you via email. You can identify the affected VMs in the Console by checking the Maintenance Reboot field (or by checking the timeRebootMaintenanceDue property using the API or CLI), which shows the date and time after which the infrastructure maintenance will occur. The instance reboot will occur within a 24-hour period following the specified time. Both the Instance list view and the Instance details view display the Maintenance Reboot field.

For Standard VM instances with a boot volume, additional iSCSI block volume attachments, and a single VNIC, you can proceed to reboot or stop and start the instance. If you have non-iSCSI (paravirtualized or emulated) block volume attachments or secondary VNICs, you must detach them before rebooting or restarting your instance. When you reboot or stop and start the instance, it is migrated to a different physical VM host while preserving all the instance configuration properties, including ephemeral and public IP addresses. When the Maintenance Reboot field is blank, the instance is no longer impacted by the maintenance event.

Finding affected instances

To make it easier to find and act on your instances, you can search for the instances that are set to reboot in your tenancy by using the Advanced Search and choosing the "Query for all instances which have an upcoming scheduled maintenance reboot" sample query. A CLI sketch for checking and rebooting an affected instance appears at the end of this post.

Customer-managed VM maintenance is currently supported on Standard VM instances running Linux OS. It supports instances launched from Oracle Cloud Infrastructure images and images imported from external sources. It is offered in all regions at no extra cost.

To learn more about customer-managed VM maintenance on Oracle Cloud Infrastructure, see Best Practices for Your Compute Instances. For more information about the Oracle Cloud Infrastructure Compute service, see the Oracle Cloud Infrastructure Getting Started guide, the Compute service overview, and the FAQ.
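As referenced above, here is a minimal sketch of checking for a pending maintenance reboot and proactively rebooting an instance with the OCI CLI; the instance OCID is a placeholder, and the exact field name in the CLI output may differ slightly from the API property:

# Sketch: check whether an instance has a pending maintenance reboot.
# The OCID is a placeholder; the field name may vary by CLI version.
oci compute instance get \
  --instance-id ocid1.instance.oc1..exampleuniqueid \
  --query 'data."time-maintenance-reboot-due"'

# Gracefully reboot the instance so that it restarts on a healthy host.
oci compute instance action \
  --instance-id ocid1.instance.oc1..exampleuniqueid \
  --action SOFTRESET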


Oracle Cloud Infrastructure

Data Tiering Enhancement for Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure

Hello, my name is Zachary Smith, and I'm a Solutions Architect working on Big Data for Oracle Cloud Infrastructure. In June 2018, we announced the availability of Terraform automation to easily deploy Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure. Today we are proud to introduce the next version of the automation templates, which enables you to use data tiering with Cloudera Enterprise Data Hub deployments on Oracle Cloud Infrastructure.

You can now leverage the multiple classes of storage available in Oracle Cloud Infrastructure—block volumes, local NVMe SSD, object, file, and archive—in a single Hadoop cluster. You can also define storage policies customized for your workloads, which can help lower costs without compromising on SLAs. The storage policies can now be defined through the command-line automation tool. You can continue to use Cloudera Manager to set up new policies or update defined policies. You can find out more in Cloudera's documentation.

Splitting up data within tiers reduces costs. Less frequently used data resides on block volumes or is copied to Object Storage, which are both less expensive and allow for higher storage density. This enables you to meet storage capacity requirements while minimizing compute costs to meet workload demands. We are already seeing large enterprise customers leverage this feature to drive cost and operational efficiencies by using the fast bare metal NVMe storage for hot data while using Block Volumes storage for cooler data.

In addition to using the updated automation scripts to configure Enterprise Data Hub to use various data tiers, you can run the HDFS mover tool periodically to move data between storage classes for greater efficiency in data storage and to ensure compliance with storage policies (see the sketch at the end of this post). In our initial experiments, we found an average transfer rate of 4 GB/s between local NVMe and block volumes in a six-worker-node cluster with 12 2-TB block volumes per worker. Additionally, the data movement between tiers scales with the number of nodes. Because the recommended guidance is to run the mover tool on a regular basis, we don't expect the data movement overhead to be significant during regular operations of the cluster.

You can find the Terraform automation template on GitHub, included with the availability domain spanning architecture that we announced last month.
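To make the mover workflow concrete, here is a sketch using the standard HDFS commands; the path and policy name are illustrative, and the Terraform automation handles the cluster-level configuration described above:

# Sketch: assign a colder storage policy to an illustrative HDFS path,
# then run the mover to relocate existing blocks to match the policy.
hdfs storagepolicies -setStoragePolicy -path /data/archive -policy COLD
hdfs storagepolicies -getStoragePolicy -path /data/archive

# Run periodically (for example, from cron) to keep block placement
# consistent with the assigned policies.
hdfs mover -p /data/archive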


Solutions

How to Remediate Application Slowness Due to Incomplete DNS Resolutions

In my role as a Solutions Architect, I encountered instances of application slowness across Oracle internal workloads that were migrated to Oracle Cloud Infrastructure. Andy Herm, Cloud Architect, and Jim Sirk, Cloud Network Architect, discovered that the application slowness was due to incomplete DNS resolutions. We wrote this blog post to help other Oracle Cloud Infrastructure users troubleshoot and resolve this issue.

Why is This Happening?

The issue is related to glibc (starting in 2.9) issuing both IPv4 (A) and IPv6 (AAAA) DNS queries from the client. The IPv6 query doesn't get a response back from our custom DNS and times out, causing a 5-second delay. For more details, check out Unix & Linux Stack Exchange. One option is to separate out the IPv6 and IPv4 queries, but this means that you would have to touch all the existing clients that have been migrated to Oracle Cloud Infrastructure. We took the following steps to troubleshoot the issue.

Troubleshooting Steps

Packets were captured from the client servers to identify and isolate the issue. Note the 5-second gap between the initial queries at 07:09:37 and the retries at 07:09:42 for the same DNS request IDs.

[root@ddpt0jnsb0 tmp]# tcpdump -nvvv -i ens3 host x.y.z.67 -w /var/tmp/tcpdump_byhost.pcap
[root@ddpt0jnsb0 tmp]# tcpdump -r tcpdump_byhost.pcap > tcpdump_byhost.txt

07:09:37.292378 IP ddpt0jnsb0.xxx.com.64168 > x.y.z.67.domain: 49707+ A? ddpt0jnsc0.xxx.com. (59)
07:09:37.292396 IP ddpt0jnsb0.xxx.com.64168 > x.y.z.67.domain: 10048+ AAAA? ddpt0jnsc0.xxx.com. (59)
07:09:37.292933 IP x.y.z.67.domain > ddpt0jnsb0.xxx.com.64168: 49707 1/6/0 A x.y.z.24 (232)
07:09:42.297054 IP ddpt0jnsb0.xxx.com.64168 > x.y.z.67.domain: 49707+ A? ddpt0jnsc0.xxx.com. (59)
07:09:42.297583 IP x.y.z.67.domain > ddpt0jnsb0.xxx.com.64168: 49707 1/6/0 A x.y.z.24 (232)
07:09:42.297638 IP ddpt0jnsb0.xxx.com.64168 > x.y.z.67.domain: 10048+ AAAA? ddpt0jnsc0.xxx.com. (59)
07:09:42.300937 IP x.y.z.67.domain > ddpt0jnsb0.xxx.com.64168: 10048 0/1/0 (124)

Additional tests show that the behavior impacts normal operations.

[rgbu_ui@ddpt0jnsb0 ~]$ time ssh -o StrictHostKeyChecking=yes ddpt0jnsc0.xxx.com
No ECDSA host key is known for ddpt0jnsc0.xxx.com and you have requested strict checking.
Host key verification failed.

real    0m5.032s
user    0m0.007s
sys     0m0.005s

[rgbu_ui@ddpt0jnsb0 ~]$ nslookup ddpt0jnsc0.xxx.com
Server:         x.y.z.67
Address:        x.y.z.67#53

Non-authoritative answer:
Name:   ddpt0jnsc0.xxx.com
Address: x.y.z.24

[rgbu_ui@ddpt0jnsb0 ~]$ time ssh -o StrictHostKeyChecking=yes x.y.z.24
No ECDSA host key is known for x.y.z.24 and you have requested strict checking.
Host key verification failed.

real    0m0.028s
user    0m0.006s
sys     0m0.005s

Stracing either process shows it pausing here, waiting for a response from the DNS server, timing out, and then retrying. This likely also explains the exact 5-second increase in time.
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("x.y.z.67")}, 16) = 0
poll([{fd=3, events=POLLOUT}], 1, 0)    = 1 ([{fd=3, revents=POLLOUT}])
sendmmsg(3, {{{msg_name(0)=NULL, msg_iov(1)=[{"\177\242\1\0\0\1\0\0\0\0\0\0\nddpt0jnsd0\3iad\7icst"..., 59}], msg_controllen=0, msg_flags=0}, 59}, {{msg_name(0)=NULL, msg_iov(1)=[{"\226\270\1\0\0\1\0\0\0\0\0\0\nddpt0jnsd0\3iad\7icst"..., 59}], msg_controllen=0, msg_flags=0}, 59}}, 2, MSG_NOSIGNAL) = 2
poll([{fd=3, events=POLLIN}], 1, 5000)  = 1 ([{fd=3, revents=POLLIN}])
ioctl(3, FIONREAD, [232])               = 0
recvfrom(3, "\177\242\201\200\0\1\0\1\0\6\0\0\nddpt0jnsd0\3iad\7icst"..., 2048, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("x.y.z.67")}, [16]) = 232
poll([{fd=3, events=POLLIN}], 1, 4998)  = 0 (Timeout)    <-- it pauses here
poll([{fd=3, events=POLLOUT}], 1, 0)    = 1 ([{fd=3, revents=POLLOUT}])
sendto(3, "\267\5\1\0\0\1\0\0\0\0\0\0\nddpt0jnsc0\3iad\7icst"..., 59, MSG_NOSIGNAL, NULL, 0) = 59
poll([{fd=3, events=POLLIN}], 1, 5000)  = 1 ([{fd=3, revents=POLLIN}])
ioctl(3, FIONREAD, [232])               = 0
recvfrom(3, "\267\5\201\200\0\1\0\1\0\6\0\0\nddpt0jnsc0\3iad\7icst"..., 2048, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("100.127.143.67")}, [16]) = 232
poll([{fd=3, events=POLLOUT}], 1, 4999) = 1 ([{fd=3, revents=POLLOUT}])
sendto(3, "<0\1\0\0\1\0\0\0\0\0\0\nddpt0jnsc0\3iad\7icst"..., 59, MSG_NOSIGNAL, NULL, 0) = 59
poll([{fd=3, events=POLLIN}], 1, 4998)  = 1 ([{fd=3, revents=POLLIN}])
ioctl(3, FIONREAD, [124])               = 0
brk(NULL)                               = 0x5611833af000
brk(0x5611833de000)                     = 0x5611833de000
recvfrom(3, "<0\201\200\0\1\0\0\0\1\0\0\nddpt0jnsc0\3iad\7icst"..., 65536, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("x.y.z.67")}, [16]) = 124
close(3)                                = 0

After the timeout, the resolver goes on to open a port 22 connection to the IP address.

Possible Solution

The 5-second delay can be addressed by adding 'options single-request-reopen' to /etc/resolv.conf on the client. (This is a glibc resolver option, so it belongs in the client's resolv.conf, not in the DNS server's named.conf.) After that change, the output of tcpdump looks like the following:

07:55:28.647859 IP ddpt0jnsb0.xxx.com.36657 > x.y.z.67.domain: 5112+ A? ddpt0jnsc0.xxx.com. (59)
07:55:28.648469 IP x.y.z.67.domain > ddpt0jnsb0.xxx.com.36657: 5112 1/6/0 A x.y.z.24 (232)
07:55:28.648547 IP ddpt0jnsb0.xxx.com.46795 > x.y.z.67.domain: 28682+ AAAA? ddpt0jnsc0.xxx.com. (59)
07:55:28.648945 IP x.y.z.67.domain > ddpt0jnsb0.xxx.com.46795: 28682 0/1/0 (124)

But that's not ideal, because it requires modifying every client. The better way to handle it is to change to stateless rules for DNS so that the clients don't have to be modified at all.

The Recommended Solution

When clients initiate DNS queries to their resolver, by default they send both an AAAA and an A request to the name server in a single transaction. Both queries are issued concurrently, and the state table entry gets removed when the first response comes back, dropping the second response. By allowing ingress/egress traffic to be stateless for DNS (TCP/UDP 53), the second response from the name server is no longer dropped. In this example, the VCN is the /18 aggregate subnet. It is also necessary to update the security list, allowing the clients to send DNS queries to the DNS servers. The security list for the DNS servers contains two stateless rule groups:

Ingress Rules (Stateless)
Egress Rules (Stateless)

We hope this blog post helps you address any application slowness due to incomplete DNS resolutions.
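For reference, the client-side workaround might look like the following. This is a sketch; the search domain and resolver address are placeholders matching the sanitized values used in this post:

# /etc/resolv.conf on an affected client
search xxx.com
nameserver x.y.z.67
options single-request-reopen

And here is a sketch of the recommended stateless security list rules, expressed with the OCI CLI. The JSON follows the shape of the IngressSecurityRule/EgressSecurityRule API objects, but the OCID, CIDR, and file names are hypothetical, and the update call replaces the existing rule lists wholesale, so verify against the current API reference before applying:

# dns-ingress.json: stateless ingress for UDP and TCP port 53 from the VCN, e.g.:
# [{"isStateless": true, "protocol": "17", "source": "10.0.0.0/18",
#   "udpOptions": {"destinationPortRange": {"min": 53, "max": 53}}},
#  {"isStateless": true, "protocol": "6", "source": "10.0.0.0/18",
#   "tcpOptions": {"destinationPortRange": {"min": 53, "max": 53}}}]
# dns-egress.json mirrors the same rules with "destination" instead of "source".
oci network security-list update --security-list-id <dns_seclist_ocid> \
  --ingress-security-rules file://dns-ingress.json \
  --egress-security-rules file://dns-egress.json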
I'd like to recognize Andy Herm, Cloud Architect, and Jim Sirk, Cloud Network Architect, who were instrumental in troubleshooting the issue. Ryan Otis from the Oracle Cloud Infrastructure team also helped with this investigation. I have replaced the FQDN with xxx.com and the real IP addresses of the hosts with x.y.z.67 and x.y.z.24 so that our internal IPs are not exposed. See more guidance on resolving common issues with DNS on Oracle Cloud Infrastructure here.


Oracle Cloud Infrastructure

Rubrik and Oracle Cloud Accelerate Your Cloud Journey, at Your Own Pace

Cloud has become the standard deployment model for enterprises that want to modernize their data centers and accelerate digital transformation. However, cloud adoption presents unique challenges for organizations of all types. To help customers create their own paths to the cloud, Oracle offers a variety of enterprise-centric choices with a complete, integrated stack that spans on-premises data centers and cloud. Rubrik shares our customer-first obsession and knows that freedom of choice and an easy on-ramp to cloud are paramount. We're excited to collaborate to accelerate enterprise deployment of cost-effective, scalable cloud services together.

Simplified Data Protection and Cloud Mobility for Oracle Cloud

Our joint integration delivers core data protection capabilities—backup, recovery, and archive—with Rubrik's single software fabric on Oracle Cloud Infrastructure. With Rubrik and Oracle Cloud, you can:

Leverage the cost efficiencies of Oracle Cloud Infrastructure by replacing tape complexity with cloud data archive. Enjoy pay-as-you-go economics and increase data recovery reliability by eliminating cumbersome and unreliable tapes.

Use an incremental-forever approach to reduce the amount and cost of storage by uploading and storing only the data that has changed between snapshots.

Instantly locate applications in Oracle Cloud Infrastructure for faster recoveries. Rubrik indexes all metadata, so you can find files in seconds. Our approach minimizes egress bandwidth and transfer costs by retrieving only requested files instead of entire VMs.

Secure your data with encryption in transit and at rest. We understand that security is paramount for organizations that want to adopt cloud. That's why Rubrik ensures that data in transit and at rest is secure, and all data is encrypted before being sent to Oracle Cloud.

Automate long-term retention to Oracle Cloud Infrastructure with management simplicity. By using the same interface for both on-premises and cloud environments, you can simply click to assign backup, recovery, and archival schedules through a single policy engine, reducing daily management time from hours to minutes.

At the heart of the integration is a new way to accelerate enterprises' cloud journey and scale in a hybrid cloud world. Through our collaboration, we are excited to develop new capabilities that help our customers grow and deliver differentiated offerings built on Oracle Cloud. Stay tuned for upcoming updates as we continue to innovate together. For more information, stop by the Rubrik and Oracle Cloud Infrastructure booths at Oracle OpenWorld '18 or explore the following resources:

Oracle Cloud Infrastructure
Oracle Cloud Infrastructure Storage services
Rubrik Cloud Data Management white paper

Shayan Shafii, Product Marketing, Rubrik
Khye Wei, Product Manager, Oracle


Partners

Deep Learning with NVIDIA GPUs, Oracle Cloud Infrastructure, and MapR

Guest Author: Andy Lerner, Partner Solutions Architect, MapR

The MapR and Oracle Cloud Infrastructure (OCI) partnership allows customers to benefit from a highly integrated data platform for big data and machine learning applications. Oracle and MapR share a common vision for delivering data insights across the enterprise, and both are committed to developing and delivering a best-in-class platform. Get started: Terraform module to deploy MapR on Oracle Cloud Infrastructure.

In this blog post, I will talk about using GPUs for deep learning on Oracle Cloud Infrastructure. Using GPUs to train neural networks for deep learning is becoming commonplace. However, the cost of GPU servers and of the storage infrastructure required to feed GPUs as fast as they can consume data is significant. I wanted to see if I could use a highly reliable, low-cost, easy-to-use Oracle Cloud Infrastructure environment to reproduce the deep-learning benchmark results published by some of the big storage vendors. I also wanted to see if a MapR distributed filesystem in this cloud environment could deliver data to the GPUs as fast as those GPUs could consume data residing in memory on the GPU server.

Setup

For my deep learning job, I created the following setup: I trained the ResNet-50 and ResNet-152 networks with the TensorFlow CNN benchmark from tensorflow.org, using a batch size of 256 for ResNet-50 and 128 for ResNet-152. I used an Oracle Cloud Infrastructure Volta Bare Metal GPU BM.GPU.3.8 instance, using ImageNet data stored on a five-node MapR cluster running on five Oracle Cloud Infrastructure Dense I/O BM.DenseIO1.36 instances. The 143-GB ImageNet data was preprocessed into TensorFlow record files of around 140 MB each. To simplify my testing, I installed NVIDIA Docker 2 on the GPU server and ran tests from a Docker container. I used MapR's mapr-setup.sh script to build a MapR persistent application client container (PACC) from the NVIDIA GPU Cloud (NGC) TensorFlow container. As a result, my container had NVIDIA's optimized version of TensorFlow with all the necessary libraries and drivers, plus MapR's container-optimized POSIX client for file access.

Benchmark Execution

First, I ran one benchmark by using data in the local file system, which loaded the Linux buffer cache with all 143 GB of data. Next, I ran the benchmarks through one epoch against this data with one, two, four, and all eight GPUs on the server. In the following charts, that's the Buffer Cache number. Then, I cleared the buffer cache and reran the benchmarks by pulling the data from MapR. I cleared the MapR filesystem caches on each of the MapR servers between runs to ensure that I was pulling data from the physical storage media. I got some of the best performance numbers that I've seen for training these models, and the MapR performance was almost identical to in-memory reads on the local file server.

ResNet-50 Results

I used nvidia-smi, provided in the NGC container, to collect GPU utilization metrics on the eight GPUs to confirm that the GPUs were working at full speed to process the data. The following graphs show the GPU utilization for the 1 GPU and 8 GPU runs pulling data from MapR.
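As a concrete illustration of the benchmark execution described above, a run might be launched from the PACC roughly like this. The container image name and MapR mount path are hypothetical; the flags come from the tf_cnn_benchmarks script linked in the resources at the end of this post:

# Launch the ResNet-50 benchmark on all 8 GPUs, reading ImageNet TFRecords
# from the MapR POSIX client mount inside the container.
nvidia-docker run --rm -v /mapr:/mapr mapr-pacc-tensorflow:latest \
  python tf_cnn_benchmarks.py \
    --model=resnet50 --batch_size=256 --num_gpus=8 \
    --data_dir=/mapr/my.cluster.com/imagenet --data_name=imagenet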
ResNet-152 Results

The 1 GPU and 8 GPU utilization numbers from nvidia-smi for ResNet-152 were as follows:

For just a few dollars per hour, Oracle Cloud Infrastructure gives you the highest-performing NVIDIA GPU-enabled servers with highly available, reliable, and massively scalable MapR storage, letting you perform machine-learning tasks faster and more effectively than similar storage infrastructure solutions priced orders of magnitude higher. Try out your own machine learning use case on OCI with MapR and let us know what you think.

Additional Resources

Oracle Cloud Instance pricing: https://cloud.oracle.com/compute/pricing
MapR PACC image: https://mapr.com/docs/60/AdvancedInstallation/CreatingPACCImage.html
NVIDIA NGC TensorFlow image: https://ngc.nvidia.com/registry/nvidia-tensorflow
TensorFlow CNN benchmark: https://github.com/tensorflow/benchmarks/tree/master/scripts/tf_cnn_benchmarks
ImageNet: http://image-net.org/
Terraform module to deploy MapR on Oracle Cloud Infrastructure: https://github.com/cloud-partners/oci-mapr


Security

Windows Server FIPS Compliance

Do your systems require FIPS compliance? Oracle Cloud Infrastructure provides the flexibility to meet a key area of compliance for your Windows Server workloads: FIPS (Federal Information Processing Standard) compliance. The ability to create a FIPS-compliant server is a critical milestone in moving to the cloud. This post discusses how to achieve a FIPS-compliant Windows Server, describing the core steps for Windows Server 2016 and referring to the necessary resources for Windows Server 2008 R2 and 2012. In particular, this post focuses on how to bring this functionality to Oracle Cloud Infrastructure and some things to consider while attempting to meet the FIPS 140 standard. The reference section provides links to Microsoft sites that define and document FIPS compliance for building compliant Windows Servers.

The FIPS 140-2 Standard

The United States and Canadian governments have created specific requirements for ensuring security within their environments, and these requirements led to the FIPS standard. FIPS is a standard for government computer security. It was initially published in May 2001 and then updated in December 2002 by the National Institute of Standards and Technology (NIST) and the Communications Security Establishment of Canada. The standard identifies levels of security and cryptographic module validation, and it applies to any security system used within the US federal government. The current standard for all government systems is FIPS 140-2. For more information, see FIPS 140-2.

Deeper into FIPS

One of the key things to know is that to be FIPS compliant you must disable some encryption algorithms and enable others. The Secure Channel security package is forced to use TLS, so the following cipher suites must be disabled:

TLS_RSA_WITH_RC4_128_SHA
TLS_RSA_WITH_RC4_128_MD5
SSL_CK_RC4_128_WITH_MD5
SSL_CK_DES_192_EDE3_CBC_WITH_MD5
TLS_RSA_WITH_NULL_MD5
TLS_RSA_WITH_NULL_SHA

Remote Desktop Protocol is scoped to use the following algorithms:

CALG_RSA_KEYX - RSA public key exchange algorithm
CALG_3DES - Triple DES encryption algorithm
CALG_AES_128 - 128-bit AES
CALG_AES_256 - 256-bit AES
CALG_SHA1 - SHA hashing algorithm
CALG_SHA_256 - 256-bit SHA hashing algorithm
CALG_SHA_384 - 384-bit SHA hashing algorithm
CALG_SHA_512 - 512-bit SHA hashing algorithm

For Windows Server 2008 and later, ensure that your disk encryption is AES-256. For .NET, ensure that you are using the correct CNG validated cryptographic modules. For more information, see FIPS 140 Validation.

Making Your Windows Server FIPS Compliant

Now that you have an idea of what FIPS is, you need to know how to make your Windows Server environment FIPS compliant. Microsoft has done most of the work for the compliant DLLs and encryption integration. Follow the processes outlined in How to Use FIPS Compliant Algorithms to produce a FIPS-compliant Windows Server. To install and use the FIPS-compliant algorithms, use the instructions in CAPI Validated Cryptographic Modules. You must install the correct DLLs and make some changes to the WebHost\config and MonitoringView\web.config files.

Implementing FIPS Compliance on Windows Server 2016

Start with the base Windows Server 2016 image from the Oracle Cloud Infrastructure Console. After the server is built, connect to it and then start the FIPS update outlined by Microsoft. The hardest part of the process is getting the gacutil.exe program. To get this program, download the .NET 4.0 SDK.
When you have the gacutil.exe program, you can start making the necessary changes. Follow the instructions from Microsoft to start the process, as shown here:

1. Open CMD.exe as an administrator, and then run secpol.msc.
2. In the Local Security Policy window, click Local Policies, and then click Security Options.
3. Scroll to System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing, and double-click it.
4. Select Enabled, and then click Apply.

Your Windows Server 2016 is now FIPS compliant. The Microsoft website has more in-depth information about enabling such things as Operations Manager and Web Services, and about enabling Windows Server 2008 R2 and 2012. Now it's time to build your own FIPS-compliant Windows Servers on Oracle Cloud Infrastructure. To start your free trial, check out the try it page (https://cloud.oracle.com/tryit) for more details.

Resources

Microsoft FIPS validation: https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc750357(v=msdn.10) or http://technet.microsoft.com/en-us/library/cc750357.aspx
FIPS algorithm implementation and system cryptography: https://support.microsoft.com/en-us/help/811833/system-cryptography-use-fips-compliant-algorithms-for-encryption-hashi
FIPS 140 validation: https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc750357(v=msdn.10)#_capi_validated_cryptographic
FIPS 140-2: https://csrc.nist.gov/csrc/media/publications/fips/140/2/final/documents/fips1402.pdf
Wikipedia: https://en.wikipedia.org/wiki/FIPS_140-2
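If you prefer to script the policy change rather than clicking through secpol.msc, the same setting corresponds to the well-known FipsAlgorithmPolicy registry value, which you can set from an elevated command prompt. This is a sketch only; a reboot is required for the change to take effect, and you should verify it against Microsoft's documentation:

rem Enable "System cryptography: Use FIPS compliant algorithms" via the registry.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy" /v Enabled /t REG_DWORD /d 1 /f

rem Confirm the value.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy" /v Enabled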


Performance

File Storage Performance on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is built to deliver consistent high performance for demanding enterprise workloads. A recent white paper we published on File Storage performance demonstrates the levels of performance the service can deliver and offers recommendations for achieving optimal performance when using it.

Oracle Cloud Infrastructure File Storage is a fully managed cloud storage service in which read and write throughput and IOPS increase proportionally to the size of a file system. This means that as you add more data, you can expect a corresponding growth in performance. File Storage eliminates the over-provisioning for performance and low utilization of purchased capacity that are typical in on-premises environments, resulting in significant cost reduction as well as lower management overhead. Oracle Cloud Infrastructure File Storage allows customers to stop managing individual storage appliances and volumes manually, freeing them from the worry and effort around capacity management, hardware refreshes, software upgrades, and system and component failures. We believe that Oracle Cloud Infrastructure File Storage is the most cost-effective and easy-to-manage solution for hosting enterprise applications such as E-Business Suite (EBS), PeopleSoft, and Siebel, as well as for the deployment and management of clustered file systems that are commonly used for high-performance computing (HPC) workloads.

Oracle customer YellowDog enables animation studios and visual effects facilities to access tens of thousands of GPU cores to deliver intensive rendering workloads within seemingly impossible deadlines. Using Oracle Cloud Infrastructure File Storage along with a wide range of compute services, they find us to be consistently faster, cheaper, and easier to manage than other cloud providers or on-premises deployments. As CEO Gareth Williams puts it: "File Storage service is our favourite part of OCI for its simplicity and reliability."

What other use cases can File Storage help you with? Big data and analytics workloads on Oracle Cloud Infrastructure File Storage benefit from distributed shared file systems for storing persistent data and from access to an unlimited pool of file system capacity for managing the growth of both structured and unstructured data, as well as for running test and dev workloads such as Ravello, MySQL, or other databases.

What Should You Expect for File Storage Performance?

When performing reads and writes of large blocks (~1 MB), for each terabyte of data stored you can expect:

Overall read performance of at least 100 MB/sec
Overall write performance of at least 50 MB/sec
At least 2,500 read IOPS

For example, under these expectations a 10-TB file system should deliver at least 1,000 MB/sec of read throughput, 500 MB/sec of write throughput, and 25,000 read IOPS, given sufficiently parallel access. The highest levels of performance assume concurrent access, which can be achieved by using multiple clients, multiple threads, and multiple mount targets. Although not guaranteed, you can expect to achieve these levels of performance with File Storage.

Next, let's talk about price. Although Oracle Cloud delivers significant performance advantages across all services, the cost is far lower than most cloud alternatives, as validated in independent analysis, and File Storage is no exception. With a simple billing model, you pay only a low fixed rate for capacity stored, at $0.0425/GB/month.

What Other Factors Can Impact Performance?
There are other factors on the client side that impact your performance:

Available bandwidth: The bandwidth available to a file system significantly impacts performance. Because bandwidth scales with core count, Oracle Cloud Infrastructure bare metal compute instances provide the greatest bandwidth for demanding workloads. Virtual machine bandwidth varies based on the core count, so select an instance type that offers adequate performance capabilities.

Mount options: While it's common practice to provide explicit values for mount options such as rsize and wsize, File Storage performance is significantly reduced when these mount options are specified. For optimal results, we recommend that you do not pass any mount options when mounting file systems.

Latency: Latency is largely tied to the distance from where your application compute instances run to the cloud availability domain in which your File Storage systems reside. We recommend using the same availability domain for your file system as for your Oracle Cloud Infrastructure compute instances, or picking the availability domain that is closest to your own data center, to achieve the lowest possible latency.

Workload and access patterns: The nature of your workload has a significant impact on your performance. File Storage works best with highly parallelized workloads. The following access patterns cause latency to play a larger role and may negatively impact response time and perceived throughput: accessing files sequentially; using a flat directory structure with many (hundreds of thousands or more) files in a single directory; and performing frequent metadata operations on files and directories, such as changing permissions or access times.

Capacity: File Storage offers a fixed amount of bandwidth for every terabyte stored in your file system, which scales linearly with capacity. You can expect better performance as you store more data.

Try It for Yourself

File Storage provides high durability in any availability domain of your choice, where your data is replicated on NVMe SSD drives on five different storage hosts. With unbounded scalability and high durability, File Storage provides on-disk encryption, enables frequent space-efficient snapshots for your data protection, and reduces complexity and operational costs for your business. Interested in trying OCI File Storage? I can help. Just sign up for a free trial or drop me a line.

Mona Khabazan, Principal Product Manager, Oracle Cloud Infrastructure File Storage

Related Article: FSS Tutorials
Reference: File Storage Service Performance Guide
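To illustrate the mount-option guidance above, here is a minimal sketch of mounting a File Storage export with default options (no rsize/wsize). The mount target IP address and export path are placeholders:

# Install NFS utilities if needed (Oracle Linux / CentOS).
sudo yum install -y nfs-utils

# Mount the export with default options; do not pass rsize/wsize.
sudo mkdir -p /mnt/fss
sudo mount 10.0.0.10:/my-export /mnt/fss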


Oracle Cloud Infrastructure

StreamSets Data Collector on Oracle Cloud Infrastructure

We are proud to announce a validated reference architecture for StreamSets Data Collector™ on Oracle Cloud Infrastructure. Starting today, you can deploy StreamSets Data Collector, an open source, award-winning solution that efficiently builds, tests, runs, and maintains dataflow pipelines connecting a variety of batch and streaming data sources, on Oracle's high-performance cloud by using Terraform templates. With this announcement, Oracle Cloud Infrastructure enhances its Big Data ISV ecosystem of partners. The partnership between StreamSets and Oracle enables customers to use Data Collector like a pipe for a data stream to move, collect, and process data on the way to its destination. Data Collector connects the hops in the stream on a unified enterprise cloud platform with unmatched performance, security, and availability.

StreamSets Data Collector

Data Collector is a design and execution engine that streams data in real time. You use Data Collector to route and process data in your data streams by defining the flow of data (the pipeline). A pipeline consists of stages that represent the origin and destination of the pipeline, and any additional processing. The graphical UI lets you efficiently build batch and streaming data flows with minimal schema specification, connecting many sources to multiple big data solutions with built-in transformations for data normalization and cleansing.

Figure 1: StreamSets Data Collector Web UI

Learn more about StreamSets Data Collector.

Oracle Cloud Infrastructure Provides Big Data Flexibility and Performance

Blazing Fast Performance

Oracle offers the most powerful bare metal compute instances with local NVMe flash storage in the industry. Only Oracle offers this local storage, based on advanced NVMe SSD technology and backed by a storage performance SLA. Oracle also offers DenseIO virtual machines (VMs), new high-performance instances with large local storage, backed by NVMe SSD. DenseIO VMs are available in multiple shapes, including 4, 8, and 16 OCPUs, allowing you to customize compute resources for your I/O- and storage-bound applications. Oracle also offers standard VM instances with block storage. See our compute page for more details. Data Collector can take advantage of the bare metal compute instances, which are connected in clusters to a nonoversubscribed 25-gigabit network infrastructure, guaranteeing low latency and high throughput—a key requirement for high-performance, distributed, streaming workloads. Oracle Cloud Infrastructure is the only cloud provider that offers a guaranteed 25-Gbps connection between any two nodes (network throughput performance SLA).

Unmatched Data Ecosystem

Data Collector instances that are spun up in the cloud can sit right next to your favorite Hadoop/Spark clusters using Cloudera, Hortonworks, or MapR, and can also connect to many other data sources to route and process data on the way to its destination. Data Collector comes with a large number of data origin and destination connectors that are ready to use without any coding, so you can build data pipelines in hours (not weeks) and reduce development costs.

Right-Size Your Infrastructure in the Cloud

Cloud infrastructure enables you to deploy the optimal amount of infrastructure to meet your demands. No more underutilization of too much infrastructure or higher latency caused by underforecasting.
In addition, Oracle offers:

The lowest compute pricing from a pay-as-you-go (PAYG) perspective
The lowest network egress costs in the industry

Deploying StreamSets Data Collector

You can deploy Data Collector on Oracle Cloud Infrastructure by using Terraform automation, which is fast becoming the leading cross-cloud framework for infrastructure as code (IaC). The Terraform template deploys a standalone StreamSets Data Collector and performs all of the steps necessary to deploy and configure a Data Collector instance. Optionally, Data Collector instances can connect to StreamSets Control Hub, which manages all Data Collector instances. You can customize the Terraform deployment template by choosing the shape for the Data Collector instance, changing the CIDR block sizes for the virtual cloud network and subnets, and changing other configuration settings. For details about the Terraform templates, see the readme.md file.

Figure 2: StreamSets Data Collector Standalone on Oracle Cloud Infrastructure Architecture

In the future, we will add information and templates for deploying Data Collector standalone with a Cloudera Enterprise Data Hub cluster, and for deploying Data Collector via the Cloudera CDH Parcel Manager.

What's Next?

If you don't have an Oracle Cloud Infrastructure account yet, you can sign up for a 30-day free trial account. Follow the instructions on the GitHub Oracle Cloud Infrastructure StreamSets page to install Data Collector on Oracle Cloud Infrastructure; a sketch of the Terraform workflow appears below. Come and meet us at the Oracle OpenWorld booth #OCI-A01 to learn more about our Big Data ecosystem offerings. We also encourage you to read how StreamSets views the new partnership and why OCI and StreamSets are a great fit to move, collect, and process data in the cloud. We hope you are as excited as we are about the StreamSets Data Collector on Oracle Cloud Infrastructure solution. Let us know what you think!

Pinkesh Valdria
Principal Solutions Architect, Big Data
https://www.linkedin.com/in/pinkesh-valdria/
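The deployment itself follows the usual Terraform workflow. A minimal sketch, assuming you have cloned the template repository linked from the GitHub page above and filled in your tenancy credentials (the repository URL and variable file names here are placeholders; the readme.md is authoritative):

# Clone the template repository (URL is illustrative; use the link from
# the GitHub Oracle Cloud Infrastructure StreamSets page).
git clone https://github.com/oracle/oci-streamsets-example.git
cd oci-streamsets-example

# Populate tenancy OCID, user OCID, API key path, region, shape, and
# CIDR blocks in the template's variable file, then provision:
terraform init
terraform plan
terraform apply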


Developer Tools

Unveiling the New Oracle Cloud Infrastructure Console Homepage

We are proud to announce a major refresh of the Oracle Cloud Infrastructure Console homepage. Based on customer feedback, we’ve redesigned the homepage to be more visual, and to focus on the tools and resources that customers use most. Our goal is to make the homepage a one-stop shop to allow you to configure your cloud more quickly and to manage your infrastructure more effectively. I'd like to highlight three key new features on the redesigned Console homepage. Quickly Launch the Resources You Use Most At the top of the homepage, we give you one-click access to quickly launch the resources needed to configure your infrastructure. Start creating a virtual cloud network, launching compute instances, accessing developer tools, storing data, and managing a domain with one click. If you're looking for other resources to launch, it's easy to browse our menu or use our search capabilities. Get Up to Speed on What's New Learn about the latest features and services available on Oracle Cloud Infrastructure, right when you log in. We've made it easier for you to access documentation, so you can get building quickly. And, if you like learning from other users, use the link to the Oracle Cloud Infrastructure forum to connect with other customers. Service Health Right at Your Fingertips Another great new feature is the ability to view the Service Health status right on the homepage. Instantly see if a service is experiencing an issue or outage and navigate to the page for more details.    We hope you continue to provide us feedback as we work on making our Console experience even better. This is just the first step in our journey to make your cloud transformation seamless.


Partners

MapR Now Validated on Oracle Cloud Infrastructure

We are proud to announce a validated reference architecture for the MapR Data Platform on Oracle Cloud Infrastructure. You can now deploy the MapR Platform on Oracle's high-performance cloud with full MapR support. The MapR and Oracle partnership enables customers to benefit from a highly integrated data platform for big data and machine learning applications. Oracle and MapR share a common vision for delivering data insights across the enterprise, and both are committed to developing and delivering a best-in-class platform.

MapR Platform Provides Rich Big Data Capabilities

MapR offers a unified data platform that simultaneously runs analytics and applications with speed, scale, and reliability. It converges all data into a data fabric that can store, manage, process, and analyze the data as it happens. The MapR Platform supports Hadoop, Spark, and Apache Drill with real-time database capabilities, global event streaming, and scalable enterprise storage to power a new generation of Big Data applications. It enables writing against open APIs across MapR and Oracle Cloud Infrastructure through JSON (OJAI), HBase, S3, HDFS, NFS, REST, and Kafka.

Learn more about the MapR Converged Data Platform.

Oracle Cloud Infrastructure Provides Big Data Flexibility and Performance

Blazing Fast Performance

Oracle offers the most powerful bare metal compute instances with local flash storage in the industry. Only Oracle offers this local storage, based on advanced NVMe SSD technology and backed by a storage performance SLA. Unlike other cloud infrastructure providers that oversubscribe networking, Oracle delivers low latency and high throughput via a nonoversubscribed 25-gigabit network infrastructure, which is a key requirement for high-performance, distributed, streaming workloads. Oracle Cloud Infrastructure is the only cloud with a network throughput performance SLA.

Unmatched Data Ecosystem

MapR clusters that are spun up in the cloud can sit right next to Exadata or Oracle Database environments over private networks, allowing easy data sharing for analytics. Gartner regards Oracle as one of the top three vendors in the data management storage analytics space, making MapR on Oracle Cloud Infrastructure a great choice for running analytics workloads.

Right-Size Your Infrastructure in the Cloud

Cloud infrastructure enables you to deploy the optimal amount of infrastructure to meet your demands. No more underutilization of too much infrastructure or long queues caused by underforecasting. In addition, Oracle offers:

The lowest compute pricing from a pay-as-you-go (PAYG) perspective
Additional discounts available from a sales perspective for critical partners like MapR
The lowest network egress costs in the industry
Reduced complexity and risk of migration from on-premises with bare metal

Deploying the MapR Platform

You can easily deploy the MapR Platform on Oracle Cloud Infrastructure by using Terraform automation. The recommended network architecture for a MapR deployment on Oracle Cloud Infrastructure consists of a virtual cloud network (VCN) containing three separate subnets that are duplicated across all the availability domains in a target region. This configuration gives you the ability to deploy a MapR cluster in any availability domain in the region and have the same topology and security lists associated with each network. This network model is illustrated in the following diagram, with host associations at the subnet level, showing a single cluster running in a single availability domain.
The Terraform module for deploying MapR on Oracle Cloud Infrastructure is available on the Oracle Cloud Infrastructure Cloud Partners GitHub. Provisioning a fully ready cluster typically takes about 45 minutes, requiring minimal user interaction after setting a few configuration values in the Terraform template. Detailed steps for deploying MapR on Oracle Cloud Infrastructure are located in the readme file available in the GitHub repository. If you don’t have an Oracle Cloud Infrastructure account yet, you can sign up for a 30-day free trial account.
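A minimal sketch of that workflow, using the cloud-partners repository named above (fill in tenancy and cluster settings per the readme, which is authoritative):

# Fetch the validated MapR Terraform module.
git clone https://github.com/cloud-partners/oci-mapr.git
cd oci-mapr

# Set tenancy/user OCIDs, API key path, region, and cluster sizing in the
# template's variable file, then provision (roughly 45 minutes end to end).
terraform init
terraform plan
terraform apply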


Product News

NVIDIA GPU Cloud on Oracle Cloud Infrastructure

NVIDIA GPU Cloud Containers on Oracle Cloud Infrastructure

This week at the NVIDIA GPU Technology Conference in Munich, the Oracle Cloud Infrastructure team is happy to announce general availability of support for NVIDIA GPU Cloud (NGC) containers. You can read about this and the other exciting Oracle and NVIDIA news in the press release. With this new capability, you can now easily run the GPU-accelerated containers from NGC on the best price-performance cloud.

"AI is a strategic imperative for every industry. With the availability of Tesla V100 in Oracle Cloud Infrastructure, researchers and developers can tap into the world's fastest accelerators to fuel faster discoveries and insights," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. "The integration of NVIDIA GPU Cloud's software containers optimized to fully leverage the Tesla V100 will ensure that enterprises around the world can access the technology they need to accelerate their AI research and deliver powerful new AI products and services."

Problem that needs solving

Today's installations of AI and high performance computing (HPC) applications are often complicated, almost always relying on libraries that must be installed at specific versions. An application's performance, or even its proper operation, often depends on having the correct dependencies. Software downloaded from Linux package managers like yum or apt is not always up-to-date and probably not built with performance in mind. Sometimes, the software is not available in packaged form and needs to be built from source, which is a time-consuming process that requires additional libraries and dependencies.

The case for containers

While portability is important for system administrators, others, like domain scientists, researchers, and engineers, are looking for computational reproducibility. Containers are a way to package applications, libraries, and configurations and run them as a self-contained, isolated environment that is agnostic to the software installed on the host system. Because applications inside a container always use the same environment, the performance is reproducible and portable.

NVIDIA GPU Cloud (NGC)

NVIDIA GPU Cloud offers a container registry of Docker images for deep learning software, HPC applications, and HPC visualization tools. Containers are pre-built, optimized to take full advantage of GPUs, and ready to run on Oracle Cloud Infrastructure. There are over 35 containers in the repository, including GPU-accelerated deep learning frameworks, molecular dynamics applications (NAMD, GROMACS, LAMMPS), and visualization tools like ParaView with NVIDIA IndeX. The Oracle-NGC-Deep-Learning-Image contains everything needed to run NGC containers on Oracle Cloud Infrastructure using compatible Bare Metal or Virtual Machine shapes (BM.GPU2.2, BM.GPU3.8, VM.GPU2.1, VM.GPU3.1, VM.GPU3.2, VM.GPU3.4).

Getting started

To use NGC containers on Oracle Cloud Infrastructure, log in to the Oracle Cloud Infrastructure Console, configure the settings as needed, and then create an instance based on the Oracle-NGC-Deep-Learning-Image by specifying the image OCID. After launching the instance, you can SSH into it and pull your desired container from the NGC container registry. To access all of the containers available from the NGC container registry, you need to authenticate to NGC. To do this, sign up for an NGC account at no charge, and create an NGC API key on the NGC website.
Once you've signed up, on the NGC Registry page, click Get API Key, then Generate API Key, and then Confirm to generate the key. If you have an existing API key, it becomes invalid when you generate a new one.

Launch the Instance using the Oracle-NGC-Deep-Learning-Image

1. Enter your instance name and choose an availability domain.
2. For Boot Volume, select the option Image OCID.
3. In the Image OCID field, specify the image OCID applicable to your region:
   us-ashburn-1: ocid1.image.oc1.iad.aaaaaaaaldqvugev7ssa43nozfkab6dlsvbyrenmzbo2r5cstxz4q2nks7sq
   eu-frankfurt-1: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaauwmn34u6e3hermwqlyhbufnqhhmej55j45mpvb4eow4umwlmgjha
4. For the shape type, choose either Virtual Machine or Bare Metal.
5. Select the shape.
6. Choose the VCN and subnet for the instance.
7. Upload or paste your SSH key, and click Create Instance.

The instance displays the status Provisioning. Once the status has changed to Running, you can connect to the instance. Because the image is based on Ubuntu 16.04 LTS, connect with the username ubuntu.

After connecting to the instance, you are prompted for your NGC API key. Enter the key and press Enter:

Logging into the NGC Registry at nvcr.io.....Login Succeeded

You're now ready to use NGC containers. Validate the installation by using the following command:

nvidia-docker run nvcr.io/nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04 nvidia-smi

If nvidia-smi lists the instance's GPUs, everything is working. The Pull complete lines in the output mean that layers of the Docker container have been downloaded; layering allows for reusability and efficiency when creating new images. Learn more on the Docker website. As always, you can find more information in our Oracle Cloud documentation. You can also join the conversation through NVIDIA's DevTalk forum for Oracle Cloud Infrastructure. If you don't have an Oracle Cloud Infrastructure account yet, sign up for the trial that provides more than 100 NVIDIA GPU hours for free!
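From there, pulling and running any other NGC container follows the same pattern. A sketch, assuming the NGC login above succeeded (the TensorFlow tag shown is illustrative; check the NGC registry for current tags):

# Pull an NGC deep learning framework container.
docker pull nvcr.io/nvidia/tensorflow:18.10-py3

# Run it with GPU access and confirm TensorFlow sees the devices.
nvidia-docker run --rm -it nvcr.io/nvidia/tensorflow:18.10-py3 \
  python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"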


Partners

Confluent Platform Now Validated on Oracle Cloud Infrastructure

We are proud to announce a validated reference architecture for Confluent Platform on Oracle Cloud Infrastructure. Starting today, you can deploy Confluent's industry-leading distributed streaming platform on Oracle's high-performance cloud by using Terraform templates. With this announcement, Oracle Cloud Infrastructure enhances its Big Data ISV ecosystem of partners. The partnership between Confluent and Oracle enables you to connect all your interfaces and data systems so that you can make decisions leveraging all your internal systems in real time, all on a unified enterprise cloud platform with unmatched performance, security, and availability.

Confluent Enterprise Provides a More Complete Distribution of Apache Kafka

The Confluent Enterprise flavor of Confluent Platform brings together the best distributed streaming technology from Apache Kafka and addresses the requirements of modern enterprise streaming applications. Confluent Enterprise includes the following components:

Confluent Control Center, for end-to-end monitoring and management
Confluent Replicator, for managing multiple-data-center deployments
Confluent Auto Data Balancing, for optimizing resource utilization and easy scalability
Clients for the C, C++, Python, and Go programming languages
Connectors for JDBC, Elasticsearch, HDFS, and more
Confluent Schema Registry, for managing metadata for Kafka topics
Confluent REST Proxy, for integrating with web applications

Figure 1: Confluent Platform Components

Learn more about Confluent Enterprise.

Oracle Cloud Infrastructure Provides Big Data Flexibility and Performance

Blazing Fast Performance

Oracle offers the most powerful bare metal compute instances with local NVMe flash storage in the industry. Only Oracle offers this local storage, based on advanced NVMe SSD technology and backed by a storage performance SLA. The bare metal compute instances are connected in clusters to a nonoversubscribed 25-gigabit network infrastructure, guaranteeing extremely low latency and very high throughput, which is a key requirement for high-performance distributed streaming workloads. In fact, Oracle Cloud Infrastructure is the only cloud with a network throughput performance SLA. Oracle Cloud Infrastructure also offers virtual machine compute instances.

Unmatched Data Ecosystem

Confluent clusters that are spun up in the cloud can sit right next to your favorite Hadoop/Spark clusters using Cloudera, Hortonworks, or MapR, and also next to Oracle's database environments—Oracle Autonomous Data Warehousing or Oracle Autonomous OLTP services. With Confluent Connectors that connect Apache Kafka to other data systems such as Oracle Cloud Object Storage, Apache Hadoop, JDBC, Elasticsearch, Cassandra, and IBM MQ, it allows for easy data sharing for analytics, monitoring, and more. Integrating Confluent with Oracle Object Storage is simple and quick using the Kafka Connect S3 connector, because Oracle Object Storage supports the Amazon S3 Compatibility API. This ensures that customers are not locked into a single vendor's storage service and gives them the ability to continue using their favorite client, application, or service with Oracle Object Storage. For details on how to integrate Confluent with Oracle Object Storage, see the Readme.md file and the example configuration at the end of this post.

Right-Size Your Infrastructure in the Cloud

Cloud infrastructure enables you to deploy the optimal amount of infrastructure to meet your demands. No more underutilization of too much infrastructure or higher latency caused by underforecasting.
In addition, Oracle offers:

The lowest compute pricing from a pay-as-you-go (PAYG) perspective
The lowest network egress costs in the industry

“This release of the validated reference architecture allows customers the freedom of choice to run Confluent Platform on Oracle Cloud Infrastructure to experience the performance and SLA delivered by Oracle’s bare metal instances and local NVMe platform. Now, Confluent can be deployed right next to Oracle Autonomous DB, enabling customers to unify their data silos and react in real time to events by using a modern scalable event streaming platform powered by Apache Kafka,” said Simon Hayes, Vice President of Corporate and Business Development at Confluent.

Deploying Confluent Platform

You can deploy Confluent Platform on Oracle Cloud Infrastructure by using Terraform automation, which is becoming the leading cross-cloud framework for infrastructure as code (IaC). Choose one of the following Terraform templates:

N-Node, which is configurable for clusters of any scale in a single availability domain
N-Node-Multi-AD, which is configurable for clusters of any scale across three availability domains

Figure 2: Multiple-Availability-Domain Architecture

The Confluent Platform topology supports three classes of service nodes. You specify three or more broker nodes and one or more worker nodes. A ZooKeeper quorum is required for metadata management, and it can be deployed on independent nodes or on broker nodes. Secondary services such as Confluent Schema Registry and Confluent REST Proxy are deployed on the worker nodes. To customize your Terraform deployment, you can perform the following actions:

Choose the version and edition of Confluent Platform to deploy.
Configure the number and shape of ZooKeeper, broker, and worker instances.
Specify storage capacity for broker instances.
Change CIDR block sizes and other configuration settings.

You can deploy both flavors of Confluent Platform: Confluent Enterprise or Confluent Open Source. For details about the Terraform templates, see the Readme.md file.

What's Next?

If you don't have an Oracle Cloud Infrastructure account yet, you can sign up for a 30-day free trial account. Follow the instructions on the GitHub oci-confluent page to install Confluent Platform on Oracle Cloud Infrastructure. Come and meet us at the Oracle OpenWorld booth #OCI-A01 to learn more about our Big Data ecosystem offerings. We hope you are as excited as we are about the Confluent Platform on Oracle Cloud Infrastructure solution. Let us know what you think!

Pinkesh Valdria
Principal Solutions Architect, Big Data
https://www.linkedin.com/in/pinkesh-valdria/
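As promised above, here is an illustrative Kafka Connect S3 sink configuration pointing at Oracle Object Storage's Amazon S3 Compatibility endpoint. The connector name, topic, bucket, namespace, and region are placeholders; the property names follow the standard Confluent S3 sink connector, but verify them against the connector documentation and the Readme.md before use:

name=oci-object-storage-sink
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=events
s3.bucket.name=my-kafka-archive
s3.region=us-ashburn-1
store.url=https://<namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
flush.size=1000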


Oracle Cloud Infrastructure

Database Migration to Oracle Cloud Infrastructure: Enabling TDE in CDBs and PDBs and Encrypting Tablespaces Online or Offline

This post is part of the “Database Migration to Oracle Cloud Infrastructure” blog series, which includes the posts related to database migration. Use these posts as building blocks for various migration approaches. For more information on Oracle Database and Exadata Cloud Services, review the details at Oracle Cloud Infrastructure - Database.

This post provides reference steps to help you enable Transparent Data Encryption (TDE) in Oracle Database container databases (CDBs) and pluggable databases (PDBs), and to encrypt tablespaces online or offline.

Note: TDE is mandatory for all Oracle Cloud Infrastructure databases. If TDE is not used at the source, enable it either at the source or at the target, using the sample steps in this post. During migrations, be sure to back up and restore the required TDE wallets from the source to the target. For information about Oracle Database tablespace encryption behavior in Oracle Cloud, see My Oracle Support Doc ID 2359020.1.

This sample migration workflow covers the following tasks:

Enable TDE in a CDB
Enable TDE in a PDB
Encrypt a Tablespace Online
Encrypt a Tablespace Offline

Enable TDE in a CDB

Update sqlnet.ora to add ENCRYPTION_WALLET_LOCATION:

vi $ORACLE_HOME/network/admin/sqlnet.ora
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/CDB/wallet)))

Check the wallet status in the CDB. Note that the wallet is not yet present:

show con_name
select wrl_parameter, wallet_type, status from v$encryption_wallet;

Add the wallet in the CDB: create the keystore, open the keystore, add the master key, and create the autologin wallet:

administer key management create keystore '/u01/app/oracle/admin/CDB/wallet' identified by "welcome1";
administer key management set keystore open identified by "welcome1";
administer key management set encryption key identified by "welcome1" with backup;
administer key management create auto_login keystore from keystore '/u01/app/oracle/admin/CDB/wallet' identified by "welcome1";

Restart the database to use the autologin wallet:

shutdown immediate
startup

Recheck the wallet status in the CDB. Note that the autologin wallet is now open:

select wrl_parameter, wallet_type, status from v$encryption_wallet;

Enable TDE in a PDB

Check the wallet status in the PDB. Note that the wallet is open with no master key:

alter session set container=pdb1;
select wrl_parameter, wallet_type, status from v$encryption_wallet;

Add a master key for the PDB:

administer key management set encryption key force keystore identified by "welcome1" with backup;

Recheck the wallet status in the PDB. Note that the autologin wallet is now open:

select wrl_parameter, wallet_type, status from v$encryption_wallet;

Encrypt a Tablespace Online

Note: Online tablespace encryption requires the COMPATIBLE initialization parameter to be set to 12.2 or higher.
select con_id, tablespace_name, encrypted from cdb_tablespaces where encrypted = 'YES' order by 1;
alter tablespace users encryption online using 'AES256' encrypt;
select con_id, tablespace_name, encrypted from cdb_tablespaces where encrypted = 'YES' order by 1;

Encrypt a Tablespace Offline

select con_id, tablespace_name, encrypted from cdb_tablespaces where encrypted = 'YES' order by 1;
alter tablespace users offline;
alter tablespace users encryption offline encrypt;
alter tablespace users online;
select con_id, tablespace_name, encrypted from cdb_tablespaces where encrypted = 'YES' order by 1;
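A quick way to verify the COMPATIBLE prerequisite noted above before attempting online encryption (a minimal sketch, run as SYSDBA on the database host):

# Online tablespace encryption requires COMPATIBLE >= 12.2.0.
echo "show parameter compatible;" | sqlplus -s / as sysdba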


Oracle Cloud Infrastructure

Database Migration to Oracle Cloud Infrastructure blog series

This post is the home page for the “Database Migration to Oracle Cloud Infrastructure” blog series, which includes the following posts related to database migration. Use these posts as building blocks for various migration approaches. Many customers are in the process of database migration, so we start with a checklist of things to consider and then will regularly add short topical posts, including data transfer best practices, configurations using backups, migrations using backups, and encryption.

Evaluation and Planning Checklist: Use this post to help you evaluate and plan for the migration of your databases to Oracle Cloud Infrastructure, based on the unique requirements of your source and target databases.

Planning for Database Backup Transfers to Object Storage: When you want to transfer database backups to Oracle Cloud Infrastructure Object Storage, use this post to help you plan and estimate the transfer.

Configuring the Cloud Backup Module for Existing or Fresh Backups: Backups are an integral part of any migration, and this is also true when you are planning for your database migration to Oracle Cloud Infrastructure. Use this post to help you with the planning and consumption of your backups for your database migration.

Enabling TDE in CDBs and PDBs and Encrypting Tablespaces Online or Offline: This post provides reference steps to help you enable Transparent Data Encryption (TDE) in Oracle Database container databases (CDBs) and pluggable databases (PDBs), and to encrypt tablespaces online or offline.

Migrating Oracle Database from On-Premises or Other Cloud Providers to Oracle DBaaS Using RMAN Backup Sets: This post provides reference steps to help you migrate Oracle Database from on premises, from other cloud providers, or from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure Database (DBaaS) by using RMAN backup sets, to achieve minimal downtime.


Events

Why Attend Oracle OpenWorld Sessions About Oracle Cloud Infrastructure?

The majority of enterprise workloads—between 68% and 82%, according to industry analysts—still live on premises. These are typically the complex, mission-critical, traditional applications that businesses rely on. Organizations want to bring the benefits of public cloud to these workloads, but it's a daunting task. Oracle Cloud Infrastructure is purpose-built for the enterprise. In our Oracle OpenWorld sessions, attendees will learn why Oracle Cloud Infrastructure is different from other clouds—from its standout performance to its commitment to openness to its focus on security—and how it enables organizations to succeed.

What Is Oracle Cloud Infrastructure?

On Monday, October 22, the Oracle Cloud Infrastructure: The Basics and the Next Level session covers the core capabilities that make migration of critical enterprise applications possible. Attendees will also hear about some differentiating features around machine learning, high-performance computing, containers, and other emerging technologies. And on Tuesday, October 23, the Your Cloud Transformation Roadmap on Oracle Cloud Infrastructure keynote session features customers talking about how they used these capabilities to improve application performance and reduce costs. For deep dives on some of these emerging technologies, attend High-Performance Computing That's Better Than On-Premises: Real-World Stories and Kubernetes in an Oracle Hybrid Cloud.

Oracle Databases and Applications

Oracle built its infrastructure as a service so that it could run databases in the cloud. Sessions at OpenWorld will provide a technical overview of running Oracle Autonomous Database Cloud on Oracle Cloud Infrastructure and share some dos and don'ts around Oracle Database Exadata Cloud Service. Oracle Cloud Infrastructure is also a natural fit for other Oracle applications, and two customers who recently migrated will discuss the benefits in the session Why Oracle Applications Run Best on Oracle Cloud Infrastructure.

Cloud Infrastructure Security

With Oracle Cloud Infrastructure, organizations can take their existing on-premises security technologies with them when they migrate, an approach called "move and improve." The Oracle Cloud Infrastructure Security Architecture: Peek Under the Covers session explains how hardware and software provide comprehensive compute and network security, and how customers benefit from these innovations. Attendees who use multiple cloud providers can get specific security advice in a session on Cisco Tetration, a cloud workload protection service built on Oracle Cloud Infrastructure.

Learning from Customers

An enterprise-grade cloud is a cloud that enables all businesses, regardless of their size or age, to take advantage of the latest and greatest technology. In Accomplish the Impossible with the Cloud: Hear How Startups Are Doing It, the leaders of four startups will share their stories of using Big Data, machine learning, and other enterprise-grade technologies on Oracle Cloud Infrastructure. Two large SaaS companies, FireEye and NetSuite, will share their rationale and best practices around Building and Running High-scale SaaS on Oracle Cloud Infrastructure. For an enterprise perspective, there is a panel with first-hand accounts from companies that migrated to the cloud. And in another session, customers will describe how they use Oracle Cloud Infrastructure to make their enterprise workloads more flexible, available, and performant.
Over the past several years, Oracle Cloud Infrastructure has taken the lead in transforming Oracle into a cloud company. These Oracle OpenWorld sessions aim to help others on that same path.


Oracle Cloud Infrastructure

Database Migration to Oracle Cloud Infrastructure: Migrating Oracle Database from On-Premises or Other Cloud Providers to Oracle DBaaS Using RMAN Backup Sets

This post is part of the "Database Migration to Oracle Cloud Infrastructure" blog series, which includes the posts related to database migration. Use these posts as building blocks for various migration approaches. For more information on Oracle Database and Exadata Cloud Services, review the details at Oracle Cloud Infrastructure - Database.

This post provides reference steps to help you migrate Oracle Database from on-premises, from other cloud providers, or from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure Database (DBaaS) by using RMAN backup sets. It covers the use of incremental backups: as long as backups are complete and consistent, you can stagger the level 0 and level 1 backups over a period of time before the final restore.

This post assumes that Transparent Data Encryption (TDE) is enabled at the source, so you should determine whether your source database uses TDE. TDE is mandatory for all Oracle Cloud Infrastructure databases. If TDE is not used at the source, enable it either at the source or at the target. Be sure to back up and restore the required TDE wallets from the source to the target.

This sample migration workflow covers the following tasks:

1. Evaluate and Plan
2. Back Up the Source Database
3. Perform Incremental Backups of the Source Database
4. Prepare the Target Database for the Restore
5. Restore and Recover the Database at the Target

Evaluate and Plan

Use the Evaluation and Planning Checklist to help you evaluate and plan for the migration of your databases to Oracle Cloud Infrastructure, based on the unique requirements of your source and target databases.

Back Up the Source Database

Connect to the source database, enable backup encryption, and set the compression to medium:

rman target /
set encryption on;
set compression algorithm 'medium';

Perform a level 0 backup, which is equivalent to taking a full backup:

run {
  configure controlfile autobackup off;
  backup as compressed backupset device type disk tag dta_level0
    cumulative incremental level 0
    format '/u01/nfs/l0_%T_%d_set%s_piece%p_%U'
    section size 24g database include current controlfile spfile
    plus archivelog format '/u01/nfs/l0_%T_%d_set%s_piece%p_%U';
}

Note: Record the backup piece name of the control file backup. You will need it to restore the control file at the target.

Copy the password file and TDE wallet files:

cp $ORACLE_HOME/dbs/orapwrohitdb /u01/nfs/.
zip -rj /u01/nfs/tde_wallet.zip /u01/app/oracle/admin/rohitdb/tde_wallet

Transfer the backups to Oracle Cloud Infrastructure Object Storage, using the information in Data Transfer Guidance and Transfer Options.

Perform Incremental Backups of the Source Database

Perform optional incremental backups, as needed:

set encryption on;
set compression algorithm 'medium';
run {
  backup as compressed backupset device type disk tag dta_level1
    cumulative incremental level 1
    format '/u01/nfs/l1_%T_%d_set%s_piece%p_%U'
    section size 24g database include current controlfile spfile
    plus archivelog format '/u01/nfs/l1_%T_%d_set%s_piece%p_%U';
}

Note: Record the backup piece name of the control file backup. You will need it to restore the control file at the target.

Transfer the incremental backups to Object Storage, using the information in Data Transfer Guidance and Transfer Options.
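Both transfer steps above can also use the OCI CLI when the backup pieces sit on a local file system. The following is a minimal sketch, assuming the CLI is installed and configured, and reusing the bucket and staging directory from the examples in this post:

# upload every backup piece staged in /u01/nfs to the rohit-backups bucket
oci os object bulk-upload --bucket-name rohit-backups --src-dir /u01/nfs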
Prepare the Target Database for the Restore

Create the target database in Oracle Cloud Infrastructure. To ensure that the target database has all the required metadata for Oracle Cloud Infrastructure tooling to work, create it by using one of the supported methods: the Oracle Cloud Infrastructure Console, the CLI, or the Terraform provider. This target database will be cleaned out and used as a shell for the migration, as needed.

Configure the Oracle Database Cloud Backup Module. Configuring the Cloud Backup Module for Existing or Fresh Backups provides an example of how to configure the Cloud Backup Module to point to the Object Storage backup bucket. For details, including variables and commands, see Installing the Oracle Database Cloud Backup Module.

Stop the target database:

. oraenv    (enter rohitdb at the prompt)
export ORACLE_UNQNAME=rohitdb_phx3bv
srvctl stop database -d rohitdb_phx3bv

Manually clean up the target database. As the oracle user:

rm /opt/oracle/dcs/commonstore/wallets/tde/rohitdb_phx3bv/*
rm $ORACLE_HOME/dbs/orapwrohitdb

As the grid user:

asmcmd ls +DATA/rohitdb_phx3bv
asmcmd rm -rf +DATA/rohitdb_phx3bv/CHANGETRACKING
asmcmd rm -rf +DATA/rohitdb_phx3bv/DATAFILE
asmcmd rm -rf +DATA/rohitdb_phx3bv/TEMPFILE
asmcmd rm -rf +DATA/rohitdb_phx3bv/69F189055AB223CEE053D62DC40ABE06
asmcmd rm -rf +DATA/rohitdb_phx3bv/73BBBF30958C4846E0530D01000AE2B2
asmcmd ls +RECO/rohitdb_phx3bv
asmcmd rm -rf +RECO/rohitdb_phx3bv/*

Note: Do not delete the parameter file.

Copy the source password file and TDE wallet files to the target location:

wget https://objectstorage.us-phoenix-1.oraclecloud.com/p/jqpeWTkKbsbsGPphsjuIn0oAVkchH-4hCxuVrtsYPE8/n/sic-dbaas/b/rohit-backups/o/orapwrohitdb
wget https://objectstorage.us-phoenix-1.oraclecloud.com/p/R7zIMbjv-bV0TldZPntONowVyLtD1ljSljEhE8xL0Ro/n/sic-dbaas/b/rohit-backups/o/tde_wallet.zip
cp orapwrohitdb $ORACLE_HOME/dbs/.
unzip tde_wallet.zip -d /opt/oracle/dcs/commonstore/wallets/tde/rohitdb_phx3bv/

Ensure that sqlnet.ora has the right ENCRYPTION_WALLET_LOCATION:

cat $ORACLE_HOME/network/admin/sqlnet.ora

Adjust the control file location:

sqlplus / as sysdba
startup force nomount;
alter system set control_files='+RECO' scope=spfile sid='*';
startup force nomount;

Restore and Recover the Database at the Target

Create the SBT metadata.xml file for the Object Storage backup pieces.
run {
  allocate channel t1 device type sbt
    parms='SBT_LIBRARY=/home/oracle/cbm/cbm_lib/libopc.so, SBT_PARMS=(opc_pfile=/home/oracle/cbm/cbm_config)';
  send channel t1 '
    export backuppiece l0_20180823_ROHITDB_set237_piece1_7dtb9luh_1_1;
    export backuppiece l0_20180823_ROHITDB_set238_piece1_7etb9lul_1_1;
    export backuppiece l0_20180823_ROHITDB_set239_piece1_7ftb9m02_1_1;
    export backuppiece l0_20180823_ROHITDB_set240_piece1_7gtb9m15_1_1;
    export backuppiece l0_20180823_ROHITDB_set241_piece1_7htb9m29_1_1;
    export backuppiece l0_20180823_ROHITDB_set242_piece1_7itb9m2b_1_1;
    export backuppiece l0_20180823_ROHITDB_set243_piece1_7jtb9m2c_1_1;
  ';
}

run {
  allocate channel t1 device type sbt
    parms='SBT_LIBRARY=/home/oracle/cbm/cbm_lib/libopc.so, SBT_PARMS=(opc_pfile=/home/oracle/cbm/cbm_config)';
  send channel t1 '
    export backuppiece l1_20180823_ROHITDB_set244_piece1_7ktb9r4k_1_1;
    export backuppiece l1_20180823_ROHITDB_set245_piece1_7ltb9r4m_1_1;
    export backuppiece l1_20180823_ROHITDB_set246_piece1_7mtb9r4q_1_1;
    export backuppiece l1_20180823_ROHITDB_set248_piece1_7otb9r4t_1_1;
    export backuppiece l1_20180823_ROHITDB_set249_piece1_7ptb9r4v_1_1;
    export backuppiece l1_20180823_ROHITDB_set250_piece1_7qtb9r51_1_1;
  ';
}

Restore the control file from the Object Storage backups.

Note: If you want to use incremental backups, use the level 1 control file backup piece. If only level 0 backups will be used for the final restore, use the level 0 control file backup piece.

run {
  allocate channel t1 device type sbt
    parms='SBT_LIBRARY=/home/oracle/cbm/cbm_lib/libopc.so, SBT_PARMS=(opc_pfile=/home/oracle/cbm/cbm_config)';
  restore controlfile from 'l1_20180823_ROHITDB_set248_piece1_7otb9r4t_1_1';
  alter database mount;
}

Catalog the Object Storage backup pieces:

run {
  configure channel device type 'sbt_tape'
    parms 'SBT_LIBRARY=/home/oracle/cbm/cbm_lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/cbm/cbm_config)';
  crosscheck backup device type sbt;
  delete noprompt expired backup;
  catalog device type sbt backuppiece
    'l0_20180823_ROHITDB_set237_piece1_7dtb9luh_1_1',
    'l0_20180823_ROHITDB_set238_piece1_7etb9lul_1_1',
    'l0_20180823_ROHITDB_set239_piece1_7ftb9m02_1_1',
    'l0_20180823_ROHITDB_set240_piece1_7gtb9m15_1_1',
    'l0_20180823_ROHITDB_set241_piece1_7htb9m29_1_1',
    'l0_20180823_ROHITDB_set242_piece1_7itb9m2b_1_1',
    'l0_20180823_ROHITDB_set243_piece1_7jtb9m2c_1_1',
    'l1_20180823_ROHITDB_set244_piece1_7ktb9r4k_1_1',
    'l1_20180823_ROHITDB_set245_piece1_7ltb9r4m_1_1',
    'l1_20180823_ROHITDB_set246_piece1_7mtb9r4q_1_1',
    'l1_20180823_ROHITDB_set248_piece1_7otb9r4t_1_1',
    'l1_20180823_ROHITDB_set249_piece1_7ptb9r4v_1_1',
    'l1_20180823_ROHITDB_set250_piece1_7qtb9r51_1_1';
}
list backup summary;

Restore the database from the Object Storage backups:

run {
  set newname for database to new;
  restore device type sbt database;
  switch datafile all;
  switch tempfile all;
}

Recover the database from the Object Storage backups:

list backup of archivelog all;
run {
  set until sequence 59 thread 1;
  recover device type sbt database;
}

Adjust the log files and block change tracking location.
alter database rename file '/u04/app/oracle/redo/redo03.log' to '+RECO';
alter database rename file '/u04/app/oracle/redo/redo02.log' to '+RECO';
alter database rename file '/u04/app/oracle/redo/redo01.log' to '+RECO';

alter database disable block change tracking;
alter database enable block change tracking using file '+DATA';

Open the database with resetlogs:

alter database open resetlogs;
select open_mode from v$database;


Events

Take the Oracle Cloud Infrastructure 2018 Architect Associate Certification Exam for FREE at the Oracle OpenWorld Test Fest!

Would you like to be Oracle Cloud Infrastructure certified? Now is your chance to get certified for free! Come to the Oracle OpenWorld 2018 Test Fest and take the Oracle Cloud Infrastructure Architect Associate exam. Nine test sessions are available to you over the course of four days. We are conveniently located at the Marriott Marquis, San Francisco (rooms Foothill E and F). These free exam sessions are available to all our partners and conference attendees.

Monday, October 22: 9:00 AM - 11:00 AM, 11:30 AM - 1:30 PM, 3:00 PM - 5:00 PM
Tuesday, October 23: 11:00 AM - 1:00 PM, 3:00 PM - 5:00 PM
Wednesday, October 24: 11:00 AM - 1:00 PM, 3:00 PM - 5:00 PM
Thursday, October 25: 11:00 AM - 1:00 PM, 1:30 PM - 3:30 PM

You can register for your exam by clicking here. You can prepare for your exam by viewing the study guide. You can even gauge your readiness by taking the practice test. If you have any questions about the Oracle Cloud Infrastructure Architect Associate exam, please reach out to me directly at greg.hyman@oracle.com.

Greg Hyman
Principal Program Manager, Oracle Cloud Infrastructure Certification
Twitter: @GregoryHyman
LinkedIn: GregoryRHyman

Read the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam blog series on Greg's blog page.


Customer Stories

Oracle OpenWorld 2018 Preview - Move and Improve your Oracle Apps to the Cloud

We're so excited that Oracle OpenWorld 2018 in San Francisco is only days away. Oracle experts will be on site to share best practices for migrating Oracle Applications like E-Business Suite, JD Edwards, PeopleSoft, and Siebel to the cloud. What's more, they'll be unveiling new solutions that improve how these business-critical applications run and perform. Perhaps more importantly, you will get a chance to learn from other customers who will take to the stage to talk about how they're cutting operational costs, boosting performance, and transforming IT by migrating their Oracle Applications to the cloud. Here's a sneak peek at some of the sessions you won't want to miss.

Why Oracle Applications Run Best on Oracle Cloud Infrastructure [BUS4596]

Souji Madhurapantula, Principal Product Manager at Oracle, will lead a session on how moving Oracle Applications from on-premises to Oracle's cloud can help organizations innovate faster, improve availability and reliability, and reduce costs. Attendees will get practical insights from customers Cox Automotive and Lifescan, Inc., which have recently made the move. Attendees will also learn how easy it is to migrate with professional services from Oracle and an ecosystem of experienced third-party managed service providers (MSPs) that are invested in your success.

Hear Customers Talk About Migrating to the Cloud [CAS5907]

As more and more customers start down the path of migrating their enterprise workloads to the cloud, we've put together a panel of customers who have done it. Kash Iftikhar, our VP of Product Management and Strategy, will kick off an interactive panel discussion with IT leaders from 7-Eleven, Covanta Energy, HID Global, and MBC Group, who will chat about their real-life experiences of moving applications to the cloud.

Best Practices for Enterprise Workloads in Oracle Cloud Infrastructure [TIP4599]

If you're already familiar with the basics and want to dive into more advanced topics, like architecture patterns for building, deploying, and running enterprise workloads in Oracle Cloud Infrastructure, join this session led by Karan Singh, Director of Product Management in charge of Solution Architectures. Oracle customer Alliance Data Systems will share the story of how they stood up six PeopleSoft environments (development, Q/A, user acceptance testing, disaster recovery, production, and certification), all in Oracle Cloud Infrastructure.

Leverage Oracle Cloud Infrastructure, Applications, and Oracle Premier Support [BUS5737]

This session will focus on the Oracle Premier Support offerings available for your Oracle Application dev/test and production environments in the Oracle Cloud. Especially if you're bringing your critical business customizations from on-premises to the cloud, learn how you can take advantage of the same experience and the same comprehensive support. You'll hear how customers Essilor of America and Atlas Roofing Corporation have been able to transform their businesses.

Finally, if you're interested in sessions dedicated to deploying specific Oracle Applications like JD Edwards, PeopleSoft, or E-Business Suite in Oracle Cloud Infrastructure, we invite you to join those targeted sessions as well. We look forward to seeing all of you at OpenWorld in San Francisco.


Developer Tools

Oracle and NVIDIA Announce NVIDIA HGX-2 for Oracle Cloud Infrastructure & Collaboration on RAPIDS Accelerated Data Science Software

From enabling autonomous vehicles to running global climate simulations, rapid progress in AI and HPC has transformed entire industries, while demanding massive increases in complexity and compute power. As part of this transition over the last 12 months, Oracle Cloud Infrastructure has been collaborating with NVIDIA to unlock cutting-edge bare-metal and virtual machine instances for engineers, data scientists, researchers, and developers, giving them the power to run and solve the greatest AI and HPC challenges at their fingertips.

Oracle Cloud Infrastructure was the first public cloud provider to launch bare-metal instances based on the NVIDIA Pascal GPU architecture in 2017, and followed that up with another public cloud first: general availability of bare-metal instances with NVIDIA Tesla V100 Tensor Core GPUs, which help make deep learning workloads even faster. Today, in collaboration with NVIDIA, we're excited to announce that Oracle Cloud Infrastructure will offer the NVIDIA HGX-2 platform in both bare-metal and virtual machine instances, giving customers access to a unified HPC and AI computing architecture. HGX-2 is designed for multi-precision computing: high-precision FP64 and FP32 for accurate HPC, and faster, reduced-precision FP16 and INT8 for AI. Combined with 2 petaFLOPS of compute and NVIDIA NVSwitch interconnect technology providing 300 GB/sec of GPU-to-GPU bandwidth, HGX-2 can accelerate the most demanding applications.

"This new collaboration with Oracle will help fuel incredible innovation across a wide range of industries and uses," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. "By taking advantage of NVIDIA's latest technologies, Oracle is well positioned to meet surges in demand for GPU acceleration for deep learning, high-performance computing, data analytics and machine learning."

These instances will also include up to 48 cores of Intel Xeon processors running at 3.5 GHz all-core turbo frequency, up to 768 GB of system memory, up to 25 Gbps of non-oversubscribed network bandwidth, and the ability to attach up to 1 petabyte of NVMe block storage. The instances that Oracle Cloud Infrastructure will offer in early 2019 include the following; a 16-way instance will also be offered:

Instance    Cores (3.5 GHz all-core turbo)  Memory   Storage                        GPUs
BM.GPU4.8   48                              768 GB   1 Petabyte of Block Storage    8x 32 GB V100 with NVSwitch
VM.GPU4.4   22                              360 GB   1 Petabyte of Block Storage    4x 32 GB V100 with NVSwitch
VM.GPU4.2   11                              180 GB   1 Petabyte of Block Storage    2x 32 GB V100 with NVSwitch
VM.GPU4.1   5                               90 GB    1 Petabyte of Block Storage    1x 32 GB V100

Apart from enabling HPC and AI workloads, we're targeting data science and analytics as a major area of investment. This is bolstered by recent acquisitions and work on Oracle's Data Science Cloud, which makes it easy and intuitive for data science teams to work collaboratively on the data-driven projects that transform how companies do business. We are enabling use cases such as bringing algorithmic decision-making to drug development, diagnostics, and clinical trials, and to lending, investing, and banking in the fintech sector.
NVIDIA RAPIDS Software Framework

We're also excited to collaborate with NVIDIA to support the newly announced RAPIDS open source software, a set of open source libraries for accelerating end-to-end data science training pipelines on NVIDIA GPUs. RAPIDS dramatically speeds up the data science pipeline by moving workflows onto the GPU, optimizes machine learning training with more iterations for better model accuracy, and accelerates the Python data science toolchain with hassle-free integration and minimal code changes.

Support for NGC Containers Now Generally Available on Oracle Cloud Infrastructure

You can download a variety of GPU-accelerated containers from the NVIDIA GPU Cloud (NGC) container registry and run them on Oracle Cloud Infrastructure! First announced with preview support at NVIDIA's GPU Technology Conference in Silicon Valley, general availability means that everyone can now easily deploy containerized applications and frameworks from NGC for HPC, data science, and AI, and run them seamlessly on Oracle Cloud Infrastructure while taking advantage of the portfolio of GPU instances across multiple regions in the U.S. and Europe. Find out how to use NGC containers on Oracle Cloud Infrastructure here (a quick sketch of the container workflow also appears at the end of this post). For more information about Oracle Cloud Infrastructure's GPU offerings, visit https://cloud.oracle.com/iaas/gpu.

Oracle Cloud Infrastructure at GTC Europe 2018

The Big Compute and HPC teams will be at NVIDIA's GTC Europe in Munich in full force, so I encourage you to come and speak to our engineering teams and get demos and hands-on experience with Oracle Cloud Infrastructure. Additionally, attend our general session, "E8528 - AI & HPC Infrastructure on Oracle Cloud Infrastructure," on Thursday, October 11, at 16:30 in Room 22. See you there!
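As referenced above, here is a rough sketch of the NGC container workflow on a GPU instance. It assumes the NVIDIA drivers and the NVIDIA container runtime are already installed, and the image tag shown is illustrative rather than an exact current tag:

# authenticate to the NGC registry; the user name is literally $oauthtoken,
# and the password prompt expects your NGC API key
docker login nvcr.io -u '$oauthtoken'

# pull a GPU-accelerated framework container and run it with GPU access
docker pull nvcr.io/nvidia/tensorflow:18.10-py3
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tensorflow:18.10-py3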


Product News

Oracle Cloud Infrastructure Storage Gateway: Leveraging the Object Storage API

Hi, my name is Douglas Copas, and I'm an Oracle Cloud Infrastructure Solution Architect. In this blog post, I'll introduce you to Oracle Cloud Infrastructure Storage Gateway. In simple terms, and in its current incarnation, Storage Gateway is a piece of software that uses the Oracle Cloud Infrastructure Object Storage API to turn an ordinary NFS share into an Object Storage-backed NFS share. Normally, for an application to use Object Storage, that application needs to support and use the Object Storage API. Storage Gateway abstracts this away, letting the client simply write to a standard file directory.

For a while now, I've been of the opinion that the age-old question "What is a cloud?" actually has a simple answer: it's an API. Specifically, of course, it's an API that provides compute, storage, and network resources, but it's an API nonetheless. When seen through the lens of this API, classic computing infrastructure takes on a flexibility and fluidity not possible in the physical world and only hinted at in the monolithic enterprise virtualization solutions of yesteryear. The modern, API-driven cloud infrastructure is powerful. As we'll see with Storage Gateway, the Oracle Cloud Infrastructure API has been leveraged to achieve something very clever and useful. But first, a side note about Object Storage.

What Is Object Storage and How Is It Different from Block Storage?

Object Storage can be thought of as something in between unstructured, content-agnostic block storage (most akin to a home PC's local hard disk) and a document management system, which tracks versioning, authorship, and, to some extent, content. Object Storage is a network-connected system in which objects are read and written on a remote system via a REST API. These objects reside in a logical construct called a bucket. The objects can be anything, but generally the system is designed for immutable items such as photos, videos, and compressed archives. Things that are written once by one person and read many times by many others lend themselves well to this system. Of course, Oracle Cloud Infrastructure Object Storage has an API, and that API allows us to do something very clever.

Introducing Storage Gateway

Networked storage is a concept that goes back many years. A quick look at Wikipedia tells me that Sun developed NFS in 1984, "allowing a user on a client computer to access files over a computer network much like local storage is accessed." The idea is simple: one or more servers use their own storage to store (and serve) files for clients. But what if the server didn't write things locally? What if the files were in turn sent into the cloud? This is where Oracle Cloud Infrastructure Storage Gateway fits in.

How It Works

The idea is simple. Take a Linux server on Oracle Cloud Infrastructure running the NFS server bits, and install the Oracle Cloud Infrastructure Storage Gateway bits. After a little configuration, when a file is written or modified in a Storage Gateway-backed share, that file is automatically uploaded to a connected Object Storage bucket. NFS itself does most of the work, advertising the share, managing the transfer between the NFS server and the NFS clients, and so on. Storage Gateway handles other tasks, like read and write caching and multipart upload. This means that you can have an NFS server with no local storage for the files, enabling some of the scenarios that NFS itself enables. Why only some? I'll discuss that at the end of the post.
How to Install It

One of the awesome things about Oracle Cloud Infrastructure is the documentation, and the Storage Gateway installation instructions are no exception. Here I just want to list a few notes about the process that you might find helpful.

First, when creating a server in Oracle Cloud Infrastructure to host Storage Gateway, plan ahead. Unsurprisingly, the server needs some large block volumes attached to it to act as a cache. We recommend having at least three separate volumes: one each for the cache, metadata, and logs. Failing to do this results in some scary warning messages during installation, and failing to plan ahead results in multiple aborted installation attempts while block volumes are attached.

Second, a note about the management console and admin password. Without the -a option for advanced options, the installation script makes the management (web) console available on port 443, with no URI. If you are installing Storage Gateway on a VM with a public IP address, the console will be immediately reachable by anyone (security list rules notwithstanding), and in fact the first user to connect via the management console will be prompted to set an initial password. To avoid any security exposure, you can add a temporary explicit DENY rule for incoming traffic on the management port to iptables on the Storage Gateway server, until the sudo ocisg password:reset and sudo ocisg password:set <new_password> commands can be issued from the CLI.

You also need to create an Object Storage bucket in your tenancy for each NFS share that you want to back.

How to Use It (and How Not to)

Overview of Storage Gateway: Recommended Uses and Workloads is a must-read at this point. Imagine a scenario where a client workload produces backups in .tgz form. With a simple cron job or two (see the sketch at the end of this post), standby systems in another region will always have the latest backup available from which to restore.

I want to stress that using Object Storage to back a file share (NFS) has consequences. Workloads that perform frequent reads and writes on files should not have those files residing in an Oracle Cloud Infrastructure Storage Gateway share. Likewise, there is no support for multiple-write merging, so the system is not a good choice for a collaboration workspace. In fact, keeping humans out of this file system altogether is probably a good idea. This is no problem in the cloud, where automation is the watchword and should always be the first choice.

Stay tuned for my next post, in which I'll discuss private inter-region VCN peering in EMEA (and provide a nice use case for Storage Gateway in the process).
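As promised above, a minimal sketch of the backup-drop scenario as a crontab entry on the producing host; every path and share name here is hypothetical:

# nightly at 02:00, copy the newest backup archive into the Storage Gateway-backed
# share; Storage Gateway then uploads it to the connected Object Storage bucket
# (note that cron requires % to be escaped as \%)
0 2 * * * cp /backups/app-$(date +\%F).tgz /mnt/ocisg/backup-share/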


Oracle Cloud Infrastructure

Database Migration to Oracle Cloud Infrastructure: Configuring the Cloud Backup Module for Existing or Fresh Backups

This post is part of the "Database Migration to Oracle Cloud Infrastructure" blog series, which includes the posts related to database migration. Use these posts as building blocks for various migration approaches. For more information on Oracle Database and Exadata Cloud Services, review the details at Oracle Cloud Infrastructure - Database.

Backups are an integral part of any migration, and this is also true when you are planning your database migration to Oracle Cloud Infrastructure. Use the following guidance to help you with the planning and consumption of your backups for your database migration.

Using Existing Backups for Migration

Unless existing backups are already in an Oracle Cloud Infrastructure Object Storage bucket, transfer the backups from your on-premises environment, Oracle Cloud Infrastructure Classic, or another cloud provider by using the guidance provided in the Planning for Database Backup Transfers to Object Storage blog post. If the existing backups are located in a supported object store, such as Oracle Cloud Infrastructure Object Storage Classic, then based on your data volume, network bandwidth, and network reliability, you can point the Oracle Database Cloud Backup Module to that object store bucket and complete the migration without needing to transfer these backups to Oracle Cloud Infrastructure first.

Using Fresh Backups for Migration

For fresh backups, we recommend configuring the Oracle Database Cloud Backup Module to point to the backup bucket in Oracle Cloud Infrastructure Object Storage.

Note: Based on your data volume, network bandwidth, and network reliability, if uploading fresh backups would take longer than 1 to 2 weeks, consider using the Data Transfer service.

The following steps provide an example of how to configure the Cloud Backup Module to point to the Object Storage backup bucket. For details, including the variables and command shown, see Installing the Oracle Database Cloud Backup Module.

Set the required environment variables, using appropriate values for your environment:

sudo su - oracle
mkdir -p /home/oracle/cbm/cbm_lib
export vcbm_opcinstalljar=/opt/oracle/oak/pkgrepos/oss/odbcs/opc_install.jar
export vcbm_wallet=/home/oracle/cbm/cbm_wallet
export vcbm_lib=/home/oracle/cbm/cbm_lib
export vcbm_config=/home/oracle/cbm/cbm_config
export vcbm_host=https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/<>
export vcbm_bucket='rohit-backups'
export vcbm_svcac='<>'
export vcbm_swiftpw='<>'

Run the opc_install command to install and configure the Cloud Backup Module to point to the backup bucket:

java -jar $vcbm_opcinstalljar -opcId $vcbm_svcac -opcPass $vcbm_swiftpw \
  -container $vcbm_bucket -walletDir $vcbm_wallet -libDir $vcbm_lib \
  -configfile $vcbm_config -host $vcbm_host
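Once the module is configured, one way to confirm that it can actually reach the bucket is a small test backup through an SBT channel. This is a minimal sketch, reusing the library and configuration paths from the example above:

rman target / <<'EOF'
run {
  allocate channel t1 device type sbt
    parms='SBT_LIBRARY=/home/oracle/cbm/cbm_lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/cbm/cbm_config)';
  # a small object is enough to prove connectivity to the bucket
  backup current controlfile;
}
EOF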


Oracle Cloud Infrastructure

Database Migration to Oracle Cloud Infrastructure: Planning for Database Backup Transfers to Object Storage

This post is part of the "Database Migration to Oracle Cloud Infrastructure" blog series, which includes the posts related to database migration. Use these posts as building blocks for various migration approaches. For more information on Oracle Database and Exadata Cloud Services, review the details at Oracle Cloud Infrastructure - Database.

When you want to transfer database backups to Oracle Cloud Infrastructure Object Storage, use the following guidance and options to help you plan and estimate the transfer.

Data Transfer Guidance

The following table provides some high-level guidance based on "theoretical minimums" from over-the-internet data transfer time calculators (a back-of-envelope sketch of this calculation appears at the end of this post). Start with these estimates to evaluate the time it might take to transfer your data, but then be sure to test for the actual numbers based on your data volume, network bandwidth, and network reliability.

Note: If uploading data takes longer than 1 to 2 weeks, consider using the Data Transfer service option (see the following section).

         10 Mbps     50 Mbps     100 Mbps     1 Gbps       10 Gbps
5 GB     1+ hours    13+ mins    7+ mins      1+ mins      < 1 min
10 GB    2+ hours    27+ mins    13+ mins     1+ mins      < 1 min
100 GB   23+ hours   4.5+ hours  2.25+ hours  14+ mins     1+ mins
1 TB     9.5+ days   1.5+ days   1+ days      2.25+ hours  14+ mins
5 TB     48+ days    10+ days    5+ days      12+ hours    1.10+ hours
10 TB    100+ days   20+ days    10+ days     1+ days      2.20+ hours

Transfer Options

Based on your data volume, network bandwidth, and network reliability, use one of the following options to upload the backups to Oracle Cloud Infrastructure (OCI) Object Storage.

Option                 Transfer Mode   Options for Copying Data
Public internet        Online          OCI CLI, OCI API, OCI Console, rclone
IPSec VPN              Online          OCI CLI, OCI API, OCI Console, rclone
FastConnect            Online          OCI CLI, OCI API, OCI Console, rclone
Data Transfer service  Offline         cp/scp to NFS mount points
Storage Gateway        Sync            cp/scp to NFS mount points

Incremental Backup Considerations

Based on your data volume, network bandwidth, and network reliability, you can also upload incremental backups to Object Storage by using any of the preceding options. As long as backups are complete and consistent, you can use a combination of transfer options, staggered over a period of time, to transfer the required backups to the target bucket. For example, say that you transferred the bulk of your data by using the Data Transfer service. The incremental backups taken since the last level 0 or level 1 backups shipped via the Data Transfer service can then be uploaded by using different methods, such as the OCI CLI or rclone, to the same target bucket, which already holds the earlier level 0/1 backups.

TDE Wallet Considerations

As a best practice, we recommend that you do not upload TDE wallet files and backups to the same location.
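As referenced above, the guidance table's "theoretical minimum" numbers can be reproduced with simple arithmetic. This back-of-envelope sketch ignores protocol overhead, throttling, and retries:

# hours = (size in GB x 8 x 1024 megabits) / (link rate in Mbps) / 3600
SIZE_GB=100    # 100 GB of backups
RATE_MBPS=10   # 10 Mbps link
awk -v gb="$SIZE_GB" -v mbps="$RATE_MBPS" \
  'BEGIN { printf "%.0f hours\n", gb * 8 * 1024 / mbps / 3600 }'
# prints "23 hours", matching the 100 GB / 10 Mbps cell in the table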


Introducing Object Storage Lifecycle Management Policies

Not all stored data is equally valuable or mission-critical. This was the operating assumption when we launched Archive Storage into the Oracle Cloud Infrastructure storage portfolio. Archive Storage offers customers the flexibility of classifying data as hot or cold, and then storing it in a standard bucket or an archive bucket based on that classification. The benefit? Significant cost savings. Storing data in Archive Storage is 90 percent cheaper than storing data in standard Object Storage.

That said, data might not always fall neatly or permanently into the hot and cold categories. Data that starts its lifecycle hot (needing to be accessed frequently or quickly) can decrease in demand as it ages (and become suitable for archival cold storage). At some point, it might make sense to purge data to keep storage costs in check. Actively managing data placement across its lifecycle can significantly reduce overall storage costs. However, without effective data management tools, managing your data's lifecycle could result in significant operational overhead. To alleviate this storage management pain, we are pleased to announce the general availability of Oracle Cloud Infrastructure's Object Lifecycle Management functionality.

Object Lifecycle Management lets you define a lifecycle policy on a bucket, letting you control how objects stored in the bucket are automatically managed for you over time. You can create up to 1000 distinct rules for each bucket that govern the lifecycle management of your objects. Object Lifecycle Management offers two types of rules: those that archive your objects for you, and those that delete your objects for you. With the first, Object Storage changes the storage tier of an object from standard Object Storage to Archive Storage based on the object's age in days. Rules that delete objects work in the same way, except that your specified data is deleted after it ages beyond a specified number of days. You can define rules that apply to all objects stored in the bucket, or rules that operate on only a subset of objects that contain a specified object name prefix pattern.

You can mix and match rules in a lifecycle policy to drive specific lifecycle management behavior. For example, you can create a lifecycle policy that automatically migrates objects containing the name prefix "ABC" from standard Object Storage to Archive Storage 30 days after the data was created, and then deletes the same group of objects 120 days after creation. Note that for data in an Archive Storage bucket, delete rules are the only type that can be defined.

Sample Lifecycle Policy

[
    {
        "name": "Archive ABC",
        "action": "ARCHIVE",
        "objectNameFilter": {
            "inclusionPrefixes": [
                "ABC"
            ]
        },
        "timeAmount": 30,
        "timeUnit": "DAYS",
        "isEnabled": true
    },
    {
        "name": "DELETE_ABC",
        "action": "DELETE",
        "objectNameFilter": {
            "inclusionPrefixes": [
                "ABC"
            ]
        },
        "timeAmount": 120,
        "timeUnit": "DAYS",
        "isEnabled": true
    }
]

Rules can be modified after they are created, and any changes take effect immediately. Rules are evaluated for conflicts at runtime, and rules that delete objects always take priority over rules that would archive the same objects.
If you want to modify or add rules to an existing lifecycle policy by using the CLI, SDK, or API, you must rewrite the bucket's entire lifecycle policy, including all previously defined rules for the bucket that will not change, along with your edits and additions; any previously defined rules that you do not re-create are dropped (see the sketch at the end of this post). You also have the option of editing your lifecycle policy by using the Oracle Cloud Infrastructure Console, where you can easily add, edit, and remove individual rules.

To create lifecycle policies by using the Oracle Cloud Infrastructure Console, sign in to the Console and select the Object Storage bucket that you want to define the lifecycle policy on. In the Resources list displayed on the lower-left side, click Lifecycle Policy Rules. Click Create Rule, and in the Create Lifecycle Rule dialog box, specify the rule name, the type of action, and the age of data before the lifecycle rule becomes active. Optionally, you can also specify the object prefix, if you want the rule to apply to only a subset of the objects stored in the bucket. Clicking Create creates the lifecycle policy rule on the bucket.

For more information about Object Lifecycle Management, review the storage FAQs and the Object Lifecycle Management documentation.
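As referenced above, here is a minimal sketch of replacing a bucket's lifecycle policy with the OCI CLI. The bucket name and file path are placeholders, and remember that the items you pass replace the entire policy:

# rules.json holds the full policy, for example the sample shown above
oci os object-lifecycle-policy put \
  --bucket-name my-bucket \
  --items file://rules.json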


Oracle Cloud Infrastructure

Connecting VCNs by Using Multiple VNICs: Part 1

In Oracle Cloud Infrastructure, a local peering gateway (LPG) is used to connect two or more virtual cloud networks (VCNs). An LPG enables a service provider model in which the provider VCN can peer with the consumer VCN and allow private access to shared resources. However, you can connect only 10 LPGs per VCN.

Consider the example of an IT management company that uses Oracle Cloud Infrastructure. They provision one VCN to be the hub VCN, which is controlled by the central IT team, and a spoke VCN for each client. If the hub VCN needs to connect to and manage more than 10 client VCNs, a bridge instance can work around the limit. This post discusses a solution that connects multiple VCNs by using secondary VNICs on a bridge instance. The same concept can be expanded to connect more than two VCNs in the same region and the same tenancy.

Use Case

This post uses the example of connecting two VCNs (VCN-1 and VCN-2) with nonoverlapping subnets, 10.0.0.0/16 and 10.1.0.0/16. If two VCNs have overlapping subnets, the traffic stays within the local VCN, because routes to overlapping endpoints within the same VCN take precedence.

How Does VCN-1 Connect to VCN-2?

The answer is secondary VNICs on the bridge instance. Every instance has a primary VNIC that is connected to the subnet of the VCN in which the instance is launched. You can attach secondary VNICs to this instance and connect each of these VNICs to a different VCN's subnet. This is a unique capability offered by Oracle Cloud Infrastructure as a building block for a service provider model.

When the public instance, called the bridge instance, is launched in VCN-1's management subnet, a VNIC from that subnet is attached to the bridge instance. This is the default (primary) VNIC. To connect the bridge instance to VCN-2, you create an attached VNIC, also referred to as the secondary VNIC. Ensure that this new VNIC comes from the management subnet of VCN-2.

The bridge instance is now connected to VCN-1's management subnet (MgmtSubnet1) via its default VNIC and to VCN-2's management subnet (MgmtSubnet2) via the secondary VNIC. After this configuration is completed, you set up the basic route tables and security lists, and finally enable IP forwarding on the bridge instance. This step ensures that traffic can be forwarded from VCN-1 to VCN-2 and from VCN-2 to VCN-1 through the bridge instance.

Network

Each VCN has a management subnet and a private subnet.

                    VCN-1          VCN-2
Network Subnet      10.0.0.0/16    10.1.0.0/16
Management Subnet   10.0.0.0/24    10.1.0.0/24
Private Subnet      10.0.1.0/24    10.1.1.0/24

Instances

Both VCN-1 and VCN-2 have one private instance each in their respective private subnets. VCN-1 additionally has a public instance, called the bridge instance, which bridges the two VCNs. Note that the bridge instance can be created in either of the VCNs.

                    VCN-1             VCN-2
Management Subnet   BridgeInstance    -
Private Subnet      PrivateInstance1  PrivateInstance2

How Do I Implement This?

To deploy this setup, you have two options.

Deploy Using Terraform

We have created a Terraform template that does all the work for you. Terraform deploys the preceding setup, executing the required Linux commands automatically. At the end of the Terraform run, you will see login information for your instances. All you need to do is deploy the template, log in to the instances, and start ping traffic from one VCN to the other.
Manually Deploy Using the Oracle Cloud Infrastructure Console and Linux Commands

This section provides the steps for deploying the setup by using the Oracle Cloud Infrastructure Console and Linux commands.

Console Steps

1. Create two VCNs with the Create Virtual Cloud Network Only option and nonoverlapping CIDR blocks (for the subnets). For instructions, see To create a cloud network in the Networking documentation.
2. Each subnet in each VCN needs an associated route table and security list, so create four route tables and four security lists. For instructions, see To create a route table and To create a new security list.
3. For each VCN, create one public subnet and one private subnet, specifying a security list and route table for each. For instructions, see To create a subnet.
4. Create PrivateInstance1 in the private subnet (PrivateSubnet1) of VCN-1. Similarly, create PrivateInstance2 in the private subnet (PrivateSubnet2) of VCN-2.
5. Create one more instance, called BridgeInstance, and attach it to VCN-1 and MgmtSubnet1.
6. Open BridgeInstance, click Attached VNICs, and then click Create VNIC. Attach this secondary VNIC to VCN-2 and MgmtSubnet2.
7. Note the IP addresses of the primary and secondary VNICs of BridgeInstance.
8. Open VCN-1, open PrivateRouteTable-1, and add a route rule that targets BridgeInstance's primary VNIC IP address.
9. Open VCN-2, open PrivateRouteTable-2, and add a route rule that targets BridgeInstance's secondary VNIC IP address.
10. Open VCN-1, open MgmtSecurityList, and specify the ingress rules, including the additional rule required for cross communication with VCN-2 (10.1.0.0/16). Replicate similar rules in VCN-2.
11. In VCN-1, open PrivateSecurityList, and specify the ingress and egress rules. Replicate similar rules in VCN-2.

After you complete all of the Console-related configuration, log in to the Oracle Linux instance to perform the following steps.

Configuring the Bridge Instance

Log in to the bridge instance and bring up the secondary VNIC.

Note: You must perform the steps to bring up the secondary VNIC before proceeding.

Verify that the secondary VNIC (ens4) appears with the correct IP address.
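For example, a quick check on the bridge instance (the interface name can differ by image and shape):

# the secondary VNIC should carry the 10.1.0.x address assigned in the Console
ip addr show ens4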
On the bridge instance, run the following command to enable IP forwarding:

sysctl -w net.ipv4.ip_forward=1

(To make the setting persist across reboots, also add net.ipv4.ip_forward = 1 to /etc/sysctl.conf.)

Run the following firewall commands to enable port forwarding. The --permanent flag keeps the direct rules in place when firewalld is restarted:

firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i ens3 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i ens4 -j ACCEPT
/bin/systemctl restart firewalld

Note the virtualRouterIp for VCN-2's MgmtSubnet2 by checking the instance metadata:

[opc@bridgeinstance ~]$ curl http://169.254.169.254/opc/v1/vnics/
[ {
  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljssar7g5nfkw5wfa52df6ysz5zh7qgjnz4u5mute7bsjagjixdngya",
  "privateIp" : "10.0.0.2",
  "vlanTag" : 1681,
  "macAddr" : "00:00:17:01:E0:3B",
  "virtualRouterIp" : "10.0.0.1",
  "subnetCidrBlock" : "10.0.0.0/24"
}, {
  "vnicId" : "ocid1.vnic.oc1.phx.abyhqljsj6uqkigzjkfyauhalgeqgrytxfmhqipgrmddxgan4343tuurwk3q",
  "privateIp" : "10.1.0.2",
  "vlanTag" : 1683,
  "macAddr" : "00:00:17:01:2B:46",
  "virtualRouterIp" : "10.1.0.1",    <<<<<<<<
  "subnetCidrBlock" : "10.1.0.0/24"
} ]

Add an IP route rule for traffic to be routed to the virtual router IP of VCN-2's MgmtSubnet2:

ip route add <VCN-2-Network> dev <secondary-vnic> via <MgmtSubnet2_virtual_router_ip>

For example:

ip route add 10.1.0.0/16 dev ens4 via 10.1.0.1

Verification

Log in to PrivateInstance1 via BridgeInstance, and ping from PrivateInstance1 to PrivateInstance2. You have enabled cross-VCN communication using secondary VNICs on a bridge instance.

Extension

You can extend this solution to a hub-and-spoke model that connects up to 52 VCNs with each other: the bridge instance can have a maximum of 52 VNICs, and hence a maximum of 52 VCNs can be connected through it.

Next

In Part 2 of this series, we will explore how to provide high availability (HA) for the bridge instance, so that if the active bridge instance goes down, a backup bridge instance can take over.

Thank you for reading this post. Your feedback and recommendations for future posts are most welcome. I hope you enjoy using Oracle Cloud Infrastructure. Keep watching the Oracle Cloud Infrastructure space for updates as we add more exciting capabilities.

Prasanna Naik
Senior Product Manager, Oracle Cloud Infrastructure Networking


Oracle Cloud Infrastructure

Access Resources on the Public Internet Through an Oracle Cloud Infrastructure NAT Gateway

Many Oracle Cloud Infrastructure customers have compute instances in virtual cloud networks (VCNs) that, for privacy, security, or operational reasons, are connected to private subnets. To grant these resources access to the public internet for software updates, CRL checks, and so on, a customer's only option has been to create a NAT instance in a public subnet and route traffic through that instance by using its private IP address as a route target from within the private subnet. Although many have successfully used this approach, it does not scale easily and presents a myriad of administrative and operational challenges.

We are excited to announce the availability of NAT gateway, which addresses these challenges and provides Oracle Cloud Infrastructure customers with a simple and intuitive tool for their networking security needs. NAT gateways provide the following features:

Highly scalable and fully managed: Instances on private subnets can initiate large numbers of connections to the public internet. Connections initiated from the internet are blocked.

Secure: Traffic through NAT gateways can be disabled with the click of a button.

Dedicated IP addresses: Each NAT gateway is assigned a dedicated IP address that can be reliably added to security whitelists.

The rest of this post describes how to access the public internet from a private instance through a NAT gateway. Before NAT gateway, a private instance accessed the public internet through a (public) NAT instance. The VCN had one public subnet and one private subnet with their associated route tables, security lists, and DHCP options. Through a bastion host (not shown), you used SSH to connect to the private instance and access resources on the public internet.

Now, you can instead create a NAT gateway in the VCN; the newly created gateway appears in the list of NAT gateways for the VCN. Finally, you replace the route rule that pointed to the NAT instance with one that points to the NAT gateway. In just a few steps, you can give all the instances in the private subnet access to resources on the internet. As with the other Oracle Cloud Infrastructure gateways (Service, Internet, and so on), the NAT gateway is highly available and scales elastically to meet your bandwidth requirements. You can now delete the public NAT instance, which is no longer required.

We recommend NAT gateway as the preferred method for granting internet access to instances on private subnets. You can read more about NAT gateways in the Networking documentation, and you can watch our video demo for additional details.
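For those who prefer the CLI to the Console, here is a minimal sketch of the same flow; all OCIDs are placeholders, and the route-rule list is abbreviated to the single default route:

# create the NAT gateway in the VCN
oci network nat-gateway create \
  --compartment-id ocid1.compartment.oc1..example \
  --vcn-id ocid1.vcn.oc1..example

# point the private subnet's default route at the new gateway
oci network route-table update \
  --rt-id ocid1.routetable.oc1..example \
  --route-rules '[{"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK", "networkEntityId": "ocid1.natgateway.oc1..example"}]'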


Oracle Cloud Infrastructure

Database Migration to Oracle Cloud Infrastructure: Evaluation and Planning Checklist

This post is part of the "Database Migration to Oracle Cloud Infrastructure" blog series, which includes the posts related to database migration. Use these posts as building blocks for various migration approaches. For more information on Oracle Database and Exadata Cloud Services, review the details at Oracle Cloud Infrastructure - Database.

Use the following checklist to help you evaluate and plan for the migration of your databases to Oracle Cloud Infrastructure, based on the unique requirements of your source and target databases. Work with your database admins, network admins, and system admins as necessary to determine all of the required information for your migration.

Downtime: Determine from your business what the downtime service level agreements (SLAs) are and how much downtime, if any, the business can accommodate. You can also review Recovery Time Objective (RTO) and Recovery Point Objective (RPO) SLAs to see how much downtime is acceptable according to your disaster recovery (DR) and business continuity (BC) guidelines.

Database Size: Determine the data volume. Typically, the size of the migration depends on two factors: whether a physical or logical migration method is used, and whether all or part of the data will be migrated to the target database.

Network Bandwidth: Determine the available network bandwidth between the source and target databases. In addition to available bandwidth, network reliability is also important; depending on the data transfer method, a network interruption might require you to restart the data transfer job.

Cross-Platform Migration: Determine the endianness of the source and target platforms (a query sketch follows this checklist). Oracle Cloud Infrastructure databases are little-endian. If your source database is big-endian, you can either select the logical migration method, which is typically slower, or use Oracle Data Guard or RMAN cross-platform features for the migration.

Database Character Set: Determine the database character set for the source and target databases. For most migration methods, the target database character set must be a superset of the source database character set. Some methods might require exactly the same character set to avoid data loss.

Data Encryption: Determine whether the source database uses Transparent Data Encryption (TDE). TDE is mandatory for all Oracle Cloud Infrastructure databases. If TDE is not used at the source, enable it either at the source or at the target. Be sure to back up and restore the required TDE wallets from the source to the target.

Database Version, Edition, and Options: Determine the database version, edition, and options for the source and target databases. Based on the migration method, the target and source database version and edition must be compatible. For an Oracle Cloud Infrastructure 12c database target, the multitenant architecture is mandatory, so ensure that the selected migration method can accomplish the migration into the CDB/PDB, as needed.

Database Patches: Determine the patch level for the source and target databases. Ensure that the source and target are at the same or a compatible Patch Set Update (PSU) or Release Update (RU) level. Apply any required patches at the source to minimize any discrepancies during or after the migration. Also, as necessary, apply any one-off patches at the target.

DB Name: Determine the database name used at the source database.
For full database restore methods, it is mandatory to create the target database with the same database name as the source database. However, use the DB unique name of the target as created by the Oracle Cloud Infrastructure tooling.

DB Block Size: Determine the database block size used at the source database. For partial restore methods like transportable tablespaces, it might be necessary to adjust the cache size parameters at the target database.

DB Time Zone: Determine the database time zone used at the source database. It might be necessary to adjust the database time zone at the target database.

DB Users, Privileges, and Objects: Determine the database users, privileges, and objects, like DB links, from the source database that might also need to be created at the target database.

Sizing: Determine the source database sizing and consider future growth when sizing the target database. In addition to CPU and memory, ensure that the sizing meets your IOPS and network bandwidth requirements.

Target Database: To ensure that the target database has all the required metadata for OCI tooling to work, create the target database by using one of the supported methods: the OCI Console, the OCI CLI, or the Terraform OCI provider. This target database will be cleaned out and used as a shell for the migration, as needed.
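As referenced in the checklist, here is a minimal sketch for checking the source platform's endianness and character set from SQL*Plus; both views are standard, though the exact output varies by platform:

sqlplus -s / as sysdba <<'EOF'
-- endian format of the source platform (Oracle Cloud Infrastructure targets are little-endian)
SELECT d.platform_name, tp.endian_format
FROM   v$transportable_platform tp, v$database d
WHERE  tp.platform_name = d.platform_name;

-- database character set of the source
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
EOF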


Strategy

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Robert Ronan

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Robert Ronan of Oracle. Robert is the Principal Outbound Product Manager for the Small and Medium Business segment and Oracle Digital accounts for Oracle Cloud Infrastructure, where he has worked for the past year. He has over 19 years of industry experience and previously worked for Rackspace, Cisco, and law and financial firms. His background ranges from sales engineering to solution architecture to various leadership roles.

Greg: Robert, how did you prepare for the certification?

Robert: I'd like to say it took me months to prepare, but I really spent about two weeks focusing on the exam. Being an Oracle employee who could leverage my own tenancy was a tremendous perk. For non-Oracle employees, I strongly recommend signing up for a free account to get your own tenancy to practice with while preparing for the exam. I also used the study guide and reviewed the practice test to gauge my readiness. I took the practice test to identify my deficiencies; if I got an item incorrect, I read up on the topic and retook the practice test. The available documentation was quite good.

Greg: How was your experience taking the test through Pearson VUE?

Robert: I signed up for the Pearson VUE Online Proctor option, which gave me the option to take the test in a location of my choice. I decided to take the test at my workplace, but I encountered issues with the test delivery because of my corporate firewall. I was able to find a workaround, but it was something unexpected that added stress to an already stressful situation. Just be aware that a corporate firewall can block access for an online proctored exam delivery! Also, while the convenience of an online proctor is great, do not think that it makes for a more comfortable environment. When taking the test, there is a thumbnail video of you in one of the corners. This was distracting to me for the first 30 minutes, until I was able to completely ignore it. So there's no doubt that the proctor is monitoring you to ensure the legitimacy of your test delivery!

Greg: How is life after getting certified?

Robert: Earning the Oracle Cloud Infrastructure certification is an objective for my immediate team, so I was ecstatic when I passed. I felt that this was a very challenging exam, so passing it took a major weight off my shoulders. You need to understand the concepts well to answer the questions correctly; you need to be able to identify the incorrect answers, because you will not be able to guess the correct answer. Studying for the test and earning the certification has made a lot of the knowledge more top-of-mind for me. Now I feel more comfortable talking about the solutions that we're engaging with customers on every day. This has significantly increased my knowledge of Oracle Cloud Infrastructure.

Greg: Any other advice you'd like to share?

Robert: I'm going to share the same advice that my manager gave to me: allocate dedicated study time to get this done. I took a solid 16 hours per week to focus on preparing for the exam.

Please subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam.
Greg Hyman Principal Program Manager, Oracle Cloud Infrastructure Certification greg.hyman@oracle.com Twitter: @GregoryHyman LinkedIn: GregoryRHyman Associated links: Oracle Cloud Infrastructure 2018 Architect Associate exam Oracle Cloud Infrastructure 2018 Architect Associate study guide Oracle Cloud Infrastructure 2018 Architect Associate practice test Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series are listed under Greg’s blog page.


Product News

Innovation in Edge Services: The Oracle Cloud Infrastructure Edge Network

Moving to the cloud has become an imperative for enterprise organizations, but this migration comes with significant concerns. Oracle knows that factors ranging from resiliency to security are critical to a successful cloud framework, and it designed Oracle Cloud Infrastructure, which works in parallel with edge services, with this specifically in mind. The edge is where innovation happens. The Oracle Cloud Infrastructure Edge Compute Network, for example, consists of 40 high-capacity edge compute locations that combine the raw iron performance and governance control of on-premises hardware with the agility and cost effectiveness of the cloud. The Oracle Dyn Web Application Security platform; the cloud-based, high-capacity DDoS protection platform; DNS; and data intelligence offerings are some of the services that already use the Edge Compute Network. Why an Edge Compute Network? The Oracle Cloud Infrastructure Edge Compute Network is a globally distributed compute capacity network that is used to store and process data before it is pushed to a central cloud repository. This method reduces latency and supports a strong security posture because it keeps the data close to the source without sending it over the central corporate network. Many applications and services are designed to work at the edge, leveraging compute from the devices on which they are accessed, as well as workload on the nearest cloud server. Today, that needs to be just about anywhere to enable business-critical functions. As the capacity of core networks is outstripped by computational intensity, organizations will become more reliant on edge services, servers, and the devices themselves to process business logic. Additionally, the number of connected endpoints is growing exponentially, and bandwidth is increasing to accommodate complex data flows. What Are Edge Services? Oracle delivers edge services securely through the Oracle Cloud Infrastructure Edge Compute Network. Edge services enable analysis and data gathering at the source, rather than routing that data over the centralized nodes of an organization's network. In the context of network resiliency and security, this means that malicious data that could potentially impact service is processed and mitigated at the edge before it reaches business-critical infrastructure. Oracle's edge services include solutions for web application security, DDoS protection, and DNS. Key features include:
- Consistent performance: Responds to DNS queries in less than 30 milliseconds worldwide and propagates DNS records in under a minute for dependable performance of applications and digital assets.
- Vast internet data and experience: Intelligently routes user traffic across the internet control plane by geolocation with low latency, leveraging over 600 collection points that deliver over 240 billion data points every day.
- Proven reliability and security: A global anycast network of multiple data centers, strategically located across multiple continents, that leverages a mix of redundant internet transit providers for ultimate resiliency and protection against DDoS attacks.
- Managed security: A web application security suite managed 24x7x365 by a team of global cybersecurity experts, featuring web application firewall (WAF), bot management, malware protection, and API security solutions.
We believe that the Internet of Things will only continue to grow, and more and more applications and services will work at the edge.
This makes reliable, consistently high-performing, and innovative edge security services ever more imperative.


Partners

Guidance for Setting Up a Cloud Security Operations Center (cSOC)

Establishing a security operations center (SOC) is one of the primary requirements for managing cybersecurity-related risks in the current information age. This post provides general DIY guidance for building a SOC primarily for Oracle Cloud, including both platform-as-a-service and infrastructure-as-a-service offerings. This general guidance is also applicable to hybrid cloud environments. As more businesses rely on interconnected technologies, like IoT sensors and cloud-based platforms, it's becoming unmanageable to respond to cyberthreats and the resulting incidents without proper visibility across the cyberthreat landscape. So it's imperative for enterprise information security organizations to build (in-house or outsourced) a cloud-centric SOC (cSOC) to address the following broad types of cyberthreats, based on HarvardX's categorization:
- Unintentional internal threats: Regular usage of systems by internal employees may reveal previously unknown bugs or exploits. Internal security teams can leverage these discoveries to remediate the issues.
- Unintentional external threats: Regular usage of systems by external parties may reveal previously unknown bugs or exploits. This can result in loss of reputation and loss of revenue.
- Malicious internal threats: Internal actors with privileged access, such as employees, contractors, or vendors, intentionally target internal systems for information theft, financial gain, or pure malevolence.
- Malicious external threats: External actors, such as individuals, cybercriminals, or hostile nation states, intentionally target corporations for information theft, financial gain, and wide-ranging disruption.
Outsourced or In-House First, let's tackle the issue of building a cSOC. The question is whether to outsource the SOC functionality to a managed security service provider (MSSP) or to keep the functionality in-house. From experience and some research, the following are the disadvantages of outsourcing:
- Not aligned with the enterprise's business vertical
- Limited services and capabilities
- Systems optimized for scaling across a large number of customers
- Lacks intimate knowledge of each customer because of the large customer base
- Lack of dedicated resources
- Focused on maximizing profit
- Provides standard security services, not customized ones
- Lack of specialization
- Short lifespan of outsourced threat intelligence
- Minimal opportunities for correlation unless all data is sent to the MSSP
The following are the advantages of employing an MSSP:
- Potential cost savings (building a cSOC is expensive)
- Fully trained and qualified staff
- Experience in handling stressful situations
- Experience in addressing all types of security incidents effectively and efficiently
- Keeps the organization current on emerging threats (threat intelligence)
- Wide industry experience
- Helps the organization focus on its core business
- 24x7x365 availability
- Provides an SLA
- Maintains and updates runbooks
- Automates and maintains incident response playbooks
cSOC Components To build a cSOC or to take the service from an MSSP, ensure that the following components are in place:
- Command center
- Environment security monitoring
- Incident response
- Threat intelligence
- Forensics
- Environment assessment and verifiability
The rest of this post briefly describes these components.
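Before diving in, here is a small, concrete taste of the environment security monitoring component: on Oracle Cloud Infrastructure, one raw feed for the SIEM listed below is the Audit service, which can be queried from the OCI CLI. A minimal sketch, with a placeholder compartment OCID and an arbitrary one-day time window:
# List audit events for a compartment over a one-day window (placeholders; adjust to your tenancy)
oci audit event list \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --start-time 2018-08-01T00:00:00Z \
  --end-time 2018-08-02T00:00:00Z
The JSON output can then be shipped into whichever SIEM or IOC-comparison tool your cSOC standardizes on.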
Command Center The following diagram depicts the relationships between the command center and other internal or external agencies or services. Oracle Management Cloud (OMC), with its custom dashboard capabilities, makes perfect sense as the cSOC command center tooling for Oracle Cloud IaaS. The following additional components of OMC are targeted toward SOC use:
- Configuration and Compliance
- Security Monitoring and Analytics
Environment Security Monitoring Environment security monitoring should have the following components:
- Oracle Cloud Access Security Broker services
- Network logs (Oracle Cloud Infrastructure VCN flow logs)
- Host logs
- Application logs
- Network IDS
- Host IDS
- Malware detection feeds
- Security information and event management (SIEM)
- IOC (indicators of compromise) comparison tool
- Honeypots (optional)
The following diagram depicts the relationships among these components. Incident Response Incident response (IR) is the central part of the cSOC. The IR team interacts with the business units, steering committees, and management while responding to a security incident by eradicating issues so that the affected system can return to service. Threat Intelligence The threat intelligence component comprises the following functions and processes:
- Internal information systems
- Threat actors
- Open-source resources (Oracle's approach)
- Attribution information
Forensics Cloud systems forensics can be carried out internally by the cSOC or can be further outsourced. For the purpose of this post, I am showing the relationship between the components within the cSOC. The main forensics processes are as follows:
- Host forensics
- Reverse engineering
- Network forensics
- Communication with management and the command center
- Maintaining the chain of custody
Environment Assessment and Verifiability This component used to be the most ignored aspect of the traditional SOC. With the advent of cSOCs, this component is the pattern that connects a cSOC to the agile DevSecOps practice. The subcomponents, such as penetration testing and vulnerability assessment, can be integrated as an on-demand service with the organization's CI/CD pipeline. I hope that this short, visuals-heavy post helps you establish your cloud security operations center. For more helpful information, see the following resources:
- MGT517: Managing Security Operations: Detection, Response and Intelligence
- PCI Compliance on Oracle Cloud Infrastructure blog post
- Oracle Cloud Infrastructure Security white paper
Disclaimer: All diagrams / visuals were created using PowerPoint and no shapes were harmed (sic).


Developer Tools

CI/CD on Steroids: Announcing Container Engine for Kubernetes as a Jenkins X Provider

Kubernetes has become the de facto tool for managing distributed containerized applications. If you build a cloud app today and want it to be truly multicloud and portable, Kubernetes is your choice! Although Kubernetes is awesome, there are a few challenges associated with it when you want to build something on top of it in a continuous delivery (CD) fashion and make developers more productive. With traditional Jenkins (Jenkins 2.0), you can implement a continuous delivery system with Kubernetes, but it is cumbersome. The process involves the following steps:
- Set up the Jenkins plugin for Kubernetes
- Set up a Kubernetes cluster and environment
- Set up pipelines
- Deploy containers to Kubernetes
- Generate YAML or Helm charts
- Adopt continuous delivery and promotion
Jenkins X Jenkins X leverages Jenkins' dominant CI/CD expertise and customer population in the industry and provides a CI/CD solution that fits naturally with the Kubernetes environment. With Jenkins X, you do not need to understand or perform comprehensive Kubernetes operations, and you can significantly improve the shipping rate of a product in the Kubernetes environment. You no longer have to deal with Docker files, tweak Helm charts, or write a jenkinsfile. You just create your apps, and all the CI/CD happens automatically. Jenkins X facilitates the following actions:
- Automates the installation and upgrade of tools required for Kubernetes deployment, all configured and optimized for Kubernetes, including the following: Helm (package manager for Kubernetes), Draft (build packs used to bootstrap applications so they build and run on Kubernetes), Skaffold (enables rapid development by abstracting the building and pushing of images), Kaniko, Jenkins, Ksync, Monocular, and Nexus
- Automates CI/CD for your applications on Kubernetes: Docker images, Helm charts, and pipelines
- Uses GitOps to manage promotion between environments, from test to staging to production
As of July 2018, Oracle Cloud Infrastructure is an official Jenkins X cloud, with native support for running Jenkins X within Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE). Special thanks to Chenghao Shi, Hui Liu, and their team for contributing to this project. Run Jenkins X in OKE There are two ways to get Jenkins X running on OKE. This post demonstrates how to use the Jenkins X command line tool, jx, to perform these operations:
- Create a new OKE cluster with Jenkins X installed (by using the jx create command)
- Install Jenkins X on an existing OKE cluster (by using the jx install command)
Create a New OKE Cluster and Install Jenkins X The jx tool communicates with OKE by using the Oracle Cloud Infrastructure CLI. If the CLI is not installed before you create the cluster, jx identifies the CLI as a missing dependency and installs it for you at run time. The following steps create a new OKE cluster and install Jenkins X on it. Note: Before proceeding, ensure that you have the necessary subnets, security list rules, and IAM policies configured for deploying an OKE cluster; the current release of jx doesn't create these for you. For more information about how to create these resources, see the Container Engine for Kubernetes documentation (a sketch of the IAM policy piece follows this paragraph). You can also use Terraform code to quickly provision these necessary resources. After these resources are in place, jx creates an ingress controller, PVCs, and so on before installing Kubernetes-related utilities and Jenkins X.
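Of the prerequisites in the note above, the tenancy-level IAM policy is a one-liner that can be created from the CLI. A minimal sketch, assuming your tenancy OCID is in the shell variable TENANCY_OCID; the policy name and description are illustrative, but the statement itself is the one the OKE documentation calls for:
# Create the tenancy-level policy that lets the OKE service manage resources
oci iam policy create \
  --compartment-id "$TENANCY_OCID" \
  --name oke-service-policy \
  --description "Allow the OKE service to manage tenancy resources" \
  --statements '["allow service OKE to manage all-resources in tenancy"]'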
Configure Git to perform development tasks:
sudo yum install git -y
Install kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Configure the oci config file to tell jx which Oracle Cloud Infrastructure resources it will work with. The CLI does not have to be installed at this point; it will be installed during run time.
mkdir ~/.oci
vi ~/.oci/config
vi ~/.oci/oci_api_key.pem
chmod 600 ~/.oci/config ~/.oci/oci_api_key.pem
Install the Helm client to talk to the Helm server tiller, which will be installed in the OKE cluster:
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -xvf helm-v2.9.1-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/
Install the jx command line tool:
curl -L https://github.com/jenkins-x/jx/releases/download/v1.3.52/jx-linux-amd64.tar.gz | tar xzv
sudo mv jx /usr/local/bin
Use jx to create a new OKE cluster with Jenkins X installed:
jx create cluster oke [flags]
You can specify numerous flags. Some of these flags are specific to Oracle Cloud Infrastructure, and the rest are generic jx flags. For example, you can create an OKE cluster by using the following flags, or you can run just the jx create cluster oke command and enter the necessary OCIDs at run time:
jx create cluster oke --name shoulderroad --compartment-id ocid1.tenancy.oc1..l3d6xxx4gziexn5sxnldyhja --vcn-id ocid1.vcn.oc1.phx.ofu4bbmfhj5ijidyde3gpdocybghidrmbq --kubernetes-version v1.10.3 --wait-for-state SUCCEEDED --serviceLbSubnetIds file:///tmp/oke_cluster_config.json --tiller-enabled false
The command creates a new Kubernetes cluster on OKE, installs required local dependencies, and provisions the Jenkins X platform. Add your $HOME/bin to $PATH; otherwise, jx will have an issue invoking the CLI command. If you have already installed the CLI, ensure that it's in $PATH. After the command finishes, you get a development environment (including Jenkins, Nexus, a Docker registry, ChartMuseum, and Monocular) and other environments, like staging and production. Typically, we use Helm charts in these git repositories to define which charts are to be installed, which versions of them, and any environment-specific configuration and additional resources. Check the environment by entering the following command:
kubectl get svc --all-namespaces
The output lists the services running in each namespace, including the Jenkins X components. Install Jenkins X on an Existing OKE Cluster If you have an existing OKE cluster, you can use jx to deploy Jenkins X on it. To begin, run the following commands to prepare your environment:
chmod +x ~/get-kubeconfig.sh
export ENDPOINT=containerengine.us-phoenix-1.oraclecloud.com
~/get-kubeconfig.sh ocid1.cluster.oc1.phx.rdsztcmnstcnjsgy4taytcmctdqyrzheyw > ~/kubeconfig
export KUBECONFIG=~/kubeconfig
git config --global user.email "user.name@gmail.com"
git config --global user.name "userName"
Configure the oci config file to tell jx which Oracle Cloud Infrastructure resources it will work with. The CLI does not have to be installed at this point; it will be installed during run time.
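For reference when editing ~/.oci/config in this step: the file uses the standard OCI CLI INI format. A minimal sketch with placeholder values (replace the OCIDs, fingerprint, and region with your own):
# Write a minimal OCI CLI config (values are placeholders)
cat > ~/.oci/config <<'EOF'
[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=<your_api_key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=us-phoenix-1
EOF
chmod 600 ~/.oci/config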
mkdir ~/.oci
vi ~/.oci/config
vi ~/.oci/oci_api_key.pem
chmod 600 ~/.oci/config ~/.oci/oci_api_key.pem
Export your kubeconfig file:
export KUBECONFIG=~/kubeconfig
Install Git if it's not already installed:
sudo yum install git -y
Install kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Install the jx command line tool. If you are installing on an OS other than Linux, see these instructions.
curl -L https://github.com/jenkins-x/jx/releases/download/v1.3.52/jx-linux-amd64.tar.gz | tar xzv
sudo mv jx /usr/local/bin
Install the Helm client to talk to the Helm server tiller if it is installed in the OKE cluster:
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -xvf helm-v2.9.1-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
helm init
Install Jenkins X by using jx:
jx install --provider=oke
The installation process takes a couple of minutes. During that time, you can enter necessary parameters such as your GitHub username. When the installation is completed, two pipelines (staging and production) should be generated on the Jenkins dashboard, and code will be checked in to GitHub. Other Jenkins X Options So far we've looked at the jx create and jx install commands to get Jenkins X running on OKE. A few other jx commands are useful to help you quickly bootstrap a project in any language (jx comes with rich polyglot integration) and build a pipeline. If you have an existing project, running the following command makes jx quickly detect the type of project that you are working on (for example, Spring Boot or JHipster):
jx import <app_name>
It also performs the following actions:
- Creates a build pipeline
- Sets up the project on a remote git service like GitHub (using your GitHub credentials) and creates the necessary web hooks
- Installs Kubernetes-specific tools like Helm charts and Monocular
- Ultimately runs the build for you and deploys it in a "staging" environment
At this point, if everything looks correct, you can promote this build to production by running the following command:
jx promote --env production --version 1.0.1 <app_name>
If you don't have an existing project, the easiest way to bootstrap a project with built-in CI/CD (using jx) is to use jx create (again!). In this case, if you want to create a Spring Boot application, for example, import the generated code into a git repo, and use Jenkins for CI/CD, you run the following command:
jx create spring [flags]
This command performs the following actions:
- Creates a fresh Spring Boot application with defaults
- Checks the code in to your git repo
- Pushes the code to a remote git repo like GitHub
- Adds defaults for the dockerfile, jenkinsfile, and Helm charts
- Runs a build and deploys it to a "staging" environment
More Information
- Overview of Container Engine for Kubernetes (OKE)
- Installing jx on other OSs
- Creating a jx cluster on OKE


Product News

Improving the security of your containers in OCIR with Twistlock

Introduction to OCIR Last May, Oracle introduced Oracle Cloud Infrastructure Registry (OCIR) on Oracle Cloud Infrastructure for container-native developers to store Docker images. Usage of this new cloud service has grown rapidly. It is typically used alongside Container Engine for Kubernetes (OKE), a managed Kubernetes service on Oracle Cloud Infrastructure. Customers have asked how to scan container images that are stored in OCIR and add more security and control to CI/CD pipelines. To answer these questions, we are highlighting a solution that focuses on vulnerability and compliance - Twistlock. Connecting a solution like Twistlock is simple. Supply the Twistlock setup screen with a username (in the form of tenancy_name/user_name), an Oracle Cloud Infrastructure-generated auth token, and the target registry, such as phx.ocir.io. You can create service accounts to fulfill this need, with policies limited to read-only access of the registry. How Twistlock Helps Twistlock is a cloud-native security platform. Started in 2015 as the first solution for container security, Twistlock's platform now uses the benefits of cloud-native technology to make application security better - more automated, more efficient, and more effective. A key way this happens is by 'shifting left' and ensuring that security isn't just a runtime activity. Twistlock's native integration with OCIR allows Twistlock to identify vulnerabilities and compliance issues for all images stored in the registry, and to block the use of images that contain violations. Preventing risky container images from being deployed reduces your runtime risk and helps development teams correct issues faster. Twistlock easily integrates with OCIR to provide an overview of risks in your registry. But knowing about a vulnerability isn't enough for container images. Containers pose three distinct challenges to vulnerability management:
1. Containers can have hundreds of Common Vulnerabilities and Exposures (CVEs) present, and traditional scanning tools often report many false positives. This makes it hard to know what's a real risk and what's not.
2. After you've weeded out the false positives, numerous CVEs remain. Knowing which fix to prioritize isn't straightforward, because you often don't know how the container image is deployed.
3. Even after you know which CVEs to tackle first, tracking down which layer of the container image a CVE was introduced in is no easy task. It requires manual effort or, in larger organizations, coordination across different development teams.
To tackle these problems, Twistlock does three things:
1. Twistlock uses over 30 upstream sources for CVE information. It then parses, correlates, and consolidates the data into the Twistlock Intelligence Stream. By comparing multiple sources and going directly to vendors, Twistlock provides a lower false-positive rate than traditional vulnerability management tools.
2. Twistlock generates a risk score for every CVE detected that is specific to your deployment and environment. This lets you prioritize what to fix in the registry, based on the risk that it brings to your production environment.
3. Twistlock provides a per-layer analysis of every CVE detected, showing the exact layer of the container image where the CVE was introduced. This makes fixing vulnerabilities quicker - no more hunting down the layer where the CVE originated.
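Returning to the registry credentials mentioned above: the same tenancy-qualified username and auth token can be sanity-checked with a plain Docker login before you enter them in the Twistlock setup screen. A quick sketch, assuming the Phoenix registry endpoint and hypothetical tenancy and user names:
# Verify OCIR credentials with a plain Docker login (names are hypothetical)
docker login phx.ocir.io -u 'mytenancy/scan-user'
# When prompted for a password, paste the auth token generated in the Oracle Cloud Infrastructure Console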
Twistlock factors in specifics from your environment to create a tailored risk score for each CVE. Twistlock's per-layer analysis makes it easy to pinpoint where CVEs are introduced. To learn how the Twistlock platform provides zero-touch active threat protection, layer 3 micro-segmentation along with cloud-native layer 7 firewalls, and precise vulnerability management, visit Twistlock.com/platform.


Developer Tools

Oracle Cloud Infrastructure Terraform Provider Now Supported by HashiCorp

We are pleased to announce the immediate availability of the HashiCorp Terraform Provider for Oracle Cloud Infrastructure, which is available as an official provider in Terraform. We are excited to partner with HashiCorp and support our customers in their infrastructure-as-code journey. Over the last few months, we have invested heavily in Terraform and now support Terraform provisioning across all Oracle Cloud Infrastructure resources. The provider is compatible with Terraform 0.10.1 and later. The following are some of the main resources supported by the Terraform provider:
- Block Volumes
- Compute
- Container Engine
- Database
- File Storage
- Identity and Access Management (IAM)
- Load Balancing
- Networking
- Object Storage
A detailed list of supported resources and more information about how to get started are located on the HashiCorp website. Customers who are new to Terraform and the Oracle Cloud Infrastructure provider must run "terraform init" with their configuration, which downloads the provider automatically. Customers who are using an earlier version and want to upgrade to the latest version must remove the manually installed provider or specify version 3.0.0 in their configuration files, and then run "terraform init" (a sketch of this follows at the end of this post). For more information, see the upgrade guide. For an end-to-end example of how to create a compute instance on Oracle Cloud Infrastructure, see this example. This release will be followed by Terraform modules for Oracle Cloud Infrastructure published in the official Terraform Module Registry, which will make it easier for customers to find and deploy common Oracle Cloud Infrastructure configurations. For more details about this release, see the following resources:
- Oracle Cloud Infrastructure Terraform Provider documentation: https://www.terraform.io/docs/providers/oci/index.html
- GitHub repository: https://github.com/terraform-providers/terraform-provider-oci
- Terraform Oracle Cloud Infrastructure provider v3.0.0 change log: https://github.com/terraform-providers/terraform-provider-oci/blob/v3.0.0/CHANGELOG.md
- Terraform Oracle Cloud Infrastructure provider v3.1.0 change log: https://github.com/terraform-providers/terraform-provider-oci/blob/v3.1.0/CHANGELOG.md
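As promised above, here is a minimal sketch of the upgrade path for a previously hand-installed provider, assuming a Linux host and the default third-party plugin directory that Terraform used at the time (both assumptions; adjust the path for your setup):
# Remove a manually installed provider binary so Terraform can fetch the official one
rm -f ~/.terraform.d/plugins/terraform-provider-oci*
# Re-run init in your configuration directory; the official provider downloads automatically
terraform init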


Customer Stories

Thymos Intelligence Selects Oracle Cloud Infrastructure as HPC Cloud Service Provider

Thymos Intelligence Providing an Environment in the Cloud to Run HPC Applications Tokyo, Japan - 2018/07/26 Oracle Corporation Japan announced today that Thymos Intelligence Corporation has selected Oracle Cloud Infrastructure as the cloud infrastructure for its high-performance cloud computing (HPC) service, iHAB CLUSTER. Thymos Intelligence advocates the concept of iHAB as the future of cloud computing, allowing users to consume computing resources without needing to know whether they are located on-premises or in the cloud. Under iHAB, Thymos provides three services: iHAB CLUSTER, iHAB Storage, and iHAB DC. iHAB CLUSTER provides the computing resources required for computer-aided engineering (CAE) and deep learning. Thymos Intelligence needed a public cloud that could meet iHAB CLUSTER customers' requirements: sudden increases in computing resource demand and a stable, high-performance environment for running CAE and AI workloads. To select that public cloud, Thymos Intelligence ran tests using a CAE application that is used for actual workloads, and as a result selected Oracle Cloud Infrastructure for its excellent performance. The selection points are as follows: Excellent performance: Bare metal instances provide higher computing performance than virtual machines, and high storage IOPS performance from NVM Express local and remote block storage makes it possible to run HPC workloads at high performance. Also, the latest NVIDIA Tesla V100 GPU brings excellent performance to deep learning and AI workloads. Fast and stable network: A low-latency, high-bandwidth (25 Gbps x 2) nonblocking network enables high-performance, stable internode and storage access. High price-performance: Bare metal instances can achieve higher performance than comparably shaped virtual machines. Also, the low-latency, high-bandwidth network can be used without additional fees. As a result, Oracle Cloud Infrastructure provides higher performance at a lower cost. In addition, outbound data transfer is free of charge up to 10 TB, so even when a large data download is needed, the service can still be provided at a lower cost. Naohiro Saso, Sales & Marketing Manager at Thymos Intelligence Corporation, commented: The high-performance cloud computing service iHAB CLUSTER provides the latest computing resources on demand, mainly to analytics workloads in the manufacturing industry. iHAB CLUSTER is configured specially for each customer to meet their environment and requirements. It is provided on the high-performance cluster C540, which has the latest CPU, and the cluster is located in a data center in the Tokyo metropolitan area. Customers can use the computing resources as if they were their own on-premises resources. To expand its resources, iHAB CLUSTER has selected Oracle Cloud Infrastructure as a platform in the iHAB CLUSTER service lineup because of Oracle Cloud Infrastructure's rapid adoption of new technology and its cost performance. We expect that the continuous and further development of Oracle Cloud Infrastructure for HPC will lead to the expansion of the iHAB CLUSTER service. Reference Information Thymos Intelligence Co., Ltd.: iHAB CLUSTER Oracle Cloud Infrastructure About Oracle Japan Oracle Corporation Japan is the Japanese subsidiary of Oracle Corporation.
Under the slogan "beyond your cloud> commit;", Oracle Corporation Japan provides cloud services that maximize the value of information through a data-driven approach, built on a broad and deeply integrated set of cloud applications and cloud platform services, and develops a range of services to support their use. Listed on the First Section of the Tokyo Stock Exchange in 2000 (Securities Code: 4716). URL: www.oracle.com/en About Oracle In addition to a wide range of SaaS applications covering ERP, HCM, and Customer Experience (CX), Oracle Cloud offers Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), including the industry's best database, from data centers across the Americas, Europe, and Asia. For more information about Oracle (NYSE: ORCL), please visit www.oracle.com. * Oracle and Java are registered trademarks of Oracle Corporation and/or its subsidiaries and affiliates in the United States and other countries. Company names, product names, and other names in the text may be trademarks or registered trademarks of their respective companies. This document is provided for information purposes only, and its contents may not be incorporated into any contract.


Interconnecting Clouds with Oracle Cloud Infrastructure

A multicloud architecture uses more than one cloud service provider. Companies have more than one cloud provider for many reasons: to provide resiliency, to plan for disaster recovery, to increase performance, and to save costs. When companies want to migrate cloud resources from one cloud provider to another, cloud-to-cloud access and networking is required. Oracle Cloud Infrastructure provides gateway options, such as the internet gateway (IGW) and the dynamic routing gateway (DRG), for connecting an Oracle Cloud Infrastructure virtual cloud network (VCN) with the internet, on-premises data centers, or other cloud providers. This post describes the connectivity service options that are available to help you plan your network connectivity to the Oracle Cloud in general, and it discusses connectivity options between cloud providers. Connectivity Option Overview All major cloud service providers (CSPs) offer three distinct network connectivity service options:
- Public internet
- IPSec VPN
- Dedicated connections (Oracle's service is called Oracle Cloud Infrastructure FastConnect)
Depending on the workloads and the amount of data that must be transferred, one, two, or all three network connectivity service options are required. The options compare as follows:
- Public internet: up to 10,000 Mb/s; variable latency, jitter, and cost; not secure
- IPSec VPN: up to 250 Mb/s; variable latency, jitter, and cost; secure
- FastConnect: up to 100,000 Mb/s; predictable latency, jitter, and cost; secure
Public internet provides accessibility from any internet-connected device. IPSec VPN is a secure, encrypted network that provides access by extending your network into the cloud. FastConnect provides dedicated connectivity and offers an alternative to internet connectivity. Because of the exclusive nature of this service, it is more reliable and offers low latency, dedicated bandwidth, and secure access. FastConnect offers the following connectivity models:
- Connectivity via an Oracle network provider or exchange partner
- Connectivity via direct peering within the data center
- Connectivity via dedicated circuits from a third-party network
Connectivity Option Details The following are optimal connectivity options. To compare the options based on speed, cost, and time, see the next section, "Choosing Your Connectivity Option." Option 1: Connecting via an IPSec VPN IPSec VPN provides added security by encrypting data traffic. The achievable bandwidth over a VPN is limited to 250 Mbps. Therefore, multiple VPN tunnels might be required, depending on the total amount of data to transfer and the required transfer rate. Step-by-step instructions for creating a secure connection between Oracle Cloud Infrastructure and other cloud providers are available in Secure Connection between Oracle and Other Cloud Providers. Option 2: Connecting via a Cloud Exchange Exchange providers can provide connectivity to a large ecosystem of cloud providers over the same dedicated physical connection between on-premises and the exchange provider. Some available providers are Megaport, Equinix, and Digital Realty. To route between the clouds, you have the following options:
- Use the virtual router service from the exchange provider - for example, Megaport Cloud Router (MCR).
- Colocate a physical customer edge (CE) device with the exchange provider.
The pros and cons of using a virtual router service versus colocating a physical router with the exchange provider are as follows:
- Using a virtual router service. Pros: easy to deploy; provides bandwidth on demand; cost-effective to deploy and maintain; flexibility to make routing changes within the scope of support from the cloud exchange. Cons: no public IP communication.
- Using a dedicated physical router. Pros: flexibility in managing routing functions; the ability to deploy your choice of hardware. Cons: long deployment times; scaling limitations; hardware maintenance and the associated monetary costs.
Although the scope of this blog is to provide optimal connectivity options with a partner-agnostic approach, we are using the Megaport Cloud Router (MCR) option as an example because it's easy to deploy and provides a virtual router service. We are also using Amazon Web Services (AWS) for our example cloud provider connection, although Megaport supports connectivity to many cloud providers, including Azure and Google Cloud Platform. Setting up the connectivity involves the following steps:
1. Connect FastConnect with Megaport through the Oracle Cloud Infrastructure Console.
2. Connect AWS Direct Connect with Megaport through the AWS console.
3. Create the MCR: create a Virtual Cross Connect (VXC) connection from the MCR to FastConnect, and create a VXC connection from the MCR to the connecting cloud provider (for example, AWS Direct Connect).
After you set up FastConnect, the MCR, and the connection with the cloud provider (for example, AWS Direct Connect, Azure ExpressRoute, or Google Cloud Platform), you can access the resources by their private IP addresses, and the traffic is routed over the high-bandwidth, low-latency connection. Choosing Your Connectivity Option Use the following high-level information to help you choose your connectivity option. However, be aware that the best connectivity option varies for different use cases. Information is given for AWS Direct Connect as an example. Speed FastConnect offers 1G and 10G port speeds. Direct Connect offers port speeds of 50M, 100M, 200M, 300M, 400M, 500M, 1G, and 10G. IPSec VPN speeds are limited to under 500 Mb/s in most cases. Cost Oracle FastConnect charges a flat port-hour fee, and there are no charges for data transfer. For more information, see Oracle FastConnect Pricing. The Oracle IPSec VPN service does not charge for inbound data transfer; outbound data transfer is free up to 10 TB, and there is a small fee after the 10-TB limit is exceeded. For more information, see Oracle IPSec VPN Pricing. Amazon pricing has a port fee and a data transfer charge: inbound data is not metered, but outbound data is metered and charged. For more information, see Amazon Direct Connect Pricing. Megaport pricing is based on the rate limit that you choose when you create the MCR. The options available are 100 Mbps, 500 Mbps, and 1, 2, 3, 4, and 5 Gbps. Charging rates (per monthly values) are displayed at the time of deployment, based on where you are deploying the MCR and the regions that your connection spans. Time Data transfer times depend on the speed choices made at each hop. Comparing dedicated connectivity and IPSec VPN, dedicated connectivity provides a deterministic timeframe because it uses a private medium and is more reliable and consistent.
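Transfer time scales linearly with data size and inversely with link rate, which is all the arithmetic behind the table that follows. A back-of-the-envelope sketch, assuming decimal units (1 TB = 10^12 bytes) and full sustained link utilization:
# Rough transfer time for a given data size (TB) at a given link rate (Gb/s)
awk -v tb=10 -v gbps=1 'BEGIN {
  s = tb * 8 * 1e12 / (gbps * 1e9)   # seconds = total bits / bits per second
  d = int(s / 86400); s = s % 86400
  printf "%dd %dh %dm %ds\n", d, s/3600, (s%3600)/60, s%60
}'
# Prints approximately 0d 22h 13m 20s for 10 TB at 1 Gb/s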
The following table shows hypothetical data transfer times from AWS to Oracle Cloud Infrastructure, based on the available bandwidth:
- At 1 Gb/s: 10 TB takes 22h13m12s; 100 TB takes 9d6h13m12s; 1,000 TB takes 92d14h13m12s; 10,000 TB takes 925d22h13m12s.
- At 10 Gb/s: 10 TB takes 2h13m12s; 100 TB takes 22h13m12s; 1,000 TB takes 9d6h13m12s; 10,000 TB takes 92d14h13m12s.
- At 100 Gb/s: 10 TB takes 13m12s; 100 TB takes 2h13m12s; 1,000 TB takes 22h13m12s; 10,000 TB takes 9d6h13m12s.
Summary This post discusses the intercloud connectivity options that are available in general and how multicloud access can be implemented with Oracle Cloud Infrastructure. It provides high-level indicators that can help you define your connectivity path, and it compares the available connectivity options to help you choose the optimum connectivity for your use case. For more information and a detailed step-by-step guide for connectivity, see the Migrating Oracle Databases from Amazon Web Services to Oracle Cloud Infrastructure Database white paper.


Oracle Cloud Infrastructure

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Jean Rodrigues

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Jean Rodrigues of Oracle. Jean is a Principal IT Consultant working in Oracle's Managed Cloud Services group, a global team that implements, runs, and maintains services for customers who have their workloads fully managed by Oracle. His role includes providing technical leadership and architecting customers' Oracle Cloud Infrastructure and Cloud at Customer workloads. Greg: Jean, how did you prepare for the certification? Jean: It was an exciting journey. I've been working in cloud for a while. I have followed the development of Oracle Cloud Infrastructure because I truly believe it is a great offering from Oracle that will benefit many enterprise customers. When the Oracle Cloud Infrastructure Architect Associate certification launched, I immediately started preparing by following the learning path published on the exam page. I took the training, went over the documentation, did hands-on exercises, and took the practice exam. Additionally, I attended Oracle Training, which greatly helped me prepare. The instructor explained the concepts very well and provided valuable real-world examples. I highly recommend that training. Greg: How long did it take you to prepare for the exam? Jean: I took around two months to prepare, spending around one hour a day reading and practicing in the environment. I booked the exam through Pearson VUE, showed up 15 minutes early, and everything went smoothly. Greg: How is life after getting certified? Jean: I received great feedback from management and coworkers on this accomplishment, and I was glad to see that some of them were inspired to prepare to take the exam as well. I've helped some of my colleagues with their preparation, and I am pretty sure that soon we will have more Oracle Cloud Infrastructure Architect Associates on the team. Preparing for this exam helped me acquire a huge amount of knowledge in advanced cloud topologies, mainly around networking, distributed computing, and cloud-native technologies. It's just awesome to see how microservices architectures, Docker, Kubernetes, and other cutting-edge patterns and technologies can help customers innovate. Today I feel confident helping customers design highly available, high-performance, and cost-effective architectures in Oracle Cloud Infrastructure. Greg: Any other advice you'd like to share? Jean: Stay focused and have fun. As I like to say, it is not about the credential you earn; it is about all the learning and expertise you acquire along the way. The hands-on practice using a trial account helps tremendously. == If you want to follow Jean's advice, go to the Oracle Cloud Infrastructure 2018 Architect Associate page to learn more about training materials and courses, and to register for your exam. Greg Hyman Principal Program Manager, Oracle Cloud Infrastructure Certification greg.hyman@oracle.com Twitter: @GregoryHyman LinkedIn: GregoryRHyman Associated links: Oracle Cloud Infrastructure 2018 Architect Associate exam Oracle Cloud Infrastructure 2018 Architect Associate study guide Oracle Cloud Infrastructure 2018 Architect Associate practice test Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series are listed under Greg's blog page.


Performance

Oracle Tests Better in Performance than Amazon Web Services

Oracle Cloud Infrastructure Compute bare metal instances are shown in independent testing by StorageReview to have a 2X-5X performance advantage, with comparable or dramatically lower pricing, compared to similar configurations from Amazon Web Services (AWS) across a wide range of workloads. The Testing: End-to-End Workload Performance In March 2018, StorageReview gave Oracle an Editor's Choice award for the performance and innovation that they saw when testing Oracle Cloud Infrastructure bare metal and virtual machine instances. At the time, Oracle Cloud Infrastructure was the only cloud that they had tested, but the results compared favorably to on-premises configurations running the same workloads. In August 2018, StorageReview tested AWS i3.metal bare metal instances across the same range of workloads they had run for Oracle previously, and the results were a strong validation of the Oracle Cloud Infrastructure performance proposition for customers. The testing done in the StorageReview lab covers more than storage. It is end-to-end workload performance testing, and it measures all the components that make up the user's experience on the tested platforms. The results provide an aggregate measurement of performance across compute, storage, and network components, and they are about as close as a lab can get to estimating the performance that a user is likely to see. The Results: Oracle Is Up to 5X Faster than AWS In the testing, Oracle demonstrated up to 5X the performance when running on remote block storage, and double the performance when running workloads on local SSD storage. Every workload tested, including Oracle Database, Microsoft SQL Server, 4K random read and random write, 64K sequential read and sequential write, and a variety of virtual desktop workloads, showed a similar performance advantage for Oracle Cloud Infrastructure in comparison with the results for AWS. Additionally, the latency recorded at peak performance was far lower on Oracle, and the percentage of recorded performance with latency below 1 ms, the common threshold for application usability, was far higher. Latency has a powerful impact on variability of performance. Customers running performance-sensitive systems of record need performance consistency, one of the key design points of Oracle Cloud Infrastructure, and these results show that Oracle can deliver a higher level of consistency than AWS in addition to a higher level of performance. Superior Oracle Database Workload Performance When we designed Oracle Cloud Infrastructure, we knew that a primary use case for our customers would be Oracle Database and the critical business applications that run on top of our database, so we knew we had to deliver exceptional results for these demanding workloads. The results showed we hit the mark. For performance-intensive database workloads, Oracle Cloud Infrastructure offers performance results that are head and shoulders above the capabilities offered by AWS. The results with a configuration that uses remote block storage, network-connected to bare metal instances on both clouds, show the most dramatic advantage for Oracle. Oracle provides 5X the performance, as seen here: How does Oracle get such a big advantage over AWS? With the remote block storage configuration, the answer comes down to the unique cloud architecture we've built to address the needs of enterprise users, and more specifically, how we built our network and our block storage service.
Oracle has a next-generation cloud network that connects our cloud components, including the links between servers and the block storage subsystems. The network has no resource oversubscription, so performance doesn't get compromised when the network gets busy. Further, we use a flat network topology, which reduces the number of hops and the associated latency between any two devices. Off-box network virtualization offloads the effort from the server, which reduces the performance tax that customers would otherwise see. Finally, storage traffic uses the full 25-Gbps pipe to the server, while AWS confines storage traffic to its EBS-optimized link, which is limited to 10 Gbps for its bare metal instance. The Oracle Block Volume service is designed for maximum performance with all-SSD capacity, and it delivers the highest IOPS per GB and IOPS per instance metrics of any block storage service in the cloud. One of the key things that you can see in the performance comparison for remote block storage is that a higher percentage of the IOPS Oracle delivers is usable, with latency below 1 ms, the common threshold for application latency tolerance. In this graph, the percentage of unusable IOPS of the peak recorded for Oracle is 10%, while Amazon records 25% of its peak IOPS at unusable latency levels, both represented by the hashed bars at the top of the peak IOPS levels. Higher levels of latency contribute to variability at high levels of performance. Part of Oracle's design point in the cloud is to cap performance before latency becomes a major issue, making the performance we deliver less variable and delivering better results for critical workloads that need consistency as much as they need high performance. With the local SSD configuration, the Oracle performance advantage for Oracle Database workloads is slimmer, but still significant. In this case, Oracle provides double the performance, but also gives customers more than 3X the local storage capacity, making this extremely high-performance configuration far more usable for workloads that need to scale capacity over time. The comparative performance for local storage configurations can be seen here: Fewer factors go into the performance difference when local NVMe SSD storage is used. Both vendors use a similar media type, and there's no network connection between server and storage to impact performance, because the storage sits on board the bare metal server. In this case, the Oracle advantage comes from the SSD drive itself, which has built-in cache that increases performance enough to drive the 2X performance benefit demonstrated. In addition to twice the performance recorded when running on SSD, Oracle offers 51 TB of SSD on our bare metal instances, while AWS offers just 15 TB, meaning that it's much more likely that customers can accommodate big-scale applications, as well as the capacity needed for data redundancy and ongoing growth of data, on local SSD with Oracle than with AWS. Superior Performance for SQL Server, Virtual Desktop, and General Workloads While we built Oracle Cloud Infrastructure to be optimized for Oracle Database, the enterprise-optimized infrastructure we built also has significant performance advantages over AWS for all the other workloads that StorageReview tested. Customers with demanding performance requirements for any category of workloads will clearly find a good home with Oracle Cloud.
Here are the results for running Microsoft SQL Server, with Oracle delivering double the performance on local SSD and more than 5X with remote block storage, along with far better usable IOPS: Here's what StorageReview measured for a 4K random write workload, with Oracle showing more than double the performance on local SSD and just under 5X on remote block storage: And finally, this is how it broke down for a virtual desktop infrastructure (VDI) workload, a test of initial login, with Oracle showing 2.6X the performance on local SSD and almost 5X with remote block storage: Price for Oracle Block Storage Is 19X Lower for Up to 5X Higher Performance The last thing is price. Although Oracle delivers a huge performance advantage, the cost is lower than AWS in most cases, as has been validated in other independent analyses. For block storage, StorageReview built the highest-performance configuration possible on AWS so that it would compare as favorably as possible. The problem with that, however, is that AWS makes customers pay for the amount of input/output performance that they consume, which drives up the cost dramatically. For this series of testing, Oracle delivers 4-5X more performance at 19X lower cost. In the configuration that StorageReview tested, the total cost for the AWS solution was $69,794 per month, driven largely by the cost of storage performance, which customers must forecast and pay for on Amazon's high-performance storage offering, Elastic Block Storage Provisioned IOPS. The Oracle Cloud Infrastructure configuration, with higher performance across all workloads, cost $3,697 per month, with Oracle's Block Volume service delivering superior performance without charges for IOPS consumption. In the local storage configuration, Oracle costs slightly more than AWS, by about 25%. However, Oracle also offers double the performance, more memory, and 3.4X the local storage capacity, meaning that we can run bigger workloads and accommodate more workload growth over time. For customers that care about performance, this is an equation that delivers tremendous value. We built Oracle Cloud Infrastructure to deliver consistent high performance for demanding enterprise workloads of all kinds, and we're thrilled to see the advantages of our design demonstrated so clearly. We invite users to try Oracle Cloud to see how it can help them solve their biggest business challenges with the confidence of industry-leading performance that doesn't break the bank.
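For readers who want a rough feel for one slice of this testing on their own instances, the 4K random write pattern can be approximated with the open-source fio tool. A generic sketch (the flags are illustrative; StorageReview's exact test parameters are not published here):
# Approximate a 4K random write test against a local file or block device
fio --name=randwrite-4k --ioengine=libaio --direct=1 \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --size=10G --runtime=300 --time_based --group_reporting
Watch the reported IOPS alongside the latency percentiles; as the post notes, IOPS delivered above the 1 ms latency threshold are of limited use to most applications.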


Product News

Taking a Look at the Oracle Cloud Infrastructure Storage Gateway

Object storage is great for managing unstructured data at scale, but often it’s not that easy to use with existing applications because you need to modify the applications and learn new APIs. Or, perhaps you simply want to work with file systems because that’s what you’re used to. In these cases, a storage gateway is what you need. Oracle Cloud Infrastructure Storage Gateway makes Oracle Cloud Infrastructure Object Storage appear like a NAS, providing seamless, no-fuss access to the cloud for businesses with file-based workflows. There’s no need to re-architect legacy applications or to disrupt users’ access to files they’re working with. Top 5 Features of Oracle Storage Gateway Here are my top 5 reasons why Storage Gateway is great for your cloud data use cases. 1. Removes Data Lock-In: Data Is Accessible in Native Format Any file that you write to a Storage Gateway file system is written as an object with the same name in its associated Oracle Cloud Infrastructure Object Storage bucket (with its file attributes stored as object metadata). This means that you don’t need the gateway to read back your data; you can access your files directly from the bucket by using Oracle APIs, SDKs, HDFS connector, third-party tools, the CLI, and the Console. A Refresh operation in Storage Gateway lets you read back, as files, any objects that were added directly to the Object Storage bucket by other applications. Your data is now available in the same format both on-premises and from within Oracle Cloud Infrastructure. 2. No Cost, Easy to Set Up Storage Gateway runs as a Linux Docker instance on a local host with local disk storage used for caching, or it can run in an Oracle Cloud Infrastructure Compute instance with attached block storage. 3. Storage Cache for High Performance to the Cloud Configure the cache storage to be large enough to hold your largest data set or the files you want low-latency, local access to. Then, any files written into file systems that you create on your local gateway are written asynchronously and efficiently over the WAN to the cloud. When this data becomes active again, it can be brought back into the local Storage Gateway cache. 4. Keep Files You Need Fast Access to Pinned to Local Storage Files that you know you’ll want high-speed access to can be pinned to remain in the cache while you need them, eliminating undesirable latency between your users and data in the cloud. 5. Capacity Without Limit Adding Storage Gateway to your existing storage environment means that you can take advantage of the durability and massive scale of Object Storage. Your data sets can expand and contract without the expense of provisioning new hardware. Grow as fast and as large as you need to while paying only for the storage that you consume.   Store Data Where It Makes the Best Sense for Your Business The gateway effectively expands your storage footprint to leverage the price-performance advantage of the highly durable and secure Object Storage. Moving less-frequently accessed data to the cloud frees up expensive on-premises storage and helps reduce NAS sprawl.   Top 5 Problems, Solved! Here are my top 5 choices for business problems that the Storage Gateway addresses today: 1. Migrating Data to the Cloud When you decide to move data into the cloud, often the initial data migration becomes an obstacle because of limited bandwidth uploading over your WAN or just sheer data volume. In these cases, the new Oracle Data Transfer Service makes sense. 
When network speed isn't the issue, Storage Gateway is a great choice. Start writing the data that you need in the cloud to your storage gateway, and your data is asynchronously and efficiently written to your storage bucket in Object Storage. After your initial data is in Object Storage, it's easy to incrementally add new or modified files by using your on-premises storage gateway.

2. Hybrid Cloud Workloads and Data Processing
If you're considering or already running applications and big data services in Oracle Cloud, Storage Gateway makes it easy to upload local files to one or more Object Storage buckets for them to use. For cloud-native applications and services, you can access this data directly from the bucket. For file-based applications, you launch a Compute instance in the cloud, install a storage gateway on it, and then use it to read and write your data. After running applications in the cloud, you can write the results back to local storage via the gateway.

3. Nearline Content Repositories and Data Distribution
When you end a project, you often need to keep some files available on less expensive, nearline cloud storage so that they are more readily sharable for reuse. Using Storage Gateway to migrate these assets from expensive NAS to a cooler tier of cloud storage shifts the storage costs from a capital expense to operational budget and provides always-on access to and reuse of these assets across geographies and organizations.

4. Back Up and Archive with 3-2-1 Data Protection
Many institutions are storing backups on local NAS systems or tape. Based on business policies, these full or partial backups might be kept just a few weeks or for several months or years. Being able to tier older backups to the cloud and keep just the most recent backup in local cache can offer tremendous space and cost savings and let you meet backup and recovery SLAs. Using Storage Gateway as an on-ramp to the cloud makes it easy to adhere to the 3-2-1 best practice rule for backup and recovery:

Have at least 3 copies of data. (Move 1 or both backup copies into the cloud, keeping the original onsite.)
Use 2 different storage types. (Cloud counts as a different storage type.)
Keep at least 1 copy of data offsite. (Select your object storage cloud region.)

5. Tiered Storage and NAS Capacity Expansion
Storage Gateway essentially expands your on-premises storage to include Oracle Cloud Infrastructure Object Storage. The Storage Gateway cache lets you tier data by asynchronously moving colder, tier-2 data to the cloud while keeping it readily accessible. Data you might once have considered moving to tape to help free up more expensive online local storage can now be tiered off to Object Storage, where it can still be accessed as needed. By adding Storage Gateway to your existing NAS environment, you can take advantage of Object Storage durability, massive scale, and pay-as-you-grow pricing while ensuring low-latency access to recently accessed (or pinned) data.

A Final Thought

Storage Gateway is the evolution of the Storage Software Appliance gateway product. If you're using Oracle Cloud Infrastructure Object Storage, you'll want to use Storage Gateway with its enhanced file-to-object transparency and other sophisticated features. Over the coming months, we're adding more features and explaining more use cases, so please stay tuned!
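To make the file-to-object transparency from feature 1 concrete, here is a minimal sketch using the OCI CLI. The mount point, namespace (myns), and bucket name (sgw-bucket) are hypothetical placeholders, not values from this post:

# Write a file into the locally mounted Storage Gateway file system
# (mount point, namespace, and bucket are hypothetical)
$ cp results.csv /mnt/storagegateway/projects/results.csv

# After the asynchronous upload, the same data is a plain object in the
# associated Object Storage bucket, readable without the gateway
$ oci os object list --namespace myns --bucket-name sgw-bucket --prefix projects/
$ oci os object get --namespace myns --bucket-name sgw-bucket \
    --name projects/results.csv --file ./results-from-cloud.csv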


Product News

Introducing Updateable Instance Metadata

Some of our most security-conscious customers are governments. In discussions with several of these customers, the idea of a secure compute enclave was raised. They described it as an environment where highly sensitive data can be used while not requiring or allowing inbound connectivity. Starting today, customers can update instance metadata on all OCI instances via the OCI API, SDKs, and the CLI. Updateable Instance Metadata enables an atypical and secure communications channel to compute instances that does not require any externally accessible services. Customers can now more easily build secure compute enclaves for highly sensitive workloads.

Instance metadata and cloud-init are two of the little pieces of magic that make IaaS so compelling. Instance metadata has always been leveraged at initial launch by customers who rely on cloud-init (and its Windows counterpart, cloudbase-init) to configure an instance. That configuration could be a simple `yum update`, or it could be used to install an Oracle Management Cloud agent for advanced monitoring and management. Installing and configuring Chef or Puppet agents, joining an Active Directory domain, and much more are all simple to automate with instance metadata. Here's what some of the metadata on an instance looks like:

$ curl http://169.254.169.254/opc/v1/instance/
{
  "availabilityDomain" : "Uocm:PHX-AD-2",
  "faultDomain" : "FAULT-DOMAIN-1",
  "compartmentId" : "ocid1.compartment.oc1..aaaaaaaay4bxm4m5k7ii7oqyygolnuyozt5tyb5ufsl2jgcehm4hl4fslrwa",
  "displayName" : "updateable_metadata",
  "id" : "ocid1.instance.oc1.phx.abyhqljrrtcvkpxo33brxsfpykyrfg2n5r6owmyncywppxmt75ou2ap2n2xa",
  "image" : "ocid1.image.oc1.phx.aaaaaaaasez4lk2lucxcm52nslj5nhkvbvjtfies4yopwoy4b3vysg5iwjra",
  "metadata" : {
    "ssh_authorized_keys" : "ssh-rsa AAAAB3NzaC...4cON",
    "user_data" : "V2UncmUgaGlyaW5nLCBnZXQgaW4gdG91Y2ghIGNyYWlnLmNhcmxAb3JhY2xlLmNvbQ=="
  },
  "region" : "phx",
  "canonicalRegionName" : "us-phoenix-1",
  "shape" : "VM.Standard2.1",
  "state" : "Running",
  "timeCreated" : 1536284426464
}

Because instance metadata and cloud-init work so well together, we often think about them as being a single thing. They aren't. Cloud-init is an application that runs the first time an instance is launched; it gets a document from the instance metadata service and processes it per the documentation. When we decouple instance metadata from cloud-init, it becomes obvious that instance metadata can be leveraged as an atypical communications channel.

Traditionally, we interact with compute instances by connecting to services running on the instance that accept inbound connections; SSH and HTTP are two common channels. These services introduce security risks: they can contain bugs, they can be misconfigured, and they need to be regularly and carefully updated. The same applies to any application on an instance that accepts an inbound connection; they all create risk. What we need is a secure channel to communicate with a compute resource that doesn't require any services that listen for external connections. Updateable Instance Metadata gives us this channel. It eliminates the need for listening services on the compute instance and allows us to leverage the strong OCI IAM permissions and policy features to secure it.

Let's imagine a dataset that is always encrypted in transit and at rest. Unfortunately, it's still difficult to do useful work against encrypted data; it must be decrypted first. Decrypting the data increases the risk of losing control over it.
Updateable Instance Metadata enables us to use the data and collect the results from a compute enclave that doesn't accept any inbound connections. This is a significant security advantage. There are multiple pieces to this solution:

A custom image that includes the analytics software plus a small application that polls the instance metadata. SSH and other services should be disabled, and the firewall should be configured to deny all inbound connections. Set the GRUB menu timeout to 0. The custom image should also include a temporary key encryption key (KEK).
A VCN with a private subnet and a Service Gateway. The private subnet isolates the instances from the internet, and the Service Gateway allows outbound access to the OCI Object Store without allowing access elsewhere.
A bucket in the OCI Object Store. This will contain the encrypted dataset(s) as well as the results of the analysis.
A Dynamic Group, matching rule, and IAM policy. These authorize the instance to GET the data from and PUT the results to the object store.

Now we can launch any number of instances; we'll call them workers. When there is a dataset that needs to be processed, we use the OCI API to update the instance metadata on a worker with two key:value pairs: "object":"<path to object>" and "DEK":"<data encryption key>". The DEK should be unique to each individual unit of work. An application on the instance gets the object, decrypts the DEK, and then decrypts the dataset. Once the analysis is complete, the results can be encrypted with the DEK and PUT to the object store. (A sketch of such a polling application appears at the end of this post.)

The OCI API defines two metadata keys for an instance, `metadata` and `extendedMetadata`. The contents of `metadata` and `extendedMetadata` sent via the API are merged into the `metadata` key on the instance. Updating the `metadata` key via the API is subject to multiple limitations, so let's focus on `extendedMetadata`. The maximum size of the combined metadata, including user data and SSH keys, is 31.25 kibibytes. To update the metadata on our instance with our two new keys, we first need to define them.
Passing complex JSON on the CLI is difficult, so we source it from a file:

$ cat extended-md.json
{
  "object": "https://objectstorage.us-phoenix-1.oraclecloud.com/p/7GWMRaWucZ-dqIgocR9OVc6dUGiB5QwHX4V-QISkbCI/n/myns/b/money/o/someencypteddata",
  "DEK": "some DEK"
}

To apply the update:

$ oci compute instance update --instance-id ocid1.instance.oc1.phx.abyhqljr…n2xa --extended-metadata file://./extended-md.json

When we check the metadata on the instance again, we can see our update:

[opc@updateable-metadata ~]$ curl http://169.254.169.254/opc/v1/instance/
{
  "availabilityDomain" : "Uocm:PHX-AD-2",
  "faultDomain" : "FAULT-DOMAIN-1",
  "compartmentId" : "ocid1.compartment.oc1..aaaaaaaay4bxm4m5k7ii7oqyygolnuyozt5tyb5ufsl2jgcehm4hl4fslrwa",
  "displayName" : "updateable_metadata",
  "id" : "ocid1.instance.oc1.phx.abyhqljrrtcvkpxo33brxsfpykyrfg2n5r6owmyncywppxmt75ou2ap2n2xa",
  "image" : "ocid1.image.oc1.phx.aaaaaaaasez4lk2lucxcm52nslj5nhkvbvjtfies4yopwoy4b3vysg5iwjra",
  "metadata" : {
    "DEK": "some DEK",
    "user_data" : "V2UncmUgaGlyaW5nLCBnZXQgaW4gdG91Y2ghIGNyYWlnLmNhcmxAb3JhY2xlLmNvbQ==",
    "object" : "https://objectstorage.us-phoenix-1.oraclecloud.com/p/7GWMRaWucZ-dqIgocR9OVc6dUGiB5QwHX4V-QISkbCI/n/myns/b/money/o/someencypteddata",
    "ssh_authorized_keys" : "ssh-rsa AAAAB3NzaC...4cON"
  },
  "region" : "phx",
  "canonicalRegionName" : "us-phoenix-1",
  "shape" : "VM.Standard2.1",
  "state" : "Running",
  "timeCreated" : 1536284426464
}

Updateable Instance Metadata provides a highly secure, out-of-band communications channel that can be leveraged to build a secure compute enclave for highly sensitive workloads. I'm excited to see what you build with Updateable Instance Metadata; please let me know!

To get started with Updateable Instance Metadata on OCI, visit https://cloud.oracle.com. Updateable Instance Metadata is available at no additional cost in all public OCI regions and ADs. For more information, see the Oracle Cloud Infrastructure Getting Started guide, the Compute service overview, and the Updateable Instance Metadata documentation.

Craig
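As promised above, here is a sketch of what the small polling application on a worker might look like. It is illustrative only: jq and openssl stand in for real parsing and crypto choices, decrypt-dek and analyze are hypothetical helpers for the KEK unwrap and analytics steps, and the results upload assumes a writable pre-authenticated request URL derived from the object URL:

#!/bin/bash
# Worker poller (sketch): wait for "object" and "DEK" keys to appear in
# instance metadata, fetch and decrypt the dataset, process it, and
# upload the encrypted results.
MD_URL="http://169.254.169.254/opc/v1/instance/metadata"

while true; do
  OBJECT=$(curl -s "$MD_URL" | jq -r '.object // empty')
  DEK=$(curl -s "$MD_URL" | jq -r '.DEK // empty')
  if [ -n "$OBJECT" ] && [ -n "$DEK" ]; then
    curl -s -o /tmp/dataset.enc "$OBJECT"        # GET the encrypted dataset
    KEY=$(decrypt-dek "$DEK")                    # hypothetical: unwrap the DEK with the local KEK
    openssl enc -d -aes-256-cbc -k "$KEY" -in /tmp/dataset.enc -out /tmp/dataset
    analyze /tmp/dataset > /tmp/results          # hypothetical analytics step
    openssl enc -aes-256-cbc -k "$KEY" -in /tmp/results -out /tmp/results.enc
    # PUT the results; assumes a second, writable pre-authenticated request URL
    curl -s -X PUT --data-binary @/tmp/results.enc "${OBJECT}.results"
    break
  fi
  sleep 30
done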


Product News

Gartner Names Oracle a "Visionary" in New Magic Quadrant for Web Application Firewalls

You can't have a secure cloud without a secure edge. The internet and corporate networks are distributed systems, and web-based attacks take advantage of that. They target IoT devices, web servers, and other endpoints, seeking access to your data and infrastructure. The only way to stop them is to prevent that malicious traffic from reaching your endpoints in the first place.

That's why we acquired Zenedge in March. Its technologies, now available in the Oracle Dyn Web Application Security suite and coming to Oracle Cloud Infrastructure soon, enable organizations to protect against web server vulnerability exploits, DDoS attacks, bad bots, and other threats, both on-premises and in the cloud. This expertise is invaluable in an evolving threat landscape; we know where the hackers are going next, and we're always working to meet them there. But don't take my word for it: Gartner has named Oracle a "Visionary" in its latest Magic Quadrant for Web Application Firewalls (WAFs).

A cloud-based, globally deployed WAF is the cornerstone of any cloud edge security strategy. It sits in front of a web server, inspects traffic, and identifies and mitigates threats, both incoming (such as DDoS attacks) and outgoing (such as data breaches). Those capabilities are fairly standard across the WAF market, but they're not enough these days. There are too many types of web attacks, they are constantly evolving, and new threats are always emerging.

The Oracle WAF stands out from the crowd with its use of machine learning. A supervised machine learning engine analyzes traffic queries and assigns them a score based on their potential risk. The WAF can then respond to threats by automatically blocking them or alerting security operations center analysts for further investigation. These risk scores are a valuable differentiator. Our customers told Gartner that the scores help them improve their WAF configuration and enable their security teams to focus on addressing the most important, complex threats.

Oracle is committed to enterprise security as a pillar of its cloud platform. The emergence of the Oracle WAF as a Visionary in the market is just the tip of the iceberg. Oracle Cloud Infrastructure embraces the hybrid and multicloud approach that customers demand. This approach provides needed flexibility and scalability, but it also makes the corporate network even more distributed than it already is. A comprehensive edge security strategy, including the use of a cutting-edge WAF, is necessary to protect your business in this environment. Paired with the industry's best data and insights on internet performance, availability, and security via our Internet Intelligence program, this strategy gives the market a trusted enterprise cloud for the future.

Gartner, Magic Quadrant for Web Application Firewalls, Jeremy D'Hoinne, Adam Hils, Ayal Tirosh, Claudio Neiva, 29 August 2018

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Events

General Availability of Virtual Machines with NVIDIA GPUs on Oracle Cloud Infrastructure

A few weeks ago, we announced preview availability of our virtual machine instances with NVIDIA Tesla Volta GPUs at the ISC conference in Germany. Customers have been using GPUs on Oracle Cloud Infrastructure for use cases ranging from engineering simulations and medical research to modern workloads such as machine learning training with frameworks like TensorFlow. This week we're at NVIDIA's GPU Technology Conference in Tokyo, and I'm excited to announce general availability of these virtual machines with NVIDIA Tesla Volta V100 GPUs in our London (UK) and Ashburn (US) regions. You'll be able to log in and launch these instances the same way you normally launch instances on Oracle Cloud Infrastructure.

These virtual machines join the bare metal compute instance we launched earlier in the year, which provides you the entire server for very computationally intensive and accelerated workloads such as DNN training, or for running traditional high-performance computing (HPC) applications such as GROMACS or NAMD. Finally, we're also making our Pascal-generation GPU instances available on virtual machines in our Ashburn (US) and Frankfurt (Germany) regions as a new cost-effective GPU option. Data scientists, researchers, engineers, and developers now have access to a portfolio of options ranging from a single P100 virtual machine to the cutting-edge 8-way bare metal instance with V100 Tesla GPUs. There's something here for everyone!

Instance Shape  GPU Type            GPU(s)  Core(s)  Memory (GB)  Interconnect      Price (GPU/Hr)
BM.GPU3.8       Tesla V100 (SXM2)   8       52       768          P2P over NVLINK   $2.25
VM.GPU3.4       Tesla V100 (SXM2)   4       24       356          P2P over NVLINK   $2.25
VM.GPU3.2       Tesla V100 (SXM2)   2       12       178          P2P over NVLINK   $2.25
VM.GPU3.1       Tesla V100 (SXM2)   1       6        86           N/A               $2.25
BM.GPU2.2       Tesla P100          2       28       192          N/A               $1.275
VM.GPU2.1       Tesla P100          1       12       104          N/A               $1.275

You can additionally use NVIDIA GPU Cloud to launch HPC or AI application containers by simply deploying our pre-configured images along with NGC credentials. You can follow detailed step-by-step instructions here or visit our GPU product page for more information. Finally, visit us this week in the exhibition hall at NVIDIA's GTC conference to talk to the engineering teams about Oracle Cloud Infrastructure, or attend the breakout session on September 13 at 12:10 p.m. to learn more: https://www.nvidia.com/ja-jp/gtc/sessions/?sid=2018-1105. We hope to see you there!

Karan
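As a quick postscript: if you prefer the CLI to the Console, launching one of these shapes looks roughly like the following sketch. All OCIDs, the availability domain, and the key file are placeholders, not values from this post:

# Launch a single-GPU VM (all OCIDs, the AD, and the key file are placeholders)
$ oci compute instance launch \
    --availability-domain "Uocm:PHX-AD-1" \
    --compartment-id ocid1.compartment.oc1..example \
    --shape VM.GPU3.1 \
    --image-id ocid1.image.oc1.phx.example \
    --subnet-id ocid1.subnet.oc1.phx.example \
    --ssh-authorized-keys-file ~/.ssh/id_rsa.pub

# Once the instance is running, confirm the GPU is visible
$ ssh opc@<public-ip> nvidia-smi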


Strategy

Openness at Oracle Cloud Infrastructure

Co-authored by: Bob Quillin, VP of Developer Relations, Oracle Cloud Infrastructure, and Jason Suplizio, Principal Member of Technical Staff, Oracle Cloud Infrastructure

Oracle is committed to creating a public cloud that embraces Open Source Software (OSS) technologies and their supporting communities. With the strong shift to cloud native technologies and DevOps methodologies, organizations are seeking an open, cloud-neutral technology stack that avoids cloud lock-in and allows them to run the same stack in any cloud or on-premises. As a participant in this competitive public-cloud ecosystem, Oracle Cloud Infrastructure respects this freedom to choose and provides the flexibility to run where the business or workloads require. Openness and OSS are cornerstones of the Oracle Cloud Infrastructure strategy, with contributions, support of open source foundations, community engagement, partnerships, and OSS-based services at the core of its efforts.

Developer ecosystems grow and thrive in a vibrant and supported community, something Oracle believes in and actively supports. Oracle is one of the largest producers of open source software in the world, developing and providing contributions and other resources for projects including Apache NetBeans, Berkeley DB, Eclipse Jakarta, GraalVM, Kubernetes, Linux, MySQL, OpenJDK, PHP, VirtualBox, and Xen. This commitment naturally extends into public cloud computing, giving cloud customers the confidence to migrate their workloads with minimal impact to their business, code, and runtime. Oracle Cloud Infrastructure core services are built on open source technologies to support workloads for cloud native applications, data streams, eventing, and data transformation and processing.

Support for Open Source Communities

"Oracle supports the cloud native community by, among other things, engaging at the highest level of membership with the Cloud Native Computing Foundation (CNCF). Their commitment to openness and interoperability is demonstrated by their support for the Certified Kubernetes conformance program and their continuing certification of Oracle Linux Container Services." —Dan Kohn, Executive Director of the Cloud Native Computing Foundation (CNCF)

Oracle is an active member of several foundations committed to creating sustainable open source ecosystems and open governance. As a platinum member of the Linux Foundation since 2008, Oracle participates in a number of its projects, including the Cloud Native Computing Foundation (CNCF), the Open Container Initiative (OCI), the Xen Project, Hyperledger, Automotive Grade Linux, and the R Consortium. Since Oracle joined CNCF as a platinum member in 2017, Oracle Cloud Infrastructure engineering leadership has sat on the CNCF Governing Board and continues to commit to a number of CNCF technologies, Kubernetes in particular.

The Oracle Cloud Infrastructure Container Engine for Kubernetes, for example, leverages standard upstream Kubernetes, validated against the CNCF Kubernetes Software Conformance program, to help ensure portability across clouds and on premises. As part of the first group of vendors certified under the Certified Kubernetes Conformance Program, Oracle works closely with CNCF working groups and committees to further the adoption of Kubernetes and related OSS across the industry. Oracle's strategy is to deliver open source–based container orchestration capabilities by offering a complete, integrated, and open service.
To this aim, Container Engine for Kubernetes leverages Docker for container runtimes, Helm for package management, and standard Kubernetes for container orchestration. In addition to Kubernetes, Oracle works closely with CNCF teams on many of their other projects and working groups, including Prometheus, Envoy, OpenTracing, gRPC, serverless, service mesh, federation, and the Open Container Initiative.

Oracle joined the Open Container Initiative to promote and achieve the initiative's primary goal, "to host an open source, technical community and build a vendor-neutral, portable and open specification and runtime for container-based solutions." In accordance with that mission, Oracle developed the railcar project, an implementation of the Open Container Initiative's runtime spec. In further support of the container ecosystem, Oracle collaborates with Docker, Inc., to release Oracle's flagship databases, middleware, and developer tools into the Docker Store marketplace via the Docker Certification Program.

Open, conformant container technologies have become the tools of the trade for developers who need to move fast and build for the cloud. These developers rely on open, cloud-neutral, container-native software stacks that enable them to avoid lock-in and to run anywhere.

Built on Open Source

"We believe that embracing Openness creates trust, choice, and portability for our customers. In addition to being platinum members in several Open Source Software foundations, we've also dedicated top engineering talent to contribute their leadership and software." —Rahul Patil, Vice President, Software Development, Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is built on and retains compatibility with the most advanced and prominent OSS technologies. Oracle Linux, the operating system that Oracle Cloud Infrastructure runs on, is an excellent case in point. Furthermore, we try to use open source software, wherever possible, without modification. The reality, however, is that introducing innovative products to the market sometimes requires making enhancements to the underlying OSS code base. Under these circumstances, Oracle Cloud Infrastructure works to contribute those changes back to the open source community.

Chef and Ansible

Customers who use Chef can also use the open source Chef Knife plugin for Oracle Cloud Infrastructure. For customers who use Ansible, Oracle Cloud Infrastructure recently announced the availability of Ansible modules for orchestration, provisioning, and configuration management tasks (available on GitHub). These modules make it easy to author Ansible playbooks to automate the provisioning and configuration of Oracle Cloud Infrastructure services and resources, such as Compute, Load Balancing, and Database.

Fn Project

Developers who are building cloud native applications will find a portable, open, container-native serverless solution for their development needs in Oracle's recently open-sourced Fn Project. The Fn Project can run on any cloud or on a developer's laptop. This open source serverless solution provides polyglot language support (including Java, Go, Ruby, Python, PHP, Rust, .NET Core, and Node.js, with AWS Lambda compatibility) and will be offered as a fully managed functions-as-a-service (FaaS) offering on Oracle Cloud Infrastructure.
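As a quick illustration of the Fn developer experience, here is a minimal sketch. The app and function names are arbitrary, and the commands follow the Fn CLI's documented workflow (worth verifying against your installed CLI version, since the invoke syntax has evolved):

# Install the Fn CLI and start a local Fn server (Docker required)
$ curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh
$ fn start &

# Scaffold a Go function, deploy it locally, and invoke it
$ fn init --runtime go hello
$ cd hello
$ fn create app demo-app
$ fn deploy --app demo-app --local
$ fn invoke demo-app hello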
Additionally, Oracle Cloud Infrastructure will be releasing a real-time event management service, which implements the CNCF's CloudEvents specification for a common, vendor-neutral format for event data. After they are released, the combination of the event management service and the Fn Project will be the only open source, standards-based serverless and eventing platform available among all public cloud providers.

GraphPipe

Oracle recently announced the availability of GraphPipe, a new open source project that makes it easier for enterprises to deploy and query machine learning models from any framework. GraphPipe provides a standard, high-performance protocol for transmitting tensor data over the network, along with simple implementations of clients and servers that make it easy to deploy and query machine learning models from any framework. GraphPipe's efficient servers can serve models built in TensorFlow, PyTorch, mxnet, CNTK, or caffe2. All of GraphPipe's source code, documentation, and examples are available on GitHub today.

Kubernetes

Through its work in the CNCF and elsewhere, the Oracle Cloud Infrastructure team has invested deeply in Kubernetes. As part of that investment, and because manually managing and maintaining a production Kubernetes cluster and the associated resources can require significant effort, the team created the Oracle Container Engine for Kubernetes. Using standard, upstream Kubernetes, it creates and manages clusters for secure, high-performance, high-availability container deployments using Oracle Cloud Infrastructure's networking, compute, and storage resources, which include bare metal instance types. The Oracle Cloud Infrastructure engineering team has also contributed many of its Kubernetes projects to the open source community, such as the JenkinsX-supported cloud provider (OKE), flexvolume driver, volume provisioner, cloud controller manager, Terraform Kubernetes installer, crashcart, and smith (read more about these projects here).

Terraform

Terraform is a popular infrastructure as code (IaC) solution that aims to provide a consistent workflow for provisioning infrastructure from any provider, and a self-service workflow for publishing and consuming modules. Following the release of its Terraform provider, Oracle Cloud Infrastructure is increasing its investment in Terraform with the upcoming release of a fully managed service that uses Terraform to manage infrastructure resources. That release will be accompanied by a group of open source Terraform modules for easy provisioning of Oracle Cloud Infrastructure services and of many other popular OSS technologies onto Oracle Cloud Infrastructure.

"We put our customers first in everything we do, and our customers tell us which OSS technology they want to use on Oracle Cloud Infrastructure. There are many more open source repositories which our customers use frequently, which we will support as first-class citizens over time. If you wish to see support for a specific OSS technology on Oracle Cloud Infrastructure, feel free to reach out to us or comment on this blog." —Vinay Kumar, Vice President of Product Management, Oracle Cloud Infrastructure

There is a lot of history and momentum behind Oracle's commitment to OSS, and Oracle Cloud Infrastructure is making rapid progress in building out a truly open public cloud platform. See it for yourself: get started with Oracle Cloud Platform, with up to 3,500 free hours, by creating a free account.


Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Anuj Gulati

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Anuj Gulati of IBM. Anuj works as a Technical Lead at IBM India. He has over nine years of experience managing database systems (RDBMS and non-RDBMS), ERPs, job schedulers, and web servers, and he has sound knowledge of Oracle Cloud Infrastructure concepts, including Ravello. He is certified as both an Oracle Cloud Infrastructure (OCI) Architect Associate and an OCI Classic Architect Associate.

Greg: Anuj, how did you prepare for the certification?

Anuj: I already had a fair understanding of Oracle Cloud Infrastructure (OCI) because I was already certified in OCI Classic. To prepare for the OCI Architect Associate exam, the first step I took was to focus on understanding the business drivers that led to the new OCI offering. This helped me understand the cloud in more detail. Understanding the technical aspects is one thing, but understanding the reasoning for developing Oracle's next-generation cloud was very beneficial. I also signed up for the 30-day trial, which I found to be most beneficial. Getting my hands on OCI services greatly helped me understand the concepts. I reviewed all the use cases I could find and set these up on the trial account. And the documents found on docs.oracle.com contained almost everything that I needed to work with the Oracle Cloud. In addition, I've been following a lot of Oracle management on LinkedIn, and whenever they posted any update, I tested it out to familiarize myself with it. I also compared Oracle Cloud to the clouds offered by other vendors. I reviewed the technical aspects, which helped me better appreciate the offerings in Oracle Cloud that are unavailable in the other vendors' clouds. Preparing this way did take longer, but I still feel it was the best way for me to not only pass the exam but to truly understand the Oracle Cloud Infrastructure offering.

Greg: Did being part of the reference program help you prepare for the exam?

Anuj: Yes. We received some customized videos specifically for the OCI exam. I found these to be very helpful, and they assisted my overall understanding of OCI.

Greg: How long did it take you to prepare for the exam?

Anuj: It took me about three months to prepare for the exam. It took longer than I had hoped due to my job responsibilities. For someone who has experience with other clouds, I think it would take only about one month to prepare for the exam.

Greg: How is life after getting certified?

Anuj: I shared the digital badge for my OCI certification on LinkedIn, and it received many views, which I was very pleased about. Passing this exam has given me a sense of confidence, a sense of pride. I feel like I am part of an elite group that has earned this certification. Many colleagues have reached out to me for advice on how to prepare for the exam and about the exam structure. From a technical perspective, it has helped me understand a lot of cloud concepts in general and some of the Oracle concepts in particular.

Greg: Any other advice you'd like to share?

Anuj: If you have the right skills and understanding, this exam should not be too difficult for you. Go through the videos and documents that are available for free. You definitely need to create a trial account and work your way through it.
Please subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam.

Greg Hyman
Principal Program Manager, Oracle Cloud Infrastructure Certification
greg.hyman@oracle.com
Twitter: @GregoryHyman
LinkedIn: GregoryRHyman

Associated links:
Oracle Cloud Infrastructure 2018 Architect Associate exam
Oracle Cloud Infrastructure 2018 Architect Associate study guide
Oracle Cloud Infrastructure 2018 Architect Associate practice test
Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam

Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series are listed under Greg's blog page.


Solutions

Microsoft SQL Server Running on Linux Using Oracle Cloud Infrastructure

Microsoft SQL Server on Linux removes the barrier for organizations that prefer the Linux operating system over Microsoft Windows. It's the same SQL Server database engine with many similar features; the only difference is the operating system. Currently, Microsoft supports the Linux version of SQL Server on Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu. You can also run SQL Server in a Docker container. You install, update, and remove SQL Server from the command line.

This post describes how to deploy a SQL Server database running on an Ubuntu Linux server on a single Oracle Cloud Infrastructure Compute VM. It also describes how you can use Oracle Cloud Infrastructure Block Volumes storage to store the SQL Server database files and transaction log files.

Before You Begin

Before you install SQL Server on Linux, consider the following prerequisites:

Identify your IOPS or I/O throughput requirements. Check the SQL Server documentation for resource requirements.
Choose an appropriate Oracle Cloud Infrastructure Compute VM shape (OCPU, memory, and storage).
Create a secured network on Oracle Cloud Infrastructure to access the SQL Server database.
Choose and install a supported Linux server version and its command-line tools.
Identify the required SQL Server services that must be installed.
Generate the SSH key pair and secure the SSH private and public keys.

Choose the Oracle Cloud Infrastructure VM Shape and OS

You can choose the Linux image (Ubuntu) from the Oracle Cloud Infrastructure repository, or you can bring your own Linux image to deploy on the VM. We strongly recommend that you check the supported Linux server versions on Oracle Cloud Infrastructure before you start deploying. For this post, we chose Ubuntu 16.04 with Debian packages from the Oracle Cloud Infrastructure image repository, and the VM.Standard2.4 shape.

Configure Network Access

Before installing SQL Server, you must create an Oracle Cloud Infrastructure virtual cloud network (VCN) and choose the appropriate availability domain, subnet, and other components for your Linux server. In addition to the existing ingress stateful security rules in your VCN, you might need to add ingress security rules to allow remote SSH (secure shell) access to the Linux server. Ensure that the internet gateway route rules are enabled for internet access, which allows you to access the Linux host over the public network. The following images show the security rule added and the route rules enabled to allow SSH access to the Linux host over the public network.

Security list rule:

Route table rule:

For more information about working with security rules and route rules, see the Networking service documentation.

Provision and Connect to the Linux Server

When you provision the Linux (Ubuntu) server, you provide the SSH public key. After the server is provisioned, the Console displays the public IP address to use to access the Linux host. Use SSH to connect to the Linux host, using the username ubuntu and the private key of the SSH key pair.

Create a Block Storage Volume on Oracle Cloud Infrastructure

We installed the operating system, OS command-line tools, SQL Server binaries, and all the required SQL Server tools on the local boot volume. However, we stored the SQL Server database on a block storage volume. The following image illustrates creating a block storage volume on Oracle Cloud Infrastructure and choosing the appropriate backup option (Bronze) for the volume.
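After you attach the block volume to the instance, the Console's iSCSI Commands & Information dialog displays the exact attach commands for your volume. The following is only a sketch of a typical sequence; the IQN and IP address are placeholders, and the device name can differ on your system:

# Register the iSCSI target, enable automatic login at boot, and log in
# (IQN and IP are placeholders copied from the Console)
$ sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:example -p 169.254.2.2:3260
$ sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:example -n node.startup -v automatic
$ sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:example -p 169.254.2.2:3260 -l

# Partition the new device, create a file system (xfs shown), and mount it
$ sudo fdisk /dev/sdb
$ sudo mkfs.xfs /dev/sdb1
$ sudo mkdir -p /sqldata
$ sudo mount /dev/sdb1 /sqldata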
Run the iSCSI commands provided by the Console on the Ubuntu Linux server as the root user to add the iSCSI target for this block storage volume at the operating-system level. After you run those commands, partition the newly added iSCSI storage and create the file system (xfs and ext3 are both supported by SQL Server). The following image shows the mount point of the block storage volume after creating the partition, creating the appropriate file system, and mounting the partition.

Install SQL Server on Linux

Follow these steps to install SQL Server on the Ubuntu Linux operating system, running the commands in a bash shell on the Ubuntu Linux terminal to install the mssql-server package.

1. Import the public repository GPG keys:

$ wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

2. Register the SQL Server Ubuntu repository:

$ sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list)"

3. Install SQL Server:

$ sudo apt-get update
$ sudo apt-get install -y mssql-server

4. Run mssql-conf setup, set the SA password, and choose the SQL Server edition:

$ sudo /opt/mssql/bin/mssql-conf setup

5. Verify the SQL Server service:

$ systemctl status mssql-server

6. Connect to the SQL Server instance. To create a database, connect with a tool that can run Transact-SQL statements against SQL Server. You might want to install SQL Server Operations Studio, a cross-platform GUI database management utility, to manage your SQL Server database; a command-line alternative using sqlcmd is sketched after the conclusion below.

Conclusion

This post demonstrated how to deploy Microsoft SQL Server on Ubuntu Linux using Oracle Cloud Infrastructure, and discussed how to use Oracle Cloud Infrastructure Block Volumes to store the SQL Server database for higher performance and better manageability.
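As promised in step 6, here is a brief sketch of verifying the installation from the command line with sqlcmd from the mssql-tools package. The SA password is whatever you chose during mssql-conf setup:

# Register the Microsoft "prod" repository, which carries mssql-tools
$ sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/16.04/prod.list)"
$ sudo apt-get update
$ sudo apt-get install -y mssql-tools unixodbc-dev

# Connect as SA and create a test database
$ /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourPassword>'
1> CREATE DATABASE TestDB;
2> GO
1> SELECT name FROM sys.databases;
2> GO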


Security

Installing the Check Point CloudGuard Virtual Firewall Appliance on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure offers a native firewall service in which you create security lists with stateful rules for packet inspection, using IP addresses as source and destination with TCP and UDP ports. But customers also have the option to install and deploy third-party firewall products to satisfy additional requirements:

To comply with their existing or required InfoSec policy
To leverage existing operational knowledge
To add security features that are not available with security lists, such as IDS/IPS

In this blog we are featuring Check Point because many of our existing customers use Check Point firewall products on-premises and have enterprise licenses that they can use on Oracle IaaS as part of the "bring your own license" (BYOL) scheme. The Check Point CloudGuard family of security products can be deployed as virtual appliances to protect enterprise workloads running on cloud infrastructure (IaaS) or software services and applications (SaaS) against generation V cyberattacks. This post describes the general workflow and provides some associated steps for installing the Check Point CloudGuard IaaS virtual appliance on Oracle Cloud Infrastructure. For general guidance, see the How to Deploy a Virtual Firewall Appliance on Oracle Cloud Infrastructure blog post.

Prerequisites

To perform the steps in this post, you must meet the following prerequisites:

You have an Oracle Cloud Infrastructure tenancy.
You have access to the Oracle Cloud Marketplace to download the Check Point CloudGuard IaaS Security Gateway. Optionally, you can store the image in your Object Storage (for example, in us-ashburn-1).
You are familiar with the following Oracle Cloud Infrastructure terms: availability domain, bucket, compartment, image, instance, key pair, region, shape, tenancy, and VCN. For definitions, see the documentation glossary.

Sizing

The example in this post uses the VM.Standard2.4 compute shape. For a list of Oracle Compute shapes and pricing information, see the Compute pricing page.

Architecture Diagram

In this example, CloudGuard is deployed in a single-gateway configuration with three VNICs: one for public internet-facing traffic, the second for the DMZ, and the third for internal workloads. The internet and DMZ zones are on public subnets, and the internal zone is on a private subnet.

Interface

The following table lists the interface properties as shown in the architecture diagram:

Zone      Subnet    VNIC
Internet  Public    VNIC 1
DMZ       Public    VNIC 2
Intranet  Private   VNIC 3

Step 1: Create the VCN

Using the Oracle Cloud Infrastructure Console, create a virtual cloud network (VCN) and its associated resources for the CloudGuard security zones. The following images show examples of the resources in the Console:

VCN
Internet Gateway
Subnets
Security Lists with Ingress and Egress Rules
Route Table Route Rule

Step 2: Import the CloudGuard Image as a Custom Image

Import the image from Object Storage and create a custom image. If you want to create the CloudGuard gateway in another region (for example, uk-london-1), you must preauthenticate the image in Object Storage. Then, create the custom image.

Step 3: Launch an Instance from the Custom Image

1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Find the custom image that you want to use.
3. Click the Actions icon (three dots), and then click Launch Instance.
4. Provide additional launch options as described in Creating an Instance.

Step 4: Add More VNICs (for the DMZ Security Zone)

You can create additional VNICs while the first instance is running. To complete the additional VNIC configuration, you have to reboot the instance.

1. Double-click the instance.
2. In the left-side menu, click Attached VNICs.
3. Click Create VNIC.
4. Enter a name.
5. For Virtual Cloud Network, select the VCN.
6. For Subnet, select a private subnet.
7. Select the Skip Source/Destination Check check box.
8. Click Create VNIC.
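If you prefer to script step 4, the OCI CLI can attach the secondary VNIC directly. This is a sketch with placeholder OCIDs; verify the flag names against your installed CLI version:

# Attach a secondary VNIC on the private subnet with the
# source/destination check disabled (OCIDs are placeholders)
$ oci compute instance attach-vnic \
    --instance-id ocid1.instance.oc1..example \
    --subnet-id ocid1.subnet.oc1..example \
    --vnic-display-name cloudguard-dmz \
    --skip-source-dest-check true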
Step 5: Create a Serial Console Connection to the Running Instance

Create a serial console connection to the running instance by following the instructions in Instance Console Connections.

Step 6: Configure CloudGuard

Configure the gateway by using the Check Point Gaia Portal or the SmartConsole. You can manage your Check Point Security Gateway in the following ways:

Standalone configuration: CloudGuard acts as its own Security Management Server and Security Gateway.
Centrally managed: From the same virtual network or outside the gateway, on-premises, from a different cloud, from another Oracle Cloud Infrastructure VCN or region, or from a different tenant in Oracle Cloud.

Configure the Gateway from the Gaia Portal

1. Open an SSH client.
2. Set the user for the administrator: enter set user admin password, and set the password.
3. Enter save config.
4. Go to the Gaia Portal: https://<IP_address>. The First Time Configuration Wizard is displayed.
5. Perform the following steps to configure your system. When you get to the Installation Type page, you select the specific deployment of your system.
   On the Deployment Options page, select Setup, Install, or Recovery.
   On the Management Connection page, configure your system.
   On the Internet Connection page, configure the interface to connect to the internet.
   On the Device Information page, configure the DNS and proxy settings.
   On the Date and Time Settings page, set the time manually, or use the Network Time Protocol (NTP).
   On the Installation Type page, configure the system for your needs.

Configure the Gateway from the SmartConsole

1. Open the SmartConsole and go to the Gateways & Servers view.
2. Click the new icon, and then select Gateway. The Check Point Security Gateway Creation window is displayed.
3. Select Wizard Mode.
4. Enter values on the General Properties page.
5. Initiate secure internal communications.
6. Click Finish. The Check Point Gateway General Properties window is displayed.
7. Configure the gateway.

Refer to the Check Point CloudGuard documentation for the step-by-step configuration: https://cloudmarketplace.oracle.com/marketplace/en_US/listing/37604515

In the next blog, we will tackle high availability options for CloudGuard on OCI in a multi-VCN configuration. Please stay tuned!


Developer Tools

Deploy Kubeflow with Oracle Cloud Infrastructure Container Engine for Kubernetes

This post provides detailed instructions on how to deploy Kubeflow on Oracle Cloud Infrastructure Container Engine for Kubernetes.

Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. You can use this service when your development team wants to reliably build, deploy, and manage cloud native applications. You just specify the compute resources that your applications require, and Container Engine for Kubernetes provisions them on Oracle Cloud Infrastructure automatically.

Kubeflow is an open source project that makes the deployment and management of machine learning workflows on Kubernetes easy, portable, and scalable. Kubeflow automates the deployment of TensorFlow on Kubernetes. TensorFlow provides a state-of-the-art machine learning framework, and Kubernetes automates the deployment and management of containerized applications.

Step 1: Create a Kubernetes Cluster

Create a Kubernetes cluster with Container Engine for Kubernetes. You can create this cluster manually by using the Oracle Cloud Infrastructure Console or automatically by using Terraform and the SDK. For better performance, we recommend using a bare metal compute shape to create nodes in your node pools. Choose the right compute shape and number of nodes in the node pools, depending on the size of your data set and on the compute capacity needed for your model training. As an example, the following node pool was created with the BM.DenseIO1.36 shape, which has 36 OCPUs and 512 GB of memory. Container Engine for Kubernetes creates a Kubernetes "kubeconfig" configuration file that you use to access the cluster with kubectl and the Kubernetes Dashboard.

Step 2: Download the Kubernetes Configuration File

Download the Kubernetes configuration file of the cluster that you just created. This configuration file is commonly known as a kubeconfig file for the cluster. At this point, you can use kubectl or the Kubernetes Dashboard to access the cluster. Note that after you run the kubectl proxy command, you need to use the following URL to access the Kubernetes Dashboard:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Step 3: Deploy Kubeflow

After the Kubernetes cluster is created, you can deploy Kubeflow. In this post, we deploy Kubeflow with ksonnet. Ksonnet is a framework for writing, sharing, and deploying Kubernetes manifests, and it helps to simplify Kubernetes deployment. Check whether ksonnet is installed on your local system; if it is not, install ksonnet before proceeding. Now you can deploy Kubeflow by using the following commands, provided in the Kubeflow documentation:

export KUBEFLOW_VERSION=0.2.2
curl https://raw.githubusercontent.com/kubeflow/kubeflow/v${KUBEFLOW_VERSION}/scripts/deploy.sh | bash

Note: The preceding command enables the collection of anonymous user data to help improve Kubeflow. If you don't want data to be collected, you can explicitly disable it. For instructions, see the Kubeflow Usage Reporting guide.

During the Kubeflow deployment, you might encounter the following error:

"jupyter-role" is forbidden: attempt to grant extra privileges:

To work around this error, grant your own user the required role-based access control (RBAC) role to create or edit other RBAC roles by running the following command:

$ kubectl create clusterrolebinding default-admin --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa....
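Before moving on to the notebook, it's worth confirming that the Kubeflow components came up. A quick check, assuming the deploy script's default kubeflow namespace (the namespace name is an assumption worth verifying for your Kubeflow version):

# List Kubeflow pods and wait until they are all Running
$ kubectl get pods -n kubeflow

# tf-hub (JupyterHub) should be among the deployed services
$ kubectl get svc -n kubeflow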
Step 4: Access the Notebook

Now you are ready to access Jupyter Notebook and start building your ML/AI models with your data sets. To connect to your notebook locally, run the following commands:

$ kubectl get pods --selector="app=tf-hub" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"
$ kubectl port-forward tf-hub-0 8000:8000

Summary

With OCI Container Engine for Kubernetes and Kubeflow, you can easily set up a flexible and scalable machine learning and AI platform for your projects. You can focus more on building and training your models rather than on managing the underlying infrastructure.


Solutions

Oracle Database Offerings in Oracle Cloud Infrastructure

Oracle offers multiple cloud-based database options to meet a wide variety of use cases. The Oracle Cloud Infrastructure-based databases are available on bare metal machines, virtual machines (VMs), and Exadata in different sizes. These offerings come with different levels of managed services, features, and price points, which makes it easy to find an option that meets your specific requirements. A 100 percent compatibility design ensures that all of Oracle's database solutions use the same architecture and software, which enables you to leverage the same skills and support whether you deploy the solutions on-premises, in a private cloud implementation, or in Oracle Cloud.

Oracle offers Maximum Availability Architecture (MAA) guidelines and associated software and tools for high availability, disaster recovery, and data protection. All of these technologies, like Real Application Clusters (RAC), Data Guard, and GoldenGate, and the MAA best practices are also available for Oracle Cloud databases. All cloud-based options are available in pay-as-you-go and monthly flex pricing options and allow you to leverage your existing licenses in a Bring Your Own License (BYOL) model. For detailed information about the included Oracle Database features, options, and packs, see the Permitted Features section of the Oracle Database Licensing Information User Manual.

In this post, I discuss the key features of the different managed Oracle Database options for Oracle Cloud Infrastructure and compare them on the basis of performance, management, high availability, scalability, and cost. I also provide some prescriptive guidance to help you decide which option is a good choice for your use case.

Scope

Oracle provides a wide range of industry-leading on-premises and cloud-based solutions to meet the data management requirements of small- and medium-sized businesses as well as large global enterprises. This post covers only managed Oracle Database offerings for Oracle Cloud Infrastructure. It does not cover installing and operating Oracle (and other) databases directly on Oracle Cloud Infrastructure Compute instances, or Oracle Exadata Cloud at Customer for on-premises deployments. Oracle Cloud Infrastructure Autonomous Transaction Processing and Oracle Cloud Infrastructure Autonomous Data Warehouse are also not discussed here; they will be covered in a separate post. This post also does not cover other database options, such as Oracle Database Schema Cloud Service, Oracle NoSQL Database, or Oracle MySQL. You can find more information about these offerings and others in the Database documentation.

Hardware Options for Oracle Database in Oracle Cloud Infrastructure

Oracle Cloud Infrastructure supports several types of database (DB) systems that range in size, price, and performance. One way of classifying the systems is on the basis of their underlying compute options. You can provision databases in Oracle Cloud Infrastructure on Exadata machines, as well as on bare metal and virtual machine compute shapes.

Exadata DB systems consist of a quarter rack, half rack, or full rack of compute nodes and storage servers, tied together by a high-speed, low-latency InfiniBand network. Exadata DB systems are available on X6 and X7 machines.
Bare metal DB systems consist of a single bare metal server running on your choice of bare metal shapes. Locally attached NVMe storage is used for BM.DenseIO shapes.
Virtual machine DB systems are available on your choice of VM.Standard shapes.
A virtual machine DB system database uses Oracle Cloud Infrastructure Block Volume storage instead of local storage. You specify a storage size when you launch the DB system, and you can scale up the storage as needed at any time.

Managed Oracle Database Offerings in Oracle Cloud Infrastructure

Oracle offers the following managed database services running in Oracle Cloud Infrastructure:

Oracle Exadata Cloud Service
Oracle Cloud Infrastructure Database
Oracle Database Cloud Service

Oracle Exadata Cloud Service

This service offers Oracle Databases hosted on Oracle Exadata Database machines. Exadata Cloud Service configurations were first offered on Oracle Exadata X5 systems. More recent Exadata Cloud Service configurations are based on Oracle Exadata X6 or X7 systems, which are the two currently available options in Oracle Cloud Infrastructure. You can choose from quarter-rack, half-rack, and full-rack system configurations. With Exadata X7 shapes in Oracle Cloud Infrastructure, you can get up to 8 DB nodes with 720 GB RAM per node, up to 368 OCPUs, and 1,440 TB of raw storage (414 TB of usable storage) with unlimited I/Os.

Each Exadata Cloud Service instance is configured such that each database server of the Exadata system contains a single virtual machine (VM), called the domU, which is owned by the customer. Customers have root privileges for the Exadata database server domU and DBA privileges on the Oracle databases. Customers can configure the system as they like and load additional agent software on the Exadata database servers to conform to business standards or security monitoring requirements.

All of Oracle's industry-leading capabilities are included with Exadata Cloud Service, such as Database In-Memory, Real Application Clusters (RAC), Active Data Guard, Partitioning, Advanced Compression, Advanced Security, Database Vault, OLAP, and Spatial and Graph. Also included is Oracle Multitenant, which enables high consolidation density, rapid provisioning and cloning, efficient patching and upgrades, and significantly simplified database management. In Oracle Cloud Infrastructure, you can launch DB systems in different availability domains and configure Active Data Guard between them, along with using RAC for improved availability. Exadata Cloud Service is available through the Oracle Cloud My Services portal and the Oracle Cloud Infrastructure Console.

Performance: Highest-performance managed Oracle Database offering in the cloud.
Management: Best management features, including deployment, patching, backups, and upgrading, with rolling updates for multiple nodes.
High availability: Best HA, with support for 8-node RAC-based database clustering.
Scalability: Best scale-out option.
Cost: Exadata Cloud Service shapes are charged a minimum of 744 hours for the first month of the cloud service, whether or not you are actively using it, and whether or not you terminate that cloud service before using the entire 744 hours. For ongoing use of the same instance after the first month, you are charged for all active hours. Additional OCPUs are billed for active hours for the first month and ongoing use. This is generally the costliest managed DB option in Oracle Cloud Infrastructure, although for higher-end bare metal shapes with similar resources, the pricing is not far apart. When evaluated in terms of price/performance ratio, Exadata excels.
More information: Features, Pricing, Documentation
Guidance: Exadata Cloud Service is the most powerful Oracle Database offering, with all of the options, features, and Enterprise Manager Database Packs. Offering the highest performance, high availability, and scalability, this option is a great match for mission-critical and production applications. It is engineered to support OLTP, data warehouse, real-time analytic, and mixed database workloads at scale. It also typically costs more than other Oracle Cloud database options, but if you calculate in terms of price/performance ratio (as you should), the value it provides exceeds the other alternatives. With the introduction of X7-based options, you can now start at or scale down to zero cores, which makes the entry price point of Exadata Cloud Service lower than previous Exadata options. If your database needs to scale beyond 2 nodes, Exadata Cloud Service, which offers up to 8 nodes, is recommended. Another good use case is consolidating many databases on Exadata Cloud Service rather than deploying them on virtual machines. The other managed database offerings have limitations in terms of I/O throughput and storage capacity, which makes Exadata Cloud Service a good option when higher performance or capacity is required.
Note: Two additional Exadata services are not available on Oracle Cloud Infrastructure but are relevant for several use cases:
Exadata Cloud at Customer is similar to Oracle's Exadata Cloud Service but is located in customers' own data centers and managed by Oracle Cloud experts. This service enables a consistent Exadata cloud experience for customers, whether on-premises or in Oracle Cloud Infrastructure data centers. It enables customers to use Exadata in their own data centers and behind their own firewalls for reasons such as data sovereignty; legal, regulatory, privacy, or compliance requirements; sensitive data; custom security standards; extremely high SLAs; or near-zero latency requirements.
Oracle Database Exadata Express Cloud Service is a good entry-level service for running Oracle Database in Oracle Cloud. It delivers an affordable and fully managed Oracle Database 12c Release 2 experience, with enterprise options, running on Oracle Exadata. It's generally a good match for running line-of-business or SMB production apps. It's also great for rapidly provisioning dev, test, and quality assurance databases, and for quickly standing up multipurpose sandbox environments.
Oracle Cloud Infrastructure Database
The Oracle Cloud Infrastructure Database service is managed by the Database Control Plane running in Oracle Cloud Infrastructure and uses the platform's native APIs. It is available through the Oracle Cloud Infrastructure Console and integrates natively with all the Oracle Cloud Infrastructure platform features and services, such as compartments, audit, tagging, search, Identity and Access Management (IAM), Block Volume, and Object Storage. The Database service offers 1-node DB systems on either bare metal or virtual machines, and 2-node RAC DB systems on virtual machines. You choose the shape when you launch a DB system.
Bare Metal Shapes
Bare metal DB systems consist of a single bare metal server with locally attached NVMe storage. Each DB system can have multiple database homes, which can be different versions. Each database home can have only one database, which is the same version as the database home.
BM.DenseIO1.36: Provides a 1-node DB system (one bare metal server), with up to 36 CPU cores, 512 GB memory, and nine 3.2 TB (28.8 TB total) locally attached NVMe drives.
BM.DenseIO2.52: Provides a 1-node DB system (one bare metal server), with up to 52 CPU cores, 768 GB memory, and eight 6.4 TB (51.2 TB total) locally attached NVMe drives.
Virtual Machine Shapes
You can provision a 1-node DB system on one virtual machine or a 2-node DB system with RAC on two virtual machines. Unlike a bare metal DB system, a virtual machine DB system can have only a single database home. The database home has a single database, which is the same version as the database home. A virtual machine DB system database uses Oracle Cloud Infrastructure block storage instead of local storage. The number of CPU cores on an existing virtual machine DB system cannot be changed.
VM.Standard1 virtual machines: Provisioned on X5 machines. Five VM options are available, with 1 to 16 CPU cores and 7 GB to 112 GB memory.
VM.Standard2 virtual machines: Provisioned on X7 machines. Six VM options are available, with 1 to 24 CPU cores and 15 GB to 320 GB memory.
Performance: High performance with the bare metal option, and good performance with virtual machine shapes.
Management: Very good management features, including deployment and backups.
High availability: Offers 2-node RAC-based database clustering. Data Guard is also available.
Scalability: Very good scalability, with CPU and storage scaling, in the bare metal option. Good scalability, with storage scaling, in the virtual machine option.
Cost: The virtual machine option is available at a very good price point. The bare metal option is more expensive than the virtual machine option but generally less expensive than Exadata Cloud Service, depending on the shape and number of cores chosen.
More information: Features, Pricing, Documentation
Guidance: If you are just starting with Oracle Cloud and plan to mainly use Oracle Cloud Infrastructure services, you will find it easier to use the Oracle Cloud Infrastructure Database service because it natively integrates with the rest of the Oracle Cloud Infrastructure features. If you want to use RAC, the Database service is a good option because Oracle Database Cloud Service does not yet offer RAC for the databases that it deploys in Oracle Cloud Infrastructure. The maximum storage available on a virtual machine database in this option is 40 TB of remote NVMe SSD block volumes. For bare metal, it is 51.2 TB of raw NVMe SSD storage, which yields roughly 16 TB with two-way mirroring and roughly 9 TB with three-way mirroring. Using mirroring with the bare metal option is a best practice and highly recommended for any production workloads. If your storage needs exceed these options and you want a managed database offering without the need for techniques like sharding, Exadata, with up to 1440 TB of raw storage, becomes a good option.
Oracle Database Cloud Service
Oracle Database Cloud Service can deploy databases on Oracle Cloud Infrastructure, Oracle Cloud Infrastructure Classic, and Oracle Cloud at Customer. As I mentioned before, I am focusing only on Oracle Cloud Infrastructure-based offerings. Database Cloud Service relies on an underlying component of Oracle Cloud named Platform Service Manager (PSM) to provide its service console and its REST API.
As a result, the Database Cloud Service console has the same look and feel as the service consoles for other platform services, like Oracle GoldenGate Cloud Service and Oracle Java Cloud Service, and the endpoint structure and feature set of the Database Cloud Service REST API are similar to those of the REST APIs for other platform services. Database Cloud Service also integrates nicely with Identity Cloud Service for authentication and authorization. Database Cloud Service is available through the Oracle Cloud My Services portal. With Database Cloud Service on Oracle Cloud Infrastructure, you can provision two types of databases:
Single instance: A single Oracle Database instance and database data store hosted on one compute node.
Single instance with Data Guard standby: Two single-instance databases, one acting as the primary database and one acting as the standby database in an Oracle Data Guard configuration.
Outside of Oracle Cloud Infrastructure, Database Cloud Service can also provision 2-node clusters with RAC, two 2-node RAC clusters with one acting as a standby in a Data Guard configuration, and a 1-node database configured as a Data Guard standby. You can find more information about all possible Database Cloud Service configurations here. You must choose one of the following shapes when you use Database Cloud Service to launch a DB system in Oracle Cloud Infrastructure:
Bare Metal Shapes
Bare metal DB systems consist of a single bare metal server with remote block volumes.
BM.Standard1.36: Provides a 1-node DB system (one bare metal server), with up to 36 CPU cores, 256 GB memory, and up to 1 PB of remote block volumes.
BM.Standard2.52: Provides a 1-node DB system (one bare metal server), with up to 52 CPU cores, 768 GB memory, and up to 1 PB of remote block volumes.
Virtual Machine Shapes
You can provision a 1-node DB system on one virtual machine. (A 2-node RAC DB system on virtual machines is available through the Oracle Cloud Infrastructure Database service, but not yet through Database Cloud Service on Oracle Cloud Infrastructure.) A virtual machine DB system can have only a single database home, unlike a bare metal DB system. The database home has a single database, which is the same version as the database home. A virtual machine DB system database uses Oracle Cloud Infrastructure block storage instead of local storage. The number of CPU cores on an existing virtual machine DB system cannot be changed.
VM.Standard1 virtual machines: Provisioned on X5 machines. Five VM options are available, with 1 to 16 CPU cores and 7 GB to 112 GB memory.
VM.Standard2 virtual machines: Provisioned on X7 machines. Six VM options are available, with 1 to 24 CPU cores and 15 GB to 320 GB memory.
Performance: High performance with the bare metal option, and good performance with the virtual machine shapes.
Management: Best management features, including deployment, patching, backups, and upgrading.
High availability: A Data Guard-based standby option is available. RAC-based database clustering is not yet available via Database Cloud Service on Oracle Cloud Infrastructure.
Scalability: Very good scalability, with CPU and storage scaling, in the bare metal option. Good scalability, with storage scaling, in the virtual machine option.
Cost: The virtual machine option is available at a very good price point. The bare metal option is more expensive than the virtual machine option but generally less expensive than Exadata Cloud Service, depending on the shape and number of cores chosen.
More information: Features, Pricing, Documentation
Guidance: If you are currently using Database Cloud Service with Oracle Cloud Infrastructure Classic and are migrating workloads from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure, then continuing to use Database Cloud Service will be the easier path for migrating, and using the databases will feel familiar. It also offers more integrated management of existing PaaS services through the Oracle Cloud My Services portal. If you want to use RAC in Oracle Cloud Infrastructure, then Exadata Cloud Service or the Oracle Cloud Infrastructure Database service are good options, as discussed earlier. By extension, if you want nondisruptive rolling updates, then RAC or Exadata enables that, because one node at a time can be updated in those options. The maximum storage available on a virtual machine database in this option is 40 TB of remote NVMe SSD block volumes. For bare metal, depending on the machine type, storage is up to 51.2 TB of raw NVMe SSD, which yields roughly 16 TB with two-way mirroring and roughly 9 TB with three-way mirroring. Using mirroring with the bare metal option is a best practice and highly recommended for any production workloads. If your storage needs are bigger than these options and you want a managed database offering without the need for sharding, Exadata, with up to 1440 TB of raw storage, becomes a good option.
Summary
In this post, I provide a high-level overview of the three managed Oracle Database offerings in Oracle Cloud Infrastructure: Oracle Exadata Cloud Service, Oracle Cloud Infrastructure Database, and Oracle Database Cloud Service. I discuss the key features of these three options and compare them on the basis of performance, management, high availability, scalability, and cost. I also provide some prescriptive guidance to help you decide which option is a good choice for your use case. For more customized guidance, and for help with any Oracle products and offerings, contact your Oracle representative. Contact information is also available on this site.
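To make the provisioning flow concrete, here is a minimal, hedged sketch of launching a 1-node virtual machine DB system with the OCI CLI. All OCIDs, the password, the shape, and the key file path are placeholders, and the exact parameter set should be confirmed against the Database service CLI reference:

# Sketch only: launch a 1-node VM DB system (placeholder OCIDs and password).
oci db system launch \
  --compartment-id ocid1.compartment.oc1..example \
  --availability-domain "Uocm:PHX-AD-1" \
  --shape VM.Standard2.4 \
  --node-count 1 \
  --database-edition ENTERPRISE_EDITION \
  --db-name proddb \
  --admin-password 'ExamplePassw0rd_#' \
  --cpu-core-count 4 \
  --initial-data-storage-size-in-gb 256 \
  --display-name example-vm-db \
  --hostname examplevmdb \
  --subnet-id ocid1.subnet.oc1.phx.example \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub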


Oracle Cloud Infrastructure

Configuring a Custom DNS Resolver and the Native DNS Resolver in the Same VCN

One of the main objectives of the Oracle Cloud Infrastructure Blog is to serve as a forum for Cloud Solutions Architects and Product Managers to provide best practices, introduce new enhancements, and offer tips and tricks for migrating and running your most important workloads in the Oracle Cloud. I'm a Solutions Architect myself, and my job is to engage with customers from the design phase all the way through to implementation. And because I've had the privilege of working on so many customer deployments, we have visibility into issues and needs that span multiple accounts. The joy in this customer-vendor feedback loop comes in finding repeatable ways to solve issues, address needs, and improve our service offerings. In this blog post, I'll address a common issue that we've seen across a few customer accounts. The issue was caused by a configuration of the custom DNS resolver option in Oracle Cloud Infrastructure virtual cloud network (VCN) settings. This post explains the issue and how to resolve it. I want to acknowledge the contributions of the following team members from our Cloud Support and Operations teams for the speedy resolution of these support requests:
Ankita Singh, Associate Solution Engineer
Saulo Cruz, Principal Member of Technical Staff
Issue
When customers configure a subnet within a VCN, they can choose Internet and VCN Resolver or Custom Resolver when configuring the DHCP options. The default is Internet and VCN Resolver. If customers want to use their on-premises DNS servers (typically Microsoft Active Directory) across FastConnect or an IPSec VPN, they can select Custom Resolver. (For more information about the options, see the Networking documentation.) Generally, most enterprise customers put a DNS relay in the VCN within a shared services subnet, and the subnets within the VCN typically reflect this configuration. This works great for the applications. However, the issue starts when customers try to provision an Oracle Database Cloud Service (DBCS) instance by using a prebuilt Oracle Database image on a subnet that is using the Custom Resolver DHCP option. The typical error message is as follows:
InvalidParameter - VCN RESOLVER FOR DNS AND DNS LABEL must be enabled for all subnets used to launch the specified shape
This message goes away when the customer changes the DNS in the DHCP options to Internet and VCN Resolver, but that change breaks other applications. The issue happens because of the recursive nature of the native VCN resolver.
Workaround
We have found a workaround for this problem when the customer is using prebuilt DB images for DBCS. The following diagram describes the architecture. To implement this workaround, perform the following steps:
1. Use Terraform to create the VCN and required subnets. For instructions, see the VCN Overview and Deployment white paper.
2. Select the VCN in which the Database instance is required to be launched.
3. Select the Internet and VCN Resolver DHCP option (which is the default option).
4. Launch the Database instance and make the required configuration for the instance.
5. After the Database instance is launched, go to the DHCP options, select Custom Resolver, and enter the customer's DNS server IP address. (See the CLI sketch at the end of this post.)
6. Instantiate the DNS relay server (or Microsoft Active Directory) in the shared resources subnet (referred to in the preceding diagram as the shared subnet). Keep the DHCP option as Internet and VCN Resolver (the default).
7. In all other application subnets, select the Custom Resolver DHCP option and enter the customer's DNS server IP address.
Note: Ensure that there is connectivity back to the customer's DNS server or servers from the Oracle Cloud. Also ensure that you populate the DNS Label field when creating the VCN, or it will take the default value. This configuration also works across VCNs in the same region or across regions. For more information, see the Automate Oracle Cloud Infrastructure VCN Peering with Terraform blog post. Hopefully this post will help you avoid the rework involved in tearing down VCNs and subnets and re-creating them. If you want more information about integration with Microsoft Active Directory, Infoblox, or Bluecat, please leave a comment.
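As referenced in step 5 of the workaround, here is a minimal, hedged sketch of pointing a subnet's set of DHCP options at a custom DNS server with the OCI CLI; the DHCP options OCID and the server address are placeholders:

# Sketch only: switch the DHCP options used by the application subnets
# to a customer DNS server (placeholder OCID and IP address).
oci network dhcp-options update \
  --dhcp-id ocid1.dhcpoptions.oc1.phx.example \
  --options '[{"type": "DomainNameServer", "serverType": "CustomDnsServer", "customDnsServers": ["10.0.2.11"]}]'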


Customer Stories

Image Recognition Software Startup Takes on Big Players with Oracle Cloud Infrastructure

Image recognition software provider Netra is a fairly small player in the artificial intelligence (AI) market, but the company is using a high-performance, multicloud computing strategy to take on big players such as Google Cloud Vision and Amazon Rekognition. Netra helps businesses make sense of the tsunami of digital imagery on the internet, said CEO and founder Richard Lee, who shared his company's story on stage at the O'Reilly Velocity Conference in San Jose, California. Specifically, Netra uses computer vision, AI, and deep learning to help brands and agencies reach and better understand their ideal target audiences. The company's image recognition software analyzes billions of consumer photos to identify interests, life events, demographics, and brand preferences. "We provide image recognition as a service to our customers, and we deliver that through an API that gives access to our deep learning models, which are trained up on over 7,500 classifiers today," Lee said. "So, this is a little bit more complex than Hot Dog or Not Hot Dog." For those who don't watch HBO's Silicon Valley, this refers to an app on the show that identifies whether an image is of a hot dog or not. The deep learning models are deployed on Oracle Cloud Infrastructure and built on top of Apache Kafka, which is open source stream-processing software, plus Docker and Kubernetes. Netra's technology works by identifying objects of interest and looking for pattern matches around specific clusters of pixels. "For example, our humans model may detect [a human face] and then send it to our humans daemon, which then classifies age, gender, and ethnicity," Lee explained. "Likewise, our brands model looks for the presence of a logo. … And then lastly, our context and object model detects and classifies what else is in the image." The image recognition software accomplishes all this in about 200 milliseconds.
Why Oracle Cloud Infrastructure?
Netra's customer base has recently grown to include large enterprises, and with that comes a higher volume of images and videos to analyze, as well as stricter service-level agreements. The Boston-based company is counting on Oracle Cloud Infrastructure to help it meet these increasing demands. "Fundamentally, [Oracle Cloud Infrastructure] gives a startup like us access to machines that would cost us thousands to purchase on our own, as well as the flexibility to scale up and down as needed," Lee said. "Oracle gives us really strong value in terms of pricing and performance." Lee said he likes the flexibility that Oracle Cloud Infrastructure provides, especially when there is a spike in demand for his company's services. "If we get hit with a couple million images … we're able to spin up a new instance almost within minutes, to be able to work the queue down," Lee added. "Once the queue gets below a certain threshold, we're able to spin that down to manage our costs." The deep learning models that Netra deploys in the Oracle cloud are very complex, and the amount of compute power it takes to process photo and video is "pretty intense," Lee said. "We are always waiting for the next-generation GPU chips to be released," he said. "We're constantly pushing the envelope on the processing side, and we're always looking for the highest-performance hardware available. And from what we've seen, Oracle Cloud Infrastructure is the best price/performance on the bare metal side so far."
Oracle gives startups such as Netra the computing horsepower necessary to train deep learning models and compete with some of the biggest players out there. Running AI models in the cloud also gives Netra more bandwidth to focus on its core value proposition. "With Oracle Cloud Infrastructure, it's not a matter of how big your capital budget is, because it's kind of democratized for everybody," Lee said. "Now it's more about: How good are your computer vision models? What kind of solutions can they build? In that case, it's a much fairer fight against competitors, and we're excited to be able to participate. That would have been impossible before the advent of cloud and really the cost/performance that Oracle Cloud Infrastructure has provided to us."
Accelerating to the Cloud
Netra also takes part in the Oracle Cloud Startup Accelerator program, which helps startups get up and running in a short period of time. Program participants can take advantage of several benefits, including free credits for Oracle Cloud Infrastructure, world-class mentoring and consulting, state-of-the-art cloud technology, coworking spaces, and access to Oracle customers and partners. Lee especially likes the fact that his company can now get noticed by hundreds of thousands of Oracle customers, and the free credits certainly don't hurt. "It's like nondilutive venture capital," he said.
Sage Advice
Lee advised that other startups considering a move to an enterprise cloud platform should take advantage of the free credits that cloud providers offer. "There is a lot of money to get started and build apps and to actually run high-performance services that are effectively free funding right now," he said. "So as a startup, you can really extend your runway with these credits. But in order to do that, you have to be smart about your architecture and how you deploy it. For example, we've used Docker containers or Kubernetes to be agile to be able to deploy across multiple providers and services." And don't forget to look for the best solutions in terms of pricing and performance. "I think it's an amazing time to start a company," he said. "You need fewer resources than ever before, and you can scale faster than ever before through a lot of these startup-type programs."


Oracle Cloud Infrastructure

PCI Compliance on Oracle Cloud Infrastructure is EASY!

Oracle Cloud Infrastructure services have the PCI DSS Attestation of Compliance. The services covered are Compute, Networking, Load Balancing, Block Volumes, Object Storage, Archive Storage, File Storage, Data Transfer Service, Database, Exadata, Container Engine for Kubernetes, Container Registry, FastConnect, and Governance. In this blog post, we discuss the guidelines that help Oracle Cloud Infrastructure customers achieve PCI compliance for workloads running on Oracle IaaS.
Background
Our guidelines for achieving PCI compliance fall on the shared-responsibility spectrum of the cloud security continuum. The following diagram describes the separation between responsibility for security "of" the cloud and security "on" the cloud. As a customer, you are responsible for securing your workloads on Oracle Cloud Infrastructure. In some cases, you need to configure the services that Oracle provides. The responsibility is shared: Oracle maintains the services infrastructure, and the customer consumes the services and configures the controls according to their security and compliance requirements. The following picture from the International Information System Security Certification Consortium (ISC2) clarifies the areas of responsibility for IaaS, PaaS, and SaaS. We follow Oracle's 7 Pillars of Trusted Secure Enterprise Platform to develop solutions that meet the customer's security and compliance requirements. We will discuss this more in our next blog post on Security Solutions Architecture. For now, let's focus on PCI on Oracle Cloud Infrastructure.
Recommended High-Level Solutions for PCI Compliance on Oracle Cloud Infrastructure
We follow the latest official publication from the PCI Security Standards Council®: Requirements and Security Assessment Procedures version 3.2.1 (May 2018). As per the document, there are 12 detailed requirements across 6 sections that cover how to:
Build and Maintain a Secure Network and System
Protect Cardholder Data
Maintain a Vulnerability Management Program
Implement Strong Access Control Measures
Regularly Monitor and Test Networks
Maintain an Information Security Policy
There are additional requirements for shared hosting providers like Oracle, and we have already met those requirements through our attestation. Let's dive into the solutions.
Section 1: Build and Maintain a Secure Network and System
Requirement 1: Install and maintain a firewall configuration to protect cardholder data.
Solution: Use Oracle Cloud Infrastructure security lists (Oracle Cloud Infrastructure-managed, subnet-specific firewall rules). In addition, you can download Fortinet or Check Point firewall images from our Marketplace and provision firewall appliances on Oracle Cloud Infrastructure.
Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters.
Solution: Review the guidance in the PCI document. In addition, we have detailed documentation on how to manage user credentials on Oracle Cloud Infrastructure.
Section 2: Protect Cardholder Data
Requirement 3: Protect stored cardholder data.
Solution: This involves protecting data at rest. By default, Oracle Cloud Infrastructure Block and Object Storage are encrypted.
Additionally, with our upcoming KMS, or any supported HSM, Oracle Wallet, Oracle Key Vault, and third-party vault offerings, we give you unprecedented flexibility around key and secret management. For data security, we provide Transparent Data Encryption (TDE) and column-level encryption.
Requirement 4: Encrypt transmission of cardholder data across open, public networks.
Solution: All our control and management plane communications are protected with TLS, which is necessary for the PCI DSS attestation. We also recommend using TLS (not SSL) and front-ending the application with our load balancers as and when required. Use of SSH and IPSec VPN along with FastConnect is highly recommended.
Section 3: Maintain a Vulnerability Management Program
Requirement 5: Protect all systems against malware and regularly update antivirus software or programs.
Solution: Use our Dyn Malware Protection service to block malware at the edge of your logical network before it can infect web applications running on Oracle Cloud Infrastructure. Additionally, ensure that antivirus software is deployed at the OS level.
Requirement 6: Develop and maintain secure systems and applications.
Solution: We have many recommendations for developing and maintaining secure systems. Have a patch management policy in place, and consider using a managed cloud service provider for this purpose. If you're looking for a managed cloud service provider, Oracle Managed Cloud Services is an option, along with many of our Oracle Cloud Infrastructure MSP partners.
Section 4: Implement Strong Access Control Measures
Requirement 7: Restrict access to cardholder data by business need-to-know.
Requirement 8: Identify and authenticate access to system components.
Solution: Review the documentation on IAM access controls (compartments and policies); a sample policy sketch appears at the end of this post. In addition, we suggest using Oracle CASB and Oracle IDCS for further security controls around access policies. For Oracle Container Engine for Kubernetes, our solution is to use Kubernetes role-based access control in addition to IAM. Look out for a future blog post on Kubernetes security on Oracle Cloud Infrastructure.
Requirement 9: Restrict physical access to cardholder data.
Solution: This is covered under our physical security controls for the data centers at the availability domain and region level. We have ISO 27001 certification as well as SOC 1, SOC 2, and SOC 3 attestations, which provide the basis for control testing relevant to our PCI DSS Attestation of Compliance.
Section 5: Regularly Monitor and Test Networks
Requirement 10: Track and monitor all access to network resources and cardholder data.
Requirement 11: Regularly test security systems and processes.
Solution: Use Oracle CASB and Oracle Cloud Infrastructure Audit services for monitoring. Integrate CASB and audit logs with existing SIEM solutions. In addition, schedule regular penetration testing of environments based on Oracle Cloud Infrastructure, using the following links: Pen Testing on OCI, Schedule Pen Test via UI. More telemetry and monitoring features are coming, and our teams are working on an automated OpenVAS solution.
Section 6: Maintain an Information Security Policy
Requirement 12: Maintain a policy that addresses information security for all personnel.
Solution: While customers are responsible for their security policies, we are happy to help in any way we can. Most customers have existing security policies, and our team can help with cloud-specific (IaaS, PaaS, or SaaS) perspectives.
Here is a list of security policy templates per industry vertical from the SANS Institute. In conclusion, I hope these steps simplify the road to PCI compliance for your environments on Oracle Cloud Infrastructure. Look out for more blog posts, white papers, and Infrastructure Security as Code (ISaC) guidance for security and compliance in the cloud to ease your migration to Oracle Cloud.
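To make Requirements 7 and 8 concrete, here is a minimal sketch of IAM policy statements that scope access to a hypothetical compartment holding cardholder-data workloads; the group and compartment names are illustrative only, not prescriptive:

Allow group PCI-DBAdmins to manage database-family in compartment PCI-CDE
Allow group PCI-Auditors to read audit-events in compartment PCI-CDE
Allow group PCI-NetAdmins to manage virtual-network-family in compartment PCI-CDE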


Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Chris Riggin

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Chris Riggin of Verizon. Chris is the lead Oracle Cloud Infrastructure Certified Cloud Architect for Verizon. He has been with Verizon since 1999 in several IT engineering and architecture capacities, but he has focused on cloud design since 2012. Chris holds a patent for designing and implementing a first-ever cloud management system for heterogeneous platforms and services. He regularly presents at several events and technology summits, including more than 10 speaking engagements at Oracle Open World. His work on Oracle Cloud Infrastructure technology played a key role in enabling it as a highly cost-competitive, scalable, and stable environment for his organization. Today, Chris continues to expand Oracle Cloud Infrastructure deliverables to keep up with business demands and future trending technologies, always maintaining an ambitious three-to-five-year road map. Greg: Chris, how did you prepare for the certification? Chris: I went through the training curriculum posted in Oracle University and followed the posted path. Following the path and attending some of the instructor-led courses helped me gain, or in some cases reinforce, at least 85% of the knowledge I needed to pass the exam. Also, working with a live Oracle Cloud Infrastructure (OCI) tenancy helped me identify any gaps I may have had in my skill set, as I was able to test many of the features within that tenancy. Greg: How long did it take you to prepare for the exam? Chris: Fortunately, my job was 100% OCI at the time, but I still needed at least two weeks where I was able to focus solely on exam preparations and make sure that I had the knowledge and skills necessary for the exam. Unfortunately, life got in the way and prevented me from putting in as much time and effort as I had hoped. I didn't feel I was as prepared as I would have liked, so a day before the exam, I tried to reschedule. Unfortunately, when I called Pearson VUE, because I was within 24 hours of the exam delivery, I was not able to change the appointment. I literally was forced to cram several missed days of studies into the very last day before the exam! Turns out it was enough, or I just had plenty of experience, because I passed! The moral of this story is that you should be aware that you cannot change your exam appointment within 24 hours of when it's scheduled. Greg: How is life after getting certified? Chris: As the lead architect, earning the certification has reinforced my position as the subject matter expert. Now when I speak about OCI, I speak with authority. Before receiving my certification, there were many different opinions on how to proceed, and it seemed no one had the credentials to lead the discussion. After it became known I had earned the certification, people immediately began to listen to what I had to say. Since I've posted the digital badge in my signature, more than half the folks involved with OCI have gained an interest in taking the exam. They continually reach out to me for assistance, asking to be pointed in the right direction as to what to study, and even go so far as to ask for help after hours to prepare them for the exam. Greg: Any other advice you'd like to share? Chris: Do not focus solely on infrastructure.
Make sure you are aware of and understand all the service offerings across the overall environment and exhibit a strong knowledge of cloud technologies and concepts outside of OCI. You should understand database, not necessarily to the level of expert, but you should understand some of the inherent services and service levels provided by Oracle. Learn about the OCI PaaS and SaaS offerings that are available. Understand DNS, connecting to the gateways, and networking, and don't forget Terraform! Finally, I would strongly suggest that you certify as soon as possible! The exam is only going to get more difficult as OCI continues to grow and mature.
Please subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam.
Greg Hyman, Principal Program Manager, Oracle Cloud Infrastructure Certification
greg.hyman@oracle.com
Twitter: @GregoryHyman | LinkedIn: GregoryRHyman
Associated links:
Oracle Cloud Infrastructure 2018 Architect Associate exam
Oracle Cloud Infrastructure 2018 Architect Associate study guide
Oracle Cloud Infrastructure 2018 Architect Associate practice test
Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam
Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series are listed under Greg's blog page.


Oracle Cloud Infrastructure

Customize Block Volume Backups with the Oracle Cloud Infrastructure CLI

It is a common IT operations practice to manage the data protection of compute instances through the command line and scripts. This post provides detailed instructions on how to customize your application compute instance's block volume backup by using the Oracle Cloud Infrastructure Command Line Interface (CLI). With the CLI, you can perform a block volume backup based on your schedule and remove old backups based on your retention period.
Environment
You can run this customized block volume backup task from a centralized system or inside the application compute instance itself. In the example in this post, the task is run inside a compute instance and is created as a bash shell script. Because this task runs inside a compute instance, we recommend using the instance principal feature to avoid storing user credentials locally.
Volume Group
For this customized volume backup script, we recommend using the volume group feature to create block volume backups. This feature enables you to group multiple block volumes and create a collection of volumes from which you can create consistent volume backups and clones. You can restore an entire group of volumes from a volume group backup.
Customized Volume Backup Script
Before you can run this customized script, you need to install the CLI on your compute instance. Detailed instructions for installing the CLI are located in the documentation.
Step 1
The first step of this script gets required information about where this compute instance is located, such as the availability domain, compartment OCID, and instance OCID. You can get this information through the metadata of the compute instance.

# Get availability domain
AD=$(curl -s http://169.254.169.254/opc/v1/instance/ | grep availabilityDomain | awk '{print $3;}' | awk -F\" '{print $2;}')
echo "AD=$AD"

# Get compartment OCID
COMPARTMENT_ID=$(curl -s http://169.254.169.254/opc/v1/instance/ | grep compartmentId | awk '{print $3;}' | awk -F\" '{print $2;}')
echo "COMPARTMENT_ID=$COMPARTMENT_ID"

# Get instance OCID
INSTANCE_ID=$(curl -s http://169.254.169.254/opc/v1/instance/ | grep ocid1.instance | awk '{print $3;}' | awk -F\" '{print $2;}')
echo "INSTANCE_ID=$INSTANCE_ID"

Step 2
The second step of the script gets the tagging information from the boot volume of the compute instance. The script then uses the same tagging information to create the volume group and its backups. With the same tags, you can easily sort or filter your volumes and their backups.
# Get the tags of the boot volume of this instance.
# We use these tags for the volume group created for this instance's
# boot volume and other attached volumes.

# Get boot volume defined tags
BOOTVOLUME_DEFINED_TAGS=$(oci compute boot-volume-attachment list --compartment-id=$COMPARTMENT_ID --availability-domain=$AD --instance-id=$INSTANCE_ID --auth instance_principal | jq '.data[] | ."defined-tags"')

# Get boot volume freeform tags
BOOTVOLUME_FREEFORM_TAGS=$(oci compute boot-volume-attachment list --compartment-id=$COMPARTMENT_ID --availability-domain=$AD --instance-id=$INSTANCE_ID --auth instance_principal | jq '.data[] | ."freeform-tags"')

Note: The jq command is very useful for parsing the JSON output from the CLI.
Step 3
The third step of the script gets the boot volume OCID and a list of the attached block volumes' OCIDs for the compute instance. These OCIDs are used to construct the JSON data for the volume group create command.

# Get boot volume OCID
BOOTVOLUME_ID=$(oci compute boot-volume-attachment list --compartment-id=$COMPARTMENT_ID --availability-domain=$AD --instance-id=$INSTANCE_ID --auth instance_principal | grep boot-volume-id | awk '{print $2;}' | awk -F\" '{print $2;}')
echo $BOOTVOLUME_ID

# Get a list of attached block volumes
BLOCKVOLUME_LIST=($(oci compute volume-attachment list --compartment-id=$COMPARTMENT_ID --availability-domain=$AD --instance-id=$INSTANCE_ID --auth instance_principal | grep volume-id | awk '{print $2;}' | awk -F\" '{print $2;}'))

# Construct the JSON for the volume group create command
LIST="[\"$BOOTVOLUME_ID\""
for volume in ${BLOCKVOLUME_LIST[*]}
do
   LIST="${LIST}, \"${volume}\""
done
LIST="${LIST}]"
SOURCE_DETAILS_JSON="{\"type\": \"volumeIds\", \"volumeIds\": $LIST}"

Step 4
The fourth step of the script checks whether an existing volume group has already been created by the script. If there is no existing volume group, the script creates the volume group based on the information from the previous steps, such as the list of OCIDs of the boot volume and all the attached block volumes. If there is an existing volume group, the script checks whether there are any changes to the member volumes inside the volume group, for example, new block volumes attached to the compute instance. If there are changes, the script updates the volume group with the latest volumes.

# Check whether there is an existing available volume group created by the script.
VOLUME_GROUP_NAME="volume-group-$INSTANCE_ID"
VOLUME_GROUP_ID=$(oci bv volume-group list --compartment-id $COMPARTMENT_ID --availability-domain $AD --display-name $VOLUME_GROUP_NAME --auth instance_principal | jq '.data[] | select(."lifecycle-state" == "AVAILABLE") | .id' | awk -F\" '{print $2;}')

echo "VOLUME_GROUP_ID=$VOLUME_GROUP_ID"

# If the volume group does not exist, create a new volume group
if [ -z "$VOLUME_GROUP_ID" ]; then

# Create the volume group
VOLUME_GROUP_ID=$(oci bv volume-group create --compartment-id $COMPARTMENT_ID --availability-domain $AD --source-details "$SOURCE_DETAILS_JSON" --defined-tags="$BOOTVOLUME_DEFINED_TAGS" --freeform-tags="$BOOTVOLUME_FREEFORM_TAGS" --display-name=$VOLUME_GROUP_NAME --wait-for-state AVAILABLE --max-wait-seconds 24000 --auth instance_principal | grep ocid1.volumegroup | awk '{print $2;}' | awk -F\" '{print $2;}')

echo "VOLUME_GROUP_ID=$VOLUME_GROUP_ID"

else
# The volume group exists; check whether there are any changes to the attached block volumes
VOLUME_LIST_IN_VOLUME_GROUP=$(oci bv volume-group get --volume-group-id $VOLUME_GROUP_ID --auth instance_principal | jq '.data | ."volume-ids"' | grep ocid1.volume | awk -F\" '{print $2;}')
# Compare with the attached block volume list
# (note: ${BLOCKVOLUME_LIST[*]} expands the whole array, not just its first element)
LIST3=$(echo ${BLOCKVOLUME_LIST[*]} $VOLUME_LIST_IN_VOLUME_GROUP | tr ' ' '\n' | sort | uniq -u)
if [ -z "$LIST3" ]; then
    echo "no change for volume group"
else
    # Update the volume group with the updated volume OCID list
    VOLUME_GROUP_ID=$(oci bv volume-group update --volume-group-id $VOLUME_GROUP_ID --volume-ids "$LIST" --defined-tags="$BOOTVOLUME_DEFINED_TAGS" --freeform-tags="$BOOTVOLUME_FREEFORM_TAGS" --display-name=$VOLUME_GROUP_NAME --wait-for-state AVAILABLE --max-wait-seconds 24000 --auth instance_principal | grep ocid1.volumegroup | awk '{print $2;}' | awk -F\" '{print $2;}')
fi
fi

Step 5
The last step of the script creates the backup for this volume group. The script uses the same tags, defined-tags and freeform-tags, from the boot volume of the compute instance. However, you can define your own customized tags as needed.

# Create the backup
VOLUME_GROUP_BACKUP_NAME="Volume-group-backup-$VOLUME_GROUP_ID"

VOLUME_GROUP_BACKUP_ID=$(oci bv volume-group-backup create --volume-group-id $VOLUME_GROUP_ID --defined-tags="$BOOTVOLUME_DEFINED_TAGS" --freeform-tags="$BOOTVOLUME_FREEFORM_TAGS" --display-name=$VOLUME_GROUP_BACKUP_NAME --wait-for-state AVAILABLE --max-wait-seconds 24000 --auth instance_principal | grep ocid1.volumegroupbackup | awk '{print $2;}' | awk -F\" '{print $2;}')

echo "VOLUME_GROUP_BACKUP_ID=$VOLUME_GROUP_BACKUP_ID"
echo "VOLUME_GROUP_BACKUP_NAME=$VOLUME_GROUP_BACKUP_NAME"

You can configure a cron job to run this customized volume backup script according to your backup schedule.
Volume Backup Retention Script
Based on your requirements, you might need to define a customized and flexible retention period for your volume backups. For example, say you want the retention period of the volume backups to be 14 days. The following example script checks the creation times of your volume backups and then deletes the old backups beyond the retention period. You can configure and run this script in your cron job based on how often you want to conduct a backup retention check.
# List all volume group backups older than the retention period.
# The retention period is passed into jq so that changing RETENTION_DAYS
# is enough to change the policy.
RETENTION_DAYS=14
VOLUME_GROUP_BACKUP_LIST=$(oci bv volume-group-backup list --compartment-id $COMPARTMENT_ID --volume-group-id $VOLUME_GROUP_ID --display-name=$VOLUME_GROUP_BACKUP_NAME --auth instance_principal | jq -r --argjson days "$RETENTION_DAYS" 'def daysAgo(d): (now | floor) - (d * 86400); .data[] | select(."time-created" | sub("\\.[0-9]+[+][0-9]+[:][0-9]+$"; "Z") | fromdateiso8601 < daysAgo($days)) | .id')

echo $VOLUME_GROUP_BACKUP_LIST

# Delete each backup that is past the retention period
for backup in ${VOLUME_GROUP_BACKUP_LIST[*]}
do
   DELETED_VOLUME_GROUP_BACKUP_ID=$(oci bv volume-group-backup delete --volume-group-backup-id ${backup} --force --wait-for-state TERMINATED --max-wait-seconds 24000 --auth instance_principal | grep ocid1.volumegroupbackup | awk '{print $2;}' | awk -F\" '{print $2;}')
   echo $DELETED_VOLUME_GROUP_BACKUP_ID
done
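To put the two scripts on a schedule, here is a minimal crontab sketch, assuming the scripts are saved at the hypothetical paths /home/opc/volume_backup.sh and /home/opc/backup_retention.sh:

# Sketch only (hypothetical paths): nightly backup at 2:00 AM and a daily
# retention check at 3:00 AM; add these entries with crontab -e.
0 2 * * * /home/opc/volume_backup.sh >> /home/opc/volume_backup.log 2>&1
0 3 * * * /home/opc/backup_retention.sh >> /home/opc/backup_retention.log 2>&1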


Events

Oracle Cloud Infrastructure Makes Debut on Gartner's IaaS Scorecard

Recently Gartner published their latest round of In Depth Assessments, a series of scorecards for the major IaaS vendors: Amazon Web Services, Microsoft Azure, Google Cloud Platform, and now Oracle Cloud Infrastructure. We're excited to have taken part in this comprehensive evaluation as one of the Big 4 IaaS players. Gartner In Depth Assessments evaluate cloud vendors' ability to address Gartner's list of required, preferred, and optional criteria for production workloads running in the cloud. This year, their evaluation was based on 263 criteria points spanning everything from core computing, storage, and networking capabilities to integration with what are traditionally considered PaaS capabilities, like database and data warehousing. Read Gartner's blog post to find out how the vendors scored. Our inclusion in these assessments is strong validation from top industry analysts that Oracle is firmly established among the leading hyperscale IaaS players, and that we're being recognized for our rapid pace of adding key new services while delivering the best price/performance equation in the industry. This aligns well with a recent RedMonk report that showed how Oracle offers the most compute and memory, dollar for dollar, when compared with other clouds. Gartner will dive further into the results of these In Depth Assessments during their popular Cloud War sessions at the Gartner Catalyst Conference later this month in San Diego. If you're attending, be sure to check out the bake-off on Sunday, August 19, where Oracle solutions architects will demonstrate how you can deploy a 3-tier, highly available application environment on our cloud in 10 minutes. In addition, on Tuesday, August 21, at 10:45 AM PT, Kash Iftikhar, our VP of Product Management and Strategy, will be joined on stage by Sherri Hammons, CTO of Beeline, to discuss how Beeline has been able to optimize its critical applications in the Oracle Cloud.


Oracle Cloud Infrastructure

Why I'm Betting on Oracle Cloud Infrastructure

Oracle? Seriously? That's the question people asked when I told them my destination after leaving IBM. A few months back, when I started looking for a career change, some good opportunities came my way. Some required me to move across the country to Seattle. Some required me to move to Silicon Valley. A few good local opportunities in the Boston area also came up. I had to make a hard choice. What do I want? Money? Respect? An important title? A strong company culture? After a lot of thought, I chose Oracle. Seriously. I have joined the company to lead strategy, vision, innovation, and evangelism in cloud infrastructure, edge services, and emerging technologies. Let me tell you why. About three years ago, when I was leading emerging tech strategy for IBM, we were working on technology to make Internet of Things (IoT) and edge devices collect, procure, analyze, share, decide, and act on data in a secure, autonomous, and automated fashion. One of the companies that my then-boss (and still-mentor) asked me to look at was Dyn. I argued with him, saying, "Dyn is a DNS resolution company. What value are they going to add to our mission and vision?" He said, "Trust me." I still remember driving up to Manchester, New Hampshire, thinking, "Why am I going there?" But I also remember thinking about the fact that Dyn made a $100-million business out of DNS resolution! I at least had to learn about their go-to-market brilliance. As I became familiar with the company, I learned that Dyn is more than just DNS (more on that later). Their business acumen made me like them, and their integrity and culture made me like them even more.
Integrity
Dyn was facing tough competition from niche players who were offering DNS resolution services for almost free. They had to differentiate their value proposition to offer more to their customers than those competitors did, and they were consistently winning those battles. On Oct. 21, 2016, everything changed. A massive, worldwide distributed denial of service (DDoS) attack was launched against Dyn's DNS resolution service, temporarily disrupting access to much of the internet, including major sites such as Twitter, Amazon, Netflix, Spotify, PayPal, Salesforce, and GitHub. This was not your garden-variety DDoS attack; it relied on tens of millions of IoT devices compromised via the infamous Mirai botnet, and it was only the second known attack of this kind. The first attack took down the blog of my favorite investigative security writer, Brian Krebs. When the hackers took him on, Akamai decided to stop hosting his blog, because it was disrupting their other customers. Everyone was watching closely to see how Dyn would respond to this new attack. The company could have surrendered to the hackers and asked for mercy. Instead, it fought back. Essentially, there were three attacks that day. Dyn mitigated the first in a couple of hours, the second in less than an hour, and the third before it happened. After that, the hackers decided to move on. The incident happened while Dyn was being acquired by Oracle. Considering the risk, Oracle could have just walked away. The fact that it didn't demonstrated its character and the value of Dyn. And unlike some major corporations who have tried to sweep security breaches under the rug, Dyn talked openly about the attack. That transparency helped other major companies prepare for future attacks and helped Dyn's reputation not only survive but thrive in the aftermath.
As the Chinese proverb says, "Failure is not about falling down, but refusing to get back up."
Culture
Whenever I visited Dyn's Manchester office, everyone seemed to be having fun. The main attraction of the office was the slide (yes, a slide, like kids use in the park). I slid down that slide (in a suit!) the very first time I visited the office, and I still have videos to prove it. When I sent those videos to my kids, they asked, "What are you waiting for? When are you starting there?" In addition to the slide, the office had beer taps with rotating selections from local microbreweries, a big gong hanging in front of the slide, and a bunch of great restaurants within walking distance. But above all, the things that really stood out to me were the respect that Dyn employees had for others and their willingness to always learn.
Vision
When I was looking for a new career opportunity, I dove deeper into Oracle Dyn. It's part of the Seattle-based Oracle Cloud Infrastructure unit, which has developed an identity and culture similar to that of the original Dyn. The internet has become the most essential utility. Almost all major corporations use the internet to move their major, sensitive, and mission-critical workloads. For that to happen, every enterprise needs efficient and secure connectivity, plus full visibility into internet performance. Cloud is not just about compute; it is about data. When you are building an enterprise-grade cloud, consider the following questions:
Is your cloud equipped to support enterprise data?
Is your cloud provider flexible enough to allow you to build truly cloud native applications, regardless of your cloud deployment model?
Is your provider secure from the edge to the core, so that you can confidently send highly sensitive workloads to the cloud for processing?
Can your provider support bare metal, virtual machines, serverless, Functions as a Service, containers, and a flexible orchestration system?
Does your provider offer complete visibility into the internet portion of your network?
Oracle Cloud Infrastructure does all these things, helping customers redefine what an enterprise version of the internet truly is. That's why I'm excited to join the team. And yes, we are hiring. Big time! Reach out to me on LinkedIn or Twitter if you want to immerse yourself in this journey.


Oracle Cloud Infrastructure

Making It Easier for Organizations to Move Oracle-Based SAP Applications to the Cloud

For decades, Oracle has provided a robust, scalable, and reliable infrastructure for SAP applications and customers. For over 30 years, SAP and Oracle have worked closely to optimize Oracle technologies with SAP applications to give customers the best possible experience and performance. The most recent certification of SAP Business Applications on Oracle Cloud Infrastructure makes sense within the context of this long-standing partnership. As this blog post outlines, SAP NetWeaver® Application Server ABAP/Java is the latest SAP offering to be certified on Oracle Cloud Infrastructure, providing customers with better performance and security for their most demanding workloads, at a lower cost.
Extreme Performance, Availability, and Security for SAP Business Suite Applications
Oracle works with SAP to certify and support SAP NetWeaver® applications on Oracle Cloud Infrastructure, which makes it easier for organizations to move Oracle-based SAP applications to the cloud. Oracle Cloud enables customers to run the same Oracle Database and SAP applications, preserving their existing investments while reducing costs and improving agility. Unlike products from first-generation cloud providers, Oracle Cloud Infrastructure is uniquely architected to support enterprise workloads. It is designed to provide the performance, predictability, isolation, security, governance, and transparency required for your SAP enterprise applications. And it is the only cloud optimized for Oracle Database. Run your Oracle-based SAP applications in the cloud with the same control and capabilities as in your data center; there is no need to retrain your teams. Take advantage of performance and availability equal to or better than on-premises. Deploy your highest-performance applications (those that require millions of consistent IOPS and millisecond latency) on elastic resources with pay-as-you-go pricing. Benefit from simple, predictable, and flexible pricing with universal credits. Manage your resources, access, and auditing across complex organizations. Compartmentalize shared cloud resources by using simple policy language to provide self-service access with centralized governance and visibility. Run your Oracle-based SAP applications faster and at lower cost.
Moving SAP Workloads: Use Cases
There are a number of different editions and deployment options for SAP Business Suite applications. As guidance, we are focusing on the following use cases:
Develop and test in the cloud: Test new customizations or new versions, validate patches, and perform upgrades and point releases.
Backup and disaster recovery in the cloud: An independent data center for high availability and disaster recovery, with a duplicated environment in the cloud for applications and databases.
Extend the data center to the cloud: Transient workloads (training, demos), and rapid implementation for an acquired subsidiary, geographic expansion, or separate lines of business.
Production in the cloud: Reduce reliance on or eliminate on-premises data centers, and focus on strategic priorities and differentiation, not managing infrastructure.
Oracle Cloud Regions
Today we have four Oracle Cloud Infrastructure regions, and we have announced new regions coming in the months ahead. This provides the global coverage that enterprises need. Additional details are available at Oracle Cloud Infrastructure Regions.
SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure offers hourly and monthly metered bare metal and virtual machine compute instances with up to 51.2 TB of locally attached NVMe SSD storage or up to 1 PB (petabyte) of iSCSI-attached block storage. A bare metal instance with 51.2 TB of NVMe flash storage is capable of around 5.5 million 4K IOPS at less than 1 ms latency, making it an ideal platform for an SAP NetWeaver® workload that uses an Oracle Database. Block volumes deliver 60 IOPS per GB, up to a maximum of 25,000 IOPS per block volume, backed by Oracle's industry-first performance SLA. Instances in Oracle Cloud Infrastructure are attached to a 25 Gbps non-blocking network with no oversubscription. Each compute instance running on bare metal has access to the full performance of the interface, and virtual machine instances can rely on guaranteed network bandwidths and latencies; there are no "noisy neighbors" to share resources or network bandwidth with. Compute instances in the same region are always less than 1 ms away from each other, which means that your SAP application transactions are processed in less time, and at a lower cost, than with any other IaaS provider. To support highly available SAP deployments, Oracle Cloud Infrastructure builds regions with at least three availability domains. Each availability domain is a fully independent data center with no fault domains shared across availability domains. An SAP NetWeaver® Application Server ABAP/Java landscape can span multiple availability domains.

Planning Your SAP NetWeaver® Implementation

For detailed information about deploying SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure, see the SAP NetWeaver Application Server ABAP/Java on Oracle Cloud Infrastructure white paper. This document also provides platform best practices and details about combining parts of Oracle Cloud Infrastructure, Oracle Linux, Oracle Database instances, and SAP application instances to run software products based on SAP NetWeaver® Application Server ABAP/Java in Oracle Cloud Infrastructure.

Topologies of SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure

There are various installation options for SAP NetWeaver® Application Server ABAP/Java. You can place one complete SAP application layer and the Oracle Database on a single compute instance (a two-tier SAP deployment), or you can install the SAP application layer instance and the database instance on two different compute instances (a three-tier SAP deployment). Based on the sizing of your SAP systems, you can deploy multiple SAP systems on one compute instance in a two-tier way, or distribute them across multiple compute instances in two-tier or three-tier configurations. To scale a single SAP system, you can configure additional SAP dialog instances (DI) on additional compute instances.

Recommended Instances for SAP NetWeaver® Application Server ABAP/Java Installation

You can use the following Oracle Cloud Infrastructure Compute instance shapes to run the SAP application and database tiers.

Bare Metal Compute: BM.Standard1.36, BM.DenseIO1.36, BM.Standard2.52, BM.DenseIO2.52
Virtual Machine Compute: VM.Standard2.1, VM.Standard2.2, VM.Standard2.4, VM.Standard2.8, VM.Standard2.16, VM.DenseIO2.8, VM.DenseIO2.16

For additional details, review the white paper referenced in the "Planning Your SAP NetWeaver® Implementation" section.
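For illustration only, here is a minimal OCI CLI sketch of launching one of the recommended bare metal shapes for a two-tier SAP system; this is not from the white paper, and every value in angle brackets is a placeholder that you would replace with OCIDs and names from your own tenancy:

oci compute instance launch \
  --availability-domain "<availability_domain_name>" \
  --compartment-id <compartment_ocid> \
  --shape BM.DenseIO2.52 \
  --image-id <oracle_linux_image_ocid> \
  --subnet-id <subnet_ocid> \
  --display-name sap-two-tier-01

A DenseIO shape is shown because its locally attached NVMe storage suits the database tier of a two-tier deployment; for the application tier of a three-tier layout, a Standard shape is the more usual choice.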
Technical Components

An SAP system consists of several application server instances and one database system. In addition to multiple dialog instances, the System Central Services (SCS) instance for AS Java and the ABAP System Central Services (ASCS) instance for AS ABAP provide the message server and enqueue server for both stacks. The original post includes a graphic that gives an overview of the components of the SAP NetWeaver® Application Server.

Conclusion

This post provides guidance about the main benefits of using Oracle Cloud Infrastructure for SAP NetWeaver® workloads, along with the topologies, main use cases, installation, and migration process. For more information, review the following additional resources.

Additional Resources

SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure white paper
Oracle Cloud Infrastructure technical documentation
Oracle Cloud for SAP Overview
SAP Solutions Portal
SAP on Oracle Community
High Performance X7 Compute Service Review and Analysis

Oracle Cloud Infrastructure

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Robby Robertson

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we interviewed Robby Robertson of Accenture. Robby has been with Accenture for over 18 years and has worked within the Oracle space during most of his time there. Robby was one of the first people to earn the Oracle Cloud Infrastructure Classic Architect Associate certification in 2016, and he recently earned the Oracle Cloud Infrastructure 2018 Architect Associate certification.

Greg: Robby, how did you prepare for the certification?

Robby: I found the white papers to be amazingly helpful. They really forced me to try to duplicate what they've done. I also found the eLearning series to be an extremely good overview. Even the IAM material: I didn't know much about the home region because I had just never had to read up about it. The introductory video forced me to research some of the topics further, which helped me prepare. Most beneficial was working with the hands-on labs. They were key to passing the exam. I installed the CLI on my laptop to test out the features and functions. I set up Terraform to see exactly how it works. This, along with walking through the white papers and trying to replicate the environments, was critical to my preparation.

Greg: How is life after getting certified?

Robby: After earning the certification, I posted the digital badge on LinkedIn. I think that's the most views I've ever had on a post in my entire life. This was beneficial in making connections with others in the industry and building my network around Oracle Cloud. While I already had a robust network within Oracle, this helped me meet others within the Oracle Cloud team. By following these individuals on social media, I learned more about the latest OCI (Oracle Cloud Infrastructure) capabilities, features, and benefits. For my job as a Solution Architect, the OCI certification gives me the credentials I need. I'm viewed as a subject matter expert, and earning this certification helps support my status as an SME.

Greg: Any other advice you'd like to share?

Robby: I'm telling my colleagues who are preparing for the exam not to take it lightly. The test is meant to be challenging. Do a little research and get a trial account to help reinforce your knowledge. The practice exam is extremely useful and right on point. It helps people understand what they are missing. Subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam.

Greg Hyman
Principal Program Manager, Oracle Cloud Infrastructure Certification
greg.hyman@oracle.com
Twitter: @GregoryHyman
LinkedIn: GregoryRHyman

Associated links:
Oracle Cloud Infrastructure 2018 Architect Associate exam
Oracle Cloud Infrastructure 2018 Architect Associate study guide
Oracle Cloud Infrastructure 2018 Architect Associate practice test
Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam

Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series are listed under Greg's blog page.

Oracle Cloud Infrastructure

Introducing Fault Domains for Virtual Machine and Bare Metal Instances

We are excited to introduce fault domains, a new way to manage and improve availability for Oracle Cloud Infrastructure virtual machine and bare metal compute instances within an availability domain. Today you can use availability domains to help ensure high availability for your applications by distributing virtual machine (VM) and bare metal instances across multiple availability domains within a single region. Availability domains are physically isolated and do not share resources (power, cooling, network), which means that the likelihood of multiple availability domains within a region failing is very small. The use of multiple availability domains ensures high availability because a failure in any one availability domain won't impact the resources running in the others.

If you want more granular control of application availability within a single availability domain, you can now achieve that by using fault domains. Fault domains enable you to distribute your compute instances so that they are not on the same physical hardware within a single availability domain, thereby introducing another layer of fault tolerance. Fault domains can protect your application against unexpected hardware failures or outages caused by maintenance on the underlying compute hardware. Additionally, you can launch instances of all shapes within a fault domain. Oracle Cloud Infrastructure is typically designed with three availability domains per region, and each availability domain has three fault domains. When carrying out maintenance on the underlying compute hardware, Oracle Cloud Infrastructure ensures that only a single fault domain is impacted at one time, to guarantee availability of your instances in the remaining fault domains.

Getting started is easy. When you create a new compute instance using the API, CLI, or Console, you can specify the fault domain in which to place the instance. If you don't specify a fault domain, the instance is placed automatically in one of the three fault domains within that availability domain. To modify the fault domain after an instance has been created, you must terminate and re-create the instance. All existing VM and bare metal instances have been distributed automatically among the three fault domains in their availability domain. The instance details page shows the fault domain information along with other metadata about the instance. To get started with fault domains on Oracle Cloud Infrastructure, visit https://cloud.oracle.com. Fault domains are available at no additional cost in all public regions. For more information, see the Oracle Cloud Infrastructure Getting Started guide, Compute service overview, Compute FAQ, and Fault Domains documentation.

Sanjay Pillai
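To make the CLI path mentioned above concrete, here is a minimal sketch of launching an instance into a specific fault domain; the OCIDs and the availability domain name are placeholders for values from your own tenancy, while the fault domain names follow the service's FAULT-DOMAIN-1/2/3 convention:

oci compute instance launch \
  --availability-domain "<availability_domain_name>" \
  --fault-domain FAULT-DOMAIN-2 \
  --compartment-id <compartment_ocid> \
  --shape VM.Standard2.1 \
  --image-id <image_ocid> \
  --subnet-id <subnet_ocid> \
  --display-name fd-demo

Omit --fault-domain and the service places the instance in one of the three fault domains automatically, as described above.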

Product News

Announcing NFS Export Options for File Storage

Hi, I am Mona Khabazan, Product Manager for Oracle Cloud Infrastructure File Storage. At the beginning of this year we launched File Storage, a brand-new service at an extremely high scale, to support enterprise cloud strategies. File Storage provides persistent shared file systems in the cloud that are highly available, highly durable, and fully managed. With File Storage, you can start small and grow to 8 exabytes in every file system without any upfront provisioning or allocation. File Storage is needed by nearly every enterprise application that wants to move its workloads into the cloud. We built this service on a distributed architecture to provide full elasticity in the cloud and give you a competitive advantage. You don't have to worry about storage maintenance and capacity management; instead, you can focus on your business needs and simplify your operations by leveraging the File Storage service.

NFS Export Options

We understood your need for more granular access and security controls on a per-file-system basis to enable multi-tenant environments. So, we are now announcing NFS Export Options, which enable you to set permissions on your file systems for Read or Read/Write access, limit root user access, require connection from a privileged port, or completely deny access to some clients.

How It Works

When you create a file system and associated mount target, the export options for that file system are set to the following defaults:

Source: 0.0.0.0/0 (All)
Require Privileged Source Port: false
Access: Read_Write
Identity Squash: None

The default settings allow full access for all NFS client source connections, and these defaults can be changed for more granular access control. Although mount targets in File Storage are not accessible from the internet, by default your file system is visible to all the hosts that are in the mount target's virtual cloud network (VCN) or peered to that VCN. Additionally, VCN security rules apply another layer of control. Now, by using NFS Export Options, you can set additional limits on clients' ability to connect to your file systems to view or write data, based on the clients' IP addresses. Managing which clients have access to your file systems is straightforward. For each file system, simply set the Source parameter to define which clients should access which file systems. Clients that are not listed do not have visibility into your file systems.

Try It for Yourself

Let's say that you have three clients that are sharing one mount target, but each client has its own file system. In this scenario, you want to set them up so that they can't access each other's data, as follows:

Client A is assigned to CIDR block 10.0.0.0/24 and should have Read/Write access to File System A but not File System B.
Client B is assigned to CIDR block 10.1.1.0/24 and should have Read/Write access to File System B but not File System A.
Client C is assigned to CIDR block 10.2.2.0/24 and should not have access to either File System A or B.

Because Client A and Client B access the mount target from different CIDR blocks, you can set the client options for both file system exports to allow access to only a single CIDR block. To create this access, set file system A to allow Read/Write access only to Client A, who is assigned to CIDR block 10.0.0.0/24. Because neither Client B nor Client C is included in this CIDR block, they cannot access file system A.
oci fs export update --export-id <File_system_A_export_ID> --export-options '[{"source":"10.0.0.0/24","require-privileged-source-port":"true","access":"READ_WRITE","identity-squash":"NONE","anonymous-uid":"65534","anonymous-gid":"65534"}]'

Next, set file system B to allow Read/Write access only to Client B, who is assigned to CIDR block 10.1.1.0/24. Because neither Client A nor Client C is included in this CIDR block, they cannot access file system B.

oci fs export update --export-id <File_system_B_export_ID> --export-options '[{"source":"10.1.1.0/24","require-privileged-source-port":"true","access":"READ_WRITE","identity-squash":"NONE","anonymous-uid":"65534","anonymous-gid":"65534"}]'

Because you did not include Client C's CIDR block in any of these export options, neither file system A nor file system B is visible to Client C.

Now, let's say that in a different scenario, to increase security, you want to limit the root user's privileges when connecting to file system D. Use the Identity Squash option to remap root users to UID and GID 65534. In UNIX-like systems, this combination is reserved for 'nobody', a user with no system privileges.

oci fs export update --export-id <File_System_D_export_OCID> --export-options '[{"source":"0.0.0.0/0","require-privileged-source-port":"true","access":"READ_WRITE","identity-squash":"ROOT","anonymous-uid":"65534","anonymous-gid":"65534"}]'

CLI, SDK, or Terraform

Here I have demonstrated just two scenarios using the CLI. For more scenarios and instructions on how to achieve the same control with the SDK or Terraform, see Working with NFS Export Options. For more information about how different types of security work together in your file system, see About Security. We continue to strive to find the areas of differentiation in storage technology that enterprises need most, to give you a competitive advantage. Bring your storage-hungry workloads, and send me your thoughts on how we can continue to improve File Storage. There is ample opportunity ahead of us; we're just getting started.

Mona Khabazan
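If you script these changes, it is worth reading each export back to confirm that the options took effect; a minimal sketch, with the export OCID as a placeholder:

oci fs export get --export-id <File_System_A_export_OCID>

The export-options array in the response should match the JSON supplied in the corresponding update command.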

Developer Tools

Resilient IP-Based Connectivity Between IoT Sensors and Diverse Oracle Cloud Infrastructure Regions

This blog post explores how to use Border Gateway Protocol (BGP) for resiliency and high availability for IP-based applications (not DNS-enabled) hosted in diverse Oracle Cloud Infrastructure regions. The scope is limited to IPv4 addresses, but the solution presented also works for IPv6 services with some additional configuration. Most of these applications fall in the IoT application domain. With the implementation of ubiquitous connectivity for the Internet of Things (IoT), devices like sensors and gateways communicate back to central processors hosted in cloud data centers. I have used this solution as a way to achieve resiliency between IoT endpoints and diverse Oracle Cloud Infrastructure regions. Although Oracle Cloud Infrastructure provides computation and data storage resources for IoT workflows across regional availability domains, resiliency or high availability for the connectivity from sensor edge services to the Oracle Cloud Infrastructure regions is always a challenge. Usually, IoT devices use IPv6 while the computation applications in cloud datacenters are only IPv4 aware. Another limiting factor is that most of the sensors can't use DNS for the services running in cloud datacenters because of the low buffer space of the IoT devices. This rules out any DNS-based high-availability solution.

Services Used

The following Oracle Cloud Infrastructure services and open-source software are used in this solution:

Oracle Cloud Infrastructure Block Storage
Oracle Cloud Infrastructure Compute
Oracle Cloud Infrastructure FastConnect
Oracle Cloud Infrastructure Object Storage
Oracle Cloud Infrastructure Networking, including the following components: virtual cloud network (VCN), dynamic routing gateway, local peering gateway, remote peering gateway, internet gateway, and subnet security list
Software-defined networking (SDN) routing application
Open-source routing engines

For information about configuring the VCNs, subnets, and other Oracle Cloud Infrastructure constructs needed for this solution, see the following resources:

https://cloud.oracle.com/opc/iaas/whitepapers/OCI_WhitePaper_VCN_v1.0_LL.pdf
https://blogs.oracle.com/cloud-infrastructure/automate-application-deployment-across-availability-domains-on-oracle-cloud-infrastructure-with-terraform
https://blogs.oracle.com/cloud-infrastructure/automate-oracle-cloud-infrastructure-vcn-peering-with-terraform

Solution Overview

This solution focuses on the following components:

FastConnect deployment between the local point-of-presence (PoP) and the customer IoT VCN in the Oracle Cloud Infrastructure regional datacenter
BGP configurations on the collocated SDN routers and Oracle Cloud Infrastructure dynamic routing gateways (DRGs)
Peering configurations between local and remote DRGs

Note: This solution excludes the details of IoT-workflow-related compute and storage handling of the data collectors and analytics applications. It also doesn't examine the detailed architecture of the IoT edge services.

The IoT application for this use case comprises sensors installed at gas pumps to measure oil surface temperatures and to detect any significant spill. The data is uploaded to the edge services for normalization before being transmitted to the Oracle Cloud Infrastructure region for processing, where the IoT processing and analytics applications are running. The edge services can run in the customer's on-premises datacenters, in a colocation datacenter, or in the Oracle IoT Cloud.
The focus of this solution is how to design the connectivity from the customer's on-premises or colocation datacenter to dual Oracle Cloud Infrastructure regions like Phoenix and Ashburn.

Network Architecture Overview

Connectivity from the edge services datacenters can use private dedicated circuits, including IPSec VPNs, or public connections over internet IPv4 space. A pair of SDN routers is used at the FastConnect colocation for IPv6-to-IPv4 translation or IPSec termination before peering with the FastConnect edge routers. Both regions are connected by means of Oracle Cloud Infrastructure inter-region backbones for disaster recovery (DR) replication, using a DRG at each end for remote peering. The DRGs are inherently highly available and configured in active-active mode at each regional end. The estimated throughput for each DRG per customer VCN is around 7 Gbps; if more bandwidth is required, multiple VCNs and DRGs can be deployed. The latency between regions over the backbone is around 60 ms. Customers can deploy traffic accelerators like Riverbed virtual appliances in their VCNs at either end for caching.

Logical View

The logical view depicts the pair of redundant routers running in each of the Oracle Cloud Infrastructure PoPs. These routers are managed by the customer network teams or the Oracle Managed Cloud Services team. This is the control plane for the data path resiliency and high availability from the IoT sensors in the field to the IoT applications running across the Oracle Cloud Infrastructure regions.

Region Design

Customers should provision dual circuits or IPSec VPNs using SDN routers on each of the transit PoPs. On the backend, the Oracle Cloud Infrastructure team establishes connectivity from the customer routers to the Oracle Cloud Infrastructure PoP routers by using cross-connects or peering points. Each transit PoP is connected to all three availability domains (datacenters) in the region. There are multiple FastConnect transit PoPs (ingress/egress) for a region and multiple FastConnect routers per PoP. Each transit PoP has access to each of the availability domains. All the connections from PoPs to the availability domains (ADs) are provisioned and managed by Oracle Cloud Infrastructure teams. Apart from planning and ordering connections, the follow-up tasks include the following:

Set up DRGs in the respective Oracle Cloud Infrastructure regions
Set up customer cross-connect groups and cross-connects
Set up cabling in the FastConnect location
Check light levels for each physical connection
Confirm that all the interfaces are up
Activate the cross-connects
Set up virtual circuits
Configure your edge
Confirm that the BGP session is established

The next section discusses one of the two options for connecting the edge services to the Oracle Cloud Infrastructure regions.

Direct Cross-Connect: Colocation

In this scenario, the pair of SDN routers is placed in the same colocation facility that serves as the FastConnect PoP. The routers establish external BGP (eBGP) peer relationships with the other edge data center routers and the Oracle Cloud Infrastructure DRGs. For DRG configuration guidance, see https://docs.cloud.oracle.com/iaas/Content/Network/Tasks/managingDRGs.htm. Information about BGP configuration is provided later in this post.

Overview

The customer routers are placed in the customer cage in the FastConnect colocation.
Crossover cables are provisioned between the customer routers in the customer cage and OCI equipment in the OCI FastConnect cage. Both sets of equipment are configured for high availability at Layer 2 and Layer 3. The following graphic shows a logical view of the configuration. FastConnect configuration information for setting up the circuit is located at https://docs.cloud.oracle.com/iaas/Content/Network/Concepts/fastconnectprovider.htm.

Peering

Oracle Cloud Infrastructure supports only IPv4 peering, and Oracle Cloud Infrastructure regions support both public and private peering.

Public Peering

Connect edge service resources via FastConnect to access public services in Oracle Cloud Infrastructure without using the internet (for example, Object Storage, the Oracle Cloud Infrastructure Console and APIs, or public load balancers in your VCN). Communication across the connection is with IPv4 public IP addresses. Without FastConnect, the traffic destined for public IP addresses would be routed over the internet. With FastConnect, that traffic goes over your private physical connection.

Private Peering

Connect IoT edge services infrastructure to a VCN in Oracle Cloud Infrastructure. Communication across the connection is with IPv4 private addresses (typically RFC 1918).

BGP Configuration on the Customer Colocated Routers

The sample BGP configuration has been simplified by representing the customer router pair at the Oracle Cloud Infrastructure PoP (colocation) as a single router, focusing on eBGP for path resiliency. To add resiliency to the edge services in case of a region failure, use AS Path prepending; a minimal sketch appears after the note at the end of this section. AS Path prepending artificially lengthens the AS Path that is advertised to a neighbor to make the neighbor think that the path is much longer than it actually is. For step-by-step configuration guidance for collocated routers, see the following resources:

Cisco: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_bgp/configuration/xe-3se/3850/irg-xe-3se-3850-book/irg-prefix-filter.html
Juniper: https://www.juniper.net/documentation/en_US/junos/topics/example/routing-policy-security-routing-policy-to-prepend-to-as-path-configuring.html

As a result of this configuration, if there is an outage of the first (preferred) region, the IoT sensor network or the edge network follows the next best path advertised through BGP and reaches the second region.

Note: All the IP addresses and ASNs mentioned here are for testing purposes only. Oracle Cloud Infrastructure uses the same ASN (31898) for all of its regions.
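To make the AS Path prepending concrete, here is a minimal Cisco IOS-style sketch for the colocated customer router; the customer ASN 64512, the neighbor address 192.0.2.1, and the route-map name are placeholders rather than values from the original post, while 31898 is the Oracle ASN mentioned in the note above:

route-map PREPEND-TO-SECONDARY permit 10
 set as-path prepend 64512 64512 64512
!
router bgp 64512
 neighbor 192.0.2.1 remote-as 31898
 neighbor 192.0.2.1 route-map PREPEND-TO-SECONDARY out

Applied toward the secondary region's DRG peer, this makes the routes advertised there carry a longer AS Path, so the edge network prefers the primary region until it becomes unreachable.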

Developer Tools

Creating a Secure SSL VPN Connection Between Oracle Cloud Infrastructure and a Remote User

Companies have increasingly mobile workforces and therefore need to be able to provide their employees with convenient and secure access to their networks. A VPN allows users to connect securely to their networks over the public internet, which is a convenient way to support mobility. An IPSec VPN can be used to provide a dedicated connection to remote locations, and IPSec is used with Network Access Control to make sure that only approved users can connect to the enterprise. The other type of VPN is an SSL VPN, which uses Secure Socket Layer protocols. An SSL VPN provides more granular access control than IPSec: it allows companies to control the types of resources a user can access through the VPN. This blog post explains how to create a secure SSL VPN connection between Oracle Cloud Infrastructure and remote users by using OpenVPN. At a high level, these are the steps required to create an SSL tunnel between Oracle Cloud Infrastructure and the OpenVPN client:

Configure Oracle Cloud Infrastructure for OpenVPN
Install and configure the OpenVPN server
Install the OpenVPN client

Configuration Diagram

The following diagram shows the high-level architecture of the proposed setup: a VCN with two subnets.

Public (10.0.1.0/24): a public subnet with access to the internet through an internet gateway.
Private (10.0.2.0/24): a private subnet with no access to the internet.

1. Configure Oracle Cloud Infrastructure for OpenVPN

The following steps outline how to create and prepare an Oracle Cloud Infrastructure VCN for OpenVPN.

Create a VCN

Create a VCN with two subnets in an availability domain to house the OpenVPN server and a Linux host. For more information on how to create a VCN and associated best practices, see the VCN Overview and Deployment Guide.

Public Subnet Configuration

The public subnet's route table (the VCN's default route table) has a route rule where the internet gateway is configured as the route target for all traffic (0.0.0.0/0). For the subnet's security list (the default security list), create an egress rule to allow traffic to all destinations, and create ingress rules that allow access on:

TCP port 22 for SSH
TCP port 443 for the OpenVPN TCP connection
TCP port 943 for the OpenVPN web UI
UDP port 1194 for the OpenVPN UDP connection

For details about how to create subnets, see VCNs and Subnets.

Launch an Instance

Launch an instance in the newly created public subnet. In this case, we are using a VM.Standard2.1 shape running CentOS 7. Use this instance to install the OpenVPN server. For details, see Launching an Instance.

Private Subnet Configuration

The private subnet's route table (Private RT) has a routing rule where the OpenVPN server (10.0.1.9) is configured as the route target for all traffic (0.0.0.0/0). The security list has an egress rule to allow traffic to all destinations. Ingress rules allow only specific address ranges (like the on-premises network or any other private subnets in the VCN).

2. Install and Configure the OpenVPN Server

After the new instance starts, connect to it through SSH and install the OpenVPN package. You can download the software package for your OS platform from the OpenVPN website and use the RPM command to install it. Note: Make sure that you change the password by using the "passwd openvpn" command. Connect to the Admin UI address (https://public-ip:943/admin) using the password for the openvpn user. Once you are logged in, click Network Settings and replace the hostname or IP address with the public IP of the OpenVPN server instance.
Next, click VPN Settings and add the private subnet address range in the routing section. In the Routing section, ensure that the option Should client Internet traffic be routed through the VPN? is set to Yes. Under Have clients use these DNS servers, manually set the DNS resolvers that will be used by your VPN client machines.

Inter-Client Communication

In the Advanced VPN section, ensure that the option Should clients be able to communicate with each other on the VPN IP Network? is set to Yes. Once you've applied your changes, click Save Settings. You are prompted to Update Running Server to push your new configuration to the OpenVPN server.

3. Install the OpenVPN Client

Connect to the OpenVPN Access Server client UI (https://Public-IP-OpenVPN-VM:943) and download the OpenVPN client for your platform. Once the installation process has completed, you see an OpenVPN icon in your OS taskbar. Right-click this icon to bring up the context menu and start your OpenVPN connection. Clicking Connect brings up a window asking for the OpenVPN username and password. Enter the credentials for your OpenVPN user and click Connect to establish a VPN tunnel.

Verification

Launch a host instance by using any operating system in the private subnet. Open a terminal window on your laptop and connect to the host using its private IP.

Conclusion

This blog discusses how to create a secure, encrypted SSL VPN tunnel between Oracle Cloud Infrastructure and a remote user, allowing the user to access the resources in a private subnet of Oracle Cloud Infrastructure.
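As a concrete version of the verification step above, here is a minimal sketch, assuming a hypothetical private host at 10.0.2.5 in the private subnet and the default opc user of Oracle-provided images:

ssh -i ~/.ssh/id_rsa opc@10.0.2.5

If the tunnel is up, the session is established even though 10.0.2.5 is not reachable from the public internet; disconnect the VPN client and the same command times out.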

Oracle Cloud Infrastructure

Oracle Cloud Adoption Best Practices: Digital Transformations

This post is the first in a series of posts that discuss best practices and provide practical advice for planning, implementing, operating, and evolving in the Oracle Cloud. This post covers the following topics:

Digital transformations and the importance of determining the right business drivers and success criteria
Defining a cloud strategy and understanding how the strategy impacts transformations
The framework of people, process, and technology that is necessary for successful cloud adoption and transformations

Business Transformations Powered by the Cloud

Much has been said and written about the role of the cloud in digital disruption and how the cloud is powering digital transformation for enterprises. A vast majority of companies say that the cloud is an important or critical part of their digital transformation strategy, and analysts agree that enterprises will spend trillions of dollars on these business transformations. The only disagreement is about how many trillions will be spent and in how many years. I'm not going to cover the basics of digital transformations here, but let me provide a couple of links with good insights for you to explore: Oracle CEO Safra Catz shares her thoughts about digital transformation and how to manage your business through the change. You can also read this insider's take on Oracle's cloud transformation, in which Mark Sunday, the CIO of Oracle, provides some key insights about Oracle's own transformation journey.

Digital Transformation: State of Affairs

"There is a difference between knowing the path and walking the path." – Morpheus, The Matrix

A recent MIT Sloan Management Review and Deloitte Digital report shows that companies are making slow progress in their digital transformation initiatives. The number of companies reporting that their digital transformation projects are at a mature stage rose by five percentage points last year, which is the first meaningful uptick in the four years of the study. But about 70 percent of the companies are still in the early or developing stages of their digital transformation journey. This study and others like it show that progress is slow and we are still scratching the surface; a lot of transformation work still needs to be done across a lot of enterprises.

Start with Why…

"He who has a Why to live can bear with almost any How" – Friedrich Nietzsche

Why do you want to transform your business? What are your specific reasons? Start with those reasons and tie them to your business goals as much as possible. Say you want to reduce your technical debt. That's great, but to sustain and drive the initiative to conclusion, you need to figure out how it would benefit the business. How will you measure success? For example, to reduce technical debt you can start participating in the latest open source projects, refactor your code, and set up R&D and dev teams to contribute to and use the latest open source code. But do these activities align with your long-term business strategy? Is this part of your core competency? Does it add value to your products and services in an effective manner to benefit your customers? In this example, whether or not leveraging open source aligns with your business objectives, you can still use Oracle Cloud to execute on the strategy. The chances of your project being successful, however, will be largely determined by how closely aligned it is with business outcomes.
Business Drivers for Transformation

The business drivers for digital transformation are as varied as the organizations making the investments. For many enterprises, transformations are about becoming more responsive to customer needs and preferences. For others, they are about becoming more agile as a response to more nimble competition disrupting their business. Some have compliance needs and strive to implement security controls for global expansion or in response to mandates like GDPR. Others want to focus on innovation as their core competencies instead of mundane and undifferentiated work that doesn't add any direct value to their customers. For some, the main driver is cost savings and replacing capex with opex. Increasing experimentation and reducing the risk of failure are also important drivers. Other drivers include higher revenue, better ROI, decluttering, rationalization, consolidation, modernization, higher employee productivity, and collaboration. After you determine your business drivers, you need to define and quantify what success looks like.

Defining Your Cloud Strategy

Your business drivers will have a major impact on your cloud strategy, enterprise architecture, and solution design. For example, projects driven by cost savings or increased efficiency will likely have a return on investment target expressed as expense reductions. In this case, a common approach is to increase asset utilization through consolidation of workloads onto less costly virtual machines (VMs). In Oracle Cloud Infrastructure, using VM instances for compute, containers through Container Engine for Kubernetes, or both will likely be suitable choices with applications consolidated on shared infrastructure. On the other hand, mandates focused on business agility, like acceleration of product development and faster response to market conditions, are more likely to introduce higher levels of automation early in the project. Oracle Cloud adoption strategies for your application portfolio include retire, rehost (IaaS), replatform (PaaS), replace (SaaS), and rebuild (Cloud Native). I'll cover this topic in detail in another post. The bottom line is that for your digital transformation initiative to be successful, you need to clearly articulate your reasons, business drivers, success criteria, and cloud strategy. Otherwise, your digital transformation initiative runs the risk of being just a buzzword and a one-off innovation project that fizzles out without tangible outcomes.

Foundation for Successful Cloud Adoption

Digital transformations are about more than just adopting the latest technology. To execute digital transformation successfully, you need to address several important factors, including employee skills and learning, company culture and readiness for change, and commitment to updating old processes and leveraging the latest technologies. Businesses have to want to change and have to commit to doing so in an effective way, by bringing in new skills, adapting roles, encouraging innovation, and instilling confidence in new business models. They must also have the technology and the infrastructure to enable change to happen. I think there are three essential pillars of any successful digital transformation and cloud adoption initiative: people, process, and technology. A good example of these three elements at play is embracing the DevOps method.
While adopting the cloud, most enterprises realize that the traditional distinction between application developers and IT operations is often replaced by a practical division of responsibilities that is more situational and less rigid. A DevOps approach that integrates development and operations into a single role or as a shared responsibility makes a lot of sense in the cloud. The transformation to a DevOps approach involves developing skills, possibly restructuring organizational boundaries, updating processes for implementation and operations, and retooling to a common set of tools. In essence, you need to transform people, process, and technology, and you need to be effective with all three elements to be successful. Let's look at all three in more detail.

People

The people are all the stakeholders, including employees, leaders, users, and customers. This pillar also includes the company culture and its appetite for change. It is critical for all stakeholders to be on board, enabled, and aligned, and for the company culture to be conducive to transformation. The first group of people I want to highlight is the employees. You can empower employees with the agility, scale, and global reach of the cloud to improve their productivity and their impact. The cloud can reduce repetitive work such as racking and stacking servers, provisioning, and patching and backing up databases. You need to enable employees to gain new skills and refocus their time on differentiated work and problem solving. The cloud requires new skills, for which your employees need training and enablement. Oracle University offers good resources for Oracle Cloud training and certification. Digital transformations need new digital leaders who are cloud savvy. Developing or hiring effective and experienced leaders who can successfully lead such initiatives takes time and must be prioritized. Closely related is developing a culture with a growth mindset, continuous learning, experimenting, and iterating. Finally, the most important group of people you need to focus on is your end users and customers. You need to seek continuous feedback to improve how well and how quickly you meet your customers' needs. Many enterprises have started following, with success, the approach of building minimum viable products and then either dropping them or iterating on them based on user feedback. This approach aligns well with the agile method, and the cloud, with its pay-as-you-go pricing model, ability to scale quickly, and elastic resources, is an excellent way to execute this strategy. In fact, most cloud services are built this way.

Process

The cloud works very well with newer paradigms for developing, deploying, and managing applications. For example, there is more focus on microservices, APIs, serverless, agile, and DevOps. Leveraging these relatively new paradigms requires changes to the dev, test, integration, deployment, operations, and incident management processes that many enterprises still use. Continuous learning, experimentation, automation, and agility should be part of the processes used to determine, implement, and operate new products and services. Security and compliance processes need to be updated. Oracle Cloud infrastructure and platform services operate under a shared responsibility model, where Oracle is responsible for the security of the underlying cloud infrastructure, and you are responsible for securing your workloads.
Governance, auditing, pen testing, incident management, and response processes need to be updated as well. You also need to update your procurement process for the cloud. The cloud offers usage-based metering, so monthly bills might vary. Licensing models are typically different in the cloud, with new pricing and service-level options available. Oracle Cloud provides a flexible buying and usage model for Oracle Cloud Services, called Universal Credits. When you sign up for an Oracle Cloud account, you have unlimited access to all eligible IaaS and PaaS services. You can sign up for a pay-as-you-go subscription, or you can save money and pay in advance for a year, based on your estimated monthly usage, which is the Monthly Flex plan. Bring Your Own License (BYOL), metered, and non-metered options are also available. For successful transformations, you should also re-evaluate your current vendors and partners. Determine which partners have the cloud skills and experience to help you accelerate and be successful with your transformation initiative.

Technology

Many of the latest breakthroughs and innovations in technology are being delivered primarily through the cloud. Autonomous services, blockchain, artificial intelligence, Internet of Things (IoT), and microservices are a few good examples. You can use the cloud to leverage these latest technologies. Your tried and trusted technology stacks are also available on Oracle Cloud. As a result, Oracle Cloud enables you to transform your internal IT and your customer-facing products and services. Oracle Cloud is the industry's broadest and most integrated cloud provider, with deployment options ranging from the public cloud to your data center. You can leverage your existing infrastructure investments by implementing hybrid architectures using services like FastConnect. For data sovereignty or compliance reasons, you can also leverage Oracle Cloud at Customer to run Oracle Cloud in your own data centers. Oracle Cloud offers best-in-class services across software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). Use Oracle Cloud Infrastructure (IaaS) offerings to quickly set up the compute, storage, networking, and database capabilities that you need to run just about any kind of workload; your infrastructure is managed, hosted, and supported by Oracle. Use Oracle Cloud Platform (PaaS) offerings to provision ready-to-use environments for your enterprise IT and development teams, so they can build and deploy applications based on proven Oracle databases and application servers. Use Oracle Cloud Applications (SaaS) offerings to run your business from the cloud. Oracle offers cloud-based solutions for Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and many other applications, all managed, hosted, and supported by Oracle.

Conclusion

Most enterprises pursue their digital transformations and cloud strategies in tandem. In this post, I covered this topic with a focus on Oracle Cloud offerings, and offered a framework based on people, process, and technology to help execute a transformation initiative in Oracle Cloud. The focus of this blog was on the why and the what. In the next posts in this series, I'll cover the how.

Oracle Cloud Infrastructure

Deploy HA Availability Domain Spanning Cloudera Enterprise Data Hub Clusters on Oracle Cloud Infrastructure

Hello, my name is Zachary Smith, and I'm a Solutions Architect working on Big Data for Oracle Cloud Infrastructure. We're proud to announce that availability domain spanning Terraform automation is now available for use with Cloudera Enterprise Data Hub deployments on Oracle Cloud Infrastructure. This deployment architecture adds enhanced security and fault tolerance while maintaining performance.

Cloudera Enterprise Data Hub: Availability Domain Spanning

Availability domain spanning is ideal for customers who want to maintain the performance of Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure while leveraging cloud constructs to enhance fault tolerance and high availability. Cloudera Enterprise Data Hub cluster hosts are deployed across all three availability domains in a region, and ZooKeeper, NameNode, and HDFS services are distributed across the nodes in each availability domain.

Cloudera Cluster Hosts on a Private Subnet

With our continued focus on enabling enterprise customers to deploy secure environments in the cloud, this architecture deploys the master and worker cluster hosts on a private subnet that is not accessible directly from the internet. To achieve this, the bastion host in the deployment is set up as a NAT gateway, which hosts on the private subnet use to route internet-destined traffic to the internet gateway. This architecture provides enhanced security without sacrificing cluster performance.

Performance Testing

To test the performance of Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure, TeraSort was chosen as a benchmark. This benchmark is a standard for Hadoop because it tests the I/O of all elements involved in a Hadoop deployment: compute, memory, storage, and network. The comparison ran a 10-TB TeraSort across two cluster types on each deployment architecture. The first cluster type is virtual machines using six 1.5-TB block volumes for HDFS. The second cluster type is bare metal using local NVMe for HDFS. The cluster topology is the same for both architectures: five worker nodes, one Cloudera Manager node, two master nodes for cluster services, and one bastion host. Not only are the performance results extremely fast for sorting 10 TB with five workers, but the sort times are extremely close when comparing the single availability domain and availability domain spanning architectures. These tests were run multiple times in a row, and they returned almost identical results regardless of the time of day that the job ran. This is a great example of Oracle's industry-leading SLA for cloud. We have more improvements coming in this space, along with a white paper that details a reference architecture for Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure and the use of these Terraform templates.

Have questions or want to learn more? Join us at the Cloudera Now Virtual Event Booth on August 2 from 9 a.m. to 1 p.m. PDT. Register Now. We hope you will be as excited as we are about the improvements we're making to the Cloudera plus Oracle solution. Let us know what you think!

Zachary Smith
Senior Member of Technical Staff
https://www.linkedin.com/in/zachary-c-smith/
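For readers who want to reproduce a run like the one described above, here is a hedged sketch of how a 10-TB TeraSort is typically driven with the standard Hadoop examples jar; the jar path and HDFS directories are placeholders that depend on your CDH parcel layout, and the commands are the stock TeraGen/TeraSort/TeraValidate tools rather than anything specific to this deployment:

# Generate 10 TB of input: 100 billion rows of 100 bytes each
hadoop jar /path/to/hadoop-mapreduce-examples.jar teragen 100000000000 /benchmarks/terasort-input
# Sort the generated data
hadoop jar /path/to/hadoop-mapreduce-examples.jar terasort /benchmarks/terasort-input /benchmarks/terasort-output
# Confirm that the output is globally sorted
hadoop jar /path/to/hadoop-mapreduce-examples.jar teravalidate /benchmarks/terasort-output /benchmarks/terasort-report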

Oracle Cloud Infrastructure

Foundational Oracle Cloud Infrastructure IAM Policies for Managed Service Providers

This post describes some Identity and Access Management (IAM) policies that Oracle Cloud Infrastructure partners and managed service providers (MSPs) can use as a foundation for managing Oracle Cloud Infrastructure services on behalf of their end customers. In particular, we focus on the initial IAM policy use cases that MSPs can leverage to manage the overall end-customer tenancies and provision entitlements for various customer administrator groups for self-management of their respective compartments. For information about Oracle Cloud Infrastructure IAM best practices, read the blog post and white paper created by fellow blogger, Changbin Gong.

Use Case Overview

This post illustrates the following IAM use cases:

As A Tenant Admin, the MSP Wants To manage all the Oracle Cloud Infrastructure assets of its tenant (customer enterprise) So That the MSP can create compartments (aligned to the requirements of the customer) and troubleshoot any issues escalated from the customer administrator groups.

As A Tenant Admin, the MSP Wants To delegate the administration of the non-root compartments to the corresponding customer administrators, So That the customer administrators have the entitlements for the resources in their respective compartments.

As A Tenant Admin, the MSP Wants To create role-specific entitlements for the tenant, So That the MSP administrator groups have a clear separation of duties. For example, enabling specific roles such as server administrators to have entitlements for computing-related services and network administrators to have entitlements for the network resources across compartments in the customer tenancy.

As An Operations (OPS) Admin, the OPS team Wants To create and manage customer and user groups, but Should Not have access to the Administrators group, which has unrestricted access.

Requirements

The MSP creates the tenancy and the compartments according to customer requirements. For this example, the MSP is ACME_Cloud_provider (or ACP for short), the tenancy is ACP_Tenant, and the compartments are Root, ACP_Client_Prod, and ACP_Client_Dev. The MSP administrator groups are ACP_OPS_Admin, ACP_Server_Admin, ACP_Network_Admin, ACP_Security_Admin, and ACP_DB_Admin. The customer administrator groups are ACP_Prod_Admin and ACP_Dev_Admin. The customer administrator for user provisioning, if required, is ACP_Customer_Admin. The policies are ACP_Tenant_Policy, ACP_Prod_Policy, ACP_Dev_Policy, and ACP_Customer_Policy.

Steps

For each use case, you create the necessary groups, add users to the groups, and create the policies by performing the following steps in the Oracle Cloud Infrastructure Console. Links to detailed instructions in the IAM documentation are provided.

Create the groups. See "To create a group" in Managing Groups.
Add users to the groups. See "To add a user to a group" in Managing Users.
Add the policies. See "To create a policy" in Managing Policies.

Use Case 1

As A Tenant Admin, the MSP Wants To manage all the Oracle Cloud Infrastructure assets of its tenant (customer enterprise) So That the MSP can create compartments (aligned to the requirements of the customer) and troubleshoot any issues escalated from the customer administrator groups.

Key Policy:

ALLOW GROUP ACP_OPS_Admin to manage all-resources IN TENANCY

Note: This policy is for the MSP Operations team. They might require the same access as the Administrators group.
Use Case 2

As A Tenant Admin, the MSP Wants To delegate the administration of the non-root compartments to the corresponding customer administrators, So That the customer administrators have the entitlements for the resources in their respective compartments. In this use case example, the MSP creates policies for the client's production and dev compartments.

Key Policy for Prod Compartment:

Allow group ACP_Prod_Admin to manage all-resources in compartment ACP_Client_Prod

Key Policy for Dev Compartment:

Allow group ACP_Dev_Admin to manage all-resources in compartment ACP_Client_Dev

Use Case 3

As A Tenant Admin, the MSP Wants To create role-specific entitlements for the tenant, So That the MSP administrator groups have a clear separation of duties, such as server administrators having entitlements for computing-related services and network administrators having entitlements for the network resources across compartments in the customer tenancy.

Key Policies for Network Administrators:

Allow group ACP_Network_Admin to manage virtual-network-family in tenancy
Allow group ACP_Network_Admin to manage load-balancers in tenancy
Allow group ACP_Network_Admin to read instances in tenancy
Allow group ACP_Network_Admin to read audit-events in tenancy

Key Policies for Server Administrators:

Allow group ACP_Server_Admin to manage instance-family in tenancy
Allow group ACP_Server_Admin to manage volume-family in tenancy
Allow group ACP_Server_Admin to use virtual-network-family in tenancy
Allow group ACP_Server_Admin to read instances in tenancy
Allow group ACP_Server_Admin to read audit-events in tenancy

Key Policies for Security Administrators:

Allow group ACP_Security_Admin to read instances in tenancy
Allow group ACP_Security_Admin to read audit-events in tenancy

Key Policies for Database Administrators:

Allow group ACP_DB_Admin to manage database-family in compartment ACP_Client_Prod
Allow group ACP_DB_Admin to manage database-family in compartment ACP_Client_Dev
Allow group ACP_DB_Admin to read instances in tenancy

Use Case 4

As An OPS Admin, the OPS team Wants To create and manage customer and user groups, but Should Not have access to the Administrators group for unrestricted access.

Key Policies:

Allow group ACP_OPS_Admin to use users in tenancy where target.group.name != 'Administrators'
Allow group ACP_OPS_Admin to use groups in tenancy where target.group.name != 'Administrators'

Note: The order of IAM verbs, from more restrictive to less restrictive, is inspect, read, use, manage.

We will continue to add more blogs and whitepapers to highlight Oracle Cloud Infrastructure IAM policies for managed service providers. For more information about IAM, see the IAM documentation.
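For teams that automate tenancy setup, the groups and policies above can also be created with the OCI CLI; a minimal sketch for Use Case 1, where the tenancy OCID and the descriptions are placeholders of my own rather than values from the post:

oci iam group create --name ACP_OPS_Admin --description "MSP operations administrators"
oci iam policy create --compartment-id <tenancy_ocid> --name ACP_Tenant_Policy \
  --description "Tenant-wide administration for the MSP OPS team" \
  --statements '["ALLOW GROUP ACP_OPS_Admin to manage all-resources IN TENANCY"]'

The --statements argument takes a JSON array, so several policy statements can be attached to one policy in a single call.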

Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Miranda Swenson

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Miranda Swenson of Cintra Software and Services. Miranda is a long-time techie with a passion for using technology to solve business challenges. She's worked in IT for the past 20 years as a technical consultant, presales engineer, and solution architect. Over the past three years, she has focused on cloud technology and hybrid cloud solution architecture. Miranda is currently working as a Principal Solution Architect at Cintra Software and Services. In her spare time, she enjoys playing with her pets, learning Spanish, traveling, and hula hooping! Here Miranda shares some of her key learnings and tips.

Greg: How did you prepare for the exam?

Miranda: Part of my role at Cintra is building customer workshops, so we can show our customers how Oracle Cloud Infrastructure (OCI) works. I had been putting together labs that included a lot of the topics I found on the exam, such as the grand tour of the console and how the networking works. Putting together labs based on the GitHub account, actually using OCI, and learning it well enough that I could share it with other people really helped prepare me for the certification. Reading the online documentation, including the Terraform, GitHub, and cloud documentation, also helped me prepare. I found that working with the environment was extremely beneficial.

Greg: How is life after getting certified?

Miranda: I've had a whole lot of people checking out my profile on LinkedIn. My company is happy because getting certified has helped with our partner recognition. Taking the exam helped reinforce what I knew and also helped identify where my gaps are. I wanted to get 100% on the exam, but I still have some things to learn. I found it to be a good way to see what you know and what you don't. My certification helps when working with customers. It shows that I can bring solid solutions and know some of the "gotchas" that can prevent a smooth implementation. It helps demonstrate my level of knowledge. And being introduced as a certified architect builds my credibility.

Greg: Any other advice you'd like to share?

Miranda: When a lot of people think about Oracle, they think database. This is NOT a database exam, it's an infrastructure exam. If you're coming from an Oracle software perspective, whether it's middleware or database, you're going to have to know things you never thought you'd need to know. You're going to have to know networking, hardware, and storage. Networking is a huge component. You also have to know the orchestration tools, such as Terraform. Get in and play with it. Get a trial account. In general, I felt that this was a good exam. It felt meaningful and tested the things you need to know.

Subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam.
Greg Hyman Principal Program Manager, Oracle Cloud Infrastructure Certification greg.hyman@oracle.com Twitter: @GregoryHyman LinkedIn: GregoryRHyman Associated links: Oracle Cloud Infrastructure 2018 Architect Associate exam Oracle Cloud Infrastructure 2018 Architect Associate study guide Oracle Cloud Infrastructure 2018 Architect Associate practice test Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series are listed under Greg’s blog page.


Oracle Cloud Infrastructure

Get to the Bottom of Website Performance Issues

It's a familiar scenario: A person clicks a link to your business's website, types in the URL, or opens the mobile app, and then the waiting begins. If it takes more than a few seconds for the website or app to load, chances are strong that the user will move on to the next activity. The result? You just lost a potential customer, and they probably blame your business for the poor experience. But here's the thing: website performance issues might not be your fault. The internet today is an extension of your corporate network and cloud environment. It's a big place, and latency problems can stem from several factors. The slowness could be the result of problems with the internet service provider, the infrastructure platform the service is hosted on, or the Software as a Service platform that delivers it. Or maybe there's a problem with the route the internet traffic is taking to access your services. What's clear is that you need to quickly identify the cause of the performance problems and take steps to mitigate the latency before the business loses any more revenue (or brand reputation, for that matter). Time to put on the Sherlock Holmes hat. Here are some straightforward steps you can take to determine the cause of website performance issues.

1. Make sure that the problem isn't on your end

If you're hosting the servers, or if they are hosted in the cloud, the first thing to do is consult performance monitoring tools to make sure that the problem isn't onsite in one of your data centers or in your cloud infrastructure. Monitoring tools can tell you whether the latency is caused by a runaway process or by a problem with a database application, for example. It's also a good idea to check any third-party scripts embedded in the services to see if they're the culprits. Depending on how the site is architected, with dozens of objects on a page, slowness could result from problems with ad servers, JavaScript components, tracking pixels, fonts, and other components outside your control. When you're certain the website performance issues aren't inside your servers or in the application code, it's time to look outside of your immediate environment.

2. Run traceroutes

When latencies begin to creep up and users start complaining about site or app slowness, it's important to look at the path that internet traffic is taking to access your services. You can accomplish this by running traceroutes. Traceroute is a utility that displays the route from a user's device through the internet to a specified endpoint, such as your site. It shows the routers encountered at each hop and displays the amount of time that each hop takes. If you run a traceroute and determine, for example, that your internet service provider is taking your traffic across the ocean and back for no discernible reason, you'd better pick up the phone. Find out what the provider is doing and why.

3. Consult the Internet Intelligence Map

Another step you can take to gauge the health of the global internet is to consult Oracle's Internet Intelligence Map. The map is a free resource that lets users know how things like natural disasters, government-imposed internet shutdowns, and fiber-optic cable cuts affect internet traffic across the globe. If you notice that users from a particular country are complaining about latency problems, you can look at the Internet Intelligence Map to see whether an issue with internet connectivity in that country has been identified.
You can also drill down a little deeper to examine latency and connectivity trends for individual network service providers in that country. The online resource is divided into two sections: Country Statistics and Traffic Shifts. The Country Statistics section reports any potential internet disruptions seen during the past week, highlighting any that have occurred over the previous 48 hours. Disruption severity is based on three primary measures of internet connectivity in that country: Border Gateway Protocol (BGP) routing information, traceroutes to responding hosts, and DNS queries from that country received by Oracle Dyn's authoritative DNS servers. The Traffic Shifts section is based on traceroute data and illustrates changes in how traffic is reaching target networks, as well as associated changes in latency. As an example, the Internet Intelligence Map clearly depicted a network connectivity dip in Iraq on June 21. This particular dip occurred as the result of a government-imposed internet shutdown that was enacted to deter students from cheating during high school exams.

It's important to work with cloud infrastructure providers who offer visibility into internet traffic patterns. This provides added peace of mind as your business migrates to the cloud, builds cloud-native applications, and troubleshoots website performance issues. The internet is the world's most important network, but it's incredibly volatile. Disruptions on the internet can affect your business in profound ways. That's why today's businesses need better visibility into the health of the global internet. Once you have these insights, you can find ways to reroute traffic and work around outages and latency issues. The result is improved overall website and application performance and, more importantly, happier customers.
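To try step 2 yourself, you can run a traceroute from any client machine (on Linux and macOS, the equivalent command is traceroute). A minimal sketch from a PowerShell prompt on Windows; the destination below is a placeholder, so substitute your site's hostname:

# Classic traceroute; -d skips reverse DNS lookups so the trace completes faster
tracert -d www.example.com

# PowerShell's built-in alternative, which returns the hop list as an object
Test-NetConnection -ComputerName www.example.com -TraceRoute

Look for consecutive hops where latency jumps sharply, or for geographic detours; those are the segments worth raising with your provider.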


Oracle Cloud Infrastructure

Announcing SAP NetWeaver® Support for VM Shapes on Oracle Cloud Infrastructure

The industry’s broadest and most integrated public cloud, Oracle Cloud Infrastructure, offers best-in-class services for Infrastructure as a Service (IaaS), with deployment options ranging from the public cloud to the ability to consume cloud services in your own data center. By reducing IT complexity, Oracle Cloud helps organizations increase agility, drive innovation, and transform businesses. Starting in June 2018, Oracle Cloud Infrastructure virtual machine shapes are supported with SAP NetWeaver-based applications as well. These new shapes expand the instance options beyond the already-supported bare metal instances for SAP NetWeaver. With this step, we offer more flexibility and a broader portfolio to SAP customers.

Extreme Performance, Availability, and Security for SAP Business Suite Applications

Oracle works with SAP to certify and support SAP NetWeaver applications on Oracle Cloud Infrastructure, making it easier for organizations to move Oracle-based SAP applications to the cloud. Oracle Cloud enables customers to run the same Oracle Database and SAP applications, preserving their existing investments while reducing costs and improving agility. Unlike products from first-generation cloud providers, Oracle Cloud Infrastructure is uniquely architected to support enterprise workloads, and it is the only cloud optimized for Oracle Database. Oracle Cloud Infrastructure is also designed to provide the performance predictability, isolation, security, governance, and transparency required for SAP and other enterprise workloads. With this announcement, you can run SAP Oracle-based applications in the cloud with the same control and capabilities as in your data center, and with no need to retrain your teams. You can take advantage of performance and availability equal to or better than on-premises, while gaining the ability to deploy your highest-performance applications (ones that require millions of consistent IOPS and millisecond latency) on elastic resources with pay-as-you-go flexibility. This means that you can run your Oracle-based SAP applications faster and at lower cost in the cloud! What's more, you can benefit from simple, predictable, and flexible pricing with universal credits. And when it comes to governance, you can compartmentalize shared cloud resources using a simple policy language to provide self-service access while still maintaining centralized governance and visibility, even across complex organizations.

Multiple Options Available

Oracle offers various shapes and grades, both bare metal and virtual, on Oracle Cloud Infrastructure. These offerings enable more customers to deploy and access Oracle Database applications in the cloud with performance, security, and availability equal to or better than on-premises systems. You’ll gain performance that scales with ease. Oracle and SAP have certified SAP NetWeaver and SAP NetWeaver Business Warehouse-based applications to run on Oracle Cloud Infrastructure and Exadata Cloud Service. SAP BusinessObjects releases based on 4.2 SP level 5 and above are supported as well. SAP Hybris is supported on Oracle Cloud, provided the requirements on the SAP Hybris Help Portal are met. Read more in the SAP on OCI public portal.


Oracle Cloud Infrastructure

Protecting Yourself from Email Imposters

Having spent my entire career in technology, I feel like I am pretty savvy about email scams. They used to be fairly obvious, and I know better than to try to help a Nigerian prince get his fortune back so that he can share it with me. But as we have all become more savvy, unfortunately, so have the threat actors.

There are three primary categories of email-based advanced threats: impersonation, imposters, and malicious URLs and attachments. URL and attachment scams rely on someone clicking a URL or attachment that performs an action. You can follow best practices like only opening attachments and URLs from trusted sources, but a tool like FireEye helps ensure that mistakes don't happen.

I think the scariest threats are impersonations and imposters. Once a threat actor has convinced a person that the threat actor is someone else, the imposter can convince even the most well-informed end users to provide all the access and information they request. For example, if my executive is Mike Smith and he sends me an urgent message to take care of a payment, I would fulfill his request. In that case, the email address is clearly not my executive's address, because it was sent from a personal account; this is easier to avoid. But threat actors are getting savvier: an imposter domain might differ by only an extra "l" in the domain name, which is tricky to catch when reading emails quickly. What's more, like most of us, I am busy and read many of my emails on my mobile device, where I no longer get the visual hint that something is off about the message. Now the likelihood of action being taken on such an email has increased. As threat actors continue to manipulate the visual appearance of emails, I no longer feel confident that I can protect myself and my company from email threats on my own.

For organizations to protect themselves, it is critical to use tools that help identify these threats before they reach employees. To protect against malicious emails, organizations simply route messages to FireEye's Email Security, which first analyzes the emails for spam and known viruses. It then uses the signatureless detonation chamber, the MVX engine, to analyze every attachment and URL for threats and stop advanced attacks in real time. To identify imposters, FireEye's Email Security also looks for:

- Newly registered domains
- Looks-like and sounds-like domains
- Reply-to address and message header analysis
- Friendly display name and username matching
- CEO fraud algorithms

Keeping in mind that email volume is inconsistent, FireEye is able to scale effectively because they have built their product on Oracle Cloud Infrastructure. They can move suspicious emails into separate VMs and can burst up, because threat actors are unpredictable. See our relationship in action by watching the Oracle Cloud Infrastructure and FireEye webinar, or experience our joint offering immediately through FireEye's free Jump Start demo lab environment. In this Jump Start lab, you can follow a step-by-step guide and experience FireEye's Email Security offering.


Oracle Cloud Infrastructure

Windows Custom Startup Scripts and Cloud-Init on Oracle Cloud Infrastructure

We are excited to announce an easy way to configure and customize Microsoft Windows Server compute instances on Oracle Cloud Infrastructure using Cloudbase-Init, the Windows equivalent of Linux Cloud-Init. With the new integrated Cloud-Init experience for Windows Server, you can easily bootstrap an instance with additional applications, host configurations, and custom setups. This capability is provided by a Cloud-Init custom user data startup script, a feature that is now available on Oracle Cloud Infrastructure compute instances running either Linux or Windows Server.

What is User Data?

User data is a mechanism to inject a script or custom metadata when a compute instance is initializing on Oracle Cloud Infrastructure. This data is passed to the instance at provisioning time to customize the instance as needed. Instance user data can be implemented using a variety of scripting languages. See Windows Cloudbase-Init for more information.

Windows Instance User Data Startup Script

The Windows Cloudbase-Init experience is available for bare metal and virtual machine Windows Server compute instances, across all regions. There is no additional cost for this feature, and all Windows Server OS images now come with Cloudbase-Init installed by default. Cloudbase-Init also includes a feature that fully automates the Windows Remote Management (WinRM) configuration, without any manual user setup.

Getting Started

The first step is to create your user data script. The following content-type formats are supported: PEM certificate, Batch, PowerShell, Bash, Python, EC2 format, and cloud-config. For more detailed information, see Cloudbase-Init user data.

The following example is a simple PowerShell script that changes the hostname and writes output to a custom file on the local boot volume. The Sysnative parameter is required and must be on the first line. For PowerShell, use: #ps1_sysnative

Copy the following script and save it as a .ps1 file. (This script changes the computer name to ‘WIN_OCI_INSTANCE_AD1_FE1’.)

#ps1_sysnative
function Get-TimeStamp {
    return "[{0:MM/dd/yy} {0:HH:mm:ss}]" -f (Get-Date)
}
$computerName='WIN_OCI_INSTANCE_AD1_FE1'
$path = $env:SystemRoot + "\Temp\"
$logFile = $path + "CloudInit_$(get-date -f yyyy-MM-dd).log"
Write-Host -fore Green "Creating Log File"
New-Item $logFile -ItemType file
Write-Output "$(Get-TimeStamp) Logfile created..." | Out-File -FilePath $logFile -Append
Write-Host -fore yellow "Changing ComputerName"
Rename-Computer -NewName $computerName
Write-Host -fore green "Changed ComputerName"
Write-Output "$(Get-TimeStamp) Changed ComputerName" | Out-File -FilePath $logFile -Append

The custom user data startup script is implemented as part of the Create Instance setup, via either the Console or the CLI (command line interface).

Steps via Console

1. Log in to the Oracle Cloud Infrastructure Console.
2. Select Menu, then Compute, followed by Instances.
3. Click Create Instance and complete the required instance section fields.
4. Under Show Advanced Options, find the Startup Script option and browse for the .ps1 script that you saved earlier.
5. Complete the Networking section and click Create Instance.

After your instance is provisioned, Cloudbase-Init executes your script and configures WinRM automatically.

Steps via CLI

The CLI provides the same functionality as the Console. To install the CLI, follow these installation options.
First, obtain the values for the required parameters by using the following CLI commands (run from a PowerShell command line):

--compartment-id [CompartmentOCID]: ./oci iam compartment list
  Store the value for reuse, for example: $C = 'ocid1.compartment.oc1..aaaaaaaa....'
--availability-domain [ADName]: ./oci iam availability-domain list
--shape [ShapeName]: ./oci compute shape list --compartment-id $C
--image-id [ImageOCID]: ./oci compute image list -c $C | ConvertFrom-Json | ForEach-Object{$_.data} | where -Property display-name -Match 'Windows-Server-2016' | fl -Property display-name, id
--subnet-id [SubnetOCID]: ./oci network vcn list -c $C
  Then select the subnet OCID that matches the availability domain chosen above: ./oci network subnet list -c $C --vcn-id ocid1.vcn.oc1.iad.aaaaaaa….
--user-data-file [filename]: the path and file name of your user data startup script
--display-name [StringInstanceName]: a free-form instance display name
--assign-public-ip: true

Syntax to launch a compute instance:

./oci compute instance launch --availability-domain [ADName] --compartment-id [CompartmentOCID] --shape [ShapeName] --subnet-id [SubnetOCID] --user-data-file [filename] --display-name [StringInstanceName] --assign-public-ip true

Example:

./oci compute instance launch --availability-domain mgRc:US-ASHBURN-AD-3 --compartment-id $C --shape VM.Standard2.1 --image-id ocid1.image.oc1.iad.aaaaaaaag.... --subnet-id ocid1.subnet.oc1.iad.aaaaaaaar.... --user-data-file PScloudbaseinit1.ps1 --display-name MyCloudInitInstance

To query the instance state, take the instance ID from the successful output of the previous command:

./oci compute instance get --instance-id ocid1.instance.oc1.iad.abuwcljr32gb5....

Typical User Data Custom Script Use Cases

- Update server host configuration, including the registry
- Enable GPU support: a custom script to install the GPU driver
- Add and change local user accounts
- Join the instance to a domain controller
- Install certificates into the certificate store
- Enable additional Windows features, like IIS
- Copy any required application workload files from Object Storage directly to the local instance
- Download and install client agents, like Chef, Puppet, or SCOM agents

WinRM

Windows Remote Management (WinRM) is a native Windows alternative to SSH that provides you with the capability to remotely manage a Windows host. The Windows PowerShell command line has the benefit of integrated WinRM cmdlets, which provides full functionality via a single tool for all Windows management tasks.

How to Use WinRM on an Oracle Cloud Infrastructure Windows Instance

1. Open the Console.
2. Add an ingress rule to the VCN security list used by the instance:
   a. In the Console, navigate to the newly launched instance (with the startup script) to view the instance details.
   b. Under Subnet Settings, click the subnet name.
   c. Under Resources, navigate to Security Lists and open the security list.
   d. Click Edit All Rules.
   e. Under Allow Rules for Ingress, click Add Rule:
      i. Destination Port Range: 5986
      ii. Source Port Range: All
      iii. IP Protocol: TCP
      iv. Source CIDR: 0.0.0.0/0 (we recommend restricting the source to your authorized CIDR block)
      v. Source Type: CIDR
   f. Save the security list rules.
3. Get the public IP of your instance from the instance details screen.
4. On your Windows client, open a PowerShell command window.
Use the following PowerShell snippet to connect to your instance:

# Get the public IP from your OCI running Windows instance
$ComputerName = "USE PUBLIC IP OF INSTANCE"
# Store your username and password credentials (default username is opc)
$c = Get-Credential
# Options
$opt = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
# Create a new PSSession (prerequisite: ensure the security list has an ingress rule for port 5986)
$PSSession = New-PSSession -ComputerName $ComputerName -UseSSL -SessionOption $opt -Authentication Basic -Credential $c
# Connect to the instance PSSession
Enter-PSSession $PSSession
# To close the connection, use: Exit-PSSession

You can now remotely manage your Windows Server compute instance from your local PowerShell client. Windows Server users now have two great options to set up a custom compute instance. They also benefit from being able to use WinRM to remotely manage and securely access a Windows instance. For more information, see the following documentation: Custom User Data Startup Script on Windows Images, and the CLI reference to launch an instance with user data. (There will also be additional documented script examples in the future.)
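Building on the snippet above, you can also run commands remotely without entering an interactive session. For example, here is a minimal sketch that verifies the sample startup script from earlier in this post actually ran, by reading the log file it writes (the log path matches the sample script; adjust it if you changed the script):

# Reuse the $PSSession created above to execute a script block remotely
Invoke-Command -Session $PSSession -ScriptBlock {
    # The sample startup script writes CloudInit_<date>.log under %SystemRoot%\Temp
    Get-Content -Path "$env:SystemRoot\Temp\CloudInit_*.log"
}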


Developer Tools

Deploying Microsoft SQL Server on Oracle Cloud Infrastructure

Introduction

There are several databases and applications running on Oracle Cloud Infrastructure. Among them, Microsoft SQL Server is a relational database system widely used for online transaction processing and decision support systems. This blog post describes how to deploy a Microsoft SQL Server database running on Microsoft Windows Server on a single Oracle Cloud Infrastructure virtual machine (VM).

The Microsoft SQL Server installation wizard lets you choose the SQL Server components to install, such as the database engine, analysis services, reporting services, integration services, master data services, data quality services, and connectivity components. Starting with SQL Server 2016 (13.x), SQL Server Management Tools is no longer installed from the main feature tree. You may need to manually download and install the SQL Server Management Tools on the Windows server to access and manage the Microsoft SQL Server database through a graphical user interface (GUI).

Before You Start

Before you start the installation of the Microsoft SQL Server database, consider the following:

- Identify IOPS or I/O throughput requirements.
- Choose the appropriate Oracle Cloud Infrastructure VM shape (OCPU, memory, and storage).
- Create a secured network on Oracle Cloud Infrastructure to access the MS SQL Server database.
- Choose and install a supported Windows Server version.
- Identify the required MS SQL Server services to be installed.

Choose the VM Shape and Install Windows Server

1. Before installing Windows Server, create an Oracle Cloud Infrastructure VCN (virtual cloud network) and choose the appropriate availability domain, subnet, and so on to build your Windows server. You can choose a Windows image from the Oracle Cloud Infrastructure repository, or you can bring your own Windows image to deploy on the virtual machine. We strongly recommend checking Windows Server version support on Oracle Cloud Infrastructure before you start deploying. Here, we choose the Windows Server 2012 R2 Standard edition from the image repository and the VM.Standard2.8 shape.

2. In addition to the existing ingress stateful security rules, you may need to add ingress security rules to allow RDP (Remote Desktop) access to the Windows server.

3. Once the Windows server is provisioned, the Console displays the username and initial temporary password. Log in to the Windows server through Remote Desktop with the username "opc" and the initial temporary password, and change the password after you first access the server.

4. Use the local boot volume to install Windows Server, the SQL Server binaries, and all the required supporting tools. However, use a block storage volume, attached to the Windows server, to store the SQL Server database.

5. Run the iSCSI attachment commands, using Windows Server PowerShell as an administrator, to connect the block volume to the instance at the Windows operating system level. The Console provides these commands when you attach the volume; a sketch of the typical pattern appears at the end of this post.

6. After you run those commands, you may need to format and label the disk by using Computer Management and Disk Management on the Windows server. Microsoft recommends using the NTFS file system format for better performance.

Install MS SQL Server

1. Download the appropriate SQL Server version from Microsoft, or copy it to the Windows server if you have already downloaded it. Run the installer file to install Microsoft SQL Server, and choose the required tools to install on the Windows server.

2. By default, MS SQL Server creates system databases such as master, model, msdb, and tempdb. You may need to create application/user databases to store application/user data. You can access your MS SQL Server database either from the command line or through the Microsoft SQL Server Management Studio user interface.

3. You can store each application database's datafile and logfile on the block storage volume that is already mounted and labeled on the Windows server. In this blog post, we attached a block storage volume to the Windows server, and formatted and labeled the new disk as "D". Now we use the "D" drive to store the datafile and logfile of the newly created application database.

Conclusion

In this blog post, you learned how to deploy a Microsoft SQL Server database on Oracle Cloud Infrastructure in a Windows Server environment. We also discussed storing the application data on Oracle Cloud Infrastructure block storage to achieve higher performance.
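As referenced in step 5 of the previous section, the iSCSI commands that the Console generates for a Windows instance follow the pattern sketched below. These are standard Windows iSCSI initiator cmdlets; the portal IP and the IQN shown here are placeholders, so always copy the exact commands from the Console's iSCSI Commands and Information dialog for your volume attachment:

# Make the Microsoft iSCSI initiator service start automatically, then start it now
Set-Service -Name msiscsi -StartupType Automatic
Start-Service msiscsi
# Register the target portal (address comes from the Console for your attachment)
New-IscsiTargetPortal -TargetPortalAddress 169.254.2.2
# Connect to the volume's target persistently so it reattaches after reboots
Connect-IscsiTarget -NodeAddress "iqn.2015-12.com.oracleiaas:...." -TargetPortalAddress 169.254.2.2 -IsPersistent $true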


Oracle Cloud Infrastructure

Introducing Oracle Cloud Infrastructure Data Transfer Appliance

Migrating data is often the first step toward adopting the cloud. However, when uploading data to the cloud, sometimes even the fastest available public internet connections fall short. For example, on a leased T3 line, migrating 100 TB of data can take up to 8 months, an untenable situation! Oracle Cloud FastConnect offers a great alternative for quickly uploading data to the cloud. But using FastConnect may not always be feasible, especially when you don't expect to upload data frequently or when the data migration is part of an effort to retire your on-premises data center. A few short months ago, when we announced the availability of Data Transfer Disk, we promised that there was more to come. Today, I am excited to announce the general availability of Oracle Cloud Infrastructure Data Transfer Appliance.

Oracle Cloud Infrastructure Data Transfer Appliance is a PB-scale offline data transfer service. You can now use an Oracle-branded, purpose-built storage appliance to cost-effectively and easily migrate your data to the cloud. Each transfer appliance supports migrating up to 150 TB of data. To migrate PB-scale data sets, you can simply order multiple transfer appliances. The best part is that we charge you exactly $0 to use the service. That's right: Oracle Cloud customers can use the Data Transfer Appliance for free. We even pay the cost of shipping the appliance. From the time you receive the transfer appliance, you have up to 30 days to copy your data and ship the appliance back to the nearest Oracle data transfer site. When we receive it, we upload the data to your Oracle Cloud Object or Archive Storage using high-speed internet connections. Large datasets that would've taken weeks or months to upload can now be uploaded in a fraction of the time.

The data transfer appliance is a 2U device that can rest standalone on a desk or fit in a standard rack. Weighing just 38 pounds, the appliance is easily handled by one person. The appliance was built with safety at the forefront. It's tamper resistant and tamper evident. Only the serial port and the network ports are exposed, and any attempt to access the transfer appliance hardware in non-standard ways is detected. All the data copied to the transfer appliance is encrypted by default, and the encryption passphrase is stored separately, never on the device with the data. The transfer appliance is shipped to you in a ruggedized case to shield it from the G-forces of transportation. You must ship the transfer appliance back to Oracle in the same shipping case.

Oracle Cloud Infrastructure Data Transfer Appliance Shipping Case

How It Works

Order the Data Transfer Service

To use the data transfer appliance to ship your data, place an order for the desired quantity of data transfer appliances. Your Oracle sales rep can help you with the order. Make sure that you have also purchased sufficient Oracle Cloud credits so that we can upload your data to your Oracle Cloud tenancy. Placing an order for the data transfer service entitles you to the use of this service.

Requesting the Transfer Appliance

To request an appliance, log in to the Oracle Cloud Infrastructure Console and create a Transfer Job of the type Appliance, in a region of your choice. While creating the Transfer Job, you must also specify the bucket to which the data must be uploaded. Currently, all data from a single transfer appliance can be uploaded to only one bucket.
Next, select the transfer job that you created, click the Request Transfer Appliance button, and specify the address to which the appliance must be shipped. A transfer appliance label is generated with the status Requested, which indicates that Oracle has received your request. When the status of the appliance changes from Requested to Oracle Preparing, your request has been accepted and the transfer appliance you requested will be shipped shortly. If you are requesting more than one transfer appliance, you can request that the appliances be shipped to multiple locations.

Preparing the Transfer Appliance

When you receive the data transfer appliance, it comes with a security tag with a unique number engraved on it. Verify that the tag label matches the number posted in the Oracle Cloud Console. If the number matches, retrieve the transfer appliance from the case, plug it into your network, and assign an IP to it through the serial console. You can use the provided USB-to-serial cable and your favorite terminal emulator to access the serial console. You need to unlock the transfer appliance before you can use it. Download the Data Transfer Utility on a Linux host and follow the instructions to prepare the transfer appliance. Retrieve the encryption passphrase using the Data Transfer Utility; this passphrase is used to encrypt the data on the transfer appliance. When the transfer appliance is unlocked and ready for use, create a dataset. A dataset is essentially an NFSv3 mount point. Currently, we support creating one dataset per transfer appliance. That's it! You just configured the Data Transfer Appliance as an NFS filer.

Copying Data to the Data Transfer Appliance

Mount the NFSv3 dataset on any Linux-compatible host of your choice and copy data to it using regular file system commands. We preserve the source file/folder hierarchy by storing each object under a flattened file name. For example, a file in the folder hierarchy Logs > July2018 > DBLog001.txt is stored as an object named /Logs/July2018/DBLog001.txt, which simulates a virtual folder hierarchy in Oracle Object or Archive Storage. Once you have copied all the data to the transfer appliance, seal the dataset. Sealing the dataset creates a manifest file that contains an index of all the files copied, including the file MD5 hashes, which are used to verify the integrity of the data as we upload it to your Oracle Cloud tenancy. Finally, finalize the transfer appliance. At this point, you can no longer access the appliance for dataset operations. The transfer appliance is now ready to be shipped back to the Oracle transfer site.

Shipping the Appliance Back to Oracle

When we ship you the data transfer appliance, the shipping case includes a return shipping label, which you must use to ship the transfer appliance back to the nearest Oracle data transfer site. If you misplace the return shipping label, reach out to us and we will be happy to provide a copy. Make sure that you return the transfer appliance within the allocated 30-day period. If you need more time, request an extension by creating a support request (SR).

Chain of Custody

Using the Oracle Cloud Console or the Data Transfer Utility, you can track the status of the data transfer process throughout its lifecycle, from the time you requested the appliance to the time the data is uploaded to your Oracle Cloud tenancy.
Confirmation That Data Was Uploaded to Your Oracle Cloud Tenancy

When Oracle processes your transfer appliance and uploads the data to your Oracle Cloud tenancy, a data upload summary is posted to the same bucket where the data was uploaded. The upload summary provides both a summary and a detailed view of the successful and unsuccessful file uploads, including information about why some files were skipped, so that you can take the necessary corrective action. Before you delete the primary copy of the data, it's important that you review the upload summary and verify the content in your Object Storage bucket. Once the upload process is complete, your transfer appliance status changes to Complete. Once the transfer job is complete, you must close it out. Closing a transfer job requires that every associated transfer appliance is in a completed state.

Getting Support

If you need help, reach out to the Oracle support channels. The Data Transfer Appliance service is currently available in the US regions (Phoenix and Ashburn), and we will be rolling out the service to other Oracle Cloud Infrastructure regions soon. For more information, please refer to the FAQs and the Data Transfer Appliance product documentation.
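As a rough sanity check on the transfer-time estimate at the top of this post, here is a back-of-envelope calculation, assuming a T3 line's nominal 45 Mbps with no protocol overhead or contention:

# 100 TB expressed in bits
$bits = 100e12 * 8
# Divide by nominal T3 bandwidth of 45 Mbps to get transfer time in seconds
$seconds = $bits / 45e6
# Roughly 206 days of continuous, uninterrupted transfer; real-world overhead
# and shared usage push the total toward the 8 months cited above
"{0:N0} days" -f ($seconds / 86400)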


Oracle Cloud Infrastructure

Migrate Servers to Oracle Cloud using PlateSpin Migrate

We are pleased to announce the availability of PlateSpin Migrate support for Oracle Cloud Infrastructure. Micro Focus offers PlateSpin Migrate, an industry-proven workload migration solution that enables customers to migrate their servers to Oracle Cloud over the network. Here is a quick overview from Micro Focus on migrating servers to Oracle Cloud with PlateSpin Migrate. To read the full instructions on the migration process, download the best practices white paper from PlateSpin Migrate here.

PlateSpin Migrate

PlateSpin Migrate is a powerful server portability solution that automates the process of migrating servers over the network between physical machines, virtual hosts, and enterprise cloud platforms, all from a single point of control. PlateSpin Migrate refers to the servers being migrated as "workloads." A workload in this context is the aggregation of the software stack installed on the server: the operating system, applications and middleware, and any data that resides on the server volumes. PlateSpin Migrate provides enterprises and service providers with a mature, proven solution for migrating, testing, and rebalancing workloads across infrastructure boundaries. PlateSpin Migrate scales horizontally, with up to 40 concurrently active migrations per PlateSpin Migrate server.

Overview of the Migration Process and Prerequisites

PlateSpin Migrate can replicate machines to Oracle Cloud Infrastructure Compute. At the moment, only the full migration process, which replicates the entire volume data from source to target, is available. To avoid changes that won't be replicated to the target, ensure that the applications on the source machine are not in use for the duration of the full migration. Once the full migration is complete, the source is powered down and the target is brought online.

Migration to Oracle Cloud using PlateSpin Migrate includes the following steps:

1. Install the Migrate server and Migrate client. The Migrate server runs on Windows OS and can be installed either at the source machine location or inside Oracle Cloud Infrastructure. The PlateSpin Migrate client is the graphical user interface; it can be installed either on the PlateSpin Migrate server or on a separate machine.
2. Using the Migrate client, discover the source machine that needs to be migrated to Oracle Cloud Infrastructure Compute.
3. Create the target VM instance in Oracle Cloud Infrastructure manually. It must be launched from the PlateSpin custom image. Once the target instance is launched, provide its details to register it with the Migrate server.
4. Set up a migration job between the source machine and the registered target machine using the PlateSpin Migrate client. The Migrate server orchestrates the migration process. The source machine transfers data directly to the target instance, and the data can be encrypted during transfer.

Additional Resources

To evaluate PlateSpin Migrate, download a free trial here. Read the documentation from PlateSpin Migrate. The PlateSpin Migrate listing on Oracle Marketplace can be found here.


Customer Stories

How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Associate Exam – Rajib Kundu

As part of our series of interviews with Oracle employees, partners, and customers who have successfully passed the Oracle Cloud Infrastructure 2018 Architect Associate exam, we recently interviewed Rajib Kundu of SmartDog Services. Rajib is a Database Architect and SQL Server evangelist. His primary passion is performance tuning, and he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. He has worked for two years as a Cloud Architect, where he has supervised and participated in the implementation of technologies and platforms supporting global, 24x7 internet applications.

Greg: How did you prepare for the certification?

Rajib: My focus was on the Networking and Database services in OCI (Oracle Cloud Infrastructure), and especially on Identity and Access Management, high availability solutions, and public and private subnets. These were the types of things that I focused on first. I also got quite familiar with Terraform. You need to familiarize yourself with the exam topics. I also reviewed the videos that are posted, along with various documentation and blogs, and I signed up for the free account to test scenarios. Working with the console helped me improve my confidence with OCI and taught me how to create and configure resources. I also enrolled in the available training and reviewed the OCI user guide.

Greg: How is life after getting certified?

Rajib: I have always considered myself a SQL Server guy, but once I earned the OCI certification, I felt very good about myself! I updated my Facebook and LinkedIn and received a lot of positive responses from coworkers. I've found that when I'm demonstrating in front of a client, having the certification reinforces their trust in my abilities. I've included the digital badge on my business card as well, and this always gets the attention of the client.

Greg: Any other advice you'd like to share?

Rajib: I must suggest to everyone: once you complete the training, PLEASE do the practice test! Do it at least one or two times to ensure that you are ready for the exam.

Rajib's blog: https://rajibsqldba.wordpress.com

Subscribe to this page to help you prepare for the Oracle Cloud Infrastructure 2018 Architect Associate exam. Greg Hyman Principal Program Manager, Oracle Cloud Infrastructure Certification greg.hyman@oracle.com Twitter: @GregoryHyman LinkedIn: GregoryRHyman Associated links: Oracle Cloud Infrastructure 2018 Architect Associate exam Oracle Cloud Infrastructure 2018 Architect Associate study guide Oracle Cloud Infrastructure 2018 Architect Associate practice test Register for the Oracle Cloud Infrastructure 2018 Architect Associate exam Other blogs in the How to Successfully Prepare for the Oracle Cloud Infrastructure 2018 Architect Exam series are listed under Greg's blog page.


Oracle Cloud Infrastructure

Read the RedMonk report on getting the most for your IaaS dollar

Analysts have a tough job when comparing and analyzing IaaS options for their customers. Cloud service providers offer different services with different SLAs, based on hardware that you really shouldn't have to worry about, all with varying pricing models. That's why Oracle appreciates that the developer-focused analyst firm RedMonk took the time to dig into the details in its recent report, IaaS Pricing Patterns and Trends 2018. The report highlights the providers that offer the most compute, disk, and memory at various pricing levels and list prices. Here are two highlights:

"Oracle in particular is pricing aggressively on this front, offering more memory per dollar across all of their instances. They offer roughly 2.5x more memory/dollar in their VM.Standard2.24 instance as compared to their next nearest competitor."

"According to Oracle, 1 of their OCPUs is equivalent to 2 vCPUs, and on that basis Oracle emerges as the pricing leader for compute, offering the highest amount of vCPU compute capacity per dollar spent. Other providers are clustered with no clear competitive standouts."

The report is based on prices in the lowest-cost US-based region, with no special pricing or discounts. With Oracle, it just gets better from there. Oracle provides discounts through a Universal Credits model instead of requiring a commitment to specific reserved instances (with a predefined region, size, or OS) that limit your flexibility. Oracle provides discounts based on committed spend and the length of commitment, with the flexibility to use any IaaS or PaaS service. Mix, match, and change freely between resources or regions at any time. Any overages also receive the same discount. Take a moment to read RedMonk's report. Then give Oracle Cloud a try.

