Recent Posts


Join Us at Identiverse for Identities for Everything

Photo courtesy of Dan Vogel

If you're in the Washington, D.C. area on Thursday, June 27, and attending Identiverse, swing by. Dan Vogel and Norka Lucena from the Oracle Cloud Infrastructure team are presenting Identities for Everything at 2:35 p.m. in the Ballroom at the Washington Hilton.

As a growing cloud service provider, we faced a problem when building our robust multitenancy identity system: the implementation for authorizing advanced services like compute and DBaaS instances was safe, but could be complicated for customers to reason about. We solved this problem by creating a new type of principal actor, called resource principals, that abstracts both physical and logical resources and self-identifies when communicating with infrastructure services. Resource principals are a novel way to distribute trust at scale. We have found four patterns of resource principals that can be mixed to define all of our cloud resources to date:

- Infrastructure: using physical identifiers (for example, compute instances)
- Ephemeral: using injected identifiers (for example, Kubernetes ReplicaSets)
- Stacked: projecting one principal into another (for example, a managed cache)
- Asserted: collective resources reduced into an individual (for example, object storage)

By assigning identities to everything in our infrastructure, we reduce the scope and number of distributed credentials, better capture customer intent when interacting with infrastructure, and produce more precise and actionable audit logs. Dan and Norka will show how it works by building a simple cloud app and applying these patterns to produce a clear authentication and authorization story. We hope that by sharing these patterns, we can improve the identity solutions of other cloud products by embracing the idea of safe, secure identities for everything.
If you can't attend our session but still want to learn more about how to use Resource Principals with Oracle Cloud, check out this white paper on best practices for IAM on Oracle Cloud Infrastructure.



Machine Learning with H2O.ai and Oracle ERP

Packaged line-of-business (LOB) applications are an area where Oracle is a market leader. These applications contain an enormous amount of data that has the potential to give amazing insight into core business functions, enabling gains in areas such as:

- Operational efficiency
- Cross-sell / up-sell
- Customer experience

Oracle is investing heavily in moving these LOB applications to our cloud, Oracle Cloud Infrastructure (OCI). In parallel, we're investing in the ecosystem, partnering with key ISVs to enable their workloads on our cloud, which allows our customers to leverage the latest innovations in conjunction with the LOB applications that they depend on.

H2O Driverless AI

H2O.ai is a leader in the AI/ML space. Their platform, Driverless AI (DAI), automates much of the machine learning lifecycle, from data ingestion to data engineering and modeling, on to deployment. It enables both data scientists and relatively inexperienced users to generate sophisticated ML models that can have an enormous impact on their business. Late last year, we began integration work with H2O.ai. Initial efforts focused on creating Terraform templates to automate the deployment of Driverless AI on Oracle Cloud Infrastructure. Building on that, H2O.ai was the first of our Quick Start templates to go live. Driverless AI can be deployed today on Oracle Cloud Infrastructure by using Terraform modules that our team and the H2O.ai team developed jointly. Those modules are available in the Quick Start repository on GitHub. We're currently exploring ways to use this kind of data in Oracle enterprise applications to build tailored ML models. An example architecture might look like this:

On Oracle Cloud Infrastructure, Driverless AI can be deployed on NVIDIA GPU machines. This accelerates the building of models, further reducing the end-to-end lifecycle for machine learning.
Oracle Retail Advanced Inventory Planning

The Oracle Retail Advanced Inventory Planning (AIP) module in Oracle ERP is one potential source of interesting data for ML with H2O.ai. An external merchandising system, forecasting system, and replenishment optimization system are integrated with AIP to provide the inventory/foundation data and the forecasting data that AIP needs to effectively plan the inventory flow across the retailer's supply chain. Because AIP can integrate with any forecasting system, Driverless AI could be used to build a model that accounts for both high-frequency (for example, weekend) and lower-frequency (for example, holiday) seasonalities. Driverless AI ships with a time-series recipe based on causal splits (moving windows), lag features, interactions thereof, and the automatic detection of time grouping columns (such as Store and Dept for a dataset with weekly sales for each store and department).

Oracle Retail Merchandising System

The Oracle Retail Merchandising System (RMS) module in Oracle ERP is another fascinating touchpoint. This module includes the following information:

- Expenses: The direct and indirect costs incurred in moving a purchased item from the supplier's warehouse or factory to the purchase order receiving location.
- Inventory Transfers: An organized framework for monitoring the movement of stock.
- Return to Vendor (RTV): Transactions that are used to send merchandise back to a vendor.
- Inventory Adjustments: Increases or decreases to inventory that account for events occurring outside the normal course of business (for example, receipts, sales, and stock counts).
- Purchase Order Receipts (Shipments): Records of the increment to on-hand inventory when goods are received from a supplier.
- Stock Counts: Inventory counted in the store and compared against the system inventory level for discrepancies.

RMS contains a rich dataset that could be used to build models in Driverless AI for anomaly detection around RTV, inventory adjustments, and other events.
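To make the time-series recipe described above more concrete, here is a minimal pandas sketch of causal lag features with time grouping columns. The data and column names are hypothetical, invented for illustration; they don't reflect an actual AIP or Driverless AI schema.

```python
import pandas as pd

# Hypothetical weekly sales at the Store/Dept grain, mirroring the
# Store and Dept grouping-column example in the text.
sales = pd.DataFrame({
    "store": [1, 1, 1, 1, 2, 2, 2, 2],
    "dept":  [10] * 8,
    "week":  list(range(1, 5)) * 2,
    "sales": [100, 120, 90, 130, 200, 210, 190, 220],
})

# Causal (past-only) lag features: shift within each time group so that
# week t only sees sales from weeks t-1 and t-2, never the future.
grouped = sales.groupby(["store", "dept"])["sales"]
sales["sales_lag1"] = grouped.shift(1)
sales["sales_lag2"] = grouped.shift(2)

# A moving-window feature computed over the lagged series keeps the
# split causal: no future information leaks into the training row.
sales["sales_lag_mean2"] = (
    sales.groupby(["store", "dept"])["sales_lag1"]
    .rolling(2).mean()
    .reset_index(level=[0, 1], drop=True)
)
print(sales)
```

An automated tool like Driverless AI generates many such features (and their interactions) for you; the point of the sketch is only to show why the splits are called causal.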
Oracle Retail Price Management

The Oracle Retail Price Management module in Oracle ERP includes the following information:

- Item ID: The ID that is assigned when the price event is created at the transaction item level.
- Cost Change Date: The effective date of the past or future cost change.
- Retail Change Date: The effective date of the past or future retail change.
- Cost: The cost on the effective date of the cost or retail change.
- Retail: The regular selling retail on the effective date of the cost or retail change.
- Markup %: The markup percent on the effective date of the cost or retail change, calculated using the method specified by your system options.

With Driverless AI, we could use past cost changes to train a regression model. That model could suggest future pricing, automatically incorporating both seasonality and product lifecycle. Also, by combining Retail Price Management data with marketing, clickstream, or other end-customer data, a regression model could be built to predict the benefit of pricing changes while accounting for other variables that affect sales.

Oracle Retail Trade Management

The Oracle Retail Trade Management module in Oracle ERP includes the following information:

- Landed Cost: The total cost of an item received from a vendor, inclusive of the supplier cost and all costs associated with moving the item from the supplier's warehouse or factory to the purchase order receiving location.
- Expenses: The direct and indirect costs incurred in moving a purchased item from the supplier's warehouse or factory to the purchase order receiving location.
- Country Level Expenses: The costs of bringing merchandise from the origin country, through the lading port, to the import country's discharge port.
- Zone Level Expenses: The costs of bringing merchandise from the import country's discharge port to the purchase order receiving location.
- Assessments: The cost components that represent the total tax, fee, and duty charges for an item.
- Transportation: The facility to track information from trading partners as merchandise is transported from the manufacturer through customs clearance in the importing country.
- Actual Landed Costs: The actual landed cost incurred when buying an import item.

With Retail Trade Management data tracking the costs of and delays in items reaching their final stocking location, Driverless AI could be used to build a risk model that estimates the impact of changing import and transportation routes.

Oracle Retail Invoice Matching

The Oracle Retail Invoice Matching module in Oracle ERP includes the following information:

- Invoice Matching Results for Shipments: Shipment records are updated with invoice matching results, which attempt to match all invoices in ready-to-match, unresolved, or multi-unresolved status.
- Receiver Cost Adjustments: Updates to the purchase order, shipment, and potentially the item cost in RMS, depending on the reason code action used.
- Receiver Unit Adjustments: Invoice matching discrepancies that are resolved through a receiver unit adjustment.

By joining the information in Retail Invoice Matching with data in other modules, we can build a risk model in Driverless AI for suppliers to predict the probability of invoicing issues for future orders.

Next Steps

This post gives a high-level view of how an open Oracle ecosystem enables our customers to leverage the latest technologies from our partner ecosystem with the LOB applications that they've relied on for decades to run their business. We're actively working with several customers to prove this out in their environments. In addition, my team is working to create a more detailed demo of the integration described here. We look forward to presenting it in more detail, both on this blog and at several upcoming meetups that Oracle and H2O.ai are jointly organizing.
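As a toy illustration of the Retail Price Management regression idea mentioned above, the sketch below fits an ordinary-least-squares line from cost to retail price and uses it to suggest a price for a future cost change. The numbers are invented for illustration (and an automated tool would fit a far richer model with seasonality and lifecycle features).

```python
import numpy as np

# Hypothetical past cost/retail pairs (not real RPM data): fit a simple
# linear model, retail ~ a * cost + b, with ordinary least squares.
cost = np.array([10.0, 12.0, 15.0, 18.0, 20.0])
retail = np.array([16.0, 19.0, 23.5, 28.0, 31.0])

# Design matrix with an intercept column.
A = np.vstack([cost, np.ones_like(cost)]).T
(a, b), *_ = np.linalg.lstsq(A, retail, rcond=None)

# Suggest a retail price for a proposed future cost change.
new_cost = 22.0
suggested_retail = a * new_cost + b
print(f"retail ~ {a:.2f} * cost + {b:.2f}; suggested: {suggested_retail:.2f}")
```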
If you have questions, please reach out to Ben.Lackey@Oracle.com or Peter.Solimo@H2O.ai. We'd love to work with you and see what ML can do with your data!


HPC Investments Continue into ISC 2019

Today, 95 percent of all high-performance computing (HPC) is still done in traditional on-premises deployments. The sporadic demand cycles and rapid evolution of specialized HPC technologies make flexible cloud deployment a great fit for enterprise usage. But other cloud providers simply haven't been able to penetrate this market for a range of reasons, from performance to cost to a lack of key features, such as remote direct memory access (RDMA) capability.

Over the last 12 months, we have invested significantly, in both technology and partnerships, to make Oracle Cloud Infrastructure the best place to run your Big Compute and HPC workloads. At OpenWorld 2018, Larry Ellison announced clustered networking, which lets customers run their Message Passing Interface (MPI) workloads with performance comparable to, and in some cases better than, on-premises HPC clusters. This was the first, and is still the only, bare metal HPC offering with 100G RDMA in a public cloud.* It's in limited availability today, and we expect it to be generally available later in the year. Even further out, we're working on a truly flexible and scalable architecture in which you can have bare metal GPUs, HPC instances, and even Exadata on a clustered network. This opens up use cases such as running a distributed training job on a cluster of GPUs that pulls data from an Exadata system, and then deploying the model on a set of compute nodes, all over the clustered network.

We pushed the boundaries on this new offering with the ability to scale up to 20,000 cores for a single job. This is far beyond what any other cloud can offer today for MPI workloads while maintaining efficiency and performance. To see the benchmarks that compare us and other providers, see the blog post that we published this week. Here's a peek:

We also partnered with Altair last year at OpenWorld to launch their HyperWorks CFD Unlimited solution running on our bare metal NVIDIA GPU offerings.
We recently started working with them on their crash simulation application, Altair Radioss, and on how clustered networking can help reduce the time and cost of crash simulation jobs. For details, including benchmarks, read the blog post.

This week we're in Frankfurt at ISC 2019, along with our partners, showcasing some of these capabilities. You can talk to our engineering teams and try out some of the technologies at our booth, H-730. Some other things you'll want to catch during the week:

- Vendor Showdown on Monday, June 17, at Panorama 2, starting at 1:15 p.m.
- Exhibitor Forum Session on Tuesday, June 18, at Booth N-210, starting at 11:20 a.m.
- Blog: Accelerating DEM Simulations with Rocky on Oracle Cloud and NVIDIA
- Blog: Making Cars Safer with Oracle Cloud and Altair Radioss
- Blog: Large Clusters, Lowest Latency: Clustered Networking on Oracle Cloud Infrastructure
- Hands-on demos and labs at our booth, H-730

Looking forward to seeing you there!

Karan

* Based on comparison to AWS, Azure, and Google Cloud Platform as of June 3, 2019.


Accelerating DEM Simulations with Rocky on Oracle Cloud and NVIDIA

It's a sunny afternoon, you're mowing your lawn, and the grass buildup in your mower disrupts your smooth progress. This disruption could have been avoided if the design process for your mower had involved airflow modeling with particles. Similarly, you'd hope that the discrete element method (DEM) was used to simulate the flow of beans through the coffee machines that you trust to brew your coffee.

DEM simulation packages like Rocky DEM from ESSS include particles and can be coupled with computational fluid dynamics or the finite element method to improve results. However, they add a layer of complexity that increases simulation time. To speed up the simulation, Rocky DEM provides the option to parallelize across a high number of CPUs, or to gain even more speed by unleashing multiple NVIDIA GPUs in Oracle Cloud Infrastructure. No special setup or driver is needed to run Rocky on Oracle Cloud Infrastructure: import your model, choose the number of CPUs or NVIDIA GPUs, and start working. Using Oracle Cloud Infrastructure removes the wait time for resources in your on-premises cluster. It also avoids having people battle for high-end GPUs at peak times and then having those GPUs sit idle for the rest of the week.

"Oracle Cloud Infrastructure and Rocky DEM have collaborated to provide a scalable experience to customers with performance similar to on-premises clusters. The bare metal NVIDIA GPU servers, without hypervisor overhead, further help to tackle very large problems in a reasonable amount of time," said Marcus Reis, Vice President of ESSS.

Depending on the simulation, Oracle Cloud Infrastructure provides different machine shapes to stay cost-effective without compromising on compute power. The following table shows the machine shapes suited for Rocky. Explore all the different storage options, remote direct memory access (RDMA) capabilities, and the composition of NVIDIA GPUs on our Compute service page.
Shape            CPU  GPU
VM.Standard2.4     4  -
BM.Standard2.52   52  -
BM.HPC2.36        36  -
VM.GPU2.1         12  1 x P100
VM.GPU3.1          6  1 x V100
BM.GPU2.2         28  2 x P100
BM.GPU3.8         52  8 x V100

"NVIDIA and Oracle Cloud Infrastructure are collaborating to help customers reduce their computation time from days to hours by providing GPUs for HPC applications. The Tesla P100 and advanced V100 GPUs increase customer productivity while reducing cost," said Paresh Kharya, Director of Product Marketing, Accelerated Compute, NVIDIA.

The following chart shows that NVIDIA GPUs offer up to 6X better price-performance than CPU-based instances for this simulation. It also shows faster results at a similar price when switching from P100 to V100 GPUs or increasing the core count of CPUs.

Oracle Cloud Infrastructure provides bare metal instances with up to 8 Tesla V100 GPUs, and it's making a difference. Companies can start thinking about the engineering and design of their next product rather than worrying about simulation runtimes or compute resource availability. Get started on Oracle Cloud Infrastructure, and run your Rocky DEM workloads today!



Large Clusters, Lowest Latency: Cluster Networking on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure has expanded cluster networking by enabling remote direct memory access (RDMA)-connected clusters of up to 20,000 cores on our BM.HPC2.36 instance type. Our groundbreaking backend network fabric lets you use Mellanox's ConnectX-5, 100-Gbps network interface cards with RDMA over Converged Ethernet (RoCE) v2 to create clusters with the same low-latency networking and application scalability that you expect on premises.

Oracle Cloud Infrastructure is leading the cloud high performance computing (HPC) battle in performance and price. Over the last few months, we have set new cloud standards for internode latency, cloud HPC benchmarks, and application performance. Oracle Cloud Infrastructure's bare metal infrastructure gives you on-premises performance in the cloud. In addition to connecting bare metal nodes through RDMA, cluster networking provides a fabric that will enable future instances and products to communicate at extremely low latencies.

Performance

Ultra-low node-to-node latency is expected on HPC systems, and partners like Exabyte.io have demonstrated Oracle Cloud Infrastructure's leading edge on those metrics. But when you have applications running on thousands of cores, low node-to-node latency isn't enough; the ability to scale models down to a very small size per node matters more. In computational fluid dynamics (CFD), users typically want to know the smallest amount of work they can do on a node before they hit a network bottleneck that limits the scalability of their cluster. This is the network efficiency of an HPC cluster, or, in other words, getting the most "bang for your buck"! The following chart shows the performance of Oracle's cluster networking fabric. We maintain scaling efficiency above 100% at fewer than 10,000 simulation cells per core with popular CFD codes, the same performance that you would see on premises.
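To make the efficiency metric concrete, here is a minimal sketch of how strong-scaling speedup and parallel efficiency are computed from wall-clock times. The timings are hypothetical, invented for illustration; they are not our benchmark numbers.

```python
# Strong scaling: the same CFD job is run on increasing core counts,
# so per-core work (simulation cells per core) shrinks as cores grow.
runs = {36: 3600.0, 72: 1790.0, 144: 900.0, 288: 455.0}  # cores -> seconds

base_cores = min(runs)        # smallest run is the baseline
base_time = runs[base_cores]

for cores, t in sorted(runs.items()):
    speedup = base_time / t                      # how much faster than baseline
    efficiency = speedup / (cores / base_cores)  # 1.0 == perfect scaling
    print(f"{cores:>4} cores: speedup {speedup:5.2f}x, efficiency {efficiency:6.1%}")
```

Efficiency above 100% (superlinear scaling, as in the 72-core row here) typically comes from cache effects: as the per-core working set shrinks, more of it fits in cache, which is why well-behaved clusters can beat perfect scaling until the network becomes the bottleneck.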
It's also important to note that without the penalty of virtualization, bare metal HPC machines can use all the cores on the node without having to reserve any for costly overhead.

The ability of a simulation model to scale this way highlights two important design features. The first is the stability of the underlying network fabric, which can transfer data quickly and consistently. The second is that there is no additional traffic or overhead on the network to limit throughput or latency. You can see this stability in the following chart, which compares on-premises HPC network efficiency to cloud HPC network efficiency. CFD is not the only type of simulation that benefits from Oracle's cluster networking. Crash simulations, like those run on Altair Radioss or LS-DYNA from LSTC, and financial matching simulations, like those offered by BJSS, also use cluster networking.

Price

Oracle Cloud Infrastructure offers the best performance by default. You don't pay extra for block storage performance, RDMA capability, or network bandwidth, and the first 10 TB of egress is free. Cluster networking follows the same paradigm: there is no additional charge for it.

Availability

Today, cluster networking is available in the regions that have our HPC instances: Ashburn, London, Frankfurt, and Tokyo. It will continue to spread throughout all of our regions as cluster networking-enabled instances roll out. To deploy your HPC cluster using cluster networking, reach out to your Oracle rep or contact us directly. Also, visit us at the ISC High Performance conference in Frankfurt, June 16–20. We're in booth H-730. Hope to see you there.



Overview of the Interconnect Between Oracle and Microsoft

Today we announced Oracle and Microsoft Interconnect Clouds to Accelerate Enterprise Cloud Adoption, a cloud interoperability partnership between Microsoft and Oracle. This cross-cloud interconnect enables customers to migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud Infrastructure. Enterprises can now seamlessly connect Azure services, like Analytics and AI, to Oracle Cloud services, like Autonomous Database. By enabling customers to run one part of a workload within Azure and another part of the same workload within Oracle Cloud, this partnership delivers a highly optimized, best-of-both-clouds experience. Taken together, Azure and Oracle Cloud Infrastructure offer customers a one-stop shop for all the cloud services and applications that they need to run their entire business.

Connecting Azure and Oracle Cloud Infrastructure through network and identity interoperability makes move-and-improve migrations seamless. This partnership delivers direct, fast, and highly reliable network connectivity between the two clouds, while continuing to provide the first-class customer service and support that enterprises have come to expect from the two companies. In addition to providing interoperability for customers running Oracle software on Oracle Cloud Infrastructure and Microsoft software on Azure, it enables new and innovative scenarios, like running Oracle E-Business Suite or Oracle JD Edwards EnterpriseOne on Azure against an Oracle Autonomous Database running on Exadata infrastructure in the Oracle Cloud. We envision the following common use cases for multicloud deployments:

- Applications run in separate clouds with consistent controls and data sharing: In this approach, customers deploy applications fully in one cloud or the other, and benefit from common identity management, single sign-on, and the ability to share data between clouds for analytics and other secondary processes.
- Applications span clouds, typically with the database layer in one cloud and the app and web tiers in another: Using a low-latency connection between the clouds lets customers choose preferred components for each application, creating a single consistent application with separate parts running in either cloud, each optimized for its technology stack.

Cross-Cloud Interconnect

As enterprises continue to evaluate the benefits of cloud, they are steadily adopting a multicloud strategy for various reasons, including disaster recovery, high availability, lower cost, and, most importantly, using the best services and solutions available in the market. To enable this diversification, customers interconnect cloud networks by using the internet, IPSec VPNs, or a cloud provider's direct connectivity solution through the customer's on-premises network. Interconnecting cloud networks can require significant investments in time, money, design, procurement, installation, testing, and operations, and it still doesn't guarantee a highly available, redundant, low-latency connection. Oracle and Microsoft recognize these customer challenges and have created a unified enterprise cloud for our mutual customers, doing the tedious, time-consuming work for you by providing low-latency, high-throughput connectivity between the two clouds. The rest of this post describes how to configure the network interconnection between Oracle Cloud Infrastructure and Microsoft Azure to create a secure, private, peered network between the two clouds.

Solution

Oracle and Microsoft have built a dedicated, high-throughput, low-latency, private network connection between Azure and Oracle Cloud Infrastructure data centers in the Ashburn, Virginia region that provides a data conduit between the two clouds.
Customers can use the connection to securely transfer data at a high enough rate for offline handoffs and to support the performance required for primary applications that span the two clouds. Customers can access the connection by using either Oracle FastConnect or Microsoft ExpressRoute, as shown in Figure 1, and they don't need to deal with configuration details or third-party carriers.

Figure 1. Connectivity Between Oracle Cloud Infrastructure and Azure

FastConnect and ExpressRoute together create a path for workloads on both clouds to communicate directly and efficiently, which gives customers flexibility in how they develop and deploy services and solutions across Oracle Cloud Infrastructure and Microsoft Azure. Customers experience the following benefits when they interconnect the Oracle and Microsoft clouds:

- Secure private connection between the two clouds, with no exposure to the internet.
- High availability and reliability, with built-in redundant 10-Gbps physical connections between the clouds.
- High-performance, low-latency, predictable performance compared to the internet or routing through an on-premises network.
- Straightforward, one-time setup.
- No intermediate service provider required to enable the connection.

Connecting Your On-Premises Network to the Interconnect

In Figure 2, the customer's on-premises network is directly connected to Oracle Cloud Infrastructure through FastConnect and to Azure through ExpressRoute, and there's a direct interconnection between the two clouds. In this scenario, users located in the on-premises network can access applications (web tier and app tier) directly within Azure through ExpressRoute. The applications then access the data tier located in Oracle Cloud Infrastructure.

Figure 2. Traffic Flow Between Oracle Cloud Infrastructure, Azure, and Non-Cloud Networks

Workloads can access either cloud through the interconnection.
However, traffic from networks other than Oracle Cloud Infrastructure and Azure can't reach one cloud through the other cloud, which ensures security isolation. In other words, this cross-cloud connection doesn't enable traffic from your on-premises network through the Azure virtual network (VNet) to the Oracle Cloud Infrastructure virtual cloud network (VCN), or from your on-premises network through the VCN to the VNet. For example, customers can't reach Oracle Cloud Infrastructure through Azure (see Figure 3). If you need to reach Oracle Cloud Infrastructure, you need to deploy FastConnect directly from your on-premises network.

Figure 3. No Access from One Cloud Through the Other

Connecting the Cloud Networks

This section describes how to connect an Oracle Cloud Infrastructure VCN to an Azure VNet. Figure 4 shows the components of this connection, and the following table maps the terminology between the two clouds.

Figure 4. Interconnect Routing and Security

Component        Azure                           Oracle Cloud Infrastructure
Virtual network  Virtual network (VNet)          Virtual cloud network (VCN)
Virtual circuit  ExpressRoute circuit            FastConnect private virtual circuit
Gateway          Virtual network gateway         Dynamic routing gateway (DRG)
Routing          Route tables                    Route tables
Security rules   Network security groups (NSGs)  Security lists

Prerequisites

To deploy a cross-cloud solution between Oracle Cloud Infrastructure and Azure, you must have the following prerequisites:

- An Azure VNet with subnets and a virtual network gateway. For information about how to set up the environment, see Azure Virtual Network.
- An Oracle Cloud Infrastructure VCN with subnets and an attached DRG. For information about how to set up the environment, see Overview of Networking.
- No overlapping IP addresses between your VCN and VNet.

Enable the Connection

The direct interconnection must be enabled from each provider's console. Following are the high-level steps. For details, see the cross-connect documentation.
1. Sign in to the Azure portal.
2. Create an ExpressRoute circuit through a provider, and select Oracle Cloud Infrastructure from the list of providers.
3. Record the service key that Azure generates.
4. Sign in to the Oracle Cloud Infrastructure Console.
5. Create a FastConnect connection through a provider, and select Microsoft Azure from the list of providers.
6. Enter the service key that you got from Azure.

The private virtual circuit is provisioned automatically between the two clouds.

Note: You need a separate ExpressRoute or FastConnect circuit to connect your on-premises network to Azure or Oracle Cloud Infrastructure through a private connection, as shown in Figure 2.

Conclusion

Oracle and Microsoft have given customers the flexibility to build and deploy applications in Oracle Cloud Infrastructure and Azure by providing a robust, reliable, low-latency, high-performance path between the two clouds. With this partnership, our joint customers can migrate their entire set of existing applications to the cloud without having to rearchitect anything, preserving the large investments that they have already made and opening the door for new innovation.

To learn more, review the public documentation, read the frequently asked questions, schedule a demo, or request a POC. To order the service, contact your sales team.
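One prerequisite for the interconnect, as noted above, is that the VCN and VNet address spaces must not overlap. That check is easy to script with Python's standard ipaddress module; the CIDR blocks below are illustrative examples, not recommended values.

```python
import ipaddress

# Example address spaces (illustrative only). The interconnect requires
# that these two ranges do not overlap.
vcn = ipaddress.ip_network("10.0.0.0/16")     # Oracle Cloud Infrastructure VCN
vnet = ipaddress.ip_network("172.16.0.0/16")  # Azure VNet

if vcn.overlaps(vnet):
    print("Address spaces overlap: renumber one side before peering.")
else:
    print("No overlap: OK to enable the interconnect.")
```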



Building a Bigger Tent: Cloud Native, Culture, and Complexity

At KubeCon + CloudNativeCon Europe 2019 in Barcelona, Oracle open source projects and cloud services are helping enterprise development teams embrace cloud native culture and open source. With the announcement and open sourcing of Oracle Cloud Infrastructure Service Broker for Kubernetes this week, Oracle continues to expand its commitment to open source and cloud native solutions targeted at helping move enterprise workloads to the cloud. This includes a recent set of Oracle open source solutions that facilitate enterprise cloud migrations, including Helidon, GraalVM, the Fn Project, the MySQL Operator for Kubernetes, and the WebLogic Operator for Kubernetes. In addition, the recently launched Oracle Cloud Developer Image provides a comprehensive development platform on Oracle Cloud Infrastructure that includes Oracle Linux, Oracle Java SE (Java 8, 11, and 12), Terraform, and many SDKs. To help ensure that our customers have what they need to make their move to the cloud as easy as possible, Oracle Cloud Infrastructure customers receive full support for all of this software at no additional cost. Read more here.

Oracle Cloud Infrastructure Service Broker enables customers to access Oracle's Generation 2 cloud infrastructure services and manage their lifecycle natively from within Kubernetes via the Kubernetes APIs. In particular, this gives Oracle Database teams a fast and efficient path to cloud native and Kubernetes by using Oracle Autonomous Database cloud services with the new Service Broker. In this use case, Kubernetes automates the provisioning, configuration, and management of all the application infrastructure, using the Service Broker to connect to services such as Autonomous Data Warehouse and Autonomous Transaction Processing. Thus, database teams can not only move database applications to the cloud but also improve performance, lower cost, and modernize their overall application architecture.
Where We Are At

On the surface, cloud native has never been bigger or better. Three major factors are driving cloud native today: DevOps has changed how we develop and deploy software. Open source has democratized which platforms we use. The cloud has supercharged where we develop and run applications. But the reality is that many enterprise development teams have been left behind, facing cultural change, complexity, and training challenges that survey after survey confirms.

Solution: A Bigger, Better Cloud Native Tent

The cloud native community needs to build a bigger tent: one that is (1) more open, supporting a multi-cloud future; (2) more sustainable, reducing complexity rather than piling more on; and (3) more inclusive of all teams, modern and traditional, startups and enterprises alike. Oracle helps by starting with what enterprises already know and working from there, building bridges and on-ramps to cloud native from a familiar starting point. This strategy focuses on an open, sustainable, and inclusive approach.

Open Source: Enabling Enterprise Developers

Oracle open source projects are directed at moving enterprise workloads to the cloud and to cloud native architectures. The Oracle Cloud Infrastructure Service Broker enables provisioning and binding of Oracle Cloud Infrastructure services with the applications that depend on those services from within the Kubernetes environment. Along with MySQL and VirtualBox, other recent key Oracle open source projects include:

Helidon: Project Helidon is a Kubernetes-friendly, open source Java framework for writing microservices.

GraalVM Enterprise: The recently announced GraalVM Enterprise is a high-performance, polyglot virtual machine that delivers high efficiency, better isolation, and greater agility for enterprises in cloud and hybrid environments.
Fn: The Fn Project is an open source, container-native, serverless platform that runs anywhere, from on-premises to public, private, and hybrid cloud environments. It is also the basis for the Oracle Cloud Infrastructure Functions serverless cloud service.

Grafana Plugin: The Oracle Cloud Infrastructure Data Source for Grafana exposes health, capacity, and performance metrics to customers using Grafana for cloud native observability and management.

WebLogic Operator for Kubernetes: The WebLogic Server Operator for Kubernetes enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management.

OpenJDK: Java is open, and the OpenJDK project is the focal point for that effort. OpenJDK is an open source collaborative effort that now releases on a six-month cadence with a range of new features, many of which are targeted at optimizing Java for cloud native deployments.

In addition, open source community engagement is critical to moving existing projects forward for enterprises and the cloud. Oracle continues to contribute to a large number of third-party open source projects and is a top contributor to many, including Linux.

Sustainable Cloud Services: Managed and Open

Over the last six months, Oracle Cloud Infrastructure has launched and updated a wide range of managed cloud native services that enable enterprises to leapfrog complexity and move to useful productivity. These services include:

Functions: A scalable, multitenant serverless FaaS, based on the Fn Project, that lets users focus on writing code to meet business needs without having to manage infrastructure.

Resource Manager: A managed "Terraform-as-a-Service" (based on the open source Terraform project) that provisions Oracle Cloud Infrastructure resources and services.

Streaming: A managed service that ingests and stores continuous, high-volume data streams and processes them in real time.
Monitoring: Provides fine-grained, out-of-the-box metrics and dashboards for Oracle Cloud Infrastructure resources such as compute instances and block volumes, and also lets users add their own custom application metrics.

Container Engine for Kubernetes (OKE): A managed Kubernetes service, launched in 2017, that leverages standard upstream Kubernetes and is certified CNCF conformant.

Inclusive: Modern + Traditional, On-Premises + Cloud

The open source solutions and cloud native services described above enable enterprise developers to embrace cloud native culture and open source, and make it easier to move enterprise workloads to the cloud. That includes everyone: database application teams, Java developers, WebLogic system engineers, and Go, Python, Ruby, Scala, Kotlin, JavaScript, and Node.js developers, among others. For example, the Oracle Cloud Developer Image provides a comprehensive development platform on Oracle Cloud Infrastructure that includes Oracle Linux, Oracle Java SE support, Terraform, and many SDKs. It not only reduces the time it takes to get started on Oracle's cloud infrastructure but also makes it fast and easy to provision and run Oracle Autonomous Database in a matter of minutes.

While every KubeCon introduces more new projects and exciting advancements, Oracle is expending equal effort on helping existing enterprise development teams embrace cloud native culture and open source. The Oracle Cloud Infrastructure Service Broker for Kubernetes, along with projects like Helidon, GraalVM, the Fn Project, the MySQL Operator for Kubernetes, and the WebLogic Operator for Kubernetes, are just a few of the ways we can all help build a bigger tent to battle the growing enterprise issues of cultural change and rising complexity.


Customer Stories

How Did an IT Services Provider Save Their Bacon by Switching to Oracle Cloud Infrastructure?

In a recent Q&A discussion, Chris Fridley, COO of Ntiva, shared details of how switching to Oracle Cloud Infrastructure dramatically enhanced Ntiva's business and improved customer satisfaction. Ntiva, an IT service provider, is completely responsible for its customers' technology and cloud services. To retain its customers, Ntiva must provide extremely high reliability, performance, and cost containment. Ntiva chose Oracle Cloud Infrastructure to achieve this goal. Continue reading to:

Discover how moving to Oracle Cloud Infrastructure improved Ntiva's quality of service (QoS)

Learn how reduced deployment times and lower resource utilization improved margins

Explore dramatic improvements in client experience, which led to universally positive feedback

What made you look for another cloud provider?

We started looking around for a new cloud provider in mid-2017, after having problems with multiple outages that sometimes lasted an entire business day. This was unacceptable in terms of customer service, and we had many clients threatening to leave Ntiva, and not just our cloud services but the entire IT services contract. We were also supporting a lot of applications that were very intensive in terms of disk I/O, and the performance was very sluggish, which also meant that certain clients were very unhappy. Those two factors were the main motivators for us to start the search for a new cloud provider.

What were you looking for in a cloud provider, and why did you choose Oracle Cloud Infrastructure?

We started looking at the usual suspects, including Microsoft, with whom we have an ongoing relationship, and AWS. The challenge was that it was very hard to understand their pricing, which becomes a business problem for us when we have to figure out how to charge our clients. Our pricing needs to be very clear. When we spoke to our Oracle rep, what grabbed us right away was the very clear business model surrounding pricing and performance.
As a service provider ourselves, we have to be able to provide our clients with a repeatable, predictable pricing structure. What was even better is that this pricing model extends to any Oracle Cloud Infrastructure data center worldwide, making it easy for us to support clients that need to grow outside of the US. But really, the shining moment for Oracle versus the competition was the relationship that our Oracle team built with us right from the beginning. When we had a question, we could reach out and get a same-day response, with a very quick turnaround from the development team when we needed it. Their persistence, dedication, and turnaround time on delivery ultimately led us to choose Oracle.

What was your migration experience, and how did your customers react?

When we started the migration to Oracle Cloud Infrastructure, we had to notify our customers upfront and coordinate logistics. Whenever you do something like this, there is naturally a lot of apprehension from clients: is it going to be better? Is it going to be worse? One hundred percent of the feedback that came in was positive, and much of it was completely unsolicited. They're seeing much better performance now that we're powered by the Oracle Cloud. So, all the other metrics aside (CPU utilization, storage, and so on), the best thing for us is the customer feedback being universally positive.

How did moving to Oracle Cloud Infrastructure improve your quality of service (QoS)?

One of the big benefits that resulted from our move to Oracle Cloud Infrastructure was efficiency. Oracle's technical architecture enabled us to support all of our existing clients with about 20 percent less compute power, including some of our high-end clients who have intensive compute and disk I/O requirements. This means we can load up new customers into the same footprint, and those savings drop right to our bottom line.
We also saw a significant reduction in provisioning and deployment times, which dropped from a few hours to well under 30 minutes. We're saving about 15 percent in labor costs right there, so between the two we are seeing about a 30 percent financial pickup.

Interested in learning more? Register for our webcast to learn more about Ntiva's experience migrating to Oracle Cloud Infrastructure.

Additional information about Ntiva

Ntiva was officially founded in 2004 by CEO Steven Freidkin and has grown almost exclusively through referrals and an unwavering focus on its core values:

Focusing on customer service first

Managing every client dollar as if it were their own

Hiring, developing, and retaining the very best people

Ntiva knows that technology is a crucial part of every business, and having the right technology in place is more than a competitive advantage: it's critical to growth and success.



Oracle Cloud Application Migration: Are You Ready to Soar?

This post was written by Mike Owens, Vice President, Cloud Advisory Practice, and Mary Melgaard, Group Vice President, Cloud Migration Services.

Do you ever feel like you've been left behind in the on-premises world while everyone else has moved to the cloud? You want to move your applications to the cloud, but which of the many paths is the right one for you? Moving Oracle applications such as E-Business Suite, JD Edwards, PeopleSoft, Siebel, and Hyperion to the cloud can introduce a complex landscape of alternatives. It's important to select a partner who has the vision, experience, tools, and commitment to help you create your cloud adoption and migration strategy, and to guide your migration to its successful conclusion. Join Oracle Consulting for our May 7 webinar on cloud application migration, in which we explain our cloud adoption and strategy methodology: Vision, Frame, and Mobilize.

Vision

Establish your vision: Gather and organize key data about applications, infrastructure, and IT costs. Establish your project and transformation governance plan, and conduct a visioning workshop to ensure alignment with stakeholders. What are your business and technology objectives for your Oracle Cloud migration? What are the underlying drivers for your success?

Organize for transformation: Compile and analyze an application inventory subset, build the business case, and identify prototype applications to migrate. What are the common characteristics or unique business requirements of your application portfolio?

Frame

Frame your path forward: Based on the application subset that you identified, determine the appropriate cloud deployment model for your applications, addressing regulatory, legal, and privacy considerations. Given your requirements, will you need multiple cloud vendors, or can one provider fulfill most of your business and workload objectives?
Mobilize

Map your journey: Determine the activities and actions needed to drive the migration, validate the business case, and reconcile any conflicts. Is this a "move and improve" initiative that will create a new way of working by changing the look and feel of your applications? Or is this a technical move that lifts workloads "as-is" from on-premises to the cloud with limited impact on the business users?

Mobilize for the future: Orchestrate the final application modernization, migration, and transformation analysis. Sequence the initiatives, and incorporate in-flight and planned projects. Develop your cloud transformation roadmap. What internal and external resource commitments are required to migrate successfully, and how will you manage the organizational change? Does the subscription include services that reduce your internal staffing requirements, and how will you communicate and adapt to these changes?

Soar

After your strategy is in place, flawless execution is critical. A strategic approach, coupled with a partner that automates cloud migration, can help simplify the process of moving your applications to the cloud. Leveraging the Oracle Soar methodology, you can move Oracle and non-Oracle applications to the cloud rapidly and efficiently with near-zero downtime.

Join Us

Are you ready to choose the path for cloud application migration that is right for you? Are you ready to soar? Join our webinar on May 7 at 9 a.m. PT to:

Identify best practices for your cloud migration strategy

Understand the paths for moving applications to the cloud

Learn from real-life customer examples of moving E-Business Suite, JD Edwards, and third-party applications to Oracle Cloud

Understand how to leverage Oracle Soar to rapidly and efficiently move your applications to Oracle Cloud with near-zero downtime



How MSPs Can Deliver IT-as-a-Service with Better Governance

As a solutions architect, I often support partners who deliver managed IT services to their end customers. Similarly, I work with large enterprises that manage IT for multiple business units. One of the most frequent requests I get is for best practices on aligning Oracle Cloud Infrastructure solutions and Identity and Access Management (IAM) policies with business-specific governance use cases. For enterprise customers, this means having better control over usage costs across multiple business units. For managed service providers (MSPs), it means having better cost governance over the IT environments that they manage for end customers in their Oracle Cloud Infrastructure tenancy.

This post is structured like a case study, in which an example enterprise customer, ACME CORP's Central IT team, faces the following business challenge: how do they give their departmental IT stakeholders, and the operators within those departments, the autonomy to use the Oracle Cloud Infrastructure services they need while still maintaining control over cost and usage? The post shows how this can be accomplished by using nested compartments and budgets. It also demonstrates how to maintain a separation of duties by delegating the management of security lists to the departmental application teams while enabling Central IT network admins to retain control over all networking components. This is particularly applicable to any application team using a CI/CD pipeline for their projects, automating the deployment and updating of subnets and security lists as part of their infrastructure as code (IaC) pipeline.

The Business Challenge

ACME CORP is a large enterprise company with a Central IT team and departmental IT teams that reside within the Finance and HR departments. The Central IT team manages the base cloud infrastructure for the company, and they manage IAM for all cloud services.
Additionally, the Central IT Network Admin and Database (DB) Admin teams manage networking and database systems for the Finance and HR departments. The Finance and HR departmental IT teams each comprise their own applications and database groups. The Finance and HR Applications groups are responsible for application deployment and should be able to manage the relevant infrastructure services for their specific projects. Therefore, Central IT needs to grant the departmental applications groups the ability to manage their own application workloads and the associated virtual infrastructure services, including application load balancers and security lists. However, Central IT wants more centralized control over the company's databases. They plan to grant the HR Database group the ability to read HR databases only, and they will grant the Finance Database group the ability to read and write financial databases. Finally, Central IT needs to grant an additional group, a team of Accountants, the ability to control costs and usage by the Finance and HR teams through the use of budgets.

The Solution

Now let's establish the right identities and access so that the Central IT team and the departmental IT teams have access to their cloud resources.

Create Groups of Users Based on Role

First, we create the following groups, which map to the preceding roles: NetworkAdmin, DatabaseAdmin, FinApplication, HRApplication, FinDBuser, HRDBuser, and Accountants.

Align the Compartment Structure with the Organizational Hierarchy

Next, we implement the necessary compartment structure to separate resources by department. This solution has three levels of nested compartments with associated cloud infrastructure assets, as shown in the following image:

First-Level Compartment

In ACME CORP's tenancy, a root compartment is automatically created for Central IT, in which they can create policies to manage access to resources in all underlying compartments.
Second-Level Compartments

Under the root compartment are the following child compartments. (The following screenshot shows what the compartment nesting looks like in the console.)

Central_IT_Network: For shared networking service elements, including FastConnect, internet gateway, IPSec termination points, and DNS

Finance: Contains the VCN for finance projects and applications

HR: Contains the VCN for HR projects and applications

When you click into these compartments, you can create the VCNs for each one. Because the Central_IT_Network compartment handles VCN transit routing for the company, it's configured to contain the following components: dynamic routing gateways for IPSec, an internet gateway with public IPs, FastConnect, load balancers for public-facing web servers, a DNS database, and local peerings to VCN_ACME_FIN and VCN_ACME_HR.

The Finance compartment houses the department's own private network, named VCN_ACME_FIN, with load balancers for Finance intranet servers, Active Directory domain controllers for Finance, and File Storage, Object Storage, and databases for Finance users.

The HR compartment has its own network, VCN_ACME_HR, along with load balancers for HR intranet servers, and Object Storage and databases for HR users.

Third-Level Compartments

Nested under the Finance compartment are the Project A and Project B compartments. Each project can house resources leveraged by specific applications. Nested under the HR compartment are similar compartments for HR Project A and Project B.

Subnets and Security Lists

For Finance Project A, we created two subnets under VCN_ACME_FIN (in the Finance compartment), but because we want to enable the CI/CD pipeline to automatically update and deploy application-specific security lists, we created the security lists in the Project A compartment. We repeated the same creation and placement for Finance Project B.
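As a sketch, the compartment hierarchy above can be scripted with the OCI CLI; the OCIDs below are placeholders, and the commands are printed rather than executed so they can be reviewed first.

```shell
# Sketch: create the second-level compartments under the tenancy root.
# The OCID is a placeholder; a child compartment is created inside a parent
# by passing the parent's OCID as --compartment-id.
ROOT_OCID="ocid1.tenancy.oc1..example"   # placeholder tenancy (root) OCID

for NAME in Central_IT_Network Finance HR; do
  # Printed for review; drop the echo to run for real.
  echo oci iam compartment create \
    --compartment-id "$ROOT_OCID" \
    --name "$NAME" \
    --description "$NAME compartment for ACME CORP"
done | tee compartment-cmds.txt
```

The third-level compartments (Project A and Project B) would be created the same way, passing the Finance or HR compartment OCID as the parent.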
The following image shows how we created the security list in the Finance Project A compartment. We performed the same type of subnet and security list placement for HR Project A and Project B. Now that we've isolated resources by compartment for department and project usage, we'll create budgets to control spending.

Creating Budgets to Control Spending

Next, we establish permissions for the Accountants group (see the "Policies" section), enabling them to administer budgets and proactively control spending for each department. For example, they can create a budget for the Finance compartment that also covers Finance Projects A and B. The total monthly spending limit could be set at US$1,200, with a threshold set at 50 percent, so that an email notification is sent to the right teams to take action when spending reaches 50 percent of the budget. For more information about creating a budget, see Managing Budgets.

Policies

We created the following policies for each group and compartment.
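The policy statements listed below, and the budget just described, can also be created with the OCI CLI instead of the console. A minimal sketch with placeholder OCIDs; the `budgets` parameter names are assumptions worth verifying against the current CLI reference.

```shell
# Sketch: attach a policy and create the Finance budget via the OCI CLI.
# All OCIDs are placeholders; commands are printed for review (drop the
# echo to execute).
TENANCY_OCID="ocid1.tenancy.oc1..example"       # placeholder root/tenancy OCID
FINANCE_OCID="ocid1.compartment.oc1..example"   # placeholder Finance compartment OCID

{
  # A policy is a named list of statements attached to a compartment.
  echo oci iam policy create \
    --compartment-id "$FINANCE_OCID" \
    --name "finance-db-access" \
    --description "Read/write access to Finance databases" \
    --statements '["ALLOW GROUP FINDBUSER TO USE DATABASE-FAMILY IN COMPARTMENT FINANCE"]'

  # The US$1,200 monthly budget targeting the Finance compartment
  # (covers the nested Project A and Project B compartments as well).
  echo oci budgets budget create \
    --compartment-id "$TENANCY_OCID" \
    --target-type COMPARTMENT \
    --targets "[\"$FINANCE_OCID\"]" \
    --amount 1200 \
    --reset-period MONTHLY
} | tee budget-policy-cmds.txt
```

An alert rule at 50 percent of the budget would then be attached to the created budget to drive the email notification described above.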
Policies applied on the ACME_CORP root compartment

Policy to enable the DatabaseAdmin group to manage databases across all compartments in the tenancy:

ALLOW GROUP DATABASEADMIN TO MANAGE DATABASE-FAMILY IN TENANCY

Policy to enable the NetworkAdmin group belonging to Central IT to have administrative rights over all network resources in the Central_IT_Network compartment:

ALLOW GROUP NETWORKADMIN TO MANAGE VIRTUAL-NETWORK-FAMILY IN COMPARTMENT CENTRAL_IT_NETWORK

Policies applied on the Finance compartment

Policy to enable FinDBuser to have read and write rights on Finance databases:

ALLOW GROUP FINDBUSER TO USE DATABASE-FAMILY IN COMPARTMENT FINANCE

Policies to enable the Finance application teams, including automated CI/CD systems, to manage virtual machines, object storage, and security lists in their respective project compartments:

ALLOW GROUP FINAPPLICATION TO MANAGE OBJECT-FAMILY IN COMPARTMENT FINANCE
ALLOW GROUP FINAPPLICATION TO MANAGE SECURITY-LISTS IN COMPARTMENT PROJECT A
ALLOW GROUP FINAPPLICATION TO MANAGE SECURITY-LISTS IN COMPARTMENT PROJECT B
ALLOW GROUP FINAPPLICATION TO MANAGE INSTANCE-FAMILY IN COMPARTMENT PROJECT A
ALLOW GROUP FINAPPLICATION TO MANAGE INSTANCE-FAMILY IN COMPARTMENT PROJECT B

Policies applied on the HR compartment

Policy to enable HRDBuser to have read-only rights on HR databases:

ALLOW GROUP HRDBUSER TO READ DATABASE-FAMILY IN COMPARTMENT HR

Policies to enable the HR application teams, including automated CI/CD systems, to manage virtual machines, object storage, and security lists in their respective project compartments:

ALLOW GROUP HRAPPLICATION TO MANAGE OBJECT-FAMILY IN COMPARTMENT HR
ALLOW GROUP HRAPPLICATION TO MANAGE SECURITY-LISTS IN COMPARTMENT PROJECT A
ALLOW GROUP HRAPPLICATION TO MANAGE SECURITY-LISTS IN COMPARTMENT PROJECT B
ALLOW GROUP HRAPPLICATION TO MANAGE INSTANCE-FAMILY IN COMPARTMENT PROJECT A
ALLOW GROUP HRAPPLICATION TO MANAGE INSTANCE-FAMILY IN COMPARTMENT PROJECT B

Policy to enable budget management, applied on the ACME_CORP root compartment:

ALLOW GROUP ACCOUNTANTS TO MANAGE USAGE-BUDGETS IN TENANCY

The following image explains the syntax used for policies.

Conclusion

An MSP or central IT organization can deliver services across multiple organizations and departments, even when all are placed within the same tenancy. You can separate resources while still maintaining governance and control by leveraging nested compartments, assigning policies at the appropriate level of the nested compartments, and configuring a budget to limit resource usage, maintaining logical isolation and the requisite access controls.

Resources

Oracle Cloud IaaS documentation
Best Practices for Identity and Access Management Service on Oracle Cloud Infrastructure blog post
Foundational Oracle Cloud Infrastructure IAM Policies for Managed Service Providers blog post



Optimizing Cloud-Based DDoS Mitigation with Telemetry

Today's cybersecurity challenges demand cross-functional solutions to achieve an effective level of capability in the domains of identification, protection, detection, response, and recovery. As a result, enterprise security controls represent significant investments for IT departments. Repeatedly, incidents demonstrate that even well-resourced organizations don't leverage solutions to their fullest potential. As organizations strive to achieve a capability baseline, the concept of control efficacy is increasingly important. Today's IT leaders are asking, "How effectively is a given solution working under real-world conditions, and what can be done to improve it?" During an incident, whether it's an attempted network intrusion, a phishing email, or a DDoS attack, simply having a control in place isn't enough. You must also understand how that control is behaving and what can be done to improve it, backed by hard data.

Internet Intelligence

To that end, Oracle deployed a deep monitoring network of sensors, known as vantage points, to collect publicly available data about internet performance and security events all over the world, with real-time information about degradation, internet routing changes, and network anomalies. Oracle Cloud Infrastructure products, such as Market Performance and IP Troubleshooting, are based on Internet Intelligence. (In the image, red dots indicate network sensors.)

Oracle's Internet Intelligence Map monitors the volatility of the internet as a whole, based on this telemetry. With more organizations relying on third-party providers for their most critical services, monitoring the collective health of the internet can give users an early warning of impending attacks, so organizations can react and prepare before the danger reaches them. Data gathered by Oracle's Edge Network provides valuable insight into border gateway protocol (BGP) route changes and distributed denial-of-service (DDoS) mitigation activations worldwide.
Oracle can monitor 250 million route updates per day, including where DDoS protection is being activated and when attacks are occurring. We measure, in near real time, the quality of any cloud DDoS protection activation by most cloud-based DDoS vendors. This information can be used to measure the effectiveness of protection solutions.

DDoS Attack Example

Monitoring DDoS activation provides visibility into where attacks are happening, the length of the attack, and the effectiveness of route coverage, in near real time. A DDoS attack at the end of summer 2018 provides a timely example of the importance of DDoS mitigation telemetry. The attack, which intermittently affected access to a company's website and services, was easily identified by Oracle's vantage points and visible on the Internet Intelligence Map. In this example, as is typical with cloud-based DDoS protection, a cloud-based DDoS protection provider hijacked the organization's IP blocks so that all the attack traffic was redirected to specialized DDoS scrubbing centers. During the IP block hijack, our security team measured a number of interesting statistics about the route instability, latency jump, and other artifacts caused by the traffic reroute. Observing the propagation duration and the completeness of the route change allows for some unique quality measurements. In this example, the organization's AS announces two prefixes. As the following image shows, despite the announcement change, there are still unprotected routes via an ISP, allowing some of the attack traffic through. The DDoS protection provider intercepted the IP blocks under attack at around 16:00 local time.

DDoS Attack Leakage

Although the organization's DDoS protection provider was efficiently mitigating most of the DDoS attack, our tool shows that a small portion of the attack was still reaching the organization, for approximately one hour. This is called DDoS attack leakage.
Despite the fact that a very powerful cloud-based DDoS provider was mitigating the attack, the internet assets of the organization still went offline during this hour because a portion of the attack was still reaching its data center. Additional visibility from trace routes confirms that not all routes were going through the DDoS protection. Two traces, from London and Frankfurt, confirm that some traffic was still routing through an ISP (instead of going entirely through the cloud-based DDoS mitigation provider).

Note: This doesn't mean that the ISP is knowingly responsible for the DDoS leakage. In fact, quite the contrary. The internet is largely a self-adjusting network, in which traffic is sometimes routed via unexpected core locations. Routing tables need frequent adjustments (made by tier 1 providers and ISPs), and these routing adjustments always take some time to propagate, as was the case here.

Like many of our colleagues in the DDoS industry, the Oracle Security team was able to measure and score the quality of the IP hijack and the consistency of the hijack over time. Our results also show that the attack leakage was indeed captured by a low quality score:

Prefix             Prefix Owner (under attack)   AS of prefix owner   DDoS Provider     AS of provider   Handoff Coverage   Consistent Coverage
xxx.xxx.xxx.0/24   [organization]                xxxxx                [DDoS provider]   yyyyy            64.39              95.53

These statistical sensors are called Handoff Coverage and Consistent Coverage. A Handoff Coverage above 97 percent typically indicates almost perfect coverage; less than 90 percent might indicate DDoS leakage. The Handoff Coverage impact is shown in the following image. As demonstrated in this example, Handoff Coverage and Consistent Coverage are important parameters that analysts and security staff can use to quantify the effectiveness of cloud-based DDoS mitigation solutions.
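The intuition behind these coverage scores can be shown with a toy calculation. The data and method here are invented for illustration and are not Oracle's actual scoring algorithm: roughly, handoff coverage is the share of observed routes that transit the mitigation provider rather than leaking through an unprotected path.

```shell
# Toy illustration only: estimate "handoff coverage" as the percentage of
# vantage points whose observed AS path transits the DDoS provider's ASN
# ("yyyyy" below, matching the table's placeholder) instead of an ISP path.
cat > observed-paths.txt <<'EOF'
vantage1 3356 yyyyy xxxxx
vantage2 1299 yyyyy xxxxx
vantage3 2914 174 xxxxx
vantage4 6453 yyyyy xxxxx
EOF

total=$(wc -l < observed-paths.txt)            # number of vantage points
covered=$(grep -c ' yyyyy ' observed-paths.txt) # paths transiting the provider
coverage=$((100 * covered / total))
echo "handoff coverage: ${coverage}%"           # 3 of 4 paths transit the provider: 75%
```

A real system would weight vantage points and track the score continuously over the mitigation window, which is what the Consistent Coverage figure reflects.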
In particular, these parameters can help detect the residual attack traffic still leaking back to the organization during the initial handoff, which causes downtime.

Conclusion

DDoS attacks continue to plague organizations across all industries. Mitigation is widely available but applied unevenly, and leakage during mitigation can still affect infrastructure, leading to the loss of mission-critical functions. When solutions are deployed, tracking their efficacy is critical to ensure that the organization's security baseline is known and validated. Telemetry gathered from external monitoring, such as Oracle provides for DDoS mitigation, is one example of seeking evidence-based optimization.


Importing VirtualBox Virtual Machines into Oracle Cloud Infrastructure

Large enterprises use a wide variety of operating systems. Oracle Cloud Infrastructure supports a variety of operating systems, both new and old, and enables customers to import root volumes from on-premises environments from multiple sources, such as Oracle VM, VMware, KVM, and now VirtualBox. VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprises. This blog post and its related video, Importing VirtualBox Virtual Machines to Oracle Cloud Infrastructure, review the steps required to create and then import a VirtualBox SUSE Linux 15 virtual machine (VM). This process also works with existing VMs, which makes "move and improve" migrations to Oracle Cloud Infrastructure much easier.

Create a VirtualBox VM

Start by creating a simple VirtualBox VM. See the video for details. Remember, this process also works for any preexisting VM.

Prepare a VirtualBox VM for Import

Prepare the VM to boot in the cloud by verifying that the image is ready for import. Images must meet the following requirements:

Under 300 GiB
Includes a master boot record (MBR)
Configured to use BIOS (not UEFI)
A single disk
A VMDK or QCOW2 file

The example in this post uses a copy of openSUSE Leap installed with just the default settings to show how it works. You must also ensure that the instance has security settings appropriate for the cloud, such as enabling its firewall and allowing only SSH logins that use private key identifiers. For more information about custom image requirements, see the "Custom Image Requirements" section of Bring Your Own Custom Image for Paravirtualized Mode Virtual Machines. As required, also perform the following tasks:

Add a serial console interface to troubleshoot the instance later, if required.
Add the KVM paravirtualized drivers.
Configure the primary NIC for DHCP.

Enable the Serial Console

After you confirm the image settings, enable the serial console so that you can troubleshoot the VM later, if required.
Edit the /etc/default/grub file to update the following values:

- Remove resume= from the kernel parameters; it slows down boot time significantly.
- Replace GRUB_TERMINAL="gfxterm" with GRUB_TERMINAL="console serial" to use the serial console instead of graphics.
- Add GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200" to configure grub's serial connection.
- Replace GRUB_CMDLINE_LINUX="" with GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200" to add the serial console to the Linux kernel boot parameters.

Then regenerate the grub configuration:

    grub2-mkconfig -o /boot/grub2/grub.cfg

To verify, reboot the machine, and then check dmesg for the updated kernel parameters:

    dmesg | grep console=ttyS0

For more information about these requirements, see the "Enabling Serial Console Access" section in Preparing a Custom Linux Image for Import.

Enable Paravirtualized Device Drivers

Next, add paravirtualized device support by building the virtio drivers into the VM's initrd. This works only on machines with a Linux kernel of version 3.4 or later, so first check that the system is running a modern kernel:

    uname -a

Rebuild initrd with the dracut tool, telling it to add the qemu module:

    dracut --logfile /var/log/dracut.log --force --add qemu

Check lsinitrd to verify that the virtio drivers are now present:

    lsinitrd | grep virtio

For more information, see the dracut(8) and lsinitrd(1) manuals.

Enable Dynamic Networking

Next, clear any persistent networking configuration so that the VM doesn't try to keep using the interfaces that were available in VirtualBox. To do that, empty the 70-persistent-net.rules file (but keep the file in place):

    > /etc/udev/rules.d/70-persistent-net.rules

Note: This change is reset when the VM is restarted. If the VM is restarted in VirtualBox before it is imported, you must perform this step again. For more information, see the udev(7) manual and Predictable Network Interface Names.
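If you prepare images often, the grub and udev edits above can be scripted with sed. Below is a minimal sketch, assuming the stock openSUSE defaults shown earlier; prepare_for_import is a hypothetical helper name, and you would still run grub2-mkconfig afterward:

```shell
# Sketch: apply the serial-console and networking edits from this post.
# prepare_for_import is a hypothetical helper; pass it the grub defaults
# file (normally /etc/default/grub) and the persistent net rules file.
prepare_for_import() {
  grub_file="$1"
  net_rules="$2"
  # Drop resume= from the kernel parameters; it slows down boot.
  sed -i 's/resume=[^" ]*//g' "$grub_file"
  # Switch grub from graphics to the serial console.
  sed -i 's/GRUB_TERMINAL="gfxterm"/GRUB_TERMINAL="console serial"/' "$grub_file"
  # Add the serial console to the kernel boot parameters.
  sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"/' "$grub_file"
  # Configure grub's own serial connection.
  echo 'GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"' >> "$grub_file"
  # Empty the persistent net rules but keep the file in place.
  : > "$net_rules"
}
# On the VM you would then run:
#   prepare_for_import /etc/default/grub /etc/udev/rules.d/70-persistent-net.rules
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```

The sed patterns match the default-install values exactly; on a customized image, review /etc/default/grub by hand instead.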
Power Off the VM

With all that done, power off the instance:

    halt -p

Now the VM is fully prepared.

Upload the Image

After you have powered off the VM, copy the VM disk from your desktop to Object Storage. Your desktop needs to have the Oracle Cloud Infrastructure CLI installed and configured; see the CLI Quickstart if you are doing this for the first time. Use the OCI CLI to upload the file: the Object Storage web console supports only files up to 2 GiB, so it's unlikely to work for your image, and even if the image is small, the CLI uploads much faster.

In the Oracle VM VirtualBox Manager window, go to the VM index. Right-click the instance and select Show in Finder. Note the directory path and file name of the VM disk. Open the terminal and upload the file by using the VM's directory path and file name, your Oracle Cloud Infrastructure tenancy's namespace, and the Object Storage bucket name:

    VM_DIR="/Users/j/VirtualBox VMs/openSUSE-Leap-15.0/"
    VM_FILE="opensuse-leap-15.0.vmdk"
    NAMESPACE="intjosephholsten"
    BUCKET_NAME="images"
    cd "${VM_DIR}"
    oci os object put -ns "${NAMESPACE}" -bn "${BUCKET_NAME}" --file "${VM_FILE}"

The upload might take some time, depending on the size of the image and the available bandwidth. For more information about uploading objects, see the CLI instructions for uploading an object to a bucket in Managing Objects.

Import the Image

After the upload is complete, log in to the Oracle Cloud Infrastructure Console to import the image. Go to the details page of the bucket to which you uploaded the image, and look at the details of the image object to find its URL; you will use the URL to import the image. In the navigation menu, select Compute, and then select Custom Images. Click Import Image. If the system was able to use paravirtualized drivers, select paravirtualized mode to get the best performance. After the image import process starts, it takes some time to complete.
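The upload steps above can also be collected into a small wrapper that checks the disk file before touching the network. A sketch, with all names as placeholders from the example; build_upload_cmd is a hypothetical helper that prints the oci command for review instead of running it:

```shell
# Sketch: assemble the upload command after confirming the disk file
# exists. build_upload_cmd is a hypothetical helper; it only prints the
# command so you can review it (and the bucket name) before running.
build_upload_cmd() {
  dir="$1"; file="$2"; ns="$3"; bucket="$4"
  [ -f "$dir/$file" ] || { echo "missing image: $dir/$file" >&2; return 1; }
  printf 'oci os object put -ns %s -bn %s --file %s\n' "$ns" "$bucket" "$file"
}
# Example (run the printed command from inside the VM directory):
#   build_upload_cmd "$VM_DIR" "$VM_FILE" "$NAMESPACE" "$BUCKET_NAME"
```

Printing the command first is a cheap way to catch a wrong namespace or bucket name before a multi-gigabyte upload starts.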
For more information, see Importing Custom Linux-Based Images.

Access the Instance

After the image is imported, you can launch a new instance directly from the image details page. After the instance is running, use SSH to connect to it by using its public IP address and the same login credentials used to access the machine when it was running on VirtualBox.

You have successfully imported a VirtualBox VM to Oracle Cloud Infrastructure! Try it for yourself. If you don't already have an Oracle Cloud account, go to http://cloud.oracle.com/tryit to sign up for a free trial today.
