
Machine Learning with H2O.ai and Oracle ERP

Packaged line-of-business (LOB) applications are an area where Oracle is a market leader. These applications contain an enormous amount of data that has the potential to give amazing insight into core business functions, enabling gains in areas such as:

- Operational efficiency
- Cross-sell / up-sell
- Customer experience

Oracle is investing heavily in moving these LOB applications to our cloud, Oracle Cloud Infrastructure (OCI). In parallel, we're investing in the ecosystem, partnering with key ISVs to enable their workloads on our cloud, which lets our customers use the latest innovations alongside the LOB applications that they depend on.

H2O Driverless AI

H2O.ai is a leader in the AI/ML space. Its platform, Driverless AI (DAI), automates much of the machine learning lifecycle, from data ingestion to data engineering and modeling, on to deployment. It enables both data scientists and relatively inexperienced users to generate sophisticated ML models that can have an enormous impact on their business.

Late last year, we began integration work with H2O.ai. Initial efforts focused on creating Terraform templates to automate the deployment of Driverless AI on Oracle Cloud Infrastructure. Building on that, H2O.ai was the first of our Quick Start templates to go live. Driverless AI can be deployed today on Oracle Cloud Infrastructure by using Terraform modules that our team and the H2O.ai team developed jointly. Those modules are available in the Quick Start on GitHub.

We're currently exploring ways to use this kind of data in Oracle enterprise applications to build tailored ML models. An example architecture might look like this:

On Oracle Cloud Infrastructure, Driverless AI can be deployed on NVIDIA GPU machines. This accelerates the building of models, further reducing the end-to-end lifecycle for machine learning.
Oracle Retail Advanced Inventory Planning

The Oracle Retail Advanced Inventory Planning (AIP) module in Oracle ERP is one potential source of interesting data for ML with H2O.ai. An external merchandising system, forecasting system, and replenishment optimization system are integrated with AIP to provide the inventory/foundation data and the forecasting data that AIP needs to effectively plan the inventory flow across the retailer's supply chain. Because AIP can integrate with any forecasting system, Driverless AI could be used to build a model that accounts for both high-frequency (for example, weekend) and lower-frequency (for example, holiday) seasonalities. Driverless AI ships with a time-series recipe based on causal splits (moving windows), lag features, interactions thereof, and the automatic detection of time grouping columns (such as Store and Dept for a dataset with weekly sales for each store and department).

Oracle Retail Merchandising System

The Oracle Retail Merchandising System (RMS) module in Oracle ERP is another fascinating touchpoint. This module includes the following information:

- Expenses: The direct and indirect costs incurred in moving a purchased item from the supplier's warehouse/factory to the purchase order receiving location.
- Inventory Transfers: An organized framework for monitoring the movement of stock.
- Return to Vendor (RTV): Transactions that are used to send merchandise back to a vendor.
- Inventory Adjustments: Increases or decreases to inventory that account for events occurring outside the normal course of business (for example, receipts, sales, and stock counts).
- Purchase Order Receipts (Shipments): Records of the increment to on-hand inventory when goods are received from a supplier.
- Stock Counts: Inventory counted in the store and compared against the system inventory level for discrepancies.

RMS contains a rich dataset that could be used to build models in Driverless AI for anomaly detection around RTV, inventory adjustment, and other events.
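As a rough illustration of this kind of anomaly detection (a hand-rolled z-score baseline, not Driverless AI; the adjustment values and threshold are hypothetical), unusually large inventory adjustments can be flagged against an item's historical distribution:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return indices of values more than `threshold` standard
    deviations from the mean -- e.g., suspicious inventory adjustments."""
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) > threshold * sigma]

# Hypothetical daily inventory adjustments for one SKU; the -250 stands out.
adjustments = [-2, 1, 0, -1, 2, -3, 1, 0, -250, 2, -1]
suspects = flag_anomalies(adjustments)
```

A production model would of course learn per-item, per-store baselines and seasonal effects, which is exactly the kind of feature engineering Driverless AI automates.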
Oracle Retail Price Management

The Oracle Retail Price Management module in Oracle ERP includes the following information:

- Item ID: The ID that is assigned when the price event is created at the transaction item level.
- Cost Change Date: The effective date of the past or future cost change.
- Retail Change Date: The effective date of the past or future retail change.
- Cost: The cost on the effective date of the cost or retail change.
- Retail: The regular selling retail on the effective date of the cost or retail change.
- Markup %: The markup percent on the effective date of the cost or retail change. The markup percent is calculated using the calculation method specified by your system options.

With Driverless AI, we could use past cost changes to train a regression model. That model could suggest future pricing, automatically incorporating both seasonality and product lifecycle. Also, by combining Retail Price Management data with marketing, clickstream, or other end-customer data, a regression model could be built to predict the benefit of pricing changes while accounting for other variables that affect sales.

Oracle Retail Trade Management

The Oracle Retail Trade Management module in Oracle ERP includes the following information:

- Landed Cost: The total cost of an item received from a vendor, inclusive of the supplier cost and all costs associated with moving the item from the supplier's warehouse or factory to the purchase order receiving location.
- Expenses: The direct and indirect costs incurred in moving a purchased item from the supplier's warehouse/factory to the purchase order receiving location.
- Country Level Expenses: The costs of bringing merchandise from the origin country, through the lading port, to the import country's discharge port.
- Zone Level Expenses: The costs of bringing merchandise from the import country's discharge port to the purchase order receiving location.
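As a toy illustration of the regression idea (a minimal least-squares sketch, not Driverless AI itself; the cost history below is hypothetical), past cost changes can be fit to a line and extrapolated to suggest a future cost:

```python
def fit_line(points):
    """Ordinary least-squares fit y = a + b*x over (x, y) pairs,
    e.g., x = weeks since launch, y = item cost."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = sum((x - mx) * (y - my) for x, y in points) / \
        sum((x - mx) ** 2 for x, _ in points)
    a = my - b * mx
    return a, b

# Hypothetical cost history: cost drifts down by 0.5 per week.
history = [(0, 10.0), (1, 9.5), (2, 9.0), (3, 8.5)]
a, b = fit_line(history)
predicted_week_4 = a + b * 4  # suggested cost for the next week
```

A real model would add seasonality and lifecycle features rather than assume a straight line; the point is only that pricing suggestion reduces to regression over historical cost changes.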
- Assessments: The cost components that represent the total tax, fee, and duty charges for an item.
- Transportation: The facility to track information from trading partners as merchandise is transported from the manufacturer through customs clearance in the importing country.
- Actual Landed Costs: The actual landed cost incurred when buying an import item.

With Retail Trade Management data tracking costs and delays in items being received at their final stocking location, Driverless AI could be used to build a risk model to estimate the impact of changing import or transportation routes.

Oracle Retail Invoice Matching

The Oracle Retail Invoice Matching module in Oracle ERP includes the following information:

- Invoice Matching Results for Shipments: Shipment records are updated by the invoice matching process, which attempts to match all invoices in ready-to-match, unresolved, or multi-unresolved status.
- Receiver Cost Adjustments: Updates to the purchase order, shipment, and potentially the item cost in RMS, depending on the reason code action used.
- Receiver Unit Adjustments: Invoice matching discrepancies that are resolved through a receiver unit adjustment.

By joining the information in Retail Invoice Matching with data in other modules, we can build a risk model in Driverless AI for suppliers to predict the probability of invoicing issues for future orders.

Next Steps

This post gives a high-level view of how an open Oracle ecosystem enables our customers to use the latest technologies from our partner ecosystem with the LOB applications that they've relied on for decades to run their business. We're actively working with several customers to prove this out in their environments. In addition, my team is working to create a more detailed demo of the integration described here. We look forward to presenting that in more detail, both on this blog and at several upcoming meetups that Oracle and H2O.ai are jointly organizing.
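Returning to the invoice-matching idea: a first cut at such a supplier risk score (a hand-rolled baseline, not Driverless AI; the supplier names and counts are hypothetical) could estimate the probability of an invoicing issue from each supplier's history, with Laplace smoothing so that suppliers with little history aren't scored 0 or 1:

```python
def issue_probability(matched, discrepant):
    """Smoothed estimate of P(invoicing issue) for a supplier:
    (discrepant + 1) / (total + 2), the Laplace 'rule of succession'."""
    return (discrepant + 1) / (matched + discrepant + 2)

# Hypothetical suppliers: (cleanly matched invoices, discrepant invoices).
suppliers = {"ACME": (98, 2), "Globex": (3, 3)}
scores = {name: issue_probability(m, d) for name, (m, d) in suppliers.items()}
```

Joining in features from other modules (order size, route, landed cost) is what would turn this frequency baseline into a real predictive model.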
If you have questions, please reach out to Ben.Lackey@Oracle.com or Peter.Solimo@H2O.ai. We'd love to work with you and see what ML can do with your data!


HPC Investments Continue into ISC 2019

Today, 95 percent of all traditional high-performance computing (HPC) is still done in on-premises deployments. The sporadic demand cycles and rapid evolution of specialized HPC technologies make flexible cloud deployment a great fit for enterprise usage. But other cloud providers simply haven't been able to penetrate this market for a range of reasons, from performance to cost to a lack of key features, such as remote direct memory access (RDMA) capability.

Over the last 12 months, we have invested significantly, in both technology and partnerships, to make Oracle Cloud Infrastructure the best place to run your Big Compute and HPC workloads. At OpenWorld 2018, Larry announced clustered networking, which lets customers run their Message Passing Interface (MPI) workloads with performance comparable to, and in some cases better than, on-premises HPC clusters. This was the first, and is still the only, bare metal HPC offering with 100G RDMA in a public cloud.* It's in limited availability today, and we expect it to be generally available later in the year.

Even further out, we're working on a truly flexible and scalable architecture in which you can have bare metal GPUs, HPC instances, and even Exadata on a clustered network. This opens up use cases such as running a distributed training job on a cluster of GPUs that pull data from an Exadata system, and then deploying the model on a set of compute nodes, all over the clustered network.

We pushed the boundaries on this new offering with the ability to scale up to 20,000 cores for a single job, far beyond what any other cloud can offer today for MPI workloads while maintaining efficiency and performance. For benchmarks that compare us and other providers, see the blog post that we published this week. Here's a peek:

We also partnered with Altair last year at OpenWorld to launch their HyperWorks CFD unlimited offering on our bare metal NVIDIA GPU shapes.
We recently started working with them on their crash application, Altair Radioss, and on how clustered networking can help reduce the time and cost of these crash simulation jobs. For details, including benchmarks, read the blog post.

This week we're in Frankfurt at ISC 2019, along with our partners, showcasing some of these capabilities. You can talk to our engineering teams and try out some of the technologies at our booth, located at H-730. Some other things you'll want to catch during the week:

- Vendor Showdown on Monday, June 17, at Panorama 2, starting at 1:15 p.m.
- Exhibitor Forum Session on Tuesday, June 18, at Booth N-210, starting at 11:20 a.m.
- Blog: Accelerating DEM Simulations with Rocky on Oracle Cloud and NVIDIA
- Blog: Making Cars Safer with Oracle Cloud and Altair Radioss
- Blog: Large Clusters, Lowest Latency: Clustered Networking on Oracle Cloud Infrastructure
- Hands-on demos and labs at our booth, H-730

Looking forward to seeing you there!

Karan

* Based on comparison to AWS, Azure, and Google Cloud Platform as of June 3, 2019.


Accelerating DEM Simulations with Rocky on Oracle Cloud and NVIDIA

It’s a sunny afternoon, you’re mowing your lawn, and the grass buildup in your mower disrupts your smooth progress. This disruption could have been avoided if the design process for your mower had involved airflow modeling with particles. Similarly, you’d hope that the discrete element method (DEM) was used to simulate the flow of beans through the coffee machines that you trust to brew your coffee.

DEM simulation packages like Rocky DEM from ESSS include particles and can be coupled with computational fluid dynamics or the finite element method to improve results. However, they add a layer of complexity that increases simulation time. To speed up the simulation, Rocky DEM provides the option to parallelize across a high number of CPUs, or to gain even more speed by unleashing multiple NVIDIA GPUs in Oracle Cloud Infrastructure. No special setup or driver is needed to run Rocky on Oracle Cloud Infrastructure. Import your model, choose the number of CPUs or NVIDIA GPUs, and start working.

Using Oracle Cloud Infrastructure removes the wait time for resources in your on-premises cluster. It also avoids having people battle for high-end GPUs at peak times and then having those GPUs sit idle for the rest of the week.

“Oracle Cloud Infrastructure and Rocky DEM have collaborated to provide a scalable experience to customers with performance similar to on-premises clusters. The bare metal NVIDIA GPU servers, without hypervisor overhead, further help to tackle very large problems in a reasonable amount of time,” said Marcus Reis, Vice President of ESSS.

Depending on the simulation, Oracle Cloud Infrastructure provides different machine shapes to stay cost-effective without compromising on compute power. The following table shows the machine shapes suited for Rocky. Explore all the different storage options, remote direct memory access (RDMA) capabilities, and the composition of NVIDIA GPUs on our Compute service page.
Shape            CPUs  GPUs
VM.Standard2.4    4    -
BM.Standard2.52   52   -
BM.HPC2.36        36   -
VM.GPU2.1         12   1 x P100
VM.GPU3.1         6    1 x V100
BM.GPU2.2         28   2 x P100
BM.GPU3.8         52   8 x V100

“NVIDIA and Oracle Cloud Infrastructure are collaborating to help customers reduce their computation time from days to hours by providing GPUs for HPC applications. The Tesla P100 and advanced V100 GPUs increase customer productivity while reducing cost,” said Paresh Kharya, Director of Product Marketing, Accelerated Compute, NVIDIA.

The following chart shows that NVIDIA GPUs offer up to 6X better price-performance than CPU-based instances for this simulation. It also shows faster results at a similar price when switching from P100 to V100 GPUs or increasing the core count of CPUs.

Oracle Cloud Infrastructure provides bare metal instances with up to 8 Tesla V100 GPUs, and it’s making a difference. Companies can start thinking about the engineering and design of their next product rather than worrying about simulation runtimes or compute resource availability. Get started on Oracle Cloud Infrastructure, and run your Rocky DEM workloads today!
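For reference, the price-performance comparison in the chart above boils down to a simple computation: price-performance is the inverse of cost per simulation, i.e., 1 / (runtime x hourly price). A small sketch (the runtimes and hourly prices below are hypothetical placeholders, not Oracle's published numbers):

```python
def price_performance(runtime_hours, price_per_hour):
    """Simulations per dollar: higher is better."""
    return 1.0 / (runtime_hours * price_per_hour)

# Hypothetical inputs: a GPU shape finishing 6x faster at 2x the hourly
# price ends up with 3x the price-performance of the CPU shape.
cpu = price_performance(runtime_hours=12.0, price_per_hour=1.0)
gpu = price_performance(runtime_hours=2.0, price_per_hour=2.0)
ratio = gpu / cpu
```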



Large Clusters, Lowest Latency: Cluster Networking on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure has expanded cluster networking by enabling remote direct memory access (RDMA)-connected clusters of up to 20,000 cores on our BM.HPC2.36 instance type. Our groundbreaking backend network fabric lets you use Mellanox’s ConnectX-5 100-Gbps network interface cards with RDMA over Converged Ethernet (RoCE) v2 to create clusters with the same low-latency networking and application scalability that you expect on premises.

Oracle Cloud Infrastructure is leading the cloud high performance computing (HPC) battle in performance and price. Over the last few months, we have set new cloud standards for internode latency, cloud HPC benchmarks, and application performance. Oracle Cloud Infrastructure's bare metal infrastructure gives you on-premises performance in the cloud. In addition to connecting bare metal nodes through RDMA, cluster networking provides a fabric that will enable future instances and products to communicate at extremely low latencies.

Performance

Ultra-low node-to-node latency is expected on HPC systems, and partners like Exabyte.io have demonstrated Oracle Cloud Infrastructure's leading edge on those metrics. But when you have applications running on thousands of cores, low node-to-node latency isn’t enough; the ability to scale models down to a very small size is more important. In computational fluid dynamics (CFD), users typically want to know the smallest amount of work they can do on a node before they hit a network bottleneck that limits the scalability of their cluster. This is the network efficiency of an HPC cluster or, in other words, getting the most “bang for your buck”! The following chart shows the performance of Oracle’s cluster networking fabric. We scale above 100% efficiency below 10,000 simulation cells per core with popular CFD codes, the same performance that you would see on premises.
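The scaling efficiency plotted in charts like this is conventionally computed as speedup divided by the node (or core) multiple. A sketch under that common definition (the timing numbers below are hypothetical):

```python
def scaling_efficiency(base_nodes, base_time, n_nodes, n_time):
    """Parallel efficiency relative to a baseline run: speedup / node
    multiple. 1.0 == perfect linear scaling; values above 1.0 indicate
    superlinear scaling (e.g., the working set fitting in cache)."""
    speedup = base_time / n_time
    return speedup / (n_nodes / base_nodes)

# Hypothetical CFD runs: 1 node takes 1000 s; 8 nodes take 120 s.
eff = scaling_efficiency(base_nodes=1, base_time=1000.0, n_nodes=8, n_time=120.0)
```

An efficiency above 1.0, as in this example, is exactly the "above 100%" behavior described for small cell-per-core counts.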
It’s also important to note that without the penalty of virtualization, bare metal HPC machines can use all the cores on the node without having to reserve any cores for costly overhead.

The ability for a simulation model to scale this way highlights two important design features. The first is the stability of the underlying network fabric, which can transfer data quickly and consistently. The second is that there is no additional traffic or overhead on the network to limit throughput or latency. You can see this stability in the following chart, which compares on-premises HPC network efficiency to cloud HPC network efficiency.

CFD is not the only type of simulation to benefit from Oracle’s cluster networking. Crash simulations, like those run on Altair’s Radioss or LS-DYNA from LSTC, and financial matching simulations, like those offered by BJSS, also use cluster networking.

Price

Oracle Cloud Infrastructure offers the best performance by default. You don’t pay extra for block storage performance, RDMA capability, or network bandwidth, and the first 10 TB of egress is free. Cluster networking follows the same paradigm: there is no additional charge for it.

Availability

Today, cluster networking is available in the regions that have our HPC instances: Ashburn, London, Frankfurt, and Tokyo. It will continue to spread throughout all of our regions as cluster networking-enabled instances roll out. To deploy your HPC cluster using cluster networking, reach out to your Oracle rep or contact us directly. Also, visit us at the ISC High Performance conference in Frankfurt, June 16–20. We’re in booth H-730. Hope to see you there.



Overview of the Interconnect Between Oracle and Microsoft

Today we announced Oracle and Microsoft Interconnect Clouds to Accelerate Enterprise Cloud Adoption, a cloud interoperability partnership between Microsoft and Oracle. This cross-cloud interconnect enables customers to migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud Infrastructure. Enterprises can now seamlessly connect Azure services, like Analytics and AI, to Oracle Cloud services, like Autonomous Database. By enabling customers to run one part of a workload within Azure and another part of the same workload within Oracle Cloud, this partnership delivers a highly optimized, best-of-both-clouds experience. Taken together, Azure and Oracle Cloud Infrastructure offer customers a one-stop shop for all the cloud services and applications that they need to run their entire business.

Connecting Azure and Oracle Cloud Infrastructure through network and identity interoperability makes move-and-improve migrations seamless. This partnership delivers direct, fast, and highly reliable network connectivity between the two clouds, while continuing to provide the first-class customer service and support that enterprises have come to expect from the two companies. In addition to providing interoperability for customers running Oracle software on Oracle Cloud Infrastructure and Microsoft software on Azure, it enables new and innovative scenarios like running Oracle E-Business Suite or Oracle JD Edwards EnterpriseOne on Azure against an Oracle Autonomous Database running on Exadata infrastructure in the Oracle Cloud.

We envision the following common use cases for multicloud deployments:

- Applications run in separate clouds with consistent controls and data sharing: In this approach, customers can deploy applications fully in one cloud or the other, and they benefit from common identity management, single sign-on, and the ability to share data among clouds for analytics and other secondary processes.
- Applications span clouds, typically with the database layer in one cloud and the app and web tiers in another: A low-latency connection between the clouds lets customers choose preferred components for each application, allowing a single consistent application with separate parts running in either cloud, optimized for each technology stack.

Cross-Cloud Interconnect

As enterprises continue to evaluate the benefits of cloud, they are steadily adopting a multicloud strategy for various reasons, including disaster recovery, high availability, lower cost, and, most importantly, using the best services and solutions available in the market. To enable this diversification, customers interconnect cloud networks by using the internet, IPSec VPNs, or a cloud provider’s direct connectivity solution through the customer’s on-premises network. Interconnecting cloud networks can require significant investments in time, money, design, procurement, installation, testing, and operations, and it still doesn't guarantee a highly available, redundant, low-latency connection.

Oracle and Microsoft recognize these customer challenges and have created a unified enterprise cloud for our mutual customers, doing all the tedious, time-consuming work for you by providing low-latency, high-throughput connectivity between the two clouds. The rest of this post describes how to configure the network interconnection between Oracle Cloud Infrastructure and Microsoft Azure to create a secure, private, peered network between the two clouds.

Solution

Oracle and Microsoft have built a dedicated, high-throughput, low-latency, private network connection between Azure and Oracle Cloud Infrastructure data centers in the Ashburn, Virginia region that provides a data conduit between the two clouds.
Customers can use the connection to securely transfer data at a high enough rate for offline handoffs and to support the performance required for primary applications that span the two clouds. Customers can access the connection by using either Oracle FastConnect or Microsoft ExpressRoute, as shown in Figure 1, and they don’t need to deal with configuration details or third-party carriers.

Figure 1. Connectivity Between Oracle Cloud Infrastructure and Azure

FastConnect and ExpressRoute together create a path for workloads on both clouds to communicate directly and efficiently, which gives customers flexibility in how they develop and deploy services and solutions across Oracle Cloud Infrastructure and Microsoft Azure. Customers experience the following benefits when they interconnect the Oracle and Microsoft clouds:

- Secure private connection between the two clouds, with no exposure to the internet.
- High availability and reliability, with built-in redundant 10-Gbps physical connections between the clouds.
- High performance and low, predictable latency compared to the internet or routing through an on-premises network.
- Straightforward, one-time setup.
- No intermediate service provider required to enable the connection.

Connecting Your On-Premises Network to the Interconnect

In Figure 2, the customer’s on-premises network is directly connected to Oracle Cloud Infrastructure through FastConnect and to Azure through ExpressRoute, and there’s a direct interconnection between the two clouds. In this scenario, users located in the on-premises network can access applications (web tier and app tier) directly within Azure through ExpressRoute. The applications then access the data tier located in Oracle Cloud Infrastructure.

Figure 2. Traffic Flow Between Oracle Cloud Infrastructure, Azure, and Non-Cloud Networks

Workloads can access either cloud through the interconnection.
However, traffic from networks other than Oracle Cloud Infrastructure and Azure can’t reach one cloud through the other cloud, which ensures security isolation. In other words, this cross-cloud connection doesn’t enable traffic from your on-premises network through the Azure virtual network (VNet) to the Oracle Cloud Infrastructure virtual cloud network (VCN), or from your on-premises network through the VCN to the VNet. For example, customers can’t reach Oracle Cloud Infrastructure through Azure (see Figure 3). If you need to reach Oracle Cloud Infrastructure, deploy FastConnect directly from your on-premises network.

Figure 3. No Access from One Cloud Through the Other

Connecting the Cloud Networks

This section describes how to connect an Oracle Cloud Infrastructure VCN to an Azure VNet. Figure 4 shows the components of this connection, and the following table maps the terminology between the two clouds.

Figure 4. Interconnect Routing and Security

Component        Azure                            Oracle Cloud Infrastructure
Virtual network  Virtual network (VNet)           Virtual cloud network (VCN)
Virtual circuit  ExpressRoute circuit             FastConnect private virtual circuit
Gateway          Virtual network gateway          Dynamic routing gateway (DRG)
Routing          Route tables                     Route tables
Security rules   Network security groups (NSGs)   Security lists

Prerequisites

To deploy a cross-cloud solution between Oracle Cloud Infrastructure and Azure, you must have the following:

- An Azure VNet with subnets and a virtual network gateway. For information about how to set up the environment, see Azure Virtual Network.
- An Oracle Cloud Infrastructure VCN with subnets and an attached DRG. For information about how to set up the environment, see Overview of Networking.
- No overlapping IP addresses between your VCN and VNet.

Enable the Connection

The direct interconnection must be enabled from each provider's console. Following are the high-level steps. For details, see the cross-connect documentation.
1. Sign in to the Azure portal.
2. Create an ExpressRoute circuit through a provider, selecting Oracle Cloud Infrastructure from the list of providers.
3. Record the service key that Azure generates.
4. Sign in to the Oracle Cloud Infrastructure Console.
5. Create a FastConnect connection through a provider, selecting Microsoft Azure from the list of providers.
6. Enter the service key that you got from Azure.

The private virtual circuit is provisioned automatically between the two clouds.

Note: You need a separate ExpressRoute or FastConnect circuit to connect your on-premises network to Azure or Oracle Cloud Infrastructure through a private connection, as shown in Figure 2.

Conclusion

Oracle and Microsoft have given customers the flexibility to build and deploy applications in Oracle Cloud Infrastructure and Azure by providing a robust, reliable, low-latency, high-performance path between the two clouds. With this partnership, our joint customers can migrate their entire set of existing applications to the cloud without having to rearchitect anything, preserving the large investments that they have already made and opening the door for new innovation.

To learn more, review the public documentation, read the frequently asked questions, schedule a demo, or request a POC. To order the service, contact your sales team.
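One prerequisite from the setup above worth automating is the requirement that the VCN and VNet address spaces not overlap. Python's standard ipaddress module makes this a quick up-front check (the CIDR blocks below are hypothetical examples):

```python
import ipaddress

def cidrs_overlap(vcn_cidr, vnet_cidr):
    """Return True if the OCI VCN and Azure VNet address spaces overlap."""
    return ipaddress.ip_network(vcn_cidr).overlaps(ipaddress.ip_network(vnet_cidr))

ok_pair = cidrs_overlap("10.0.0.0/16", "10.1.0.0/16")    # disjoint: safe to peer
bad_pair = cidrs_overlap("10.0.0.0/16", "10.0.128.0/20") # contained: must renumber
```

Running this against every subnet pair before provisioning the circuits avoids a failed peering late in the setup.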



Building a Bigger Tent: Cloud Native, Culture, and Complexity

At KubeCon + CloudNativeCon Europe 2019 in Barcelona, Oracle open source projects and cloud services are helping enterprise development teams embrace cloud native culture and open source. With the announcement and open sourcing of Oracle Cloud Infrastructure Service Broker for Kubernetes this week, Oracle continues to expand its commitment to open source and cloud native solutions targeted at helping move enterprise workloads to the cloud. This includes a recent set of Oracle open source solutions that facilitate enterprise cloud migrations, including Helidon, GraalVM, the Fn Project, the MySQL Operator for Kubernetes, and the WebLogic Operator for Kubernetes.

In addition, the recently launched Oracle Cloud Developer Image provides a comprehensive development platform on Oracle Cloud Infrastructure that includes Oracle Linux, Oracle Java SE (Java 8, 11, and 12), Terraform, and many SDKs. To help ensure that our customers have what they need to make their move to the cloud as easy as possible, Oracle Cloud Infrastructure customers receive full support for all of this software at no additional cost. Read more here.

Oracle Cloud Infrastructure Service Broker enables customers to access Oracle's Generation 2 cloud infrastructure services and manage their lifecycle natively from within Kubernetes via the Kubernetes APIs. In particular, this gives Oracle Database teams a fast and efficient path to cloud native and Kubernetes by using Oracle Autonomous Database cloud services with the new Service Broker and Kubernetes. In this use case, Kubernetes automates the provisioning, configuration, and management of all the application infrastructure, using the Service Broker to connect to services such as Autonomous Data Warehouse and Autonomous Transaction Processing. Thus, database teams can not only move database applications to the cloud but at the same time improve performance, lower cost, and modernize their overall application architecture.
Where We Are At

On the surface, cloud native has never been bigger or better. Three major factors are driving cloud native today: DevOps has changed how we develop and deploy software, open source has democratized which platforms we use, and the cloud has supercharged where we develop and run applications. But the reality is that many enterprise development teams have been left behind, facing a variety of cultural change, complexity, and training challenges that survey after survey confirms.

Solution: A Bigger, Better Cloud Native Tent

The cloud native community needs to build a bigger tent: one that is (1) more open and supports a multicloud future, (2) more sustainable, reducing complexity rather than piling more on, and (3) more inclusive of all teams, modern and traditional, startups and enterprises alike. Oracle helps by starting with what enterprises already know and working from there, building bridges and on-ramps to cloud native from a familiar starting point. This strategy focuses on an open, sustainable, and inclusive approach.

Open Source: Enabling Enterprise Developers

Oracle open source projects are being directed at moving enterprise workloads to the cloud and cloud native architectures. The Oracle Cloud Infrastructure Service Broker enables provisioning and binding of Oracle Cloud Infrastructure services with the applications that depend on those services from within the Kubernetes environment. Along with MySQL and VirtualBox, other recent key Oracle open source projects include:

- Helidon: Project Helidon is a Kubernetes-friendly, open source Java framework for writing microservices.
- GraalVM Enterprise: The recently announced GraalVM Enterprise is a high-performance, multi-language virtual machine that delivers high efficiency, better isolation, and greater agility for enterprises in cloud and hybrid environments.
- Fn: The Fn Project is an open source, container-native, serverless platform that runs anywhere, from on-premises to public, private, and hybrid cloud environments. It is also the basis for the Oracle Cloud Infrastructure Functions serverless cloud service.
- Grafana Plugin: The Oracle Cloud Infrastructure Data Source for Grafana exposes health, capacity, and performance metrics for customers using Grafana for cloud native observability and management.
- WebLogic Operator for Kubernetes: The WebLogic Server Operator for Kubernetes enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management.
- OpenJDK: Java is open, and the OpenJDK project is the focal point for that effort. OpenJDK is an open source collaborative effort that now releases on a six-month cadence with a range of new features, many of which are targeted at optimizing Java for cloud native deployments.

In addition, open source community engagement is critical to moving existing projects forward for enterprises and cloud. Oracle continues to contribute to a large number of third-party open source projects and is a top contributor to many, including Linux.

Sustainable Cloud Services: Managed and Open

Over the last six months, Oracle Cloud Infrastructure has launched and updated a wide range of managed cloud native services that enable enterprises to leapfrog complexity and move to useful productivity. These services include:

- Functions: A scalable, multitenant serverless FaaS, based on the Fn Project, that lets users focus on writing code to meet business needs without having to know about any infrastructure concepts.
- Resource Manager: A managed "Terraform as a service" (based on the open source Terraform project) that provisions OCI resources and services.
- Streaming: A managed service that ingests and stores continuous, high-volume data streams and processes them in real time.
Monitoring: Provides fine-grained, out-of-the-box metrics and dashboards for Oracle Cloud Infrastructure resources such as compute instances and block volumes, and also allows users to add their own custom application metrics.

Container Engine for Kubernetes (OKE): A managed Kubernetes service, launched in 2017, that leverages standard upstream Kubernetes and is certified CNCF conformant.

Inclusive: Modern + Traditional, On-Premises + Cloud

The open source solutions and cloud native services described above enable enterprise developers to embrace cloud native culture and open source, and make it easier to move enterprise workloads to the cloud. That includes everyone: database application teams, Java developers, WebLogic system engineers, and Go, Python, Ruby, Scala, Kotlin, JavaScript, and Node.js developers, among others. For example, the Oracle Cloud Developer Image provides a comprehensive development platform on Oracle Cloud Infrastructure that includes Oracle Linux, Oracle Java SE support, Terraform, and many SDKs. It not only reduces the time it takes to get started on Oracle’s cloud infrastructure but also makes it fast and easy to provision and run Oracle Autonomous Database in a matter of minutes.

While every KubeCon introduces more new projects and exciting advancements, Oracle is expending an equal effort on helping existing enterprise development teams embrace cloud native culture and open source. The Oracle Cloud Infrastructure Service Broker for Kubernetes, along with projects like Helidon, GraalVM, the Fn Project, the MySQL Operator for Kubernetes, and the WebLogic Operator for Kubernetes, are just a few of the ways we can all help build a bigger tent to battle the growing enterprise issues of cultural change and rising complexity.

At KubeCon + CloudNativeCon Europe 2019 in Barcelona, Oracle open source projects and cloud services are helping enterprise development teams embrace cloud native culture and open source.

Customer Stories

How Did an IT Services Provider Save Their Bacon by Switching to Oracle Cloud Infrastructure?

In a recent Q&A discussion, Chris Fridley, COO of Ntiva, shared details of how switching to Oracle Cloud Infrastructure dramatically enhanced Ntiva’s business and improved customer satisfaction. Ntiva, an IT service provider, is completely responsible for its customers’ technology and cloud services. To retain its customers, Ntiva must provide extremely high reliability, performance, and cost containment. Ntiva chose Oracle Cloud Infrastructure to achieve this goal. Continue reading to:

Discover how moving to Oracle Cloud Infrastructure improved Ntiva’s quality of service (QoS)

Learn how reduced deployment times and lower resource utilization improved margins

Explore dramatic improvements in client experience, which led to universally positive feedback

What made you look for another cloud provider?

We started looking around for a new cloud provider in mid-2017, after having problems with multiple outages that sometimes lasted an entire business day. This was unacceptable in terms of customer service, and we had many clients threatening to leave Ntiva—not just our cloud services but the entire IT services contract. We were also supporting a lot of applications that were very intensive in terms of disk I/O, and the performance was very sluggish. This also meant that certain clients were very unhappy. Those two factors were the main motivators for us to start the search for a new cloud provider.

What were you looking for in a cloud provider, and why did you choose Oracle Cloud Infrastructure?

We started looking around at the usual suspects, including Microsoft, with whom we have an ongoing relationship, and AWS. The challenge was that it was very hard to understand their pricing, which becomes a business problem for us when we have to figure out how to charge our clients. Our pricing needs to be very clear. When we spoke to our Oracle rep, what grabbed us right away was the very clear business model surrounding pricing and performance.
As a service provider ourselves, we have to be able to provide our clients with a repeatable, predictable pricing structure. What was even better is that this pricing model extends to any Oracle Cloud Infrastructure data center worldwide, making it easy for us to support clients that need to grow outside of the US. But really, the shining moment for Oracle versus the competition was the relationship that our Oracle team built with us right from the beginning. When we had a question, we could reach out and get a same-day response, with a very quick turnaround from the development team when we needed it. Their persistence, dedication, and turnaround time on delivery ultimately led us to choose Oracle.

What was your migration experience, and how did your customers react?

When we started the migration to Oracle Cloud Infrastructure, we had to notify our customers upfront and coordinate logistics. And whenever you do something like this, there is naturally a lot of apprehension from our clients… is it going to be better? Is it going to be worse? 100 percent of the feedback that came in was positive, and much of it was completely unsolicited. They’re seeing much better performance now that we’re powered by the Oracle Cloud. So all the other metrics aside—CPU utilization, storage, and so on—the best thing for us is the customer feedback being universally positive.

How did moving to Oracle Cloud Infrastructure improve your quality of service (QoS)?

One of the big benefits that resulted from our move to Oracle Cloud Infrastructure was in terms of efficiency. Oracle’s technical architecture enabled us to support all of our existing clients with about 20 percent less compute power, and this included some of our high-end clients who have intensive requirements when it comes to compute and disk I/O. This means we can load up new customers into the same footprint, and those savings drop right to our bottom line.
We also saw a significant reduction in provisioning and deployment times, which dropped from a few hours to well under 30 minutes. We’re saving about 15 percent in labor costs right there, so between the two we are seeing about a 30 percent financial pickup.

Interested in learning more? Register for our webcast to learn more about Ntiva’s experience migrating to Oracle Cloud Infrastructure.

Additional information about Ntiva

Ntiva was founded in 2004 by CEO Steven Freidkin and has grown almost exclusively through referrals and an unwavering focus on its core values:

Focusing on customer service first

Managing every client dollar as if it were their own

Hiring, developing, and retaining the very best people

Ntiva knows that technology is a crucial part of every business, and having the right technology in place is more than a competitive advantage—it’s critical to growth and success.


Events

Oracle Cloud Application Migration: Are You Ready to Soar?

This post was written by Mike Owens, Vice President, Cloud Advisory Practice, and Mary Melgaard, Group Vice President, Cloud Migration Services.

Do you ever feel like you've been left behind in the on-premises world while everyone else has moved to the cloud? You want to move your applications to the cloud, but which of the many paths is the right one for you? Moving Oracle Applications such as E-Business Suite, JD Edwards, PeopleSoft, Siebel, and Hyperion to the cloud can introduce a complex landscape of alternatives and choices. It's important to select a partner who has the vision, experience, tools, and commitment to help you create your cloud adoption and migration strategy, and guide your migration to its successful conclusion. Join Oracle Consulting for our May 7 webinar on cloud application migration, in which we explain our cloud adoption and strategy methodology: Vision, Frame, and Mobilize.

Vision

Establish your vision: Gather and organize key data about applications, infrastructure, and IT costs. Establish your project and transformation governance plan, and conduct a visioning workshop to ensure alignment with stakeholders. What are your business and technology objectives for your Oracle Cloud migration? What are the underlying drivers for your success?

Organize for transformation: Compile and analyze an application inventory subset, build the business case, and identify prototype applications to migrate. What are the common characteristics or unique business requirements for your application portfolio?

Frame

Frame your path forward: Based on the application subset that you identified, determine the appropriate cloud deployment model for your applications, addressing regulatory, legal, and privacy considerations. Given your requirements, will you need multiple cloud vendors, or can one provider fulfill most of your business and workload objectives?
Mobilize

Map your journey: Determine the activities and actions needed to drive the migration, validate the business case, and reconcile any conflicts. Is this a "move and improve" initiative that will create a new way of working by changing the look and feel of your applications? Or is this a technical move that lifts workloads "as-is" from on-premises to the cloud with limited impact on the business users?

Mobilize for the future: Orchestrate the final application modernization, migration, and transformation analysis. Sequence the initiatives, and incorporate in-flight and planned projects. Develop your cloud transformation roadmap. What internal and external resource commitments are required to successfully migrate, and how will you manage the organizational change? Does the subscription include services that reduce your internal staffing requirements, and how will you communicate and adapt to these changes?

Soar

After your strategy is in place, flawless execution is critical. A strategic approach, coupled with a partner that automates cloud migration, can help to simplify the process of moving your applications to the cloud. Leveraging the Oracle Soar methodology, you can move Oracle and non-Oracle applications to the cloud rapidly and efficiently with near-zero downtime.

Join Us

Are you ready to choose the path for cloud application migration that is right for you? Are you ready to soar? Join our webinar on May 7 at 9 a.m. PT to:

Identify best practices for your cloud migration strategy

Understand the paths for moving applications to the cloud

Learn from real-life customer examples for moving E-Business Suite, JD Edwards, and third-party applications to Oracle Cloud

Understand how to leverage Oracle Soar to rapidly and efficiently move your applications to Oracle Cloud with near-zero downtime


Solutions

How MSPs Can Deliver IT-as-a-Service with Better Governance

As a solutions architect, I often support partners who deliver managed IT services to their end customers. Similarly, I work with large enterprises that manage IT for multiple business units. One of the most frequent requests I get is for best practices on how to align Oracle Cloud Infrastructure solutions and Identity and Access Management (IAM) policies with business-specific governance use cases. For enterprise customers, this means having better control over usage costs across multiple business units. For managed service providers (MSPs), it means having better cost governance over the IT environments that they manage for end customers in their Oracle Cloud Infrastructure tenancy.

This post is structured like a case study, in which an example enterprise customer, ACME CORP's Central IT team, faces the following business challenge: How do they give their departmental IT stakeholders, and the operators within those departments, the autonomy to use the Oracle Cloud Infrastructure services they need while still maintaining control over cost and usage? The post shows how this can be accomplished by using nested compartments and budgets. It also demonstrates how to maintain a separation of duties by delegating the management of security lists to the departmental application teams while enabling Central IT network admins to retain control over all networking components. This is particularly applicable for any application team using a CI/CD pipeline for their projects, automating the deployment and updating of subnets and security lists as part of their infrastructure as code (IaC) pipeline.

The Business Challenge

ACME CORP is a large enterprise company with a Central IT team and departmental IT teams that reside within the Finance and HR departments. The Central IT team manages the base cloud infrastructure for the company, and they manage IAM for all cloud services.
Additionally, the Central IT Network Admin and Database (DB) Admin teams manage networking and database systems for the Finance and HR departments. The Finance and HR departmental IT teams each comprise their own applications and database groups. The Finance and HR Applications groups are responsible for application deployment and should be able to manage the relevant infrastructure services for their specific projects. Therefore, Central IT needs to grant the departmental applications groups the ability to manage their own application workloads and the associated virtual infrastructure services, including application load balancers and security lists. However, Central IT wants more centralized control over the company's databases. They plan to grant the HR Database group the ability to read HR databases only, and they will grant the Finance Database group the ability to read and write financial databases. Finally, Central IT needs to grant an additional group, a team of Accountants, the ability to control costs and usage by the Finance and HR teams through the use of budgets.

The Solution

Now let's establish the right identities and access so that the Central IT team and the departmental IT teams have access to their cloud resources.

Create Groups of Users Based on Role

First, we create the following groups, which map to the preceding roles:

NetworkAdmin
DatabaseAdmin
FinApplication
HRApplication
FinDBuser
HRDBuser
Accountants

Align Compartment Structure with Organizational Hierarchy

Next, we implement the necessary compartment structure to separate resources by department. This solution has three levels of nested compartments with associated cloud infrastructure assets, as shown in the following image:

First-Level Compartment

In ACME CORP's tenancy, a root compartment is automatically created for Central IT, in which they can create policies to manage access to resources in all underlying compartments.
Second-Level Compartments

Under the root compartment are the following child compartments. The following screenshot shows what the compartment nesting would look like in the console.

Central_IT_Network: For shared networking service elements, including FastConnect, internet gateway, IPSec termination points, and DNS
Finance: Contains the VCN for finance projects and applications
HR: Contains the VCN for HR projects and applications

When you click into these compartments, you can create the VCNs for each one. Because the Central_IT_Network compartment does the VCN transit routing for the company, it's configured to contain the following components:

Dynamic routing gateways for IPSec
Internet gateway with public IPs
FastConnect
Load balancers for public-facing web servers
DNS database
Local peering to VCN_ACME_FIN and VCN_ACME_HR

The Finance compartment houses its own private network for the department, named VCN_ACME_FIN, and contains the following components:

Load balancers for Finance intranet servers
Active domain controllers for Finance
File Storage for Finance users
Object Storage for Finance users
Databases for Finance users

The HR compartment has its own network, VCN_ACME_HR, along with the following components:

Load balancers for HR intranet servers
Object Storage for HR users
Databases for HR users

Third-Level Compartments

Nested under the Finance compartment are the Project A and Project B compartments. Each project can house resources leveraged by specific applications. Nested under the HR compartment are similar compartments for HR Project A and Project B.

Subnets and Security Lists

For Finance Project A, we created two subnets under VCN_ACME_FIN (in the Finance compartment), but because we want to enable the CI/CD pipeline to automatically update and deploy application-specific security lists, we created the security lists in the Project A compartment. We replicated the same creation and placement for Finance Project B.
The following image shows how we created the security list in the Finance Project A compartment:

We performed the same type of subnet and security list placements for HR Project A and Project B. Now that we've isolated resources by compartment for department and project usage, we'll create budgets to control spending.

Creating Budgets to Control Spending

Next, we establish permissions for the Accountants group (see the "Policies" section), enabling them to administer budgets and proactively control spending for each department. For example, they can create a budget for the Finance compartment that also covers Finance Projects A and B. The total monthly spending limit could be set at US$1,200, with a 50 percent threshold so that an email notification prompts the right teams to take action when spending reaches half the budget. For more information about creating a budget, see Managing Budgets.

Policies

We created the following policies for each group and compartment.
Policies applied on the ACME_CORP root compartment

Policy to enable the DatabaseAdmin group to manage databases across all compartments in the tenancy:

ALLOW GROUP DATABASEADMIN TO MANAGE DATABASE-FAMILY IN TENANCY

Policy to enable the NetworkAdmin group belonging to Central IT to have administrative rights over all network resources in the Central_IT_Network compartment:

ALLOW GROUP NetworkAdmin TO MANAGE VIRTUAL-NETWORK-FAMILY IN COMPARTMENT CENTRAL_IT_NETWORK

Policies applied on the Finance compartment

Policy to enable FinDBuser to have read and write rights on the Finance databases:

ALLOW GROUP FINDBUSER TO USE DATABASE-FAMILY IN COMPARTMENT FINANCE

Policies to enable the Finance application teams, including automated CI/CD systems, to manage virtual machines, object storage, and security lists in their respective project compartments:

ALLOW GROUP FINAPPLICATION TO MANAGE OBJECT-FAMILY IN COMPARTMENT FINANCE
ALLOW GROUP FINAPPLICATION TO MANAGE SECURITY-LISTS IN COMPARTMENT PROJECT A
ALLOW GROUP FINAPPLICATION TO MANAGE SECURITY-LISTS IN COMPARTMENT PROJECT B
ALLOW GROUP FINAPPLICATION TO MANAGE INSTANCE-FAMILY IN COMPARTMENT PROJECT A
ALLOW GROUP FINAPPLICATION TO MANAGE INSTANCE-FAMILY IN COMPARTMENT PROJECT B

Policies applied on the HR compartment

Policy to enable HRDBuser to have read-only rights on the HR databases:

ALLOW GROUP HRDBUSER TO READ DATABASE-FAMILY IN COMPARTMENT HR

Policies to enable the HR application teams, including automated CI/CD systems, to manage virtual machines, object storage, and security lists in their respective project compartments:

ALLOW GROUP HRAPPLICATION TO MANAGE OBJECT-FAMILY IN COMPARTMENT HR
ALLOW GROUP HRAPPLICATION TO MANAGE SECURITY-LISTS IN COMPARTMENT PROJECT A
ALLOW GROUP HRAPPLICATION TO MANAGE SECURITY-LISTS IN COMPARTMENT PROJECT B
ALLOW GROUP HRAPPLICATION TO MANAGE INSTANCE-FAMILY IN COMPARTMENT PROJECT A
ALLOW GROUP HRAPPLICATION TO MANAGE INSTANCE-FAMILY IN COMPARTMENT PROJECT B

Policy to enable budget management, applied on the ACME_CORP compartment:

ALLOW GROUP ACCOUNTANTS TO MANAGE USAGE-BUDGETS IN TENANCY

Each policy follows the syntax: ALLOW GROUP <group-name> TO <verb> <resource-type> IN COMPARTMENT <compartment-name> (or IN TENANCY).

Conclusion

An MSP or central IT organization can deliver services across multiple organizations and departments, even when all are placed within the same tenancy. You can separate resources while still maintaining governance and control by leveraging nested compartments, assigning policies at the appropriate level of the nested compartments, and configuring budgets to limit resource usage while maintaining logical isolation and the requisite access controls.

Resources

Oracle Cloud IaaS documentation
Best Practices for Identity and Access Management Service on Oracle Cloud Infrastructure blog post
Foundational Oracle Cloud Infrastructure IAM Policies for Managed Service Providers blog post


Security

Optimizing Cloud-Based DDoS Mitigation with Telemetry

Today’s cybersecurity challenges demand cross-functional solutions to achieve an effective level of capability in the domains of identification, protection, detection, response, and recovery. As a result, enterprise security controls represent significant investments for IT departments. Yet incidents repeatedly demonstrate that even well-resourced organizations don’t leverage solutions to their fullest potential. As organizations strive to achieve a capability baseline, the concept of control efficacy is increasingly important. Today’s IT leaders are asking, “How effectively is a given solution working under real-world conditions, and what can be done to improve it?” During an incident—whether it’s an attempted network intrusion, a phishing email, or a DDoS attack—simply having a control in place isn't enough. You must also understand, with hard data, how that control is behaving and what can be done to improve it.

Internet Intelligence

To that end, Oracle deployed a deep monitoring network of sensors—known as vantage points—to collect publicly available data about internet performance and security events all over the world, with real-time information about degradation, internet routing changes, and network anomalies. Oracle Cloud Infrastructure products, such as Market Performance and IP Troubleshooting, are based on Internet Intelligence. (In the map, red dots represent network sensors.)

Oracle’s Internet Intelligence Map monitors the volatility of the internet as a whole, based on this telemetry. With more organizations relying on third-party providers for their most critical services, monitoring the collective health of the internet can give users an early warning of impending attacks. Organizations can react and prepare before the danger reaches them. Data gathered by Oracle’s Edge Network provides valuable insight about border gateway protocol (BGP) route changes and distributed denial-of-service (DDoS) activation worldwide.
Oracle can monitor 250 million route updates per day, including where DDoS protection is being activated and when attacks are occurring. In near real time, we can measure the quality of DDoS protection activations by most cloud-based DDoS vendors. This information can be used to measure the effectiveness of protection solutions.

DDoS Attack Example

Monitoring DDoS activation provides visibility into where attacks are happening, the length of the attack, and the effectiveness of route coverage, in near real time. A DDoS attack at the end of summer 2018 provides a timely example of the importance of DDoS mitigation telemetry. The attack, which intermittently affected access to a company’s website and services, was easily identified by Oracle’s vantage points and visible on the Internet Intelligence Map. In this example, as is typical with cloud-based DDoS protection, a cloud-based DDoS protection provider hijacked the organization’s IP blocks so that all the attack traffic was redirected to specialized DDoS scrubbing centers. During the IP block hijack, our security team measured a number of interesting statistics about the route instability, latency jump, and other artifacts caused by the traffic reroute. Observing the propagation duration and the completeness of the route change allows for some unique quality measurements. In this example, the targeted AS announces two prefixes. As the following image shows, despite the announcement change, there are still unprotected routes via an ISP, allowing some of the attack traffic through. The DDoS protection provider intercepted the IP blocks under attack at around 16:00 local time.

DDoS Attack Leakage

Although the organization's DDoS protection provider was efficiently mitigating most of the DDoS attack, our tool shows that a small portion of the attack was still reaching the organization for approximately one hour. This is called DDoS attack leakage.
Despite the fact that a very powerful cloud-based DDoS provider was mitigating the attack, the internet assets of the organization still went offline during this hour because a portion of the attack was still going to its data center. Additional visibility from trace routes confirms that not all routes were going through the DDoS protection. Two traces, from London and Frankfurt, confirm that some traffic was still routing through an ISP (instead of going entirely through the cloud-based DDoS mitigation provider). Note: This doesn't mean that the ISP is knowingly responsible for the DDoS leakage. In fact, quite the contrary. The internet is largely a self-adjusting network, in which traffic is sometimes routed via unexpected core locations. Routing tables need frequent adjustments (made by tier 1 providers and ISPs), and these routing adjustments always take some time to propagate, as was the case here. Like many of our colleagues in the DDoS industry, the Oracle Security team was able to measure and score the quality of the IP hijack and the consistency of the hijack over time. Our results also show that the attack leakage was indeed captured by a low quality score:

Prefix:                      xxx.xxx.xxx.0/24
Prefix Owner (under attack): [organization]
AS of prefix owner:          xxxxx
DDoS Provider:               [DDoS provider]
AS of provider:              yyyyy
Handoff Coverage:            64.39
Consistent Coverage:         95.53

These statistical measures are called Handoff Coverage and Consistent Coverage. A Handoff Coverage of 97 percent or more typically indicates almost perfect coverage; less than 90 percent might indicate a DDoS leakage. The Handoff Coverage impact is shown in the following image: As demonstrated in this example, Handoff Coverage and Consistent Coverage are important parameters that analysts and security staff can use to quantify the effectiveness of cloud-based DDoS mitigation solutions.
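To make the two coverage scores more concrete, here is a small illustrative sketch in Python. It is a simplified, hypothetical model, not Oracle's actual scoring methodology: it treats Handoff Coverage as the percentage of observed AS paths that traverse the scrubbing provider's network during the handoff, and Consistent Coverage as the percentage of measurement samples in which that coverage holds completely. All ASNs and paths below are made up for illustration.

```python
# Illustrative sketch: estimating DDoS "handoff" and "consistent" coverage
# from BGP path observations. Hypothetical model, not Oracle's actual scoring.

def handoff_coverage(paths, provider_asn):
    """Percentage of observed AS paths that traverse the scrubbing provider."""
    covered = sum(1 for path in paths if provider_asn in path)
    return 100.0 * covered / len(paths)

def consistent_coverage(samples, provider_asn, threshold=100.0):
    """Percentage of time samples in which every observed path is covered."""
    consistent = sum(
        1 for paths in samples
        if handoff_coverage(paths, provider_asn) >= threshold
    )
    return 100.0 * consistent / len(samples)

if __name__ == "__main__":
    PROVIDER = 64500   # hypothetical scrubbing provider ASN
    ISP = 64501        # hypothetical ISP still carrying leaked traffic
    # AS paths seen from three vantage points during the handoff:
    paths = [
        (64496, PROVIDER, 64499),   # rerouted through the provider
        (64497, PROVIDER, 64499),
        (64498, ISP, 64499),        # leak: still routed via the ISP
    ]
    print(round(handoff_coverage(paths, PROVIDER), 2))  # prints 66.67
```

In this made-up leak scenario, one of three vantage points still routes via the ISP, so the sketch reports a handoff coverage of about 67 percent, well below the 90 percent level that the article associates with leakage.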
In particular, these parameters can help detect the residual attack traffic still leaking back to the organization during the initial handoff, which causes downtime.

Conclusion

DDoS attacks continue to plague organizations across all industries. Mitigation is widely available but applied unevenly. Leakage during mitigation can still have an impact on infrastructure, leading to the loss of mission-critical functions. When solutions are deployed, tracking their efficacy is critical to ensure that the organization’s security baseline is known and validated. Telemetry gathered from external monitoring, such as Oracle provides for DDoS mitigations, is one such example of seeking evidence-based optimization.


Importing VirtualBox Virtual Machines into Oracle Cloud Infrastructure

Large enterprises use a wide variety of operating systems. Oracle Cloud Infrastructure supports a variety of operating systems, both new and old, and enables customers to import root volumes from on-premises sources such as Oracle VM, VMware, KVM, and now VirtualBox. VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprises. This blog post and its related video, Importing VirtualBox Virtual Machines to Oracle Cloud Infrastructure, review the steps required to create and then import a VirtualBox SUSE Linux 15 virtual machine (VM). This process also works with existing VMs, which makes "move and improve" migrations to Oracle Cloud Infrastructure much easier.

Create a VirtualBox VM

Start by creating a simple VirtualBox VM. See the video for details. Remember, this process also works for any preexisting VM.

Prepare a VirtualBox VM for Import

Prepare the VM to boot in the cloud by verifying that the image is ready for import. Images must meet the following requirements:

Be under 300 GiB
Include a master boot record (MBR)
Be configured to use BIOS (not UEFI)
Consist of a single disk
Be a VMDK or QCOW2 file

The example in this post uses a copy of openSUSE Leap installed with just the default settings to show how it works. You must also ensure that the instance has security settings appropriate for the cloud, such as enabling its firewall and allowing only SSH logins that use private key identifiers. For more information about custom image requirements, see the "Custom Image Requirements" section of Bring Your Own Custom Image for Paravirtualized Mode Virtual Machines. As required, also perform the following tasks:

Add a serial console interface to troubleshoot the instance later, if required.
Add the KVM paravirtualized drivers.
Configure the primary NIC for DHCP.

Enable the Serial Console

After you confirm the image settings, enable the serial console to troubleshoot the VM later, if required.
Edit the /etc/default/grub file to update the following values:

Remove resume= from the kernel parameters; it slows down boot time significantly.
Replace GRUB_TERMINAL="gfxterm" with GRUB_TERMINAL="console serial" to use the serial console instead of graphics.
Add GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200" to configure grub’s serial connection.
Replace GRUB_CMDLINE_LINUX="" with GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200" to add the serial console to the Linux kernel boot parameters.

Then regenerate the GRUB configuration:

grub2-mkconfig -o /boot/grub2/grub.cfg

To verify, reboot the machine, and then run dmesg and look for the updated kernel parameters:

dmesg | grep console=ttyS0

For more information about these requirements, see the "Enabling Serial Console Access" section in Preparing a Custom Linux Image for Import.

Enable Paravirtualized Device Drivers

Next, add paravirtualized device support by building the virtio drivers into the VM's initrd. Because this action works only on machines with a Linux kernel of version 3.4 or later, check that the system is running a modern kernel:

uname -a

Rebuild the initrd with the dracut tool, telling it to add the qemu module:

dracut --logfile /var/log/dracut.log --force --add qemu

Check lsinitrd to verify that the virtio drivers are now present:

lsinitrd | grep virtio

For more information, see the dracut(8) and lsinitrd(1) manuals.

Enable Dynamic Networking

Next, clear any persistent networking configurations so that the VM doesn’t try to keep using the interfaces that were available in VirtualBox. To do that, empty the 70-persistent-net.rules file (but keep the file in place) by running:

> /etc/udev/rules.d/70-persistent-net.rules

Note: This change is reset when the VM is restarted. If the VM is restarted in VirtualBox before it is imported, you must perform this step again. For more information, see the udev(7) manual and Predictable Network Interface Names.
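For reference, after the serial console edits described above, the relevant entries in /etc/default/grub would look roughly like the following sketch (other distribution-specific settings in the file stay as they are):

```
# /etc/default/grub (relevant entries after the serial console edits)
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
```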
Power Off the VM

With all of that done, power off the instance:

halt -p

Now the VM is fully prepared.

Upload the Image

After you have powered off the VM, copy the VM disk from your desktop to Object Storage. Your desktop needs to have the Oracle Cloud Infrastructure CLI installed and configured; see the CLI Quickstart if you are doing this for the first time. Use the OCI CLI to upload the file. The Object Storage web console supports only files up to 2 GiB, so it's unlikely to work for your image. But even if the image is small, the CLI uploads much faster.

In the Oracle VM VirtualBox Manager window, go to the VM index. Right-click the instance and select Show in Finder. Note the directory path and file name of the VM disk. Open the terminal and upload the file by using the VM's directory path and file name, your Oracle Cloud Infrastructure tenancy's namespace, and the Object Storage bucket name:

VM_DIR="/Users/j/VirtualBox VMs/openSUSE-Leap-15.0/"
VM_FILE="opensuse-leap-15.0.vmdk"
NAMESPACE="intjosephholsten"
BUCKET_NAME="images"
cd "${VM_DIR}"
oci os object put -ns "${NAMESPACE}" -bn "${BUCKET_NAME}" --file "${VM_FILE}"

The upload might take some time, depending on the size of the image and available bandwidth. For more information about uploading objects, see the CLI instructions for uploading an object to a bucket in Managing Objects.

Import the Image

After the upload is complete, log in to the Oracle Cloud Infrastructure Console to import the image. Go to the details page of the bucket to which you uploaded the image, and look at the details of the image to find its URL. You will use the URL to import the image. In the navigation menu, select Compute, and then select Custom Images. Click Import Image. If the system was able to use paravirtualized drivers, select paravirtualized mode to get the best performance. After the image import process starts, it takes some time to complete.
For more information, see Importing Custom Linux-Based Images. Access the Instance After the image is imported, you can launch a new instance directly from the image details page. After the instance is running, use SSH to connect to it by using its public IP address and the same login credentials used to access the machine when it was running on VirtualBox. You have successfully imported a VirtualBox VM to Oracle Cloud Infrastructure! Try it for yourself. If you don’t already have an Oracle Cloud account, go to http://cloud.oracle.com/tryit to sign up for a free trial today.


Product News

Oracle IaaS and PaaS Console Available in 29 Languages

For over four decades, Oracle has been a trusted technology provider for enterprises worldwide. We deliver solutions to help you capture, store, and leverage data more effectively so that you can run your business and continue to evolve. As infrastructure as a service (IaaS) adoption has spread from developer teams to global enterprise IT organizations, we have had to evolve our cloud infrastructure service and user experience. As the next step in this evolution, we are announcing the localization of our web-based console into 28 languages, in addition to English. This enhancement complements our plans to rapidly extend global coverage of our next-generation enterprise cloud infrastructure so that we can address the data sovereignty and latency needs of our multinational customer base. Additionally, we're continuing to improve the experience for users located around the world.

Available Languages

The following 29 languages are available on our console: Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Finnish, French (Canada), French (Europe), German, Greek, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, and Turkish.

Intuitive Language Selection

From any page in the console, click the globe icon in the upper-right corner of the screen to select your language. The language selections appear in the local language.

Localized Console and Navigation

After you select your language, the console, including the quick action links on the home page and the navigation bar selections, is translated. Not all product or service names are translated in every language. We followed local preferences and guidelines in each country when deciding whether to translate Oracle and third-party product names.
Create and Manage Cloud Resources in Your Language of Choice Now you can use your core IaaS and PaaS services—everything from Compute, Storage, and Networking to Autonomous Data Warehouse and Edge Services—in your language of choice. For example, the following image shows how you can manage your virtual cloud networks (VCNs) in Turkish. Currently, documentation and announcements are available only in English. However, we're invested in improving the Oracle Cloud user experience for our growing global customer base. Stay tuned for future enhancements. We invite you to try the Oracle Cloud today and comment on this post with any feedback about how we can further improve your experience using Oracle Cloud services.


Product News

How to Get Control of Your Spending in Oracle Cloud Infrastructure

It's critical for organizations to track spending, especially for services with consumption-based billing like Oracle Cloud Infrastructure. Today, we're announcing the release of Oracle Cloud Infrastructure Cost Analysis, Budgets, and Usage Reports. Over the last few weeks, we've rolled out this new suite of tools to help customers understand spending patterns, monitor consumption, analyze their bill, and, ultimately, reduce spending. Cost Analysis: Understand Spending Patterns at a Glance You need to know where your Oracle Cloud Credits are being spent and how your consumption compares to your commitment amount. Use the Cost Analysis dashboard to view your spending by service or by department (compartment or cost tracking tag). Use trend lines to understand how spending patterns are changing and where to focus cost reduction efforts. For more information, see the documentation. Expanded Cost-Analysis Detail Get Proactive About Controlling Spend with Budgets You need to be proactive about your spending and get early warning if spending accelerates unexpectedly. Use budgets to track actual and forecasted spending for the entire tenancy or per compartment. Set up actionable email alerts at important thresholds to keep the right people informed and direct them to take action before overages are incurred. For more information, see the documentation. See All Your Budgets in One Place Sample Budget Alert Email Analyze Your Cloud Bill with Usage Reports If you want to get a granular view of spending or find ways to save, you need detailed information about your Oracle Cloud Infrastructure consumption. Usage reports enable you to get more insight on your bill or create custom billing applications. The reports contain one record per resource (for example, an instance, DB system, or Object Storage bucket) per hour with metadata and tags.
When joined with your rate card, usage reports drive scenarios such as: Invoice reconciliation Custom reporting Cross-charging Cost optimization Resource inventory In addition to enabling new billing scenarios, usage reports provide transparency into how the billing system works. For example, you can now see how and where rounding occurs, and how resources that existed for less than an hour are billed. You can download usage reports through the web-based console or by using APIs. For more information, see the documentation. Sample Dashboard Built from Usage Report Data Enabling Access for a New Set of Users: Cloud Controller or Accountant It's important to enable a new set of users—the cloud controller or accountant—to access billing information but not have the ability to manage or use cloud resources. For the new suite of billing features, access can be granted to nonadministrator users who need access only to billing features. Read more about access management for billing features in the documentation. Cost Management Best Practices The new cost management features in Oracle Cloud Infrastructure build on fundamentals that have existed in our service for a while. Here are some best practices for managing costs: Create a budget that matches your commitment amount and an alert at 100 percent of the forecast. This gives you an early warning if your spending increases and you're at risk of incurring an overage. Use compartments primarily as an access-control mechanism, but consider that you can also see cost per compartment. In practice, many enterprise customers set up one compartment per department, which also works well for cross-charging. Use cost-tracking tags (like cost-center) to allocate cost in more granular ways. We have recently rolled out tag defaults to make it easier to tag resources. Enable monitoring on all resources.
You can merge monitoring data with cost data to gain powerful insights on how to improve resource utilization.  Use the usage report to analyze costs and drive custom solutions.
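As a sketch of the API route for downloading usage reports, the report objects can be listed with the Object Storage CLI. The bling namespace and the tenancy-OCID bucket name used below are assumptions based on the usage-report documentation; verify them for your tenancy before relying on this.

```shell
# List available usage-report objects for a tenancy (sketch). Per the
# usage-report docs, reports are delivered to an Oracle-owned bucket named
# after your tenancy OCID in the "bling" namespace -- verify both values.
list_usage_reports() {
  tenancy_ocid="$1"
  oci os object list --namespace bling --bucket-name "${tenancy_ocid}" --all
}

# Example (hypothetical OCID):
# list_usage_reports "ocid1.tenancy.oc1..aaaaexample"
```

Each listed object is a gzipped CSV that you can download with oci os object get and join against your rate card.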


Product News

Enabling Any Federated User to Invoke the Oracle Cloud Infrastructure SDK and CLI

At Oracle, we’ve been focused on improving our identity federation support so that enterprises can integrate seamlessly with Oracle Cloud Infrastructure. We want you to be able to use your existing identity provider (IdP) to access the Oracle Cloud Infrastructure Console, SDK, and CLI. In the past few months, we’ve delivered SDK and CLI access to users federated through Oracle Identity Cloud Service (IDCS) and Okta. Today, I’m excited to introduce the token-based CLI and SDK, which allows any federated user to invoke the CLI and SDK. This feature uses a CLI command that allows you to log in by using a web browser for CLI and SDK sessions. When to Use the Token-Based CLI Versus API Signing Keys Oracle Cloud Infrastructure already supports API signing keys, which allow a local user to access the SDK and CLI. So when would you use the token-based CLI versus API signing keys? This section provides some use cases. Flexibility to Support Any Identity Provider Currently, API signing keys are supported only by Okta and SCIM. Therefore, if you use any other IdP, you can use the token-based CLI to enable CLI and SDK access to Oracle Cloud Infrastructure. User Interactive SDK and CLI Sessions After API signing keys are set up, they can support user interactive sessions. However, token-based CLI access is ideal for sessions in which user interaction is required because, by default, it uses a web browser to authenticate each user. For example, say that you need to tag a few databases and you want to do it interactively. If you use the token-based CLI, you can enter your username and password in the web-based login page and then use the CLI to perform the tagging tasks. The session lasts one hour, but you can refresh it without using the web browser for up to 24 hours. The exact length of time depends on your IdP. What About Headless Computers? You might have administrative computers from which you want to run Oracle’s CLI but that don’t have web browsers installed.
You can still use the token-based CLI with those computers. You first create an authenticated session on a computer with a web browser, and then using the token-based CLI, you export this session and then import it on the headless computer. Users who are logged in to the headless computer can still refresh the token session without having a web browser by using the oci session refresh command. This can usually be done for up to 24 hours, but again, the exact duration depends on your IdP. Automated Jobs That Don't Require User Interaction API signing keys are ideal for automated jobs, for example, jobs that run at 3 a.m. and don't require user interaction. A token-based CLI is not well suited for automated jobs because there is no recommended way to authenticate the session without user interaction. Key Management Considerations API signing keys must be distributed, rotated, and managed. The API signing key is a public/private key pair, and you need that private key on any machine that runs the CLI or SDK. This might be a concern because you now have to manage private keys. If you want a solution that requires simpler administration, consider the token-based CLI. It uses ephemeral keys that last only a short time. You don’t need to worry about distribution and key rotation, which might be significantly easier to administer for your organization. The following chart summarizes the preceding use cases: Using the Token-Based CLI and SDK The token-based CLI and SDK allows any federated user to invoke the Oracle Cloud Infrastructure CLI and SDK. To use this feature, run the oci session authenticate command from the CLI. This command brings up a web browser, which you can use to authenticate the session. When the session is authenticated, you can then run CLI and SDK commands. SDK Support To generate an authenticated session that the SDK can use, use oci session authenticate and then, from that command window, call your SDK script. 
You also need to change how your SDK script uses the signer. For information, see the documentation. PowerShell Tip Here’s a tip that will save you typing in PowerShell, and I’m sure there’s an equivalent in UNIX and macOS. If you get tired of appending the profile and auth parameters to your CLI calls, you can write a one-line function in PowerShell that does the appending for you. The only prerequisite is that you use the same profile name every time that you authenticate. Write a PowerShell function and save it to your PowerShell profile: function toci {oci @args --profile TokenDemo --auth session_token} This function creates a command called toci that always appends --profile TokenDemo and --auth session_token. So instead of typing this: oci <oci_command> --profile TokenDemo --auth session_token You just type this: toci <oci_command> To make this work, you need to specify the profile name you used in the function—in my case, TokenDemo—as your profile every time you call oci session authenticate. Thanks for reading, and stay tuned for more authentication features coming soon.
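For the UNIX and macOS equivalent mentioned above, a small bash or zsh function does the same appending. TokenDemo is the profile name from the example; substitute your own.

```shell
# bash/zsh counterpart of the PowerShell toci function: pass all arguments
# through to oci and append the profile and auth parameters.
toci() {
  oci "$@" --profile TokenDemo --auth session_token
}

# Usage: toci <oci_command>, for example:
# toci iam region list
```

As with the PowerShell version, this works only if you use the same profile name (here, TokenDemo) every time you call oci session authenticate.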


Developer Tools

Set Up a Machine-Learning Workbench on Oracle Cloud Infrastructure

Machine learning is fast becoming an integral part of enterprise applications and a core competency for enterprises. A basic machine-learning workflow involves collecting, preparing, and curating data, followed by building, training, evaluating, and deploying models by using different machine-learning algorithms, which you can then use to make predictions. This post focuses on the model development part of the workflow. A rich ecosystem of open source software and managed offerings is available, and I've picked some popular open source offerings to build an example workbench. Using this workbench, you can start coding and building models. I don't cover model development in depth, but I present one example to show how to use the workbench. Python and R are commonly used languages for machine learning. I use Python 3 here because it supports a robust set of popular machine-learning frameworks, libraries, and packages, such as TensorFlow, scikit-learn, Matplotlib, SciPy, NumPy, Keras, and pandas. Notebooks like Jupyter and Apache Zeppelin provide integrated environments that incorporate results, visualizations, and documentation inline with code, providing an excellent workbench for data scientists, machine-learning engineers, and other stakeholders to develop, visualize, and share their work. I show the steps to set up a basic, cloud native, machine-learning workbench from scratch on Oracle Cloud Infrastructure by using Python 3, TensorFlow, and Jupyter Notebook. I use an Oracle Cloud Infrastructure bare metal instance running Oracle Linux. Let’s get started. Launch an Instance Follow the steps in the documentation to launch a bare metal instance in Oracle Cloud Infrastructure.
I made the following choices: Region: us-ashburn-1 OS: Oracle Linux 7.6 Instance shape: BM.Standard2.52 Use SSH to connect to the instance: $ ssh -i <path_to_your_private_key> opc@<public_IP_address_of_your_instance> I used Oracle Linux and a bare metal shape, but you can make other choices. These steps should work for the most part, regardless of your choices, although some commands will change based on the OS, and you need to use the right libraries for your instance shape. Install Python 3 By default, Oracle Linux 7 comes with Python 2.7. You can continue to use that, but I'm going to show you how to use Python 3. To install Python 3, follow these steps: Install the EPEL repository: [opc@ml-workbench ~]$ sudo yum install -y oracle-epel-release-el7 oracle-release-el7 Install Python 3.6: [opc@ml-workbench ~]$ sudo yum install -y python36 Set up a Python virtual environment (venv), called mlenv here: [opc@ml-workbench ~]$ python3.6 -m venv mlenv Activate the virtual environment: [opc@ml-workbench ~]$ source mlenv/bin/activate Using pip3, you can install some of the commonly used Python libraries like scikit-learn, Keras, NumPy, Matplotlib, pandas, and SciPy. Install TensorFlow Pick the correct TensorFlow package to use. Based on my choices, I can install TensorFlow by using the following command: (mlenv) [opc@ml-workbench ~]$ pip3 install tensorflow Install Jupyter and Run a Notebook Install Jupyter: (mlenv) [opc@ml-workbench ~]$ python3 -m pip install jupyter Run a Jupyter notebook: (mlenv) [opc@ml-workbench ~]$ jupyter notebook --ip=0.0.0.0 You will see a warning such as No web browser found: could not locate runnable browser.
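The installation steps above can be condensed into a single helper script. This is a sketch for Oracle Linux 7 as used in this post; the function name is mine, the package list mirrors the libraries mentioned above, and you should adjust package names for other distributions.

```shell
# Condensed workbench setup from the steps above (Oracle Linux 7 sketch).
setup_workbench() {
  sudo yum install -y oracle-epel-release-el7 oracle-release-el7  # EPEL repo
  sudo yum install -y python36                                    # Python 3.6
  python3.6 -m venv mlenv                                         # create venv
  source mlenv/bin/activate                                       # activate it
  pip3 install scikit-learn keras numpy matplotlib pandas scipy   # common libs
  pip3 install tensorflow                                         # TensorFlow
  python3 -m pip install jupyter                                  # Jupyter
  jupyter notebook --ip=0.0.0.0                                   # run notebook
}
```

Running setup_workbench on a fresh instance performs the whole sequence; the final command starts the notebook server that you tunnel to in the next step.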
Connect to Jupyter from Your Local Machine with SSH Tunneling Open another terminal window, and use the following command with an available port number to access the notebook: $ ssh -i <path_to_your_private_key> opc@<public_IP_address_of_your_instance> -L 8000:localhost:8888 Open a web browser on your local machine and browse to http://localhost:8000. When you are prompted for a token, use the token key listed in your previous terminal window to log in to the Jupyter notebook. Start Coding Click the New button to create a new Python 3 notebook. You can also upload existing IPython notebooks. Following is an example from a TensorFlow tutorial for a basic image classifier: Conclusion In this post, I set up a machine-learning workbench with an Oracle Cloud Infrastructure bare metal instance running Oracle Linux. I used TensorFlow, a popular open source, machine-learning framework, and I described how to install and use Python 3 and Jupyter notebooks, which make it easy to build, train, and deploy models using Python. You can follow similar steps to install other languages, tools, and frameworks in the rapidly evolving machine-learning ecosystem. If you don’t have an Oracle Cloud account, you can sign up for a free account and get a $300 credit to start building.


Product News

Manage Petabytes of Cloud Block Storage in Seconds in Your IaaS Console

You can now leverage a powerful and differentiating feature from Oracle Cloud Infrastructure to manage multiple block storage volumes and boot volumes more quickly and easily. In May 2018, we introduced our volume groups feature, which enables you to streamline the creation and management of groups of block volumes by using the CLI, SDK, and Terraform. The ability to manage volume groups is ideal for the protection and lifecycle management of enterprise applications, which typically require multiple volumes across multiple compute instances to function effectively. We are now extending our volume group feature to the web-based Oracle Cloud Infrastructure Console. With volume groups, you can manage petabytes of cloud block storage in a crash-consistent, coordinated manner in seconds and in just a few clicks. You can create groups of your volumes, back up volume groups, restore from volume group backups, and create deep disk-to-disk clones of volume groups, all within a few seconds. That includes the boot volumes for your compute instances and all your storage volumes for crash-consistent instance disaster recovery or environment duplication and expansion. A single volume group can have up to 128 TB of storage and 32 volumes. If you need to manage more storage space, you can create multiple volume groups. Because the solution is scalable, you can start with a small storage allocation. As your business demand grows, you can add more volumes and groups, and combine that with the offline resize feature, to easily scale and manage petabytes of storage by using the Oracle Cloud Infrastructure Block Volumes service. With this feature update, we are also improving the policies that control the volume group operations. You now have the ability to manage more granular permissions by separating volume group permissions from other volume operations. For details, see the Block Volumes service documentation on policies. 
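The same volume group operations remain scriptable through the CLI, as they were at launch. As a rough sketch, creating a group from existing volumes looks like the following; the exact shape of the source-details JSON is an assumption to verify with oci bv volume-group create --help.

```shell
# Create a volume group from existing volumes (CLI sketch; verify the exact
# --source-details shape with `oci bv volume-group create --help`).
create_volume_group() {
  compartment_id="$1"
  availability_domain="$2"
  display_name="$3"
  volume_ids_json="$4"  # JSON array of volume/boot-volume OCIDs
  oci bv volume-group create \
    --compartment-id "${compartment_id}" \
    --availability-domain "${availability_domain}" \
    --display-name "${display_name}" \
    --source-details "{\"type\": \"volumeIds\", \"volumeIds\": ${volume_ids_json}}"
}

# Example (hypothetical OCIDs):
# create_volume_group "ocid1.compartment.oc1..aaa" "AD-1" "app-group" \
#   '["ocid1.volume.oc1..bbb", "ocid1.bootvolume.oc1..ccc"]'
```

The console flow described in the following sections performs the same operation interactively.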
This feature is free of charge and is available in all regions. You pay only for the amount of provisioned storage using the Oracle Cloud Infrastructure storage pricing. The following sections show how to use the volume group functionality in the Oracle Cloud Infrastructure Console. Create a Volume Group From the navigation menu, select Block Storage, select Volume Groups, and then click Create Volume Group. Enter a name for the volume group, and select and add the volumes to place in it. You can add boot volumes for your compute instances and block volumes. Click Create Volume Group. The volume group is created in seconds.   Add or Remove Volumes as Needed From the volume group's details page, you can add volumes to the group by clicking Add Block Volume or Add Boot Volume. You can remove a volume by clicking the Actions menu (three dots) for the volume and selecting Remove. Perform a Crash-Consistent Backup of a Volume Group From the Actions menu for the volume group, select Create Volume Group Backup. Enter a name for the volume group backup, and then click Create. Restore from a Volume Group Backup to a Crash-Consistent State From the Actions menu for the volume group backup, select Create Volume Group. Create a Clone of a Volume Group From the Actions menu for the volume group, select Create Volume Group Clone. After entering a name for the clone, click Create. The cloned volume group and the cloned volumes in it become available within seconds. Conclusion The Oracle Cloud Infrastructure Console provides a straightforward and simple way to create and manage volume groups. You can perform a crash-consistent, coordinated backup of a group of volumes, and restore from those backups in a few clicks. You can also create crash-consistent, coordinated, deep disk-to-disk volume group clones. The clones become available in seconds, and they are completely isolated from the source volume group and the volumes in it. 
You can use volume group clones to duplicate your entire instance OS boot disk and block storage for production troubleshooting, dev/test, and UAT/QA purposes. We want you to experience the block storage features and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It’s easy to take advantage of these capabilities with a US$300 free trial. For more information about block storage, see the Oracle Cloud Infrastructure Getting Started guide, Block Volumes service overview, and FAQ. Watch for announcements in this space about additional features and capabilities. We value your feedback as we continue to make our cloud service the best for enterprises. Send me your thoughts about how we can continue to improve our services or if you want more details about any topic.


Developer Tools

What's Next for Cloud Native Development Technologies?

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. The next wave of cloud native development will be about inclusivity, according to Bob Quillin, Vice President of Developer Relations at Oracle Cloud Infrastructure. That means enabling more enterprises to access and use technologies like containers and serverless, and being more inclusive of traditional, on-premises applications. I recently chatted with Quillin about the future of cloud native development, and he also gave me some updates on Oracle Container Engine for Kubernetes (OKE) and the current state of cloud native security. Listen to our conversation here, and read a condensed version below: Oracle Cloud Infrastructure unveiled its Container Engine for Kubernetes a while back. Is OKE mature and can enterprises start using it? Where does it stand? Quillin: Yes. It's been out for over a year now. It was one of the earliest platforms to be certified as Kubernetes conformant. As a managed service, it provides a fully managed control plane; it will manage the masters for you and provide total lifecycle management. It's currently being used by hundreds of enterprises. It's all based on standard Docker and standard Kubernetes, and it integrates very easily with Oracle Cloud networking, storage, and load balancing. Plus, you can leverage all of the power of our enterprise-grade cloud infrastructure. As a developer advocate, I'm super proud of the fact that we've got a certified Kubernetes platform managed service running on top of an enterprise-grade cloud. This provides a unique combination of a great cloud with high levels of security and superior performance that can run simple cloud native applications, open source applications, and the most advanced applications out there.
We're finding that it has a lot of interest from the startup community, who are often up and running within an hour or so. I also work with larger organizations who are running WebLogic and Java and database applications, and those development teams are seeing amazing successes, too. So, it's an inclusive technology that can provide value no matter where you are on the spectrum. What is happening in the area of cloud native applications and security? Quillin: I think security was a much more contentious topic in the container world probably three or four years ago. There's been tremendous progress and focus on security since then. Obviously, Oracle has lots of security experts, and enterprise security is certainly one of the core tenets that we push forward in terms of applications, and this will continue to be a major area of focus. I think the registry is an area where there is a lot of good work happening with image scanning, tagging containers, and having registered containers and images. We're working with partners like Twistlock, for example, to integrate their image scanning and tagging into our registry, too. There are several levels of security in our cloud. On top of that you have Kubernetes security, which is both role and application based. You have multiple levels of security that create many different dimensions and options to control or constrain access. The tools are there to create a very secure and stable environment. We still have more work to do because security is always a moving target, but Oracle is a great security partner to have, and continuing to leverage that technology in the container and application world is a big goal for Oracle Cloud. Looking ahead, what do you think are going to be the hottest new trends? Quillin: One of them is definitely serverless technologies. I think they are the next big opportunity for standardization and open Cloud Native Computing Foundation (CNCF)-sanctioned activities.
The focus will be on creating more ways to build out serverless applications based on standard technologies, and the CNCF is starting to address that. You'll likely see a lot of progress on that going forward. It's also one of the bigger challenges people are going to face, and the industry needs to come together on that. What else do you see happening in the future? Quillin: We're basically exiting this first wave of cloud native, and I think it's pretty clear that there's a set of patterns and methodologies that have emerged to build new cloud native applications. For new greenfield applications, the tooling and technology is at just the right moment. The next big challenge to enable a second wave of cloud native development is reaching out to more underserved communities, being more inclusive in terms of on-premises technologies and traditional technologies, and enabling more enterprise access to these technologies so we can get more organizations to adopt cloud native. That will happen through better training, more managed services so they don't have to do it themselves, and then more blueprints and solutions that provide access to best practices. We could all benefit from finding ways to simplify. Oracle, in particular, is focusing on ways to take away all that complexity from the user. Open standards, more inclusive enterprise strategies, and simplification of complexities are the three big things I'm looking forward to. If our readers want to learn more about cloud native technologies at Oracle, where should they go? Quillin: A good starting point is cloudnative.oracle.com. It's a microsite that’s a great focal point for learning what's going on in cloud native. It also branches into related content sources throughout Oracle. I'd also recommend going to some local meetups. We spend a lot of time here in Austin, for example, with local meetups. 
But our evangelist teams are all over the world, so look for an Oracle Cloud Native Evangelist in your neck of the woods. And if you have an interesting meetup or conference, definitely reach out to us. We're happy to participate.


Security

Addressing the Top Technological Risks in 2019

Nearly 30 years of predominantly digital-based, technological advances have tremendously affected our world, primarily through the birth, adoption, and growth of the internet. But a set of risks accompanies this phenomenon, according to the latest World Economic Forum Global Risks Report.  The report is derived from data collected through the World Economic Forum's Global Risks Perception Survey, and it incorporates the knowledge and viewpoints of the forum’s vast network of business, government, civil society, and thought leaders. According to the report, the world is facing an increasing number of multifaceted and interrelated challenges, including slowing global growth, persistent economic inequality, climate change, geopolitical tensions, and the accelerating pace of the Fourth Industrial Revolution.  The report also raises significant concerns about the following technological risks or instabilities: The adverse consequences of technological advances The breakdown of critical information infrastructure and networks Large-scale cyberattacks Massive incidents of data fraud and theft Interestingly, these technological risks rise in importance year-over-year compared to other, nontechnical risks. For example, massive data fraud and theft was ranked the fourth-highest global risk over a 10-year span, and cyberattacks were ranked number five. A similar theme can also be observed in many of the past reports. Because the internet-related and technology-related risks facing the world are only rising, the need to secure the internet overall has become more critical than at any other time. Core to Edge Security Oracle Cloud Infrastructure was built from the ground up to provide security and availability not only in the cloud core but also at the cloud edge, where users and their devices connect to the cloud, often over the internet.  
Oracle Cloud Infrastructure's Edge Services range from Internet Intelligence, which provides global internet performance and availability data, to DNS and web application security services. And Oracle Dyn Web Application Security integrates a web application firewall, bot management, and API security in the cloud to keep web applications safe from cyber-induced outages, data breaches, and other threats.   In the Oracle Cloud Infrastructure core, organizations will find compute, storage, connectivity, applications, and database instances that are exceedingly protected by the highest levels of data encryption, identity and access management, key management, API management, and configuration and compliance management.  As more organizations move to the cloud, securing both the cloud core and the cloud edge is critical to address the technological risks highlighted in the World Economic Forum Global Risks Report.


Push Time-Sensitive Notifications to Many Distributed Applications

As enterprises transform and build modern cloud native applications, they need a foundational, easy-to-use, cloud-scale, publish-subscribe messaging service that helps application development teams in the following ways:

- Simplifies development of event-driven applications
- Enables 24x7 DevOps for application development in the cloud
- Provides a mechanism for cloud native applications to easily deliver messages to large numbers of subscribers

We are proud to announce the general availability of the Oracle Cloud Infrastructure Notifications service in all Oracle Cloud Infrastructure commercial regions. Notifications is a fully managed publish-subscribe service that pushes messages, such as monitoring alarms, to subscription endpoints at scale. The service delivers secure, low-latency, durable messages for applications hosted anywhere. As part of our initial launch, Notifications supports email and PagerDuty delivery.

Notifications reduces code complexity and resource consumption by pushing messages to endpoints, so your applications no longer need to poll for messages periodically. And because the service integrates with subscription endpoints such as email and PagerDuty, there's no need for direct point-to-point integration. As part of Oracle Cloud Infrastructure, Notifications is integrated with Identity and Access Management (IAM), which enables fine-grained security-rule enforcement via access control policies. Notifications pricing is intuitive, simple, and elastic; customers pay per message delivery. Notifications is accessible via the Oracle Cloud Infrastructure Console, SDKs, CLI, and REST API, and also provides Terraform integration.

Getting Started

Getting started with the Notifications service is straightforward in the Oracle Cloud Infrastructure Console or by using the REST API. The following steps show how to create a topic, add subscribers to the topic, and start producing messages with the topic.
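If you prefer the REST API, the request bodies for the CreateTopic, CreateSubscription, and PublishMessages operations look roughly like the following. This is a hedged sketch assembled with the standard library only: the OCID values are placeholders, and real calls must be request-signed (for example, via an OCI SDK).

```python
import json

# Sketch of the JSON request bodies for the Notifications REST operations
# described in this post. OCIDs below are placeholders, not real identifiers.

def create_topic_body(name, compartment_id):
    # CreateTopic: a topic lives in a compartment and has a unique name.
    return json.dumps({"name": name, "compartmentId": compartment_id})

def create_subscription_body(topic_id, compartment_id, protocol, endpoint):
    # CreateSubscription: protocol is "EMAIL" or "HTTPS" (PagerDuty) at launch;
    # endpoint is the email address or the PagerDuty integration URL.
    return json.dumps({
        "topicId": topic_id,
        "compartmentId": compartment_id,
        "protocol": protocol,
        "endpoint": endpoint,
    })

def publish_message_body(title, body):
    # PublishMessages: the message pushed to every confirmed subscriber.
    return json.dumps({"title": title, "body": body})
```

Each helper returns the serialized body you would send to the corresponding endpoint; the subscription must still be confirmed before deliveries begin.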
1. In the Console main menu, navigate to the Notifications section (Application Integration > Notifications) in the appropriate compartment. Click Create Topic. Alternatively, you can use the REST API CreateTopic operation.
2. Specify the topic name, and then click Create.
3. After the topic is created, add subscribers by clicking Create Subscription. With the REST API, use the CreateSubscription operation.
4. In the Console, choose the protocol, provide the email address for the email protocol or the PagerDuty integration URL for the HTTPS (PagerDuty) protocol, and then click Create.
5. Confirm the subscription to activate the subscriber and start delivering messages to it.
6. In the Console, click Publish Message. With the REST API, use the PublishMessages operation and pass a payload to produce data to a topic.
7. Specify a title and message, and then click Publish.
8. Check that the message was delivered to the respective endpoint.

Next Steps

We want you to experience this new service and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It's easy to try with our US$300 free credit. For more information, see the Oracle Cloud Infrastructure Getting Started guide and the Notifications documentation. Be on the lookout for announcements about additional features and capabilities, including integration with Oracle Functions, generic HTTPS subscription endpoints, message filtering, and custom retries for message delivery.

We value your feedback as we continue to enhance our offering and make our service the best in the industry. Let us know how we can continue to improve or if you want more information about any topic. We are excited for what's ahead and look forward to building the best publish-subscribe messaging platform.


Developer Tools

Track Critical Health and Performance Metrics with Oracle Cloud Infrastructure Monitoring

In today’s always-connected world, there’s an expectation that the systems that power our apps, businesses, and entertainment won't falter in their reliability, performance, and customer experience. Using the same technology that ensures our infrastructure’s availability, we have built a monitoring and alarming service designed to give our customers the insight that they need to exceed these expectations. We're proud to announce the launch of the Oracle Cloud Infrastructure Monitoring service in all Oracle Cloud Infrastructure commercial regions.

The Monitoring service gives you the insight that you need to understand the health of your resources, optimize the performance of your applications, and respond to anomalies in real time. Out-of-the-box metrics and dashboards are provided for Oracle Cloud Infrastructure resources such as compute instances, block volumes, virtual NICs, load balancers, object storage buckets, and more. You can also have your applications emit their own custom metrics, enabling you to visualize, monitor, and alert on all critical time-series data in one place. Combined with a powerful query language, the Monitoring service includes a robust metrics engine that enables flexible aggregation and complex queries across multiple metric streams and dimensions in real time.

Alarms are a key component of the Monitoring service. The service quickly detects fluctuations in performance or health and notifies you about them. The alarm definition language lets you create a variety of alarm types that can use multiple statistics, trigger operators, time intervals, and wildcards for applying them to an entire fleet of resources. Alarms are integrated with the newly released Oracle Cloud Infrastructure Notifications service, which delivers messages to destinations such as PagerDuty and email securely and reliably.

Getting started is straightforward.
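As a rough illustration of the alarm model described above (a statistic, a trigger operator, and a time interval applied to a metric stream), an alarm evaluation might conceptually look like this. The function, window, and thresholds are invented for illustration and are not the service's actual evaluation engine.

```python
# Conceptual sketch of an alarm rule: fire when the mean of the trailing
# interval of samples crosses a threshold. Newest sample is last in the list.
def alarm_fires(datapoints, threshold, interval=5, op=">"):
    window = datapoints[-interval:]
    if len(window) < interval:
        return False          # not enough data to evaluate the interval yet
    mean = sum(window) / len(window)
    return mean > threshold if op == ">" else mean < threshold
```

A real alarm would also carry a severity, a message body, and a notification topic, as shown later in the Load Balancing walkthrough.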
Use the Metrics Explorer available in the Oracle Cloud Infrastructure Console to search and visualize multiple metrics across various dimensions and time. Alternatively, you can use the Oracle Cloud Infrastructure Data Source for Grafana, available from GitHub and the Grafana Marketplace, to natively access metrics directly from Grafana. Additional features and capabilities are coming in the near future, including an expanded list of Oracle Cloud Infrastructure resource metrics, import and export functionality, and customizable retention times. We are excited to share this initial launch as a first step towards providing a best-in-class cloud monitoring solution. We want you to experience these new features and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It’s easy to try them with our US$300 free credits. Beyond the free tier, Oracle Cloud Infrastructure Monitoring provides a simple, competitive pricing model based on two dimensions: the number of data points sent as custom metrics and the number of data points used for analysis. For more information, see the Oracle Cloud Infrastructure Resource Monitoring service essentials and the Oracle Cloud Infrastructure Monitoring documentation.


Performance

Announcing Parallel File Tools for File Storage

We are excited to share a new suite of Parallel File Tools that take advantage of the full performance of the Oracle Cloud Infrastructure File Storage service. File Storage offers elastic performance, where throughput grows with the amount of data stored in each file system, and it is best suited for parallel workloads that can use that scalable throughput. The Parallel File Tools provide parallel versions of tar, rm, and cp that run requests against large file systems in parallel, enabling you to make the best use of the performance characteristics of the File Storage service. The toolkit is currently distributed as an RPM for Oracle Linux, Red Hat Enterprise Linux, and CentOS.

The current Parallel File Toolkit includes:

- partar, a parallelized subset of tar functionality to create and extract tarballs in parallel
- parrm, a parallelized recursive remove of a directory
- parcp, a parallelized recursive copy of a directory

Installing and Running

Use the following commands for your OS.

Oracle Linux:

sudo yum install -y fss-parallel-tools

CentOS and Red Hat 6.x:

sudo wget http://yum.oracle.com/public-yum-ol6.repo -O /etc/yum.repos.d/public-yum-ol6.repo
sudo wget http://yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
sudo yum --enablerepo=ol6_developer install fss-parallel-tools

CentOS and Red Hat 7.x:

sudo wget http://yum.oracle.com/public-yum-ol7.repo -O /etc/yum.repos.d/public-yum-ol7.repo
sudo wget http://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
sudo yum --enablerepo=ol7_developer install fss-parallel-tools

Using the Manual

To display the man pages, use the following commands:

man partar
man parrm
man parcp

Reporting Bugs or Enhancement Requests

Install the debuginfo RPM, rerun the command to collect the core, and send that to us at parallel-tools-support_ww@oracle.com along with the version of the package and all the error messages that the command emits.
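The idea behind these tools is simple: fan independent file operations out across many workers so that a high-throughput file system stays busy instead of waiting on one request at a time. The following is a hedged, standard-library-only sketch of that pattern applied to a recursive copy; it is an illustration of the technique, not the parcp implementation.

```python
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

def parallel_copy(src_dir, dst_dir, workers=8):
    """Copy every regular file under src_dir to dst_dir, issuing the
    per-file copies concurrently across a thread pool."""
    tasks = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for root, _dirs, files in os.walk(src_dir):
            rel = os.path.relpath(root, src_dir)
            target = os.path.join(dst_dir, rel)
            os.makedirs(target, exist_ok=True)   # directories created up front
            for name in files:
                tasks.append(pool.submit(
                    shutil.copy2,
                    os.path.join(root, name),
                    os.path.join(target, name)))
    # re-raise the first error from any worker, if one occurred
    for t in tasks:
        t.result()
```

On a network file system, many in-flight requests like this are what let throughput scale with the stored capacity.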
To ensure that you have the latest version, run yum update before installing. To install the debuginfo package, run:

sudo debuginfo-install fss-parallel-tools

What's Next?

In the near future, we plan to provide support for Ubuntu.

More About File Storage

Oracle Cloud Infrastructure File Storage is fully managed, network-attached storage that offers high scalability, durability, and availability for your data in any Oracle Cloud Infrastructure availability domain. File Storage provides the reliability and consistency of traditional NFS filers and offers enterprise-grade file systems that can scale up in the cloud without any upfront provisioning. You can start with a file system that contains only a few kilobytes of data and grow it to 8 exabytes of data. Because File Storage is a fully managed service, you don't have to worry about capacity planning, software upgrades, security patches, hardware installation, or storage maintenance. You pay only for the capacity stored each month, and you stop paying as soon as you delete your data.

File Storage supports the NFS version 3 protocol with Network Lock Manager (NLM) as the locking mechanism to provide POSIX interfaces. You can also use the NFS client on Microsoft Windows to access File Storage. File Storage protects your data by maintaining multiple replicas locally, along with encryption and the ability to take frequent snapshots. Within an availability domain, File Storage uses synchronous replication and high-availability failover to keep your data safe and available.

Interested in trying File Storage? I can help. Just sign up for a free trial or drop me a line at mona.khabazan@oracle.com.

Mona Khabazan, Principal Product Manager, Oracle Cloud Infrastructure File Storage

References

https://cloud.oracle.com/storage/file-storage/features
https://cloud.oracle.com/storage/file-storage/faq
https://cloud.oracle.com/en_US/storage/tutorials


Product News

Introducing Oracle Cloud Infrastructure Load Balancing Metrics

In today’s digital world, customers expect applications to be always available and responsive, and to provide a superior end-user experience. As the first gateway between users and an application, load balancers are a critical piece of any scalable application infrastructure. An unhealthy or improperly configured load balancer can cause degraded user experiences such as higher latency, reachability errors, or, much worse, an application outage, which often leads to customer churn and lost business. It's imperative to have meaningful metrics on your load balancer that can provide insight into the health of your application and help you remediate issues faster.

Introducing Load Balancing Service Metrics

Oracle Cloud Infrastructure Load Balancing service metrics provide an array of critical metrics for proactively monitoring the health and load of your load balancer infrastructure. The metrics measure the number and type of connections, the HTTP responses, and the quantity of data managed by your load balancer. These metrics are statistics calculated from relevant data points as an ordered set of time-series data, and they are divided into load balancer, listener, and backend set component groups.

Accessing Load Balancing Service Metrics

The service metrics are an integral part of the Oracle Cloud Infrastructure Monitoring service and are automatically available for any load balancer that you create in your tenancy; you don't need to enable monitoring on the resource to get them. In the Oracle Cloud Infrastructure Console, you can view the metrics details for load balancers in your compartment by selecting Monitoring > Service Metrics from the navigation menu, selecting your compartment, and then selecting oci_lbaas from the Metric Namespace menu.

Figure 1: Load Balancing Service Metrics

You can further filter or group the service metrics by dimensions such as availability domain, backend set name, listener name, region, or OCID.
You do this by adding a dimension filter under the Dimensions option on the Service Metrics page.

Figure 2: Filtering Based on Metric Dimensions

The metrics are automatically refreshed every minute. You can change the time interval on the charts to one-minute, five-minute, or one-hour periods, and you can change the aggregate statistic (Rate, Mean, Sum, and so on) by choosing an option from the Statistic menu. You can also view the metrics for a load balancer by navigating to its details page and opening the Metrics tab under Resources. Similarly, you can view the metrics for a specific backend set by navigating to the Metrics tab on the backend set's details page. The Load Balancing service metrics are also available through the Monitoring API endpoint, which you can use to manage metric queries, alarms, and the performance of your load balancing resources.

Using Load Balancing Metrics to Deliver a Great Digital Experience

One of the key objectives of our Monitoring service is to deliver metrics that provide actionable insights, enabling you to deliver a great digital experience for your end users. For example, you can use the load balancing metrics to understand baseline performance, such as average and peak-time traffic trends over time. The metrics can also serve as a demand signal for business decisions such as future capacity planning.

Let's walk through an example scenario. You are the head of operations for a travel website hosted on Oracle Cloud Infrastructure. Your business has been running a social media campaign for a summer travel sale, and you have been tasked with ensuring that users have a great digital experience and that no business is lost because of application infrastructure issues. You wonder, can load balancing metrics help in this scenario? Absolutely!
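Conceptually, each metric datapoint carries a dimensions map (availability domain, backend set name, listener name, and so on), and a dimension filter simply selects the matching streams before aggregating. The following small model illustrates that idea; the dimension names mirror the ones listed above, but the sample values are invented.

```python
# Illustrative model of dimension filtering -- not the Monitoring service itself.
def filter_by_dimensions(datapoints, **dims):
    """Keep datapoints whose dimensions include every requested key/value."""
    return [dp for dp in datapoints
            if all(dp["dimensions"].get(k) == v for k, v in dims.items())]

datapoints = [
    {"value": 120, "dimensions": {"backendSetName": "web-bes", "availabilityDomain": "AD-1"}},
    {"value": 80,  "dimensions": {"backendSetName": "web-bes", "availabilityDomain": "AD-2"}},
    {"value": 15,  "dimensions": {"backendSetName": "api-bes", "availabilityDomain": "AD-1"}},
]

# Sum inbound requests across only the web backend set, all ADs: 120 + 80.
web_only = filter_by_dimensions(datapoints, backendSetName="web-bes")
web_total = sum(dp["value"] for dp in web_only)
```

Grouping works the same way in reverse: partition the datapoints by a dimension's value, then apply the statistic to each partition.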
You can leverage the Inbound Requests, Active Connections, and Bytes Received load balancing metrics, in addition to your compute metrics, to gather insight into incoming traffic patterns and predict load balancer and compute capacity needs. The service metrics enable you to make data-driven decisions and dynamically adjust to the changing needs of your application infrastructure.

Troubleshooting Scenario: HTTP 502 Bad Gateway

Apart from proactive monitoring and management, load balancing metrics also help you identify, isolate, and troubleshoot issues with your load balancer infrastructure. In this example scenario, you are deploying a new web application, ociexample.com, with an Oracle Cloud Infrastructure public load balancer as the front end in your development environment. However, when you try to access the application, the browser shows an HTTP 502 response. Let's explore how load balancing metrics can help you troubleshoot this issue.

When you browse to the load-balanced IP address, you see a 502 Bad Gateway error. You can confirm this behavior by running a curl test:

curl -v http://ociexample.com
> GET / HTTP/1.1
> Host: 129.146.93.99
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Content-Type: text/html
< Content-Length: 161
< Connection: keep-alive

In the Oracle Cloud Infrastructure Console, navigate to Monitoring > Service Metrics. Select your compartment and select oci_lbaas as the metric namespace. You will notice that an HTTP 502 response appears for each curl or browser test. Navigate to the Load Balancer Details page, and note that the load balancer backend set health is critical.
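When scripting checks during an investigation like this, it helps to distinguish an HTTP error status returned by the load balancer from a connection that fails outright at a backend. The following hedged, standard-library sketch shows one way to probe an endpoint and classify the result; the URLs used with it are illustrative.

```python
import socket
import urllib.error
import urllib.request

def probe(url, timeout=3):
    """Return a short diagnosis string for a single HTTP endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"ok ({resp.status})"
    except urllib.error.HTTPError as e:
        # The TCP connection worked but the server answered with an error
        # status; a 502 here means the load balancer found no healthy backend.
        return f"http-error ({e.code})"
    except (urllib.error.URLError, ConnectionRefusedError, socket.timeout):
        # No listener reachable at all -- often a firewall or stopped service.
        return "connection-failed"
```

Running such a probe against both the load balancer's address and each backend's address quickly narrows down where the failure sits.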
If you run the same curl test against the IP address of the backend instance, you get the following error:

connect to 129.146.161.17 port 80 failed: Connection refused
Failed to connect to 129.146.161.17 port 80: Connection refused
Closing connection 0

However, you can log in to the backend instance via SSH, and running curl -v http://127.0.0.1/ there returns an HTTP 200 OK response:

HTTP/1.1 200 OK
Server: Apache/2.4.6 ()
Accept-Ranges: bytes
Content-Length: 5

In this scenario, the host firewall is preventing traffic from outside the instance from reaching port 80. To resolve the issue, open port 80 on the firewall with firewall-offline-cmd --add-port=80/tcp, and then restart the firewall with systemctl restart firewalld.

Setting Up Alarms and Notifications

The Monitoring service provides alarms and notifications functionality that is tightly integrated with the metrics. We recommend setting up alarms and notifications so that you're proactively notified of deviations from your baseline metrics. Let's walk through the steps to create an alarm and a notification for the HTTP 502 responses in our previous example.

1. On the metric chart for which you want to create an alarm, click Options and then select Create an Alarm on this Query.
2. Select a name, severity, and message body for the alarm message.
3. Keep the Compartment, Metric Namespace, and Metric Name values the same, but adjust the interval and statistic as needed. You can optionally set a metric dimension, such as region, to filter the alarm.
4. Create a trigger rule condition so that the alarm fires when the condition is met.
5. Create a notification, which is the most common approach to managing alarms. You can add a list of recipients to a notification, and those recipients are emailed in the event of an alarm.
The Monitoring service also supports a native integration with PagerDuty, which lets companies configure services, on-call rotations, acknowledgment requirements, and escalation rules for inbound notifications. To finish setting up the email notification:

1. Select Notification Service.
2. Click Create a topic.
3. Enter a name and a description for the topic.
4. Select Email as the subscription protocol.
5. Enter the email address or addresses to send notifications to.
6. Click Create topic and subscription.
7. Click Save alarm.

Next Steps

We recommend that you use the Monitoring service and the Load Balancing service metrics to monitor any critical application that you deliver, whether it's hosted solely in Oracle Cloud Infrastructure or across a hybrid environment. Load Balancing metrics will be extended to include more metrics across the Load Balancing infrastructure; if there is a specific metric or integration that you would like us to support, let us know. For more information, see Monitoring Overview and Load Balancing Metrics in the Oracle Cloud Infrastructure documentation. If you haven't tried Oracle Cloud Infrastructure yet, you can try it for free.


Developer Tools

Getting Started with the Resource Manager on Oracle Cloud Infrastructure

Continuing our commitment to openness and open source software, today we're announcing the general availability of Resource Manager, a fully managed service that makes it significantly easier to use HashiCorp Terraform on Oracle Cloud Infrastructure. This is especially true for large or distributed enterprise development teams that share common infrastructure. Teams can take advantage of Resource Manager's deep integration with the Oracle Cloud Infrastructure platform by, for example, creating policies to constrain the operations that different users and groups can perform. We previewed Resource Manager at KubeCon + CloudNativeCon North America in December 2018.

When migrating their Terraform workflows to Resource Manager, developers will be happy to know that all of the Oracle Cloud Infrastructure Terraform Provider functionality is completely supported. Resource Manager is the latest example of Oracle Cloud Infrastructure's commitment to openness: it builds on powerful open source technologies like Terraform and honors Oracle's commitment to preserving our customers' choice to migrate their workloads with minimal impact to their business, code, and runtime.

"We are excited to partner with Oracle on Resource Manager and have it power the core infrastructure provisioning on Oracle Cloud Infrastructure.
With Resource Manager, HashiCorp Terraform becomes accessible to even more users who care about open source standards and are looking to deploy their applications consistently on Oracle Cloud Infrastructure.” –Burzin Patel, VP, Worldwide Alliances, HashiCorp

This post provides a high-level overview and steps to get you started. To get the most out of the walkthrough, it's helpful to understand the basics of Terraform and have some experience using it to manage cloud infrastructure. If you need a refresher, take a moment to review the basics. If you don't have an account on Oracle Cloud Infrastructure and want to work through these examples, you can create a free account and get started with up to 3,500 free hours.

Overview

At the highest level, Resource Manager is an orchestration service for managing Oracle Cloud Infrastructure resources by using Terraform. Using Resource Manager provides many benefits:

- Improves team collaboration: Securely manages Terraform state files and provides state locking when needed. Terraform uses state files to map resources and track metadata when creating, modifying, or destroying infrastructure, which enables teams to understand existing infrastructure and see any changes when running a Terraform plan.
- Enables automation: Unlocks infrastructure automation capabilities through fully supported SDKs, APIs, and CLI tooling.
- Integrates seamlessly with the platform: Enables developers to leverage the entire Oracle Cloud Infrastructure API catalog (such as authentication). For example, you can create policies that govern access to Resource Manager stacks and jobs (more about those later).
- Provides a familiar console experience: Lets you conveniently manage Oracle Cloud Infrastructure resources alongside other services that also leverage identity federation.
The Resource Manager workflow is straightforward: you upload a bundled set of Terraform configuration files, associate the bundle with a user-defined stack, and then run a job's Terraform action against the stack. A stack defines a distinct set of cloud resources within a compartment, and a job is the set of Terraform commands that can be executed against a stack. The supported Terraform commands are plan, apply, and destroy. After you perform a Terraform action, the refreshed Terraform state file is securely managed by Resource Manager and is subsequently referenced for each Terraform action that Resource Manager coordinates. By managing the Terraform state file for you, Resource Manager ensures a single source of truth for the current state of the infrastructure.

Target Architecture

The source code for this walkthrough is available on GitHub. It deploys a public regional load balancer on Oracle Cloud Infrastructure as described in the Getting Started with Load Balancing tutorial. The resources defined in the Terraform configuration files are as follows:

- A compartment, and a virtual cloud network in a single region that contains an internet gateway
- A regional load balancer in a public subnet, a listener, and a failover load balancer also within a public subnet
- Two backend sets, each with a single compute instance in a separate private subnet within a different availability domain

The following diagram shows the target architecture:

In this post, we manage Oracle Cloud Infrastructure resources with Resource Manager in the Oracle Cloud Infrastructure Console.

Walkthrough

This walkthrough assumes that you've created a compartment named orm-demo-cmpt and a group named orm-demo-admin-grp, and added a user to that group. If you need help creating these, see the documentation on managing compartments, users, and groups.

Step 1: Create an IAM Policy

In the Console navigation menu, select Identity and then click Policies. On the Policies page, click Create Policy.
In the Create Policy dialog box, name the policy orm-demo-admin-policy. This policy gives the orm-demo-admin-grp group permission to manage all Resource Manager stacks and jobs in the orm-demo-cmpt compartment. Click Create. The policy is listed on the Policies page.

Note: In a production system, it's both more secure (principle of least privilege) and more practical to create additional groups with more granular permissions. For example, it's likely that we'd need a development team group that can only use predefined stacks and run jobs against them (use-orm-stack and use-orm-job, respectively).

Step 2: Create the Stack

As mentioned earlier, a stack represents the definitions for a collection of Oracle Cloud Infrastructure resources within a specific compartment. In this step, we configure a new stack in the orm-demo-cmpt compartment in the us-phoenix-1 region and name it HA Load Balanced Simple Web App. As the stack's name suggests, its configuration files define the load balancing, networking, and compute resources to deploy the target architecture plus an HTTP server.

1. In the Console navigation menu, select Resource Manager and then click Stacks.
2. Click Create Stack.
3. Before you upload the Terraform files to Resource Manager, bundle them into a zip archive.
4. In the Create Stack dialog box, verify the compartment (and change it if necessary), provide a meaningful name and description for the stack, upload the zip archive of Terraform configuration files, and add the keys and values for any Terraform variables referenced in the uploaded bundle.

Important: Don't include confidential information (for example, SSH private key values and authentication credentials) in variables, tags, or descriptions. Depending on the context, consider using a secret vault like the Oracle Cloud Infrastructure Key Management service or HashiCorp Vault, as appropriate.
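Producing the zip archive mentioned above can itself be scripted. Here is a small hedged helper that bundles a directory's Terraform configuration files, preserving their relative paths; the directory and archive paths passed to it are placeholders for your own.

```python
import os
import zipfile

def bundle_tf_config(src_dir, archive_path):
    """Zip the .tf files under src_dir for upload as a stack's configuration.
    Files directly in src_dir end up at the archive root, where Resource
    Manager looks for them by default."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                if name.endswith(".tf"):
                    full = os.path.join(root, name)
                    zf.write(full, os.path.relpath(full, src_dir))
    return archive_path
```

If your .tf files live in a subdirectory of the archive instead, set that subdirectory as the stack's Working Directory, as described next.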
A description can help teams quickly identify a stack's use and purpose, which is especially helpful as the infrastructure grows in complexity. Also, when Resource Manager uploads the zip archive, it looks in the root directory for all *.tf files. If the Terraform files are not located in the root directory, add the relative path to the subdirectory that contains them in the Working Directory field. Lastly, tags address many use cases, from associating stacks with environments in a CI/CD build pipeline to enterprise cost-center allocation. For more information about tagging, see Managing Tags and Tag Namespaces.

Before moving on to running a job, quickly review the new stack and then click the hyperlinked stack name.

Step 3: Execute Jobs: Plan, Apply, and Destroy

As previously mentioned, jobs perform actions against the Terraform configuration files associated with a stack. The Terraform actions that a job can perform are plan, apply, and destroy. Because Terraform command execution is not atomic, it's crucial to prevent race conditions and state corruption caused by parallel execution. To prevent this, Resource Manager ensures that only one job at a time can run against a stack and its single state file.

On the Stack details page, you can completely manage the stack's configuration (for example, update and delete the stack, add tags, and edit variables) and also download the zip archive that contains the latest Terraform configuration (which can be especially helpful when troubleshooting).

From the Terraform Actions menu, select Plan. In the Plan dialog box, optionally provide a more readily identifiable name and specify a tag to associate with this action. Then, click Plan to run the job. In the Jobs section of the stack's details page, the job's state appears as Accepted, which indicates that the platform is spinning up the necessary resources to run the command.
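Two behaviors described in this step are worth modeling: a job progresses through Accepted, In Progress, and then Succeeded or Failed, and only one job at a time may hold a stack's state. The following toy in-process analogue illustrates both; it is a conceptual sketch, not the service implementation.

```python
import threading

class Stack:
    """Toy model: jobs run one at a time against a stack's state."""
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()   # stands in for Terraform state locking

    def run_job(self, fn):
        """Run a terraform action, returning the states the job passed through."""
        if not self._lock.acquire(blocking=False):
            return ["Rejected"]          # another job already holds the state
        states = ["Accepted", "In Progress"]
        try:
            fn()                         # the plan/apply/destroy work itself
            states.append("Succeeded")
        except Exception:
            states.append("Failed")
        finally:
            self._lock.release()
        return states
```

Serializing jobs this way is what lets Resource Manager treat the managed state file as the single source of truth.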
The state changes to In Progress and then to either Succeeded or Failed. Click the Actions menu (three dots) to display actions related to the job. You can also click the job name to view the job's details and its logs, which contain the Terraform output. You can scroll through the logs or download them.

Because the plan action succeeded, select Apply from the Terraform Actions menu. Make any necessary changes in the dialog box, and then click Apply. The job's state is updated as the job execution nears completion. After the apply action succeeds, verify that the resources have been provisioned by reading the Terraform output contained in the logs. You can also navigate to the Networking section of the Console to view the resources that now exist (VCN, load balancer, subnets, and so on), and to the Compute section to view the two instances.

Now that we've successfully applied Terraform to build some cloud resources, we can use Resource Manager to release them. From the Terraform Actions menu, select Destroy. In the Destroy dialog box, accept the defaults (or customize the values), and then click Destroy. The state change is reflected in the Console. To verify that the command completed successfully and that all resources are released, click the job name and view the Terraform output in the logs.

Step 4: Delete the Stack

On the Stack details page, click Delete Stack. In the confirmation message that appears, click Delete to confirm the action. The stack disappears from the list of stacks.

Wrapping Up

There is a lot of history and momentum behind Oracle’s commitment to open source, and Oracle Cloud Infrastructure is making rapid progress in building out a truly open public cloud platform. See for yourself by creating a free account with up to 3,500 free hours.


Events

AI, IoT, and Blockchain on Full Display at Oracle Cloud Day Leadership Summit

Cloud technology has been transformative across all types of businesses, industries, and products. One fascinating result of the rise of cloud computing is that it's enabling enterprises to adopt emerging technologies—like artificial intelligence (AI), blockchain, digital assistants, cloud native technologies, and the internet of things (IoT)—faster than ever before. All of these technologies and more will be on display at the upcoming Oracle Cloud Day Leadership Summit—along with plenty of expert advice and discussions about how you can use them to build the future of your organization. Businesses are no longer experimenting with emerging technologies in a sandbox. Instead, they're applying them in meaningful ways across their operations that result in new business, new value, and more sophisticated applications than ever before. In the coming years, cloud providers and the enterprises that they serve will move toward a next-generation cloud model in which they'll have access to these new technologies, better security, improved price-performance, and deep automation capabilities. Based on research conducted by Oracle and independent IT analyst firms, we predict that by 2025: 80 percent of all enterprise workloads will run in the cloud. AI and other emerging technologies will double productivity around the world. More than 50 percent of data will be managed autonomously. Taken together, these predictions demonstrate the need for organizations to adopt a comprehensive enterprise cloud strategy and cloud native IT environments. The Oracle Cloud Day Leadership Summit is a great place to start. Join us for an afternoon dedicated to looking ahead with leaders just like you. The next Oracle Cloud Day Leadership Summit is happening in Houston, TX, in early March. Here are the details: Date: March 6, 2019 Time: 2 p.m.–6 p.m. (CST) Location: Karbach Brewing Co.                  
2032 Karbach Street, Houston, TX 77092. Register for the Houston event. But if you can't make it to Houston, don't worry. Several more Oracle Cloud Day Leadership Summit events are happening soon. Denver, CO: March 12, 2019 Minneapolis, MN: March 20, 2019 Montreal, QC, Canada: March 28, 2019 Vancouver, BC, Canada: April 3, 2019 El Segundo, CA: April 9, 2019 More Reasons to Attend the Summit The Oracle Cloud Day Leadership Summit is a half-day event designed to help technology leaders stay up-to-date on the latest technologies, use cases, and trends. Join us for an afternoon dedicated to learning, exploring, and networking. This quick, information-rich event will have you generating new ideas in just one afternoon. Some highlights include: An informative, densely packed keynote featuring cloud visionaries and thought leaders Unique and inspiring customer success stories Networking with peers in a relaxed setting at cool local venues The Oracle Cloud Day Leadership Summit will bring you up-to-date on the latest emerging technologies and trends. Learn more about the Oracle Cloud Day Leadership Summit today!


Oracle Cloud Infrastructure

Making It Easier to Move Oracle-Based SAP Applications to the Cloud

For decades, Oracle has provided a robust, scalable, and reliable infrastructure for SAP applications and customers. For over 30 years, SAP and Oracle have worked closely to optimize Oracle technologies with SAP applications to give customers the best possible experience and performance. The most recent certification of SAP Business Applications on Oracle Cloud Infrastructure makes sense within the context of this long-standing partnership. As this blog post outlines, SAP NetWeaver® Application Server ABAP/Java is the latest SAP offering to be certified on Oracle Cloud Infrastructure, providing customers with better performance and security for their most demanding workloads, at a lower cost. Extreme Performance, Availability, and Security for SAP Business Suite Applications Oracle works with SAP to certify and support SAP NetWeaver® applications on Oracle Cloud Infrastructure, which makes it easier for organizations to move Oracle-based SAP applications to the cloud. Oracle Cloud enables customers to run the same Oracle Database and SAP applications, preserving their existing investments while reducing costs and improving agility. Unlike products from first-generation cloud providers, Oracle Cloud Infrastructure is uniquely architected to support enterprise workloads. It is designed to provide the performance, predictability, isolation, security, governance, and transparency required for your SAP enterprise applications. And it is the only cloud optimized for Oracle Database. Run your Oracle-based SAP applications in the cloud with the same control and capabilities as in your data center. There is no need to retrain your teams. Take advantage of performance and availability equal to or better than on-premises. Deploy your highest-performance applications (those that require millions of consistent IOPS and millisecond latency) on elastic resources with pay-as-you-go pricing. Benefit from simple, predictable, and flexible pricing with universal credits.
Manage your resources, access, and auditing across complex organizations. Compartmentalize shared cloud resources by using a simple policy language to provide self-service access with centralized governance and visibility. Run your Oracle-based SAP applications faster and at lower cost. Moving SAP Workloads: Use Cases There are a number of different editions and deployment options for SAP Business Suite applications. As guidance, we are focusing on the following use cases: Develop and test in the cloud Test new customizations or new versions Validate patches Perform upgrades and point releases Backup and disaster recovery in the cloud Independent data center for high availability and disaster recovery Duplicated environment in the cloud for applications and databases Extend the data center to the cloud Transient workloads (training, demos) Rapid implementation for acquired subsidiary, geographic expansion, or separate lines of business Production in the cloud Reduce reliance on or eliminate on-premises data centers Focus on strategic priorities and differentiation, not managing infrastructure Oracle Cloud Regions Today we have four Oracle Cloud Infrastructure regions, and we’ve announced new regions that will open in the coming months. This provides the global coverage that enterprises need. Additional details are available at Oracle Cloud Infrastructure Regions. SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure Oracle Cloud Infrastructure offers hourly and monthly metered bare metal and virtual machine compute instances with up to 51.2 TB of locally attached NVMe SSD storage or up to 1 PB (petabyte) of iSCSI-attached block storage. A bare metal instance with 51.2 TB of NVMe flash storage is capable of around 5.5 million 4K IOPS at sub-millisecond latency, making it an ideal platform for an SAP NetWeaver® workload using an Oracle Database. Get 60 IOPS per GB, up to a maximum of 25,000 IOPS per block volume, backed by Oracle's first-in-the-industry performance SLA.
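The quoted block volume figures (60 IOPS per GB, capped at 25,000 IOPS per volume) make storage sizing for a database tier easy to estimate. A quick sketch of the arithmetic:

```python
def block_volume_iops(size_gb):
    """Estimated IOPS for a block volume: 60 IOPS per GB,
    capped at 25,000 IOPS per volume (figures as quoted above)."""
    return min(60 * size_gb, 25_000)

# A 200 GB volume is rated for 12,000 IOPS; volumes of roughly
# 417 GB and above reach the 25,000 IOPS per-volume cap.
```

For aggregate throughput beyond the per-volume cap, the usual approach is to stripe the database across multiple volumes.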
Instances in Oracle Cloud Infrastructure are connected by a 25-Gbps non-blocking network with no oversubscription. While each compute instance running on bare metal has access to the full performance of the interface, virtual machine instances can rely on guaranteed network bandwidths and latencies; there are no “noisy neighbors” to share resources or network bandwidth with. Compute instances in the same region are always less than 1 ms away from each other, which means that your SAP application transactions will be processed in less time, and at a lower cost, than with any other IaaS provider. To support highly available SAP deployments, Oracle Cloud Infrastructure builds regions with at least three availability domains. Each availability domain is a fully independent data center with no fault domains shared across availability domains. An SAP NetWeaver® Application Server ABAP/Java landscape can span multiple availability domains. Planning Your SAP NetWeaver® Implementation For detailed information about deploying SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure, see the SAP NetWeaver Application Server ABAP/Java on Oracle Cloud Infrastructure white paper. This document also provides platform best practices and details about combining parts of Oracle Cloud Infrastructure, Oracle Linux, Oracle Database instances, and SAP application instances to run software products based on SAP NetWeaver® Application Server ABAP/Java in Oracle Cloud Infrastructure. Topologies of SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure There are various installation options for SAP NetWeaver® Application Server ABAP/Java. You can place one complete SAP application layer and the Oracle Database on a single compute instance (two-tier SAP deployment). You can install the SAP application layer instance and the database instance on two different compute instances (three-tier SAP deployment).
Based on the sizing of your SAP systems, you can deploy multiple SAP systems on one compute instance in a two-tier configuration or distribute them across multiple compute instances in two-tier or three-tier configurations. To scale a single SAP system, you can configure additional SAP dialog instances (DI) on additional compute instances. Recommended Instances for SAP NetWeaver® Application Server ABAP/Java Installation You can use the following Oracle Cloud Infrastructure Compute instance shapes to run the SAP application and database tiers. Bare Metal Compute BM.Standard1.36 BM.DenseIO1.36 BM.Standard2.52 BM.DenseIO2.52 Virtual Machine Compute VM.Standard2.1 VM.Standard2.2 VM.Standard2.4 VM.Standard2.8 VM.Standard2.16 VM.DenseIO2.8 VM.DenseIO2.16 For additional details, review the white paper referenced in the "Planning Your SAP NetWeaver® Implementation" section. Technical Components An SAP system consists of several application server instances and one database system. In addition to multiple dialog instances, the System Central Services (SCS) instance for AS Java and the ABAP System Central Services (ASCS) instance for AS ABAP provide the message server and enqueue server for both stacks. The following graphic gives an overview of the components of the SAP NetWeaver® Application Server: Conclusion This post provides some guidance about the main benefits of using Oracle Cloud Infrastructure for SAP NetWeaver® workloads, along with the topologies, main use cases, installation, and migration process. To learn more about migrating SAP to Oracle Cloud, register for our March 6 webinar. And for more information, review the following additional resources. Additional Resources SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure white paper Oracle Cloud Infrastructure technical documentation Oracle Cloud for SAP Overview SAP Solutions Portal SAP on Oracle Community High Performance X7 Compute Service Review and Analysis


Product News

Announcing the Launch of Traffic Management

Today we’re excited to announce the release of DNS traffic management on Oracle Cloud Infrastructure. Our industry-leading DNS is already the most reliable and resilient DNS network in the world. Now you can use customizable steering policies to provide an optimal end-user experience based on factors such as endpoint availability and end-user location. In addition to traditional global load-balancing capabilities, like distributing traffic across two Oracle Cloud Infrastructure regions, Oracle Cloud Infrastructure DNS now provides granular control of incoming queries and where to route these requests. Traffic management steering policies support a variety of use cases, ranging from a simple point-A-to-point-B failover to the most complex enterprise architectures. This support enables you to set predictable business expectations for service differentiation, geographic market targeting, and disaster recovery scenarios. Your rule sets can direct global traffic to any assets under your control, allowing you to optimize user experience and infrastructure efficiency in both hybrid and multicloud scenarios. Why Do I Need Traffic Management Steering Policies? Traffic management—a critical component of DNS—lets you configure routing policies for serving intelligent responses to DNS queries. Different answers can be served for a query according to the logic in the customer-defined traffic management policy, thus sending users to the most optimal location in your infrastructure. Oracle Cloud Infrastructure DNS is also optimized to make the best routing decisions based on current conditions on the internet. What Are the Most Common Use Cases? As more enterprises move to cloud and hybrid architectures, high availability and disaster recovery are more important than ever before. DNS failover provides the ability to move traffic away from an endpoint when it is unresponsive and send it to an alternate location.
The availability status is monitored by Oracle health checks, now also available in Oracle Cloud Infrastructure. Cloud-based DNS load balancing is ideal for scaling your infrastructure across multiple geographic regions. Common use cases include scaling out new infrastructure, cloud migration, and controlling the release of new features across your user base. It’s easy to set up “pools” of endpoints across all of your infrastructure (on-premises or cloud-based) and assign ratio-based weighting. Source-based steering lets you automate routing decisions based on where requests are coming from. These source-based policies can route traffic to the closest geographic assets, create a “split horizon” to control traffic based on the IP prefix of the originating query, and make decisions based on preferred providers. You can load balance traffic across different Oracle Cloud Infrastructure regions by sending users to the closest geographic location. You can also use health checks to divert users to an alternate region if your assets are unreachable in a particular region. Getting Started After you set up basic DNS, select Traffic Management Steering Policies under Edge Services, where you can create and customize your own steering policies. If you have any questions about getting started or how to handle your use case, please reach out to us.
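The failover and ratio-weighted behaviors described above are easiest to reason about as selection functions over a set of endpoints. The following sketch models that selection logic only; it is not the Traffic Management API, and the endpoint names are invented:

```python
import random

def failover_answer(primary, secondary, healthy):
    """Simple A-to-B failover: serve the primary endpoint unless
    health checks have marked it down, then serve the secondary."""
    return primary if healthy.get(primary, False) else secondary

def weighted_answer(pool, rng=None):
    """Ratio-based weighting across a pool of {endpoint: weight},
    as in the endpoint 'pools' described above."""
    rng = rng or random.Random()
    endpoints, weights = zip(*pool.items())
    return rng.choices(endpoints, weights=weights, k=1)[0]
```

A 9:1 weighting, for example, is a common pattern for gradually releasing new infrastructure to a small slice of users before shifting the ratio.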


Partners

Click to Launch Images by Using the Marketplace in Oracle Cloud Infrastructure

At Oracle, our mission is to enable your business transformation by migrating and modernizing your most demanding enterprise workloads onto the cloud without rearchitecting them. Our development efforts for Oracle Cloud Infrastructure are focused on this primary objective. We also know that to support your critical system-of-record workloads effectively with the least amount of rearchitecture, you require supporting applications from a broad range of vendors that surround, secure, and extend your core enterprise workloads. To that end, we want to dramatically simplify how you find, learn about, and launch both Oracle and third-party applications from our Oracle Cloud Marketplace. Announcing the General Availability of Marketplace in Oracle Cloud Infrastructure Today, I’d like to announce the general availability of our Marketplace in Oracle Cloud Infrastructure. We introduced this feature at OpenWorld San Francisco in October 2018, and we’re proud to make it available to our cloud customers effective immediately. Embedding the Marketplace in Oracle Cloud Infrastructure gives you ready access to security solutions from Fortinet and Check Point, DevOps solutions from Bitnami, and high-performance computing (HPC) workload management tools from Altair. If you’re an Oracle Applications customer, you can easily find and “click to launch” the automated lift and shift, provisioning, and lifecycle management tools for Oracle E-Business Suite and PeopleSoft. The best part is that you can launch any of these applications directly on your Oracle Cloud Infrastructure Compute instance, which dramatically reduces deployment times to minutes or hours, instead of days or weeks. Next, I’d like to highlight some of the innovative and unique solutions that we have in the Marketplace, and take you through some of their use cases. Enhance Security with Solutions That You Know Oracle Cloud Infrastructure is dedicated to offering you a secure cloud. 
We ensure the security of our cloud through our infrastructure architecture. We also enable you to choose the level of isolation and security controls that you need to run your most important workloads securely. We know that enterprise customers have implemented security and networking systems for their on-premises data centers and governance frameworks. Because we want to help you modernize your critical workloads by moving them to our cloud without massive rearchitecture work, we want to make it easier for you to use popular security solutions from third-party providers that you already know and trust. Oracle Cloud Infrastructure has supported integration with leading virtual firewall solutions from Fortinet and Check Point for some time. However, deployment used to require several steps, including importing an image or running setup inside a running instance, and then creating custom images. In fact, we wrote blog posts to help guide our customers through those steps. Now, turnkey images for Fortinet's FortiGate-VM NGFW, FortiADC (load balancing), FortiAnalyzer, and FortiManager solutions, and Check Point’s CloudGuard IaaS NGFW, are available on the Marketplace. You can quickly launch these images in your Oracle Cloud Infrastructure environment by using the Console. You can select the image of your choice, click “Launch Instance”, and the GUI will guide you through deployment. It's as easy as that. Lior Cohen, Senior Director of Cloud Security Products and Solutions at Fortinet, talks about the impact of enabling customers to easily launch turnkey solutions via the embedded Marketplace in Oracle Cloud Infrastructure. "More and more enterprises are shifting their critical production workloads to hyperscale IaaS cloud vendors like Oracle. Because of the demands of today’s digital marketplace, it’s crucial for those customers to be able to instantly launch solutions that can secure and efficiently deliver applications at the speed their users expect.
Rapid and simplified access to essential tools, like our award-winning FortiGate next-generation firewall security solution, equips organizations with the breadth of protection and confidence needed to migrate even their most critical enterprise applications. With just a few clicks, customers can launch and use our award-winning firewall and application delivery controller solutions, enabling them to secure critical applications in minutes." Launch E-Business Suite and PeopleSoft Cloud Manager, Available Only with Oracle Cloud Oracle Cloud Infrastructure is designed and optimized to run Oracle Applications such as E-Business Suite and PeopleSoft. These solutions help run the back offices of leading enterprises worldwide. Companies that depend on these applications, but demand the benefits of cloud, choose to modernize by running on Oracle Cloud Infrastructure, which offers better performance, lower price, and higher-availability options, including RAC and Exadata, that cannot be found with any other cloud. For example, E-Business Suite Cloud Manager and PeopleSoft Cloud Manager help automate provisioning and facilitate lifecycle management for these application environments in our cloud. And these solutions are unique to Oracle Cloud Infrastructure. They offer application modernization capabilities to facilitate migrations from on-premises environments to the cloud. They also provide intuitive UIs so that you can define your topologies and create templates for more streamlined deployments. Finally, these web-based solutions enable you to subscribe to the latest updates and stay current with the latest images, improving your security. Both solutions are now available through the Marketplace in Oracle Cloud Infrastructure, where you can click to launch the latest E-Business Suite Cloud Manager and PeopleSoft Cloud Manager images. For more information on E-Business Suite Cloud Manager, check out their blog. 
Click to Launch DevOps Tools Building on Oracle’s commitment to support open standards through our Oracle Cloud Native Framework, the new Marketplace in Oracle Cloud Infrastructure also makes it easier for DevOps practitioners to launch the following images: Continuous Integration (CI) software, such as Jenkins Certified by Bitnami Source code management with CI features, such as GitLab CE Certified by Bitnami Bug tracking software, such as Redmine Certified by Bitnami By simplifying how software development teams access solutions to build, test, and deploy the latest cloud native innovations, we’re supporting our customers’ ability to innovate and respond to changing business requirements. “For enterprises running open source, it is critical that they choose a trusted, secure, up-to-date version,” said Pete Catalanello, Bitnami Vice President of Business Development and Sales. “By adding Bitnami certified solutions such as Jenkins and Redmine to its Marketplace, Oracle is helping DevOps to add agility and best practices to their processes.” Making HPC Solutions Accessible to Engineers Oracle continues to deliver on our vision to make the power of supercomputing readily accessible to every engineer and scientist. Historically, enterprise HPC workloads have remained on-premises because they require specialized technology and demand high, consistent performance that wasn’t possible or was too cost prohibitive on cloud infrastructure. At Oracle, we're challenging you to bring your most demanding HPC applications to our cloud. Not only do we offer bare metal GPUs based on cutting-edge technology and enable clustered networking to deliver single-digit microsecond latency, we also partner with innovators like Altair to offer easy access to their market-leading HPC workload management solution, Altair PBS Works™. 
Sam Mahalingam, Chief Technical Officer for Enterprise Solutions at Altair talks about teaming up with Oracle on the new Marketplace in Oracle Cloud Infrastructure. “Organizations all over the world depend on Altair PBS Works to simplify the administration of their largest and most complex clusters and supercomputing environments,” said Mahalingam. “Many of our customers are smaller organizations that need HPC solutions that are easy to adopt and use. Partnering with Oracle and offering Altair PBS Professional™, the flagship product of the PBS Works suite, through the new Marketplace delivers on our joint mission of making HPC more accessible.” We’ll continue to add partner solutions to our Marketplace. If there are any third-party solutions that you just can’t live without, we welcome your comments to this post as we continue to build out our ecosystem of solutions for you.


Strategy

Why Oracle Cloud Infrastructure?

We recently launched a "Why Oracle Cloud Infrastructure" web page that describes how Oracle's approach to cloud is different, and why we think our cloud infrastructure can be uniquely valuable to customers. I’ve been at Oracle for two years, and I joined this team because I believe that Oracle is in a position to solve technology challenges in cloud that other vendors can't. This page is our way of telling that story to the world. It’s part of a conversation that we’ve been having with many customers, and I’m excited that we're sharing the story with a wider audience. Why Build a Cloud? For decades, Oracle has been partnering with enterprises to solve some of their most challenging business problems by using the world’s most scalable, reliable, and performant database and business application software, and differentiated hardware to run it on. Like every other technology company in the world, we know that cloud offers significant benefits. It promises to make IT more agile to drive innovation and get customers out of the tedious business of infrastructure management.  When we thought about the best way to serve our current and future customers, we didn't believe that any existing cloud infrastructure provider could meet their needs for consistently high performance, isolation from other tenants, compatibility with what they wanted to run, and the ability to support the most critical features of Oracle technologies. We felt that by building our own cloud infrastructure, we’d better serve customers in the long run by allowing our database and applications teams to work closely with our own cloud infrastructure teams to offer optimized solutions.  Our Imperatives To cloud-enable the class of applications that we know best—systems of record that Oracle customers rely on to build products, transact finances, and effectively run the most critical parts of their operations—we knew that we needed a cloud that was up to the task. 
Specifically, we had three imperatives in mind: Our customers could not take a step backwards when moving to cloud, especially regarding performance and reliability. The economics had to work by reducing costs compared to on-premises deployments and being predictable enough to facilitate budget planning. Our customers had to be able to get their workloads to cloud easily, without requiring massive refactoring or excessive risk. Performance and Reliability For the first imperative, we built a cloud for enterprise with top-tier components and access to bare metal servers to give customers full isolation and the ability to run what they want. From an infrastructure perspective, the most important part is the network. We avoided the oversubscription that’s common with other clouds, so that performance isn’t variable from moment to moment, day to day, or month to month. This reliability enables high performance that’s validated by third-party testing to have better results for key application workloads. What’s more, the level of performance that customers get doesn’t change depending on what neighbors are doing, a key requirement for demanding systems of record. Finally, we built a cloud that natively supports crucial elements of Oracle Database functionality such as Oracle Real Application Clusters (RAC), Exadata, and deep DBA controls, all of which customers rely on for production applications and are unique to the Oracle Cloud. Economics We offer low component pricing and simplified pricing models that make it easy to predict and manage the total cost of operations at scale. Our on-demand rates for compute, network, and storage components are materially lower than those of our competitors. But perhaps more importantly, our rates include everything that enterprises need to get the most out of our services, like storage performance, inter-region transfer and high levels of internet data egress, and unlimited data movement across dedicated private line connections.
Our discount structure is simple, with a straight percentage discount offered across all services rather than a reserved instance model that is commonly restricted to compute and is tremendously complicated to optimize. Moving to Cloud We wanted to make it easy for our enterprise customers to get to cloud. We make this possible through a combination of compatibility with what our customers run, automation and expertise in lifting and shifting workloads as they are to cloud, and integration and innovation in cloud native application frameworks. When our customers run workloads in Oracle Cloud, they can take the data that they create and manage in core systems of record, and get more value out of it with integration, analytics, and new ways to distribute and understand that data. So, Why Oracle Cloud Infrastructure? Other clouds were built primarily for web applications, but we took a different approach. Our cloud is built to withstand the rigorous demands of critical enterprise workloads, and to transform those business processes for improved agility, reliability, and long-term business results. We believe our long history of solving business problems in technology, combined with a focus on building a cloud for the most demanding category of workloads, will drive differentiation and results for enterprise environments around the world. We hope you'll give our new page a read, and feel free to let me know what you think.


Product News

Oracle Cloud Simplifies Identity Management with Enhanced Okta Support

We are enhancing our federation support by enabling users who are federated with Okta to directly access the Oracle Cloud Infrastructure SDK and CLI. Federation enables you to use identity management software (often an existing identity management solution that is integrated with your corporate directory) to manage users and groups while giving them access to the Oracle Cloud Infrastructure Console, CLI, and SDK. If you're an Okta user, that means you can use the same set of credentials in the Oracle Cloud Infrastructure web console as well as in long-running, unattended CLI or SDK scripts. Users who are members of Okta groups that you select are synchronized from Okta to Oracle Cloud Infrastructure. You control which Okta users have access to Oracle Cloud Infrastructure, and you can consolidate all user management in Okta. To use this new feature, follow the setup process described in the documentation. Following is an example cost-management scenario that is greatly simplified by this feature. Suppose that you want to use the SDK to run a Python script that finds and terminates compute instances that don't have the CostCenter cost tracking tag. Instead of creating a local Oracle Cloud Infrastructure user, you can set up a user in Okta to run this script. You would follow these steps to enable this scenario: Step 1: Set up or upgrade your Okta federation to provision users If you do not have an existing federation with Okta, follow the instructions in the white paper, Oracle Cloud Infrastructure Okta Configuration for Federation and Provisioning. This paper includes instructions for both setting up your federation and provisioning with SCIM. If you have an existing federation with Okta with group mappings that you want to maintain, you can add SCIM provisioning by following the instructions in our documentation.
Step 2: Set up the user in Okta and associate that user with the correct groups Managing all your users from your identity provider is a more scalable, manageable, and secure way to manage your user identities. Be sure to follow the principle of least privilege by creating an Okta user and associating that user with only the Okta groups that they need to do their job. Step 3: Set up the Oracle Cloud Infrastructure group Create a local Oracle Cloud Infrastructure group that will be used for the task and ensure that it has a policy that enables just the access control needed for the task. Consider setting up a group specifically for the type of administrator that you want (for example, compute instances administrator). For a detailed explanation of best practices in setting up granular groups and access policies, see the Oracle Cloud Infrastructure Security white paper. You can also create the group when you map it (next step). Step 4: Map the Okta group to the Oracle Cloud Infrastructure group Follow the instructions on adding groups and users for tenancies federated with Okta, and ensure that you map the correct group from Okta to the equivalent group in Oracle Cloud Infrastructure. You will know that you succeeded if you see users created in your tenancy from Okta (there is a filter that allows you to see only federated users). Step 5: Set up the user with an API key Now that the Okta user exists as a provisioned user in Oracle Cloud Infrastructure, you must create an API key pair and upload the public key for the user. Each user should have their own key pair. For details, see the SDK setup instructions. Step 6: Check the user's capabilities As a final check, ensure that the user has the capability to use API keys. You can also set the user's capabilities to use only API keys for the SDK and not the web console. Now you've set up the Okta user to use the SDK and run scripts that the Oracle Cloud Infrastructure user has access to.
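The cost-management scenario above (terminating compute instances that lack the CostCenter tag) boils down to a tag predicate applied to each instance; everything else is listing and terminating through the SDK. A hedged sketch, where the commented-out `oci.core.ComputeClient` calls are assumptions about the OCI Python SDK rather than instructions from this post:

```python
def lacks_cost_center(defined_tags, freeform_tags):
    """True if an instance carries no CostCenter key in any defined-tag
    namespace and no CostCenter freeform tag.
    defined_tags: {namespace: {key: value}}; freeform_tags: {key: value}."""
    in_defined = any("CostCenter" in keys for keys in defined_tags.values())
    return not (in_defined or "CostCenter" in freeform_tags)

if __name__ == "__main__":
    # Hypothetical SDK usage (client and field names assumed):
    # import oci
    # compute = oci.core.ComputeClient(oci.config.from_file())
    # for inst in compute.list_instances(compartment_id=...).data:
    #     if lacks_cost_center(inst.defined_tags or {}, inst.freeform_tags or {}):
    #         compute.terminate_instance(inst.id)
    pass
```

Because the script runs unattended with the federated user's API key, the group policy from Step 3 should grant it only the instance-management permissions this task needs.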
Tips

You know that the user is federated if the user name is prefixed with the name that you gave the identity provider. For example, if you called the Okta federation okta, your user would be okta/username. There is also a feature that lets you filter the list of local users by the federation provider that they came from.

Only users assigned to mapped groups are replicated. If you see some users but not the Okta user that you want, then that user doesn't belong to a group that has been mapped from Okta to Oracle Cloud Infrastructure. If no users are being replicated, verify that you've followed the setup procedure and the mapping between the groups. If that doesn't work, visit My Oracle Support to open a support ticket.

To use the SDK or CLI, the client that runs the CLI or SDK must have the matching private key material stored on the client machine. Secure the client machine appropriately to prevent inappropriate access.

Conclusion

This feature streamlines how Okta users can be used with Oracle Cloud Infrastructure, especially the CLI and SDK. Stay tuned for future feature announcements regarding federation.



External Health Checks on Oracle Cloud Infrastructure

When you run a solution in the public cloud, it's important to monitor the availability and performance of the service to end users, to ensure access from outside the host cloud. To meet this need, a cloud provider must offer monitoring from a diverse set of locations within relevant regional markets around the globe. Oracle Cloud Infrastructure is pleased to announce the release of external health checks to help you monitor the availability and performance of any public-facing service, whether hosted in Oracle Cloud or in a hybrid infrastructure.

What Are Health Checks?

External health checks enable you to perform scheduled testing, by using a variety of protocols, from a set of Oracle managed vantage points around the globe against any fully qualified domain name (FQDN) or IP address that you specify. Health checks support HTTP and HTTPS web application checks, and TCP and ICMP pings for monitoring IP addresses. You can also choose high-availability testing, with tests running as frequently as every 10 seconds from each vantage point. On-demand testing, which allows for one-off validation or troubleshooting tests, is also available through a REST API. Additionally, the Health Checks service is fully integrated with the DNS Traffic Management service to enable automated detection of service failures and trigger DNS failovers to ensure continuity of service.

Creating a Health Check

From the Edge Services menu, navigate to Health Checks. In the Health Checks area, click Create Health Check, and enter the details of your check in the dialog box. Enter a name that will help you remember the purpose of this check when you return to this page. Select the compartment that you want to add this check to. Add the target endpoints that you want to monitor. The Targets field is prepopulated with suggested endpoints drawn from public IP addresses already configured in your compartment. You can select one of these endpoints to monitor or add a new one.
Select the vantage points from which you intend to monitor the targets. These vantage points are distributed around the globe, and we generally recommend selecting vantage points located in the same continent as your application. Select the type of test that you want to run: HTTP or HTTPS for a web page, or TCP or ICMP for a public IP address. Set the frequency of the tests as appropriate to the level of monitoring that your service requires. Current options include every 30 or 60 seconds for basic tests; premium tests run at the higher frequency of every 10 seconds and incur an additional fee. Add any tags to help you quickly search for this check in the future. Click Create Health Check. After the check is created, a details page shows information specific to this check.

Retrieving Metrics

The Health Checks service delivers metrics directly to the Oracle Cloud Infrastructure Monitoring service to enable you to query the input metrics, build reports, and configure alerts based on the external monitoring. This integration gives you the flexibility to identify service failures visible from locations around the globe. A full REST API lets you access up to 90 days of historical health check monitoring data. Within the Health Checks UI, each test also provides access to measurement data from the probes.

Suspending Health Checks

If you need to temporarily suspend a health check (for example, while you maintain or alter a service), you can do so by selecting the affected check and clicking Disable on the Health Check details page.

Next Steps

Health checks are simple to configure through a flexible UI or REST API. We recommend using the service to monitor any critical publicly exposed IP address or FQDN that you deliver, whether hosted solely in Oracle Cloud Infrastructure or across your hybrid environment.
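Conceptually, each vantage point's HTTP test boils down to issuing a request, timing it, and classifying the result. The sketch below illustrates that idea only; the classify() helper and its thresholds are made up for this example and are not part of the Health Checks API.

```python
import time
import urllib.error
import urllib.request


def classify(status_code, elapsed_ms, timeout_ms=10_000):
    """Healthy if the endpoint answered with 2xx/3xx within the timeout."""
    if status_code is None or elapsed_ms > timeout_ms:
        return "unhealthy"
    return "healthy" if 200 <= status_code < 400 else "unhealthy"


def probe(url, timeout_s=10):
    """Fetch url once; return (status_code, elapsed_ms), with None on failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return resp.status, (time.monotonic() - start) * 1000
    except (urllib.error.URLError, OSError):
        return None, (time.monotonic() - start) * 1000


if __name__ == "__main__":
    # Classification examples; probe() would supply real values in practice.
    print(classify(200, 85))    # healthy: fast 2xx response
    print(classify(503, 85))    # unhealthy: server error
    print(classify(None, 85))   # unhealthy: no response at all
```

The managed service runs this kind of probe for you from many vantage points on a schedule, which is what makes failures visible from outside the host cloud.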
Health checks will be extended to include more types of tests across different protocols and with more configuration options. If there is a specific test you would like us to support, let us know. If you haven't tried Oracle Cloud Infrastructure yet, you can try it for free. 


Using File Storage Service with Container Engine for Kubernetes

Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. One of the best practices for containerized applications is to use stateless containers. However, many real-world applications require stateful behavior for some of their containers. For example, a classic three-tier application might have three containers:

One for the presentation layer, stateless
One for the application layer, stateless
One for persistence (such as a database), stateful

In Kubernetes, each container can read and write to its own file system. But when a container is restarted, all data is lost. Therefore, containers that need to maintain state store data in persistent storage such as a Network File System (NFS). What's already stored in NFS isn't deleted when a pod, which might contain one or more containers, is destroyed. Also, an NFS file system can be accessed from multiple pods at the same time, so it can be used to share data between pods. This behavior is really useful when containers or applications need to read configuration data from a single shared file system, or when multiple containers need to read from and write to a single shared file system.

Oracle Cloud Infrastructure File Storage provides a durable, scalable, and distributed enterprise-grade network file system that supports NFS version 3 along with Network Lock Manager (NLM) for locking. You can connect to File Storage from any bare metal, virtual machine, or container instance in your virtual cloud network (VCN). You can also access a file system from outside the VCN by using Oracle Cloud Infrastructure FastConnect or an Internet Protocol Security (IPSec) virtual private network (VPN). File Storage is a fully managed service, so you don't have to worry about hardware installation and maintenance, capacity planning, software upgrades, security patches, and so on.
You can start with a file system that contains only a few kilobytes of data and grow it to handle 8 exabytes of data. This post explains how to use File Storage (sometimes referred to as FSS) with Container Engine for Kubernetes (sometimes referred to as OKE). We'll create two pods. One pod runs on Worker Node 1, the other pod runs on Worker Node 2, and they share the same File Storage file system. Then we'll look inside the pod and see how to configure it with File Storage.

Prerequisites

Oracle Cloud Infrastructure account credentials for the tenancy.
A Container Engine for Kubernetes cluster created in your tenancy. An example is shown in the Container Engine for Kubernetes documentation.
Security lists configured to support File Storage, as explained in the File Storage documentation.
A file system and a mount target created according to the instructions in Announcing File Storage Service UI 2.0.

High-Level Steps

Create a storage class.
Create a persistent volume (PV).
Create a persistent volume claim (PVC).
Create a pod to consume the PVC.
Create a Storage Class

Create a storage class that references the mount target ID from the file system that you created:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: oci-fss
provisioner: oracle.com/oci-fss
parameters:
  # Insert mount target from the FSS here
  mntTargetId: ocid1.mounttarget.oc1.iad.aaaaaaaaaaaaaaaaaaaaaaaaaa

Create a Persistent Volume (PV)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: oke-fsspv
spec:
  storageClassName: oci-fss
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nosuid
  nfs:
    # Replace this with the IP of your FSS file system in OCI
    server: 10.0.32.8
    # Replace this with the Path of your FSS file system in OCI
    path: "/okefss"
    readOnly: false

Create a Persistent Volume Claim (PVC)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oke-fsspvc
spec:
  storageClassName: oci-fss
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # Although storage is provided here it is not used for FSS file systems
      storage: 100Gi
  volumeName: oke-fsspv

Verify That the PVC Is Bound

raghpras-Mac:fss raghpras$ kubectl get pvc oke-fsspvc
NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
oke-fsspvc   Bound    oke-fsspv   100Gi      RWX            oci-fss        1h

Label the Worker Nodes

Label two worker nodes so that a pod can be assigned to each of them:

kubectl label node 129.213.110.23 nodeName=node1
kubectl label node 129.213.137.236 nodeName=node2

Use the PVC in a Pod

The following pod (oke-fsspod) on Worker Node 1 (node1) consumes the file system PVC (oke-fsspvc).
# okefsspod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: oke-fsspod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs
      mountPath: "/usr/share/nginx/html/"
    ports:
    - containerPort: 80
      name: http
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: oke-fsspvc
      readOnly: false
  nodeSelector:
    nodeName: node1

Create the Pod

kubectl apply -f okefsspod.yaml

Test

After creating the pod, use kubectl exec to test that you can write to the file share:

raghpras-Mac:fss raghpras$ kubectl get pods oke-fsspod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
oke-fsspod   1/1     Running   0          33m   10.244.2.11   129.213.110.23   <none>

Write to the File System by Using kubectl exec

raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod bash
root@oke-fsspod:/# echo "Hello from POD1" >> /usr/share/nginx/html/hello_world.txt
root@oke-fsspod:/# cat /usr/share/nginx/html/hello_world.txt
Hello from POD1
root@oke-fsspod:/#

Repeat the Process with the Other Pod

Ensure that this file system can be mounted into the other pod (oke-fsspod2), which is on Worker Node 2 (node2):

# okefsspod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: oke-fsspod2
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs
      mountPath: "/usr/share/nginx/html/"
    ports:
    - containerPort: 80
      name: http
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: oke-fsspvc
      readOnly: false
  nodeSelector:
    nodeName: node2

raghpras-Mac:fss raghpras$ kubectl apply -f okefsspod2.yaml
pod/oke-fsspod2 created
raghpras-Mac:fss raghpras$ kubectl get pods oke-fsspod oke-fsspod2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
oke-fsspod    1/1     Running   0          12m   10.244.2.17   129.213.110.23    <none>
oke-fsspod2   1/1     Running   0          12m   10.244.1.9    129.213.137.236   <none>
raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod2 -- cat /usr/share/nginx/html/hello_world.txt
Hello from POD1

Test Again

You can
also test that the newly created pods can write to the share:

raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod2 bash
root@oke-fsspod2:/# echo "Hello from POD2" >> /usr/share/nginx/html/hello_world.txt
root@oke-fsspod2:/# cat /usr/share/nginx/html/hello_world.txt
Hello from POD1
Hello from POD2
root@oke-fsspod2:/# exit
exit

Conclusion

Both File Storage and Container Engine for Kubernetes are fully managed services that are highly available and highly scalable. File Storage also provides persistent, durable storage for your data on Oracle Cloud Infrastructure. File Storage is built on a distributed architecture to provide scale for your data and for access to your data. Leveraging both services simplifies your workflows in the cloud and gives you flexibility and options for how you store your container data.

What's Next

Dynamic volume provisioning for File Storage, which is in development, will create file systems and mount targets when a customer requests file storage inside the Kubernetes cluster. If you want to learn more about Oracle Cloud Infrastructure, Container Engine for Kubernetes, or File Storage, our cloud landing page is a great place to start.

Update

Just a quick note that File Storage requires public subnets, which means that the OKE worker nodes need to be on a public subnet for this solution to work.

Resources

Container Engine for Kubernetes workshop
File Storage Overview
Creating a Kubernetes Cluster



The Intersection of Hybrid Cloud and Cloud Native Adoption in the Enterprise

Welcome to Oracle Cloud Infrastructure Innovators, a series of articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders.

Enterprises are turning in droves to hybrid cloud computing strategies, especially for testing and development, quality assurance, and DevOps activities. But before the majority of enterprises can move on to more advanced hybrid cloud use cases, they'll need to overcome some lingering challenges. I recently sat down with Bob Quillin, Vice President of Developer Relations at Oracle Cloud Infrastructure, to discuss Oracle's cloud native direction. We discussed the biggest trends in hybrid cloud computing and the major obstacles, such as skills shortages, resistance to cultural change, and rapidly evolving technologies, that often stand in the way of adoption. Listen to our entire conversation, followed by a condensed written version.

One of the common trends I'm seeing in just about every enterprise is the move toward hybrid cloud strategies. What are you seeing on that front?

Quillin: We're seeing a lot of demand and interest in hybrid cloud. People have been trying out different models and patterns and testing out different technologies, and there have been some challenges. But one of the first major areas where we're seeing a lot of traction with customers is using the cloud for development, quality assurance, DevOps, and for running tests, with production still largely on premises. Many people feel more comfortable in their on-premises environment for certain production applications. But with a cloud native and DevOps environment in the cloud, you can spin up, spin down, and support a variety of testing, staging, and QA projects. It gives you a lot of elasticity, it's cheaper, and the test cases can run in containers. Sometimes people say, "Oh, I can't run my database applications in the cloud."
Well, that isn't the case for test and QA use cases. You can put them in a container, run the test, break it back down, and you're good to go. The disposability and quick reusability of these environments is where we're seeing a lot of success, and that is where a lot of people get started.

What's the next step on the road to a hybrid cloud strategy?

Quillin: The next step is getting to the point where you have a platform that gives you confidence that you can develop on the cloud or on premises, and that you have bi-directional portability: on premises to cloud or cloud to on premises. Ensuring that kind of application portability is the next major pattern we've seen. Disaster recovery and high-availability deployment is the third approach we've seen. For example, people will mirror their application in the cloud to have it available, but they keep running the existing application on premises so that they can fail over if they have a disaster event. Disaster recovery is one of the classic hybrid models. Those three areas are the ones we see being most successful right now.

Are organizations using hybrid strategies at all in more advanced areas?

Quillin: There are two more use cases we're seeing that are more advanced. One is a workload balancing application that's able to run both on premises and in the cloud. It lets you choose where to run each workload based on its regulatory requirements, governance, latencies, whether it's a new or legacy workload, and so on. This approach requires a bit more sophistication and a little more targeting. The other big one that people have been working toward for a long time is cloud bursting, where users can expand resources into the cloud dynamically, back and forth, or enlist some kind of federated automation where, based on performance or quality of service, they can choose where to run an application and have a federated, single view of it all. These use cases have been highly desirable from an enterprise perspective.
But what's been lacking is a platform from which to do it and a framework that enables it.

Let's talk a bit more about challenges. I'm sure almost every organization that you deal with is facing certain setup challenges in deploying, particularly to the hybrid model. What are you seeing?

Quillin: Cultural change and training continue to be inhibitors, and I think those roll up into an overall operational readiness challenge. Organizations are struggling with how to get started on this. At Oracle, what we're providing is an easier way to get started. The Oracle Cloud Native Framework provides a set of patterns and a model that gives the customer a supported blueprint for hybrid cloud. The next challenge is dealing with portability complexities related to a variety of underlying integration issues, including storage, networking, and the wide variety of Kubernetes settings and configurations. A related challenge, and this is one of the dark secrets of cloud native, is that there are a lot of "devil in the details" problems based on the rapid rate of change of Kubernetes, its quick release cadence, shifting APIs, and the general way the technology is rocketing forward. What you need is a vendor that supports you through these changes by supporting a bi-directional portability model. At Oracle Cloud Infrastructure, we're helping organizations through this process, and we're not going to leave them high and dry with a proprietary approach. We're committed to open standards.

Many organizations think that open source is great. But there are also those who think that sourcing software from a single proprietary vendor can be cheaper because of the DevOps and maintenance costs associated with open source. What are your thoughts on that?

Quillin: All sorts of studies have been done on organizations that use an open source and DevOps culture, and they're always faster and more successful in terms of business agility. But also, the developers are happy.
It's true that some on the business side of an organization would choose proprietary technology. But if you really want to recruit the best developers, you're going to want to work in open source, because that's the most marketable set of skills today. You get happier developers, you can recruit better, and you get the best development teams.

Oracle is a platinum member of the CNCF (Cloud Native Computing Foundation). How does the CNCF help in terms of enabling enterprises to overcome these challenges?

Quillin: I think the most important thing they've done, which is amazing to me, is that they've enabled the market by creating a standard cloud native platform based around Kubernetes. That's been their crowning achievement so far. If you remember back to just a few years ago, everyone had their own orchestration technology and it was all over the place. That's settled down now. The CNCF has created stability and enablement for the market.

What is next for Oracle and the CNCF?

Quillin: The challenge is to continue that success. There's some next-level tooling that needs to come out. Some of the fastest-growing projects in the CNCF are around monitoring, tracing, and logging; around networking and storage; and around the best ways to manage a Kubernetes environment and connect it to existing storage and networking infrastructure. Kubernetes is growing, but what's growing even faster, which is a good sign, are the things that make Kubernetes more manageable, more secure, and more integrated into your existing infrastructure. Learn more about Oracle Cloud Infrastructure's cloud native technologies.



Security in the Cloud: Are Audits and Certifications Really Enough?

For most organizations, the process of verifying that cloud providers manage data securely involves looking at security and compliance certifications and reading reports from independent, third-party auditors. At first glance, this approach makes sense. After all, organizations need some way to confirm that sensitive customer, supplier, and financial information is adequately protected. They also need to verify that data is stored and handled in compliance with applicable security requirements like the Health Insurance Portability and Accountability Act (HIPAA). Third-party audits and certifications can be a big help in that regard.

But although audits and certifications provide some level of assurance that cloud providers and other enterprises are meeting certain requirements related to security and compliance, they don't always go far enough. For example, some of the biggest security breaches in the last several years happened to vendors with active Payment Card Industry Data Security Standard (PCI DSS) certifications. The fact that organizations subject to regular security audits can experience breaches shows that certifications and audits aren't a substitute for vetting a cloud's security architecture and controls framework for sound design. Organizations that want to be confident that data stored in the cloud is appropriately secured should take some additional steps. Here are a few things that you can do to verify that a cloud provider places a high priority on security.

Understand the Cloud's Architecture

You can tell a lot about a cloud provider's approach to security by looking closely at their cloud's architecture. How was the service built? Was it designed using security-first principles? Oracle Cloud Infrastructure, for example, was designed with a security-first focus, isolating customer resources such as network, compute, and data. This single-tenant approach increases the granularity of control and reduces the attack footprint.
It also results in predictable and superior performance by eliminating problems caused by "noisy neighbors." Oracle Cloud Infrastructure users can also create their own virtual cloud networks (VCNs). A VCN is a customizable and completely private network that gives you full control to create an IP address space, subnets, route tables, and stateful firewalls. You can also configure inbound and outbound security lists to protect against unwarranted access and malicious users.

Clarify Roles and Responsibilities

Migrating to the cloud means shifting to a shared-responsibility model for security. This model is often a source of confusion for cloud adopters, as highlighted in a Cloud Threat Report jointly authored by Oracle and KPMG. As you move to the cloud, understanding your cloud use cases and how they affect the division of security roles is hugely important. Before you select a cloud provider, begin documenting your cloud use cases by making a comprehensive list of your security requirements. This list helps you set priorities and guide conversations with providers. When negotiating with providers, consider using Standardized Information Gathering (SIG) questionnaires from Shared Assessments or the Consensus Assessments Initiative Questionnaire (CAIQ) from the Cloud Security Alliance. And ensure that security roles and responsibilities are clearly defined in contracts and service level agreements (SLAs). Not all vendors offer availability and performance SLAs, for instance.

It's also important to remember that the customer's level of responsibility for security shifts depending on which types of cloud services are being used. For example, Oracle customers who choose bare metal cloud deployments have extensive control over their cloud infrastructure. Therefore, they have far greater responsibility for things like identity and access management, password management, firewall configuration, and other controls.
Learn About the Cloud Provider's Culture

Security should be integral to the culture and everyday activities of a cloud provider; it should never be an afterthought. Ask the following questions to determine whether a cloud provider truly embraces a culture of security:

How do you ensure that engineers know their security responsibilities?
How do you enable engineers to perform their security-related tasks, and how do you measure their results?
What are the processes and technologies for reviewing new code and checking for vulnerabilities, and how do you learn from the things that you discover?
What kind of penetration testing do you use, and how often are tests run?
Do you give security issues a high priority at daily and weekly stand-ups and meetings?
Have you ever made a tough decision between shipping a product to meet a commitment and fixing a security bug?

As a longtime security professional and someone who has worked with many IT security engineers over the years, I can attest that we all have good intentions and want to keep customer data private and secure. But it takes more than good intentions to do a job correctly. We must embrace experience and innovation to continually improve the security architecture and ensure the maximum effectiveness of protection measures. Learn more about Oracle Cloud Infrastructure security.



Improving the User Experience for the Oracle IaaS and PaaS Console

As a product manager who's focused on improving the user experience for Oracle IaaS and PaaS customers, I love getting feedback about how we can make your jobs easier. Even better, I love being a part of the team that helps to bring that feedback to life in the form of enhancements to our console. We originally unveiled our new console homepage in October 2018, when we updated our look and feel and made it easier for you to find the information that you need. Today, we're excited to build on that momentum by introducing key console enhancements that will further improve how you use, manage, and get support for your IaaS and PaaS services.

Enhanced Service Announcements

We've enhanced how we deliver service-related announcements directly in the console. All users with the right set of permissions, not just the tenant administrator, can stay up to date on relevant service updates or planned changes. Announcements for the highest-impact events appear as banners at the top of the console. You can click them for details or dismiss them when you're already up to speed. Additionally, you can view all announcements by clicking the announcement icon (it looks like a bell) on the top bar of the console. You can filter by the type of announcement, and if you're searching for a specific announcement made during a certain time period, you can filter by date range.

Improved Support Experience for Service Limit Increases

We've made it simpler for you to see when you're about to reach your service limits, and we've added the ability to request increases within the console. Just enter your contact information, the service category (for example, Compute), and the resource for which you want to request an increase. For most requests, we offer a one-day response window.

Relevant and Contextual Help

We've enhanced our help functionality by making it more relevant to the service that you're currently provisioning or managing.
First, we analyzed the most common issues that our users need help with, by service. Then we curated short lists of the top help topics and focused on those in the help navigation window. For example, if you're provisioning compute, all help topics are relevant to compute. And when you move on to provisioning Autonomous Data Warehouse, all help topics focus on the most common guidelines for that service. Over time, we will add these links across all services so that you can quickly find what you need.

Improved Cost and Billing Management Capabilities

A new cost analysis dashboard makes it easier for administrators and controllers to stay on top of usage and costs. You can easily see your latest usage charges at a glance. If you're a customer who's trying us out with our US$300 free trial, you can see how many trial credits you've used and the days left in your trial. You can filter by specific date ranges, and filter by compartment and tags to analyze usage and costs by department and project. Additionally, you can expand by service to analyze how much each service has been used over time. For example, if you tag resources used for different development projects, you can easily filter and track service usage by development team.

This is only the beginning. We'll be rolling out more user experience enhancements soon. As always, we want and appreciate your feedback, so keep it coming. If you're new to the Oracle Cloud Platform, we invite you to see how easy it is to get started with Oracle Cloud services with our US$300 free trial.
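The tag-based cost filtering described above amounts to grouping usage records by a tag value. The sketch below is illustrative only: the record shape and field names are made up for this example and are not the billing service's schema.

```python
from collections import defaultdict


def cost_by_tag(records, tag_key):
    """Sum record costs grouped by the value of tag_key ('untagged' if absent)."""
    totals = defaultdict(float)
    for rec in records:
        group = rec.get("tags", {}).get(tag_key, "untagged")
        totals[group] += rec["cost"]
    return dict(totals)


if __name__ == "__main__":
    # Hypothetical usage records, one per service per billing period.
    usage = [
        {"service": "Compute", "cost": 12.50, "tags": {"Project": "mobile-app"}},
        {"service": "Storage", "cost": 3.25, "tags": {"Project": "mobile-app"}},
        {"service": "Compute", "cost": 7.00, "tags": {}},
    ]
    print(cost_by_tag(usage, "Project"))
    # {'mobile-app': 15.75, 'untagged': 7.0}
```

Consistently tagging resources at creation time is what makes this kind of per-team breakdown possible, whether in the dashboard or in your own reporting.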



Learn the Benefits of Running PeopleSoft on Oracle Cloud

Hundreds of PeopleSoft customers have moved their PeopleSoft applications to Oracle Cloud Infrastructure, and many more are planning their move now. The challenge that customers face is not whether they should move PeopleSoft to the cloud, but rather when and how to do it. PeopleSoft is supported and viable at least until 2030. Continued investment in product development is realized in the quarterly PeopleSoft Update Manager (PUM) image updates that deliver new functionality requested by customers. This continued innovation means that you can enjoy the latest features in an application that you have relied on for a long time. The benefits of Oracle Cloud Infrastructure are proven: it improves performance, reduces costs, and automates lifecycle management. Customers move PeopleSoft to Oracle Cloud Infrastructure to accomplish the following goals:

Maximize their current investment in PeopleSoft apps, customizations, and add-ons by running them on Oracle Cloud Infrastructure at a lower cost.
Exit the data center business and focus on business enablement instead, deliver PeopleSoft implementations with agility and speed, and deliver upgrade and update projects with 40 to 70 percent savings.
Improve business continuity with Oracle Cloud Infrastructure-based disaster recovery, with significantly better Recovery Point Objective (RPO) and Recovery Time Objective (RTO) metrics than onsite, and at a reduced cost.
Automate PeopleSoft lifecycle management tasks such as instance deployment, cloning, tools patching, tools upgrades, backup, and monitoring, all backed by Oracle Cloud Infrastructure's industry-leading SLA.
Leverage native Transparent Data Encryption (TDE) to secure PeopleSoft application data at rest and in motion, end-to-end application security, 90 percent faster environment setup, line of business (LOB) self-service, and more streamlined PeopleSoft lifecycle management.

Don't believe it?
Attend our upcoming PeopleSoft webinar to see a live demonstration of PeopleSoft Cloud Manager and learn how to migrate Oracle PeopleSoft to Oracle Cloud. You'll also hear about how the cloud enables innovation at a much faster pace. Sign up today!
Developer Tools

Oracle Simplifies Cloud Native Development

Welcome to Oracle Cloud Infrastructure Innovators, a series of articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders.

The majority of enterprises are ready to join the cloud native development movement, but some stubborn obstacles, such as resistance to cultural change, complexity, and skills shortages, continue to stand in their way. I recently sat down for a conversation with Bob Quillin, Vice President of Developer Relations at Oracle Cloud Infrastructure, to talk about Oracle's cloud native direction. We discussed cloud vendor lock-in and other difficulties that enterprises face when moving to the cloud. We also talked about creating a sustainable, open standards-based strategy to overcome the challenges of cloud adoption. Listen to our entire conversation here, and read a condensed version below.

How does Oracle Cloud Infrastructure support concepts like serverless programming and cloud native development?

Quillin: Oracle started the Fn Project last year. Fn is an open source, container-native, serverless platform. It's one of the first serverless solutions that lets you run serverless applications basically anywhere, whether that's in the cloud, on-premises, or both. It supports any programming language, and it's very extensible. It was developed by the serverless group from Iron.io that was hired into Oracle a couple of years ago. Instead of creating a proprietary service, we built out the Fn Project over the last year, with contributors adding new features to the platform. For example, there's a new CloudEvents project that came out recently and is being hosted by the Cloud Native Computing Foundation (CNCF) as a sandbox project. It focuses on how events and serverless functions work together, and we're one of its early adopters.

What else can customers expect from the Fn Project?
Quillin: One thing we just rolled out at the KubeCon conference in Seattle is a product based on the Fn Project called Oracle Functions. It's a fully managed, scalable, on-demand, functions-as-a-service (FaaS) platform that runs on top of Oracle Cloud Infrastructure, all based on the Fn engine. It's a unique service in that you can still use the Fn Project capabilities on your laptop, or on any other cloud. But if you want a managed service, like AWS Lambda but better, we offer Oracle Functions. Unlike Lambda, Oracle Functions is an open solution and won't lock you in with proprietary APIs: anything you develop on Oracle Functions can run anywhere else with the Fn Project.

So, it's really an environment for deploying and executing any functions-based application, and there's no need to manage the infrastructure. There are servers underneath it all that Oracle manages for you, so it isn't truly "serverless" as people say, but you don't have to worry about them. That's one of the huge benefits, because it makes deploying managed functions simple. It's DevOps friendly and Docker-based, so each function and serverless component is a container. Thus, it's a truly container-native approach, and you can deploy it using your favorite container management solution. It works especially well with Kubernetes, but also with other container platforms. In terms of serverless products, almost all other solutions out there are proprietary.

But it's always tough to teach an old dog new tricks. What major challenges are traditional enterprises facing as they try to become successful cloud native companies?

Quillin: That's a good question. The CNCF ran a survey a couple of months ago asking about the big challenges that organizations face with container technology. The top three were cultural change for developers, complexity, and lack of training.
We've made some amazing progress as an industry, particularly over the last year and a half, but many developers and teams still feel left behind. As the culture changes and as we push DevOps forward, they're looking for ways to connect with these new technologies. But they're also responsible for maintaining and using existing platforms, like WebLogic or database applications, which you can't just "lift and shift" overnight. I've talked to CIOs at enterprises going through this change: it may be easy to move a five- or ten-person team, but moving a thousand-person or multi-thousand-person team is challenging.

Then, combine that challenge with the complexity of all the open source options and solutions available to you. If your choice is between 5,000 different solutions and a single vendor that offers you five, you're between a rock and a hard place: either too much choice or not enough.

How are enterprises addressing this issue of too much choice versus not enough choice?

Quillin: Sometimes they address it by saying, "Well, I'm just going to choose one cloud. I'm going to single source it." Unfortunately, that approach has left many people locked in. What they find is that the fastest solution is not always the best solution. They start using closed APIs and proprietary services, and inch by inch, application by application, they get more and more locked in. The whole value proposition of open source is choice and portability: being able to take an application and move it wherever is appropriate for that workload, for that geography, for your business. So, if you're going to choose open source technology, you really need to embrace that and push your vendors. As part of the selection process, you should ask, "Is this going to lock me in?"
What we're seeing now is that people want to go hybrid cloud or multicloud, but their single-cloud vendor strategy won't let them.

Can you tell me a little bit about the Oracle Linux Cloud Native Environment?

Quillin: The Oracle Linux Cloud Native Environment is a software stack that is available on-premises and runs on the cloud itself. It is a curated set of open source Cloud Native Computing Foundation (CNCF) projects that can be easily deployed, have been tested for interoperability, and for which enterprise-grade support is offered. It's included with an Oracle Linux Premier Support subscription at no additional cost, and it's unique in that it's driven by true open source technologies. It uses no proprietary approaches to lock you in: if you develop on top of the Oracle Linux Cloud Native Environment, you can run that application anywhere.

We're also combining that with the Oracle Cloud Infrastructure cloud native services, so now you have a really strong one-two punch. The whole solution is called the Oracle Cloud Native Framework. It consists of the Oracle Cloud Infrastructure cloud native services, which include Kubernetes, the Oracle Cloud Infrastructure Registry, and a whole set of new observability, application definition, development, and provisioning technologies delivered as managed services right on top of our Generation 2 cloud.

How does this help those enterprises that are struggling to go cloud native?

Quillin: We've talked about the teams being left behind by complexity and cultural change in the push to cloud. The Oracle Cloud Native Framework provides a pattern, a model by which these teams can easily move applications back and forth between on-premises and cloud, and it's a sustainable strategy. If teams lack training, or if complexity is slowing down adoption, the services in Oracle Cloud Infrastructure are offered as managed cloud services.
For example, many, if not most, development teams did not become experts in managing Kubernetes or deploying Docker in the last two years. These teams can go directly into a managed cloud environment where all of that complexity is managed for them. You don't have to become a Kubernetes expert; you can just run the application and understand how to build it to run on top of the platform. You also don't have to run the underlying infrastructure, the clusters, the cluster management, and all of the tools and techniques that go along with that. The Oracle Cloud Native Framework provides a truly inclusive, sustainable strategy for these developers, and it's all based on open CNCF technologies. Learn more about the Oracle Cloud Native Framework today.
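The function model Quillin describes, where each function is a small, independently deployable handler that the platform invokes per request, can be sketched as follows. This is a minimal illustration of the FaaS handler shape, not Fn's actual API: real Fn functions are written against a language FDK (for example, the `fdk` package for Python), and the plain `handler` entry point here is a hypothetical stand-in kept dependency-free.

```python
import json

# Hypothetical FaaS-style handler (a sketch, not the Fn FDK contract):
# the platform passes the raw request payload and the handler returns a
# serializable response. In Fn, this function would be packaged into a
# container image and invoked on demand.

def handler(data: bytes) -> str:
    """Entry point the platform would invoke for each request."""
    try:
        payload = json.loads(data) if data else {}
    except json.JSONDecodeError:
        payload = {}
    name = payload.get("name", "world")
    return json.dumps({"message": f"Hello, {name}!"})

if __name__ == "__main__":
    # Simulate two invocations the platform might make.
    print(handler(b'{"name": "Oracle"}'))  # {"message": "Hello, Oracle!"}
    print(handler(b""))                    # {"message": "Hello, world!"}
```

Because the handler holds no state between invocations and ships as a container, the platform can scale instances up and down freely, which is what makes the managed, "serverless" operating model possible.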