
Recent Posts

Product News

Announcing the Launch of Traffic Management

Today we’re excited to announce the release of DNS traffic management on Oracle Cloud Infrastructure. Our industry-leading DNS is already the most reliable and resilient DNS network in the world. Now you can use customizable steering policies to provide an optimal end-user experience based on factors such as endpoint availability and end-user location. In addition to traditional global load-balancing capabilities, like distributing traffic across two Oracle Cloud Infrastructure regions, Oracle Cloud Infrastructure DNS now provides granular control over incoming queries and where to route them. Traffic management steering policies support a variety of use cases, ranging from a simple point-A-to-point-B failover to the most complex enterprise architectures. This support enables you to set predictable business expectations for service differentiation, geographic market targeting, and disaster recovery scenarios. Your rule sets can direct global traffic to any assets under your control, allowing you to optimize user experience and infrastructure efficiency in both hybrid and multicloud scenarios.

Why Do I Need Traffic Management Steering Policies?

Traffic management, a critical component of DNS, lets you configure routing policies for serving intelligent responses to DNS queries. Different answers can be served for a query according to the logic in the customer-defined traffic management policy, sending users to the most optimal location in your infrastructure. Oracle Cloud Infrastructure DNS is also optimized to make the best routing decisions based on current conditions on the internet.

What Are the Most Common Use Cases?

As more enterprises move to cloud and hybrid architectures, high availability and disaster recovery are more important than ever before. DNS failover moves traffic away from an endpoint when it is unresponsive and sends it to an alternate location. Availability status is monitored by Oracle health checks, now also available in Oracle Cloud Infrastructure.

Cloud-based DNS load balancing is ideal for scaling your infrastructure across multiple geographic regions. Common use cases include scaling out new infrastructure, cloud migration, and controlling the release of new features across your user base. It’s easy to set up “pools” of endpoints across all of your infrastructure (on-premises or cloud-based) and assign ratio-based weighting.

Source-based steering lets you automate routing decisions based on where requests originate. These source-based policies can route traffic to the closest geographic assets, create a “split horizon” to control traffic based on the IP prefix of the originating query, and make decisions based on preferred providers. You can load balance traffic across different Oracle Cloud Infrastructure regions by sending users to the closest geographic location, and you can use health checks to divert users to an alternate region if your assets in a particular region are unreachable.

Getting Started

After you set up basic DNS, select Traffic Management Steering Policies under Edge Services, where you can create and customize your own steering policies. If you have any questions about getting started or how to handle your use case, please reach out to us.
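For teams that automate DNS configuration, here is a minimal sketch of creating a weighted (ratio-based) steering policy with the OCI Python SDK. The compartment OCID, answer addresses, pool names, and weights are illustrative assumptions, and the rule set is simplified; templates such as LOAD_BALANCE expect a particular rule sequence, so treat this as a starting point and consult the DNS API reference for the exact models and condition syntax.

import oci

config = oci.config.from_file()
dns = oci.dns.DnsClient(config)

policy = dns.create_steering_policy(
    oci.dns.models.CreateSteeringPolicyDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",  # placeholder
        display_name="webapp-weighted",
        ttl=30,
        template="LOAD_BALANCE",
        answers=[
            # Two pools of endpoints; rdata values are illustrative
            oci.dns.models.SteeringPolicyAnswer(
                name="primary", rtype="A", rdata="203.0.113.10", pool="primary"),
            oci.dns.models.SteeringPolicyAnswer(
                name="secondary", rtype="A", rdata="203.0.113.20", pool="secondary"),
        ],
        rules=[
            # 70/30 ratio-based weighting across the two answers (condition
            # expressions here are an assumption; see the API reference)
            oci.dns.models.SteeringPolicyWeightedRule(default_answer_data=[
                oci.dns.models.SteeringPolicyWeightedAnswerData(
                    answer_condition="answer.name == 'primary'", value=70),
                oci.dns.models.SteeringPolicyWeightedAnswerData(
                    answer_condition="answer.name == 'secondary'", value=30),
            ]),
            # Serve a single answer per query
            oci.dns.models.SteeringPolicyLimitRule(default_count=1),
        ],
    )
).data

The policy is then attached to a domain in one of your zones with create_steering_policy_attachment.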


Partners

Click to Launch Images by Using the Marketplace in Oracle Cloud Infrastructure

Guest Blogger: Bruce Burns, Senior Director of Product Management

At Oracle, our mission is to enable your business transformation by migrating and modernizing your most demanding enterprise workloads onto the cloud without rearchitecting them. Our development efforts for Oracle Cloud Infrastructure are focused on this primary objective. We also know that to support your critical system-of-record workloads effectively with the least amount of rearchitecture, you need supporting applications from a broad range of vendors that surround, secure, and extend your core enterprise workloads. To that end, we want to dramatically simplify how you find, learn about, and launch both Oracle and third-party applications from our Oracle Cloud Marketplace.

Announcing the General Availability of Marketplace in Oracle Cloud Infrastructure

Today, I’d like to announce the general availability of our Marketplace in Oracle Cloud Infrastructure. We introduced this feature at OpenWorld San Francisco in October 2018, and we’re proud to make it available to our cloud customers effective immediately. Embedding the Marketplace in Oracle Cloud Infrastructure gives you ready access to security solutions from Fortinet and Check Point, DevOps solutions from Bitnami, and high-performance computing (HPC) workload management tools from Altair. If you’re an Oracle Applications customer, you can easily find and “click to launch” the automated lift-and-shift, provisioning, and lifecycle management tools for Oracle E-Business Suite and PeopleSoft. The best part is that you can launch any of these applications directly on your Oracle Cloud Infrastructure Compute instance, which reduces deployment times to minutes or hours instead of days or weeks. Next, I’d like to highlight some of the innovative and unique solutions that we have in the Marketplace and take you through some of their use cases.

Enhance Security with Solutions That You Know

Oracle Cloud Infrastructure is dedicated to offering you a secure cloud. We ensure the security of our cloud through our infrastructure architecture, and we enable you to choose the level of isolation and security controls that you need to run your most important workloads securely. We know that enterprise customers have implemented security and networking systems for their on-premises data centers and governance frameworks. Because we want to help you modernize your critical workloads by moving them to our cloud without massive rearchitecture work, we want to make it easier for you to use popular security solutions from third-party providers that you already know and trust. Oracle Cloud Infrastructure has supported integration with leading virtual firewall solutions from Fortinet and Check Point for some time. However, deployment used to require several steps, including importing an image or running setup in a running instance and then creating custom images. In fact, we wrote blog posts to help guide our customers through those steps. Now, turnkey images for Fortinet’s FortiGate-VM NGFW, FortiADC (load balancing), FortiAnalyzer, and FortiManager solutions, and Check Point’s CloudGuard IaaS NGFW, are available on the Marketplace. You can quickly launch these images in your Oracle Cloud Infrastructure environment by using the Console: select the image of your choice, click Launch Instance, and the GUI guides you through deployment. It’s as easy as that.
Lior Cohen, Senior Director of Cloud Security Products and Solutions at Fortinet, talks about the impact of enabling customers to easily launch turnkey solutions via the embedded Marketplace in Oracle Cloud Infrastructure: "More and more enterprises are shifting their critical production workloads to hyperscale IaaS cloud vendors like Oracle. Because of the demands of today’s digital marketplace, it’s crucial for those customers to be able to instantly launch solutions that can secure and efficiently deliver applications at the speed their users expect. Rapid and simplified access to essential tools, like our award-winning FortiGate next-generation firewall security solution, equips organizations with the breadth of protection and confidence needed to migrate even their most critical enterprise applications. With just a few clicks, customers can launch and use our award-winning firewall and application delivery controller solutions, enabling them to secure critical applications in minutes."

Launch E-Business Suite and PeopleSoft Cloud Manager, Available Only with Oracle Cloud

Oracle Cloud Infrastructure is designed and optimized to run Oracle Applications such as E-Business Suite and PeopleSoft. These solutions help run the back offices of leading enterprises worldwide. Companies that depend on these applications, but demand the benefits of cloud, choose to modernize by running on Oracle Cloud Infrastructure, which offers better performance, lower price, and higher-availability options, including RAC and Exadata, that cannot be found with any other cloud. For example, E-Business Suite Cloud Manager and PeopleSoft Cloud Manager help automate provisioning and facilitate lifecycle management for these application environments in our cloud, and these solutions are unique to Oracle Cloud Infrastructure. They offer application modernization capabilities to facilitate migrations from on-premises environments to the cloud. They also provide intuitive UIs so that you can define your topologies and create templates for more streamlined deployments. Finally, these web-based solutions enable you to subscribe to the latest updates and stay current with the latest images, improving your security. Both solutions are now available through the Marketplace in Oracle Cloud Infrastructure, where you can click to launch the latest E-Business Suite Cloud Manager and PeopleSoft Cloud Manager images. For more information on E-Business Suite Cloud Manager, check out their blog.

Click to Launch DevOps Tools

Building on Oracle’s commitment to support open standards through our Oracle Cloud Native Framework, the new Marketplace in Oracle Cloud Infrastructure also makes it easier for DevOps practitioners to launch the following images:

- Continuous integration (CI) software, such as Jenkins Certified by Bitnami
- Source code management with CI features, such as GitLab CE Certified by Bitnami
- Bug tracking software, such as Redmine Certified by Bitnami

By simplifying how software development teams access solutions to build, test, and deploy the latest cloud native innovations, we’re supporting our customers’ ability to innovate and respond to changing business requirements. “For enterprises running open source, it is critical that they choose a trusted, secure, up-to-date version,” said Pete Catalanello, Bitnami Vice President of Business Development and Sales.
“By adding Bitnami certified solutions such as Jenkins and Redmine to its Marketplace, Oracle is helping DevOps to add agility and best practices to their processes.”

Making HPC Solutions Accessible to Engineers

Oracle continues to deliver on our vision to make the power of supercomputing readily accessible to every engineer and scientist. Historically, enterprise HPC workloads have remained on-premises because they require specialized technology and demand high, consistent performance that wasn’t possible, or was too cost prohibitive, on cloud infrastructure. At Oracle, we're challenging you to bring your most demanding HPC applications to our cloud. Not only do we offer bare metal GPUs based on cutting-edge technology and enable clustered networking that delivers single-digit-microsecond latency, we also partner with innovators like Altair to offer easy access to their market-leading HPC workload management solution, Altair PBS Works™.

Sam Mahalingam, Chief Technical Officer for Enterprise Solutions at Altair, talks about teaming up with Oracle on the new Marketplace in Oracle Cloud Infrastructure. “Organizations all over the world depend on Altair PBS Works to simplify the administration of their largest and most complex clusters and supercomputing environments,” said Mahalingam. “Many of our customers are smaller organizations that need HPC solutions that are easy to adopt and use. Partnering with Oracle and offering Altair PBS Professional™, the flagship product of the PBS Works suite, through the new Marketplace delivers on our joint mission of making HPC more accessible.”

We’ll continue to add partner solutions to our Marketplace. If there are any third-party solutions that you just can’t live without, we welcome your comments on this post as we continue to build out our ecosystem of solutions for you.


Strategy

Why Oracle Cloud Infrastructure?

We recently launched a "Why Oracle Cloud Infrastructure" web page that describes how Oracle's approach to cloud is different, and why we think our cloud infrastructure can be uniquely valuable to customers. I’ve been at Oracle for two years, and I joined this team because I believe that Oracle is in a position to solve technology challenges in cloud that other vendors can't. This page is our way of telling that story to the world. It’s part of a conversation that we’ve been having with many customers, and I’m excited that we're sharing the story with a wider audience.

Why Build a Cloud?

For decades, Oracle has been partnering with enterprises to solve some of their most challenging business problems by using the world’s most scalable, reliable, and performant database and business application software, and differentiated hardware to run it on. Like every other technology company in the world, we know that cloud offers significant benefits. It promises to make IT more agile, to drive innovation, and to get customers out of the tedious business of infrastructure management. When we thought about the best way to serve our current and future customers, we didn't believe that any existing cloud infrastructure provider could meet their needs for consistently high performance, isolation from other tenants, compatibility with what they wanted to run, and the ability to support the most critical features of Oracle technologies. We felt that by building our own cloud infrastructure, we’d better serve customers in the long run by allowing our database and applications teams to work closely with our own cloud infrastructure teams to offer optimized solutions.

Our Imperatives

To cloud-enable the class of applications that we know best (systems of record that Oracle customers rely on to build products, transact finances, and effectively run the most critical parts of their operations), we knew that we needed a cloud that was up to the task. Specifically, we had three imperatives in mind:

- Our customers could not take a step backward when moving to cloud, especially regarding performance and reliability.
- The economics had to work, by reducing costs compared to on-premises deployments and being predictable enough to facilitate budget planning.
- Our customers had to be able to get their workloads to cloud easily, without requiring massive refactoring or excessive risk.

Performance and Reliability

For the first imperative, we built a cloud for the enterprise with top-tier components and access to bare metal servers to give customers full isolation and the ability to run what they want. From an infrastructure perspective, the most important part is the network. We avoided the oversubscription that’s common in other clouds, so performance isn’t variable from moment to moment, day to day, or month to month. This reliability enables high performance, validated by third-party testing to deliver better results for key application workloads. What’s more, the level of performance that customers get doesn’t change depending on what neighbors are doing, a key requirement for demanding systems of record. Finally, we built a cloud that natively supports crucial elements of Oracle Database functionality, such as Oracle Real Application Clusters (RAC), Exadata, and deep DBA controls, all of which customers rely on for production applications and are unique to the Oracle Cloud.
Economics

We offer low component pricing and simplified pricing models that make it easy to predict and manage the total cost of operations at scale. Our on-demand rates for compute, network, and storage components are materially lower than those of our competitors. Perhaps more importantly, our rates include everything that enterprises need to get the most out of our services, like storage performance, inter-region transfer, high levels of internet data egress, and unlimited data movement across dedicated private line connections. Our discount structure is simple, with a straight percentage discount offered across all services, rather than a reserved instance model that is commonly restricted to compute and is tremendously complicated to optimize.

Moving to Cloud

We wanted to make it easy for our enterprise customers to get to cloud. We make this possible through a combination of compatibility with what our customers run, automation and expertise in lifting and shifting workloads as they are to cloud, and integration and innovation in cloud native application frameworks. When our customers run workloads in Oracle Cloud, they can take the data that they create and manage in core systems of record and get more value out of it with integration, analytics, and new ways to distribute and understand that data.

So, Why Oracle Cloud Infrastructure?

Other clouds were built primarily for web applications, but we took a different approach. Our cloud is built to withstand the rigorous demands of critical enterprise workloads, and to transform those business processes for improved agility, reliability, and long-term business results. We believe our long history of solving business problems in technology, combined with a focus on building a cloud for the most demanding category of workloads, will drive differentiation and results for enterprise environments around the world. I hope that you give our new page a read, and feel free to let me know what you think.


Product News

Oracle Cloud Simplifies Identity Management with Enhanced Okta Support

We are enhancing our federation support by enabling users who are federated with Okta to directly access the Oracle Cloud Infrastructure SDK and CLI. Federation enables you to use identity management software (often an existing identity management solution that is integrated with your corporate directory) to manage users and groups while giving them access to the Oracle Cloud Infrastructure Console, CLI, and SDK. If you're an Okta user, that means you can use the same set of credentials in the Oracle Cloud Infrastructure web console as well as in long-running, unattended CLI or SDK scripts. Users who are members of Okta groups that you select are synchronized from Okta to Oracle Cloud Infrastructure. You control which Okta users have access to Oracle Cloud Infrastructure, and you can consolidate all user management in Okta. To use this new feature, follow the setup process described in the documentation.

Following is an example cost-management scenario that is greatly simplified by this feature. Suppose that you want to use the SDK to run a Python script that finds and terminates compute instances that don't have the CostCenter cost tracking tag. Instead of creating a local Oracle Cloud Infrastructure user, you can set up a user in Okta to run this script (a sketch of such a script appears at the end of this post). You would follow these steps to enable this scenario.

Step 1: Set Up or Upgrade Your Okta Federation to Provision Users

If you do not have an existing federation with Okta, follow the instructions in the white paper Oracle Cloud Infrastructure Okta Configuration for Federation and Provisioning. This paper includes instructions for both setting up your federation and provisioning with SCIM. If you have an existing federation with Okta with group mappings that you want to maintain, you can add SCIM provisioning by following the instructions in our documentation.

Step 2: Set Up the User in Okta and Associate That User with the Correct Groups

Managing all your users from your identity provider is a more scalable, manageable, and secure way to manage your user identities. Be sure to follow the principle of least privilege by creating an Okta user and associating that user with only the Okta groups that they need to do their job.

Step 3: Set Up the Oracle Cloud Infrastructure Group

Create a local Oracle Cloud Infrastructure group that will be used for the task, and ensure that it has a policy that grants just the access needed for the task. Consider setting up a group specifically for the type of administrator that you want (for example, a compute instance administrator). For a detailed explanation of best practices in setting up granular groups and access policies, see the Oracle Cloud Infrastructure Security white paper. You can also create the group when you map it (next step).

Step 4: Map the Okta Group to the Oracle Cloud Infrastructure Group

Follow the instructions on adding groups and users for tenancies federated with Okta, and ensure that you map the correct group from Okta to the equivalent group in Oracle Cloud Infrastructure. You will know that you succeeded if you see users created in your tenancy from Okta (a filter lets you see only federated users).

Step 5: Set Up the User with an API Key

Now that the Okta user exists as a provisioned user in Oracle Cloud Infrastructure, you must create an API key pair and upload it to the user. Each user should have their own key pair. For details, see the SDK setup instructions. A scripted version of this step is sketched below.
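If you prefer to script this step, here is a minimal sketch using the OCI Python SDK. The user OCID and key path are placeholders (assumptions), and the key pair is assumed to have been generated as described in the SDK setup instructions.

import oci

# Run with administrator credentials that can manage users
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)

# Placeholder OCID of the federated user provisioned from Okta
user_ocid = "ocid1.user.oc1..exampleuniqueID"

# Public half of the API signing key pair
with open("/path/to/api_key_public.pem") as f:
    public_key = f.read()

identity.upload_api_key(
    user_ocid,
    oci.identity.models.CreateApiKeyDetails(key=public_key),
)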
Step 6: Check the User's Capabilities

As a final check, ensure that the user has the capability to use API keys. You can also restrict the user's capabilities to API keys only, allowing the SDK but not the web console. You've now set up the Okta user to use the SDK and run scripts against the resources that the Oracle Cloud Infrastructure user has access to.

Tips

You know that a user is federated if the user name is prefixed with the name that you gave the identity provider. For example, if you called the Okta federation okta, your user would be okta/username. A filter also lets you narrow the list of local users by the federation provider they came from.

Only users assigned to mapped groups are replicated. If you see some users but not the Okta user that you want, then that user doesn't belong to a group that has been mapped from Okta to Oracle Cloud Infrastructure. If no users are being replicated, verify that you've followed the setup procedure and the mapping between the groups. If that doesn’t work, visit My Oracle Support to open a support ticket.

To use the SDK or CLI, the client that runs the CLI or SDK must have the matching private key material stored on the client machine. Secure the client machine appropriately to prevent inappropriate access.

Conclusion

This feature streamlines how Okta users can work with Oracle Cloud Infrastructure, especially through the CLI and SDK. Stay tuned for future feature announcements regarding federation.
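Returning to the cost-management scenario described at the start of this post, here is a minimal sketch of the tag-enforcement script, run with the Okta-provisioned user's API key. The compartment OCID is a placeholder, and the "Operations" tag namespace holding the CostCenter tag is a hypothetical example; in practice you would likely log or review instances before terminating them.

import oci

# Uses the profile configured for the Okta-provisioned user
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder

# Page through every instance in the compartment
instances = oci.pagination.list_call_get_all_results(
    compute.list_instances, compartment_id=compartment_id
).data

for instance in instances:
    if instance.lifecycle_state == "TERMINATED":
        continue
    # Defined tags are a dict of {namespace: {tag_key: value}};
    # the "Operations" namespace is a hypothetical example
    cost_center = (instance.defined_tags or {}).get("Operations", {}).get("CostCenter")
    if not cost_center:
        print(f"Terminating untagged instance {instance.display_name} ({instance.id})")
        compute.terminate_instance(instance.id)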


Developer Tools

External Health Checks on Oracle Cloud Infrastructure

When you run a solution in the public cloud, it's important to monitor the availability and performance of the service as end users experience it, to ensure access from outside the host cloud. To meet this need, a cloud provider must offer monitoring from a diverse set of locations within relevant regional markets around the globe. Oracle Cloud Infrastructure is pleased to announce the release of external health checks to help you monitor the availability and performance of any public-facing service, whether hosted in Oracle Cloud or in a hybrid infrastructure.

What Are Health Checks?

External health checks enable you to perform scheduled testing, using a variety of protocols, from a set of Oracle-managed vantage points around the globe to any fully qualified domain name (FQDN) or IP address that you specify. Health checks support HTTP and HTTPS checks for web applications, and TCP and ICMP pings for monitoring IP addresses. You can also choose high-availability testing, with tests running as frequently as every 10 seconds from each vantage point. On-demand testing, which allows for one-off validation or troubleshooting tests, is also available through a REST API. Additionally, the Health Checks service is fully integrated with the DNS Traffic Management service, enabling automated detection of service failures and triggering DNS failovers to ensure continuity of service.

Creating a Health Check

From the Edge Services menu, navigate to Health Checks. In the Health Checks area, click Create Health Check, and enter the details of your check in the dialog box:

1. Enter a name that will help you remember the purpose of this check when you return to this page.
2. Select the compartment that you want to add this check to.
3. Add the target endpoints that you want to monitor. The Targets field is prepopulated with suggested endpoints drawn from public IP addresses already configured in your compartment. You can select one of these endpoints or add a new one.
4. Select the vantage points from which you intend to monitor the targets. Vantage points are distributed around the globe, and we generally recommend selecting vantage points on the same continent as your application.
5. Select the type of test that you want to run: HTTP or HTTPS for a web page, or TCP or ICMP for a public IP address.
6. Set the frequency of the tests as appropriate to the level of monitoring that your service requires. Current options are every 30 or 60 seconds for basic tests; premium tests run at the higher frequency of every 10 seconds and incur an additional fee.
7. Add any tags to help you quickly find this check in the future.
8. Click Create Health Check.

After the check is created, a details page shows information specific to this check.

Retrieving Metrics

The Health Checks service delivers metrics directly to the Oracle Cloud Infrastructure Monitoring service, enabling you to query the input metrics, build reports, and configure alerts based on the external monitoring. This integration gives you the flexibility to identify service failures visible from locations around the globe. A full REST API lets you access up to 90 days of historical health check monitoring data. Within the Health Checks UI, each test also provides access to measurement data from the probes.
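The same check can be created and queried programmatically. Here is a minimal sketch using the OCI Python SDK; the compartment OCID and target are placeholders, and the result fields printed at the end are an approximation, so consult the Health Checks API reference for the full models.

import oci

config = oci.config.from_file()
healthchecks = oci.healthchecks.HealthChecksClient(config)

# Create an HTTPS check that runs every 30 seconds (basic tier)
monitor = healthchecks.create_http_monitor(
    oci.healthchecks.models.CreateHttpMonitorDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",  # placeholder
        display_name="www-availability",
        targets=["www.example.com"],  # placeholder target
        protocol="HTTPS",
        port=443,
        path="/",
        interval_in_seconds=30,
        is_enabled=True,
    )
).data

# Probe results back the metrics surfaced in the Console
results = healthchecks.list_http_probe_results(monitor.id).data
for result in results:
    print(result.vantage_point_name, result.target, result.is_healthy)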
Suspending Health Checks

If you need to temporarily suspend a health check (for example, while you perform maintenance on a service), you can do so by selecting the affected check and clicking Disable on the Health Check details page. The same operation is available through the API, as sketched below.

Next Steps

Health checks are simple to configure through a flexible UI or REST API. We recommend using the service to monitor any critical publicly exposed IP address or FQDN that you deliver, whether hosted solely in Oracle Cloud Infrastructure or across your hybrid environment. Health checks will be extended to include more types of tests across different protocols and with more configuration options. If there is a specific test that you would like us to support, let us know. If you haven't tried Oracle Cloud Infrastructure yet, you can try it for free.
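Following up on the suspension step above, here is a minimal sketch of disabling a check with the OCI Python SDK; the monitor OCID is a placeholder.

import oci

config = oci.config.from_file()
healthchecks = oci.healthchecks.HealthChecksClient(config)

# Disable (suspend) the check; re-enable later with is_enabled=True
healthchecks.update_http_monitor(
    "ocid1.httpmonitor.oc1..exampleuniqueID",  # placeholder monitor OCID
    oci.healthchecks.models.UpdateHttpMonitorDetails(is_enabled=False),
)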


Using File Storage Service with Container Engine for Kubernetes

Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. One of the best practices for containerized applications is to use stateless containers. However, many real-world applications require stateful behavior for some of their containers. For example, a classic three-tier application might have three containers:

- One for the presentation layer (stateless)
- One for the application layer (stateless)
- One for persistence, such as a database (stateful)

In Kubernetes, each container can read and write to its own file system, but when a container is restarted, all data is lost. Therefore, containers that need to maintain state store data in persistent storage such as a Network File System (NFS). What’s stored in NFS isn't deleted when a pod, which might contain one or more containers, is destroyed. Also, an NFS file system can be accessed from multiple pods at the same time, so it can be used to share data between pods. This behavior is really useful when containers or applications need to read configuration data from a single shared file system, or when multiple containers need to read from and write data to a single shared file system.

Oracle Cloud Infrastructure File Storage provides a durable, scalable, distributed, enterprise-grade network file system that supports NFS version 3 along with Network Lock Manager (NLM) for locking. You can connect to File Storage from any bare metal, virtual machine, or container instance in your virtual cloud network (VCN). You can also access a file system from outside the VCN by using Oracle Cloud Infrastructure FastConnect or an Internet Protocol Security (IPSec) virtual private network (VPN). File Storage is a fully managed service, so you don't have to worry about hardware installation and maintenance, capacity planning, software upgrades, security patches, and so on. You can start with a file system that contains only a few kilobytes of data and grow to 8 exabytes of data.

This post explains how to use File Storage (sometimes referred to as FSS) with Container Engine for Kubernetes (sometimes referred to as OKE). We'll create two pods, one on Worker Node 1 and one on Worker Node 2, that share the same File Storage file system. Then we'll look inside the pods and see how they are configured to use File Storage.

Prerequisites

- Oracle Cloud Infrastructure account credentials for the tenancy.
- A Container Engine for Kubernetes cluster created in your tenancy. An example is shown in the Container Engine for Kubernetes documentation.
- Security lists configured to support File Storage, as explained in the File Storage documentation.
- A file system and a mount target created according to the instructions in Announcing File Storage Service UI 2.0.

High-Level Steps

1. Create a storage class.
2. Create a persistent volume (PV).
3. Create a persistent volume claim (PVC).
4. Create a pod to consume the PVC.
Create a Storage Class

Create a storage class that references the mount target ID from the file system that you created:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: oci-fss
provisioner: oracle.com/oci-fss
parameters:
  # Insert mount target from the FSS here
  mntTargetId: ocid1.mounttarget.oc1.iad.aaaaaaaaaaaaaaaaaaaaaaaaaa

Create a Persistent Volume (PV)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: oke-fsspv
spec:
  storageClassName: oci-fss
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nosuid
  nfs:
    # Replace this with the IP of your FSS file system in OCI
    server: 10.0.32.8
    # Replace this with the Path of your FSS file system in OCI
    path: "/okefss"
    readOnly: false

Create a Persistent Volume Claim (PVC)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oke-fsspvc
spec:
  storageClassName: oci-fss
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # Although storage is provided here it is not used for FSS file systems
      storage: 100Gi
  volumeName: oke-fsspv

Verify That the PVC Is Bound

raghpras-Mac:fss raghpras$ kubectl get pvc oke-fsspvc
NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
oke-fsspvc   Bound    oke-fsspv   100Gi      RWX            oci-fss        1h

Label the Worker Nodes

Label two worker nodes so that a pod can be assigned to each of them:

kubectl label node 129.213.110.23 nodeName=node1
kubectl label node 129.213.137.236 nodeName=node2

Use the PVC in a Pod

The following pod (oke-fsspod) on Worker Node 1 (node1) consumes the file system PVC (oke-fsspvc):

# okefsspod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: oke-fsspod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs
      mountPath: "/usr/share/nginx/html/"
    ports:
    - containerPort: 80
      name: http
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: oke-fsspvc
      readOnly: false
  nodeSelector:
    nodeName: node1

Create the Pod

kubectl apply -f okefsspod.yaml

Test

After creating the pod, use kubectl exec to test that you can write to the file share:

raghpras-Mac:fss raghpras$ kubectl get pods oke-fsspod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
oke-fsspod   1/1     Running   0          33m   10.244.2.11   129.213.110.23   <none>

Write to the File System by Using kubectl exec

raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod bash
root@oke-fsspod:/# echo "Hello from POD1" >> /usr/share/nginx/html/hello_world.txt
root@oke-fsspod:/# cat /usr/share/nginx/html/hello_world.txt
Hello from POD1
root@oke-fsspod:/#

Repeat the Process with the Other Pod

Ensure that this file system can be mounted into the other pod (oke-fsspod2), which is on Worker Node 2 (node2):

# okefsspod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: oke-fsspod2
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs
      mountPath: "/usr/share/nginx/html/"
    ports:
    - containerPort: 80
      name: http
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: oke-fsspvc
      readOnly: false
  nodeSelector:
    nodeName: node2

raghpras-Mac:fss raghpras$ kubectl apply -f okefsspod2.yaml
pod/oke-fsspod2 created
raghpras-Mac:fss raghpras$ kubectl get pods oke-fsspod oke-fsspod2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
oke-fsspod    1/1     Running   0          12m   10.244.2.17   129.213.110.23    <none>
oke-fsspod2   1/1     Running   0          12m   10.244.1.9    129.213.137.236   <none>
raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod2 -- cat /usr/share/nginx/html/hello_world.txt
Hello from POD1

Test Again
You can also test that the newly created pod can write to the share:

raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod2 bash
root@oke-fsspod2:/# echo "Hello from POD2" >> /usr/share/nginx/html/hello_world.txt
root@oke-fsspod2:/# cat /usr/share/nginx/html/hello_world.txt
Hello from POD1
Hello from POD2
root@oke-fsspod2:/# exit
exit

Conclusion

Both File Storage and Container Engine for Kubernetes are fully managed services that are highly available and highly scalable. File Storage also provides persistent, durable storage for your data on Oracle Cloud Infrastructure, and it is built on a distributed architecture that scales both your data and access to it. Leveraging the two services together simplifies your workflows in the cloud and gives you flexibility in how you store your container data.

What's Next

Dynamic volume provisioning for File Storage, which is in development, creates file systems and mount targets automatically when file storage is requested inside the Kubernetes cluster. If you want to learn more about Oracle Cloud Infrastructure, Container Engine for Kubernetes, or File Storage, our cloud landing page is a great place to start.

Resources

Container Engine for Kubernetes workshop
File Storage Overview
Creating a Kubernetes Cluster


Developer Tools

The Intersection of Hybrid Cloud and Cloud Native Adoption in the Enterprise

Welcome to Oracle Cloud Infrastructure Innovators, a series of articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders.

Enterprises are turning in droves to hybrid cloud computing strategies, especially for testing and development, quality assurance, and DevOps activities. But before the majority of enterprises can move on to more advanced hybrid cloud use cases, they'll need to overcome some lingering challenges. I recently sat down with Bob Quillin, Vice President of Developer Relations at Oracle Cloud Infrastructure, to discuss Oracle’s cloud native direction. We discussed the biggest trends in hybrid cloud computing and the major obstacles that often stand in the way of adoption, such as skills shortages, resistance to cultural change, and rapidly evolving technologies. A condensed version of our conversation follows.

One of the common trends I'm seeing in just about every enterprise is the move toward hybrid cloud strategies. What are you seeing on that front?

Bob Quillin: We're seeing a lot of demand and interest in hybrid cloud. People have been trying out different models and patterns and testing out different technologies, and there have been some challenges. But one of the first major areas where we're seeing a lot of traction with customers is using the cloud for development, quality assurance, DevOps, and for running tests, with production still largely on premises. Many people feel more comfortable in their on-premises environment for certain production applications. But with a cloud native and DevOps environment in the cloud, you can spin up, spin down, and support a variety of testing, staging, and QA projects. It gives you a lot of elasticity, it's cheaper, and the test cases can run in containers. Sometimes people say, "Oh, I can't run my database applications in the cloud." Well, that isn’t the case for test and QA use cases. You can put them in a container, run the test, break it back down, and you're good to go. The disposability and quick reusability of these environments is where we're seeing a lot of success, and that is where a lot of people get started.

What's the next step on the road to a hybrid cloud strategy?

Quillin: The next step is getting to the point where you have a platform that gives you confidence that you can develop on the cloud or on premises, and that you have bidirectional portability, on premises to cloud or cloud to on premises. Ensuring that kind of application portability is the second major pattern we've seen. Disaster recovery and high availability deployments are the third approach. For example, people will mirror their application in the cloud to have it available, but they keep running the existing application on premises so they can fail over if they have a disaster event. Disaster recovery is one of the classic hybrid models. Those three areas are the ones we see being most successful right now.

Are organizations using hybrid strategies at all in more advanced areas?

Quillin: There are two more use cases we're seeing that are more advanced. One is a workload balancing application that's able to run both on premises and in the cloud. It lets you choose where to run each workload based on its regulatory requirements, governance, latencies, whether it’s a new or legacy workload, and so on. This approach requires a bit more sophistication and a little more targeting.
The other big one that people have been working toward for a long time is cloud bursting, where users can expand resources into the cloud dynamically, back and forth. Or users enlist some kind of federated automation where, based on performance or quality of service, they're able to choose where to run an application and have a federated, single view of all of it. These use cases have been highly desirable from an enterprise perspective, but what's been lacking is a platform and a framework that enable them.

Let's talk a bit more about challenges. I'm sure almost every organization that you deal with faces certain setup challenges in deploying, particularly to the hybrid model. What are you seeing?

Quillin: Cultural change and training continue to be inhibitors, and I think those roll up into an overall operational readiness challenge. Organizations are struggling with how to get started on this. At Oracle, what we're providing is an easier way to get started. The Oracle Cloud Native Framework provides a set of patterns and a model that gives the customer a supported blueprint for hybrid cloud. The next challenge is dealing with portability complexities related to a variety of underlying integration issues, including storage, networking, and the wide variety of Kubernetes settings and configurations. A related challenge, and this is one of the dark secrets of cloud native, is that there are a lot of "devil in the details" problems based on the rapid rate of change of Kubernetes, its quick release cadence, shifting APIs, and the general way the technology is rocketing forward. What you need is a vendor that supports you through these changes by supporting a bidirectional portability model. At Oracle Cloud Infrastructure, we're helping organizations through this process, and we're not going to leave them high and dry by using a proprietary approach. We're committed to open standards.

Many organizations think that open source is great. But there are also those who think that sourcing software from a single proprietary vendor can be cheaper because of the DevOps and maintenance costs associated with open source. What are your thoughts on that?

Quillin: All sorts of studies have been done on organizations that use an open source and DevOps culture, and they're always faster and more successful in terms of business agility. But also, the developers are happy. It's true that some on the business side of an organization would choose proprietary technology. But if you really want to recruit the best developers, you're going to want to work in open source, because that's the most marketable set of skills today. You get happier developers, you can recruit better, and you get the best development teams.

Oracle is a platinum member of the CNCF (Cloud Native Computing Foundation). How does the CNCF help in terms of enabling enterprises to overcome these challenges?

Quillin: I think the most important thing they've done, which is amazing to me, is enable the market by creating a standard cloud native platform based around Kubernetes. That's been their crowning achievement so far. If you remember back to just a few years ago, everyone had their own orchestration technology and it was all over the place. That's settled down now. The CNCF has created stability and enablement for the market.

What is next for Oracle and CNCF?

Quillin: The challenge is to continue that success. There's some next-level tooling that needs to come out.
Some of the fastest-growing projects in the CNCF are around monitoring and tracing and logging, around networking and storage, and around the best ways to manage a Kubernetes environment and connect it to existing storage and networking infrastructure. Kubernetes is growing, but what's really growing faster, which is a good sign, are the things that make Kubernetes more manageable, more secure, and more integrated into your existing infrastructure. Learn more about Oracle Cloud Infrastructure's cloud native technologies.


Oracle Cloud Infrastructure

Security in the Cloud: Are Audits and Certifications Really Enough?

For most organizations, the process of verifying that cloud providers manage data securely involves looking at security and compliance certifications and reading reports from independent, third-party auditors. At first glance, this approach makes sense. After all, organizations need some way to confirm that sensitive customer, supplier, and financial information is adequately protected. They also need to verify that data is stored and handled in compliance with applicable security requirements like the Health Insurance Portability and Accountability Act (HIPAA). Third-party audits and certifications can be a big help in that regard. But although audits and certifications provide some level of assurance that cloud providers and other enterprises are meeting certain requirements related to security and compliance, they don't always go far enough. For example, some of the biggest security breaches in the last several years happened to vendors with active Payment Card Industry Data Security Standard (PCI DSS) certifications.

The fact that organizations subject to regular security audits can experience breaches shows that certifications and audits aren't a substitute for vetting a cloud's security architecture and controls framework for sound design. Organizations that want to be confident that data stored in the cloud is appropriately secured should take some additional steps. Here are a few things that you can do to verify that a cloud provider places a high priority on security.

Understand the Cloud's Architecture

You can tell a lot about a cloud provider's approach to security by looking closely at their cloud's architecture. How was the service built? Was it designed using security-first principles? Oracle Cloud Infrastructure, for example, was designed with a security-first focus, isolating customer resources such as network, compute, and data. This single-tenant approach increases the granularity of control and reduces the attack footprint. It also results in predictable, superior performance by eliminating problems caused by "noisy neighbors."

Oracle Cloud Infrastructure users can also create their own virtual cloud networks (VCNs). A VCN is a customizable and completely private network that gives you full control to create an IP address space, subnets, route tables, and stateful firewalls. You can also configure inbound and outbound security lists to protect against unwarranted access and malicious users.

Clarify Roles and Responsibilities

Migrating to the cloud means shifting to a shared-responsibility model for security. This model is often a source of confusion for cloud adopters, as highlighted in a Cloud Threat Report jointly authored by Oracle and KPMG. As you move to the cloud, understanding your cloud use cases and how they affect the division of security roles is hugely important. Before you select a cloud provider, begin documenting your cloud use cases by making a comprehensive list of your security requirements. This helps you set priorities and guide conversations with providers. When negotiating with providers, consider using Standardized Information Gathering (SIG) questionnaires from Shared Assessments or the Consensus Assessments Initiative Questionnaire (CAIQ) from the Cloud Security Alliance. And ensure that security roles and responsibilities are clearly defined in contracts and service level agreements (SLAs). Not all vendors offer availability and performance SLAs, for instance.
It's also important to remember that the customer's level of responsibility for security shifts depending on which types of cloud services are being used. For example, Oracle customers who choose bare metal cloud deployments have extensive control over their cloud infrastructure. Therefore, they have far greater responsibility for things like identity and access management, password management, firewall configuration, and other controls.

Learn About the Cloud Provider's Culture

Security should be integral to the culture and everyday activities of a cloud provider; it should never be an afterthought. Ask the following questions to determine whether a cloud provider truly embraces a culture of security:

- How do you ensure that engineers know their security responsibilities?
- How do you enable engineers to perform their security-related tasks, and how do you measure their results?
- What are the processes and technologies for reviewing new code and checking for vulnerabilities, and how do you learn from the things that you discover?
- What kind of penetration testing do you use, and how often are tests run?
- Do you give security issues a high priority at daily and weekly stand-ups and meetings?
- Have you ever made a tough decision between shipping a product to meet a commitment and fixing a security bug?

As a longtime security professional and someone who has worked with many IT security engineers over the years, I can attest that we all have good intentions and want to keep customer data private and secure. But it takes more than good intentions to do a job correctly. We must embrace experience and innovation to continually improve the security architecture and ensure the maximum effectiveness of protection measures. Learn more about Oracle Cloud Infrastructure security.


Product News

Improving the User Experience for the Oracle IaaS and PaaS Console

As a product manager who's focused on improving the user experience for Oracle IaaS and PaaS customers, I love getting feedback about how we can make your jobs easier. Even better, I love being part of the team that helps bring that feedback to life in the form of enhancements to our console. We originally unveiled our new console homepage in October 2018, when we updated our look and feel and made it easier for you to find the information that you need. Today, we're excited to build on that momentum by introducing key console enhancements that will further improve how you use, manage, and get support for your IaaS and PaaS services.

Enhanced Service Announcements

We've enhanced how we deliver service-related announcements directly in the console. All users with the right set of permissions, not just the tenant administrator, can stay up to date on relevant service updates or planned changes. Announcements for the highest-impact events appear as banners at the top of the console. You can click them for details or dismiss them when you're already up to speed. Additionally, you can view all announcements by clicking the announcement icon (it looks like a bell) on the top bar of the console. You can filter by the type of announcement, and if you're searching for a specific announcement made during a certain time period, you can filter by date range.

Improved Support Experience for Service Limit Increases

We've made it simpler for you to see when you're about to reach your service limits, and we've added the ability to request increases within the console. Just enter your contact information, the service category (for example, Compute), and the resource for which you want to request an increase. For most requests, we offer a one-day response window.

Relevant and Contextual Help

We've enhanced our help functionality by making it more relevant to the service that you're currently provisioning or managing. First, we analyzed the most common issues that our users need help with, by service. Then we curated short lists of the top help topics and focused on those in the help navigation window. For example, if you're provisioning compute, all help topics are relevant to compute. And when you move on to provisioning Autonomous Data Warehouse, all help topics focus on the most common guidelines for that service. Over time, we will add these links across all services so that you can quickly find what you need.

Improved Cost and Billing Management Capabilities

A new cost analysis dashboard makes it easier for administrators and controllers to stay on top of usage and costs. You can see your latest usage charges at a glance. If you're trying us out with our US$300 free trial, you can see how many trial credits you've used and the days left in your trial. You can filter by specific date ranges, and filter by compartment and tags to analyze usage and costs by department and project. Additionally, you can expand by service to analyze how much each service has been used over time. For example, if you tag resources used for different development projects, you can easily filter and track service usage by development team.

This is only the beginning. We'll be rolling out more user experience enhancements soon. As always, we want and appreciate your feedback, so keep it coming. If you're new to the Oracle Cloud Platform, we invite you to see how easy it is to get started with Oracle Cloud services with our US$300 free trial.


Events

Learn the Benefits of Running PeopleSoft on Oracle Cloud

Hundreds of PeopleSoft customers have moved their PeopleSoft applications to Oracle Cloud Infrastructure, and many more are planning their move now. The question that customers face is not whether they should move PeopleSoft to the cloud, but when and how they should do it. PeopleSoft is supported and viable at least until 2030. Continued investment in product development is realized in the quarterly PeopleSoft Update Manager (PUM) image updates that deliver new functionality requested by customers. This continued innovation means that you can enjoy the latest features in an application that you have relied on for a long time.

The benefits of Oracle Cloud Infrastructure are proven: it improves performance, reduces costs, and automates lifecycle management. Customers move PeopleSoft to Oracle Cloud Infrastructure to accomplish the following goals:

- Maximize their current investment in PeopleSoft apps, customizations, and add-ons by running them on Oracle Cloud Infrastructure at a lower cost
- Exit the data center business and focus on business enablement instead, deliver PeopleSoft implementations with agility and speed, and deliver upgrade and update projects with 40 to 70 percent savings
- Improve business continuity with Oracle Cloud Infrastructure-based disaster recovery, with significantly better Recovery Point Objective (RPO) and Recovery Time Objective (RTO) metrics than onsite, at a reduced cost
- Automate PeopleSoft lifecycle management tasks such as instance deployment, cloning, tools patching, tools upgrades, backup, monitoring, and much more, all backed by Oracle Cloud Infrastructure’s industry-leading SLA for IaaS and PaaS services
- Leverage native Transparent Data Encryption (TDE) to secure PeopleSoft application data at rest and in motion, along with end-to-end application security, 90 percent faster environment setup, line-of-business (LOB) self-service, and more streamlined PeopleSoft lifecycle management

Don’t believe it? At the upcoming PeopleSoft on Oracle Cloud event in Atlanta, hear directly from Oracle, Care.org (a PeopleSoft customer), and their systems integrator, Astute Business Solutions, about how the cloud enables innovation at a much faster pace.


Developer Tools

Oracle Simplifies Cloud Native Development

Welcome to Oracle Cloud Infrastructure Innovators, a series of articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders.

The majority of enterprises are ready to join the cloud native development movement. But some stubborn obstacles, such as resistance to cultural change, complexity, and skills shortages, continue to stand in their way. I recently sat down for a conversation with Bob Quillin, Vice President of Developer Relations at Oracle Cloud Infrastructure, to talk about Oracle’s cloud native direction. We discussed cloud vendor lock-in and other difficulties enterprises face when moving to cloud. We also talked about creating a sustainable, open standards-based strategy to overcome the challenges to cloud adoption. A condensed version of our conversation follows.

How does Oracle Cloud Infrastructure support concepts like serverless programming and cloud native development?

Quillin: Oracle last year started the Fn Project, an open source, container-native, serverless platform. It's one of the first serverless solutions that lets you run serverless applications basically anywhere, whether that's in the cloud, on-premises, or both. It supports any programming language, and it's very extensible. It was developed by the serverless group from Iron.io that was hired into Oracle a couple of years ago. Instead of creating a proprietary service, we built out the Fn Project over the last year, and contributors have been adding new features to the platform. For example, there's a new CloudEvents project that came out recently and is being hosted by the Cloud Native Computing Foundation (CNCF) as a sandbox project. It focuses on how events and serverless functions work together, and we're one of its early adopters.

What else can customers expect from the Fn Project?

Quillin: One thing we just rolled out at the KubeCon conference in Seattle is a product based on the Fn Project called Oracle Functions. It's a fully managed, scalable, on-demand, functions-as-a-service (FaaS) platform that runs on top of Oracle Cloud Infrastructure, all based on the Fn engine. It's unique in that regard: you can still use the Fn Project capabilities on your laptop or on any other cloud, for example. But if you want a managed service, like an AWS Lambda but better, we offer Oracle Functions. Unlike Lambda, Oracle Functions is an open solution and won't lock you in with proprietary APIs. You can run anything you develop on Oracle Functions anywhere else with the Fn Project. So it's really an environment for deploying and executing any functions-based application, and there's no need to manage the infrastructure. There are servers underneath it all that Oracle manages for you, so it isn't truly "serverless" as people say, but you don't have to worry about the servers. That's one of the huge benefits, because it makes deploying managed functions simple. It's DevOps friendly and Docker-based, so each function and serverless component is a container, a truly container-native approach. You can deploy it using your favorite container management solution. In particular, it works very well with Kubernetes, but also with other types of container platforms. In terms of serverless products, almost all other solutions out there are proprietary.
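To make the function model concrete, here is a minimal sketch of an Fn function handler using the Fn Project's Python FDK (the fdk package); the JSON greeting payload is illustrative. Scaffolding like this is generated by the Fn CLI with fn init --runtime python.

import io
import json

from fdk import response


def handler(ctx, data: io.BytesIO = None):
    # Read an optional JSON body such as {"name": "world"}
    name = "world"
    if data is not None:
        try:
            body = json.loads(data.getvalue())
            name = body.get("name", name)
        except ValueError:
            pass  # fall back to the default greeting
    # Return an HTTP-style response through the FDK
    return response.Response(
        ctx,
        response_data=json.dumps({"message": f"Hello, {name}!"}),
        headers={"Content-Type": "application/json"},
    )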
But it's always tough to teach an old dog new tricks. What major challenges do traditional enterprises face as they try to become successful cloud native companies?

Quillin: That's a good question. The CNCF ran a survey a couple of months ago asking about the big challenges that organizations face with container technology. The top three challenges were cultural change for developers, complexity, and lack of training. We've made some amazing progress as an industry, particularly over the last year and a half, but many developers and teams still feel left behind. As the culture changes and as we push DevOps forward, they're looking for ways to connect with these new technologies. But they're also responsible for maintaining and using existing platforms, like WebLogic or database applications, and you can't just "lift and shift" those overnight. I've talked to CIOs at enterprises that are going through this change, and it may be easy to move a five- or ten-person team, but moving a thousand-person or multi-thousand-person team is challenging. Then combine that challenge with the complexity of all the open source options and all the solutions available to you. If your choice is between 5,000 different solutions and a single vendor that offers you five, you're between a rock and a hard place: either too much choice or not enough choice.

How are enterprises addressing this issue of too much choice versus not enough choice?

Quillin: Sometimes they address it by saying, "Well, I'm just going to choose one cloud. I'm going to single-source it." Unfortunately, that approach has left many people locked in. What they find is that the fastest solution is not always the best solution. They start using closed APIs and proprietary services, and inch by inch, application by application, they get more and more locked in. The whole value proposition of open source is choice and portability: being able to take an application and move it wherever is appropriate for that workload, for that geography, for your business. So if you're going to choose open source technology, you really need to embrace that and push your vendors. As part of the selection process, you should ask, "Is this going to lock me in?" What we're seeing now is that people want to go hybrid cloud or multicloud, but their single-cloud vendor strategy won't let them.

Can you tell me a little bit about the Oracle Linux Cloud Native Environment?

Quillin: The Oracle Linux Cloud Native Environment is a software stack that is available on-premises and also runs in the cloud. It is a curated set of open source Cloud Native Computing Foundation (CNCF) projects that can be easily deployed, have been tested for interoperability, and come with enterprise-grade support. It's included with an Oracle Linux Premier Support subscription at no additional cost, and it's unique in that it's driven by true open source technologies, with no proprietary approaches to lock you in. If you develop on top of the Oracle Linux Cloud Native Environment, you can run that application anywhere. We're also combining it with the Oracle Cloud Infrastructure cloud native services, so now you have a really strong one-two punch. The whole solution is called the Oracle Cloud Native Framework.
The Oracle Cloud Native Framework consists of the Oracle Cloud Infrastructure cloud native services, which include Kubernetes, the Oracle Cloud Infrastructure Registry, and a whole set of new observability and application definition, development, and provisioning technologies delivered as managed services right on top of our Generation 2 cloud.

How does this help the enterprises that are struggling to go cloud native?

Quillin: We've talked about the teams being left behind by complexity and cultural change in the push to cloud. The Oracle Cloud Native Framework provides a pattern, a model by which these teams can easily move applications back and forth between on-premises and cloud, and it's a sustainable strategy. For teams that lack training, or where complexity is slowing adoption, the services in Oracle Cloud Infrastructure are offered as managed cloud services. For example, many, if not most, development teams did not become experts in managing Kubernetes or deploying Docker over the last two years. These teams can go directly into a managed cloud environment where all of that complexity is handled for them. You don't have to become a Kubernetes expert; you can just run the application and understand how to build it to run on top of the platform. And you don't have to run the underlying infrastructure, the clusters, the cluster management, and all of the tools and techniques that go along with that. The Oracle Cloud Native Framework provides a truly inclusive, sustainable strategy for these developers, and it's all based on open CNCF technologies. Learn more about the Oracle Cloud Native Framework today.


Oracle Cloud Infrastructure

Learn the ABCs of Data Science Concepts

Data scientists are some of the most sought-after professionals on the planet. These highly skilled individuals work with all types of organizations, using scientific processes and algorithms to unlock valuable business insights hidden within mountains of structured and unstructured data. But sometimes it can feel like data science concepts are cloaked in mystery. If you're one of the many nontechnical professionals increasingly being asked to collaborate with data science teams, you might be wondering: What are those scientific processes? And how do data scientists work their magic? The good news is that the teams at Oracle Cloud Infrastructure and DataScience.com have the answers. Our new ebook, The Data Science ABCs, demystifies data science concepts, making it easier for nontechnical professionals and data scientists to work together productively. Oracle acquired DataScience.com last year to provide customers with a single data science platform that leverages Oracle Cloud Infrastructure. DataScience.com centralizes data science tools and projects in a fully governed workspace and removes the barriers to deploying machine learning models in production.

Data Science Is Everywhere

Over the last decade, data science has made its way into nearly every industry, from finance and hospitality to gaming and manufacturing. Data scientists use their skills to help agriculture businesses improve crop yield and to help customer service agents reduce customer churn. They're even responsible for the automated recommendation engines that keep us glued to Netflix and improve our shopping experience with Amazon and other online retailers. The list of ways that data scientists provide value to businesses goes on and on, and that's one of the reasons why data scientist is the most promising career of 2019, according to new research from LinkedIn. The networking site for business professionals found that job openings for data scientists in the US grew 56 percent over the past year. And career-search site Indeed.com reports that data scientist job listings rose 75 percent from 2015 to 2018. What does it all mean? If you haven't already encountered a data science team at your organization, there's a strong possibility you will someday soon.

Learn Data Science Concepts from A to Z

Did you ever wonder how data scientists use historical information to predict the future? Or how graphics processing units helped advance the field of machine learning? Have you ever thought about neural networks and how they mimic the structure of biological nervous systems to find patterns and meaning in data? Our new ebook doesn't just cover these and other data science concepts from A to Z; it also puts them into historical context and offers real-world examples of data science in action. For those interested in digging even deeper into data science, the ebook provides links to helpful resources and related information. To learn the data science lingo and discover how data scientists are bolstering businesses around the globe, be sure to read our new ebook today.


Oracle Cloud Infrastructure

Part 4 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform

The concluding post of this series, in which we mapped Oracle's seven pillars of a trusted computing platform to Oracle Cloud Infrastructure security capabilities, covers a few services that were introduced or enhanced since the publication of the earlier posts (Part 1, Part 2, and Part 3), along with relevant services from the Oracle Cloud Security portfolio for enterprises.

New and Enhanced Features

First, let's explore the major new services and features that enhance the security of customer environments on Oracle Cloud Infrastructure.

Encrypt Your Data Using Keys You Control

In October 2018, we announced the release of Oracle Cloud Infrastructure Key Management, a managed service that enables customers to encrypt their data by using keys that they control. Customers with the following requirements should consider using Key Management:
- Customers who want to centralize encryption management of their data in the public cloud
- Customers who currently use hardware security module (HSM) based key management in their on-premises data centers and want a similar type of secure service for encryption key management in the cloud
- Customers who want full control of encryption key management for their public cloud assets
- Customers who want a public cloud key management system backed by the cloud service provider's HSMs that meet the Federal Information Processing Standards (FIPS) 140-2 Security Level 3 certification
For more information, see the Key Management documentation.

Ensuring Secure Network Isolation Between Departments with Transit Routing Through a Hub VCN

This feature involves connecting a customer's on-premises network to a virtual cloud network (VCN) with either Oracle Cloud Infrastructure FastConnect or an IPSec VPN. The following is a basic use case for transit routing: A customer organization has different departments, each with its own VCN. The customer's on-premises security information and event management (SIEM) tool needs access to the applications and servers running in the different VCNs, but the customer doesn't want the administrative overhead of maintaining a secure connection from each VCN to the on-premises network. Instead, the customer wants to use a single FastConnect or IPSec VPN. Here is how transit routing works: one of the VCNs acts as the hub (VCN-H) and connects to the customer's on-premises network by way of FastConnect or an IPSec VPN. The other VCNs are locally peered with the hub VCN, and traffic between the on-premises network and the peered VCNs transits through the hub VCN. The VCNs must be in the same region but can be in different tenancies. For details, see the transit routing documentation.

Isolate Resources by Teams and Projects with Nested Compartments

Nesting compartments lets you isolate resources as needed based on your corporate structure or hierarchy. Nesting enables a managed service provider, or a customer's central IT department that provides IT as a service to the business units, to grant granular rights by assigning policies that correspond to nested compartments. Consider the following use case: The central IT network team is responsible for managing network elements such as VCNs across projects. Central IT would like to enable the App/Dev project teams to create subnets in the prebuilt VCNs on demand, through their CI/CD pipeline, during application and associated compute/storage deployment. Central IT would also like to hide certain projects based on business units. The following diagram depicts the nested compartment architecture that the Central IT team could create to grant access to specific groups.
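As an illustration of the granular rights described in this use case, here is a hedged sketch of IAM policy statements. The policy syntax is standard Oracle Cloud Infrastructure IAM; the compartment and group names are invented for the example:

    Allow group CentralIT-NetworkAdmins to manage virtual-network-family in compartment Projects
    Allow group ProjectA-AppDev to manage subnets in compartment Projects:ProjectA
    Allow group ProjectA-AppDev to read vcns in compartment Projects:ProjectA

The first statement keeps full networking control with central IT across every project compartment. The second lets the hypothetical ProjectA team create and manage subnets only within its own nested compartment, addressed by the Projects:ProjectA path. The third lets that team see, but not modify, the prebuilt VCNs.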
I'd appreciate your comments below if you're interested in a detailed blog post further explaining this use case.

Enterprise Cloud Security Offerings

Now let's move on to some of the enterprise-scale cloud security offerings from Oracle that can be consumed as platform as a service (PaaS). Customers can use these services to fulfill their portion of the shared security responsibility model.

Enhance Your Security Controls with Oracle Identity Cloud Service

Oracle Identity Cloud Service (IDCS) enables enterprises to seamlessly connect their users to cloud-based and on-premises applications. IDCS integrates tightly with on-premises systems such as Active Directory, as well as Oracle's IAM, to extend identities to the cloud. IDCS provides administration capabilities in the cloud, such as user/group and application administration, including provisioning and deprovisioning of applications. It also provides access management capabilities such as single sign-on, strong authentication, and adaptive risk-based policies. Finally, it is the platform upon which governance capabilities like access requests, certifications, and workflows will be built for cloud applications. IDCS acts as the identity foundation for the Oracle Cloud: if you purchase any service in the Oracle Cloud, an instance of IDCS is automatically created for your tenancy, and all of your users are managed in it. For details, see the IDCS service page.

Maintain Security Control and Detect Threats with Oracle Cloud Access Security Broker

Oracle Cloud Access Security Broker (CASB) Cloud Service is a multimode cloud access security broker that provides advanced threat analytics using user behavior analytics (UBA) and third-party feeds, configuration seeding, monitoring and alerts, and shadow IT discovery. For details, see the CASB service page. Following are the key features of Oracle CASB on Oracle Cloud Infrastructure:
- Policy alerts: Alerting and notifications on policy changes to resources
- Security controls: Detection of insecure settings of Oracle Cloud Infrastructure resources
- Threat detection: Detection of user risks and threats using machine learning (ML) analytics
- Key security indicator reports
- Data exports and threat remediation: Enterprise integrations with SIEM or ITSM systems
The following sections provide details.
CASB Policy Alerts

Following are some examples of these alerts and notifications on policy changes to Oracle Cloud Infrastructure resources:
- Compute images: Updates to or removal of images
- Compute instances: Launch or termination
- DB systems: Launch or termination actions
- Identity groups and policies: Creation, updates, and deletion
- Identity users: Lifecycle actions, API key actions, login failures, and resets
- Network load balancers: Creation, updates, and deletion
- Network security lists: Creation, updates, and deletion
- Network VCNs: Creation, updates, and deletion
- Object storage: Creation and deletion, and preauthentication requests
- Storage block volumes: Attachment and export/import events

CASB Security Controls

Following are some examples of the controls for detecting insecure settings of Oracle Cloud Infrastructure resources:
- Compute: Instances with public IP addresses or public images, and untagged resources
- Users and IAM: User groups with too many or too few users, overly broad IAM policies, and use of API keys
- Storage: Unattached storage volumes and public storage buckets
- Network: VCNs or load balancers with no inbound security lists or an attached internet gateway, insecure security lists with open ports for telnet, FTP, Finger, or other attack-vector protocols, and imminent expiration of load balancer certificates

CASB Threat Detection

CASB uses ML-based analytics to detect the following threats in Oracle Cloud Infrastructure:
- IP hopping
- Brute-force attacks
- User behavior risks and anomalies
- Admin behavior risks
- Audit activity: number of successful or failed logins per day, network IP addresses and mapped geolocations, time of access, and endpoint context (OS, browsers)
- External threat feeds: geolocation feeds and IP address reputation

CASB Key Security Indicator Compliance Reports

Following are some of the out-of-the-box reports used for Oracle Cloud Infrastructure:
- API Key Roll Over report: Key state and rollover status for API keys
- Privileged IAM Group Membership report: Users added to or removed from groups
- Privileged IAM Users and Groups report: Actions targeting users and groups
- Public Buckets report: Details on publicly accessible buckets
- Swift Passwords report: Information about Swift passwords

CASB Data Exports and Remediation

CASB provides the following enterprise integrations with SIEM or ITSM systems:
- Manual incident management: Creation and management of incidents generated from reported events
- External incident management: Integration with ServiceNow
- SIEM integration: Export of events to Splunk and QRadar
- Export to CSV

Security Monitoring with Oracle Management Cloud

Oracle Management Cloud is an integrated suite of capabilities that enables customers to:
- Easily monitor applications end to end, reduce false alerts, and send notifications where possible
- Quickly troubleshoot issues with all the data needed to solve the problem at that time: metrics, logs, and topology
- Keep applications secure and compliant
- Automatically remediate the most common problems, whether they are security or management events
- Analyze data over a longer period of time to spot trends and issues
Regardless of whether the application is running on-premises, in the Oracle Cloud, or in anyone else's cloud, and on any technology stack, customers can use these capabilities individually or all together. This unified platform brings a rich set of potentially interrelated data to a single place, giving you a complete view of entities and topology.
For details, see the Oracle Management Cloud service page.

Oracle Management Cloud Security Features

This post highlights the Oracle Management Cloud features related to security, such as monitoring security events and user behavior, and catching data access (SQL-based) anomalies at the user, group, database, and application level. Typical security monitoring tools can tell you that a user's access to a database host looked normal. The Security Monitoring and Analytics (SMA) module goes deeper and can tell you that the query the user ran was abnormal for that user, based on behavioral analysis, providing benefits like a broader threat-detection range. SMA can detect nuanced anomalies through multidimensional baselines (for example, user logins by location, time, and host). SMA also provides the following security features:
- Addresses scalability problems through our platform (a next-generation service with autoscaling) and visualization problems through intelligent security visualization (for example, timelines).
- Enables faster investigation with session awareness and kill-chain visualization (for example, account hijacking). User context is rarely present in logs, so SMA determines the underlying user by stitching together DHCP, IDM, VPN, and other activity context. It then visualizes threats at the user level (rather than the account level), dramatically reducing manual investigative work and shortening time to detection.
- Helps Security Operations Center (SOC) analysts understand internal and external threat vectors by ensuring security visibility across a heterogeneous, evolving infrastructure. SMA can collect and analyze any log or other data from the IT stack on bare metal, in private clouds, or in SaaS, PaaS, and IaaS infrastructure. SMA can also be used to automate SOC runbooks with out-of-the-box, vendor-independent security and compliance content (rules, reports, and so on).
- Categorizes events so that content is future-proofed against changes in vendors and products (that is, a failed login is just that, regardless of the device type and vendor). This results in actionable insights, automated remediation, and faster time to value.
- Uses underlying ML algorithms to leverage continuous threat intelligence context (URL classification, URL/IP reputation) in the detection and triage of threat indicators. Customers can bring their own threat intelligence feed or leverage Oracle's out-of-the-box feed for early awareness of threat indicators, reducing false negatives by applying the latest threat context as activity happens.
- Works with Oracle Management Cloud Orchestration to continually harden systems by triggering runbook automation (account lockouts, and port or other configuration changes). SMA can hook its correlation and detection logic into any instrumentation framework so that the appropriate SOC remediation procedures for a given threat type can be automated, resulting in faster mean time to remediation.

Conclusion

The primary goal of this series is to provide guidance for customers to securely develop, migrate, and run workloads on Oracle Cloud Infrastructure. The posts throughout the series showed how to use various Oracle IaaS and PaaS services to protect data, achieve required compliance, and secure application environments across Oracle Cloud Infrastructure.
Links to other relevant Oracle Cloud Infrastructure security blog posts:
- Guidance for Setting Up a Cloud Security Operations Center (cSOC)
- Security Checklist for Application Migrating to Oracle Cloud Infrastructure
- Security Patterns for Customers Achieving PCI Compliance on Oracle Cloud Infrastructure


Developer Tools

Usability Improvement: Consistent Device Path Names and Ordering for Block Volume Attachments

Today we released an exciting service update that ensures block volume device path names and hierarchies stay consistent and persist across reboots. This usability update simplifies the management and ordering of block storage devices and improves the user experience for many of the workloads that run on Oracle Cloud Infrastructure. Now when you attach a volume to an instance, you can select a device path name from a drop-down list. Device path names have the format /dev/oracleoci/oraclevdxx, where the value of xx ranges from a to af, corresponding to up to 32 volume attachments per instance. This new format is compatible and aligned with the direction of the open source community. It's supported on all Linux OS flavors available on Oracle Cloud Infrastructure; however, it's not supported for Windows OSs, legacy OSs and versions, or customer-provided OS images that aren't enabled for this feature. Like other service features, this update is available in the Oracle Cloud Infrastructure Console, CLI, SDK, and Terraform.

Specifying a device path name for a volume attachment takes just a click in the Oracle Cloud Infrastructure Console. In the Compute section of the Console, access the instance in the appropriate compartment. When you attach a volume to the instance, select a device path from the drop-down list, and then confirm the device path for the attached volume. To see the device path names on the instance, use the lsscsi and ll commands. For example, on an instance where the /dev/oracleoci/oraclevdb device path points to the /dev/sdb legacy path, the LUN (4:0:0:2) for /dev/sdb is 2, which corresponds to the b in /dev/oracleoci/oraclevdb.

Following are some examples of how to use consistent volume names on Linux-based systems:
- Creating partitions. Previously: fdisk /dev/sdb. Now: fdisk /dev/oracleoci/oraclevdb
- Creating a file system. Previously: /sbin/mkfs.ext3 /dev/sdb1. Now: /sbin/mkfs.ext3 /dev/oracleoci/oraclevdb1
- /etc/fstab changes. Previously: UUID=84dc162c-43dc-429c-9ac1-b511f3f0e23c /oradiskvdb1 xfs defaults,_netdev,noatime 0 2. Now: /dev/oracleoci/oraclevdb1 /oradiskvdb1 ext3 defaults,_netdev,noatime 0 2
- Mounting a file system. Previously: mount /dev/sdb1 /oradiskvdb1. Now: mount /dev/oracleoci/oraclevdb1 /oradiskvdb1

Watch for announcements about additional service updates that continue to streamline the storage management experience. Let us know how we can help ease your cloud management or if you want more information about any topic.
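Putting those pieces together, here is a minimal shell sketch of setting up a newly attached volume end to end using the consistent device path. It assumes a volume is already attached to the instance with the device path /dev/oracleoci/oraclevdb; the mount point name is illustrative:

    # Partition the disk using the consistent device path
    sudo fdisk /dev/oracleoci/oraclevdb

    # Create a file system on the new partition
    sudo /sbin/mkfs.ext3 /dev/oracleoci/oraclevdb1

    # Mount it, and make the mount persist across reboots
    sudo mkdir -p /oradiskvdb1
    sudo mount /dev/oracleoci/oraclevdb1 /oradiskvdb1
    echo '/dev/oracleoci/oraclevdb1 /oradiskvdb1 ext3 defaults,_netdev,noatime 0 2' | sudo tee -a /etc/fstab

Because the /dev/oracleoci path is stable across reboots, the fstab entry no longer needs to rely on UUIDs or on legacy /dev/sdX names that can be reordered.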


Strategy

Cloud Computing Predictions for 2019: More Migrations, Onward with Openness

It's an interesting time in the cloud computing industry. Cloud is an established technology, and it's the default deployment model for new applications and services. But at the same time, the majority of enterprise workloads still live on-premises. As the market approaches a crossroads, we asked experts from Oracle Cloud Infrastructure and the industry at large to share their predictions for 2019.

Karan Batta, Director of Product Management, Oracle Cloud Infrastructure (@karan_batta)

For a lot of organizations, AI and machine learning are still a science experiment. But in 2019, most of them will start to implement these technologies in production. That will drive a lot more usage of high-performance computing (HPC) in the cloud. At the same time, companies have spent millions of dollars to build bespoke, highly specific data centers for their HPC workloads, and that hardware is now coming up on the end of its depreciation cycle. Most of these businesses want to move to the cloud because they don't want to keep buying new hardware. The pace of innovation is so quick now that cloud providers are introducing new hardware every year; on-premises shops can't keep up with that.

Mark Cliff Lynd, Managing Partner, Relevant Track (@mclynd)

Enterprises are struggling to manage and maintain their hybrid cloud environments. Vendors will be needed to fill the automation, orchestration, management, and security gaps to ensure a seamless environment that supports their growth. The use of containers and orchestration products like Kubernetes will grow, and security offerings will need to integrate and collaborate accordingly.

Bob Quillin, Vice President, Developer Relations, Oracle Cloud Native Labs (@bobquillin)

Enterprises will choose inclusive solutions that can cover cloud and on-premises, modern and traditional, dev and ops. Managed cloud native services will replace do-it-yourself models so enterprises can leapfrog learning how to administer and maintain complex, rapidly changing platforms like Kubernetes and instead start using them immediately. Truly open and community-driven solutions in areas such as serverless will replace proprietary cloud services. These will allow enterprises to embrace open source, hybrid cloud, and multicloud options, as opposed to the single-source cloud model that has left users with cloud lock-in issues, diminished choices, and spiraling costs.

Andy Thurai, Emerging Technology Strategist and Evangelist, Oracle Cloud Infrastructure (@AndyThurai)

Open source software and the pay-as-you-go model will dominate the cloud industry in 2019. This will lead to newer licensing models for all enterprise software. Instead of pricing based on the cores, servers, and machines the software runs on, the market will demand pricing based on the volume of data, time of usage, and, most importantly, business value. Lower combined costs for software, hardware, infrastructure, storage, and so on will lead to higher operational efficiency. This will free up enterprises to spend more time and energy on experimenting with their data, business models, and expansion into adjacent areas.

Sophina Kio-Lawson and Lilian Douglas, Cofounders, SheSecures (@she_secures)

There is going to be a huge demand for more public cloud services from different industries, from the telecommunications sector to financial institutions, health sectors, and beyond. Many organizations have run into losses from managing their super-expensive physical data centers.
Andrew Reichman, Director of Product Management, Oracle Cloud Infrastructure (@reichmanIT)

The industry's shift from infrastructure as a service to platform as a service will continue in 2019. Cloud vendors are moving up the stack to get to stickier solutions and offer end users more automation, which adds value faster. As this happens, storage will be more closely tied to the workload it supports. Instead of customers selecting and configuring storage on their own, higher-level solutions will allow customers to better tailor storage services to the needs of the workload itself. Additionally, there will be deeper usage of object storage. This has the potential to ease capacity concerns and reduce the effort required to manage and change workload configurations, because storage management can be coded directly into an application.

Laurent Gil, Security Product Strategy Architect, Oracle Cloud Infrastructure (@laurentgil)

Enterprise multicloud strategies are going to have some unintended consequences. As enterprises accelerate their move to the cloud over the next two to three years, their security operations centers will have to become fluent in powerful data analytics systems. These systems must be able to ingest and reconcile incompatible and apparently uncorrelated security events, using massive compute capacity, and organize relevant security events for human analysts.


Customer Stories

Develop Your Cloud Computing Skills in 2019

January is a time when most of us create personal and professional plans for the year. For me, this time of year is about creating a plan for investing in myself so that I'm better prepared for the innovation happening in cloud computing. Today, enterprises spend over $3 trillion on IT. According to Forrester's "Predictions 2019: Cloud Computing" report, the global cloud market will exceed $200 billion in 2019. And distributed and cloud computing skills are the most sought-after skills globally, according to LinkedIn. As enterprises move their workloads to the cloud, it's important to bridge the skills gap in the current workforce to drive adoption and successful deployments.

At Oracle Cloud Infrastructure, we're building a world-class cloud for enterprises and startups alike. We're releasing services and features at a rapid pace to provide our customers a secure, robust, high-performance cloud infrastructure that meets the demands of their mission-critical workloads. We know that for our customers to take advantage of these services and features, they need to know how to use them properly. To that end, we've collaborated with Qloudable to bring customers a self-paced learning platform called Oracle Cloud Infrastructure Jump Start Learning Labs. As I mentioned in my TD at Work issue, "Foster Learning Through Engaging Content," bridging the skills gap and fostering a learning culture requires keeping the learner engaged and having a well-defined outcome. Oracle Cloud Infrastructure's 15- to 45-minute hands-on, task-based labs provide step-by-step instructions for configuring and working with services at your own pace and convenience. It's a great way to bridge the cloud computing skills gap and be prepared for the cloud wave in 2019, and all you need is a laptop with a browser.

The labs are categorized by skill level:
- Beginner: For users with little to no experience with Oracle Cloud Infrastructure or other cloud technologies
- Experienced: For users with some hands-on experience with Oracle Cloud Infrastructure and knowledge of cloud technologies
- Advanced: For users with extensive experience with Oracle Cloud Infrastructure

We launched these labs at Oracle OpenWorld 2018, and our customers and partners are loving them. They have ambitious goals to close the cloud computing skills gap, and this platform is helping them accelerate their journey to the cloud. Here's what some of our customers had to say about these labs:

"Oracle Cloud Infrastructure Jump Start Learning Labs are a great way to provide actual hands-on experience. The labs are easy to use and a great learning tool. Darling Ingredients decided to go with Oracle Cloud Infrastructure last year. Having these labs available for our team will help close the skills gap and drive adoption. We expect to see a significant impact on our business as our teams are preparing for the Oracle Cloud Infrastructure Architect Associate certification." —Tom Morgan, Darling Ingredients

"Oracle has paved a remarkable way for its partners to have a live, intuitive experience of their 'click-and-go' cloud infrastructure. Jump Start Learning Labs have been instrumental in assisting our architects and developers to get familiar with Oracle Cloud Infrastructure in minutes and gain confidence in the platform. We have 10-plus certified associates now, which shall definitely help us to increase our reach to more potential customers globally and provide better cloud solutions."
—Ashish Thakkar, L&T Infotech "At Deloitte, we are constantly providing innovative cloud solutions to our customers. And therefore, it is important that our consultants and architects have in-depth, hands-on knowledge of Oracle Cloud Infrastructure. The availability of such labs is imperative for our teams in closing the skill gap and driving adoption of Oracle Cloud Infrastructure for our clients. The ease-of-use of the learning labs makes it a great learning tool, and thus it is an excellent way to provide actual hands-on experience to our consultants/architects." —Abhinav Phadnis, Deloitte Incorporate the Oracle Cloud Infrastructure Jump Start Learning Labs into your 2019 goals and be prepared for the coming cloud wave. Happy learning!


Oracle Cloud Infrastructure

Part 3 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform

This post is the third in the series in which we map Oracle's seven pillars of a trusted computing platform to Oracle Cloud Infrastructure security capabilities. This post covers the rest of the pillars; the fourth and final installment will highlight some security services and enhancements that have been added to the portfolio. Links to Part 1 and Part 2.

5: Secure Hybrid Cloud

Oracle Cloud Infrastructure supports SAML 2.0 federation via Oracle Identity Cloud Service (IDCS), Microsoft Active Directory Federation Services (ADFS), and any SAML 2.0 compliant identity provider. Customers can also use Oracle Cloud Infrastructure native IAM for federated access. IDCS offers broad integration services with various identity providers.

Oracle Cloud Infrastructure also offers two ways to securely connect customers' on-premises data centers or other public cloud providers to Oracle Cloud Infrastructure virtual cloud networks (VCNs). One way is to use an IPSec VPN over the internet. IPSec is a protocol suite that encrypts the entire IP traffic before the packets are transferred from the source to the destination. IPSec can be configured in tunnel mode or transport mode, although Oracle Cloud Infrastructure supports only tunnel mode for IPSec VPNs. In tunnel mode, IPSec encrypts and authenticates the entire packet; after encryption, the packet is encapsulated to form a new IP packet with different header information. Each Oracle IPSec VPN consists of multiple redundant IPSec tunnels that use static routes to route traffic. Border Gateway Protocol (BGP) is not supported for the Oracle IPSec VPN. For more information, see the IPSec VPN Overview. For higher bandwidth and a more reliable and consistent networking experience than internet-based connections, Oracle Cloud Infrastructure FastConnect provides an easy way to create a dedicated, private connection between customers' data centers and Oracle Cloud Infrastructure. For more information, see the FastConnect Overview.

Additionally, Oracle Cloud Infrastructure is collaborating with various third-party security vendors (for example, FireEye, Fortinet, Symantec, and Check Point) to make their solutions accessible on Oracle Cloud Infrastructure so that customers can use their existing security tools when securing data and applications in the cloud. Visit the Oracle Cloud Marketplace for a list of partners whose solutions have been successfully tested on Oracle Cloud Infrastructure.

6: High Availability

To provide data availability and durability, Oracle Cloud Infrastructure enables customers to select from infrastructure with distinct geographic and threat profiles. A region is the top-level component of the infrastructure. Each region is a separate geographic area with multiple fault-isolated locations called availability domains. Availability domains are designed to be independent and highly reliable. Each one is built with fully independent infrastructure: buildings, power generators, cooling equipment, and network connectivity. With physical separation comes protection against natural and other disasters. Availability domains within the same region are connected by a secure, high-speed, low-latency network, which allows customers to build and run highly reliable applications and workloads with minimal impact on application latency and performance. All links between availability domains are encrypted.

Each region in the US has at least three availability domains, which allows customers to deploy highly available applications. Each availability domain in the US has three fault domains. Because of geographic constraints, some regions contain a single availability domain with multiple fault domains for application redundancy. When resources are placed across fault domains, they are far less likely to fail together: from a customer's perspective, instances placed across fault domains are guaranteed to be on different racks. Each tenancy has its own fault domain identifiers for an availability domain, and instances returned by the Compute APIs include these fault domain identifiers.
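As a quick illustration of working with availability domains and fault domains, here is a sketch using the OCI CLI; the OCIDs, shape, and fault domain name are placeholders, the flags are trimmed to the essentials, and the exact options should be confirmed against the CLI reference:

    # List the availability domains visible to your tenancy
    oci iam availability-domain list --compartment-id <tenancy_or_compartment_OCID>

    # Launch an instance pinned to a specific fault domain within an availability domain
    oci compute instance launch \
      --availability-domain <AD_name> \
      --fault-domain FAULT-DOMAIN-1 \
      --compartment-id <compartment_OCID> \
      --shape VM.Standard2.1 \
      --subnet-id <subnet_OCID> \
      --image-id <image_OCID>

Launching replicas of the same tier into different fault domains is what gives you the different-racks guarantee described above.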
7: Verifiably Secure Infrastructure

Oracle Cloud Infrastructure's verifiably secure infrastructure is built using multiple security solutions that complement each other. Oracle continuously invests time and resources to meet customers' strict requirements for internal control over financial reporting and data protection across a variety of highly regulated industries.

- ISO 27001. Regions: Phoenix (Arizona), Ashburn (Virginia), London (United Kingdom), and Frankfurt (Germany). Services covered: Block Volumes, Compute, Database, Governance, Load Balancing, Networking, and Object Storage
- SOC 1, SOC 2, and SOC 3. Regions: Phoenix (Arizona), Ashburn (Virginia), and Frankfurt (Germany). Services covered: Block Volumes, Compute, Database, Governance, Load Balancing, Networking, and Object Storage
- PCI DSS Attestation of Compliance. Services covered: Archive Storage, Block Volumes, Compute, Container Engine for Kubernetes, Data Transfer Service, Database, Exadata, FastConnect, File Storage, Governance, Load Balancing, Networking, Object Storage, and Registry
- HIPAA Attestation. Services covered: Archive Storage, Block Volumes, Compute, Data Transfer, Database, Exadata, FastConnect, File Storage, Governance, Load Balancing, Networking, and Object Storage
- Strong security controls to meet GDPR requirements

For a complete and updated list of compliance certifications and attestations, please visit https://cloud.oracle.com/en_US/cloud-compliance.

Oracle regularly performs penetration and vulnerability testing and security assessments against the Oracle Cloud infrastructure, platforms, and applications. These tests are intended to validate and improve the overall security of Oracle Cloud Services. However, Oracle does not assess or test any components that customers manage through or introduce into the Oracle Cloud Services. For more information, see the Oracle Cloud Security Testing Policy.

Conclusion

Oracle Cloud Infrastructure is gaining the trust of customer security teams by providing:
- A world-class security team
- Foundational core and edge security capabilities built around seven pillars
- Deeper customer isolation
- Easy-to-use IAM policies
- Geographic security compartmentalization
- Secure access to APIs via asymmetric keys

For more information, visit the following resources:
- Oracle Cloud Infrastructure Security white paper
- Oracle Cloud Infrastructure GDPR white paper
- Oracle Cloud Infrastructure Security Best Practices (provides actionable security guidance, including IAM policies and scripts, for each service)
- Services Security Documentation
- Blog posts: https://blogs.oracle.com/cloud-infrastructure (search on Sanjay Basu) and https://blogs.oracle.com/cloud-infrastructure/heres-a-nifty-checklist-to-secure-a-cloud-application


Events

Four Can't-Miss Cloud Sessions at Oracle OpenWorld Europe: London

This year, Oracle is taking OpenWorld global. Following October's Oracle OpenWorld 2018 conference in San Francisco, we're holding three regional events to show customers in Europe, the Middle East, and Asia how we can help transform and secure their businesses. The first of these events, Oracle OpenWorld Europe: London, starts January 16. Oracle Cloud Infrastructure was central to the strategic announcements at OpenWorld in San Francisco. In his keynote, Oracle Executive Chairman and CTO Larry Ellison touted its advantages in security, performance, and pricing. And attendees packed dozens of sessions to learn why Oracle offers the only true enterprise-grade public cloud. At OpenWorld Europe, attendees will learn what Oracle Cloud Infrastructure has been working on since then. (Hint: a lot!) Here's a preview of some can't-miss cloud sessions:

Move and Improve Your Apps in the Cloud (January 16, 9 a.m. GMT). Other providers talk about how you can "lift and shift" your applications from on-premises to the cloud. Oracle Cloud Infrastructure takes that one step further: we enable organizations to "move and improve" their applications by running them in a purpose-built cloud that delivers higher performance and better security at a lower cost. Attendees of this session will learn why Oracle's cloud is now ready for any and all workloads.

Real-World Enterprise Outcomes and Reactions (January 16, 12:55 p.m. GMT). In this breakout session, I'll be joined by two customers sharing their stories about deploying and using Oracle Cloud Infrastructure. They'll cover a variety of use cases, including workload migrations and cloud native applications.

Your Cloud Transformation Roadmap (January 16, 1:40 p.m. GMT). In this solution keynote, Kyle York, Vice President of Product Strategy for Oracle Cloud Infrastructure, will explain why infrastructure should be the foundation on which a successful cloud transformation strategy is built. An enterprise customer will also detail the performance and cost benefits they achieved by moving to the cloud.

Running Mission-Critical Apps (January 16, 3:10 p.m. GMT). The majority of enterprise applications do not yet live in the cloud. Most of these apps are the mission-critical workloads that have high demands around latency, availability, and performance. In this breakout session, Don Mowbray, Director of Product Management for Oracle Cloud Services, will explain how Oracle Cloud Infrastructure delivers the security, predictability, and performance that these workloads require.

If you can't attend Oracle OpenWorld Europe: London, we're also holding events in Dubai in February and Singapore in March. We hope to see you there!


Developer Tools

Serverless Image Classification with Oracle Functions and TensorFlow

Image classification is a canonical example used to demonstrate machine learning techniques. This post shows you how to run a TensorFlow-based image classification application on the recently announced cloud service, Oracle Functions.

Oracle Functions

Oracle Functions is a fully managed, highly scalable, on-demand, functions-as-a-service (FaaS) platform built on enterprise-grade Oracle Cloud Infrastructure. It's a serverless offering that enables you to focus on writing code to meet business needs, without worrying about the underlying infrastructure, and you're billed only for the resources consumed during execution. You can deploy your code and call it directly or in response to triggers; Oracle Functions does all the work required to ensure that your application is highly available, scalable, secure, and monitored. Oracle Functions is powered by the Fn Project, an open source, container-native, serverless platform that can run anywhere: in any cloud or on-premises. You can download and install the open source distribution of the Fn Project, develop and test a function locally, and then use the same tooling to deploy that function to Oracle Functions.

What to Expect

Before we dive into the details, let's see what you can expect from your serverless machine learning function. After it's set up and running, you can point the app at images, and it returns an estimate of what it thinks each image is, along with the accuracy of the estimate. For example, when a photo of a pizza (by Alan Hardman on Unsplash) was passed to the classification function, it returned: This is a 'pizza'. Accuracy: 100%.

The Code

The image classification function is based on an existing TensorFlow example. It leverages the TensorFlow Java SDK, which in turn uses the native C++ implementation through JNI (Java Native Interface).

Function Image Input

The image classification function leverages the Fn Java FDK, which simplifies the process of developing and running Java functions. One of its benefits is that it can seamlessly convert the input sent to your functions into Java objects and types. This includes:
- Simple data binding, like handling string input
- Binding JSON data types to POJOs (customizable, because it's internally implemented using Jackson)
- Working with raw inputs, enabled by an abstraction of the raw Fn Java FDK events received or returned by the function
The binding can be further extended if you want to customize the way your input and output data is marshaled. The existing TensorFlow example expects a list of image names (which must be present on the machine from which the code is being executed) as input. The function behaves similarly, but with an important difference: it uses the flexible binding capability provided by the Fn Java FDK. The classify method serves as the entry point to the function and accepts a Java byte array (byte[]), which represents the raw bytes of the image passed into the function. This byte array is then used to create the Tensor object using the static Tensors.create(byte[]) method:

    public class LabelImageFunction {
        public String classify(byte[] image) {
            ...
            Tensor<String> input = Tensors.create(image);
            ...
        }
    }

The full source code is available on GitHub.

Machine Learning Model

Typically, a machine-learning-based system consists of the following phases:
- Training: An algorithm is fed with past (historical) data in order to learn from it (derive patterns) and build a model. Very often, this process is ongoing.
- Predicting: The generated model is used to generate predictions or outputs in response to new inputs, based on the facts learned during the training phase.

This application uses a pregenerated model. As an added convenience, the model (and labels) required by the classification logic are packaged with the function itself (as part of the Docker image); they can be found in the resources folder of the source code. This means that you don't have to set up a dedicated model-serving component (like TensorFlow Serving).

Function Metadata

The func.yaml file contains function metadata, including attributes like memory and timeout (for this function, 1024 MB and 120 seconds, respectively). This metadata is required because of the fairly demanding nature of the image classification algorithm (as opposed to simpler computations).

    schema_version: 20180708
    name: classify
    version: 0.0.1
    runtime: java
    memory: 1024
    timeout: 120
    triggers:
    - name: classify
      type: http
      source: /classify

Here is a summary of the attributes used:
- schema_version represents the version of the specification for this file.
- name is the name and tag to which this function is pushed.
- version represents the current version of the function. When deploying, it is appended to the image as a tag.
- runtime represents the programming language runtime, java in this case.
- memory (optional) is the maximum memory threshold for this function. If the function exceeds this limit during execution, it is stopped and an error message is logged.
- timeout (optional) is the maximum time that a function is allowed to run.
- triggers (optional) is an array of trigger entities that specify triggers for the function. In this case, we're using an HTTP trigger.

Function Dockerfile

Oracle Functions uses a set of prebuilt, language-specific Docker images for the build and runtime phases. For example, for Java functions, fn-java-fdk-build is used for the build phase and fn-java-fdk is used at runtime. Here is the default Dockerfile that is used to create Docker images for your functions:

    FROM fnproject/fn-java-fdk-build:jdk9-1.0.75 as build-stage
    WORKDIR /function
    ENV MAVEN_OPTS -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts= -Dmaven.repo.local=/usr/share/maven/ref/repository
    ADD pom.xml /function/pom.xml
    RUN ["mvn", "package", "dependency:copy-dependencies", "-DincludeScope=runtime", "-DskipTests=true", "-Dmdep.prependGroupId=true", "-DoutputDirectory=target", "--fail-never"]
    ADD src /function/src
    RUN ["mvn", "package"]

    FROM fnproject/fn-java-fdk:jdk9-1.0.75
    WORKDIR /function
    COPY --from=build-stage /function/target/*.jar /function/app/
    CMD ["com.example.fn.HelloFunction::handleRequest"]

It's a multistage Docker build that performs the following actions out of the box:
- Runs the Maven package and build
- Copies (using COPY) the function JAR and dependencies to the runtime image
- Sets the command to be executed (using CMD) when the function container is spawned

But there are times when you need more control over the creation of the Docker image, for example, to incorporate native third-party libraries. In such cases, you can use a custom Dockerfile. This is powerful because it gives you the freedom to define the recipe for your function; all you need to do is extend from the base Docker images.
Following is the Dockerfile used for this function:

    FROM fnproject/fn-java-fdk-build:jdk9-1.0.75 as build-stage
    WORKDIR /function
    ENV MAVEN_OPTS -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts= -Dmaven.repo.local=/usr/share/maven/ref/repository
    ADD pom.xml /function/pom.xml
    RUN ["mvn", "package", "dependency:copy-dependencies", "-DincludeScope=runtime", "-DskipTests=true", "-Dmdep.prependGroupId=true", "-DoutputDirectory=target", "--fail-never"]
    ARG TENSORFLOW_VERSION=1.12.0
    RUN echo "using tensorflow version " $TENSORFLOW_VERSION
    RUN curl -LJO https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-$TENSORFLOW_VERSION.jar
    RUN curl -LJO https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-linux-x86_64-$TENSORFLOW_VERSION.tar.gz
    RUN tar -xvzf libtensorflow_jni-cpu-linux-x86_64-$TENSORFLOW_VERSION.tar.gz
    ADD src /function/src
    RUN ["mvn", "package"]

    FROM fnproject/fn-java-fdk:jdk9-1.0.75
    ARG TENSORFLOW_VERSION=1.12.0
    WORKDIR /function
    COPY --from=build-stage /function/libtensorflow_jni.so /function/runtime/lib
    COPY --from=build-stage /function/libtensorflow_framework.so /function/runtime/lib
    COPY --from=build-stage /function/libtensorflow-$TENSORFLOW_VERSION.jar /function/app/
    COPY --from=build-stage /function/target/*.jar /function/app/
    CMD ["com.example.fn.LabelImageFunction::classify"]

Notice the additional customization that it incorporates beyond the default steps like the Maven build:
- Automates the TensorFlow setup (per the instructions): it downloads and extracts the TensorFlow Java SDK and the native JNI (.so) libraries as part of the second stage of the Docker build
- Copies the JNI libraries to /function/runtime/lib and the SDK JAR to /function/app so that they are available to the function at runtime

Deploying to Oracle Functions

As mentioned previously, you can use the open source Fn CLI to deploy to Oracle Functions. Ensure that you have the latest version:

    curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

You can also download it directly from https://github.com/fnproject/cli/releases.

Oracle Functions Context

Before using Oracle Functions, you have to configure the Fn Project CLI to connect to your Oracle Cloud Infrastructure tenancy. When the Fn Project CLI is initially installed, it's configured for a local development context. To configure it to connect to your Oracle Cloud Infrastructure tenancy instead, you have to create a new context. The context information is stored in a .yaml file in the ~/.fn/contexts directory. It specifies the Oracle Functions endpoint, the OCID of the compartment to which deployed functions belong, the Oracle Cloud Infrastructure configuration file, and the address of the Docker registry to push images to and pull images from. This is what a context file looks like:

    api-url: https://functions.us-phoenix-1.oraclecloud.com
    oracle.compartment-id: <OCI_compartment_OCID>
    oracle.profile: <profile_name_in_OCI_config>
    provider: oracle
    registry: <OCI_docker_registry>

Oracle Cloud Infrastructure Configuration

The Oracle Cloud Infrastructure configuration file contains information about user credentials and the tenancy OCID. You can create multiple profiles with different values for these entries, and then define the profile to be used by the CLI with the oracle.profile attribute.
Here is an example configuration file:

    [DEFAULT]
    user=ocid1.user.oc1..exampleuniqueID
    fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
    key_file=~/.oci/oci_api_key.pem
    tenancy=ocid1.tenancy.oc1..exampleuniqueID
    pass_phrase=tops3cr3t
    region=us-ashburn-1

    [ORACLE_FUNCTIONS_USER]
    user=ocid1.user.oc1..exampleuniqueID
    fingerprint=72:00:22:7f:d3:8b:47:a4:58:05:b8:95:84:31:dd:0e
    key_file=/.oci/admin_key.pem
    tenancy=ocid1.tenancy.oc1..exampleuniqueID
    pass_phrase=s3cr3t
    region=us-phoenix-1

You can define multiple contexts, each stored in a different context file. Switch to the correct context for your Functions development environment:

    fn use context <context_name>

Create the Application

Start by cloning the contents of the GitHub repository:

    git clone https://github.com/abhirockzz/fn-hello-tensorflow

Here is the command required to deploy an application, where <app_name> is the name of the new application and <subnet_ocid> is the OCID of the subnet in which to run your function:

    fn create app <app_name> --annotation oracle.com/oci/subnetIds='["<subnet_ocid>"]'

For example:

    fn create app fn-tensorflow-app --annotation oracle.com/oci/subnetIds='["ocid1.subnet.oc1.phx.exampleuniqueID","ocid1.subnet.oc1.phx.exampleuniqueID","ocid1.subnet.oc1.phx.exampleuniqueID"]'

Deploy the Function

After you create the application, you can deploy your function with the following command, where <app_name> is the name of the application in Oracle Functions to which you want to add the function:

    fn deploy --app <app_name>

If you want to use TensorFlow version 1.12.0 (for the Java SDK and corresponding native libraries), use the following command:

    fn -v deploy --app fn-tensorflow-app

You can also choose a specific version; ensure that you specify it in the pom.xml file before you build the function. For example, if you want to use version 1.11.0:

    <dependency>
        <groupId>org.tensorflow</groupId>
        <artifactId>tensorflow</artifactId>
        <version>1.11.0</version>
        <scope>provided</scope>
    </dependency>

To specify the version during function deployment, use --build-arg (build argument) as follows:

    fn -v deploy --app fn-tensorflow-app --build-arg TENSORFLOW_VERSION=<version>

For example, to use 1.11.0:

    fn -v deploy --app fn-tensorflow-app --build-arg TENSORFLOW_VERSION=1.11.0

When the deployment completes successfully, your function is ready to use. Use the fn ls apps command to list the applications currently deployed; fn-tensorflow-app should be listed.

Time to Classify Images!

As mentioned earlier, the function accepts an image as input and tells you what it is, along with the percentage accuracy. You can start by downloading some of the recommended images or use images that you already have on your computer. All you need to do is pass them to the function while invoking it:

    cat <path to image> | fn invoke fn-tensorflow-app classify

OK, let's try this. Can it detect the sombrero in this image ("366 • 9 • Gringo" (CC BY-NC-ND 2.0) by Pragmagraphr)?

    cat /Users/abhishek/manwithhat.jpg | fn invoke fn-tensorflow-app classify

Result: This is a 'sombrero'. Accuracy: 92%.

How about a terrier ("Terrier" (CC BY-NC 2.0) by No_Water)?

    cat /Users/abhishek/terrier.jpg | fn invoke fn-tensorflow-app classify

Result: This is a 'West Highland white terrier'. Accuracy: 88%.

What will you classify?

Summary

We just deployed a simple yet fully functional machine learning application in the cloud! Eager to try this out?
Oracle Functions will be generally available in 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Oracle Functions or to request access, please register. You can also learn more about the underlying open source technology used in Oracle Functions at FnProject.io. Featured image by Franck V. on Unsplash


Partners

NoSQL in Kubernetes: Couchbase Operator on Container Engine for Kubernetes

In my last post about Couchbase, I covered how to run Couchbase on Oracle Cloud Infrastructure with Terraform. That's all well and good, but now for something completely different...

At KubeCon + CloudNativeCon 2018, Oracle made lots of announcements, many about the Oracle Cloud Native Framework. This framework builds on earlier work we've done, joining the Cloud Native Computing Foundation (CNCF) as a Platinum member and releasing a managed Kubernetes service called Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE).

Conventional wisdom is that Kubernetes is great for running stateless containers but not stateful workloads. Recommended architectures have application servers managed by Kubernetes with databases running outside on VMs or bare metal. This approach performs well, but it negates one of the biggest advantages of Kubernetes: providing a single orchestrator that can manage an entire application. The result is that you must manage software both inside and outside of Kubernetes, along with the connections between the two. It's not fun.

The Kubernetes community recognized this limitation and tried a number of approaches for running stateful workloads. The first was called Pet Sets, drawing on the pets/cattle analogy common in the cloud; it was later renamed StatefulSets. That model was superseded by an approach called Sidecar, which suffered from a variety of issues, including a single point of failure. All that culminated in CoreOS proposing a model called Operator, which uses a custom resource definition (CRD) to manage a stateful application inside a Kubernetes cluster. A variety of stateful pieces of software now run with the Operator model, including Confluent, Hazelcast, and Couchbase.

If you're looking for an industry-leading NoSQL database on which to build your Kubernetes application, Couchbase is an obvious choice. Couchbase itself is a great product, with all the features you'd expect in a NoSQL database, and Couchbase Autonomous Operator sets it apart further. We've partnered closely with Couchbase to make it easy to deploy Operator on Container Engine for Kubernetes:

"Couchbase is collaborating with Oracle to bring the benefits of the Oracle Kubernetes Engine (OKE) to applications like the Couchbase Data Platform. With Couchbase Autonomous Operator now users can easily deploy Couchbase Data Platform on OKE." - Anil Kumar, Director of Product Management, Couchbase

Operator enables you to run your database next to the application in Kubernetes, lowering latency and simplifying administration of the application as a whole. Operator also blurs the line between enterprise software and a managed service, automating many tasks, including:

- Resizing the cluster
- Recovering from node failure
- Upgrading the database version

Because Operator relies on the CRD API, it can run on any conformant Kubernetes distribution, including, of course, Container Engine for Kubernetes. Getting started with Couchbase Autonomous Operator on Container Engine for Kubernetes takes just a few steps. First you deploy a Container Engine for Kubernetes cluster. Instructions to do that, along with a Terraform module that automates the process, are on GitHub. With your cluster up and running, the next step is to deploy Couchbase. We've created a walkthrough for that, and the sketch below shows the high-level flow. As always, it's been a pleasure working with the Couchbase team to set this up. Special thanks to Tommie McAfee at Couchbase for helping with some permissions issues!
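At a high level, deploying the Operator comes down to a few kubectl steps. The following is a rough sketch rather than the official walkthrough; the file names and label selector are illustrative:

# Install the custom resource definition and the Operator deployment
# (file names are illustrative; use the ones from the official walkthrough)
kubectl create -f crd.yaml
kubectl create -f operator.yaml

# Declare the Couchbase cluster you want; the Operator reconciles the rest
kubectl create -f couchbase-cluster.yaml

# Watch the cluster pods come up (label selector is illustrative)
kubectl get pods -l app=couchbase --watch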
If you have any questions, reach out to me at ben.lackey@oracle.com or on Twitter @benofben.


Customer Stories

Virtual Humans on Oracle Cloud Infrastructure HPC

A great cloud success story is how Oracle Cloud Infrastructure has enabled ELEM Biotech. We've invited ELEM Biotech founders Mariano Vazquez, Chris Morton, Guillaume Houzeaux, and Jose Maria Cela to share how Oracle Cloud Infrastructure High-Performance Computing (HPC) enables them to make Virtual Humans. Check out their company at http://www.elem.bio/.

Oracle Cloud Infrastructure is at the heart of the revolution that ELEM Biotech wants to lead. Oracle Cloud Infrastructure provides the flexible and secure HPC power that we need, especially thanks to the performance of its bare metal instances. ELEM Biotech develops Alya Red. With a software as a service (SaaS) model deployed in Oracle Cloud Infrastructure, Alya Red allows users to set up and run advanced simulations and analyze them through a sophisticated, tailor-made biomedical interface. Alya Red comprises the biomedical interface, the cloud deployment and orchestration, the simulation engine, and a cloud-based database. Pacemakers, valve replacements, stents, anti-arrhythmic drugs, obstructive pulmonary disease and asthma treatments, drug pumps—all of these scenarios can be set up now using Alya Red. Our Virtual Humans are created in Oracle Cloud Infrastructure, where medical device manufacturers, pharmaceutical companies, and CROs can analyze their products and optimize treatments to better fit patients.

The Alya Red team has almost 50 developers at the Barcelona Supercomputing Center and another 40 developers elsewhere. We have provided services to and received funding from Juan Yacht Design, Repsol, Iberdrola, and Medtronic. Alya Red is running on Oracle Cloud Infrastructure's HPC instances with large cases, using up to 1,000 cores. The results are extremely promising: we are seeing the same scalability on Oracle Cloud Infrastructure HPC that we see in our dedicated MareNostrum cluster, and we are able to get the results back 90 percent faster because there is no queue time. It's worth mentioning that the code, which was tuned for MareNostrum, was compiled and run on Oracle Cloud Infrastructure's bare metal instances with RDMA without requiring any additional libraries or tuning—truly "lift and shift."

Public HPC resources are largely focused on academia, and they can't provide the quality of service that software such as Alya Red needs. That's where Oracle solves the problem with its HPC instances and cloud infrastructure. Thanks to the Oracle Startup Ecosystem and Oracle Cloud Infrastructure's HPC offering, ELEM Biotech has been able to hit the ground running in the HPC world. Our vision is that with Oracle Cloud Infrastructure, Alya Red will progressively include more human systems and organs, all of them validated and certified by regulatory agencies for their context of use. Our Virtual Humans will contribute to greatly reducing animal and human testing, as well as product costs and time to market. We hope to reduce the healthcare gap between rich and poor by streamlining innovation in the medical device and pharmaceutical industries.


Customer Stories

Exabyte.io for Scientific Computing on Oracle Cloud Infrastructure HPC

We recently invited Exabyte.io, a cloud-based, nanoscale modeling platform that accelerates research and development of new materials, to test the high-performance computing (HPC) hardware in Oracle Cloud Infrastructure. Their results were similar to the performance that our customers have been seeing and what other independent software vendors (ISVs) have been reporting: Oracle Cloud Infrastructure provides the best HPC performance for engineering and simulation workloads.

Exabyte.io enables their customers to design chemicals, catalysts, polymers, microprocessors, solar cells, and batteries with their Materials Discovery Cloud. Exabyte.io allows scientists in enterprise R&D units to reliably exploit nanoscale modeling tools, collaborate, and organize research in a single platform. As Exabyte.io seeks to provide their customers with the highest-performing and lowest-cost modeling and simulation solutions, they have done extensive research and benchmarking with cloud-based HPC solutions. We were eager to have them test the Oracle Cloud Infrastructure HPC hardware.

Exabyte.io ran several benchmarks, including general dense matrix algebra with LINPACK, density functional theory with the Vienna Ab-initio Simulation Package (VASP), and molecular dynamics with GROMACS. The results were impressive and prove the value, performance, and scale of HPC on Oracle Cloud Infrastructure. The advantage of Oracle Cloud Infrastructure's bare metal was obvious with LINPACK: throughput is almost double that of the closest cloud competitor and consistent with on-premises performance. Latency is even more interesting: the BM.HPC2.36 shape with RDMA provides the lowest latency at any packet size and is orders of magnitude faster than cloud competitors. In fact, for every performance metric that Exabyte.io tested on VASP and GROMACS, they saw Oracle's BM.HPC2.36 shape with RDMA (shown as OL in the following graph) outperform the other cloud competitors.

Below is a great example of both the performance and the scaling of Oracle Cloud Infrastructure on VASP. When parallelizing over electronic bands for large-unit-cell materials and normalizing for core count, the single-node performance of the BM.HPC2.36 exceeds its competitors' and then scales consistently as the cluster size increases. The BM.HPC2.36 runs large VASP jobs faster and can scale larger than any other cloud competitor.

Exabyte.io has provided the full test results on their website. Their blog concluded that "Running modeling and simulations on the cloud with similar performance as on-premises is no longer a dream. If you had doubts about this before, now might be the right time to give it another try."

By offering bare metal HPC performance in the cloud, Oracle Cloud Infrastructure enables customers running the largest workloads on the most challenging engineering and science problems to get their results faster. The results that Exabyte.io has seen are exceptional, but they are not unique among our customers. Spin up your own HPC cluster in 15 minutes on Oracle Cloud Infrastructure.


Migrate Oracle Database to Oracle Cloud Infrastructure by Using Storage Gateway

This blog post outlines the process of migrating a single-instance Oracle Database from on-premises to an Oracle Cloud Infrastructure Database as a Service (DBaaS) instance. There are many other ways to migrate an on-premises Oracle Database to an Oracle Cloud Infrastructure DBaaS instance; this post uses the Oracle Cloud Infrastructure Storage Gateway service and the Oracle RMAN utility.

Oracle Cloud Infrastructure Storage Gateway is a cloud storage gateway that lets you connect your on-premises applications with Oracle Cloud Infrastructure. Any application that can write data to an NFS target can also write data to Oracle Cloud Infrastructure Object Storage by using Storage Gateway, without requiring application modification.

At a very high level, Storage Gateway and Object Storage are used to create an NFS share. The NFS share is mounted on the database host, and an offline full database backup is taken to the NFS share by using the Oracle RMAN utility. This backup copy is stored in Object Storage through Storage Gateway. You then restore the database on the DBaaS instance by using the RMAN utility, reading the backup from Object Storage through a Storage Gateway instance in Oracle Cloud Infrastructure (not on the DBaaS host).

Before You Start
Before you start the database migration, consider the following requirements:

- Ensure that Storage Gateway is already installed on a virtual or physical host in your on-premises data center as well as on Oracle Cloud Infrastructure.
- Create the file system on the Storage Gateway host and map it to Object Storage.
- Install the Cloud Backup Module on both the source and destination database.
- Encrypt the backup by using RMAN (wallet or password based).
- Create a manifest file (manifest.xml) so that RMAN knows about the contents of the backup set files.
- Consider a high-throughput network for Storage Gateway to reduce latency.
- Appropriately size the Storage Gateway cache for read and write operations.
- Set up a strong password and share passwords with others only as needed.

Evaluate and Plan
Use the Evaluation and Planning Checklist to help you evaluate and plan for the migration of your on-premises Oracle Databases to Oracle Cloud Infrastructure, based on the unique requirements of your source and target databases.

Back Up the Oracle Database to the NFS Share
Before starting the backup of the Oracle database, mount the file system that you created using Storage Gateway and exported as an NFS share on the database host, using the appropriate NFSv4 mount options. Then use the RMAN utility to create a full backup of the Oracle database to the NFS share:

1. Mount the file system on your database host.

[root@db-host ~]# mount -t nfs -o vers=4,port=32769 <IP_of_storage_gateway>:/<filesystem_name> /<local_directory>
[root@db-host ~]# chown -R oracle:oinstall /<mount_directory>

2. Connect to the source database, enable backup encryption, and set the compression to medium.

[oracle@db-host ~]$ rman target /
RMAN> set encryption on;
RMAN> set compression algorithm 'medium';

3. Perform a full database backup, including the control file and spfile.

RUN {
  ALLOCATE CHANNEL ch11 DEVICE TYPE DISK MAXPIECESIZE 1G;
  BACKUP FORMAT '/mydb_backup/%d_D_%T_%u_s%s_p%p' DATABASE
    CURRENT CONTROLFILE FORMAT '/mydb_backup/%d_C_%T_%u'
    SPFILE FORMAT '/mydb_backup/%d_S_%T_%u'
    PLUS ARCHIVELOG FORMAT '/mydb_backup/%d_A_%T_%u_s%s_p%p';
  RELEASE CHANNEL ch11;
}

4. Copy the password file and TDE wallet files.
[oracle@db-host ~]$ cp $ORACLE_HOME/dbs/orapwdorcl /mydb_backup/.
[oracle@db-host ~]$ zip -rj /mydb_backup/tde_wallet.zip /u01/app/oracle/admin/orcl/tde_wallet

Restore and Recover the Oracle Database on Oracle Cloud Infrastructure
Before starting the restore process, disconnect the file system from the on-premises Storage Gateway host and create a file system on Oracle Cloud Infrastructure Storage Gateway by using the same Object Storage bucket.

1. Create the target database in Oracle Cloud Infrastructure. To ensure that the target database has all the required metadata for Oracle Cloud Infrastructure tooling to work, create the target database by using one of the supported methods: the Oracle Cloud Infrastructure Console, the CLI, or the Terraform provider. This target database is then cleaned out and used as a shell for the migration.

2. Configure the Oracle Database Cloud Backup Module. Configuring the Cloud Backup Module for Existing or Fresh Backups provides an example of how to configure the Cloud Backup Module to point to the Object Storage backup bucket. For details, including variables and commands, see Installing the Oracle Database Cloud Backup Module.

3. Shut down the database on Oracle Cloud Infrastructure and clean up the existing files. Delete the existing data files, temp files, redo log files, wallet file, and password file, using the grid user and the oracle user. Note: Do not delete the parameter file.

4. Create a local directory and mount the Storage Gateway file system on the database host.

[opc@dbhost ~]$ sudo mount -t nfs -o vers=4,port=32769 <IP_of_storage_gateway>:/<filesystem_name> /<local_directory>
[opc@dbhost ~]$ sudo chown -R oracle:oinstall /<mount_directory>

5. Copy the source password file and TDE wallet files to the target location. Use the oracle user to copy the password file and TDE wallet files from the NFS mount directory to the target location on the Oracle Cloud Infrastructure database host. Ensure that sqlnet.ora has the right ENCRYPTION_WALLET_LOCATION.

cat $ORACLE_HOME/network/admin/sqlnet.ora

6. Restore the database on the Oracle Cloud Infrastructure host by using the RMAN utility. Run the RMAN utility as the oracle user to restore and recover the database. Set the appropriate dbid and start up the instance to the nomount stage before restoring the database.

[oracle@dbhost ~]$ rman target /
RMAN> startup nomount pfile='$ORACLE_HOME/dbs/initorcl.ora'
RMAN> restore controlfile from '/mydb_backup/ORCL_C_xxxxxxx_xxxxx';
RMAN> alter database mount;
RMAN> catalog start with '/mydb_backup/ORCL';
RMAN> RUN {
  SET NEWNAME FOR DATAFILE 1 TO '+DATA/mydb/system01.dbf';
  SET NEWNAME FOR DATAFILE 2 TO '+DATA/mydb/sysaux01.dbf';
  SET NEWNAME FOR DATAFILE 3 TO '+DATA/mydb/undotbs01.dbf';
  SET NEWNAME FOR DATAFILE 4 TO '+DATA/mydb/users01.dbf';
  SET NEWNAME FOR DATAFILE 6 TO '+DATA/mydb/soe.dbf';
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;
  RECOVER DATABASE;
}
RMAN> alter database open resetlogs;
SELECT open_mode from v$database;

Note: Create an spfile by using the pfile.
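For that final note, a minimal sketch of creating the spfile from the pfile and restarting so that the instance picks it up (the pfile path is the one assumed in the restore example above):

# Run as the oracle user on the DBaaS host; $ORACLE_HOME is expanded
# by the shell before SQL*Plus sees the statement
sqlplus / as sysdba <<EOF
CREATE SPFILE FROM PFILE='$ORACLE_HOME/dbs/initorcl.ora';
SHUTDOWN IMMEDIATE;
STARTUP;
EOF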


Oracle Cloud Infrastructure

Bacula Enterprise Integrates Natively with Oracle Cloud Infrastructure

In an exciting addition to the Oracle Cloud ecosystem, Bacula Enterprise now connects natively to Oracle Cloud Infrastructure. Bacula is a highly scalable, modular enterprise backup and recovery solution that has a wide range of features designed for medium and large organizations. What makes this latest development for Oracle Cloud even more interesting is that Bacula offers unique cloud interaction tools. For example, with Bacula you can perform the following tasks:

- Back up and restore data to and from Oracle Cloud by using either command line or BWeb (GUI) interfaces.
- Manage network bandwidth when transferring backup data to Oracle Cloud, which ensures that your backup doesn't monopolize your network.
- Perform and manage concurrent, asynchronous uploads and downloads of backup data from Oracle Cloud.
- Employ multiple-bucket support in a single storage daemon. By providing the ability to configure each bucket to suit the user's personal needs, Bacula Enterprise pushes the boundaries of customization and flexibility far beyond its competitors.
- Use Bacula's unique disk-caching system to recover and restore specific files with extreme speed.

These special cloud management tools provide data center managers with new levels of control and integration with Oracle Cloud Infrastructure. The tools are part of Bacula's backup software feature set that works on entire physical and virtual environments, regardless of architecture—all from a single platform. Bacula's ability to interoperate with databases, virtual environments, and practically any type of storage destination (VTLs, disk, tape, Oracle Cloud Infrastructure, and so on), coupled with the absence of data volume-related charges, gives you an opportunity to simplify and modernize your backup and recovery strategy while cutting costs. The following diagram provides an overview of Bacula Enterprise's wider feature set:

To get started, explore Bacula's free trial software and videos.


Partners

Major Updates to DataStax on Oracle Cloud Infrastructure with Terraform

DataStax is the company behind Apache Cassandra, a distributed NoSQL database. DataStax offers DataStax Enterprise (DSE), an enterprise version of Cassandra with added capabilities such as integrated Spark and Solr, improved security features, and a graph database written by the same engineers who built TitanDB. Major enterprises like Walmart, Safeway, and ING rely on DSE for operational database use cases in which the database must always be on. That high availability is the result of architecture decisions that provide redundancy at the data center, rack, and node levels. Applications powered by DSE can suffer the failure of entire regions and continue operating, ensuring uninterrupted service for the end user.

DataStax and Oracle have a relationship going back to Oracle OpenWorld 2016, when Mahesh Thiagarajan and I worked on the launch of Oracle Cloud Infrastructure. At the time, I was leading the Partner Architecture team at DataStax, Oracle Cloud Infrastructure was a brand new cloud, and DataStax Lifecycle Manager (LCM) hadn't come out yet. Since then, much has changed. Gilbert Lau at DataStax worked to incrementally enhance those integrations, moving from a proprietary infrastructure as code (IaC) API to the open source industry standard of Terraform. He also added LCM support. Oracle Cloud Infrastructure has continued to advance, adding new regions, VMs, and services.

One of the first things I worked on when I started at Oracle Cloud Infrastructure a few months ago was revving the DataStax Terraform module. The latest version of that is now available (see the sketch at the end of this post for a typical Terraform session). We've also submitted a pull request to the root DSPN repo. It looks like the net change is to drop 700 lines. The best code is deleted code! The updated module includes several improvements:

- DataStax 6.0.2
- Terraform 0.11
- Latest Oracle Cloud Infrastructure Terraform provider
- Reorganized Terraform and scripts
- Arbitrary VM types and node counts
- Removed dependency on remote-exec
- Improved README.md

I worked with our video team to record a demo of the new module:

More is coming. In November, Collin Poczatek joined the Oracle Cloud Infrastructure team. He used to maintain the AWS Quick Start and Azure Marketplace listings for DataStax, so he brings deep expertise in cloud deployments. He's currently working on a number of projects:

- Additional updates to the Terraform module for Oracle Cloud Infrastructure.
- Hardware recommendations for one of our customers running DSE on Oracle Cloud Infrastructure. This includes benchmarking block versus NVMe, different machine types, and DSE 4.8 versus 6.0. We're planning to report on the results of all that work.
- A comparison showing the zData AWS benchmark running on Oracle Cloud Infrastructure.

At Oracle Cloud Infrastructure, we're committed to building an open cloud that is the best place to run a variety of ISV workloads, including NoSQL databases like DSE and Cassandra. If you're running one of these databases today or looking at deploying one, we'd love to show you how Oracle Cloud Infrastructure can offer the best price and performance, saving two to three times over AWS. If we can help in any way, reach out to me at ben.lackey@oracle.com or say hi on Twitter @benofben.
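If you want to try the module, a typical Terraform session looks like the following sketch. The repository URL and variables file are illustrative; check the module's README for the real inputs:

# Fetch the module and initialize the Oracle Cloud Infrastructure provider
git clone https://github.com/oracle/terraform-oci-datastax   # illustrative URL
cd terraform-oci-datastax
terraform init

# Review the plan, then build the DSE cluster
terraform plan -var-file=terraform.tfvars
terraform apply -var-file=terraform.tfvars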


Customer Stories

Princess House Extends JD Edwards Capabilities on Oracle Cloud

Princess House is a direct sales company that sells cookware, serveware, and housewares through a network of 25,000 independent business owners. For 20 of the 55 years that they've been in business, they've leveraged JD Edwards for finance and procurement. To support their growing business, Princess House needed to modernize JD Edwards and the supporting infrastructure. They had the following goals:

- Extend JD Edwards beyond just financial and procurement management to become their core enterprise resource planning (ERP) system
- Add agility and resiliency into their infrastructure, specifically for dev/test use cases

To accomplish these goals, Princess House demanded a cloud infrastructure that would not only make it easy for them to move JD Edwards, but also offer them superior availability and support. Oracle Cloud Infrastructure was the clear choice.

Needed Cloud Agility for JD Edwards Dev/Test
Like many other enterprises, Princess House had been running their own data center on-premises, and they wanted to get out of the business of doing so. They wanted to be able to add and remove resources as needed for their dev/test workloads, and their legacy infrastructure didn't provide them with this level of agility. "Our main driver for moving to cloud was to be able to create resources on the fly without the upfront investments, constant capacity planning and hardware renewals that come with running your own data center," said Bassam Alqassar, Vice President of Information Systems at Princess House.

In addition to gaining much-needed agility, Princess House's IT team could rely on Oracle Cloud Infrastructure to deliver a highly available infrastructure. Now they could devote their resources to improving their enterprise applications for business users. Princess House upgraded to JD Edwards EnterpriseOne, adding in supply chain management capabilities and integrating with Softeon, a warehouse management system. They were able to do so with limited disruption to existing business processes. And end users who were already familiar with JD Edwards were able to get trained quickly on the upgraded system.

Oracle Cloud Infrastructure Was the Best Choice for JD Edwards
Princess House could have gone with any other cloud provider to get the agility benefits that they were seeking. However, when they looked into hosting with a prior partner, they discovered that it would have been three times more expensive than Oracle's solution. Additionally, they wanted to work with a cloud vendor that would be able to offer strong support across the stack, from the application itself down to the database and the infrastructure. "We looked at other clouds, but we knew Oracle Cloud Infrastructure was the best choice to run an Oracle solution," said Alqassar.

Learn more about Princess House's story to move and improve JD Edwards to Oracle Cloud Infrastructure.


Product News

IDCS Users Can Now Use the Oracle Cloud Infrastructure SDK and CLI

We're announcing an enhancement to our federation capabilities with Oracle Identity Cloud Service (IDCS). Available today, users who are federated with IDCS can directly access the Oracle Cloud Infrastructure SDK and CLI. This enhancement supports a broad range of use cases, including the simplification of governance and management tasks.

You can now use an IDCS user for all CLI access. For example, IDCS users can use scripts to automate common tasks with the CLI, as well as integrate OCI tasks with other infrastructure tools and systems you might use. As another example, if you want to create a script that copies files to Object Storage, you can now do that by using an IDCS user instead of creating a local Oracle Cloud Infrastructure user. As a result, you can greatly reduce the number of users that you have to secure and manage.

Federation enables you to use identity management software to manage users and groups. All tenancies created after December 2017 are automatically federated with IDCS. If you're an IDCS user, that means you can leverage the same set of credentials across all Oracle Cloud solutions, including Oracle Cloud Applications and Oracle Cloud Infrastructure. In addition, all users that are members of IDCS groups that are mapped to Oracle Cloud Infrastructure groups will be synchronized from IDCS to Oracle Cloud Infrastructure. This synchronization enables you to control which IDCS users have access to Oracle Cloud Infrastructure and to consolidate all user management in IDCS. To take advantage of this new feature, follow the setup process described in Upgrading Your Oracle Identity Cloud Service Federation.

Next, I'd like to give an example of a cost management scenario that is greatly simplified by this feature. Let's say you want to run a Python script, using the SDK, that finds and terminates compute instances that don't have the CostCenter cost tracking tag (see the CLI sketch at the end of this post for a rough equivalent). Instead of creating a local Oracle Cloud Infrastructure user, you can set up a user in IDCS to run this script. You would follow these steps to enable this scenario:

Step 1: Ensure that your federation has been upgraded
If you haven't already followed the setup process described in Upgrading Your Oracle Identity Cloud Service Federation, do so now.

Step 2: Set up the user in IDCS and associate that user with the correct groups
Managing all your users from your identity provider is a more scalable, manageable, and secure way to manage your user identities. Be sure to follow the principle of least privilege by creating an IDCS user and associating that user with only the IDCS groups that they need to do their job.

Step 3: Set up the Oracle Cloud Infrastructure group
Create a local Oracle Cloud Infrastructure group that will be used for this task, and ensure that it has a policy that enables just the access control that it needs to do the work. Consider setting up a group specifically for the type of administrator you want (for example, compute instances administrator). For a detailed explanation of best practices in setting up granular groups and access policy, see the Oracle Cloud Infrastructure Security white paper. You can also create the group when you map it.

Step 4: Map the IDCS group to the Oracle Cloud Infrastructure group
Follow the instructions on adding groups and users for tenancies federated with Oracle Identity Cloud Service, and ensure that you map the correct group from IDCS to the equivalent group in Oracle Cloud Infrastructure.
You will know that you succeeded if you see users created in your tenancy from IDCS (there is a filter that allows you to see only federated users). You can also create groups as you map them.

Step 5: Set up the user with an API key
Now that the IDCS user exists as a provisioned user in Oracle Cloud Infrastructure, you must create an API key pair and upload it to the user. Each user should have their own key pair. See the SDK setup instructions for details.

Step 6: Check the user's capabilities
As a final check, ensure that the user has the capability to use the CLI or SDK. You could also set the user's capabilities to use only the SDK and not the web console. Now you've set up the IDCS user so that they can take advantage of the SDK and run scripts with the access that the Oracle Cloud Infrastructure group has been granted.

Tips
- You know that a user is federated if the user name is prefixed with the name of the identity provider. By default, IDCS is called oracleidentitycloudservice. For example, oracleidentitycloudservice/Martin.
- If no users are being replicated, verify that you've followed the setup procedure and the mapping between the groups. If that doesn't work, visit My Oracle Support to open a support ticket.
- Only users assigned to mapped groups are replicated. If you see some users but not the IDCS user that you want, that user doesn't belong to a group that has been mapped from IDCS to Oracle Cloud Infrastructure.
- To use the SDK or CLI, the client that runs the CLI or SDK must have the matching private key material stored on the client machine. Secure the client machine appropriately to prevent inappropriate access.

Conclusion
Stay tuned for future feature announcements regarding federation. We plan to support other federation providers, and we'll keep you informed as we make updates.
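As promised, here is a rough sketch of the cost-tag cleanup scenario, using the OCI CLI (configured with the federated user's API key from step 5) instead of the Python SDK. The tag namespace ("Operations") and the compartment OCID are illustrative, jq is assumed to be installed, and you should review the matches before terminating anything:

#!/bin/bash
# Find compute instances without a CostCenter defined tag and terminate them.
COMPARTMENT_OCID="ocid1.compartment.oc1..exampleuniqueID"

oci compute instance list --compartment-id "$COMPARTMENT_OCID" --all --output json |
  jq -r '.data[] | select(.["defined-tags"]["Operations"]["CostCenter"] == null) | .id' |
  while read -r id; do
    echo "Terminating untagged instance: $id"
    oci compute instance terminate --instance-id "$id" --force
  done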


Partners

Couchbase: Expanding the Oracle Cloud Infrastructure NoSQL Ecosystem

Couchbase is a distributed NoSQL database. It's a document store in the same class of databases as MongoDB, but it differs in some interesting ways. For example, Couchbase offers a query language called N1QL that enables a user to write ANSI SQL (including joins!) against the document store, and these N1QL queries return JSON documents (see the example at the end of this post). It's a little weird, but it substantially simplifies the development of web, mobile, and IoT apps.

Couchbase has three products:

- Couchbase Server, the core NoSQL database
- Couchbase Lite, a lightweight version of the database that runs on Android and iOS devices
- Couchbase Sync Gateway, which manages the synchronization between the mobile and server components

This architecture leads to some amazing use cases, with companies like Ryanair using Couchbase in their mobile app to store information locally, improving customer experience through reduced latency. It also provides the ability to continue using the app if the device is disconnected from the server or internet.

Beyond this core functionality, Couchbase offers the following components, which can be used in various combinations on heterogeneous nodes:

- Data
- Query
- Index
- Full Text Search
- Analytics
- Eventing

This flexibility enables users to scale whatever component of the database their use case demands. Couchbase calls this Multi-Dimensional Scaling (MDS).

I have a particular affinity for Couchbase because I was leading their cloud partnerships before coming to Oracle Cloud Infrastructure. They have a great team of people who are a pleasure to work with. Today, Oracle Cloud Infrastructure has a close partnership with Couchbase. We've created a Terraform module that automates the deployment of Couchbase on Oracle Cloud Infrastructure.

Writing this module has been an interesting continuation of my introduction to Terraform. That introduction was working with Gruntwork.io on the Terraform module that they created to deploy Couchbase on AWS. One of the cofounders, Jim Brikman, literally wrote the book on Terraform. It's a good read. Gruntwork also created an open source framework for testing Terraform code called Terratest. We're currently looking at ways to use that in our work at Oracle Cloud Infrastructure.

Before working on the Terraform module, I'd worked with the infrastructure as code (IaC) languages that each cloud provides: AWS CloudFormation, Azure Resource Manager, and Google Deployment Manager. It's been interesting to explore how Terraform presents a single framework for deploying on any cloud, providing an open source technology that is the basis of a multi-cloud world. I've been impressed by Oracle Cloud Infrastructure's choice to embrace an existing open source technology, both by joining the Cloud Native Computing Foundation (CNCF) as a Platinum member and by contributing the Oracle Cloud Infrastructure provider to the Terraform project. This open approach to building a cloud seems preferable to technology stacks that lock users into a single platform.

I worked with our video team to record a demo that shows how to run the Terraform module to get a Couchbase cluster on Oracle Cloud Infrastructure. The module deploys both Couchbase Server and Sync Gateway. You can, of course, configure it to deploy different numbers of nodes, machines, and so on. This is just the start of our partnership with Couchbase.
Here are some upcoming items:

- Oguz Pastirmaci on the Oracle Cloud Infrastructure Data and AI team is working to improve the module, including revving it to Couchbase 6 and adding MDS support.
- We're debating whether to reuse the Python template generator approach I've used on some other technologies or wait for the control structures that Terraform 0.12 is introducing.
- A blog post about the Kubernetes Operator for Couchbase on Oracle Cloud Infrastructure Container Engine for Kubernetes is coming soon.

We're also starting to run some POCs of Couchbase on Oracle Cloud Infrastructure. If you're interested in learning more, reach out to me at ben.lackey@oracle.com or on Twitter @benofben.
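As promised, here is a small taste of N1QL. This sketch assumes a local Couchbase node with the travel-sample sample bucket loaded; the credentials and field names are illustrative:

# Open the N1QL shell against the query service (port 8093 is the default)
cbq -e http://localhost:8093 -u Administrator -p password

# Inside the shell, an ANSI-style join that returns JSON documents:
#   SELECT r.sourceairport, r.destinationairport, a.name
#     FROM `travel-sample` r
#     JOIN `travel-sample` a ON KEYS r.airlineid
#    WHERE r.type = "route"
#    LIMIT 5;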


Partners

H2O.ai Driverless AI Cruises on Oracle Cloud Infrastructure GPUs

One of the things I'm most excited about at Oracle Cloud Infrastructure is the opportunity to do cool things with our partners in the artificial intelligence (AI)/machine learning (ML) ecosystem. H2O.ai is doing some really innovative things in the ML space that can help power these sorts of use cases and more. Their open source ML libraries have become the de facto standard in the industry, providing a simple way to run a variety of ML methods, from logistic regressions and GBT to an AutoML capability that tunes the model automatically. H2O.ai has continued to build on this functionality with GPU support and what I think might be the best-named product of all time, Sparkling Water. (Yes, it's H2O running on Spark. Get it?)

The latest H2O.ai product is Driverless AI. The name is perhaps a bit misleading: Driverless AI isn't related to driverless cars. Instead, it's an ML platform that provides a GUI on top of the H2O ML libraries that we already know. The GUI provides support for a significant chunk of the ML lifecycle:

- Data loading
- Visualization
- Feature engineering
- Model creation
- Model evaluation
- Deployment for scoring

Software to do all this simply wasn't available five years ago. Instead, a highly skilled person would have had to put everything together by hand over a period of weeks or months. There are still some gaps. For example, data wrangling is still a mess, even with the time series support and automatic feature generation abilities of Driverless AI. That said, building accurate ML models has never been easier.

So, what does this all have to do with Oracle Cloud Infrastructure? We're building data centers all over the world, and they're being populated with some nifty hardware, including cutting-edge GPU boxes. The new BM.GPU3.8 is the top of that range with 8 NVIDIA Volta cards. This is the perfect machine to handle the compute demands of Driverless AI, and we're pricing them to be significantly less expensive than any competing platform.

For our provisioning plane, Oracle Cloud Infrastructure has made an open choice. Rather than building a proprietary technology such as Amazon Web Services CloudFormation, we've chosen to adopt the open source industry standard of Terraform. We've joined the Cloud Native Computing Foundation (CNCF) as a Platinum member and contributed our Terraform provider to the open source project. We've partnered with H2O.ai to write some Terraform modules that deploy H2O.ai Driverless AI on Oracle Cloud Infrastructure. The first module deploys on GPU machines. I worked with our team to record a video that demonstrates how to use the module. It also includes a very basic demo.

This is just the beginning of our partnership with H2O.ai. We're working on several activities with them:

- Oguz Pastirmaci from the Oracle Cloud Infrastructure data and AI team is working to enhance the Terraform module. Building a model is fast with 8 GPUs. It's going to be a lot faster with a whole cluster of those machines humming in parallel.
- We're discussing how we might be able to simplify deployment even further, providing a more integrated experience with a higher-level interface.
- We'll be at H2O World San Francisco 2019 on Feb. 4-5. Although the event won't have booths, a number of us should be wandering around the conference. Say hi!

If you're interested in learning more about H2O.ai on Oracle Cloud Infrastructure or about our AI/ML partnerships in general, reach out to me at ben.lackey@oracle.com. You can also follow me on Twitter @benofben.


Customer Stories

How TruGreen Got Two to Four Times Better Performance for JD Edwards

TruGreen is arguably the largest and one of the most recognizable lawn care brands in the US. When I'm out walking the dog, I often find that the neighbors' yards that I envy the most all boast physical signposts indicating that they are cared for by TruGreen. Because the company's business model and motto is all about "living life outside," they have an expansive team of seasonal and part-time employees in the field, and branches located all around the US. In addition, TruGreen operates multiple lines of business ranging from lawn care to mosquito defense, and supports not only residential but also commercial properties. Therefore, they need a comprehensive and modern enterprise resource planning (ERP) solution that enables them to support and power their diverse lines of business and their distributed workforce. What's more, their ERP system needs to be backed by a high-performing and cost-effective infrastructure.

An Outdated Legacy Application Environment
TruGreen faced several challenges regarding their ERP application and the infrastructure supporting it:

- The JD Edwards version that they were running had been inherited from their parent company, ServiceMaster. It had been released over 15 years ago and was no longer supported.
- Their presentation layer leveraged outdated versions of IBM WebSphere, running on IBM DB2 Universal Database and AIX in the backend.
- TruGreen had disjointed ERP processes, a result of historical mergers and acquisitions (M&A) activity, and the obsolete technology was preventing them from streamlining operations for their many lines of business and over 13,000 employees.

In 2014, TruGreen spun off from ServiceMaster. This action served as a key trigger and perfect opportunity to modernize both their ERP application and the infrastructure that it ran on. The IT team knew they hadn't taken advantage of all the capabilities that modern JD Edwards had to offer, so they decided to upgrade their solution and to move to the cloud. They needed good technology partners that could help them get up and running in the new cloud environment as effectively as possible.

Needed Best Performing, Most Reliable Cloud Infrastructure for JD Edwards
TruGreen found a great partner in Velocity, which provided consulting and implementation services during their cloud transformation process. They considered both private and public cloud options and decided, in the end, to take a hybrid approach. TruGreen chose to archive their historical data in Velocity's private cloud offering, and they also wanted the best performing and most reliable cloud infrastructure for JD Edwards. Velocity recommended implementing JD Edwards EnterpriseOne 9.2 with Oracle Database in Oracle Cloud Infrastructure.

TruGreen chose Oracle for several reasons. Not only was it the ideal IaaS to run JD Edwards, but TruGreen would also be able to take advantage of infrastructure and database services from a single vendor. The ability to purchase both IaaS and PaaS services through the Universal Credit pricing model was a key benefit. Additionally, Oracle Cloud Infrastructure made it easier for enterprises to create a cloud environment that offered the same level of isolation as on-premises, something that TruGreen required. With Oracle's highly customizable and private virtual cloud network, TruGreen and Velocity were able to create subnets to isolate private resources from public ones.
Improved Performance and Streamlined Business Operations Despite TruGreen's outdated legacy environment, Velocity was able to help them stand up their new JD Edwards environment quickly. They leveraged Oracle APIs and their own proprietary tools to accelerate deployment. Since moving to Oracle Cloud Infrastructure, TruGreen has already seen significant improvements in performance. Paul Shearer, Director of JD Edwards Professional Services at Velocity Technology Solutions, believes that TruGreen's JD Edwards implementation is performing two to four times better than average. “Of the entire portfolio of JD Edwards customers Velocity hosts today, TruGreen is the most performant,” said Shearer. Clif Lee, Director of Corporate Systems at TruGreen, added, "Our finance and branch teams that use JD Edwards are just ecstatic over the performance." With Oracle, not only was TruGreen able to streamline business processes, but they now have a modern platform on which to build in additional capabilities, like multi-currency processing, to support their Canadian operations. For more details about TruGreen's JD Edwards implementation in Oracle Cloud Infrastructure, including the quantifiable results that they are seeing so far, read the full case study.


Building Enterprise Onramps to the Cloud Native Freeway

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders.

Businesses of all sizes, from startups to large, established enterprises, are talking about cloud native technologies such as Kubernetes, containers, and serverless. In fact, cloud native is becoming a market imperative for technology-centric businesses that want to remain competitive in the era of cloud computing. I recently sat down for a conversation with Bob Quillin, vice president of developer relations at Oracle Cloud Infrastructure, who founded StackEngine, a container management startup acquired by Oracle in 2015. In this interview, Quillin explains how Oracle Cloud Infrastructure enables enterprises to use traditional technologies in a cloud native context. He also talks about Oracle's longstanding embrace of open standards and open source software and offers recent examples of cloud native use cases around Oracle Database, Java microservices, and WebLogic technologies. Listen to our conversation here and read a condensed version below.

You came to Oracle Cloud Infrastructure from a startup. How does it feel here?

Bob Quillin: It has actually been a pretty exciting time. The Oracle Cloud Infrastructure team up in Seattle was basically formed as a startup within Oracle. Over the last two to three years, we acquired Wercker for continuous integration and continuous delivery (CI/CD). They were a startup in the CI/CD space. We also brought in the Iron.io serverless team, and Dyn was another huge acquisition that added a whole bunch of edge services expertise into the Oracle Cloud Infrastructure team. So, we really have a lot of innovators. There are lots of people trying to do some cool and interesting things inside Oracle, and also help us take our second-generation capabilities around cloud and cloud native to the market. It's been a great opportunity.

What is your team hearing from customers about the cloud and cloud native technologies?

Quillin: As we talk to customers—enterprises, startups, any technology-centric company—we've learned that going cloud and cloud native is really a market imperative. It's a mandate for businesses that want to be digital in this era. The movement is customer driven—and one thing I've found here in my time at Oracle is that Oracle is very customer-centric. Working on this team and evangelizing cloud native technologies over the last four to five years, I've seen that this movement toward cloud native is pervasive. It's both large and small organizations, and it's happening across the board.

Can you tell me more about what's happening with cloud native and open source technologies at Oracle?

Quillin: Oracle actually has a long history in open source and open standards with technologies like SQL, Java, and Linux, for example. We now have this whole new breed of startups that came into Oracle Cloud Infrastructure, and they're bringing a real startup mentality, a new kind of DNA, into Oracle. If you think about the Oracle Cloud Infrastructure team in Seattle, a lot of them came out of cloud businesses like Amazon and Azure. So, in many ways, we're used to cloud, we're open source software developers, and we're really committed to taking this forward in the right way. To that end, last year we joined the Cloud Native Computing Foundation (CNCF) as a Platinum member.
We rolled out a whole bunch of new cloud native technologies based on CNCF standards and Kubernetes. We were one of the first solutions to be certified conformant by the CNCF, and we're also one of the first to release open source serverless solutions through our Fn project. People don't necessarily equate Oracle with open source and cloud, but we're here to help change that. The way to do it is to really commit from the bottom up by engaging with the community and working with developers organically. That's what we're doing. We're working with developers from the bottom up and enterprises from the top down.

Cloud native development is obviously taking the world by storm. But is it only suitable for enterprises that focus mostly on developing new applications? Or can it also help if I'm a big enterprise with lots of traditional applications, databases, and legacy workloads?

Quillin: It can absolutely help big enterprises with traditional workloads. Most of these technologies came out of web-scale companies, be they Netflix, or Google, or Spotify, for example. A lot of the cloud native technologies came from Generation 1 cloud or first-wave cloud native offerings. I think what we're seeing now is the second wave, where you have more and more organizations like Oracle trying to build more onramps to the cloud native freeway, and getting more people, teams, and technologies on board with cloud native. We've got to reach out and connect to the technologies that people know so they have a starting point from which they can actually adopt cloud native strategies.

Can you provide some examples of our cloud native technologies? How are organizations using them?

Quillin: For starters, the WebLogic team here built a Kubernetes operator, which basically extends Kubernetes to create, configure, and manage a whole WebLogic domain. One of our big technology customers is CERN, the European research organization in Switzerland with the largest particle physics lab in the world. CERN is a huge WebLogic and Java shop. They've embraced Kubernetes, and they're using this operator to move a lot of existing technology to this cloud native world.

That's great. Do you have any other examples?

Quillin: Another good example is a project we announced at Oracle OpenWorld in October called Helidon. It's a Java microservices architecture that was rolled out to really simplify the process of doing microservice cloud native deployments in Java. My solutions team basically helped write a Kubernetes wrapper to connect that into Kubernetes. As a result, Java applications that are written in microservices format using that pattern can connect easily into Kubernetes.

A third example is one we're working on right now: We're seeing a lot of Oracle Database customers starting to leverage cloud native apps based on Kubernetes for new web frontends or for some artificial intelligence back-end processing. They're looking at moving to the autonomous database that Oracle launched at OpenWorld this year and using the Oracle Container Engine for Kubernetes to get autonomy and automation, not just on the database side but throughout the whole application.

So, those are three pretty powerful examples of database technology, Java technology, and WebLogic technology. Over the last year, we've seen a huge leap forward in enabling customers to use more traditional technologies but use them in a cloud native context.

Learn more about Oracle cloud native technologies today.


Customer Stories

Covanta Migrates a PeopleSoft Application to Oracle Cloud Infrastructure

Energy-from-waste giant Covanta recently migrated a critical PeopleSoft implementation to Oracle Cloud Infrastructure. But first the company ramped up security around its network edge with the Oracle Dyn Web Application Security platform.

Covanta—which offers a variety of waste management services—has long used Oracle PeopleSoft to run its finance, supply chain management, and procurement portal. The company initially managed the application on-premises and then moved to Oracle Managed Cloud Services, but it recently decided that it was ready for the simplicity and automation of a public cloud deployment for its critical application stack. Citing benefits such as excellent support, faster provisioning, greater scalability, and lower management costs, Covanta migrated the PeopleSoft implementation to Oracle Cloud Infrastructure. "We did an analysis and we looked at a couple of different options for the PeopleSoft application, including [Amazon Web Services], and we spent countless hours with the Oracle team," said Ben Cabrera, Covanta's Vice President and Chief Information Officer. "Everyone at Oracle stepped up to the plate, and now we're in the Oracle Cloud Infrastructure environment."

As planning for the migration began, Covanta realized that it had additional security issues to address. With hackers increasingly targeting internet-facing applications, the company needed a higher level of security at the edge of its network. Following a successful proof of concept, Covanta decided to go live with a next-generation Web Application Firewall (WAF) from the cloud-based Oracle Dyn Web Application Security platform. Oracle Dyn Web Application Security enabled Covanta to protect the PeopleSoft application from DDoS attacks and other threats before, during, and after the migration. It also gives Covanta greater visibility into the different types of malicious traffic targeting internet-facing application environments. "[Oracle Dyn Web Application Security] is definitely one of the higher-performing solutions in this space," said Jason Gonsalves, a security architect and manager at Covanta. "We're really happy with the capabilities, the output, and the integration."

Why Oracle Cloud Infrastructure?
Covanta embraces a multi-cloud strategy and had several options when deciding which cloud infrastructure was right for the PeopleSoft deployment. While the company uses Microsoft Azure and Amazon Web Services for other applications, they decided that Oracle Cloud Infrastructure was the best choice for their mission-critical business application stack because of the single-vendor support model and optimization specifically for PeopleSoft. What's more, it allows Covanta to continue using Oracle management tools they are already successful with, including PeopleSoft Cloud Manager, an orchestration framework that enables users to easily provision and manage PeopleSoft within cloud environments.

Oracle Cloud Infrastructure combines the elasticity and utility of a public cloud with the granular control, security, and predictability of on-premises infrastructure to deliver high performance, high availability, and cost-effective infrastructure services. Oracle Cloud Infrastructure makes it easier for companies like Covanta to provision new services and enhance existing ones over time. "Compute resources are actually a lot better from a CPU utilization perspective. It's a huge improvement," said Karish Chowdhury, a cloud architect and manager at Covanta.
"The environment is also in our control, so making a change that goes through development, QA, and production is much easier and less time consuming." Among other tools, Covanta is using Oracle Cloud Infrastructure virtual cloud network (VCN) technology, which gives it complete control over the networking environment. By using VCNs, Covanta can assign its own IP address space, create subnets, create route tables, and configure firewalls. "The VMs that we have been provisioning seem to be a lot better than what other cloud providers offer at the same level in terms of disk resolution, memory, and CPUs," Chowdhury said. Watch Cabrera discuss Covanta's use of Oracle Cloud Infrastructure in this video: Learn more about Covanta's migration to Oracle Cloud Infrastructure today.


Oracle Cloud Infrastructure

Configure a FastConnect Direct Link with Equinix Cloud Exchange Fabric

This post was written by Sergio J. Castro, Senior Solutions Engineer at Oracle, and Bill Blake, Global Solutions Architect at Equinix.

Oracle Cloud Infrastructure FastConnect is a network connectivity alternative to the public internet for connecting an on-premises data center or network with Oracle Cloud Infrastructure. Equinix, the first and largest FastConnect partner, connects the world's leading businesses to their customers, employees, and partners inside the most interconnected data centers. With Equinix Cloud Exchange Fabric™ (ECX Fabric), customers can extend their Oracle IaaS and PaaS solutions to the Oracle Cloud in more than 30 locations across the US, EMEA, and APAC. ECX Fabric is optimized for connectivity to Oracle Cloud Infrastructure services, leveraging FastConnect. The result is a secure connection that offers predictable, consistent latency and high bandwidth, with dedicated speeds of up to 10 Gbps.

In this post, cloud architects from Oracle and Equinix provide all the steps needed to completely configure a FastConnect link from Oracle Cloud Infrastructure to an on-premises router by using Equinix Cloud Exchange Fabric. Accounts in both Oracle Cloud Infrastructure and Equinix are needed. On the on-premises side of the connection, administrator access to the router that will serve as the customer premises equipment is required; in this post we use a Cisco CSR.

On Oracle Cloud Infrastructure, we build a virtual cloud network (VCN), configure a dynamic routing gateway (DRG), associate the DRG with the VCN, and then add a route rule that points VCN traffic to the DRG. We then configure the FastConnect link. From the FastConnect configuration, we retrieve the virtual circuit OCID and pass it to Equinix for their Cloud Exchange Fabric configuration to set up private peering. On Equinix Cloud Exchange Fabric, we create the connection to Oracle Cloud Infrastructure, using the OCID and other information such as region, Border Gateway Protocol (BGP) IPs, and autonomous system number (ASN) to complete the configuration.

Create a Virtual Cloud Network

(You can skip this step if you already have the VCN that you want to use.)

A virtual cloud network (VCN) is a software-defined private network that you set up in Oracle Cloud Infrastructure. It is a virtual representation of a physical network, with routers, routes, and security rules. A VCN isn't strictly required to configure the FastConnect link, but it interconnects the on-premises and cloud networks and, in this post, lets us test end-to-end connectivity via ICMP.

1. Sign in to your tenancy in the Oracle Cloud Infrastructure Console. Ensure that you’re in the Oracle Cloud Infrastructure region that matches the Equinix destination region that you’re going to configure. This example uses the Ashburn region.
2. In the Quick Launch section of the home page, click Create a virtual cloud network: Networking.
3. In the Create Virtual Cloud Network dialog box, select a compartment. If one is preselected, ensure that you want your VCN to reside there, or select another one. Oracle Cloud Infrastructure uses compartments to organize resources.
4. Give your VCN a name. If you leave this field blank, the date and time of creation become the VCN name.
5. Select Create Virtual Cloud Network Plus Related Resources.
This option assigns a default CIDR block, creates a subnet in each availability domain, adds an internet gateway, generates a security list, and generates a route table with a rule that routes out to the open internet. If you want to customize your own settings, select Create Virtual Cloud Network instead and then create each of these resources.
6. Click Create Virtual Cloud Network. The VCN detail page is displayed.

Note: For this example, we launched a Linux VM compute instance with a private IP address of 10.0.2.2. For information about how to launch compute instances on Oracle Cloud Infrastructure, see the Getting Started guide.

Create a Dynamic Routing Gateway

A dynamic routing gateway (DRG) is a virtual router that provides a pathway for private traffic between your VCN and other networks, such as an on-premises network.

1. On the left side of the Console, under Networking, click Dynamic Routing Gateways.
2. Click Create Dynamic Routing Gateway.
3. In the Create Dynamic Routing Gateway dialog box, select the compartment where you want your DRG to reside, and give your DRG a name (in this example, EquinixDRG).
4. Click Create Dynamic Routing Gateway.
5. After your DRG is provisioned, select it.
6. On the left side of the Console, under Resources, click Virtual Cloud Networks.
7. Click Attach to Virtual Cloud Network.
8. In the Attach to Virtual Cloud Network dialog box, select the same compartment where your VCN resides, and then select the VCN (in this example, EquinixVCN). You can ignore the Associate with Route Table settings. For more information about this option, click the help link or the information symbol in the dialog box.
9. Click Attach. Your VCN is now attached to the DRG.

Add a Rule to the DRG on Your Route Table

A VCN uses virtual route tables to send traffic out of the VCN, for example, to the internet or, as in this case, to your on-premises network. Follow these steps in the Console (a scripted version appears after the list):

1. Go back to the Networking section and select your VCN (in this example, EquinixVCN).
2. Under Resources, click Route Tables.
3. Click Default Route Table for EquinixVCN.
4. Click Edit Route Rules.
5. Click +Another Route Rule.
6. In the expanded dialog box, provide the following information:
   - For Target Type, select Dynamic Routing Gateway.
   - For Compartment, select the same one that you’ve been using throughout this exercise (in this example, Equinix).
   - For Destination CIDR Block, enter the on-premises network CIDR block. In this example, we are using 192.168.1.0/24.
   - For Target Dynamic Routing Gateway, select the DRG that you just created (in this example, EquinixDRG).
7. Click Save.
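For readers who prefer to script their infrastructure, the same route rule can be added with the oci Python SDK. This is a hedged sketch rather than part of the original walkthrough; the OCIDs are placeholders, and the CIDR matches this post's example.

```python
# Hedged sketch: add a route rule that sends traffic for the on-premises
# CIDR (192.168.1.0/24) to the DRG. OCIDs below are placeholders.
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

route_table_id = "ocid1.routetable.oc1..example"
drg_id = "ocid1.drg.oc1..example"

# Build the new rule pointing the on-premises CIDR at the DRG.
rule = oci.core.models.RouteRule(
    destination="192.168.1.0/24",
    destination_type="CIDR_BLOCK",
    network_entity_id=drg_id,
)

# Updates replace the whole rule list, so append to the existing rules.
existing = network.get_route_table(route_table_id).data.route_rules
network.update_route_table(
    route_table_id,
    oci.core.models.UpdateRouteTableDetails(route_rules=existing + [rule]),
)
```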
Create a FastConnect Virtual Circuit

The final step on Oracle Cloud Infrastructure is to configure the FastConnect circuit that the DRG will use to reach the on-premises network. For this step, you need to know the Border Gateway Protocol (BGP) IP addresses and the autonomous system number (ASN). Equinix will provide this information.

1. Go back to the Networking section. Under Networking, click FastConnect.
2. Click Create Connection.
3. In the Create Connection dialog box, select Connect Through a Provider, and then select Equinix: CloudExchange. Click Continue.
4. In the new Create Connection dialog box, provide the following information. The values provided here are specific to this example.
   - Name: Give the connection a name (in this example, Equinix).
   - Compartment: Select the same compartment that you’ve been using throughout this exercise (in this example, Equinix).
   - Virtual Circuit Type: Private Virtual Circuit
   - Dynamic Routing Gateway Compartment: Equinix
   - Dynamic Routing Gateway: EquinixDRG
   - Provisioned Bandwidth: 1 Gbps
   - Customer BGP IP Address: 172.16.4.1/30
   - Oracle BGP IP Address: 172.16.4.2/30
   - Customer BGP ASN: 65100
5. Click Continue. The connection is created from Oracle Cloud Infrastructure.
6. On the details page for the connection, copy the OCID. You need it to provision the virtual connection from Equinix in the next section. You can also click the Equinix link, which takes you to their main site, where you can log in to their portal (for the next section).

Complete the Connection from Equinix to Oracle Cloud Infrastructure

Now that you have completed the Oracle Cloud Infrastructure part, the FastConnect status is Pending Provider. Next you configure the Equinix part, which provides the actual physical link.

1. Log in to the Equinix Cloud Exchange Portal.
2. Click the Create Connection tab.
3. Select Oracle Cloud. From the four options, select Oracle Cloud Infrastructure (OCI) FastConnect (Layer 2) and then click Create a Connection.
4. Select an origin and destination. In this example, we are creating a virtual connection from Equinix Chicago to the Oracle Cloud Infrastructure Ashburn region, which is local to Equinix Ashburn. Note that we are using the Equinix Cloud Exchange (ECX) WAN Fabric to transit between Chicago and Ashburn.
5. Provide the required information to build the virtual connection:
   - FastConnect Virtual Circuit: Provide a name for this connection.
   - VLAN: Enter the VLAN used on your router. The values must match.
   - Virtual Circuit OCID: Enter the OCID that you copied in the previous procedure from the Oracle Cloud Infrastructure Console. This ID is validated by the system.
   - Purchase Order Number: This optional field is for customer tracking.
6. Click Next. The circuit speed is set automatically based on the OCID from Oracle Cloud Infrastructure.
7. On the page that summarizes the virtual connection settings, validate the settings and add your email address for order notifications. A confirmation screen appears.
8. Click Inventory and locate your new virtual connection. Click the virtual connection to view the status. It normally takes 5 to 10 minutes for the Equinix Cloud Exchange to configure the Equinix and Oracle sides. Ensure that the Status and Provider Status fields say Provisioned. The additional information that shows the Oracle side of the virtual connection can be used later for troubleshooting.

On the connection detail page in the Oracle Cloud Infrastructure Console, note that the link is provisioned but not yet synchronized.

Complete the Router Configuration from Equinix to Your Network

Now that the Equinix part is done, the final step is configuring the connection to the on-premises network. Access your router to configure the BGP properties and establish a peering relationship with the Oracle Cloud Infrastructure DRG to exchange routes. This step can vary by vendor; this example uses a Cisco CSR. Refer to your vendor’s documentation for help with BGP.

Oracle's BGP ASN is 31898 when using the Equinix Cloud Exchange. Your ASN can be any private or public ASN that you own. Configure the router IP address and BGP information. In this example, the 172.16.4.0/30 network and the private BGP ASN 65100 are used.

Validate Connectivity Between the Router and Oracle Cloud Infrastructure

Following are some suggested steps for testing the connectivity (a small scripted check follows the list):

- Verify that BGP has been established.
- Verify that BGP routes are being sent to and received from Oracle Cloud Infrastructure.
- Send ping and traceroute commands to the Oracle DRG.
- Send ping and traceroute commands to Oracle bare metal hosts or VMs within Oracle Cloud Infrastructure.
- If you are using multiple virtual connections, test failover.
- Verify that you can ping an Oracle VM (10.0.2.2) from your router (192.168.1.1).
- Verify that you can ping the Oracle DRG IP address (172.16.4.2) from your router.
- In the Oracle Cloud Infrastructure Console, verify that the status of the FastConnect connection is UP.
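The following is a small, hedged Python sketch (not part of the original walkthrough) that mirrors the ping checks above from a host behind the customer premises equipment, using the example addresses from this post.

```python
# Hedged sketch: basic reachability checks mirroring the validation steps.
# The target IPs match this post's example topology.
import subprocess

targets = {
    "Oracle DRG BGP peer": "172.16.4.2",
    "VM inside the VCN": "10.0.2.2",
}

for name, ip in targets.items():
    # Send three ICMP echo requests; returncode 0 means at least one reply.
    result = subprocess.run(["ping", "-c", "3", ip], capture_output=True)
    status = "reachable" if result.returncode == 0 else "UNREACHABLE"
    print(f"{name} ({ip}): {status}")
```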
Basic connectivity should now be established between the edge router and Oracle Cloud Infrastructure.

To learn more about Oracle Cloud Infrastructure FastConnect, see the FastConnect Overview. To learn more about the Oracle Cloud partnership with Equinix, see this partner page. To learn more about Equinix Cloud Exchange Fabric, see the ECX Overview.

Bill Blake is a network veteran of over 13 years and has covered nearly all related technologies, including wireless, routing, switching, security, cloud, data centers, and load balancing. He has worked for large enterprises in technical, architectural, and managerial roles, as well as for a large VAR performing massive data center migrations. Bill now works at Equinix, helping customers architect their data center, WAN, and cloud strategies on a global scale.

Sergio Castro is an Oracle Cloud Infrastructure Certified Architect, Associate. He focuses on networking and on next-generation IT services. You can reach him at sergio.castro@oracle.com.


Solutions

Bring Your Domain Name to Oracle Cloud Infrastructure’s Edge Services

The Domain Name System, or DNS, is the first step in site and web application performance. Founded on Dyn’s DNS, the Oracle Cloud Infrastructure DNS service is an integral part of Oracle Cloud Infrastructure’s suite of edge services. It’s available through the Oracle Cloud Infrastructure Console and the API.

Bringing your domain name to Oracle Cloud Infrastructure is a straightforward process: it can take as little as three to nine minutes to configure and start using your domain name in Oracle Cloud Infrastructure. This post describes how to bring a domain name from a third-party provider. We create a zone and the needed records, and then publish the zone. The DNS record is mapped to a live web server running on an Oracle Cloud Infrastructure Compute Linux instance.

Create a DNS Zone

1. Sign in to your tenancy in the Oracle Cloud Infrastructure Console.
2. In the Quick Launch section of the home page, click Manage a domain: DNS Zone Management. The Create Zone dialog box is displayed. Here you enter the domain name that you’re bringing from Dyn or from a third-party provider such as GoDaddy.
3. Select a compartment where your DNS zone will reside. If one is preselected, you might need to update it. If so, perform the following actions; if not, skip to the next step.
   - Click Cancel in the Create Zone dialog box.
   - On the DNS Zone Management page, click the Compartment list and choose a compartment. Oracle Cloud Infrastructure uses compartments to organize resources.
   - Click Create Zone.
4. In the Create Zone dialog box, enter the following values:
   - For Method, select Manual. The manual method is the most common method, but you can also import your zone.
   - For Zone Type, select Primary. The other option is Secondary. The primary zone serves as a master zone, and the secondary zone has a read-only copy of the zone that stays synchronized with the primary DNS server.
   - For Zone Name, enter the domain name that you are going to associate with the IP address of your web server. This example uses the domain name castro.cloud.
5. Click Submit. The DNS zone is created.

Oracle Cloud Infrastructure created five records: four nameserver (NS) records and one start of authority (SOA) record. The four nameserver records are the records that you will take to the third-party DNS provider from which you’re bringing the DNS (in the next section).

Add the Nameserver Records to the Third-Party DNS Provider

In the third-party DNS provider, change the nameservers type from Default to Custom. Enter the four nameserver records provided by Oracle Cloud Infrastructure, and then click Save.

Add the A and CNAME Records

1. Go back to the zone details page in the Oracle Cloud Infrastructure Console, and click Add Record.
2. For the record type, select A - IPv4 Address.
3. Retrieve the public IP address of your web server, and enter it in the Address field.
4. Enter values in the TTL and TTL Unit fields.
5. If you want your domain name to be preceded by www (for example, www.castro.cloud), add a CNAME (canonical name) record by selecting the Add Another Record check box.
6. Click Submit. If you are adding a CNAME record, select CNAME - CNAME as the record type and provide the necessary information. Then clear the Add Another Record check box and click Submit again.

Publish Your New DNS Zone

The last step is to publish your new DNS zone by clicking Publish Changes. As soon as you publish, you can access your web server with its new domain name address (in this example, castro.cloud and www.castro.cloud).
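Once the delegation change has propagated, you can confirm that the names resolve. Here is a small, hedged Python sketch (not part of the original post) that checks the example names against your local resolver.

```python
# Hedged sketch: confirm the published zone resolves. The names match this
# post's example (castro.cloud / www.castro.cloud).
import socket

for name in ("castro.cloud", "www.castro.cloud"):
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as err:
        # Resolution can fail until the nameserver change propagates.
        print(name, "-> resolution failed:", err)
```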
To learn more about Oracle Cloud Infrastructure’s DNS Zone Management, see Overview of the DNS Service.


Oracle Cloud Infrastructure

Big Data Performance on Oracle Cloud Infrastructure

Hello, my name is Zachary Smith, and I'm a Solutions Architect working on Big Data for Oracle Cloud Infrastructure. You might have seen Larry Ellison’s keynote at Oracle OpenWorld 2018 regarding the significant price and performance advantages of running Big Data workloads on Oracle Cloud Infrastructure. I ran TeraSort benchmarks to understand the price and performance advantage, and I’d like to share an in-depth look at the benchmark process.

Creating the Environments

For the benchmark environments, I used Cloudera as the Hadoop distribution (v5.15.1). Deployment on Oracle Cloud Infrastructure leveraged Terraform templates available on GitHub to automate cluster deployment and configuration. Oracle Cloud Infrastructure clusters were tested using four bare metal shapes: BM.StandardE2.64, BM.Standard1.36, BM.Standard2.52, and BM.HPC2.36, with 32 x 1-TB block volumes for HDFS capacity per worker, and 256-GB root volumes.

The AWS deployments used some of the same automation elements from the Terraform templates, but because of differences in provider code, the provisioning and installation were done manually. I chose AWS M5.24xLarge and M4.16xLarge instances for comparative analysis, with 25 x 1-TB EBS GP2 volumes for HDFS capacity per worker, and 256-GB root volumes. I used 25 volumes instead of 32 (as on Oracle Cloud Infrastructure) because the Cloudera Enterprise Reference Architecture for AWS Deployments document (page 20) says not to use more than 26 EBS volumes on a single instance, including root volumes.

All hosts had the same OS (CentOS 7.5) and similar Cloudera cluster tunings aside from variances in CPU and memory, which depend on the worker host resources. For cluster sizing, I normalized the OCPU count as close to 300 cores per cluster as possible. The following table shows relative cluster sizes, OCPU/vCPU, and RAM information:

Running the Tests

To run the TeraSort, I used a benchmarking script that submits jobs to the cluster with tuning relative to the available cluster resources. It uses the following formulas to calculate the number of mappers/reducers, map/reduce memory, and Java JMX/JMS parameters (a small worked example follows the list):

- Mappers/reducers: number of workers * cpu_vcores
- Map/reduce memory: (number of workers * RAM per worker) / number of mappers
- Java JMX/JMS parameters: map memory * 0.8
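To make the formulas concrete, here is a small Python sketch that computes the job submission parameters for a hypothetical cluster. The worker count and specs are illustrative assumptions, not the benchmark's actual values.

```python
# Hedged sketch: apply the tuning formulas above to a hypothetical cluster.
workers = 10              # number of worker nodes (assumption)
vcores_per_worker = 52    # YARN vcores per worker (assumption)
ram_per_worker_gb = 768   # RAM per worker in GB (assumption)

# Mappers/reducers: number of workers * cpu_vcores
mappers = workers * vcores_per_worker

# Map/reduce memory: (number of workers * RAM per worker) / number of mappers
map_memory_gb = (workers * ram_per_worker_gb) / mappers

# Java JMX/JMS parameters: map memory * 0.8
java_heap_gb = map_memory_gb * 0.8

print(f"mappers/reducers: {mappers}")
print(f"map/reduce memory: {map_memory_gb:.1f} GB")
print(f"java heap (JMX/JMS): {java_heap_gb:.1f} GB")
```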
When running the tests on the AWS shapes, I encountered Java heap space errors because the memory values produced by these formulas were too low for the same cluster heap settings. I tuned for higher memory/JMX/JMS values by increasing yarn.cpu.vcores, halving the number of mappers, and then adjusting the minimum allocation vcores to compensate. These values produced equivalent job submission parameters.

The following table shows the relative cluster tuning parameters in detail:

The following table shows the job submission parameters:

The following graphs show 1-TB and 10-TB TeraSort times:

I also want to show some comparative utilization graphs in Cloudera Manager. The following screenshot shows the AWS M4.16xLarge cluster, and the one after it shows the M5.24xLarge cluster. Note that the Cluster Network IO profiles are similar for both clusters, peaking at about 2 GB/s. Cluster Disk IO peaks a little over 9 GB/s, as does HDFS IO.

Now look at the comparative graphs for an Oracle Cloud Infrastructure cluster with BM.HPC2.36 worker shapes: on Oracle Cloud Infrastructure, Cluster Disk IO peaks at around 25 GB/s, and Cluster Network IO mirrors it. That’s 10 times the network bandwidth of AWS! HDFS IO also peaks at almost 5 times that of AWS. Also note the CPU utilization graphs: the reduced I/O on AWS directly limits the processing ability of the cluster. In essence, the network performance is a bottleneck. With the Oracle Cloud Infrastructure shapes, you get near line-speed network performance, which eliminates that bottleneck and allows maximum utilization of cluster resources. This leads to substantially faster workload processing times.

The bottom line: When you run your Big Data workloads on Oracle Cloud Infrastructure, you get dramatically better performance, with guaranteed SLAs for performance, manageability, and availability, at a substantially lower price point than AWS.

Next Steps

Sign up and try Oracle Cloud Infrastructure for free, download a Terraform deployment template, and experience the blazing-fast performance of Big Data on Oracle Cloud Infrastructure yourself!


Security

Core-to-Edge Security: The Oracle Cloud Infrastructure Edge Network

As more customer, partner, and employee interactions happen over internet-connected, digital channels and the threat landscape becomes more complex and varied, the imperative for security has compounded. That’s why Oracle Cloud Infrastructure takes a different approach to security, one that extends from the core infrastructure (including the database) to the user edge. Oracle Cloud Infrastructure's core-to-edge security strategy protects you and your organization from a variety of external and internal threats and incorporates common management of events, alerts, and orchestration of mitigations. Adding the edge to the core brings many benefits, including:

- Layers of defense that are designed to secure users, apps, data, and infrastructure
- Defense layer integration, so that detection of a botnet attack at the edge may automatically increase the security warnings and postures in the core
- Support for multi-location workloads (in the cloud, in many clouds, or at the edge), regardless of where users and customers are and what delivery mechanisms they use
- Automatic detection and mitigation of attacks using simultaneous vectors on the network, user, and application layers
- A deep monitoring network of sensors that provides data on internet performance and security events all over the world

New edge security services, including a web application firewall and DDoS protection, were announced at Oracle OpenWorld 2018 to provide a secure cloud with reliable performance. The services run on the new globally distributed Oracle Cloud Infrastructure Edge Network and are designed to alleviate many enterprise cloud migration concerns. Oracle edge security services can protect any application in any cloud and any on-premises infrastructure.

What Is an Edge Network?

The cloud edge is where users and devices connect to the network. That makes it both a crucial point for users’ interactions with applications in the cloud and a potential launch point of attacks. An enterprise-ready cloud needs to include an edge network that provides the following:

- Low latency and real-time processing of massive datasets such as web traffic
- Performance acceleration techniques such as load balancing, DNS resolution, local caching, and tracking of internet route changes
- Local learning and automation techniques
- Real-time internet health analysis

Many applications and services are designed to work at the edge, leveraging compute from the devices on which they are accessed, as well as workloads on the nearest cloud server. Today, that needs to be just about anywhere to enable business-critical functions. As the capacity of core networks is outstripped by computational intensity, organizations become more reliant on edge services, servers, and devices themselves to process business logic.

Oracle's edge network is deployed close to end users in many markets and complements the large, secure Oracle Cloud Infrastructure regions that host workloads by adding an important layer of security and performance for traffic coming into web applications. The network has now been deployed at scale in globally distributed, very high-capacity points of presence (POPs). Each POP is fully redundant, multi-tenant, fault-tolerant, and self-repairing. The compute capacity of the edge network secures applications at the edge before requests and data are routed optimally to an Oracle cloud region, any other cloud provider, or on-premises infrastructure used by Oracle customers. Below is a map of Oracle Cloud Infrastructure Edge Network POPs.
Fifteen locations are dedicated to application security, and five locations have high-capacity DDoS scrubbing centers. Nineteen locations are dedicated to DNS.

Stopping Attacks at the Edge

Security is the top cloud challenge of 2018: 77% of IT professionals identified it as a challenge in the RightScale 2018 State of the Cloud Survey. And when it comes to security, location is key. Oracle Cloud Infrastructure’s security defense platform sits at the network edge, away from the core web server infrastructure and closer to the end user. Hence, detection and mitigation happen before a potential threat reaches your network. Additionally, this configuration allows users to run ad hoc security defenses based on specific events (say, the escalation of an attack) or focus on only the specific section of applications that needs to be addressed during an attack, without affecting the rest of the infrastructure.

How an application security POP works at the edge

Protecting Hybrid and Multi-Cloud Architectures

Enterprises commonly use several cloud providers, often in combination with on-premises legacy systems. This is why all security services that run on the Oracle Cloud Infrastructure Edge Network are designed to work independently of where applications are hosted. This design is especially important for security and performance because it allows for a global view of all events, and for monitoring and protection of any and all applications in one unique platform, regardless of where those applications are hosted and regardless of the delivery mechanisms. The edge security services are a pure, cloud native, multi-tenant solution.

Helping Move and Improve

One of the largest impediments enterprises face is maintaining a strong security posture during a migration of workloads to the cloud. Oracle understands this concern and has built tools and solutions that support this transition. Because the application security services are independent of the hosting location(s) of the applications, the same security postures that applied to the old infrastructure continue to apply seamlessly to the new infrastructure before, during, and after the migration. Hence, to take the risk out of the move-and-improve process, Oracle recommends that Oracle application security services be activated on the current applications sitting within the old infrastructure before the migration. Then, as the customer migrates their application servers to the new target infrastructure, all security services are already in place and activated. This is a key differentiator from the rest of the infrastructure as a service (IaaS) market, which can't offer the same no-risk solution for an enterprise cloud migration.

Deep Monitoring of the Internet

Oracle has also deployed a deep monitoring network that provides data on internet performance and security events all over the world, with real-time information about performance degradation, internet routing changes, and network security alerts. Oracle Cloud products, such as Market Performance and IP Troubleshooting, are based on this Internet Intelligence data. Oracle’s Internet Intelligence Map monitors the volatility of the internet as a whole. With ever more organizations relying on third-party providers for their most critical services, monitoring the collective health of the internet is increasingly important.
Data gathered by the Oracle Cloud Infrastructure Edge Network is also used to provide the Oracle Security Research team with valuable insight into BGP route changes and DDoS activation worldwide. The Oracle Security Research team is able to monitor 250 million route updates per day, including where DDoS protection is being activated and when attacks are occurring. We can measure, in near real time, the quality of cloud DDoS protection activations by most cloud-based DDoS vendors. This information can be used to measure the effectiveness of protection solutions.

The Pillar of Security

The agility, scalability, and integration capabilities of the cloud, combined with extensive cost savings, have made migration to the cloud a necessity for enterprise-grade organizations. However, there are risks involved in an enterprise cloud migration, concerning everything from security to the sheer scale of such a move. Oracle Cloud Infrastructure was designed with this in mind. Security is a core pillar of everything we do, from deploying data centers and architecting networks to monitoring and scaling services. The Oracle Cloud Infrastructure Edge Network is part of Oracle’s forward-looking strategy. As the world moves to the cloud, we provide a core-to-edge solution to do so securely, efficiently, and without boundaries.


Developer Tools

At KubeCon + CloudNativeCon, Oracle Extends Its Commitment to Openness

Today at KubeCon + CloudNativeCon, Oracle Cloud Infrastructure unveiled the Oracle Cloud Native Framework, the world's most comprehensive open source framework for deploying public cloud, hybrid cloud, and on-premises applications. I'm especially excited about this announcement because it represents another step forward in our mission to give developers the tools that they need to reduce complexity and deploy modern applications in any type of environment. It's also the latest example of Oracle Cloud Infrastructure's longstanding commitment to interoperability and open standards.

The Oracle Cloud Native Framework introduces a comprehensive set of new cloud native resources for developers. One of those resources is Oracle Functions, a serverless cloud service based on the open source Fn Project. As part of the announcement, we also introduced a rich set of cloud native offerings built on the Oracle Cloud Infrastructure Container Engine for Kubernetes, our Kubernetes orchestration and management layer. These resources address developer needs in three key areas: provisioning, application definition and development, and observability and analysis.

Embracing Open Standards

A commitment to openness is one of the five key pillars that Oracle Cloud Infrastructure is built on, along with protecting customers' existing investments, ensuring security, delivering mission-critical performance, and providing unmatched enterprise expertise. We embrace open standards because they enable our customers to be agile and responsive to changing business requirements. Open standards ensure that customers have the freedom and flexibility to move workloads between their on-premises data centers and Oracle Cloud, and even to other vendors' clouds when needed. Additionally, open standards lower barriers to innovation and reduce the total cost of technology investments.

Oracle has a long history of supporting open standards and making technical contributions to the open source communities responsible for Linux, Berkeley DB, Xen, MySQL, and many other technologies. Additionally, Oracle is a contributing member of several industry groups that promote open standards, including the Eclipse Foundation, the Cloud Security Alliance, and the Internet Society. And this year, we expanded our membership in and contributions to the Cloud Native Computing Foundation (CNCF).

Oracle Cloud Infrastructure has made several announcements in recent months that advance our commitment to openness and support the needs of software developers. Following is a summary of some of the latest news.

The Oracle Linux Cloud Native Environment

Introduced at OpenWorld in October, the Oracle Linux Cloud Native Environment gives developers the features that they need to develop microservices-based applications that can be deployed in environments that support open standards. The Oracle Linux Cloud Native Environment is based entirely on open standards, specifications, and APIs defined by the CNCF. The environment makes it easier for developers to create, orchestrate, and manage containers. It also provides tools and resources for cloud native networking and storage, continuous integration and continuous delivery, and observability and diagnostics.

GraphPipe

Oracle recently introduced GraphPipe, an open source project designed to make it easier for enterprises to deploy and query machine learning models. GraphPipe gives developers a standard, high-performance protocol for transmitting tensor data over networks.
Terraform

We also recently released our Terraform provider, which gives developers access to an open source, enterprise-class orchestration tool that they can use to manage Oracle Cloud Infrastructure Compute. We'll soon be releasing a group of open source Terraform modules that enable easy provisioning of Oracle Cloud Infrastructure services.

And those are just some of the advancements that Oracle Cloud Infrastructure is making as it builds out the world's first truly open public cloud platform. Stay tuned for more news. In the meantime, try our cloud for yourself. Create a trial account with up to 3,500 hours of free cloud computing.


Announcing Oracle Cloud Native Framework at KubeCon North America 2018

This blog was originally published at https://blogs.oracle.com/cloudnative/

At KubeCon + CloudNativeCon North America 2018, Oracle has announced the Oracle Cloud Native Framework: an inclusive, sustainable, and open cloud native development solution with deployment models for public cloud, on premises, and hybrid cloud. The Oracle Cloud Native Framework is composed of the recently announced Oracle Linux Cloud Native Environment and a rich set of new Oracle Cloud Infrastructure cloud native services, including Oracle Functions, an industry-first, open serverless solution available as a managed cloud service based on the open source Fn Project.

With this announcement, Oracle is the only major cloud provider to deliver and support a unified cloud native solution across managed cloud services and on-premises software, for public cloud (Oracle Cloud Infrastructure), hybrid cloud, and on-premises users, supporting seamless, bi-directional portability of cloud native applications built anywhere on the framework. Because the framework is based on open, CNCF-certified, conformant standards, it will not lock you in: applications built on the Oracle Cloud Native Framework are portable to any Kubernetes-conformant environment, on any cloud or infrastructure.

Oracle Cloud Native Framework – What Is It?

The Oracle Cloud Native Framework provides a supported solution of Oracle Cloud Infrastructure cloud services and Oracle Linux on-premises software based on open, community-driven CNCF projects. These are built on an open Kubernetes foundation – among the first K8s products released and certified last year. Six new Oracle Cloud Infrastructure cloud native services are being announced as part of this solution and build on the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services.

Cloud Native at a Crossroads – Amazing Progress

We should all pause and consider how far the cloud native ecosystem has come – evidenced by the scale, excitement, and buzz around the sold-out KubeCon conference this week and the success and strong foundation that Kubernetes has delivered! We are living in a golden age for developers – a literal "first wave" of cloud native deployment and technology – being shaped by three forces coming together and creating massive potential:

- Culture: The DevOps culture has fundamentally changed the way we develop and deploy software and how we work together in application development teams. With almost a decade’s worth of work and metrics to support the methodologies and cultural shifts, it has resulted in many related off-shoots, alternatives, and derivatives, including SRE, DevSecOps, AIOps, GitOps, and NoOps (the list will no doubt go on).
- Code: Open source and the projects that have been battle tested and spun out of webscale organizations like Netflix, Google, Uber, Facebook, and Twitter have been democratized under the umbrella of organizations like the CNCF (Cloud Native Computing Foundation). This grants the same access and opportunities to citizen developers playing or learning at home as it does to enterprise developers in the largest of orgs.
- Cloud: Unprecedented compute, network, and storage are available in today’s cloud – and that power continues to grow with a never-ending explosion in scale, from bare metal to GPUs and beyond. This unlocks new applications for developers in areas such as HPC apps, Big Data, AI, blockchain, and more.
Cloud Native at a Crossroads – Critical Challenges Ahead

Despite all the progress, we are facing new challenges to reach beyond these first-wave successes. Many developers and teams are being left behind as the culture changes. Open source offers thousands of new choices and options, which on the surface create more complexity than a closed, proprietary path where everything is pre-decided for the developer. The rush toward a single-source cloud model has left many with cloud lock-in issues, resulting in diminished choices and rising costs – the opposite of what open source and cloud are supposed to provide. The challenges below mirror the positive forces above and are reflected in the August 2018 CNCF survey:

- Cultural change for developers: On-premises, traditional development teams are being left behind. Cultural change is slow and hard.
- Complexity: Too many choices, too hard to do yourself (maintain, administer), too much too soon?
- Cloud lock-in: Proprietary single-source clouds can lock you in with closed APIs, services, and non-portable solutions.

The Cloud Native Second Wave – Inclusive, Sustainable, Open

What’s needed is a different approach:

- Inclusive: Can include cloud and on-prem, modern and traditional, dev and ops, startups and enterprises
- Sustainable: Managed services versus DIY; open but curated, supported, enterprise-grade infrastructure
- Open: Truly open, community-driven, and not based on proprietary tech or self-serving OSS extensions

Introducing the Oracle Cloud Native Framework – What’s New?

The Oracle Cloud Native Framework spans public cloud, on-premises, and hybrid cloud deployment models – offering choice and uniquely meeting the broad deployment needs of developers. It includes Oracle Cloud Infrastructure Cloud Native Services and the Oracle Linux Cloud Native Environment. On top of the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services, a rich set of new Oracle Cloud Infrastructure cloud native services has been announced, with services across provisioning, application definition and development, and observability and analysis.

Application Definition and Development

- Oracle Functions: A fully managed, highly scalable, on-demand, functions-as-a-service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the open source Fn Project. Multi-tenant and container native, Oracle Functions lets developers focus on writing code to meet business needs without having to manage or even address the underlying infrastructure. Users pay only for execution, not for idle time.
- Streaming: Enables applications such as supply chain, security, and IoT to collect data from many sources and process it in real time. Streaming is a highly available, scalable, and multi-tenant platform that makes it easy to collect and manage streaming data.

Provisioning

- Resource Manager: A managed Oracle Cloud Infrastructure provisioning service based on industry-standard Terraform. Infrastructure-as-code is a fundamental DevOps pattern, and Resource Manager is an indispensable tool for automating configuration and increasing productivity by managing infrastructure declaratively.

Observation and Analysis

- Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. Monitoring provides predefined metrics and dashboards, and also supports a service API to obtain a top-down view of the health, performance, and capacity of the system.
  The Monitoring service also includes alarms to track these metrics and act when they vary or exceed defined thresholds, helping users meet service-level objectives and avoid interruptions.
- Notifications: A scalable service that broadcasts messages to distributed components, such as email and PagerDuty. Users can easily deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers through a publish-subscribe pattern.
- Events: Based on the CNCF CloudEvents standard, Events enables users to react to changes in the state of Oracle Cloud Infrastructure resources, whether initiated by the system or by user action. Events can store information to Object Storage, or they can trigger Functions to take actions, Notifications to inform users, or Streaming to update external services.

Use Cases for the Oracle Cloud Native Framework: Inclusive, Sustainable, Open

Inclusive: The Oracle Cloud Native Framework includes both cloud and on-prem, supports modern and traditional applications, supports both dev and ops, and can be used by startups and enterprises. As an industry, we need to create more on-ramps to the cloud native freeway – in particular by reaching out to teams and technologies and connecting cloud native to what people know and work on every day. The WebLogic Server Operator for Kubernetes is a great example of just that: it enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management. As another example, the Helidon project for Java creates a microservice architecture and framework for Java apps to move more quickly to cloud native. Many Oracle Database customers are connecting cloud native applications based on Kubernetes for new web front ends and AI/big data processing back ends, and the combination of the Oracle Autonomous Database and OKE creates a new model for self-driving, securing, and repairing cloud native applications. For example, using Kubernetes service broker and service catalog technology, developers can simply connect Autonomous Transaction Processing applications into OKE services on Oracle Cloud Infrastructure.

Sustainable: The Oracle Cloud Native Framework provides a set of managed cloud services and supported on-premises solutions, open and curated, and built on an enterprise-grade infrastructure. New open source projects are popping up every day, and the rate of change of existing projects like Kubernetes is extraordinary. While the landscape grows, the industry and vendors must face the resultant challenge of complexity, as enterprises and teams can only learn, change, and adopt so fast. A unified framework helps reduce this complexity through curation and support. Managed cloud services are the secret weapon to reduce the administration, training, and learning-curve issues enterprises have had to shoulder themselves. While a do-it-yourself approach has been their only choice until recently, managed cloud services such as OKE give developers a chance to leapfrog into cloud native without a long and arduous learning curve. A sustainable model, built on an open, enterprise-grade infrastructure, gives enterprises a secure, performant platform from which to build real hybrid cloud deployments, including these five key hybrid cloud use cases:

- Development and DevOps: Dev/test in the cloud, production on-prem
- Application Portability and Migration: Enables bi-directional cloud native application portability (on-prem to cloud, cloud to on-prem) and lift-and-shift migrations.
  The Oracle MySQL Operator for Kubernetes is an extremely popular solution that simplifies portability and integration of MySQL applications into cloud native tooling. It enables creation and management of production-ready MySQL clusters based on a simple declarative configuration format, including operational tasks such as database backups and restoring from an existing backup. The MySQL Operator simplifies running MySQL inside Kubernetes, enabling further application portability and migrations.
- HA/DR: Disaster recovery or high availability sites in the cloud, production on-prem
- Workload-Specific Distribution: Choose where you want to run workloads, on-prem or cloud, based on specific workload type (e.g., based on latency, regulation, new vs. legacy)
- Intelligent Orchestration: More advanced hybrid use cases require more sophisticated distributed application intelligence and federation – these include cloud bursting and Kubernetes federation

Open: Over the course of the last few years, development teams have typically chosen to embrace a single-source cloud model to move fast and reduce complexity – in other words, the quick and easy solution. The price they are paying now is cloud lock-in resulting from proprietary services, closed APIs, and non-portable solutions. This is the exact opposite of where we are headed as an industry – fueled by open source, CNCF-based, and community-driven technologies. An open ecosystem enables not only a hybrid cloud world but a truly multi-cloud world – and that is the vision that drives the Oracle Cloud Native Framework!


Developer Tools

Announcing Oracle Cloud Infrastructure Monitoring

At CloudNativeCon North America 2018 in Seattle this week, we are announcing a new service, Oracle Cloud Infrastructure Monitoring. Monitoring provides your enterprise with fine-grained metrics and notifications to monitor your entire stack. Using the Monitoring service, your enterprise can understand the health and performance of your stack, including Oracle Cloud Infrastructure resources, optimize resource utilization, and respond to anomalies in real time.

Out-of-the-box performance and health metrics are provided for your Oracle Cloud Infrastructure resources, including Compute instances, Virtual Cloud Network (VCN) virtual NICs, block storage volumes, and Load Balancer as a Service (LBaaS). You can also emit your own custom metrics to gain visibility across your entire stack. Additionally, alarms can be created on these metrics using industry-standard statistics, trigger operators, and time intervals. Alarms alert you in real time to important changes across your stack via email and PagerDuty, using the Oracle Notifications service.

The interactive Metrics Explorer in the Oracle Cloud Infrastructure Console provides a comprehensive view of metrics across your resources and custom metrics, with the ability to customize and filter the data. The Monitoring service offers a best-in-class metric engine, allowing you to perform powerful aggregation and slice-and-dice queries across multiple metric streams and dimensions in real time. The Monitoring service's public API and SDK/CLI enable easy integration with your existing enterprise infrastructure.

Sample Monitoring Use Cases

Use the Metrics Explorer to understand the health of your Oracle Cloud Infrastructure resources. Metrics can be visualized individually or aggregated across multiple resources. Below is the CPU percentage for multiple Compute instances over the past day:

Monitoring also provides notifications, such as when a CPU percentage passes a predefined utilization threshold. When a resource's CPU passes the threshold, a PagerDuty notification is triggered or an email is sent to your team. For example, an alarm can be triggered when a resource's utilization is below 3% for more than 24 hours.

Availability

With the Monitoring service, we are delivering another core pillar to ensure that Oracle Cloud Infrastructure offers a best-in-class foundation for all enterprise workloads and use cases. Monitoring will become available in early 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Monitoring or to request access to the technology, please register.
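Because the service is not yet generally available, the SDK surface may change; purely as a hedged sketch, here is what emitting a custom metric might look like with the oci Python SDK. The namespace, metric name, dimensions, and OCID are illustrative assumptions.

```python
# Hedged sketch: publish a custom metric datapoint. All names and OCIDs
# are illustrative; the Monitoring API is pre-GA and may differ.
import datetime
import oci

config = oci.config.from_file()
monitoring = oci.monitoring.MonitoringClient(config)  # may need the telemetry-ingestion endpoint

monitoring.post_metric_data(
    oci.monitoring.models.PostMetricDataDetails(
        metric_data=[
            oci.monitoring.models.MetricDataDetails(
                namespace="custom_app",                            # assumed namespace
                compartment_id="ocid1.compartment.oc1..example",   # placeholder
                name="orders_processed",                           # hypothetical metric
                dimensions={"host": "app-01"},
                datapoints=[
                    oci.monitoring.models.Datapoint(
                        timestamp=datetime.datetime.utcnow(),
                        value=42.0,
                    )
                ],
            )
        ]
    )
)
```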


Developer Tools

Announcing Oracle Cloud Infrastructure Notifications

At CloudNativeCon North America 2018 this week in Seattle, we are announcing a new service, Oracle Cloud Infrastructure Notifications. Notifications is a fully managed, durable, and secure pub-sub messaging service that broadcasts your messages to a large number of distributed applications at scale. It eliminates polling overhead by pushing messages to your subscribers' endpoints. Notifications delivers secure, highly reliable, low-latency, and durable messages for applications hosted on Oracle Cloud Infrastructure and externally. It empowers you to send notifications and to integrate distributed systems and microservices.

Notifications Use Cases

Here are some of the many possible uses for Notifications:

- Operational alerts: You can use Notifications to receive notifications triggered by your applications' alerts. For example, you can configure Oracle Cloud Infrastructure Alarms to send notifications to a Notifications topic. Then, you can subscribe to the topic by using either email or PagerDuty.
- Application integration: In this "fan-out" scenario, you can send your application events to a Notifications topic. Then, Notifications pushes the messages to all subscriptions. For example, in a freight-management application, any change in freight status can be broadcast via Notifications to multiple applications to initiate a process or notify a customer.

Getting Started

Oracle Cloud Infrastructure Notifications will be generally available in early 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Notifications or to request access, please register.
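As a hedged illustration of the fan-out scenario above, here is what publishing a message to a topic might look like with the oci Python SDK once the service is available. The topic OCID and message contents are placeholders.

```python
# Hedged sketch: publish a message to a Notifications topic. The topic
# OCID and payload are placeholders; the service is pre-GA.
import oci

config = oci.config.from_file()
ons = oci.ons.NotificationDataPlaneClient(config)

ons.publish_message(
    "ocid1.onstopic.oc1..example",        # placeholder topic OCID
    oci.ons.models.MessageDetails(
        title="Freight status changed",   # hypothetical event
        body="Shipment 1234 is now IN_TRANSIT",
    ),
)
```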


Developer Tools

Announcing Oracle Cloud Infrastructure Streaming

At CloudNativeCon North America 2018 this week in Seattle, we are excited to announce a new service, Oracle Cloud Infrastructure Streaming. The Streaming service provides fully managed, scalable, and durable storage for ingesting continuous, high-volume streams of data that you can consume and process in real time. Streaming can be used for messaging and for ingesting high-volume data such as application log data, operational telemetry data, and web click-stream data, or for other use cases in which data is produced and processed continually and sequentially in a publish-subscribe messaging model.

Use Cases

Here are some of the many possible use cases for Oracle Cloud Infrastructure Streaming:

- Messaging: Use Streaming as a backplane to decouple components of large systems. Key-scoped ordering, low latency, and guaranteed durability provide reliable primitives for implementing a variety of messaging patterns, while high throughput potential allows such a system to scale well.
- Web/mobile activity data ingestion: Use Streaming as your ingestion pipeline for usage data from websites or mobile apps (such as page views, searches, or other actions users may take). Streaming’s consumer model makes it easy to feed information to multiple real-time monitoring and analytics systems or to a data warehouse for offline processing and reporting.
- Metric and log ingestion: Use Streaming as an alternative to traditional log and metric aggregation approaches, to help make critical operational data available more quickly for indexing, analysis, and visualization.
- Infrastructure and application event processing: Use Streaming as a unified entry point for cloud components to report their lifecycle events for audit, accounting, and related activities.

Managed Service

Oracle Cloud Infrastructure Streaming manages everything needed to operate the service, from provisioning, deployment, maintenance, security patches, infrastructure, storage, networking, and replication to configuration of the hardware and software that enables you to stream data.

Security

Oracle Cloud Infrastructure Streaming is secure by default. Only the account and data stream owners have access to the stream resources that they create. Streaming supports user authentication to control access to data, allowing you to use Oracle Cloud Infrastructure Identity and Access Management (IAM) policies to selectively grant permissions to users and groups of users. You can securely put and get your data from Streaming through SSL endpoints, using the HTTPS protocol. Lastly, user data is encrypted both at rest and in transit.

Getting Started

Oracle Cloud Infrastructure Streaming will be generally available in 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Streaming or to request access, please register.
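To make the publish-subscribe model concrete, here is a hedged sketch of producing a message to a stream with the oci Python SDK. The endpoint, stream OCID, and message contents are illustrative placeholders, and the pre-GA API may differ.

```python
# Hedged sketch: put a message onto a stream. Keys and values are
# base64-encoded per the Streaming API; OCID and endpoint are placeholders.
import base64
import oci

def b64(text: str) -> str:
    """Base64-encode a UTF-8 string, as the Streaming API expects."""
    return base64.b64encode(text.encode()).decode()

config = oci.config.from_file()
stream = oci.streaming.StreamClient(
    config,
    service_endpoint="https://streaming.us-ashburn-1.oraclecloud.com",  # assumed endpoint
)

stream.put_messages(
    "ocid1.stream.oc1..example",  # placeholder stream OCID
    oci.streaming.models.PutMessagesDetails(
        messages=[
            oci.streaming.models.PutMessagesDetailsEntry(
                key=b64("order-1"),        # key-scoped ordering
                value=b64("order created"),
            )
        ]
    ),
)
```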


Developer Tools

Announcing Oracle Cloud Infrastructure Resource Manager

At CloudNativeCon North America 2018 this week in Seattle, we are excited to announce a new service, Oracle Cloud Infrastructure Resource Manager, that makes it easy to manage your infrastructure resources on Oracle Cloud Infrastructure. Resource Manager enables you to use infrastructure as code (IaC) to automate provisioning for infrastructure resources such as compute, networking, storage, and load balancing.

Using IaC is a DevOps practice that makes it possible to provision infrastructure quickly, reliably, and at any scale. Changes are made in code, not in the target systems. That code can be maintained in a source control system, so it’s easy to collaborate, track changes, and document and reverse deployments when required.

HashiCorp Terraform

To describe infrastructure, Resource Manager uses HashiCorp Terraform, an open source project that has become the dominant standard for describing cloud infrastructure. Oracle is making a strong commitment to Terraform and will enable all its cloud infrastructure services to be managed through Terraform. Earlier this year we released the Terraform provider, and we have started to submit Terraform modules for Oracle Cloud Infrastructure to the Terraform Module Registry. Now we are taking the next step by providing a managed service.

Managed Service

In addition to the provider and modules, Oracle now provides Resource Manager, a fully managed service for operating Terraform. Resource Manager integrates with Oracle Cloud Infrastructure Identity and Access Management (IAM), so you can define granular permissions for Terraform operations. It further provides state locking, gives users the ability to share state, and lets teams collaborate effectively on their Terraform deployments. Most of all, it makes operating Terraform easier and more reliable.

With Resource Manager, you create a stack before you run Terraform actions. Stacks enable you to segregate your Terraform configuration, where a single stack represents a set of Oracle Cloud Infrastructure resources that you want to create together. Each stack individually maps to a Terraform state file that you can download.

To create a stack, you define a compartment and upload the Terraform configuration as a zip file while creating the stack. The zip file contains all the .tf files that define the resources that you want to create. You can optionally include a variables.tf file or define your variables in key-value pairs in the Console. After your stack is created, you can run different Terraform actions, called jobs, on the stack; you can also update the stack by uploading a new zip file, download its configuration, and delete the stack when required.

- Plan: Resource Manager parses your configuration and returns an execution plan that lists the Oracle Cloud Infrastructure resources describing the end state.
- Apply: Resource Manager creates your stack based on the results of the plan job. After this action is completed, you can see the resources that have been created successfully in the defined compartments.
- Destroy: Terraform attempts to delete all the resources in the stack.

You can define permissions on your stacks and jobs through IAM policies. You can define granular permissions and let only certain users or groups perform actions like plan, apply, or destroy.
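As a hedged sketch of the stack-and-job workflow described above, here is what driving Resource Manager programmatically might look like with the oci Python SDK once the service is available. The file name, compartment OCID, and variables are illustrative assumptions.

```python
# Hedged sketch: create a stack from a zipped Terraform configuration and
# run a plan job. OCIDs, names, and variables are placeholders; the
# service is pre-GA and the API may differ.
import base64
import oci

config = oci.config.from_file()
rm = oci.resource_manager.ResourceManagerClient(config)

# The stack's Terraform configuration is uploaded as a base64-encoded zip.
with open("config.zip", "rb") as f:  # hypothetical local file
    zip_b64 = base64.b64encode(f.read()).decode()

stack = rm.create_stack(
    oci.resource_manager.models.CreateStackDetails(
        compartment_id="ocid1.compartment.oc1..example",  # placeholder
        display_name="demo-stack",
        config_source=oci.resource_manager.models.CreateZipUploadConfigSourceDetails(
            zip_file_base64_encoded=zip_b64,
        ),
        variables={"region": "us-ashburn-1"},  # optional key-value variables
    )
).data

# Jobs run Terraform actions (plan, apply, destroy) against the stack.
job = rm.create_job(
    oci.resource_manager.models.CreateJobDetails(
        stack_id=stack.id,
        operation="PLAN",
    )
).data
print("plan job:", job.id)
```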
The currently available early version offers access to the Compute, Networking, Block Storage, Object Storage, IAM, and Load Balancing services. To learn more about Resource Manager or to request access to the technology, please register.
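For teams that plan to drive this workflow from the command line, the sequence below is a minimal sketch of what stack creation and a plan job could look like. Because the service is still in limited availability, the command group, resource names, and flags shown here are guesses at the eventual CLI shape rather than confirmed syntax, and all OCIDs and file names are placeholders.

# Create a stack from a zip of .tf files (flags assumed; see note above)
oci resource-manager stack create \
    --compartment-id <COMPARTMENT_OCID> \
    --config-source ./network-stack.zip \
    --display-name "network-stack" \
    --variables '{"vcn_cidr": "10.0.0.0/16"}'

# Run a plan job against the stack, then review the output before applying
oci resource-manager job create \
    --stack-id <STACK_OCID> \
    --operation PLAN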


Developer Tools

The Journey to Enterprise Managed Kubernetes

It’s been just over a year since we announced the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) and Registry (OCIR) offerings, foundational pieces in Oracle’s cloud for developing, deploying, and running container native applications. With KubeCon upon us, it’s a good time to consider not only how Oracle’s cloud offerings have evolved, but also the macro challenges facing Kubernetes and managed Kubernetes offerings as they target deeper adoption by enterprise developer and DevOps teams.

An interesting starting point is to review the challenges that people face when adopting containers and Kubernetes, and how those challenges have evolved over the last year or so. The Cloud Native Computing Foundation (CNCF) survey measures these challenges, and the following graphic shows the results of four consecutive surveys:

Source: CNCF Survey

An interesting takeaway from these results is that the purely "technology-related" challenges, such as networking and storage, have dropped fairly dramatically. This drop could be attributed to continued standardization in the CNCF with efforts like the Container Networking Interface (CNI) and Container Storage Interface (CSI), and to the rise of managed Kubernetes offerings, where storage and networking are pre-integrated for users and should "just work." At the same time, it's interesting to see the prevalence of the "non-technical" challenges (acknowledging that some of these are new questions in the latest survey): things like "complexity," "cultural changes," and "lack of training." Let's take a look at these through the lens of enterprise customers and users.

Enterprise Needs and Culture

Oracle Cloud Infrastructure Container Engine for Kubernetes has been built with the needs of enterprise developers in mind. We have the luxury of building on a Gen 2 cloud, marrying the best in managed open source with leading-edge compute, network, and storage cloud technologies to provide a secure, compliant, and highly performant offering to our customers. With highly available clusters (and control plane) across multiple availability domains, node pool support for both bare metal and VMs, and native integration with Identity and Access Management and Kubernetes RBAC, Container Engine for Kubernetes is ready for the most demanding enterprise workloads. Even better, customers get access to these capabilities while paying only for the underlying IaaS resources (compute, storage, network) consumed. (As an aside, running Kubernetes clusters on bare metal makes a big difference for our customers; see this interesting comparison.)

A great example of cultural change here is the separation of concerns often present in enterprise IT teams, for example, between the network team and the development team. With Container Engine for Kubernetes, compartments can be leveraged to give the network team control over the shared network aspects (VCNs, subnets, and so on) while enabling developers to control their clusters (and cluster nodes).

Tackling Complexity

Complexity has been the genesis of managed Kubernetes offerings. The majority of customers we speak to want to leverage the technology to build their applications faster and further their business, but they don't want to continually maintain the Kubernetes infrastructure itself. In addition to making it easy for users to upgrade to new Kubernetes versions, Container Engine for Kubernetes installs common tooling into those clusters by default (Kubernetes dashboard, Tiller, and Helm). Users can leverage the "quick cluster" capability to get a new cluster in a couple of clicks with a sensible set of defaults (see the following screenshot). Container Engine for Kubernetes worker nodes also use the standard Docker runtime that's familiar to so many developers. Managed Registry users can rein in container image sprawl by setting global retention policies (or targeting specific repositories), for example, to remove images that haven't been pulled or tagged within a certain time frame.

Container Engine for Kubernetes "Quick Cluster"

As more developers and DevOps teams adopt infrastructure as code, they want to apply those capabilities to their Kubernetes clusters. In addition to full API and CLI support, Container Engine for Kubernetes also supports Terraform and Ansible providers, so DevOps teams can use the common open tooling that they're already familiar with to provision, use, and scale their clusters, even as part of their CI/CD pipelines. A short CLI sketch follows.
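To make the infrastructure-as-code point concrete, here is a minimal sketch of creating a cluster from the command line. It assumes the oci ce cluster create command with the flags shown; the cluster name, version string, and OCIDs are placeholders, so check oci ce cluster create --help for the exact syntax in your CLI version.

# Create a managed Kubernetes cluster in an existing VCN (placeholder values)
oci ce cluster create \
    --name dev-cluster \
    --compartment-id <COMPARTMENT_OCID> \
    --vcn-id <VCN_OCID> \
    --kubernetes-version v1.11.5 \
    --service-lb-subnet-ids '["<LB_SUBNET_OCID_1>", "<LB_SUBNET_OCID_2>"]'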
Openness Is Key

Finally, a major part of tackling "complexity" and "lack of training" is being able to leverage the great resources available in the open source community. To that end, it's critical that any managed Kubernetes offering conforms to true upstream open source, so that customers can be assured of portability and avoid lock-in. As a platinum member of the CNCF, Oracle participates in the Certified Kubernetes Conformance Program, and we ensure that every new minor release of Container Engine for Kubernetes is conformant. Beyond conformance, it's also important to understand the typical (open) tooling and solutions that customers are deploying. For example, key features like the mutating webhook admission controller must be enabled in the (managed) control plane to support advanced features like automatic sidecar injection for Istio deployments.

Solutions and best practices are also part of handling complexity. Many of these are available in the community, including from Oracle, such as monitoring (Prometheus), logging (EFK), Istio, Confluent, Couchbase, and Hazelcast. Others are provided by partners, including Twistlock and Bitnami.

As Kubernetes continues to grow and its charter continues to expand as the de facto operating system of the cloud, tackling complexity remains an ongoing challenge for the community. In this blog, we've looked at how Container Engine for Kubernetes can help to address it. This week we will be at KubeCon 2018. If you are there too, please stop by our booth P4 and the nearby Oracle lounge!


Oracle Cloud Infrastructure

Creating a Windows Active Directory Domain Controller in Oracle Cloud Infrastructure

There are several circumstances under which you might want to create a new Windows Active Directory (AD) environment. This post talks about using Oracle Cloud Infrastructure to build a new AD domain controller. By using Microsoft PowerShell and the Oracle Cloud Infrastructure cloudbase-init scripts, you can automate the process and eliminate the headache of building Windows AD. You can place your script in the User data section of the Advanced options when you create the host in the Oracle Cloud Infrastructure Console. This post talks only about the automation of the AD domain controller, and not about your virtual cloud network (VCN) and network environment. You can learn more in the Creating Windows Active Directory Domain Servers in Oracle Cloud Infrastructure white paper. The following diagram shows the basic architecture of how you would build your VCN and subnets:

Automating the Deployment of the AD Domain Controller

When you are planning your domain, your first task is to determine how you want to structure your forest. Building an AD forest can get complicated if you have numerous subdomains and trust dependencies, so we keep it simple here by using a single forest with a single tree. For more information about forests, see the Microsoft documentation.

Scripting the Host Deployment

Let's jump into some PowerShell code. First, ensure that you have the correct header for the cloudbase-init script. You need to run in the ps1_sysnative mode for cloudbase-init and PowerShell to interpret the correct mode of execution.

#ps1_sysnative

Then, set the local administrator password and activate the administrator account (which is deactivated by default in Oracle Cloud Infrastructure Windows images). The account will be cloned, and the clone will be turned into the domain administrator, so ensure that the password is secure and contains special characters, numbers, and a mix of uppercase and lowercase letters. This password is temporary, and you'll change it later in the process.

$password="P@ssw0rd123!!"

# Set the Administrator Password and activate the Domain Admin Account
net user Administrator $password /logonpasswordchg:no /active:yes

After you activate the administrator account, install the prerequisites for Active Directory Domain Services. The first of these Windows features and roles is the .NET Framework feature (NET-Framework-Core), which provides the backward-compatible code line. Next are AD Domain Services and the Remote Server Administration Tools for AD DS (RSAT-ADDS), which are the core management features for AD. Finally, install the DNS Server tools, which ease most of the communication issues within the AD domain.

Install-WindowsFeature NET-Framework-Core
Install-WindowsFeature AD-Domain-Services
Install-WindowsFeature RSAT-ADDS
Install-WindowsFeature RSAT-DNS-Server

After you install the prerequisites, you reboot the new host. However, before you reboot, create a RUNONCE script that will finish building the AD forest. For this task, use the ADDSDeployment module and create a text file that will become the RUNONCE script. This script, which runs on the next login by the local administrator after the reboot, imports the ADDSDeployment PowerShell module and then runs the Install-ADDSForest command, which names the forest and promotes the host to a domain controller. After these actions are done, the host is automatically rebooted.

# Create text block for the new script that will run once on reboot
$addsmodule02 = @"
#ps1_sysnative
Try {
    Start-Transcript -Path C:\DomainJoin\stage2.txt
    `$password = "P@ssw0rd123!!"
    `$FullDomainName = "cesa.corp"
    `$ShortDomainName = "CESA"
    `$encrypted = ConvertTo-SecureString `$password -AsPlainText -Force
    Import-Module ADDSDeployment
    Install-ADDSForest ``
        -CreateDnsDelegation:`$false ``
        -DatabasePath "C:\Windows\NTDS" ``
        -DomainMode "WinThreshold" ``
        -DomainName `$FullDomainName ``
        -DomainNetbiosName `$ShortDomainName ``
        -ForestMode "WinThreshold" ``
        -InstallDns:`$true ``
        -LogPath "C:\Windows\NTDS" ``
        -NoRebootOnCompletion:`$false ``
        -SysvolPath "C:\Windows\SYSVOL" ``
        -SafeModeAdministratorPassword `$encrypted ``
        -Force:`$true
}
Catch {
    Write-Host `$_
}
Finally {
    Stop-Transcript
}
"@
Add-Content -Path "C:\DomainJoin\ADDCmodule2.ps1" -Value $addsmodule02

Then, add the RUNONCE key for the next time that the administrator logs in to the host.

# Adding the RunOnce job
$RunOnceKey = "HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce"
Set-ItemProperty $RunOnceKey "NextRun" ('C:\Windows\System32\WindowsPowerShell\v1.0\Powershell.exe -executionPolicy Unrestricted -File ' + "C:\DomainJoin\ADDCmodule2.ps1")

Rebooting

After all the prerequisites have finished installing on the host, you are ready to reboot it. Rebooting the host makes the installation cleaner and reduces the number of errors that can happen when you are installing the new AD forest.

# Last step is to reboot the local host
Restart-Computer -ComputerName "localhost" -Force

After the reboot, the forest is installed by the RUNONCE script, as shown in the following screenshot. After the forest is installed, the host automatically reboots again.

Final Checks

Check your forest to ensure that everything is correct. Use the Get-ADForest command to get all the basic information that you need to confirm that your domain has been installed correctly. On your next login, change the administrator password with the Set-ADAccountPassword command to ensure the security of your domain.

Final Script

Here is the entire script. In addition to all the previously discussed parts, it sets up logging so that you can track and troubleshoot the operations.

#ps1_sysnative
Try {
    # Start the logging in the C:\DomainJoin directory
    Start-Transcript -Path "C:\DomainJoin\stage1.txt"

    # Global Variables
    $password="P@ssw0rd123!!"

    # Set the Administrator Password and activate the Domain Admin Account
    net user Administrator $password /logonpasswordchg:no /active:yes

    # Install the Windows features necessary for Active Directory:
    # - .NET Framework (NET-Framework-Core)
    # - Active Directory Domain Services
    # - Remote Server Administration Tools for AD DS
    # - DNS Server tools
    Install-WindowsFeature NET-Framework-Core
    Install-WindowsFeature AD-Domain-Services
    Install-WindowsFeature RSAT-ADDS
    Install-WindowsFeature RSAT-DNS-Server

    # Create text block for the new script that will be run once on reboot
    $addsmodule02 = @"
#ps1_sysnative
Try {
    Start-Transcript -Path C:\DomainJoin\stage2.txt
    `$password = "P@ssw0rd123!!"
    `$FullDomainName = "cesa.corp"
    `$ShortDomainName = "CESA"
    `$encrypted = ConvertTo-SecureString `$password -AsPlainText -Force
    Import-Module ADDSDeployment
    Install-ADDSForest ``
        -CreateDnsDelegation:`$false ``
        -DatabasePath "C:\Windows\NTDS" ``
        -DomainMode "WinThreshold" ``
        -DomainName `$FullDomainName ``
        -DomainNetbiosName `$ShortDomainName ``
        -ForestMode "WinThreshold" ``
        -InstallDns:`$true ``
        -LogPath "C:\Windows\NTDS" ``
        -NoRebootOnCompletion:`$false ``
        -SysvolPath "C:\Windows\SYSVOL" ``
        -SafeModeAdministratorPassword `$encrypted ``
        -Force:`$true
}
Catch {
    Write-Host `$_
}
Finally {
    Stop-Transcript
}
"@
    Add-Content -Path "C:\DomainJoin\ADDCmodule2.ps1" -Value $addsmodule02

    # Adding the RunOnce job
    $RunOnceKey = "HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce"
    Set-ItemProperty $RunOnceKey "NextRun" ('C:\Windows\System32\WindowsPowerShell\v1.0\Powershell.exe -executionPolicy Unrestricted -File ' + "C:\DomainJoin\ADDCmodule2.ps1")
}
Catch {
    Write-Host $_
}
Finally {
    # End the logging
    Stop-Transcript
}

# Last step is to reboot the local host
Restart-Computer -ComputerName "localhost" -Force

Summary

This is a simple script for installing your first AD domain controller. Two reboots are required to complete the process, and it takes about 20-25 minutes for all of the Windows features to install. This is just the first step in building a larger domain. The Creating Windows Active Directory Domain Servers in Oracle Cloud Infrastructure white paper walks you through additional steps to build a resilient AD environment on Oracle Cloud Infrastructure. Be sure to download the white paper, and check out how you can get your free Oracle Cloud Infrastructure trial account.
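As a final tip, the same user-data script can be supplied at launch time from the CLI instead of the Console's Advanced options, mirroring the workflow shown in the next post. This is a minimal sketch: the OCIDs are placeholders, the file name ad_dc.ps1 is hypothetical, and the operative flag is --user-data-file.

# Launch a Windows instance and pass the AD build script as user data
oci compute instance launch \
    --availability-domain <AVAILABILITY_DOMAIN> \
    --compartment-id <COMPARTMENT_OCID> \
    --subnet-id <SUBNET_OCID> \
    --shape VM.Standard2.2 \
    --image-id <WINDOWS_IMAGE_OCID> \
    --display-name ad-dc-01 \
    --user-data-file ad_dc.ps1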


Enabling NFS-Client on Windows at Instance Launch Time

One benefit of cloud offerings comes from the ability to spin up new resources on demand, with minimal effort. Automation tools play a pivotal role in harnessing the power of cloud infrastructure. As announced in July 2018, Oracle Cloud Infrastructure Windows instances can also be configured using cloudbase-init through user data provided at launch time. Check out the Windows Custom Startup Scripts and Cloud-Init on Oracle Cloud Infrastructure post by Andy Corran, which also covers Windows Remote Management (WinRM). Here is another example of how you can use a PowerShell script to configure a new instance at launch time.

In January 2018, Oracle Cloud Infrastructure announced the launch of the File Storage service. You can use File Storage to share unstructured files between Windows and Linux-based hosts. The Oracle Cloud Infrastructure File Storage service is an NFSv3 file storage service that can scale to exabytes in size. I use File Storage as the provider of shared files accessed via NFS in the following example. If you would like to know more about our File Storage service, check out the Introducing Oracle Cloud Infrastructure File Storage Service blog post by my colleague Ed Beauvais or the official documentation.

The commands required to enable the Windows NFS-Client come from the official documentation. Because Windows registry keys need to be created, including the required PowerShell commands as user data is a great option. In this example, I use the Oracle Cloud Infrastructure CLI to create my Windows Server 2016 Standard Edition instance and include PowerShell commands from my local machine. Read about setting up the CLI in the official documentation.

Prepare the input files

Create the PowerShell script. Be sure to include the #ps1_sysnative header so that cloudbase-init interprets the commands correctly. In my example, I named the file enable_nfs.ps1.

#ps1_sysnative

## Timestamp function for logging.
function Get-TimeStamp {
    return "[{0:MM/dd/yy} {0:HH:mm:ss}]" -f (Get-Date)
}

## Create a log file.
$path = $env:SystemRoot + "\Temp\"
$logFile = $path + "CloudInit_$(get-date -f yyyy-MM-dd).log"
New-Item $logFile -ItemType file
Write-Output "$(Get-TimeStamp) Logfile created..." | Out-File -FilePath $logFile -Append

## Install NFS-Client.
Install-WindowsFeature -Name NFS-Client
Write-Output "$(Get-TimeStamp) Installed NFS-Client." | Out-File -FilePath $logFile -Append

## Configure NFS user mapping registry values.
New-ItemProperty -Path "HKLM:\Software\Microsoft\ClientForNFS\CurrentVersion\Default" -Name "AnonymousGid" -Value 0 -PropertyType DWord
New-ItemProperty -Path "HKLM:\Software\Microsoft\ClientForNFS\CurrentVersion\Default" -Name "AnonymousUid" -Value 0 -PropertyType DWord
Write-Output "$(Get-TimeStamp) Created registry keys for NFS root user mapping." | Out-File -FilePath $logFile -Append

## Restart NFS-Client.
nfsadmin client stop
nfsadmin client start
Write-Output "$(Get-TimeStamp) Restarted NFS Client." | Out-File -FilePath $logFile -Append

Create the instance JSON file. In my example, I named the file c01-win2016std.json. My JSON file contains only the required values. The values for ad, compartmentId, and subnetId are unique to an individual tenancy.

{
  "ad": "<AVAILABILITY_DOMAIN>",
  "compartmentId": "<COMPARTMENT_OCID>",
  "subnetId": "<SUBNET_OCID>",
  "bootVolumeSizeInGbs": 256,
  "displayName": "c01-win01",
  "hostnameLabel": "c01-win01",
  "imageId": "ocid1.image.oc1.phx.aaaaaaaaq3o6o4lwhrna3dlomvo6rgkyqzzcvtkuw7j3u4pf42ucpfmyzfia",
  "shape": "VM.Standard2.1",
  "skipSourceDestCheck": true
}

Launch the instance

Use the Oracle Cloud Infrastructure CLI to launch the instance, using the two files created previously as input. Note the OCID of the new instance in the returned JSON object; the next two steps require it.

$ oci compute instance launch --from-json file://c01-win2016std.json --user-data-file enable_nfs.ps1

Use the CLI to find the IP address assigned to the primary VNIC.

$ oci compute instance list-vnics --query "data [0].{IP:\"private-ip\"}" --instance-id <OCID_FROM_PREVIOUS_JSON_RESPONSE>

Use the CLI to find the initial password for the opc user.

$ oci compute instance get-windows-initial-creds --query "data.{Password:password}" --instance-id <OCID_FROM_PREVIOUS_JSON_RESPONSE>

Verify the Windows NFS-Client

In my tenancy, I tunnel Windows RDP sessions through SSH to a Linux bastion host. The white paper on Protected Access for Virtual Cloud Networks describes this process in detail.

$ ssh -L 33389:<IP_ADDRESS_OF_NEW_INSTANCE>:3389 opc@<IP_ADDRESS_OF_BASTION>

After logging in to the new Windows instance and changing the initial opc user password, mount the NFS share as you would map any network drive in Windows, as sketched below. Success!
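For completeness, here is what that mount step could look like from a command prompt on the new instance. The mount target IP address and export path are placeholders for values from your own File Storage setup, and the -o anon option relies on the anonymous-user mapping configured by the registry keys above.

REM Mount the File Storage export as drive X: (placeholder IP and export path)
mount -o anon \\10.0.0.10\fs-export X: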
{   "ad": "<AVAILABILITY_DOMAIN>",   "compartmentId": "<COMPARTMENT_OCID>",   "subnetId": "<SUBNET_OCID>",   "bootVolumeSizeInGbs": 256,   "displayName": "c01-win01",   "hostnameLabel": "c01-win01",   "imageId": "ocid1.image.oc1.phx.aaaaaaaaq3o6o4lwhrna3dlomvo6rgkyqzzcvtkuw7j3u4pf42ucpfmyzfia",   "shape": "VM.Standard2.1",   "skipSourceDestCheck": true } Launch the instance Use Oracle Cloud Infrastructure CLI to launch the instance using the two files created previously as input. Note the OCID of the new instance in the return JSON object. The next two steps require the OCID of the new instance. $ oci compute instance launch --from-json file://c01-win2016std.json --user-data-file enable_nfs.ps1 Use the CLI to find the IP address assigned to the primary VNIC. $ oci compute instance list-vnics --query "data [0].{IP:\"private-ip\"} --instance-id <OCID_FROM_PREVIOUS_JSON_RESPONSE> Use the CLI to find the initial password for the opc user. $ oci compute instance get-windows-initial-creds --query "data.{Password:password}" --instance-id <OCID_FROM_PREVIOUS_JSON_RESPONSE> Verify the Windows NFS-Client In my tenancy, I tunnel Windows RDP sessions through SSH to a Linux bastion hosts. The white paper on Protected Access for Virtual Cloud Networks describes this process in detail.  $ ssh -L 33389<IP_ADDRESS_OF_NEW_INSTANCE>:3389 opc@<IP_ADDRESS_OF_BASTION> After logging in to the new Windows instance and changing the initial opc user password, mount the NFS share as you would map any network drive in Windows. Success! This blog post gives you another example of how cloudbase-init userdata can be used to configure a Windows host at launch time. If you do not have an Oracle Cloud Infrastructure account, you can sign up for a free trial and evaluate the File Storage service for yourself. The Oracle Cloud Infrastructure Solution Architect team is working on a few other Windows-related publications. Keep an eye out for more blog posts and white papers from the team.


Strategy

Five Big Reasons to Run Oracle Database on Oracle Cloud Infrastructure

Oracle built its reputation on the power of the world's first commercially available relational database technology. Today, Oracle Database technologies are the number-one choice for enterprise database workloads, and Oracle Cloud Infrastructure is the best public cloud for running Oracle Database offerings. That's partly because the underlying architecture of Oracle Cloud Infrastructure was designed with the optimal performance of Oracle Database in mind. But it also has to do with the powerful capabilities built directly into our database technology. Here are five important reasons why Oracle Cloud Infrastructure is the right choice for companies that want to run Oracle Database technologies in the cloud.

Superior Database and Application Performance

The networking architecture of Oracle Cloud Infrastructure is designed to support optimal performance of Oracle Database and the applications that depend on it. We accomplish this by using direct, point-to-point connections between compute and database instances running within Oracle Cloud Infrastructure. These point-to-point connections translate to low latency and superior application performance. Users of Oracle Database on Oracle Cloud Infrastructure also benefit from predictable performance: they get dedicated servers with a defined shape and networking architecture, so they always know what they're getting in terms of performance. This approach also ensures that "noisy neighbors" don't negatively impact performance.

Tight Security, Private Networking

Oracle Database technologies on Oracle Cloud Infrastructure provide unmatched levels of privacy and security because database systems are deployed into a virtual cloud network (VCN) by default. A VCN is a customizable and completely private network that gives users full control over the networking environment. Users can assign their own private IP address space, create subnets, create route tables, and configure stateful firewalls. Users can also configure inbound and outbound security lists to protect DB systems against malicious access. A single tenant can have multiple VCNs, and databases can be deployed into private subnets with no public IP address, said Sebastian Solbach, an Oracle Database expert and technical consultant with Oracle Cloud Infrastructure. "When you spin up a database instance in most clouds today, it will take a maximum of five minutes before the first hacker tries to access your system if it has a public IP address," Solbach said. "Oracle Cloud Infrastructure allows databases to run in a totally private subnet without any public networking access to these database instances." The sketch that follows shows what carving out such a private subnet looks like.
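This is a minimal sketch to illustrate the private-subnet point; the OCIDs, names, and CIDR blocks are placeholders. The --prohibit-public-ip-on-vnic flag is what prevents any instance in the subnet from ever receiving a public IP address.

# Create a VCN and a private subnet for DB systems (placeholder values)
oci network vcn create \
    --compartment-id <COMPARTMENT_OCID> \
    --cidr-block 10.0.0.0/16 \
    --display-name db-vcn

oci network subnet create \
    --compartment-id <COMPARTMENT_OCID> \
    --vcn-id <VCN_OCID> \
    --availability-domain <AVAILABILITY_DOMAIN> \
    --cidr-block 10.0.1.0/24 \
    --display-name db-private-subnet \
    --prohibit-public-ip-on-vnic true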
Real Application Clusters

Oracle is the only cloud provider that offers the power of Oracle Real Application Clusters (RAC). Oracle RAC gives users the highest level of database availability by removing individual database servers as a single point of failure. In a clustered environment, the database itself is shared across a pool of servers, which means that if any server in the pool fails, the database continues to run on the surviving servers. In Oracle Cloud Infrastructure, users run Oracle RAC on virtual machines (VMs) within a VCN. "Oracle RAC requires two things, and the first thing is shared storage. Oracle is the only cloud provider that officially offers shared block storage between instances in the cloud," Solbach said. "Oracle RAC also has networking requirements that can only be met by Oracle Cloud Infrastructure today."

Exadata

Oracle Database Exadata Cloud Service combines the best database with the best cloud platform. Exadata has proven itself in thousands of mission-critical deployments at leading banks, airlines, telecommunications companies, stock exchanges, government agencies, and ecommerce sites. Oracle Database Exadata Cloud Machines are optimized for performance and run in the same VCN as your VMs and bare metal server instances. Exadata supports Oracle RAC for highly available databases. Users can also configure Exadata in multiple availability domains with Oracle Active Data Guard for even higher availability. Oracle Active Data Guard eliminates single points of failure by maintaining synchronized physical replicas of production databases at a remote location. "Oracle Exadata is our most powerful database server offering," Solbach said. "Oracle Exadata is the place to run data warehouse workloads. It's also a very robust platform for running (Online Transaction Processing) workloads."

One Point of Contact

Fewer phone calls. Less hassle. Top-notch support. Users who run Oracle Database technologies in Oracle Cloud Infrastructure save time and increase efficiency by having a single point of contact for support when questions or problems arise. "If you're running in another cloud provider and you experience an issue with Oracle Database, you have to contact Oracle. If you have a problem with the underlying infrastructure, you have to contact that cloud provider," Solbach said. "With Oracle Database on Oracle Cloud Infrastructure, you get a single point of contact who understands Oracle technologies better than anyone else."

Learn more about Oracle Database cloud services today.


Oracle Cloud Infrastructure

Customer Managed VM Maintenance with Reboot Migration

One of the key benefits of moving workloads to the cloud is the ability to rely on cloud providers to maintain the underlying infrastructure, which lets you focus more resources on your specific business solutions. Occasionally, however, a maintenance event might affect the availability of your cloud resources, so it's important that you are not only informed of scheduled downtime but also able to prepare for it according to your own business needs. With the Oracle Cloud Infrastructure Compute service, you can control the downtime associated with those rare hardware maintenance events by using Customer Managed VM Maintenance. When a maintenance event is planned for an underlying system that supports one of your VMs, you can migrate that VM to other cloud infrastructure by rebooting the instance any time before the scheduled maintenance event.

How It Works

Approximately two weeks before an actual maintenance event is scheduled, you receive an email notification from Oracle Cloud Infrastructure. It includes the date and duration of the downtime, a list of instances that are affected by the event, and clear instructions on how to move the instances to different infrastructure at any time before the maintenance date. It is a good idea to set up the Oracle Cloud Infrastructure account with an email alias that ensures this notification reaches the right inboxes.

The same information is also available in the Oracle Cloud Infrastructure Console. Any VM instance to be affected by the scheduled downtime is marked with the Maintenance Reboot flag. If you choose to ignore this information, the instances go through the planned maintenance process, including a reboot. However, you can improve the availability of your services by rebooting the instance at a more convenient time before the scheduled maintenance. You can perform the reboot through the Console, the API, or the CLI. When the instance restarts, it's running on other infrastructure in the cloud, and the maintenance flag is cleared.

Checking for Scheduled Maintenance

As part of your operational policies, you might want to regularly check for all instances that require a maintenance reboot. In the Console, you can use the following predefined query in the Advanced Search: "Query for all instances which have an upcoming scheduled maintenance reboot". The Console displays resources from a single region at a time, so you must run the query in each region separately. In the API or CLI, you can filter flagged instances by using the timeMaintenanceRebootDue property, as sketched below. You can use a script to list all such instances across all enabled regions of a tenancy, and schedule it to run daily to ensure that you have enough time to act on any flagged instances, even in emergency situations.
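As a rough illustration of that CLI filter: the instance-list response exposes the reboot deadline as time-maintenance-reboot-due, so a JMESPath query can keep only the flagged instances. This is a minimal sketch for a single region and compartment (placeholder OCID); looping over regions and compartments is left to your own script.

# List instances with an upcoming maintenance reboot in one region
oci compute instance list \
    --compartment-id <COMPARTMENT_OCID> \
    --query 'data[?"time-maintenance-reboot-due"].{name:"display-name", due:"time-maintenance-reboot-due"}' \
    --output table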
Considerations

This feature is currently limited to VM Standard shapes running a Linux OS, from either Oracle images or custom images. Any instances that have non-iSCSI block volume attachments or secondary VNICs require the block volumes and secondary VNICs to be detached before the instance is rebooted. After the reboot, these can be reattached to resume normal operations. In a future phase, the feature will support all VM shapes, as well as instances that have non-iSCSI block volumes and secondary VNICs attached.

Conclusion

The Customer Managed VM Maintenance feature gives you control over the downtime of VM instances running on infrastructure that requires maintenance. Once these instances are identified (either by email notification or by actively running a script), they can be migrated to new infrastructure by performing a reboot at a time that is convenient for you and your applications.


Using Availability Domains and Fault Domains to Improve Application Resiliency

The unfortunate truth about technology is that hardware requires maintenance and hardware failures do occur. Cloud resources are affected by the same hardware-related maintenance as traditional on-premises resources. In August 2018, Oracle Cloud Infrastructure introduced fault domains for virtual machine and bare metal Compute instances. Fault domains help ensure that applications can survive hardware failures and planned hardware maintenance within an availability domain.

An Oracle Cloud Infrastructure availability domain is one or more data centers located within a region. Availability domains are isolated from each other, fault tolerant, and very unlikely to fail simultaneously. Because availability domains do not share infrastructure such as power, cooling, or the internal availability domain network, a failure at one availability domain is unlikely to affect the availability of the others. All the availability domains in a region are connected to each other by a low-latency, high-bandwidth network.

An Oracle Cloud Infrastructure fault domain is a grouping of hardware and infrastructure within an availability domain. Each availability domain contains three fault domains. Fault domains let you distribute your instances so that they are not on the same physical hardware within a single availability domain. A hardware failure or Compute hardware maintenance that affects one fault domain does not affect instances in other fault domains.

Sanjay Pillai posted an excellent overview of the theory behind Oracle Cloud Infrastructure fault domains and how you place Compute instances in them. When you are deploying an application to cloud infrastructure, the unique architecture and affinity or anti-affinity requirements of the application determine how instances are distributed across fault domains. The Best Practices for Your Compute Instances topic in the Oracle Cloud Infrastructure documentation explains fault domains with a couple of application scenarios. Deploying cloud services to multiple availability domains is the preferred method to ensure high availability. When your application architecture requires components to be in the same availability domain, choosing the proper fault domain for application components can help protect against resource failures. The sections below analyze the published architectures for JD Edwards EnterpriseOne (JDE) and Oracle E-Business Suite (EBS) and how availability domains and fault domains support each application. Links to the Oracle solution documentation for EBS, JDE, and Siebel CRM are at the end of this post. The short sketch that follows shows how fault domain placement is specified when an instance is launched.
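Here is a minimal sketch of pinning two application hosts to different fault domains from the CLI, assuming the launch command accepts a --fault-domain parameter as shown; the OCIDs, shape, and names are placeholders.

# Launch two application-tier hosts into different fault domains
oci compute instance launch \
    --availability-domain <AVAILABILITY_DOMAIN> \
    --compartment-id <COMPARTMENT_OCID> \
    --subnet-id <SUBNET_OCID> \
    --shape VM.Standard2.2 \
    --image-id <IMAGE_OCID> \
    --display-name app-host-1 \
    --fault-domain FAULT-DOMAIN-1

oci compute instance launch \
    --availability-domain <AVAILABILITY_DOMAIN> \
    --compartment-id <COMPARTMENT_OCID> \
    --subnet-id <SUBNET_OCID> \
    --shape VM.Standard2.2 \
    --image-id <IMAGE_OCID> \
    --display-name app-host-2 \
    --fault-domain FAULT-DOMAIN-2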
Multiple Availability Domains

When you use multiple availability domains for an application stack, placing the instances within each availability domain in a single fault domain gives them the proper affinity to each other. In the following JD Edwards EnterpriseOne example, geographically separated availability domains provide primary application redundancy, so fault domains are assigned differently than in the Oracle E-Business Suite example that follows. Incoming connections to JD Edwards EnterpriseOne are routed to the load balancers in both availability domains via DNS that is external to Oracle Cloud Infrastructure. The load balancers route traffic to the application instances based on the configured distribution policy. All hosts in the presentation tiers and middle tiers belong to FAULT-DOMAIN-1. Because fault domains are specific to an availability domain, FAULT-DOMAIN-1 in Availability Domain 1 is a different set of hardware than FAULT-DOMAIN-1 in Availability Domain 2, despite having the same name. Because all of the hosts in each availability domain are in the same fault domain, hardware failure or planned maintenance in a geographic region only minimally affects each availability domain. Hardware events that affect hosts in Availability Domain 2 do not affect hosts in Availability Domain 1. Placing all the hosts in the same fault domain ensures that required infrastructure maintenance activities minimally affect the application stack.

Diagram from Learn About Deploying JD Edwards EnterpriseOne on Oracle Cloud Infrastructure

One Availability Domain

When you deploy an application stack in a single availability domain, distributing instances across multiple fault domains gives them the proper anti-affinity to each other. In the following Oracle E-Business Suite example, fault domains ensure that end users have access during hardware failures or planned infrastructure maintenance. Incoming connections to Oracle E-Business Suite are routed to the servers in the application pool via the load balancer. In the application tier, Host 1 is in FAULT-DOMAIN-1 and Host 2 is in FAULT-DOMAIN-2. If a hardware failure affects Host 1, Oracle E-Business Suite remains accessible through Host 2. If Host 1 and Host 2 were in the same fault domain, Oracle E-Business Suite would likely be inaccessible to end users. In the case of Oracle Cloud Infrastructure hardware maintenance, fault domain maintenance windows do not overlap.

Diagram from Learn About Deploying Oracle E-Business Suite on Oracle Cloud Infrastructure

If you want to learn more about using fault domains in your application, the following solution documentation links have additional scenarios and details. Carefully selecting the proper fault domain for Compute instances in Oracle Cloud Infrastructure ensures that hardware failures and scheduled maintenance activities do not unexpectedly affect your end users. If you want another perspective, search the web for an article written by my colleague, Luke Feldman, about why fault domains are so crucial in Oracle Cloud Infrastructure. Finally, if you haven't already signed up for a free trial of Oracle Cloud Infrastructure, you can do it now. Let us provide the cloud so that you can build the future.

Solutions Design Resources

Learn About Deploying Oracle E-Business Suite on Oracle Cloud Infrastructure
Learn About Deploying Siebel CRM on Oracle Cloud Infrastructure
Learn About Deploying JD Edwards EnterpriseOne on Oracle Cloud Infrastructure


Strategy

Rethinking How to Build a Cloud

Oracle Cloud Infrastructure has faced many challenges with its late entry into infrastructure as a service (IaaS), but that late entry has also come with a significant benefit: we've been able to hire the best and brightest people from the market leaders. Most of the people building Oracle's cloud have worked on at least one other cloud. Their experiences have guided us as we've purpose-built the industry's first truly enterprise-grade cloud. Let's take a look at our approach and how it has evolved.

In the Beginning

Our initial goal was to create a high-scale cloud that would be a better fit for enterprise workloads such as Oracle applications and databases. Many Oracle customers wanted to move these mission-critical workloads to the cloud, but they found that it wasn't easy. And in some cases, it was downright impossible. To address their needs, the Oracle Cloud had to be a virtual infrastructure that looked like their on-premises environments, but it also needed the scalability of a public cloud. That combination would make it possible to more easily migrate not just applications but entire on-premises systems, including virtualization, storage, and management and security software, to the cloud. The ability to move and improve these systems is one of our most important differentiators in the market.

A lot of hard work and ingenuity went into making it happen. Oracle Cloud Infrastructure engineers spent the first year or so designing and building the foundation, which included data centers, automation, storage, and networking, plus the tools to build on top of that foundation. They took a fresh perspective on what they had learned from working at companies such as Amazon, Google, Microsoft, and Facebook, and they rethought how to build a cloud. The three guiding principles of this approach, which we still follow today, are:

We offer the same, and oftentimes better, performance than on-premises systems and other clouds, backed by performance service-level agreements.
We offer predictable pricing with a focus on long-term total cost of ownership.
We make migrations from on-premises to our cloud more seamless and secure.

The next step was to build out the core infrastructure pieces and keep iterating. The first Oracle Cloud Infrastructure offering was bare metal compute, then came virtual machine instances, followed by integrated database services, and we're still adding new services today.

To Infrastructure and Beyond!

With everything we do, we're trying to answer this critical question: how do we support and improve our customers' mission-critical systems? A current area of focus is security and compliance. Oracle Cloud Infrastructure holds a PCI DSS Attestation of Compliance for more than a dozen services, and an attestation for HIPAA's requirements around security, breach notification, and privacy. We also recently announced a web application firewall, DDoS protection, cloud access security broker support, and a key management service for increased cloud security.

Additionally, Oracle is making advancements in next-generation cloud infrastructure. We've announced a streaming service to receive, process, and archive all infrastructure and platform events in near real time. And we're the first public cloud with generally available compute instances powered by AMD EPYC processors. These enable us to offer 64 cores per server, more than any other cloud in production, which is a great fit for database, big data analytics, and high-performance computing (HPC) workloads.


Strategy

Rapid Global Expansion for Oracle Cloud Infrastructure

At OpenWorld this year, Oracle described a bold plan to extend the global coverage of our next-generation, enterprise-centric cloud infrastructure platform. This cloud is designed to meet the needs of our core customers, with consistently high performance, optimization around Oracle Database and applications, and broad support for the demanding, data-centric workloads our customers run.

Cloud has become the dominant technology approach worldwide, allowing organizations to stop wasting effort on data center management, refreshes, and system upgrades. Most enterprises have a strong imperative to get their workloads onto the cloud. But the first generation of cloud platforms was built to meet the needs of developers, with variable performance and a heavy reliance on oversubscription and shared tenancies. These platforms offer a low cost to get started but high and unpredictable costs in the long run. They focus on proprietary services that require painful transitions to run existing applications in the cloud, and they allow little ability to move elsewhere.

Oracle is trusted around the world for solving organizations' biggest technology problems, especially around data. When we reimagined a cloud to meet the needs of this category of workloads, we got it up and running quickly with coverage in the US and Europe. But we knew that we needed to extend this footprint globally to meet the latency and data residency requirements of multinational organizations, as well as those based outside of our initial locations. To this end, we developed an aggressive plan to increase our footprint of next-generation cloud data regions to extend coverage to the majority of our customer base by the end of 2019. We plan to add a region in Toronto, Canada at the beginning of the year, and we'll open more new regions over the course of the year in Australia, Europe, Japan, South Korea, India, Brazil, and the United States, including Virginia, Arizona, and Illinois to support public sector and Department of Defense customers.

We're excited to bring our unique next-generation enterprise cloud to the world. Customers with demanding workloads will benefit from the consistently high performance, low and predictable pricing, and the compatibility and portability that we bring to the table. Come talk to us about how we can help your enterprise IT environment get better results, with less time wasted on remedial infrastructure management, by running key workloads on Oracle Cloud Infrastructure.


Events

Top Five Reasons to Attend Oracle Cloud Day

I work with senior technology leaders who continuously tell me that it's hard to keep up with, and secure, all of the new technologies that are coming online: AI, bots, IoT, blockchain, and so on. Oracle is hosting Oracle Cloud Day to help you stay up to date on these technologies and learn how to best use them. At Oracle Cloud Day events, which take place in several cities across North America, you'll hear from companies that have deployed successful use cases with these new technologies. Here are five things that you can do at Oracle Cloud Day that make me really excited about this year's events:

1. Find out how to solve your current IT challenges: Learn best practices for running mission-critical apps in the cloud, including tips for meeting high-performance and reliability requirements.
2. Learn from companies that have already had success: Hear Oracle customers such as CMiC, OUTFRONT Media, Alliance Data Systems, and QMP share best practices and tell their cloud success stories.
3. Attend sessions and hands-on labs with tracks geared specifically for IT experts, architects, integrators, and data professionals: Sessions are tailored with your role in mind, so you can focus on what really matters to you.
4. Try the latest technologies for yourself: Technology experts from Oracle and our partners will show you how to use emerging technologies such as AI, machine learning, and blockchain to build smart applications, machines, and systems.
5. Talk to people who share your interests, challenges, and expertise: The Innovation Lounge is the very heart of Oracle Cloud Day. It's designed to give you an opportunity to connect with your peers, see expert demonstrations, visit with our partners, and recharge with terrific food and drink.

How could Oracle Cloud Day best help you? Share your thoughts in the comments below.


SAP Highlights at Oracle OpenWorld 2018

It was a very exciting time at Oracle OpenWorld this year! There were lots of great keynotes about further advancements in cloud technologies and database, and packed sessions with a ton of new announcements about Oracle Cloud services in general. Now is a great time to recap some of the major SAP highlights that we presented at Oracle's main conference in October.

Oracle provides robust, scalable, and reliable infrastructure for SAP applications running in demanding environments around the world. SAP and Oracle have worked closely to optimize Oracle technologies with SAP applications to give customers the best possible experience and performance. The certification of SAP Business Applications on Oracle Cloud Infrastructure furthers this long-standing partnership. Oracle and SAP certify and support SAP NetWeaver applications on Oracle Cloud Infrastructure, making it easy for organizations to move Oracle-based SAP applications to the cloud. Oracle Cloud Infrastructure enables customers to run the same Oracle Database and SAP applications as they have previously run in their own data centers, preserving existing investments while reducing costs and improving agility. For additional details, click here.

SAP NetWeaver on Oracle Cloud Infrastructure has been available on bare metal shapes since 2017, and in 2018 we announced support for virtual machine shapes. Having all of these options available for SAP workloads on Oracle Cloud Infrastructure made it easier for us to continue the strong momentum of customers migrating their SAP applications to the Oracle Cloud. I'd like to share some highlights of successful SAP migration stories, leveraging these recent service enhancements, that our partners and the SAP and Oracle engineering teams presented at OpenWorld.

Partner Highlights

Cintra, an Oracle Platinum partner, has been in business for over 20 years, working with big names in financial services, retail, aviation, healthcare, and gaming. During their OpenWorld session, SAP on Oracle Cloud Infrastructure and Cintra deliver extreme performance for top retailer [CAS4027], they highlighted an SAP use case that they successfully migrated from on-premises to Oracle Cloud Infrastructure and Oracle Database Exadata Cloud Service, along with insightful details about that process.

The customer was a leading retailer whose on-premises architecture supporting their SAP applications (including SAP ERP, ERP Central Component (ECC), Business Warehouse, Solution Manager, and Enterprise Portal) was nearing its end of life. Like many customers who look to the cloud, they wanted to take advantage of continuously modern infrastructure and to replace their traditional CAPEX model with an OPEX one. However, they also required the utmost reliability and the highest performance for the database workloads supporting these applications. Based on the performance requirements, Cintra recommended running SAP on Oracle Cloud Infrastructure because it is the only IaaS provider to support Exadata Cloud Service and RAC. What's more, both Oracle and SAP have jointly tested and certified Oracle Engineered Systems like Exadata for SAP, ensuring a faster time to deployment. And customers who are running their SAP applications on Exadata on-premises can move to the cloud with 100% compatibility while taking advantage of Oracle's Bring Your Own License (BYOL) program.

Cintra leveraged their RapidCloud Transformation Methodology and strong collaboration with Oracle's SAP Cloud Platform team to execute an aggressive cloud deployment timeline, and they successfully migrated their customer's production implementation of SAP onto Oracle Cloud Infrastructure. In the end, the customer was able to realize performance and business continuity improvements and eliminated $2.5M in on-premises technology refresh costs. Abdul Sheikh, CTO at Cintra, said: "Having delivered the world's first production SAP to Oracle Cloud Infrastructure, Cintra was delighted to present the details of our successful SAP cloud transformation project at OpenWorld 2018. It was a tremendous conference, and off the back of our involvement, we're talking to a number of organizations who are keen to move their on-premises SAP platforms onto an enterprise-grade cloud. We're excited to be working with the Oracle SAP and Oracle Cloud Infrastructure teams to realize these customers' visions."

SAP and Oracle Engineering Highlights

Valuable sessions like SAP on Oracle: Development Update [PRO4405], given by Christian Graf, Manager / Supervisor, SAP SE; Gerhard Kuppler, Oracle VP, SAP Alliances; and Jan Klokkers, Senior Director SAP Development, Oracle, provided a roadmap for our joint customers, explaining what's currently available and what's lined up in the coming months. Here's a view of what's currently available to support SAP on Oracle Cloud Infrastructure.

To keep up to date on the latest SAP on Oracle news, product certifications, and best practices, I encourage you to bookmark these resources:

SAP on Oracle Community
Follow SAP on Oracle on Twitter

Another successful Oracle OpenWorld has finished strong. Over 60,000 customers and partners from 175 countries gathered in San Francisco, where they had a chance to see and hear all the great things that we have delivered and have planned for the coming months. In Oracle Cloud Infrastructure sessions at Oracle OpenWorld, attendees learned why Oracle Cloud Infrastructure is different from other clouds, from its standout performance to its commitment to openness to its focus on security, and how it enables organizations to succeed.


Developer Tools

Apache Spark on Kubernetes: Maximizing Big Data Performance on Container Engine for Kubernetes

In this post, I demonstrate how you can quickly create a Kubernetes cluster on Oracle Cloud Infrastructure by using the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) service. Then I showcase how you can achieve a significant performance boost by running applications on Container Engine for Kubernetes bare metal instances, and I perform a comparison by running the same workload on Container Engine for Kubernetes virtual machine (VM) instances. Oracle Cloud Infrastructure is currently the only public cloud provider that offers the capability to run bare metal Kubernetes clusters. Running Kubernetes and containers on bare metal machines shows a 25 to 30 percent performance improvement for both CPU and IO operations compared to running them on VMs, which makes bare metal well suited for running Big Data applications.

Deployment Architecture

At a high level, the deployment looks as follows:

Deploy a highly available Kubernetes cluster across three availability domains.
Deploy two node pools in this cluster, across three availability domains. One node pool consists of VMStandard1.4 shape nodes, and the other has BMStandard2.52 shape nodes.
Deploy Apache Spark pods on each node pool.

Deployment Steps

Perform the following steps to set up and test the deployment.

Step 1: Deploy a Three-Node VMStandard1.4 Shape Kubernetes Cluster

Create a Kubernetes cluster on Oracle Cloud Infrastructure using Container Engine for Kubernetes, following the steps outlined in this tutorial. This step involves creating the necessary virtual cloud network (VCN), subnets, security list rules, and Identity and Access Management (IAM) policies. After creating the cluster, deploy a three-node VMStandard1.4 shape node pool on the cluster, with one node per subnet in each availability domain. Download the kubeconfig file from the Oracle Cloud Infrastructure Console to your local environment and have kubectl installed. You need this to create Spark pods and to work with your Kubernetes environment. The cluster should look similar to the following example:

Step 2: Deploy Apache Spark and Apache Zeppelin Pods on the Node Pool

On the node pool that you just created, deploy one replica of the Spark master, one replica of the Spark UI-proxy controller, one replica of Apache Zeppelin, and three replicas of Spark worker pods. You will use Apache Zeppelin to run Spark computations on the Spark pods. To create the Spark pods, follow the steps outlined in this GitHub repo. The spark-master-controller.yaml and spark-worker-controller.yaml files are the Kubernetes manifest files for deploying the Spark master and worker controllers, and the spark-master-service.yaml file exposes the master as a Kubernetes service. Similarly, the zepplin-controller.yaml and zepplin-service.yaml manifest files deploy the Zeppelin pod and expose Zeppelin as a service.

Note: The CPU and memory available for the entire cluster is 24 vCPUs (8 vCPUs per node) and 84 GB of memory (28 GB per node). If you look at the manifest files, you will observe that we are assigning 1 vCPU and 1000MiB of memory per Spark worker pod, and 200m vCPU and 100MiB of memory each for the Spark master, UI-proxy, and Zeppelin pods. We use the same allocation of memory and CPU per pod throughout this post.

The deployment so far should look as follows. The process works like this:

1. Connect to Apache Zeppelin's UI and trigger a Spark computation, which in turn interacts with the cluster's hosted Container Engine for Kubernetes master.
2. The Container Engine for Kubernetes master sends the request to the node that contains the Spark master.
3. The Spark master delegates the scheduling back to the Kubernetes master to run the Spark jobs on the Spark worker pods.
4. The Kubernetes master schedules the Spark jobs on the Spark worker pods.
5. The Spark worker and master pods interact with one another to perform the Spark computation.

In the next step, you initiate the Spark computation by using Zeppelin.

Step 3: Initiate the Spark Computation to Measure the Performance of the Cluster

At the end of step 2, you took the Zeppelin pod and port-forwarded the WebUI port as follows:

kubectl port-forward zepplin-controller-ja9s 8080:8080

After you load the Zeppelin UI, create a new notebook. In it, paste the Python code needed to run the Spark computation, which counts the prime numbers among the natural numbers from 0 to 100 million. You need to add a %pyspark hint for Zeppelin to understand it.

%pyspark
from math import sqrt
from itertools import count, islice

def isprime(n):
    return n > 1 and all(n%i for i in islice(count(2), int(sqrt(n)-1)))

nums = sc.parallelize(xrange(100000000))
print nums.filter(isprime).count()

After pasting the code, press Shift+Enter or click the play icon to the right of the snippet. The Spark job runs and the result is displayed. Observe that it takes about 387 seconds to complete this task (completion times may vary).

Step 4: Scale the Replicas of Spark Worker Pods and Measure the Performance Again

Use the following command to scale the replicas of Spark worker pods to six, using the same allocation of vCPU and memory per pod as described in step 2.

kubectl scale --replicas=6 rc/spark-worker-controller

You can check the CPU and memory allocation of the cluster by using kubectl describe nodes. CPU Requests and Memory Requests indicate the allocated values.

kubectl describe nodes | grep -A 2 -e "^\\s*CPU Requests"

Now run the Spark computation again from the Zeppelin UI, against the newly scaled cluster of six Spark worker replicas, and measure the performance. Notice that the computation completes slightly faster because the Spark jobs are distributed across more worker pods. Lastly, scale the worker pods to 20 replicas and test the performance of the cluster again. Notice that the performance actually deteriorates because of excessive scaling of worker pods: this time it takes 371 seconds to complete. The "Inferences" section explains why this pattern occurs.

Step 5: Deploy a Three-Node BMStandard2.52 Node Pool in the Same Cluster

Deploy a three-node BMStandard2.52 shape node pool on the same cluster by clicking the Add Node Pool button in the Node Pools section of the Oracle Cloud Infrastructure Console.

Step 6: Repeat Steps 2, 3, and 4

Repeat steps 2, 3, and 4: deploy Apache Spark and Zeppelin, and run the performance tests on the new node pool. In Container Engine for Kubernetes, each node pool has a unique Kubernetes label assigned. Use these labels to apply node affinity so that the Spark jobs are scheduled on the BMStandard2.52 shape node pool rather than on the VMStandard1.4 shape node pool, as shown in the sketch that follows.

Note: Use the same vCPU and memory allocation for pods in the bare metal node pool, which ensures that the performance comparison is consistent across both node pools.
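The following is a minimal sketch of that node-affinity pattern. The node pool label key and value (name: spark-bm-pool) and the worker image reference are placeholders, not the actual labels OKE assigns; list your nodes' labels first and substitute the real key and value from your bare metal pool.

# Inspect the labels that your node pools carry
kubectl get nodes --show-labels

# Schedule Spark workers onto the bare metal pool via a nodeSelector
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: spark-worker-controller-bm
spec:
  replicas: 3
  selector:
    component: spark-worker-bm
  template:
    metadata:
      labels:
        component: spark-worker-bm
    spec:
      nodeSelector:
        name: spark-bm-pool   # placeholder: use your node pool's real label
      containers:
        - name: spark-worker
          image: <SPARK_WORKER_IMAGE>   # image from the referenced GitHub repo
          ports:
            - containerPort: 8081
          resources:
            requests:
              cpu: 1
              memory: 1000Mi
EOF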
Finally, the Spark computation on a 20-replica Spark worker cluster takes 30 seconds to complete. The Spark jobs on the BMStandard2.52 bare metal node pool finish 5 to 10 times faster than the same workloads running on the VMStandard1.4 node pool, although both node pools have the same vCPU and memory allocation for the Spark and Zeppelin pods. The following section discusses the reasons for the performance differences.

Inferences

With no hypervisor overhead, containers on bare metal perform up to 30 percent better than those on VMs, which makes them well suited for running performance-intensive workloads like Big Data and HPC jobs. Bare metal instances also offer higher packing density for containers, which results in better resource utilization and minimal network traversal for intercontainer communication. For a massively parallel and distributed computation like Spark or Hadoop, minimizing the network traversal for intercontainer communication results in a significant performance gain. Lastly, bare metal instances come with two extremely fast NICs offering 25 Gbps of raw bandwidth each. The bandwidth on VM shapes scales with the size of the VM; a VMStandard1.4 shape offers 1.2 Gbps of network bandwidth. As a result, bare metal instances are better suited for applications that require high network throughput.

Conclusion

This post demonstrated how to quickly deploy a highly available, multiple-availability-domain Kubernetes cluster on Oracle Cloud Infrastructure; how to run VM and bare metal instances in the same cluster as separate node pools; and the significant performance benefits that you can achieve by running Big Data and HPC applications on bare metal Kubernetes clusters. Container Engine for Kubernetes is the only managed Kubernetes offering in the public cloud space that lets you create a node pool of bare metal instance shapes in a Kubernetes cluster. As shown in this post, you can create independent node pools of VM shapes and bare metal shapes in the same Kubernetes cluster, and use Kubernetes labels to intelligently route high-performance computation workloads to bare metal node pools and the rest to VM node pools. Having this flexibility is extremely useful.

References

- Overview of Container Engine for Kubernetes
- GitHub code for deploying Apache Spark and Zeppelin on Kubernetes
- Tutorial for creating a Kubernetes cluster using Container Engine for Kubernetes

-Abhiram Annangi  Twitter | LinkedIn
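Step 6 above relies on node affinity, and one minimal way to express it is a nodeSelector on the worker controller's pod template. A sketch, assuming a node-pool label of name=pool2 (an illustrative key and value; check the labels that Container Engine for Kubernetes actually assigned to your nodes first):

  # List the labels on your nodes to find the one that identifies the bare metal pool.
  kubectl get nodes --show-labels

  # Add a nodeSelector to the worker controller's pod template so that new
  # Spark worker pods schedule only onto the bare metal node pool.
  kubectl patch rc spark-worker-controller -p \
    '{"spec":{"template":{"spec":{"nodeSelector":{"name":"pool2"}}}}}'

  # The template change affects only new pods, so recreate the workers.
  kubectl scale --replicas=0 rc/spark-worker-controller
  kubectl scale --replicas=3 rc/spark-worker-controller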

Product News

Cross-Region Block Volume Backups for Business Continuity, Migration, and Expansion

We are excited to announce that you can now copy your block volume backups between Oracle Cloud Infrastructure regions. This new capability is part of our continual investment to provide customers with comprehensive application and data protection solutions in the cloud. Cross-region backup copy enhances the following capabilities:

Disaster recovery and business continuity: By copying block volume backups to another region at a regular interval, you can rebuild applications and data in another region if a region-wide disaster occurs.

Migration and expansion: You can easily migrate and expand your applications to another region.

This capability is provided at no additional cost to Oracle Cloud Infrastructure customers beyond the cost of the block storage, object storage, and outbound data transfer consumed by the remote copy, as listed on the Oracle Cloud Infrastructure pricing page. It is available in the Oracle Cloud Infrastructure Console, API, CLI, SDK, and Terraform.

Copying a block volume backup to another region is straightforward in the Oracle Cloud Infrastructure Console. The following steps show how to copy a backup across regions and how to restore volume content from a backup in another region.

1. In the Block Storage section of the Console, access the block volume backups in the appropriate compartment.
2. From the action menu (...) for the backup that you want to copy to another region, select Copy to Another Region.
3. Specify a name for the backup and the destination region, and click Copy Block Volume Backup.
4. Confirm the backup copy settings.
5. In the Console, go to the destination region and verify that the backup is available in that region.

Now you can restore from the backup in the destination region by creating a new volume from the backup.

1. From the action menu (...) for the backup in the destination region, select Create Block Volume.
2. Enter a name for the restored volume, provide the necessary parameters, and then click Create Block Volume.
3. In the Block Volumes section in the destination region, verify that the restored volume is available.

We want you to experience these new features and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It’s easy to try them with our $300 free credit. For more information, see the Oracle Cloud Infrastructure Getting Started guide, Block Volume service overview, Block Volume documentation, and FAQ.

Watch for announcements about additional features and capabilities in this space. Features such as policy-based, scheduled copy of backups across regions are on our near-term road map. We value your feedback as we continue to enhance our offering and make our service the best in the industry. Let us know how we can continue to improve or if you want more information about any topic.

Max Verun
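Because the copy operation is also exposed through the CLI, periodic copies are easy to script until the policy-based scheduling on the road map arrives. A minimal sketch; the backup OCID, destination region, and display name are placeholders:

  # Copy an existing block volume backup to another region with the OCI CLI.
  oci bv backup copy \
    --volume-backup-id ocid1.volumebackup.oc1.phx.exampleuniqueid \
    --destination-region us-ashburn-1 \
    --display-name weekly-dr-copy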

Oracle Cloud Infrastructure

Deploying Confluent Platform Using Helm Charts on Oracle Kubernetes Engine

Hello, my name is Pinkesh Valdria, and I'm a Solutions Architect working on Big Data for Oracle Cloud Infrastructure. This post is a follow-up to our post about deploying Confluent on Oracle Cloud Infrastructure Compute instances. Now you can use Terraform automation to deploy Confluent Platform using Helm charts on Oracle Cloud Infrastructure Container Engine for Kubernetes.

Oracle Cloud Infrastructure Container Engine for Kubernetes

Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. Use Container Engine for Kubernetes when your development team wants to reliably build, deploy, and manage cloud native applications. You specify the compute resources that your applications require, and Container Engine for Kubernetes provisions them on Oracle Cloud Infrastructure in an existing tenancy.

Figure 1: OKE High-Level Architecture

Confluent Open Source Provides a More Complete Distribution of Apache Kafka

Confluent Open Source brings together the best distributed streaming technology from Apache Kafka and takes it to the next level by addressing the requirements of modern enterprise streaming applications. It includes clients for the C, C++, Python, and Go programming languages; connectors for JDBC, Elasticsearch, and HDFS; Confluent Schema Registry for managing metadata for Kafka topics; and Confluent REST Proxy for integrating with web applications.

Figure 2: Confluent Platform Components

Helm

Helm is an open source packaging tool that helps you install applications and services on Kubernetes. Helm uses a packaging format called charts, which are a collection of YAML templates that describe a related set of Kubernetes resources.

Deploying Confluent Platform on Container Engine for Kubernetes

The Terraform automation template performs the following steps:

1. Deploys a Kubernetes cluster on Oracle Cloud Infrastructure in a new virtual cloud network (VCN), including subnets, a load balancer, a security list, and node pools across three availability domains.
2. Prepares your local machine to access the Kubernetes cluster: generates a kube configuration file to access the cluster, installs kubectl, and installs Helm.
3. Adds Confluent Helm charts to the Helm repo.
4. Installs the Confluent Open Source platform (named my-confluent-oss, but you can change the name).

This Terraform template is available on our cloud partner repository.

Figure 3: Cluster with Node Pools

By default, the Confluent Helm chart deploys three pods for ZooKeeper, three pods for Kafka, and one each for Schema Registry, Kafka REST, Kafka Connect, and KSQL. Grafana and Prometheus monitoring are optional.

Figure 4: Pods Deployed on Clusters

We hope that you are as excited as we are about Confluent Platform deployment on Container Engine for Kubernetes. Let us know what you think!

Pinkesh Valdria
Principal Solutions Architect, Big Data
https://www.linkedin.com/in/pinkesh-valdria/
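If you want to see roughly what the template's last two steps do, the equivalent manual Helm commands look like the following. This is a sketch using Helm 2 syntax (current when this was written), and my-confluent-oss matches the template's default release name:

  # Add Confluent's chart repository and install the open source platform.
  helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
  helm repo update
  helm install --name my-confluent-oss confluentinc/cp-helm-charts

  # Watch the Kafka, ZooKeeper, and supporting pods come up.
  kubectl get pods -w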

Product News

Connect Your On-Premises Corporate Resources with Multiple Virtual Cloud Networks

If you have organized your resources in multiple virtual cloud networks (VCNs) to meet your governance model and regional presence, connecting all those VCNs to your on-premises networks can be a real challenge. Until now, your only option was to have a FastConnect or IPSec connection terminate at each of your VCNs. However, this option means you incur costs for multiple FastConnect links, and you have the operational burden of provisioning a new FastConnect or IPSec connection for each new VCN you add.

We are excited to announce the availability of Oracle Cloud Infrastructure VCN Transit Routing, which now offers an alternative. This solution is based on a hub-and-spoke topology and enables the hub VCN to provide consolidated transit connectivity between an on-premises network and multiple spoke VCNs within the Oracle Cloud Infrastructure region. Only a single FastConnect or IPSec VPN (connected to the hub VCN) is required for the on-premises network to communicate with all the spoke VCNs. This solution is based on our existing local VCN peering and dynamic routing gateway (DRG) offerings.

With this solution, you no longer need to attach a DRG to each of your VCNs to access your on-premises network. You attach only a single DRG to the hub VCN and allow resources in the spoke VCNs to share the (FastConnect or IPSec VPN) connectivity to the on-premises resources. At a high level, you do this by performing the following steps:

1. Establish a peering relationship between each spoke VCN and the hub VCN.
2. Associate route tables with the hub VCN's local peering gateways (LPGs) and DRG.
3. Set up rules in those route tables to direct traffic from each LPG on the hub VCN to the DRG, and from the DRG to each LPG.

VCN Transit Routing offers customers many benefits:

Better network design: Simplified network management and fewer connections required to establish traffic flow between multiple VCNs and on-premises networks. The hub VCN enables shared transit connectivity to remote networks and acts as a central point of policy enforcement for all your traffic transiting in and out of the region.

Increased service velocity and faster time to market: This solution supports FastConnect and IPSec VPN connectivity requirements between VCNs and remote resources with minimal on-premises network changes. If you add a new VCN, it now takes just a few minutes to set up local peering and routing to the hub VCN. This is a marked improvement over previous lead times of days or weeks waiting on corporate change management procedures for establishing FastConnect or provisioning IPSec connections at on-premises edge routers.

Centralized control of route advertisements: This solution does not enable traffic to flow between the on-premises network and the spoke VCNs by default. You may want to allow the spoke VCNs to access all or specific corporate network partitions (or subnets) in the customer premises. You control this with the route table associated with the LPG on the hub VCN. You can configure restrictive route rules that specify only the on-premises subnets that you want to make available to the spoke VCN. The routes advertised to the spoke VCN are those in that route table plus the hub VCN's CIDR. This control enables isolated connectivity of spoke VCN resources to their on-premises counterparts. Similarly, you may want to allow the on-premises network to access all or specific subnets in the spoke VCNs. You control this using the route table associated with the DRG on the hub VCN.
You can configure restrictive route rules that specify only the spoke VCN subnets that you want to make available to the on-premises network. The BGP route advertisements to the on-premises network are those in that route table plus the hub VCN's CIDR.

Cost savings: Significant TCO benefits based on centralized management of the private connectivity (FastConnect or VPN links) and routing to corporate resources in customer premises.

Streamlined operations: This solution enables a service provider model in which the hub VCN provides the transit connectivity service to the spoke VCNs. The governance boundaries of the hub VCN may be different from those of the spoke VCNs, so they may be managed by separate compartments or tenancies. In some cases, the hub and spoke VCNs are in the same company, and the central IT team's hub VCN provides transit service to spoke VCNs managed by lines of business (LOBs). In other cases, the hub and spoke VCNs are in different companies, and one company provides transit service to others.

I hope you enjoyed learning about the VCN Transit Routing feature in Oracle Cloud Infrastructure. In my next post, I will walk you through a VCN Transit Routing scenario where two spoke VCNs are enabled to access remote on-premises networks by using a single IPSec connection attached to the hub VCN. Stay tuned...
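To make the route-rule setup concrete, a single rule that sends on-premises-bound traffic from a hub LPG's route table to the DRG can be expressed through the CLI roughly as follows. This is a sketch; the OCIDs and the 172.16.0.0/12 corporate CIDR are placeholders:

  # Attach a rule to the route table used by a hub LPG: send traffic destined
  # for the on-premises CIDR to the hub VCN's DRG.
  oci network route-table update \
    --rt-id ocid1.routetable.oc1.phx.exampleuniqueid \
    --route-rules '[{"cidrBlock": "172.16.0.0/12", "networkEntityId": "ocid1.drg.oc1.phx.exampleuniqueid"}]'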

Oracle Cloud Infrastructure

Here's a Nifty Checklist to Secure a Cloud Application

When customers are migrating existing applications from on-premises data centers and from other cloud providers to Oracle Cloud Infrastructure, or even when they are building new cloud native applications on Oracle Cloud Infrastructure, I often get asked for advice on how they can best secure their applications in a cloud environment. First of all, it is critical that development teams and security teams work in tandem to secure applications as well as the cloud environment. Following the agile methodology, most modern IT organizations have transformed to a DevSecOps model. In fact, Continuous Integration and Continuous Deployment (CI/CD) with on-demand releases has also led to Continuous Security (CS). Based on past experience with customer deployments and on training I've done with SANS and OWASP, I've put together a nifty checklist that can be used as a guide when securing any cloud application. It can also be used by your Cloud Security Operations Center (cSOC), should you have one or be looking to establish one.

The checklist is categorized into seven sections:

1. SecOps and Configuration Management
2. Data Protection
3. Authentication and Access Control
4. I/O Handling
5. Logging
6. Error Handling
7. Session Management

SecOps and Configuration Management

From the outset, it's important to ensure that all security requirements are documented, and that these requirements are accounted for in your deployment, design, review, testing, and change management processes.

Document security requirements: Work with the cloud Governance, Risk, and Compliance (GRC) group and the application team to document all the security-related requirements. These can be across functional and non-functional requirements. Transforming requirements into user stories allows you to track them using your agile ticketing system (like Rally or Jira).

DevSecOps-friendly change management: Automate the change management process and align it with the current CI/CD process so that new releases can be deployed only after proper testing and associated documentation.

Automated deployment: Use automation for Continuous Integration and Continuous Deployment to ensure that releases are consistent and repeatable in all environments.

Continuous design review: Continuously review the design and architecture of the application throughout its life cycle. Security analysis, risk identification, and mitigation are key focus areas.

Continuous code review: Continuously review the code of the application as the application is updated or modified. Security analysis, risk identification, and mitigation are key focus areas.

Continuous security testing: Continuously test the application for security vulnerabilities throughout the DevOps process and the application life cycle.

Infrastructure hardening based on releases: Harden all components of the logical infrastructure that the application uses, per the guidelines and compliance required for that application environment.

Incident response automation: Automate and continuously update the defined incident-handling plan.

Continuous training: Train developers, cloud engineers, and architects on the new features of the cloud services that the application uses.

Data Protection

It is also important to ensure that you have these data protection capabilities. At Oracle, many are built into our cloud infrastructure by default, and others are available as a service.

HTTPS only: Use HTTPS (TLS) for front-end and back-end application flows.
HTTP access disabled: Disable HTTP for all publicly exposed interfaces. Ideally, disable it globally.

Use vaults for user password stores: Use secret management with Oracle Wallet or Oracle Key Vault.

Use of the Strict-Transport-Security header: The Strict-Transport-Security header helps to mitigate HTTP downgrade attacks that use variations of the sslsniff tool.

Secure key management: Properly store, secure, and rotate keys. Oracle Cloud Infrastructure Key Management can provide this solution.

Strong TLS configuration: Use TLS 1.2 or above with strong EC cipher strength. Oracle Cloud Infrastructure LBaaS uses TLS 1.2 with the following cipher suites: ECDHE-RSA-AES256-GCM-SHA384, ECDHE-RSA-AES256-SHA384, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES128-SHA256, DHE-RSA-AES256-GCM-SHA384, DHE-RSA-AES256-SHA256, DHE-RSA-AES128-GCM-SHA256, and DHE-RSA-AES128-SHA256.

Reputable certificate authority: Ensure that certificates are valid and signed by reputable certificate authorities. Match the name on the certificate with the FQDN of the website.

Browser data caching: Configure browsers not to cache data, using cache-control HTTP headers or meta tags.

Data at rest: In Oracle Cloud Infrastructure, by default, all storage types (block, file, and object) are encrypted.

Key exchange: Exchange keys over a secure channel.

Tokenization of sensitive data: Where possible, don't store sensitive data at the web or application layer. If necessary, use tokenization to reduce exposure.

Authentication and Access Control

When it comes to authentication and access control over your cloud infrastructure, I would advise following these guidelines.

Access control checks: Apply access control checks consistently all along the stack, following the principle of complete mediation.

Least privilege: Apply the principle of least privilege by using a mandatory access control system such as Oracle Identity Cloud Service (IDCS) and Oracle Cloud Infrastructure IAM.

Direct object reference: Avoid referring to objects directly. Always use relative pointers based on the authenticated user identity and trusted server-side information.

Unvalidated redirects: Don't permit unvalidated redirects. Put a strong access control policy in place to validate any redirect requests.

Credential security: Avoid hardcoding credentials. Secure the database storing the credentials using multiple tiers of security controls.

Strong password policy: Implement a strong password policy along with an automated, multi-factor, identity-based password reset system.

Account lockout policy: Implement an account lockout policy to protect against brute-force attacks. Display appropriately nonspecific messages around wrong credentials to confuse an attacker.

Multi-factor authentication: Ensure that multi-factor authentication is in place using YubiKeys or other hardware or software-based tokens.

I/O Handling

To help ensure secure I/O handling, I recommend reviewing this checklist to mitigate possible security attacks.

Whitelist: Use whitelists in place of blacklists. Validate each input or output within the context of use.

Standard encoding for the application: Use a standard encoding like UTF-8 consistently for all the application pages, using HTTP headers or meta tags, to reduce risks like cross-site scripting attacks.
Nosniff header usage: Use the X-Content-Type-Options: nosniff header to stop browsers from guessing the data type.

Tabnabbing: Prevent tabnabbing by denying the linked page the ability to change the opener's tab. This is a common look-alike phishing attack.

Well-formed SQL queries: Use parameterized SQL queries, with user content passed into a bind variable, to make queries safe against SQL injection attacks. Never build SQL query strings dynamically from user input.

X-Frame-Options: Use the Content-Security-Policy (CSP) header's frame-ancestors directive to mitigate clickjacking.

Secure HTTP response headers: To defend against MITM and XSS attacks, use the X-XSS-Protection, CSP, and Public-Key-Pins headers.

Logging

Of course, logging is a critical part of ensuring adherence to compliance and of maintaining a good security posture. Below are some guidelines on the types of activities to log.

Sensitive data access logging: Log sensitive data access to meet regulatory compliance requirements such as PCI and HIPAA.

Privilege escalation logging: Log all privilege escalation requests for audit and compliance.

Administrative activity logging (Console, CLI, and API): Log all administrative access for application configurations or infrastructure configurations.

Authentication and validation logging: Log all authentication, session management, and input validations.

Ignore unimportant data: Avoid logging unimportant or inappropriate data to reduce storage and the associated encryption overhead.

Secure all logs: Securely store logs using encryption and per the established log retention policy.

Error Handling

Below are some best practices for how to handle unexpected errors and the error messages your system sends.

Handle all exceptions: Handle unexpected errors and gracefully return to the user or the invoking application.

Generic error messages: Display generic error messages to the user to protect details of the application stack.

Framework-generated messages: Suppress framework-generated messages because they can reveal sensitive information about the framework used and can lead to sophisticated exploits.

Session Management

And finally, I would recommend implementing these session management attributes to avoid potential security risks.

Session tokens: Every time a user authenticates or escalates their privilege level, generate a new session token. Regenerate the token even if the encryption status changes.

Idle session timeout: To protect against Ajax application-based attacks, implement an idle session timeout.

Absolute session timeout: To mitigate session hijacking, log users out every 4–6 hours.

Session destruction: If any tampering or intrusion is detected, immediately destroy the session.

Cookie domain and path: Restrict the domain and the path scope for the application in context. Avoid any wildcard domain setting.

Cookie expiration time: Set a reasonable expiration time for every session cookie.

Cookie attributes: Set secure attributes using the HttpOnly and Secure flags to make the session ID invisible to any client-side scripts.

Session logout: When the user logs out of their session, invalidate and destroy the session.

In conclusion, I hope this checklist comes in handy as you're migrating or building applications in the cloud. I'd also note that this is a subset of the various checklists that can be downloaded from the following resource sites: NIST, OWASP, and SANS.
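Several of the header-related items above can be spot-checked from the command line. A quick sketch, with the URL as a placeholder:

  # Fetch only the response headers and look for the hardening headers
  # discussed above (HSTS, nosniff, CSP) plus the cookie flags.
  curl -sI https://www.example.com | \
    grep -iE 'strict-transport-security|x-content-type-options|content-security-policy|set-cookie'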

Events

Inside NVIDIA and Oracle's Partnership on AI and HPC in the Cloud

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. Oracle is now offering NVIDIA's unified artificial intelligence (AI) and high performance computing (HPC) platform on Oracle Cloud Infrastructure. I recently caught up with Karan Batta, who manages HPC for Oracle Cloud Infrastructure, to find out what this partnership means for Oracle customers who run performance-intensive workloads and are looking to move to the cloud. He also explains how Oracle makes it easy for customers to transfer NVIDIA HPC workloads to the cloud. Listen to our conversation and read a condensed version:

Why is the partnership between Oracle and NVIDIA such a big deal?

Karan Batta: It's a big deal in part because we are the first public cloud provider to support NVIDIA HGX-2, the company's unified AI and HPC platform. But let's talk about the GPU market for a minute. I would say that the GPU-accelerated market is going to be a huge portion of the future market. Obviously, it doesn't make sense to move everything to a GPU. But certainly, a lot of computationally intensive tasks like risk modeling, DNA sequencing, and real-time analysis make sense for GPUs. The big use cases today are things like AI and ML, and in the future, it will be things like autonomous driving and weather simulation. Many tasks can benefit from GPUs.

Why did Oracle choose to partner with NVIDIA?

Batta: NVIDIA is the global leader right now in terms of not just the GPU hardware but the software ecosystem as well. They've done a fantastic job of growing their ecosystem around CUDA and different open source libraries such as cuDNN and cuML. What we're trying to do at Oracle Cloud Infrastructure is enable the entire ecosystem on our platform. We're not going to tell people to rip up their application and use our APIs instead of anybody else's, like other cloud providers do. If you're already invested in the ecosystem, you want to come to Oracle. Not only do we offer the best GPU infrastructure, you can also get the ecosystem along with it. As part of that effort, we also announced that we've integrated the NVIDIA GPU Cloud (NGC) container registry. NVIDIA essentially builds, manages, qualifies, certifies, benchmarks, tests, and publishes many containers for deep learning, ML, AI, and HPC, and now they're moving into data analytics as well. We're supporting all of that in our public cloud.

Are we certified for this?

Batta: Yes. Right now, we're the only ones that have RAPIDS available on a public cloud certified through NGC. RAPIDS is a suite of open source software libraries for executing data science training pipelines entirely on NVIDIA GPUs. It's generally available, and you can find documentation on NVIDIA's and Oracle's websites.

What do we offer in terms of making it easier for customers to transfer NVIDIA HPC workloads to Oracle Cloud Infrastructure?

Batta: We've made it much easier for customers to use the NVIDIA stack on top of Oracle. I think that is one of the biggest things that people are starting to notice. You can take any framework or application that is already running on GPUs and quickly run it on Oracle Cloud Infrastructure without changing the image or anything else. That's true even if you have an on-premises image. You can run it para-virtualized on Oracle Cloud Infrastructure and it just works.
On top of that, we are co-building this hardware with NVIDIA. We're doing special things in regard to how we build that hardware, and especially how we spec that hardware for different types of markets, whether it's AI or a legacy HPC workload.

Can you tell me how many Oracle Cloud Infrastructure regions have these capabilities right now, and what are the future plans?

Batta: This is available today in all of our regions. We have four major regions today: Virginia, Phoenix, London, and Frankfurt. And we've announced numerous new regions that will come online in the next 12 months in places like Korea, Japan, and India. We're also going to have quite a few government regions, along with additional regions in Europe and Asia-Pacific, so we are in this for the long term. All of these capabilities are going to be uniform across all of our regions.

Okay, I'm sold. I want to take this for a test drive. How do I try it out?

Batta: We offer $300 in free credits, so you can go to our website and try it out. If you have additional questions or if you want to try out something different, feel free to reach out to me and my team. We'd be more than happy to guide you and make sure that you're successful on Oracle Cloud Infrastructure.
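Pulling a prebuilt container from the NGC registry that Batta describes looks roughly like the following. The image tag is illustrative (check the NGC catalog for current tags), and $oauthtoken is the literal username that NGC expects:

  # Log in to the NGC registry and pull a GPU-accelerated framework image
  # onto a GPU instance.
  docker login nvcr.io -u '$oauthtoken' -p "$NGC_API_KEY"
  docker pull nvcr.io/nvidia/tensorflow:18.11-py3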

Breaking Down the HPC Barriers

In my last post, I discussed some reasons why most enterprise workloads still haven't moved to the cloud. One of those reasons is that mission-critical applications require levels of performance and reliability that earlier-generation clouds simply haven't offered. This has been especially true for high-performance computing (HPC) applications. Oracle Cloud Infrastructure's wide variety of compute options—combined with our networking innovations, robust security features, and industry partnerships—makes us uniquely suited for these data-intensive workloads. (And our prowess in database for the past four decades doesn’t hurt!)

At Oracle OpenWorld last month, I announced that Oracle Cloud Infrastructure is ready for any and all workloads. Case in point: our news at Supercomputing 2018 this week. We've launched bare metal compute instances that enable organizations to run HPC workloads in the cloud with the same levels of performance that they get on premises. I envision massive server farms, sitting idle, with millions of dollars of unused hardware and gear waiting for the next simulation!

Networking Innovations

These bare metal cloud instances are powered by a new feature called clustered networking, which removes the need for enterprises to run specialized networking equipment on premises for their HPC workloads. Additionally, Oracle Cloud Infrastructure is the first and only public bare metal cloud with a high-bandwidth, low-latency RDMA network. For more information about what we're doing with networking, watch this video with Karan Batta, Senior Principal Product Manager, and Jag Brar, Network Architect.

Robust Security

HPC applications are addressing some of the biggest problems in the world. They help develop new cancer treatments, improve weather prediction, and make cars safer. Being able to run these workloads in the cloud without a performance hit is a major advancement, but it's not enough to clear all cloud migration roadblocks. For the organizations running these workloads, keeping them safe is the top priority. These workloads are as mission critical as it gets, and we take the responsibility to run them effectively in the cloud seriously, as backed by our industry-leading service-level agreements. Security is also a priority, and it is a fully integrated pillar of Oracle Cloud Infrastructure. Oracle has protected enterprises' mission-critical data for more than 40 years, and that experience drove how we built our cloud. As Larry Ellison, CTO and Executive Chairman, explained at OpenWorld, we can't see customer data and customers can't access our control plane. Plus, we're using the latest in artificial intelligence and machine learning to protect against increasingly advanced attackers.

Industry Partnerships

Oracle is able to provide a secure, high-performance cloud, in part, because we collaborate with other innovators and leaders in the market. Our new HPC bare metal instances are powered by Intel Xeon Scalable processors, with RDMA functionality provided by Mellanox Technologies. We also offer AMD EPYC instances and GPU instances powered by NVIDIA. No other cloud provider offers this combination of hardware choice, networking prowess, and commitment to security. It's what makes Oracle Cloud Infrastructure the only enterprise-grade cloud—especially for HPC workloads.

Today is the last day you can see us on the exhibit floor at SC18 in Dallas. Stop by booth #2806, say hello, and learn more!

Security

Mapping the Future of Security at Oracle Cloud Infrastructure

"Make it easy for people to do the right thing, and they tend to." Those words were first spoken to me by a very wise and talented Chief Information Officer (CIO) who mentored me early in my career. The quote really stuck with me, and it came to define my overall approach to leading security teams at the companies I've worked at over the last 15 years. It's certainly on my mind as I move into the new role of Chief Security Officer (CSO) for Oracle Cloud Infrastructure. Whether it's securing a public cloud or securing personal computers at home, most people want to do what's right. But if the process of enabling security is overly difficult— if it requires too many steps, takes too much time, or is impossible to understand—people tend to procrastinate, and that can lead to gaps or weakness in their security posture. If security is simple enough and features are built directly into processes whenever possible, people will do the right thing. I'm beyond thrilled to be the new CSO for Oracle Cloud Infrastructure. Moving forward, I'll always be asking questions like these: How do we continue to make it easier to scan code for typical coding errors? Are we constantly integrating security features into day-to-day standups and weekly sprints in keeping with Oracle tradition? How do we consistently reinforce and uphold Oracle's long held commitment to highly secure systems as defined by our core security pillars? I'm very much looking forward to the challenge. After serving as Chief Information Security Officer (CISO) at companies like PricewaterhouseCoopers, Google, and startup Jet.com, where I was also a cloud customer, it's a challenge I'm ready to meet. Why I Chose Oracle Cloud Infrastructure Early in my career, I never thought I'd end up working at Oracle. Back then, I didn't know much about the company other than the fact that the world's largest enterprises depended on its relational database technologies. The first thing I noticed when I met the Oracle team is that the atmosphere in the Oracle Cloud Infrastructure organization is much like that of a startup—but a startup that's backed by resources that only a large, successful software company can provide. Things move fast in the Oracle Cloud Infrastructure team. Innovation and new ideas aren't just encouraged, they're mandatory. I became absolutely certain I wanted the position when I realized that Oracle Cloud Infrastructure exceeded my two main criteria for selecting a company to work at. First, I knew the job would be fascinating—that I would have the opportunity to solve complex problems that others hadn’t solved before. I like the sound of that. Second, it was clear that I was going to enjoy working with the Oracle Cloud Infrastructure team. Everyone here has been amazing. I also liked the fact that Oracle had taken on the colossal challenge of entering a crowded market and building a public cloud from scratch. And it's not just any cloud. It's a cloud designed for large and small enterprises that truly care about security features. It's for government agencies and other organizations that need to run highly secure workloads. Oracle is doing cloud better than the competition, and I'm proud to be part of the team that's making it happen. I've learned a lot in the short time since I joined the team. Oracle's devotion to enabling security features is evident in every corner of the organization. It's evident in the design of Oracle's enterprise resource planning, human capital management, and other business applications. 
And it's evident in the architecture of Oracle Cloud Infrastructure, where database systems are deployed into a virtual cloud network by default. This allows a high level of security and privacy and gives users control over the networking environment.

Closing Thoughts from a Former Cloud Customer

One other thing will significantly inform my approach as CSO for Oracle Cloud Infrastructure: the time I spent using a competing cloud when I was CISO at another company. Cloud providers offer access to certain systems via user interfaces (UIs) and application program interfaces (APIs). As a cloud customer, I found that some of those UIs and APIs didn't adequately enable security teams to perform anomaly detection, incident response, and forensics. As CSO, I will ensure that my team upholds Oracle's commitment to customers having access to the right systems. A great example of this is Oracle's bare metal offerings, where customers can directly access hardware, memory, storage, and other systems with no need for virtualization. As a CSO, I have strong demands that must be met before I'll allow sensitive data to be stored in our cloud. As a former cloud user, I can put myself in the customers' place and understand the true impact of our security decisions. I'm excited to use those skills and experiences as my team builds the security roadmap and the future of Oracle Cloud Infrastructure.

"Make it easy for people to do the right thing, and they tend to." Those words were first spoken to me by a very wise and talented Chief Information Officer (CIO) who mentored me early in my...

Product News

Bring Your Own Custom Image in Paravirtualized Mode for Improved Performance

We are excited to announce the availability of a new way to move and improve existing workloads with Oracle Cloud Infrastructure. You can now import a range of new and legacy operating systems using paravirtualized hardware mode. VMs using paravirtualized devices provide much faster performance compared to running in emulated mode, with at least six times faster disk I/O performance. To get started, you simply export VMs from your existing virtualization environment and import them directly into Oracle Cloud Infrastructure as custom images. You can import images in either QCOW2 or VMDK format. The following screenshot shows the Import Image dialog box, where you can choose to import your image in paravirtualized mode.

After the image is imported successfully, you can launch new VM instances with this image to run your workloads. Paravirtualized mode is available on X5 and X7 VM instances running a Linux OS that includes the KVM virtio drivers. Linux kernel versions 3.4 and higher include the drivers by default. This includes the following Linux OSs: Oracle Linux 6.9 and later, Red Hat Enterprise Linux 7.0 and later, CentOS 7.0 and later, and Ubuntu 14.04 and later. We recommend using emulated mode to import older Linux OS images and Windows OS images.

If your image supports paravirtualized drivers, it is easy to convert your existing emulated-mode instances into paravirtualized instances: create a custom image of your instance, export it to Object Storage, and re-import it in paravirtualized mode. Bring your own custom image in paravirtualized mode is offered in all regions at no extra cost. Now you have another way to bring existing workloads to Oracle Cloud Infrastructure, with improved performance!

Bringing existing OS images:

- [NEW] Bring your own custom image to Oracle Cloud Infrastructure VMs using paravirtualized mode: Import existing Linux OS images in VMDK or QCOW2 format and run them in paravirtualized-mode VMs for improved performance. For details, see Bring Your Own Custom Image for Paravirtualized Mode Virtual Machines.
- Bring your own custom image to Oracle Cloud Infrastructure VMs using emulation mode: Import existing Linux OS images in VMDK or QCOW2 format and run them in emulation-mode VMs. For details, see Bring Your Own Custom Image for Emulation Mode Virtual Machines.
- Move an entire virtualized workload to Oracle Cloud Infrastructure by using your existing hypervisor, management tools, and processes. Images of older OSs, such as Ubuntu 6.x, Red Hat Enterprise Linux 3.x, or CentOS 5.4, can run under KVM on Oracle Cloud Infrastructure bare metal instances. For detailed instructions, see Bring Your Own KVM and Bring Your Own Nested KVM on VM Shapes.
- Bring Your Own Oracle VM: For details, see Oracle VM on Oracle Cloud Infrastructure.

You can also build new OS images:

- Oracle Cloud Infrastructure published OS images: Oracle provides prebuilt images for Oracle Linux, Microsoft Windows, Ubuntu, and CentOS. For details, see the complete list of Oracle-provided images.
- Red Hat Enterprise Linux 7.4 on Oracle Cloud Infrastructure bare metal and VM instances: You can generate a Red Hat Enterprise Linux 7.4 image for bare metal and VM instances by using a Terraform template available from the Terraform provider.

To learn more about bringing your workload to Oracle Cloud Infrastructure, including custom images, see Bring Your Own Image.
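If you script your imports, the console flow above maps to a single CLI call. A sketch, assuming the exported image is already in an Object Storage bucket; every name and OCID below is a placeholder, and the flags should be checked against oci compute image import from-object --help:

  # Import a QCOW2 image from Object Storage as a paravirtualized custom image.
  oci compute image import from-object \
    --compartment-id ocid1.compartment.oc1..exampleuniqueid \
    --namespace my-namespace \
    --bucket-name imported-images \
    --name exported-vm.qcow2 \
    --display-name my-paravirtualized-image \
    --source-image-type QCOW2 \
    --launch-mode PARAVIRTUALIZED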

Customer Stories

Why you need to bet on Oracle Cloud Infrastructure as THE Cloud for your HPC needs

You’d imagine, with the growth of the public cloud, that the majority of HPC workloads and applications would have transitioned to the cloud; however, almost all enterprise HPC workloads are still running in on-premises data centers. This means millions of mission-critical use cases, such as engineering crash simulations, cancer research, visual effects, and new cutting-edge workloads such as deep learning in Artificial Intelligence (AI), are still constrained by on-premises environments.

What’s stopping these HPC workloads from moving to the cloud? Simply, bad or incomplete cloud infrastructure solutions: inconsistent performance, no flexibility, high costs, and no integration. If cloud infrastructure were as good as it needed to be, all these workloads would already be in the cloud. But they’re not. There is still clearly a lot of innovation to be done to move entire HPC and AI workloads and applications to the cloud.

Enterprise HPC workloads have specialized needs, and traditional cloud providers don’t support them. If you want to run the most demanding HPC, AI, or database workloads, you need clusters of servers working as a single piece of infrastructure. Most cloud providers see this as a hard problem. Oracle solved these challenges on-premises 10 years ago with Exadata. What made Exadata so good? We built a clustered network, connected high-speed compute and storage, and wrote software to optimize it end-to-end for performance and security. Today, we’re going to solve this problem for customers!

First, we’re starting by announcing a brand-new capability called “Clustered Networking”. Clusters seem like an old idea, but everyone still runs them on-premises for their tough workloads: HPC clusters, AI research GPU clusters, simulation clusters, and so on. With Oracle Cloud, customers no longer need expensive, specialized networking gear on-premises. Customers can now get single-digit microsecond latency and 100 Gbps of bandwidth from the first and only public cloud provider with a bare metal RDMA capability. You can now migrate workloads into Oracle Cloud with better performance than on-premises or any other cloud provider. None of our competitors offer anything close. A cloud provider like Microsoft Azure offers a more expensive and niche solution with their H-Series instances. You don’t have to compromise anymore!

As part of the Clustered Networking capability, we are announcing a new set of HPC instances, available in preview today in our London (UK) and Ashburn (US) regions, with expansion into other regions in the future. These new HPC instances are powered by Intel® Xeon® processors with a 3.7 GHz all-core frequency. Additionally, to support local data checkpointing for MPI workloads or local file access for cutting-edge deep learning workloads, these instances also contain local NVMe SSD storage for predictable, high-performance I/O. We’ve also worked with Mellanox to deliver 100G RDMA capability with ultra-low latency for MPI workloads, supporting all market-leading MPI frameworks, including Intel MPI, Open MPI, and Platform MPI. This is truly ground-breaking innovation that no other cloud provider has been able to deliver at this scale.

“As organizations look to ensure they stay ahead of the competition, they are looking for more efficient services to enable higher performing workloads. This requires fast data communication between CPUs, GPUs and storage, in the cloud,” said Michael Kagan, CTO, Mellanox Technologies.
“Over the past 10 years we have provided advanced RDMA enabled networking solutions to Oracle for a variety of its products and are pleased to extend this to Oracle Cloud Infrastructure to help maximize performance and efficiency in the cloud.”

Finally, we’re excited to offer these new instances in the cloud at a leading on-demand cost of $0.075 (7.5 cents) per core hour. You no longer need to spend hundreds of millions of dollars on purpose-built supercomputers like Cray when you can have on-demand HPC clusters in Oracle Cloud Infrastructure for a couple of dollars an hour!

Further Innovation and Commitment to Artificial Intelligence

If you’re a data scientist or an AI developer, we’ve got great news. You will be able to use our RDMA clustered network along with new GPU instances based on the HGX-2 architecture, providing over 1 petaflop of performance! With these new instances, Oracle Cloud becomes the first cloud provider with 32 GB Tesla Volta GPUs along with the new NVSwitch-based architecture. We’ve plugged these GPUs into our clustered network as well, so customers can launch GPUs with a click and enable workloads that use RDMA across thousands of GPUs! These new instances will be available in 2019 in our major regions globally.

Our HPC ISV Ecosystem

At the recent Altair Global Conference, we also announced a new collaboration with Altair to offer HyperWorks CFD Unlimited, a ground-breaking engineering simulation service on Oracle Cloud Infrastructure. This new service offers computational fluid dynamics (CFD) solvers as a service on Oracle. Advanced CFD solvers such as Altair ultraFluidX™ and Altair nanoFluidX™ are optimized on Oracle to provide overnight simulation results for the most complex cases on a single server. You can find more information about this service at https://www.altair.com/oracle.

“We are excited to expand our relationship with Oracle,” said Sam Mahalingam, Chief Technical Officer for Enterprise Solutions at Altair. “We find that access to GPU compute resources can be challenging for our customers. The integration with Oracle’s cloud platform addresses this challenge, and provides customers the ability to use GPU-based solvers in the cloud for accelerated performance without the need to purchase expensive hardware. Ultimately this leads to improved productivity, optimized resource utilization, and faster time to market.”

Come See Us at Supercomputing Conference 2018

We’re extremely proud and excited to be showcasing these new capabilities this week at the Supercomputing Conference in Dallas, with our partners and customers in full force. Come see us at booth #2806 to talk to our engineering and product teams, get free credits, and see hands-on demos. Some of the other activities you should check out:

- The Oracle + Altair Happy Hour on Tuesday at 5pm
- HPC instance demos in Intel’s booth #3223, and a tech talk on 15th November at 12.00pm
- AMD instance demos at the AMD booth #2824, including a presentation on 13th November at 11am
- The Altair HyperWorks CFD Unlimited presentation at booth #2833 at 11.30am on Tuesday 13th November
- NVIDIA’s Theater (Booth #2417, Hall D) on Wednesday 14th November at 3pm

HPC and AI at your fingertips…

Most cloud infrastructure is just a set of unrelated commodity parts. It’s the enterprise’s responsibility to figure out which parts will work, and which portions of the application need to be rebuilt for the cloud.
Oracle Cloud enables customers to run new AI workloads next to HPC workloads, next to database workloads, next to traditional applications. We’ve figured out how to run the hardest pieces of your applications, so you don’t have to. We provide the performance that enterprises need, with the guarantees you require. We’re enabling HPC in a way that no other cloud can match. And we’re charging less for it. See you in Dallas!
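A common first test of a new RDMA cluster is an MPI ping-pong benchmark. A sketch using Open MPI and the OSU micro-benchmarks; the hostnames are placeholders, and any of the MPI distributions named above works similarly:

  # Measure point-to-point latency between two nodes in the cluster.
  mpirun -np 2 --host node1,node2 ./osu_latency

  # Measure point-to-point bandwidth the same way.
  mpirun -np 2 --host node1,node2 ./osu_bw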

Events

On-Premises HPC Performance with Oracle Cloud Infrastructure

In conjunction with Supercomputing 2018, we are announcing the availability of one of the fastest high performance cloud computing offerings. Oracle Cloud Infrastructure now offers the BM.HPC2.36 shape, which provides the exact same HPC performance that you see on-premises. This new shape strengthens the end-to-end HPC experience on Oracle Cloud Infrastructure; read more about how Oracle is addressing HPC here.

Migrating HPC workloads to the cloud involves surmounting several challenges, not the least of which is ensuring that you have the same levels of performance, security, and control as your on-premises infrastructure. With this new bare metal compute instance, it is possible. Built on Intel's 6154 processor, this new bare metal compute instance offers an all-core turbo clock speed of 3.7 GHz, and because it's bare metal, there’s no virtualization performance penalty. In addition to the 6.7 TB local NVMe drive and the 384 GB of dual-rank memory, Oracle Cloud Infrastructure's new HPC shape provides the world's first public cloud bare metal RDMA network, enabled by a Mellanox 100 Gbps network card, in addition to the 25 Gbps network card for standard traffic. No virtualization means no jitter and no bulky, unnecessary cloud monitoring agents. Run any MPI or HPC workload in the cloud with performance similar to your on-premises infrastructure. We're going to share a lot of data in this blog, and we encourage you to take a free HPC test drive to validate it for yourself. You can deploy a 1,000-core cluster for a few hours for free, in our Ashburn, VA datacenter or our London datacenter.

Raw Performance

First, let's look at raw performance. The BM.HPC2.36 shape has two 18-core Intel Xeon Gold 6154 processors. Intel integrates world-class compute with powerful fabric, memory, storage, and acceleration. You can move your research and innovation forward faster to solve some of the world’s most complex challenges. Working with leading HPC hardware providers like Intel and Mellanox ensures that Oracle Cloud Infrastructure customers get access to on-premises levels of performance with cloud flexibility. HPC applications perform the same on Oracle Cloud Infrastructure as they do on-premises, for both large and small models. A common benchmark for compute-intensive workloads comes from the Standard Performance Evaluation Corporation (SPEC). SPEC has designed test suites to provide a comparative measure of compute-intensive performance across the widest practical range of hardware, using workloads developed from real user applications. Publicly available results for some on-premises clusters compared to BM.HPC2.36 are shown below. Typically, cloud vendors are hesitant to share their numbers because virtualized environments do not perform nearly as well as on-premises environments; we are happy to share our results.

Scaling

Oracle Cloud Infrastructure scales HPC workloads efficiently. Some cloud vendors have typically expected you to pay for poor single-node performance and to overlook their lack of scaling. We invite you to bring your workload to OCI and let us show you that you can run your MPI, compiler, and application workloads on bare metal. HPC applications do not handle virtualization well; on-premises HPC vendors have shown the significant negative impact that virtualization has on HPC workloads. The performance hit you take when running on some cloud vendors grows exponentially when you run an HPC cluster.
Cloud monitoring agents run frequently and are not synchronized across a cloud cluster; with bare metal, you have complete control over the servers in your cluster, and this makes a huge performance difference. Running RDMA in a virtualized environment undercuts the value of RDMA: to get the best performance from RDMA, it must run on bare metal, as illustrated in the following graph.

When running simulation applications across an HPC cluster, the ability to scale efficiently at high node counts is important. It guarantees predictability of the simulation and increases the return on investment for expensive application licenses. In a CFD simulation, BM.HPC2.36 scales at over 100% efficiency from 450,000 cells per core down to below 6,000 cells per core, consistently, matching the performance that you see with on-premises clusters.

Price

With true HPC performance, all of the cost and flexibility benefits of the cloud can now be applied to HPC workloads. Our customers are seeing a significant advantage in terms of simulation time, cost per job, and capacity. Additionally, in the cloud, the concept of "one user, one cluster" means no queue times. Many HPC customers are able to attach a per-job cost to their jobs. It is very easy to optimize per-job cost in the cloud; in fact, if the job utilizes RDMA, the cost of the job remains the same independent of the speed at which it completes. When a customer is able to specify the number of jobs that they burst per month or per year, the value of high performance cloud computing becomes clear. The table below shows that, even with conservative numbers for an on-premises HPC cluster, customers can save money in the short and long term by running in the cloud.

Oracle Cloud Infrastructure enables ad hoc, on-demand HPC clusters. This means that each user can spin up a cluster as needed. There is no need to support hundreds of users and a massive file server for your HPC cluster. You can size your HPC cluster specifically for the workload and stop paying for it when you are done with your job. In addition to the performance, scalability, and price performance, Oracle Cloud Infrastructure provides an end-to-end HPC experience with GPUs, Intel and AMD bare metal processors, high performance block storage, and a full POSIX File Storage Service.

Conclusion

You can now run any HPC workload on Oracle Cloud with the same predictable performance as your on-premises HPC infrastructure. With fast Intel processors and RDMA technology, jobs scale efficiently. At 7.5 cents per core hour, Oracle Cloud Infrastructure's HPC offering provides among the most FLOPS per penny in the cloud. Navigate to https://cloud.oracle.com/iaas/hpc to test drive an HPC cluster for yourself or sign up for our free HPC benchmarking service. Come talk with us about HPC on Oracle Cloud Infrastructure at SC18 in Dallas next week in booth #2806.
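To make the per-job cost idea concrete, here is an illustrative calculation at the 7.5-cents-per-core-hour rate; the job size and duration are made-up numbers:

  # A 1,000-core cluster running a 4-hour job:
  # 1,000 cores x 4 hours x $0.075 per core hour = $300 for the job.
  echo "1000 * 4 * 0.075" | bc

And because the cluster is torn down when the job finishes, that is the whole cost; there is no idle capacity to amortize.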


Oracle Cloud Infrastructure

End of Sale of First-Generation X5 Compute Instances

One of the great things about using cloud infrastructure is that you don't have to upgrade hardware. Over a year ago, Oracle Cloud Infrastructure introduced new X7 options for its Compute service. Today, we are announcing the end of sale for older X5 options to manage the capacity of this older hardware. As of November 9, 2018, we are restricting the following X5 options (SKUs):

- Oracle Cloud Infrastructure – Compute – Bare Metal Standard – X5 (BM.Standard1.36)
- Oracle Cloud Infrastructure – Compute – Bare Metal Dense I/O – X5 (BM.DenseIO1.36)
- Oracle Cloud Infrastructure – Compute – Virtual Machine Standard – X5 (VM.Standard1.x) – all shapes: 1, 2, 4, 8, and 16 OCPUs
- Oracle Cloud Infrastructure – Compute – Virtual Machine Dense I/O – X5 (VM.DenseIO1.x) – all shapes: 4, 8, and 16 OCPUs
- Oracle Cloud Infrastructure – Database – Virtual Machine – X5 Standard Capacity – Bring Your Own License (BYOL)
- Oracle Cloud Infrastructure – Database – Bare Metal – X5 – Dense I/O Capacity – Bring Your Own License (BYOL)
- Oracle Cloud Infrastructure – Database – Virtual Machine – X5 Standard Capacity – License Included (non-BYOL): Database Standard Edition, Database Enterprise Edition, Database Enterprise Edition High Performance, and Database Enterprise Edition Extreme Performance
- Oracle Cloud Infrastructure – Database – Bare Metal – X5 – Dense I/O Capacity – License Included (non-BYOL): Database Standard Edition, Database Enterprise Edition, Database Enterprise Edition High Performance, and Database Enterprise Edition Extreme Performance

For current monthly universal credit customers, we will continue to support the use of these options in the three regions in which they were offered: Phoenix (PHX), Ashburn (IAD), and Frankfurt (FRA). However, the availability of these options is limited, and we recommend that you use X7 shapes or AMD shapes as your deployment grows. Starting November 9, 2018, new monthly universal credit customers will have access only to the X7 and AMD shapes. We will set the service limits for these restricted options to zero for Pay-As-You-Go customers, unless they requested a limit increase before November 9. Pay-As-You-Go customers will be able to launch only new X7 or AMD instances, although any X5 instances that are currently in use will continue to work. The following table lists the comparable, recommended X7 or AMD shape for each X5 shape. These options generally provide increased resources compared to the older instances. X7 compute instance shapes are priced at the same cost per OCPU per hour as the X5 shapes, and AMD EPYC processor-based instances cost less.
Compute Recommendations

Bare Metal Standard – X5
- Current shape: BM.Standard1.36 (36 OCPUs, 512 GB memory, 10 Gbps network bandwidth)
- Recommended alternatives: X7 BM.Standard2.52 (52 OCPUs, 768 GB memory, 2x25 Gbps); AMD BM.Standard.E2.64 (64 OCPUs, 512 GB memory, 2x25 Gbps); X7 VM.Standard2.24 (24 OCPUs, 320 GB memory, 24.6 Gbps)

Bare Metal Dense I/O – X5
- Current shape: BM.DenseIO1.36 (36 OCPUs, 512 GB memory, 28.8 TB NVMe SSD local disk, 10 Gbps network bandwidth)
- Recommended alternatives: X7 BM.DenseIO2.52 (52 OCPUs, 768 GB memory, 51.2 TB NVMe SSD, 2x25 Gbps); X7 VM.DenseIO2.24 (24 OCPUs, 320 GB memory, 25.6 TB NVMe SSD, 24.6 Gbps)

Virtual Machine Standard – X5
- VM.Standard1.1 (1 OCPU, 7 GB memory, up to 600 Mbps) -> X7 VM.Standard2.1 (1 OCPU, 15 GB memory, 1 Gbps) or AMD VM.Standard.E2.1 (1 OCPU, 8 GB memory, 0.7 Gbps)
- VM.Standard1.2 (2 OCPUs, 14 GB memory, up to 1.2 Gbps) -> X7 VM.Standard2.2 (2 OCPUs, 30 GB memory, 2 Gbps) or AMD VM.Standard.E2.2 (2 OCPUs, 16 GB memory, 1.4 Gbps)
- VM.Standard1.4 (4 OCPUs, 28 GB memory, 1.2 Gbps) -> X7 VM.Standard2.4 (4 OCPUs, 60 GB memory, 4.1 Gbps) or AMD VM.Standard.E2.4 (4 OCPUs, 32 GB memory, 2.8 Gbps)
- VM.Standard1.8 (8 OCPUs, 56 GB memory, 2.4 Gbps) -> X7 VM.Standard2.8 (8 OCPUs, 120 GB memory, 8.2 Gbps) or AMD VM.Standard.E2.8 (8 OCPUs, 64 GB memory, 5.6 Gbps)
- VM.Standard1.16 (16 OCPUs, 112 GB memory, 4.8 Gbps) -> X7 VM.Standard2.16 (16 OCPUs, 240 GB memory, 16.4 Gbps)

Virtual Machine Dense I/O – X5
- VM.DenseIO1.4 (4 OCPUs, 60 GB memory, 3.2 TB NVMe SSD, 1.2 Gbps) -> X7 VM.DenseIO2.8 (8 OCPUs, 120 GB memory, 6.4 TB NVMe SSD, 8.2 Gbps)
- VM.DenseIO1.8 (8 OCPUs, 120 GB memory, 6.4 TB NVMe SSD, 2.4 Gbps) -> X7 VM.DenseIO2.8 (8 OCPUs, 120 GB memory, 6.4 TB NVMe SSD, 8.2 Gbps)
- VM.DenseIO1.16 (16 OCPUs, 240 GB memory, 12.8 TB NVMe SSD, 4.8 Gbps) -> X7 VM.DenseIO2.16 (16 OCPUs, 240 GB memory, 12.8 TB NVMe SSD, 16.4 Gbps)

Database Recommendations

For customers using the BM.DenseIO1.36 shape (BYOL or non-BYOL), we recommend upgrading to the X7 bare metal BM.DenseIO2.52 shape. It provides newer Intel Skylake processors, a higher additional OCPU count, and more memory (768 GB of RAM). Additionally, the BM.DenseIO2.52 shape offers higher network bandwidth: 2x25 Gbps network connections versus a single 10 Gbps network connection for bare metal X5 Dense I/O. For customers using the VM.Standard1.N virtual machine shapes, we recommend upgrading to X7 Standard virtual machine instances with newer Intel Skylake processors and higher network bandwidth.
Bare Metal – X5 Dense I/O – Standard Edition
- Current: Bare Metal X5 Dense I/O (2 OCPUs enabled, up to 6 additional OCPUs purchased separately; 512 GB memory; 28.8 TB NVMe SSD raw storage, ~9.4 TB with two-way mirroring, ~5.4 TB with three-way mirroring; 10 Gbps network bandwidth)
- Recommended: Bare Metal X7 Dense I/O (2 OCPUs enabled, up to 6 additional OCPUs purchased separately; 768 GB memory; 51.2 TB NVMe SSD raw storage, ~16 TB with two-way mirroring, ~9 TB with three-way mirroring; 2x25 Gbps network bandwidth)

Bare Metal – X5 Dense I/O – Enterprise Editions (Enterprise Edition, High Performance, Extreme Performance)
- Current: Bare Metal X5 Dense I/O (2 OCPUs enabled, up to 34 additional OCPUs purchased separately; 512 GB memory; 28.8 TB NVMe SSD raw storage, ~9.4 TB with two-way mirroring, ~5.4 TB with three-way mirroring; 10 Gbps network bandwidth)
- Recommended: Bare Metal X7 Dense I/O (2 OCPUs enabled, up to 50 additional OCPUs purchased separately; 768 GB memory; 51.2 TB NVMe SSD raw storage, ~16 TB with two-way mirroring, ~9 TB with three-way mirroring; 2x25 Gbps network bandwidth)

Bare Metal – X5 Dense I/O – BYOL
- Current: Bare Metal X5 Dense I/O (2 OCPUs enabled, up to 6 additional OCPUs purchased separately for Standard Edition or up to 34 for Enterprise Edition; 512 GB memory; 28.8 TB NVMe SSD raw storage, ~9.4 TB with two-way mirroring, ~5.4 TB with three-way mirroring; 10 Gbps network bandwidth)
- Recommended: Bare Metal X7 Dense I/O (2 OCPUs enabled, up to 6 additional OCPUs purchased separately for Standard Edition or up to 50 for Enterprise Edition; 768 GB memory; 51.2 TB NVMe SSD raw storage, ~16 TB with two-way mirroring, ~9 TB with three-way mirroring; 2x25 Gbps network bandwidth)

Virtual Machine Standard – X5 – all editions (Standard, Enterprise, High Performance, Extreme Performance, BYOL)
- VM.Standard1.1 (1 OCPU, 7 GB memory, up to 600 Mbps) -> X7 VM.Standard2.1 (1 OCPU, 15 GB memory, 1 Gbps)
- VM.Standard1.2 (2 OCPUs, 14 GB memory, up to 1.2 Gbps) -> X7 VM.Standard2.2 (2 OCPUs, 30 GB memory, 2 Gbps)
- VM.Standard1.4 (4 OCPUs, 28 GB memory, 1.2 Gbps) -> X7 VM.Standard2.4 (4 OCPUs, 60 GB memory, 4.1 Gbps)
- VM.Standard1.8 (8 OCPUs, 56 GB memory, 2.4 Gbps) -> X7 VM.Standard2.8 (8 OCPUs, 120 GB memory, 8.2 Gbps)
- VM.Standard1.16 (16 OCPUs, 112 GB memory, 4.8 Gbps) -> X7 VM.Standard2.16 (16 OCPUs, 240 GB memory, 16.4 Gbps)

For more information, see the Database Shapes Details in the service documentation, or contact your Oracle Cloud Infrastructure CSM.
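When you are ready to move, launching a replacement instance on one of the recommended shapes is a single CLI call. A minimal sketch: the OCIDs, availability domain, and names are placeholders, and other flags your setup may require (such as SSH keys or metadata) are omitted here.

   # Launch an X7 VM.Standard2.1 as the recommended replacement for VM.Standard1.1
   oci compute instance launch \
     --compartment-id <compartment_OCID> \
     --availability-domain <AD_name> \
     --shape VM.Standard2.1 \
     --subnet-id <subnet_OCID> \
     --image-id <image_OCID> \
     --display-name x7-replacement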


Events

Bare Metal vs. Virtual Machines: Which is Best for HPC in the Cloud?

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. Companies that want to run high performance computing (HPC) workloads in the cloud can get a significant performance boost by choosing bare metal servers over virtual machines (VMs), and nobody does bare metal like Oracle Cloud Infrastructure. I recently sat down with Karan Batta, who manages HPC for Oracle Cloud Infrastructure, to discuss several HPC topics, including the key differences between running HPC workloads on bare metal and running them on VMs. We also talk about Oracle's approach to bare metal cloud and how it differs significantly from the competition. Listen to our conversation here and read a condensed version below.

You often speak about the concept of bare metal cloud. Can you explain why HPC workloads are some of the best types of workloads to run in a bare metal cloud environment?

Karan Batta: Certainly. But first, let's take a step back. A lot of cloud providers have tried bare metal, but they haven't done it the way we have. With them, bare metal cloud always comes with an "if" or a "but"; there is always a catch. They say things like: "You want bare metal? Great. Tell us how many servers you need. We'll go buy them and provision them manually, and you can come back in three months." For us, bare metal is all about providing the same consistent performance as your on-premises cluster or on-premises data center, but with the added benefits and flexibility of the cloud. That's really what we've enabled here. Our bare metal offering is a fully multi-tenant bare metal environment where any customer can come in and spin up an instance that looks just like any other instance. It just so happens that there is no Oracle software running on it, there is no hypervisor running on it, and you get better performance for what you pay. This is really what it means to be running on a bare metal cloud. The reason HPC workloads are well suited for bare metal is the great performance boost that bare metal provides.

You mentioned that there is no hypervisor. But Oracle Cloud Infrastructure offers virtual machines (VMs) as well, correct?

Batta: Yes, definitely. We were initially called Bare Metal Cloud, but we've rebranded as Oracle Cloud Infrastructure because we offer VMs as well. So, if you want to do some test/dev workloads on a VM and then move them to bare metal, you can absolutely do that.

Why would an organization avoid running HPC workloads in cloud-based VMs?

Batta: When you use a hypervisor, you're essentially looking at anywhere from a 10-15 percent performance tax. That's a rough idea of how much performance you're going to lose because you're adding overhead on top of your server. If I'm already paying $3, $4, or $5 per hour for an instance and losing 10-15 percent of performance, that kind of defeats the purpose of running HPC in the cloud. We've tried to make sure that when we talk about HPC, we mean that we're going to match your on-premises performance and we're going to give you an amazing price for it.

You mentioned that bare metal cloud offers a 10-15 percent performance boost over virtualized cloud environments. What does that mean for our customers?

Batta: What it means is that customers can reduce the time that workloads take from days to hours to minutes.
Some people might say a 10-15 percent performance boost is not a big deal. But for anyone who runs resource-intensive HPC workloads, it is. For them, 10 percent could translate to hours. If you're running, for example, a machine learning or artificial intelligence job, or a distributed deep learning training job for image recognition or voice translation, those jobs can take 16-20 hours. In some of the bigger cases, like search engine optimization, those jobs can take weeks to run. So, a 10 percent performance boost there could mean that you're reducing the job by hours, if not days. So, I think there is a huge difference between bare metal and VMs.

Suppose an enterprise wants to run a combined HPC workload, with some parts on bare metal cloud and some in a virtualized environment simultaneously. Is it possible to run that and scale up and down?

Batta: Yes, you could do that today on Oracle Cloud Infrastructure. And the great thing is you can scale this up, down, left, right—you name it, we can do it. With Oracle Cloud Infrastructure you get performance and flexibility side by side. If you're running an HPC job and you just want to quickly test it, you can spin up a couple of VMs with one core or even a fraction of a core. Then you can move to a full bare metal instance with something like 52 physical cores—the largest bare metal instance you can find on any cloud—and run your production workloads. The other thing bare metal provides is flexibility. Not only can you run HPC on our VMs, but you can also move your entire virtualized environment, and we will para-virtualize it on top of our bare metal nodes.

Come talk with Karan and the rest of the team about HPC on Oracle Cloud Infrastructure at SC18 in Dallas next week in booth #2806.


Copying Instances or Images Across Regions

Sometimes you need to move the data that is stored on a block volume between Oracle Cloud Infrastructure regions. In this case, you can use cross-region block volume backup copy. Unfortunately, you can't use this method with boot volumes. If you need to transfer your instance or image between regions, use one of the methods outlined in this post, depending on the configuration of your instance. In this post, I assume that you're familiar with the Oracle Cloud Infrastructure CLI and have it set up. If not, use our quick start guide to help you do that.

Custom Image for Boot Volumes Smaller Than 50 GB

By default, every Linux instance launched in Oracle Cloud Infrastructure is created with a 50-GB boot volume. This value can be altered during instance creation, but that might lead to some limitations, which I'll explain in the next section. This scenario assumes that the size of your boot volume is 50 GB or less.

1. Create a custom image of the instance. The instance reboots during the process, so ensure that it's not running a production workload.

   oci compute image create --display-name <display_name> --instance-id <instance_ID> --compartment-id <compartment_ID> --wait-for-state AVAILABLE

2. Export the custom image to an Object Storage bucket.

   oci compute image export to-object --image-id <custom_image_ID> --namespace <tenancy_namespace> --bucket-name <object_storage_bucket> --name <object_name>

3. Wait for the process to complete. You can monitor the lifecycle state with the following command (or script the wait, as shown in the sketch after this procedure):

   oci compute image get --image-id <image_ID>

   When the lifecycle state changes from EXPORTING to AVAILABLE, the process is complete.

4. Copy the object to the new region. Ensure that you have the relevant permissions and policies applied, as outlined at https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/copyingobjects.htm. Cross-region copy lets you asynchronously copy objects to other buckets in the same region, to buckets in other regions, or to buckets in other tenancies within the same region or in other regions. When copying the objects, you can keep the same name or modify the object name. The object copied to the destination bucket is considered a new object with unique ETag values and MD5 hashes. Before you start the operation, ensure that the destination bucket exists (or create one by using the oci os bucket create ... command); otherwise, the operation fails.

   oci os object copy --bucket-name <bucket_name> --source-object-name <object_name> --destination-region <destination_region_name> --destination-bucket <destination_bucket_name>

5. Create a pre-authenticated request for the object.

   oci --region <new_region_name> os preauth-request create --bucket-name <bucket_name> --name <object_name> --access-type ObjectRead --time-expires <date_time>

   The URL is objectstorage.<region>.oraclecloud.com/<preauth_path>.

6. Import the custom image into the new region by using the Object Storage URL.

   oci --region <region_name> compute image import from-object-uri --uri <URI> --compartment-id <compartment> --display-name <display_name> --launch-mode NATIVE --source-image-type QCOW2

After the image is imported, you should see it in the custom images list and be able to launch an instance by using the Console or CLI.
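As promised in step 3, you can script the wait on the export instead of rerunning the get command by hand. This is a minimal sketch that relies on the CLI's --query (JMESPath) and --raw-output options; the image OCID and the 60-second polling interval are placeholders to adapt.

   IMAGE_ID=<custom_image_ID>
   # Poll the image until the export finishes and the state returns to AVAILABLE
   while true; do
     STATE=$(oci compute image get --image-id "$IMAGE_ID" \
       --query 'data."lifecycle-state"' --raw-output)
     echo "Current state: $STATE"
     [ "$STATE" = "AVAILABLE" ] && break
     sleep 60
   done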
Custom Image for Boot Volumes Larger Than 50 GB

If the image is created from an instance with a boot volume larger than 50 GB, the process might fail because of a limitation on the pre-authenticated request object size. As a workaround, we clone the boot volume, use a disposable instance to create an image, and then upload and import the image. You can also use this method with a regular-sized image to avoid restarting the original instance. Before you start, ensure that you have the required permissions in the IAM policies, that you have an API key that you can apply on the remote machine, and that the Oracle Cloud Infrastructure CLI is installed locally.

1. Create a boot volume clone. This process is almost instantaneous.

   oci bv boot-volume create --source-boot-volume-id <boot_volume_ID> --display-name <new_boot_volume_display_name>

2. Create a block volume, in the same availability domain, that is big enough to store the temporary image.

   oci bv volume create --availability-domain <availability_domain> --size-in-gbs 1024

3. Launch a new Oracle Linux 7.5 instance in the same availability domain. This is your disposable instance.

4. Attach the block volume and the cloned boot volume.

   oci compute volume-attachment attach --instance-id <instance_ID> --volume-id <block_volume_ID> --type ISCSI
   oci compute volume-attachment attach --instance-id <instance_ID> --volume-id <cloned_boot_volume_ID> --type ISCSI

5. Use SSH to connect to the instance and install the required packages:

   sudo yum -y install qemu-img pv python-pip
   sudo pip install oci-cli

6. Configure the CLI on the remote system, following the procedure at https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm.

7. Configure the disks on the temporary system and format the partition.

   sudo oci-iscsi-config
   sudo parted -s -a optimal /dev/sdb mklabel gpt
   sudo parted -s -a optimal /dev/sdb mkpart primary 0% 100%
   sudo mkfs.xfs /dev/sdb1
   sudo mount /dev/sdb1 /media

8. Create an image from the boot volume. This process might take a long time, even hours, depending on the disk size.

   sudo qemu-img convert -p -S1M /dev/sdc -O qcow2 /media/image.qcow2

   The result is an image.qcow2 file that contains a QCOW2-formatted disk image of the boot volume, which you can upload to Object Storage and import into your new region.

9. Upload the file to the Object Storage bucket.

   oci --region <region_name> os object put -ns <namespace> -bn <bucket_name> --file /media/image.qcow2 --name image.qcow2

10. Import the image from the bucket.

   oci --region <region_name> compute image import from-object --namespace <ns> --bucket-name <bucket_name> --compartment-id <compartment> --display-name <display_name> --launch-mode NATIVE --source-image-type QCOW2

After the image is imported, you should see it in the custom images list and be able to launch an instance by using the Console or the CLI. Now you can terminate the temporary instance and delete the block volumes that you created during the process. I hope this helps!


Oracle Cloud Infrastructure

Enhanced Compute Instance Management on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure has released two new features that augment compute instance management: Instance Configurations and Instance Pools. While the cloud provides a lot of useful standards - standard OS images, standard shape configurations, and so on - there is still additional overhead in provisioning and attaching resources like volumes and VNICs. Provisioning at scale and managing instance setup has been difficult, until now. The Instance Configurations feature simplifies the provisioning of an instance and all of its required resources with a single API call. This simplification extends to Instance Pools, where an Instance Configuration is used to create a logical grouping of many identical compute instances that are automatically launched at scale.

What are Instance Configurations?

An Instance Configuration is a template that defines, as a single configuration entity, the set of required and optional parameters needed to create a compute instance on Oracle Cloud Infrastructure, including the OS image, shape, and resources such as block volumes attached to the instance. You can create an Instance Configuration from an existing running instance or construct a custom Instance Configuration via the CLI. When boot or data storage volumes do not already exist, these resources are automatically created for you when launching an instance. With a single action, you can launch an instance while we create the storage volumes, attach the VNICs, and stripe the set number of instances evenly across the desired availability domains (ADs) for you. This would normally require manual provisioning of each individual resource on the platform just to launch an instance.

Creating an Instance Configuration

Create an instance configuration from an existing running instance with the new Create Instance Configuration button. Select a compartment and give your configuration a name; all the metadata from the instance is then captured for you. On the left menu, go to Compute, select Instances, and then click Instance Details. To view the saved configuration, go to Compute, and then Instance Configuration.

How Instance Pools Work

Oracle Cloud Infrastructure has created a powerful new approach that launches and manages identical VM instances in a logical group called an Instance Pool. The pool automatically provisions a horizontally scalable set of VM instances. An Instance Pool uses an instance configuration template that contains all the settings for how you want an instance created, and it manages the launching of identical instances based on that template. The pool maintains your configured instance count and can be updated to scale on demand. The Instance Pool constantly monitors its own health state to ensure that all instances are in a running state. In the event of any instance failure, the pool automatically self-heals and takes corrective action to bring itself back to a healthy state.

Easily Create and Launch a New Instance Pool

Create an instance pool in less than 30 seconds:

1. Go to Compute, and select Instance Pools.
2. Click the Create Instance Pool button, and enter the number of instances you want for the pool.
3. Select the instance configuration that you created previously.
4. Select the availability domains for the desired resiliency. (Instances are distributed evenly across the selected ADs.)
5. Select the primary VNIC and subnet.

Provisioning the Instance Pool launches the configured instances.
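The same flow can be scripted with the CLI's compute-management commands. This is a rough sketch rather than a complete recipe: the JSON files, OCIDs, and names are placeholders, and the exact parameter set can vary by CLI version, so check oci compute-management instance-configuration create --help and oci compute-management instance-pool create --help before relying on it.

   # Create an instance configuration from a JSON template describing the
   # shape, image, and attachments (structure of instance_details.json omitted here)
   oci compute-management instance-configuration create \
     --compartment-id <compartment_OCID> \
     --display-name baseline-config \
     --instance-details file://instance_details.json

   # Launch a pool of 10 identical instances from that configuration,
   # placed across the ADs and subnets listed in placement.json
   oci compute-management instance-pool create \
     --compartment-id <compartment_OCID> \
     --instance-configuration-id <instance_configuration_OCID> \
     --placement-configurations file://placement.json \
     --size 10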
After the Instance Pool is running, you can perform power actions on the pool. The Edit button allows you to update the pool size with the number of instances (0-50). Stopping the pool stops all instances in the pool, Reboot restarts all instances, and Terminate destroys all the instances and the pool itself.

Common Use Cases

Instance Configurations:
- Clone an instance and save it to a configuration file.
- Create standardized baseline instance templates.
- Easily deploy instances from the CLI with a single configuration file.
- Automate the provisioning of many instances and their resources, and handle the attachments.

Instance Pools:
- Centrally manage a group of instance workloads that all share a consistent configuration.
- Scale out instances on demand by increasing the instance size of the pool.
- Update a large number of instances with a single instance configuration change.
- Maintain high availability and distribute instances across availability domains within a region.
- Scale up the VM size within a pool by updating the instance configuration with a larger shape.
- Enable automatic self-healing within the pool to maintain pool size and availability.
- Keep up with customer demand with large-scale support for hundreds of custom VM images.

Together, Instance Configurations and Instance Pools remove the complexity of deploying and managing hundreds of VMs on Oracle Cloud Infrastructure. There is no additional cost for Instance Configurations and Instance Pools; you pay only for the resources consumed by a launched VM instance.

Next Steps

Learn more about how to get started with Oracle Cloud Infrastructure Instance Configurations and Instance Pools in our Managing Compute Instances documentation.


Events

What is HPC in the Cloud? Exploring the Need for Speed

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. High Performance Computing (HPC) refers to the practice of aggregating computing power in a way that delivers much higher horsepower than traditional computers and servers. HPC is used to solve complex, performance-intensive problems, and organizations are increasingly moving HPC workloads to the cloud. HPC in the cloud is changing the economics of product development and research because it requires fewer prototypes, accelerates testing, and decreases time to market. I recently sat down with Karan Batta, who manages HPC for Oracle Cloud Infrastructure, to discuss how HPC in the cloud is changing the way that organizations, new and old, develop products and conduct cutting-edge scientific research. We talked about a variety of topics, including the key differences between legacy on-premises HPC workloads and newer HPC workloads that were born in the cloud. Listen to our conversation here and read a condensed version below.

Let's start with a basic definition. What is HPC and why is everyone talking about it?

Karan Batta: HPC stands for High Performance Computing, and people tend to bucket a lot of stuff into the HPC category. For example, artificial intelligence (AI) and machine learning (ML) are a bucket of HPC. And if you're doing anything beyond building a website—anything that is dynamic—it's generally going to be high performance. From a traditional perspective, HPC is very research-oriented, or scientifically oriented. It's also focused on product development. For example, think about engineers at a big automotive company making a new car. The likelihood is that the engineers will bucket all of that development—all of the crash testing analysis, all of the modeling of that car—into what's now called HPC. The reason the term HPC exists is because it's very specialized. You may need special networking gear, special compute gear, and high-performance storage, whereas less dynamic business and IT applications may not require that stuff.

Why should people care about HPC in the cloud?

Batta: People and businesses should care because it really is all about product development. It's about the value that manufacturers and other businesses provide to their customers. Many businesses now care about it because they've moved some of their IT into the cloud. And now they're actually moving stuff into the cloud that is more mission-critical for them—things like product development. For example, building a truck, building a car, building the next generation of DNA sequencing for cancer research, and things like that.

Legacy HPC workloads include things like risk analysis modeling and Monte Carlo simulation, and now there are newer kinds of HPC workloads like AI and deep learning. When it comes to doing actual computing, are they all the same, or are these older and newer workloads significantly different?

Batta: At the end of the day, they all use computers and servers and network and storage. The concepts from legacy workloads have transitioned into some of these modern cloud-native workloads like AI and ML. What this really means is that some of these performance-sensitive workloads like AI and deep learning were born in the cloud, when cloud was already taking off.
It just so happened that they could use legacy HPC primitives and performance to help accelerate those workloads. And then people started saying, "Okay, then why can't I move my legacy HPC workloads into the cloud, too?" So, at the end of the day, these workloads all use the same stuff. But I think that how they were born and how they made their way to the cloud is different.

What percentage of new HPC workloads coming into the cloud are legacy, and what percentage are newer workloads like AI and deep learning? Which type is easier to move to the cloud?

Batta: Most of the newer workloads like AI, ML, containers, and serverless were born in the cloud, so there are already ecosystems available to support them in the cloud. Rather than look at it percentage-wise, I would suggest thinking about it in terms of opportunity. Most HPC workloads that are in the cloud are in the research and product development phase. Cutting-edge startups are already doing that. But the big opportunity is going to be in legacy HPC workloads moving into the cloud. I'm talking about really big workloads—think about Pfizer, GE, and all these big monolithic companies that are running production HPC workloads on their on-premises clusters. These things have been running for 30 or 40 years, and they haven't changed.

Is it possible to run the newer HPC workloads in my old HPC environment if I already have it set up? Can companies that have invested heavily in on-premises HPC just stay on the same trajectory?

Batta: A lot of the latest HPC workloads, the more cutting-edge workloads, were born in the cloud. You can absolutely run those on old HPC hardware. But they're generally cloud-first, meaning that they have been integrated with graphics processing units (GPUs). Nvidia, for example, is doing a great job of making sure any new workloads that pop up are already hardware accelerated. In terms of general-purpose legacy workloads, a lot of that stuff is not GPU accelerated. If you think about crash testing, for example, that's still not completely prevalent on GPUs. Even though you could run it on GPUs if you wanted, there's still a long-term timeline for those applications to move over. So, yes, you can run new stuff on the old HPC hardware. But the likelihood is that those newer workloads have already been accelerated by other means, and so it becomes a bit of a wash.

In other words, these newer workloads are built cloud-native, so trying to run them on premises on legacy hardware is a bit like trying to put a square peg in a round hole. Is that correct?

Batta: Exactly. And you know, somebody may do that, because they've already invested in a big data center on premises and it makes sense. But I think over time this is going to be the case less and less.

Come talk with Karan and others about HPC on Oracle Cloud Infrastructure at SC18 in Dallas next week in booth #2806.


Oracle Cloud Infrastructure

Part 2 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform

This is the second part of our blog series that takes a deep dive into the Oracle Cloud Infrastructure security approach. As a recap, we design our security architecture and build security solutions based on seven core pillars. Under each of these pillars, we focus on delivering solutions and capabilities that help our customers improve the security posture of their overall cloud infrastructure. In the first post, we discussed how we enable customers to achieve isolation and encrypt their data. In this post, we dig into our third and fourth pillars and discuss how you can obtain the security controls and visibility needed for your cloud environment.

3. Security Controls

Security controls offer customers effective and easy-to-use security management. The solutions that we offer allow you to control access to your services and segregate operational responsibilities to reduce the risk associated with malicious and accidental user actions.

User authentication and authorization-based security controls: Each user has one or more of the following credentials to authenticate themselves to Oracle Cloud Infrastructure. Users can generate and rotate their own credentials, and a tenancy security administrator can reset credentials for any user within their tenancy.

- Console password: Used to authenticate a user to the Oracle Cloud Infrastructure Console.
- API key: All API calls are signed using a user-specific 2048-bit RSA private key. The user creates a key pair and uploads the public key in the Console.
- SSH key pair: A user can access an instance via SSH, which requires that the user has an SSH key pair.
- Swift password: Used by Recovery Manager (RMAN) to access the Object Storage service for database backups. To ensure sufficient complexity, the IAM service creates the password; the customer cannot provide it.
- Customer secret key: Used by Amazon S3 clients to access the Object Storage service's S3-compatible API. To ensure sufficient complexity, the IAM service creates the password; the customer cannot provide it.

Instances: Instances are a new principal type in IAM. Customers no longer need to configure user credentials on the services running on their compute instances or rotate those credentials. Each compute instance has its own identity, and it authenticates using the certificates that are added to it by the instance principals feature. Because these certificates are automatically created, assigned to instances, and rotated, customers do not need to distribute credentials to their hosts or rotate them. You can group instances into logical groups called dynamic groups and define IAM policies for these groups. Dynamic groups allow you to group Oracle Cloud Infrastructure instances as principal actors, similar to user groups. You can then create policies that permit instances in these groups to make API calls against Oracle Cloud Infrastructure services. Membership in the group is determined by a set of matching rules (a CLI sketch follows below).
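As a hedged illustration of the dynamic-group flow just described, the commands below create a dynamic group whose members are all instances in one compartment, plus a policy that lets those instances read Object Storage. The group name, policy name, and OCIDs are hypothetical, and some required flags may differ by CLI version, so consult the command help.

   # Dynamic group: membership is driven by the matching rule, not by user assignment
   oci iam dynamic-group create \
     --name FrontendInstances \
     --description "All instances in the WebApp compartment" \
     --matching-rule "ANY {instance.compartment.id = 'ocid1.compartment.oc1..<unique_ID>'}"

   # Policy: grant those instance principals read access to objects in the compartment
   oci iam policy create \
     --compartment-id <compartment_OCID> \
     --name frontend-instances-policy \
     --description "Instance principal read access to Object Storage" \
     --statements '["Allow dynamic-group FrontendInstances to read objects in compartment WebApp"]'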
Federated Users: Federated users who attempt to authenticate to the Oracle Cloud Infrastructure Console are redirected to the configured identity provider, after which they can manage Oracle Cloud Infrastructure resources in the Console just like a native IAM user. Currently, Oracle Cloud Infrastructure supports Oracle Identity Cloud Service and Microsoft Active Directory Federation Services (ADFS) as identity providers. Federated groups can be mapped to native IAM groups to define which policies apply to a federated user.

Security Lists: Oracle IaaS also provides a native firewall-as-a-service in the form of security lists, which are applied at the subnet level. For example, the security list rules for a database subnet can restrict it to connections from and to the web server's subnet, while the security list for the web server subnet allows all outgoing connections and restricts incoming ones. A security list provides a virtual firewall for an instance, with ingress and egress rules that specify the types of traffic allowed in and out. Each security list is enforced at the instance level; however, you configure your security lists at the subnet level, which means that all instances in a given subnet are subject to the same set of rules. The security lists apply to a given instance whether it's talking to another instance in the VCN or to a host outside the VCN. When you create a security list rule, you choose whether it's stateful or stateless.

- Stateful: A stateful rule indicates that you want to use connection tracking for any traffic that matches the rule (for instances in the subnet that the security list is associated with). When an instance receives traffic matching a stateful ingress rule, the response is tracked and automatically allowed back to the originating host, regardless of any egress rules applicable to the instance. When an instance sends traffic that matches a stateful egress rule, the incoming response is automatically allowed, regardless of any ingress rules.
- Stateless: A stateless rule indicates that you do not want to use connection tracking for any traffic that matches the rule. Response traffic is not automatically allowed; to allow the response traffic for a stateless ingress rule, you must create a corresponding stateless egress rule.

Containers: For containers, the Kubernetes RBAC Authorizer can enforce fine-grained access control for users on specific clusters via Kubernetes RBAC roles and clusterroles. A Kubernetes RBAC role is a collection of permissions; for example, a role might include read permission on pods and list permission for pods. A Kubernetes RBAC clusterrole is just like a role, but it can be used anywhere in the cluster. A Kubernetes RBAC rolebinding maps a role to a user or set of users, granting that role's permissions to those users for resources in that namespace. Similarly, a Kubernetes RBAC clusterrolebinding maps a clusterrole to a user or set of users, granting that clusterrole's permissions to those users across the entire cluster. IAM and the Kubernetes RBAC Authorizer work together: a user who has been successfully authorized by at least one of them can complete the requested Kubernetes operation. When a user attempts to perform any operation on a cluster (except for create role and create clusterrole operations), IAM first determines whether the group that the user belongs to has the appropriate permissions. If so, the operation succeeds. If the attempted operation also requires additional permissions granted through a Kubernetes RBAC role or clusterrole, the Kubernetes RBAC Authorizer then determines whether the user has been granted the appropriate Kubernetes role or clusterrole. By default, users are not assigned any Kubernetes RBAC roles or clusterroles.
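To show what granting such a role looks like in practice, here is a sketch using standard kubectl commands; the namespace, role name, and user are hypothetical, and this is generic Kubernetes RBAC rather than anything OKE-specific.

   # Create a role in the "dev" namespace that can get and list pods
   kubectl create role pod-reader --verb=get --verb=list --resource=pods --namespace=dev

   # Bind that role to a hypothetical user so they inherit its permissions in "dev"
   kubectl create rolebinding read-pods --role=pod-reader --user=jdoe --namespace=dev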
Before attempting to create a new role (or clusterrole), then, users must first be assigned an appropriately privileged role (or clusterrole). You can connect to worker nodes by using SSH. If you provided a public SSH key when creating the node pool in a cluster, the public key is installed on all worker nodes in the cluster. On UNIX and UNIX-like platforms (including Solaris and Linux), you can then connect to the worker nodes through the SSH utility (an SSH client) to perform administrative tasks. Before you can connect to a worker node by using SSH, you must define an ingress rule in the security list for the worker node subnet to allow SSH access.

4. Visibility

To give you the visibility you need over your cloud infrastructure, Oracle offers comprehensive log data and security analytics that you can use to audit and monitor actions on your resources. This allows you to meet your audit requirements and reduce security and operational risk. The Oracle Cloud Infrastructure Audit service records all API calls to resources in a customer's tenancy, as well as login activity from the Console. Using the Audit service, customers can achieve their own security and compliance goals by monitoring all user activity within their tenancy. Because all Console, SDK, and command line (CLI) calls go through our APIs, all activity from those sources is included. Audit records are available through an authenticated, filterable query API, or they can be retrieved as batched files from Oracle Cloud Infrastructure Object Storage. You can also search for API calls in the Console. Audit log contents include the activity that occurred, the user that initiated it, the date and time of the request, and the source IP, user agent, and HTTP headers of the request. New activities are usually appended to the audit logs within 15 minutes of occurrence. By default, audit logs are retained for 90 days, but you can configure retention for up to 365 days. (A query sketch appears at the end of this post.)

In addition to the Audit service, Oracle CASB-based security monitoring performs Oracle Cloud Infrastructure resource activity configuration checks, IAM user behavior analysis, and IP reputation analysis. Examples of CASB Oracle Cloud Infrastructure security checks:

- Publicly accessible Object Storage buckets
- VCN security lists open to 0.0.0.0/0
- VCN accessible to the internet
- IAM user password not rotated for more than 90 days
- IAM user API keys not rotated for more than 90 days
- IAM user password complexity checks
- MFA not enabled on the admin account

In my next blog post, I will cover the next two pillars: secure hybrid cloud and high availability. In the meantime, use these resources to learn more about Oracle Cloud Infrastructure security:

• Oracle Cloud Infrastructure Security White Paper
• Oracle Cloud Infrastructure GDPR White Paper
• Oracle Cloud Infrastructure Security Best Practices Guide
• Services Security Documentation

Blogs:

- Part 1 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform
- Guidance for PCI Compliance
- Guidance for cSOC
- Guidance for third-party firewall installation on Oracle Cloud Infrastructure - Check Point, vSRX
- Guidance for IAM configuration for MSPs
- Guidance for IAM Best Practices
- Guidance for Migration and DR using Rackware
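As the query sketch promised in the Visibility section, the Audit service's filterable API is exposed through the CLI. The compartment OCID and the time window below are placeholders; events are returned as JSON that you can filter further with the CLI's --query option.

   # List audit events for one compartment over a one-hour window
   oci audit event list \
     --compartment-id ocid1.compartment.oc1..<unique_ID> \
     --start-time 2018-11-01T00:00:00Z \
     --end-time 2018-11-01T01:00:00Z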


Strategy

Enterprise Cloud Infrastructure Shouldn't Be a Commodity

Oracle entered the infrastructure as a service (IaaS) market for two main reasons. From an internal perspective, we're a pioneer and leader in databases. Today's databases need the performance and scalability of cloud infrastructure to meet the demands of enterprise customers. That means we needed a cloud to support our business and our customers' businesses. Even more importantly, we believe IaaS is not and should not be a commodity. There are significant opportunities to innovate and improve cloud migration, integration, and performance. We're capitalizing on these opportunities to provide a truly enterprise-grade cloud to businesses of all sizes.

Enterprise Migration Obstacles

Most enterprise workloads still haven't moved to the cloud, and those that remain on-premises are typically the most mission-critical. Why? There are several obstacles to the large-scale migration of enterprise workloads:

- Many organizations have serious security concerns; rightfully so, given how some cloud providers handle customer tenancy, keys, and data.
- Many applications simply can't run in earlier-generation clouds without significant re-architecture, because of their existing hardware and software dependencies.
- Enterprises must be able to run their entire businesses in the cloud. Unfortunately, they haven't had that option for much of the past decade.
- Mission-critical applications require high, consistent performance and reliability. Customers haven't found this, or the proper service levels, in earlier-generation clouds.
- The migration process itself is often risky, from downtime to the complexity of translating on-premises security to the cloud.

Purpose-Built for the Enterprise

In a real enterprise-grade cloud, large businesses can easily and securely migrate entire systems, even those that rely on multiple technologies from multiple vendors. An on-premises Oracle application, for example, might run on Exadata and use Real Application Clusters (RAC), all in a virtualized environment, protected by several different monitoring and security tools. Oracle Cloud Infrastructure enables enterprises to move these integrated systems to the cloud all at once. Our approach ensures the integrity of customers' existing security capabilities and significantly reduces the risk of migration. It's not enough to enable seamless cloud migration, however. Enterprise-grade clouds must also provide room to grow, from scaling existing systems to supporting and integrating with new technologies. At Oracle Cloud Infrastructure, we're committed to openness, and we embrace transformative new technologies and methodologies, including DevOps, containers, serverless, Kubernetes, and Terraform. Other clouds support these too, but often through a hodgepodge of services that don't integrate well with each other or with existing systems. We're doing cloud infrastructure better because we're purpose-built for the enterprise of today and the enterprise of tomorrow.


Solutions

Oracle Jump Start Learning: Introducing Self-Paced Hands-On Labs

I remember my early days of trying out a cloud platform to create a couple of virtual machines. I got “ready” by reading documentation and watching quite a few online tutorials. But when I first logged into the platform, I was lost. I had to revisit the documents and tutorials to navigate my way around the platform. After I had some hands-on experience, everything became a lot easier. Moral of the story: hands-on experience beats theoretical knowledge. Oracle Cloud Infrastructure provides unparalleled price and performance for customers deploying workloads in a cloud environment. We recognize that our existing and future customers have a varied skill set, so it's important for us to provide our customers and partners with tools and solutions that bridge any skill gap and let them successfully use and deploy solutions on Oracle Cloud Infrastructure. In our endeavor to enable, empower, and expedite our customers, we are delighted to introduce Jump Start Learning. These self-paced, hands-on labs provide a live environment with step-by-step instructions for performing different tasks in Oracle Cloud Infrastructure. Best of all, the instructions and the Oracle Cloud Infrastructure Console are visible in a single split screen, so there's no more switching back and forth between browser windows to read instructions. Want to create a virtual cloud network and deploy a compute instance on it? There's a lab for that. Want to learn how to use Terraform to deploy infrastructure as code? There's a lab for that. Want to deploy and configure Oracle Autonomous Data Warehouse? Well, there's a lab for that, too. Here are some of the key advantages and features of the labs:

- Five beginner-level labs are free, and there is a minimal cost for the rest of the labs.
- Step-by-step instructions and access to the Console in a single browser screen.
- Labs based on skill level: start as a beginner, learn the basics, work your way to advanced, and then use the experience for a production rollout.
- Configure and deploy the latest features, such as a service gateway and Autonomous Data Warehouse.
- No need to install any tools on your laptop; all necessary tools are built in.
- The best hands-on experience, period!

Start taking the labs today by registering your account at https://ocitraining.qloudable.com/. Remember to rate each lab at the end and provide your feedback. We are always listening to our customers and, more importantly, acting on your feedback, so let us know if you want to see a specific lab. Happy learning!


Security

Part 1 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform

Oracle Cloud Infrastructure’s security approach is based on seven core pillars. Each pillar has multiple solutions designed to maximize the security and compliance of the platform. You can read more about Oracle Cloud Infrastructure's security approach here. The seven core pillars of a trusted enterprise cloud platform are:

1. Customer Isolation
2. Data Encryption
3. Security Controls
4. Visibility
5. Secure Hybrid Cloud
6. High Availability
7. Verifiably Secure Infrastructure

Oracle employs some of the world's foremost security experts in information, database, application, infrastructure, and network security. By using Oracle Cloud Infrastructure, our customers directly benefit from Oracle's deep expertise and continuous investments in security. In this blog post (Part 1), I explain how Oracle Cloud Infrastructure security services map to our first two pillars: Customer Isolation and Data Encryption. In the next post (Part 2), I will cover the next two pillars.

1. Customer Isolation

Customer isolation allows customers to deploy application and data assets in an environment that provides full isolation from other tenants and from Oracle's staff. Let's dive into how we offer isolation at different resource levels.

Compute

At the Compute level, we offer two types of instance isolation. Bare metal instances offer complete workload and data isolation: customers have full control of these instances, and every bare metal instance is a single-tenant solution. Oracle personnel have no access to memory or local storage while the instance is running, and there is no Oracle-managed hypervisor on bare metal instances. Virtual machine instances are a multi-tenant solution: VM instances run on an Oracle-managed hypervisor and come with strong isolation controls. Both instance types offer strong security controls, but customers who want higher-performance instances and complete workload and data isolation often prefer bare metal.

Networking

Next, the Oracle Cloud Infrastructure Networking service offers customers a customizable private network (a VCN, or virtual cloud network). VCNs enforce the logical isolation of a customer's Oracle Cloud Infrastructure resources. Oracle's VCN gives you the complete set of network services you need in the cloud, with the same network flexibility you have today on-premises. You can build an isolated virtual network with granular controls, including subnets and security lists. We provide secure and dedicated connectivity from your data center to the cloud through FastConnect, with multiple providers like Equinix and Megaport. You can give end customers high-performance and predictable access to your applications with services like provisioned-bandwidth load balancing. All networking services are API-driven and programmable for more automated management and application control. As with an on-premises network in a data center, customers can set up a VCN with hosts and private IP addresses, subnets, route tables, and gateways. The VCN can be configured for internet connectivity through an internet gateway, or connected to the customer's private data center through an IPSec VPN gateway or FastConnect. FastConnect offers a private connection between an existing network's edge router and dynamic routing gateways; in this case, traffic does not traverse the internet. Subnets, the primary subdivision of a VCN, are specific to an availability domain. They can be marked as private upon creation, which prevents instances launched in that subnet from having public IP addresses.
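As a small sketch of these isolation primitives via the CLI (the CIDR blocks, names, and OCIDs are placeholders, and some required flags are omitted, so check the command help before using):

   # Create an isolated virtual cloud network
   oci network vcn create \
     --compartment-id <compartment_OCID> \
     --cidr-block 10.0.0.0/16 \
     --display-name app-vcn

   # Create a private subnet: instances launched here cannot be given public IPs
   oci network subnet create \
     --compartment-id <compartment_OCID> \
     --vcn-id <vcn_OCID> \
     --availability-domain <AD_name> \
     --cidr-block 10.0.1.0/24 \
     --prohibit-public-ip-on-vnic true \
     --display-name private-subnet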
Compartments and Policies

From an authorization perspective, Identity and Access Management (IAM) compartments can be used for isolation. A compartment is a heterogeneous collection of resources for the purposes of security isolation and access control. All end-user calls to access Oracle Cloud Infrastructure resources are first authenticated by the IAM service and then authorized based on IAM policies. A customer can create a policy that gives a specific set of users permission to access the infrastructure resources (network, compute, storage, and so on) within a compartment in the tenancy. These policies are flexible and are written in a human-readable form that is easy to understand and audit. The easy-to-understand syntax includes verbs that define the level of access given to end users.

2. Data Encryption

Our second core security pillar, data encryption, protects customer data at rest and in transit in a way that allows customers to meet their security and compliance requirements for cryptographic algorithms and key management.

Block Volume Encryption

The Oracle Cloud Infrastructure Block Volumes service provides persistent storage that can be attached to compute instances using the iSCSI protocol. The volumes are stored in high-performance network storage and support automated backup and snapshot capabilities. Volumes and their backups are accessible only from within a customer's VCN and are encrypted at rest using unique keys. For additional security, iSCSI CHAP authentication can be required on a per-volume basis.

Object Storage Encryption

The Oracle Cloud Infrastructure Object Storage service provides highly scalable, strongly consistent, and durable storage for objects; it is ideal for media archives, data lakes, and data protection applications like backup and restore. API calls over HTTPS provide high-throughput access to data. All objects are encrypted at rest using unique keys. Objects are organized by bucket, and, by default, access to buckets and the objects within them requires authentication. You can use IAM security policies to grant users and groups access privileges to buckets. To allow bucket access by users who do not have IAM credentials, the bucket owner (or a user with the necessary privileges) can create pre-authenticated requests that allow authorized actions on buckets or objects for a specified duration. Alternatively, buckets can be made public, which allows unauthenticated and anonymous access. Given the security risk of inadvertent information disclosure, Oracle highly recommends carefully considering the business case for making buckets public. Object Storage enables you to verify that an object was not unintentionally corrupted by allowing an MD5 hash to be sent with the object (or with each part, for multipart uploads) and returned upon successful upload; this hash can be used to validate the integrity of the object. In addition to a native API, the Object Storage service supports Amazon S3 compatible APIs. Using the Amazon S3 Compatibility API, customers can continue to use existing S3 tools (for example, SDK clients), and partners can modify their applications to work with Object Storage with minimal changes. The native API can coexist with the Amazon S3 Compatibility API, which supports CRUD operations. Before customers can use the Amazon S3 Compatibility API, they must create an S3 Compatibility API key.
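Creating that key is a single CLI call; in IAM terms, it is a customer secret key attached to a user. A hedged sketch with placeholder values:

   # Create an S3 Compatibility API (customer secret) key for a user
   oci iam customer-secret-key create \
     --user-id ocid1.user.oc1..<unique_ID> \
     --display-name s3-compat-key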
In addition to its native API, the Object Storage service supports an Amazon S3 Compatibility API. Using the Amazon S3 Compatibility API, customers can continue to use existing S3 tools (for example, SDK clients), and partners can modify their applications to work with Object Storage with minimal changes. The native API can coexist with the Amazon S3 Compatibility API, which supports CRUD operations. Before customers can use the Amazon S3 Compatibility API, they must create an S3 Compatibility API key. After generating this key, customers can use the Amazon S3 Compatibility API to access Object Storage in Oracle Cloud Infrastructure.

Key Management Service

In addition, Oracle provides an enterprise-grade Key Management service with the following characteristics:

Backed by FIPS 140-2 Level 3 HSMs
Tightly integrated with Oracle Block Volumes and Object Storage
Full control of key creation and lifecycle (with automatic rotation options)
Full audit of key usage (with signed attestation by the HSM vendor)
Choice of key shape via Advanced Encryption Standard (AES) keys with three key lengths: AES-128, AES-192, and AES-256

Load Balancer

For data in transit, applications should use TLS certificates and encryption. Oracle IaaS load balancer services support customer-provided TLS certificates. In addition, the Load Balancing service supports TLS 1.2 by default and prioritizes the following forward-secrecy ciphers in the TLS cipher suite:

ECDHE-RSA-AES256-GCM-SHA384
ECDHE-RSA-AES256-SHA384
ECDHE-RSA-AES128-GCM-SHA256
ECDHE-RSA-AES128-SHA256
DHE-RSA-AES256-GCM-SHA384
DHE-RSA-AES256-SHA256
DHE-RSA-AES128-GCM-SHA256
DHE-RSA-AES128-SHA256

Many customers prefer the EC (elliptic curve) cipher suites because of their high performance. However, customers can request the addition of weaker cipher suites via a support ticket if their legacy clients need them.

Database Encryption

Database encryption is achieved by using Transparent Data Encryption (TDE).

In the second part, I will cover the next two pillars. In the meantime, please use these resources to learn more about Oracle Cloud Infrastructure security:

• Oracle Cloud Infrastructure Security White Paper
• Oracle Cloud Infrastructure GDPR White Paper
• Oracle Cloud Infrastructure Security Best Practices Guide
• Services Security Documentation

Blogs:

Guidance for PCI Compliance
Guidance for cSOC
Guidance for Security Checklist for Application Migration
Guidance for third-party firewall installation on Oracle Cloud Infrastructure - Checkpoint, vSRX
Guidance for IAM configuration for MSPs
Guidance for IAM Best Practices
Guidance for Migration and DR using Rackware


So the Internet is Your Corporate Network. Now What?

Almost every major enterprise is considering, or is in the process of, moving its sensitive workloads to the cloud. Within the next 18 months, the share of enterprise workloads running in the cloud will surpass those living on-premises for the first time ever, increasing from 47% to a whopping 69%, according to IDG. This means the internet will be part of the enterprise network.

To be completely confident about this “new network,” enterprises need complete visibility into the internet, or at least the portion through which their traffic passes. The internet is a core component of the cloud's physical infrastructure, with its own nuances around reliability, performance, volatility, and security. You can’t really fix how the overall internet behaves, but you should be aware of these nuances and how they affect your network.

When enterprises built their networks, they assumed that everything inside the firewall was secure, controlled, monitored, measured, and metered, and that the entire network was visible all the time. But when sales, marketing, and support organizations tried to move their applications to the cloud, more often than not, IT had neither the inclination nor the bandwidth to support them. This gave birth to the concept of shadow IT. Those organizations had money but no patience or time to waste waiting for the enterprise IT department to come around, so they replicated corporate security, compliance, and privacy standards as best they could.

Today, with the need for faster innovation, combined with the cost, scalability, and availability benefits of the cloud, enterprises are building cloud native applications that are either cloud-first or cloud-only. More and more workloads will live entirely in the cloud, without ever touching the corporate network. In these scenarios, organizations need complete visibility from edge to core. But how?

Enterprise IT, security, and compliance teams have a choice to make: either completely trust the standards and services supported by their cloud provider of choice at the infrastructure level, or build everything from the ground up in the cloud to mimic the standards they are used to. Most cloud providers offer basic infrastructure performance details but not broader insights into internet performance, largely because they are not measuring, collecting, or analyzing the relevant metrics. Not only should this insight be made available, it should be available in real time via a dashboard or APIs, and it should enable performance optimization of cloud workloads through integration with other tools. Key insights and services that your cloud provider should offer include:

Internet telemetry: Understanding historical internet performance between multiple worldwide locations and your cloud infrastructure, and being able to test that performance in real time, can help ensure that your users are experiencing the best possible application performance. In addition, this performance data can be used for dynamic steering of inbound traffic, enabling you to direct or balance incoming traffic across multiple cloud locations. (The Oracle Internet Intelligence dashboard shows historical latency to multiple global markets within the Oracle Cloud Infrastructure customer console.)

Security: When shifting workloads to the cloud, it is critical to incorporate the same security measures that your enterprise IT built up over the years.
Your cloud provider should support tightly integrated web application firewall, DDoS mitigation, bot management, and API protection services. Insights from these services should be easily accessible through the customer console, and the underlying data should be available for ingestion by security information and event management (SIEM) tools and for security operations center analysis.

Routing: Monitoring and alerting on routing announcements associated with your organization’s address space enables immediate responses to accidental leaks or malicious hijacks, limiting their effects and lowering the risk that user traffic is sent to fake sites or applications designed to steal information. If you are using a cloud provider, confirm that they are taking these steps for the address space in which their critical service platforms (compute, storage, DNS, and so on) reside, and that they have a plan for immediately addressing issues should they arise. Additionally, your cloud provider should have visibility into network paths to and from their regions to ensure traffic isn’t taking circuitous routes, increasing latency, or transiting through hostile nation-states.

Performance: Identifying performance issues in real time requires measurements from a global network of vantage points.

Moving to the cloud is a big deal for most organizations. When choosing a cloud provider, ask some tough questions about whether they will give you visibility into both their infrastructure and the performance of the broader internet. Oracle Cloud Infrastructure offers better performance at a lower price point than competing infrastructure-as-a-service providers. And because it can isolate customers from one another using bare metal compute, it avoids the risk and exposure that come with shared instances. In other words, Oracle Cloud Infrastructure is built for the enterprise from the ground up, with security from edge to core. As you move your enterprise network to the cloud, make sure your provider is enterprise grade.


Events

Cloud Transformation: Recapping Oracle OpenWorld 2018

We had a number of exciting announcements at Oracle OpenWorld 2018, but I'll summarize the conference through the eyes of our customers and partners, and what they shared throughout the week.

Customers Are Ready to Transform Their Mission-Critical Applications

We've long said that our focus is building a true enterprise cloud, one that can handle tough, mission-critical database workloads. All week, in customer briefings, in roundtable discussions, and in the expo hall, customers told us about these workloads. They want to move and improve E-Business Suite, PeopleSoft, JD Edwards, EPM, Cognos, and many more. It was exciting to see so many customers eager to begin their cloud transformation. It was equally great to hear about the successful transformations of companies like Covanta Energy, HID Global, 7-Eleven, and Allianz.

Allianz is one of the largest insurance companies in the world. They specifically chose a mission-critical business-intelligence workload as their first project on Oracle Cloud Infrastructure, not only because it would help them meet delivery timelines, but also, maybe more importantly, to accelerate their people's cloud transformation. Lessons and best practices learned from moving SAS and MicroStrategy to the cloud have convinced Allianz to form a cloud operations practice to operate the new environment and drive additional projects throughout this 140,000-employee, multinational enterprise.

Allianz describes their use of @OracleIaaS #oow18 pic.twitter.com/UWLPnQ0Yz4 — Rex Wang (@wrecks47) October 23, 2018

External customers aren't the only ones moving mission-critical applications; Oracle is drinking our own champagne. Oracle NetSuite, which serves tens of thousands of businesses as a large SaaS provider, is integrated with Oracle Cloud Infrastructure and will be provisioning new customers on the new infrastructure starting next year. Brian Chess, the EVP of Infrastructure, Security, and Compliance for NetSuite, has a great video about the benefits of running on Oracle Cloud Infrastructure. Our roadmap for region expansion, particularly into Asia, is important for growing the NetSuite business.

@NetSuite + Oracle Cloud Infrastructure means the utmost reliability! 👍🏻👍🏻 #suiteconnect @OracleIaaS pic.twitter.com/QwIuJoyMDQ — Danielle Tarp (@danielletarp) October 25, 2018

High Performance Computing Applications Are Also Transforming

Many categories of applications have never run on the cloud, often because most cloud infrastructure vendors have been unable to meet performance and other requirements. Product engineering is one of these categories. Altair is one of the key software vendors in the product engineering space, which has largely transformed to completely digital designs and simulations. Altair software has helped companies design everything from planes, trains, and automobiles to medical devices and buildings, from improving aerodynamics to reducing product weight. This type of software has been stuck on-premises because the cloud hasn't been able to deliver the high and predictable performance it requires.

Excellent presentation from @Altair_US CTO Sam Mahalingam on their use of @OracleIaaS and new Altair product announcements based on it! #oow18 https://t.co/kDmd7brecX — Phil Francisco (@frisco0303) October 23, 2018

So there was a large opportunity to broaden the market for product engineering software by moving it into the cloud.
Altair chose to run their new HyperWorks CFD Unlimited cloud service on Oracle Cloud Infrastructure because of the unique performance capabilities of our bare metal instances and nonblocking network, and our significantly superior price performance. Our announcements of new lower-cost AMD EPYC-based compute instances and RDMA-powered cluster networking will further benefit HPC customers and partners like Altair.

Transforming Newer Real-Time and Big Data Apps

More and more companies require massive amounts of real-time data processing for business analytics and for use cases like security. Use cases for IoT and streaming data, as well as more traditional Hadoop, were actually fairly common in my conversations with customers. Like HPC, these applications have traditionally run on-premises, in custom environments built by enterprises and software vendors.

Cisco, which is investing heavily in the software and security markets, chose to build a SaaS version of their Cisco Tetration product on Oracle Cloud Infrastructure. This application ingests and processes millions of events per second at its current scale, and is growing with each end customer. Cisco went from inception to production on Oracle Cloud Infrastructure in only two months, achieving significantly better performance than on-premises or on other cloud providers, and lowering costs.

Cisco Tetration moves to OCI, lower costs and 60x perf improvement vs other cloud provider. Praises OCI agility. #OOW18 pic.twitter.com/j7FjF0XMDN — blaine noel (@blainenoel) October 23, 2018

If Cisco can build a big data security product on Oracle Cloud, it's certainly an interesting option for other software vendors and enterprises. I engaged in a number of interesting discussions with customers after they heard the Cisco story. This deployment was also a strong vote of confidence for Oracle's core security architecture and continued efforts around core (Key Management, Cloud Access) and edge security.

It's About People Transformation, Too

Mark Hurd predicted that 60 percent of the IT jobs of 2025 haven't been created yet. That provoked a reaction among attendees and analysts, but there's no denying the continued, accelerating change in the skills required for IT success.

.@MarkVHurd: 60% of IT jobs (in 2025) have not been created yet. I love the spirit of this but people are quite slow to change. Note, 5.7 million people work in enterprise IT in the USA Today. That’s a lot of retraining in just 7 years. #OOW18 — Matt Eastwood (@matteastwood) October 23, 2018

At OpenWorld, we were excited to work with customers and partners to teach them more about cloud operations with technologies like Terraform and Kubernetes; to give them the basics on Autonomous Data Warehouse, machine learning, and Big Data; and to help certify some of them on our platform. We heard repeatedly how "peopleware" is critical to cloud transformation. While customers were attending sessions to learn how autonomous databases would make their day-to-day administration much simpler, discussions about skills gaps were ever-present. Increasing the skills of internal IT is important, but expert partners can accelerate time to market. Throughout the week, Oracle partners like Astute Associates, Velocity Technology, Accenture, DXC, and Viscosity shared insights and best practices, in presentations and interactive sessions, on how to succeed in the cloud.
It's never easy to make big changes in technology infrastructure, but it was encouraging to see a dramatic rise in the level of expertise and experience this year from the partner ecosystem.

What Did You Experience at This Year's OpenWorld?

The level of real change and success felt different this year. Some interesting innovations were revealed. What was your experience at OpenWorld 2018? We'd be excited to continue the conversation at our official handle (@OracleIaaS) or my personal one (@lleung).

Leo Leung
Senior Director, Product Management, Oracle Cloud Infrastructure


Product News

Tracking Costs with Oracle Cloud Infrastructure Tagging

We understand the importance of being able to attribute the right Oracle Cloud Infrastructure usage costs to the right department or cost center in your organization. Oracle Cloud Infrastructure enables you to track costs at the service or compartment level by using the My Services dashboard, but our users also need the flexibility to track costs for projects that have resources across multiple compartments or that share a compartment with other projects. With that in mind, I'm pleased to announce that we are introducing cost tracking tags, which allow you to tag resources by user, project, department, or any other metadata that you choose for billing purposes. A cost tracking tag is, in essence, a type of defined tag that is sent to our billing system and shows up on your online statement in the My Services dashboard.

This feature builds on our easy-to-control, schema-based defined tagging approach. While other clouds support free-form tags, Oracle Cloud Infrastructure offers better control by providing defined tags. Defined tags support a schema to help you control tagging, ensure consistency, and prevent tag spam: all critical attributes when it comes to ensuring proper usage and billing management. Read my prior blog post as a primer for setting up defined tags.

Creating Cost Tracking Tags

Let's explore how you can create a cost tracking tag and how it flows through the system so that you can attribute costs. We'll start by looking at a tag namespace that I defined in the Oracle Cloud Infrastructure Console. Note the new field, Number of Cost-tracking Tags. This value shows all the cost tracking tag definitions in the tag namespace. The number is important to know because you can have a maximum of 10 cost tracking tag definitions at any given time.

Now, let's see how I set up my cost tracking tags. I need to track my costs along four separate dimensions, so I set up four cost tracking tags:

CostCenter is the internal department to which these costs are attributed.
Project groups customers together inside a single product offering.
Customer is the customer to which the usage is billed.
Customer_Job is the actual job that is running on the Compute instance.

Note that three of these tags already show Cost-tracking set to Yes, which indicates that they are sent to Oracle's billing system. Customer_Job has Cost-tracking set to No, which is in error, so I need to convert this defined tag to a cost tracking tag. To do that, I open the Customer_Job tag key definition, click the pencil icon next to Cost-tracking: No, and select the Cost-tracking check box. Now that these tag key definitions are set up as cost tracking, the tags are included in the usage data sent to My Services. When a tag is marked as cost tracking, it can take from two to four hours before it’s processed by My Services and included in the online statement.

Viewing Cost Tracking Tags in My Services

You can now view these tags in the My Services dashboard. After logging in to My Services, click Account Management and select a filter based on a cost tracking tag. As shown in the following screenshot, you can filter your costs based on the cost tracking tags that you define and determine how much cost a particular cost center (for instance, a Finance department) has incurred. This example shows the costs associated with a database I was running with the tag Finance:CostCenter=w1234.
Not only can you see this information in the My Services dashboard, but you can also download the results into a CSV file, which is ideal for analysis in Excel or other tools. If you want to automate the process of gathering cost-tracking data by tag by using the API, you can do that as well. You can use the API documentation to get started, but the following is an explicit example. This is a sample URL for the Metering API service:

    https://itra.oraclecloud.com/metering/api/v1/usagecost/cacct-{your caact}/tagged?startTime=2018-09-01T00:00:00.000&endTime=2018-10-04T00:00:00.000&computeTypeEnabled=Y&tags=CostTracking%3ACostCenter%3Dw1234&timeZone=America/Los_Angeles&usageType=DAILY&rollupLevel=RESOURCE

The URL includes /tagged to indicate that you are filtering for a particular tag. The tag field must be URL encoded, which means that you must convert the colon (:) to %3A and the equal sign (=) to %3D. In this example, I used CostTracking:CostCenter=w1234, which URL encoded is CostTracking%3ACostCenter%3Dw1234. The following example shows the costs associated with a database I was running with the tag Finance:CostCenter=w1234.

    {
        "accountId": "cacct-your caact",
        "canonicalLink": "/metering/api/v1/usagecost/cacct-caact /tagged?timeZone=America%2FLos_Angeles&startTime=2018-09-01T00%3A00%3A00.000&endTime=2018-10-04T00%3A00%3A00.000&computeTypeEnabled=Y&tags=Finance%3ACostCenter%3Dw1234&usageType=DAILY&rollupLevel=RESOURCE",
        "items": [
            {
                "costs": [
                    {
                        "computedAmount": 19.44,
                        "computedQuantity": 48.0,
                        "overagesFlag": "N"
                    }
                ],
                "currency": "USD",
                "endTimeUtc": "2018-09-19T17:00:00.000",
                "gsiProductId": "B88331",
                "resourceDisplayName": "Database Standard Added CPUs",
                "resourceName": "PIC_DATABASE_STANDARD_ADDITIONAL_CAPACITY",
                "startTimeUtc": "2018-09-18T17:00:00.000",
                "tag": "Finance:CostCenter=w1234"
            },
            ...
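If you prefer to script this, here is a minimal Python sketch of the same request. The account ID and credentials are placeholders, and the authentication shown (HTTP basic auth) is an assumption; check the Metering API documentation for the exact requirements of your account:

    import requests  # third-party HTTP client
    from urllib.parse import quote

    account_id = "cacct-XXXXXXXX"  # placeholder: your account ID
    # URL encode the tag: ':' becomes %3A and '=' becomes %3D.
    tag = quote("CostTracking:CostCenter=w1234", safe="")

    url = (
        f"https://itra.oraclecloud.com/metering/api/v1/usagecost/{account_id}/tagged"
        f"?startTime=2018-09-01T00:00:00.000&endTime=2018-10-04T00:00:00.000"
        f"&computeTypeEnabled=Y&tags={tag}"
        f"&timeZone=America/Los_Angeles&usageType=DAILY&rollupLevel=RESOURCE"
    )

    # Assumes basic auth with My Services credentials; verify the exact
    # authentication scheme in the Metering API documentation.
    response = requests.get(url, auth=("user@example.com", "password"))
    response.raise_for_status()
    for item in response.json().get("items", []):
        print(item["tag"], item["costs"])

A script like this makes it straightforward to pull tagged usage costs on a schedule and feed them into your own reporting.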

Please add your comments and questions in the comments section below if you want to know more about how cost tracking tags can benefit your organization.

Events

Oracle Cloud Infrastructure Is Ready for Any and All Workloads

At Oracle OpenWorld this week, we have one clear message: Oracle Cloud Infrastructure is ready for any and all workloads.

For over 40 years, Oracle has provided the information technology that powers the world’s best enterprises. Over the years, this technology has come in different packages: databases, middleware, applications, and hardware. Today, it is also delivered via the cloud, which gives customers flexibility. Oracle Cloud Infrastructure, which launched at Oracle OpenWorld 2016, is the underlying platform for Oracle’s applications and autonomous database. It enables companies of any size to run even their most mission-critical, high-volume, high-performance applications and databases. It was built by a team of cloud industry veterans to uniquely meet enterprise-grade computing requirements, and also to live up to the cloud's promises of competitive costs, rapid provisioning, and nearly limitless scale.

Helping our existing customer base modernize and extend their businesses to the cloud remains a major priority. We are also focused on extending enterprise-grade capabilities to developers, startups, and small and medium-size businesses. Oracle Cloud Infrastructure runs production workloads for large companies such as Verizon and for startups such as Snap Tech, which makes an innovative visual search tool. To achieve these ambitious goals, we continue to make strategic investments. Let me share some of our latest announcements in greater detail.

Databases and Applications

As I mentioned in a recent interview with Seeking Alpha, Oracle Cloud’s product strategy has two key areas of focus:

Cloud applications: We are quickly becoming the world’s leading cloud applications provider, with unparalleled innovation and an expanding SaaS portfolio.
Oracle Cloud Infrastructure: Our consolidated platform in the hyperscale infrastructure-as-a-service (IaaS) market underpins Oracle’s applications portfolio and autonomous database.

Oracle Autonomous Database and our leading suite of applications run better on Oracle Cloud Infrastructure than on any other cloud. The networking architecture of Oracle Cloud Infrastructure is designed to support optimal performance of Oracle Database and the applications that depend on it. We accomplish this by using direct point-to-point connections between compute and database instances running within Oracle Cloud Infrastructure. Those point-to-point connections translate to low latency and superior application performance. Read our exciting new announcement about Autonomous Database.

Security Enhancements

Security is a pillar of everything we do, from deploying data centers and architecting networks to monitoring and scaling services. Oracle Cloud Infrastructure helps secure the most mission-critical, hardened applications and databases on the planet. Our security capabilities are designed to protect applications and services whether they live within Oracle Cloud Infrastructure, in other clouds, or on-premises. This hybrid and multicloud approach differentiates Oracle from other cloud platforms. We’re able to do all of this because we think about security from the core of the infrastructure to the edge of the cloud. We’re extending our commitment to security with several announcements, which you can read about in our press release. We aren’t talking about security as a standalone market, but as a fully integrated pillar of our cloud.

Making Collaboration Easier

The internet infrastructure industry is a collaborative one.
Customers simply want to solve problems. That is why we’re continuing to build a robust ecosystem to support Oracle Cloud Infrastructure. Oracle announced a new integrated experience for partners and customers that makes it easier for them to publish and deploy business applications from Oracle Cloud Marketplace on Oracle Cloud Infrastructure.

Cloud Performance

We have long touted that Oracle Cloud Infrastructure outperforms the competition, and our price and performance advantages continue to be well documented. The market is taking note, and the media and third-party analysts are strongly validating our leadership position. StorageReview gave Oracle an Editor’s Choice award for the performance and innovation that they saw when testing Oracle Cloud Infrastructure bare metal and virtual machine instances. Gartner published its updated IaaS scorecards and included Oracle Cloud Infrastructure as one of the four hyperscale cloud providers that they reviewed.

Global Footprint

This news and industry recognition is important only because it helps our customers. We are proud of the growing customer base using Oracle Cloud Infrastructure to expand their business. Two examples are IdentityMind, which offers a widely used RegTech SaaS platform that builds, maintains, and analyzes digital identities worldwide, and FICO, which helps lenders make accurate, reliable, and fast credit-risk decisions across the customer life cycle. The expansion of our business is also important from a network-design standpoint: we need to be where our customers’ customers are. Read more about our cloud region roadmap and our high-capacity edge network.

The Oracle Cloud Infrastructure Edge Network

The cloud edge is the point where people and devices connect to the network, making it both a crucial point for users’ interactions with applications in the cloud and a potential launch point for attacks. Our cloud edge is mature, proven, and fully scaled. The Oracle Cloud Infrastructure edge network is built to deliver the following advantages in a multicloud environment:

Ensure high-speed web traffic with minimal latency
Defend against targeted application-layer attacks
Protect against volumetric attacks on network infrastructure

Community Involvement

Products can help, but we believe we must also tackle the security problem at the macro level. We want to be part of the global conversations happening around the internet, and we are happy to further our commitment to the internet infrastructure community. That’s why today we’re announcing partnerships with both the Internet Society, a global non-profit organization dedicated to the open development, evolution, and use of the internet, and the Internet Infrastructure Coalition (i2Coalition), which ensures that those who build the infrastructure of the internet have a voice in global public policy. We’re also recommitting Oracle to the Cloud Security Alliance through a revised engagement.

“We are excited to be partnering with Oracle Cloud Infrastructure,” said Andrew Sullivan, CEO of the Internet Society. “Their Internet Intelligence team does deep analysis of the internet and its many nuances and is developing routing security tools that can aid our efforts of making the internet more secure.”

“As a leader in the cloud industry, Oracle Cloud Infrastructure recognizes the importance of Internet innovation, and we look forward to working with them on important public policy and Internet governance issues,” said Hillary Osborne, membership director, i2Coalition.
Read more about our relationship with the Internet Society and with the i2Coalition. The cloud holds the promise of accelerating innovation and simplifying operations. But that can’t come at the expense of performance, security, or manageability. That’s why the mission of Oracle Cloud is to enable our customers to run any and every enterprise application and workload securely in the cloud. And with today's news, they can—far more confidently than ever before.


Customer Stories

Altair Engineering Brings the Power of Supercomputing to CFD Engineers with the Help of Oracle Cloud Infrastructure

At Oracle Cloud, we want to bring the power of supercomputing to every engineer and scientist. To deliver on this vision, we strive to achieve the best performance in the cloud for our high-performance computing (HPC) customers, investing in technologies like bare metal compute, high-performance networking, and NVMe SSD-based high-performance storage. These core Oracle Cloud technologies and cutting-edge offerings, like our bare metal GPU instances with 8x NVIDIA Tesla V100 GPUs, enable us to deliver predictable performance for applications like engineering simulation, AI/ML, seismic processing, and reservoir modeling. Our customers can realize the potential of these technologies only when a rich ISV ecosystem of applications is running on our platform. After collaborating for over a year, we are excited to announce our work with Altair Engineering. Together, Altair and Oracle will better serve customers globally.

Altair provides enterprise-class engineering software that enables innovation, reduces development times, and lowers costs through the entire product lifecycle, from concept design to in-service operation. Altair’s simulation-driven approach to innovation is powered by their integrated suite of software, which optimizes design performance across multiple disciplines encompassing structures, motion, fluids, thermal management, electromagnetics, system modeling, and embedded systems, while also providing data analytics and true-to-life visualization and rendering.

Today Altair is announcing the availability of HyperWorks CFD Unlimited, a service that offers computational fluid dynamics (CFD) solvers on Oracle Cloud Infrastructure. Advanced CFD solvers such as Altair ultraFluidX™ and Altair nanoFluidX™ are optimized on the Oracle Cloud to provide overnight simulation results for the most complex cases on a single server. ultraFluidX provides fast prediction of the aerodynamic properties of passenger and heavy-duty vehicles, buildings, and other environmental use cases. nanoFluidX predicts the flow in complex geometries with complex motion, such as oiling in powertrain systems with rotating gears and shafts, using the Smoothed-Particle Hydrodynamics (SPH) simulation method. Both solvers are now available on Oracle Cloud Infrastructure and can leverage GPU instances, bringing the power of HPC to advanced CFD simulation.

“The combination of Oracle’s HPC capabilities, such as our cutting-edge bare metal GPU infrastructure, including the recently announced GPUs, our new leading low-latency RDMA network, and high-performance storage options, combined with Altair’s market-leading CFD solvers makes this collaboration extremely compelling for large enterprises looking to optimize their product development,” said Vinay Kumar, Vice President, Product Management, Oracle Cloud Infrastructure. “We’re working together with Altair to truly define what it means to run HPC workloads in the cloud, and today’s availability of HyperWorks CFD Unlimited proves this."

Both solvers leverage our GPU instances powered by 8x Tesla V100 GPUs and 2x Tesla P100 GPUs. With the launch of a service like HyperWorks CFD Unlimited on Oracle Cloud Infrastructure from Altair, you can truly bring the power of supercomputing to CFD engineers’ fingertips.

“We are excited to expand our relationship with Oracle,” said Sam Mahalingam, Chief Technical Officer for Enterprise Solutions at Altair.
“We find that access to GPU compute resources can be challenging for our customers. The integration with Oracle’s cloud platform addresses this challenge, and provides customers the ability to use GPU-based solvers in the cloud for accelerated performance without the need to purchase expensive hardware. Ultimately this leads to improved productivity, optimized resource utilization, and faster time to market.”

We can’t wait to see what customers do with this service. To find out more about HyperWorks CFD Unlimited and to test the service, visit www.altair.com/oracle. You can also find out more about Oracle Cloud Infrastructure's GPU offerings at https://cloud.oracle.com/iaas/gpu or HPC offerings at https://cloud.oracle.com/iaas/hpc.


Product News

Announcing the Launch of AMD EPYC Instances

As the world of computing continues to evolve, you require a diverse set of hardware and software tools to tackle your workloads in the cloud. With this in mind, I am excited to share that today, at Oracle OpenWorld 2018, we announced a collaboration with AMD to provide a new “E” series of compute instances on Oracle Cloud Infrastructure. The E series compute instances showcase the higher core count, memory bandwidth, I/O bandwidth, advanced security features, and value of AMD EPYC processors.

Today, we are announcing the general availability of the Compute Standard E2 platform, the first addition to the E series. The Compute Standard E2 platform is available as a bare metal shape and as 1-core, 2-core, 4-core, and 8-core VM shapes. With the launch of Compute Standard E2 instances, Oracle Cloud Infrastructure becomes the first public cloud to have a generally available AMD EPYC processor-based compute instance. With 64 cores per server, Oracle has the largest core count instance available in production in the public cloud. With 33 percent more memory channels than comparable x86 instances, this new instance provides more than 269 GB/s of memory bandwidth, the highest recorded by any instance in the public cloud. Additionally, AMD EPYC processors are not affected by the Meltdown and Foreshadow security vulnerabilities. You get all of this for $0.03 per core hour, which is 66 percent less than general-purpose instances offered by other clouds, 53 percent lower than Oracle's other compute instances, and the lowest price offered by any non-burstable compute instance in the public cloud.

Initial capacity is available in the Ashburn (IAD) region, expanding to other regions soon, for bare metal compute instances and 1-, 2-, 4-, and 8-core VM compute instances. The 16- and 24-core shapes will be offered in the first half of 2019, as shown in the following table. Launching these instances through the Oracle Cloud Infrastructure Console or by using tools such as Terraform is the same as launching other x86 instances on Oracle Cloud Infrastructure; see the sketch after the use cases below. At launch, all of the images except the Windows VM image are available.

Key Use Cases

AMD EPYC-based instances are ideal for general-purpose workloads where you want to maximize price performance. On low-level CPU benchmarks, namely SPECint and SPECfp, the AMD instance performs on par with the comparable x86 instance, at a lower cost. Oracle applications, including E-Business Suite, JD Edwards, and PeopleSoft, are supported on any Oracle Cloud Infrastructure x86 compute instance of appropriate size, including AMD EPYC-based instances.

AMD EPYC-based instances are ideally suited for Big Data analytics workloads that rely on higher core counts and are hungry for memory bandwidth. AMD has a partnership with, and is certified to run software from, leading ISVs in the Hadoop ecosystem, including Cloudera, Hortonworks, MapR, and Transwarp. On a 10-TB full TeraSort benchmark (including TeraGen, TeraSort, and TeraValidate), the AMD system demonstrated a 40 percent reduction in cost per OCPU compared to the other x86 alternatives, with only a very slight increase in run times.

AMD EPYC-based instances are also ideally suited to certain high-performance computing (HPC) workloads that rely on memory bandwidth, like computational fluid dynamics (CFD). On a 4-node, 14M-cell Fluent CFD simulation of an aircraft wing, the AMD EPYC-based instance demonstrated a 30 percent reduction in cost along with a slight reduction in overall run times as compared to an x86 alternative.
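As noted above, launching an E2 instance works the same way as launching any other shape; only the shape name changes. Here is a minimal sketch using the Oracle Cloud Infrastructure Python SDK, in which the OCIDs and availability domain are placeholders you would replace with your own values:

    import oci

    # Loads credentials from ~/.oci/config (DEFAULT profile).
    config = oci.config.from_file()
    compute = oci.core.ComputeClient(config)

    details = oci.core.models.LaunchInstanceDetails(
        availability_domain="Uocm:US-ASHBURN-AD-1",        # placeholder AD name
        compartment_id="ocid1.compartment.oc1..example",   # placeholder OCID
        display_name="epyc-demo",
        shape="VM.Standard.E2.4",                          # 4-core AMD EPYC VM shape
        source_details=oci.core.models.InstanceSourceViaImageDetails(
            image_id="ocid1.image.oc1..example"            # placeholder image OCID
        ),
        create_vnic_details=oci.core.models.CreateVnicDetails(
            subnet_id="ocid1.subnet.oc1..example"          # placeholder subnet OCID
        ),
    )

    instance = compute.launch_instance(details).data
    print(instance.id, instance.lifecycle_state)

Because only the shape string differs, existing launch automation can adopt the E series without structural changes.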
Performance Numbers

We compared the AMD EPYC-based instances to our current x86 standard alternatives. The following table shows the detailed configurations.

               AMD EPYC System                                     x86 Alternative System
    CPU        2 x AMD EPYC 7551, 32 cores per socket @ 2.0 GHz    2 x x86 processor, 26 cores per socket @ 2.0 GHz
    Memory     512 GB DDR4                                         768 GB DDR4
    Network    2 x 25 Gbps                                         2 x 25 Gbps

We ran performance tests to exercise CPU performance, memory subsystem performance, floating point compute power, and the performance of server-side Java with emphasis on the middle tier. All of the tests were run with vendor-recommended proprietary compilers. The tests were run a number of times, and the results were averaged.

    Test                            Benchmark Target
    SPECrate 2017 Integer           Integer performance
    SPECrate 2017 Floating Point    Floating point performance
    STREAM                          Memory subsystem performance
    SPECjbb2015                     Middle-tier performance

The following graphs show how the AMD system compared against the x86 alternative. Figure 1 shows a normalized bare-metal-to-bare-metal comparison at the system level. Figure 2 shows a normalized performance-per-core comparison. Figure 3 shows a normalized performance-per-dollar-per-core comparison. The AMD system fared well in basic CPU and memory benchmarks, which can be attributed to the increased number of cores and the higher number of memory channels in the AMD system.

    Figure 1: Bare Metal Comparison of AMD EPYC and x86 Standard System
    Figure 2: Performance/Core Comparison of AMD EPYC and x86 Standard System
    Figure 3: Performance/Dollar/Core Hour Comparison of AMD EPYC and x86 Standard System

At Oracle OpenWorld on October 24 in Moscone South, Room 154, from 12:30–1:15 p.m., we'll be presenting a session with AMD about these new compute instances. We'll also be showcasing AMD EPYC-based compute instances at SC18 in Dallas on November 12–15.

Thanks to the Compute team, our friends at OHD, and the rest of the Oracle Cloud Infrastructure team that worked day and night to launch the AMD EPYC offering. If you have any questions, feel free to reach out.

Rajan Panchapakesan
Principal PM, Compute and HPC


Product News

Announcing Oracle Cloud Infrastructure Key Management

Customers of Oracle Cloud Infrastructure moved their workloads to the cloud knowing that their data would be protected by encryption keys that are securely stored and controlled by Oracle. However, some customers, especially those operating in regulated industries, asked Oracle to help them verify their security governance, regulatory compliance, and consistent encryption of their data wherever it is stored.

Effective immediately, Oracle Cloud Infrastructure Key Management is available to customers in all Oracle Cloud Infrastructure regions. Key Management is a managed service that enables you to encrypt your data using keys that you control. Key Management durably stores your keys in key vaults that use FIPS 140-2 Level 3 certified hardware security modules (HSMs) to protect the security of your keys. You can use the Key Management service through the Console, API, or CLI to create, use, rotate, enable, and disable Advanced Encryption Standard (AES) symmetric keys. As a managed service, Key Management lets you focus on your data encryption needs without requiring you to worry about procuring, provisioning, configuring, updating, and maintaining HSMs and key management software or appliances.

Integration with Oracle Cloud Infrastructure Block Volumes, Oracle Cloud Infrastructure Compute boot volumes, and Oracle Cloud Infrastructure Object Storage means that encrypting your data with keys that you control is as straightforward as selecting a key from the Key Management service when you create or update a block volume or bucket.

Example: Creating a block volume using keys from Key Management
Example: Editing or unassigning a previously assigned key from a block volume

Integration with Oracle Cloud Infrastructure Identity and Access Management (IAM) and Oracle Cloud Infrastructure Audit lets you control the permissions on individual keys and key vaults, and monitor their life cycles.

Example: Enabling block and boot volume encryption using Key Management

Learn more about how to get started with Oracle Cloud Infrastructure Key Management in our documentation and our FAQs.
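As a rough sketch of the API path, the following Python uses the Oracle Cloud Infrastructure SDK to create an AES-256 key in an existing vault. The vault management endpoint, compartment OCID, and key name are placeholders, and you should confirm the model and client names against the current SDK documentation:

    import oci

    config = oci.config.from_file()  # credentials from ~/.oci/config

    # Each vault exposes its own management endpoint; this one is a placeholder.
    kms_client = oci.key_management.KmsManagementClient(
        config, service_endpoint="https://<vault-management-endpoint>"
    )

    key = kms_client.create_key(
        oci.key_management.models.CreateKeyDetails(
            compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
            display_name="block-volume-master-key",
            # AES with a 32-byte (256-bit) length, one of the supported key shapes.
            key_shape=oci.key_management.models.KeyShape(algorithm="AES", length=32),
        )
    ).data
    print(key.id, key.lifecycle_state)

From there, the key's OCID can be supplied when you create or update a block volume or bucket, as described above.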

This post was written by guest blogger Ulf Schoo, a consulting member of the technical staff on the Oracle Cloud Infrastructure team.

Product News

Oracle CASB Enables Security Monitoring for Oracle Cloud Infrastructure

At Oracle Cloud Infrastructure, customer security is of paramount importance. We understand that enterprises of all industries and sizes require comprehensive visibility and security and compliance monitoring over their cloud resources. Oracle Cloud Infrastructure provides maximum visibility into the actions taken on customers' cloud resources through the availability of various logs, including the Oracle Cloud Infrastructure Audit service, which tracks all actions taken on Oracle Cloud Infrastructure tenancy resources.

Oracle Cloud Access Security Broker (CASB) Cloud Service takes security a step further by providing automated capabilities for customers to monitor the security of their cloud infrastructure resources. Additionally, Oracle CASB supports monitoring of Oracle Cloud Applications (SaaS), Oracle Cloud Platform (PaaS), and other public clouds and services, including AWS, Azure, Office 365, and Salesforce. The solution helps customers with heterogeneous, multiple-cloud deployments achieve better security postures for their cloud resources.

Security Monitoring Use Cases

Oracle CASB monitors the security of Oracle Cloud Infrastructure deployments through a combination of predefined Oracle Cloud Infrastructure-specific security controls and policies, customer-configurable security controls and policies, and advanced security analytics that use machine learning to detect anomalies. Oracle CASB performs the following types of security monitoring:

Security misconfiguration of Oracle Cloud Infrastructure resources: Oracle CASB monitors configurations of Oracle Cloud Infrastructure compute, virtual cloud networks (VCNs), and storage, based on Oracle Cloud Infrastructure security best practices. For example, Oracle CASB can alert administrators about Oracle Cloud Infrastructure Object Storage buckets that are made public.

Monitoring of credentials, roles, and privileges: Oracle Cloud Infrastructure Identity and Access Management (IAM) security policies assign various privileges (inspect, read, use, and manage) to IAM groups. Oracle CASB monitors IAM users and groups for excessive privileges and for changes to administrator groups. For example, Oracle CASB monitors the use and age of IAM credentials that are used to authenticate users, such as console passwords and API keys. Any deviations from the acceptable standards can result in alerts.

User behavior analysis (UBA) for anomalous user actions: User logins and access patterns are analyzed to establish expected behavior, and deviations from expected baselines are detected with advanced analytics based on machine-learning (ML) algorithms. UBA generates risk scores for events, and customers have options to configure security alerts based on risk-score thresholds.

Risk events from threat analytics: Oracle CASB is integrated with third-party threat intelligence feeds and uses them to analyze access events to customer Oracle Cloud Infrastructure tenancies, in order to detect potential security threats such as access to Oracle Cloud Infrastructure resources from suspicious IP addresses or anomalous patterns of IP address use.

Register Your Tenancy with Oracle CASB

This section provides an overview of how to register your Oracle Cloud Infrastructure tenancy with Oracle CASB and how to view security alerts. To enable CASB monitoring, you create an Oracle Cloud Infrastructure application instance with Oracle CASB and provision it by using the API key credentials of a least-privilege IAM user that is authorized to get configuration information and audit logs from your Oracle Cloud Infrastructure tenancy.
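For illustration, a least-privilege monitoring user like the one just described can be backed by read-only policy statements along these lines. The group name is hypothetical; the verbs and resource types follow the documented IAM policy syntax:

    Allow group CASB-Monitoring to inspect all-resources in tenancy
    Allow group CASB-Monitoring to read audit-events in tenancy

Because the group is granted only inspect and read access, the CASB application instance can collect configuration information and audit logs without being able to modify any resource.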
The following screenshot (Figure 1) shows the registration page, where you provide the tenancy OCID, the IAM user OCID, the public key fingerprint of the IAM user API key, and the private key of the IAM user API key to register an Oracle Cloud Infrastructure application instance.

Figure 1. Oracle Cloud Infrastructure Application Instance Registration

Oracle CASB has preconfigured security controls and prebuilt policy controls for Oracle Cloud Infrastructure security monitoring. Examples include checking for public buckets, checking for open (0.0.0.0/0) VCN security lists, monitoring privileges granted using IAM policies, and more. The following screenshot (Figure 2) shows predefined Oracle Cloud Infrastructure security controls that you can enable.

Figure 2. Oracle Cloud Infrastructure Security Controls

At this point, Oracle CASB is ready to get Oracle Cloud Infrastructure audit logs and configuration information from your tenancy and to conduct security monitoring based on security and policy controls. The following screenshot (Figure 3) shows the dashboard with Oracle Cloud Infrastructure security alerts generated by Oracle CASB.

Figure 3. Oracle Cloud Infrastructure Security Alerts

To recap, Oracle CASB provides comprehensive security monitoring for customer Oracle Cloud Infrastructure tenancies and generates security alerts with actionable remediation steps to triage issues. What's more, Oracle CASB enables you to get going quickly because it doesn't require the installation of any software agent, and it uses customer-provided privileges to get the security configuration information and logs required for analytics. For more information about how to configure Oracle CASB for use with Oracle Cloud Infrastructure, see the Using Oracle CASB Cloud Service documentation.

Oracle CASB is currently used by Oracle Cloud Infrastructure customers, including large enterprises, whose feedback is integrated into the product, enabling us to continue to improve security and user experience. As new Oracle Cloud Infrastructure services and features are released, Oracle CASB will transparently offer corresponding security checks to Oracle Cloud Infrastructure customers. Oracle CASB provides maximum Oracle Cloud Infrastructure security monitoring for customers, with a relatively low total cost of ownership (TCO). And our Universal Credits Model (UCM) covers Oracle CASB, so you can pay by consumption for CASB security monitoring.

For more information about Oracle CASB and Oracle Cloud Infrastructure-specific security checks, see the following documentation:

Oracle CASB Cloud Service documentation
Viewing Key Security Indicators and Reports for OCI

This post was written by guest blogger Nachiketh Potlapally, a consulting member of the technical staff on the Oracle Cloud Infrastructure team.


Product News

Introducing the Generation 2 Cloud at Oracle OpenWorld 2018

Oracle built its Generation 2 Cloud from the ground up to provide businesses with better performance, pricing, and—above all else—security. That was the message from founder and CTO Larry Ellison during his opening keynote at Oracle OpenWorld 2018, where he announced new security features and explained the overall benefits of Oracle Cloud Infrastructure. "Other clouds have been around for a long time, and they were not designed for the enterprise," Ellison said.

Security First

The Oracle Cloud is a secure, unified architecture for all applications, from the Oracle Autonomous Database and SaaS applications to enterprise and cloud native applications. Generation 1 clouds place user code and data on the same computers as the cloud control code, with shared CPU, memory, and storage. That means cloud providers can see customer data, and it enables customer code to access cloud control code, which can lead to breaches and cyberattacks, Ellison said. Oracle's Generation 2 Cloud, on the other hand, puts customer code, data, and resources on a bare metal computer, while cloud control code lives on a separate computer with a different architecture. With this approach, Oracle cannot see customer data, and there is no user access to the cloud control code. "We will never put our cloud control code in the same computer that has customer code," Ellison said.

We at @forrester have talked about this for years. Security and privacy done right engenders customer trust and creates competitive differentiation. @oracle announced today that it’s Gen 2 cloud was built from the ground up for security. Security is its design point. #oow18 https://t.co/5uhlRAhh5l — Stephanie Balaouras (@sbalaouras) October 22, 2018

Oracle's Generation 2 Cloud also uses the latest artificial intelligence and machine learning technologies to level the security playing field, because malicious hackers are using these same technologies. "It's their robots versus your people," Ellison said. "Who do you think is faster? Who do you think's going to win?"

Ellison also announced four new Oracle Cloud Infrastructure security features: a web application firewall, DDoS protection, cloud access security broker support, and a key management service.

Price and Performance

Security was the primary reason that Oracle Cloud Infrastructure was built from the ground up, Ellison said. Other major drivers were the opportunity to improve the cloud migration process and to provide greater performance and pricing to customers who make the move.

By the by, if anyone is wondering, Oracle Cloud IaaS is legit, Oracle is a hyperscale provider and I called it 3 years ago. Might share some scratch math to explain tomorrow. #OOW18 — Carl Brooks (@eekygeeky) October 22, 2018

If you run an enterprise application in a Generation 1 cloud, it usually costs more to run than it did on-premises, but that's not the case on Oracle Cloud Infrastructure, Ellison said. He also provided benchmarks that showed significant price and performance benefits over Amazon Web Services (AWS).

Ellison shows Amazon network charges 100x to move data out of their cloud compared to moving data out of Oracle cloud. The @awscloud comparisons continue, including compute, block storage, network costs where Oracle Cloud is significantly less. #OOW18 — Hyoun Park (박현경) 🏳️‍🌈 (@hyounpark) October 22, 2018

To stay on top of all the Oracle Cloud Infrastructure news at OpenWorld 2018, follow @OracleIaaS on Twitter and follow the #oow18 hashtag.


Partners

Migration to Oracle Cloud Infrastructure with Deloitte ATADATA™

Oracle and Diamond Partner Deloitte Consulting LLP Jointly Validate Enterprise Workload Migration to the Cloud with ATADATA

Oracle Cloud Infrastructure provides a true enterprise cloud, with the consistently high performance and predictable low pricing that enterprises require before moving their most critical workloads to the cloud. We are seeing many enterprise customers leverage the advantages of Oracle Cloud to fuel innovation in their businesses, build environments more securely, avoid the cost and risk of hardware refresh, and realize significant cost savings. This has led to significant interest in automated workload migration to accelerate the move to Oracle Cloud. Customers frequently look to their System Integrator (SI) partners to facilitate the migration of key workloads and reduce risk through proven experience, streamlined operations, and automated capabilities.

Oracle applications play a central role in complex enterprise workflows. Varied infrastructure types and inherent application complexity make it challenging to move applications and their associated points of integration to the cloud without unintended disruption to functionality. Effective cloud migration requires investigation, data collection, compatibility assessment, financial measurement, and lockstep coordination between the target cloud platform and migration partners to ensure a smooth and efficient transition. With Deloitte ATADATA, the effort and risk of moving critical workloads from traditional environments to Oracle Cloud Infrastructure are dramatically reduced through all phases of migration, from discovery and planning all the way to the physical migration to Oracle Cloud Infrastructure and validation after completion. We're proud of this partnership as a key enabler for realizing cloud benefits without being overwhelmed by a migration process that is cost-prohibitive and risky.

In July 2018, Oracle started a joint validation initiative with Deloitte ATADATA for automated cloud migration. For the evaluation, use cases were designed to validate the ATAVision discovery and ATAMotion migration modules across the four phases that are typical when migrating an application to Oracle Cloud Infrastructure:

1. Discovery of a source environment across all infrastructure elements
2. Provisioning of compute instances, storage, and network connections in the target Oracle Cloud Infrastructure environment
3. Migration of VMs and data from the source to Oracle Cloud Infrastructure
4. Validation of migrated data and application configurations

ATAVision Discovery Overview

Although most organizations possess basic inventory and utilization data, the level of detail and accuracy is often not sufficient to address the needs of a successful migration project. The ATAVision module collects all the data required to develop a comprehensive migration plan, including, but not limited to, infrastructure details, affinity relationships, compatibility issues, and software dependencies. The discovery software is agentless; that is, no installation or reboots are required on the source candidate servers, and the discovery process doesn’t impact system performance. By combining these elements, ATAVision’s automated move-group engine creates a detailed migration plan based on a full view of the environment. ATADATA software can be installed anywhere, provided it has access to the source environment. ATAVision can collect data on physical servers, on-premises VM servers, VMware clusters, or across any hypervisor or competing cloud platform.
ATAMotion Migration Overview

A significant benefit of Deloitte ATADATA products is their automated integration capability. The ATAVision module combines dependent servers into migration units called move groups. Move groups are imported into an integrated migration module, ATAMotion, and can be migrated independently or combined into a larger orchestrated migration wave plan. Consequently, servers with a high affinity relationship are moved together without omitting critical pieces of a complex application architecture.

The ATAMotion migration technology orchestrates provisioning through integration with Oracle Cloud Infrastructure APIs. When a migration job is created, all volumes can be migrated as a set, or specific volumes can be selected for migration. Although ATAMotion can use all available bandwidth, throttling is supported to minimize disruption to production workloads. Once migration is initiated, Oracle Cloud Infrastructure APIs are leveraged to provision cloud resources. After the target is up and running, the agentless ATAMotion software is deployed on the target. The target then communicates back to the source server to enable data transfer over a secure connection (using either secure cipher key encoding or AES encryption). The direct connectivity between source and target is the key to migrating data and workloads at scale.

Final Thoughts

The Oracle team finds ATADATA tools to be easy to use and effective. By deeply integrating with the Oracle Cloud Infrastructure APIs, ATADATA has differentiated its offering from the competition and enabled customers to accomplish comprehensive migrations quickly and successfully. Specifically, the team has noted innovative approaches to the automatic provisioning of cloud compute instances based on configuration schemas and the seamless migration of VMware servers to Oracle Cloud Infrastructure. Based on our evaluation, ATADATA's capabilities and integration with Oracle Cloud APIs are "best in class" for migrating enterprise applications smoothly and effectively.

For additional questions, please contact Donald Schmidt Jr., Managing Director, Deloitte Consulting, at doschmidt@deloitte.com.

Co-authored by: Donald Schmidt Jr., Managing Director, Deloitte Consulting LLP; Manoj Mehta, Director of Product Management, OCI Development; and Andrew Reichman, Sr. Director, OCI Development


Developer Tools

Cloud-Native Technologies and Solutions Make a Strong Showing at Oracle OpenWorld 2018

If you're one of the more than 60,000 attendees at Oracle OpenWorld next week, you'll have a dizzying array of choices from thousands of sessions, hands-on labs, birds-of-a-feather sessions, case-study presentations, meetings with experts and peers, and parties! No matter what you choose, you're certain to hear a lot from us about the role of cloud computing and how we think it will transform the way you create applications, manage your IT infrastructure, and conduct your business. We'll cover why and how to move workloads from your data centers to the cloud, which we refer to as "move and improve." But we'll also extensively cover new technologies and solutions that are "cloud native": cloud-based tools and services that you can use to develop, deploy, and manage your cloud-based applications. Here are some of the sessions about cloud-native technologies and solutions that I plan to attend. I hope this selection helps you as you create your calendar for the coming week.

Managing the Transformation

Moving to cloud native is about more than deploying in the cloud and using new tools. These sessions cover new cloud technologies, how they affect your business, and best practices for adopting them.

Your Cloud Transformation Roadmap on Oracle Cloud Infrastructure (PKN6351)
Clay Magouyrk (Senior Vice President, Software Development, Oracle) and Rahul Patil (Vice President, Software Development, Oracle) review the developments in Oracle Cloud Infrastructure and what they are working on next. A great session for the "big picture."

Making Cloud Native Universal and Sustainable (KEY6962)
Dee Kumar, Vice President at the Cloud Native Computing Foundation (CNCF), delivers this keynote. The CNCF, part of the Linux Foundation, plays an important role in the ecosystem, incubating many of the leading open source cloud-native technologies.

Cloud Native Architectures on Oracle Cloud Infrastructure with Linkd (MYC6865)
Linkd (formerly Wireflare) chose to run its performance-sensitive MEAN stack application on Oracle Cloud Infrastructure. In this session, the company's CTO and founder discusses the decision process and the experience. Learn about the performance requirements and the benefits achieved in comparison to alternative cloud providers, and gain insight into cloud selection and results in a cloud-native application environment.

Cloud Native Developer Panel: Innovative Startup Use Cases (DEV5600)
Startup development teams are pushing the limits with novel use cases and advanced architectures, from Kubernetes to AI/ML workloads and serverless microservice deployments. In this panel discussion, startup representatives walk through how they are using open source technologies on top of a high-performance cloud, lessons learned, and what's on the horizon.

DevOps

It is in the cloud that DevOps reaches its full potential. With resources that can be described and versioned as code, provisioned and scaled on demand, and with automation for every step of the application life cycle, Oracle Cloud Infrastructure dramatically increases the productivity of development teams and the quality of their applications.

DevOps on Oracle Cloud Infrastructure (FLP6872)
Oracle Cloud Infrastructure provides DevOps practitioners with the services required to automate the deployment of large, complex distributed systems while giving engineers the flexibility to choose the languages and tools they prefer. In this session, explore DevOps solutions, including expanded support for popular tools and languages. Learn how to solve problems with this suite of offerings, how it's differentiated from other options on the market, and how the product team made these choices.

Introducing DevOps Solutions on Oracle Cloud Infrastructure (THT6958)
Join this session to learn about the services that Oracle Cloud Infrastructure provides for DevOps practitioners, and get a sneak peek at upcoming solutions such as monitoring services, expanded support for popular tools and languages, integrated development environments, continuous integration/continuous delivery, and collaboration tools such as ChatOps. Learn how to deploy Oracle Cloud Infrastructure resources by using Terraform, including a fully managed service and a group of open source Terraform modules.

Using Ansible, Terraform, and Jenkins on Oracle Cloud Infrastructure (DEV5582)
DevOps teams need the right tools and technologies to safely and reliably build and support large, complex cloud systems. This session explores an example architecture and then walks through building it out by using Terraform to define infrastructure as code, Ansible for configuration management, HashiCorp's Vault for secrets management, and Jenkins for continuous integration/continuous delivery.

Containers

A key characteristic of cloud-native applications is that they benefit from the distributed nature, resilience, and elastic scalability of cloud infrastructure. In most cases, that means these applications are deployed as loosely coupled, containerized components that can be scaled up and down with ease.

Container Registry 2.0: Enabling Enterprise Container Deployments (DEV5604)
Container registries are evolving as container workloads move to production. This session explores some of the new must-have requirements for security, policy, automation, and additional artifact storage. It also examines how registries work closely with coupled Kubernetes deployments and presents best practices for building container-native deployment strategies.

A Guide to Enterprise Kubernetes: Journeys to Production (DEV5623)
This session presents a guide for enterprises looking to move to production with Kubernetes. You'll hear from customers who've made the journey and their stories of operationalizing Kubernetes. The presentation covers best practices and lessons learned across areas such as network and storage integration, scaling, monitoring, logging, and deploying across multiple regions.

Kubernetes in an Oracle Hybrid Cloud (BUS5722)
Are you moving to the cloud? Looking at containers? Keeping some workloads on-premises? Shifting workloads from on-premises to the cloud, and from the cloud to on-premises? Maybe splitting workloads between the two in a hybrid cloud? If you answered "yes" to any of these questions, you are not alone. In this session, learn how Kubernetes can be used to run both cloud-native and existing workloads, in Oracle Cloud and on-premises on Oracle Cloud at Customer, and see how customers are handling both use cases.

Kube me this! Kubernetes Ideas and Best Practices (DEV5369)
This session covers best practices to consider when shifting from deploying applications to web servers to a microservices model on Kubernetes. You'll learn about the topics to consider while moving to Kubernetes and the principles to follow when building out your Kubernetes-based applications and infrastructure. You'll leave the session with best practices to implement in your own organization.

Serverless

Serverless approaches take cloud-native principles an abstraction step further than containers. Forget about provisioning infrastructure for your applications: Oracle Cloud Infrastructure provides it when you need it and makes it go away when you don't, so you pay only for what you actually use.

Bringing Serverless to Your Enterprise with the Fn Project (PRO4600)
Serverless computing is one of the hottest trends in computing because of its simplicity and cost-efficiency. Oracle recently open sourced a new project that enables developers to run their own serverless infrastructure anywhere. In this session, learn how to use the functions platform (with a demo), how to deploy functions in multiple languages, the benefits of bringing serverless to your organization, how to identify low-hanging-fruit projects, and best practices.

Serverless Java: Challenges and Triumphs (DEV5525)
This session examines the challenges of using Java for serverless functions and the latest Java platform features that address them. It also digs into the open source Fn project's unparalleled Java support, which makes it possible to build, test, and scale out Java-based functions applications.

Hands-on Labs

Learn more about Terraform, Kubernetes, big data, AI/ML, and HPC in the instructor-led Oracle Cloud Infrastructure hands-on labs. Bring your own laptop!

Open Source Technologies

Oracle has made a strategic commitment to open source and standards for Oracle Cloud Infrastructure. It builds its services on unforked, supported open source projects, and it ensures that it's as easy to bring workloads to its cloud as it is to take them elsewhere. Oracle's developer teams are also very active participants in the Cloud Native Computing Foundation and many of the projects in that ecosystem. Check out some of these sessions that discuss exciting new open source projects that can be deployed on Oracle Cloud Infrastructure:

Using Terraform with Oracle Cloud Infrastructure (HOL6376)
GraphPipe: Blazingly Fast Machine Learning Inference (DEV5593)
Istio and Envoy: Enabling Sidecars for Microservices (BOF5714)
Istio, Service Mesh Patterns on Container Engine for Kubernetes (DEV6078)
Building a Stateful Interaction with Stateless FaaS with Redis (THT6878)
Serverless Kotlin in Action: A Black/Silver Combo? (DEV5695)


Improved Availability of Your Instances with Customer-Managed VM Maintenance

We are excited to announce customer-managed virtual machine (VM) maintenance, a major step in Oracle Cloud Infrastructure's ongoing effort to improve the availability of your VM instances. You can now simply reboot your instances to avoid scheduled downtime for planned infrastructure maintenance.

What is customer-managed VM maintenance?

Today, when an underlying infrastructure component needs to undergo maintenance, we notify you in advance of the planned maintenance downtime. To avoid this planned downtime, you can opt to terminate and redeploy your instances before the planned maintenance. With the introduction of customer-managed VM maintenance, we give you another option: instead of terminating and redeploying your instance manually, you can now reboot your instance from the Console, API, or CLI. This new experience makes it easy for you to control your instance downtime during the notification period.

A reboot or restart of a VM instance during the notification period is different from a normal reboot. The reboot or stop/start workflow stops your instance on the existing VM host that needs maintenance and starts it on a healthy VM host, making it easier for you to avoid the planned maintenance downtime. If you choose not to reboot during the notification period, Oracle Cloud Infrastructure reboots your instance for you before proceeding with the planned infrastructure maintenance.

How do I get started?

Getting started is easy. When there is a maintenance event, Oracle Cloud Infrastructure notifies you by email. You can identify the affected VMs in the Console by checking the Maintenance Reboot field (or by checking the timeMaintenanceRebootDue property through the API or CLI), which shows the date and time after which the infrastructure maintenance will occur. The instance reboot occurs within a 24-hour period following the specified time. Both the Instance list view and the Instance details view display the Maintenance Reboot field.

For Standard VM instances with a boot volume, additional iSCSI block volume attachments, and a single VNIC, you can proceed to reboot or stop and start the instance. If you have non-iSCSI (paravirtualized or emulated) block volume attachments or secondary VNICs, you must detach them before rebooting or restarting your instance. When you reboot or stop and start the instance, it is migrated to a different physical VM host while preserving all of the instance configuration properties, including ephemeral and public IP addresses. When the Maintenance Reboot field is blank, the instance is no longer affected by the maintenance event.

Finding affected instances

To make it easier to find and act on your instances, you can search for the instances in your tenancy that are set to reboot by using the Advanced Search and choosing the "Query for all instances which have an upcoming scheduled maintenance reboot" sample query.

Customer-managed VM maintenance is currently supported on Standard VM instances running Linux. It supports instances launched from Oracle Cloud Infrastructure images and from images imported from external sources, and it is offered in all regions at no extra cost.

To learn more about customer-managed VM maintenance on Oracle Cloud Infrastructure, see Best Practices for Your Compute Instances. For more information about the Oracle Cloud Infrastructure Compute service, see the Oracle Cloud Infrastructure Getting Started guide, the Compute service overview, and the FAQ.
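The workflow described above (find instances with a pending maintenance reboot, then reboot them proactively at a time you control) can also be scripted. Here is a minimal sketch using the OCI Python SDK; it assumes a configured ~/.oci/config profile, the compartment OCID is a placeholder, and SOFTRESET is the instance action corresponding to a graceful reboot.

```python
import oci

config = oci.config.from_file()           # reads ~/.oci/config
compute = oci.core.ComputeClient(config)
compartment_id = "ocid1.compartment.oc1..example"  # placeholder OCID

for instance in compute.list_instances(compartment_id=compartment_id).data:
    # time_maintenance_reboot_due is set only when a maintenance
    # reboot is pending for the instance.
    due = instance.time_maintenance_reboot_due
    if due is not None and instance.lifecycle_state == "RUNNING":
        print(f"{instance.display_name}: maintenance reboot due {due}")
        # Reboot proactively so the instance migrates to a healthy
        # VM host during a window we choose.
        compute.instance_action(instance.id, "SOFTRESET")
```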


Oracle Cloud Infrastructure

Data Tiering Enhancement for Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure

Hello, my name is Zachary Smith, and I'm a Solutions Architect working on big data for Oracle Cloud Infrastructure. In June 2018, we announced the availability of Terraform automation to easily deploy Cloudera Enterprise Data Hub on Oracle Cloud Infrastructure. Today we are proud to introduce the next version of the automation templates, which enables you to use data tiering with Cloudera Enterprise Data Hub deployments on Oracle Cloud Infrastructure.

You can now leverage the multiple classes of storage available in Oracle Cloud Infrastructure (block volumes, local NVMe SSDs, object, file, and archive) in a single Hadoop cluster. You can also define storage policies customized for your workloads, which can help lower costs without compromising on SLAs. Storage policies can now be defined through the command-line automation tool, and you can continue to use Cloudera Manager to set up new policies or update existing ones. You can find out more in Cloudera's documentation.

Splitting data across tiers reduces costs: less frequently used data resides on block volumes or is copied to Object Storage, both of which are less expensive and allow for higher storage density. This enables you to meet storage capacity requirements while minimizing the compute cost needed to meet workload demands. We are already seeing large enterprise customers use this feature to drive cost and operational efficiencies, using fast bare metal NVMe storage for hot data and Block Volumes storage for cooler data.

In addition to using the updated automation scripts to configure Enterprise Data Hub to use the various data tiers, you can run the mover tool periodically to move data between storage classes for greater storage efficiency and to ensure compliance with storage policies. In our initial experiments, we measured an average transfer rate of 4 GB/s between local NVMe and block volumes in a six-worker-node cluster with twelve 2-TB block volumes per worker, and the rate of data movement between tiers scales with the number of nodes. Because the recommended guidance is to run the mover tool on a regular basis, we don't expect the data movement overhead to be significant during regular operation of the cluster.

You can find the Terraform automation template on GitHub, included with the availability-domain-spanning architecture that we announced last month.
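Under the hood, these tiering policies are standard HDFS storage policies, so enforcement follows the usual HDFS workflow: tag a path with a policy, then run the mover so existing blocks are relocated to match. As an illustrative sketch only (the path and policy name are examples, and it assumes it runs on a cluster node with the hdfs client configured), a periodic job might look like this in Python:

```python
# Illustrative sketch: pin a cold-data path to an HDFS storage policy
# and run the HDFS mover to enforce it. The path and policy name are
# examples; schedule run_mover() regularly (e.g., via cron).
import subprocess

def set_storage_policy(path: str, policy: str) -> None:
    # Standard HDFS command; e.g., policy "COLD" places replicas on
    # ARCHIVE storage, while "HOT" keeps them on DISK.
    subprocess.run(
        ["hdfs", "storagepolicies", "-setStoragePolicy",
         "-path", path, "-policy", policy],
        check=True,
    )

def run_mover(path: str) -> None:
    # The mover relocates blocks that no longer match their path's
    # storage policy, moving data between tiers in the background.
    subprocess.run(["hdfs", "mover", "-p", path], check=True)

set_storage_policy("/data/archive", "COLD")  # example path and policy
run_mover("/data/archive")
```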

