
Recent Posts

Developer Tools

Set Up a Machine-Learning Workbench on Oracle Cloud Infrastructure

Machine learning is fast becoming an integral part of enterprise applications and a core competency for enterprises. A basic machine-learning workflow involves collecting, preparing, and curating data, followed by building, training, evaluating, and deploying models by using different machine-learning algorithms, which you can then use to make predictions. This post focuses on the model development part of the workflow.

A rich ecosystem of open source software and managed offerings is available, and I've picked some popular open source offerings to build an example workbench. Using this workbench, you can start coding and building models. I don't cover model development in depth, but I present one example to show how to use the workbench.

Python and R are commonly used languages for machine learning. I use Python 3 here because it supports a robust set of popular machine-learning frameworks, libraries, and packages, such as TensorFlow, scikit-learn, Matplotlib, SciPy, NumPy, Keras, and pandas. Notebooks like Jupyter and Apache Zeppelin provide integrated environments that incorporate results, visualizations, and documentation inline with code, providing an excellent workbench for data scientists, machine-learning engineers, and other stakeholders to develop, visualize, and share their work.

I show the steps to set up a basic, cloud native, machine-learning workbench from scratch on Oracle Cloud Infrastructure by using Python 3, TensorFlow, and Jupyter Notebook. I use an Oracle Cloud Infrastructure bare metal instance running Oracle Linux. Let's get started.

Launch an Instance

Follow the steps in the documentation to launch a bare metal instance in Oracle Cloud Infrastructure.
I made the following choices:

Region: us-ashburn-1
OS: Oracle Linux 7.6
Instance shape: BM.Standard2.52

Use SSH to connect to the instance:

$ ssh -i <path_to_your_private_key> opc@<public_IP_address_of_your_instance>

I used Oracle Linux and a bare metal shape, but you can make other choices. These steps should work for the most part, regardless of your choices, although some commands will change based on the OS, and you need to use the right libraries for your instance shape.

Install Python 3

By default, Oracle Linux 7 comes with Python 2.7. You can continue to use that, but I'm going to show you how to use Python 3. To install Python 3, follow these steps:

Install the EPEL repository:

[opc@ml-workbench ~]$ sudo yum install -y oracle-epel-release-el7 oracle-release-el7

Install Python 3.6:

[opc@ml-workbench ~]$ sudo yum install -y python36

Set up a Python virtual environment (venv), called mlenv here:

[opc@ml-workbench ~]$ python3.6 -m venv mlenv

Activate the virtual environment:

[opc@ml-workbench ~]$ source mlenv/bin/activate

Using pip3, you can install some of the commonly used Python libraries, such as scikit-learn, Keras, NumPy, Matplotlib, pandas, and SciPy.

Install TensorFlow

Pick the correct TensorFlow package to use. Based on my choices, I can install TensorFlow by using the following command:

(mlenv) [opc@ml-workbench ~]$ pip3 install tensorflow

Install Jupyter and Run a Notebook

Install Jupyter:

(mlenv) [opc@ml-workbench ~]$ python3 -m pip install jupyter

Run a Jupyter notebook:

(mlenv) [opc@ml-workbench ~]$ jupyter notebook --ip=0.0.0.0

You will see a warning such as "No web browser found: could not locate runnable browser."
Connect to Jupyter from Your Local Machine with SSH Tunneling

Open another terminal window, and use the following command with an available port number to access the notebook:

$ ssh -i <path_to_your_private_key> opc@<public_IP_address_of_your_instance> -L 8000:localhost:8888

Open a web browser on your local machine and browse to http://localhost:8000. When you are prompted for a token, use the token listed in your previous terminal window to log in to the Jupyter notebook.

Start Coding

Click the New button to create a new Python 3 notebook. You can also upload existing IPython notebooks. Following is an example from a TensorFlow tutorial for a basic image classifier.

Conclusion

In this post, I set up a machine-learning workbench on an Oracle Cloud Infrastructure bare metal instance running Oracle Linux. I used TensorFlow, a popular open source machine-learning framework, and I described how to install and use Python 3 and Jupyter notebooks, which make it easy to build, train, and deploy models using Python. You can follow similar steps to install other languages, tools, and frameworks in the rapidly evolving machine-learning ecosystem. If you don't have an Oracle Cloud account, you can sign up for a free account and get a $300 credit to start building.
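The basic image-classifier example mentioned under "Start Coding" isn't reproduced in full here, but a minimal sketch in the spirit of the TensorFlow Keras tutorial might look like the following. Note the assumptions: the real tutorial trains on the Fashion-MNIST dataset (tf.keras.datasets.fashion_mnist); random arrays stand in below to keep the sketch self-contained, so the resulting "accuracy" is meaningless.

```python
import numpy as np
import tensorflow as tf

# Stand-in data: random 28x28 "images" and labels for 10 classes.
# Swap in tf.keras.datasets.fashion_mnist.load_data() for the real tutorial.
x_train = np.random.rand(64, 28, 28).astype("float32")
y_train = np.random.randint(0, 10, size=64)

# A small dense classifier: flatten the image, one hidden layer, softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1, verbose=0)

# Each prediction row is a probability distribution over the 10 classes.
preds = model.predict(x_train[:8], verbose=0)
print(preds.shape)
```

Paste this into a new Python 3 notebook cell to confirm the workbench is wired up end to end.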


Product News

Manage Petabytes of Cloud Block Storage in Seconds in Your IaaS Console

You can now leverage a powerful and differentiating feature from Oracle Cloud Infrastructure to manage multiple block storage volumes and boot volumes more quickly and easily. In May 2018, we introduced our volume groups feature, which enables you to streamline the creation and management of groups of block volumes by using the CLI, SDK, and Terraform. The ability to manage volume groups is ideal for the protection and lifecycle management of enterprise applications, which typically require multiple volumes across multiple compute instances to function effectively.

We are now extending our volume groups feature to the web-based Oracle Cloud Infrastructure Console. With volume groups, you can manage petabytes of cloud block storage in a crash-consistent, coordinated manner in seconds and in just a few clicks. You can create groups of your volumes, back up volume groups, restore from volume group backups, and create deep disk-to-disk clones of volume groups, all within a few seconds. That includes the boot volumes for your compute instances and all your storage volumes for crash-consistent instance disaster recovery or environment duplication and expansion.

A single volume group can have up to 128 TB of storage and 32 volumes. If you need to manage more storage space, you can create multiple volume groups. Because the solution is scalable, you can start with a small storage allocation. As your business demand grows, you can add more volumes and groups, and combine that with the offline resize feature, to easily scale and manage petabytes of storage by using the Oracle Cloud Infrastructure Block Volumes service.

With this feature update, we are also improving the policies that control the volume group operations. You now have the ability to manage more granular permissions by separating volume group permissions from other volume operations. For details, see the Block Volumes service documentation on policies.
This feature is free of charge and is available in all regions. You pay only for the amount of provisioned storage, using the Oracle Cloud Infrastructure storage pricing. The following sections show how to use the volume group functionality in the Oracle Cloud Infrastructure Console.

Create a Volume Group

From the navigation menu, select Block Storage, select Volume Groups, and then click Create Volume Group. Enter a name for the volume group, and select and add the volumes to place in it. You can add boot volumes for your compute instances and block volumes. Click Create Volume Group. The volume group is created in seconds.

Add or Remove Volumes as Needed

From the volume group's details page, you can add volumes to the group by clicking Add Block Volume or Add Boot Volume. You can remove a volume by clicking the Actions menu (three dots) for the volume and selecting Remove.

Perform a Crash-Consistent Backup of a Volume Group

From the Actions menu for the volume group, select Create Volume Group Backup. Enter a name for the volume group backup, and then click Create.

Restore from a Volume Group Backup to a Crash-Consistent State

From the Actions menu for the volume group backup, select Create Volume Group.

Create a Clone of a Volume Group

From the Actions menu for the volume group, select Create Volume Group Clone. After entering a name for the clone, click Create. The cloned volume group and the cloned volumes in it become available within seconds.

Conclusion

The Oracle Cloud Infrastructure Console provides a straightforward and simple way to create and manage volume groups. You can perform a crash-consistent, coordinated backup of a group of volumes, and restore from those backups in a few clicks. You can also create crash-consistent, coordinated, deep disk-to-disk volume group clones. The clones become available in seconds, and they are completely isolated from the source volume group and the volumes in it.
You can use volume group clones to duplicate your entire instance OS boot disk and block storage for production troubleshooting, dev/test, and UAT/QA purposes. We want you to experience the block storage features and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It’s easy to take advantage of these capabilities with a US$300 free trial. For more information about block storage, see the Oracle Cloud Infrastructure Getting Started guide, Block Volumes service overview, and FAQ. Watch for announcements in this space about additional features and capabilities. We value your feedback as we continue to make our cloud service the best for enterprises. Send me your thoughts about how we can continue to improve our services or if you want more details about any topic.
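For readers who prefer the CLI, SDK, or Terraform path mentioned above, the Console's Create Volume Group step corresponds to a small JSON request body. The following hypothetical Python helper sketches what that body looks like; the field names reflect my reading of the Block Volume API, and the OCIDs and availability domain name are placeholders. In practice you would send this via the SDK or CLI, which handle request signing for you.

```python
import json

def build_create_volume_group_body(availability_domain, compartment_id,
                                   volume_ids, display_name):
    """Sketch of a CreateVolumeGroup request body (field names assumed)."""
    return {
        "availabilityDomain": availability_domain,
        "compartmentId": compartment_id,
        "displayName": display_name,
        # A new group can be seeded from existing volumes, a volume group
        # backup, or another volume group; a list of volume IDs is simplest.
        "sourceDetails": {"type": "volumeIds", "volumeIds": list(volume_ids)},
    }

body = build_create_volume_group_body(
    "Uhxe:US-ASHBURN-AD-1",                 # placeholder AD name
    "ocid1.compartment.oc1..example",       # placeholder compartment OCID
    ["ocid1.volume.oc1..example1",          # placeholder block volume OCID
     "ocid1.bootvolume.oc1..example2"],     # placeholder boot volume OCID
    "prod-app-group",
)
print(json.dumps(body, indent=2))
```

The same payload shape, expressed as HCL, is what the Terraform volume group resource manages for you.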


Developer Tools

What's Next for Cloud Native Development Technologies?

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. The next wave of cloud native development will be about inclusivity, according to Bob Quillin, Vice President of Developer Relations at Oracle Cloud Infrastructure. That means enabling more enterprises to access and use technologies like containers and serverless, and being more inclusive of traditional, on-premises applications. I recently chatted with Quillin about the future of cloud native development, and he also gave me some updates on Oracle Container Engine for Kubernetes (OKE) and the current state of cloud native security. A condensed version of our conversation follows.

Oracle Cloud Infrastructure unveiled its Container Engine for Kubernetes awhile back. Is OKE mature, and can enterprises start using it? Where does it stand?

Quillin: Yes. It's been out for over a year now. It was one of the earliest platforms to be certified as Kubernetes conformant. As a managed service, it provides a fully managed control plane: it manages the masters for you and provides total lifecycle management. It's currently being used by hundreds of enterprises. It's all based on standard Docker and standard Kubernetes, and it integrates very easily with Oracle Cloud networking, storage, and load balancing. Plus, you can leverage all of the power of our enterprise-grade cloud infrastructure. As a developer advocate, I'm super proud of the fact that we've got a certified Kubernetes managed service running on top of an enterprise-grade cloud. This provides a unique combination of a great cloud with high levels of security and superior performance that can run simple cloud native applications, open source applications, and the most advanced applications out there.
We're finding that it has a lot of interest from the startup community, who are often up and running within an hour or so. I also work with larger organizations who are running WebLogic, Java, and database applications, and those development teams are seeing amazing successes, too. So, it's an inclusive technology that can provide value no matter where you are on the spectrum.

What is happening in the area of cloud native applications and security?

Quillin: I think security was a much more contentious topic in the container world probably three or four years ago. There's been tremendous progress and focus on security since then. Obviously, Oracle has lots of security experts, and enterprise security is certainly one of the core tenets that we push forward in terms of applications, and this will continue to be a major area of focus. I think the registry is an area where there is a lot of good work happening with image scanning, tagging containers, and having registered containers and images. We're working with partners like Twistlock, for example, to integrate their image scanning and tagging into our registry, too. There are several levels of security in our cloud. On top of that you have Kubernetes security, which is both role and application based. You have multiple levels of security that create many different dimensions and options to control or constrain access. The tools are there to create a very secure and stable environment. We still have more work to do because security is always a moving target, but Oracle is a great security partner to have, and continuing to leverage that technology in the container and application world is a big goal for Oracle Cloud.

Looking ahead, what do you think are going to be the hottest new trends?

Quillin: One of them is definitely serverless technologies. I think they are the next big opportunity for standardization and open Cloud Native Computing Foundation (CNCF)-sanctioned activities.
The focus will be on creating more ways to build out serverless applications based on standard technologies, and the CNCF is starting to address that. You'll likely see a lot of progress on that going forward. It's also one of the bigger challenges people are going to face, and the industry needs to come together on that.

What else do you see happening in the future?

Quillin: We're basically exiting this first wave of cloud native, and I think it's pretty clear that there's a set of patterns and methodologies that have emerged to build new cloud native applications. For new greenfield applications, the tooling and technology is at just the right moment. The next big challenge to enable a second wave of cloud native development is reaching out to more underserved communities, being more inclusive in terms of on-premises technologies and traditional technologies, and enabling more enterprise access to these technologies so we can get more organizations to adopt cloud native. That will happen through better training, more managed services so they don't have to do it themselves, and then more blueprints and solutions that provide access to best practices. We could all benefit from finding ways to simplify. Oracle, in particular, is focusing on ways to take away all that complexity from the user. Open standards, more inclusive enterprise strategies, and simplification of complexities are the three big things I'm looking forward to.

If our readers want to learn more about cloud native technologies at Oracle, where should they go?

Quillin: A good starting point is cloudnative.oracle.com. It's a microsite that's a great focal point for learning what's going on in cloud native. It also branches into related content sources throughout Oracle. I'd also recommend going to some local meetups. We spend a lot of time here in Austin, for example, with local meetups.
But our evangelist teams are all over the world, so look for an Oracle Cloud Native Evangelist in your neck of the woods. And if you have an interesting meetup or conference, definitely reach out to us. We're happy to participate.


Security

Addressing the Top Technological Risks in 2019

Nearly 30 years of predominantly digital, technological advances have tremendously affected our world, primarily through the birth, adoption, and growth of the internet. But a set of risks accompanies this phenomenon, according to the latest World Economic Forum Global Risks Report.

The report is derived from data collected through the World Economic Forum's Global Risks Perception Survey, and it incorporates the knowledge and viewpoints of the forum's vast network of business, government, civil society, and thought leaders. According to the report, the world is facing an increasing number of multifaceted and interrelated challenges, including slowing global growth, persistent economic inequality, climate change, geopolitical tensions, and the accelerating pace of the Fourth Industrial Revolution.

The report also raises significant concerns about the following technological risks or instabilities:

The adverse consequences of technological advances
The breakdown of critical information infrastructure and networks
Large-scale cyberattacks
Massive incidents of data fraud and theft

Interestingly, these technological risks rise in importance year over year compared to other, nontechnical risks. For example, massive data fraud and theft was ranked the fourth-highest global risk over a 10-year span, and cyberattacks were ranked fifth. A similar theme can be observed in many of the past reports. Because the internet-related and technology-related risks facing the world are only rising, the need to secure the internet overall has become more critical than ever.

Core to Edge Security

Oracle Cloud Infrastructure was built from the ground up to provide security and availability not only in the cloud core but also at the cloud edge, where users and their devices connect to the cloud, often over the internet.
Oracle Cloud Infrastructure's Edge Services range from Internet Intelligence, which provides global internet performance and availability data, to DNS and web application security services. Oracle Dyn Web Application Security integrates a web application firewall, bot management, and API security in the cloud to keep web applications safe from cyber-induced outages, data breaches, and other threats.

In the Oracle Cloud Infrastructure core, organizations will find compute, storage, connectivity, applications, and database instances protected by multiple layers of data encryption, identity and access management, key management, API management, and configuration and compliance management.

As more organizations move to the cloud, securing both the cloud core and the cloud edge is critical to addressing the technological risks highlighted in the World Economic Forum Global Risks Report.


Push Time-Sensitive Notifications to Many Distributed Applications

As enterprises transform and build modern cloud native applications, they need a foundational, easy-to-use, cloud-scale, publish-subscribe messaging service to help application development teams in the following ways:

Simplify development of event-driven applications
Enable 24x7 DevOps for application development in the cloud
Provide a mechanism for cloud native applications to easily deliver messages to large numbers of subscribers

We are proud to announce the general availability of the Oracle Cloud Infrastructure Notifications service in all Oracle Cloud Infrastructure commercial regions. Notifications is a fully managed publish-subscribe service that pushes messages, such as monitoring alarms, to subscription endpoints at scale. This service delivers secure, low-latency, durable messages for applications hosted anywhere. As part of our initial launch, Notifications supports email and PagerDuty delivery.

Notifications reduces code complexity and resource consumption by pushing messages to endpoints, so your applications no longer need to poll for messages periodically. And because the service provides integration with subscription endpoints such as email and PagerDuty, there's no need for direct point-to-point integration. As part of Oracle Cloud Infrastructure, Notifications is integrated with Identity and Access Management (IAM), which enables fine-grained security-rules enforcement via access control policies. Notifications pricing is intuitive, simple, and elastic; customers pay per message delivered. Notifications is accessible via the Oracle Cloud Infrastructure Console, SDKs, CLI, and REST API, and it also provides Terraform integration.

Getting Started

Getting started with the Notifications service is straightforward in the Oracle Cloud Infrastructure Console and by using the REST API. The following steps show how to create a topic, add subscribers to the topic, and start producing messages by using the topic.
In the Console main menu, navigate to the Notifications section (Application Integration > Notifications) in the appropriate compartment. Click Create Topic. Alternatively, you can use the REST API CreateTopic operation. Specify the topic name, and then click Create.

After the topic is created, add subscribers by clicking Create Subscription. With the REST API, use the CreateSubscription operation. From the Console, choose the protocol, provide the email address for the email protocol or the PagerDuty integration URL for the HTTPS (PagerDuty) protocol, and then click Create. Confirm the subscription to activate the subscriber and start delivering messages to it.

In the Console, click Publish Message. With the REST API, use the PublishMessages operation and pass a payload to produce data to a topic. Specify a title and message, and then click Publish. Verify that the message was delivered to the respective endpoint.

Next Steps

We want you to experience this new service and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It's easy to try it with our US$300 free credit. For more information, see the Oracle Cloud Infrastructure Getting Started guide and the Notifications documentation. Be on the lookout for announcements about additional features and capabilities, including integration with Oracle Functions, generic HTTPS subscription endpoints, message filtering, and custom retries for message delivery. We value your feedback as we continue to enhance our offering and make our service the best in the industry. Let us know how we can continue to improve or if you want more information about any topic. We are excited for what's ahead and are looking forward to building the best publish-subscribe messaging platform.
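As a sketch of what the publish step sends over the wire, the following hypothetical Python helper builds the small JSON message payload for a topic. Everything here is illustrative: the topic OCID is a placeholder, the API path shape (including the version segment) is an assumption, and a real call must be signed, which the SDK, CLI, and Console handle for you.

```python
import json

TOPIC_ID = "ocid1.onstopic.oc1..example"  # placeholder topic OCID

def build_publish_message_body(title, body):
    # The title and body are what subscribers (email, PagerDuty) receive.
    return {"title": title, "body": body}

# Assumed path shape for the publish operation; consult the API reference
# for the actual version segment and endpoint.
path = f"/20181201/topics/{TOPIC_ID}/messages"

message = build_publish_message_body(
    "ALARM: high CPU",
    "CPU utilization exceeded 90% on instance web-01.",
)
print(path)
print(json.dumps(message))
```

Pushing a payload like this once, instead of having every consumer poll, is the code-complexity saving the post describes.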


Developer Tools

Track Critical Health and Performance Metrics with Oracle Cloud Infrastructure Monitoring

In today’s always-connected world, there’s an expectation that the systems that power our apps, businesses, and entertainment won't falter in their reliability, performance, and customer experience. Using the same technology that ensures our infrastructure’s availability, we have built a monitoring and alarming service designed to give our customers the insight that they need to exceed these expectations.

We're proud to announce the launch of the Oracle Cloud Infrastructure Monitoring service in all Oracle Cloud Infrastructure commercial regions. The Monitoring service gives you the insight that you need to understand the health of your resources, optimize the performance of your applications, and respond to anomalies in real time. Out-of-the-box metrics and dashboards are provided for your Oracle Cloud Infrastructure resources such as compute instances, block volumes, virtual NICs, load balancers, object storage buckets, and more. You can also have your applications emit their own custom metrics, enabling you to visualize, monitor, and alert on all critical time-series data in one place. Combined with a powerful query language, Oracle Cloud Infrastructure Monitoring includes a robust metrics engine that enables flexible aggregation and complex queries across multiple metric streams and dimensions in real time.

Alarms are a key component of the Monitoring service. The service quickly detects fluctuations in performance or health and notifies you about them. The alarm definition language lets you create a variety of alarm types that can use multiple statistics, trigger operators, time intervals, and wildcards for applying them to an entire fleet of resources. Alarms are integrated with the newly released Oracle Cloud Infrastructure Notifications service, which delivers messages to destinations such as PagerDuty and email securely and reliably.

Getting started is straightforward.
Use the Metrics Explorer available in the Oracle Cloud Infrastructure Console to search and visualize multiple metrics across various dimensions and time. Alternatively, you can use the Oracle Cloud Infrastructure Data Source for Grafana, available from GitHub and the Grafana Marketplace, to natively access metrics directly from Grafana. Additional features and capabilities are coming in the near future, including an expanded list of Oracle Cloud Infrastructure resource metrics, import and export functionality, and customizable retention times. We are excited to share this initial launch as a first step towards providing a best-in-class cloud monitoring solution. We want you to experience these new features and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It’s easy to try them with our US$300 free credits. Beyond the free tier, Oracle Cloud Infrastructure Monitoring provides a simple, competitive pricing model based on two dimensions: the number of data points sent as custom metrics and the number of data points used for analysis. For more information, see the Oracle Cloud Infrastructure Resource Monitoring service essentials and the Oracle Cloud Infrastructure Monitoring documentation.
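The custom-metrics path described above boils down to posting datapoints for a named metric in a namespace. The following hypothetical Python helper sketches that payload; the field names reflect my reading of the Monitoring API, and the namespace, compartment OCID, metric name, and dimensions are placeholders chosen for illustration.

```python
import json
from datetime import datetime, timezone

def build_metric_data(namespace, compartment_id, name, value, dimensions):
    """Sketch of one MetricDataDetails entry for posting a custom metric."""
    return {
        "namespace": namespace,
        "compartmentId": compartment_id,
        "name": name,
        # Dimensions are key-value qualifiers you can later filter and
        # group by in the Metrics Explorer or in alarm queries.
        "dimensions": dimensions,
        "datapoints": [
            {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "value": value,
            }
        ],
    }

payload = {"metricData": [
    build_metric_data(
        "custom_app_metrics",                # custom namespace (placeholder)
        "ocid1.compartment.oc1..example",    # placeholder compartment OCID
        "checkout_latency_ms",               # placeholder metric name
        42.0,
        {"resourceDisplayName": "web-01"},
    )
]}
print(json.dumps(payload, indent=2))
```

Once posted (via the SDK or a signed REST call), such a metric appears alongside the out-of-the-box resource metrics and can drive alarms like any other time series.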


Performance

Announcing Parallel File Tools for File Storage

We are excited to share a new suite of Parallel File Tools that take advantage of the full performance of the Oracle Cloud Infrastructure File Storage service. File Storage offers elastic performance, where throughput grows with the amount of data stored in every file system. File Storage is best suited for parallel workloads that can use the scalable throughput. The Parallel File Tools provide parallel versions of tar, rm, and cp that run requests on large file systems in parallel, enabling you to make the best use of the performance characteristics of the File Storage service. The current toolkit is distributed as an RPM for Oracle Linux, Red Hat Enterprise Linux, and CentOS.

The current Parallel File Toolkit includes:

partar, a parallelized subset of tar functionality to create and extract tarballs in parallel
parrm, a parallelized recursive remove of a directory
parcp, a parallelized recursive copy of a directory

Installing and Running

Use the following commands for your OS.

Oracle Linux:

sudo yum install -y fss-parallel-tools

CentOS and Red Hat 6.x:

sudo wget http://yum.oracle.com/public-yum-ol6.repo -O /etc/yum.repos.d/public-yum-ol6.repo
sudo wget http://yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
sudo yum --enablerepo=ol6_developer install fss-parallel-tools

CentOS and Red Hat 7.x:

sudo wget http://yum.oracle.com/public-yum-ol7.repo -O /etc/yum.repos.d/public-yum-ol7.repo
sudo wget http://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
sudo yum --enablerepo=ol7_developer install fss-parallel-tools

Using the Manual

To display the man pages, use the following commands:

man partar
man parrm
man parcp

Reporting Bugs or Enhancement Requests

Install the debuginfo RPM, rerun the command to collect the core, and send that to us at parallel-tools-support_ww@oracle.com with the version of the package and all the error messages that the command emits.
To ensure that you have the latest version, run yum update (or yum install) for the fss-parallel-tools package. To install the debuginfo package, run:

sudo debuginfo-install fss-parallel-tools

What's Next?

In the near future, we plan to provide support for Ubuntu.

More About File Storage

Oracle Cloud Infrastructure File Storage is a fully managed, network-attached storage service that offers high scalability, durability, and availability for your data in any Oracle Cloud Infrastructure availability domain. File Storage provides the reliability and consistency of traditional NFS filers and offers enterprise-grade file systems that can scale up in the cloud without any upfront provisioning. You can start with a file system that contains only a few kilobytes of data and grow it to 8 exabytes of data. Because File Storage is a fully managed service, you don't have to worry about capacity planning, software upgrades, security patches, hardware installation, or storage maintenance. You pay only for the capacity stored each month; you stop paying as soon as you delete your data.

File Storage supports the NFS version 3 protocol with Network Lock Manager (NLM) as the locking mechanism to provide POSIX interfaces. You can also use the NFS client on Microsoft Windows to access File Storage. File Storage protects your data by maintaining multiple replicas locally, along with encryption and the ability to take frequent snapshots. Within an availability domain, File Storage uses synchronous replication and high availability failover to keep your data safe and available.

Interested in trying File Storage? I can help. Just sign up for a free trial or drop me a line at mona.khabazan@oracle.com.

Mona Khabazan, Principal Product Manager, Oracle Cloud Infrastructure File Storage

References

https://cloud.oracle.com/storage/file-storage/features
https://cloud.oracle.com/storage/file-storage/faq
https://cloud.oracle.com/en_US/storage/tutorials
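The idea behind parcp, issuing many independent file operations concurrently so a throughput-scaled backend like File Storage can serve them in parallel, can be illustrated with a toy Python sketch. This is not the real tool (parcp is a compiled utility from the RPM above); it only mirrors the approach on a local directory tree.

```python
import os
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor

def parallel_copy(src_root, dst_root, workers=8):
    """Copy a directory tree, dispatching each file copy to a thread pool."""
    tasks = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            target_dir = os.path.join(dst_root, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                # Each file copy is an independent request; on a parallel
                # filesystem these can proceed concurrently.
                tasks.append(pool.submit(
                    shutil.copy2,
                    os.path.join(dirpath, name),
                    os.path.join(target_dir, name),
                ))
    # result() re-raises any copy error; count successful copies.
    return sum(1 for t in tasks if t.result())

# Demo on a throwaway tree of 10 small files.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for i in range(10):
    with open(os.path.join(src, f"file{i}.txt"), "w") as f:
        f.write("x" * 100)

copied = parallel_copy(src, os.path.join(dst, "copy"))
print(copied)  # 10
```

Serial tools like plain cp issue one request at a time, which leaves most of a file system's elastic throughput idle; the fan-out above is what partar, parrm, and parcp do at much larger scale.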


Product News

Introducing Oracle Cloud Infrastructure Load Balancing Metrics

In today’s digital world, customers expect applications to be always available and responsive, and to provide a superior end-user experience. As the first gateway between users and an application, load balancers are a critical piece of any scalable application infrastructure. An unhealthy or improperly configured load balancer can cause degraded user experiences like higher latency, reachability errors, or, much worse, an application outage, which often leads to customer churn and lost business. It's imperative to have meaningful metrics on your load balancer that can provide insights on the health of your application and help remediate issues faster.

Introducing Load Balancing Service Metrics

Oracle Cloud Infrastructure Load Balancing service metrics provide an array of critical metrics to proactively monitor the health and load of your Oracle load balancer infrastructure. The Load Balancing service metrics measure the number and type of connections, the HTTP responses, and the quantity of data managed by your load balancer. These metrics are statistics calculated from relevant data points as an ordered set of time-series data, and they are divided into load balancer, listener, and backend set component groups.

Accessing Load Balancing Service Metrics

The service metrics are an integral part of the Oracle Cloud Infrastructure Monitoring service and are automatically available for any load balancer that you create in your tenancy. You don't need to enable monitoring on the resource to get these metrics. In the Oracle Cloud Infrastructure Console, you can view the metrics details for load balancers in your compartment by selecting Monitoring > Service Metrics from the navigation menu, selecting your compartment, and then selecting oci_lbaas from the Metric Namespace menu.

Figure 1: Load Balancing Service Metrics

You can further filter or group the service metrics by dimensions such as availability domain, backend set name, listener name, region, or OCID.
You do this by adding a dimension filter under the Dimensions option on the Service Metrics page.

Figure 2: Filtering Based on Metric Dimensions

The metrics are automatically refreshed every minute. You can modify the metrics time-interval data on the charts to one-minute, five-minute, or one-hour time periods. You can modify the aggregate statistic to perform functions such as Rate, Mean, and Sum by choosing the option from the Statistic menu. You can also view the metrics for a load balancer by navigating to the details page for the load balancer and accessing the Metrics tab under Resources. Similarly, you can view the metrics for a specific backend set by navigating to the Metrics tab on the backend set's details page. The Load Balancing service metrics are also available through the Monitoring API endpoint. You can use the Monitoring API to manage metric queries, alarms, and the performance of your load balancing resources.

Using Load Balancing Metrics to Deliver a Great Digital Experience

One of the key objectives of our Monitoring service is to deliver metrics that provide actionable insights that enable you to deliver a great digital experience for your end users. For example, you can use the load balancing metrics to understand your baseline performance metrics, such as the average and peak-time traffic trends over time. The metrics can also be used as a demand signal for business decisions such as future capacity planning. Let's walk through an example scenario. You are the head of operations for a travel website hosted on Oracle Cloud Infrastructure. Your business has been running a social media campaign for a summer travel sale. You have been tasked to ensure that users have a great digital experience and that no business is lost because of application infrastructure issues. You wonder, can load balancing metrics help in this scenario? Absolutely!
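To make the statistic options concrete, here is a small, self-contained Python sketch (an illustration of the concept, not OCI code) of how raw data points are grouped into fixed intervals and aggregated with a chosen statistic:

```python
from collections import defaultdict

def aggregate(datapoints, interval_s=60, statistic="sum"):
    """Group (timestamp_seconds, value) pairs into fixed intervals and
    aggregate each interval with the chosen statistic."""
    buckets = defaultdict(list)
    for ts, value in datapoints:
        # Align each timestamp to the start of its interval.
        buckets[ts - ts % interval_s].append(value)
    agg = {"sum": sum, "mean": lambda v: sum(v) / len(v), "max": max}[statistic]
    return {start: agg(values) for start, values in sorted(buckets.items())}
```

For example, three 502 responses logged within the same minute aggregate into a single data point with a Sum of 3, which is exactly the kind of per-minute series the charts display.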
You can leverage the Inbound Requests, Active Connections, and Bytes Received load balancing metrics, in addition to your compute metrics, to gather insights on incoming traffic patterns and predict load balancer and compute capacity needs. The service metrics enable you to make data-driven decisions and dynamically adjust to the changing needs of your application infrastructure.

Troubleshooting Scenario: HTTP 502 Bad Gateway

Apart from proactive monitoring and management, load balancing metrics also help you identify, isolate, and troubleshoot issues with your load balancer infrastructure. In this example scenario, you are deploying a new web application, ociexample.com, with an Oracle Cloud Infrastructure public load balancer as the front end in your development environment. However, when you try to access the application, you see an HTTP 502 response in the browser. Let's explore how load balancing metrics can help you troubleshoot this issue. When you browse to the load-balanced IP address, you see a 502 Bad Gateway error. You can confirm this behavior by running a curl test:

curl -v http://ociexample.com
> GET / HTTP/1.1
> Host: 129.146.93.99
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Content-Type: text/html
< Content-Length: 161
< Connection: keep-alive

In the Oracle Cloud Infrastructure Console, navigate to Monitoring > Service Metrics. Select your compartment and select oci_lbaas as the metric namespace. You will notice that an HTTP 502 response appears for each curl or browser test. Navigate to the Load Balancer Details page, and note that the load balancer backend set health is critical.
If you run the same curl test against the IP address of the instance, you get the following error:

connect to 129.146.161.17 port 80 failed: Connection refused
Failed to connect to 129.146.161.17 port 80: Connection refused
Closing connection 0

However, you can log in to the backend instance via SSH, and running curl -v http://127.0.0.1/ returns an HTTP 200 OK response:

HTTP/1.1 200 OK
Server: Apache/2.4.6 ()
Accept-Ranges: bytes
Content-Length: 5
Content-Type: text/html; charset=UTF-8

In this scenario, the host firewall is preventing external traffic from reaching the instance on port 80. To resolve the issue, open port 80 on the firewall by running firewall-offline-cmd --add-port=80/tcp, and then restart the firewall with systemctl restart firewalld.

Setting Up Alarms and Notifications

The Monitoring service provides alarms and notifications functionality that is tightly integrated with the metrics. We recommend setting up alarms and notifications so that you are proactively notified of deviations from your baseline metrics. Let's walk through the steps to create an alarm and a notification for the HTTP 502 responses in our previous example. On the metric chart for which you want to create an alarm, click Options and then select Create an Alarm on this Query. Select a name, severity, and message body for the alarm message. Keep the Compartment, Metric Namespace, and Metric Name values the same, but adjust the interval and statistic as needed. You can optionally set up a metric dimension, such as region, to filter the alarm. Create a trigger rule condition so that the alarm fires when the condition is met. Create a notification, which is the most common approach to managing alarms. You can add a list of recipients to a notification, and those recipients are emailed a notification in the event of an alarm.
The Monitoring service also supports a native integration with PagerDuty, which allows companies to configure services, on-call rotations, acknowledgment requirements, and escalation rules for inbound notifications. To set up an email notification, select Notification Service, click Create a topic, enter a name and a description for the topic, select Email as the subscription protocol, enter the email address or addresses to send notifications to, click Create topic and subscription, and then click Save alarm.

Next Steps

We recommend that you use the Monitoring service and Load Balancing service metrics to monitor any critical application that you are delivering, whether it's hosted solely in Oracle Cloud Infrastructure or across your hybrid environment. Load Balancing metrics will be extended to include more metrics across the Load Balancing infrastructure. If there is a specific metric or integration that you would like us to support, let us know. For more information, see Monitoring Overview and Load Balancing Metrics in the Oracle Cloud Infrastructure documentation. If you haven't tried Oracle Cloud Infrastructure yet, you can try it for free.
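The behavior of a trigger rule can be modeled with a short Python sketch. This is an illustration of the concept only, not the Monitoring service's actual implementation: the alarm fires only when the condition holds for a sustained number of consecutive intervals.

```python
def should_fire(sums_per_minute, threshold=0, pending_minutes=3):
    """Return True when the per-minute metric sum exceeds `threshold`
    for `pending_minutes` consecutive intervals."""
    consecutive = 0
    for value in sums_per_minute:
        # Reset the streak whenever the condition stops holding.
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= pending_minutes:
            return True
    return False
```

With a threshold of 0 on the 502-response sum, a single transient 502 doesn't page anyone, but three bad minutes in a row do.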


Developer Tools

Getting Started with the Resource Manager on Oracle Cloud Infrastructure

Continuing our commitment to openness and open source software, today we're announcing the general availability of Resource Manager, a fully managed service that makes it significantly easier to use HashiCorp Terraform on Oracle Cloud Infrastructure. This is especially true for large or distributed enterprise development teams that share common infrastructure. Teams can use Resource Manager's deep integration with the Oracle Cloud Infrastructure platform by, for example, creating policies to constrain the operations that different users and groups can perform. We previewed Resource Manager at KubeCon + CloudNativeCon North America in December 2018. This article provides a high-level overview and the steps to get started. To get the most out of this walkthrough, the reader should understand the basics of Terraform and have some experience using it to manage cloud infrastructure. If you need a refresher, take a moment to review the basics. If you don't have an account on Oracle Cloud Infrastructure and want to work through these examples, you can create a free account and get started with up to 3,500 free hours. When migrating their Terraform workflows to use Resource Manager, developers will be happy to know that all of the Oracle Cloud Infrastructure Terraform Provider functionality is completely supported. Resource Manager is the latest example of Oracle Cloud Infrastructure's commitment to openness by building on top of powerful open source technologies like Terraform and honoring Oracle's commitment to preserving our customers' choice to migrate their workloads with minimal impact to their business, code, and runtime. "We are excited to partner with Oracle on Resource Manager and have it power the core infrastructure provisioning on Oracle Cloud Infrastructure.
With Resource Manager, HashiCorp Terraform becomes accessible to even more users who care about open source standards and are looking to deploy their applications consistently on Oracle Cloud Infrastructure.” –Burzin Patel, VP, Worldwide Alliances, HashiCorp

Overview

At the highest level, Resource Manager is an orchestration service for managing Oracle Cloud Infrastructure resources by using Terraform. Using Resource Manager introduces many benefits. For example, it:

Improves team collaboration: Securely manages Terraform state files and provides state locking when needed. Terraform uses state files to map resources and track metadata when creating, modifying, or destroying infrastructure. This enables teams to understand existing infrastructure and see any changes when running a Terraform plan.

Enables automation: Unlocks infrastructure automation capabilities through fully supported SDKs, APIs, and CLI tooling.

Integrates seamlessly with the platform: Enables developers to leverage the entire Oracle Cloud Infrastructure API catalog (for example, authentication). You can create policies that govern access to Resource Manager stacks and jobs (more about those later).

Provides a familiar console experience: Conveniently manage Oracle Cloud Infrastructure resources and other services that also leverage identity federation.
The Resource Manager workflow is straightforward: you upload a bundled set of Terraform configuration files, associate the bundle with a user-defined stack, and then run a job's Terraform action against the stack. A stack defines a distinct set of cloud resources within a compartment, and a job is the set of Terraform commands that can be executed against a stack. The supported Terraform commands are plan, apply, and destroy. After you perform a Terraform action, the refreshed Terraform state file is securely managed by Resource Manager and is subsequently referenced for each Terraform action coordinated by Resource Manager. By managing the Terraform state file for you, Resource Manager ensures a single source of truth for the current state of the infrastructure.

Target Architecture

The source code for this walkthrough is available on GitHub. It deploys a public regional load balancer on Oracle Cloud Infrastructure as described in the Getting Started with Load Balancing tutorial. The resources defined in the Terraform configuration files are as follows:

A compartment, and a virtual cloud network in a single region that contains an internet gateway
A regional load balancer in a public subnet, a listener, and a failover load balancer also within a public subnet
Two backend sets, each with a single compute instance in separate private subnets within a different availability domain

The following diagram shows the target architecture. In this post, we manage Oracle Cloud Infrastructure resources with Resource Manager in the Oracle Cloud Infrastructure Console.

Walkthrough

This walkthrough assumes that you've created a compartment named orm-demo-cmpt and a group named orm-demo-admin-grp, and added a user to that group. If you need help creating these, see the documentation on managing compartments, users, and groups.

Step 1: Create an IAM Policy

In the Console navigation menu, select Identity and then click Policies. On the Policies page, click Create Policy.
In the Create Policy dialog box, name the policy orm-demo-admin-policy. This policy gives the orm-demo-admin-grp group the permissions to manage all Resource Manager stacks and jobs in the orm-demo-cmpt compartment. Click Create. The policy is listed on the Policies page.

Note: In a production system, it's both more secure (principle of least privilege) and more practical to create additional groups with more granular permissions. For example, it's likely that we'd need to create a development team group that can only use predefined stacks and run jobs against them (use-orm-stack and use-orm-job, respectively).

Step 2: Create the Stack

As mentioned earlier, a stack represents the definitions for a collection of Oracle Cloud Infrastructure resources within a specific compartment. In this step, we configure a new stack in the orm-demo-cmpt compartment in the us-phoenix-1 region and name it HA Load Balanced Simple Web App. As the stack's name suggests, its configuration files define the load balancing, networking, and compute resources to deploy the target architecture, plus an HTTP server. In the Console navigation menu, select Resource Manager and then click Stacks. Click Create Stack. Before you upload the Terraform files to Resource Manager, bundle them into a zip archive. In the Create Stack dialog box, verify the compartment (and change it if necessary), provide a meaningful name and description for the stack, upload the zip archive of Terraform configuration files, and add the keys and values for any Terraform variables referenced in the uploaded bundle.

Important: Don't include confidential information (for example, SSH private key values and authentication credentials) in variables, tags, or descriptions. Depending on the context, consider using a secret vault like the Oracle Cloud Infrastructure Key Management service or HashiCorp Vault, as appropriate.
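For context, the bundle you upload in this step is just a zip archive of ordinary Terraform configuration files. A minimal, purely illustrative sketch of one such file follows; the resource name, variable name, and CIDR block are hypothetical and not taken from the walkthrough's GitHub source:

```hcl
# One *.tf file from a stack bundle. The compartment OCID is supplied
# through the stack's variables in the Create Stack dialog box.
variable "compartment_ocid" {}

resource "oci_core_virtual_network" "demo_vcn" {
  compartment_id = "${var.compartment_ocid}"
  cidr_block     = "10.0.0.0/16"
  display_name   = "orm-demo-vcn"
}
```

Because the variable has no default, Resource Manager prompts you to supply its value as a stack variable, which keeps environment-specific identifiers out of the bundle itself.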
A description can help teams quickly identify a stack's use and purpose, which is especially helpful as the infrastructure increases in complexity. Also, when Resource Manager uploads the zip archive, it looks in the root directory for all *.tf files. If the Terraform files are not located in the root directory, add the relative path to the subdirectory that contains those files in the Working Directory field. Lastly, tags address many use cases, from associating stacks with environments in a CI/CD build pipeline to enterprise cost-center allocation. For more information about tagging, see Managing Tags and Tag Namespaces. Before moving on to running a job, quickly review the new stack and then click the hyperlinked stack name.

Step 3: Execute Jobs: Plan, Apply, and Destroy

As previously mentioned, jobs perform actions against the Terraform configuration files associated with a stack. The Terraform actions that a job can perform are plan, apply, and destroy. Because Terraform command execution is not atomic, it's crucial to prevent race conditions or state corruption caused by parallel execution. To prevent this, Resource Manager ensures that only one job at a time can run against a given stack and its state file. On the Stack details page, you can completely manage the stack's configuration (for example, update and delete the stack, add tags, and edit variables) and also download the zip archive that contains the latest Terraform configuration (which can be especially helpful when troubleshooting). From the Terraform Actions menu, select Plan. In the Plan dialog box, optionally provide a more readily identifiable name and specify a tag to associate with this action. Then, click Plan to run the job. In the Jobs section of the stack's details page, the job's state appears as Accepted, which indicates that the platform is spinning up the necessary resources to run the command.
The state changes to In Progress and then to either Succeeded or Failed. Click the Actions menu (three dots) to display actions related to the job. You can also click the job name to view the job's details and its logs, which contain the Terraform output. You can scroll through the logs or download them. Because the plan action succeeded, select Apply from the Terraform Actions menu. Make any necessary changes in the dialog box, and then click Apply. The job's state is updated as the job execution nears completion. After the apply action succeeds, verify that the resources have been provisioned by reading the Terraform output contained in the logs. You can also navigate to the Networking section of the Console to view the different resources that now exist (VCN, load balancer, subnets, and so on), and to the Compute section to view the two instances. Now that we've successfully applied Terraform to build some cloud resources, we can use Resource Manager to release them. From the Terraform Actions menu, select Destroy. In the Destroy dialog box, accept the defaults (or customize the values) and then click Destroy. The state change is reflected in the Console. To verify that the command completed successfully and that all resources are released, click the job name and view the Terraform output in the logs.

Step 4: Delete the Stack

On the Stack details page, click Delete Stack. In the confirmation message that appears, click Delete to confirm the action. The stack disappears from the list of stacks.

Wrapping Up

There is a lot of history and momentum behind Oracle’s commitment to open source, and Oracle Cloud Infrastructure is making rapid progress in building out a truly open public cloud platform. See for yourself by creating a free account with up to 3,500 free hours.


Events

AI, IoT, and Blockchain on Full Display at Oracle Cloud Day Leadership Summit

Cloud technology has been transformative across all types of businesses, industries, and products. One fascinating result of the rise of cloud computing is that it's enabling enterprises to adopt emerging technologies—like artificial intelligence (AI), blockchain, digital assistants, cloud native technologies, and the internet of things (IoT)—faster than ever before. All of these technologies and more will be on display at the upcoming Oracle Cloud Day Leadership Summit—along with plenty of expert advice and discussions about how you can use them to build the future of your organization. Businesses are no longer experimenting with emerging technologies in a sandbox. Instead, they're applying them in meaningful ways across their operations, resulting in new business, new value, and more sophisticated applications than ever before. In the coming years, cloud providers and the enterprises that they serve will move toward a next-generation cloud model in which they'll have access to these new technologies, better security, improved price-performance, and deep automation capabilities. Based on research conducted by Oracle and independent IT analyst firms, we predict that by 2025:

80 percent of all enterprise workloads will run in the cloud.
AI and other emerging technologies will double productivity around the world.
More than 50 percent of data will be managed autonomously.

Taken together, these predictions demonstrate the need for organizations to adopt a comprehensive enterprise cloud strategy and cloud native IT environments. The Oracle Cloud Day Leadership Summit is a great place to start. Join us for an afternoon dedicated to looking ahead with leaders just like you. The next Oracle Cloud Day Leadership Summit is happening in Houston, TX, in early March. Here are the details:

Date: March 6, 2019
Time: 2 p.m.–6 p.m. (CST)
Location: Karbach Brewing Co., 2032 Karbach Street, Houston, TX 77092

Register for the Houston event. But if you can't make it to Houston, don't worry. Several more Oracle Cloud Day Leadership Summit events are happening soon:

Denver, CO: March 12, 2019
Minneapolis, MN: March 20, 2019
Montreal, QC, Canada: March 28, 2019
Vancouver, BC, Canada: April 3, 2019
El Segundo, CA: April 9, 2019

More Reasons to Attend the Summit

The Oracle Cloud Day Leadership Summit is a half-day event designed to help technology leaders stay up-to-date on the latest technologies, use cases, and trends. Join us for an afternoon dedicated to learning, exploring, and networking. This quick, information-rich event will have you generating new ideas in just one afternoon. Some highlights include:

An informative, densely packed keynote featuring cloud visionaries and thought leaders
Unique and inspiring customer success stories
Networking with peers in a relaxed setting at cool local venues

The Oracle Cloud Day Leadership Summit will bring you up-to-date on the latest emerging technologies and trends. Learn more about the Oracle Cloud Day Leadership Summit today!


Oracle Cloud Infrastructure

Making It Easier to Move Oracle-Based SAP Applications to the Cloud

For decades, Oracle has provided a robust, scalable, and reliable infrastructure for SAP applications and customers. For over 30 years, SAP and Oracle have worked closely to optimize Oracle technologies with SAP applications to give customers the best possible experience and performance. The most recent certification of SAP Business Applications on Oracle Cloud Infrastructure makes sense within the context of this long-standing partnership. As this blog post outlines, SAP NetWeaver® Application Server ABAP/Java is the latest SAP offering to be certified on Oracle Cloud Infrastructure, providing customers with better performance and security for their most demanding workloads, at a lower cost.

Extreme Performance, Availability, and Security for SAP Business Suite Applications

Oracle works with SAP to certify and support SAP NetWeaver® applications on Oracle Cloud Infrastructure, which makes it easier for organizations to move Oracle-based SAP applications to the cloud. Oracle Cloud enables customers to run the same Oracle Database and SAP applications, preserving their existing investments while reducing costs and improving agility. Unlike products from first-generation cloud providers, Oracle Cloud Infrastructure is uniquely architected to support enterprise workloads. It is designed to provide the performance, predictability, isolation, security, governance, and transparency required for your SAP enterprise applications. And it is the only cloud optimized for Oracle Database. Run your Oracle-based SAP applications in the cloud with the same control and capabilities as in your data center. There is no need to retrain your teams. Take advantage of performance and availability equal to or better than on-premises. Deploy your highest-performance applications (those that require millions of consistent IOPS and millisecond latency) on elastic resources with pay-as-you-go pricing. Benefit from simple, predictable, and flexible pricing with universal credits.
Manage your resources, access, and auditing across complex organizations. Compartmentalize shared cloud resources by using a simple policy language to provide self-service access with centralized governance and visibility. Run your Oracle-based SAP applications faster and at lower cost.

Moving SAP Workloads: Use Cases

There are a number of different editions and deployment options for SAP Business Suite applications. As guidance, we are focusing on the following use cases:

Develop and test in the cloud: Test new customizations or new versions, validate patches, and perform upgrades and point releases.
Backup and disaster recovery in the cloud: Use an independent data center for high availability and disaster recovery, with a duplicated environment in the cloud for applications and databases.
Extend the data center to the cloud: Handle transient workloads (training, demos) and rapid implementation for an acquired subsidiary, geographic expansion, or separate lines of business.
Production in the cloud: Reduce reliance on or eliminate on-premises data centers, and focus on strategic priorities and differentiation, not on managing infrastructure.

Oracle Cloud Regions

Today we have four Oracle Cloud Infrastructure regions, and we've announced new regions coming in the next months. This provides the global coverage that enterprises need. For additional details, see Oracle Cloud Infrastructure Regions.

SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure

Oracle Cloud Infrastructure offers hourly and monthly metered bare metal and virtual machine compute instances with up to 51.2 TB of locally attached NVMe SSD storage or up to 1 PB (petabyte) of iSCSI-attached block storage. A bare metal instance with 51.2 TB of NVMe flash storage is capable of around 5.5 million 4K IOPS at less than 1 ms latency, making it an ideal platform for an SAP NetWeaver® workload using an Oracle Database. Block volumes deliver 60 IOPS per GB, up to a maximum of 25,000 IOPS per block volume, backed by Oracle's industry-first performance SLA.
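The block volume performance quoted above follows a simple formula. The following Python sketch is my arithmetic based on the figures in this post, not an official sizing calculator:

```python
def block_volume_iops(size_gb):
    """60 IOPS per GB, capped at 25,000 IOPS per block volume,
    per the figures quoted in this post."""
    return min(size_gb * 60, 25_000)
```

So a 100 GB volume gets 6,000 IOPS, and any volume of roughly 417 GB or larger hits the 25,000 IOPS per-volume cap.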
Instances in Oracle Cloud Infrastructure are attached to a 25-Gbps nonblocking network with no oversubscription. Each compute instance running on bare metal has access to the full performance of the interface, and virtual machine instances can rely on guaranteed network bandwidths and latencies; there are no “noisy neighbors” to share resources or network bandwidth with. Compute instances in the same region are always less than 1 ms away from each other, which means that your SAP application transactions are processed in less time, and at a lower cost, than with any other IaaS provider. To support highly available SAP deployments, Oracle Cloud Infrastructure builds regions with at least three availability domains. Each availability domain is a fully independent data center, with no fault domains shared across availability domains. An SAP NetWeaver® Application Server ABAP/Java landscape can span multiple availability domains.

Planning Your SAP NetWeaver® Implementation

For detailed information about deploying SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure, see the SAP NetWeaver Application Server ABAP/Java on Oracle Cloud Infrastructure white paper. This document also provides platform best practices and details about combining Oracle Cloud Infrastructure, Oracle Linux, Oracle Database instances, and SAP application instances to run software products based on SAP NetWeaver® Application Server ABAP/Java in Oracle Cloud Infrastructure.

Topologies of SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure

There are various installation options for SAP NetWeaver® Application Server ABAP/Java. You can place one complete SAP application layer and the Oracle Database on a single compute instance (a two-tier SAP deployment). Or you can install the SAP application layer instance and the database instance on two different compute instances (a three-tier SAP deployment).
Based on the sizing of your SAP systems, you can deploy multiple SAP systems on one compute instance in a two-tier way, or distribute them across multiple compute instances in two-tier or three-tier configurations. To scale a single SAP system, you can configure additional SAP dialog instances (DI) on additional compute instances.

Recommended Instances for SAP NetWeaver® Application Server ABAP/Java Installation

You can use the following Oracle Cloud Infrastructure Compute instance shapes to run the SAP application and database tiers.

Bare Metal Compute: BM.Standard1.36, BM.DenseIO1.36, BM.Standard2.52, BM.DenseIO2.52
Virtual Machine Compute: VM.Standard2.1, VM.Standard2.2, VM.Standard2.4, VM.Standard2.8, VM.Standard2.16, VM.DenseIO2.8, VM.DenseIO2.16

For additional details, review the white paper referenced in the "Planning Your SAP NetWeaver® Implementation" section.

Technical Components

An SAP system consists of several application server instances and one database system. In addition to multiple dialog instances, the System Central Services (SCS) instance for AS Java and the ABAP System Central Services (ASCS) instance for AS ABAP provide the message server and enqueue server for both stacks. The following graphic gives an overview of the components of the SAP NetWeaver® Application Server.

Conclusion

This post provides guidance about the main benefits of using Oracle Cloud Infrastructure for SAP NetWeaver® workloads, along with the topologies, main use cases, and the installation and migration process. To learn more about migrating SAP to Oracle Cloud, register for our March 6 webinar. And for more information, review the following additional resources.

Additional Resources

SAP NetWeaver® Application Server ABAP/Java on Oracle Cloud Infrastructure white paper
Oracle Cloud Infrastructure technical documentation
Oracle Cloud for SAP Overview
SAP Solutions Portal
SAP on Oracle Community
High Performance X7 Compute Service Review and Analysis


Product News

Announcing the Launch of Traffic Management

Today we’re excited to announce the release of DNS traffic management on Oracle Cloud Infrastructure. Our industry-leading DNS is already the most reliable and resilient DNS network in the world. Now you can use customizable steering policies to provide an optimal end-user experience based on factors such as endpoint availability and end-user location. In addition to traditional global load-balancing capabilities, like distributing traffic across two Oracle Cloud Infrastructure regions, Oracle Cloud Infrastructure DNS now provides granular control over incoming queries and where to route these requests. Traffic management steering policies support a variety of use cases, ranging from a simple point-A-to-point-B failover to the most complex enterprise architectures. This support enables you to set predictable business expectations for service differentiation, geographic market targeting, and disaster recovery scenarios. Your rule sets can direct global traffic to any assets under your control, allowing you to optimize user experience and infrastructure efficiency in both hybrid and multicloud scenarios.

Why Do I Need Traffic Management Steering Policies?

Traffic management—a critical component of DNS—lets you configure routing policies for serving intelligent responses to DNS queries. Different answers can be served for a query according to the logic in the customer-defined traffic management policy, thus sending users to the most optimal location in your infrastructure. Oracle Cloud Infrastructure DNS is also optimized to make the best routing decisions based on current conditions on the internet.

What Are the Most Common Use Cases?

As more enterprises move to cloud and hybrid architectures, high availability and disaster recovery are more important than ever before. DNS failover provides the ability to move traffic away from an endpoint when it is unresponsive and send it to an alternate location.
The availability status is monitored by Oracle health checks, now also available in Oracle Cloud Infrastructure. Cloud-based DNS load balancing is ideal for scaling your infrastructure across multiple geographic regions. Common use cases include scaling out new infrastructure, cloud migration, and controlling the release of new features across your user base. It’s easy to set up “pools” of endpoints across all of your infrastructure (on-premises or cloud-based) and assign ratio-based weighting. Source-based steering lets you automate routing decisions based on where requests are coming from. These source-based policies can route traffic to the closest geographic assets, create a “split horizon” to control traffic based on the IP prefix of the originating query, and make decisions based on preferred providers. You can load balance traffic across different Oracle Cloud Infrastructure regions by sending users to the closest geographic location. You can also use health checks to divert users to an alternate region if your assets are unreachable in a particular region.

Getting Started

After you set up basic DNS, select Traffic Management Steering Policies under Edge Services, where you can create and customize your own steering policies. If you have any questions about getting started or how to handle your use case, please reach out to us.
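Conceptually, ratio-based weighting is weighted random selection over answer pools. The following Python sketch illustrates the idea only; it is not how the Oracle Cloud Infrastructure DNS servers are implemented:

```python
import random

def pick_pool(pools, rng=random.random):
    """Choose a pool name according to ratio-based weights.
    `pools` is a list of (name, weight) pairs; `rng` returns a float in [0, 1)."""
    total = sum(weight for _, weight in pools)
    point = rng() * total
    cumulative = 0.0
    for name, weight in pools:
        cumulative += weight
        if point < cumulative:
            return name
    return pools[-1][0]  # guard against floating-point edge cases
```

For example, weighting an on-premises pool 9 and a new cloud pool 1 sends roughly 10 percent of resolutions to the new infrastructure, a common pattern for the controlled feature rollouts mentioned above.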


Partners

Click to Launch Images by Using the Marketplace in Oracle Cloud Infrastructure

At Oracle, our mission is to enable your business transformation by migrating and modernizing your most demanding enterprise workloads onto the cloud without rearchitecting them. Our development efforts for Oracle Cloud Infrastructure are focused on this primary objective. We also know that to support your critical system-of-record workloads effectively with the least amount of rearchitecture, you require supporting applications from a broad range of vendors that surround, secure, and extend your core enterprise workloads. To that end, we want to dramatically simplify how you find, learn about, and launch both Oracle and third-party applications from our Oracle Cloud Marketplace.

Announcing the General Availability of Marketplace in Oracle Cloud Infrastructure

Today, I’d like to announce the general availability of our Marketplace in Oracle Cloud Infrastructure. We introduced this feature at OpenWorld San Francisco in October 2018, and we’re proud to make it available to our cloud customers effective immediately. Embedding the Marketplace in Oracle Cloud Infrastructure gives you ready access to security solutions from Fortinet and Check Point, DevOps solutions from Bitnami, and high-performance computing (HPC) workload management tools from Altair. If you’re an Oracle Applications customer, you can easily find and “click to launch” the automated lift and shift, provisioning, and lifecycle management tools for Oracle E-Business Suite and PeopleSoft. The best part is that you can launch any of these applications directly on your Oracle Cloud Infrastructure Compute instance, which dramatically reduces deployment times to minutes or hours, instead of days or weeks. Next, I’d like to highlight some of the innovative and unique solutions that we have in the Marketplace, and take you through some of their use cases.

Enhance Security with Solutions That You Know

Oracle Cloud Infrastructure is dedicated to offering you a secure cloud.
We ensure the security of our cloud through our infrastructure architecture. We also enable you to choose the level of isolation and security controls that you need to run your most important workloads securely. We know that enterprise customers have implemented security and networking systems for their on-premises data centers and governance frameworks. Because we want to help you modernize your critical workloads by moving them to our cloud without massive rearchitecture work, we want to make it easier for you to use popular security solutions from third-party providers that you already know and trust. Oracle Cloud Infrastructure has supported integration with leading virtual firewall solutions from Fortinet and Check Point for some time. However, deployment used to require several steps, including importing or running setup on a running instance and then creating custom images. In fact, we wrote blog posts to help guide our customers through those steps. Now, turnkey images for Fortinet's FortiGate-VM NGFW, FortiADC (load balancing), FortiAnalyzer, FortiManager solutions, and Check Point’s CloudGuard IaaS NGFW are available on the Marketplace. You can quickly launch these images to your Oracle Cloud Infrastructure environment by using the Console. You can select the image of your choice, click “Launch Instance”, and the GUI will guide you through deployment. It's as easy as that. Lior Cohen, Senior Director of Cloud Security Products and Solutions at Fortinet, talks about the impact of enabling customers to easily launch turnkey solutions via the embedded Marketplace in Oracle Cloud Infrastructure. "More and more enterprises are shifting their critical production workloads to hyperscale IaaS cloud vendors like Oracle. Because of the demands of today’s digital marketplace, it’s crucial for those customers to be able to instantly launch solutions that can secure and efficiently deliver applications at the speed their users expect.
Rapid and simplified access to essential tools, like our award-winning FortiGate next-generation firewall security solution, equips organizations with the breadth of protection and confidence needed to migrate even their most critical enterprise applications. With just a few clicks, customers can launch and use our award-winning firewall and application delivery controller solutions, enabling them to secure critical applications in minutes."

Launch E-Business Suite and PeopleSoft Cloud Manager, Available Only with Oracle Cloud

Oracle Cloud Infrastructure is designed and optimized to run Oracle Applications such as E-Business Suite and PeopleSoft. These solutions help run the back offices of leading enterprises worldwide. Companies that depend on these applications, but demand the benefits of cloud, choose to modernize by running on Oracle Cloud Infrastructure, which offers better performance, lower prices, and higher-availability options, including RAC and Exadata, that cannot be found with any other cloud. For example, E-Business Suite Cloud Manager and PeopleSoft Cloud Manager help automate provisioning and facilitate lifecycle management for these application environments in our cloud. And these solutions are unique to Oracle Cloud Infrastructure. They offer application modernization capabilities to facilitate migrations from on-premises environments to the cloud. They also provide intuitive UIs so that you can define your topologies and create templates for more streamlined deployments. Finally, these web-based solutions enable you to subscribe to the latest updates and stay current with the latest images, improving your security. Both solutions are now available through the Marketplace in Oracle Cloud Infrastructure, where you can click to launch the latest E-Business Suite Cloud Manager and PeopleSoft Cloud Manager images. For more information on E-Business Suite Cloud Manager, check out the E-Business Suite blog.
Click to Launch DevOps Tools

Building on Oracle’s commitment to support open standards through our Oracle Cloud Native Framework, the new Marketplace in Oracle Cloud Infrastructure also makes it easier for DevOps practitioners to launch the following images:

- Continuous integration (CI) software, such as Jenkins Certified by Bitnami
- Source code management with CI features, such as GitLab CE Certified by Bitnami
- Bug tracking software, such as Redmine Certified by Bitnami

By simplifying how software development teams access solutions to build, test, and deploy the latest cloud native innovations, we’re supporting our customers’ ability to innovate and respond to changing business requirements. “For enterprises running open source, it is critical that they choose a trusted, secure, up-to-date version,” said Pete Catalanello, Bitnami Vice President of Business Development and Sales. “By adding Bitnami certified solutions such as Jenkins and Redmine to its Marketplace, Oracle is helping DevOps to add agility and best practices to their processes.”

Making HPC Solutions Accessible to Engineers

Oracle continues to deliver on our vision to make the power of supercomputing readily accessible to every engineer and scientist. Historically, enterprise HPC workloads have remained on-premises because they require specialized technology and demand high, consistent performance that wasn’t possible or was too cost prohibitive on cloud infrastructure. At Oracle, we're challenging you to bring your most demanding HPC applications to our cloud. Not only do we offer bare metal GPUs based on cutting-edge technology and enable clustered networking to deliver single-digit microsecond latency, we also partner with innovators like Altair to offer easy access to their market-leading HPC workload management solution, Altair PBS Works™.
Sam Mahalingam, Chief Technical Officer for Enterprise Solutions at Altair, talks about teaming up with Oracle on the new Marketplace in Oracle Cloud Infrastructure. “Organizations all over the world depend on Altair PBS Works to simplify the administration of their largest and most complex clusters and supercomputing environments,” said Mahalingam. “Many of our customers are smaller organizations that need HPC solutions that are easy to adopt and use. Partnering with Oracle and offering Altair PBS Professional™, the flagship product of the PBS Works suite, through the new Marketplace delivers on our joint mission of making HPC more accessible.” We’ll continue to add partner solutions to our Marketplace. If there are any third-party solutions that you just can’t live without, we welcome your comments on this post as we continue to build out our ecosystem of solutions for you.


Strategy

Why Oracle Cloud Infrastructure?

We recently launched a "Why Oracle Cloud Infrastructure" web page that describes how Oracle's approach to cloud is different, and why we think our cloud infrastructure can be uniquely valuable to customers. I’ve been at Oracle for two years, and I joined this team because I believe that Oracle is in a position to solve technology challenges in cloud that other vendors can't. This page is our way of telling that story to the world. It’s part of a conversation that we’ve been having with many customers, and I’m excited that we're sharing the story with a wider audience.

Why Build a Cloud?

For decades, Oracle has been partnering with enterprises to solve some of their most challenging business problems by using the world’s most scalable, reliable, and performant database and business application software, and differentiated hardware to run it on. Like every other technology company in the world, we know that cloud offers significant benefits. It promises to make IT more agile to drive innovation and get customers out of the tedious business of infrastructure management. When we thought about the best way to serve our current and future customers, we didn't believe that any existing cloud infrastructure provider could meet their needs for consistently high performance, isolation from other tenants, compatibility with what they wanted to run, and the ability to support the most critical features of Oracle technologies. We felt that by building our own cloud infrastructure, we’d better serve customers in the long run by allowing our database and applications teams to work closely with our own cloud infrastructure teams to offer optimized solutions.

Our Imperatives

To cloud-enable the class of applications that we know best—systems of record that Oracle customers rely on to build products, transact finances, and effectively run the most critical parts of their operations—we knew that we needed a cloud that was up to the task.
Specifically, we had three imperatives in mind:

- Our customers could not take a step backwards when moving to cloud, especially regarding performance and reliability.
- The economics had to work by reducing costs compared to on-premises deployments and being predictable enough to facilitate budget planning.
- Our customers had to be able to get their workloads to cloud easily, without requiring massive refactoring or excessive risk.

Performance and Reliability

For the first imperative, we built a cloud for the enterprise with top-tier components and access to bare metal servers to give customers full isolation and the ability to run what they want. From an infrastructure perspective, the most important part is the network. We avoided the oversubscription that’s common with other clouds, so that performance isn’t variable from moment to moment, day to day, or month to month. This reliability enables high performance, and third-party testing has validated better results for key application workloads. What’s more, the level of performance that customers get doesn’t change depending on what neighbors are doing, a key requirement for demanding systems of record. Finally, we built a cloud that natively supports crucial elements of Oracle Database functionality such as Oracle Real Application Clusters (RAC), Exadata, and deep DBA controls, all of which customers rely on for production applications and are unique to the Oracle Cloud.

Economics

We offer low component pricing and simplified pricing models that make it easy to predict and manage the total cost of operations at scale. Our on-demand rates for compute, network, and storage components are materially lower than those of our competitors. But perhaps more importantly, our rates include everything that enterprises need to get the most out of our services, like storage performance, inter-region transfer and high levels of internet data egress, and unlimited data movement across dedicated private line connections.
Our discount structure is simple, with a straight percentage discount offered across all services rather than a reserved instance model that is commonly restricted to compute and is tremendously complicated to optimize.

Moving to Cloud

We wanted to make it easy for our enterprise customers to get to cloud. We make this possible through a combination of compatibility with what our customers run, automation and expertise in lifting and shifting workloads as they are to cloud, and integration and innovation in cloud native application frameworks. When our customers run workloads in Oracle Cloud, they can take the data that they create and manage in core systems of record, and get more value out of it with integration, analytics, and new ways to distribute and understand that data.

So, Why Oracle Cloud Infrastructure?

Other clouds were built primarily for web applications, but we took a different approach. Our cloud is built to withstand the rigorous demands of critical enterprise workloads, and to transform those business processes for improved agility, reliability, and long-term business results. We believe our long history of solving business problems in technology, combined with a focus on building a cloud for the most demanding category of workloads, will drive differentiation and results for enterprise environments around the world. I hope you'll give our new page a read, and feel free to let me know what you think.


Product News

Oracle Cloud Simplifies Identity Management with Enhanced Okta Support

We are enhancing our federation support by enabling users who are federated with Okta to directly access the Oracle Cloud Infrastructure SDK and CLI. Federation enables you to use identity management software (often an existing identity management solution that is integrated with your corporate directory) to manage users and groups while giving them access to the Oracle Cloud Infrastructure Console, CLI, and SDK. If you're an Okta user, that means you can use the same set of credentials in the Oracle Cloud Infrastructure web console as well as in long-running, unattended CLI or SDK scripts. Users who are members of Okta groups that you select are synchronized from Okta to Oracle Cloud Infrastructure. You control which Okta users have access to Oracle Cloud Infrastructure, and you can consolidate all user management in Okta. To use this new feature, follow the setup process described in the documentation.

Following is an example cost-management scenario that is greatly simplified by this feature. Suppose that you want to use the SDK to run a Python script that finds and terminates compute instances that don't have the CostCenter cost tracking tag. Instead of creating a local Oracle Cloud Infrastructure user, you can set up a user in Okta to run this script. You would follow these steps to enable this scenario:

Step 1: Set up or upgrade your Okta federation to provision users

If you do not have an existing federation with Okta, follow the instructions in the white paper, Oracle Cloud Infrastructure Okta Configuration for Federation and Provisioning. This paper includes instructions for both setting up your federation and provisioning with SCIM. If you have an existing federation with Okta with group mappings that you want to maintain, you can add SCIM provisioning via the instructions in our documentation.
Step 2: Set up the user in Okta and associate that user with the correct groups

Managing all your users from your identity provider is a more scalable, manageable, and secure way to manage your user identities. Be sure to follow the principle of least privilege by creating an Okta user and associating that user with only the Okta groups that they need to do their job.

Step 3: Set up the Oracle Cloud Infrastructure group

Create a local Oracle Cloud Infrastructure group that will be used for the task, and ensure that it has a policy that enables just the access control needed for the task. Consider setting up a group specifically for the type of administrator that you want (for example, compute instances administrator). For a detailed explanation of best practices in setting up granular groups and access policies, see the Oracle Cloud Infrastructure Security white paper. You can also create the group when you map it (next step).

Step 4: Map the Okta group to the Oracle Cloud Infrastructure group

Follow the instructions on adding groups and users for tenancies federated with Okta, and ensure that you map the correct group from Okta to the equivalent group in Oracle Cloud Infrastructure. You will know that you succeeded if you see users created in your tenancy from Okta (there is a filter that allows you to see only federated users).

Step 5: Set up the user with an API key

Now that the Okta user exists as a provisioned user in Oracle Cloud Infrastructure, you must create an API key pair and upload the public key for the user. Each user should have their own key pair. For details, see the SDK setup instructions.

Step 6: Check the user's capabilities

As a final check, ensure that the user has the capability to use API keys. You can also restrict the user's capabilities to API keys only, for the SDK and not the web console. Now you've set up the Okta user to use the SDK and run scripts that the Oracle Cloud Infrastructure user has access to.
Tips

- You know that a user is federated if the user name is prefixed with the name that you gave the identity provider. For example, if you called the Okta federation okta, your user would be okta/username. There is also a feature that lets you filter the list of local users by which federation provider they came from.
- Only users assigned to mapped groups are replicated. If you see some users but not the Okta user that you want, that user doesn't belong to a group that has been mapped from Okta to Oracle Cloud Infrastructure. If no users are being replicated, verify that you've followed the setup procedure and the mapping between the groups. If that doesn’t work, visit My Oracle Support to open a support ticket.
- To use the SDK or CLI, the client that runs the CLI or SDK must have the matching private key material stored on the client machine. Secure the client machine appropriately to prevent inappropriate access.

Conclusion

This feature streamlines how Okta users can be used with Oracle Cloud Infrastructure, especially with the CLI and SDK. Stay tuned for future feature announcements regarding federation.
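To make the cost-management scenario above concrete, here is a hedged sketch of the Python script that the federated Okta user could run with its API key. The "FinOps" tag namespace is a hypothetical example (your CostCenter cost-tracking tag lives in whatever namespace you defined), and this is a starting point rather than a definitive implementation:

```python
def lacks_cost_center(defined_tags, namespace="FinOps", key="CostCenter"):
    """True when an instance's defined tags are missing the CostCenter tag.

    defined_tags is the nested dict the OCI SDK returns: {namespace: {key: value}}.
    The "FinOps" namespace name is a hypothetical example.
    """
    return key not in (defined_tags or {}).get(namespace, {})

def terminate_untagged(compartment_id, dry_run=True):
    """Find, and optionally terminate, instances without the cost-tracking tag."""
    import oci  # pip install oci; imported here so the tag check above stays dependency-free

    config = oci.config.from_file()  # reads the federated user's API key from ~/.oci/config
    compute = oci.core.ComputeClient(config)
    instances = oci.pagination.list_call_get_all_results(
        compute.list_instances, compartment_id=compartment_id).data
    for inst in instances:
        if inst.lifecycle_state != "TERMINATED" and lacks_cost_center(inst.defined_tags):
            print(("would terminate " if dry_run else "terminating ") + inst.display_name)
            if not dry_run:
                compute.terminate_instance(inst.id)
```

Run it with dry_run=True first to review the candidate list before any instance is actually terminated; the script only works once steps 1 through 6 above are complete, because the API signature is validated against the key uploaded for the federated user.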


Developer Tools

External Health Checks on Oracle Cloud Infrastructure

When you run a solution in the public cloud, it's important to monitor the availability and performance of the service to end users, to ensure access from outside the host cloud. To meet this need, a cloud provider must provide monitoring from a diverse set of locations within relevant regional markets around the globe. Oracle Cloud Infrastructure is pleased to announce the release of external health checks to help you monitor the availability and performance of any public-facing service, whether hosted in Oracle Cloud or in a hybrid infrastructure.

What Are Health Checks?

External health checks enable you to perform scheduled testing, by using a variety of protocols, from a set of Oracle managed vantage points around the globe to any fully qualified domain name (FQDN) or IP address that you specify. Health checks provide support for HTTP and HTTPS web application checks, and TCP and ICMP pings for monitoring IP addresses. You can also choose high-availability testing, with tests running as frequently as every 10 seconds from each vantage point. On-demand testing, which allows for one-off validation or troubleshooting tests, is also available through a REST API. Additionally, the Health Check service is fully integrated with the DNS Traffic Management service to enable automated detection of service failures and trigger DNS failovers to ensure continuity of service.

Creating a Health Check

From the Edge Services menu, navigate to Health Checks. In the Health Checks area, click Create Health Check, and enter the details of your check in the dialog box. Enter a name that will help you remember the purpose of this check when you return to this page. Select the compartment that you want to add this check to. Add the target endpoints that you want to monitor. The Targets field is prepopulated with suggested endpoints drawn from public IP addresses already configured in your compartment. You can select one of these endpoints to monitor or add a new one.
Select the vantage points from which to monitor the targets. These vantage points are distributed around the globe, and we generally recommend selecting vantage points located on the same continent as your application. Select the type of test that you want to run—HTTP or HTTPS for a web page, or TCP or ICMP for a public IP address. Set the frequency of the tests as appropriate to the level of monitoring that your service requires. Current options include every 30 or 60 seconds for basic tests; premium tests run at the higher frequency of every 10 seconds, and an additional fee is charged for them. Add any tags to help you quickly search for this check in the future. Click Create Health Check. After the check is created, a details page shows information specific to this check.

Retrieving Metrics

The Health Check service delivers metrics directly to the Oracle Cloud Infrastructure Monitoring service to enable you to query the input metrics, build reports, and configure alerts based on the external monitoring. This integration gives you the flexibility to identify service failures visible from locations around the globe. A full REST API lets you access up to 90 days of historical health check monitoring data. Within the Health Check UI, each test also provides access to measurement data from the probes.

Suspending Health Checks

If you need to temporarily suspend a health check—for example, while you maintain or alter a service—you can do so by selecting the affected check and clicking Disable on the Health Check details page.

Next Steps

Health checks are simple to configure through a flexible UI or REST API. We recommend using the service to monitor any critical publicly exposed IP address or FQDN that you are delivering, whether hosted solely in Oracle Cloud Infrastructure or across your hybrid environment.
Health checks will be extended to include more types of tests across different protocols and with more configuration options. If there is a specific test you would like us to support, let us know. If you haven't tried Oracle Cloud Infrastructure yet, you can try it for free. 
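For intuition, a single HTTP vantage-point probe essentially measures what this short Python sketch does: the status code and response time for a target URL, reduced to a pass/fail verdict. The managed service runs equivalent probes from its global vantage points on your configured interval; this local stand-in only illustrates what gets measured, not how the service is implemented.

```python
import time
import urllib.request

def probe(url, timeout=10):
    """One HTTP check: status code, elapsed milliseconds, and a healthy verdict."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception:
        status = None  # DNS failure, refused connection, or timeout counts as a failed check
    elapsed_ms = round((time.monotonic() - start) * 1000, 1)
    return {"url": url, "status": status, "elapsed_ms": elapsed_ms,
            "healthy": status is not None and status < 400}

result = probe("https://example.com/")  # substitute any public FQDN you want to watch
print(result["healthy"], result["elapsed_ms"])
```

A real deployment would record these results per vantage point over time, which is exactly the per-probe measurement data and history that the service's UI and REST API expose.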


Using File Storage Service with Container Engine for Kubernetes

Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. One of the best practices for containerized applications is to use stateless containers. However, many real-world applications require stateful behavior for some of their containers. For example, a classic three-tier application might have three containers:

- One for the presentation layer, stateless
- One for the application layer, stateless
- One for persistence (such as a database), stateful

In Kubernetes, each container can read and write to its own file system. But when a container is restarted, all data is lost. Therefore, containers that need to maintain state store their data in persistent storage such as a Network File System (NFS). What’s already stored in NFS isn't deleted when a pod, which might contain one or more containers, is destroyed. Also, an NFS file system can be accessed from multiple pods at the same time, so it can be used to share data between pods. This behavior is really useful when containers or applications need to read configuration data from a single shared file system or when multiple containers need to read from and write data to a single shared file system. Oracle Cloud Infrastructure File Storage provides a durable, scalable, and distributed enterprise-grade network file system that supports NFS version 3 along with Network Lock Manager (NLM) for locking. You can connect to File Storage from any bare metal, virtual machine, or container instance in your virtual cloud network (VCN). You can also access a file system from outside the VCN by using Oracle Cloud Infrastructure FastConnect or an Internet Protocol Security (IPSec) virtual private network (VPN). File Storage is a fully managed service, so you don't have to worry about hardware installation and maintenance, capacity planning, software upgrades, security patches, and so on.
You can start with a file system that contains only a few kilobytes of data and grow it to handle 8 exabytes of data. This post explains how to use File Storage (sometimes referred to as FSS) with Container Engine for Kubernetes (sometimes referred to as OKE). We'll create two pods. One pod runs on Worker Node 1, the other pod runs on Worker Node 2, and they share the same File Storage file system. Then we'll look inside the pod and see how to configure it with File Storage.

Prerequisites

- Oracle Cloud Infrastructure account credentials for the tenancy.
- A Container Engine for Kubernetes cluster created in your tenancy. An example is shown in the Container Engine for Kubernetes documentation.
- Security lists configured to support File Storage, as explained in the File Storage documentation. The following image shows a sample security list configuration:
- A file system and a mount target created according to the instructions in Announcing File Storage Service UI 2.0.

High-Level Steps

- Create a storage class.
- Create a persistent volume (PV).
- Create a persistent volume claim (PVC).
- Create a pod to consume the PVC.
Create a Storage Class

Create a storage class that references the mount target ID from the file system that you created:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: oci-fss
provisioner: oracle.com/oci-fss
parameters:
  # Insert the mount target OCID from the FSS here
  mntTargetId: ocid1.mounttarget.oc1.iad.aaaaaaaaaaaaaaaaaaaaaaaaaa

Create a Persistent Volume (PV)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: oke-fsspv
spec:
  storageClassName: oci-fss
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nosuid
  nfs:
    # Replace this with the IP of your FSS file system in OCI
    server: 10.0.32.8
    # Replace this with the path of your FSS file system in OCI
    path: "/okefss"
    readOnly: false

Create a Persistent Volume Claim (PVC)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oke-fsspvc
spec:
  storageClassName: oci-fss
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # Although storage is specified here, it is not used for FSS file systems
      storage: 100Gi
  volumeName: oke-fsspv

Verify That the PVC Is Bound

raghpras-Mac:fss raghpras$ kubectl get pvc oke-fsspvc
NAME         STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
oke-fsspvc   Bound    oke-fsspv   100Gi      RWX            oci-fss        1h

Label the Worker Nodes

Label two worker nodes so that a pod can be assigned to each of them:

kubectl label node 129.213.110.23 nodeName=node1
kubectl label node 129.213.137.236 nodeName=node2

Use the PVC in a Pod

The following pod (oke-fsspod) on Worker Node 1 (node1) consumes the file system PVC (oke-fsspvc).
#okefsspod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: oke-fsspod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs
      mountPath: "/usr/share/nginx/html/"
    ports:
    - containerPort: 80
      name: http
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: oke-fsspvc
      readOnly: false
  nodeSelector:
    nodeName: node1

Create the Pod

kubectl apply -f okefsspod.yaml

Test

After creating the pod, use kubectl exec to test that you can write to the file share:

raghpras-Mac:fss raghpras$ kubectl get pods oke-fsspod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
oke-fsspod   1/1     Running   0          33m   10.244.2.11   129.213.110.23   <none>

Write to the File System by Using kubectl exec

raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod bash
root@oke-fsspod:/# echo "Hello from POD1" >> /usr/share/nginx/html/hello_world.txt
root@oke-fsspod:/# cat /usr/share/nginx/html/hello_world.txt
Hello from POD1
root@oke-fsspod:/#

Repeat the Process with the Other Pod

Ensure that this file system can be mounted into the other pod (oke-fsspod2), which is on Worker Node 2 (node2):

#okefsspod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: oke-fsspod2
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs
      mountPath: "/usr/share/nginx/html/"
    ports:
    - containerPort: 80
      name: http
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: oke-fsspvc
      readOnly: false
  nodeSelector:
    nodeName: node2

raghpras-Mac:fss raghpras$ kubectl apply -f okefsspod2.yaml
pod/oke-fsspod2 created
raghpras-Mac:fss raghpras$ kubectl get pods oke-fsspod oke-fsspod2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
oke-fsspod    1/1     Running   0          12m   10.244.2.17   129.213.110.23    <none>
oke-fsspod2   1/1     Running   0          12m   10.244.1.9    129.213.137.236   <none>
raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod2 -- cat /usr/share/nginx/html/hello_world.txt
Hello from POD1

Test Again

You can also test that the newly created pod can write to the share:

raghpras-Mac:fss raghpras$ kubectl exec -it oke-fsspod2 bash
root@oke-fsspod2:/# echo "Hello from POD2" >> /usr/share/nginx/html/hello_world.txt
root@oke-fsspod2:/# cat /usr/share/nginx/html/hello_world.txt
Hello from POD1
Hello from POD2
root@oke-fsspod2:/# exit

Conclusion

Both File Storage and Container Engine for Kubernetes are fully managed services that are highly available and highly scalable. File Storage also provides persistent, durable storage for your data on Oracle Cloud Infrastructure, and it is built on a distributed architecture to provide scale for your data and for access to it. Leveraging both services simplifies your workflows in the cloud and gives you flexibility and options in how you store your container data.

What's Next

Dynamic volume provisioning for File Storage, which is in development, creates file systems and mount targets when a customer requests file storage inside the Kubernetes cluster. If you want to learn more about Oracle Cloud Infrastructure, Container Engine for Kubernetes, or File Storage, our cloud landing page is a great place to start.

Update

Just a quick note that FSS requires public subnets, which means that the OKE worker nodes need to be on a public subnet for this solution to work.

Resources

Container Engine for Kubernetes workshop
File Storage Overview
Creating a Kubernetes Cluster


Developer Tools

The Intersection of Hybrid Cloud and Cloud Native Adoption in the Enterprise

Welcome to Oracle Cloud Infrastructure Innovators, a series of articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders.

Enterprises are turning in droves to hybrid cloud computing strategies, especially for testing and development, quality assurance, and DevOps activities. But before the majority of enterprises can move on to more advanced hybrid cloud use cases, they'll need to overcome some lingering challenges. I recently sat down with Bob Quillin, Vice President of Developer Relations at Oracle Cloud Infrastructure, to discuss Oracle's cloud native direction. We discussed the biggest trends in hybrid cloud computing and the major obstacles, things like skills shortages, resistance to cultural change, and rapidly evolving technologies, that often stand in the way of adoption. Listen to our entire conversation, or read a condensed version below.

One of the common trends I'm seeing in just about every enterprise is the move toward hybrid cloud strategies. What are you seeing on that front?

Bob Quillin: We're seeing a lot of demand and interest in hybrid cloud. People have been trying out different models and patterns and testing out different technologies, and there have been some challenges. But one of the first major areas where we're seeing a lot of traction with customers is using the cloud for development, quality assurance, DevOps, and for running tests, with production still largely on premises. Many people feel more comfortable in their on-premises environment for certain production applications. But with a cloud native and DevOps environment in the cloud, you can spin up, spin down, and support a variety of testing, staging, and QA projects. It gives you a lot of elasticity, it's cheaper, and the test cases can run in containers. Sometimes people say, "Oh, I can't run my database applications in the cloud." Well, that isn't the case for test and QA use cases. You can put them in a container, run the test, break it back down, and you're good to go. The disposability and quick reusability of these environments is where we're seeing a lot of success, and that is where a lot of people get started.

What's the next step on the road to a hybrid cloud strategy?

Quillin: The next step is getting to the point where you have a platform that gives you confidence that you can develop on the cloud or on premises, and that you have bi-directional portability: on premises to cloud or cloud to on premises. Ensuring that kind of application portability is the next major pattern we've seen. Disaster recovery and high-availability deployments are the third approach we've seen. For example, people will mirror their application in the cloud to have it available, but they keep running the existing application on premises so they can fail over if they have a disaster event. Disaster recovery is one of the classic hybrid models. Those three areas are the ones we see being most successful right now.

Are organizations using hybrid strategies at all in more advanced areas?

Quillin: There are two more use cases we're seeing that are more advanced. One is workload balancing, with an application that's able to run both on premises and in the cloud. It lets you choose where to run each workload based on its regulatory requirements, governance, latencies, whether it's a new or legacy workload, and so on. This approach requires a bit more sophistication and a little more targeting. The other big one that people have been working toward for a long time has been cloud bursting, where users can expand resources into the cloud dynamically, back and forth. Or users enlist some kind of federated automation where, based on performance or quality of service, they can choose where to run an application and have a federated, single view of all of it. These use cases have been highly desirable from an enterprise perspective, but what's been lacking is a platform from which to do it and a framework that enables it.

Let's talk a bit more about challenges. I'm sure almost every organization that you deal with is facing certain setup challenges in deploying, particularly to the hybrid model. What are you seeing?

Quillin: Cultural change and training continue to be inhibitors, and I think those roll up into an overall operational readiness challenge. Organizations are struggling with how to get started on this. At Oracle, what we're providing is an easier way to get started. The Oracle Cloud Native Framework provides a set of patterns and a model that gives the customer a supported blueprint for hybrid cloud. The next challenge is dealing with portability complexities related to a variety of underlying integration issues, including storage, networking, and the wide variety of Kubernetes settings and configurations. A related challenge, and one of the dark secrets of cloud native, is that there are a lot of "devil in the details" problems based on the rapid rate of change of Kubernetes, its quick release cadence, shifting APIs, and the general way the technology is rocketing forward. What you need is a vendor that supports you through these changes by supporting a bi-directional portability model. At Oracle Cloud Infrastructure, we're helping organizations through this process, and we're not going to leave them high and dry by using a proprietary approach. We're committed to open standards.

Many organizations think that open source is great. But there are also those who think that sourcing software from a single proprietary vendor can be cheaper due to the DevOps and maintenance costs associated with open source. What are your thoughts on that?

Quillin: All sorts of studies have been made of organizations that use an open source and DevOps culture, and they're always faster and more successful in terms of business agility. But also, the developers are happy. It's true that some on the business side of an organization would choose proprietary technology. But if you really want to recruit the best developers, you're going to want to work in open source, because that's the most marketable set of skills today. You get happier developers, you can recruit better, and you get the best development teams.

Oracle is a platinum member of the Cloud Native Computing Foundation (CNCF). How does the CNCF help in terms of enabling enterprises to overcome these challenges?

Quillin: I think the most important thing they've done, which is amazing to me, is that they've enabled the market by creating a standard cloud native platform based around Kubernetes. That's been their crowning achievement so far. If you remember back to just a few years ago, everyone had their own orchestration technology and it was all over the place. That's settled down now. The CNCF has created stability and enablement for the market.

What is next for Oracle and the CNCF?

Quillin: The challenge is to continue that success. There's some next-level tooling that needs to come out. Some of the fastest-growing projects in the CNCF are around monitoring, tracing, and logging; around networking and storage; and around the best ways to manage a Kubernetes environment and connect it to existing storage and networking infrastructure. Kubernetes is growing, but what's really growing faster, which is a good sign, are the things that make Kubernetes more manageable, more secure, and more integrated into your existing infrastructure.

Learn more about Oracle Cloud Infrastructure's cloud native technologies.


Oracle Cloud Infrastructure

Security in the Cloud: Are Audits and Certifications Really Enough?

For most organizations, the process of verifying that cloud providers manage data securely involves looking at security and compliance certifications and reading reports from independent, third-party auditors. At first glance, this approach makes sense. After all, organizations need some way to confirm that sensitive customer, supplier, and financial information is adequately protected. They also need to verify that data is stored and handled in compliance with applicable security requirements like the Health Insurance Portability and Accountability Act (HIPAA). Third-party audits and certifications can be a big help in that regard.

But although audits and certifications provide some level of assurance that cloud providers and other enterprises are meeting certain requirements related to security and compliance, they don't always go far enough. For example, some of the biggest security breaches in the last several years happened to vendors with active Payment Card Industry Data Security Standard (PCI DSS) certifications.

The fact that organizations subject to regular security audits can experience breaches shows that certifications and audits aren't a substitute for vetting a cloud's security architecture and controls framework for sound design. Organizations that want to be confident that data stored in the cloud is appropriately secured should take some additional steps. Here are a few things that you can do to verify that a cloud provider places a high priority on security.

Understand the Cloud's Architecture

You can tell a lot about a cloud provider's approach to security by looking closely at their cloud's architecture. How was the service built? Was it designed using security-first principles? Oracle Cloud Infrastructure, for example, was designed with a security-first focus, isolating customer resources such as network, compute, and data. This single-tenant approach increases the granularity of control and reduces the attack footprint. It also results in predictable, superior performance by eliminating problems caused by "noisy neighbors."

Oracle Cloud Infrastructure users can also create their own virtual cloud networks (VCNs). A VCN is a customizable and completely private network that gives you full control to create an IP address space, subnets, route tables, and stateful firewalls. You can also configure inbound and outbound security lists to protect against unwarranted access and malicious users.

Clarify Roles and Responsibilities

Migrating to the cloud means shifting to a shared-responsibility model for security. This model is often a source of confusion for cloud adopters, as highlighted in a Cloud Threat Report jointly authored by Oracle and KPMG. As you move to the cloud, understanding your cloud use cases and how they affect the division of security roles is hugely important.

Before you select a cloud provider, begin documenting your cloud use cases by making a comprehensive list of your security requirements. This list helps you set priorities and guide conversations with providers. When negotiating with providers, consider using Standardized Information Gathering (SIG) questionnaires from Shared Assessments or the Consensus Assessments Initiative Questionnaire (CAIQ) from the Cloud Security Alliance. And ensure that security roles and responsibilities are clearly defined in contracts and service level agreements (SLAs). Not all vendors offer availability and performance SLAs, for instance.

It's also important to remember that the customer's level of responsibility for security shifts depending on which types of cloud services are being used. For example, Oracle customers who choose bare metal cloud deployments have extensive control over their cloud infrastructure. Therefore, they have far greater responsibility for things like identity and access management, password management, firewall configuration, and other controls.

Learn About the Cloud Provider's Culture

Security should be integral to the culture and everyday activities of a cloud provider; it should never be an afterthought. Ask the following questions to determine whether a cloud provider truly embraces a culture of security:

How do you ensure that engineers know their security responsibilities?
How do you enable engineers to perform their security-related tasks, and how do you measure their results?
What are the processes and technologies for reviewing new code and checking for vulnerabilities, and how do you learn from the things that you discover?
What kind of penetration testing do you use, and how often are tests run?
Do you give security issues a high priority at daily and weekly stand-ups and meetings?
Have you ever made a tough decision between shipping a product to meet a commitment and fixing a security bug?

As a longtime security professional and someone who has worked with many IT security engineers over the years, I can attest that we all have good intentions and want to keep customer data private and secure. But it takes more than good intentions to do a job correctly. We must embrace experience and innovation to continually improve the security architecture and ensure the maximum effectiveness of protection measures.

Learn more about Oracle Cloud Infrastructure security.


Product News

Improving the User Experience for the Oracle IaaS and PaaS Console

As a product manager who's focused on improving the user experience for Oracle IaaS and PaaS customers, I love getting feedback about how we can make your jobs easier. Even better, I love being part of the team that helps bring that feedback to life in the form of enhancements to our console. We originally unveiled our new console homepage in October 2018, when we updated our look and feel and made it easier for you to find the information that you need. Today, we're excited to build on that momentum by introducing key console enhancements that will further improve how you use, manage, and get support for your IaaS and PaaS services.

Enhanced Service Announcements

We've enhanced how we deliver service-related announcements directly in the console. All users with the right set of permissions, not just the tenant administrator, can stay up to date on relevant service updates or planned changes. Announcements for the highest-impact events appear as banners at the top of the console. You can click them for details or dismiss them when you're already up to speed. Additionally, you can view all announcements by clicking the announcement icon (it looks like a bell) on the top bar of the console. You can filter by the type of announcement, and if you're searching for a specific announcement made during a certain time period, you can filter by date range.

Improved Support Experience for Service Limit Increases

We've made it simpler for you to see when you're about to reach your service limits, and we've added the ability to request increases within the console. Just enter your contact information, the service category (for example, Compute), and the resource for which you want to request an increase. For most requests, we offer a one-day response window.

Relevant and Contextual Help

We've enhanced our help functionality by making it more relevant to the service that you're currently provisioning or managing. First, we analyzed the most common issues that our users need help with, by service. Then we curated short lists of the top help topics and focused on those in the help navigation window. For example, if you're provisioning compute, all help topics are relevant to compute. And when you move on to provisioning Autonomous Data Warehouse, all help topics focus on the most common guidelines for that service. Over time, we will add these links across all services so that you can quickly find what you need.

Improved Cost and Billing Management Capabilities

A new cost analysis dashboard makes it easier for administrators and controllers to stay on top of usage and costs. You can easily see your latest usage charges at a glance. If you're trying us out with our US$300 free trial, you can see how many trial credits you've used and the days left in your trial. You can filter by specific date ranges, and by compartment and tags, to analyze usage and costs by department and project. Additionally, you can expand by service to analyze how much each service has been used over time. For example, if you tag resources used for different development projects, you can easily filter and track service usage by development team.

This is only the beginning. We'll be rolling out more user experience enhancements soon. As always, we want and appreciate your feedback, so keep it coming. If you're new to the Oracle Cloud Platform, we invite you to see how easy it is to get started with Oracle Cloud services with our US$300 free trial.


Events

Learn the Benefits of Running PeopleSoft on Oracle Cloud

Hundreds of PeopleSoft customers have moved their PeopleSoft applications to Oracle Cloud Infrastructure, and many more are planning their move now. The challenge that customers face is not whether they should move PeopleSoft to the cloud, but rather when and how they should do it. PeopleSoft is supported and viable at least until 2030. Continued investment in product development is realized in the quarterly PeopleSoft Update Manager (PUM) image updates that deliver new functionality requested by customers. This continued innovation means that you can enjoy the latest features in an application that you have relied on for a long time.

The benefits of Oracle Cloud Infrastructure are proven: it improves performance, reduces costs, and automates lifecycle management. Customers move PeopleSoft to Oracle Cloud Infrastructure to accomplish the following goals:

Maximize their current investment in PeopleSoft apps, customizations, and add-ons by running them on Oracle Cloud Infrastructure for a lower cost

Exit the data center business and focus on business enablement instead, deliver PeopleSoft implementations with agility and speed, and deliver upgrade and update projects with 40 to 70 percent savings

Improve business continuity with Oracle Cloud Infrastructure based disaster recovery, with significantly better Recovery Point Objective (RPO) and Recovery Time Objective (RTO) metrics than onsite solutions, and at a reduced cost

Automate PeopleSoft lifecycle management tasks such as instance deployment, cloning, tools patching, tools upgrades, backup, monitoring, and much more, all backed by Oracle Cloud Infrastructure's industry-leading SLA for IaaS and PaaS services

Leverage native Transparent Data Encryption (TDE) to secure PeopleSoft application data at rest and in motion, end-to-end application security, 90 percent faster environment setup, line-of-business self-service, and more streamlined PeopleSoft lifecycle management

Don't believe it? At the upcoming PeopleSoft on Oracle Cloud event in Atlanta, hear directly from Oracle, Care.org (a PeopleSoft customer), and their systems integrator, Astute Business Solutions, about how the cloud enables innovation at a much faster pace.


Developer Tools

Oracle Simplifies Cloud Native Development

Welcome to Oracle Cloud Infrastructure Innovators, a series of articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders.

The majority of enterprises are ready to join the cloud native development movement. But some stubborn obstacles, such as resistance to cultural change, complexity, and skills shortages, continue to stand in their way. I recently sat down for a conversation with Bob Quillin, Vice President of Developer Relations at Oracle Cloud Infrastructure, to talk about Oracle's cloud native direction. We discussed cloud vendor lock-in and other difficulties enterprises face when moving to the cloud. We also talked about creating a sustainable, open standards-based strategy to overcome the challenges to cloud adoption. Listen to our entire conversation, or read a condensed version below.

How does Oracle Cloud Infrastructure support concepts like serverless programming and cloud native development?

Quillin: Oracle last year started the Fn Project. Fn is an open source, container-native, serverless platform. It's one of the first serverless solutions that lets you run serverless applications basically anywhere, whether that's in the cloud, on premises, or both. It supports any programming language, and it's very extensible. It was developed by the serverless group from Iron.io that was hired into Oracle a couple of years ago. Instead of creating a proprietary service, what we did was build out the Fn Project over the last year, with contributors adding new features to build out the platform. For example, there's a new CloudEvents project that came out recently and is being hosted by the Cloud Native Computing Foundation (CNCF) as a sandbox project. It focuses on how events and serverless functions work together. We're one of the early adopters of that.

What else can customers expect from the Fn Project?

Quillin: One thing we just rolled out at the KubeCon conference in Seattle is a product based on the Fn Project called Oracle Functions. It's a fully managed, scalable, on-demand, functions-as-a-service (FaaS) platform that runs on top of Oracle Cloud Infrastructure. It's all based on the Fn engine. The service is unique in that you can still use the Fn Project capabilities on your laptop, or on any other cloud. But if you want a managed service, like AWS Lambda but better, we offer Oracle Functions. Unlike Lambda, Oracle Functions is an open solution and won't lock you in with proprietary APIs: anything you develop on Oracle Functions can run anywhere else with the Fn Project. So it's really an environment for deploying and executing any functions-based application, with no need to manage the infrastructure. There are servers underneath it all (it isn't truly "serverless," as people say), but Oracle manages them for you, so you don't have to worry about them. That's one of the huge benefits, because it makes deploying managed functions simple. It's DevOps friendly and Docker-based, so each function and serverless component is a container. Thus, it's a truly container-native approach: you can deploy it using your favorite container management solution. It works particularly well with Kubernetes, but with other types of container platforms, too. In terms of serverless products, almost all of the other solutions out there are proprietary.

But it's always tough to teach an old dog new tricks. What major challenges are traditional enterprises facing as they try to become successful cloud native companies?

Quillin: That's a good question. The CNCF did a survey a couple of months ago asking about the big challenges that organizations are facing with container technology. The top three challenges were cultural change for developers, complexity, and lack of training. We've made some amazing progress as an industry, particularly over the last year and a half. But many developers and teams still feel left behind. As the culture changes and as we push DevOps forward, they're looking for ways to connect with these new technologies. But they're also responsible for maintaining and using existing platforms, like WebLogic or database applications. You can't just "lift and shift" those overnight. I've talked to CIOs at enterprises that are going through this change, and it may be easy to move a five- or ten-person team, but moving a thousand-person or multi-thousand-person team is challenging. Then combine that challenge with the complexity of all the open source options and all the solutions that are available to you. If your choice is between 5,000 different solutions and a single vendor that offers you five solutions, you're between a rock and a hard place. You're faced with either too much choice or not enough choice.

How are enterprises addressing this issue of too much choice versus not enough choice?

Quillin: Sometimes they address it by saying, "Well, I'm just going to choose one cloud. I'm going to single-source it." Unfortunately, that approach has left many people locked in. What they find is that the fastest solution is not always the best solution. They started using closed APIs and proprietary services, and inch by inch, application by application, they get more and more locked in. The whole value proposition of open source is choice and portability: being able to take an application and move it wherever is appropriate for that workload, for that geography, for your business. So if you're going to choose open source technology, you really need to embrace that and push your vendors. As part of the selection process, you should ask, "Is this going to lock me in?" What we're seeing now is that people want to go hybrid cloud or multicloud, but their single-cloud vendor strategy won't let them.

Can you tell me a little bit about the Oracle Linux Cloud Native Environment?

Quillin: The Oracle Linux Cloud Native Environment is a software stack that is available on premises and runs on the cloud itself. It is a curated set of open source Cloud Native Computing Foundation (CNCF) projects that can be easily deployed, have been tested for interoperability, and for which enterprise-grade support is offered. It's included with an Oracle Linux Premier Support subscription at no additional cost, and it's unique in that it's driven by true open source technologies. It uses no proprietary approaches to lock you in: if you develop on top of the Oracle Linux Cloud Native Environment, you can run that application anywhere. We're also combining that with the Oracle Cloud Infrastructure cloud native services, so now you have a really strong one-two punch. The whole solution is called the Oracle Cloud Native Framework. It consists of the Oracle Cloud Infrastructure cloud native services, which include Kubernetes, Oracle Cloud Infrastructure Registry, and a whole set of new observability, application definition, development, and provisioning technologies delivered as managed services right on top of our Generation 2 cloud.

How does this help those enterprises that are struggling to go cloud native?

Quillin: We've talked about the teams who are being left behind by complexity and cultural changes in the push to the cloud. The Oracle Cloud Native Framework provides a pattern, a model by which these teams can easily move applications back and forth between on premises and the cloud, and it's a sustainable strategy. If teams lack training or complexity is slowing down adoption, the services in Oracle Cloud Infrastructure are offered as managed cloud services. For example, many, if not most, development teams did not become experts in managing Kubernetes or deploying Docker in the last two years. These teams can go directly into a managed cloud environment where all of that complexity is managed for them. You don't have to become a Kubernetes expert; you can just run the application and understand how to build it to run on top of the platform. Plus, you don't have to run the underlying infrastructure, the clusters, the cluster management, and all of the tools and techniques that go along with that. The Oracle Cloud Native Framework provides a truly inclusive, sustainable strategy for these developers, and it's all based on open CNCF technologies.

Learn more about the Oracle Cloud Native Framework today.
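To make the Fn model discussed above concrete: an Fn function is a container image described by a small func.yaml descriptor, and the same artifact can run on a local Fn server or on Oracle Functions. The following is a minimal sketch with illustrative values; the name, version, runtime, and entrypoint shown here are assumptions, not details from the interview:

```yaml
# func.yaml -- minimal Fn function descriptor (illustrative values)
schema_version: 20180708
name: hello-fn
version: 0.0.1
runtime: python            # Fn also ships FDKs for Java, Go, Node, and Ruby
entrypoint: python3 func.py
```

With the Fn CLI installed, a command along the lines of fn deploy --app myapp builds the function into a container image and registers it with the app, and fn invoke myapp hello-fn runs it; because each function is just a container, the same image can be deployed to any Fn-compatible environment.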


Oracle Cloud Infrastructure

Learn the ABCs of Data Science Concepts

Data scientists are some of the most sought-after professionals on the planet. These highly skilled individuals work with all types of organizations, using scientific processes and algorithms to unlock valuable business insights hidden within mountains of structured and unstructured data. But sometimes it can feel like data science concepts are cloaked in mystery. If you're one of the many nontechnical professionals increasingly being asked to collaborate with data science teams, you might be wondering: what are those scientific processes? And how do data scientists work their magic?

The good news is that the team at Oracle Cloud Infrastructure and DataScience.com has the answers. Our new ebook, The Data Science ABCs, demystifies data science concepts, making it easier for nontechnical professionals and data scientists to work together productively. Oracle acquired DataScience.com last year to provide customers with a single data science platform that leverages Oracle Cloud Infrastructure. DataScience.com centralizes data science tools and projects in a fully governed workspace and removes the barriers to deploying machine learning models in production.

Data Science Is Everywhere

Over the last decade, data science has made its way into nearly every industry, from finance and hospitality to gaming and manufacturing. Data scientists use their skills to help agriculture businesses improve crop yields and to help customer service agents reduce customer churn. They're even responsible for the automated recommendation engines that keep us glued to Netflix and improve our shopping experiences with Amazon and other online retailers. The list of ways that data scientists provide value to businesses goes on and on, and that's one of the reasons why data scientist is the most promising career of 2019, according to new research from LinkedIn. The networking site for business professionals found that data scientists saw a 56 percent increase in job openings in the US over the past year. And career-search site Indeed.com reports that data scientist job listings rose 75 percent from 2015 to 2018. What does it all mean? If you haven't already encountered a data science team at your organization, there's a strong possibility you will someday soon.

Learn Data Science Concepts from A to Z

Did you ever wonder how data scientists use historical information to accurately predict the future? Or how graphical processing units helped advance the field of machine learning? Have you ever thought about neural networks and how they mimic the structure of biological nervous systems to find patterns and meaning in data? Our new ebook doesn't just cover these and other data science concepts from A to Z; it also puts them into historical context and offers real-world examples of data science in action. For those interested in digging even deeper into data science, the ebook provides links to helpful resources and related information. To learn the data science lingo and discover how data scientists are bolstering businesses around the globe, be sure to read our new ebook today.


Oracle Cloud Infrastructure

Part 4 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform

The concluding post of this series, in which we mapped Oracle's seven pillars of a trusted computing platform to Oracle Cloud Infrastructure security capabilities, covers a few services that were introduced or enhanced since the publication of earlier posts (Part 1, Part 2, and Part 3), along with relevant services from the Oracle Cloud Security portfolio for enterprises.

New and Enhanced Features

First, let's explore the major new services and features that enhance the security of customer environments on Oracle Cloud Infrastructure.

Encrypt Your Data Using Keys You Control

In October 2018, we announced the release of Oracle Cloud Infrastructure Key Management, a managed service that enables customers to encrypt their data by using keys that they control. Customers with the following requirements should consider using Key Management:

- Customers who want to centralize encryption management of their data in the public cloud
- Customers who currently use hardware security module (HSM) based key management in their on-premises data centers and want a similar type of secure service for encryption key management in the cloud
- Customers who want full control of encryption key management for their public cloud assets
- Customers who want public cloud key management systems backed by the cloud service provider's HSMs that meet the Federal Information Processing Standards (FIPS) 140-2 Security Level 3 certification

For more information, see the Key Management documentation.

Ensuring Secure Network Isolation Between Departments with Transit Routing Through a Hub VCN

This feature involves connecting a customer's on-premises network to a virtual cloud network (VCN) with either Oracle Cloud Infrastructure FastConnect or an IPSec VPN. The following is a basic use case for transit routing: a customer organization has different departments, each with its own VCN. 
The customer's on-premises security information and event management (SIEM) tool needs access to the applications and servers running in different VCNs, but the customer doesn't want the administration overhead of maintaining a secure connection from each VCN to the on-premises network. Instead, the customer wants to use a single FastConnect or IPSec VPN. The following diagram gives you an idea of how transit routing works. One of the VCNs acts as the hub (VCN-H) and connects to the customer's on-premises network by way of FastConnect or an IPSec VPN. The other VCNs are locally peered with the hub VCN. Traffic between the on-premises network and the peered VCNs transits through the hub VCN. The VCNs must be in the same region but can be in different tenancies. For details, see the transit routing documentation.

Isolate Resources by Teams and Projects with Nested Compartments

Nested compartments let you isolate resources to match your corporate structure or hierarchy. Nesting enables a managed service provider, or a customer's central IT department that provides IT as a service to the business units, to grant granular rights by assigning policies that correspond to nested compartments. Consider the following use case:

- The central IT network team is responsible for managing network elements such as VCNs across projects.
- Central IT wants to enable the App/Dev project teams to create subnets in the prebuilt VCNs on demand, through their CI/CD pipeline, during application and associated compute/storage deployment.
- Central IT also wants to hide certain projects based on business units.

The following diagram depicts the nested compartment architecture that the central IT team could create to grant access to specific groups. I'd appreciate your comments below if you're interested in a detailed blog post that further explains this use case. 
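As a rough illustration of the granular rights described in this use case, the sketch below writes out the kind of IAM policy statements Central IT might define. The group names (NetworkAdmins, AppDev-ProjectA) and the CentralIT:ProjectA compartment path are invented for illustration; only the verb/resource-type pairings follow standard Oracle Cloud Infrastructure policy syntax.

```shell
# Hypothetical IAM policy statements for the nested-compartment use case.
# Written to a local file only for illustration; in practice these would be
# created as policies attached to the appropriate compartments.
cat > nested-compartment-policies.txt <<'EOF'
Allow group NetworkAdmins to manage virtual-network-family in compartment CentralIT
Allow group AppDev-ProjectA to manage subnets in compartment CentralIT:ProjectA
Allow group AppDev-ProjectA to use virtual-network-family in compartment CentralIT:ProjectA
EOF
cat nested-compartment-policies.txt
```

Scoping the App/Dev statements to the nested CentralIT:ProjectA compartment grants subnet creation there, while the other projects under CentralIT remain invisible to that group.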
Enterprise Cloud Security Offerings

Now let's move on to some of the enterprise-scale cloud security offerings from Oracle that can be consumed as platform as a service (PaaS). Customers can use these services to fulfill their portion of the shared security responsibility model.

Enhance Your Security Controls with Oracle Identity Cloud Service

Oracle Identity Cloud Service (IDCS) enables enterprises to seamlessly connect their users to cloud-based and on-premises applications. IDCS integrates tightly with on-premises systems such as Active Directory, as well as Oracle's IAM, to extend identities to the cloud. IDCS provides administration capabilities in the cloud, such as user/group and application administration, including provisioning and deprovisioning of applications. It also provides access management capabilities such as single sign-on, strong authentication, and adaptive risk-based policies. Finally, it is the platform upon which governance capabilities like access requests, certifications, and workflows will be built for cloud applications. IDCS acts as the identity foundation for the Oracle Cloud: if you purchase any service in the Oracle Cloud, an instance of IDCS is automatically created for your tenancy, and all users are managed in it. For details, see the IDCS service page.

Maintain Security Control and Detect Threats with Oracle Cloud Access Security Broker

Oracle Cloud Access Security Broker (CASB) Cloud Service is a multimode cloud access security broker that provides advanced threat analytics using user-behavior analytics (UBA) and third-party feeds, configuration seeding, monitoring and alerts, and shadow IT discovery. For details, see the CASB service page. 
Following are the key features of Oracle CASB on Oracle Cloud Infrastructure:

- Policy alerts: Alerting and notifications on policy changes to resources
- Security controls: Detection of insecure settings of Oracle Cloud Infrastructure resources
- Threat detection: Detection of user risks and threats using machine learning (ML) analytics
- Key security indicator reports
- Exporting data and threat remediation: Enterprise integrations with SIEM or ITSM systems

The following sections provide details.

CASB Policy Alerts

Following are some examples of these alerts and notifications on policy changes to Oracle Cloud Infrastructure resources:

- Compute images: Updates to or removal of images
- Compute instances: Launch or termination
- DB systems: Launch or termination actions
- Identity groups and policies: Creation, updates, and deletion
- Identity users: Lifecycle actions, API key actions, login failures, and resets
- Network load balancers: Creation, updates, and deletion
- Network security lists: Creation, updates, and deletion
- Network VCNs: Creation, updates, and deletion
- Object storage: Creation and deletion, and preauthentication requests
- Storage block volumes: Attaching and export/import events

CASB Security Controls

Following are some examples of the controls for detecting insecure settings of Oracle Cloud Infrastructure resources:

- Compute: Instances having public IP addresses or public images; untagged resources
- Users and IAM: User groups having too many or too few users; too-broad IAM policies; use of API keys
- Storage: Unattached storage volumes; public storage buckets
- Network: VCNs or load balancers with no inbound security lists or with an attached internet gateway; insecure security lists with open ports for telnet, FTP, Finger, or other attack-vector protocols; imminent expiration of load balancer certificates

CASB Threat Detection

CASB uses ML-based analytics to detect the following threats in Oracle Cloud Infrastructure:

- IP hopping
- Brute-force attacks
- User behavior risks and anomalies
- Admin behavior risks

The analytics draw on signals such as audit activity (number of successful or failed logins per day, network IP addresses and mapped geolocations, time of access, and endpoint context such as OS and browsers) and external threat feeds (geolocation feeds and IP address reputation).

CASB Key Security Indicator Compliance Reports

Following are some of the out-of-the-box reports used for Oracle Cloud Infrastructure:

- API Key Roll Over report: Key state and rollover status for API keys
- Privileged IAM Group Membership report: Users added to or removed from groups
- Privileged IAM Users and Groups report: Actions targeting users and groups
- Public Buckets report: Details on publicly accessible buckets
- Swift Passwords report: Information about Swift passwords

CASB Data Exports and Remediation

CASB provides the following enterprise integrations with SIEM or ITSM systems:

- Manual incident management: Creation and management of incidents generated from reported events
- External incident management: Integration with ServiceNow
- Integration with SIEM: Export events to Splunk and QRadar
- Export to CSV

Security Monitoring with Oracle Management Cloud

Oracle Management Cloud is an integrated suite of capabilities that enables customers to perform the following actions:

- Easily monitor applications end to end, reduce false alerts, and give notifications where possible
- Quickly troubleshoot issues with all the data needed to solve the problem at that time: metrics, logs, and topology
- Keep applications secure and compliant
- Automatically remediate the most common problems, whether they are security or management events
- Analyze data over a longer period of time to spot trends and issues

Regardless of whether the application is running on-premises, in the Oracle Cloud, or in anyone else's cloud, and on any technology stack, customers can use parts of these capabilities individually or use them all together. 
This unified platform brings a rich set of potentially interrelated data to a single place, giving you a complete view of entities and topology. For details, see the Oracle Management Cloud service page.

Oracle Management Cloud Security Features

This post highlights the Oracle Management Cloud features related to security, such as monitoring security events and user behavior, and catching data access (SQL-based) anomalies at the user, group, database, and application level. Conventional security monitoring tools can tell you that a user accessing a database host was normal. The Security Monitoring and Analytics (SMA) module can go deeper and tell you that the query the user ran was abnormal for that user based on behavioral analysis, providing benefits like a broader threat-detection range. SMA can detect nuanced anomalies through multidimensional baselines (for example, user logins by location, time, and host). SMA also provides the following security features:

- Addresses scalability problems through our platform (a next-generation service with autoscaling) and visualization problems through intelligent security visualization (for example, timelines).
- Investigates faster with session awareness and kill-chain visualization (for example, account hijacking). In general, user context is rarely present in logs. SMA determines the underlying user by stitching together DHCP, IDM, VPN, and other activity context. Then it enables visualizing threats at the user level (rather than the account level), dramatically reducing manual investigative work and resulting in faster time to detection.
- Helps Security Operations Center (SOC) analysts understand internal and external threat vectors by ensuring security visibility across a heterogeneous, evolving infrastructure. SMA can collect and analyze any log or other data from the IT stack on bare metal, in private clouds, or in SaaS, PaaS, and IaaS infrastructure. 
- SMA can be used to automate SOC runbooks with out-of-the-box, vendor-independent security and compliance content (rules, reports, and so on). It categorizes events so content is future-proofed against changes in vendors and products (that is, a failed login is just that, regardless of the device type and vendor). This results in actionable insights, automated remediation, and faster time to value.
- SMA uses underlying ML algorithms to leverage continuous threat-intelligence context (URL classification, URL/IP reputation) in the detection and triage of threat indicators. Customers can bring their own threat-intelligence feed or use Oracle's out-of-the-box feed for early awareness of threat indicators in detection and investigation, reducing false negatives by leveraging the latest threat context as activity happens.
- SMA works with Oracle Management Cloud Orchestration to continually harden systems by triggering runbook automation (account lockouts, and port or other configuration changes). SMA can hook its correlation and detection logic into any instrumentation framework so that the appropriate SOC remediation procedures for a given threat type can be automated, resulting in benefits like faster mean time to remediation.

Conclusion

The primary goal of this series is to provide guidance for customers to securely develop, migrate, and run workloads on Oracle Cloud Infrastructure. The posts throughout the series showed how to use various Oracle IaaS and PaaS services to protect data, achieve required compliance, and secure application environments on Oracle Cloud Infrastructure. Links to other relevant Oracle Cloud Infrastructure security blogs:

- Guidance for Setting Up a Cloud Security Operations Center (cSOC)
- Security Checklist for Applications Migrating to Oracle Cloud Infrastructure
- Security Patterns for Customers Achieving PCI Compliance on Oracle Cloud Infrastructure


Developer Tools

Usability Improvement: Consistent Device Path Names and Ordering for Block Volume Attachments

Today we released an exciting service update to ensure that block-volume-device path names and hierarchies stay consistent and persist across reboots. This usability-related update simplifies the management and ordering of block storage devices and improves the user experience for many of the workloads that run on Oracle Cloud Infrastructure. Now when you attach a volume to an instance, you can select a device path name from a drop-down list. Device path names have the format /dev/oracleoci/oraclevdxx, where the value of xx ranges from a to af, corresponding to up to 32 volume attachments per instance. This new format is compatible and aligned with the direction of the open source community. It's supported on all Linux OS flavors that are available on Oracle Cloud Infrastructure. However, it's not supported for Windows OSs, legacy OSs and versions, and customer-provided OS images that aren't enabled for this feature. Like other service features, this update is available in the Oracle Cloud Infrastructure Console, CLI, SDK, and Terraform. Specifying a device path name for a volume attachment is straightforward in the Oracle Cloud Infrastructure Console. In the Compute section of the console, access the instance in the appropriate compartment. When you attach a volume on the instance, select a device path from the drop-down list. Confirm the device path for the attached volume. To access the device on the instance and see the device path names, use the lsscsi and ll commands. In the preceding example, the /dev/oracleoci/oraclevdb device path points to the /dev/sdb legacy path. Also note that the LUN # (4:0:0:2) for /dev/sdb is 2, which corresponds to b in /dev/oracleoci/oraclevdb. 
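The LUN-to-letter correspondence in the example above can be sketched as a small helper. This is a minimal bash sketch, assuming the pattern LUN 1 → a, LUN 2 → b, and so on continues through all 32 possible attachments (an inference from the example, not documented behavior):

```shell
# Map a volume attachment's SCSI LUN number to its consistent device path,
# assuming LUN 1 -> /dev/oracleoci/oraclevda ... LUN 32 -> /dev/oracleoci/oraclevdaf.
lun_to_device() {
  letters=abcdefghijklmnopqrstuvwxyz
  idx=$(( $1 - 1 ))
  if [ "$idx" -lt 26 ]; then
    # Suffixes a..z cover the first 26 attachments.
    echo "/dev/oracleoci/oraclevd${letters:$idx:1}"
  else
    # Suffixes aa..af cover attachments 27..32.
    echo "/dev/oracleoci/oraclevda${letters:$((idx - 26)):1}"
  fi
}
lun_to_device 2    # /dev/oracleoci/oraclevdb
lun_to_device 27   # /dev/oracleoci/oraclevdaa
```

On a real instance you would verify the mapping with lsscsi rather than compute it, since the symlinks under /dev/oracleoci are authoritative.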
Following are some examples of how to use consistent volume names on Linux-based systems:

- Creating partitions
  Previous experience: fdisk /dev/sdb
  New experience with consistent volume names: fdisk /dev/oracleoci/oraclevdb
- Creating a file system
  Previous experience: /sbin/mkfs.ext3 /dev/sdb1
  New experience with consistent volume names: /sbin/mkfs.ext3 /dev/oracleoci/oraclevdb1
- /etc/fstab changes
  Previous experience: UUID=84dc162c-43dc-429c-9ac1-b511f3f0e23c /oradiskvdb1 xfs defaults,_netdev,noatime 0 2
  New experience with consistent volume names: /dev/oracleoci/oraclevdb1 /oradiskvdb1 ext3 defaults,_netdev,noatime 0 2
- Mounting a file system
  Previous experience: mount /dev/sdb1 /oradiskvdb1
  New experience with consistent volume names: mount /dev/oracleoci/oraclevdb1 /oradiskvdb1

Watch for announcements about additional service updates that continue to streamline the storage management experience. Let us know how we can help ease your cloud management or if you want more information about any topic.
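Put together, migrating an existing mount to the consistent name is a one-line /etc/fstab change like the one shown above. Here is a minimal sketch, written to a scratch copy rather than the real /etc/fstab:

```shell
# Append an fstab entry that uses the consistent device path. The _netdev
# option defers mounting until networking is up, which matters because
# block volumes are iSCSI-attached on bare metal instances.
FSTAB=./fstab.example          # use /etc/fstab on a real instance
echo '/dev/oracleoci/oraclevdb1  /oradiskvdb1  ext3  defaults,_netdev,noatime  0  2' >> "$FSTAB"
grep oracleoci "$FSTAB"
```

Because the consistent path survives reboots, this entry keeps working even if the legacy /dev/sdX name the kernel assigns changes.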


Strategy

Cloud Computing Predictions for 2019: More Migrations, Onward with Openness

It's an interesting time in the cloud computing industry. Cloud is an established technology, and it's the default deployment model for new applications and services. But at the same time, the majority of enterprise workloads still live on-premises. As the market approaches a crossroads, we asked experts from Oracle Cloud Infrastructure and the industry at large to share their predictions for 2019. Karan Batta Director of Product Management, Oracle Cloud Infrastructure @karan_batta For a lot of organizations, AI and machine learning are still a science experiment. But in 2019, most of them will start to implement these technologies in production. That will drive a lot more usage of high-performance computing (HPC) in the cloud. At the same time, companies have spent millions of dollars to build bespoke and very specific data centers for their HPC workloads, and the hardware is now coming up to its depreciation cycle. Most of these businesses want to move to the cloud because they don't want to have to keep buying new hardware. The pace of innovation is so quick now that cloud providers are introducing new hardware every year. On-premises shops can't keep up with that. Mark Cliff Lynd Managing Partner, Relevant Track @mclynd Enterprises are struggling to manage and maintain their hybrid-cloud environments. Vendors will be needed to fill the automation, orchestration, management, and security gaps to ensure a seamless environment that supports their growth. The use of containers and orchestration products like Kubernetes will grow, and security offerings will need to integrate and collaborate accordingly. Bob Quillin Vice President, Developer Relations, Oracle Cloud Native Labs @bobquillin Enterprises will choose inclusive solutions that can cover cloud and on-premises, modern and traditional, dev and ops. 
Managed cloud native services will replace do-it-yourself models so enterprises can leapfrog learning how to administer and maintain complex, rapidly changing platforms like Kubernetes and instead start using them immediately. Truly open and community-driven solutions in areas such as serverless will replace proprietary cloud services. These will allow enterprises to embrace open source, hybrid cloud, and multicloud options, as opposed to the single-source cloud model that has left users with cloud lock-in issues, diminished choices, and spiraling costs. Andy Thurai Emerging Technology Strategist and Evangelist, Oracle Cloud Infrastructure @AndyThurai Open source software and the pay-as-you-go model will dominate the cloud industry in 2019. This will lead to newer licensing models for all enterprise software. Instead of pricing based on the cores, servers, and machines the software runs on, the market will demand pricing based on the volume of data, time of usage, and -- most importantly -- business value. Cheaper combined costs from software, hardware, infrastructure, storage, etc. will lead to higher operational efficiency. This will free up enterprises to spend more time and energy on experimenting with their data, business models, and expansion into adjacent areas. Sophina Kio-Lawson and Lilian Douglas Cofounders, SheSecures @she_secures There is going to be a huge demand for more public cloud services from different industries, from the telecommunications sector to financial institutions, health sectors, etc. A lot of organizations ran into losses from managing their super-expensive physical data centers. Andrew Reichman Director of Product Management, Oracle Cloud Infrastructure @reichmanIT The industry's shift from infrastructure as a service to platform as a service will continue in 2019. Cloud vendors are moving up the stack to get to stickier solutions and offer end users more automation, which adds value faster. 
As this happens, storage will be more closely tied to the workload running. Instead of customers selecting and configuring storage on their own, higher-level solutions will allow customers to better tailor storage services to meet the needs of the workload itself. Additionally, there will be deeper usage of object storage. This has the potential to ease capacity concerns and reduce the effort required to manage and change workload configurations, because storage management can be coded directly into an application. Laurent Gil Security Product Strategy Architect, Oracle Cloud Infrastructure @laurentgil Enterprise multi-cloud strategies are going to have some unintended consequences. As enterprises accelerate their move to the cloud over the next two to three years, their security operations centers will have to become fluent in powerful data analytics systems. These systems must be able to ingest and reconcile incompatible and apparently uncorrelated security events, using massive compute capacity, and organize relevant security events for human analysts.


Customer Stories

Develop Your Cloud Computing Skills in 2019

January is a time when most of us create personal and professional plans for the year. For me, this time of the year is about creating a plan for investing in myself so that I am better prepared for the innovation that is happening in cloud computing. Today, enterprises spend over $3 trillion on IT. According to Forrester’s "Predictions 2019: Cloud Computing" report, the global cloud market will exceed $200 billion in 2019. And distributed and cloud computing skills are the most sought-after skills globally, according to LinkedIn. As enterprises move their workloads to the cloud, it's important to bridge the skills gap in the current workforce to drive adoption and successful deployments. At Oracle Cloud Infrastructure, we're building a world-class cloud for enterprises and startups alike. We're releasing services and features at a rapid pace to provide our customers with a secure, robust, high-performance cloud infrastructure that meets the demands of their mission-critical workloads. We know that for our customers to be able to take advantage of these services and features, they need to know how to use them properly. To that end, we've collaborated with Qloudable to bring customers a self-paced learning platform called Oracle Cloud Infrastructure Jump Start Learning Labs. As I mentioned in my TD at Work issue, "Foster Learning Through Engaging Content," to bridge the skills gap and to foster a learning culture, it's important to keep the learner engaged and have a well-defined outcome. Oracle Cloud Infrastructure's 15-minute to 45-minute hands-on, task-based labs provide step-by-step instructions for configuring and working with services at your own pace and convenience. It's a great way to bridge the cloud computing skills gap and be prepared for the cloud wave in 2019, and all you need is a laptop with a browser. 
The labs are categorized by skill level: Beginner: For users with little-to-no experience with Oracle Cloud Infrastructure or other cloud technologies Experienced: For users with some hands-on experience with Oracle Cloud Infrastructure and knowledge of cloud technologies Advanced: For users with extensive experience with Oracle Cloud Infrastructure We launched these labs at Oracle OpenWorld 2018, and our customers and partners are loving them. They have huge goals to close the cloud computing skills gap, and this platform is enabling them to accelerate their journey to the cloud. Here’s what some of our customers had to say about these labs: "Oracle Cloud Infrastructure Jump Start Learning Labs are a great way to provide actual hands-on experience. The labs are easy to use and a great learning tool. Darling Ingredients decided to go with Oracle Cloud Infrastructure last year. Having these labs available for our team will help close the skills gap and drive adoption. We expect to see a significant impact on our business as our teams are preparing for the Oracle Cloud Infrastructure Architect Associate certification." —Tom Morgan, Darling Ingredients "Oracle has paved a remarkable way for its partners to have a live, intuitive experience of their 'click-and-go' cloud infrastructure. Jump Start Learning Labs have been instrumental in assisting our architects and developers to get familiar with Oracle Cloud Infrastructure in minutes and gain confidence in the platform. We have 10-plus certified associates now, which shall definitely help us to increase our reach to more potential customers globally and provide better cloud solutions." —Ashish Thakkar, L&T Infotech "At Deloitte, we are constantly providing innovative cloud solutions to our customers. And therefore, it is important that our consultants and architects have in-depth, hands-on knowledge of Oracle Cloud Infrastructure. 
The availability of such labs is imperative for our teams in closing the skill gap and driving adoption of Oracle Cloud Infrastructure for our clients. The ease-of-use of the learning labs makes it a great learning tool, and thus it is an excellent way to provide actual hands-on experience to our consultants/architects." —Abhinav Phadnis, Deloitte Incorporate the Oracle Cloud Infrastructure Jump Start Learning Labs into your 2019 goals and be prepared for the coming cloud wave. Happy learning!


Oracle Cloud Infrastructure

Part 3 of 4 - Oracle IaaS and Seven Pillars of Trusted Enterprise Cloud Platform

This post is the third one in the series in which we are mapping Oracle's seven pillars of a trusted computing platform to Oracle Cloud Infrastructure security capabilities. This post covers the rest of the pillars. The fourth and final installment in this series will highlight some security services and enhancements that have been added to the portfolio. Links to Part 1 and Part 2.

5: Secure Hybrid Cloud

Oracle Cloud Infrastructure supports SAML 2.0 federation via Oracle Identity Cloud Service (IDCS), Microsoft Active Directory Federation Service (ADFS), and any SAML 2.0 compliant identity provider. Customers can also use Oracle Cloud Infrastructure native IAM for federated access. IDCS offers broad integration services with various identity providers. Oracle Cloud Infrastructure also offers two ways to securely connect customers' on-premises data centers or other public cloud providers to Oracle Cloud Infrastructure virtual cloud networks (VCNs). One way to connect is to use an IPSec VPN over the internet. IPSec is a protocol suite that encrypts the entire IP traffic before the packets are transferred from the source to the destination. IPSec can be configured in tunnel mode or transport mode, although Oracle Cloud Infrastructure supports only tunnel mode for IPSec VPNs. In tunnel mode, IPSec encrypts and authenticates the entire packet. After encryption, the packet is encapsulated to form a new IP packet with different header information. Each Oracle IPSec VPN consists of multiple redundant IPSec tunnels that use static routes to route traffic. Border Gateway Protocol (BGP) is not supported for the Oracle IPSec VPN. For more information, see IPSec VPN Overview. For a higher-bandwidth and more reliable and consistent networking experience compared to internet-based connections, Oracle Cloud Infrastructure FastConnect provides an easy way to create a dedicated, private connection between customers' data centers and Oracle Cloud Infrastructure. 
For more information, see FastConnect Overview. Additionally, Oracle Cloud Infrastructure is collaborating with various third-party security vendors (for example, FireEye, Fortinet, Symantec, and CheckPoint) to make their solutions accessible on Oracle Cloud Infrastructure so that customers can use their existing security tools when securing data and applications in the cloud. Visit the Oracle Cloud Marketplace for a list of partners whose solutions have been successfully tested on Oracle Cloud Infrastructure.

6: High Availability

To provide data availability and durability, Oracle Cloud Infrastructure enables customers to select from infrastructure with distinct geographic and threat profiles. A region is the top-level component of the infrastructure. Each region is a separate geographic area with multiple, fault-isolated locations called availability domains. Availability domains are designed to be independent and highly reliable. Each one is built with fully independent infrastructure: buildings, power generators, cooling equipment, and network connectivity. With physical separation comes protection against natural and other disasters. Availability domains within the same region are connected by a secure, high-speed, low-latency network, which allows customers to build and run highly reliable applications and workloads with minimum impact to application latency and performance. All links between availability domains are encrypted. Each region in the US has at least three availability domains, which allows customers to deploy highly available applications. Each availability domain in the US has three fault domains. Because of geographic constraints, some regions contain a single availability domain with multiple fault domains for application redundancy. When resources are placed across fault domains, they are far less likely to fail together. From a customer's perspective, instances placed across fault domains are guaranteed to be on different racks. 
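To take advantage of the rack-level separation described above, a common pattern is to spread instances evenly across the three fault domains of an availability domain. A toy sketch of round-robin placement (the instance names are placeholders; on a real deployment you would pass the chosen fault domain when launching each instance):

```shell
# Round-robin N instances across the three fault domains of an availability
# domain so that no more than ceil(N/3) instances share a rack.
# Fault-domain names follow the FAULT-DOMAIN-1..3 convention.
place_instances() {
  count=$1
  i=0
  while [ "$i" -lt "$count" ]; do
    echo "instance-$i FAULT-DOMAIN-$(( i % 3 + 1 ))"
    i=$((i + 1))
  done
}
place_instances 4
```

With four instances, the fourth wraps back to FAULT-DOMAIN-1, so a single rack failure can take out at most two of them.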
Each tenancy has its own fault domain identifiers for an availability domain. Instances returned by Compute APIs include these fault domain identifiers.

7: Verifiably Secure Infrastructure

Oracle Cloud Infrastructure's verifiably secure infrastructure is built using multiple security solutions that complement each other. Oracle is continuously investing time and resources to meet customers’ strict requirements for internal control over financial reporting and data protection across a variety of highly regulated industries.

- ISO 27001
  Regions: Phoenix (Arizona), Ashburn (Virginia), London (United Kingdom), and Frankfurt (Germany)
  Services covered: Block Volumes, Compute, Database, Governance, Load Balancing, Networking, and Object Storage
- SOC 1, SOC 2, and SOC 3
  Regions: Phoenix (Arizona), Ashburn (Virginia), and Frankfurt (Germany)
  Services covered: Block Volumes, Compute, Database, Governance, Load Balancing, Networking, and Object Storage
- PCI DSS Attestation of Compliance
  Services covered: Archive Storage, Block Volumes, Compute, Container Engine for Kubernetes, Data Transfer Service, Database, Exadata, FastConnect, File Storage, Governance, Load Balancing, Networking, Object Storage, and Registry
- HIPAA Attestation
  Services covered: Archive Storage, Block Volumes, Compute, Data Transfer, Database, Exadata, FastConnect, File Storage, Governance, Load Balancing, Networking, and Object Storage
- Strong security controls to meet GDPR requirements

For a complete and updated list of compliance certifications and attestations, visit https://cloud.oracle.com/en_US/cloud-compliance. Oracle regularly performs penetration and vulnerability testing and security assessments against the Oracle Cloud infrastructure, platforms, and applications. These tests are intended to validate and improve the overall security of Oracle Cloud Services. However, Oracle does not assess or test any components that customers manage through or introduce into the Oracle Cloud Services. 
For more information, see Oracle Cloud Security Testing Policy.

Conclusion

Oracle Cloud Infrastructure is gaining the trust of customer security teams by having:

A world-class security team
Foundational core and edge security capabilities built around seven pillars
Deeper customer isolation
Easy-to-use IAM policies
Geographic security compartmentalization
Secure access to APIs via asymmetric keys

For more information, visit the following sites:

Oracle Cloud Infrastructure Security white paper
Oracle Cloud Infrastructure GDPR white paper
Oracle Cloud Infrastructure Security Best Practices (provides actionable security guidance, including IAM policies and scripts, for each service)
Services Security Documentation
Blog posts: https://blogs.oracle.com/cloud-infrastructure (search on Sanjay Basu) and https://blogs.oracle.com/cloud-infrastructure/heres-a-nifty-checklist-to-secure-a-cloud-application


Events

Four Can't-Miss Cloud Sessions at Oracle OpenWorld Europe: London

This year, Oracle is taking OpenWorld global. Following October's Oracle OpenWorld 2018 conference in San Francisco, we're holding three regional events to show customers in Europe, the Middle East, and Asia how we can help transform and secure their businesses. The first of these events, Oracle OpenWorld Europe: London, starts January 16. Oracle Cloud Infrastructure was central to the strategic announcements at OpenWorld in San Francisco. In his keynote, Oracle Executive Chairman and CTO Larry Ellison touted its advantages around security, performance, and pricing. And attendees packed dozens of sessions to learn why Oracle offers the only true enterprise-grade public cloud. At OpenWorld Europe, attendees will learn what Oracle Cloud Infrastructure has been working on since then. (Hint: a lot!) Here's a preview of some can't-miss cloud sessions:

Move and Improve Your Apps in the Cloud (January 16, 9 a.m. GMT)
Other providers talk about how you can "lift and shift" your applications from on-premises to the cloud. Oracle Cloud Infrastructure takes that one step further. We enable organizations to "move and improve" their applications by running them in a purpose-built cloud that delivers higher performance and better security at a lower cost. Attendees of this session will learn why Oracle's cloud is now ready for any and all workloads.

Real-World Enterprise Outcomes and Reactions (January 16, 12:55 p.m. GMT)
In this breakout session, I'll be joined by two customers sharing their stories about deploying and using Oracle Cloud Infrastructure. They'll cover a variety of use cases, including workload migrations and cloud native applications.

Your Cloud Transformation Roadmap (January 16, 1:40 p.m. GMT)
In this solution keynote, Kyle York, Vice President of Product Strategy for Oracle Cloud Infrastructure, will explain why infrastructure should be the foundation on which a successful cloud transformation strategy is built. An enterprise customer will also detail the performance and cost benefits they achieved by moving to the cloud.

Running Mission-Critical Apps (January 16, 3:10 p.m. GMT)
The majority of enterprise applications do not yet live in the cloud. Most of these apps are mission-critical workloads with high demands around latency, availability, and performance. In this breakout session, Don Mowbray, Director of Product Management for Oracle Cloud Services, will explain how Oracle Cloud Infrastructure delivers the security, predictability, and performance that these workloads require.

If you can't attend Oracle OpenWorld Europe: London, we're also holding events in Dubai in February and Singapore in March. We hope to see you there!


Developer Tools

Serverless Image Classification with Oracle Functions and TensorFlow

Image classification is a canonical example used to demonstrate machine learning techniques. This post shows you how to run a TensorFlow-based image classification application on the recently announced cloud service Oracle Functions.

Oracle Functions

Oracle Functions is a fully managed, highly scalable, on-demand, function-as-a-service platform built on enterprise-grade Oracle Cloud Infrastructure. It's a serverless offering that enables you to focus on writing code to meet business needs without worrying about the underlying infrastructure, and you are billed only for the resources consumed during execution. You can deploy your code and call it directly or in response to triggers—Oracle Functions does all the work required to ensure that your application is highly available, scalable, secure, and monitored. Oracle Functions is powered by the Fn Project, an open source, container native, serverless platform that can run anywhere—in any cloud or on-premises. You can download and install the open source distribution of Fn Project, develop and test a function locally, and then use the same tooling to deploy that function to Oracle Functions.

What to Expect

Before we dive into the details, let's see what you can expect from your serverless machine learning function. After it's set up and running, you can point the app to images, and it will return an estimate of what it thinks the image is, along with the accuracy of the estimate. For example, when passed to the classification function, this image returned: This is a 'pizza' Accuracy—100%. (Photo by Alan Hardman on Unsplash)

The Code

The image classification function is based on an existing TensorFlow example. It leverages the TensorFlow Java SDK, which in turn uses the native C++ implementation through JNI (Java Native Interface).

Function Image Input

The image classification function leverages the Fn Java FDK, which simplifies the process of developing and running Java functions.
One of its benefits is that it can seamlessly convert the input sent to your functions into Java objects and types. This includes:

Simple data binding, like handling string input.
Binding JSON data types to POJOs. You can customize this because it's internally implemented using Jackson.
Working with raw inputs, enabled by an abstraction of the raw Fn Java FDK events received or returned by the function.

The binding can be further extended if you want to customize the way your input and output data is marshaled. The existing TensorFlow example expects a list of image names (which must be present on the machine from which the code is being executed) as input. The function behaves similarly, but with an important difference—it uses the flexible binding capability provided by the Fn Java FDK. The classify method serves as the entry point to the function and accepts a Java byte array (byte[]), which represents the raw bytes of the image that is passed into the function. This byte array is then used to create the Tensor object using the static Tensors.create(byte[]) method:

public class LabelImageFunction {
    public String classify(byte[] image) {
        ...
        Tensor<String> input = Tensors.create(image);
        ...
    }
}

The full source code is available on GitHub.

Machine Learning Model

Typically, a machine-learning-based system consists of the following phases:

Training: An algorithm is fed with past (historical) data in order to learn from it (derive patterns) and build a model. Very often, this process is ongoing.
Predicting: The generated model is then used to generate predictions or outputs in response to new inputs, based on the facts that were learned during the training phase.

This application uses a pregenerated model. As an added convenience, the model (and labels) required by the classification logic are packaged with the function itself (part of the Docker image). These can be found in the resources folder of the source code.
This means that you don't have to set up a dedicated model serving component (like TensorFlow Serving).

Function Metadata

The func.yaml file contains function metadata, including attributes like memory and timeout (for this function, they are 1024 MB and 120 seconds, respectively). This metadata is required because of the (fairly) demanding nature of the image classification algorithm (as opposed to simpler computations).

schema_version: 20180708
name: classify
version: 0.0.1
runtime: java
memory: 1024
timeout: 120
triggers:
- name: classify
  type: http
  source: /classify

Here is a summary of the attributes used:

schema_version represents the version of the specification for this file.
name is the name and tag to which this function is pushed.
version represents the current version of the function. When deploying, it is appended to the image as a tag.
runtime represents the programming language runtime, which is java in this case.
memory (optional) is the maximum memory threshold for this function. If this function exceeds this limit during execution, it's stopped and an error message is logged.
timeout (optional) is the maximum time that a function is allowed to run.
triggers (optional) is an array of trigger entities that specify triggers for the function. In this case, we're using an HTTP trigger.

Function Dockerfile

Oracle Functions uses a set of prebuilt, language-specific Docker images for the build and runtime phases. For example, for Java functions, fn-java-fdk-build is used for the build phase and fn-java-fdk is used at runtime.
Here is the default Dockerfile that is used to create Docker images for your functions:

FROM fnproject/fn-java-fdk-build:jdk9-1.0.75 as build-stage
WORKDIR /function
ENV MAVEN_OPTS -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts= -Dmaven.repo.local=/usr/share/maven/ref/repository
ADD pom.xml /function/pom.xml
RUN ["mvn", "package", "dependency:copy-dependencies", "-DincludeScope=runtime", "-DskipTests=true", "-Dmdep.prependGroupId=true", "-DoutputDirectory=target", "--fail-never"]
ADD src /function/src
RUN ["mvn", "package"]
FROM fnproject/fn-java-fdk:jdk9-1.0.75
WORKDIR /function
COPY --from=build-stage /function/target/*.jar /function/app/
CMD ["com.example.fn.HelloFunction::handleRequest"]

It's a multi-stage Docker build that performs the following actions out of the box:

Runs the Maven package and build.
Copies (using COPY) the function JAR and dependencies to the runtime image.
Sets the command to be executed (using CMD) when the function container is spawned.

But there are times when you need more control over the creation of the Docker image, for example, to incorporate native third-party libraries. In such cases, you want to use a custom Dockerfile. It's powerful because it gives you the freedom to define the recipe for your function. All you need to do is extend from the base Docker images.
Following is the Dockerfile used for this function:

FROM fnproject/fn-java-fdk-build:jdk9-1.0.75 as build-stage
WORKDIR /function
ENV MAVEN_OPTS -Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort= -Dhttp.nonProxyHosts= -Dmaven.repo.local=/usr/share/maven/ref/repository
ADD pom.xml /function/pom.xml
RUN ["mvn", "package", "dependency:copy-dependencies", "-DincludeScope=runtime", "-DskipTests=true", "-Dmdep.prependGroupId=true", "-DoutputDirectory=target", "--fail-never"]
ARG TENSORFLOW_VERSION=1.12.0
RUN echo "using tensorflow version " $TENSORFLOW_VERSION
RUN curl -LJO https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-$TENSORFLOW_VERSION.jar
RUN curl -LJO https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-linux-x86_64-$TENSORFLOW_VERSION.tar.gz
RUN tar -xvzf libtensorflow_jni-cpu-linux-x86_64-$TENSORFLOW_VERSION.tar.gz
ADD src /function/src
RUN ["mvn", "package"]
FROM fnproject/fn-java-fdk:jdk9-1.0.75
ARG TENSORFLOW_VERSION=1.12.0
WORKDIR /function
COPY --from=build-stage /function/libtensorflow_jni.so /function/runtime/lib
COPY --from=build-stage /function/libtensorflow_framework.so /function/runtime/lib
COPY --from=build-stage /function/libtensorflow-$TENSORFLOW_VERSION.jar /function/app/
COPY --from=build-stage /function/target/*.jar /function/app/
CMD ["com.example.fn.LabelImageFunction::classify"]

Notice the additional customization that it incorporates, in addition to default steps like the Maven build:

Automates the TensorFlow setup (per the instructions): downloads and extracts the TensorFlow Java SDK and the native JNI (.so) libraries as part of the second stage of the Docker build.
Copies the JNI libraries to /function/runtime/lib and the SDK JAR to /function/app so that they are available to the function at runtime.

Deploying to Oracle Functions

As mentioned previously, you can use the open source Fn CLI to deploy to Oracle Functions. Ensure that you have the latest version.
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

You can also download it directly from https://github.com/fnproject/cli/releases.

Oracle Functions Context

Before using Oracle Functions, you have to configure the Fn Project CLI to connect to your Oracle Cloud Infrastructure tenancy. When the Fn Project CLI is initially installed, it's configured for a local development context. To configure the Fn Project CLI to connect to your Oracle Cloud Infrastructure tenancy instead, you have to create a new context. The context information is stored in a .yaml file in the ~/.fn/contexts directory. It specifies the Oracle Functions endpoints, the OCID of the compartment to which deployed functions belong, the Oracle Cloud Infrastructure configuration file, and the address of the Docker registry to push images to and pull images from. This is what a context file looks like:

api-url: https://functions.us-phoenix-1.oraclecloud.com
oracle.compartment-id: <OCI_compartment_OCID>
oracle.profile: <profile_name_in_OCI_config>
provider: oracle
registry: <OCI_docker_registry>

Oracle Cloud Infrastructure Configuration

The Oracle Cloud Infrastructure configuration file contains information about user credentials and the tenancy OCID. You can create multiple profiles with different values for these entries. Then, you can define the profile to be used by the CLI by using the oracle.profile attribute. Here is an example configuration file:

[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
pass_phrase=tops3cr3t
region=us-ashburn-1

[ORACLE_FUNCTIONS_USER]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=72:00:22:7f:d3:8b:47:a4:58:05:b8:95:84:31:dd:0e
key_file=/.oci/admin_key.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
pass_phrase=s3cr3t
region=us-phoenix-1

You can define multiple contexts, each stored in a different context file.
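As a concrete sketch, the context file described above can be written from the shell. The context name (oci-phx) and every value below are placeholders for your own tenancy details, not values from this post:

```shell
# Create an Fn CLI context file for Oracle Functions.
# All values are placeholders; substitute your own OCIDs, profile, and registry.
mkdir -p "$HOME/.fn/contexts"
cat > "$HOME/.fn/contexts/oci-phx.yaml" <<'EOF'
api-url: https://functions.us-phoenix-1.oraclecloud.com
oracle.compartment-id: ocid1.compartment.oc1..exampleuniqueID
oracle.profile: ORACLE_FUNCTIONS_USER
provider: oracle
registry: phx.ocir.io/mytenancy/myrepo
EOF
```

With the file in place, fn use context oci-phx makes it the active context.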
Switch to the correct context according to your Functions development environment:

fn use context <context_name>

Create the Application

Start by cloning the contents of the GitHub repository:

git clone https://github.com/abhirockzz/fn-hello-tensorflow

Here is the command required to create an application:

fn create app <app_name> --annotation oracle.com/oci/subnetIds='["<subnet_ocid>"]'

<app_name> is the name of the new application. <subnet_ocid> is the OCID of the subnet in which to run your function. For example:

fn create app fn-tensorflow-app --annotation oracle.com/oci/subnetIds='["ocid1.subnet.oc1.phx.exampleuniqueID","ocid1.subnet.oc1.phx.exampleuniqueID","ocid1.subnet.oc1.phx.exampleuniqueID"]'

Deploy the Function

After you create the application, you can deploy your function with the following command:

fn deploy --app <app_name>

<app_name> is the name of the application in Oracle Functions to which you want to add the function. If you want to use TensorFlow version 1.12.0 (for the Java SDK and corresponding native libraries), use the following command:

fn -v deploy --app fn-tensorflow-app

You can also choose a specific version. Ensure that you specify it in the pom.xml file before you build the function. For example, if you want to use version 1.11.0:

<dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>tensorflow</artifactId>
    <version>1.11.0</version>
    <scope>provided</scope>
</dependency>

To specify the version during function deployment, you can use --build-arg (build argument) as follows:

fn -v deploy --app fn-tensorflow-app --build-arg TENSORFLOW_VERSION=<version>

For example, if you want to use 1.11.0:

fn -v deploy --app fn-tensorflow-app --build-arg TENSORFLOW_VERSION=1.11.0

When the deployment completes successfully, your function is ready to use. Use the fn ls apps command to list the applications currently deployed. fn-tensorflow-app should be listed.

Time to Classify Images!
As mentioned earlier, the function can accept an image as input and tell you what it is, along with the percentage accuracy. You can start by downloading some of the recommended images or use images that you already have on your computer. All you need to do is pass them to the function while invoking it:

cat <path to image> | fn invoke fn-tensorflow-app classify

Ok, let's try this. Can it detect the sombrero in this image?

cat /Users/abhishek/manwithhat.jpg | fn invoke fn-tensorflow-app classify

"366 • 9 • Gringo" (CC BY-NC-ND 2.0) by Pragmagraphr

Result: This is a 'sombrero' Accuracy — 92%

How about a terrier?

cat /Users/abhishek/terrier.jpg | fn invoke fn-tensorflow-app classify

"Terrier" (CC BY-NC 2.0) by No_Water

Result: This is a 'West Highland white terrier' Accuracy - 88%

What will you classify?

Summary

We just deployed a simple yet fully functional machine learning application in the cloud! Eager to try this out? Oracle Functions will be generally available in 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Oracle Functions or to request access, please register. You can also learn more about the underlying open source technology used in Oracle Functions at FnProject.io. Featured image by Franck V. on Unsplash


Partners

NoSQL in Kubernetes: Couchbase Operator on Container Engine for Kubernetes

In my last post about Couchbase, I covered how to run Couchbase on Oracle Cloud Infrastructure with Terraform. That's all well and good, but now for something completely different...   At KubeCon + CloudNativeCon 2018, Oracle made lots of announcements, many about the Oracle Cloud Native Framework. This framework builds on earlier work we've done, joining the Cloud Native Computing Foundation (CNCF) as a Platinum member and releasing a managed Kubernetes service called Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE). Conventional wisdom is that Kubernetes is great for running stateless containers but not stateful workloads. Recommended architectures have application servers managed by Kubernetes with databases running outside on VMs or bare metal. This approach performs well, but it negates one of the biggest advantages of Kubernetes—providing a single orchestrator that can be used to manage an entire application. The result is that you must manage software both inside and outside of Kubernetes along with the connections between the two. It's not fun. The Kubernetes community recognized all this as a limitation and tried a number of approaches for running stateful workloads. The first was called Pet Sets, drawing on the pets/cattle analogy common in the cloud. This was later renamed to StatefulSets. That model was superseded by an approach called Sidecar, which suffered from a variety of issues, including single point of failure. All that culminated in CoreOS proposing a model called Operator, which uses a custom resource definition (CRD) to manage a stateful application inside a Kubernetes cluster. A variety of stateful pieces of software now run with the Operator model, including Confluent, Hazelcast, and Couchbase. If you're looking for an industry-leading NoSQL database on which to build your Kubernetes application, Couchbase is an obvious choice. Couchbase itself is a great product, with all the features you'd expect in a NoSQL database. 
The Couchbase Autonomous Operator distinguishes it a bit further. We've partnered closely with Couchbase to make it easy to deploy Operator on Container Engine for Kubernetes: "Couchbase is collaborating with Oracle to bring the benefits of the Oracle Kubernetes Engine (OKE) to applications like the Couchbase Data Platform. With Couchbase Autonomous Operator now users can easily deploy Couchbase Data Platform on OKE." Anil Kumar, Director of Product Management, Couchbase. Operator enables you to run your database next to the application in Kubernetes, lowering latency and simplifying administration of the application as a whole. Operator also blurs the line between enterprise software and a managed service, automating many tasks, including resizing the cluster, recovering from node failure, and upgrading the database version. Because Operator relies on the CRD API, it can run on any conformant Kubernetes distribution, including, of course, Container Engine for Kubernetes. Getting started with Couchbase Autonomous Operator on Container Engine for Kubernetes takes just a few steps. First, you deploy a Container Engine for Kubernetes cluster. Instructions to do that, along with a Terraform module that automates the process, are on GitHub. With your cluster up and running, the next step is to deploy Couchbase. We've created a walkthrough for that. As always, it's been a pleasure working with the Couchbase team to set this up. Special thanks to Tommie McAfee at Couchbase for helping with some permissions issues! If you have any questions, reach out to me at ben.lackey@oracle.com or on Twitter @benofben.
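Once Operator is installed in a cluster, a Couchbase deployment is declared as a custom resource rather than managed by hand. The sketch below writes a minimal manifest in the general shape of the Operator's CouchbaseCluster CRD; the names, sizes, version string, and secret are illustrative placeholders, not values from this post, so check them against the Operator documentation for your release:

```shell
# Write a minimal CouchbaseCluster custom resource (placeholder values).
# It could then be submitted with: kubectl apply -f couchbase-cluster.yaml
cat > couchbase-cluster.yaml <<'EOF'
apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: cb-example
spec:
  baseImage: couchbase/server
  version: enterprise-5.2.1
  authSecret: cb-example-auth    # Secret holding the admin username/password
  cluster:
    dataServiceMemoryQuota: 256
  buckets:
    - name: default
      type: couchbase
      memoryQuota: 128
  servers:
    - name: all_services
      size: 3                    # Operator resizes the cluster when this changes
      services:
        - data
        - index
        - query
EOF
```

Resizing the cluster then becomes an edit to size followed by another apply, which is exactly the kind of task Operator automates.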


Customer Stories

Virtual Humans on Oracle Cloud Infrastructure HPC

A great cloud success story is how Oracle Cloud Infrastructure has enabled ELEM Biotech. We've invited ELEM Biotech founders Mariano Vazquez, Chris Morton, Guillaume Houzeaux, and Jose Maria Cela to share how Oracle Cloud Infrastructure High-Performance Computing (HPC) enables them to make Virtual Humans. Check out their company at http://www.elem.bio/. Oracle Cloud Infrastructure is at the heart of the revolution that ELEM Biotech wants to lead. It provides the flexible and secure HPC power that we need, especially thanks to the performance of its bare metal instances. ELEM Biotech develops Alya Red. With a software as a service (SaaS) model deployed in Oracle Cloud Infrastructure, Alya Red allows users to set up and run advanced simulations and analyze them through a sophisticated, tailor-made biomedical interface. Alya Red comprises the biomedical interface, the cloud deployment and orchestration, the simulation engine, and a cloud-based database. Pacemakers, valve replacements, stents, anti-arrhythmic drugs, obstructive pulmonary disease and asthma treatments, drug pumps—all of these scenarios can now be set up using Alya Red. Our Virtual Humans are created in Oracle Cloud Infrastructure, where medical device manufacturers, pharmaceutical companies, and CROs can analyze their products and optimize treatments to better fit patients. The Alya Red team has almost 50 developers at the Barcelona Supercomputing Center and another 40 developers elsewhere. We have provided services to and received funding from Juan Yacht Design, Repsol, Iberdrola, and Medtronic. Alya Red is running on Oracle Cloud Infrastructure's HPC instances with large cases, using up to 1,000 cores. The results are extremely promising: we are seeing the same scalability on Oracle Cloud Infrastructure HPC that we see in our dedicated MareNostrum cluster, and we are able to get the results back 90 percent faster because there is no queue time.
It's worth mentioning that the code, which was tuned for MareNostrum, was compiled and run on Oracle Cloud Infrastructure's bare metal instances with RDMA without requiring any additional libraries or tuning—truly "lift and shift." Public HPC resources are largely focused on academia, and they can't provide the required quality of service that software such as Alya Red needs. That's where Oracle solves the problem with its HPC instances and cloud infrastructure. Thanks to the Oracle Startup Ecosystem and Oracle Cloud Infrastructure's HPC offering, ELEM Biotech has been able to hit the ground running in the HPC world. Our vision is that with Oracle Cloud Infrastructure, Alya Red will progressively include more human systems and organs, all of them validated and certified by regulatory agencies for their context of use. Our Virtual Humans will contribute to largely reducing animal and human testing, and reducing product costs and time to market. We hope to reduce the healthcare gap between rich and poor by streamlining innovation in medical devices and pharmaceutical industries.


Customer Stories

Exabyte.io for Scientific Computing on Oracle Cloud Infrastructure HPC

We recently invited Exabyte.io, a cloud-based, nanoscale modeling platform that accelerates research and development of new materials, to test the high-performance computing (HPC) hardware in Oracle Cloud Infrastructure. Their results were similar to the performance that our customers have been seeing and what other independent software vendors (ISVs) have been reporting: Oracle Cloud Infrastructure provides the best HPC performance for engineering and simulation workloads. Exabyte.io enables their customers to design chemicals, catalysts, polymers, microprocessors, solar cells, and batteries with their Materials Discovery Cloud. Exabyte.io allows scientists in enterprise R&D units to reliably exploit nanoscale modeling tools, collaborate, and organize research in a single platform. As Exabyte.io seeks to provide their customers with the highest-performing and lowest-cost modeling and simulation solutions, they have done extensive research and benchmarking with cloud-based HPC solutions. We were eager to have them test the Oracle Cloud Infrastructure HPC hardware. Exabyte.io ran several benchmarks, including general dense matrix algebra with LINPACK, density functional theory with the Vienna Ab-initio Simulation Package (VASP), and molecular dynamics with GROMACS. The results were impressive and prove the value, performance, and scale of HPC on Oracle Cloud Infrastructure. The advantage of Oracle Cloud Infrastructure's bare metal was obvious with LINPACK: throughput is almost double that of the closest cloud competitor and consistent with on-premises performance. Latency is even more interesting: the BM.HPC2.36 shape with RDMA provides the lowest latency at any packet size and is orders of magnitude faster than cloud competitors. In fact, for every performance metric that Exabyte.io tested on VASP and GROMACS, they saw Oracle's BM.HPC2.36 shape with RDMA (shown as OL in the following graph) outperform the other cloud competitors.
Below is a great example of both the performance and scaling of Oracle Cloud Infrastructure on VASP. When parallelizing over electronic bands for large-unit-cell materials and normalizing for core count, the single-node performance of the BM.HPC2.36 exceeds its competitors and then scales consistently as the cluster size increases. The BM.HPC2.36 runs large VASP jobs faster and can scale larger than any other cloud competitor. Exabyte.io has provided the full test results on their website. Their blog concluded that "Running modeling and simulations on the cloud with similar performance as on-premises is no longer a dream. If you had doubts about this before, now might be the right time to give it another try." By offering bare metal HPC performance in the cloud, Oracle Cloud Infrastructure enables customers running the largest workloads on the most challenging engineering and science problems to get their results faster. The results that Exabyte.io has seen are exceptional, but they are not unique among our customers. Spin up your own HPC cluster in 15 minutes on Oracle Cloud Infrastructure.


Migrate Oracle Database to Oracle Cloud Infrastructure by Using Storage Gateway

This blog post outlines the process of migrating a single-instance Oracle Database from on-premises to an Oracle Cloud Infrastructure Database as a Service (DBaaS) instance. There are many other ways to migrate an on-premises Oracle Database to an Oracle Cloud Infrastructure DBaaS instance; this post uses the Oracle Cloud Infrastructure Storage Gateway service and the Oracle RMAN utility. Oracle Cloud Infrastructure Storage Gateway is a cloud storage gateway that lets you connect your on-premises applications with Oracle Cloud Infrastructure. Any application that can write data to an NFS target can also write data to Oracle Cloud Infrastructure Object Storage by using Storage Gateway, without requiring application modification. At a very high level, Storage Gateway and Object Storage are used to create an NFS share. The NFS share is mounted on the database host, and an offline full database backup is performed to the NFS share by using the Oracle RMAN utility. This backup is stored in Object Storage through Storage Gateway. The database is then restored on the DBaaS instance by using the RMAN utility, reading the backup from Object Storage through a Storage Gateway on the Oracle Cloud Infrastructure side rather than the on-premises one.

Before You Start

Before you start the database migration, consider the following requirements:

Ensure that Storage Gateway is already installed on a virtual or physical host in your on-premises data center as well as on Oracle Cloud Infrastructure.
Create the file system on the Storage Gateway host and map it to Object Storage.
Install the Cloud Backup Module on both the source and destination database.
Encrypt the backup by using RMAN (wallet or password based).
Create a manifest file so that RMAN knows about the contents of the backup set files (manifest.xml).
Consider a high-throughput network for Storage Gateway to reduce latency.
Appropriately size the Storage Gateway cache for read and write operations.
Set up a strong password and share passwords with others only as needed.

Evaluate and Plan

Use the Evaluation and Planning Checklist to help you evaluate and plan for the migration of your on-premises Oracle Databases to Oracle Cloud Infrastructure, based on the unique requirements of your source and target databases.

Back Up the Oracle Database to the NFS Share

Before starting the backup of the Oracle database, mount the file system that you created with Storage Gateway and exported as an NFS share on the database host, using the appropriate NFSv4 mount options. Use the RMAN utility to create a full backup of the Oracle database to the NFS share as follows:

Mount the file system on your database host.

[root@db-host ~]# mount -t nfs -o vers=4,port=32769 <IP_of_storage_gateway>:/<filesystem_name> /<local_directory>
[root@db-host ~]# chown -R oracle:oinstall /<mount_directory>

Connect to the source database, enable backup encryption, and set the compression to medium.

[oracle@db-host ~]$ rman target /
RMAN> set encryption on;
RMAN> set compression algorithm 'medium';

Perform a full database backup, including the control file and spfile.

RUN {
  ALLOCATE CHANNEL ch11 DEVICE TYPE DISK MAXPIECESIZE 1G;
  BACKUP FORMAT '/mydb_backup/%d_D_%T_%u_s%s_p%p' DATABASE
    CURRENT CONTROLFILE FORMAT '/mydb_backup/%d_C_%T_%u'
    SPFILE FORMAT '/mydb_backup/%d_S_%T_%u'
    PLUS ARCHIVELOG FORMAT '/mydb_backup/%d_A_%T_%u_s%s_p%p';
  RELEASE CHANNEL ch11;
}

Copy the password file and TDE wallet files.

[oracle@db-host ~]$ cp $ORACLE_HOME/dbs/orapwdorcl /mydb_backup/.
[oracle@db-host ~]$ zip -rj /mydb_backup/tde_wallet.zip /u01/app/oracle/admin/orcl/tde_wallet

Restore and Recover the Oracle Database on Oracle Cloud Infrastructure Database

Before starting the restore process, disconnect the file system from the on-premises Storage Gateway host, and create a file system on Oracle Cloud Infrastructure Storage Gateway by using the same Object Storage bucket.

Create the target database in Oracle Cloud Infrastructure. To ensure that the target database has all the required metadata for Oracle Cloud Infrastructure tooling to work, create the target database by using one of the supported methods: the Oracle Cloud Infrastructure Console, the CLI, or the Terraform provider. This target database is then cleaned out and used as a shell for the migration.

Configure the Oracle Database Cloud Backup Module. Configuring the Cloud Backup Module for Existing or Fresh Backups provides an example of how to configure the Cloud Backup Module to point to the Object Storage backup bucket. For details, including variables and commands, see Installing the Oracle Database Cloud Backup Module.

Shut down the database on Oracle Cloud Infrastructure and clean up the existing files. Delete the existing data files, temp files, redo log files, wallet file, and password file by using the grid user and the oracle user. Note: Do not delete the parameter file.

Create a local directory and mount the Storage Gateway file system on the database host.

[opc@dbhost ~]$ sudo mount -t nfs -o vers=4,port=32769 <IP_of_storage_gateway>:/<filesystem_name> /<local_directory>
[opc@dbhost ~]$ sudo chown -R oracle:oinstall /<mount_directory>

Copy the source password file and TDE wallet files to the target location. Use the oracle user to copy the password file and TDE wallet files from the NFS mount directory to the target location on the Oracle Cloud Infrastructure database host. Ensure that sqlnet.ora has the right ENCRYPTION_WALLET_LOCATION.
cat $ORACLE_HOME/network/admin/sqlnet.ora

Restore the database on the Oracle Cloud Infrastructure host by using the RMAN utility. Run the RMAN utility as the oracle user to restore and recover the database. Set the appropriate DBID, and start up the instance to the nomount stage before restoring the database.

[oracle@dbhost ~]$ rman target /
RMAN> startup nomount pfile='$ORACLE_HOME/dbs/initorcl.ora'
RMAN> restore controlfile from '/mydb_backup/ORCL_C_xxxxxxx_xxxxx';
RMAN> alter database mount;
RMAN> catalog start with '/mydb_backup/ORCL';
RMAN> RUN {
SET NEWNAME FOR DATAFILE 1 TO '+DATA/mydb/system01.dbf';
SET NEWNAME FOR DATAFILE 2 TO '+DATA/mydb/sysaux01.dbf';
SET NEWNAME FOR DATAFILE 3 TO '+DATA/mydb/undotbs01.dbf';
SET NEWNAME FOR DATAFILE 4 TO '+DATA/mydb/users01.dbf';
SET NEWNAME FOR DATAFILE 6 TO '+DATA/mydb/soe.dbf';
RESTORE DATABASE;
SWITCH DATAFILE ALL;
RECOVER DATABASE;
}
RMAN> alter database open resetlogs;
RMAN> SELECT open_mode FROM v$database;

Note: Create an spfile by using the pfile.
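Typing the SET NEWNAME commands by hand gets tedious for databases with many datafiles. As a small illustration (not part of the original procedure), a helper script can generate that portion of the RMAN RUN block from a datafile mapping. The datafile numbers, file names, and the +DATA disk group below are taken from the example in this post; adjust them for your own database.

```python
def newname_clauses(mapping, diskgroup="+DATA", db="mydb"):
    """Build RMAN SET NEWNAME commands for a datafile mapping.

    mapping: dict of datafile number -> target file name,
    matching the restore example in this post.
    """
    lines = [
        f"SET NEWNAME FOR DATAFILE {n} TO '{diskgroup}/{db}/{name}';"
        for n, name in sorted(mapping.items())
    ]
    return "\n".join(lines)

# Datafile numbers and names from the restore example above.
datafiles = {
    1: "system01.dbf",
    2: "sysaux01.dbf",
    3: "undotbs01.dbf",
    4: "users01.dbf",
    6: "soe.dbf",
}
print(newname_clauses(datafiles))
```

You can paste the generated lines into the RUN block before RESTORE DATABASE; for a large database, you would typically build the mapping by querying V$DATAFILE on the source first.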


Oracle Cloud Infrastructure

Bacula Enterprise Integrates Natively with Oracle Cloud Infrastructure

In an exciting addition to the Oracle Cloud ecosystem, Bacula Enterprise now connects natively to Oracle Cloud Infrastructure. Bacula is a highly scalable, modular enterprise backup and recovery solution that has a wide range of features designed for medium and large organizations. What makes this latest development for Oracle Cloud even more interesting is that Bacula offers unique cloud interaction tools. For example, with Bacula you can perform the following tasks:

- Back up and restore data to and from Oracle Cloud by using either command line or BWeb (GUI) interfaces.
- Manage network bandwidth when transferring backup data to Oracle Cloud, which ensures that your backup doesn't monopolize your network.
- Perform and manage concurrent, asynchronous uploads and downloads of backup data from Oracle Cloud.
- Employ multiple-bucket support in a single storage daemon. By providing the ability to configure each bucket to suit the user’s personal needs, Bacula Enterprise pushes the boundaries of customization and flexibility far beyond its competitors.
- Use Bacula’s unique disk-caching system to recover and restore specific files with extreme speed.

These special cloud management tools provide data center managers with new levels of control and integration with Oracle Cloud Infrastructure. The tools are part of Bacula’s backup software feature set that works on entire physical and virtual environments, regardless of architecture—all from a single platform. Bacula’s ability to interoperate with databases, virtual environments, and practically any type of storage destination (VTLs, disk, tape, Oracle Cloud Infrastructure, and so on), coupled with the absence of data volume-related charges, gives you an opportunity to simplify and modernize your backup and recovery strategy while cutting costs. The following diagram provides an overview of Bacula Enterprise's wider feature set:

To get started, explore Bacula’s free trial software and videos.


Partners

Major Updates to DataStax on Oracle Cloud Infrastructure with Terraform

DataStax is the company behind Apache Cassandra, a distributed NoSQL database. DataStax offers DataStax Enterprise (DSE), an enterprise version of Cassandra with added capabilities such as integrated Spark and Solr, improved security features, and a graph database written by the same engineers who built TitanDB. Major enterprises like Walmart, Safeway, and ING rely on DSE for operational database use cases in which the database must always be on. That high availability is the result of architecture decisions that provide redundancy at the data center, rack and node levels. Applications powered by DSE can suffer the failure of entire regions and continue operating, ensuring uninterrupted service for the end user. DataStax and Oracle have a relationship going back to Oracle OpenWorld 2016, when Mahesh Thiagarajan and I worked on the launch of Oracle Cloud Infrastructure. At the time, I was leading the Partner Architecture team at DataStax, Oracle Cloud Infrastructure was a brand new cloud, and DataStax Lifecycle Manager (LCM) hadn't come out yet. Since then, much has changed. Gilbert Lau at DataStax worked to incrementally enhance those integrations, moving from a proprietary infrastructure as code (IaC) API to the open source industry standard of Terraform. He also added LCM support. Oracle Cloud Infrastructure has continued to advance, adding new regions, VMs, and services. One of the first things I worked on when I started at Oracle Cloud Infrastructure a few months ago was revving the DataStax Terraform module. The latest version of that is now available. We've also submitted a pull request to the root DSPN repo. It looks like the net change is to drop 700 lines. The best code is deleted code! 
The updated module includes several improvements:

- DataStax 6.0.2
- Terraform 0.11
- Latest Oracle Cloud Infrastructure Terraform provider
- Reorganized Terraform and scripts
- Arbitrary VM types and node counts
- Removed dependency on remote-exec
- Improved README.md

I worked with our video team to record a demo of the new module.

More is coming. In November, Collin Poczatek joined the Oracle Cloud Infrastructure team. He used to maintain the AWS Quick Start and Azure Marketplace listings for DataStax, so he brings deep expertise in cloud deployments. He's currently working on a number of projects:

- Additional updates to the Terraform module for Oracle Cloud Infrastructure.
- Hardware recommendations for one of our customers running DSE on Oracle Cloud Infrastructure. This includes benchmarking block versus NVMe storage, different machine types, and DSE 4.8 versus 6.0. We're planning to report on the results of all that work.
- A comparison showing the zData AWS benchmark running on Oracle Cloud Infrastructure.

At Oracle Cloud Infrastructure, we're committed to building an open cloud that is the best place to run a variety of ISV workloads, including NoSQL databases like DSE and Cassandra. If you're running one of these databases today or looking at deploying one, we'd love to show you how Oracle Cloud Infrastructure can offer the best price and performance, with savings of two to three times compared with AWS. If we can help in any way, reach out to me at ben.lackey@oracle.com or say hi on Twitter @benofben.


Customer Stories

Princess House Extends JD Edwards Capabilities on Oracle Cloud

Princess House is a direct sales company that sells cookware, serveware, and housewares through a network of 25,000 independent business owners. For 20 of the 55 years that they've been in business, they've leveraged JD Edwards for finance and procurement. To support their growing business, Princess House needed to modernize JD Edwards and the supporting infrastructure. They had the following goals:

- Extend JD Edwards beyond just financial and procurement management to become their core enterprise resource planning (ERP) system
- Add agility and resiliency into their infrastructure, specifically for dev/test use cases

To accomplish these goals, Princess House demanded a cloud infrastructure that would not only make it easy for them to move JD Edwards, but also offer them superior availability and support. Oracle Cloud Infrastructure was the clear choice.

Needed Cloud Agility for JD Edwards Dev/Test

Like many other enterprises, Princess House had been running their own data center on-premises, and they wanted to get out of the business of doing so. They wanted to be able to add and remove resources as needed for their dev/test workloads, and their legacy infrastructure didn't provide them with this level of agility. “Our main driver for moving to cloud was to be able to create resources on the fly without the upfront investments, constant capacity planning and hardware renewals that come with running your own data center,” said Bassam Alqassar, Vice President of Information Systems at Princess House. In addition to gaining much-needed agility, Princess House's IT team could rely on Oracle Cloud Infrastructure to deliver a highly available infrastructure. Now they could devote their resources to improving their enterprise applications for business users. Princess House upgraded to JD Edwards EnterpriseOne, adding in supply chain management capabilities and integrating with Softeon, a warehouse management system.
They were able to do so with limited disruption to existing business processes. And end users who were already familiar with JD Edwards were able to get trained quickly on the upgraded system.

Oracle Cloud Infrastructure Was Best Choice for JD Edwards

Princess House could have gone with any other cloud provider to get the agility benefits that they were seeking. However, when they looked into hosting with a prior partner, they discovered that it would have been three times more expensive than Oracle's solution. Additionally, they wanted to work with a cloud vendor that would be able to offer strong support across the stack, from the application itself down to the database and the infrastructure. “We looked at other clouds, but we knew Oracle Cloud Infrastructure was the best choice to run an Oracle solution,” said Alqassar. Learn more about how Princess House moved and improved JD Edwards on Oracle Cloud Infrastructure.


Product News

IDCS Users Can Now Use the Oracle Cloud Infrastructure SDK and CLI

We're announcing an enhancement to our federation capabilities using Oracle Identity Cloud Service (IDCS). Available today, users who are federated with IDCS can directly access the Oracle Cloud Infrastructure SDK and CLI. This enhancement supports a broad range of use cases, including the simplification of governance and management tasks. You can now use an IDCS user for all CLI access. For example, IDCS users can write scripts that use the CLI to automate common tasks and to integrate Oracle Cloud Infrastructure tasks with other infrastructure tools and systems that you might use. As another example, if you want to create a script that copies files to Object Storage, you can now do that by using an IDCS user instead of creating a local Oracle Cloud Infrastructure user. As a result, you can greatly reduce the number of users that you have to secure and manage.

Federation enables you to use identity management software to manage users and groups. All tenancies created after December 2017 are automatically federated with IDCS. If you're an IDCS user, that means you can leverage the same set of credentials across all Oracle Cloud solutions, including Oracle Cloud Applications and Oracle Cloud Infrastructure. In addition, all users that are members of IDCS groups that are mapped to Oracle Cloud Infrastructure groups are synchronized from IDCS to Oracle Cloud Infrastructure. This synchronization enables you to control which IDCS users have access to Oracle Cloud Infrastructure and to consolidate all user management in IDCS. To take advantage of this new feature, follow the setup process described in Upgrading Your Oracle Identity Cloud Service Federation.

Next, I'd like to give an example of a cost management scenario that is greatly simplified by this feature. Let's say you want to run a Python script, using the SDK, that finds and terminates compute instances that don't have the CostCenter cost tracking tag.
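As a rough sketch of the tag-checking logic behind such a script: the OCI Python SDK represents defined tags as a nested dictionary of namespace to tag name to value, so the core check is a simple lookup. The "Operations" tag namespace below is a placeholder (use whichever namespace holds your CostCenter tag), and the SDK calls mentioned in the comments are shown only as pointers, not executed here.

```python
def missing_cost_center(instance, namespace="Operations", tag="CostCenter"):
    """Return True if the instance's defined tags lack the cost-tracking tag.

    `instance` mirrors the shape of an OCI SDK Instance's defined_tags
    attribute: {namespace: {tag_name: value}}. The "Operations" namespace
    is hypothetical; substitute the namespace your tenancy actually uses.
    """
    tags = instance.get("defined_tags") or {}
    return tag not in tags.get(namespace, {})

# In the real script you would build `instance` dicts by iterating over
# oci.core.ComputeClient(config).list_instances(compartment_id).data
# and terminate matches with terminate_instance(instance.id).
tagged = {"defined_tags": {"Operations": {"CostCenter": "42"}}}
untagged = {"defined_tags": {}}
missing_cost_center(tagged)    # False: instance carries the tag
missing_cost_center(untagged)  # True: candidate for termination
```

In production you would also want to skip instances already in a TERMINATED lifecycle state and log what the script terminates, rather than deleting silently.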
Instead of creating a local Oracle Cloud Infrastructure user, you can set up a user in IDCS to run this script. You would follow these steps to enable this scenario:

Step 1: Ensure that your federation has been upgraded

If you haven't already followed the setup process described in Upgrading Your Oracle Identity Cloud Service Federation, do so now.

Step 2: Set up the user in IDCS and associate that user with the correct groups

Managing all your users from your identity provider is a more scalable, manageable, and secure way to manage your user identities. Be sure to follow the principle of least privilege by creating an IDCS user and associating that user with only the IDCS groups that they need to do their job.

Step 3: Set up the Oracle Cloud Infrastructure group

Create a local Oracle Cloud Infrastructure group that will be used for this task, and ensure that it has a policy that enables just the access control that it needs to do the work. Consider setting up a group specifically for the type of administrator you want (for example, compute instances administrator). For a detailed explanation of best practices in setting up granular groups and access policy, see the Oracle Cloud Infrastructure Security white paper. You can also create the group when you map it.

Step 4: Map the IDCS group to the Oracle Cloud Infrastructure group

Follow the instructions on adding groups and users for tenancies federated with Oracle Identity Cloud Service, and ensure that you map the correct group from IDCS to the equivalent group in Oracle Cloud Infrastructure. You will know that you succeeded if you see users created in your tenancy from IDCS (there is a filter that allows you to see only federated users). You can also create groups as you map them.

Step 5: Set up the user with an API key

Now that the IDCS user exists as a provisioned user in Oracle Cloud Infrastructure, you must create an API key pair and upload it to the user. Each user should have their own key pair.
See the SDK setup instructions for details.

Step 6: Check the user's capabilities

As a final check, ensure that the user has the capability to use the CLI or SDK. You could also set the user's capabilities to use only the SDK and not the web console. Now you've set up the IDCS user so that they can take advantage of the SDK and run scripts with the access that their Oracle Cloud Infrastructure group has been granted.

Tips

- You know that the user is federated if the user name is prefixed with the name of the identity provider. By default, IDCS is called oracleidentitycloudservice. For example, oracleidentitycloudservice/Martin.
- If no users are being replicated, verify that you've followed the setup procedure and the mapping between the groups. If that doesn't work, visit My Oracle Support to open a support ticket.
- Only users assigned to mapped groups are replicated. If you see some users but not the IDCS user that you want, that user doesn't belong to a group that has been mapped from IDCS to Oracle Cloud Infrastructure.
- To use the SDK or CLI, the client that runs the CLI or SDK must have the matching private key material stored on the client machine. Secure the client machine appropriately to prevent inappropriate access.

Conclusion

Stay tuned for future feature announcements regarding federation. We plan to support other federation providers, and we'll keep you informed as we make updates.


Partners

Couchbase: Expanding the Oracle Cloud Infrastructure NoSQL Ecosystem

Couchbase is a distributed NoSQL database. It's a document store in the same class of databases as MongoDB, but it differs in some interesting ways. For example, Couchbase offers a query language called N1QL that enables a user to write ANSI SQL (including joins!) against the document store. These N1QL queries return JSON documents. It's a little weird, but it substantially simplifies the development of web, mobile, and IoT apps.

Couchbase has three products:

- Couchbase Server, the core NoSQL database
- Couchbase Lite, a lightweight version of the database that runs on Android and iOS devices
- Couchbase Sync Gateway, which manages the synchronization between the mobile and server components

This architecture leads to some amazing use cases. For example, Ryanair uses Couchbase in their mobile app to store information locally, reducing latency and improving the customer experience. It also lets the app keep working if the device is disconnected from the server or the internet.

Beyond this core functionality, Couchbase offers the following components, which can be used in various combinations on heterogeneous nodes:

- Data
- Query
- Index
- Full Text Search
- Analytics
- Eventing

This flexibility enables users to scale whatever component of the database their use case demands. Couchbase calls this Multi-Dimensional Scaling (MDS). I have a particular affinity for Couchbase because I was leading their cloud partnerships before coming to Oracle Cloud Infrastructure. They have a great team of people who are a pleasure to work with.

Today, Oracle Cloud Infrastructure has a close partnership with Couchbase. We've created a Terraform module that automates the deployment of Couchbase on Oracle Cloud Infrastructure. Writing this module has been an interesting continuation of my introduction to Terraform. That introduction began with Gruntwork.io, when I worked with them on the Terraform module that they created to deploy Couchbase on AWS.
One of the cofounders, Jim Brikman, literally wrote the book on Terraform. It's a good read. Gruntwork also created an open source framework for testing Terraform code called Terratest. We're currently looking at ways to use that in our work at Oracle Cloud Infrastructure. Before working on the Terraform module, I'd worked with the infrastructure as code (IaC) languages that each cloud provides: AWS CloudFormation, Azure Resource Manager, and Google Deployment Manager. It's been interesting to explore how Terraform presents a single framework for deploying on any cloud, providing an open source technology that is the basis of a multi-cloud world. I've been impressed by Oracle Cloud Infrastructure's choice to embrace an existing open source technology, both by joining the Cloud Native Computing Foundation (CNCF) as a Platinum member and by contributing the Oracle Cloud Infrastructure provider to the Terraform project. This open approach to building a cloud seems preferable to technology stacks that lock users into a single platform.

I worked with our video team to record a demo that shows how to run the Terraform module to get a Couchbase cluster on Oracle Cloud Infrastructure. The module deploys both Couchbase Server and Sync Gateway. You can, of course, configure it to deploy different numbers of nodes, machines, and so on.

This is just the start of our partnership with Couchbase. Here are some upcoming items:

- Oguz Pastirmaci on the Oracle Cloud Infrastructure Data and AI team is working to improve the module, including revving it to Couchbase 6 and adding MDS support.
- We're debating whether to reuse the Python template generator approach I've used on some other technologies or wait for the control structures that Terraform 0.12 is introducing.
- A blog post about the Kubernetes Operator for Couchbase on Oracle Cloud Infrastructure Container Engine for Kubernetes is coming soon.
- We're also starting to run some POCs of Couchbase on Oracle Cloud Infrastructure.
If you're interested in learning more, reach out to me at ben.lackey@oracle.com or on Twitter @benofben.


Partners

H2O.ai Driverless AI Cruises on Oracle Cloud Infrastructure GPUs

One of the things I'm most excited about at Oracle Cloud Infrastructure is the opportunity to do cool things with our partners in the artificial intelligence (AI)/machine learning (ML) ecosystem. H2O.ai is doing some really innovative things in the ML space that can help power these sorts of use cases and more. Their open source ML libraries have become the de facto standard in the industry, providing a simple way to run a variety of ML methods, from logistic regression and GBT to an AutoML capability that tunes the model automatically. H2O.ai has continued to build on this functionality, adding GPU support and what I think might be the best-named product of all time, Sparkling Water. (Yes, it's H2O running on Spark. Get it?)

The latest H2O.ai product is Driverless AI (DAI). The name is perhaps a bit misleading: Driverless AI isn't related to driverless cars. Instead, it's an ML platform that provides a GUI on top of the H2O ML libraries that we already know. The GUI provides support for a significant chunk of the ML lifecycle:

- Data loading
- Visualization
- Feature engineering
- Model creation
- Model evaluation
- Deployment for scoring

Software to do all this simply wasn't available five years ago. Instead, a highly skilled person would have had to put everything together by hand over a period of weeks or months. There are still some gaps. For example, data wrangling is still a mess, even with the time series support and automatic feature generation abilities of Driverless AI. That said, building accurate ML models has never been easier.

So, what does this all have to do with Oracle Cloud Infrastructure? We're building data centers all over the world, and they're being populated with some nifty hardware, including cutting-edge GPU boxes. The new BM.GPU3.8 is the top of that range, with 8 NVIDIA Volta cards. This is the perfect machine to handle the compute demands of DAI, and we're pricing them to be significantly less expensive than any competing platform.
For our provisioning plane, Oracle Cloud Infrastructure has made an open choice. Rather than building a proprietary technology such as Amazon Web Services CloudFormation, we've chosen to adopt the open source industry standard of Terraform. We've joined the Cloud Native Computing Foundation (CNCF) as a Platinum member and contributed our Terraform provider to the open source project. We've partnered with H2O.ai to write some Terraform modules that deploy H2O.ai Driverless AI on Oracle Cloud Infrastructure. The first module deploys on GPU machines. I worked with our team to record a video that demonstrates how to use the module. It also includes a very basic demo.

This is just the beginning of our partnership with H2O.ai. We're working on several activities with them:

- Oguz Pastirmaci from the Oracle Cloud Infrastructure data and AI team is working to enhance the Terraform module. Building a model is fast with 8 GPUs. It's going to be a lot faster with a whole cluster of those machines humming in parallel.
- We're discussing how we might be able to simplify deployment even further, providing a more integrated experience with a higher-level interface.
- We'll be at H2O World San Francisco 2019 on Feb. 4-5. Although the event won't have booths, a number of us should be wandering around the conference. Say hi!

If you're interested in learning more about H2O.ai on Oracle Cloud Infrastructure or about our AI/ML partnerships in general, reach out to me at ben.lackey@oracle.com. You can also follow me on Twitter @benofben.


Customer Stories

How TruGreen Got Two to Four Times Better Performance for JD Edwards

TruGreen is arguably the largest and one of the most recognizable lawn care brands in the US. When I'm out walking the dog, I often find that the neighbors' yards that I envy the most all boast physical signposts indicating that they are cared for by TruGreen. Because the company's business model and motto is all about "living life outside," they have an expansive team of seasonal and part-time employees in the field, and branches located all around the US. In addition, TruGreen operates multiple lines of business ranging from lawn care to mosquito defense, and supports not only residential but also commercial properties. Therefore, they need a comprehensive and modern enterprise resource planning (ERP) solution that enables them to support and power their diverse lines of business and their distributed workforce. What's more, their ERP system needs to be backed by a high-performing and cost-effective infrastructure.

An Outdated Legacy Application Environment

TruGreen faced several challenges regarding their ERP application and the infrastructure supporting it:

- The JD Edwards version that they were running had been inherited from their parent company, ServiceMaster. It had been released over 15 years ago and was no longer supported.
- Their presentation layer leveraged outdated versions of IBM WebSphere, running on IBM DB2 Universal Database and AIX in the backend.
- TruGreen had disjointed ERP processes, a result of historical mergers and acquisitions (M&A) activity, and the obsolete technology was preventing them from streamlining operations for their many lines of business and over 13,000 employees.

In 2014, TruGreen spun off from ServiceMaster. This action served as a key trigger and a perfect opportunity to modernize both their ERP application and the infrastructure that it ran on. The IT team knew they hadn't taken advantage of all the capabilities that modern JD Edwards had to offer, so they decided to upgrade their solution and to move to the cloud.
They needed good technology partners that could help them get up and running in the new cloud environment as effectively as possible. Needed Best Performing, Most Reliable Cloud Infrastructure for JD Edwards TruGreen found a great partner in Velocity, which provided consulting and implementation services during their cloud transformation process. They considered both private and public cloud options and decided, in the end, to take a hybrid approach. TruGreen chose to archive their historical data in Velocity's private cloud offering, and they also wanted the best performing and most reliable cloud infrastructure for JD Edwards. Velocity recommended implementing JD Edwards EnterpriseOne 9.2 with Oracle Database in Oracle Cloud Infrastructure. TruGreen chose Oracle for several reasons. Not only was it the ideal IaaS to run JD Edwards, but TruGreen would also be able to take advantage of infrastructure and database services from a single vendor. The ability to purchase both IaaS and PaaS services through the Universal Credit pricing model was a key benefit. Additionally, Oracle Cloud Infrastructure made it easier for enterprises to create a cloud environment that offered the same level of isolation as on-premises, something that TruGreen required. With Oracle's highly customizable and private virtual cloud network, TruGreen and Velocity were able to create subnets to isolate private resources from public ones. Improved Performance and Streamlined Business Operations Despite TruGreen's outdated legacy environment, Velocity was able to help them stand up their new JD Edwards environment quickly. They leveraged Oracle APIs and their own proprietary tools to accelerate deployment. Since moving to Oracle Cloud Infrastructure, TruGreen has already seen significant improvements in performance. 
Paul Shearer, Director of JD Edwards Professional Services at Velocity Technology Solutions, believes that TruGreen's JD Edwards implementation is performing two to four times better than average. “Of the entire portfolio of JD Edwards customers Velocity hosts today, TruGreen is the most performant,” said Shearer. Clif Lee, Director of Corporate Systems at TruGreen, added, "Our finance and branch teams that use JD Edwards are just ecstatic over the performance." With Oracle, not only was TruGreen able to streamline business processes, but they now have a modern platform on which to build in additional capabilities, like multi-currency processing, to support their Canadian operations. For more details about TruGreen's JD Edwards implementation in Oracle Cloud Infrastructure, including the quantifiable results that they are seeing so far, read the full case study.


Building Enterprise Onramps to the Cloud Native Freeway

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. Businesses of all sizes, from startups to large, established enterprises, are talking about cloud native technologies such as Kubernetes, containers, and serverless. In fact, cloud native is becoming a market imperative for technology-centric businesses that want to remain competitive in the era of cloud computing. I recently sat down for a conversation with Bob Quillin, vice president of developer relations at Oracle Cloud Infrastructure, who founded StackEngine, a container management startup acquired by Oracle in 2015. In this interview, Quillin explains how Oracle Cloud Infrastructure enables enterprises to use traditional technologies in a cloud-native context. He also talks about Oracle's longstanding embrace of open standards and open source software and offers recent examples of cloud native use cases around Oracle Database, Java microservices, and WebLogic technologies. Listen to our conversation here and read a condensed version below.

You came to Oracle Cloud Infrastructure from a startup. How does it feel here?

Bob Quillin: It has actually been a pretty exciting time. The Oracle Cloud Infrastructure team up in Seattle was basically formed as a startup within Oracle. Over the last two to three years, we acquired Wercker for continuous integration and continuous delivery (CI/CD). They were a startup in the CI/CD space. We also brought in the Iron.io serverless team, and Dyn was another huge acquisition that added a whole bunch of edge services expertise into the Oracle Cloud Infrastructure team. So, we really have a lot of innovators. There are lots of people trying to do some cool and interesting things inside Oracle, and also help us take our second-generation capabilities around cloud and cloud native to the market.
It's been a great opportunity.

What is your team hearing from customers about the cloud and cloud native technologies?

Quillin: As we talk to customers—enterprises, startups, any technology-centric company—we've learned that going cloud and cloud native is really a market imperative. It's a mandate for businesses that want to be digital in this era. The movement is customer driven—and one thing I've found here in my time at Oracle is that Oracle is very customer-centric. Working on this team and evangelizing cloud native technologies over the last four to five years, I've seen that this movement toward cloud native is pervasive. It's both large and small organizations, and it's happening across the board.

Can you tell me more about what's happening with cloud native and open source technologies at Oracle?

Quillin: Oracle actually has a long history in open source and open standards with technologies like SQL, Java, and Linux, for example. We now have a whole new breed of startups that came into Oracle Cloud Infrastructure, and they're bringing a real startup mentality, a new kind of DNA, into Oracle. If you think about the Oracle Cloud Infrastructure team in Seattle, a lot of them came out of cloud businesses like Amazon and Azure. So, in many ways, we're used to cloud, we're open source software developers, and we're really committed to taking this forward in the right way. To that end, last year we joined the Cloud Native Computing Foundation (CNCF) as a Platinum member. We rolled out a whole bunch of new cloud native technologies based on CNCF standards and Kubernetes. We were one of the first solutions to be certified conformant by the CNCF, and we're also one of the first to release open source serverless solutions through our Fn project. People don't necessarily equate Oracle and open source and cloud, but we're here to help change that.
The way to do it is to really commit from the bottom up by engaging with the community and working with developers organically. That's what we're doing. We're working with developers from the bottom up and enterprises from the top down.

Cloud native development is obviously taking the world by storm. But is it only suitable for enterprises that focus mostly on developing new applications? Or can it also help if I'm a big enterprise with lots of traditional applications, databases, and legacy workloads?

Quillin: It can absolutely help big enterprises with traditional workloads. Most of these technologies came out of web-scale companies, be they Netflix, Google, or Spotify, for example. A lot of the cloud native technologies came from Generation 1 cloud or first-wave cloud native offerings. I think what we're seeing now is the second wave, where you have more and more organizations like Oracle trying to build more onramps to the cloud native freeway, and getting more people, teams, and technologies on board with cloud native. We've got to reach out and connect to the technologies that people know so they have a starting point from which they can actually adopt cloud native strategies.

Can you provide some examples of our cloud native technologies? How are organizations using them?

Quillin: For starters, the WebLogic team here built a Kubernetes operator, which basically extends Kubernetes to create, configure, and manage a whole WebLogic domain. One of our big technology customers is CERN, the European research organization with the largest particle physics lab in the world, located in Switzerland. CERN is a huge WebLogic and Java shop. They've embraced Kubernetes, and they're using this operator to move a lot of existing technology to this cloud-native world.

That's great. Do you have any other examples?

Quillin: Another good example is a project we announced at Oracle OpenWorld in October called Helidon.
It's a Java microservices framework that was rolled out to really simplify the process of doing microservice cloud native deployments in Java. My solutions team basically helped write a Kubernetes wrapper to connect that into Kubernetes. As a result, Java applications that are written in microservices format using that pattern can connect easily into Kubernetes. A third example is one we're working on right now: We're seeing a lot of Oracle Database customers starting to leverage cloud native apps based on Kubernetes for new web frontends or for some artificial intelligence back-end processing. They're looking at moving to the autonomous database that Oracle launched at OpenWorld this year and using Oracle Container Engine for Kubernetes to get autonomy and automation, not just on the database side but throughout the whole application. So, those are three pretty powerful examples of database technology, Java technology, and WebLogic technology. Over the last year, we've seen a huge leap forward in enabling customers to use more traditional technologies but use them in a cloud-native context. Learn more about Oracle cloud native technologies today.


Customer Stories

Covanta Migrates a PeopleSoft Application to Oracle Cloud Infrastructure

Energy-from-waste giant Covanta recently migrated a critical PeopleSoft implementation to Oracle Cloud Infrastructure. But first, the company ramped up security around its network edge with the Oracle Dyn Web Application Security platform. Covanta—which offers a variety of waste management services—has long used Oracle PeopleSoft to run its finance, supply chain management, and procurement portal. The company initially managed the application on premises and then moved to Oracle Managed Cloud Services, but it recently decided that it was ready for the simplicity and automation of a public cloud deployment for its critical application stack. Citing benefits such as excellent support, faster provisioning, greater scalability, and lower management costs, Covanta migrated the PeopleSoft implementation to Oracle Cloud Infrastructure. "We did an analysis and we looked at a couple of different options for the PeopleSoft application, including [Amazon Web Services], and we spent countless hours with the Oracle team," said Ben Cabrera, Covanta's Vice President and Chief Information Officer. "Everyone at Oracle stepped up to the plate, and now we're in the Oracle Cloud Infrastructure environment." As planning for the migration began, Covanta realized that it had additional security issues to address. With hackers increasingly targeting internet-facing applications, the company needed a higher level of security at the edge of its network. Following a successful proof of concept, Covanta decided to go live with a next-generation Web Application Firewall (WAF) from the cloud-based Oracle Dyn Web Application Security platform. Oracle Dyn Web Application Security enabled Covanta to protect the PeopleSoft application from DDoS attacks and other threats before, during, and after the migration. It also gives Covanta greater visibility into the different types of malicious traffic targeting internet-facing application environments.
"[Oracle Dyn Web Application Security] is definitely one of the higher-performing solutions in this space," said Jason Gonsalves, a security architect and manager at Covanta. "We're really happy with the capabilities, the output, and the integration."

Why Oracle Cloud Infrastructure?

Covanta embraces a multi-cloud strategy and had several options when deciding which cloud infrastructure was right for the PeopleSoft deployment. While the company uses Microsoft Azure and Amazon Web Services for other applications, it decided that Oracle Cloud Infrastructure was the best choice for its mission-critical business application stack because of the single-vendor support model and the optimization specifically for PeopleSoft. What's more, it allows Covanta to continue using Oracle management tools that it is already successful with, including PeopleSoft Cloud Manager, an orchestration framework that enables users to easily provision and manage PeopleSoft within cloud environments. Oracle Cloud Infrastructure combines the elasticity and utility of a public cloud with the granular control, security, and predictability of on-premises infrastructure to deliver high-performance, high-availability, and cost-effective infrastructure services. Oracle Cloud Infrastructure makes it easier for companies like Covanta to provision new services and enhance existing ones over time. "Compute resources are actually a lot better from a CPU utilization perspective. It's a huge improvement," said Karish Chowdhury, a cloud architect and manager at Covanta. "The environment is also in our control, so making a change that goes through development, QA, and production is much easier and less time consuming." Among other tools, Covanta is using Oracle Cloud Infrastructure virtual cloud network (VCN) technology, which gives it complete control over the networking environment. By using VCNs, Covanta can assign its own IP address space, create subnets, create route tables, and configure firewalls.
"The VMs that we have been provisioning seem to be a lot better than what other cloud providers offer at the same level in terms of disk resolution, memory, and CPUs," Chowdhury said. Watch Cabrera discuss Covanta's use of Oracle Cloud Infrastructure in this video. Learn more about Covanta's migration to Oracle Cloud Infrastructure today.


Oracle Cloud Infrastructure

Configure a FastConnect Direct Link with Equinix Cloud Exchange Fabric

This post was written by Sergio J. Castro, Senior Solutions Engineer at Oracle, and Bill Blake, Global Solutions Architect at Equinix. Oracle Cloud Infrastructure FastConnect is a network connectivity alternative to the public internet for connecting an on-premises data center or network with Oracle Cloud Infrastructure. Equinix, the first and largest FastConnect partner, connects the world's leading businesses to their customers, employees, and partners inside the most interconnected data centers. With Equinix Cloud Exchange Fabric™ (ECX Fabric), customers can extend their Oracle IaaS and PaaS solutions to the Oracle Cloud in more than 30 locations across the US, EMEA, and APAC. Equinix Cloud Exchange Fabric is optimized for connectivity to Oracle Cloud Infrastructure services by leveraging FastConnect. The result is a secure connection that offers predictable, consistent latency and high bandwidth, with dedicated speeds of up to 10 Gbps. In this post, cloud architects from Oracle and Equinix provide all the steps needed to configure a FastConnect link from Oracle Cloud Infrastructure to an on-premises router by using Equinix Cloud Exchange Fabric. You need accounts in both Oracle Cloud Infrastructure and Equinix. On the on-premises side of the connection, you need administrator access to the router that will serve as the customer-premises equipment; in this post, we use a Cisco CSR. On Oracle Cloud Infrastructure, we build a virtual cloud network (VCN), configure a dynamic routing gateway (DRG), associate the DRG with the VCN, and then add a route rule that points VCN traffic to the DRG. We then configure the FastConnect link. From the FastConnect configuration, we retrieve the virtual circuit OCID and pass it to Equinix for their Cloud Exchange Fabric configuration to set up private peering.
On Equinix Cloud Exchange Fabric, we create the connection to Oracle Cloud Infrastructure, using the OCID and other information, such as the region, Border Gateway Protocol (BGP) IP addresses, and autonomous system number (ASN), to complete the configuration.

Create a Virtual Cloud Network

(You can skip this step if you already have a VCN that you want to use.)

A virtual cloud network (VCN) is a software-defined private network that you set up in Oracle Cloud Infrastructure. It is a virtual representation of a physical network, with routers, route tables, and security rules. A VCN is not strictly required to configure the FastConnect link, but it is what the link ultimately interconnects with the on-premises network, and in this post we use it to test end-to-end connectivity via ICMP.

1. Sign in to your tenancy in the Oracle Cloud Infrastructure Console. Ensure that you're in the Oracle Cloud Infrastructure region that matches the Equinix destination region that you're going to configure. This example uses the Ashburn region.
2. In the Quick Launch section of the home page, click Create a virtual cloud network: Networking.
3. In the Create Virtual Cloud Network dialog box, select a compartment. If one is preselected, ensure that you want your VCN to reside there, or select another one. Oracle Cloud Infrastructure uses compartments to organize resources.
4. Give your VCN a name. If you leave this field blank, the date and time of creation become the VCN name.
5. Select Create Virtual Cloud Network Plus Related Resources. This option assigns a default CIDR block, creates a subnet in each availability domain, adds an internet gateway, generates a security list, and generates a route table with a rule that routes out to the open internet. If you want to customize your own settings, select Create Virtual Cloud Network instead and then create each of these resources yourself.
6. Click Create Virtual Cloud Network. The VCN detail page is displayed.
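As a quick sanity check on the addressing that this option sets up, here is a minimal Python sketch using the standard library's ipaddress module. The specific CIDRs (a 10.0.0.0/16 VCN carved into /24 subnets, one per availability domain) are assumptions based on typical Console defaults, not values stated in this post:

```python
import ipaddress

# Assumed Console defaults: a /16 for the VCN, one /24 subnet per
# availability domain (three ADs in the Ashburn region).
vcn = ipaddress.ip_network("10.0.0.0/16")
ad_subnets = list(vcn.subnets(new_prefix=24))[:3]

for ad, subnet in enumerate(ad_subnets, start=1):
    print(f"AD-{ad} subnet: {subnet}")

# The example instance used later in this post (10.0.2.2) would land
# in the third subnet under this layout.
print(ipaddress.ip_address("10.0.2.2") in ad_subnets[2])  # True
```

Whatever CIDR block your VCN actually gets, the same check is a handy way to confirm which subnet a given private IP address belongs to.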
Note: For this example, we launched a Linux VM compute instance with a private IP address of 10.0.2.2. For information about how to launch compute instances on Oracle Cloud Infrastructure, see the Getting Started guide.

Create a Dynamic Routing Gateway

A dynamic routing gateway (DRG) is a virtual router that provides a pathway for private traffic between your VCN and other networks, such as an on-premises network.

1. On the left side of the Console, under Networking, click Dynamic Routing Gateways.
2. Click Create Dynamic Routing Gateway.
3. In the Create Dynamic Routing Gateway dialog box, select the compartment where you want your DRG to reside, and give your DRG a name (in this example, EquinixDRG).
4. Click Create Dynamic Routing Gateway.
5. After your DRG is provisioned, select it.
6. On the left side of the Console, under Resources, click Virtual Cloud Networks.
7. Click Attach to Virtual Cloud Network.
8. In the Attach to Virtual Cloud Network dialog box, select the same compartment where your VCN resides, and then select the VCN (in this example, EquinixVCN). You can ignore the Associate with Route Table settings. For more information about this option, click the help link or the information symbol in the dialog box.
9. Click Attach. Your VCN is now attached to the DRG.

Add a Rule to the DRG on Your Route Table

A VCN uses virtual route tables to send traffic out of the VCN, for example, to the internet or, as in this case, to your on-premises network.

1. Go back to the Networking section and select your VCN (in this example, EquinixVCN).
2. Under Resources, click Route Tables.
3. Click Default Route Table for EquinixVCN.
4. Click Edit Route Rules.
5. Click +Another Route Rule.
6. In the expanded dialog box, provide the following information:
   - For Target Type, select Dynamic Routing Gateway.
   - For Compartment, select the same one that you've been using throughout this exercise (in this example, Equinix).
   - For Destination CIDR Block, enter the on-premises network CIDR block.
     In this example, we are using 192.168.1.0/24.
   - For Target Dynamic Routing Gateway, select the DRG that you just created (in this example, EquinixDRG).
7. Click Save.

Create a FastConnect Virtual Circuit

The final step on Oracle Cloud Infrastructure is to configure the FastConnect circuit that the DRG will use to reach the on-premises network. For this step, you need to know the Border Gateway Protocol (BGP) IP addresses and the autonomous system number (ASN). Equinix provides this information.

1. Go back to the Networking section. Under Networking, click FastConnect.
2. Click Create Connection.
3. In the Create Connection dialog box, select Connect Through a Provider, and then select Equinix: CloudExchange. Click Continue.
4. In the new Create Connection dialog box, provide the following information. The values provided here are specific to this example.
   - Name: Give the connection a name (in this example, Equinix).
   - Compartment: Select the same compartment that you've been using throughout this exercise (in this example, Equinix).
   - Virtual Circuit Type: Private Virtual Circuit
   - Dynamic Routing Gateway Compartment: Equinix
   - Dynamic Routing Gateway: EquinixDRG
   - Provisioned Bandwidth: 1 Gbps
   - Customer BGP IP Address: 172.16.4.1/30
   - Oracle BGP IP Address: 172.16.4.2/30
   - Customer BGP ASN: 65100
5. Click Continue. The connection is created from Oracle Cloud Infrastructure.
6. On the details page for the connection, copy the OCID. You need it to provision the virtual connection from Equinix in the next section. You can also click the Equinix link, which takes you to their main site, where you can log in to their portal (for the next section).

Complete the Connection from Equinix to Oracle Cloud Infrastructure

Now that the Oracle Cloud Infrastructure part is complete, the FastConnect status is Pending Provider. Next, you configure the Equinix part, which provides the actual physical link. Log in to the Equinix Cloud Exchange Portal.
1. Click the Create Connection tab.
2. Select Oracle Cloud. From the four options, select Oracle Cloud Infrastructure (OCI) FastConnect (Layer 2), and then click Create a Connection.
3. Select an origin and destination. In this example, we are creating a virtual connection from Equinix Chicago to the Oracle Cloud Infrastructure Ashburn region, which is local to Equinix Ashburn. Note that we are using the Equinix Cloud Exchange (ECX) WAN Fabric to transit between Chicago and Ashburn.
4. Provide the required information to build the virtual connection:
   - FastConnect Virtual Circuit: Provide a name for this connection.
   - VLAN: Enter the VLAN used on your router. The values must match.
   - Virtual Circuit OCID: Enter the OCID that you copied from the Oracle Cloud Infrastructure Console in the previous procedure. This ID is validated by the system.
   - Purchase Order Number: This optional field is for customer tracking.
5. Click Next. The circuit speed is set automatically based on the OCID from Oracle Cloud Infrastructure.
6. On the page that summarizes the virtual connection settings, validate the settings and add your email address for order notifications. A confirmation screen appears.
7. Click Inventory and locate your new virtual connection.
8. Click the virtual connection to view the status. It normally takes 5 to 10 minutes for the Equinix Cloud Exchange to configure the Equinix and Oracle sides. Ensure that the Status and Provider Status fields say Provisioned. The additional information that shows the Oracle side of the virtual connection can be used later for troubleshooting.
9. On the connection detail page in the Oracle Cloud Infrastructure Console, note that the link is provisioned but not yet synchronized.

Complete the Router Configuration from Equinix to Your Network

Now that the Equinix part is done, the final step is configuring the connection to the on-premises network.
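To preview what that router-side configuration can look like with this example's values, here is a minimal, hedged Cisco IOS sketch. The interface name and VLAN ID are placeholders (the dot1Q VLAN must match what you entered in the ECX portal); treat this as a starting point, not a complete configuration:

```
! Placeholder subinterface; the dot1Q VLAN must match the ECX virtual connection
interface GigabitEthernet1.100
 encapsulation dot1Q 100
 ip address 172.16.4.1 255.255.255.252   ! customer BGP IP from the example /30
!
router bgp 65100                          ! customer private ASN from this example
 neighbor 172.16.4.2 remote-as 31898      ! Oracle BGP IP; Oracle's ASN via ECX
 network 192.168.1.0 mask 255.255.255.0   ! advertise the on-premises CIDR
```

The steps that follow walk through this configuration in more detail.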
Access your router to configure the BGP properties and establish a peering relationship with the Oracle Cloud Infrastructure DRG to exchange routes. This step can vary by vendor; this example uses a Cisco CSR. Refer to your vendor's documentation for help with BGP. Oracle's BGP ASN is 31898 when you use the Equinix Cloud Exchange. Your ASN can be any private or public ASN that you own. Configure the router IP address and BGP information. In this example, 172.16.4.0/30 is used for the link, and the private BGP ASN 65100 is used.

Validate Connectivity Between the Router and Oracle Cloud Infrastructure

Following are some suggested steps for testing the connectivity:

1. Verify that BGP has been established.
2. Verify that BGP routes are being sent to and received from Oracle Cloud Infrastructure.
3. Send ping and traceroute commands to the Oracle DRG.
4. Send ping and traceroute commands to Oracle bare metal hosts or VMs within Oracle Cloud Infrastructure.
5. If you are using multiple virtual connections, test failover.
6. Verify that you can ping an Oracle VM (10.0.2.2) from your router (192.168.1.1).
7. Verify that you can ping the Oracle DRG IP address (172.16.4.2) from your router.
8. In the Oracle Cloud Infrastructure Console, verify that the status of the FastConnect connection is UP.

Basic connectivity should now be established between the edge router and Oracle Cloud Infrastructure. To learn more about Oracle Cloud Infrastructure FastConnect, see the FastConnect Overview. To learn more about the Oracle Cloud partnership with Equinix, see this partner page. To learn more about Equinix Cloud Exchange Fabric, see the ECX Overview.

Bill Blake is a network veteran of over 13 years and has covered nearly all related technologies, including wireless, routing, switching, security, cloud, data centers, and load balancing. He has worked for large enterprises in technical, architectural, and managerial roles, as well as for a large VAR performing massive data center migrations.
Bill now works at Equinix, helping customers architect their data center, WAN, and cloud strategies on a global scale. Sergio Castro is an Oracle Cloud Infrastructure Certified Architect, Associate. He focuses on networking and on next-generation IT services. You can reach him at sergio.castro@oracle.com.


Solutions

Bring Your Domain Name to Oracle Cloud Infrastructure’s Edge Services

The Domain Name System, or DNS, is the first step in site and web application performance. Founded on Dyn's DNS, the Oracle Cloud Infrastructure DNS service is an integral part of Oracle Cloud Infrastructure's suite of edge services. It's available through the Oracle Cloud Infrastructure Console and the API. Bringing your domain name to Oracle Cloud Infrastructure is a straightforward process. As the preceding image indicates, it can take from three to nine minutes to configure and start using your domain name in Oracle Cloud Infrastructure. This post describes how to bring a domain name from a third-party provider. We create a zone and the needed records, and then publish the zone. The DNS record is mapped to a live web server running on an Oracle Cloud Infrastructure Compute Linux instance.

Create a DNS Zone

1. Sign in to your tenancy in the Oracle Cloud Infrastructure Console.
2. In the Quick Launch section of the home page, click Manage a domain: DNS Zone Management. The Create Zone dialog box is displayed. Here you enter the domain name that you're bringing from Dyn or from a third-party provider such as GoDaddy.
3. Select a compartment where your DNS zone will reside. If one is preselected, you might need to update it. If so, perform the following actions; if not, skip to the next step.
   - Click Cancel in the Create Zone dialog box.
   - On the DNS Zone Management page, click the Compartment list and choose a compartment. Oracle Cloud Infrastructure uses compartments to organize resources.
   - Click Create Zone.
4. In the Create Zone dialog box, enter the following values:
   - For Method, select Manual. The manual method is the most common, but you can also import your zone.
   - For Zone Type, select Primary. The other option is Secondary. The primary zone serves as the master zone, and a secondary zone holds a read-only copy of the zone that stays synchronized with the primary DNS server.
   - For Zone Name, enter the domain name that you are going to associate with the IP address of your web server. This example uses the domain name castro.cloud.
5. Click Submit. The DNS zone is created.

Oracle Cloud Infrastructure created five records: four nameserver (NS) records and one start of authority (SOA) record. The four nameserver records are the records that you take to the third-party DNS provider from which you're bringing the DNS (in the next section).

Add the Nameserver Records to the Third-Party DNS Provider

1. In the third-party DNS provider, change the nameservers type from Default to Custom.
2. Enter the four nameserver records provided by Oracle Cloud Infrastructure, and then click Save.

Add the A and CNAME Records

1. Go back to the zone details page in the Oracle Cloud Infrastructure Console, and click Add Record.
2. For the record type, select A - IPv4 Address.
3. Retrieve the public IP address of your web server, and enter it in the Address field.
4. Enter values in the TTL and TTL Unit fields.
5. If you want your domain name to be preceded by www (for example, www.castro.cloud), add a CNAME (canonical name) record by selecting the Add Another Record check box.
6. Click Submit.
7. If you are adding a CNAME record, select CNAME - CNAME as the record type and provide the necessary information. Then clear the Add Another Record check box and click Submit again.

Publish Your New DNS Zone

The last step is to publish your new DNS zone by clicking Publish Changes. As soon as you publish, you can access your web server at its new domain name (in this example, castro.cloud and www.castro.cloud). To learn more about Oracle Cloud Infrastructure's DNS Zone Management, see the Overview of the DNS Service.
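For reference, the A and CNAME records created in this example correspond to standard zone-file entries along these lines. The IP address below is a placeholder from the documentation range (your web server's public IP goes there), and the 300-second TTL is illustrative:

```
castro.cloud.       300   IN   A       203.0.113.10
www.castro.cloud.   300   IN   CNAME   castro.cloud.
```

The CNAME makes www.castro.cloud an alias that resolves to whatever address the bare castro.cloud A record points to, so you only ever update the IP in one place.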


Oracle Cloud Infrastructure

Big Data Performance on Oracle Cloud Infrastructure

Hello, my name is Zachary Smith, and I'm a Solutions Architect working on Big Data for Oracle Cloud Infrastructure. You might have seen Larry Ellison's keynote at Oracle OpenWorld 2018 regarding the significant price and performance advantages of running Big Data workloads on Oracle Cloud Infrastructure. I ran TeraSort benchmarks to understand the price and performance advantage, and I'd like to share an in-depth look at the benchmark process.

Creating the Environments

For the benchmark environments, I used Cloudera as the Hadoop distribution (v5.15.1). Deployment on Oracle Cloud Infrastructure leveraged Terraform templates available on GitHub to automate cluster deployment and configuration. Oracle Cloud Infrastructure clusters were tested using four bare metal shapes: BM.StandardE2.64, BM.Standard1.36, BM.Standard2.52, and BM.HPC2.36, with 32 x 1-TB block volumes for HDFS capacity per worker and 256-GB root volumes. The AWS deployments used some of the same automation elements from the Terraform templates, but because of the differences in provider code, the provisioning and installation were done manually. I chose AWS M5.24xLarge and AWS M4.16xLarge shapes for comparative analysis, with 25 x 1-TB EBS GP2 volumes for HDFS capacity per worker and 256-GB root volumes. I used 25 volumes instead of 32 (as on Oracle Cloud Infrastructure) because the Cloudera Enterprise Reference Architecture for AWS Deployments document (page 20) says not to use more than 26 EBS volumes on a single instance, including root volumes. All hosts had the same OS (CentOS 7.5) and similar Cloudera cluster tunings aside from variances in CPU and memory, which depend on the worker host resources. For cluster sizing, I normalized the OCPU count as close to 300 cores per cluster as possible.
The following table shows relative cluster sizes, OCPU/vCPU, and RAM information.

Running the Tests

To run the TeraSort, I used a benchmarking script that submits jobs to the cluster with tuning relative to the available cluster resources. It uses the following formulas to calculate the number of mappers/reducers, the map/reduce memory, and the Java heap (Xmx/Xms) parameters:

- Mappers/reducers: number of workers * cpu_vcores
- Map/reduce memory: (number of workers * RAM per worker) / number of mappers
- Java Xmx/Xms parameters: map memory * 0.8

When running the tests on the AWS shapes, I encountered Java heap space errors because the memory values produced by these formulas were too low for the same cluster heap settings. I tuned for higher memory and Xmx/Xms values by increasing yarn.cpu.vcores, halving the number of mappers, and then adjusting the minimum allocation vcores to compensate. These values produced equivalent job submission parameters.

The following table shows the relative cluster tuning parameters in detail.

The following table shows the job submission parameters.

The following graphs show 1-TB and 10-TB TeraSort times.

I also want to show some comparative utilization graphs from Cloudera Manager. The first screenshot shows the AWS M4.16xLarge cluster; the second shows the M5.24xLarge cluster. Note that the Cluster Network IO profiles are similar for both clusters, peaking at about 2 GB/s. Cluster Disk IO peaks a little over 9 GB/s, as does HDFS IO. Now look at the comparative graphs for an Oracle Cloud Infrastructure cluster with BM.HPC2.36 worker shapes: on Oracle Cloud Infrastructure, Cluster Disk IO peaks at around 25 GB/s, and Cluster Network IO mirrors it. That's 10 times the bandwidth compared to AWS! HDFS IO also peaks at almost 5 times that of AWS. Also note the CPU utilization graphs; the impact of reduced I/O on AWS directly affects the processing ability of the cluster. In essence, the network performance is a bottleneck.
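The tuning formulas described earlier can be sketched in a few lines of Python. The worker count, vcore, and RAM figures below are illustrative placeholders, not the actual cluster sizes from the benchmark:

```python
def terasort_tuning(workers, vcores_per_worker, ram_gb_per_worker):
    """Apply the post's tuning formulas to a hypothetical cluster."""
    mappers = workers * vcores_per_worker                    # mappers/reducers
    map_memory_gb = (workers * ram_gb_per_worker) / mappers  # per-task memory
    java_heap_gb = map_memory_gb * 0.8                       # JVM heap setting
    return mappers, map_memory_gb, java_heap_gb

# Example: 6 workers with 52 vcores and 768 GB of RAM each (placeholder values).
mappers, mem_gb, heap_gb = terasort_tuning(6, 52, 768)
print(mappers)                                # 312
print(round(mem_gb, 2), round(heap_gb, 2))
```

Note that the per-task memory simplifies to RAM per worker divided by vcores per worker, which is why halving the mapper count (as described for the AWS runs) doubles the memory available to each task.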
With the Oracle Cloud Infrastructure shapes, you get near line-speed network performance, which eliminates that bottleneck and allows maximum utilization of cluster resources. This leads to substantially faster workload processing times. The bottom line: when you choose to run your Big Data workloads on Oracle Cloud Infrastructure, you get dramatically better performance, with guaranteed SLAs for performance, manageability, and availability, at a substantially lower price point than AWS.

Next Steps

Sign up and try Oracle Cloud Infrastructure for free, download a Terraform deployment template, and experience the blazing-fast performance of Big Data on Oracle Cloud Infrastructure yourself!


Security

Core-to-Edge Security: The Oracle Cloud Infrastructure Edge Network

As more customer, partner, and employee interactions happen over internet-connected digital channels, and as the threat landscape becomes more complex and varied, the imperative for security has compounded. That's why Oracle Cloud Infrastructure takes a different approach to security, one that extends from the core infrastructure (including the database) to the user edge. Oracle Cloud Infrastructure's core-to-edge security strategy protects you and your organization from a variety of external and internal threats and incorporates common management of events, alerts, and orchestration of mitigations. Adding the edge to the core brings many benefits, including:

- Layers of defense that are designed to secure users, apps, data, and infrastructure
- Defense layer integration, so that detection of a botnet attack at the edge can automatically increase the security warnings and postures in the core
- Support for multi-location workloads (in the cloud, in many clouds, or at the edge), regardless of where users and customers are and what delivery mechanisms they use
- Automatic detection and mitigation of attacks using simultaneous vectors on the network, user, and application layers
- A deep monitoring network of sensors that provides data on internet performance and security events all over the world

New edge security services, including a web application firewall and DDoS protection, were announced at Oracle OpenWorld 2018 to provide a secure cloud with reliable performance. The services run on the new globally distributed Oracle Cloud Infrastructure Edge Network and are designed to alleviate many enterprise cloud migration concerns. Oracle edge security services can protect any application in any cloud and any on-premises infrastructure.

What Is an Edge Network?

The cloud edge is where users and devices connect to the network. That makes it both a crucial point for users' interactions with applications in the cloud and a potential launch point of attacks.
An enterprise-ready cloud needs to include an edge network that provides the following:

- Low latency and real-time processing of massive datasets such as web traffic
- Performance acceleration techniques such as load balancing, DNS resolution, local caching, and tracking of internet route changes
- Local learning and automation techniques
- Real-time internet health analysis

Many applications and services are designed to work at the edge, leveraging compute from the devices on which they are accessed, as well as workloads on the nearest cloud server. Today, that needs to be just about anywhere to enable business-critical functions. As the capacity of core networks is outstripped by computational intensity, organizations become more reliant on edge services, servers, and devices themselves to process business logic. Oracle's edge network is deployed close to end users in many markets and complements the large, secure Oracle Cloud Infrastructure regions that host workloads by adding an important layer of security and performance for traffic coming into web applications. The network has now been deployed at scale in globally distributed, very high-capacity points of presence (POPs). Each POP is fully redundant, multi-tenant, fault-tolerant, and self-repairing. The compute capacity of the edge network secures applications at the edge before requests and data are routed optimally to an Oracle cloud region, any other cloud provider, or on-premises infrastructure used by Oracle customers. Below is a map of Oracle Cloud Infrastructure Edge Network POPs: fifteen locations are dedicated to application security, five locations have high-capacity DDoS scrubbing centers, and nineteen locations are dedicated to DNS.

Stopping Attacks at the Edge

Security is the top cloud challenge of 2018; 77% of IT professionals identified it as a challenge in the RightScale 2018 State of the Cloud Survey. And when it comes to security, location is key.
Oracle Cloud Infrastructure’s security defense platform sits at the network edge, away from the core web server infrastructure and closer to the end user. Hence, the process of detection and mitigation happens before the potential threat reaches your network. Additionally, this configuration allows users to run ad hoc security defenses based on specific events -- say, the escalation of an attack -- or focus on only a specific section of applications that need to be addressed during an attack without affecting the rest of the infrastructure.

[Figure: How an application security POP works at the edge]

Protecting Hybrid and Multi-Cloud Architectures

Enterprises commonly use several cloud providers, often in combination with on-premises legacy systems. This is why all security services that run on the Oracle Cloud Infrastructure Edge Network are designed to work independently from where applications are hosted. This design is especially important for security and performance, as it allows for a global view of all events and monitoring and protection of any and all applications in one unique platform -- regardless of where these applications are hosted and regardless of the delivery mechanisms. The edge security services are a pure, cloud native, multi-tenant solution.

Helping Move and Improve

One of the largest impediments enterprises face is maintaining a strong security posture during a migration of workloads to the cloud. Oracle understands this concern and has built tools and solutions that support this transition. Because the application security services are independent from the hosting location(s) of the applications, the same security postures that applied to the old infrastructure continue to apply seamlessly to the new infrastructure before, during, and after the migration.
Hence, to take the risk out of the move-and-improve process, Oracle recommends that Oracle application security services be activated on the current applications sitting within the old infrastructure before the migration. Then, as the customer migrates their application servers to the new target infrastructure, all security services are already in place and activated. This is a key differentiator from the rest of the infrastructure as a service (IaaS) market, which can't offer the same no-risk solution for an enterprise cloud migration.

Deep Monitoring of the Internet

Oracle has also deployed a deep monitoring network that provides data on internet performance and security events all over the world, with real-time information about performance degradation, internet routing changes, and network security alerts. Oracle Cloud products, such as Market Performance and IP Troubleshooting, are based on this Internet Intelligence data. Oracle’s Internet Intelligence Map monitors the volatility of the internet as a whole. With ever more organizations relying on third-party providers for their most critical services, monitoring the collective health of the internet is increasingly important. Data gathered by the Oracle Cloud Infrastructure Edge Network is also used to provide the Oracle Security Research team with valuable insight into BGP route changes and DDoS activation worldwide. The Oracle Security Research team is able to monitor 250 million route updates per day, including where DDoS protection is being activated and when attacks are occurring. We measure, in near real time, the quality of cloud DDoS protection activations by most cloud-based DDoS vendors. This information can be used to measure the effectiveness of protection solutions.

The Pillar of Security

The agility, scalability, and integration capabilities of the cloud, combined with extensive cost savings, have made migration to the cloud a necessity for enterprise-grade organizations.
However, there are risks involved in an enterprise cloud migration, concerning everything from security to the sheer scale of such a move. Oracle Cloud Infrastructure was designed with this in mind. Security is a core pillar of everything we do, from deploying data centers and architecting networks to monitoring and scaling services. The Oracle Cloud Infrastructure Edge Network is part of Oracle’s forward-looking strategy. As the world moves to the cloud, we provide a core-to-edge solution to do so securely, efficiently, and without boundaries.


Developer Tools

At KubeCon + CloudNativeCon, Oracle Extends Its Commitment to Openness

Today at KubeCon + CloudNativeCon, Oracle Cloud Infrastructure unveiled the Oracle Cloud Native Framework, the world's most comprehensive open source framework for deploying public cloud, hybrid cloud, and on-premises applications. I'm especially excited about this announcement because it represents another step forward in our mission to give developers the tools that they need to reduce complexity and deploy modern applications in any type of environment. It's also the latest example of Oracle Cloud Infrastructure's longstanding commitment to interoperability and open standards. The Oracle Cloud Native Framework introduces a comprehensive set of new cloud native resources for developers. One of those resources is Oracle Functions, a serverless cloud service based on the open source Fn Project. As part of the announcement, we also introduced a rich set of cloud native offerings built on the Oracle Cloud Infrastructure Container Engine for Kubernetes, our Kubernetes orchestration and management layer. These resources address developer needs in three key areas: provisioning, application definition and development, and observability and analysis.

Embracing Open Standards

A commitment to openness is one of the five key pillars that Oracle Cloud Infrastructure is built on, along with protecting customers' existing investments, ensuring security, delivering mission-critical performance, and providing unmatched enterprise expertise. We embrace open standards because they enable our customers to be agile and responsive to changing business requirements. Open standards ensure that customers have the freedom and flexibility to move workloads between their on-premises data centers and Oracle Cloud, and even to other vendors' clouds when needed. Additionally, open standards lower barriers to innovation and reduce the total cost of technology investments.
Oracle has a long history of supporting open standards and making technical contributions to the open source communities responsible for Linux, Berkeley DB, Xen, MySQL, and many other technologies. Additionally, Oracle is a contributing member of several industry groups that promote open standards, including the Eclipse Foundation, the Cloud Security Alliance, and the Internet Society. And this year, we expanded our membership in and contributions to the Cloud Native Computing Foundation (CNCF). Oracle Cloud Infrastructure has made several announcements in recent months that advance our commitment to openness and support the needs of software developers. Following is a summary of some of the latest news.

The Oracle Linux Cloud Native Environment

Introduced at OpenWorld in October, the Oracle Linux Cloud Native Environment gives developers the features that they need to develop microservices-based applications that can be deployed in environments that support open standards. The Oracle Linux Cloud Native Environment is based entirely on open standards, specifications, and APIs defined by the CNCF. The environment makes it easier for developers to create, orchestrate, and manage containers. It also provides tools and resources for cloud native networking and storage, continuous integration and continuous delivery, and observability and diagnostics.

GraphPipe

Oracle recently introduced GraphPipe, an open source project designed to make it easier for enterprises to deploy and query machine learning models. GraphPipe gives developers a standard, high-performance protocol for transmitting tensor data over networks.

Terraform

We also recently released our Terraform provider, which gives developers access to an open source, enterprise-class orchestration tool that they can use to manage Oracle Cloud Infrastructure Compute. We'll soon be releasing a group of open source Terraform modules that enable easy provisioning of Oracle Cloud Infrastructure services.
And those are just some of the advancements that Oracle Cloud Infrastructure is making as it builds out the world's first truly open public cloud platform. Stay tuned for more news. In the meantime, try our cloud for yourself. Create a trial account with up to 3,500 hours of free cloud computing.


Announcing Oracle Cloud Native Framework at KubeCon North America 2018

This blog was originally published at https://blogs.oracle.com/cloudnative/

At KubeCon + CloudNativeCon North America 2018, Oracle has announced the Oracle Cloud Native Framework - an inclusive, sustainable, and open cloud native development solution with deployment models for public cloud, on premises, and hybrid cloud. The Oracle Cloud Native Framework is composed of the recently announced Oracle Linux Cloud Native Environment and a rich set of new Oracle Cloud Infrastructure cloud native services, including Oracle Functions, an industry-first, open serverless solution available as a managed cloud service based on the open source Fn Project. With this announcement, Oracle is the only major cloud provider to deliver and support a unified cloud native solution across managed cloud services and on-premises software, for public cloud (Oracle Cloud Infrastructure), hybrid cloud, and on-premises users, supporting seamless, bi-directional portability of cloud native applications built anywhere on the framework. Because the framework is based on open, CNCF-certified, conformant standards, it will not lock you in: applications built on the Oracle Cloud Native Framework are portable to any Kubernetes-conformant environment, on any cloud or infrastructure.

Oracle Cloud Native Framework – What Is It?

The Oracle Cloud Native Framework provides a supported solution of Oracle Cloud Infrastructure cloud services and Oracle Linux on-premises software based on open, community-driven CNCF projects. These are built on an open Kubernetes foundation – among the first K8s products released and certified last year. Six new Oracle Cloud Infrastructure cloud native services are being announced as part of this solution and build on the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services.
Cloud Native at a Crossroads – Amazing Progress

We should all pause and consider how far the cloud native ecosystem has come – evidenced by the scale, excitement, and buzz around the sold-out KubeCon conference this week and the success and strong foundation that Kubernetes has delivered! We are living in a golden age for developers – a literal "First Wave" of cloud native deployment and technology – being shaped by three forces coming together and creating massive potential:

- Culture: The DevOps culture has fundamentally changed the way we develop and deploy software and how we work together in application development teams. With almost a decade’s worth of work and metrics to support the methodologies and cultural shifts, it has resulted in many related off-shoots, alternatives, and derivatives, including SRE, DevSecOps, AIOps, GitOps, and NoOps (the list will go on, no doubt).
- Code: Open source and the projects that have been battle tested and spun out of webscale organizations like Netflix, Google, Uber, Facebook, and Twitter have been democratized under the umbrella of organizations like the CNCF (Cloud Native Computing Foundation). This grants the same access and opportunities to citizen developers playing or learning at home as it does to enterprise developers in the largest of orgs.
- Cloud: Unprecedented compute, network, and storage are available in today’s cloud – and that power continues to grow with a never-ending explosion in scale, from bare metal to GPUs and beyond. This unlocks new applications for developers in areas such as HPC apps, big data, AI, blockchain, and more.

Cloud Native at a Crossroads – Critical Challenges Ahead

Despite all the progress, we are facing new challenges to reach beyond these first-wave successes. Many developers and teams are being left behind as the culture changes.
Open source offers thousands of new choices and options, which on the surface create more complexity than a closed, proprietary path where everything is pre-decided for the developer. The rush towards a single-source cloud model has left many with cloud lock-in issues, resulting in diminished choices and rising costs – the opposite of what open source and cloud are supposed to provide. The challenges below mirror the positive forces above and are reflected in the August 2018 CNCF survey:

- Cultural Change for Developers: On-premises, traditional development teams are being left behind. Cultural change is slow and hard.
- Complexity: Too many choices, too hard to do yourself (maintain, administer), too much too soon?
- Cloud Lock-in: Proprietary single-source clouds can lock you in with closed APIs, services, and non-portable solutions.

The Cloud Native Second Wave – Inclusive, Sustainable, Open

What’s needed is a different approach:

- Inclusive: Can include cloud and on-prem, modern and traditional, dev and ops, startups and enterprises
- Sustainable: Managed services versus DIY; open but curated; supported, enterprise-grade infrastructure
- Open: Truly open, community-driven, and not based on proprietary tech or self-serving OSS extensions

Introducing the Oracle Cloud Native Framework – What’s New?

The Oracle Cloud Native Framework spans public cloud, on-premises, and hybrid cloud deployment models – offering choice and uniquely meeting the broad deployment needs of developers. It includes Oracle Cloud Infrastructure Cloud Native Services and the Oracle Linux Cloud Native Environment. On top of the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services, a rich set of new Oracle Cloud Infrastructure cloud native services has been announced, with services across provisioning, application definition and development, and observability and analysis.
Application Definition and Development

- Oracle Functions: A fully managed, highly scalable, on-demand, functions-as-a-service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the open source Fn Project. Multi-tenant and container native, Oracle Functions lets developers focus on writing code to meet business needs without having to manage or even address the underlying infrastructure. Users only pay for execution, not for idle time.
- Streaming: Enables applications such as supply chain, security, and IoT to collect data from many sources and process it in real time. Streaming is a highly available, scalable, and multi-tenant platform that makes it easy to collect and manage streaming data.

Provisioning

- Resource Manager: A managed Oracle Cloud Infrastructure provisioning service based on industry-standard Terraform. Infrastructure as code is a fundamental DevOps pattern, and Resource Manager is an indispensable tool that automates configuration and increases productivity by managing infrastructure declaratively.

Observation and Analysis

- Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. Monitoring provides predefined metrics and dashboards, and also supports a service API to obtain a top-down view of the health, performance, and capacity of the system. The Monitoring service includes alarms to track these metrics and act when they vary or exceed defined thresholds, helping users meet service-level objectives and avoid interruptions.
- Notifications: A scalable service that broadcasts messages to distributed components, such as email and PagerDuty. Users can easily deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers through a publish-subscribe pattern.
Events: Based on the CNCF CloudEvents standard, Events enables users to react to changes in the state of Oracle Cloud Infrastructure resources, whether initiated by the system or by user action. Events can store information to Object Storage, or they can trigger Functions to take actions, Notifications to inform users, or Streaming to update external services.

Use Cases for the Oracle Cloud Native Framework: Inclusive, Sustainable, Open

Inclusive: The Oracle Cloud Native Framework includes both cloud and on-prem, supports modern and traditional applications, supports both dev and ops, and can be used by startups and enterprises. As an industry, we need to create more on-ramps to the cloud native freeway – in particular by reaching out to teams and technologies and connecting cloud native to what people know and work on every day. The WebLogic Server Operator for Kubernetes is a great example of just that. It enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management. As another example, the Helidon project for Java creates a microservice architecture and framework for Java apps to move more quickly to cloud native. Many Oracle Database customers are connecting cloud native applications based on Kubernetes for new web front-ends and AI/big data processing back-ends, and the combination of the Oracle Autonomous Database and OKE creates a new model for self-driving, securing, and repairing cloud native applications. For example, using Kubernetes service broker and service catalog technology, developers can simply connect Autonomous Transaction Processing applications into OKE services on Oracle Cloud Infrastructure.

Sustainable: The Oracle Cloud Native Framework provides a set of managed cloud services and supported on-premises solutions, open and curated, and built on an enterprise-grade infrastructure.
New open source projects are popping up every day, and the rate of change of existing projects like Kubernetes is extraordinary. While the landscape grows, the industry and vendors must face the resultant challenge of complexity, as enterprises and teams can only learn, change, and adopt so fast. A unified framework helps reduce this complexity through curation and support. Managed cloud services are the secret weapon to reduce the administration, training, and learning-curve issues enterprises have had to shoulder themselves. While a do-it-yourself approach has been their only choice until recently, managed cloud services such as OKE give developers a chance to leapfrog into cloud native without a long and arduous learning curve. A sustainable model – built on an open, enterprise-grade infrastructure – gives enterprises a secure, performant platform from which to build real hybrid cloud deployments, including these five key hybrid cloud use cases:

- Development and DevOps: Dev/test in the cloud, production on-prem
- Application Portability and Migration: Enables bi-directional cloud native application portability (on-prem to cloud, cloud to on-prem) and lift-and-shift migrations. The Oracle MySQL Operator for Kubernetes is an extremely popular solution that simplifies portability and integration of MySQL applications into cloud native tooling. It enables creation and management of production-ready MySQL clusters based on a simple declarative configuration format, including operational tasks such as database backups and restoring from an existing backup. The MySQL Operator simplifies running MySQL inside Kubernetes, enabling further application portability and migrations.
- HA/DR: Disaster recovery or high availability sites in the cloud, production on-prem
- Workload-Specific Distribution: Choose where you want to run workloads, on-prem or cloud, based on specific workload type (e.g., based on latency, regulation, new vs. legacy)
- Intelligent Orchestration: More advanced hybrid use cases require more sophisticated distributed application intelligence and federation – these include cloud bursting and Kubernetes federation

Open: Over the course of the last few years, development teams have typically chosen to embrace a single-source cloud model to move fast and reduce complexity – in other words, the quick and easy solution. The price they are paying now is cloud lock-in resulting from proprietary services, closed APIs, and non-portable solutions. This is the exact opposite of where we are headed as an industry – fueled by open source, CNCF-based, and community-driven technologies. An open ecosystem enables not only a hybrid cloud world but a truly multi-cloud world – and that is the vision that drives the Oracle Cloud Native Framework!


Developer Tools

Announcing Oracle Cloud Infrastructure Notifications

At CloudNativeCon North America 2018 this week in Seattle, we are announcing a new service, Oracle Cloud Infrastructure Notifications. Notifications is a fully managed, durable, and secure pub-sub messaging service that broadcasts your messages to a large number of distributed applications at scale. It eliminates polling overhead by pushing messages to your subscribers' endpoints. Notifications delivers secure, highly reliable, low-latency, and durable messages for applications hosted on Oracle Cloud Infrastructure and externally. It empowers you to send notifications and to integrate distributed systems and microservices.

Notifications Use Cases

Here are some of the many possible uses for Notifications:

Operational alerts: You can use Notifications to receive notifications triggered by your applications' alerts. For example, you can configure Oracle Cloud Infrastructure Alarms to send notifications to a Notifications topic. Then, you can subscribe to the topic by using either email or PagerDuty.

Application integration: In this "fan-out" scenario, you can send your application events to a Notifications topic. Then, Notifications pushes the messages to all subscriptions. For example, in a freight-management application, any change in freight status can be broadcast via Notifications to multiple applications to initiate a process or notify a customer.

Getting Started

Oracle Cloud Infrastructure Notifications will be generally available in early 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Notifications or to request access, please register.
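The fan-out scenario described above can be sketched in a few lines of plain Python. This is only an illustration of the publish-subscribe pattern that Notifications implements as a managed service; the `Topic` class and handler names are hypothetical, not the Notifications API.

```python
# Minimal sketch of pub-sub fan-out: one published message is pushed to
# every subscription, so subscribers never need to poll. Illustrative
# only -- not the Oracle Cloud Infrastructure Notifications API.
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscribers = []          # delivery endpoints (callables here)

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, message):
        # Push the message to all subscriptions.
        for handler in self.subscribers:
            handler(message)

received = []
topic = Topic("freight-status")
topic.subscribe(lambda msg: received.append(("billing", msg)))
topic.subscribe(lambda msg: received.append(("customer-email", msg)))
topic.publish("shipment 42 delivered")
# Both subscribers receive the same message in publish order.
```

In the managed service, the subscribers would be endpoints such as email addresses or PagerDuty integrations rather than in-process callables.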


Developer Tools

Announcing Oracle Cloud Infrastructure Monitoring

At CloudNativeCon North America 2018 in Seattle this week, we are announcing a new service, Oracle Cloud Infrastructure Monitoring. Monitoring provides your enterprise with fine-grained metrics and notifications to monitor your entire stack. Using the Monitoring service, your enterprise can understand the health and performance of your stack, including Oracle Cloud Infrastructure resources, optimize resource utilization, and respond to anomalies in real time. Out-of-the-box performance and health metrics are provided for your Oracle Cloud Infrastructure resources, including Compute instances, Virtual Cloud Network (VCN) virtual NICs, Block Store volumes, and Load Balancer as a Service (LBaaS). You can also emit your own custom metrics to gain visibility across your entire stack. Additionally, alarms can be created on these metrics using industry-standard statistics, trigger operators, and time intervals. Alarms alert you in real time to important changes across your stack via email and PagerDuty, using the Oracle Notifications service. The interactive Metrics Explorer in the Oracle Cloud Infrastructure Console provides a comprehensive view of metrics across your resources and custom metrics, with the ability to customize and filter the data. The Monitoring service offers a best-in-class metric engine, allowing you to perform powerful aggregation and slice-and-dice queries across multiple metric streams and dimensions in real time. The Monitoring service's public API and SDK/CLI enable easy integration with your existing enterprise infrastructure.

Sample Monitoring Use Cases

Use the Metrics Explorer to understand the health of your Oracle Cloud Infrastructure resources. Metrics can be visualized individually or aggregated across multiple resources, such as the CPU percentage for multiple Compute instances over the past day. Monitoring also provides notifications, such as when CPU percentages pass a predefined utilization threshold.
When a resource's CPU passes the threshold, a PagerDuty notification is triggered or an email is sent to your team. For example, an alarm can be triggered when a resource's utilization is below 3% for more than 24 hours.

Availability

With the Monitoring service, we are delivering another core pillar to ensure that Oracle Cloud Infrastructure offers a best-in-class foundation for all enterprise workloads and use cases. Monitoring will become available in early 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Monitoring or to request access to the technology, please register.
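The "below 3% for more than 24 hours" alarm rule mentioned above boils down to evaluating a threshold over a sliding window of metric samples. Here is a minimal sketch of that evaluation in Python; the function and variable names are illustrative assumptions, not the Monitoring service API.

```python
# Sketch of a windowed alarm rule: fire only when every sample in the
# most recent window is below the threshold. Illustrative only -- not
# the Oracle Cloud Infrastructure Monitoring API.
def alarm_triggered(samples, threshold=3.0, window=24):
    """samples: hourly CPU-utilization percentages, oldest first."""
    if len(samples) < window:
        return False                   # not enough history yet
    recent = samples[-window:]
    return all(value < threshold for value in recent)

idle = [1.5] * 30                      # a long-idle instance: alarm fires
busy = [1.5] * 23 + [45.0]             # one busy hour resets the window
```

A single sample above the threshold anywhere in the window keeps the alarm from firing, which is why the `busy` series above does not trigger it.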


Developer Tools

Announcing Oracle Cloud Infrastructure Streaming

At CloudNativeCon North America 2018 this week in Seattle, we are excited to announce a new service, Oracle Cloud Infrastructure Streaming. The Streaming service provides fully managed, scalable, and durable storage for ingesting continuous, high-volume streams of data that you can consume and process in real time. Streaming can be used for messaging and for ingesting high-volume data such as application log data, operational telemetry data, web click-stream data, or other use cases in which data is produced and processed continually and sequentially in a publish-subscribe messaging model.

Use Cases

Here are some of the many possible use cases for Oracle Cloud Infrastructure Streaming:

- Messaging: Use streaming as a backplane to decouple components of large systems. Key-scoped ordering, low latency, and guaranteed durability of streaming provide reliable primitives to implement a variety of messaging patterns, while high throughput potential allows such a system to scale well.
- Web/mobile activity data ingestion: Use streaming as your ingestion pipeline for usage data from websites or mobile apps (such as page views, searches, or other actions users may take). Streaming’s consumer model makes it easy to feed information to multiple real-time monitoring and analytics systems or to a data warehouse for offline processing and reporting.
- Metric and log ingestion: Use streaming as an alternative to traditional log and metric aggregation approaches to help make critical operational data more quickly available for indexing, analysis, and visualization.
- Infrastructure and application event processing: Use streaming as a unified entry point for cloud components to report their lifecycle events for audit, accounting, and related activities.
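The key-scoped ordering mentioned in the messaging use case is commonly achieved by routing all messages with the same key to the same partition, so consumers see each key's messages in publish order. The sketch below is a hand-rolled illustration of that idea, not the Streaming service API; the partition count and key names are made up.

```python
# Illustration of key-scoped ordering: a stable hash pins each key to
# one partition, so per-key publish order is preserved within it.
# Hand-rolled sketch -- not the Oracle Cloud Infrastructure Streaming API.
import zlib

def partition_for(key, num_partitions):
    # crc32 is deterministic across runs, unlike Python's built-in hash().
    return zlib.crc32(key.encode("utf-8")) % num_partitions

partitions = {i: [] for i in range(4)}
events = [("order-1", "created"), ("order-2", "created"),
          ("order-1", "paid"), ("order-1", "shipped")]
for key, event in events:
    partitions[partition_for(key, 4)].append((key, event))

# All "order-1" events land in a single partition, in publish order,
# while unrelated keys may be spread across other partitions.
```

This is the same design trade-off made by most partitioned log systems: global ordering is given up in exchange for throughput, while ordering per key is kept.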
Managed Service

Oracle Cloud Infrastructure Streaming manages everything needed to operate the service - provisioning, deployment, maintenance, security patches, infrastructure, storage, networking, replication, and configuration of the hardware and software - so that you can focus on streaming your data.

Security

Oracle Cloud Infrastructure Streaming is secure by default. Only the account and data stream owners have access to the stream resources that they create. Streaming supports user authentication to control access to data, allowing you to use Oracle Cloud Infrastructure Identity and Access Management (IAM) policies to selectively grant permissions to users and groups of users. You can securely put and get your data from Streaming through SSL endpoints, using the HTTPS protocol. Lastly, user data is encrypted both at rest and in transit.

Getting Started

Oracle Cloud Infrastructure Streaming will be generally available in 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Streaming or to request access, please register.


Developer Tools

Announcing Oracle Cloud Infrastructure Resource Manager

At CloudNativeCon North America 2018 this week in Seattle, we are excited to announce a new service, Oracle Cloud Infrastructure Resource Manager, which makes it easy to manage your infrastructure resources on Oracle Cloud Infrastructure. Resource Manager enables you to use infrastructure as code (IaC) to automate provisioning for infrastructure resources such as compute, networking, storage, and load balancing. Using IaC is a DevOps practice that makes it possible to provision infrastructure quickly, reliably, and at any scale. Changes are made in code, not in the target systems. That code can be maintained in a source control system, so it’s easy to collaborate, track changes, and document and reverse deployments when required.

HashiCorp Terraform

To describe infrastructure, Resource Manager uses HashiCorp Terraform, an open source project that has become the dominant standard for describing cloud infrastructure. Oracle is making a strong commitment to Terraform and will enable all of its cloud infrastructure services to be managed through Terraform. Earlier this year we released the Terraform provider, and we have started to submit Terraform modules for Oracle Cloud Infrastructure to the Terraform Module Registry. Now we are taking the next step by providing a managed service.

Managed Service

In addition to the provider and modules, Oracle now provides Resource Manager, a fully managed service for operating Terraform. Resource Manager integrates with Oracle Cloud Infrastructure Identity and Access Management (IAM), so you can define granular permissions for Terraform operations. It also provides state locking, gives users the ability to share state, and lets teams collaborate effectively on their Terraform deployments. Most of all, it makes operating Terraform easier and more reliable. With Resource Manager, you create a stack before you run Terraform actions.
Stacks enable you to segregate your Terraform configuration, where a single stack represents a set of Oracle Cloud Infrastructure resources that you want to create together. Each stack individually maps to a Terraform state file that you can download. To create a stack, you define a compartment and upload the Terraform configuration as a zip file while creating the stack. This zip file contains all the .tf files that define the resources that you want to create. You can optionally include a variables.tf file or define your variables in (key, value) format in the console. After your stack is created, you can run different Terraform actions, called jobs, on it:

Plan: Resource Manager parses your configuration and returns an execution plan that lists the Oracle Cloud Infrastructure resources describing the end state.

Apply: Resource Manager creates your stack based on the results of the plan job. After this action is completed, you can see the resources that have been created successfully in the defined compartments.

Destroy: Terraform attempts to delete all the resources in the stack.

You can also update the stack by uploading a new zip file, download its configuration, and delete the stack when required. You can define permissions on your stacks and jobs through IAM policies, granting only certain users or groups the ability to perform actions like plan, apply, or destroy.

Availability

Resource Manager will become available in early 2019. We are currently providing access to selected customers through our Cloud Native Limited Availability Program. The current early version offers access to the Compute, Networking, Block Storage, Object Storage, IAM, and Load Balancing services. To learn more about Resource Manager or to request access to the technology, please register.
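A stack's Terraform configuration is uploaded as a zip archive of .tf files. As a rough sketch of that packaging step (the file names and resource contents here are hypothetical, not taken from the post), bundling a configuration for upload might look like:

```python
import zipfile

# Hypothetical Terraform configuration for a stack: all .tf files are
# bundled into a single zip archive that the stack upload accepts.
tf_files = {
    "main.tf": 'resource "oci_core_vcn" "example" {\n  cidr_block = "10.0.0.0/16"\n}\n',
    "variables.tf": 'variable "compartment_ocid" {}\n',
}

def build_stack_zip(path, files):
    """Write each .tf file into a zip archive suitable for a stack upload."""
    with zipfile.ZipFile(path, "w") as zf:
        for name, body in files.items():
            zf.writestr(name, body)
    return path

build_stack_zip("stack.zip", tf_files)
```

The resulting zip is what you upload when creating the stack; plan, apply, and destroy jobs then run against that configuration.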


Developer Tools

The Journey to Enterprise Managed Kubernetes

It’s been just over a year since we announced the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) and Registry (OCIR) offerings, foundational pieces in Oracle’s cloud for developing, deploying, and running container native applications. With KubeCon upon us, it’s a good time to consider not only how Oracle’s cloud offerings have evolved, but also the macro challenges facing Kubernetes and managed Kubernetes offerings as they target deeper adoption by enterprise developer and DevOps teams. An interesting starting point is to review the challenges that people face adopting containers and Kubernetes, and how those challenges have evolved in the last year or so. The Cloud Native Computing Foundation (CNCF) survey measures these challenges, and the accompanying graphic shows the results of four consecutive surveys (source: CNCF Survey).

An interesting takeaway from these results is that the purely "technology-related" challenges, such as networking and storage, have dropped fairly dramatically. This drop could be attributed to continued standardization in the CNCF with efforts like Container Networking Interface (CNI) and Container Storage Interface (CSI), and to the rise of managed Kubernetes offerings, where storage and networking are pre-integrated for users and should "just work." At the same time, it’s interesting to see the prevalence of the "non-technical" challenges (acknowledging that some of these are new questions in the latest survey), things like "complexity," "cultural changes," and "lack of training." Let’s take a look at these through the lens of enterprise customers and users.

Enterprise Needs and Culture

Oracle Cloud Infrastructure Container Engine for Kubernetes has been built with the needs of enterprise developers in mind.
We have the luxury of building on a Gen 2 cloud and being able to marry the best in managed open source with leading-edge compute, network, and storage cloud technologies to provide a secure, compliant, and highly performant offering to our customers. With highly available clusters (and control plane) across multiple availability domains, bare metal support, node pools of both bare metal instances and VMs, and native integration with Identity and Access Management and Kubernetes RBAC, Container Engine for Kubernetes is ready for the most demanding enterprise workloads. Even better, customers get access to these capabilities while paying only for the underlying IaaS resources (compute, storage, network) consumed. (As an aside, running Kubernetes clusters on bare metal makes a big difference for our customers; see this interesting comparison.)

A great example of cultural change here is the separation of concerns often present in enterprise IT teams, for example, between the network team and the development team. With Container Engine for Kubernetes, compartments can be leveraged to give the network team control over the shared network aspects (VCNs, subnets, and so on) while enabling developers to control their clusters (and cluster nodes).

Tackling Complexity

Complexity has been the genesis of managed Kubernetes offerings. The majority of customers we speak to want to leverage the technology to build their applications faster and further their business, but they don’t want to continually maintain the Kubernetes infrastructure itself. In addition to making it easy for users to upgrade to new Kubernetes versions, Container Engine for Kubernetes installs common tooling into those clusters by default (Kubernetes dashboard, Tiller, and Helm). Users can leverage the “quick cluster” capability to get a new cluster in a couple of clicks with a sensible set of defaults (see the following screenshot).
Container Engine for Kubernetes "Quick Cluster"

Container Engine for Kubernetes worker nodes also use the standard Docker runtime that’s familiar to so many developers. Managed Registry users can rein in container image sprawl by setting global retention policies (or targeting specific repositories), for example, to remove images that haven’t been pulled or tagged within a certain time frame.

As more developers and DevOps teams adopt infrastructure as code, they want to apply those capabilities to their Kubernetes clusters. In addition to full API and CLI support, Container Engine for Kubernetes also supports Terraform and Ansible providers, so DevOps teams can use the common open tooling that they’re already familiar with to provision, use, and scale their clusters, and even do so as part of their CI/CD pipelines.

Openness Is Key

Finally, a major part of tackling "complexity" and "lack of training" is being able to leverage the great resources available in the open source community. To that end, it’s critical that any managed Kubernetes offering conforms to true upstream open source, so that customers can be assured of portability and avoid lock-in. As a platinum member of the CNCF, Oracle participates in the Certified Kubernetes Conformance Program, and we ensure that every new minor release of Container Engine for Kubernetes is conformant. Beyond conformance, it’s also important to understand the typical (open) tooling and solutions that customers are deploying. For example, key features like the mutating webhook admission controller must be enabled in the (managed) control plane to support advanced features like automatic sidecar injection for Istio deployments. Solutions and best practices are part of the key to handling complexity. Many of these are available in the community, including from Oracle, such as monitoring (Prometheus), logging (EFK), Istio, Confluent, Couchbase, and Hazelcast.
Others are provided by partners, including Twistlock and Bitnami. As Kubernetes continues to grow and its charter continues to expand as the de facto operating system of the cloud, tackling complexity remains an ongoing challenge for the community. In this blog, we’ve looked at how Container Engine for Kubernetes can help to address it. This week we will be at KubeCon 2018. If you are there too, please stop by our booth P4 and the nearby Oracle lounge!
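The registry retention policies mentioned above boil down to a simple filter over image metadata. The sketch below is illustrative only: the field names and the 30-day cutoff are assumptions for the example, not OCIR's actual policy API.

```python
from datetime import datetime, timedelta

def stale_images(images, max_age_days=30, now=None):
    """Return images that haven't been pulled or tagged within the window.

    `images` is a list of dicts with hypothetical `last_pulled` and
    `last_tagged` datetime fields; a real policy engine would read this
    metadata from the registry itself.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return [
        img for img in images
        if img["last_pulled"] < cutoff and img["last_tagged"] < cutoff
    ]

# Made-up sample data: v1 is long idle, v2 was pulled recently.
now = datetime(2018, 12, 10)
images = [
    {"name": "app:v1", "last_pulled": datetime(2018, 10, 1), "last_tagged": datetime(2018, 9, 1)},
    {"name": "app:v2", "last_pulled": datetime(2018, 12, 9), "last_tagged": datetime(2018, 12, 1)},
]
print([img["name"] for img in stale_images(images, 30, now)])  # → ['app:v1']
```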


Oracle Cloud Infrastructure

Creating a Windows Active Directory Domain Controller in Oracle Cloud Infrastructure

There are several circumstances under which you might want to create a new Windows Active Directory (AD) environment. This post talks about using Oracle Cloud Infrastructure to build a new AD domain controller. By using Microsoft PowerShell and the Oracle Cloud Infrastructure cloudbase-init scripts, you can automate the process and eliminate the headache of building Windows AD manually. You can place your script in the User data section of the Advanced options when you create the host in the Oracle Cloud Infrastructure Console. This post covers only the automation of the AD domain controller, not your virtual cloud network (VCN) and network environment. You can learn more in the Creating Windows Active Directory Domain Servers in Oracle Cloud Infrastructure white paper. The accompanying diagram shows the basic architecture of how you would build your VCN and subnets.

Automating the Deployment of the AD Domain Controller

When you are planning your domain, your first task is to determine how you want to structure your forest. Building an AD forest can get complicated if you have numerous subdomains and trust dependencies, so we keep it simple here by using a single forest with a single tree. For more information about forests, see the Microsoft documentation.

Scripting the Host Deployment

Let’s jump into some PowerShell code. First, ensure that you have the correct header for the cloudbase-init script. You need to run in the ps1_sysnative mode for cloudbase-init and PowerShell to interpret the correct mode of execution.

#ps1_sysnative

Then, set the local administrator password and activate the administrator account (which is deactivated by default in Oracle Cloud Infrastructure Windows images). The account will be cloned, and the clone will be turned into the domain administrator, so ensure that the password is secure and contains special characters, numbers, and a mix of uppercase and lowercase letters.
This password is temporary, and you’ll change it later in the process.

# Set the Administrator Password and activate the Domain Admin Account
$password="P@ssw0rd123!!"
net user Administrator $password /logonpasswordchg:no /active:yes

After you activate the administrator account, you install the prerequisites for the AD Domain Services. The first of these Windows features and roles is the .NET Framework 3.0, which has the backwards-compatible code line. Next are the AD Domain Services and the Remote Active Directory Services, which are the core management features for AD. Finally, install DNS Services, which eases most of the communication issues within the AD domain.

Install-WindowsFeature NET-Framework-Core
Install-WindowsFeature AD-Domain-Services
Install-WindowsFeature RSAT-ADDS
Install-WindowsFeature RSAT-DNS-Server

After you install the prerequisites, you reboot the new host. However, before you reboot, create a RUNONCE script that will finish building the AD forest. For this task, create a text file that will become the RUNONCE script. This script, which runs on the next login by the local administrator after the reboot, imports the PowerShell module ADDSDeployment and then runs the Install-ADDSForest command, which names the forest and promotes the host to a domain controller. After these actions are done, the host is automatically rebooted.

# Create text block for the new script that will run once on reboot
$addsmodule02 = @"
#ps1_sysnative
Try {
    Start-Transcript -Path C:\DomainJoin\stage2.txt
    `$password = "P@ssw0rd123!!"
    `$FullDomainName = "cesa.corp"
    `$ShortDomainName = "CESA"
    `$encrypted = ConvertTo-SecureString `$password -AsPlainText -Force
    Import-Module ADDSDeployment
    Install-ADDSForest ``
        -CreateDnsDelegation:`$false ``
        -DatabasePath "C:\Windows\NTDS" ``
        -DomainMode "WinThreshold" ``
        -DomainName `$FullDomainName ``
        -DomainNetbiosName `$ShortDomainName ``
        -ForestMode "WinThreshold" ``
        -InstallDns:`$true ``
        -LogPath "C:\Windows\NTDS" ``
        -NoRebootOnCompletion:`$false ``
        -SysvolPath "C:\Windows\SYSVOL" ``
        -SafeModeAdministratorPassword `$encrypted ``
        -Force:`$true
} Catch {
    Write-Host $_
} Finally {
    Stop-Transcript
}
"@
Add-Content -Path "C:\DomainJoin\ADDCmodule2.ps1" -Value $addsmodule02

Then, add the RUNONCE key for the next time that the administrator logs in to the host.

# Adding the RunOnce job
$RunOnceKey = "HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce"
set-itemproperty $RunOnceKey "NextRun" ('C:\Windows\System32\WindowsPowerShell\v1.0\Powershell.exe -executionPolicy Unrestricted -File ' + "C:\DomainJoin\ADDCmodule2.ps1")

Rebooting

After all the prerequisites have finished installing on the host, you are ready to reboot it. Rebooting the host makes the installation cleaner and reduces the number of errors that can happen when you are installing the new AD forest.

# Last step is to reboot the local host
Restart-Computer -ComputerName "localhost" -Force

After the reboot, the forest is installed using the RUNONCE script, as shown in the following screenshot. After the forest is installed, the host automatically reboots again.

Final Checks

Check your forest to ensure that everything is correct. Use the Get-ADForest command to get all the basic information that you need to confirm that your domain has been installed correctly. On your next login, change the administrator password with the Set-ADAccountPassword command to ensure the security of your domain.

Final Script

Here is the entire script.
In addition to all the previously discussed parts, you can enable logging in the script so that you can track and troubleshoot the operations.

#ps1_sysnative
Try {
    # Start the logging in the C:\DomainJoin directory
    Start-Transcript -Path "C:\DomainJoin\stage1.txt"

    # Global Variables
    $password="P@ssw0rd123!!"

    # Set the Administrator Password and activate the Domain Admin Account
    net user Administrator $password /logonpasswordchg:no /active:yes

    # Install the Windows features necessary for Active Directory
    # Features
    # - .NET Framework Core
    # - Active Directory Domain Services
    # - Remote Active Directory Services
    # - DNS Services
    Install-WindowsFeature NET-Framework-Core
    Install-WindowsFeature AD-Domain-Services
    Install-WindowsFeature RSAT-ADDS
    Install-WindowsFeature RSAT-DNS-Server

    # Create text block for the new script that will run once on reboot
    $addsmodule02 = @"
#ps1_sysnative
Try {
    Start-Transcript -Path C:\DomainJoin\stage2.txt
    `$password = "P@ssw0rd123!!"
    `$FullDomainName = "cesa.corp"
    `$ShortDomainName = "CESA"
    `$encrypted = ConvertTo-SecureString `$password -AsPlainText -Force
    Import-Module ADDSDeployment
    Install-ADDSForest ``
        -CreateDnsDelegation:`$false ``
        -DatabasePath "C:\Windows\NTDS" ``
        -DomainMode "WinThreshold" ``
        -DomainName `$FullDomainName ``
        -DomainNetbiosName `$ShortDomainName ``
        -ForestMode "WinThreshold" ``
        -InstallDns:`$true ``
        -LogPath "C:\Windows\NTDS" ``
        -NoRebootOnCompletion:`$false ``
        -SysvolPath "C:\Windows\SYSVOL" ``
        -SafeModeAdministratorPassword `$encrypted ``
        -Force:`$true
} Catch {
    Write-Host $_
} Finally {
    Stop-Transcript
}
"@
    Add-Content -Path "C:\DomainJoin\ADDCmodule2.ps1" -Value $addsmodule02

    # Adding the run once job
    $RunOnceKey = "HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce"
    set-itemproperty $RunOnceKey "NextRun" ('C:\Windows\System32\WindowsPowerShell\v1.0\Powershell.exe -executionPolicy Unrestricted -File ' + "C:\DomainJoin\ADDCmodule2.ps1")

    # End the logging
} Catch {
    Write-Host $_
} Finally {
    Stop-Transcript
}

# Last step is to reboot the local host
Restart-Computer -ComputerName "localhost" -Force

Summary

This is a simple script for installing your first AD domain controller. The process involves two reboots, and it takes about 20-25 minutes for all of the Windows features to install. This is just the first step in building a larger domain. The "Creating Your Windows Active Directory Domain Servers in Oracle Cloud Infrastructure" white paper walks you through additional steps to build a resilient AD environment on Oracle Cloud Infrastructure. Be sure to download the white paper, and check out how you can get your free Oracle Cloud Infrastructure trial account.


Enabling NFS-Client on Windows at Instance Launch Time

One benefit of cloud offerings comes from the ability to spin up new resources on demand with minimal effort. Automation tools play a pivotal role in harnessing the power of cloud infrastructure. As announced in July 2018, Oracle Cloud Infrastructure Windows instances can also be configured using cloudbase-init through user data provided at launch time. Check out the Windows Custom Startup Scripts and Cloud-Init on Oracle Cloud Infrastructure post by Andy Corran, which also covers Windows Remote Management (WinRM). Here, we have another example of how you can use a PowerShell script to configure a new instance at launch time.

In January 2018, Oracle Cloud Infrastructure announced the launch of the File Storage service. You can use File Storage to share unstructured files between Windows and Linux-based hosts. The Oracle Cloud Infrastructure File Storage service is an NFSv3 file storage service that can scale to exabytes in size. I use File Storage as the provider of shared files accessed via NFS in the following example. If you would like to know more about our File Storage service, check out the Introducing Oracle Cloud Infrastructure File Storage Service blog post by my colleague Ed Beauvais or the official documentation.

The commands required to enable the Windows NFS-Client come from the official documentation. Because Windows registry keys need to be created, including the required PowerShell commands as user data is a great option. In this example, I use the Oracle Cloud Infrastructure CLI to create my Windows Server 2016 Standard Edition instance and include the PowerShell commands from my local machine. Read about setting up the CLI in the official documentation.

Prepare the input files

Create the PowerShell script. Be sure to include the #ps1_sysnative header so that cloudbase-init interprets the commands correctly. In my example, I named the file enable_nfs.ps1.

#ps1_sysnative

## Timestamp function for logging.
function Get-TimeStamp {
    return "[{0:MM/dd/yy} {0:HH:mm:ss}]" -f (Get-Date)
}

## Create a log file.
$path = $env:SystemRoot + "\Temp\"
$logFile = $path + "CloudInit_$(get-date -f yyyy-MM-dd).log"
New-Item $logFile -ItemType file
Write-Output "$(Get-TimeStamp) Logfile created..." | Out-File -FilePath $logFile -Append

## Install NFS-Client.
Install-WindowsFeature -Name NFS-Client
Write-Output "$(Get-TimeStamp) Installed NFS-Client." | Out-File -FilePath $logFile -Append

## Configure NFS user mapping registry values.
New-ItemProperty -Path "HKLM:\Software\Microsoft\ClientForNFS\CurrentVersion\Default" -Name "AnonymousGid" -Value 0 -PropertyType DWord
New-ItemProperty -Path "HKLM:\Software\Microsoft\ClientForNFS\CurrentVersion\Default" -Name "AnonymousUid" -Value 0 -PropertyType DWord
Write-Output "$(Get-TimeStamp) Created registry keys for NFS root user mapping." | Out-File -FilePath $logFile -Append

## Restart NFS-Client.
nfsadmin client stop
nfsadmin client start
Write-Output "$(Get-TimeStamp) Restarted NFS-Client." | Out-File -FilePath $logFile -Append

Create the instance JSON file. In my example, I named the file c01-win2016std.json. My JSON file contains only the required values. The values for ad, compartmentId, and subnetId are unique to an individual tenancy.

{
  "ad": "<AVAILABILITY_DOMAIN>",
  "compartmentId": "<COMPARTMENT_OCID>",
  "subnetId": "<SUBNET_OCID>",
  "bootVolumeSizeInGbs": 256,
  "displayName": "c01-win01",
  "hostnameLabel": "c01-win01",
  "imageId": "ocid1.image.oc1.phx.aaaaaaaaq3o6o4lwhrna3dlomvo6rgkyqzzcvtkuw7j3u4pf42ucpfmyzfia",
  "shape": "VM.Standard2.1",
  "skipSourceDestCheck": true
}

Launch the instance

Use the Oracle Cloud Infrastructure CLI to launch the instance using the two files created previously as input. Note the OCID of the new instance in the returned JSON object. The next two steps require the OCID of the new instance.
$ oci compute instance launch --from-json file://c01-win2016std.json --user-data-file enable_nfs.ps1

Use the CLI to find the IP address assigned to the primary VNIC.

$ oci compute instance list-vnics --query "data [0].{IP:\"private-ip\"}" --instance-id <OCID_FROM_PREVIOUS_JSON_RESPONSE>

Use the CLI to find the initial password for the opc user.

$ oci compute instance get-windows-initial-creds --query "data.{Password:password}" --instance-id <OCID_FROM_PREVIOUS_JSON_RESPONSE>

Verify the Windows NFS-Client

In my tenancy, I tunnel Windows RDP sessions through SSH to a Linux bastion host. The white paper on Protected Access for Virtual Cloud Networks describes this process in detail.

$ ssh -L 33389:<IP_ADDRESS_OF_NEW_INSTANCE>:3389 opc@<IP_ADDRESS_OF_BASTION>

After logging in to the new Windows instance and changing the initial opc user password, mount the NFS share as you would map any network drive in Windows. Success!

This blog post gives you another example of how cloudbase-init user data can be used to configure a Windows host at launch time. If you do not have an Oracle Cloud Infrastructure account, you can sign up for a free trial and evaluate the File Storage service for yourself. The Oracle Cloud Infrastructure Solution Architect team is working on a few other Windows-related publications. Keep an eye out for more blog posts and white papers from the team.
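The instance JSON file above is simple enough to template if you launch many instances. As a sketch (the placeholder OCIDs are stand-ins you would replace with your tenancy's values), it can be generated like this:

```python
import json

def build_launch_config(ad, compartment_id, subnet_id, name, image_id):
    """Assemble the minimal launch payload used with
    `oci compute instance launch --from-json`."""
    return {
        "ad": ad,
        "compartmentId": compartment_id,
        "subnetId": subnet_id,
        "bootVolumeSizeInGbs": 256,
        "displayName": name,
        "hostnameLabel": name,
        "imageId": image_id,
        "shape": "VM.Standard2.1",
        "skipSourceDestCheck": True,
    }

# Placeholder values: substitute your own availability domain and OCIDs.
config = build_launch_config(
    "<AVAILABILITY_DOMAIN>", "<COMPARTMENT_OCID>", "<SUBNET_OCID>",
    "c01-win01", "<WINDOWS_IMAGE_OCID>",
)
with open("c01-win2016std.json", "w") as f:
    json.dump(config, f, indent=2)
```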


Strategy

Five Big Reasons to Run Oracle Database on Oracle Cloud Infrastructure

Oracle built its reputation on the power of the world's first commercially available relational database technology. Today, Oracle Database technologies are the number-one choice for enterprise database workloads, and Oracle Cloud Infrastructure is the best public cloud for running Oracle Database offerings. That's partly because the underlying architecture of Oracle Cloud Infrastructure was designed with the optimal performance of Oracle Database in mind. But it also has to do with the powerful capabilities built directly into our database technology. Here are five important reasons why Oracle Cloud Infrastructure is the right choice for companies that want to run Oracle Database technologies in the cloud.

Superior Database and Application Performance

The networking architecture of Oracle Cloud Infrastructure is designed to support optimal performance of Oracle Database and the applications that depend on it. We accomplish this by using direct, point-to-point connections between compute and database instances running within Oracle Cloud Infrastructure. These point-to-point connections translate to low latency and superior application performance. In addition to superior performance, users of Oracle Database on Oracle Cloud Infrastructure also benefit from predictable performance. They get dedicated servers with a defined shape and networking architecture, so they always know what they're getting in terms of performance. This approach also ensures that "noisy neighbors" don't negatively impact performance.

Tight Security, Private Networking

Oracle Database technologies on Oracle Cloud Infrastructure provide unmatched levels of privacy and security because database systems are deployed into a virtual cloud network (VCN) by default. A VCN is a customizable and completely private network that gives users full control over the networking environment.
Users can assign their own private IP address space, create subnets, create route tables, and configure stateful firewalls. Users can also configure inbound and outbound security lists to protect against malicious users accessing DB systems. A single tenant can have multiple VCNs, and databases can be deployed into private subnets with no public IP address, said Sebastian Solbach, an Oracle Database expert and technical consultant with Oracle Cloud Infrastructure. "When you spin up a database instance in most clouds today, it will take a maximum of five minutes before the first hacker tries to access your system if it has a public IP address," Solbach said. "Oracle Cloud Infrastructure allows databases to run in a totally private subnet without any public networking access to these database instances."

Real Application Clusters

Oracle is the only cloud provider that offers the power of Oracle Real Application Clusters (RAC). Oracle RAC gives users the highest level of database availability by removing individual database servers as a single point of failure. In a clustered environment, the database itself is shared across a pool of servers, which means that if any server in the pool fails, the database continues to run on the surviving servers. In Oracle Cloud Infrastructure, users run Oracle RAC on virtual machines (VMs) within a VCN. "Oracle RAC requires two things, and the first thing is shared storage. Oracle is the only cloud provider that officially offers shared block storage between instances in the cloud," Solbach said. "Oracle RAC also has networking requirements that can only be met by Oracle Cloud Infrastructure today."

Exadata

Oracle Database Exadata Cloud Service combines the best database with the best cloud platform. Exadata has proven itself at thousands of mission-critical deployments at leading banks, airlines, telecommunications companies, stock exchanges, government agencies, and ecommerce sites.
Oracle Database Exadata Cloud Machines are optimized for performance and run in the same VCN as your VMs and bare metal server instances. Exadata supports Oracle RAC for highly available databases. Users can also configure Exadata in multiple availability domains with Oracle Active Data Guard for even higher availability. Oracle Active Data Guard eliminates single points of failure by maintaining synchronized physical replicas of production databases at a remote location. "Oracle Exadata is our most powerful database server offering," Solbach said. "Oracle Exadata is the place to run data warehouse workloads. It's also a very robust platform for running OLTP (online transaction processing) workloads."

One Point of Contact

Fewer phone calls. Less hassle. Top-notch support. Users who run Oracle Database technologies in Oracle Cloud Infrastructure save time and increase efficiency by having a single point of contact for support when questions or problems arise. "If you're running in another cloud provider and you experience an issue with Oracle Database, you have to contact Oracle. If you have a problem with the underlying infrastructure, you have to contact that cloud provider," Solbach said. "With Oracle Database on Oracle Cloud Infrastructure, you get a single point of contact who understands Oracle technologies better than anyone else."

Learn more about Oracle Database cloud services today.


Oracle Cloud Infrastructure

Customer Managed VM Maintenance with Reboot Migration

One of the key benefits of moving workloads to the cloud is the ability to rely on cloud providers to maintain the underlying infrastructure. This helps you focus more resources on your specific business solutions. Occasionally, however, a maintenance event might affect the availability of your cloud resources, so it’s important that you are not only informed of scheduled downtime, but also able to prepare for it according to your own business needs. With the Oracle Cloud Infrastructure Compute service, you can control the downtime associated with those rare hardware maintenance events by using Customer Managed VM Maintenance. When a maintenance event is planned for an underlying system that supports one of your VMs, you can migrate that VM to other cloud infrastructure by rebooting the instance any time before the scheduled maintenance event.

How It Works

Approximately two weeks before an actual maintenance event is scheduled, you receive an email notification from Oracle Cloud Infrastructure. It includes the date and duration of the downtime, a list of instances that are affected by the event, and clear instructions on how to move the instances to different infrastructure at any time before the maintenance date. It is a good idea to set up the Oracle Cloud Infrastructure account with an email alias that ensures this notification email reaches the right inboxes. The same information is also available in the Oracle Cloud Infrastructure Console: any VM instance affected by the scheduled downtime is marked with the Maintenance Reboot flag. If you choose to ignore this information, the instances go through the planned maintenance process, including a reboot. However, you can improve the availability of your services by rebooting the instance at a more convenient time before the scheduled maintenance. You can perform the reboot through the Console, the API, or the CLI.
When the instance restarts, it’s running on other infrastructure in the cloud, and the maintenance flag is cleared.

Checking for Scheduled Maintenance

As part of your operational policies, you might want to regularly check for all instances that require a maintenance reboot. In the Console, you can use the following predefined query in the Advanced Search: "Query for all instances which have an upcoming scheduled maintenance reboot". The Console displays resources from a single region at a time, so you must run the query in each region separately. In the API or CLI, you can filter flagged instances by using the timeMaintenanceRebootDue property. You can use a script to list all such instances across all enabled regions of a tenancy. The script can be scheduled to run daily to ensure that you have enough time to act on any flagged instances, even in emergency situations.

Considerations

This feature is currently limited to Standard VM shapes running a Linux OS, from either Oracle images or custom images. Any instances that have non-iSCSI block volume attachments or secondary VNICs require the block volumes and secondary VNICs to be detached before the instance is rebooted. After the reboot, these can be reattached to resume normal operations. In a future phase, the feature will support all VM shapes as well as instances that have non-iSCSI block volumes and secondary VNICs attached.

Conclusion

The Customer Managed VM Maintenance feature gives you control over the downtime of VM instances running on infrastructure that requires maintenance. Once these instances are identified (either by email notification or by actively running a script), they can be migrated to new infrastructure by performing a reboot at a time that is convenient for you and your applications.
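The daily check described above amounts to filtering instance records on the timeMaintenanceRebootDue property. A minimal sketch, assuming the instance data has already been fetched from the API as JSON (the sample records below are made up for illustration):

```python
def maintenance_pending(instances):
    """Return instances whose timeMaintenanceRebootDue is set,
    i.e. instances flagged for an upcoming maintenance reboot."""
    return [
        inst for inst in instances
        if inst.get("timeMaintenanceRebootDue") is not None
    ]

# Hypothetical records, shaped like API list-instances output.
instances = [
    {"displayName": "web-1", "timeMaintenanceRebootDue": "2018-12-20T08:00:00Z"},
    {"displayName": "web-2", "timeMaintenanceRebootDue": None},
]
for inst in maintenance_pending(instances):
    print(inst["displayName"], "due", inst["timeMaintenanceRebootDue"])
```

A real script would loop over each enabled region and compartment before applying this filter, and could run daily from a scheduler such as cron.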


Using Availability Domains and Fault Domains to Improve Application Resiliency

The unfortunate truth about technology is that hardware requires maintenance and hardware failures do occur. Cloud resources are affected by the same hardware-related maintenance as traditional on-premises resources. In August 2018, Oracle Cloud Infrastructure introduced fault domains for virtual machine and bare metal Compute instances. Fault domains help applications withstand hardware failures and planned hardware maintenance within an availability domain.

An Oracle Cloud Infrastructure availability domain is one or more data centers located within a region. Availability domains are isolated from each other, fault tolerant, and very unlikely to fail simultaneously. Because availability domains do not share infrastructure such as power, cooling, or the internal availability domain network, a failure in one availability domain is unlikely to affect the availability of the others. All the availability domains in a region are connected to each other by a low-latency, high-bandwidth network.

An Oracle Cloud Infrastructure fault domain is a grouping of hardware and infrastructure within an availability domain. Each availability domain contains three fault domains. Fault domains let you distribute your instances so that they are not on the same physical hardware within a single availability domain. A hardware failure or Compute hardware maintenance event that affects one fault domain does not affect instances in other fault domains.

Sanjay Pillai posted an excellent overview of the theory behind Oracle Cloud Infrastructure fault domains and how to place Compute instances in them. When you deploy an application to cloud infrastructure, the application's architecture and its affinity or anti-affinity requirements determine how to distribute instances across fault domains.
The Best Practices for Your Compute Instances topic in the Oracle Cloud Infrastructure documentation explains fault domains with a couple of application scenarios. Deploying cloud services to multiple availability domains is the preferred method of ensuring high availability. When your application architecture requires components to be in the same availability domain, choosing the proper fault domain for application components can help protect against resource failures. The following sections analyze the published architectures for JD Edwards EnterpriseOne (JDE) and Oracle E-Business Suite (EBS) and how availability domains and fault domains support each application. Links to the Oracle solution documentation for EBS, JDE, and Siebel CRM appear at the end of this post.

Multiple Availability Domains

When you use multiple availability domains for an application stack, placing each availability domain's instances in a single fault domain gives them the proper affinity to each other. In the following JD Edwards EnterpriseOne example, geographically separated availability domains provide primary application redundancy, and fault domains are assigned differently than in the Oracle E-Business Suite example later in this post. Incoming connections to JD Edwards EnterpriseOne are routed to the load balancers in both availability domains via DNS that is external to Oracle Cloud Infrastructure. The load balancers route traffic to the application instances based on the configured distribution policy. All hosts in the presentation and middle tiers belong to FAULT-DOMAIN-1. Because fault domains are specific to an availability domain, FAULT-DOMAIN-1 in Availability Domain 1 is a different set of hardware than FAULT-DOMAIN-1 in Availability Domain 2, despite having the same name.
Because all of the hosts in each availability domain are in the same fault domain, hardware failures or planned maintenance in a geographic region only minimally affect each availability domain. Hardware events that affect hosts in Availability Domain 2 do not affect hosts in Availability Domain 1. Placing all the hosts in the same fault domain ensures that the required infrastructure maintenance activities minimally affect the application stack. Diagram from Learn about Deploying JD Edwards EnterpriseOne on Oracle Cloud Infrastructure

One Availability Domain

When you deploy an application stack in a single availability domain, distributing instances across multiple fault domains gives them the proper anti-affinity to each other. In the following Oracle E-Business Suite example, fault domains ensure that end users have access during hardware failures or planned infrastructure maintenance. Incoming connections to Oracle E-Business Suite are routed to the servers in the application pool via the load balancer. In the application tier, Host 1 is in FAULT-DOMAIN-1 and Host 2 is in FAULT-DOMAIN-2. If a hardware failure affects Host 1, Oracle E-Business Suite remains accessible through Host 2. If Host 1 and Host 2 were in the same fault domain, Oracle E-Business Suite would likely be inaccessible to end users during such a failure. In the case of Oracle Cloud Infrastructure hardware maintenance, fault domain maintenance windows do not overlap. Diagram from Learn About Deploying Oracle E-Business Suite on Oracle Cloud Infrastructure

If you want to learn more about using fault domains in your application, the following solution documentation links provide additional scenarios and details. Carefully selecting the proper fault domain for Compute instances in Oracle Cloud Infrastructure helps ensure that hardware failures and scheduled maintenance activities do not unexpectedly affect your end users.
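The two patterns above reduce to a simple placement rule that you specify at instance launch (the Compute LaunchInstanceDetails object accepts a fault domain for the instance): pin every host of a tier to one fault domain for affinity, or spread hosts round-robin across the three fault domains for anti-affinity. Here is a minimal sketch of that rule; the helper and its names are illustrative, not an Oracle API.

```python
FAULT_DOMAINS = ["FAULT-DOMAIN-1", "FAULT-DOMAIN-2", "FAULT-DOMAIN-3"]

def place_hosts(hosts, anti_affinity=True):
    """Map each host name to a fault domain within one availability domain.

    anti_affinity=True  -> spread hosts round-robin (single-AD EBS pattern)
    anti_affinity=False -> pin all hosts to one fault domain (multi-AD JDE pattern)
    """
    if anti_affinity:
        return {h: FAULT_DOMAINS[i % len(FAULT_DOMAINS)] for i, h in enumerate(hosts)}
    return {h: FAULT_DOMAINS[0] for h in hosts}
```

For the EBS example, `place_hosts(["host1", "host2"])` puts the two application hosts in different fault domains; for the JDE example, `place_hosts(hosts, anti_affinity=False)` keeps each availability domain's tier together.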
If you want another perspective, search the web for an article written by my colleague, Luke Feldman, about why fault domains are so crucial in Oracle Cloud Infrastructure. Finally, if you haven't already signed up for a free trial of Oracle Cloud Infrastructure, you can do so now. Let us provide the cloud so that you can build the future.

Solutions Design Resources

Learn About Deploying Oracle E-Business Suite on Oracle Cloud Infrastructure
Learn About Deploying Siebel CRM on Oracle Cloud Infrastructure
Learn about Deploying JD Edwards EnterpriseOne on Oracle Cloud Infrastructure


Strategy

Rethinking How to Build a Cloud

Oracle Cloud Infrastructure has faced many challenges with its late entry into infrastructure as a service (IaaS), but that late entry has also come with a significant benefit: we've been able to hire the best and brightest people from the market leaders. Most of the people building Oracle's cloud have worked on at least one other cloud. Their experiences have guided us as we've purpose-built the industry's first truly enterprise-grade cloud. Let's take a look at our approach and how it has evolved.

In the Beginning

Our initial goal was to create a high-scale cloud that would be a better fit for enterprise workloads such as Oracle applications and databases. Many Oracle customers wanted to move these mission-critical workloads to the cloud, but they found that it wasn't easy. And in some cases, it was downright impossible. To address their needs, the Oracle Cloud had to be a virtual infrastructure that looked like their on-premises environments, but it also needed the scalability of a public cloud. That combination would make it possible to more easily migrate not just applications but entire on-premises systems—virtualization, storage, and management and security software—to the cloud. The ability to move and improve these systems is one of our most important differentiators in the market. A lot of hard work and ingenuity went into making it happen. Oracle Cloud Infrastructure engineers spent the first year or so designing and building the foundation, which included data centers, automation, storage, and networking—plus the tools to build on top of that foundation. They took a fresh perspective on what they had learned from working at companies such as Amazon, Google, Microsoft, and Facebook, and they rethought how to build a cloud. The three guiding principles of this approach, which we still follow today, are:

- We offer the same, and oftentimes better, performance than on-premises environments and other clouds, backed by performance service-level agreements.
- We offer predictable pricing with a focus on long-term total cost of ownership.
- We make migrations from on-premises environments to our cloud more seamless and secure.

The next step was to build out the core infrastructure pieces and keep iterating. The first Oracle Cloud Infrastructure offering was bare metal compute, then came virtual machine instances, followed by integrated database services, and we're still adding new services today.

To Infrastructure and Beyond!

With everything we do, we're trying to answer this critical question: how do we support and improve our customers' mission-critical systems? A current area of focus is security and compliance. Oracle Cloud Infrastructure holds a PCI DSS Attestation of Compliance for more than a dozen services, and an attestation for HIPAA's requirements around security, breach notification, and privacy. We also recently announced a web application firewall, DDoS protection, cloud access security broker support, and a key management service for increased cloud security. Additionally, Oracle is making advancements in next-generation cloud infrastructure. We've announced a streaming service to receive, process, and archive all infrastructure and platform events in near real time. And we're the first public cloud with generally available compute instances powered by AMD EPYC processors. These enable us to offer 64 cores per server—more than any other cloud in production—which is a great fit for database, big data analytics, and high-performance computing (HPC) workloads.


Strategy

Rapid Global Expansion for Oracle Cloud Infrastructure

At OpenWorld this year, Oracle described a bold plan to extend global coverage of our next-generation, enterprise-centric cloud infrastructure platform. This cloud is designed to meet the needs of our core customers, with consistent high performance, optimization around Oracle databases and applications, and broad support for the demanding, data-centric workloads our customers run. Cloud has become the dominant technology approach worldwide, allowing organizations to stop wasting effort on data center management, refreshes, and system upgrades. Most enterprises have a strong imperative to get their workloads onto the cloud. But the first generation of cloud platforms was built to meet the needs of developers, with variable performance and a heavy reliance on oversubscription and shared tenancies. These platforms offer low cost to get started but high and unpredictable cost in the long run. They focus on proprietary services that require painful transitions to run existing applications in the cloud, and they allow little ability to move elsewhere. Oracle is trusted around the world for solving organizations’ biggest technology problems, especially around data. When we reimagined a cloud to meet the needs of this category of workloads, we got it up and running quickly with coverage in the US and Europe. But we knew that we needed to extend this footprint globally to meet the latency and data residency requirements of multinational organizations, as well as those based outside of our initial locations. To this end, we developed an aggressive plan to increase our footprint of next-generation cloud data regions and extend coverage to the majority of our customer base by the end of 2019.
We plan to add a region in Toronto, Canada, at the beginning of the year, and we'll open more new regions over the course of the year in Australia, Europe, Japan, South Korea, India, Brazil, and the United States, including Virginia, Arizona, and Illinois to support public sector and Department of Defense customers. We’re excited to bring our unique next-generation enterprise cloud to the world. Customers with demanding workloads will benefit from the consistent high performance, low predictable pricing, and the compatibility and portability that we bring to the table. Come talk to us about how we can help your enterprise IT environment get better results, with less time wasted on remedial infrastructure management, by running key workloads on Oracle Cloud Infrastructure.


Events

Top Five Reasons to Attend Oracle Cloud Day

I work with senior technology leaders who continuously tell me that it’s hard to keep up with—and secure—all of the new technologies that are coming online: AI, bots, IoT, blockchain, and so on. Oracle is hosting Oracle Cloud Day to help you stay up-to-date on these technologies and learn how to best use them. At Oracle Cloud Day events, which take place in several cities across North America, you’ll hear from companies that have deployed successful use cases with these new technologies. Here are five things that you can do at Oracle Cloud Day that make me really excited about this year's events:

- Find out how to solve your current IT challenges: Learn best practices for running mission-critical apps in the cloud, including tips for meeting high-performance and reliability requirements.
- Learn from companies that have already had success: Hear Oracle customers such as CMiC, OUTFRONT Media, Alliance Data Systems, and QMP share best practices and tell their cloud success stories.
- Attend sessions and hands-on labs with tracks geared specifically for IT experts, architects, integrators, and data professionals: Sessions are tailored with your role in mind, so you can focus on what really matters to you.
- Try the latest technologies for yourself: Technology experts from Oracle and our partners will show you how to use emerging technologies such as AI, machine learning, and blockchain to build smart applications, machines, and systems.
- Talk to people who share your interests, challenges, and expertise: The Innovation Lounge is the very heart of Oracle Cloud Day. It's designed to give you an opportunity to connect with your peers, see expert demonstrations, visit with our partners, and recharge with terrific food and drink.

How could Oracle Cloud Day best help you? Share your thoughts in the comments below.


SAP Highlights at Oracle OpenWorld 2018

It was a very exciting time at Oracle OpenWorld this year! There were lots of great keynotes regarding further advancements in cloud technologies and database, and packed sessions with a ton of new announcements about Oracle Cloud services in general. Now is a great time to recap some of the major SAP highlights that we presented at Oracle's main conference in October.  Oracle provides robust, scalable, and reliable infrastructure for SAP applications running in demanding environments around the world. SAP and Oracle have worked closely to optimize Oracle technologies with SAP applications to give customers the best possible experience and performance. The certification of SAP Business Applications on Oracle Cloud Infrastructure furthers this long-standing partnership.   Oracle and SAP certify and support SAP NetWeaver applications on Oracle Cloud Infrastructure, making it easy for organizations to move Oracle-based SAP applications to the cloud. Oracle Cloud Infrastructure enables customers to run the same Oracle Database and SAP applications as they have done previously in their own data centers, preserving existing investments while reducing costs and improving agility. For additional details, click here. SAP NetWeaver on Oracle Cloud Infrastructure has been available on bare metal shapes since 2017, and in 2018 we announced support for virtual machine shapes. Having all these options available for SAP workloads on Oracle Cloud Infrastructure made it easier for us to continue the strong momentum of customers migrating their SAP applications to the Oracle Cloud. I'd like to share some highlights of successful SAP migration stories leveraging these recent service enhancements that our partners, SAP and Oracle engineering teams presented at OpenWorld. Partner Highlights Cintra, an Oracle Platinum partner, has been in business for over 20 years, working with big names in financial services, retail, aviation, healthcare, and gaming. 
During their OpenWorld session, SAP on Oracle Cloud Infrastructure and Cintra deliver extreme performance for top retailer [CAS4027], they highlighted an SAP use case that they successfully migrated from on-premises to Oracle Cloud Infrastructure and Oracle Database Exadata Cloud Service, along with insightful details about that process. The customer was a leading retailer whose on-premises architecture supporting their SAP applications - including SAP ERP, ERP Central Component (ECC), Business Warehouse, Solution Manager, and Enterprise Portal - was nearing its end of life. Like many customers who look to the cloud, they wanted to take advantage of continuously modern infrastructure and to replace their traditional CAPEX model with an OPEX one. However, they also required the utmost reliability and the highest performance for the database workloads that were supporting these applications. Based on the performance requirements, Cintra recommended running SAP on Oracle Cloud Infrastructure because it is the only IaaS provider to support Exadata Cloud Service and RAC. What's more, Oracle and SAP have jointly tested and certified Oracle Engineered Systems like Exadata for SAP, ensuring a faster time to deployment. And customers who are running their SAP applications on Exadata on-premises can move to the cloud with 100% compatibility while taking advantage of Oracle's Bring Your Own License (BYOL) program. Cintra leveraged their RapidCloud Transformation Methodology and a strong collaboration with Oracle's SAP Cloud Platform team to execute an aggressive cloud deployment timeline, and they successfully migrated the customer's production implementation of SAP onto Oracle Cloud Infrastructure. In the end, the customer realized performance and business continuity improvements and eliminated $2.5M in on-premises technology refresh costs.
Abdul Sheikh, CTO at Cintra, said: “Having delivered the world’s first production SAP to Oracle Cloud Infrastructure, Cintra was delighted to present the details of our successful SAP cloud transformation project at OpenWorld 2018. It was a tremendous conference, and off the back of our involvement, we’re talking to a number of organizations who are keen to move their on-premises SAP platforms onto an enterprise-grade cloud. We’re excited to be working with the Oracle SAP and Oracle Cloud Infrastructure teams to realize these customers’ visions.” SAP and Oracle Engineering Highlights Valuable sessions like SAP on Oracle: Development Update [PRO4405] - given by Christian Graf, Manager / Supervisor, SAP SE; Gerhard Kuppler, Oracle VP, SAP Alliances, Oracle; and Jan Klokkers, Senior Director SAP Development, Oracle - provided a roadmap for our joint customers, explaining what's currently available and what's lined up in the coming months.  Here's a view of what's currently available to support SAP on Oracle Cloud Infrastructure. To keep up to date on the latest SAP on Oracle news, product certifications and best practices, I would encourage you to bookmark these resources: SAP on Oracle Community Follow SAP on Oracle on Twitter Another successful Oracle OpenWorld has finished strong. Over 60,000 customers and partners from 175 countries gathered in San Francisco, where they had a chance to see and hear all the great things that we have delivered and have planned for the next months. In Oracle Cloud Infrastructure sessions at Oracle OpenWorld, attendees learned why Oracle Cloud Infrastructure is different from other clouds—from its standout performance, to its commitment to openness, to its focus on security—and how it enables organizations to succeed. 


Developer Tools

Apache Spark on Kubernetes: Maximizing Big Data Performance on Container Engine for Kubernetes

In this post, I demonstrate how you can quickly create a Kubernetes cluster on Oracle Cloud Infrastructure by using the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) service. Then I show the significant performance boost you can achieve by running applications on Container Engine for Kubernetes bare metal instances, comparing the same workload running on Container Engine for Kubernetes virtual machine (VM) instances. Oracle Cloud Infrastructure is currently the only public cloud that offers the capability to run bare metal Kubernetes clusters. Running Kubernetes and containers on bare metal machines shows a 25 to 30 percent performance improvement for both CPU and IO operations, compared to running them on VMs, which makes bare metal well suited for running Big Data applications.

Deployment Architecture

At a high level, the deployment looks as follows:

- Deploy a highly available Kubernetes cluster across three availability domains.
- Deploy two node pools in this cluster, across three availability domains. One node pool consists of VMStandard1.4 shape nodes, and the other has BMStandard2.52 shape nodes.
- Deploy Apache Spark pods on each node pool.

Deployment Steps

Perform the following steps to set up and test the deployment.

Step 1: Deploy a Three-Node VMStandard1.4 Shape Kubernetes Cluster

Create a Kubernetes cluster on Oracle Cloud Infrastructure using Container Engine for Kubernetes, following the steps outlined in this tutorial. This step involves creating the necessary virtual cloud network (VCN), subnets, security list rules, and Identity and Access Management (IAM) policies. After creating the cluster, deploy a three-node VMStandard1.4 shape node pool on the cluster, with one node per subnet in each availability domain. Download the kubeconfig file from the Oracle Cloud Infrastructure Console to your local environment, and have kubectl installed.
You need this to create Spark pods and to work with your Kubernetes environment. The cluster should look similar to the following example:

Step 2: Deploy Apache Spark and Apache Zeppelin Pods on the Node Pool

On the node pool that you just created, deploy one replica of the Spark master, one replica of the Spark UI-proxy controller, one replica of Apache Zeppelin, and three replicas of Spark worker pods. You will use Apache Zeppelin to run Spark computations on the Spark pods. To create the Spark pods, follow the steps outlined in this GitHub repo. The spark-master-controller.yaml and spark-worker-controller.yaml files are the Kubernetes manifest files for deploying the Spark master and worker controllers, and the spark-master-service.yaml file exposes the master as a Kubernetes service. Similarly, the zeppelin-controller.yaml and zeppelin-service.yaml manifest files deploy the Zeppelin pod and expose Zeppelin as a service.

Note: The CPU and memory available for the entire cluster are 24 vCPUs (8 vCPUs per node) and 84 GB of memory (28 GB per node). If you look at the manifest files, you will observe that we are assigning 1 vCPU and 1000 MiB of memory per Spark worker pod, and 200m vCPU and 100 MiB of memory each for the Spark master, UI proxy, and Zeppelin pods. We use the same allocation of memory and CPU per pod throughout this post. The deployment so far should look as follows.

The process works as follows:

1. Connect to Apache Zeppelin’s UI and trigger a Spark computation, which in turn interacts with the cluster's hosted Container Engine for Kubernetes master.
2. The Container Engine for Kubernetes master sends the request to the node that contains the Spark master.
3. The Spark master delegates the scheduling back to the Kubernetes master to run the Spark jobs on the Spark worker pods.
4. The Kubernetes master schedules the Spark jobs on the Spark worker pods.
5. The Spark worker and master pods interact with one another to perform the Spark computation.
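In step 3, you'll paste a PySpark snippet into Zeppelin that counts primes by trial division. If you want to sanity-check that logic locally before involving the cluster, here is a plain Python 3 translation of the same filter (my own translation; the notebook snippet itself is Python 2 PySpark and requires the Spark context):

```python
from math import isqrt

def isprime(n):
    # Trial division up to the integer square root, like the notebook snippet
    return n > 1 and all(n % i for i in range(2, isqrt(n) + 1))

def count_primes(limit):
    # Number of primes in [0, limit), matching nums.filter(isprime).count()
    return sum(1 for n in range(limit) if isprime(n))

print(count_primes(100))  # 25 primes below 100
```

Running the full 100-million range this way would be slow on a single machine, which is exactly what the distributed Spark version addresses.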
In the next step, you initiate the Spark computation by using Zeppelin.

Step 3: Initiate the Spark Computation to Measure the Performance of the Cluster

At the end of step 2, you took the Zeppelin pod and port-forwarded the WebUI port as follows:

kubectl port-forward zeppelin-controller-ja9s 8080:8080

After you load the Zeppelin UI, create a new notebook. In it, paste the following Python code, which runs a Spark computation that counts the prime numbers in the set of natural numbers from 0 to 100 million. You need to add a %pyspark hint for Zeppelin to interpret it:

%pyspark
from math import sqrt; from itertools import count, islice

def isprime(n):
    return n > 1 and all(n%i for i in islice(count(2), int(sqrt(n)-1)))

nums = sc.parallelize(xrange(100000000))
print nums.filter(isprime).count()

After pasting the code, press Shift+Enter or click the play icon to the right of the snippet. The Spark job runs and the result is displayed. Observe that it takes about 387 seconds to complete this task (completion times may vary).

Step 4: Scale the Replicas of Spark Worker Pods and Measure the Performance Again

Use the following command to scale the replicas of Spark worker pods to six, using the same allocation of vCPU and memory per pod as described in step 2:

kubectl scale --replicas=6 rc/spark-worker-controller

You can check the CPU and memory allocation of the cluster by using kubectl describe nodes; CPU Requests and Memory Requests indicate the allocated values:

kubectl describe nodes | grep -A 2 -e "^\s*CPU Requests"

Now run the Spark computation again on the Zeppelin UI, on the newly scaled six-replica Spark worker cluster, and measure the performance. Notice that the computation completes faster because the Spark jobs are distributed across more worker pods. Lastly, scale the worker pods to 20 replicas and test the performance of the cluster again. Notice that the performance actually deteriorates because of excessive scaling of worker pods.
This time it takes 371 seconds to complete. The “Inferences” section explains why this pattern occurs.

Step 5: Deploy a Three-Node BMStandard2.52 Node Pool in the Same Cluster

Deploy a three-node BMStandard2.52 shape node pool on the same cluster by clicking the Add Node Pool button in the Node Pools section of the Oracle Cloud Infrastructure Console.

Step 6: Repeat Steps 2, 3, and 4

Repeat steps 2, 3, and 4, deploying Apache Spark and Zeppelin and running the performance tests on the new node pool. In Container Engine for Kubernetes, each node pool has a unique Kubernetes label assigned. Use these labels to apply node affinity so that the Spark jobs are scheduled on the BMStandard2.52 shape node pool rather than on the VMStandard1.4 shape node pool.

Note: Use the same vCPU and memory allocation for pods in the bare metal node pool, which ensures that the performance comparison is consistent across both node pools.

The Spark computation on three replica Spark worker pods takes just 74 seconds to complete. The Spark computation on six replica Spark worker pods takes 39 seconds to complete. Finally, the Spark computation on a 20-replica Spark worker pod cluster takes 30 seconds to complete. The Spark jobs on the BMStandard2.52 bare metal node pool finish 5 to 10 times faster than the same workloads running on the VMStandard1.4 node pool, although both node pools have the same vCPU and memory allocation for the Spark and Zeppelin pods. The following section discusses the reasons for the performance differences.

Inferences

With no hypervisor overhead, containers on bare metal perform up to 30 percent better than those on VMs, which makes them well suited for running performance-intensive workloads like Big Data and HPC jobs. Bare metal instances also offer higher packing density for containers, which improves resource use and minimizes network traversal for intercontainer communication.
For a massively parallel and distributed computation like Spark or Hadoop, minimizing the network traversal for intercontainer communication results in a significant performance gain. Lastly, bare metal instances come with two extremely fast NICs, each offering 25 Gbps of raw bandwidth. The bandwidth on VM shapes scales with the size of the VM; a VMStandard1.4 shape offers 1.2 Gbps of network bandwidth. As a result, bare metal instances are better suited for applications that require high network throughput.

Conclusion

This post demonstrated how to quickly deploy a highly available, multiple-availability-domain Kubernetes cluster on Oracle Cloud Infrastructure, how to run VM and bare metal instances in the same cluster as separate node pools, and the significant performance benefits that you can achieve by running Big Data and HPC applications on bare metal Kubernetes clusters. Container Engine for Kubernetes is the only managed Kubernetes offering in the public cloud space that lets you create a node pool of bare metal instance shapes in a Kubernetes cluster. As shown in this post, you can create independent node pools of VM shapes and bare metal shapes in the same Kubernetes cluster, and use Kubernetes labels to intelligently route high-performance computation workloads to bare metal node pools and the rest to VM node pools. Having this flexibility is extremely useful, as shown in the following figure.

References

Overview of Container Engine for Kubernetes
GitHub code for deploying Apache Spark and Zeppelin on Kubernetes
Tutorial for creating a Kubernetes cluster using Container Engine for Kubernetes

-Abhiram Annangi
Twitter | LinkedIn


Product News

Cross-Region Block Volume Backups for Business Continuity, Migration, and Expansion

We are excited to announce that you can now copy your block volume backups between Oracle Cloud Infrastructure regions. This new capability is part of our continual investment to provide customers with comprehensive application and data protection solutions in the cloud. Cross-region backup copy enhances the following capabilities:

- Disaster recovery and business continuity: By copying block volume backups to another region at a regular interval, you can rebuild applications and data in another region if a region-wide disaster occurs.
- Migration and expansion: You can easily migrate and expand your applications to another region.

This capability is provided at no additional cost to Oracle Cloud Infrastructure customers beyond the cost of the block storage, object storage, and outbound data transfer consumed by the remote copy, as listed on the Oracle Cloud Infrastructure pricing page. It is available in the Oracle Cloud Infrastructure Console, API, CLI, SDK, and Terraform. Copying a block volume backup to another region is straightforward in the Console. The following steps show how to copy a backup across regions and how to restore volume content from a backup in another region:

1. In the Block Storage section of the Console, access the block volume backups in the appropriate compartment.
2. From the action menu (...) for the backup that you want to copy to another region, select Copy to Another Region.
3. Specify a name for the backup and the destination region, and click Copy Block Volume Backup.
4. Confirm the backup copy settings.
5. In the Console, go to the destination region and verify that the backup is available in that region.

Now you can restore from the backup in the destination region by creating a new volume from it:

1. From the action menu (...) for the backup in the destination region, select Create Block Volume.
2. Enter a name for the restored volume, provide the necessary parameters, and then click Create Block Volume.
In the Block Volumes section in the destination region, verify that the restored volume is available.

We want you to experience these new features and all the enterprise-grade capabilities that Oracle Cloud Infrastructure offers. It’s easy to try them with our $300 free credit. For more information, see the Oracle Cloud Infrastructure Getting Started guide, Block Volume service overview, Block Volume documentation, and FAQ.

Watch for announcements about additional features and capabilities in this space. Features such as policy-based, scheduled copying of backups across regions are on our near-term road map. We value your feedback as we continue to enhance our offering and make our service the best in the industry. Let us know how we can continue to improve or if you want more information about any topic.

Max Verun


Deploying Confluent Platform Using Helm Charts on Oracle Kubernetes Engine

Hello, my name is Pinkesh Valdria, and I'm a Solutions Architect working on Big Data for Oracle Cloud Infrastructure. This post is a follow-up to our post about deploying Confluent on Oracle Cloud Infrastructure Compute instances. Now you can use Terraform automation to deploy Confluent Platform using Helm charts on Oracle Cloud Infrastructure Container Engine for Kubernetes.

Oracle Cloud Infrastructure Container Engine for Kubernetes

Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. Use Container Engine for Kubernetes when your development team wants to reliably build, deploy, and manage cloud-native applications. You specify the compute resources that your applications require, and Container Engine for Kubernetes provisions them on Oracle Cloud Infrastructure in an existing tenancy.

Figure 1: OKE High-Level Architecture

Confluent Open Source Provides a More Complete Distribution of Apache Kafka

Confluent Open Source brings together the best distributed streaming technology from Apache Kafka and takes it to the next level by addressing the requirements of modern enterprise streaming applications. It includes clients for the C, C++, Python, and Go programming languages; connectors for JDBC, Elasticsearch, and HDFS; Confluent Schema Registry for managing metadata for Kafka topics; and Confluent REST Proxy for integrating with web applications.

Figure 2: Confluent Platform Components

Helm

Helm is an open source packaging tool that helps you install applications and services on Kubernetes. Helm uses a packaging format called charts, which are a collection of YAML templates that describe a related set of Kubernetes resources.
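As an illustration of how chart values drive such a deployment, a values override might look like the following sketch. The key names assume the layout of the confluentinc/cp-helm-charts umbrella chart; check the chart's default values.yaml for the exact keys before relying on them.

```yaml
# values-override.yaml -- illustrative only; key names assume the
# confluentinc/cp-helm-charts layout.
cp-zookeeper:
  servers: 3        # number of ZooKeeper pods
cp-kafka:
  brokers: 3        # number of Kafka broker pods
cp-ksql-server:
  enabled: true     # set to false to skip the KSQL pod
```

You would pass such a file to helm install with -f values-override.yaml when installing the chart.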
Deploying Confluent Platform on Container Engine for Kubernetes

The Terraform automation template performs the following steps:

1. Deploys a Kubernetes cluster on Oracle Cloud Infrastructure in a new virtual cloud network (VCN), including subnets, a load balancer, a security list, and node pools across three availability domains.
2. Prepares your local machine to access the Kubernetes cluster: generates a kubeconfig file to access the cluster, installs kubectl, installs Helm, and adds the Confluent Helm charts to the Helm repo.
3. Installs the Confluent Open Source platform (named my-confluent-oss, but you can change the name).

This Terraform template is available on our cloud partner repository.

Figure 3: Cluster with Node Pools

By default, the Confluent Helm chart deploys three pods for ZooKeeper, three pods for Kafka, and one each for Schema Registry, Kafka REST, Kafka Connect, and KSQL. Grafana and Prometheus monitoring are optional.

Figure 4: Pods Deployed on Clusters

We hope that you are as excited as we are about Confluent Platform deployment on Container Engine for Kubernetes. Let us know what you think!

Pinkesh Valdria
Principal Solutions Architect, Big Data
https://www.linkedin.com/in/pinkesh-valdria/


Connect Your On-Premises Corporate Resources with Multiple Virtual Cloud Networks

If you have organized your resources in multiple virtual cloud networks (VCNs) to meet your governance model and regional presence, connecting all those VCNs to your on-premises networks can be a real challenge. Until now, your only option was to have a FastConnect or IPSec connection terminate at each of your VCNs. However, this option means you incur costs for multiple FastConnect links, and you have the operational burden of provisioning a new FastConnect or IPSec connection for each new VCN you add.

We are excited to announce the availability of Oracle Cloud Infrastructure VCN Transit Routing, which now offers an alternative. This solution is based on a hub-and-spoke topology and enables the hub VCN to provide consolidated transit connectivity between an on-premises network and multiple spoke VCNs within the Oracle Cloud Infrastructure region. Only a single FastConnect or IPSec VPN (connected to the hub VCN) is required for the on-premises network to communicate with all the spoke VCNs. This solution is based on our existing local VCN peering and dynamic routing gateway (DRG) offerings.

With this solution, you no longer need to attach a DRG to each of your VCNs to access your on-premises network. You attach only a single DRG to the hub VCN and allow resources in the spoke VCNs to share the (FastConnect or IPSec VPN) connectivity to the on-premises resources. At a high level, you do this by performing the following steps:

1. Establishing a peering relationship between each spoke VCN and the hub VCN
2. Associating route tables with the hub VCN's local peering gateways (LPGs) and DRG
3. Setting up rules in those route tables to direct traffic from each LPG on the hub VCN to the DRG, and from the DRG to each LPG

VCN Transit Routing offers customers many benefits:

Better network design: Simplified network management and fewer connections required to establish traffic flow between multiple VCNs and on-premises networks.
The hub VCN enables shared transit connectivity to remote networks and acts as a central point of policy enforcement for all your traffic transiting in and out of the region.

Increased service velocity and faster time to market: This solution supports FastConnect and IPSec VPN connectivity requirements between VCNs and remote resources with minimal on-premises network changes. If you add a new VCN, it now takes just a few minutes to set up local peering and routing to the hub VCN. This is a marked improvement over previous lead times of days or weeks spent waiting on corporate change management procedures to establish FastConnect or provision IPSec connections at on-premises edge routers.

Centralized control of route advertisements: This solution does not enable traffic to flow between the on-premises network and the spoke VCNs by default. You may want to allow the spoke VCNs to access all or specific corporate network partitions (or subnets) on the customer premises. You control this with the route table associated with the LPG on the hub VCN. You can configure restrictive route rules that specify only the on-premises subnets that you want to make available to the spoke VCN. The routes advertised to the spoke VCN are those in that route table and the hub VCN's CIDR. This control enables isolated connectivity of spoke VCNs' resources to their on-premises counterparts. Similarly, you may want to allow the on-premises network to access all or specific subnets in the spoke VCNs. You control this using the route table associated with the DRG on the hub VCN. You can configure restrictive route rules that specify only the spoke VCN subnets that you want to make available to the on-premises network. The BGP route advertisements to the on-premises network are those in that route table and the hub VCN's CIDR.
Cost savings: Significant TCO benefits based on centralized management of the private connectivity (FastConnect and VPN links) and routing to corporate resources on customer premises.

Streamlined operations: This solution enables a service provider model in which the hub VCN provides the transit connectivity service to the spoke VCNs. The governance boundaries of the hub VCN may be different from those of the spoke VCNs, so they may be managed by separate compartments or tenancies. In some cases, the hub and spoke VCNs are in the same company, and the central IT team’s hub VCN provides transit service to spoke VCNs managed by lines of business (LOBs). In other cases, the hub and spoke VCNs are in different companies, and one company provides transit service to others.

I hope you enjoyed learning about the VCN Transit Routing feature in Oracle Cloud Infrastructure.
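As a toy illustration of the routing mechanics described in this post, the following sketch models a route table as a CIDR-to-target map with longest-prefix-match lookup. The CIDRs and target names are hypothetical, and this is a conceptual model, not the OCI API: it shows only how the hub VCN's DRG route table steers on-premises traffic to spoke LPGs, and how each LPG route table steers spoke traffic to the DRG.

```python
import ipaddress

def next_hop(route_table, destination_ip):
    """Longest-prefix-match lookup, as a route table performs it."""
    ip = ipaddress.ip_address(destination_ip)
    best = None
    for cidr, target in route_table.items():
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

# Hypothetical route tables on the hub VCN in a hub-and-spoke setup.
drg_route_table = {                 # associated with the hub VCN's DRG:
    "10.1.0.0/16": "LPG-to-spoke-1",   # on-premises traffic bound for spoke 1
    "10.2.0.0/16": "LPG-to-spoke-2",   # on-premises traffic bound for spoke 2
}
lpg_route_table = {                 # associated with each LPG on the hub VCN:
    "172.16.0.0/12": "DRG",            # spoke traffic bound for on-premises
}

print(next_hop(drg_route_table, "10.2.3.4"))    # LPG-to-spoke-2
print(next_hop(lpg_route_table, "172.16.5.9"))  # DRG
```

A destination that matches no rule returns None, which corresponds to traffic that is simply not routed across the hub.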


Here's a Nifty Checklist to Secure a Cloud Application

When customers are migrating existing applications from on-premises data centers and from other cloud providers to Oracle Cloud Infrastructure, or even when they are building new cloud native applications on Oracle Cloud Infrastructure, I often get asked for advice on how they can best secure their applications in a cloud environment. First of all, it is critical that development teams and security teams work in tandem to secure applications as well as the cloud environment. Following the agile methodology, most modern IT organizations have transformed to a DevSecOps model. In fact, Continuous Integration and Continuous Deployment (CI/CD) with on-demand releases has also led to Continuous Security (CS).

Based on past experience with customer deployments and on training I've done with SANS and OWASP, I've put together a nifty checklist that can be used as a guide when securing any cloud application. It can also be used by your Cloud Security Operations Center (cSOC), should you have one or be looking to establish one. The checklist is categorized into seven sections:

1. SecOps and Configuration Management
2. Data Protection
3. Authentication and Access Control
4. I/O Handling
5. Logging
6. Error Handling
7. Session Management

SecOps and Configuration Management

From the outset, it's important to ensure that all security requirements are documented, and that these requirements are accounted for in your deployment, design, review, testing, and change management processes.

- Document security requirements: Work with the cloud Governance, Risk, and Compliance (GRC) group and the application team to document all the security-related requirements. These can be across functional and non-functional requirements. Transforming requirements to user stories allows you to track them using your agile ticketing system (like Rally or Jira).
- DevSecOps-friendly change management: Automate the change management process and align it with the current CI/CD process so that new releases can be deployed only after proper testing and associated documentation.
- Automated deployment: Use automation for Continuous Integration and Continuous Deployment to ensure that releases are consistent and repeatable in all environments.
- Continuous design review: Continuously review the design and architecture of the application throughout its life cycle. Security analysis, risk identification, and mitigation are key focus areas.
- Continuous code review: Continuously review the code of the application as the application is updated or modified. Security analysis, risk identification, and mitigation are key focus areas.
- Continuous security testing: Continuously test the application for security vulnerabilities throughout the DevOps process and the application life cycle.
- Infrastructure hardening based on releases: Harden all components of the logical infrastructure that the application uses, per the guidelines and compliance required for that application environment.
- Incident response automation: Automate and continuously update the defined incident-handling plan.
- Continuous training: Train developers, cloud engineers, and architects on the new features of the cloud services that the application uses.

Data Protection

It is also important to ensure that you have these data protection capabilities. At Oracle, many are built into our cloud infrastructure by default, and others are available as a service.

- HTTPS only: Use HTTPS (TLS) for front-end and back-end application flows.
- HTTP access disabled: Disable HTTP for all publicly exposed interfaces. Ideally, disable it globally.
- Use vaults for user password stores: Use secret management with Oracle Wallet or Oracle Key Vault.
- Strict-Transport-Security header: The Strict-Transport-Security header helps to mitigate HTTP downgrade attacks that use variations of the sslsniff tool.
- Secure key management: Properly store, secure, and rotate keys. Oracle Cloud Infrastructure Key Management can provide this solution.
- Strong TLS configuration: Use TLS 1.2 or above with strong EC cipher strength. Oracle Cloud Infrastructure LBaaS uses TLS 1.2 with the following cipher sets: ECDHE-RSA-AES256-GCM-SHA384, ECDHE-RSA-AES256-SHA384, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES128-SHA256, DHE-RSA-AES256-GCM-SHA384, DHE-RSA-AES256-SHA256, DHE-RSA-AES128-GCM-SHA256, and DHE-RSA-AES128-SHA256.
- Reputable certificate authority: Ensure that certificates are valid and signed by reputable certificate authorities. Match the name on the certificate with the FQDN of the website.
- Browser data caching: Configure browsers not to cache data, by using the cache-control HTTP header or meta tags.
- Data at rest: In Oracle Cloud Infrastructure, by default, all storage types (block, file, and object) are encrypted.
- Key exchange: Exchange keys over a secure channel.
- Tokenization of sensitive data: Where possible, don't store sensitive data at the web or application layer. If necessary, use tokenization to reduce exposure.

Authentication and Access Control

When it comes to authentication and access control over your cloud infrastructure, I would advise following these guidelines.

- Access control checks: Apply access control checks consistently all along the stack, following the principle of complete mediation.
- Least privilege: Apply the principle of least privilege by using a mandatory access control system such as Oracle Identity Cloud Service (IDCS) and Oracle Cloud Infrastructure IAM.
- Direct object reference: Avoid referring to objects directly.
  Always use relative pointers based on the authenticated user identity and trusted server-side information.
- Unvalidated redirects: Don't permit unvalidated redirects. Put a strong access control policy in place to validate any redirect requests.
- Credential security: Avoid hardcoding credentials. Secure the database that stores the credentials by using multiple tiers of security controls.
- Strong password policy: Implement a strong password policy along with an automated, multi-factor, identity-based password reset system.
- Account lockout policy: Implement an account lockout policy to protect against brute-force attacks. Display appropriately nonspecific messages for wrong credentials to confuse an attacker.
- Multi-factor authentication: Ensure that multi-factor authentication is in place, using YubiKeys or other hardware-based or software-based tokens.

I/O Handling

To help ensure secure I/O handling, I recommend reviewing this checklist to mitigate possible security attacks.

- Whitelist: Use whitelists in place of blacklists. Validate each input or output within the context of use.
- Standard encoding for the application: Use a standard encoding like UTF-8 consistently for all the application pages, using HTTP headers or meta tags, to reduce risks like cross-site scripting attacks.
- Nosniff header usage: Use the X-Content-Type-Options: nosniff header to stop browsers from guessing the data type.
- Tabnabbing: Prevent tabnabbing by denying the linked page the ability to change the opener's tab. This is a common look-alike phishing attack.
- Well-formed SQL queries: Use parameterized SQL queries with user content passed into a bind variable to make queries safe against SQL injection attacks. Never build SQL query strings dynamically from user input.
- X-Frame-Options: Use the Content-Security-Policy (CSP) frame-ancestors directive to mitigate clickjacking.
- Secure HTTP response headers: To defend against MITM and XSS attacks, use the X-XSS-Protection, CSP, and Public-Key-Pins headers.
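The parameterized-query item is easy to demonstrate. The following sketch uses Python's built-in sqlite3 module (the table and values are hypothetical examples); the bind variable causes a classic injection payload to be treated as a literal string rather than as SQL.

```python
import sqlite3

# Hypothetical example table for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

malicious = "alice' OR '1'='1"  # classic injection payload

# The ? placeholder binds the input as a single literal value, so the
# payload matches no row instead of rewriting the WHERE clause.
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    (malicious,)).fetchall()
print(rows)  # []

# A legitimate lookup works as expected through the same bind variable.
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    ("alice",)).fetchall()
print(rows)  # [('admin',)]
```

Had the query been built by string concatenation, the payload would have made the WHERE clause always true and returned every row.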
Logging

Of course, logging is a critical part of ensuring adherence to compliance and of maintaining a good security posture. Below are some guidelines on the types of activities to log.

- Sensitive data access logging: Log sensitive data access to meet regulatory compliance requirements such as PCI and HIPAA.
- Privilege escalation logging: Log all privilege escalation requests for audit and compliance.
- Administrative activity logging: Log all administrative access for application configurations or infrastructure configurations, whether made through the Console, CLI, or API.
- Authentication and validation logging: Log all authentication, session management, and input validation events.
- Ignore unimportant data: Avoid logging unimportant or inappropriate data to reduce storage and the associated encryption overhead.
- Secure all logs: Securely store logs by using encryption and per the established log retention policy.

Error Handling

Below are some best practices for handling unexpected errors and the error messages your system sends.

- Handle all exceptions: Handle unexpected errors and return gracefully to the user or the invoking application.
- Generic error messages: Display generic error messages to the user to protect the details of the application stack.
- Framework-generated messages: Suppress framework-generated messages because they can reveal sensitive information about the framework used and can lead to sophisticated exploits.

Session Management

And finally, I would recommend implementing these session management attributes to avoid any potential security risks.

- Session tokens: Every time a user authenticates or escalates their privilege level, generate a new session token. Regenerate the token even if the encryption status changes.
- Idle session timeout: To protect against Ajax application-based attacks, implement an idle session timeout.
- Absolute session timeout: To mitigate session hijacking, log users out every 4–6 hours.
- Session destruction: If any tampering or intrusion is detected, immediately destroy the session.
- Cookie domain and path: Restrict the domain and the path scope for the application in context. Avoid any wildcard domain setting.
- Cookie expiration time: Set a reasonable expiration time for every session cookie.
- Cookie attributes: Set the HttpOnly and Secure flags to make the session ID invisible to any client-side scripts.
- Session logout: When the user logs out of their session, invalidate and destroy the session.

In conclusion, I hope this checklist comes in handy as you're migrating or building applications in the cloud. Note that this is a subset of various checklists that can be downloaded from the following resource sites: NIST, OWASP, and SANS.
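The idle and absolute timeout items from the session management checklist can be sketched in a few lines. The thresholds and session fields here are hypothetical examples, not values from any particular framework.

```python
import time

IDLE_TIMEOUT = 15 * 60          # hypothetical idle limit: 15 minutes
ABSOLUTE_TIMEOUT = 4 * 60 * 60  # absolute limit: 4 hours, per the 4-6 hour guidance

def session_expired(created_at, last_seen_at, now=None):
    """Expire a session when it has been idle too long or is simply too old."""
    if now is None:
        now = time.time()
    idle_expired = now - last_seen_at > IDLE_TIMEOUT
    absolute_expired = now - created_at > ABSOLUTE_TIMEOUT
    return idle_expired or absolute_expired

now = time.time()
print(session_expired(now - 3600, now - 60, now))       # False: active, under 4 hours
print(session_expired(now - 3600, now - 20 * 60, now))  # True: idle for 20 minutes
print(session_expired(now - 5 * 3600, now - 10, now))   # True: older than 4 hours
```

On expiry, the server would destroy the session record and force re-authentication, which also covers the session destruction item.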


Inside NVIDIA and Oracle's Partnership on AI and HPC in the Cloud

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders. Oracle is now offering NVIDIA's unified artificial intelligence (AI) and high performance computing (HPC) platform on Oracle Cloud Infrastructure. I recently caught up with Karan Batta, who manages HPC for Oracle Cloud Infrastructure, to find out what this partnership means for Oracle customers who run performance-intensive workloads and are looking to move to the cloud. He also explains how Oracle makes it easy for customers to transfer NVIDIA HPC workloads to the cloud. What follows is a condensed version of our conversation.

Why is the partnership between Oracle and NVIDIA such a big deal?

Karan Batta: It's a big deal in part because we are the first public cloud provider to support NVIDIA HGX-2, the company's unified AI and HPC platform. But let's talk about the GPU market for a minute. I would say that the GPU-accelerated market is going to be a huge portion of the future market. Obviously, it doesn't make sense to move everything to a GPU. But certainly, a lot of computationally intensive tasks, like risk modeling, DNA sequencing, and a lot of real-time analysis, make sense for GPUs. The big use cases today are things like AI and ML, and in the future, it will be things like autonomous driving and weather simulation. Many tasks can benefit from GPUs.

Why did Oracle choose to partner with NVIDIA?

Batta: NVIDIA is the global leader right now in terms of not just the GPU hardware but the software ecosystem as well. They've done a fantastic job of growing their ecosystem around CUDA and different open source libraries such as cuDNN and cuML. What we're trying to do at Oracle Cloud Infrastructure is enable the entire ecosystem on our platform.
We're not going to tell people to rip up their applications and use our APIs instead of anybody else's, like other cloud providers do. If you're already invested in the ecosystem, you want to come to Oracle. Not only do we offer the best GPU infrastructure, but you can also get the ecosystem along with it. As part of that effort, we also announced that we've integrated the NVIDIA GPU Cloud (NGC) container registry. NVIDIA essentially builds, manages, qualifies, certifies, benchmarks, tests, and publishes many containers for deep learning, ML, AI, and HPC, and now they're moving into data analytics as well. We're supporting all of that in our public cloud.

Are we certified for this?

Batta: Yes. Right now, we're the only ones that have RAPIDS available on a public cloud certified through NGC. RAPIDS is a suite of open source software libraries for executing data science training pipelines entirely on NVIDIA GPUs. It's generally available, and you can find documentation on NVIDIA's and Oracle's websites.

What do we offer in terms of making it easier for customers to transfer NVIDIA HPC workloads to Oracle Cloud Infrastructure?

Batta: We've made it much easier for customers to use the NVIDIA stack on top of Oracle. I think that is one of the biggest things that people are starting to notice. You can take any framework or application that is already running on GPUs and quickly run it on Oracle Cloud Infrastructure without changing the image or anything else. That's true even if you have an on-premises image. You can run it paravirtualized on Oracle Cloud Infrastructure and it just works. On top of that, we are co-building this hardware with NVIDIA. We're doing special things in regard to how we build that hardware, and especially how we spec that hardware for different types of markets, whether it's AI or a legacy HPC workload.

Can you tell me how many Oracle Cloud Infrastructure regions have these capabilities right now, and what are the future plans?
Batta: This is available today in all of our regions. We have four major regions today: Virginia, Phoenix, London, and Frankfurt. And we've announced numerous new regions that will come online in the next 12 months in places like Korea, Japan, and India. We're also going to have quite a few government regions, along with additional regions in Europe and Asia-Pacific, so we are in this for the long term. All of these capabilities are going to be uniform across all of our regions.

Okay, I'm sold. I want to take this for a test drive. How do I try it out?

Batta: We offer $300 in free credits, so you can go to our website and try it out. If you have additional questions or if you want to try out something different, feel free to reach out to me and my team. We'd be more than happy to guide you and make sure that you're successful on Oracle Cloud Infrastructure.
