Recent Posts

X Marks the Spot: Cloud Native CI/CD with Jenkins X

Overview

A rapid pace of product and feature delivery is key to companies remaining competitive in the market. This need has led to a paradigm shift in the software industry: a move from traditional development practices to agile DevOps and the adoption of Continuous Integration and Continuous Deployment (CI/CD). DevOps fosters a strong relationship between development and IT operations teams. Continuous Integration encourages development teams to check code into a centralized version control repository and iterate frequently with small changes. Continuous Deployment consists of running code changes through a series of automated tests before packaging up and promoting successful builds to production. These philosophies have made building, testing, and releasing software faster and more reliable. A lot can be said about this new model, but for the purpose of this article I will focus on how it applies to cloud native development.

One major component of cloud native development is the adoption of a microservice architecture, which encourages developers to break up monolithic applications into decoupled services based on specific functions. This is accomplished through the use of containers. Containers are isolated software packages that include system tools, libraries, and settings or configuration files. They can be very lightweight because they share a single operating system kernel, and that lightweight nature makes them essential to the microservice paradigm. The challenge of microservices and containers is managing and developing on environments with so many moving parts. Existing build and deploy technologies have had to keep pace with the shift to containerization and an increase in the frequency of releases in order to stay relevant and useful.

Solution

The container orchestration tool Kubernetes addresses many of the issues of managing complex container environments.
Over the past few years it has become the standard for doing so at scale. But what CI/CD tool pairs well with it? A good place to start looking is Jenkins, the leading open source software development automation solution. Jenkins is capable of performing most, if not all, of the technical aspects of the CI/CD space. The challenge is that the tool was not originally designed with containerized environments in mind, and that it is almost too flexible: there is a glut of plugins and configurations to choose from for your Jenkinsfile, which can be overwhelming for new users or for developers looking for a cloud native or container native solution. Fortunately, the makers of Jenkins released a cloud-based version of their tool with automation specifically designed for Kubernetes environments. Their solution, Jenkins X, is an open source, opinionated version of Jenkins designed to run on Kubernetes. If you are already familiar with Jenkins, Jenkins X provides a familiar tool for you to use. If you use another CI/CD tool or are new to the space, the simplified installation of Jenkins X will give you a good jumping-off point from which to explore cloud native development. The tool is pre-configured with runners to build your code for almost any language, such as Go, JavaScript, and Java. This is a must-have in a polyglot, multi-service cloud native environment.

What Does It Do?

Jenkins X provides a drastically simplified installation process in which each component of Jenkins is installed as its own pod in a cluster. When it comes time to deploy, rather than spending your time packaging software as Docker images and writing Kubernetes manifest files, you can rely on ready-made configurations to build and deploy containers across several common software languages.
Rather than having to configure a git repository yourself, running jx create will create staging and production git repositories for your project in GitHub, register a webhook to capture git push events, and create a Helm chart for your project. Each change to your code will be automatically built and deployed onto Kubernetes, so you can easily test in the same environment you will deploy to in production. Configuring all of this manually is time consuming, challenging, and ultimately a distraction from writing code.

Some of the useful components of Jenkins X include the jx CLI tool, automated CI/CD pipelines, and GitOps. The jx CLI can be used to create projects from templates, such as Spring Boot applications, quickstart projects, and existing source code. It can also be used to provision Kubernetes clusters on supported cloud environments, tail logs of applications running in Kubernetes, open remote shells into pod containers, import existing applications, start builds, and a host of other helpful activities. The pipelines that Jenkins X creates for you are designed with cloud native in mind: they come complete with a Jenkinsfile that builds your code in a container, a Dockerfile to package up your application, and a Helm chart to deploy your application to Kubernetes. If you are an experienced Jenkins user, all of those files are available to be further customized. Jenkins X also supports GitOps, a methodology in which changes to code made in git are automatically rolled out to your infrastructure via delivery pipelines. Staging and production environments are created for you out of the box, and it is easy to add additional environments as well.

Installation

To use Jenkins X, download the latest release from the official GitHub page and run jx install. For a more detailed walkthrough of how to install Jenkins X on Oracle Container Engine for Kubernetes and how to connect to the Oracle Registry, check out this guide.
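To make the workflow concrete, here is a sketch of a typical jx session. Exact flags and template names vary across jx releases, and my-spring-app is a hypothetical project name used for illustration:

```shell
# Scaffold a new Spring Boot project from a template; jx creates the
# GitHub repository, webhook, Dockerfile, Jenkinsfile, and Helm chart.
jx create spring -d web -d actuator

# Or import an existing application into Jenkins X instead
jx import

# Watch the pipeline activity triggered by each git push
jx get activities

# List running applications and the environments they are deployed to
jx get applications

# Promote a successful build from staging to production
jx promote my-spring-app --version 1.0.1 --env production
```

After the promote step, the GitOps model means the production environment's git repository records the change, so the state of your cluster stays traceable to a commit.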
Conclusion

Jenkins X is a good choice both for people looking to lift and shift their existing Jenkins server to Kubernetes and for those just getting started with cloud native development. The opinionated nature of the service, the comprehensive CLI tool, and the out-of-the-box support for numerous quickstart templates make it easy to begin the journey to cloud native development. Once you have a process to test, build, and deploy your code to the cloud, you can start experimenting and iterating to find out what practices best suit your development needs.


Java and Cloud Native at The Thirsty Bear

This was the ninth annual pre-conference gathering of Java practitioners and cloud native developers, held on the Sunday evening before the start of Oracle OpenWorld / JavaOne (now Code One). In a new twist this year, demo stations dotted the space, and attendees were able to catch up on the latest news about Jakarta EE, try their hand at playing Reversi, find out more about the Helidon project, see a WebLogic on Kubernetes demo, get the scoop on what Oracle is doing with microservices, and track what the Serverless / Fn team is up to.

The usual suspects were present - Java User Group (JUG) leaders and Java Champions from around the world: Fabiane Nardon, Bruno Souza, Sven Reimers, Otavio De Santa, Roy Wasse, Hillmer Chona and Bert Breeman, to name a few, plus a healthy showing of other modern technologists. Attendees hailed from 23 countries, and over 127 different companies and user groups were represented, including IBM, Red Hat, JetBrains, Tomitribe, Software AG, VMware, Harvard, Microsoft, Wells Fargo, Alibaba, Comcast, Google, Payara, Equifax, Sutter Health, NASA, Fujitsu, and many more.

The place was buzzing with news from a few of the stellar Oracle Startup Cloud Accelerator (OSCA) participants who were in attendance; it was great catching up with Peter Lilley of iGeolise and Sean Phillips of AI Solutions to hear how their work is progressing. Emily Tanaka-Delgado and the video crew captured some incisive interviews with a few attendees and demo station owners, plus some live action shots of the other activities - we will be sharing these soon!
Scenes from the evening:

The Helidon project team
Jakarta news
The Reversi game and demo station was buzzing with interest
The WebLogic on Kubernetes and Microservices team, Monica Riccelli and Maciej Gruszka
Two former CEOs, now on the Cloud Native Labs team at Oracle - Mies Hernandez van Leuffen (Wercker) and Chad Arimura (iron.io)

We filled a table full of GlassFish and other Java-related memorabilia from years past - stickers, hats, plushies, mugs, T-shirts and more - all of it was scooped up in no time! If you joined us, thanks for attending! And if you missed it, be sure to join us next year, or invite us to present at a meetup in your city!


Running a MySQL Database in Containers - The Right Way

Can I Even Run a Database in a Container?

Are you interested in running a database in containers? Are you trying to understand if it's reasonable, or even possible, to run a database in a containerized environment? The answer is: yes, it is possible. And it can even be beneficial when done the right way. But how do you make sure you run your database in containers "the right way"? Containers in their most popular current form (Docker) have only been around since about 2013. The leading container orchestrator, Kubernetes, has only been available open source since 2015. With such a short history, it's hard to find solid guidance on best practices for running much of anything in containers, not to mention something as sensitive as a database.

So I Know That I Can, but When Should I Run My Database in a Container?

Probably not every database should go in a container, but there are lots of cases where running a database in a container makes sense and could even help improve the way your team works. Great use cases for containerized databases include:

Test environments - say you need to test out a new product or feature that needs to pull data from a database. It would streamline your testing if you could just spin up a new database with some test data for each test, rather than having to share a big test database, possibly with other teams.

Development environments - say your developers are working on a product or feature that needs to access a database. They may need to make sure their code is successfully accessing a database, but may not care exactly what the data is so long as it fits the right format, or perhaps a small sample of data would be sufficient. Imagine being able to spin up many small databases efficiently that your developers can use in their development process.

Demos - say you need to put together a demo that's easy to spin up and tear down, and the demo needs a database.
It'd probably be more convenient if you could spin everything up together, rather than baking in access to a remote database (in a lot of cases, at least). Containers provide a great way to package up that demo and deploy it.

Generally speaking, if your use case could be satisfied by a relatively small sample database that can be spun up quickly on-demand, a containerized database might be a good fit. Here is a case where you might want to be careful when considering containerizing your database:

Production environments with strict performance requirements - this is primarily a bit of "If it ain't broke, don't fix it" wisdom. Containers as a technology are still relatively young, and although there are some use cases where they can provide performance benefits, they may not be the best choice for the strict and extremely high level of performance and reliability needed in a full-fledged production environment. For example, containers are great as a way to spin up a pre-packaged application reliably. Container orchestrators, especially Kubernetes, expect containers to die unexpectedly (a possibility to be accounted for in VM or bare metal environments too, to be fair). They make up for that possibility by providing good tools to help you create and manage many instances of the same thing (HA), and by automatically starting new containers to replace ones that are failing. But spinning up a new container and/or failing over to another container in the cluster does take some time. Getting used to the way container failures should be handled takes time, and you'll want to make sure your team is ready before you take the leap into running your production databases in containers.

Introducing the MySQL Operator for Kubernetes!

MySQL is a database that is popular with developers and devops engineers. As such, it's an excellent candidate for a containerized database.
But running MySQL the right way requires knowledge both of MySQL and of running containerized infrastructure. The team at Oracle has worked to simplify the task of running and managing MySQL in containers so that you can run it the right way, without having to learn everything there is to know about the way containers and MySQL should work together.

The MySQL Operator for Kubernetes essentially teaches Kubernetes what it needs to do to run MySQL the right way. It teaches Kubernetes to treat MySQL clusters as a first-class resource type in Kubernetes' API. It teaches it how many nodes you need for a minimally HA cluster in your Kubernetes environment. It allows you to create and manage MySQL backups for your MySQL containers more easily. It even teaches Kubernetes what to do if one of your MySQL database instances goes down: auto-heal by creating a new one and adding it into the cluster (though you may need to do some configuration around your backups to get fully back up and running). By using the MySQL Operator, you can create an HA MySQL cluster, within your Kubernetes cluster, with ease. So in a nutshell, the MySQL Operator teaches Kubernetes how to run MySQL in containers, the right way, with as little outside help as possible.

Of course, that doesn't mean you won't have to learn anything new. Running a database in containers has its own quirks and differences from traditional database paradigms, differences that the team in charge of it will have to learn about in order to manage it most efficiently. But over time, you'll likely find that running databases in containers allows your team and your business to do new things and solve new problems that you couldn't have tackled with more traditional deployment methods.

Let's Get Started!

So why not spend a little time to try something new that could change the whole way your team handles databases?
You can learn about all the things mentioned in this post and try out the MySQL Kubernetes Operator for yourself by following our quickstart guide. By following this guide, you'll not only set up a MySQL database in containers in a Kubernetes cluster, you'll also learn more about the features and variables of the MySQL Operator that allow you to configure your containerized MySQL database to best fit your needs. If you want to explore the MySQL Kubernetes Operator code and documentation on your own, check out the GitHub repo.
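As a taste of what the quickstart walks through, declaring an HA cluster with the operator comes down to a short manifest. This is a sketch modeled on the operator's published examples; the apiVersion, kind, and field names may differ in the version you install, and my-app-db is a hypothetical name:

```yaml
# A hypothetical three-member HA MySQL cluster managed by the MySQL Operator.
apiVersion: mysql.oracle.com/v1
kind: Cluster
metadata:
  name: my-app-db
spec:
  members: 3   # enough members for the operator to fail over automatically
```

Once the operator is installed, applying a manifest like this with kubectl leaves the creation of the underlying pods, services, and replication configuration to the operator.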


Putting the Open in Oracle OpenWorld with Oracle Cloud Native

Cloud Native technology is everywhere this week at Oracle OpenWorld and Code One. The Cloud Native Labs team will be out in force talking about Kubernetes, Serverless, and Cool Startup Use Cases – plus so much more. If you take a moment to look closely at what’s being showcased here at Code One and OpenWorld, and more broadly across Oracle today, you might be surprised to see open source technologies everywhere – Oracle Cloud Infrastructure, Oracle Linux, Java, MySQL, Machine Learning, Serverless, Big Data, and more. You shouldn’t be – there’s actually a very rich history at Oracle across all these technologies. Now, with the rise of the Developer, the emergence of the Cloud Native Computing Foundation (CNCF) as a unifying and organizing force in the cloud native ecosystem (end users and vendors alike), and development teams big and small moving to a multi-cloud or hybrid-cloud model, the vision that application development and DevOps technology should be open and thus cloud neutral is within reach - allowing developers, enterprises, and startups alike to run anywhere they choose.

Let’s take a cloud native, open source tour around OpenWorld and Code One. Below I’ve listed some open source highlights – curated a bit – and I encourage folks to explore all these subjects on their own!

Kubernetes – Last year at OpenWorld 2017 we launched a fully managed Kubernetes service, Oracle Container Engine for Kubernetes (OKE), which was one of the first certified by CNCF in November 2017. This year there are sessions to learn how customers are using Kubernetes in production, best practices for building apps, and Meet the Expert sessions where you can get answers to all of your questions. Check them all out here and more.

Serverless – The open source serverless Fn Project has many sessions at Code One and in the Meet the Expert area.
These range from a hands-on lab and tutorial to sessions on “Serverless Java: Challenges and Triumphs” and “Bringing Serverless to Your Enterprise with the Fn Project.” Learn more about the project at the GitHub repo https://github.com/fnproject. There are lots of options to learn a little, get started, go deeper, and even play around with Fn in a lab environment.

Linux – Today, Oracle announced the Oracle Linux Cloud Native Environment, a curated set of open source CNCF projects that can be easily deployed, have been tested for interoperability, and include enterprise-grade support. This is big news, as it extends cloud native coverage to any environment a developer needs – from on-premises to hybrid to public cloud. By choosing an open source, CNCF-centric stack, developers can avoid cloud lock-in and deploy where they want to, and with the Oracle Linux Cloud Native Environment, Oracle provides a unique, ubiquitous framework – from on-prem to cloud – with full support for open standards and specifications.

Java – A can’t-miss open source staple of this year’s conference originates from the original JavaOne conference tracks, which continue on throughout the Code One conference. Make sure to check out the Core Java Platform Track, which has deep roots in the JavaOne conference and continues to be stewarded by the Oracle Java Platform Team. Also be sure to catch the sessions from the Oracle Java Platform Team that run as part of the Core Java Platform Track at the event! Other highlights include the open source Graal sessions, too.

Big Data, Data Science, Deep Learning – There’s a ton of open source driven data science, machine learning, deep learning, and artificial intelligence sessions at OpenWorld this year, with a key focus on Spark, Kafka, and Hadoop platforms.
With the confluence of big data and machine learning on top of cloud native platforms, don’t miss the open source session on GraphPipe – “Blazingly Fast Machine Learning Inference [DEV5593]” on Monday, Oct 22, 11:30 a.m. - 12:15 p.m. at Moscone West - Room 2009.

MySQL – Finally, OpenWorld and Code One have a whole track dedicated to MySQL. I am particularly looking forward to the “Containerized MySQL [DEV6110]” session by Patrick Galbraith. The session digs into the ins and outs of running containerized MySQL, from a simple standalone MySQL container, to backups and Helm charts, to Kubernetes operators.

OpenWorld, Code One – Cloud Native Team Highlights

Some more detail on where to find the Cloud Native Labs team through this week:

Tuesday, Oct 23
2:30 p.m. - 3:15 p.m. | Operating a Global-Scale FaaS on Top of Kubernetes [DEV5599] | Moscone West - Room 2009 | Chad Arimura, VP Software Development, Oracle; Matt Stephenson, Consulting Member of Technical Staff, Oracle
2:45 p.m. - 4:45 p.m. | Hands-on Lab: Getting Started with Functions and the Open Source Fn Project - BYOL [HOL5446] | Moscone West - Overlook 2A (HOL) | Shaun Smith, Director of Product Management - Serverless, Oracle; David Delabassee, Software Evangelist, Oracle; Matthew Gilliard, Principal Software Engineer - Container Development, Oracle

Wednesday, Oct 24
4:45 p.m. - 5:30 p.m. | Kubernetes in an Oracle Hybrid Cloud [BUS5722] | Moscone South - Room 160 | David Cabelus, Senior Principal Product Manager, Oracle; Jason Looney, VP of Enterprise Architecture, Beeline; Irshad Ismail, IT Director, Dr. Sulaiman Al Habib Medical

Thursday, Oct 25
9:00 a.m. - 9:45 a.m. | Kube Me This! Kubernetes Ideas and Best Practices [DEV5369] | Moscone West - Room 2007 | Karthik Gaekwad, Principal Engineer, Oracle
9:00 a.m. - 9:45 a.m. | Cloud Native Developer Panel: Innovative Startup Use Cases [DEV5600] | Moscone West - Room 2014 | Bob Quillin, Vice President, Developer Relations, Oracle; Michal Meiri, CEO, Agamon; Mariano Vazquez, Co-Founder and CTO, ELEM Biotech; Kenny Gorman, Founder and CEO, Eventador.io
10:00 a.m. - 10:45 a.m. | A Guide to Enterprise Kubernetes: Journeys to Production [DEV5623] | Moscone West - Room 2001 | Jon Reeve, Senior Director, Product Management, Oracle; Charlie Davies, CEO, iGeolise; Jenny Griffiths, Founder & CEO, Snap Tech
10:00 a.m. - 10:45 a.m. | Bringing Serverless to Your Enterprise with the Fn Project [PRO4600] | Moscone South - Room 160 | Chad Arimura, VP Software Development, Oracle
1:00 p.m. - 1:45 p.m. | Serverless Java: Challenges and Triumphs [DEV5525] | Moscone West - Room 2009 | Shaun Smith, Director of Product Management - Serverless, Oracle; David Delabassee, Software Evangelist, Oracle; Matthew Gilliard, Principal Software Engineer - Container Development, Oracle

Meet the Real Experts

There’s also a great opportunity in the Hub area for Code One to connect and meet up with some of our Kubernetes, serverless, MySQL, and cloud native subject matter experts:

Monday, Oct 22
GraphPipe: Blazingly Fast Machine Learning Inference [MTE6748] | 4:00 p.m. - 4:50 p.m. | Moscone West - The Hub - Lounge A

Tuesday, Oct 23
Kubernetes in the Enterprise AMA [MTE6754] | 10:00 a.m. - 10:50 a.m. | Moscone West - The Hub - Lounge B
MySQL Kubernetes [MTE6762] | 1:00 p.m. - 1:50 p.m. | Moscone West - The Hub - Lounge A

Wednesday, Oct 24
Straight to Serverless: Shortening Your Path to Cloud Native [MTE6773] | 3:00 p.m. - 3:50 p.m. | Moscone West - The Hub - Lounge A


Kubeapps for Oracle Container Engine for Kubernetes

Guest author: Ara Pulido, Engineering Manager, Bitnami

Kubeapps is a web-based UI for deploying and managing applications in Kubernetes clusters. It allows your cluster users to deploy applications packaged as Helm charts directly from their browsers. Bitnami has been working on making the experience of running Kubeapps on top of an Oracle Container Engine for Kubernetes (OKE) cluster great, including testing and improving Bitnami’s authored Helm charts so they work out of the box in OKE clusters. In this blog post, we will explain how you can deploy Kubeapps into your OKE cluster and use it to deploy any of the many Bitnami Helm charts available. This post assumes that you already have an OKE cluster and that kubectl is configured to talk to it.

Install the Helm CLI locally and in your cluster

When creating an OKE cluster, you have the option to have Tiller (Helm’s server component) deployed into your cluster. You can check whether Tiller is already running in your cluster:

$ kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kube-dns-66d8df795b-j6jnb               3/3     Running   0          22h
kube-dns-66d8df795b-p4md6               3/3     Running   0          21h
kube-dns-66d8df795b-twg4r               3/3     Running   0          21h
kube-dns-autoscaler-87496f994-46gwr     1/1     Running   0          22h
kube-flannel-ds-c9knx                   1/1     Running   2          21h
kube-flannel-ds-jtcm8                   1/1     Running   3          21h
kube-flannel-ds-lbbwc                   1/1     Running   2          21h
kube-proxy-5bm72                        1/1     Running   1          21h
kube-proxy-m2xbd                        1/1     Running   0          21h
kube-proxy-zflnz                        1/1     Running   0          21h
kubernetes-dashboard-8698b85796-5n6rr   1/1     Running   0          22h
tiller-deploy-5f547b596c-djbnb          1/1     Running   0          22h

If you have a pod called tiller-deploy-* running, then Tiller is already deployed in your cluster. To interact with Helm you need to install the CLI tool locally on your system. As the version of Tiller that comes with your OKE cluster is 2.8.2, you need to install a compatible CLI version. Follow the instructions on the GitHub 2.8.2 release page to install the Helm CLI on your system. If Tiller is not deployed in your cluster yet, you can deploy it easily by running:

$ helm init

Deploy Kubeapps in your cluster

The next step is to deploy Kubeapps in your cluster. This can be done with Helm in your terminal by running:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install --namespace kubeapps -n kubeapps bitnami/kubeapps

Kubeapps requires a token to log in; the token is then used in every request to make sure that the user has enough permissions to perform the required API calls (if your cluster has RBAC enabled). For this blog post, we will create a service account with cluster-admin permissions, as explained in the Kubeapps documentation:

$ kubectl create serviceaccount kubeapps-operator
$ kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator

With the following command we reveal the token that we will use to log in to the Kubeapps dashboard:

$ kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{.secrets[].name}') -o jsonpath='{.data.token}' | base64 --decode

Accessing the Kubeapps dashboard and logging in

The default values for the options in the Kubeapps Helm chart deploy the Kubeapps main service as a ClusterIP service, which cannot be accessed externally.
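If you plan to set up Kubeapps access more than once, the service-account and token steps can be wrapped in a small script. This is a sketch that reuses the kubeapps-operator account name from above; note that cluster-admin is far too broad for anything beyond a demo:

```shell
#!/usr/bin/env sh
# Create a demo service account with cluster-admin rights and print its token.
# WARNING: cluster-admin is suitable for demos only; scope down for real use.
ACCOUNT=kubeapps-operator

kubectl create serviceaccount "$ACCOUNT"
kubectl create clusterrolebinding "$ACCOUNT" \
  --clusterrole=cluster-admin \
  --serviceaccount="default:$ACCOUNT"

# Look up the secret Kubernetes generated for the account, then decode
# the bearer token to paste into the Kubeapps login screen.
SECRET=$(kubectl get serviceaccount "$ACCOUNT" -o jsonpath='{.secrets[].name}')
kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
```
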
We will use Kubernetes’ port-forward feature to access it locally:

$ echo "Kubeapps URL: http://127.0.0.1:8080"
$ export POD_NAME=$(kubectl get pods --namespace kubeapps -l "app=kubeapps" -o jsonpath="{.items[0].metadata.name}")
$ kubectl port-forward --namespace kubeapps $POD_NAME 8080:8080

Once the port-forward is running, you can access Kubeapps in your browser at http://localhost:8080. You will be prompted with a login screen. To log in, paste the token you obtained in the previous section. Once you are logged in, you can browse all available charts via the Charts link.

Using Kubeapps to deploy Bitnami charts in your OKE cluster

Bitnami maintains a catalog of more than 50 charts, and those have been fully tested and polished to work out of the box on an OKE cluster. You can see the Helm charts in the Bitnami repo by accessing http://localhost:8080/charts/bitnami/. As an example, we will deploy the Bitnami WordPress Helm chart through Kubeapps. After selecting the WordPress chart, we will deploy it with the default values, which will create a LoadBalancer service and deploy a MariaDB database in the cluster.
You can check that both pods are up and running, and that PVCs, backed by OCI, have been provisioned:

$ kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
my-wordpress-mariadb-0                  1/1     Running   0          4m
my-wordpress-wordpress-5cfc65b9-dnblz   1/1     Running   0          4m

$ kubectl get pvc
NAME                          STATUS   VOLUME                                                                              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-my-wordpress-mariadb-0   Bound    ocid1.volume.oc1.phx.abyhqljsslsar3caoqzxotoaci3svroeci4g2pkh7rva6gtck4tqbckslmnq   50Gi       RWO            oci            4m
my-wordpress-wordpress        Bound    ocid1.volume.oc1.phx.abyhqljsiewzlxlyuaeulf6nsm4w2wqnzwq3ho3vbisrcjumroga6l765r6q   50Gi       RWO            oci            4m

Also, as this is a LoadBalancer service, OKE will provision a load balancer with an IP you can use to access your new WordPress website.

Summary

In this blog post, we explained how you can use Kubeapps in your Oracle Container Engine for Kubernetes cluster to deploy and maintain OKE-ready Kubernetes applications from Bitnami. These applications were specifically tested for the Oracle platform, and you can rest assured that they follow Bitnami’s secure and up-to-date packaging standards. You can visit the Kubeapps Hub to keep track of which Bitnami charts are available and the supported versions.


Going Onsite with Cloud Native Labs

Summary

The cloud native landscape is filled with countless interwoven projects and constantly evolving best practices, making it complex to navigate. The Cloud Native Labs team brings experience, informed opinions, and best practices regarding which technologies to use as you begin to explore Kubernetes and cloud native development. Last week, the Cloud Native Labs team and the Oracle A-Team had the opportunity to run onsite workshops with customers in Denmark and the Czech Republic. The purpose was to share best practices and experiences running applications in a production-ready containerized environment. Using Oracle Container Engine for Kubernetes as our foundation, we discussed how to configure continuous integration and continuous deployment pipelines, logging and monitoring, and a service mesh. In each two-day session, we were joined by 15-20 developers from European enterprises who brought various levels of experience with Kubernetes and other cloud native technologies.

Content of the Labs

Our presentations and labs covered the lifecycle of a company's use of Kubernetes. We began by creating a multi-node cluster with Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE), on top of Oracle Cloud Infrastructure, an enterprise-grade IaaS. This included the configuration of virtual cloud networks, setting up our node pool, and configuring the OCI CLI to download our kubeconfig file. After connecting to our newly created cluster, we interacted with it using kubectl. Next, we shared how to connect the Oracle Cloud Infrastructure Registry (OCIR) to our cluster and configure Kubernetes to deploy a sample application. To operationalize the application deployment process, we set up Wercker, a cloud-based CI/CD solution connected to GitHub. This is useful for having a reproducible and repeatable process that pulls from a common repository accessible to your development team.
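The kubeconfig step mentioned above comes down to a single OCI CLI call. This is a sketch; the cluster OCID is a placeholder, and flag names may differ across CLI versions:

```shell
# Download the kubeconfig for an existing OKE cluster, then verify access.
# <cluster-ocid> is a placeholder for your cluster's OCID.
oci ce cluster create-kubeconfig \
  --cluster-id <cluster-ocid> \
  --file $HOME/.kube/config

export KUBECONFIG=$HOME/.kube/config
kubectl get nodes   # lists the worker nodes in your node pool
```
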
After we updated our code in GitHub, Wercker ran an automation pipeline and built the container image. The new image was pushed to OCIR, a Docker API compatible container registry, and then deployed to an instance of OKE, which updated the application by replacing the existing containers/pods.

After creating a cluster and configuring our CI/CD to deploy the application, we switched focus to productionizing it. We selected a handful of open source projects from the Cloud Native Computing Foundation (CNCF) to share. Before diving into those projects, we discussed a commonly used package management tool for Kubernetes called Helm. We walked through how Helm can be used to package Kubernetes resources into charts, which are useful for repeatable application installation, configuration, and versioning of Kubernetes resources. Helm is also useful because it offers a repository of charts for many CNCF projects, including the three solutions we chose to focus on: the EFK stack, Prometheus and Grafana, and Istio.

We used Helm to install the EFK stack, which consists of Elasticsearch, Fluentd, and Kibana. This stack is a useful solution for capturing application and system logs in Kubernetes, which can help with diagnosing and addressing problems impacting cluster health or your application. The application we deployed earlier in the workshop included middleware that logged the time, route, and user-agent of any HTTP request to the application. We used Apache Benchmark to ping the application in order to produce logs. As applications start on the nodes, the Fluentd daemonset tails the logs of the containers underlying the application. Those logs were forwarded to Elasticsearch and given additional tags used to enhance our search. We then graphed the data in Kibana.

In addition to showing how to capture and visualize log data, we wanted to demonstrate how to monitor a cloud native environment.
For this we chose Prometheus and Grafana, an industry-standard pair of tools for application telemetry and observability. Prometheus, a time-series database, includes tools to scrape application data. Grafana is used to model that data into useful dashboards. While our application had not been alive long enough for the tools to gather particularly meaningful data, we were able to share a proof of concept showing how to instrument an application with a custom metric and display that metric on a Grafana dashboard. The final solution we showed during the workshop was the service mesh tool, Istio. Istio is a comprehensive tool used to connect, manage, and secure microservices, such as the sample application we deployed in Kubernetes. We chose to focus on a handful of aspects of Istio, including A/B testing, mirroring and shadowing, fault injection, tracing, and service mesh monitoring. We started by deploying multiple versions of the same application. Routing rules were then deployed to split traffic between the versions based upon a percentage, such as 10/90 or 50/50, which is great for A/B testing. The next pattern was mirroring/shadowing. We had two versions of a service deployed, production and testing: live traffic was served by the production version, while a copy of each request was mirrored to the test version. This allowed us to exercise the new version under production traffic without impacting the currently deployed production service. We also demonstrated fault injection for negative testing and error handling. Finally, we demonstrated how to use Istio for monitoring and tracing. Istio leverages Prometheus for general monitoring, Jaeger for tracing through service calls, and the Envoy sidecar for monitoring the service mesh. These tools are useful for identifying potential bottlenecks.
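The percentage-based routing rules mentioned above look roughly like this in Istio's VirtualService syntax (the service and subset names here are illustrative, not the workshop's actual manifests):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sample-app            # hypothetical service name
spec:
  hosts:
    - sample-app
  http:
    - route:
        - destination:
            host: sample-app
            subset: v1        # subsets are defined in a matching DestinationRule
          weight: 90
        - destination:
            host: sample-app
            subset: v2
          weight: 10
```

Changing the weights to 50/50 covers the other A/B split described above, and adding a `mirror:` field pointing at the test subset gives the mirroring/shadowing pattern.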
Developer Response During the workshop, we received great questions around access management and applications of RBAC (role-based access control). People were interested in managing access to different levels of the cluster, which is typically something users consider once they are further along their Kubernetes journey. The developers were also interested in more information about key management and secrets, and they asked our opinions about the types of container images used to run their applications. The customers were very receptive to Oracle's managed Kubernetes offering. The scope of the workshop went beyond the creation of a Kubernetes cluster and the deployment of a hello world application. The overarching theme was to deploy an application in a cloud native way: we walked through configuring a smart CI/CD process and how to make sure everything is resilient and properly instrumented, complete with logs, tracing, and telemetry data. We are looking forward to future customer workshops and our next chance to try a Czech Pilsner.


Oracle @ Grace Hopper Celebration of Women in Computing 2018!

Grace Hopper Celebration of Women in Computing The Grace Hopper Celebration of Women in Computing is a unique technology conference that promotes and celebrates women technologists. With around 20,000 attendees, GHC18 was easily the largest conference I've ever attended. Aside from its unusual size, one of the more notable differences between GHC and other tech conferences is that most of those 20,000 attendees were women. The sessions at the conference ranged from deep technical topics like machine learning, to leadership and career advice - all of which could be useful to anyone in the technology field. But above all (from my perspective), the Grace Hopper Celebration excels at encouraging networking. Most of this networking happens at the Employer Showcase. The Employer Showcase The showcase is a giant space where employers set up booths to attract attendees, particularly those in search of employment. Booths included anything from simple tables with signs, to novelty designs for Instagramming photos, and of course swag galore. There were so many big names and flashy booths that walking through the showcase felt like being right in the middle of the opening of HBO's hit show Silicon Valley. This year, Grace Hopper split the Employer Showcase into two parts - about 3/4 of the giant expo hall was dedicated to recruiting booths, while the last 1/4 was all booths dedicated to technical demos. The showcase provides employers and attendees alike an incredible venue to network, learn, and - for employers at least - recruit! A high percentage of the attendees at GHC are students looking for internships or first-time jobs. GHC provides employers with a unique environment where they can meet young women technologists who are just entering the workforce, and provides students and other attendees with the opportunity to network with a huge number of companies who are there to see them, wherever they may be in their tech journey. Oracle at GHC18 The calm before the storm.
A sneak peek at the Oracle recruiting booth at GHC18 before the expo opened. As a platinum sponsor, Oracle had its passion for diversity and inclusion on display with two flashy booths and several cool tech demos. That's right, Oracle had not one - but TWO booths at GHC18! The booth pictured above was geared toward attendees who wanted to talk about career opportunities at Oracle. Another booth was located in the Technology Showcase portion of the floor and featured a rotation of three Oracle teams demoing their products: Oracle Cloud Infrastructure, Oracle Autonomous Database, and the Virtual Reality research team. Each team had its own demo, so attendees could learn about all three by stopping by the booth at different times. Pictured above: The Virtual Reality team demonstrating data visualization using VR! The demo booth was set up well for presenting demos to an audience of both passersby and those who wished to sit and listen to the whole demo. Attendees really seemed to appreciate having a place to sit down and watch the demos after spending hours on their feet making their way through the vast showcase of employers! Conclusion At the Grace Hopper Celebration, the value that diversity brings to companies was on display wherever you looked, with highlights of women founders, a section with displays describing the historical impacts of women in computing, and talks covering everything from technical concepts like machine learning to career and leadership advice. It's hard to imagine what you're missing on teams with few or no women until you experience something like Grace Hopper, where you are surrounded by women technologists. Going to an event like Grace Hopper can be an eye-opening experience for anyone in technology, not just women.


Tomorrow is All Day DevOps

I am super stoked to be a part of the All Day DevOps conference tomorrow, October 17! It's the third installment of the conference, and it is one of my top three DevOps conferences of the year along with Devopsdays and the DevOps Enterprise Summit. I track chaired the Cloud Native sessions the first year along with fellow Agile Admins Ernest Mueller and James Wickett, and I'm excited to do it again! It's much, much, much bigger this year with over 125 speakers, 24 hours of content, and five session tracks including CI/CD, Cloud Native Infrastructure, Cultural Transformation, DevSecOps, and SRE. I'm looking forward to listening to some great speakers on each track. If you need some convincing to register, here are three reasons: It's a FREE conference. Register here: https://www.alldaydevops.com/devops/register. There are over 25,000 folks registered. You can watch it from your bed. All of the talks are broadcast online via Google Hangouts and YouTube. Of course, you don't have to view it alone if you don't want to - there are plenty of watch parties as well. Your Cloud Native Labs evangelists, Jesse Butler and I, will be speaking on the Cloud Native Infrastructure track! Jesse's session will cover serverless computing and how the Fn project fits into the picture. He'll be speaking online from 12:00pm-12:30pm CST. I will be speaking on Kubernetes and cloud native security, with some practical tips on how to keep your K8s environments more secure, complete with platform extensions and examples. You can catch my talk from 8:00am-8:30am CST. I will also be moderating the Cloud Native Infrastructure track from 11am-5pm CST, so there's plenty of time to catch me on the ADDO Slack group. I hope you can make it. Catch you then, and, uh, time to finish up my slides!!!


Take the Helm: Kubernetes Package Management

Overview As with any new technology, there is a learning curve when it comes to implementing Kubernetes. Companies adopting the container orchestration tool can be hampered by the increased complexity of deploying microservice environments. Companies often start by manually deploying their applications with kubectl. Deployed this way, your application is treated by Kubernetes as a number of decoupled resources, and you must deploy each component individually. To repeat deployments you will have to keep track of the step-by-step process used for the application and manage releases for each component. Each additional resource will likely require another command to run. As the complexity of the environment increases, this approach becomes not only exhausting, but also prone to human error. Imagine having to share this process with another group or with an end user and expecting them to follow through without making a typo or forgetting a single command along the way. What if you prefer to treat your application as a whole: a collection of parts neatly wrapped up and managed as a package? You could decrease the time spent deploying development and testing environments and instead spend that time on application development. Simplifying the deployment process might also shorten the learning curve and lead to faster adoption of Kubernetes within your organization. The process becomes easier to share and to follow. Removing human error and creating a templated deployment process might help you overcome the final jitters preventing you from moving your deployment to production. Helm offers a solution. Solution In a nautical setting, a helm is a wheel used to steer a ship. In the world of containers, the open source tool Helm has a similar purpose: it is used to steer or manage Kubernetes applications. Helm is a package management tool for Kubernetes and, just like Kubernetes, Helm is a CNCF project.
The simplest way to think of Helm is as a package manager similar to yum, brew, apt-get, choco, etc., but specifically for Kubernetes. Helm addresses the challenges above by providing a method for repeatable application installation, configuration, and versioning through the use of charts, repositories, and releases. Chart: a package of pre-configured Kubernetes resources, defined by YAML files that describe the components of an application designed to run in a Kubernetes cluster, along with searchable metadata. Repository: a searchable collection of Helm Charts. Release: an instance of a Helm Chart deployed to a Kubernetes cluster. A new release is created each time a chart is installed. Thanks to repositories and releases, Charts are easy to update, version, and share. By packaging up the various resources involved in our configuration, Helm enables us to treat our application as a whole rather than a series of resources individually deployed on a cluster. With Helm, user configurations become reusable. There are two key parts of the Helm architecture: Helm: a client running on your local system. Tiller: a server in your Kubernetes cluster that interacts with the Kubernetes API server to manage your Helm deployments. It is worth noting that Tiller is expected to be deprecated with the release of Helm 3, as the client/server separation is removed. Architecture Installation To run Helm you need access to a Kubernetes cluster. For help creating a Kubernetes cluster with Oracle Container Engine for Kubernetes (OKE), follow this guide. If you choose to use OKE for your Kubernetes environment, you can simplify the Helm setup process by checking the box that says "Tiller (Helm) Enabled" when provisioning an OKE cluster. One advantage of using Helm is being able to find and access an existing repository of stable Charts containing popular software packages. I recommend starting out by running a helm search command to see the list of Charts.
Run helm inspect followed by the Chart name to find more information about a particular Chart. When you have chosen a Chart to deploy, run helm install followed by the Chart name. The Helm client will send a request to Tiller to install the Chart in your Kubernetes cluster. This process will create a release name for your application. One of my favorite parts of Helm is the simplified deletion or cleanup process. Once again, thanks to Helm's ability to treat our group of resources as a single application, we can run a single command to remove everything associated with our application rather than having to run individual kubectl commands to clean up each resource. After verifying the application was successfully deployed, you can clean up the environment with helm delete followed by the release name. As you get more comfortable with Helm, you can begin to use it to define custom Charts and create your own Chart repository. Helm can also be tied to a CI/CD system and used to configure releases. Helm can also be useful for productionizing your application by reducing complexity and increasing the repeatability of deployments. Conclusion If you are a developer just starting to use Kubernetes or an advanced user wanting to streamline your application deployment process, Helm is a good fit for you. With Helm installed, you have the ability to efficiently manage packages of Kubernetes resources, access and share a repository of Charts, and control the version history and upgrades of your releases. Check out this guide for more information about how to configure and use Helm in your environment. For additional information about using Helm, check out the Official Helm Project.
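Putting those commands together, a first session with Helm might look like the following (Helm 2 syntax, to match the Tiller-based architecture described here; the wordpress chart is just an illustration):

```shell
# Browse the stable repository for available Charts
helm search wordpress

# Read a Chart's description and configurable values
helm inspect stable/wordpress

# Install the Chart; Tiller creates a release with a generated name
helm install stable/wordpress

# Remove every resource the release created
helm delete <release-name>
```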


Running .NET Core on the Oracle Container Engine for Kubernetes

Photo by Leone Venter on Unsplash Recently, I got a note from an attendee I met at the Getting Function'l with Kubernetes event about running .NET Core applications on Kubernetes. It got me thinking, "how far has the Windows and .NET world come regarding running cloud native apps in the last year?" I remembered when I filmed my LinkedIn Learning tutorial on Kubernetes; it wasn't that difficult to run Kubernetes with Minikube on Windows, but what about running .NET apps? I took some time to explore the ecosystem, and in this post, I'll show you how to run a sample .NET application on the Oracle Container Engine. If you're a developer and want to get straight to the code, you can check out my entire walkthrough on GitHub. I built all of these examples on my mac, but you should be able to do all of this on any platform you choose. If for some reason you get stuck, comment below, and I can try to replicate your issues! It's been a good five years since I wrote my last .NET application, so I had to install all the things on my mac. I started with the simple .NET tutorial published by Microsoft to create a .NET Core Web API application. I thought about adding more endpoints and code to it, but in the end, kept it the same as the generated code to keep it simple. I didn't want to confuse the reader with .NET REST API magic, but instead, get the application running on the Kubernetes platform. As a side note, it is SUPER cool to have the dotnet command now. It makes development, building, and testing all very seamless, and it feels innate to cloud native developers who run docker and kubectl commands all day long. After installing the needful, I created a simple ASP.NET Core Web API application with the command: ~$ dotnet new webapi --auth None --no-https For a full-fledged app, you will most likely want to set up authentication if you're working with sensitive data, and turn on https as well. Running the command creates a new web app that serves a simple REST API.
The output looks something like:    ~$ dotnet new webapi --auth None --no-https The template "ASP.NET Core Web API" was created successfully. Processing post-creation actions... Running 'dotnet restore' on /Users/karthik/dev/src/github.com/karthequian/dotnet-example/dotnet-example.csproj...   Restoring packages for /Users/karthik/dev/src/github.com/karthequian/dotnet-example/dotnet-example.csproj...   Generating MSBuild file /Users/karthik/dev/src/github.com/karthequian/dotnet-example/obj/dotnet-example.csproj.nuget.g.props.   Generating MSBuild file /Users/karthik/dev/src/github.com/karthequian/dotnet-example/obj/dotnet-example.csproj.nuget.g.targets.   Restore completed in 1.42 sec for /Users/karthik/dev/src/github.com/karthequian/dotnet-example/dotnet-example.csproj. Restore succeeded. Great! Now, to run this newly created application, I can type dotnet run, and it should launch my API as shown below: ~$ dotnet run Using launch settings from /Users/karthik/dev/src/github.com/karthequian/dotnet-example/Properties/launchSettings.json... : Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[0] User profile is available. Using '/Users/karthik/.aspnet/DataProtection-Keys' as key repository; keys will not be encrypted at rest. Hosting environment: Development Content root path: /Users/karthik/dev/src/github.com/karthequian/dotnet-example Now listening on: http://localhost:5000 Application started. Press Ctrl+C to shut down.   If I wanted to test, I could curl the /api/values/ endpoint as shown below. If I get a list of values back, I know that the application is working as expected.   ~$ curl localhost:5000/api/values/ ["value1","value2"]   Now that I have a .NET webapp running on my mac, I can do the fun stuff - dockerize and kubify! To dockerize, I followed the guidelines to build a Docker image for a .NET application from the Docker documentation. I changed the name of the entrypoint to dotnet-example.dll because that was the name of my project. 
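Based on the Docker documentation's multi-stage build example for .NET Core, the Dockerfile looks roughly like the sketch below (the actual file is linked in the repo; the image tags reflect the 2018-era microsoft/dotnet repository):

```dockerfile
# Build stage: restore packages and publish a release build
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
COPY . ./
RUN dotnet publish -c Release -o out

# Runtime stage: a smaller image containing only the published output
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "dotnet-example.dll"]
```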
You can find my final Dockerfile here. After running and testing again with curl, I pushed the application to the Docker store as karthequian/dotnetexample, so that anyone can just run the example as a test. Finally, to run this in Kubernetes, I created a deployment and service here: https://github.com/karthequian/dotnet-example/blob/master/Deployment.yaml. This yaml exposes the container as a deployment with a nodePort service running on port 32080 of the host. For ease of use, you can type kubectl apply -f https://raw.githubusercontent.com/karthequian/dotnet-example/master/Deployment.yaml to run the deployment and service on Kubernetes. I chose to run my application in the Oracle Container Engine as a proof of concept, but it should work in any Kubernetes distro out there. I've tested this on a Kubernetes 1.9.7 cluster, and it should work with later versions as well. Following is an example run:
~$ kubectl apply -f https://raw.githubusercontent.com/karthequian/dotnet-example/master/Deployment.yaml
deployment "dotnetworld" created
service "dotnetworld" created
~$ kubectl get deployments
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
dotnetworld   1         1         1            1           20s
~$ kubectl get services
NAME          CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
dotnetworld   10.96.79.93   <none>        80:32080/TCP   26s
kubernetes    10.96.0.1     <none>        443/TCP        30d
Finally, to test this out, we can once again run a curl against the node IP and port as shown below:
~$ kubectl get services
NAME          CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
dotnetworld   10.96.79.93   <none>        80:32080/TCP   26s
kubernetes    10.96.0.1     <none>        443/TCP        30d
~/dev/src/github.com/karthequian/dotnet-example$ kubectl get nodes
NAME              STATUS    AGE       VERSION
129.146.123.174   Ready     30d       v1.9.7
129.146.133.234   Ready     30d       v1.9.7
129.146.162.102   Ready     30d       v1.9.7
~$ curl 129.146.162.102:32080/api/values
["value1","value2"]
So there you have it - a .NET Core web application, built on a mac, containerized with Docker and running in Kubernetes 1.9.7, managed by the Oracle Container Engine for Kubernetes. It might sound time consuming, but I was able to get all of this running in about an hour, so it wasn't that much work at all. Let me know if you run into issues or have any questions by commenting below, finding me on Twitter @iteration1, or in the GitHub repo itself. Good luck!!


Your Cloud Estate in the Shell

Oracle Cloud has a fully-featured web console that makes managing cloud accounts straightforward and simple. But what if, like me, you are more at home in a shell? There's a CLI for that. The aptly-named oci utility is a CLI which offers much of the same functionality as the web console in a nice, compact command line package for Windows, MacOS and Linux. With it, users can create, destroy, modify and monitor aspects of their OCI estate. The CLI is organized the same way as the web console, logically following the API. This means that if you're familiar with the console, it'll be pretty easy to poke around and find what you're looking for. For example, images can be found in the console on the side bar under Compute->Images. To list images with the CLI, we invoke: # oci compute image list Getting Started To use the CLI, you'll need an OCI account, a user configured in that account along with a policy that allows that user to do useful administrative things, an API keypair, and a system with Python installed. Check out the Getting Started section of the OCI documentation, as well as the sections on Adding Users and Security Credentials. For configuration, the CLI uses a config file (e.g. ~/.oci/config). This file contains some required OCI-related data, such as user credentials and tenancy info. Some optional data can be laid in there too for ease of use. Configuration elements are referred to by their OCID, or Oracle Cloud Identifier. Every Oracle Cloud Infrastructure resource has a unique OCID attached to it. You can collect these from the web console as needed, and we can get things rolling. For super-quick startup, pop open a shell and say:
# bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
# oci setup config
This will install the CLI and then drive through creation of a basic configuration.
For more detailed installation and configuration instructions, check out the Installing the CLI and Configuring the CLI sections of the CLI documentation. Using the CLI We can create, destroy and otherwise observe most of the objects within our OCI estate with the CLI. This is really great if your typical workflows are CLI-based, and is also great for scripting various tasks. As an example to get started, let's create a new bucket in our object store and put a few files in it. Note we need to identify the compartment to create the bucket in. For simplicity's sake we can set an environment variable in our shell to make that easy. We'll discuss an even better way to configure that later on.
# cid=ocid1.compartment.oc1..aaaaaaaa...
# oci os bucket create --compartment-id $cid --name jb2
{
  "data": {
    "compartment-id": "ocid1.compartment.oc1..aaaaaaaau7ewrssyclcr4oeoce23dwkziadqtkvaka4s4rjd352367ntbkga",
    "created-by": "ocid1.user.oc1..aaaaaaaaoob2jbqhzmdnzegaej4hkgr22byvg5mnvfnwswlaeorzgwi56ewa",
    "defined-tags": {},
    "etag": "9b66f44e-0bef-42c9-b7e0-bead354bce47",
    "freeform-tags": {},
    "metadata": {},
    "name": "jb2",
    "namespace": "cloudnative-devrel",
    "public-access-type": "NoPublicAccess",
    "storage-tier": "Standard",
    "time-created": "2018-08-30T20:45:32.642000+00:00"
  },
  "etag": "9b66f44e-0bef-42c9-b7e0-bead354bce47"
}
Let's add a couple of files to our object store bucket and list them out.
# for i in 1 2 ; do
    mkfile 128k file${i}
    oci os object put --compartment-id $cid --bucket-name jb2 --name file${i} --file file${i}
  done
Uploading object  [####################################]  100%
{
  "etag": "74AD8A3C39D315A8E053424BC10AF4B4",
  "last-modified": "Thu, 30 Aug 2018 20:47:19 GMT",
  "opc-content-md5": "DfvoqkwgtS4bi/PLbL3xkw=="
}
Uploading object  [####################################]  100%
{
  "etag": "74ADE678D52C7707E053424BC10A8352",
  "last-modified": "Thu, 30 Aug 2018 20:47:50 GMT",
  "opc-content-md5": "DfvoqkwgtS4bi/PLbL3xkw=="
}
# oci os object list --compartment-id $cid --bucket-name jb2
{
  "data": [
    {
      "md5": "DfvoqkwgtS4bi/PLbL3xkw==",
      "name": "file1",
      "size": 131072,
      "time-created": "2018-08-30T20:47:17.286000+00:00"
    },
    {
      "md5": "DfvoqkwgtS4bi/PLbL3xkw==",
      "name": "file2",
      "size": 131072,
      "time-created": "2018-08-30T20:47:48.071000+00:00"
    }
  ],
  "prefixes": []
}
When listing things out, there are a few nice features to help organize and present data in a meaningful way. Queries Often the output of a given list command is very noisy when you may only be interested in a couple of key pieces of data. The --query option allows for filtering of the output presented per object. This option uses the JMESPath query language for JSON. Check out its documentation and dig into sorting out cool, complex queries. Share them with friends and neighbors! Table Output JSON is the default output format. For any output list, you can optionally provide the --output table option and get nice table-formatted output. As an example of putting these two together, we'll list out all of the OCIDs for the CentOS images available to our tenant.
# oci compute image list --compartment-id $cid --operating-system "CentOS" --query 'data[*].{"id":"id","display-name":"display-name"}' --output table
+--------------------------+----------------------------------------------------------------------------------+
| display-name             | id                                                                               |
+--------------------------+----------------------------------------------------------------------------------+
| CentOS-7-2018.08.15-0    | ocid1.image.oc1.iad.aaaaaaaah6ui3hcaq7d43esyrfmyqb3mwuzn4uoxjlbbdwoiicdmntlvwpda |
| CentOS-7-2018.06.22-0    | ocid1.image.oc1.iad.aaaaaaaa5o7kjzy7gqtmu5pxuhnh6yoi3kmzazlk65trhpjx5xg3hfbuqvgq |
| CentOS-7-2018.05.11-0    | ocid1.image.oc1.iad.aaaaaaaal46dansrxksqwbnyburbm6kvqp7zsptjvzxfys7vtydkuo6yezia |
| CentOS-6.9-2018.06.22-0  | ocid1.image.oc1.iad.aaaaaaaanmzl3piptv7um5ewh2ctxjlgtk6fszedwrq2t7f2bruc3swy65aa |
| CentOS-6.9-2018.05.11-0  | ocid1.image.oc1.iad.aaaaaaaa6e4br344oatmwuhhbiopsrk7gtcunksr3p34eokejkp5grbsjwdq |
| CentOS-6.10-2018.08.15-0 | ocid1.image.oc1.iad.aaaaaaaafjth3ljlnwsw2elccwu5igjer64bs3wkly2trqblbkv4ryen6qxq |
+--------------------------+----------------------------------------------------------------------------------+
Note we used the --operating-system option to the oci compute image list command. The list commands have options for filtering the output based upon useful criteria. Another useful example is the --lifecycle-state option on the oci compute instance list command. These are all documented, keep them in mind as you start to work more with the CLI. Customizing the CLI Working with the oci utility, you'll find yourself typing long-ish strings often. Also, you end up spelling out the same queries repeatedly. Happily both of these sorts of things can be configured. Like many CLI utilities, oci makes use of an optional run commands shell file (e.g. ~/.oci/oci_cli_rc) to allow for customization. With the rc file, users can define aliases for both commands and options, and also set up more advanced features like formats and canned queries.
To get a stock rc file, you can use
# oci setup oci-cli-rc --file ~/.oci/oci_cli_rc
That will create a basic rc file with some nice queries and aliases to get started with. A nice feature of this rc file is that you can configure default option arguments for the CLI. For example, if you typically work in a specific compartment, you can set up the compartment-id option as a default option for all commands so you don't need to continually specify it as we've done above. To set up default options, add a DEFAULT section to your rc file. As an example, I'll add in my compartment-id and bucket-name from the above object store example.
# head -5 .oci/oci_cli_rc
[DEFAULT]
compartment-id = ocid1.compartment.oc1..aaaaaaaa...
bucket-name = jb2
# oci os object list
{
  "data": [
    {
      "md5": "DfvoqkwgtS4bi/PLbL3xkw==",
      "name": "file1",
      "size": 131072,
      "time-created": "2018-08-30T20:47:17.286000+00:00"
    },
    {
      "md5": "DfvoqkwgtS4bi/PLbL3xkw==",
      "name": "file2",
      "size": 131072,
      "time-created": "2018-08-30T20:47:48.071000+00:00"
    }
  ],
  "prefixes": []
}
One more thing to point out is that you can set up pre-canned query formats in your rc file. This is super handy for ones that you use a lot. To get you started, the stock file has a handful of very useful ones.
For example, we can use the query string above like so:
# oci compute image list --operating-system "CentOS" --output table --query query://get_id_and_display_name_from_list
+--------------------------+----------------------------------------------------------------------------------+
| display-name             | id                                                                               |
+--------------------------+----------------------------------------------------------------------------------+
| CentOS-7-2018.08.15-0    | ocid1.image.oc1.iad.aaaaaaaah6ui3hcaq7d43esyrfmyqb3mwuzn4uoxjlbbdwoiicdmntlvwpda |
| CentOS-7-2018.06.22-0    | ocid1.image.oc1.iad.aaaaaaaa5o7kjzy7gqtmu5pxuhnh6yoi3kmzazlk65trhpjx5xg3hfbuqvgq |
| CentOS-7-2018.05.11-0    | ocid1.image.oc1.iad.aaaaaaaal46dansrxksqwbnyburbm6kvqp7zsptjvzxfys7vtydkuo6yezia |
| CentOS-6.9-2018.06.22-0  | ocid1.image.oc1.iad.aaaaaaaanmzl3piptv7um5ewh2ctxjlgtk6fszedwrq2t7f2bruc3swy65aa |
| CentOS-6.9-2018.05.11-0  | ocid1.image.oc1.iad.aaaaaaaa6e4br344oatmwuhhbiopsrk7gtcunksr3p34eokejkp5grbsjwdq |
| CentOS-6.10-2018.08.15-0 | ocid1.image.oc1.iad.aaaaaaaafjth3ljlnwsw2elccwu5igjer64bs3wkly2trqblbkv4ryen6qxq |
+--------------------------+----------------------------------------------------------------------------------+
Nice. Further Reading We've just gotten started working with oci and can already see how great it is for shell-based workflows. If you find it as handy as I do, I suggest taking a look at the documentation, which is well written and well organized. Feel free to share any tips and tricks you've picked up along the way here in the comments, and ping me anytime on Twitter @jlb13.


Captain's Log: Container Native Logging with the EFK Stack

Capturing logs in Kubernetes Do you ever wonder how to capture logs for a container native solution running on Kubernetes? Containers are frequently created and deleted, containers crash, pods fail, and nodes die, which makes it a challenge to preserve log data for future analysis. Application and system logs are critical to diagnosing and addressing problems impacting the health of your cluster, but there is a good chance you will run into hairy problems associated with the dynamic nature of containers and schedulers. The simplest option is to avoid logging altogether. However, this comes with an obvious cost: you will have little-to-no understanding of what is going on in your cluster. If this is a problem you need to solve, which is the case for just about everyone deploying cloud native solutions, then read on. Solution So how do we do this? When using Kubernetes as your container orchestration platform, there are a couple of built-in options available to grab logs: Docker logs and Kubernetes logging. These are good tools to use in small-scale environments, such as development; however, they are difficult to scale - users would need to log into each container to view the logs - and they do not address storing logs somewhere independent of ephemeral nodes, pods, or containers. Another option would be to purchase an enterprise tool ($$$). There are many paid services available, but most have more features than typical users might need, and the cost can outweigh the benefits offered. What about a choice that solves the problem without breaking the bank? Something that adds instrumentation on top of the built-in options? The option many people turn to is the EFK stack - a scalable open source solution used to capture and aggregate logs and then visualize them in order to provide actionable business insights. What is EFK? The EFK stack is composed of Elasticsearch, Fluentd, and Kibana. It is similar to the ELK stack, with Fluentd swapped in for Logstash as the log collector.
I have chosen EFK rather than ELK because FluentD is a Cloud Native Computing Foundation project that is simple to configure and has a smaller memory footprint than Logstash. The pieces: Elasticsearch: an open source search engine based on Apache Lucene that has become an industry standard for indexing and storing logs. FluentD: an open source CNCF project for log collection used to capture logs and forward them to Elasticsearch. Kibana: an open source web UI used to visualize and easily search logs stored in Elasticsearch. Together these tools form a centralized, scalable, flexible, and easy-to-use log collection system for IT environments. FluentD captures the logs from each microservice and forwards them to Elasticsearch, which addresses the issue of preserving logs after the end of a service lifecycle. FluentD is deployed as a DaemonSet, meaning a pod is provisioned on every node of the cluster; this addresses the need for easily deploying the solution at scale. Kibana is then used to visualize the aggregated logs. This is an example of a basic three-node Kubernetes cluster running EFK with a single microservice operating on each node: Installation For help creating a Kubernetes cluster with Oracle Kubernetes Engine, follow this guide. After your cluster is up and running, the process of spinning up the EFK stack on Kubernetes is simple thanks to Helm, a package manager for Kubernetes. Rather than having to install each component from scratch, you can use a Helm package, known as a chart, to quickly deploy the services on your cluster. Check out this article for more information about how to get Helm set up on your cluster. There are stable Helm charts available for each of the three components, but our quickstart documentation includes a link to our custom chart that further simplifies deployment by combining Elasticsearch, FluentD, and Kibana into one chart.
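As a rough sketch, the Helm-based deployment described above might look like the following. The chart and release names here are placeholders rather than the custom chart from our quickstart documentation, and the exact charts available may differ; substitute the chart referenced in the quickstart.

```shell
# Hypothetical sketch: deploying the EFK components with Helm.
# Chart and release names are placeholders - for the combined
# deployment, use the chart linked from the quickstart documentation.
helm repo add stable https://charts.helm.sh/stable
helm repo update

helm install my-elasticsearch stable/elasticsearch
helm install my-fluentd stable/fluentd-elasticsearch
helm install my-kibana stable/kibana

# Confirm everything is running, then open the Kibana UI locally
kubectl get pods
kubectl port-forward svc/my-kibana 5601:5601
```

With the port-forward in place, Kibana would be reachable at http://localhost:5601.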
A few minutes after the cluster launches everything, you will be able to navigate to the Kibana interface. You may want to take the additional step of deploying a sample application to test out Kibana. Here is a guide for deploying a sample application on OKE. Our guide also includes information about how to create a basic index pattern to gain insight into your application. Conclusion You should now have a running deployment of Elasticsearch to store logs, FluentD to format and forward them along, and Kibana to visualize and make sense of them. We hope this means you are one step closer to debugging all of those nasty production bugs! For more details about the deployment process, check out this solution.
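As a postscript, the built-in options mentioned at the top of this post remain handy for quick spot checks even once EFK is running. A minimal sketch, with hypothetical pod and container names:

```shell
# Fetch logs through the Kubernetes API (names are placeholders)
kubectl logs my-app-pod                    # current logs for a single-container pod
kubectl logs my-app-pod -c my-container    # pick a container in a multi-container pod
kubectl logs my-app-pod --previous         # logs from the last crashed instance
kubectl logs -f my-app-pod                 # stream logs live

# Or ask the Docker runtime directly while on the node
docker logs my-container-id
```

These are exactly the commands that stop scaling once you have dozens of pods, which is what motivates the EFK stack above.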


Industry Thought Leaders Look to the Future with Cloud Native Labs

Not everyone is aware that Oracle offers a robust set of cloud services on Oracle Cloud Infrastructure (OCI), and that some of our most forward-thinking developers are building out solutions on Oracle Container Engine, our managed Kubernetes service. The team at Cloud Native Labs explores some of the most interesting new technologies and helps users understand best practices when working with them, including how to get started efficiently. As mentioned in Karthik Gaekwad’s post, we hosted a recent developer event in Austin where we shone a spotlight on some of the cool new work we are doing. It included a panel discussion with five dynamic industry thought leaders, to get their take on some pressing questions with huge impact. What are the meatiest challenges facing your company and our industry today? What are the biggest untapped opportunities of the near future? Blaize Berry  of Walmart Tech, Andrew Busey of Conversable, Lee Calcote of Solarwinds, Rob Hirschfeld of RackN, and Dustin Kirkland of Google Cloud eagerly tackled these questions in this panel discussion: Bob Quillin, Oracle VP of Developer Relations, emceed the event and led off with an introduction on what Oracle and Cloud Native Labs are up to. Karthik’s session, described in his blog post, can be viewed with his demo in its entirety.  And TJ Fontaine’s FaaS case study session on building out cloud-scale Functions-as-a-Service using the open source Fn Project is here: We invite you to review what was presented if you missed it, and/or share it with colleagues. Here’s a quick look at the full agenda with direct links to the sessions. Get Function’l with Kubernetes – introduction (4:12) Karthik on Kube Apps in Action (33:42) TJ on Kubernetes Service Delivery at Scale without Magic: a FaaS Case Study (31:45) Thought Leaders’ Panel Discussion (42:02) If you are interested in learning more or hosting a similar event at your local meetup, please let us know!


Starting with Serverless

There are a lot of blog posts and projects related to Serverless concepts out there, and I am hesitant to add another introductory post to the mix, yet I find myself writing one. As much content as there is discussing Serverless, I think it could be useful to explain some of the overall concepts before we carry on to more specific details in the context of this blog. Also, we have to start somewhere, and here seems a logical place. There are enough definitions out there, some more opinionated than others. For an excellent overview of the concepts, I suggest a read of Martin Fowler's 'Serverless Architectures.' Rather than try to redefine things, here I will present my understanding of what Serverless is, how I see it fitting into today's and tomorrow's solutions landscape, and how users of Oracle Cloud Infrastructure can leverage it. Serverless is an architecture that allows developers to focus on developing code, abstracting away the system and platform concerns related to building, deploying, and scaling their solutions. Most of us have surely seen the memes declaring a simple truth: there are servers in Serverless. At the risk of wearing out this phrase, it's important to consider it for a few reasons. There are servers in Serverless, and of course, no one ever said there weren't. While we're at it, the cloud is just someone else's computer. There's no magic in the cloud, after all. The cloud is about powerful abstractions which simplify our use of complex resources. Serverless is the latest in a long line of abstractions in computing. No magic, and that allows us to bring things back down to earth a bit. One of the many valuable features of a serverless architecture is that you pay only for execution time. If your code is not executing, the meter is not running. Another is that scaling is the responsibility of the serverless solution you're leveraging.
These are abstractions that allow us as developers to forego in-depth knowledge of how the components work and enable the use of those resources far more effectively than ever before. So there are servers, but they become infinitely simpler to account for and should cost a whole lot less to use. I've used the term abstraction a few times; let's briefly dig into it. In computing, abstractions are used to hide the implementation details of a lower level. Typically, with each new abstraction, less domain-specific knowledge is required to make use of the underlying system. To unroll this a bit more, let's reach back to the early days of computing and bring that context forward through the age of the PC and into the cloud computing solutions of today. Back in the very early days of modern computing, it was primarily about designing logic circuitry with vacuum tubes, and later transistors, and implementing algorithms to make use of that circuitry. The advent of integrated circuits and later microprocessors changed things considerably, making it simpler to build systems with off-the-shelf components. The resulting systems, including personal computers, were more easily designed and built, and more readily adopted and effectively used by everyday consumers. Similarly, on the software side of things, early computers leveraged unique machine languages. These made way for assembler languages, which generate machine code for a given platform. These in turn gave rise to a third generation of languages, such as C, that abstract away the assembler for a given system, allowing for portable code across platforms. Today, there are a plethora of expressive and compact languages that we all know and love which are even more abstracted from the underlying system. The cloud represents another abstraction, making compute, networking, and storage resources indirectly available to consumers.
Cloud providers continue to abstract these components, offering more resources with less complexity. Where we once rented time on remote hosts and virtual machines, we now consume complex managed resources that are single-button deployable. Virtualization, containerization, and other computing paradigms make this possible. Serverless is a logical next step in abstraction. Deploying applications and services in VMs or containers in the cloud can be much more cost effective than buying and maintaining racks of hardware on premises. However, VMs need to be created and monitored, and their software patched and maintained. Containers help (as a packaging solution), but any meaningful application deployed in containers should live within the context of a more complex orchestration solution. As things become more streamlined, new complexities arise. When leveraging a serverless architecture, these complexities are handled by the underlying platform. The platform could be a FaaS that you deploy, or a managed Functions product offered by your cloud provider, or something else. Regardless, the platform handles the complexity. Developers implement functions which are triggered by events and coexist as applications. The functions are deployed and executed as needed, scale when needed, and otherwise run merrily along without the developer thinking too much past the end of their code. For some introductory material, check out the Fn Project. Fn is a leading open source, cloud-agnostic functions platform which provides an excellent environment both for exploring how to create applications and for deploying a platform that provides such a solution. While you're considering the overall space, it's worth taking a look at this project. Walking through some tutorials gives quick exposure to the concepts at play.
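To give a flavor of that developer experience, here is a hedged sketch of a first function with the Fn CLI. The app and function names are made up, and the exact commands may vary between Fn releases:

```shell
# Hypothetical first-function workflow with the Fn CLI
fn init --runtime go hello-fn      # scaffold a function with a Go runtime
cd hello-fn
fn create app demo-app             # register an application to group functions
fn deploy --app demo-app --local   # build the function image and deploy it locally
fn invoke demo-app hello-fn        # trigger the function with a synchronous call
```

Notice what is absent: no server provisioning, no orchestration manifests, no scaling configuration. That is the abstraction at work.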
We will be adding more content here related to various aspects of Serverless, from how to build serverless applications to deploying your own functions solution with open source software, and we'll often be digging into specific topics. Many have been practicing in this space for a while now, yet in some ways, we're still at the start of something big that will be with us for a long while to come.


iGeolise Case Study

iGeolise uses Oracle technology to make their service faster and more efficient, which has allowed them to scale their business to become a global back-end search provider. What does iGeolise do? iGeolise is a tech scale-up that allows users to search for locations by time instead of distance. They do this by providing companies with access to their API as part of their TravelTime Platform. The TravelTime Search API can be integrated into a company’s existing website. Users can then search for services by time via different modes of transport. The purple area on the map below shows which locations a house hunter should consider when renting a property within 45 minutes of an office location. Zoopla is an iGeolise client and one of the biggest property search sites in the UK. Before Oracle In order for the TravelTime Platform to function, iGeolise needs to assimilate data from thousands of transport agencies and road networks. This takes a lot of data processing power. To process the data, iGeolise originally used a solution hosted on their own premises. Their single server only allowed them to process their maps one by one. Each map could take several hours, which meant that parsing multiple maps could take weeks. iGeolise also had no flexibility in the use of their server. The problem SCALE The time it took to parse maps made it impossible for iGeolise to scale their business beyond 20 countries. Previous approaches took a whole day just to update maps for one country. With 26 countries to process, that was becoming a barrier, because it would take a week’s worth of processing time to update the system, and that was clearly preventing the business from scaling. COST The inflexibility of the on-premises solution meant that iGeolise was paying for the server even when it was not in use. It also meant that they could not complete all the elements of the search in-house.
For example, iGeolise had to rely on third-party services for geocoding, which added to their costs. The solution THE ORACLE CLOUD iGeolise switched to the Oracle Cloud, which provides servers, storage, networking, applications, and services through a global network of Oracle-managed data centers. Oracle provides an on-demand server to iGeolise. This means that iGeolise can parse maps on several servers, taking hours instead of weeks. It also means that iGeolise can use multiple servers during periods of high demand and reduce use during periods of low demand. To speed up processing, iGeolise also switched to Oracle Cloud Infrastructure Container Engine for Kubernetes, with an integrated Docker-compliant container registry, as a fully managed service. After Oracle SCALE Using the Oracle Cloud has allowed iGeolise to scale their business to become a global back-end search provider with limitless capacity. It cut parsing time from well over a week, every week, to well under a day, which meant that when there was an error in the data it could be corrected and reparsed almost immediately. Kubernetes and Docker on Oracle made a significant difference in enabling iGeolise to scale their processing, and in turn their business. FLEXIBILITY The flexibility of the server has allowed iGeolise to start expanding the services they offer. For example, they have just launched their own geocoder, which is hosted on the Oracle Cloud. EFFICIENCY iGeolise has increased efficiency by only using the processing power they need. TEST THE RESULTS iGeolise has a demo tool that allows users to explore which destinations are within ‘X minutes travel time’. The tool is live in 26 countries thanks to the fast processing power of the Oracle Cloud. Charlie Davies, co-founder and CEO of iGeolise, says: “Oracle Cloud is helping us process data quickly and efficiently, which is a very important part of mapping.
This allows us to expand by adding more territories and countries to our platform, meaning we grow as a company. If you’re looking to grow your business and take advantage of the latest cloud based hosting technologies, there is nowhere else”.


FAQ: An Introduction to Kubernetes

Virtualization transformed data centers by enabling workload portability and dynamic resource allocation. Similarly, containers are now poised to dramatically transform the cloud. And Kubernetes plays an important part in that.    Interested in learning about Kubernetes but don't know where to start? Check out the answers to these frequently asked questions and follow the links to more information from Patrick Galbraith, principal platform engineer at Oracle Dyn.    What's the difference between a container and a VM?    A virtual machine (VM) consists of a guest operating system (OS), an application, and its data. The VM runs on top of software called a hypervisor, which enables multiple VMs to run on one physical server and controls how each VM interacts with the underlying host OS and hardware.    Containers also abstract applications from hardware, but they do so through process, network, and file system isolation. Also, containers, unlike VMs, don't require their own operating systems. They have just the code an application needs to run, and they only run the processes required to complete a specific task, utilizing the host OS and its libraries. As a result, they use even fewer resources and are even more portable than VMs.    What does that have to do with Kubernetes?    Kubernetes is an open source management platform that enables containers to run across a cluster. It relies heavily on monitoring and automation, and efficiency is its priority as it makes decisions about which host a container should run on and which resources it should access. Further, Kubernetes automatically configures containers as they move about the environment, so users don't have to manually modify things like database connections.    Another benefit of Kubernetes is its flexible deployment model; it can run on bare metal, in VMs or containers, or in the cloud – or even a combination of those environments.     Why is Kubernetes important?    
Containers have been around for a long time, but they didn't start taking off until Docker emerged about five years ago and made them easier to run. The next logical question was, how could containers run at scale? That's the real value proposition of Kubernetes and why it's so important. Kubernetes also has a vibrant, active community that evolves the project rapidly and has helped it become a leading platform for running applications in containers.    What does the future hold for Kubernetes?    The largely untapped potential of Kubernetes lies in the concept of immutable infrastructure. Traditional IT infrastructure, even with virtualization and cloud computing, is fairly rigid. A workload (native application or VM) is built to run on a specific system, and changes to that underlying system often require changes to the workload (or vice versa) to maintain compatibility.    A container, however, runs the same on any system. This fact, combined with the lightweight and portable nature of containers, means users can deploy, scale, or even destroy a container environment without affecting the underlying infrastructure. That's what it means to be immutable.    For more information, check out Oracle Dyn's glossary of Kubernetes terms you should know.
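To make the cluster-management answers above concrete, here is a minimal hypothetical session; the deployment name and image are placeholders:

```shell
# Sketch: let Kubernetes decide where containers run and keep them running
kubectl create deployment hello-web --image=nginx
kubectl scale deployment hello-web --replicas=3
kubectl get pods -o wide                        # the scheduler spreads pods across nodes
kubectl expose deployment hello-web --port=80   # stable endpoint as pods come and go
```

The user never names a host; Kubernetes makes the placement decisions, which is the value proposition described above.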


It's a Wrap! Getting Function'l with Kubernetes

The Getting Function'l with Kubernetes event in Austin, TX was awesome! We had a great audience, and lots of great questions on Kubernetes, cloud native applications, how to build serverless applications (and why), and forward-looking ideas on how cloud native can help with AI and big data applications. The slides to my talk are here, and code here. In contrast to most of my other presentations, you'll only find a few slides because I went full-blown demo. I started with a Java application, containerized it, deployed it to the Oracle Kubernetes Engine service, and then added ingress with Contour, monitoring with Prometheus, and packaging with Helm. I was pretty nervous because... live demo, but it went awesome! Let me know if you have questions on any of the content. After the talk, a few folks came up to me and asked me a variety of questions, which had one common theme: "How do I start with...", where the "..." completes with Kubernetes, Microservices, and the Oracle Container Engine. Here's my best response: The most comprehensive free course I know of is the Introduction to Kubernetes course by Neependra Khare. It's a bit long but will give you the best overall hands-on coverage of everything you'd want to know about Kubernetes to get started. Check it out on edX! If you're busy, only have a couple of hours to spare, and have access to either lynda.com or LinkedIn Learning, check out the Learning Kubernetes course. It's a quick primer on all the Kubernetes basics you'll want to know as a developer, and as a bonus, you have a direct line to talk to the course author, me :) The course was released in January 2018, and has already hit over 100,000 course completions, with a lot of positive feedback from folks via Twitter, LinkedIn, and surveys.
In fact, it was from user feedback that we created a Kubernetes track on Lynda which includes a tips and tricks course on Kubernetes tooling, a course on the CNCF landscape, and a followup course on using Microservices with Kubernetes, where I build a microservice-based application with Kubernetes. Check out the entire lineup here. The other resource that I look at is the core Kubernetes docs. Before the two courses above, there were not a lot of resources one might look up on how to get started with Kubernetes, and the single source of everything was the Kubernetes documentation. I used to describe the Kubernetes documentation as an elaborate buffet: fantastic if you know what you're looking for, but confusing to a newbie. However, it has come a long way in 2018 and has become very digestible to a brand new Kubernetes user. You can find all that coolness here. If you're looking to get started with the Oracle Container Engine, which is the managed Kubernetes service on Oracle Cloud Infrastructure, look no further than here. Our excellent documentation team here on OCI came up with a set of 20-minute tutorials to get you going on our Kubernetes service and our container registry, show you how to link up the two services, and build a continuous integration pipeline with some Wercker added in as well! Well, you now have all the ways to start learning this Kubernetes thing that everyone is talking about! Hurry up and go, because a lot is going on in this space, and the quicker you understand the fundamentals, the faster you'll be able to actually do cool stuff! As always, I'm here for you! If you have questions, reach out to me via the comments below, or LinkedIn or Twitter.


3 Features I Love in Kubernetes 1.11

Kubernetes 1.11 was released last week, and I spent some time looking at the features and fixes it includes. It's the second Kubernetes release this year, and this one comes with a lot of cool features to try out. You can take a look at the release notes here, or if you want to get down in the weeds, check out the changelog. I'm most excited about the "Dynamic Kubelet Configuration" feature! This feature existed previously but has graduated to "beta", which means that it's more stable than before and the feature is well recognized. It essentially allows you to change the configuration of the Kubelet on a running cluster in a more accessible manner using ConfigMaps. The ConfigMap is referenced from the Node object, which is monitored by the Kubelet; on any change, the Kubelet will download the new configuration and stop. If you're using something like systemd to watch the Kubelet, it'll automagically restart the Kubelet, which will start with the new configuration. This feature is super exciting because it gives admins who manage all of the nodes a little break. In the past, any updates to the config had to be rolled out individually to each node, which could be a time-consuming process. I like that Custom Resource Definitions (CRDs) are a lot more usable now with versioning. In the past, you were limited to a single version of a CRD; for any change, you had to create a new one and manually convert everything that used the old CRD to the new one. All a bit painful! With versioning, the path to using updated custom resources is more straightforward than before. Finally, CoreDNS was promoted to General Availability! In the early Kubernetes years, there was some confusion about which DNS provider to use, and there were a few options. For someone looking at the ecosystem from the outside, it was hard to tell what DNS solution to pick. I touched on this in my Kubernetes: CNCF Ecosystem course, and on how the CNCF was able to steer the community to a better default!
It took some time, but in the end, having CoreDNS as the default DNS server will help Kubernetes be more reliable, and make DNS debugging simpler for those of us dealing with the inner workings of Kubernetes. There are a lot more things released, so check out the release announcement if you haven't already! There are also a few tiny things in this release that have me excited: First, this PR allows for Base64 decoding in a kubectl get command using go-templates. Super useful to have a one-liner for decoding what's stored in a secret. Second, from a monitoring perspective, the Kubelet will expose a new endpoint, /metrics/probes. This presents a new Prometheus metric that contains the liveness and readiness probe results for the Kubelet. It will allow you to build better health checks and get a better picture of the state of your cluster. Third, RBAC decisions are in the audit logs as audit events! Since I've worked on authn and authz systems in the past, I get irrationally excited about stuff like this. In the past, we'd have to go hunting through logs to find why an RBAC call passed or failed, whereas now we can quickly look at the audit event stream. That's my (biased) list! What about you? What feature or bugfix has you excited? Let me know in the comments below, or tweet at me @iteration1!
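P.S. A hedged sketch of a few of the items above. All resource and file names are placeholders, and this assumes a 1.11 cluster with the beta feature enabled:

```shell
# Dynamic Kubelet Configuration (beta): point a node at a ConfigMap
# holding its Kubelet config (names are hypothetical)
kubectl -n kube-system create configmap my-node-config \
  --from-file=kubelet=kubelet-config.json
kubectl patch node my-node -p \
  '{"spec":{"configSource":{"configMap":{"name":"my-node-config","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'

# Base64-decode a secret value in one line with a go-template
kubectl get secret my-secret \
  -o go-template='{{ .data.password | base64decode }}'

# Scrape the Kubelet's new probe metrics
# (assumes the Kubelet read-only port is enabled on the node)
curl http://localhost:10255/metrics/probes
```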


Cloud-native helloworld

Speaking and writing come pretty naturally to me, but settling on a title is always the hardest part. It's true while writing code as well- writing 1000 lines of code comes naturally, but when I have to create and name a new file, it's a different story... But, I digress- Hi! I'm Karthik Gaekwad, and I'm the newest member of the Developer Relations team here at Cloud-native Labs. If you live in Austin, we've probably already crossed paths at one of the many meetups I attend or run, including CloudAustin, Austin DevOps, Docker Austin, OWASP, etc.; or perhaps at Devopsdays Austin, for which I've been one of the core organizers since its inception in 2012. I'm also an author on Lynda.com, and have authored a few courses on Kubernetes and Agile DevOps methodologies. I'm joining the Cloud-native Labs team from the Oracle Container Engine team, which builds Oracle's managed Kubernetes service running on Oracle Cloud Infrastructure. Naturally, I'll be focusing my efforts on Kubernetes, microservices, and cloud native architectures and applications. There are many things I'm excited about with the new job, but I'm most excited to learn and teach! The one constant theme that I've noticed with Kubernetes over the last few years since it got hot is the word "How?". As a user of Kubernetes, I've frequently found myself in the Kubernetes docs searching for answers, and as a Lynda author, I've received many messages of thanks from viewers saying they now knew how to use Kubernetes. The cloud-native ecosystem is one of the fastest growing ecosystems I've seen, and it's hard to keep up with the changes, new releases, and new projects that support it. As a result, I'm excited to spend more time keeping pace with all the new happenings, spend time researching best practices for microservices and cloud-native apps, welcome new users to the world of K8s, and bridge the gap between new users and the cloud-native platforms we have on OCI today.
I'll be spending a lot of time researching, speaking, blogging and answering questions! Feel free to reach out to me on Twitter, Linkedin or comment on here as well- I'm here for you! -Karthik


Get Function'l with Kubernetes

Calling all cloud native developers in Austin, Texas! Join Oracle Cloud Native Labs for functions and Kubernetes talks, drinks, and demos on July 17 at Capital Factory. Save your seat now! Hear from technical instructors, contributors, and startup innovators as they share their real-world experience, from developing with CNCF tools to building a cloud scale FaaS on Kubernetes. This educational evening is fixin' to include: Kube Apps in Action There are many Kubernetes presentations, but seeing is believing! In this talk, we'll run through some demos of popular tooling provided by the Cloud Native Computing Foundation (CNCF) that helps you build resilient microservices-based applications. We'll start by deploying a sample Java web application on top of Oracle Container Engine for Kubernetes and build an ingress controller for the application. Next, we'll verify that our application logs flow into Kibana using Fluentd, followed by looking at application metrics, monitoring using Prometheus, and tracing application issues using Jaeger. Gain a better understanding of all of the tooling that cloud native developers use to build and monitor their microservices. Presented by Karthik Gaekwad, Oracle Software Developer, Container Development. Kubernetes Service Delivery at Scale Without Magic: a FaaS Case Study Sift through the hype of Kubernetes to learn how to build cloud scale Functions-as-a-Service using the open source Fn Project. In this talk we'll cover identifying the key requirements for your service, how to translate those requirements into useful Kubernetes abstractions, understanding the availability and blast radius of your service, when and when not to lean into Kubernetes, and how to leverage Helm for deployments and rollbacks. This session is presented by TJ Fontaine, Consulting Member of Technical Staff, Container Development, Oracle.
Influencers Panel Hear from the most progressive thinkers in the industry today as they debate the merits, challenges and rewards of innovating and navigating in the cloud native ecosystem. Demos and Hands on Labs: Try your hand at functions and Kubernetes labs, while mixing it up with our lead developers! If this sounds like a good way to spend a warm Tuesday Texas summer evening, please join us at Austin’s Capital Factory on July 17 at 5:00PM. Registration is required! Register or get more information here. See y'all there.


DockerCon 2018 Wrap Up and Looking Forward

DockerCon 2018 is wrapped up and I'm back on the east coast at my desk, semi-rested and fairly caffeinated. Having recently made the move from platform development to cloud native developer advocacy, this is a great opportunity for me to share my thoughts. At least that's what my second coffee is telling me. This year, DockerCon continued its journey from scrappy-yet-polished tech upstart showcase toward a large-scale enterprise conference. While not fully arrived at the latter, the main focus was definitely on enterprise readiness. Significant bandwidth was also spent showing off improvements in Docker's desktop development tooling as well as integration with Kubernetes and various cloud platforms. Compared to my three previous DockerCons' worth of experience, there was less new-and-edgy on the main stage, replaced with more showcase-and-pitch. Docker is "all growed up", but there's still a lot of new, edgy things happening in the greater ecosystem. Thankfully, the company behind the conference is still happy to share the stage and continues to promote new ideas and projects. Of particular interest to me was a hosted panel on the top Serverless projects, with Oracle's Chad Arimura representing the Fn Project alongside representation from OpenFaaS, Galactic Fog, Nuclio, and OpenWhisk. Most of these were also present for a Serverless SIG and BoF the previous day. Present for both Serverless sessions as well as a closing day "Cool Hacks" keynote, Idit Levine of Solo.io presented the Gloo family of projects, which seek to glue the varied Serverless and Microservices components together. Both Gloo and Qloo are compelling projects that tell a migration and maintenance story which helps round out the promise of Serverless. For the big ticket messaging, Docker threw a big heart in Kubernetes' general direction during day one's general session.
A lot of work has gone into fully integrating Docker's Desktop and EE products with Kubernetes, and promotion of their own Swarm orchestration solution is trending down. This is a fairly simplified view of the overall play, and it's more of a high-five than a white flag on Docker's part. But it appears that Docker has come to agree with the top managed services providers, Oracle Cloud included, that Kubernetes is the platform of choice. With Docker, Inc.'s cluster hosting business winding down, they've also done some work to integrate their Desktop and EE products with various cloud providers. This considerably lowers the barrier to entry while attempting to avoid the pretense of preferred vendors. I don't personally use the desktop product, but it looks pretty slick. DockerCon 2018 spent a lot of its core messaging time speaking directly to the ecosystem's arrival in the enterprise. Solomon Hykes's "tools of mass innovation" have grown into a broad and varied landscape capable of deploying and managing enterprise-class production workloads. Generally speaking, this is more about Kubernetes and Microservices at large, but it's all a win when the word "Docker" is synonymous with "containers". Large-scale deployment use cases were highlighted in keynote content, community and partner theater presentations, and track talks. Kicking this off in the first general session was the story of McKesson completely restructuring its 180-year-old enterprise around containers and cloud native development. During day two's general session, Liberty Mutual told a similar story of container and cloud native stack adoption to gain massive shifts in agility and efficiency. Even talks on relatively new paradigms and young projects made reference to real-world use cases in the enterprise. For example, a state-of-the-union type session with Istio project team members made reference to a couple of enterprise use cases, including American Airlines. 
I'll link some of the talks I found most interesting once they are posted. The conference was a good opportunity for those new to DockerCon (nearly half of the 5,000 attendees) to see what it's all about, and for those of us who have been working with containers since Docker first spun up, a chance to welcome those new to the technology space overall. With more and more enterprise shops coming to realize that containers are the way forward, we can all expect more new faces and a consistent stream of new ideas as we continue to expand this ecosystem. This is good news for those of us building out cloud native, enterprise-ready platforms, and even better news for those looking to migrate existing workloads or build new solutions in the cloud. It's early days yet for managed mesh services, the internet of things, and Serverless architectures, but things move fast in this space. These early days provide an opportunity to leapfrog into leading-edge cloud native solutions like never before.


Oracle Cloud Containers, Serverless, and Kubernetes at Dockercon 2018

Make sure to check out the latest containerization, serverless, and Kubernetes updates from the experts at the Oracle Booth S3 this week at Dockercon 2018. We’ll have a crew of our cloud native evangelists and engineers there to answer questions and demo – so drop on by, say hello, and grab some swag. Later today – Thursday, June 14 – at 3:50 pm, plan to attend the Serverless Panel at Innovation Room 2020. Along with Chad Arimura from the Fn Project, there will be a veritable who’s who of the serverless world, including Anthony Skipper (Galactic Fog), Yaron Haviv (Nuclio), Michael Behrendt (OpenWhisk), Alex Ellis (OpenFaaS), Idit Levine (Solo.io), and Patrick Chanezon (Docker). This panel will include leaders from the top five container-based serverless frameworks – Fn Project, Galactic Fog, Nuclio, OpenWhisk, and OpenFaaS – and from the Gloo project to discuss the state of portable serverless frameworks on container platforms. If you’re developing container-native applications on Kubernetes, you’ll want to learn more about the Oracle Container Engine for Kubernetes and the Oracle Cloud Infrastructure Registry. Container Engine for Kubernetes is a fully managed and enterprise-ready cloud service. It combines the production-grade container orchestration of standard upstream Kubernetes with the security, control, and highly predictable performance of Oracle’s next generation cloud infrastructure. Registry is a private container registry service providing highly available storage and sharing of container images within the same regions as your deployments. We’ll see you at Dockercon 2018 in San Francisco!
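As a sketch of how the Registry fits into a typical workflow, pushing a local image looks roughly like the following. The region key, tenancy namespace, and image name are placeholders to substitute with your own values; consult the OCI Registry documentation for the exact login format:

```shell
# Log in to the Registry endpoint for your region (placeholders throughout)
docker login <region-key>.ocir.io -u '<tenancy-namespace>/<username>'

# Tag a local image with the Registry path, then push it
docker tag myapp:latest <region-key>.ocir.io/<tenancy-namespace>/myapp:latest
docker push <region-key>.ocir.io/<tenancy-namespace>/myapp:latest
```

Once pushed, the image can be referenced from Kubernetes deployments running in the same region.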


Serverless, Kubernetes, Containers - Get Ready for Oracle Code London This Week

The Oracle Code event lands in London this week on Wednesday, May 30, bringing with it a jam-packed agenda of developer-focused labs, sessions, and demos with a heavy focus on containers, serverless, and Kubernetes. There’s a wide range of content, from advanced deep dives to getting-started courses and labs. Here are a few of the highlights to check out:

Honey I Shrunk the Container: 11:15am - 12:00pm, Room A, Ewan Slater. Containers have become the standard mechanism for packaging, delivering, and deploying microservice architectures, new apps, and legacy projects. They provide a simpler, more lightweight form of virtualization than VMs and are a natural candidate for running microservices. But can we do better? If we can simplify our containers, we can shrink them further, improving performance, security, and manageability. This talk discusses the advantages of microcontainers, how to build and use them, and a set of open source tools (donated by Oracle) that can help. Read up on the Microcontainer Manifesto here!

Hands-on Lab: Getting Started with Functions and the Open Source Fn Project: 11:15am - 1:15pm, Room D/E. In this hands-on lab you’ll be introduced to the open source Fn functions platform and learn how easy it is to write and deploy functions using Java and other popular programming languages. We’ll walk through installing Fn on your laptop and how to define, build, and deploy functions. Once we’re done with the basics, we’ll dive into using the Java FDK (Function Development Kit) to accelerate function development and testing. We’ll also see how Fn’s packaging of functions as Docker containers enables advanced use cases not commonly possible with function platforms. When it's all over, you’ll be ready to start using Fn to build highly responsive and scalable applications and services. Familiarity with Java and/or Go is recommended but not absolutely necessary for this lab. (PLEASE NOTE: PRE-REGISTRATION IS REQUIRED FOR HANDS-ON LABS. YOU MUST BRING YOUR OWN LAPTOP TO PARTICIPATE IN THE HANDS-ON LABS.)

Will Serverless and Kubernetes Kill DevOps?: 12:10pm - 12:55pm, Room A. Thom Leggett from Oracle will take you on a guided climb up the modern application development stack. Start at base camp: the operating system, then proceed via containers and container orchestration to finally summit at serverless computing. With a panoramic view from the top of the stack, you will learn how these emerging technologies can streamline your operations and engineering, saving you money and letting you focus on meeting your business needs instead of managing infrastructure. Along the way you will encounter such specimens as Oracle Linux, Docker, Kubernetes, and serverless Functions-as-a-Service platforms such as the Fn Project. Finally, before descending to base camp for a nice cup of tea, you will discover how native species such as the DevOps engineer will have to adapt or die in this new habitat.

Implementing Functions as a Service on a Container Native Serverless Platform: 1:55pm - 2:40pm, Room A, Andrea Morena. Serverless computing is gaining popularity because it allows developers to focus on the functionality of their code without worrying about the target deployment environment. Code can scale automatically as needed, and serverless is elastic in its compute utilization, meaning you use compute resources only as they are needed, in an on-demand fashion.

Democratizing Serverless: The Fn Project: 4:05pm - 4:50pm, Auditorium, Thom Leggett & Matthew Gilliard. Developers just want to write and deploy their code. This sounds easy enough, but typically before you can deploy you need to allocate the necessary infrastructure, including provisioning machines, configuring storage, configuring networking, and so on - all just to run even one line of code! Fortunately, serverless function platforms that eliminate infrastructure concerns entirely are appearing, although the majority of those platforms are proprietary.

Don’t miss Martin Thompson’s keynote to lead things off in the morning on “High Performance Managed Languages,” and make sure to check out the Developer Lounge and a wide range of other hot sessions, from the future of Java to blockchain to the super cool Bloodhound project. See you all there! To register or for more info, make sure to check out https://developer.oracle.com/code/london-may-2018.
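For a taste of what the Fn hands-on lab covers, the basic define-build-deploy loop with the Fn CLI looks roughly like this. This is a sketch based on the Fn Project documentation; the app and function names are placeholders, and exact subcommands may vary between CLI versions:

```shell
# Start a local Fn server (requires Docker)
fn start &

# Scaffold a new Java function named "hello"
fn init --runtime java hello
cd hello

# Build the function image and deploy it to the local server under app "demo"
fn deploy --app demo --local

# Invoke it (older CLI releases used `fn call demo /hello`)
fn invoke demo hello
```

Because each function is packaged as a plain Docker container, the same artifact deploys unchanged to any Fn cluster.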


Top Kubecon Takeaways and What's Next

Kubecon + CloudNativeCon Europe 2018 is now in the record books, and after a solid Day 1 and Day 2, the show closed off with a great set of last day sessions. With a little bit of retrospect now, here's a mixtape of my top takeaways from the conference.  Serverless Strategy: Better Get a Map "Crossing the River by Feeling the Stones" from #KubeCon. I really enjoyed giving this talk on #Maps, #OODA, #Strategy and #Serverless. Huge thank you to @kelseyhightower for inviting me - https://t.co/QIU3uDl9C6 — Simon Wardley (@swardley) May 8, 2018 Leading off with a bit of The Art of War by Sun Tzu, Simon Wardley's keynote on Friday called out five factors in competition that matter - purpose, landscape, climate, doctrine, and leadership - and then jumped into OODA loops (observe, orient, decide, and act), developed by United States Air Force Colonel John Boyd, laying the foundation for his theories on Strategy, Situational Awareness, and Mapping. Maps must be "visual, have context, an anchor, position, and movement." The video above walks through the rest of the story, which ends on applying strategic mapping to the serverless space - where too much focus on containers may miss the massive shift ahead in serverless. Source: @swardley Twitter feed Standards, Serverless, FaaS, and CNCF Friday was a heavy serverless day, which honestly put serverless in the unenviable "last day" position. Nonetheless, some of the best sessions of the conference (and keynotes, too - see above) came through. Chad Arimura and Matt Stephenson from the Fn Project wowed the crowd with their presentation and demo on "Operating a Global-Scale FaaS on Top of Kubernetes." The CloudEvents announcement - supported by Oracle Cloud Infrastructure, the Fn Project, and all the major providers - was the high point of the conference in terms of community standards and industry interoperability. 
#ICYMI ➡️ check out this post from @austencollins introducing #CloudEvents 0.1, a new specification that aims to help people & applications handle event data https://t.co/LQMltmmgPd — CNCF (@CloudNativeFdn) May 8, 2018 And Austen Collins pulled one massive serverless interoperability demo in his late morning presentation.  Here's a massively multi-cloud #serverless architecture, featuring CloudEvents. This scenario is over-the-top, but it was fun and proves there's no need to fear the AI takeover. Shout-out to all of the vendors who collaborated on this - https://t.co/CQOqgR3IKN — Austen Collins (@austencollins) May 8, 2018 In Denmark, Danish are called Viennese  Pausing on the technology updates, I have to admit I'm a huge pastry and breakfast breads fan.  And Denmark did not let me down with some of the best-made Danish I've ever had - awesome!  But imagine my shock to learn that in Denmark, Danish are called Viennese - huh! What I learned in Copenhagen - Danish are called Viennese in Denmark - who knew? https://t.co/tWtQicW6y8 pic.twitter.com/EWIM1f3HpW — Bob Quillin (@bobquillin) May 8, 2018  Git on the GitOps Train GitOps - introduced (to me) in Alexis Richardson's keynote and reinforced in a variety of other sessions - furthers the culture and logic of DevOps using Git as the "source of truth for the desired state of the whole system." Check out the whole session here for all the details. Time to Get Kozy with Kubernetes I'll close with the other major theme from Kubecon Copenhagen, and that was Kubernetes and its post-graduate status - having crossed the chasm and moved into its next phase: Do you think #Kubernetes has crossed the chasm and reaching early majority? 
#KubeCon pic.twitter.com/ku37uzJNKO — Arun Gupta in Bay Area (@arungupta) May 2, 2018  To that end, the community including Oracle has rallied around taking Kubernetes to that next level of maturity, focusing on key real-world Kubernetes issues including governance, security, networking, storage, scale, and manageability. Enterprises can now get "kozy" with Kubernetes leveraging all the aspects of Hygge we learned in Copenhagen. That's a wrap on Kubecon Europe 2018 - see you all in Kubecon Seattle or sooner down the road!


Highlights from KubeCon Europe Day 2

Photos courtesy of Voltaire Yap Day 2 of KubeCon + CloudNativeCon Europe 2018 settled into a full day of content, sessions, demos, and conversations, and ended with a nighttime event at Tivoli Gardens, all after a jam-packed Day 1 highlighted by themes of cloud platforms, GitOps, serverless standards, and Kubernetes best practices and lessons learned. At the Oracle Cloud Native booth, our Day 2 conversations with developers focused on similar themes and challenges with serverless platforms, Kubernetes operators, and architectural decisions (lift & shift or just shift?). Here are a few of the Day 2 highlights: MySQL Operator for Kubernetes Not surprisingly, at KubeCon this week there’s a lot of interest around Oracle’s recently open sourced Kubernetes operator for MySQL that simplifies running, managing, and controlling MySQL on Kubernetes. You can install the MySQL Operator on any existing Kubernetes cluster and use it to create and manage production-ready MySQL clusters with a simple declarative configuration. Why the interest around MySQL operators at a Cloud Native conference? MySQL is the world’s most popular open source database, and the Oracle MySQL Operator seamlessly integrates MySQL management and operations into Kubernetes environments, providing an “open with open” alternative for AWS RDS users and the general MySQL community. Make sure to try it out! Serverless: Something New or Kubernetes++ ? At KubeCon you’ll hear a variety of perspectives on where we as an industry are going with regard to serverless platforms. There’s the view that serverless is merely an extension of Kubernetes as we work up the stack, while others view it as a brand-new software paradigm that is in itself the future. CNCF standardization efforts will surely provide much needed guidance and help set a path for the industry that bridges these two potentially divergent but fundamentally interdependent technologies. 
Day 3 serverless track sessions will also undoubtedly shed more light on this discussion. Make sure to check out Chad Arimura and Matt Stephenson and their talk today on “Operating a Global-Scale FaaS on Top of Kubernetes.” They are going to cover everything you ever wanted to know about building a planet-scale FaaS using the serverless Fn Project on top of Kubernetes – designing and building a service to operate under the most demanding production workloads. Tivoli Gardens: So Many Metaphors, So Little Time Day 2 wrapped up at Tivoli Gardens – the second oldest amusement park in the world, which opened in 1843 and is celebrating its 175th anniversary. It was a beautiful night and a chance to connect with new and old friends and share a few thrills at the same time. Stay tuned here on the Oracle Cloud Native channel for a KubeCon wrap up and for more container, serverless, edge, ML, and Kubernetes content in the future!
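As a sketch of the declarative configuration the MySQL Operator described earlier in this post works from, a small self-healing cluster is declared as a single custom resource along these lines. The field names are illustrative; check the oracle/mysql-operator repository for the current schema:

```
apiVersion: mysql.oracle.com/v1
kind: Cluster
metadata:
  name: my-app-db
spec:
  # Desired number of MySQL instances; the operator handles
  # provisioning, clustering, and replacement of failed members
  members: 3
```

Applying this with kubectl is all it takes; the operator reconciles the running state against the declared one, which is exactly the Kubernetes-native management model the post describes.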


Highlights from KubeCon Europe 2018 – Day One and More

KubeCon + CloudNativeCon Europe 2018 has kicked off in Copenhagen. The show is full of positive energy, buzzy, sold out, and packed full of 4300 cloud native developer attendees, very much on par with Kubecon Austin in terms of quality, quantity, and momentum.  With Kubernetes in graduate status there’s been lots of focus on the next wave of production-level, real-world challenges ahead and a much-needed focus on efforts to standardize serverless computing, including the CloudEvents project. Yesterday, Oracle announced new support for several open serverless standards on the open Fn Project and a set of critical new Kubernetes features for Oracle Container Engine targeting many of the top governance, security, networking, storage, scale, and manageability issues facing Kubernetes developers today.  Make sure to dig into the work the Fn Project is doing with the Serverless framework team to ensure “freedom for FaaS users and less vendor lock-in.” Check out these Day One Twitter moments from the CNCF team: https://twitter.com/i/moments/991747409796616195 Some of my personal highlights: Morning Keynotes: My favorite insights came from Alexis Richardson, co-founder and CEO of Weaveworks, who waxed poetic on what’s next for cloud native and Kubernetes along with many solid Lego references https://twitter.com/CloudNativeFdn/status/991638445830410240 .  What’s needed?  “Developers to write code that powers Applications integrating pre-built Marketplace services deployed to a Cloud Platform that is easy, stable, and operable using best practices for Continuous Delivery at high velocity.”   His view on the Cloud Platform included a top layer called “Just run my code”, encapsulating much of the focus on top of Kubernetes now and what’s coming next, including serverless, security (e.g., Spiffe), service mesh (e.g., Istio), and testing efforts here at Kubecon. 
The perspective that in the cloud platform “serverless and K8s will essentially converge…into Kubernetes, just run my code” – this is the future we all hope for – with many steps along the way required to get there. Finally, an impassioned plea for ethics, diversity, and morality closed the presentation and framed a common theme we all share, “these are now table stakes.” GitOps: A theme emerged around many Day One sessions on GitOps starting with Alexis Richardson’s keynote: “Using ‘git push’ as the fundamental unit of cloud-native computing, with no worries about the underlying infrastructure, and Kubernetes serving as the gateway to Serverless services.”   Deep Kapadia and Tony Li from the New York Times Delivery and Site Reliability Engineering team presented a great session on “Building a Cloud Native Culture in an Enterprise.”   With a focus on using more modern tools including GitOps (Github driven workflow), Drone (CI/CD), Terraform, and Vault (to take secrets seriously), the NYT “shifted instead of lifted” their processes, development, communications, monitoring, and more. Check them out at https://open.nytimes.com/ Kelsey Hightower led us through the keynotes with a sense of humor (NoCode – “the best way to write secure and reliable applications. Write nothing; deploy nowhere”) and progress on serverless standards in “Serverless. Not so FaaS.”  Kelsey re-focused on the end game, knowledge: “Data is what we collect and store. It becomes information once we analyze it. It becomes knowledge once we understand it."  CloudEvents will be a key building block creating more interoperability and standardization for the serverless community! The energy, buzz and enthusiasm will no doubt produce further exciting highlights – stay tuned for more KubeCon commentary!


Oracle Adds New Support for Open Serverless Standards to Fn Project and Key Kubernetes Features to Oracle Container Engine

Open serverless project Fn adds support for broader serverless standardization with CNCF CloudEvents, Serverless Framework support, and OpenCensus for tracing and metrics. Oracle Container Engine for Kubernetes tackles the toughest real-world governance, scale, and management challenges facing K8s users today. Today at Kubecon + CloudNativeCon Europe 2018, Oracle announced new support for several open serverless standards on its open Fn Project and a set of critical new Oracle Container Engine for Kubernetes features addressing key real-world Kubernetes issues including governance, security, networking, storage, scale, and manageability. Both the serverless and Kubernetes communities are at an important crossroads in their evolution, and to further its commitment to open serverless standards, Oracle announced that the Fn Project now supports the standards-based projects CloudEvents and the Serverless Framework. Both projects are intended to create interoperable and community-driven alternatives to today’s proprietary serverless options. Bringing Kubernetes to Maturity The New Stack, in partnership with the Cloud Native Computing Foundation (CNCF), recently published a report analyzing the top challenges facing Kubernetes users today. The report found that infrastructure-related issues – specifically security, storage, and networking – had risen to the top, impacting larger companies the most. Source: The New Stack In addition, when evaluating container orchestration, classic non-functional requirements came into play: scaling, manageability, agility, and security. 
Solving these types of issues will help the Kubernetes project move through the Gartner Hype Cycle “Trough of Disillusionment”, up the “Slope of Enlightenment”, and onto the promised land of the “Plateau of Productivity.” Source: The New Stack Addressing Real-World Kubernetes Challenges To address these top challenges facing Kubernetes users today, Oracle Container Engine for Kubernetes has integrated tightly with the best-in-class governance, security, networking, and scale of Oracle Cloud Infrastructure (OCI). These are summarized below: Governance, compliance, and auditing: Identity and Access Management (IAM) for Kubernetes enables DevOps teams not only to control who has access to Kubernetes resources, but also to set policies describing what type of access they have and to which specific resources. This is crucial for managing complex organizations: rules applied to logical groups of users and resources make it simple to define and administer policies. Governance: DevOps teams can set which users have access to which resources, compartments, tenancies, users, and groups for their Kubernetes clusters. Since different teams typically manage different resources through different stages of the development cycle – from development, test, and staging through production – role-based access control (RBAC) is crucial. Two levels of RBAC are provided: (1) at the OCI IaaS infrastructure resource level, defining, for example, who can spin up a cluster, scale it, and/or use it, and (2) at the Kubernetes application level, where fine-grained Kubernetes resource controls are provided. Compliance: Container Engine for Kubernetes will support the Payment Card Industry Data Security Standard (PCI DSS), the globally applicable security standard that customers use for a wide range of sensitive workloads, including the storage, processing, and transmission of cardholder data. 
DevOps teams will be able to run Kubernetes applications on Oracle’s PCI-compliant Cloud Infrastructure Services. Auditing (logging, monitoring): Cluster management auditing events have also been integrated into the OCI Audit Service for consistent and unified collection and visibility. Scale: Oracle Container Engine is a highly available managed Kubernetes service. The Kubernetes masters are highly available (across availability domains), managed, and secured. Worker clusters are self-healing, can span availability domains, and can be composed of node pools consisting of compute shapes from VMs to bare metal to GPUs. GPUs, Bare Metal, VMs: Oracle Container Engine offers the industry’s first and broadest family of Kubernetes compute nodes, supporting everything from small, virtualized environments to very large, dedicated configurations. Users can scale up from basic web apps to high-performance compute models, with network block storage and local NVMe storage options. Predictable, High IOPS: The Kubernetes node pools can use either VM or bare metal compute with predictable-IOPS block storage and dense I/O VMs. Local NVMe storage provides a range of compute and capacities with high IOPS. Kubernetes on NVIDIA Tesla GPUs: Running Kubernetes clusters on bare metal GPUs gives container applications access to the highest performance possible. With no hypervisor overhead, DevOps teams should be delighted to have access to bare metal compute instances on Oracle Cloud Infrastructure with two NVIDIA Tesla P100 GPUs to run CUDA-based workloads, allowing for over 21 TFLOPS of single-precision performance per instance. Networking: Oracle Container Engine is built on a state-of-the-art, non-blocking Clos network that is not over-subscribed and provides customers with a predictable, high-bandwidth, low-latency network. 
Load balancing: Load balancing is often one of the hardest features to configure and manage – Oracle has integrated seamlessly with OCI load balancing to allow container-level load balancing. Kubernetes load balancing checks for incoming traffic on the load balancer's IP address and distributes incoming traffic to a list of backend servers based on a load balancing policy and a health check policy. DevOps teams can define Load Balancing Policies that tell the load balancer how to distribute incoming traffic to the backend servers. Virtual Cloud Network: Kubernetes user (worker) nodes are deployed inside a customer’s own VCN (virtual cloud network), allowing for secure management of IP addresses, subnets, route tables and gateways using the VCN.   Storage: Cracking the code on a simple way to manage Kubernetes storage continues to be a major concern for DevOps teams. There are two new IaaS Kubernetes storage integrations designed for Oracle Cloud Infrastructure that can help, unlocking OCI’s industry leading block storage performance (highest IOPS per GB of any standard cloud provider offering), cost, and predictability: OCI Volume Provisioner: Provided as a Kubernetes deployment, the OCI Volume provisioner enables dynamic provisioning of Block Volume storage resources for running Kubernetes on OCI. It leverages the OCI Flexvolume driver (see below) to bind storage resources to Kubernetes nodes. OCI Flexvolume Driver: This driver was developed to mount OCI block storage volumes to Kubernetes Pods using the flexvolume plugin interface.   Simplified, Unified Management: Bundled in Management: By bundling in commonly used Kubernetes utilities, Oracle Container Engine for Kubernetes makes for a familiar and seamless developer experience. This includes built-in support for Helm and Tiller (providing standard Kubernetes package management), the Kubernetes dashboard, and kube-dns. 
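As a sketch of how the load balancing and storage integrations described above are consumed, both surface through standard Kubernetes objects rather than custom tooling. The storage class name below is an assumption that depends on how the OCI volume provisioner is installed; check its documentation for the actual value:

```
# Expose a deployment through an OCI load balancer
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # provisions an OCI load balancer for this service
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# Request an OCI block volume via the volume provisioner
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: oci   # assumed class name; see the provisioner docs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```

The point of the integration is that these are vanilla Kubernetes manifests: the OCI-specific wiring (load balancer policies, Flexvolume attachment) happens behind the standard API.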
Running Existing Applications with Kubernetes: Kubernetes supports an ever-growing set of workloads that are not necessarily net new greenfield apps. A Kubernetes Operator is “an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user.” Oracle has open-sourced and will soon generally release an Oracle WebLogic Server Kubernetes Operator which allows WebLogic users to manage WebLogic domains in a Kubernetes environment without forcing application rewrites, retesting and additional process and cost. WebLogic 12.2.1.3 has also been certified on Kubernetes, and the WebLogic Monitoring Exporter, which exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana, has been released and open sourced.   Fn Project: Open serverless initiatives are progressing within the CNCF and the Fn Project is actively engaged and supporting these emerging standards: CloudEvents: The Fn Project has announced support for the Cloud Event standard effort. CloudEvents seeks to standardize event data and simplify event declaration and delivery among different applications, platforms, and providers. Until now, developers have lacked a common way of describing serverless events. This not only severely affects the portability of serverless apps but is also a significant drain on developer productivity. Serverless Framework: Fn Project, an open source functions as a service and workflow framework, has contributed a FaaS provider to the Serverless Framework to further its mission of multi-cloud and on-premise serverless computing. The new provider allows users of the Serverless Framework to easily build and deploy container-native functions to any Fn Cluster while getting the unified developer experience they’re accustomed to. 
For Fn’s growing community, the integration provides an additional option for managing functions in a multi-cloud and multi-provider world. “With a rapidly growing community around Fn, offering a first-class integration with the Serverless Framework will help bring our two great communities closer together, providing a ‘no lock-in’ model of serverless computing to companies of all sizes, from startups to the largest enterprises,” says Chad Arimura, VP Software Development, Oracle. OpenCensus: Fn is now using OpenCensus stats, trace, and view APIs across all Fn code. OpenCensus is a single distribution of libraries that automatically collects traces and metrics from your app, displays them locally, and sends them to any analysis tool. OpenCensus has made good decisions in defining its own data formats, allowing developers to use any backend (explicitly not having to create their own data structures simply for collection). This allows Fn to easily stay up to date in the ops world without continuously having to make extensive code changes. For more information, join Chad Arimura and Matt Stephenson on Friday, May 4, for their talk at KubeCon on Operating a Global-Scale FaaS on Top of Kubernetes.
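To make the CloudEvents idea concrete, here is a minimal Python sketch that builds a 0.1-style event envelope and checks its required context attributes. The field names follow the CloudEvents 0.1 draft; the event type and source values are made up for illustration:

```python
import json
import uuid
from datetime import datetime, timezone

# Required context attributes per the CloudEvents 0.1 draft
REQUIRED = {"cloudEventsVersion", "eventType", "source", "eventID"}

def make_event(event_type, source, data):
    """Build a minimal CloudEvents 0.1-style envelope as a plain dict."""
    return {
        "cloudEventsVersion": "0.1",
        "eventType": event_type,
        "source": source,
        "eventID": str(uuid.uuid4()),
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "contentType": "application/json",
        "data": data,
    }

def is_valid(event):
    """Check that the required context attributes are present and non-empty."""
    return all(event.get(attr) for attr in REQUIRED)

event = make_event("com.example.object.created", "/mycontext",
                   {"objectName": "cat.jpg"})
print(json.dumps(event, indent=2))
```

Because every producer wraps its payload in the same small envelope, a consumer only needs to understand these context attributes to route events, regardless of which platform emitted them.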

