
Step Up to Modern Cloud Development

Recent Posts

Cloud

Enterprise applications meet cloud native

Speaking with enterprise customers, many are adopting a cloud-native strategy for new, in-house development projects. This approach of short development cycles, iterative functional delivery, and automated CI/CD tooling is allowing them to deliver innovation for users and customers quicker than ever before. One of Oracle's top 10 predictions for developers in 2019 is that legacy enterprise applications will jump to cloud-native development approaches.

The need to move to cloud native is seated in the fact that, at heart, all companies are software companies. Those that can use software to their advantage, speeding up and automating their business and making it easier for their customers to interact with them, win. This is the nature of business today, and the reason that start-ups such as Uber can disrupt whole existing industries. Cloud-native technologies like Kubernetes, Docker containers, microservices and functions provide the basis to scale, secure and enable these new solutions.

However, enterprises typically have a complex stack of applications and infrastructure; this usually means monolithic custom or ISV applications that are anything but cloud native. New cloud-native solutions need to interact with these legacy systems, but they run in the cloud rather than on-premises and need delivery cycles of days rather than months. Enterprises need to address this technical debt in order to realise the full benefits of a cloud-native approach. Rewriting these monoliths is not practical in the short term because of the resources and time needed. So, what are the options for modernising enterprise applications?

Move the Monolith

Moving these applications to the cloud can realise the cloud economics of elasticity and paying only for what you use. This approach treats infrastructure as code rather than as physical compute, network and storage. Using tools such as Terraform (https://www.terraform.io) to create and delete infrastructure resources, and Packer (https://www.packer.io) to manage machine images, means we can create environments when needed and tear them down when not. Although this does not immediately address modernisation of the application itself, it does start to automate the infrastructure and begin to integrate these applications into cloud-native development and delivery. See https://blogs.oracle.com/developers/build-oracle-cloud-infrastructure-custom-images-with-packer-on-oracle-developer-cloud for an example.

Containerise and Orchestrate

A cloud-native strategy is largely based on running applications in Docker containers to give the flexibility of deployment on-premises and across different cloud providers. A common approach is to containerise existing applications and run them on-premises before moving to the cloud. Many enterprise applications, both in-house developed and ISV supplied, are WebLogic based, and enterprises are looking to do the same with these. WebLogic now runs in Docker containers, so the same approach can be taken; see https://hub.docker.com/_/oracle-weblogic-server-12c.

As initial, suitable workloads (those with fewer on-premises integration points, or that are good candidates from a compliance standpoint) are containerised and moved to the cloud, the management and orchestration of containers into solutions begins to become an issue. Container management and orchestration platforms such as Kubernetes and Docker Swarm are being adopted, and Kubernetes is emerging as the platform of choice for enterprises to manage containers in the cloud.
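As a sketch of the containerisation step just described, running WebLogic in Docker comes down to the usual pull-and-run workflow. The image name and tag below are illustrative (check the Docker Hub page linked above), and the image requires accepting Oracle's licence terms in the registry first:

```bash
# Pull a WebLogic server image (name/tag illustrative; licence acceptance required).
docker pull store/oracle/weblogic:12.2.1.3

# Run an admin server, publishing the console port to the host.
docker run -d --name wls-admin -p 7001:7001 store/oracle/weblogic:12.2.1.3
```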
Oracle has developed a WebLogic Kubernetes Operator that allows Kubernetes to understand and manage WebLogic domains, clustering and more: https://github.com/oracle/weblogic-kubernetes-operator. Integrating with version control such as GitHub and secure Docker repositories, and using CI/CD tooling to deploy to Kubernetes, really brings these enterprise applications to the core of a cloud-native strategy. It also means existing WebLogic and Java skills in the organisation continue to be relevant in the cloud.

Breaking It Down

To fully benefit from running these applications in the cloud, their functionality needs to be integrated with the new cloud-native services and also to become more agile. An evolving pattern is to take an agile approach, refactoring the enterprise application over a series of iterations. A first step is to separate the UI from the functional code and create APIs to access the business functionality. This allows new cloud-native applications to access the required functionality and facilitates the shorter delivery cycles enterprises are demanding. Over time, these services can be rebuilt and deployed as cloud services, eventually migrating away from the legacy application. Helidon is a collection of Java libraries for writing microservices that helps teams reuse existing Java skills when redeveloping the code behind the services.

As more and more services are deployed, management, versioning and monitoring become increasingly important. A service mesh is evolving as the way to do this: a dedicated infrastructure layer for handling service-to-service communication, responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud-native application. Istio is emerging as an enterprise choice and can easily be installed on Kubernetes.

In Conclusion

More and more enterprises are adopting a cloud-native approach for new development projects. They are also struggling with the technical debt of large, monolithic enterprise applications when trying to modernise them. However, there are a number of strategies and technologies that can be used to help migrate and modernise these legacy applications in the cloud. With the right approach, existing skills can be maintained and evolved into a container-based, cloud-native environment.


Cloud

Kata Containers: An Important Cloud Native Development Trend

Introduction

One of Oracle's top 10 predictions for developers in 2019 was that a hybrid model that falls between virtual machines and containers will rise in popularity for deploying applications. Kata Containers are a relatively new technology that combines the speed of development and deployment of (Docker) containers with the isolation of virtual machines. In the Oracle Linux and virtualization team, we have been investigating Kata Containers and have recently released Oracle Container Runtime for Kata on Oracle Linux yum server for anyone to experiment with. In this post, I describe what Kata Containers are, as well as some of the history behind this significant development in the cloud native landscape. For now, I will limit the discussion to Kata as containers in a container engine. Stay tuned for a future post on the topic of Kata Containers running in Kubernetes.

History of Containerization in Linux

The history of isolation, sharing of resources and virtualization in Linux, and in computing in general, is rich and deep. I will skip over much of it to focus on some of the key landmarks along the way. Two Linux kernel features are instrumental building blocks for the Docker containers we've become so familiar with: namespaces and cgroups. Linux namespaces are a way to partition kernel resources such that two different processes have their own view of resources such as process IDs, file names or network devices. Namespaces determine what system resources you can see. Control groups, or cgroups, are a kernel feature that enables processes to be grouped hierarchically so that their use of subsystem resources (memory, CPU, I/O, etc.) can be monitored and limited. Cgroups determine what system resources you can use.

One of the earliest containerization features in Linux to combine both namespaces and cgroups was Linux Containers (LXC). LXC offered a userspace interface to make the Linux kernel containment features easy to use, and enabled the creation of system or application containers. Using LXC, you could run, for example, CentOS 6 and Oracle Linux 7, two completely different operating systems with different userspace libraries and versions, on the same Linux kernel. Docker expanded on this idea of lightweight containers by adding packaging, versioning and component reuse features. Docker containers have become widely used because they appeal to developers and operators alike: they shorten the build-test-deploy cycle by making it easy to package and distribute an application or service as a self-contained unit, together with all the libraries needed to run it. Essentially, Docker containers bridge the gap between dev and ops and shorten the cycle from development to deployment.

Because containers, both LXC and Docker-based, share the same underlying kernel, it's not inconceivable that an exploit able to escape a container could access kernel resources or even other containers. Especially in multi-tenant environments, this is something you want to avoid. Projects like Intel® Clear Containers and Hyper runV took a different approach to parceling out system resources: their goal was to combine the strong isolation of VMs with the speed and density (the number of containers you can pack onto a server) of containers. Rather than relying on namespaces and cgroups, they used a hypervisor to run a container image.
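Before moving on to Kata itself, here is a quick way to see the two kernel features described above in action on any modern Linux host. The commands are standard util-linux and procfs tooling and assume root access:

```bash
# New PID and mount namespaces: the shell inside sees itself as PID 1
# and only its own processes.
sudo unshare --fork --pid --mount-proc bash -c 'ps -ef'

# cgroups: list the controllers the kernel exposes for limiting resource use.
cat /proc/cgroups
```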
Intel® Clear Containers and Hyper runV came together in Kata Containers, an open source project and community, which saw its first release in March of 2018.

Kata Containers: Best of Both Worlds

The fact that Kata Containers are lightweight VMs means that, unlike traditional Linux containers or Docker containers, Kata Containers don't share the same underlying Linux kernel. Kata Containers fit into the existing container ecosystem because developers and operators interact with them through a container runtime that adheres to the Open Container Initiative (OCI) specification. Creating, starting, stopping and deleting containers works just the way it does for Docker containers.

[Image by OpenStack Foundation, licensed under CC BY-ND 4.0]

In summary, Kata Containers:

- Run their own lightweight OS and a dedicated kernel, offering memory, I/O and network isolation
- Can use hardware virtualization extensions (VT) for additional isolation
- Comply with the OCI (Open Container Initiative) specification as well as the CRI (Container Runtime Interface) for Kubernetes

Installing Oracle Container Runtime for Kata

As I mentioned earlier, we've been researching Kata Containers here in the Oracle Linux team, and as part of that effort we have released software for customers to experiment with. The packages are available on Oracle Linux yum server and its mirrors in Oracle Cloud Infrastructure (OCI). Specifically, we've released a kata-runtime and related components, as well as an optimized Oracle Linux guest kernel and guest image used to boot the virtual machine that will run a container. Oracle Container Runtime for Kata relies on QEMU and KVM as the hypervisor to launch VMs.

To install Oracle Container Runtime for Kata on a bare metal compute instance on OCI:

Install QEMU

QEMU is available in the ol7_kvm_utils repo. Enable that repo and install QEMU:

```
sudo yum-config-manager --enable ol7_kvm_utils
sudo yum install qemu
```

Install and Enable Docker

Next, install and enable Docker:

```
sudo yum install docker-engine
sudo systemctl start docker
sudo systemctl enable docker
```

Install kata-runtime and Configure Docker to Use It

First, configure yum for access to the Oracle Linux Cloud Native Environment - Developer Preview yum repository by installing the oracle-olcne-release-el7 RPM:

```
sudo yum install oracle-olcne-release-el7
```

Now, install kata-runtime:

```
sudo yum install kata-runtime
```

To make kata-runtime an available runtime in Docker, modify the Docker settings in /etc/sysconfig/docker. Make sure SELinux is not enabled. The line that starts with OPTIONS should look like this:

```
$ grep OPTIONS /etc/sysconfig/docker
OPTIONS='-D --add-runtime kata-runtime=/usr/bin/kata-runtime'
```

Next, restart Docker:

```
sudo systemctl daemon-reload
sudo systemctl restart docker
```

Run a Container Using Oracle Container Runtime for Kata

Now you can use the usual docker command to run a container, adding the --runtime option to indicate that you want to use kata-runtime. For example:

```
$ sudo docker run --rm --runtime=kata-runtime oraclelinux:7 uname -r
Unable to find image 'oraclelinux:7' locally
Trying to pull repository docker.io/library/oraclelinux ...
7: Pulling from docker.io/library/oraclelinux
73d3caa7e48d: Pull complete
Digest: sha256:be6367907d913b4c9837aa76fe373fa4bc234da70e793c5eddb621f42cd0d4e1
Status: Downloaded newer image for oraclelinux:7
4.14.35-1909.1.2.el7.container
```

To review what happened here: Docker, via the kata-runtime, instructed KVM and QEMU to start a VM based on a special-purpose kernel and a minimized OS image.
Inside the VM, a container was created, which ran the uname -r command. You can see from the kernel version that a "special" kernel is running. Running a container this way takes more time than starting a traditional container based on namespaces and cgroups, but considering that a whole VM is launched, it's quite impressive. Let's compare:

```
# time docker run --rm --runtime=kata-runtime oraclelinux:7 echo 'Hello, World!'
Hello, World!

real 0m2.480s
user 0m0.048s
sys 0m0.026s

# time docker run --rm oraclelinux:7 echo 'Hello, World!'
Hello, World!

real 0m0.623s
user 0m0.050s
sys 0m0.023s
```

That's about 2.5 seconds to launch a Kata Container versus 0.6 seconds to launch a traditional container.

Conclusion

Kata Containers represent an important phenomenon in the evolution of cloud native technologies. They address both the need for security through virtual machine isolation and the need for speed of development through seamless integration into the existing container ecosystem, without compromising on computing density. In this blog post, I've described some of the history that brought us Kata Containers and showed how you can experiment with them yourself using the Oracle Container Runtime for Kata packages.


Cloud

Nine Ways Oracle Cloud is Open

In the recent Break New Ground paper, 10 Predictions for Developers in 2019, openness was cited as a key factor. Developers want to choose their clouds based on openness. They want a choice of languages, databases, and compute shapes, among other things. This allows them to focus on what they care about, creating, without ops concerns or lock-in. In this post, we outline the top ways that Oracle is delivering a truly open cloud.

Databases

Oracle Cloud's Autonomous Database, which is built on top of Oracle Database, conforms to open standards, including ISO SQL:2016, JDBC, Python PEP 249, ODBC, and many more. Autonomous Database is a multi-model database and supports relational as well as non-relational data, such as JSON, Graph, Spatial, XML, Key/Value, and Text, among others. Because Oracle Autonomous Database is built on Oracle Database technology, customers can "lift and shift" workloads to and from other Oracle Database environments, including those running on third-party clouds and on-premises infrastructure. This flexibility makes Oracle Autonomous Database a truly open cloud service compared to other database cloud services in the market. Steve Daheb from Oracle Cloud Platform provides more information in this Q&A.

In addition, Oracle MySQL continues to be the world's most popular open source database (source code) and is available in Community and Enterprise editions. MySQL implements standards such as ANSI/ISO SQL, ODBC, JDBC and ECMA. MySQL can be deployed on-premises, on Oracle Cloud, and on other clouds.

Integration Cloud

With Oracle Data Integration Platform, you can access numerous Oracle and non-Oracle sources and targets to integrate databases with applications. For example, you can use MySQL databases on a third-party cloud as a source for Oracle apps such as ERP, HCM, CX, NetSuite, and JD Edwards. In addition, Integration Cloud allows you to integrate Oracle Big Data Cloud, Hortonworks Data Platform, or Cloudera Enterprise Hub with a variety of sources: Hadoop, NoSQL, or Oracle Database. You can also connect apps on Oracle Cloud with third-party apps. Consider a quote-to-order system: when a customer accepts a quote, the salesperson can update it in the CRM system, leverage Oracle's predefined integration flows with Oracle ERP Cloud, and turn the quote into an order.

Java

Java is one of the top programming languages on GitHub (Oracle Code One 2018 keynote), with over 12 million developers in the community. All development for Java happens in OpenJDK, and all design and code changes are visible to the community, so the evolution of ongoing projects and features is transparent. Oracle has been talking with developers who are and aren't using Java to ensure that Java remains open and free, while making enhancements to OpenJDK. In 2018, Oracle open sourced all remaining closed source features: Application Class Data Sharing, Project ZGC, Flight Recorder and Mission Control. In addition, Oracle delivers binaries that are pure OpenJDK code, under the GPL, giving developers freedom to distribute them with frameworks and applications.

Oracle Cloud Native Services, including Oracle Container Engine for Kubernetes

Cloud Native Services include Oracle Container Engine for Kubernetes and Oracle Cloud Infrastructure Registry. Container Engine is based on an unmodified Kubernetes codebase, and clusters can support bare metal nodes, virtual machines, or heterogeneous BM/VM environments.
Oracle's Registry is based on the open Docker v2 standards, allowing you to use the same Docker commands to interact with it as you would with Docker Hub (see the short example at the end of this section). Container images can be used on-premises and on Container Engine, giving you portability. Container Engine can also interoperate with third-party registries, and Oracle Cloud Infrastructure Registry with third-party Kubernetes environments. In addition, Oracle Functions is based on the open source Fn Project. Code written for Oracle Functions will therefore run not only on Oracle Cloud, but also on Fn clusters on third-party clouds and in on-premises environments.

Oracle offers the same cloud native capabilities as part of Oracle Linux Cloud Native Environment. This is a curated set of open source Cloud Native Computing Foundation (CNCF) projects that can be easily deployed, have been tested for interoperability, and for which enterprise-grade support is offered. With Oracle's Cloud Native Framework, users can run cloud native applications in the Oracle Cloud and on-premises, in an open hybrid cloud and multi-cloud architecture.

Oracle Linux Operating System

Oracle Linux, which is included with Oracle Cloud subscriptions at no additional cost, is a proven, open source operating system (OS) that is optimized for performance, scalability, reliability, and security. It powers everything in the Oracle Cloud, both Applications and Infrastructure services. Oracle extensively tests and validates Oracle Linux on Oracle Cloud Infrastructure, and continually delivers innovative new features to enhance the experience in Oracle Cloud.

Oracle VM VirtualBox

Oracle VM VirtualBox is the world's most popular open source, cross-platform virtualization product. It lets you run multiple operating systems on Mac OS, Windows, Linux, or Oracle Solaris. Oracle VM VirtualBox is ideal for testing, developing, demonstrating, and deploying solutions across multiple platforms on one machine. It supports exporting virtual machines to Oracle Cloud Infrastructure and enables them to run on the cloud, which makes VirtualBox a convenient development platform for the cloud.

Identity Cloud Services

Oracle Identity Cloud Service provides 100% API coverage of all product capabilities for rich integration with custom applications. It complies with open standards such as SCIM, REST, OAuth and OpenID Connect for easy application integrations. Customers can easily consume these APIs in their applications to take advantage of identity management capabilities. Oracle Identity Cloud Service seamlessly interoperates with on-premises identities in Active Directory to provide single sign-on between cloud and on-premises applications. Through its Identity Bridge component, Identity Cloud can synchronize all the identities and groups from Active Directory into its own identity store in the cloud. This allows organizations to take advantage of their existing investment in Active Directory, and to extend their services to Oracle Cloud and external SaaS applications.

Oracle Blockchain Platform

Oracle Blockchain Platform is built on open source Hyperledger Fabric, making it interoperable with non-Oracle Hyperledger Fabric instances deployed in your data center or in third-party clouds. In addition, the platform uses REST APIs for plug-and-play integration with Oracle SaaS and on-premises apps such as NetSuite ERP, Flexcube core banking, and the Open Banking API Platform, among others.
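As promised above, here is a quick illustration of the Registry's Docker v2 compatibility: the standard Docker commands apply unchanged. The region key, tenancy, and repository names below are placeholders:

```bash
# Log in to Oracle Cloud Infrastructure Registry (password is an auth token,
# not the console password).
docker login phx.ocir.io -u 'mytenancy/myuser'

# Tag and push an existing local image exactly as you would to Docker Hub.
docker tag myapp:1.0 phx.ocir.io/mytenancy/myapp:1.0
docker push phx.ocir.io/mytenancy/myapp:1.0
```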
Oracle Mobile Hub (Mobile Backend as a Service)

Oracle Mobile Hub is an open and flexible platform for mobile app development. With Mobile Hub, you can:

- Develop apps for any mobile client: iOS- or Android-based phones
- Connect to any backend via standard RESTful interfaces and SOAP web services
- Support both native mobile apps and hybrid apps. For example, you can develop with Swift or Objective-C for native iOS apps, Java for native Android apps, and JavaScript for hybrid mobile apps

In addition, Oracle Visual Builder (VB) is a cloud-based software development Platform as a Service (PaaS) and a hosted environment for your application development infrastructure. It provides an open source, standards-based solution to develop, collaborate on, and deploy applications within Oracle Cloud, with an easy way to create and host web and mobile applications in a secure cloud environment.

Takeaway

In choosing a cloud vendor, openness can provide a significant advantage, allowing you to choose among languages, databases, hardware, clouds, and on-premises infrastructure. With a free trial on Oracle Cloud, you can experience the benefits of these open technologies, no strings attached. Feel free to start a conversation below.


APIs

How to Use OSvC RESTful APIs in Python: Quickly and Easily

Have you ever had to act quickly to create an automation process to handle data in Oracle Service Cloud, such as restoring, updating, or even deleting bad data? If so, you'll know that there are different approaches. So what do you do? Many people have found success by writing a PHP script and hosting it in the Oracle Service Cloud Customer Portal (CP). But there are a few things you should know before you go down this road, to ensure you don't overload your Customer Portal server, create a bad experience for your end-user customers, or generate extra sessions that count against your license compliance agreement. This post will show you a different road: with just a few lines of Python, you can create a process that lets you successfully implement an integration with little time investment.

First, make sure you have Python installed locally. Take a look at the Python documentation online to get this first step done. If you want to play with it first, the Anaconda distribution is the easiest way to go.

Let's get started. Here is a simple Python script you can use to make a REST API request. Make sure you replace the variable values where it says [REPLACE ...]:

```python
import requests
import json
from requests.auth import HTTPBasicAuth

def main():
    try:
        site = '[REPLACE FOR YOUR SITE]'
        payload = {
            "id": [REPLACE FOR YOUR REPORT ID],
            "filters": [{
                "name": "[REPLACE FOR REPORT FIELD]",
                "operator": {"lookupName": "="},
                "values": "[VALUE]"
            }]
        }
        response = requests.post(
            site + '/analyticsReportResults',
            auth=HTTPBasicAuth('[REPLACE FOR YOUR USER]', '[REPLACE FOR PASSWORD]'),
            data=json.dumps(payload))
        json_data = json.loads(response.text)
        print(json_data['rows'])
    except Exception as e:
        print('Error: %s' % e)

main()
```

Now that you know how to quickly create a Python script to make an API request, you are ready to solve data issues such as restores, backups, updates, creation, deletion, and so on. You can create a request to read from one site and insert into another; for example, if you have a backup site, you can query site A and insert the results into site B. Just make sure you don't create parallel threads that flood your Oracle Service Cloud server with requests. Yep, that's it! I hope this helps!


Kubernetes and the "Platform Engineer"

One of Oracle's top 10 predictions for developers in 2019 was that developers will need to partner with a platform engineer, a role that will emerge as key for cloud native development. Recent conversations with enterprise customers have reinforced this, and it is becoming clear that a separation of concerns is emerging for those delivering production applications on top of Kubernetes infrastructure: application developers build the containerized apps driven by business requirements, while the "platform engineers" own and run the supporting Kubernetes infrastructure and platform components. For those familiar with DevOps or SRE (pick your term), this is arguably nothing new, but the consolidation of these teams around the Kubernetes API is leading to something altogether different. In short, the Kubernetes YAML file (via the Kubernetes API) is becoming the contract, or hand-off, between application developers and the platform team, or more succinctly, between dev and ops.

In the beginning, there was PaaS

Well, actually there was infrastructure! But for application developers, there were an awful lot of pieces to assemble (compute, network, storage) to deliver an application. Technologies like virtualization and infrastructure as code (Terraform et al.) made it easier to automate the infrastructure part, but there were still a lot of moving parts. Early PaaS (Platform as a Service) pioneers, recognizing this complexity for developers, created platforms abstracting away much of the infrastructure (and complexity), albeit for a very targeted (or "opinionated") set of application use cases or patterns. That is fine if your application fits into such a pattern, but if not, you are back to dealing with infrastructure.

Then Came CaaS

Following the success of container technology, popularized in recent years by Docker, so-called "Containers as a Service" offerings emerged a few years back. Sitting somewhere between IaaS and PaaS, CaaS services abstract some of the complexity of dealing with raw infrastructure, allowing teams to deploy and operate container-based applications without having to build, set up and maintain their own container orchestration tooling and supporting infrastructure. The emergence of CaaS also coincided largely with the rise of Kubernetes as the de facto standard in container orchestration. The majority of CaaS offerings today are managed Kubernetes offerings (not all offerings are created equal, though; see The Journey to Enterprise Managed Kubernetes for more details). As discussed previously, Kubernetes has essentially become the new operating system for the cloud, and arguably the modern application server, as Kubernetes continues to move up the stack. At a practical level, this means that in addition to the benefits of a CaaS described above, customers benefit from standardization and portability of their container applications across multiple cloud providers and on-prem (assuming those providers adhere to and are conformant with upstream Kubernetes).

Build your Own PaaS?

Despite CaaS and the standardization of Kubernetes for delivering it, there is still a lot of potential complexity for developers. With "complexity", "cultural changes" and "lack of training" recently cited as some of the most significant inhibitors to container and Kubernetes adoption, we can see there's still work to do. An interesting talk at KubeCon Seattle played on this with the title: "Kubernetes is Not for Developers and Other Things the Hype Never Told You".
Enter the platform engineer. Kubernetes is broad and deep, and in many cases only a subset of it ultimately needs to be exposed to end developers. As an enterprise that wants to offer a modern container platform to its developers, there are a lot of common elements and tooling that every application team consuming the platform shouldn't have to reinvent. Examples include (but are not limited to) monitoring, logging, service mesh, secure communication/TLS, ingress controllers, network policies, and admission controllers. In addition to presenting common services to developers, the platform engineer can even extend Kubernetes (via extension APIs), with things like the Service Catalog/Open Service Broker to facilitate easier integration for developers with other existing cloud services, or by providing Kubernetes Operators, essentially helpers that developers can consume for creating (stateful) services in their clusters (see examples here and here).

The platform engineer, in essence, has an opportunity to carve out the right cross-section of Kubernetes (hence "build your own PaaS") for the business, both in terms of the services exposed to developers to promote reuse, and in the enforcement of business policy (security and compliance).

Platform As Code

The fact that you can leverage the same Kubernetes API or CLI ("kubectl") and deployment (YAML) file to drive the above platform has led some to talk about the approach as "platform as code": essentially an evolution of infrastructure as code, but in this case native Kubernetes interfaces drive the creation of a complete Kubernetes-based application platform for enterprise consumption.

The platform engineer and the developer now have a clear separation of concerns (with the appropriate Kubernetes RBAC roles and role bindings in place!). The platform engineer can check the complete definition of the platform described above into source control. Similarly, the developer consuming the platform checks their Kubernetes application definition into source control, and the Kubernetes YAML file/definition becomes the contract (and enforcement point) between the developer and the platform engineer.

Platform engineers ideally have a strong background in infrastructure software, networking and systems administration. Essentially, they are working on the (Kubernetes) platform to deliver a product or service to (and in close collaboration with) end development teams. In the future, we would expect additional work in the community around both sides of this contract: for developers, around how they can discover what common services are provided by the platform being offered, and for platform engineers, around how they can provide (and enforce) a clear contract to their development team customers.
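To make that contract concrete, here is a minimal sketch of the kind of application definition a developer might check into source control and apply against the platform team's cluster. All names, labels, and the image path are hypothetical:

```bash
# Apply a developer-owned deployment definition; the platform team's
# RBAC, quotas, and admission controllers enforce policy on the way in.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/team/orders:1.0
        ports:
        - containerPort: 8080
EOF
```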


Cloud

Four New Oracle Cloud Native Services in General Availability

This post was jointly written by Product Management and Product Marketing for Oracle Cloud Native Services. To those who participated in the Cloud Native Services Limited Availability Program, thank you from the team! We have an important update: four more Cloud Native Services have just gone into General Availability.

Resource Manager for DevOps and Infrastructure as Code

Resource Manager is a fully managed service that uses open source HashiCorp Terraform to provision, update, and destroy Oracle Cloud Infrastructure resources at scale. Resource Manager integrates seamlessly with Oracle Cloud Infrastructure to improve team collaboration and enable DevOps. It can be useful for repetitive deployment tasks, such as replicating similar architectures across Availability Domains or large numbers of hosts. You can learn more about Resource Manager through this blog post.

Streaming for Event-Based Architectures

The Streaming service provides a "pipe" for flowing large volumes of data from producers to consumers. Streaming is a fully managed service with scalable and durable storage for ingesting large volumes of continuous data via a publish-subscribe (pub-sub) model. There are many use cases for Streaming: gathering data from mobile and IoT devices for real-time analytics, shipping logs from infrastructure and applications to an object store, and tracking current financial information to trigger stock transactions, to name a few. Streaming is accessible via the Oracle Cloud Infrastructure Console, SDKs, CLI, and REST API, and provides Terraform integration. Additional information on Streaming is available in this blog post.

Monitoring and Notifications for DevOps

Monitoring provides a consistent, integrated method to obtain fine-grained telemetry and notifications for your entire stack, allowing you to track infrastructure utilization and respond to anomalies in real time. Besides the performance and health metrics available out of the box for infrastructure, you can get custom metrics for visibility across the stack, real-time alarms based on triggers, and notifications via email and PagerDuty. The Metrics Explorer provides a comprehensive view across your resources. You can learn more through these blog posts for Monitoring and Notifications. In addition, using the Data Source for Grafana, users can create Grafana dashboards for monitoring metrics.

Next Steps

We would like to invite you to try these services and provide your feedback below. A free $300 trial is available at cloud.oracle.com/tryit. To evaluate other Cloud Native Services in Limited Availability, including Functions for serverless applications, please complete this sign-up form.


Containers, Microservices, APIs

CI/CD Automation for Fn Project with Oracle FaaS and Developer Cloud Service

By now, you have probably seen multiple blogs about the Fn Project, an open source, multi-language, container-native serverless platform. And you might have already heard that Oracle is going to offer a cloud-hosted Functions as a Service (FaaS) for Fn-based functions called Oracle Functions, currently in limited access (get your invite to try it out here). So how do you create an automated CI/CD chain for your Fn functions? Oracle Developer Cloud Service now provides built-in functionality to support you. DevCS supports Fn Project function lifecycle command definitions in our CI/CD jobs, which means that you can automate Fn build and deploy steps in a declarative way. We also added support that enables you to leverage the hosted FaaS offering in the cloud and run CI/CD directly against that environment.

Here are the basic steps to get DevCS hooked up to your Fn-based FaaS service running in Oracle Cloud Infrastructure. Your build will have several steps in it, including:

Docker Login

This step lets you connect to the hosted Docker registry in the Oracle Cloud (OCIR). Provide your OCIR URL (phx.ocir.io, for example), your user (tenancy/username), and your auth token (note that this is not the password, but rather the auth token you can get under Identity > Users > Auth Tokens).

OCIcli Configuration

The next step is to configure access to your OCI environment. You do this by picking up the OCIcli build step, then providing the information including your user's OCID and fingerprint, your tenancy OCID, your region, and the private key that you generated.

OCI Fn Configuration

Now that your OCI connection is set, let's add the specific configuration for your FaaS instance. From the Fn menu in DevCS, pick the Fn OCI option. Configure it with the details of the Fn environment you created, including the compartment ID, the provider (oracle), and the passphrase you used when you created your private key. Your environment is now ready for the specific Fn lifecycle commands. We are going to assume that your Fn function code is in the root directory of the Git repository you hooked up to the build job.

Fn Build

The first step builds the function for us. If the code is at the root of your Git repository, then you only need to specify the Registry Host (phx.ocir.io) and the username (tenant/user); you can also check the box to get verbose output from the build operation.

Fn Deploy

If the build was successful, the next step is to deploy it to our FaaS service. First, make sure you created an app in your FaaS function console. Use the name of that app to fill in the "Deploy to App" field. Fill out the Registry Host and Username fields as in the previous step, and don't forget to add the API URL (https://functions.us-phoenix-1.oraclecloud.com). You can then decide on some additional options, such as verbose output, bumping the version of the app, and so on.

Now run the build and watch the magic take place. Check out the video below to see it in action.
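For comparison, outside Developer Cloud the same lifecycle can be driven by hand with the Fn CLI; the registry path and app name below are placeholders, and the app is assumed to exist already:

```bash
# Log in to OCIR (password is the user's auth token, not the console password).
docker login phx.ocir.io -u 'mytenancy/myuser'

# Tell the Fn CLI where to push function images.
export FN_REGISTRY=phx.ocir.io/mytenancy/fn

# From the function's root directory: build the image, then deploy it to an app.
fn build
fn deploy --app myapp
```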


DevOps

Setting up Oracle Cloud Infrastructure Compute and Storage for Builds on Oracle Developer Cloud

With the 19.1.3 release of Oracle Developer Cloud, we have started supporting OCI-based build slaves for continuous integration and continuous deployment. You can now use OCI Compute for the build VMs and OCI Object Storage for artifact storage. This blog will help you understand how to configure the OCI account for Compute and Storage in Oracle Developer Cloud.

How do you get to the OCI Account configuration screen in Developer Cloud?

If your user has Organization Administrator privileges, then you will, by default, land on the Organization tab after you successfully log in to your Developer Cloud instance. In the Organization screen, click on the OCI Account tab. Note: You will not be able to access this tab if you do not have Organization Administrator privileges.

Existing users of Developer Cloud will see their OCI Classic account configuration and will notice that, unlike in the previous version, both the Compute and Storage configurations have now been consolidated into a single screen. Click on the Edit button to configure the OCI account, then click on the OCI radio button to get the form for configuring an OCI account. This wizard will help you configure both compute and storage for OCI to be used in Developer Cloud.

Before we look at what each of the fields in the wizard means and where to retrieve its value from in the OCI console, let us understand the message displayed at the top of the Configure OCI Account wizard (as shown in the screenshot below). It means that if you change from OCI Classic to an OCI account, the build VMs that were created using Compute on OCI Classic will be migrated to OCI-based build VMs, and it gives the count of the existing build VMs that will be migrated. This change will also result in the automatic migration of the build and Maven artifacts from Storage Classic to OCI storage.

Prerequisites for the OCI Account configuration

You should have access to the OCI account, and you should also have a native OCI user with Admin privileges created in the OCI instance. Note: You will not be able to use an IDCS user, or the user with which you log in to the Oracle Cloud My Services console, unless that user also exists as a native OCI user. By native user, we mean that you should be able to see the user (e.g., ociuser) under Governance & Administration > Identity > Users in the OCI console, as shown in the screenshot below. If not, you will have to create a user by following this link.

OCI Account Configuration

Below is the list of values, an explanation of what each one is, and where it can be found in the OCI console. You will need these values to configure the OCI account in Developer Cloud.

Tenancy OCID: This is the cloud tenancy identifier in OCI. Go to Governance and Administration > Administration > Tenancy Details in the OCI console. Under Tenancy Information, click on the Copy link for the Tenancy OCID.

User OCID: The ID of the native OCI user. Go to Governance and Administration > Identity > Users in the OCI console. For the user of your choice, click on the Copy link for the User OCID.

Home Region: In the OCI console, look at the top right-hand corner and you should find the region for your tenancy, as highlighted in the screenshot below.

Private Key: The user has to generate a public and private key pair in PEM format. The public key, in PEM format, has to be configured in the OCI console.
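As a sketch, the key pair can be generated with OpenSSL following the standard OCI API key instructions; the file locations below are just a common convention:

```bash
mkdir -p ~/.oci

# Generate the private key (add -aes128 and a passphrase if you want one).
openssl genrsa -out ~/.oci/oci_api_key.pem 2048

# Derive the public key to upload in the OCI console (Add Public Key).
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem

# Compute the key's fingerprint, which the console also shows after upload.
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c
```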
Use this link to understand how to create the public and private key pair in more detail. Then go to Governance and Administration > Identity > Users in the OCI console, select the user by clicking on the username link, click on the Add Public Key button, and configure the public key there. The private key needs to be pasted into the Private Key field of the Configure OCI Account wizard in Developer Cloud.

Passphrase: If you gave a passphrase while generating the private key, then you will have to configure the same here; otherwise, you can leave it empty.

Fingerprint: This is the fingerprint value of the public key for the OCI user whose OCID you copied earlier. Go to Governance and Administration > Identity > Users in the OCI console, select the user by clicking on the username link, and, for the public key you created, copy the fingerprint value as shown in the screenshot below.

Compartment OCID: You could select the root compartment, whose OCID is the same as the Tenancy OCID, but it is recommended that you create a separate compartment for the Developer Cloud build VMs, for better management. You can create a new compartment by going to Governance and Administration > Identity > Compartments in the OCI console and clicking on the Create Compartment button; give the Compartment Name and Description values of your choice and select the root compartment as the Parent Compartment. Click on the link in the OCID column for the compartment that you created (DevCSBuild in this example) and then click on the Copy link to copy the compartment OCID.

Storage Namespace: This is the storage namespace where the artifacts will be stored on OCI. Go to Governance and Administration > Administration > Tenancy Details in the OCI console. Under Object Storage Settings, copy the storage namespace name as shown in the screenshot below.

After you have entered all the values, select the checkbox to accept the terms and conditions. Click the Validate button; if validation is successful, click the Save button to complete the OCI account configuration.

You will get a confirmation dialog for the account switch from OCI Classic to OCI. Select the checkbox and click the Confirm button. By doing this, you are giving your consent to migrate the VMs, and the build and Maven artifacts, to OCI compute and storage respectively. This action will also remove the artifacts from Storage Classic. On confirmation, you should see the OCI account configured with the provided details. You can edit it at any point in time by clicking the Edit button. You can check the Maven and build artifacts in your projects to confirm the migration.

To learn more about Oracle Developer Cloud, please refer to the documentation link. Happy coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.


APIs

Oracle Cloud on a Roll: From One 'Next Big Things' Session to Another…

The Oracle Open World Showcase in London this January

We wrapped up an exciting Open World in London last month with a spotlight on all things Oracle Cloud. Hands-on labs and demos took center stage to showcase the hottest use cases in apps and converged infrastructure (IaaS + PaaS). We ran a series of use cases, from autonomous databases and analytics, to platform solutions for SaaS, like a digital assistant (chatbot), app and data integration, and API gateways for any SaaS play across verticals, to cloud-native application development on OCI. Several customers joined us on stage during various keynote streams to share their experiences and to demonstrate the richness of Oracle's offering.

Macty's (an Oracle Global Startup Ecosystem Partner) Move from AWS to the Oracle Cloud

Macty is one such customer, who transitioned from AWS to Oracle Cloud to build their fashion e-commerce platform with a focus on AI/ML to power visual search. Navigating AWS was hard for Macty: expensive support, complex pricing choices, lack of automated backups for select devices, and delays in reaching support staff were some of the reasons why Macty moved to Oracle Cloud Infrastructure. Macty used Oracle's bare metal GPUs to train deep learning models. They used compartments to isolate customers and bill them correctly, and the DevCS platform (Terraform and Ansible) to update and check the environment from a deployment and configuration perspective.

Macty's CEO, Susana Zoghbi, presented the Macty success story with Ashish Mohindroo, VP of Oracle Cloud. She demonstrated the power of the Macty chatbot (through Facebook Messenger), built on Oracle's platform, which enables e-commerce vendors to engage with their customers better. The other solutions that Macty brings with their AI/API-powered platform are: a recommendation engine to complete the look in real time, find similar items, and customize the fashion look, plus customer analytics to connect e-commerce with the in-store experience. Any of these features can be used by e-commerce stores to delight their customers and up their game against big retailers.

And Now, Oracle Open World is Going to Dubai!

Ashish Mohindroo, VP of Oracle Cloud, will be keynoting the Next Big Things session again, this time at Oracle Open World in Dubai next week. He will be accompanied by Asser Smidt, founder of BotSupply (an Oracle Global Startup Ecosystem Partner). BotSupply assists companies with conversational bots, has an award-winning multi-lingual NLP, and is also a leader in conversational design. While Ashish and Asser will explore conversational AI and design via bots powered by Oracle Cloud, Ashish will also elaborate on how Oracle Blockchain and Oracle IoT are becoming building blocks for extending modern applications in his 'Bringing Enterprises to Blockchain' session. He will be joined by Ghassan Sarsak from ICS Financial Services and Thrasos Thrasyvoulou from the Oracle Cloud Platform App Dev team. Last, but not least, Ashish will explain how companies can build compelling user interfaces with augmented and virtual reality (AR/VR) and show how content is at the core of this capability. Oracle Content Cloud makes it easy for customers to build these compelling experiences on any channel: mobile, web, and other devices.

If you're in Dubai next week, swing by Open World to catch the action.


APIs

OSvC BUI Extension - How to create a library extension

Library is an existing extension type that you can find as part of the BUI Extensibility Framework. If you are not familiar with the library concept, it is a collection of non-volatile resources or implementations of behavior that can be invoked from other programs, in our case, across extensions that share the same resources or behaviors. For example, suppose your extension project requires common behavior such as a method for authentication, a global variable, or a method for tracing and logging. In this case, a library is a useful approach because it wraps all the common methods in a single extension that can be invoked from the others, and it prevents your project from inconsistently repeating methods across different extensions. The following benefits can be observed when this approach is used: centralized maintenance of core methods; reduced size of the other extensions, which may improve download time; standardized methods; and more.

Before we go further, let's see what this sample code delivers:

- Library (myLibrary.js): Includes common implementations of behavior, such as a method to trace, a method to return authentication credentials, and a method to execute ROQL queries.
- myGlobalHeaderMenu (init.html): Initializes the required JavaScript files. (If you have experience with require.js, you are probably wondering why we don't use it here. We can work with require.js in another post; the library concept is still needed either way.)
- myGlobalHeaderMenu (js/myGlobalHeaderMenu.js): Creates our user interface extension. We want to see a header menu with a thumbs-up icon, as we did before. Since this is sample code, we want something simple that triggers our methods implemented as a library, so we can see it in action.

The global header menu invokes a trace-log function and a ROQL query function implemented as part of the library sample code. When the thumbs-up icon is clicked, a ROQL query statement ("select count(*) from accounts") is passed as a parameter to a function implemented in the library. The result is presented by another library behavior, which was defined to trace any customization. To turn the trace log function on, we've implemented a local storage item (localStorage.setItem('debugMyExtension', true);), as you can see in the animated GIF below. It will make more sense in the next section of this post, where you can read the code with its comments to understand the logic under the hood. For now, let's see what you should expect when this sample code is uploaded to your site.

Here are the sample codes to create a Global Header Menu and a Library. Please download them from the attachment and add each one as an Agent Browser UI extension: select Console for the myGlobalHeaderMenu extension, with init.html as the init file. Then upload myLibrary.js as a new extension (the extension name should be Utilities) and select Library as the extension type.

Library

Here is the code implemented in myLibrary.js. Read the comments for a better understanding.

```javascript
/* As mentioned in other posts, we want to keep the app name and app version
   consistent for each extension. Later, it will help us to better troubleshoot
   and read the logs provided by BUI Extension Log Viewer. */
var appName = "UtilityExtension";
var appVersion = "1.0";

/* We have created this function in order to troubleshoot our extensions.
   You don't want to have your extension tracing for all agents, so in this
   sample code we use local storage to check whether trace mode is on or off.
   To turn tracing on, with the console object open, set a local item as
   follows: localStorage.setItem('debugMyExtension', true); */
let myExtensions2log = function(e){
    if (localStorage.getItem('debugMyExtension') == 'true')
        window.console.log("[My Extension Log]: " + e);
}

/* Authentication is required to connect to Oracle Service Cloud APIs. This
   function returns the current session token and the REST API end-point;
   you don't want to have this information hard-coded. */
let myAuthentication = new Promise(function(resolve, reject){
    ORACLE_SERVICE_CLOUD.extension_loader.load(appName, appVersion).then(function(extensionProvider){
        extensionProvider.getGlobalContext().then(function(globalContext){
            _urlrest = globalContext.getInterfaceServiceUrl("REST");
            _accountId = globalContext.getAccountId();
            globalContext.getSessionToken().then(function(sessionToken){
                resolve({'sessionToken': sessionToken, 'restEndPoint': _urlrest, 'accountId': _accountId});
            });
        });
    });
});

/* This function receives a ROQL statement and returns the result object.
   With this function, other extensions can send a ROQL statement and receive
   a JSON object as the result. */
let myROQLQuery = function(param){
    return new Promise(function(resolve, reject){
        var xhr = new XMLHttpRequest();
        myAuthentication.then(function(result){
            xhr.open("GET", result['restEndPoint'] + "/connect/latest/queryResults/?query=" + param, true);
            xhr.setRequestHeader("Authorization", "Session " + result['sessionToken']);
            xhr.setRequestHeader("OSvC-CREST-Application-Context", "UtilitiesExtension");
            xhr.onload = function(e){
                if (xhr.readyState === 4) {
                    if (xhr.status === 200) {
                        var obj = JSON.parse(xhr.responseText);
                        resolve(obj);
                    } else {
                        reject('myROQLQuery from Utilities Library has failed');
                    }
                }
            }
            xhr.onerror = function(e){
                console.error(xhr.statusText);
            };
            xhr.send();
        });
    });
}
```

myGlobalHeaderMenu

init.html

This is the init.html file. The important part here is to understand the src path. If you are not familiar with src paths, here is a quick explanation. Notice that each extension resides in its own directory, and the idea is to work with directory paths:

/ = root directory
. = this location
.. = up a directory
./ = current directory
../ = parent of current directory
../../ = two directories backwards

In our case, it is ./../[Library Extension Name]/[Library File Name] -> "./../Utilities/myLibrary.js"

```html
<!-- This HTML file loads the files required to run this extension. -->
<!-- myLibrary is loaded first. This file has the common resources needed by the second .js file. -->
<script src="./../Utilities/myLibrary.js"></script>
<!-- myGlobalHeaderMenu is the main extension, which creates the Global Header Menu and calls myLibrary for dependent resources. -->
<script src="./js/myGlobalHeaderMenu.js"></script>
```

js/myGlobalHeaderMenu.js

```javascript
let myHeaderMenu = function(){
    ORACLE_SERVICE_CLOUD.extension_loader.load("GlobalHeaderMenuItem", "1.0").then(function(sdk){
        sdk.registerUserInterfaceExtension(function(IUserInterfaceContext){
            IUserInterfaceContext.getGlobalHeaderContext().then(function(IGlobalHeaderContext){
                IGlobalHeaderContext.getMenu('').then(function(IGlobalHeaderMenu){
                    // Create the thumbs-up icon and attach it to the header menu.
                    var icon = IGlobalHeaderMenu.createIcon("font awesome");
                    icon.setIconClass("fas fa-thumbs-up");
                    IGlobalHeaderMenu.addIcon(icon);
                    // On click: run a ROQL query via the library, then trace each value.
                    IGlobalHeaderMenu.setHandler(function(IGlobalHeaderMenu){
                        myROQLQuery("select count(*) from accounts").then(function(result){
                            result["items"].forEach(function(rows){
                                rows["rows"].forEach(function(value){
                                    myExtensions2log(value);
                                });
                            });
                        });
                    });
                    IGlobalHeaderMenu.render();
                });
            });
        });
    });
}
myHeaderMenu();
```

We hope that you find this post useful. We encourage you to try the sample code from this post and let us know what modifications you have made to enhance it. What other topics would you like to see next? Let us know in the comments below.


DevOps

Code Merge as part of a Build Pipeline in Oracle Developer Cloud

This blog will help you understand how to use code merge as part of a build pipeline in Oracle Developer Cloud, using out-of-the-box build job functionality only. It should also show how useful this feature can be for developers in their day-to-day development work.

Creating a New Git Repository

Click Project Home in the navigation bar. In the Project page, select a project to use (I chose DemoProject), and then click the + Create Repository button to create a new repository. I'll use this repository for the code merge in this blog. In the New Repository dialog box, enter a name for the repository. I used MyMergeRepo, but you can use whatever name you want. Then select the Initialize repository with README file option and click the Create button.

Creating the New Branch

Click Git in the navigation bar. In the Refs view of the Git page, select MyMergeRepo.git from the Repositories drop-down list. Click the + New Branch button to create a new branch. In the New Branch dialog, enter a unique branch name. I used change, but you can use any name you want. Select the appropriate base branch from the drop-down list; for this repository, the master branch is the only option we have. Click the Create button to create the new branch.

Creating the Build Job Configuration

In the navigation bar, click Builds. In the Jobs tab, click the + Create Job button to create a new build job. In the New Job dialog, enter a unique name for the job. I'll use MergeCode, but you can enter any name you want. Select the Use for Merge Request checkbox and the Create New option, and then select any software template from the drop-down list. You don't need a specific software bundle to execute a merge; the required software bundle, which by default is part of any software template you create, is sufficient. Finally, click the Create Job button. Note: If you are new to creating Build VM templates and Build VMs, see Set Up the Build System.

When you create a build job with the Use for Merge Request checkbox selected, the merge request parameters are placed in the Repository and Branch fields of the Source Control tab. You can go ahead and select the Automatically perform build on SCM commit checkbox. In the Build Parameters tab, you'll notice that merge request parameters like GIT_REPO_URL, GIT_REPO_BRANCH, and MERGE_REQ_ID were added automatically. After reviewing them, click the Save button.

Creating the Merge Request

In the navigation bar, click Merge Requests, then click the + Create Merge Request button. In the New Merge Request wizard, select the Git repository (MyMergeRepo), the target branch (master), and the review branch (change). You won't see any commits because we haven't made any yet. Click the Next button to advance to the second page. On the Details page, select MergeCode for Linked Builds and select a reviewer. If you created an issue that needs to be linked to the merge request, link it with Linked Issues. Click the Next button to advance to the last page. You can change the description for the merge request or just use the default one. Then click the Create button to create the merge request. In the Linked Builds tab, you should see the MergeCode build job as the linked build job.

Changing a File and Committing the Change to the Git Repository

In the Git page, select the MyMergeRepo.git repository from the repository drop-down list and the change branch from the branches drop-down list. Then click the README.md file link. Click the pencil icon to edit the file.
Add some text (any text will do), and then click the Commit button. The code commit triggers the MergeCode build.

When a build of a linked job runs, a comment is automatically added to the Conversation tab. When the MergeCode build completes successfully, it auto-approves the merge request and adds itself to the Approve section of the Review Status list, where the merge request waits for approval from the assigned reviewer. Once the reviewer approves the merge request, the review branch code is ready to be merged into the target branch. To merge the code, click the Merge button. Note: For this merge request, Alex Admin, the user who is logged in, is the reviewer.

By including merge request parameters in a build job, you can be sure that every commit is automatically validated to be free of conflicts and approved. This comes in handy when multiple commits are linked to a merge request through a merge-request-enabled build job. The merge request still waits for the assigned reviewer(s) to review the code, approve the changes, and merge the code into the target branch. This feature helps developers collaborate efficiently with their team members in their day-to-day development activities. Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

DevOps

New Features in Oracle Developer Cloud - January 2019

We are happy to announce the January update for Oracle Developer Cloud - your team collaboration and CI/CD platform in the Oracle cloud. Here is a list of the key new features you can now leverage.

Oracle Cloud Infrastructure Compute Based Build Servers
Customers can now leverage OCI-based compute and storage instances to run their build pipelines and store their CI/CD artifacts. New wizards allow you to configure OCI-based environments with ease.

Project Level Password Variables
You can now configure project-level variables that store passwords. You then refer to these password variables in your build scripts, build steps, and so on. When a password changes in your system, you only need to update it in a single place in Developer Cloud Service, and the new password will be used everywhere it is referenced.

Draft Mode for Wiki Pages
We added support for a draft status for wiki pages, allowing you to create and edit pages before you publish them. As you edit a wiki page, DevCS auto-saves the content, so even if you leave the page without publishing it, you can return to the draft later to complete it and publish the content.

New Organization Page
Organization administrators have an updated Organization page that provides easy access to all the admin tasks, making it simpler to configure the environment for your organization, including managing build servers, build templates, project stats, and more.

There's More
These are just some of the highlights in this new version. Make sure to read about the rest in the "What's New" section of the documentation. Also check out the new tutorials and docs that help you leverage the new features. If you run into any questions, you can ask them on our new public Slack channel or our Oracle Cloud Customer Connect Forum.
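To illustrate how a password variable might be consumed, here is a minimal sketch of a Unix shell build step. It assumes the variable (DEPLOY_PASSWORD is a hypothetical name) is surfaced to the step as an environment variable; check the DevCS documentation for the exact reference syntax in your version.

# Hedged sketch: DEPLOY_PASSWORD is a hypothetical project-level password
# variable, assumed here to be exposed to the build step as an environment
# variable. The endpoint URL is illustrative only.
curl -u "deployer:${DEPLOY_PASSWORD}" -X POST "https://ci.example.com/api/deploy"

The point of the feature is exactly this: the literal password lives in one project-level variable rather than in every script that needs it.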

New Features in Oracle Visual Builder for the New Year

We are happy to announce the December 2018 release of Oracle Visual Builder (known as 18.4.5). This version adds several key new features and implements many enhancements to the overall development experience in Oracle's high-productivity JavaScript development platform. Here are some of the new features you can now leverage:

Integration Cloud Service Catalog
If you are using Oracle Integration Cloud to connect and access data from various sources, Visual Builder now makes it even simpler to leverage those integrations. The new integrations service catalog lists integrations that are defined in your Oracle Integration Cloud and lets you easily add them as sources of data and operations in your VB application. This is a nice addition to the Oracle SaaS service catalog already available in Oracle VB.

iPad/Tablet Support for Mobile Apps
We extended our mobile packaging capabilities to support specific packaging for iPads in addition to iPhones. In addition, the UI emulator in VB now supports an iPad/tablet size preview as another option in the screen sizes menu.

Nested Flows
To help you further encapsulate flows, Visual Builder now supports the concept of nested flows. Nested flows allow you to create sub-flows that are contained inside another flow and can be used by various pages in that "master" flow. These sub-flows are then embedded into a flow-container region on a page. At runtime you can switch the sub-flow that is shown in such a region, giving you a more dynamic interface. This encapsulation also helps when multiple developers need to work on various sections of an application - eliminating potential conflicts.

Visual Builder Add-in for Excel
Sometimes neither web nor mobile is the right UI for your customers; maybe they want to work with your data directly from spreadsheets - well, now they can. With the Visual Builder Add-in for Excel, you can directly access business objects you created in Visual Builder from an Excel spreadsheet and query and manipulate the data. The plug-in gives you a complete development environment embedded in Excel to create interactions with your business objects.

JET 6 Support
Visual Builder now supports the latest Oracle JET 6.0 set of components and capabilities. This applies to both design time and runtime. Note that existing applications will continue to use their current JET version unless you open them with the new version to make modifications - when you do open them, we'll automatically upgrade them to use JET 6.

Vanity URL
Visual Builder lets you define a specific URL that will be used for your published web applications. This means that if you own the URL www.yourname.com, for example, you can specify that your apps will show up using this URL. Check out the application settings for more information on this capability.

But Wait, There's More...
There are many, many other enhancements in every area of Oracle Visual Builder - you can read about them in our what's new book, or even better, just try out Visual Builder and experience it on your own!

Developers

Announcing Oracle Cloud Infrastructure Resource Manager

We are excited to announce a new service, Oracle Cloud Infrastructure Resource Manager, that makes it easy to manage your infrastructure resources on Oracle Cloud Infrastructure. Resource Manager enables you to use infrastructure as code (IaC) to automate provisioning for infrastructure resources such as compute, networking, storage, and load balancing. Using IaC is a DevOps practice that makes it possible to provision infrastructure quickly, reliably, and at any scale. Changes are made in code, not in the target systems. That code can be maintained in a source control system, so it’s easy to collaborate, track changes, and document and reverse deployments when required.

HashiCorp Terraform
To describe infrastructure, Resource Manager uses HashiCorp Terraform, an open source project that has become the dominant standard for describing cloud infrastructure. Oracle is making a strong commitment to Terraform and will enable all its cloud infrastructure services to be managed through Terraform. Earlier this year we released the Terraform Provider, and we have started to submit Terraform modules for Oracle Cloud Infrastructure to the Terraform Module Registry. Now we are taking the next step by providing a managed service.

Managed Service
In addition to the provider and modules, Oracle now provides Resource Manager, a fully managed service for operating Terraform. Resource Manager integrates with Oracle Cloud Infrastructure Identity and Access Management (IAM), so you can define granular permissions for Terraform operations. It further provides state locking, gives users the ability to share state, and lets teams collaborate effectively on their Terraform deployments. Most of all, it makes operating Terraform easier and more reliable.

With Resource Manager, you create a stack before you run Terraform actions. Stacks enable you to segregate your Terraform configuration: a single stack represents a set of Oracle Cloud Infrastructure resources that you want to create together. Each stack individually maps to a Terraform state file that you can download. To create a stack, you define a compartment and upload the Terraform configuration as a zip file containing all the .tf files that define the resources you want to create. You can optionally include a variables.tf file or define your variables in key/value format in the console.

After your stack is created, you can run different Terraform actions, called jobs, on it: plan, apply, and destroy. You can also update the stack by uploading a new zip file, download its configuration, and delete the stack when required.

Plan: Resource Manager parses your configuration and returns an execution plan that lists the Oracle Cloud Infrastructure resources describing the end state.
Apply: Resource Manager creates the stack's resources based on the results of the plan job. After this action is completed, you can see the resources that have been created successfully in the defined compartments.
Destroy: Terraform attempts to delete all the resources in the stack.

You can define permissions on your stacks and jobs through IAM policies. You can define granular permissions and let only certain users or groups perform actions like plan, apply, or destroy.

Availability
Resource Manager will become generally available in early 2019. We are currently providing access to selected customers through our Cloud Native Limited Availability Program. The currently available early version offers access to the Compute, Networking, Block Storage, Object Storage, IAM, and Load Balancing services. To learn more about Resource Manager or to request access to the technology, please register.
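For a sense of what a stack's zip file might contain, here is a minimal, illustrative main.tf. The variable and resource names are assumptions for illustration, not taken from the announcement; the OCI provider's VCN resource is used only as a small example.

# main.tf - hedged sketch of a one-resource stack
variable "compartment_ocid" {}

resource "oci_core_vcn" "demo" {
  # Creates a single VCN in the given compartment
  compartment_id = "${var.compartment_ocid}"
  cidr_block     = "10.0.0.0/16"
  display_name   = "resource-manager-demo"
}

Running a plan job against this stack would report one resource to add; a subsequent apply job would create it, and a destroy job would remove it.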

Community

Building the Oracle Code One 2018 Escape Rooms

By Chris Bensen, Cloud Experience Developer at Oracle

I’ve built a lot of crazy things in my life, but the “Escape Rooms” for Code One 2018 might just be one of the craziest. And funnest! The initial idea for our escape room came from Toni Epple, who built a Java-based escape room for a German conference. We thought it was rather good, and escape rooms are trendy and fun, so we decided to dial it up to eleven for 2018 Code One attendees. The concept was to have two escape rooms, one with a Java developer theme and one with the superhero theme of the developer keynote, and that’s when Duke’s Lab and Superhero Escape were born.

We wanted to build a demo that was different from what is normally at a conference and make the rooms feel like real rooms. I actually built two rooms with 2x4 construction in my driveway. Each room consisted of two eight-foot cubes that could be split in two pieces for easy shipping. And shipping wasn’t easy, as we only had 1/4” of clearance! Inside, the walls were faux brick to give the Brooklyn, New York look and feel where many of the Marvel comics take place. The faux brick is a particle board product that can be purchased at your favorite local hardware store, and it’s fire retardant, so it’s a turnkey solution.

Many escape rooms contain physical puzzles, and with Code One being a conference about programming languages, it seemed fitting to infuse electronics and software into each puzzle. Each room was powered by a 24 volt, 12 amp power supply, the same power supply used by Ultimaker 3D printers. Using voltage regulators, this was stepped down to 12 volts and, in some cases, 5 and 3.3 volts depending on the needs. Throughout the room, conduit was run with custom 3D printed outlets to power each device, using aviation connectors because they are super secure. The project took just over two months to build; over 100 unique 3D printed parts were created, and four 3D printers ran nearly 24x7 to produce over 400 parts total. Eight Arduinos and five Raspberry Pis ran the rooms, with various electronics for sensors, displays, sounds, and movement. The custom software was written using Python, Bash, C/C++, and Java.

At the heart of Duke’s Lab, and its final puzzle, is a wooden crate with two locks. The intention was for it to look like something out of a Bond film or Indiana Jones. Once you open it, you are presented with two devices, as seen in the photo below. I wouldn’t want to ruin the surprise, but let’s just say most people who open the crate get a little heart thump as the countdown timer starts ticking!

At the heart of Superhero Escape we have The Mighty Thor’s hammer Mjölnir, Captain America’s shield, and Iron Man’s arc reactor. The idea was to bring these three props to life and integrate them into an escape room of super proportions. And given the number of people who solved the puzzle and exited the room with Cap’s shield on one arm and Mjölnir in the other, I would say it was a resounding success! The goal and final puzzle of Superhero Escape is to wield Mjölnir. Mjölnir was held to the floor of the escape room by a very powerful electromagnet. At the heart of the hammer is a piece of solid 1”-thick steel, custom machined to my specifications and connected to a pipe. The shell is one solid 3D print that took over 24 hours and an entire kilogram of filament. For those that don’t know, that is an entire roll. Exactly an entire roll!

As with any project, I learned a lot. I leveraged all my knowledge of digital fabrication, traditional fabrication, electronics, programming, woodworking, and puzzles, and did things I wasn’t sure were possible, especially in the timeframe we had. That’s what being an Oracle Groundbreaker is all about. And for all those Groundbreakers out there, keep dreaming and learning, because you never know when you’ll be asked to build something amazing that takes every bit of knowledge you have.

Announcing Oracle Functions

Photo by Tim Easley on Unsplash

[First posted on the Oracle Cloud Infrastructure Blog]

At KubeCon 2018 in Seattle, Oracle announced Oracle Functions, a new cloud service that enables enterprises to build and run serverless applications in the cloud. Oracle Functions is a serverless platform that makes it easy for developers to write and deploy code without having to worry about provisioning or managing compute and network infrastructure. Oracle Functions manages all the underlying infrastructure automatically and scales it elastically to service incoming requests. Developers can focus on writing code that delivers business value.

Pay-per-use
Serverless functions change the economic model of cloud computing: customers are charged only for the resources used while a function is running. There’s no charge for idle time! This is unlike the traditional approach of deploying code to a user-provisioned and managed virtual machine or container that typically runs 24x7 and must be paid for even when it’s idle. Pay-per-use makes Oracle Functions an ideal platform for intermittent workloads or workloads with spiky usage patterns.

Open Source
Open source has changed the way businesses build software, and the same is true for Oracle. Rather than building yet another proprietary cloud functions platform, Oracle chose to invest in the Apache 2.0 licensed open source Fn Project and build Oracle Functions on Fn. With this approach, code written for Oracle Functions will run on any Fn server. Functions can be deployed to Oracle Functions or to a customer-managed Fn cluster on-prem or even on another cloud platform. That said, the advantage of Oracle Functions is that it’s a serverless offering, which eliminates the need for customers to manually manage an Fn cluster or the underlying compute infrastructure. But thanks to open source Fn, customers will always have the choice to deploy their functions to whatever platform offers the best price and performance. We’re confident that platform will be Oracle Functions.

Container Native
Unlike most other functions platforms, Oracle Functions is container native, with functions packaged as Docker container images. This approach supports a highly productive developer experience for new users while allowing power users to fully customize their function runtime environment, including installing any required native libraries. The broad Docker ecosystem and the flexibility it offers let developers focus on solving business problems and not on figuring out how to hack around restrictions frequently encountered on proprietary cloud function platforms. Because functions are deployed as Docker containers, Oracle Functions is seamlessly integrated with the Docker Registry v2 compliant Oracle Cloud Infrastructure Registry (OCIR), which is used to store function container images. Like Oracle Functions, OCIR is also both serverless and pay-per-use: you simply build a function and push the container image to OCIR, which charges just for the resources used.

Secure
Security is the top priority for Oracle Cloud services, and Oracle Functions is no different. All access to functions deployed on Oracle Functions is controlled through Oracle Identity and Access Management (IAM), which allows both function management and function invocation privileges to be assigned to specific users and user groups. And once deployed, functions themselves may only access resources on VCNs in their compartment that they have been explicitly granted access to. Secure access is also the default for function container images stored in OCIR. Oracle Functions works with OCIR private registries to ensure that only authorized users are able to access and deploy function containers. In each of these cases, Oracle Functions takes a “secure by default” approach while providing customers full control over their function assets.

Getting Started
Oracle Functions will be generally available in 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Oracle Functions or to request access, please let us know by registering with this form. You can also learn more about the underlying open source technology used in Oracle Functions at FnProject.io.
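Because Oracle Functions is built on Fn, the open source Fn CLI gives a feel for the developer workflow. The sketch below uses the Fn Project tooling against an Fn server; the app and function names are made up, CLI syntax varies slightly by Fn version, and the exact steps for the managed service may differ.

# Hedged sketch using the open source Fn CLI (names are illustrative)
fn init --runtime node hello-fn       # scaffold a function and its func.yaml
cd hello-fn
fn create app demo-app                # create an application to group functions
fn deploy --app demo-app --local      # build the function's container image and deploy it
fn invoke demo-app hello-fn           # invoke the deployed function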

Cloud

Announcing Oracle Cloud Native Framework at KubeCon North America 2018

This blog was originally published at https://blogs.oracle.com/cloudnative/

At KubeCon + CloudNativeCon North America 2018, Oracle announced the Oracle Cloud Native Framework - an inclusive, sustainable, and open cloud native development solution with deployment models for public cloud, on premises, and hybrid cloud. The Oracle Cloud Native Framework is composed of the recently announced Oracle Linux Cloud Native Environment and a rich set of new Oracle Cloud Infrastructure cloud native services, including Oracle Functions, an industry-first, open serverless solution available as a managed cloud service based on the open source Fn Project.

With this announcement, Oracle is the only major cloud provider to deliver and support a unified cloud native solution across managed cloud services and on-premises software, for public cloud (Oracle Cloud Infrastructure), hybrid cloud, and on-premises users, supporting seamless, bi-directional portability of cloud native applications built anywhere on the framework. Since the framework is based on open, CNCF-certified, conformant standards, it will not lock you in: applications built on the Oracle Cloud Native Framework are portable to any Kubernetes conformant environment – on any cloud or infrastructure.

Oracle Cloud Native Framework – What is It?
The Oracle Cloud Native Framework provides a supported solution of Oracle Cloud Infrastructure cloud services and Oracle Linux on-premises software based on open, community-driven CNCF projects. These are built on an open Kubernetes foundation – among the first K8s products released and certified last year. Six new Oracle Cloud Infrastructure cloud native services are being announced as part of this solution, building on the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services.

Cloud Native at a Crossroads – Amazing Progress
We should all pause and consider how far the cloud native ecosystem has come – evidenced by the scale, excitement, and buzz around the sold-out KubeCon conference this week and the success and strong foundation that Kubernetes has delivered! We are living in a golden age for developers – a literal "First Wave" of cloud native deployment and technology - being shaped by three forces coming together and creating massive potential:

Culture: The DevOps culture has fundamentally changed the way we develop and deploy software and how we work together in application development teams. With almost a decade's worth of work and metrics to support the methodologies and cultural shifts, it has resulted in many related off-shoots, alternatives, and derivatives including SRE, DevSecOps, AIOps, GitOps, and NoOps (the list will go on, no doubt).
Code: Open source and the projects that have been battle-tested and spun out of webscale organizations like Netflix, Google, Uber, Facebook, and Twitter have been democratized under the umbrella of organizations like the CNCF (Cloud Native Computing Foundation). This grants the same access and opportunities to citizen developers playing or learning at home as it does to enterprise developers in the largest of orgs.
Cloud: Unprecedented compute, network, and storage are available in today's cloud – and that power continues to grow with a never-ending explosion in scale, from bare metal to GPUs and beyond. This unlocks new applications for developers in areas such as HPC apps, Big Data, AI, blockchain, and more.

Cloud Native at a Crossroads – Critical Challenges Ahead
Despite all the progress, we face new challenges in reaching beyond these first wave successes. Many developers and teams are being left behind as the culture changes. Open source offers thousands of new choices and options, which on the surface create more complexity than a closed, proprietary path where everything is pre-decided for the developer. The rush towards a single-source cloud model has left many with cloud lock-in issues, resulting in diminished choices and rising costs – the opposite of what open source and cloud are supposed to provide. The challenges below mirror the positive forces above and are reflected in the August 2018 CNCF survey:

Cultural Change for Developers: on-premises, traditional development teams are being left behind. Cultural change is slow and hard.
Complexity: too many choices, too hard to do yourself (maintain, administer), too much too soon?
Cloud Lock-in: proprietary single-source clouds can lock you in with closed APIs, services, and non-portable solutions.

The Cloud Native Second Wave – Inclusive, Sustainable, Open
What's needed is a different approach:

Inclusive: can include cloud and on-prem, modern and traditional, dev and ops, startups and enterprises
Sustainable: managed services versus DIY; open but curated, supported, enterprise-grade infrastructure
Open: truly open, community-driven, and not based on proprietary tech or self-serving OSS extensions

Introducing the Oracle Cloud Native Framework – What's New?
The Oracle Cloud Native Framework spans public cloud, on-premises, and hybrid cloud deployment models – offering choice and uniquely meeting the broad deployment needs of developers. It includes Oracle Cloud Infrastructure Cloud Native Services and the Oracle Linux Cloud Native Environment. On top of the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services, a rich set of new Oracle Cloud Infrastructure cloud native services has been announced, with services across provisioning, application definition and development, and observability and analysis.

Application Definition and Development
Oracle Functions: A fully managed, highly scalable, on-demand, functions-as-a-service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the open source Fn Project. Multi-tenant and container native, Oracle Functions lets developers focus on writing code to meet business needs without having to manage or even address the underlying infrastructure. Users only pay for execution, not for idle time.
Streaming: Enables applications such as supply chain, security, and IoT to collect data from many sources and process it in real time. Streaming is a highly available, scalable, multi-tenant platform that makes it easy to collect and manage streaming data.

Provisioning
Resource Manager: A managed Oracle Cloud Infrastructure provisioning service based on industry-standard Terraform. Infrastructure as code is a fundamental DevOps pattern, and Resource Manager is an indispensable tool that automates configuration and increases productivity by managing infrastructure declaratively.

Observation and Analysis
Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. Monitoring provides predefined metrics and dashboards, and also supports a service API to obtain a top-down view of the health, performance, and capacity of the system. The monitoring service includes alarms that track these metrics and act when they vary or exceed defined thresholds, helping users meet service level objectives and avoid interruptions.
Notification Service: A scalable service that broadcasts messages to distributed components, such as email and PagerDuty. Users can easily deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers through a publish-subscribe pattern.
Events: Based on the CNCF CloudEvents standard, Events enables users to react to changes in the state of Oracle Cloud Infrastructure resources, whether initiated by the system or by user action. Events can store information to Object Storage, or trigger Functions to take actions, Notifications to inform users, or Streaming to update external services.

Use Cases for the Oracle Cloud Native Framework: Inclusive, Sustainable, Open

Inclusive: The Oracle Cloud Native Framework includes both cloud and on-prem, supports modern and traditional applications, supports both dev and ops, and can be used by startups and enterprises. As an industry, we need to create more on-ramps to the cloud native freeway – in particular by reaching out to teams and technologies and connecting cloud native to what people know and work on every day. The WebLogic Server Operator for Kubernetes is a great example of just that: it enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management. As another example, the Helidon project creates a microservice architecture and framework for Java apps to move more quickly to cloud native. Many Oracle Database customers are connecting cloud native applications based on Kubernetes for new web front-ends and AI/big data processing back-ends, and the combination of the Oracle Autonomous Database and OKE creates a new model for self-driving, self-securing, and self-repairing cloud native applications. For example, using Kubernetes service broker and service catalog technology, developers can simply connect Autonomous Transaction Processing applications into OKE services on Oracle Cloud Infrastructure.

Sustainable: The Oracle Cloud Native Framework provides a set of managed cloud services and supported on-premises solutions, open and curated, built on an enterprise-grade infrastructure. New open source projects pop up every day, and the rate of change of existing projects like Kubernetes is extraordinary. While the landscape grows, the industry and vendors must face the resulting complexity, as enterprises and teams can only learn, change, and adopt so fast. A unified framework helps reduce this complexity through curation and support. Managed cloud services are the secret weapon for reducing the administration, training, and learning curve issues enterprises have had to shoulder themselves. While a do-it-yourself approach has until recently been their only choice, managed cloud services such as OKE give developers a chance to leapfrog into cloud native without a long and arduous learning curve. A sustainable model, built on an open, enterprise-grade infrastructure, gives enterprises a secure, performant platform from which to build real hybrid cloud deployments, including these five key hybrid cloud use cases:

Development and DevOps: dev/test in the cloud, production on-prem
Application Portability and Migration: enables bi-directional cloud native application portability (on-prem to cloud, cloud to on-prem) and lift-and-shift migrations. The Oracle MySQL Operator for Kubernetes is an extremely popular solution that simplifies the portability and integration of MySQL applications into cloud native tooling. It enables the creation and management of production-ready MySQL clusters based on a simple declarative configuration format, including operational tasks such as database backups and restoring from an existing backup. The MySQL Operator simplifies running MySQL inside Kubernetes, enabling further application portability and migrations.
HA/DR: disaster recovery or high availability sites in the cloud, production on-prem
Workload-Specific Distribution: choose where you want to run workloads, on-prem or cloud, based on specific workload type (e.g., based on latency, regulation, new vs. legacy)
Intelligent Orchestration: more advanced hybrid use cases require more sophisticated distributed application intelligence and federation – these include cloud bursting and Kubernetes federation

Open: Over the course of the last few years, development teams have typically chosen to embrace a single-source cloud model to move fast and reduce complexity – in other words, the quick and easy solution. The price they are paying now is cloud lock-in resulting from proprietary services, closed APIs, and non-portable solutions. This is the exact opposite of where we are headed as an industry – fueled by open source, CNCF-based, community-driven technologies. An open ecosystem enables not only a hybrid cloud world but a truly multi-cloud world – and that is the vision that drives the Oracle Cloud Native Framework!

DevOps

Deploy containers on Oracle Container Engine for Kubernetes using Developer Cloud

In my previous blog, I described how to use Oracle Developer Cloud to build the Node.js microservice Docker image and push it to DockerHub. This blog will help you understand how to use Oracle Developer Cloud to deploy the Docker image pushed to DockerHub on Container Engine for Kubernetes.

Container Engine for Kubernetes
Container Engine for Kubernetes is a developer-friendly, container-native, enterprise-ready managed Kubernetes service for running highly available clusters with the control, security, and predictable performance of Oracle Cloud Infrastructure. Visit the following link to learn about Oracle’s Container Engine for Kubernetes: https://cloud.oracle.com/containers/kubernetes-engine

Prerequisites for Kubernetes Deployment
Access to an Oracle Cloud Infrastructure (OCI) account
A Kubernetes cluster set up on OCI (this tutorial explains how to set up a Kubernetes cluster on OCI)

Set Up the Environment: Create and Configure Build VM Templates and Build VMs
You’ll need to create and configure the Build VM template and Build VM with the required software, which will be used to execute the build job. Click the user avatar, then select Organization from the menu. Click VM Templates, then New Template. In the dialog that pops up, enter a template name, such as Kubernetes Template, select “Oracle Linux 7” for the platform, then click the Create button. After the template has been created, click Configure Software. Select Kubectl and OCIcli (you’ll be asked to add Python3 3.6 as well) from the list of software bundles available for configuration, then click + to add these software bundles to the template. Click the Done button to complete the software configuration for that Build VM template. From the Virtual Machines page, click +New VM and, in the dialog that pops up, enter the number of VMs you want to create and select the VM template you just created (Kubernetes Template). Click the Add button to add the VM.

Kubernetes Deployment Scripts
From the Project page, click the + New Repository button to add a new repository. After creating the repository, Developer Cloud will bring you to the Code page, with the NodejsKubernetes repository showing. Click the +File button to create a new file in the repository. (The README file in the repository was created when the project was created.) Copy the following script into a text editor and save the file as nodejs_micro.yaml.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nodejsmicro-k8s-deployment
spec:
  selector:
    matchLabels:
      app: nodejsmicro
  replicas: 1 # deployment runs 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nodejsmicro
    spec:
      containers:
      - name: nodejsmicro
        image: abhinavshroff/nodejsmicro4kube:latest
        ports:
        - containerPort: 80 # endpoint is at port 80 in the container
---
apiVersion: v1
kind: Service
metadata:
  name: nodejsmicro-k8s-service
spec:
  type: NodePort # exposes the service as a node port
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nodejsmicro

Click the Commit button to create the file and commit the code changes. Click the Commit button in the Commit changes dialog that displays. You should see the nodejs_micro.yaml file in the list of files for the NodejsKubernetes.git repository, as shown in the screenshot below.

Configuring the Build Job
Click Build on the navigation bar to display the Build page. Click the +New Job button to create a new build job. In the New Job dialog box, enter NodejsKubernetesDeploymentBuild for the job name and, from the Software Template drop-down list, select Kubernetes Template. Then click the Create Job button to create the build job.

After the build job has been created, you’ll be brought to the configuration screen. Click the Source Control tab and select NodejsKubernetes.git from the repository drop-down list. This is the same repository where you created the nodejs_micro.yaml file. Select master from the Branch drop-down list.

In the Builders tab, click the Add Builder drop-down and select OCIcli Builder from the drop-down list. To see what you need to fill in for each of the input fields in the OCIcli Builder form, and to find out where to retrieve these values, you can either read my “Oracle Cloud Infrastructure CLI on Developer Cloud” blog or the documentation link to the “Access Oracle Cloud Infrastructure Services Using OCIcli” section in Using Oracle Developer Cloud Service. Note: The values in the screenshot below have been obfuscated for security reasons.

Click the Add Builder drop-down list again and select Unix Shell Builder. In the text area of the Unix Shell Builder, add the following script, which downloads the Kubernetes config file and deploys the container on Oracle Kubernetes Engine (the cluster you created by following the instructions in my previous blog). Click the Save button to save the build job.

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.iad.aaaaaaaaafrgkzrwhtimldhaytgnjqhazdgmbuhc2gemrvmq2w --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl config view
kubectl get nodes
kubectl create -f nodejs_micro.yaml
sleep 120
kubectl get services nodejsmicro-k8s-service
kubectl get pods
kubectl describe pods

This script creates the .kube directory, uses the OCIcli command oci ce cluster create-kubeconfig to download the Kubernetes cluster config file, then sets the KUBECONFIG environment variable. The kubectl config view and get nodes commands just let you view the cluster configuration and see the node details of the cluster. The create command actually deploys the Docker container on the Kubernetes cluster. We run the get services and get pods commands to retrieve the IP address and the port of the deployed container. Note that the nodejsmicro-k8s-service name was previously configured in the nodejs_micro.yaml file. Note: The cluster OCID in the script above needs to be replaced with your own.

Click the Build Now button to start executing the Kubernetes deployment build. You can click the Build Log icon to view the build execution logs. After the build job executes successfully, you can examine the build log to retrieve the IP address and the port of the service deployed on the Kubernetes cluster. You’ll need to look for the IP address and the port under the deployment name you configured in the YAML file. Use the IP address and the port that you retrieved in the format shown below and see the output in your browser:

http://<IP Address>:port/message

Note: The message output you see may differ from what is shown here, based on what you coded in the Node.js REST application that was containerized.

So, now you’ve seen how Oracle Developer Cloud streamlines and simplifies the process of automating the build and deployment of Docker containers on Oracle Kubernetes Engine. Happy Coding!
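As an aside, instead of scanning the build log for connection details, you could append a step like the following to the shell builder. This is a sketch using standard kubectl options and the service name from the YAML file above.

# Print just the NodePort assigned to the service, plus node addresses
kubectl get svc nodejsmicro-k8s-service -o jsonpath='{.spec.ports[0].nodePort}'
kubectl get nodes -o wide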
**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Finding Symmetry

(Originally published on Medium)

Evolving the design of Eclipse Collections through symmetry. Got Eclipse Collections stickers?

Find the Missing Types
New Eclipse Collections types on the left add to the existing JDK types on the right. Eclipse Collections has a bunch of new types you will not find in the JDK. These types give developers useful functionality that they need. There is an extra cost to supporting additional container types, especially when you factor in having support for primitive types across these types. These missing types are important: they help Eclipse Collections return better return types for iteration patterns.

Type Symmetry
Eclipse Collections has pretty good symmetry between object and primitive types. The missing container types are fixed-size primitive arrays, primitive BiMaps, primitive Multimaps, and some of the primitive Intervals (only IntInterval exists today). String really should only exist as a primitive immutable collection of either char or int. Eclipse Collections has CharAdapter, CodePointAdapter, and CodePointList, which provide a rich set of iteration protocols that work with Strings.

API Symmetry
There is still much that can be done to improve the symmetry between the object and primitive APIs. There are some APIs that cannot be replicated without adding new types. For instance, it would be less than desirable to implement a primitive version of groupBy with the current Multimap implementations, because the only option would be to box the primitive Lists, Sets, or Bags. Since there are a large number of APIs in Eclipse Collections, I will only draw attention to some of the major APIs that do not currently have symmetry between object and primitive collections. The following methods are missing on the primitive iterables:

groupBy / groupByEach
countBy / countByEach
aggregateBy / aggregateInPlaceBy
partition
reduce / reduceInPlace
toMap
All “With” methods

Of all the missing APIs on primitive collections, perhaps the most subtle and yet glaring difference is the lack of “With” methods. It is not clear if the “With” methods would be as useful for primitive collections as they are for object collections. For some usage examples of the “With” methods on the object collection APIs, read my blog titled “Preposition Preference”. The “With” methods allow more APIs to be used with method references. This is what the signatures for some of the “With” methods might look like on IntList:

<P> boolean anySatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> boolean allSatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> boolean noneSatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> IntList selectWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> IntList rejectWith(IntObjectPredicate<? super P> predicate, P parameter);

Default Methods to the Rescue
The addition of default methods in Java 8 has been of tremendous help in increasing the symmetry between our object and primitive APIs. In Eclipse Collections 10.x we will be able to leverage default methods even more, as we now have the ability to use container factory classes in interfaces. The following examples show how the default implementations of countBy and countByWith have been optimized using the Bags factory.

default <V> Bag<V> countBy(Function<? super T, ? extends V> function)
{
    return this.countBy(function, Bags.mutable.empty());
}

default <V, P> Bag<V> countByWith(Function2<? super T, ? super P, ? extends V> function, P parameter)
{
    return this.countByWith(function, parameter, Bags.mutable.empty());
}

More on Eclipse Collections API Design
To find out more about the design of the Eclipse Collections API, check out this slide deck and the following presentation. You can also find a set of visualizations of the Eclipse Collections library in this blog post. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.
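To make the “With” pattern concrete, here is a hedged usage sketch on the object side, the symmetry the post wants for primitive collections. Person is a hypothetical class with a boolean isOlderThan(int age) method; anySatisfyWith and selectWith are existing object-collection APIs.

import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

// Person is a hypothetical class: new Person(String name, int age), with isOlderThan(int)
MutableList<Person> people = Lists.mutable.with(new Person("Alice", 33), new Person("Bob", 17));

// Method references work because the extra argument is passed separately
boolean anyAdults = people.anySatisfyWith(Person::isOlderThan, 18);
MutableList<Person> adults = people.selectWith(Person::isOlderThan, 18);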

Install Spinnaker with Halyard on Kubernetes

(Originally published on Medium)

This article will walk you through the steps to install and set up a Spinnaker instance on Kubernetes that’s behind a corporate proxy. We will use Halyard on Docker to manage our Spinnaker deployment. For a super quick installation, you can use Spinnaker’s Helm chart.

Prerequisites
Make sure to take care of these prerequisites before installing Spinnaker:
Docker 17.x with proxies configured (click here for OL setup)
A Kubernetes cluster (click here for OL setup)
Helm with RBAC enabled (click here for generic setup)

Install Halyard on Docker
Halyard is used to install and manage a Spinnaker deployment. In fact, all production-grade Spinnaker deployments require Halyard in order to properly configure and maintain Spinnaker. Let’s use Docker to install Halyard.

Create a Docker volume or a host directory to hold the persistent data used by Halyard. For the purposes of this article, let’s create a host directory and grant users full access:

mkdir halyard && chmod 747 halyard

Halyard needs to interact with your Kubernetes cluster, so we pass the $KUBECONFIG file to it. One way is to mount a host directory into the container that has your Kubernetes cluster details. Let’s create the directory “k8s”, copy the $KUBECONFIG file into it, and make it visible to the user inside the Halyard container:

mkdir k8s && cp $KUBECONFIG k8s/config && chmod 755 k8s/config

Time to download and run the Halyard Docker image:

docker run -p 8084:8084 -p 9000:9000 \
--name halyard -d \
-v /sandbox/halyard:/home/spinnaker/.hal \
-v /sandbox/k8s:/home/spinnaker/k8s \
-e http_proxy=http://<proxy_host>:<proxy_port> \
-e https_proxy=https://<proxy_host>:<proxy_port> \
-e JAVA_OPTS="-Dhttps.proxyHost=<proxy_host> -Dhttps.proxyPort=<proxy_port>" \
-e KUBECONFIG=/home/spinnaker/k8s/config \
gcr.io/spinnaker-marketplace/halyard:stable

Make sure to replace “<proxy_host>” and “<proxy_port>” with your corporate proxy values.

Log in to the “halyard” container to test the connection to your Kubernetes cluster:

docker exec -it halyard bash
kubectl get pods -n spinnaker

Optionally, if you want command completion, run the following inside the halyard container:

source <(hal --print-bash-completion)

Set provider to “Kubernetes”
In Spinnaker terms, to deploy applications we use integrations to specific cloud platforms. We have to configure Halyard and set the cloud provider to Kubernetes v2 (manifest based), since we want to deploy Spinnaker onto a Kubernetes cluster:

hal config provider kubernetes enable

Next we create an account. In Spinnaker, an account is a named credential Spinnaker uses to authenticate against an integration provider — Kubernetes in our case:

hal config provider kubernetes account add <my_account> \
--provider-version v2 \
--context $(kubectl config current-context)

Make sure to replace “<my_account>” with an account name of your choice. Save the account name in an environment variable $ACCOUNT. Next, we need to enable Halyard to use artifacts:

hal config features edit --artifacts true

Set deployment type to “distributed”
Halyard supports multiple types of Spinnaker deployments. Let’s tell Halyard that we need a distributed deployment of Spinnaker:

hal config deploy edit --type distributed --account-name $ACCOUNT

Set persistent store to “Minio”
Spinnaker needs a persistent store to save the continuous delivery pipelines and other configurations. Halyard lets you choose from multiple storage providers. For the purposes of this article, we will use “Minio”.

Let’s use Helm to install a simple instance of Minio. Run the command from outside the Halyard Docker container, on a node that has access to your Kubernetes cluster and Helm:

helm install --namespace spinnaker --name minio --set accessKey=<access_key> --set secretKey=<secret_key> stable/minio

Make sure to replace “<access_key>” and “<secret_key>” with values of your choosing. If you are using a local k8s cluster with no real persistent volume support, you can pass “persistence.enabled=false” as an additional --set flag to the previous Helm command. As the flag suggests, if Minio goes down, you will lose your changes.

According to the Spinnaker docs, Minio does not support versioning objects, so let’s disable versioning in the Halyard configuration. Back in the Halyard Docker container, run these commands:

mkdir ~/.hal/default/profiles && \
touch ~/.hal/default/profiles/front50-local.yml

Add the following to the front50-local.yml file:

spinnaker.s3.versioning: false

Now run the following command to configure the storage provider:

echo $MINIO_SECRET_KEY | \
hal config storage s3 edit --endpoint http://minio:9000 \
--access-key-id $MINIO_ACCESS_KEY \
--secret-access-key

Make sure to set the $MINIO_ACCESS_KEY and $MINIO_SECRET_KEY environment variables to the <access_key> and <secret_key> values that you used when you installed Minio. Finally, let’s enable the s3 storage provider:

hal config storage edit --type s3

Set version to “latest”
You have to select a specific version of Spinnaker and configure Halyard so it knows which version to deploy. You can view the available versions by running this command:

hal version list

Pick the latest version number from the list (or any other version that you want to deploy) and update Halyard:

hal config version edit --version <version>

Deploy Spinnaker
At this point, Halyard should have all the information that it needs to deploy a Spinnaker instance. Let’s go ahead and deploy Spinnaker by running this command:

hal deploy apply

Note that first-time deployments might take a while.

Make Spinnaker reachable
We need to expose the Spinnaker UI and gateway services in order to interact with the Spinnaker dashboard and start creating pipelines. When we deployed Spinnaker using Halyard, a number of Kubernetes services were created in the “spinnaker” namespace. These services are by default exposed only within the cluster (their type is “ClusterIP”). Let’s change the service type of the services fronting the UI and API servers of Spinnaker to “NodePort” to make them available to end users outside the Kubernetes cluster.

Edit the “spin-deck” service by running the following command:

kubectl edit svc spin-deck -n spinnaker

Change the type to “NodePort” and optionally specify the port on which you want the service exposed. Here’s a snapshot of the service definition:

...
spec:
  type: NodePort
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
    nodePort: 30900
  selector:
    app: spin
    cluster: spin-deck
  sessionAffinity: None
status:
...

Next, edit the “spin-gate” service by running the following command:

kubectl edit svc spin-gate -n spinnaker

Change the type to “NodePort” and optionally specify the port on which you want the API gateway service exposed. Note that Kubernetes services can be exposed in multiple ways. If you want to expose Spinnaker onto the public internet, you can use a LoadBalancer or an Ingress with https turned on. You should configure authentication to lock down access to unauthorized users.

Save the node’s hostname or IP address that will be used to access Spinnaker in an environment variable $SPIN_HOST. Using Halyard, configure the UI and API servers to receive incoming requests:

hal config security ui edit \
--override-base-url "http://$SPIN_HOST:30900"

hal config security api edit \
--override-base-url "http://$SPIN_HOST:30808"

Redeploy Spinnaker so it picks up the configuration changes:

hal deploy apply

You can access the Spinnaker UI at "http://$SPIN_HOST:30900".

Create a “hello-world” application
Let’s take Spinnaker for a spin (pun intended). Using Spinnaker’s UI, let’s create a “hello-world” application. Use the “Actions” drop-down and click “Create Application”. Once the application is created, navigate to the “Pipelines” tab and click “Configure a new pipeline”. Now add a new stage to the pipeline to create a manifest-based deployment. Under the “Manifest Configuration”, add the following as the manifest source text:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: '<docker_repository>:5000/helloworld:v1'
        name: hello-world
        ports:
        - containerPort: 80

Replace “<docker_repository>” with the name of your internal Docker registry that is made available to your Kubernetes cluster.

Let’s take a quick side tour to create the “helloworld” Docker image. We will create an “nginx” based image that hosts an “index.html” file containing:

<h1>Hello World</h1>

We will then create the corresponding “Dockerfile” in the same directory that holds the “index.html” file from the previous step:

FROM nginx:alpine
COPY . /usr/share/nginx/html

Next, we build the Docker image by running the following command:

docker build -t <docker_repository>:5000/helloworld:v1 .

Make sure to replace “<docker_repository>” with the name of your internal Docker registry that is made available to your Kubernetes cluster. Push the Docker image to the “<docker_repository>” to make it available to the Kubernetes cluster:

docker push <docker_repository>:5000/helloworld:v1

Back in the Spinnaker UI, let’s manually run the “hello-world” pipeline. After a successful execution, you can drill down into the pipeline instance details. To quickly test our hello-world app, we can create a manifest-based “LoadBalancer” in the Spinnaker UI. Click the “+” icon and add the following service definition to create the load balancer:

kind: Service
apiVersion: v1
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31080

Once Spinnaker provisions the load balancer, hit the hello-world app’s URL at "http://$SPIN_HOST:31080" in your browser. Voila! There you have it, “Hello World” is rendered.

Conclusion
Spinnaker is a multi-cloud continuous delivery platform for releasing software with high velocity. We used Halyard to install Spinnaker on a Kubernetes cluster and deployed a simple hello-world pipeline. Of course, we barely scratched the surface in terms of what Spinnaker offers. Head over to the guides to learn more about Spinnaker.
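One footnote worth knowing: for quick access during setup, Halyard can also port-forward the UI and API to your workstation instead of editing service types. This is a standard Halyard command, run inside the halyard container; it is an alternative to the NodePort approach above for development, not a production exposure strategy.

# Port-forwards Deck (9000) and Gate (8084) to localhost while it runs
hal deploy connect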

How to Connect a Go Program to Oracle Database using goracle

Given that we just released Go programming language RPMs on Oracle Linux yum server, I figured it would be a good opportunity to take the goracle driver for a spin on Oracle Linux and connect a Go program to Oracle Database. goracle implements a Go database/sql driver for Oracle Database using ODPI-C (Oracle Database Programming Interface for C).

1. Enable Required Repositories to Install Go and Oracle Instant Client
First, install the necessary release RPMs to configure yum to access the Golang and Oracle Instant Client repositories:

$ sudo yum install -y oracle-golang-release-el7 oracle-release-el7

2. Install Go and Verify
Note that you must also install git so that go get can fetch and build the goracle module.

$ sudo yum -y install git gcc golang
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/opc/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/opc/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/golang"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build280983252=/tmp/go-build -gno-record-gcc-switches"
$ go version
go version go1.12 linux/amd64

3. Install Oracle Instant Client and Add Its Libraries to the Runtime Link Path
Oracle Instant Client is available directly from Oracle Linux yum server. If you are deploying applications using Docker, I encourage you to check out our Oracle Instant Client Docker image.

sudo yum -y install oracle-instantclient18.3-basic

Before you can make use of Oracle Instant Client, set the runtime link path so that goracle can find the libraries it needs to connect to Oracle Database:

sudo sh -c "echo /usr/lib/oracle/18.3/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig

4. Install the goracle Driver
Following the instructions from the goracle repo on GitHub:

$ go get gopkg.in/goracle.v2

5. Create a Go Program to Test Your Connection
Create a file db.go as follows. Make sure you change the connect string.

package main

import (
    "database/sql"
    "fmt"

    _ "gopkg.in/goracle.v2"
)

func main() {
    db, err := sql.Open("goracle", "scott/tiger@10.0.1.127:1521/orclpdb1")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer db.Close()

    rows, err := db.Query("select sysdate from dual")
    if err != nil {
        fmt.Println("Error running query")
        fmt.Println(err)
        return
    }
    defer rows.Close()

    var thedate string
    for rows.Next() {
        rows.Scan(&thedate)
    }
    fmt.Printf("The date is: %s\n", thedate)
}

6. Run It!
Time to test your program:

$ go run db.go
The date is: 2019-03-21T17:58:49Z

Conclusion
In this blog post, I showed how you can install the Go programming language and Oracle Instant Client from Oracle Linux yum server and use them together with the goracle driver to connect a Go program to Oracle Database.

References
Oracle OpenWorld 2018 session slides: The Go Language: Principles and Practices for Oracle Database [DEV5047]
go-oracle
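As a possible follow-on (not covered in the original post), parameterized queries work through the standard database/sql placeholders, with goracle using Oracle-style positional binds. The emp table and ename column below are the classic SCOTT schema objects and are assumed to exist; you could drop this function into db.go to try it.

// queryEmpNames is a hedged sketch: it selects names from the assumed emp
// table using an Oracle-style :1 bind variable for the department number.
func queryEmpNames(db *sql.DB, deptno int) ([]string, error) {
    rows, err := db.Query("select ename from emp where deptno = :1", deptno)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var names []string
    for rows.Next() {
        var name string
        if err := rows.Scan(&name); err != nil {
            return nil, err
        }
        names = append(names, name)
    }
    return names, rows.Err()
}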


From locally running Node application to Cloud based Kubernetes Deployment

(Originally published at technology.amis.nl) In this article I will discuss the steps I had to go through in order to take my locally running Node application — with various hard-coded and sometimes secret values — and deploy it on a cloud-based Kubernetes cluster. I will discuss the containerization of the application, the replacement of hard-coded values with references to environment variables, the Docker container image manipulation, the creation of the Kubernetes yaml files for creating the Kubernetes resources and, finally, the actual execution of the application.

Background

A few days ago in Tokyo I presented at the local J-JUG event as part of the Oracle Groundbreakers Tour of Asia and Pacific. I had prepared a very nice demo: an update in a cloud-based Oracle Database was replicated to another cloud-based database — a MongoDB database. In this demo, I first used Twitter as the medium for exchanging the update event and then the Oracle Event Hub (managed Apache Kafka) cloud service. This picture visualizes what I was trying to do:

However, my demo failed. I ran a local Node (JS) application that would be invoked over HTTP from within the Oracle Database — and that would publish to Twitter and Kafka. When I was working on the demo in my hotel room, it was all working just fine. I used ngrok to expose my locally running application on the public internet — a great way to easily integrate local services in cloud-spanning demonstrations. It turned out that use of ngrok was not allowed by the network configuration at the Oracle Japan office where I did my presentation. There was no way I could get my laptop to create the tunnel to the ngrok service that would allow it to hand over the HTTP request from the Oracle Database.

This taught me a lesson. No matter how convenient it may be to run stuff locally, I really should be able to have all components of this demo running in the cloud. And the most obvious way — apart from using a serverless function — is to deploy the application on a Kubernetes cluster. Even though I know how to get there, I realized the steps are not as engrained in my head and fingers as they should be — especially if I am to restore my demo to its former glory in less than 30 minutes.

The Action Plan

My demo application — somewhat quickly put together — contains quite a few hard-coded values, including confidential settings such as the Kafka server IP address and topic name as well as the Twitter app credentials. The first step I need to take is to remove all these hard-coded values from the application code and replace them with references to environment variables.

The second big step is to build a container for and from my application. This container needs to provide the Node runtime, have all npm modules used by the application and contain the application code itself. The container should automatically start the application and expose the proper port. At the end of this step, I should be able to run my application locally in a Docker container, injecting values for the environment variables with the docker run command.

The third step is the creation of a container image from the container — and pushing that image (after meaningful tagging) to a container registry.

Next is the preparation of the Kubernetes resources. My application consists of a Pod and a Service (in Kubernetes terms) that are combined in a Deployment in its own Namespace.
The Deployment makes use of two Secrets — one contains the confidential values for the Kafka server (IP address and topic name) and the other the Twitter client app credentials. Values from these Secrets are used to set some of the environment variables. Other values are hard-coded in the Deployment definition.

After arranging access to a Kubernetes cluster instance — running in Oracle Cloud Infrastructure, offered through the Oracle Kubernetes Engine (OKE) service — I can deploy the K8S resources and get the application running. Now, finally, I can point my Oracle Database trigger to the service endpoint on Kubernetes in the cloud and start publishing tweets for all relevant database updates.

At this point, I should — and you likewise, after reading the remainder of this article — have a good understanding of how to “Kubernetalize” a Node application, so that I will never again be stymied in my demos by stupid network problems. I do not want to even think twice about taking my local application and turning it into a containerized application running on Kubernetes.

Note: the sources discussed in this article can be found on GitHub: https://github.com/lucasjellema/groundbreaker-japac-tour-cqrs-via-twitter-and-event-hub/tree/master/db-synch-orcl-2-mongodb-over-twitter-or-kafka.

1. Replace Hard-Coded Values with Environment Variable References

My application contained the hard-coded values of the Kafka broker endpoint and my Twitter app credential secrets. For a locally running application that is barely acceptable. For an application that is deployed in a cloud environment (and whose sources are published on GitHub) that is clearly not a good idea. Any hard-coded value is to be removed from the code and replaced with a reference to an environment variable, using the Node expression:

process.env.NAME_OF_VARIABLE

or

process.env['NAME_OF_VARIABLE']

Let’s not worry for now about how these values are set and provided to the Node application. I have created a generic code snippet that checks, upon application start, whether all expected environment variables have been defined, and writes a warning to the output if not:

const REQUIRED_ENVIRONMENT_SETTINGS = [
    {name:"PUBLISH_TO_KAFKA_YN" , message:"with either Y (publish event to Kafka) or N (publish to Twitter instead)"},
    {name:"KAFKA_SERVER" , message:"with the IP address of the Kafka Server to which the application should publish"},
    {name:"KAFKA_TOPIC" , message:"with the name of the Kafka Topic to which the application should publish"},
    {name:"TWITTER_CONSUMER_KEY" , message:"with the consumer key for a set of Twitter client credentials"},
    {name:"TWITTER_CONSUMER_SECRET" , message:"with the consumer secret for a set of Twitter client credentials"},
    {name:"TWITTER_ACCESS_TOKEN_KEY" , message:"with the access token key for a set of Twitter client credentials"},
    {name:"TWITTER_ACCESS_TOKEN_SECRET" , message:"with the access token secret for a set of Twitter client credentials"},
    {name:"TWITTER_HASHTAG" , message:"with the value for the twitter hashtag to use when publishing tweets"},
]

for (var env of REQUIRED_ENVIRONMENT_SETTINGS) {
    if (!process.env[env.name]) {
        console.error(`Environment variable ${env.name} should be set: ${env.message}`);
    } else {
        // convenient for debugging; however: this line exposes all environment variable values
        // - including any secret values they may contain
        // console.log(`Environment variable ${env.name} is set to : ${process.env[env.name]}`);
    }
}

This snippet is used in the index.js file in my Node application.
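To see the check in action, run the application with only some of the variables set. This is a hypothetical invocation with placeholder values; the warning text comes from the snippet above:

export PUBLISH_TO_KAFKA_YN=N
node index.js
# Environment variable KAFKA_SERVER should be set: with the IP address of the Kafka Server to which the application should publish
# ...one warning line per missing variable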
This file also contains several references to process.env that used to be hard-coded values. It is convenient to use npm start to run the application — for example because it allows us to define environment variables as part of the application start-up. When you execute npm start, npm checks the package.json file for a script with the key “start”. This script will typically contain something like “node index” or “node index.js”. You can extend this script with the definition of environment variables to be applied before running the Node application, like this (taken from package.json):

"scripts": {
    "start": "(export KAFKA_SERVER=myserver.cloud.com && export KAFKA_TOPIC=cool-topic ) || (set KAFKA_SERVER=myserver.cloud.com && set KAFKA_TOPIC=cool-topic && set TWITTER_CONSUMER_KEY=very-secret )&& node index",
    …
},

Note: we may have to cater for both Linux and Windows environments, which treat environment variables differently.

2. Containerize the Node Application

In my case, I was working on my Windows laptop, developing and testing the Node application from the Windows command line. Clearly, that is not an ideal environment for building and running a Docker container. What I have done is use Vagrant to run a virtual machine with Docker Engine inside. All Docker container manipulation can easily be done inside this virtual machine. Check out the Vagrantfile that instructs Vagrant on leveraging VirtualBox to create and run the desired virtual machine. Note that the local directory that contains the Vagrantfile, and from which the vagrant up command is executed, is automatically shared into the VM, mounted as /vagrant.

Note: I have used this article as inspiration for this section: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
Note 2: I use the .dockerignore file to exclude files and directories in the root folder that contains the Dockerfile. Anything listed in .dockerignore is not added to the build context and will not end up in the container.

A Docker container image is built using a Docker build file. The starting point of the Dockerfile is the base image that is subsequently extended. In this case, the base image is node:10.13.0-alpine, a small and recent Node runtime environment. I create a directory /usr/src/app and have Docker set this directory as its focal point for all subsequent actions.

Docker container images are created in layers. Each build step in the Dockerfile adds a layer. If the build is rerun, only layers for steps in the Dockerfile that have changed are rerun, and only changed layers are actually uploaded when the image is pushed. Therefore, it is smart to have the steps that change the most at the end of the Dockerfile. In my case, that means that the application sources should be copied to the container image at a very late stage in the build process. First I copy only the package.json file — assuming this will not change very frequently. Immediately after copying package.json, all node modules are installed into the container image using npm install. Only then are the application sources copied.

I have chosen to expose port 8080 from the container — an entirely arbitrary decision. However, the environment variable PORT — whose value is read in index.js using process.env.PORT — needs to correspond exactly to whatever port I expose. Finally, the instruction to run the Node application when the container is run — npm start — is passed to the CMD instruction.
Here is the complete Dockerfile:

# note: run docker build in a directory that contains this Docker build file, the package.json file
# and all your application sources and static files
# this directory should NOT contain the node_modules directory or any other resources that should not
# go into the Docker container - unless these are explicitly excluded in a .dockerignore file!
FROM node:10.13.0-alpine

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install

# Bundle app source - copy Node application from the current directory
COPY . .

# the application will be exposed at port 8080
ENV PORT=8080

# so we should expose that port
EXPOSE 8080

# run the application, using npm start (which runs the start script in package.json)
CMD [ "npm", "start" ]

Running docker build — to be exact:

docker build -t lucasjellema/http-to-twitter-app .

— gives the following output: the container image is created. I can now run the container itself, for example with:

docker run -p 8090:8080 \
  -e KAFKA_SERVER=127.1.1.1 \
  -e KAFKA_TOPIC=topic \
  -e TWITTER_CONSUMER_KEY=818 \
  -e TWITTER_CONSUMER_SECRET=secret \
  -e TWITTER_ACCESS_TOKEN_KEY=tokenkey \
  -e TWITTER_ACCESS_TOKEN_SECRET=secret \
  lucasjellema/http-to-twitter-app

The container is running, the app is running, and at port 8090 on the Docker host I should be able to access the application at http://192.168.188.120:8090/about (note: 192.168.188.120 is the IP address exposed by the virtual machine managed by Vagrant).

3. Build, Tag and Push the Container Image

In order to run a container on a Kubernetes cluster — or indeed on any other machine than the one on which it was built — the container image must be shared or published. The easiest way of doing so is through the use of a container (image) registry, such as Docker Hub. In this case I simply tag the container image with the currently applicable tag lucasjellema/http-to-twitter-app:0.9:

docker tag lucasjellema/http-to-twitter-app:latest lucasjellema/http-to-twitter-app:0.9

I then push the tagged image to the Docker Hub registry (note: before executing this statement, I have used docker login to connect my session to Docker Hub):

docker push lucasjellema/http-to-twitter-app:0.9

At this point, the Node application is publicly available for pull — and can be run on any Docker-compatible container engine. It does not contain any secrets — all dependencies (such as Twitter credentials and Kafka configuration) need to be injected through environment variable settings.

4. Prepare Kubernetes Resources (Pod, Service, Secrets, Namespace, Deployment)

When the Node application is running on Kubernetes it shall have a number of constituents:

- a namespace cqrs-demo to isolate the other artifacts in their own compartment
- two secrets to provide the sensitive and dynamic, deployment-specific details regarding Kafka and the Twitter client credentials
- a Pod for a single container — with the Node application
- a Service — to expose the Pod on an (externally) accessible endpoint and guide requests to the port exposed by the Pod
- a Deployment http-to-twitter-app — to configure the Pod through a template that is used for scaling and redeployment

The separate namespace cqrs-demo is created with a simple kubectl command:

kubectl create namespace cqrs-demo

The two secrets are two sets of sensitive data entries. Each entry has a key and a value, and the value of course is the sensitive part.
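As an aside, the base64-encoded yaml files shown in the next section are not the only way to create these secrets: kubectl can build an equivalent Secret directly from literal values, which avoids the manual encoding. A sketch with placeholder values:

kubectl create secret generic kafka-server-secret \
  --namespace cqrs-demo \
  --from-literal=kafka-server-endpoint=129.0.0.1 \
  --from-literal=kafka-topic=my-topic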
In the case of the application in this article, I have ensured that only the secret objects contain sensitive information. There is no password, endpoint or credential in any other artifact, so I can freely share the other files — even on GitHub. But not the secrets files: they contain the valuable goods.

Note: even though the secrets may seem encrypted, in this case they are not. They simply contain the base64 representation of the actual values. These base64 values can easily be produced on the Linux command line using:

echo -n '<value>' | base64

The secrets are created from these yaml files:

apiVersion: v1
kind: Secret
metadata:
  name: twitter-app-credentials-secret
  namespace: cqrs-demo
type: Opaque
data:
  CONSUMER_KEY: U0hh
  CONSUMER_SECRET: dT=
  ACCESS_TOKEN_KEY: ODk=
  ACCESS_TOKEN_SECRET: aUZv

and

apiVersion: v1
kind: Secret
metadata:
  name: kafka-server-secret
  namespace: cqrs-demo
type: Opaque
data:
  kafka-server-endpoint: MTI5
  kafka-topic: aWRj

using these kubectl statements:

kubectl create -f ./kafka-secret.yaml
kubectl create -f ./twitter-app-credentials-secret.yaml

The Kubernetes Dashboard displays the two secrets, and some details for each one (but not the sensitive values).

The file k8s-deployment.yml contains the definition of both the service and the deployment — and, through the deployment, indirectly also the pod. The service is defined with type LoadBalancer. On Oracle Kubernetes Engine, this results in a special external IP address assigned to this service. That could be considered somewhat wasteful; a more elegant approach would be to use an IngressController, which allows us to handle more than just a single service on an external IP address. For the current example, LoadBalancer will do.

Note: when you run the Kubernetes artifacts on an environment that does not support LoadBalancer — such as Minikube — you can change type LoadBalancer to type NodePort. A random port is then assigned to the service and the service will be available on that port on the IP address of the K8S cluster.

The service is exposed externally at port 80 — although other ports would be perfectly fine too. The service connects to the container port with the logical name app-api-port in the cqrs-demo namespace. This port is defined for the http-to-twitter-app container definition in the http-to-twitter-app deployment. Note: multiple containers can be started for this single container definition — depending on the number of replicas specified in the deployment and, for example, on whether (re)deployments are taking place. The service mechanism ensures that traffic is load balanced across all container instances that expose the app-api-port.

kind: Service
apiVersion: v1
metadata:
  name: http-to-twitter-app
  namespace: cqrs-demo
  labels:
    k8s-app: http-to-twitter-app
    kubernetes.io/name: http-to-twitter-app
spec:
  selector:
    k8s-app: http-to-twitter-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: app-api-port
  # with type LoadBalancer, an external IP will be assigned - if the K8S provider supports that capability, such as OKE
  # with type NodePort, a port is exposed on the cluster; whether that can be accessed or not depends on the cluster
  # configuration; on Minikube it can be, in many other cases an IngressController may have to be configured
  type: LoadBalancer

After creating the service, it will take some time (up to a few minutes) before an external IP address is associated with the (load balancer for the) service. Until then, the external IP is shown as pending.
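While waiting for the external IP, you can poll the service from the command line instead of the dashboard; a small sketch using the names from the manifests above:

kubectl --namespace cqrs-demo get service http-to-twitter-app --watch
# the EXTERNAL-IP column reads <pending> until OKE finishes provisioning the load balancer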
Below is what it looks like in the dashboard when the external IP has been assigned (although I blurred most of the actual IP address).

The deployment for now specifies just a single replica. It specifies the container image on which the container (instances) in this deployment are based: lucasjellema/http-to-twitter-app:0.9 — of course, the container image that I pushed in the previous section. The container exposes port 8080 (the container port) and this port has been given the logical name app-api-port, which we have seen before.

The K8S cluster instance I was using had an issue with DNS translation from domain names to IP addresses. Initially, my application was not working because the URL api.twitter.com could not be translated into an IP address. Instead of trying to fix this DNS issue, I made use of a built-in Kubernetes feature called hostAliases. This feature allows us to specify DNS entries that are added at runtime to the hosts file in the container. In this case I instruct Kubernetes to inject the mapping between api.twitter.com and its IP address into the hosts file of the container.

Finally, the container template specifies a series of environment variable values. These are injected into the container when it is started. Some of the values for the environment variables are defined literally in the deployment definition. Others consist of references to entries in secrets — for example, the value for TWITTER_CONSUMER_KEY, which is derived from the twitter-app-credentials-secret using the CONSUMER_KEY key.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: http-to-twitter-app
  name: http-to-twitter-app
  namespace: cqrs-demo
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: http-to-twitter-app
    spec:
      hostAliases:
      - ip: "104.244.42.66"
        hostnames:
        - "api.twitter.com"
      containers:
      - image: "lucasjellema/http-to-twitter-app:0.9"
        imagePullPolicy: Always
        name: http-to-twitter-app
        ports:
        - containerPort: 8080
          name: app-api-port
          protocol: TCP
        env:
        - name: PUBLISH_TO_KAFKA_YN
          value: "N"
        - name: TWITTER_HASHTAG
          value: "#GroundbreakersTourOrderEvent"
        - name: TWITTER_CONSUMER_KEY
          valueFrom:
            secretKeyRef:
              name: twitter-app-credentials-secret
              key: CONSUMER_KEY
        - name: TWITTER_CONSUMER_SECRET
          valueFrom:
            secretKeyRef:
              name: twitter-app-credentials-secret
              key: CONSUMER_SECRET
        - name: TWITTER_ACCESS_TOKEN_KEY
          valueFrom:
            secretKeyRef:
              name: twitter-app-credentials-secret
              key: ACCESS_TOKEN_KEY
        - name: TWITTER_ACCESS_TOKEN_SECRET
          valueFrom:
            secretKeyRef:
              name: twitter-app-credentials-secret
              key: ACCESS_TOKEN_SECRET
        - name: KAFKA_SERVER
          valueFrom:
            secretKeyRef:
              name: kafka-server-secret
              key: kafka-server-endpoint
        - name: KAFKA_TOPIC
          valueFrom:
            secretKeyRef:
              name: kafka-server-secret
              key: kafka-topic

The deployment appears in the dashboard, along with details on the Pod. Given admin privileges, I can inspect the real values of the environment variables that were derived from secrets. The Pod logging is easily accessed as well.

5. Run and Try Out the Application

When the external IP has been allocated to the Service and the Pod is running successfully, the application can be accessed — from the Oracle Database, and also just from any browser (the public IP address was blurred in the location bar). Note that no port is specified in the URL, because the port defaults to 80 — and that happens to be the port defined in the service as the port to map to the container’s exposed port (8080).
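Once the external IP is known, a smoke test does not require the Oracle Database at all. This is a hypothetical check (the IP is a placeholder; /about is the diagnostic endpoint mentioned earlier in this article), plus the kubectl equivalent of the dashboard's pod-log view:

curl http://<EXTERNAL-IP>/about

kubectl --namespace cqrs-demo logs deployment/http-to-twitter-app --tail=20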
When the database makes its HTTP request, we can see in the Pod logging that the request is processed. And I can even verify that the application has done what its logging states it has done.

Resources

- GitHub sources: https://github.com/lucasjellema/groundbreaker-japac-tour-cqrs-via-twitter-and-event-hub
- Kubernetes cheat sheet for Docker developers: https://technology.amis.nl/2018/09/26/from-docker-run-to-kubectl-apply-quick-kubernetes-cheat-sheet-for-docker-users/
- Kubernetes documentation on Secrets: https://kubernetes.io/docs/concepts/configuration/secret/
- Kubernetes docs on Host Aliases: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
- Docker docs on .dockerignore: https://docs.docker.com/engine/reference/builder/#dockerignore-file
- Kubernetes docs on Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/


Connecting to Autonomous Database from a Node.js, Python or PHP App in Oracle Cloud Infrastructure

Introduction

In this tutorial I demonstrate how to connect an app written in Python, Node.js or PHP running in Oracle Cloud Infrastructure (OCI) to an Autonomous Transaction Processing (ATP) Database running in Oracle Cloud. To complete these steps, it is assumed you have either a bare metal or VM shape running Oracle Linux with a public IP address in Oracle Cloud Infrastructure, and that you have access to the Autonomous Transaction Processing Database Cloud Service. I used Oracle Linux 7.5.

Note: this post has been updated to include optional use of the OCI CLI to download Client Credentials (Wallet) directly.

We've recently added Oracle Instant Client to the Oracle Linux yum mirrors in each OCI region, which has simplified the steps significantly. Previously, installing Oracle Instant Client required either registering a system with ULN or downloading from OTN, each with manual steps to accept license terms. Now you can simply use yum install directly from Oracle Linux running in OCI.

For this example, I use a Node.js app, but the same principles apply to Python with cx_Oracle, PHP with php-oci8 or any other language that can connect to Oracle Database with an appropriate connector via Oracle Instant Client.

Overview

- Installing Node.js, node-oracledb and Oracle Instant Client
- Using Node.js with node-oracledb and Oracle Instant Client to connect to Autonomous Transaction Processing

Installing Node.js, node-oracledb and Oracle Instant Client

Check Your Oracle Linux Yum Configuration

First, verify your Oracle Linux yum server configuration, as we've recently made some changes in the way repository definitions are delivered. Follow the steps here to verify your setup.

Install Release RPMs to Configure Yum Repositories for Node.js and Oracle Instant Client

Next, enable the required repositories to install Node.js 10 and Oracle Instant Client:

sudo yum install oracle-release-el7 oracle-nodejs-release-el7

Install Node.js, node-oracledb and Oracle Instant Client

To install Node.js 10 from the newly enabled repo, we'll need to make sure the EPEL repo is disabled. Otherwise, Node.js from that repo may be installed and that's not the Node we are looking for. Oracle Instant Client will be installed automatically as a dependency of node-oracledb.

sudo yum --disablerepo="ol7_developer_EPEL" -y install nodejs node-oracledb-node10

Add Oracle Instant Client to the runtime link path:

sudo sh -c "echo /usr/lib/oracle/18.3/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig

Download and Configure Client Credentials (Wallet)

To connect to ATP via SQL*Net, you'll need Oracle client credentials. An ATP service administrator can download these via the service console (Option 1), or you can use the OCI command line interface (CLI) to download them (Option 2). See this documentation for more details.

Option 1: Download Wallet From Service Console

Download the client credentials by clicking on DB Connection on your Autonomous Database's Details page.

Figure 1. Downloading Client Credentials (Wallet) from Autonomous Transaction Processing Service Console

Copy the wallet from the machine to which you've downloaded it to the OCI compute instance. Here I'm copying the file wallet_ATP1.zip from my development machine using scp. Note that I'm using an ssh key file that matches the ssh key I created the instance with.

Note: this next command is run on your development machine to copy the downloaded Wallet zip file to your OCI instance.
In my case, wallet_ATP1.zip was downloaded to ~/Downloads on my MacBook.

scp -i ~/.ssh/oci/oci ~/Downloads/wallet_ATP1.zip opc@<OCI INSTANCE PUBLIC IP>:/etc/ORACLE/WALLETS/ATP1

Option 2: Download Wallet Using OCI CLI

Using the latest OCI CLI, you can download client credentials via the command line. Here I'm assigning the compartment ID to environment variable C using the oci-metadata command from OCI Utilities, then assigning the Autonomous Database ID to DB_ID by looking it up with the OCI CLI based on its name. Finally, I download the client credentials zip archive using the OCI CLI.

If you don't have the OCI CLI installed, run the following commands to install and configure it. Remember to upload the API PEM public key in the Console.

sudo yum install python-oci-cli
oci setup config

$ export C=`oci-metadata -g compartmentid --value-only`
$ export DB_ID=`oci db autonomous-database list --compartment-id $C | jq -r '.data[] | select( ."db-name" == "sergioblog" ) | .id'`
$ oci db autonomous-database generate-wallet --autonomous-database-id $DB_ID --password MYPASSWORD123 --file wallet_ATP1.zip

With the client credentials (wallet archive) downloaded, unzip them and set the permissions appropriately. First, prepare a location to store the wallet:

sudo mkdir -pv /etc/ORACLE/WALLETS/ATP1
sudo chown -R opc /etc/ORACLE

Next, unzip it:

cd /etc/ORACLE/WALLETS/ATP1
unzip ~/wallet_ATP1.zip
sudo chmod -R 700 /etc/ORACLE

Edit sqlnet.ora to point to the wallet location, replacing ?/network/admin. After editing, sqlnet.ora should look something like this:

cat /etc/ORACLE/WALLETS/ATP1/sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/etc/ORACLE/WALLETS/ATP1")))
SSL_SERVER_DN_MATCH=yes

Set the TNS_ADMIN environment variable to point Instant Client to the Oracle configuration directory, as well as NODE_PATH so that the node-oracledb module can be found by our Node.js program:

export TNS_ADMIN=/etc/ORACLE/WALLETS/ATP1
export NODE_PATH=`npm root -g`

Create and Run a Node.js Program to Test the Connection to ATP

Create a file, select.js, based on the example below. Either assign values to the environment variables NODE_ORACLEDB_USER, NODE_ORACLEDB_PASSWORD and NODE_ORACLEDB_CONNECTIONSTRING to suit your configuration, or edit the placeholder values USERNAME, PASSWORD and CONNECTIONSTRING in the code below. The former are the username and password you've been given for ATP; the latter is one of the service descriptors in the $TNS_ADMIN/tnsnames.ora file. The default username for an Autonomous Database is admin.

'use strict';

const oracledb = require('oracledb');

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection({
      user: process.env.NODE_ORACLEDB_USER || "USERNAME",
      password: process.env.NODE_ORACLEDB_PASSWORD || "PASSWORD",
      connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING || "CONNECTIONSTRING"
    });
    let result = await connection.execute("select sysdate from dual");
    console.log(result.rows[0]);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();

Run It!

Let's run our Node.js program. You should see a date returned from the Database:

node select.js
[ 2018-09-13T18:19:54.000Z ]

Important Notes

As there currently isn't a service gateway to connect from Oracle Cloud Infrastructure to Autonomous Transaction Processing, any traffic between these two will count against your network quota.
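One more practical note: if node select.js throws a TNS- or ORA-prefixed error instead of a date, the wallet configuration is the first thing to check. A short sanity-check sketch, assuming the locations used in this post (the listed wallet file names are typical, not guaranteed):

echo $TNS_ADMIN                           # should print /etc/ORACLE/WALLETS/ATP1
ls $TNS_ADMIN                             # typically includes cwallet.sso, sqlnet.ora and tnsnames.ora
grep -i directory $TNS_ADMIN/sqlnet.ora   # must point at the wallet directory itself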
Conclusion

In this blog post I've demonstrated how to run a Node.js app on an Oracle Linux instance in Oracle Cloud Infrastructure (OCI) and connect it to an Autonomous Transaction Processing Database by installing all necessary software — including Oracle Instant Client — directly from yum servers within OCI itself. By offering direct access to essential Oracle software from within Oracle Cloud Infrastructure, without requiring manual steps to accept license terms, we've made it easier for developers to build Oracle-based applications on Oracle Cloud.

References

- Create an Autonomous Transaction Processing Instance
- Downloading Client Credentials (Wallets)
- Node.js for Oracle Linux
- Python for Oracle Linux
- PHP for Oracle Linux
- Connect PHP 7.2 to Oracle Database 12c using Oracle Linux Yum Server


Why Your Developer Story is Important

Stories are a window into life. They can, if they resonate, provide insights into our own lives or the lives of others. They can help us transmit knowledge, pass on traditions, solve present-day problems or allow us to imagine alternate realities. Open source software is an example of an alternate reality in software development, where proprietary development has been replaced in large part with sharing code that is free and open. How is this relevant, not only to developers but to anyone who works in technology? It is human nature that we continue to want to grow, learn and share.

With this in mind, I started 60 Second Developer Stories and tried it out at various Oracle Code events, at developer conferences and now at Oracle OpenWorld 2018/Code One. For the latter we had a Video Hangout in the Groundbreakers Hub at Code One, where anyone with a story to share could do so. We livestream the story via Periscope/Twitter and record it, then edit and post it later on YouTube. In the Video Hangout, we use a green screen and, through the miracles of technology, chroma-key it in and put in a cool backdrop. Below are some photos of the Video Hangout as well as the ideas we give as suggestions:

- Share what you learned on your first job
- Share a best coding practice
- Explain how a tool or technology works
- What have you learned recently about building an app?
- Share a work-related accomplishment
- What's the best decision you ever made?
- What's the worst mistake you made and the lesson learned?
- What is one thing you learned from a mentor or peer that has really helped you?
- Any story that you want to share and the community can benefit from

Here are some FAQs about the 60 Second Developer Stories