
Step Up to Modern Cloud Development

Recent Posts

Connecting to Autonomous Transaction Processing Database from a Node.js, Python or PHP App in Oracle Cloud Infrastructure

Introduction

In this tutorial I demonstrate how to connect an app written in Python, Node.js or PHP running in Oracle Cloud Infrastructure (OCI) to an Autonomous Transaction Processing (ATP) Database running in Oracle Cloud. To complete these steps, it is assumed you have either a bare metal or VM shape running Oracle Linux with a public IP address in Oracle Cloud Infrastructure, and that you have access to Autonomous Transaction Processing Database Cloud Service. I used Oracle Linux 7.5.

We've recently added Oracle Instant Client to the Oracle Linux yum mirrors in each OCI region, which has simplified the steps significantly. Previously, installing Oracle Instant Client required either registering a system with ULN or downloading from OTN, each with manual steps to accept license terms. Now you can simply use yum install directly from Oracle Linux running in OCI. For this example, I use a Node.js app, but the same principles apply to Python with cx_Oracle, PHP with php-oci8 or any other language that can connect to Oracle Database with an appropriate connector via Oracle Instant Client.

Overview

Installing Node.js, node-oracledb and Oracle Instant Client
Using Node.js with node-oracledb and Oracle Instant Client to connect to Autonomous Transaction Processing

Installing Node.js, node-oracledb and Oracle Instant Client

Grab the Latest Oracle Linux Yum Mirror Repo File

These steps will ensure you have an updated repo file local to your OCI region with a repo definition for OCI-included software such as Oracle Instant Client. Note that I obtain the OCI region from the instance metadata service via an HTTP endpoint that every OCI instance has access to via the address 169.254.169.254. After connecting to your OCI compute instance via ssh, run the following commands:

cd /etc/yum.repos.d
sudo mv public-yum-ol7.repo public-yum-ol7.repo.bak
export REGION=`curl http://169.254.169.254/opc/v1/instance/ -s | jq -r '.region'| cut -d '-' -f 2`
sudo -E wget http://yum-$REGION.oracle.com/yum-$REGION-ol7.repo

Enable yum repositories for Node.js and Oracle Instant Client

Next, enable the required repositories to install Node.js 10 and Oracle Instant Client:

sudo yum install -y yum-utils
sudo yum-config-manager --enable ol7_developer_nodejs10 ol7_oci_included

Install Node.js, node-oracledb and Oracle Instant Client

To install Node.js 10 from the newly enabled repo, we'll need to make sure the EPEL repo is disabled. Otherwise, Node.js from that repo may be installed, and that's not the Node we are looking for. Also, note the name of the node-oracledb package for Node.js 10 is node-oracledb-12c-node10. Oracle Instant Client will be installed automatically as a dependency of node-oracledb.

sudo yum --disablerepo="ol7_developer_EPEL" -y install nodejs node-oracledb-12c-node10

Add Oracle Instant Client to the runtime link path.

sudo sh -c "echo /usr/lib/oracle/12.2/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig

Using Oracle Instant Client

Download Wallet and Configure Wallet Location

To connect to ATP via SQL*Net, you'll need Oracle client credentials. An ATP service administrator can download these via the service console. See this documentation for more details.

Figure 1. Downloading Client Credentials (Wallet) from Autonomous Transaction Processing Service Console

Once you've obtained the wallet archive for your ATP Database, copy it to your OCI instance, unzip it and set the permissions appropriately. First prepare a location to store the wallet.
sudo mkdir -pv /etc/ORACLE/WALLETS/ATP1
sudo chown -R opc /etc/ORACLE

Copy the wallet from the machine to which you've downloaded it to the OCI instance. Here I'm copying the file wallet_ATP1.zip from my development machine using scp. Note that I'm using an ssh key file that matches the ssh key I created the instance with.

Note: this next command is run on your development machine to copy the downloaded Wallet zip file to your OCI instance. In my case, wallet_ATP1.zip was downloaded to ~/Downloads on my MacBook.

scp -i ~/.ssh/oci/oci ~/Downloads/wallet_ATP1.zip opc@<OCI INSTANCE PUBLIC IP>:/etc/ORACLE/WALLETS/ATP1

Returning to the OCI instance, unzip the wallet and set the permissions appropriately.

cd /etc/ORACLE/WALLETS/ATP1
unzip wallet_ATP1.zip
sudo chmod -R 700 /etc/ORACLE

Edit sqlnet.ora to point to the Wallet location, replacing ?/network/admin. After editing, sqlnet.ora should look something like this.

cat /etc/ORACLE/WALLETS/ATP1/sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/etc/ORACLE/WALLETS/ATP1")))
SSL_SERVER_DN_MATCH=yes

Set the TNS_ADMIN environment variable to point Instant Client to the Oracle configuration directory, as well as NODE_PATH so that the node-oracledb module can be found by our Node.js program.

export TNS_ADMIN=/etc/ORACLE/WALLETS/ATP1
export NODE_PATH=`npm root -g`

Create and run a Node.js Program to Test Connection to ATP

Create a file, select.js, based on the example below. Either assign values to the environment variables NODE_ORACLEDB_USER, NODE_ORACLEDB_PASSWORD, and NODE_ORACLEDB_CONNECTIONSTRING to suit your configuration, or edit the placeholder values USERNAME, PASSWORD and CONNECTIONSTRING in the code below. The former are the username and password you've been given for ATP; the latter is one of the service descriptors in the $TNS_ADMIN/tnsnames.ora file.

'use strict';

const oracledb = require('oracledb');

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection({
      user: process.env.NODE_ORACLEDB_USER || "USERNAME",
      password: process.env.NODE_ORACLEDB_PASSWORD || "PASSWORD",
      connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING || "CONNECTIONSTRING"
    });
    let result = await connection.execute("select sysdate from dual");
    console.log(result.rows[0]);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();

Run It!

Let's run our Node.js program. You should see a date returned from the Database.

node select.js
[ 2018-09-13T18:19:54.000Z ]

Important Notes

As there currently isn't a service gateway to connect from Oracle Cloud Infrastructure to Autonomous Transaction Processing, any traffic between these two will count against your network quota.

Conclusion

In this blog post I've demonstrated how to run a Node.js app on an Oracle Linux instance in Oracle Cloud Infrastructure (OCI) and connect it to Autonomous Transaction Processing Database by installing all necessary software, including Oracle Instant Client, directly from yum servers within OCI itself. By offering direct access to essential Oracle software from within Oracle Cloud Infrastructure, without requiring manual steps to accept license terms, we've made it easier for developers to build Oracle-based applications on Oracle Cloud.

References

Create an Autonomous Transaction Processing Instance
Downloading Client Credentials (Wallets)
Node.js for Oracle Linux
Python for Oracle Linux
PHP for Oracle Linux
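The same flow in Python with cx_Oracle looks roughly like the sketch below. It is an illustration rather than part of the walkthrough's tested code: it assumes cx_Oracle is installed (for example with pip install cx_Oracle), that TNS_ADMIN points at the wallet directory configured above, and the environment variable names here are made up to mirror the Node.js example.

import os
import cx_Oracle  # assumes: pip install cx_Oracle, with Oracle Instant Client on the library path

# Placeholders mirror the Node.js example; replace them or set the (illustrative) environment variables.
user = os.environ.get("PYTHON_ORACLEDB_USER", "USERNAME")
password = os.environ.get("PYTHON_ORACLEDB_PASSWORD", "PASSWORD")
dsn = os.environ.get("PYTHON_ORACLEDB_CONNECTIONSTRING", "CONNECTIONSTRING")

connection = cx_Oracle.connect(user, password, dsn)
try:
    cursor = connection.cursor()
    cursor.execute("select sysdate from dual")
    print(cursor.fetchone())
finally:
    connection.close()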


Cloud

Autonomous Database: Creating an Autonomous Transaction Processing Instance

In this post I'm going to demonstrate how quickly and easily one can create an Autonomous Transaction Processing (ATP for short) instance of Oracle's Autonomous Database Cloud Services. Oracle's ATP launched on the 7th of August 2018 and is the general-purpose flavor of the Oracle Autonomous Database. My colleague SQLMaria (also known as Maria Colgan) has already done a great job explaining the difference between the Autonomous Transaction Processing and the Autonomous Data Warehouse services. She has also written another post on what one can expect from Oracle Autonomous Transaction Processing. I highly recommend reading both her articles first for a better understanding of the offerings. Last but not least, you can try ATP yourself today via the Oracle Free Cloud Trial. Now let's get started. Provisioning an ATP service is, as said above, quick and easy.

tl;dr To create an instance you just have to follow these three simple steps:

Log into the Oracle Cloud Console and choose "Autonomous Transaction Processing" from the menu.
Click "Create Autonomous Transaction Processing".
Specify the name, the amount of CPU and storage, and the administrator password, and hit "Create Autonomous Transaction Processing".

Creating an ATP instance

In order to create an ATP environment you first have to log on to the Oracle Cloud Console. From there, click on the top left menu and choose "Autonomous Transaction Processing". On the next screen you will see all your ATP databases; in my case none, because I haven't created any yet. Hit the "Create Autonomous Transaction Processing" button.

A new window will open that asks you for the display and database name, the amount of CPUs and storage capacity, as well as the administrator password and the license to use. The display name is what you will see in the cloud console once your database service is created. The database name is the name of the database itself that you will later connect to from your applications. You can use the same name for both or different ones. In my case I will use a different name for the database than for the service. The minimum CPU and storage count is 1, which is what I'm going for. Don't forget that scaling the CPUs and/or storage up and down is fully online with Oracle Autonomous Database and transparent to the application. So even if you don't know yet exactly how many CPUs or TBs of storage you need, you can always change that later on with no outages!

Next you have to specify the password for the admin user. The admin user is a database user with administrative privileges that allows you to create other users and perform various other tasks.

Last but not least, you have to choose which license model you want to use. The first choice is to bring your own license, i.e. "My organization already owns Oracle Database software licenses", sometimes also referred to as "BYOL" or "Bring Your Own License", which means that you already have some unused Oracle Database licenses that you would like to reuse for your Autonomous Transaction Processing instance. This is usually done if you want to migrate your on-premises databases into the cloud and want to leverage the fact that you have already bought Oracle Database licenses in the past. The other option is to subscribe to new Oracle Database software licenses as part of the provisioning. This option is usually used if you want to have a new database cloud service that doesn't replace an existing database.
Once you have made your choice, it's time to hit the "Create Autonomous Transaction Processing" button. Your database is now being provisioned. Once the state changes to Green – Available, your database is up and running. Clicking on the name of the service will provide you with further details. Congratulations, you have just created your first Autonomous Transaction Processing Database Cloud Service. Make sure you also check out the Autonomous Transaction Processing Documentation. Originally published at geraldonit.com on August 28, 2018.
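If you prefer scripting over the console, the same provisioning step can also be driven from the OCI command line interface. The call below is a sketch, not part of the original walkthrough: it assumes the OCI CLI is installed and configured, every value shown is a placeholder, and parameter names can vary between CLI versions, so check oci db autonomous-database create --help before relying on it.

# Sketch: create a 1 OCPU / 1 TB ATP instance from the CLI (all values are placeholders).
oci db autonomous-database create \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --db-name ATPDB1 \
  --display-name "my-atp-service" \
  --cpu-core-count 1 \
  --data-storage-size-in-tbs 1 \
  --admin-password 'ReplaceWith#AStrongPassword1'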


Developers

Podcast: Developer Evolution: What's rockin’ roles in IT?

The good news is that the US Bureau of Labor Statistics predicts 24% growth in software developer jobs through 2026. That's well above average. The outlook for Database administrators certainly isn't bleak, but with projected job growth of 11% to 2026, that's less than half the growth projected for developers. Job growth for System administrators, at 6% through 2026, is considered average by the BLS. So while the news is positive all around, developers certainly have an advantage. Each of these roles certainly has separate and distinct responsibilities. But why is the outlook so much better for developers, and what does this say about what's happening in the IT ecosystem?

"More than ever," says Oracle Developer Champion Rolando Carrasco, "institutions, organizations, and governments are keen to generate a new crop of developers that can help them to create something new." In today's business climate competition is tough, and a high premium is placed on innovation. "But developers have a lot of tools, a lot of abilities within reach, and the opportunity to make something that can make a competitive difference."

But the role of the developer is morphing into something new, according to Oracle ACE Director Martin Giffy D'Souza. "In the next couple years we're also going to see that the typical developer is not going to be the traditional developer that went to school, or the script kiddies that just got into the business. We're going to see what is called the citizen developer. We're going to see a lot more people transition to that simply because it adds value to their job. Those people are starting to hit the limits of writing VBA macros in Excel and they want to write custom apps. I think that's what we're going to see more and more of, because we already know there's a developer job shortage."

But why is the job growth for developers outpacing that for DBAs and SysAdmins? "If you take it at a very high level, devs produce things," Martin says. "They produce value. They produce products. DBAs and IT people are maintainers. They're both important, but the more products and solutions we can create," the more value to the business.

Oracle ACE Director Mark Rittman has spent the last couple of years working as a product manager in a start-up, building a tech platform. "I never saw a DBA there," he admits. "It was at the point that if I were to try to explain what a DBA was to people there, all of whom are uniformly half my age, they wouldn't know what I was talking about. That's because the platforms people use these days, within the Oracle ecosystem or Google or Amazon or whatever, it's all very much cloud, and it's all very much NoOps, and it's very much the things that we used to spend ages worrying about." This frees developers to do what they do best.

"There are far fewer people doing DBA work and SysAdmin work," Mark says. "That's all now in the cloud. And that also means that developers can also develop now. I remember, as a BI developer working on projects, it was surprising how much of my time was spent just getting the system working in the first place, installing things, configuring things, and so on. Probably 75% of every project was just getting the thing to actually work."

Where some roles may vanish altogether, others will transform. DBAs have become data engineers or infrastructure engineers, according to Mark.
"So there are engineers around and there are developers around," he observes, "but I think administrator is a role that, unless you work for one of the big cloud companies in one of those big data centers, is largely kind of managed away now." Phil Wilkins, an Oracle ACE, has witnessed the changes. DBAs in particular, as well as network people focused on infrastructure, have been dramatically affected by cloud computing, and the ground is still shaking. "With the rise and growth in cloud adoption these days, you're going to see the low level, hard core technical skills that the DBAs used to bring being concentrated into the cloud providers, where you're taking a database as a service. They're optimizing the underlying infrastructure, making sure the database is running. But I'm just chucking data at it, so I don't care about whether the storage is running efficiently or not. The other thing is that although developers now get a get more freedom, and we've got NoSQL and things like that, we're getting more and more computing power, and it's accelerating at such a rate now that, where 10 years ago we used to have to really worry about the tuning and making sure the database was performant, we can now do a lot of that computing on an iPhone. So why are we worrying when we've got huge amounts of cloud and CPU to the bucketload? These comments represent just a fraction of the conversation captured in this latest Oracle Developer Community Podcast, in which the panelists dive deep into the forces that are shaping and re-shaping roles, and discuss their own concerns about the trends and technologies that are driving that evolution. Listen! The Panelists Rolando Carrasco Oracle Developer Champion Oracle ACE Co-owner, Principal SOA Architect, S&P Solutions Martin Giffy D'Souza Oracle ACE Director Director of Innovation, Insum Solutions   Mark Rittman Oracle ACE Director Chief Executive Officer, MJR Analytics   Phil Wilkins Oracle ACE Senior Consultant, Capgemini 5 Related Oracle Code One Sessions The Future of Serverless is Now: Ci/CD for the Oracle Fn Project, by Rolando Carrasco and Leonardo Gonzalez Cruz [DEV5325] Other Related Content Podcast: Are Microservices and APIs Becoming SOA 2.0? Vibrant and Growing: The Current State of API Management Video: 2 Minute Integration with Oracle Integration Cloud Service It's Always Time to Change Coming Soon The next program, coming on Sept 5, will feature a discussion of "DevOps to NoOps," featuring panelists Baruch Sadogursky, Davide Fiorentino, Bert Jan Schrijver, and others TBA. Stay tuned! Subscribe Never miss an episode! The Oracle Developer Community Podcast is available via: iTunes Podbean Feedburner


DevOps

What's New in Oracle Developer Cloud Service - August 2018

Over the weekend we updated Oracle Developer Cloud Service - your cloud based DevOps and Agile platform - with a new release (18.3.3), adding some key new features that will improve the way you develop and release software on the Oracle Cloud. Here is a quick rundown of the key new capabilities added this month.

Environments

A new top level section in Developer Cloud Service now allows you to define "Environments" - a collection of cloud services that you bundle together under one name. Once you have an environment defined, you'll be able to see the status of your environment on the home page of your project. You can, for example, define development, test and production environments, and see the status of each one with a simple glance. This is the first step in a set of future features of DevCS that will help you manage software artifacts across environments in an easier way.

Project Templates

When you create a new project in DevCS you can base it on a template. Up until this release you were limited to templates created by Oracle; now you can define your own templates for your company. Templates can include default artifacts such as wiki pages, default git repositories, and even builds and deployment steps. This is very helpful for companies who are aiming to standardize development across development teams, as well as for teams who have repeating patterns of development.

Wiki Enhancements

The wiki in DevCS is a very valuable mechanism for your team to share information, and we just added a bunch of enhancements that will make collaboration in your team even better. You can now watch specific wiki pages or sections, which will notify you whenever someone updates those pages. We also added support for commenting on wiki pages - helping you to conduct virtual discussions on their content.

More

These are just some of the new features in Developer Cloud Service. All of these features are part of the free functionality that Developer Cloud Service provides to Oracle Cloud customers. Take them for a spin and let us know what you think. For information on additional new features, check out the What's New in Developer Cloud Service documentation. Got technical questions? Ask them on our cloud customer connect community page.


Auto-updatable, self-contained CLI with Java 11

(Originally published on Medium)

Introduction

Over the course of the last 11 months, we have seen two major releases of Java — Java 9 and Java 10. Come September, we will get yet another release in the form of Java 11, all thanks to the new 6 month release train. Each new release introduces exciting features to assist the modern Java developer. Let's take some of these features for a spin and build an auto-updatable, self-contained command line interface.

The minimum viable feature-set for our CLI is defined as follows:

Display the current bitcoin price index by calling the free CoinDesk API
Check for new updates and, if available, auto-update the CLI
Ship the CLI with a custom Java runtime image to make it self-contained

Prerequisites

To follow along, you will need a copy of a JDK 11 early-access build. You will also need the latest version (4.9 at time of writing) of Gradle. Of course, you can use your preferred way of building Java applications. Though not required, familiarity with JPMS and jlink can be helpful since we are going to use the module system to build a custom runtime image.

Off we go

We begin by creating a class that provides the latest bitcoin price index. Internally, it reads a configuration file to get the URL of the CoinDesk REST API and builds an HTTP client to retrieve the latest price. This class makes use of the new fluent HTTP client classes that are part of the "java.net.http" module.

var bpiRequest = HttpRequest.newBuilder()
    .uri(new URI(config.getProperty("bpiURL")))
    .GET()
    .build();
var bpiApiClient = HttpClient.newHttpClient();
bpiApiClient
    .sendAsync(bpiRequest, HttpResponse.BodyHandlers.ofString())
    .thenApply(response -> toJson(response))
    .thenApply(bpiJson -> bpiJson.getJsonObject("usd").getString("rate"));

By Java standards, this code is actually very concise. We used the new fluent builders to create a GET request, call the API, convert the response into JSON, and pull the current bitcoin price in USD currency.

In order to build a modular jar and set us up to use jlink, we need to add a module-info.java file to specify the CLI's dependencies on other modules.

module ud.bpi.cli {
    requires java.net.http;
    requires org.glassfish.java.json;
}

From the code snippet, we observe that our CLI module requires the http module shipped in Java 11 and an external JSON library.

Now, let's turn our attention to implementing an auto-updater class. This class should provide a couple of methods: one method to talk to a central repository and check for the availability of newer versions of the CLI, and another method to download the latest version. The following snippet shows how easy it is to use the new HTTP client interfaces to download remote files.

CompletableFuture update(String downloadToFile) {
  try {
    HttpRequest request = HttpRequest.newBuilder()
        .uri(new URI("http://localhost:8080/2.zip"))
        .GET()
        .build();
    return HttpClient.newHttpClient()
        .sendAsync(request, HttpResponse.BodyHandlers.ofFile(Paths.get(downloadToFile)))
        .thenApply(response -> {
          unzip(response.body());
          return true;
        });
  } catch (URISyntaxException ex) {
    return CompletableFuture.failedFuture(ex);
  }
}

The new predefined HTTP body handlers in Java 11 can convert a response body into common high-level Java objects. We used the HttpResponse.BodyHandlers.ofFile() method to download a zip file that contains the latest version of our CLI.
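One small gap: the update() method above calls an unzip(...) helper that the post never shows. A minimal sketch of what such a helper could look like with java.util.zip follows; this is an assumption for illustration, not the author's code, and it skips concerns such as zip-slip path validation.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// Hypothetical helper: extract the downloaded archive into the working directory.
static void unzip(Path zipFile) {
    try (var zis = new ZipInputStream(Files.newInputStream(zipFile))) {
        for (ZipEntry entry = zis.getNextEntry(); entry != null; entry = zis.getNextEntry()) {
            Path target = Path.of(entry.getName());
            if (entry.isDirectory()) {
                Files.createDirectories(target);
            } else {
                if (target.getParent() != null) {
                    Files.createDirectories(target.getParent());
                }
                Files.copy(zis, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    } catch (IOException ex) {
        throw new UncheckedIOException(ex);
    }
}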
Let's put these classes together by using a launcher class. It provides an entry point to our CLI and implements the application flow. Right when the application starts, this class calls its launch() method, which will check for new updates.

void launch() {
  var autoUpdater = new AutoUpdater();
  try {
    if (autoUpdater.check().get()) {
      System.exit(autoUpdater.update().get() ? 100 : -1);
    }
  } catch (InterruptedException | ExecutionException ex) {
    throw new RuntimeException(ex);
  }
}

As you can see, if a new version of the CLI is available, we download the new version and exit the JVM by passing in a custom exit code of 100. A simple wrapper script will check for this exit code and rerun the CLI.

#!/bin/sh
...
start
EXIT_STATUS=$?
if [ ${EXIT_STATUS} -eq 100 ]; then
    start
fi

And finally, we will use jlink to create a runtime image that includes all the necessary pieces to execute our CLI. jlink is a new command line tool provided by Java that will look at the options passed to it to assemble and optimize a set of modules and their dependencies into a custom runtime image. In the process, it builds a custom JRE, thereby making our CLI self-contained.

jlink --module-path build/libs/:${JAVA_HOME}/jmods \
    --add-modules ud.bpi.cli,org.glassfish.java.json \
    --launcher bpi=ud.bpi.cli/ud.bpi.cli.Launcher \
    --output images

Let's look at the options that we passed to jlink:

"--module-path" tells jlink to look into the specified folders that contain Java modules
"--add-modules" tells jlink which user-defined modules are to be included in the custom image
"--launcher" is used to specify the name of the script that will be used to start our CLI and the full path to the class that contains the main method of the application
"--output" is used to specify the folder name that holds the newly created self-contained custom image

When we run our first version of the CLI and there are no updates available, the CLI prints something like this:

Say we release a new version (2) of the CLI and push it to the central repo. Now, when you rerun the CLI, you will see something like this:

Voila! The application sees that a new version is available and auto-updates itself. It then restarts the CLI. As you can see, the new version adds an up/down arrow indicator to let the user know how well the bitcoin price index is doing.

Head over to GitHub to grab the source code and experiment with it.
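The launch() flow above relies on autoUpdater.check(), which the excerpt doesn't show. A plausible sketch in the same java.net.http style is below; the version endpoint, the plain-integer version format, and the CURRENT_VERSION value are all assumptions made for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

// Hypothetical: ask the central repository for the latest version number and
// report whether it is newer than the version this CLI was built with.
CompletableFuture<Boolean> check() {
    final int CURRENT_VERSION = 1;  // assumed; in practice this would be baked in at build time
    HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8080/latest-version"))  // assumed endpoint
            .GET()
            .build();
    return HttpClient.newHttpClient()
            .sendAsync(request, HttpResponse.BodyHandlers.ofString())
            .thenApply(response -> Integer.parseInt(response.body().trim()) > CURRENT_VERSION);
}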


Oracle Load Balancer Classic configuration with Terraform

(Originally published on Medium)

This article provides an introduction to using the Load Balancer resources to provision and configure an Oracle Cloud Infrastructure Load Balancer Classic instance using Terraform.

When using the Load Balancer Classic resources with the opc Terraform Provider, the lbaas_endpoint attribute must be set in the provider configuration.

provider "opc" {
  version         = "~> 1.2"
  user            = "${var.user}"
  password        = "${var.password}"
  identity_domain = "${var.compute_service_id}"
  endpoint        = "${var.compute_endpoint}"
  lbaas_endpoint  = "https://lbaas-1111111.balancer.oraclecloud.com"
}

First we create the main Load Balancer instance resource. The Server Pool, Listener and Policy resources will be created as child resources associated to this instance.

resource "opc_lbaas_load_balancer" "lb1" {
  name              = "examplelb1"
  region            = "uscom-central-1"
  description       = "My Example Load Balancer"
  scheme            = "INTERNET_FACING"
  permitted_methods = ["GET", "HEAD", "POST"]
  ip_network        = "/Compute-${var.domain}/${var.user}/ipnet1"
}

To define the set of servers the load balancer will be directing traffic to, we create a Server Pool, sometimes referred to as an origin server pool. Each server is defined by the combination of the target IP address, or hostname, and port. For brevity, we'll assume we already have a couple of instances on an existing IP Network with a web service running on port 8080.

resource "opc_lbaas_server_pool" "serverpool1" {
  load_balancer = "${opc_lbaas_load_balancer.lb1.id}"
  name          = "serverpool1"
  servers       = ["192.168.1.2:8080", "192.168.1.3:8080"]
  vnic_set      = "/Compute-${var.domain}/${var.user}/vnicset1"
}

The Listener resource defines what incoming traffic the Load Balancer will direct to a specific server pool. Multiple Server Pools and Listeners can be defined for a single Load Balancer instance. For now we'll assume all the traffic is HTTP, both to the load balancer and between the load balancer and the server pool. We'll look at securing traffic with HTTPS later. In this example the load balancer is managing inbound requests for a site http://mywebapp.example.com and directing them to the server pool we defined above.

resource "opc_lbaas_listener" "listener1" {
  load_balancer = "${opc_lbaas_load_balancer.lb1.id}"
  name          = "http-listener"

  balancer_protocol = "HTTP"
  port              = 80
  virtual_hosts     = ["mywebapp.example.com"]

  server_protocol = "HTTP"
  server_pool     = "${opc_lbaas_server_pool.serverpool1.uri}"

  policies = [
    "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}",
  ]
}

Policies are used to define how the Listener processes the incoming traffic. In the Listener definition we are referencing a Load Balancing Mechanism Policy to set how the load balancer allocates the traffic across the available servers in the server pool. Additional policy types could also be defined, for example to control session affinity.

resource "opc_lbaas_policy" "load_balancing_mechanism_policy" {
  load_balancer = "${opc_lbaas_load_balancer.lb1.id}"
  name          = "roundrobin"

  load_balancing_mechanism_policy {
    load_balancing_mechanism = "round_robin"
  }
}

With that, our first basic Load Balancer configuration is complete. Well, almost. The last step is to configure the DNS CNAME record to point the source domain name (e.g. mywebapp.example.com) to the canonical host name of the load balancer instance. The exact steps to do this will be dependent on your DNS provider. To get the canonical_host_name, add the following output.
output "canonical_host_name" {
  value = "${opc_lbaas_load_balancer.lb1.canonical_host_name}"
}

Helpful Hint: if you are just creating the load balancer for testing and you don't have access to a DNS name you can redirect, a workaround is to set the virtual host in the listener configuration to the load balancer's canonical host name. You can then use the canonical host name directly for the inbound service URL, e.g.

resource "opc_lbaas_listener" "listener1" {
  ...
  virtual_hosts = [
    "${opc_lbaas_load_balancer.lb1.canonical_host_name}"
  ]
  ...
}

Configuring the Load Balancer for HTTPS

There are two separate aspects to configuring the Load Balancer for HTTPS traffic. The first is to enable inbound HTTPS requests to the Load Balancer, often referred to as SSL or TLS termination or offloading. The second is the use of HTTPS for traffic between the Load Balancer and the servers in the origin server pool.

HTTPS SSL/TLS Termination

To configure the Load Balancer listener to accept inbound HTTPS requests for encrypted traffic between the client and the Load Balancer, create a Server Certificate, providing the PEM encoded certificate and private key, and the concatenated set of PEM encoded certificates for the CA certification chain.

resource "opc_lbaas_certificate" "cert1" {
  name = "server-cert"
  type = "SERVER"

  private_key       = "${var.private_key_pem}"
  certificate_body  = "${var.cert_pem}"
  certificate_chain = "${var.ca_cert_pem}"
}

Now update the existing listener, or create a new one, for HTTPS.

resource "opc_lbaas_listener" "listener2" {
  load_balancer = "${opc_lbaas_load_balancer.lb1.id}"
  name          = "https-listener"

  balancer_protocol = "HTTPS"
  port              = 443
  certificates      = ["${opc_lbaas_certificate.cert1.uri}"]
  virtual_hosts     = ["mywebapp.example.com"]

  server_protocol = "HTTP"
  server_pool     = "${opc_lbaas_server_pool.serverpool1.uri}"

  policies = [
    "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}",
  ]
}

Note that the server pool protocol is still HTTP; in this configuration traffic is only encrypted between the client and the load balancer.

HTTP to HTTPS redirect

A common pattern required for many web applications is to ensure that any initial incoming requests over HTTP are redirected to HTTPS for secure site communication. To do this we can update the original HTTP listener we created above with a new redirect policy.

resource "opc_lbaas_policy" "redirect_policy" {
  load_balancer = "${opc_lbaas_load_balancer.lb1.id}"
  name          = "example_redirect_policy"

  redirect_policy {
    redirect_uri  = "https://${var.dns_name}"
    response_code = 301
  }
}

resource "opc_lbaas_listener" "listener1" {
  load_balancer = "${opc_lbaas_load_balancer.lb1.id}"
  name          = "http-listener"

  balancer_protocol = "HTTP"
  port              = 80
  virtual_hosts     = ["mywebapp.example.com"]

  server_protocol = "HTTP"
  server_pool     = "${opc_lbaas_server_pool.serverpool1.uri}"

  policies = [
    "${opc_lbaas_policy.redirect_policy.uri}",
  ]
}

HTTPS between Load Balancer and Server Pool

HTTPS between the Load Balancer and Server Pool should be used if the server pool is accessed over the public Internet, and can also be used for extra security when accessing servers within the Oracle Cloud Infrastructure over the private IP Network. This configuration assumes the backend servers are already configured to serve their content over HTTPS.

To configure the Load Balancer to communicate securely with the backend servers, create a Trusted Certificate, providing the PEM encoded Certificate and CA authority certificate chain for the backend servers.
resource "opc_lbaas_certificate" "cert2" {
  name = "trusted-cert"
  type = "TRUSTED"

  certificate_body  = "${var.cert_pem}"
  certificate_chain = "${var.ca_cert_pem}"
}

Next create a Trusted Certificate Policy referencing the Trusted Certificate.

resource "opc_lbaas_policy" "trusted_certificate_policy" {
  load_balancer = "${opc_lbaas_load_balancer.lb1.id}"
  name          = "example_trusted_certificate_policy"

  trusted_certificate_policy {
    trusted_certificate = "${opc_lbaas_certificate.cert2.uri}"
  }
}

And finally update the listener's server pool configuration to HTTPS, adding the trusted certificate policy.

resource "opc_lbaas_listener" "listener2" {
  load_balancer = "${opc_lbaas_load_balancer.lb1.id}"
  name          = "https-listener"

  balancer_protocol = "HTTPS"
  port              = 443
  certificates      = ["${opc_lbaas_certificate.cert1.uri}"]
  virtual_hosts     = ["mywebapp.example.com"]

  server_protocol = "HTTPS"
  server_pool     = "${opc_lbaas_server_pool.serverpool1.uri}"

  policies = [
    "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}",
    "${opc_lbaas_policy.trusted_certificate_policy.uri}",
  ]
}

More Information

Example Terraform configuration for Load Balancer Classic
Getting Started with Oracle Cloud Infrastructure Load Balancing Classic
Terraform Provider for Oracle Cloud Infrastructure Classic
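The configuration above references several input variables (var.user, var.cert_pem, and so on) without showing their declarations. A minimal variables file that makes the examples self-contained could look like the sketch below; the variable names match the references above, everything else about them is assumed.

variable "user" {}
variable "password" {}
variable "domain" {}
variable "compute_service_id" {}
variable "compute_endpoint" {}
variable "dns_name" {}

# PEM-encoded certificate material used by the opc_lbaas_certificate resources.
variable "private_key_pem" {}
variable "cert_pem" {}
variable "ca_cert_pem" {}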


Developers

A Quick Look At What's New In Oracle JET v5.1.0

On June 18th, the v5.1.0 release of Oracle JET was made available. It was the 25th consecutive on-schedule release for Oracle JET. Details on the release schedule are provided here in the FAQ. As indicated by the release number, v5.1.0 is a minor release, aimed at tweaking and consolidating features throughout the toolkit. As in other recent releases, new features have been added to support development of composite components, following the Composite Component Architecture (CCA). For details, see the entry on the new Template Slots in Duncan Mills's blog. Also, take note of the new design time metadata, as described in the release notes.

Aside from the work done in the CCA area, the key new features and enhancements to be aware of in the release are listed below (component, enhancement, and description), sorted alphabetically:

oj-chart: New "data" attribute. Introduces new attributes, slots, and custom elements.
oj-film-strip: New "looping" attribute. Specifies filmstrip navigation behavior, bounded ("off") or looping ("page").
oj-form-layout: Enhanced content flexibility. Removes restrictions on the types of children allowed in the "oj-form-layout" component.
oj-gantt: New "dnd" attribute and "ojMove" event. Provides new support for moving tasks via drag and drop.
oj-label-value: New component. Provides enhanced layout flexibility for the "oj-form-layout" component.
oj-list-view: Enhanced "itemTemplate" slot. Supports including the <LI> element in the template.
oj-swipe-actions: New component. Provides a declarative way to add swipe-to-reveal functionality to items in the "oj-list-view" component.

For all the details on the items above, see the release notes.

Note: Be aware that in Oracle JET 7.0.0, support for Yeoman and Grunt will be removed from generator-oraclejet and ojet-cli. As a consequence, the ojet-cli will be the only way to use the Oracle JET tooling, e.g., to create new Oracle JET projects from that point on. Therefore, if you haven't transitioned from Yeoman and Grunt to ojet-cli yet, e.g., to command line calls such as "ojet create", take some time to move in that direction before the 7.0.0 release.

As always, your comments and constructive feedback are welcome. If you have questions or comments, please engage with the Oracle JET Community in the Discussion Forums and also follow @OracleJET on Twitter. For organizations using Oracle JET in production, you're invited to be highlighted on the Oracle JET site, with the latest addition being a brand new Customer Success Story by Capgemini.

On behalf of the entire Oracle JET development team: "Happy coding!"
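Since Yeoman and Grunt support will be dropped in 7.0.0, it is worth getting comfortable with the ojet-cli workflow now. A quick sketch of the basic commands follows; the template name is illustrative and flags may differ between tooling versions, so run ojet help to confirm what your installed version supports.

# Install the Oracle JET command-line tooling and scaffold a new app.
npm install -g @oracle/ojet-cli
ojet create myjetapp --template=navdrawer   # template name is illustrative
cd myjetapp
ojet serve                                  # build and serve locally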


APIs

Vibrant and Growing: The Current State of API Management

"Vibrant and growing all the time!" That's how Andrew Bell, Oracle PaaS API Management Architect at Capgemini, describes the current state of API management. "APIs are the doors to organizations, the means by which organizations connect to one another, connect their processes to one another, and streamline those processes to meet customer needs. The API environment is growing rapidly as we speak," Bell says. "API management today is quite crucial," says Bell's Capgemini colleague Sander Rensen, an Oracle PaaS lead and architect, "especially for clients who want to go on a journey of a digital transformation. For our clients, the ability to quickly find APIs and subscribe to them is a very crucial part of digital transformation. "It's not just the public-facing view of APIs," observes Oracle ACE Phil Wilkins, a senior Capgemini consultant specializing in iPaaS. "People are realizing that APIs are an easier, simpler way to do internal decoupling. If I expose my back-end system in a particular way to another part of the organization — the same organization — I can then mask from you how I'm doing transformation or innovation or just trying to keep alive a legacy system while we try and improve our situation," Wilkins explains. "I think that was one of the original aspirations of WSDL and technologies like that, but we ended up getting too fine-grained and tying WSDLs to end products. Then the moment the product changed that WSDL changed and you broke the downstream connections." Luis Weir, CTO of Capgemini's Oracle delivery unit and an Oracle Developer Champion and ACE Director, is just as enthusiastic about the state of API management, but see's a somewhat rocky road ahead for some organizations. "APIs are one thing, but the management of those APIs is something entirely different," Weir explains "API management is something that we're doing quite heavily, but I don't think all organizations have actually realized the importance of the full lifecycle management of the APIs. Sometimes people think of API management as just an API gateway. That’s an important capability, but there is far more to it," Weir wonders if organizations understand what it means to manage an API throughout its entire lifecycle. Bell, Rensen, Wilkins, and Weir are the authors of Implementing Oracle API Platform Cloud Service, now available from Packt Publishing, and as you'll hear in this podcast, they bring considerable insight and expertise to this discussion of what's happening in API management. The conversation goes beyond the current state of API management to delve into architectural implications, API design, and how working in SOA may have left you with some bad habits. Listen! This program was recorded on June 27, 2018. The Panelists Andrew Bell Oracle PaaS API Management Architect, Capgemini     Sander Rensen Oracle PaaS Lead and Architect, Capgemini     Luis Weir CTO, Oracle DU, Capgemini Oracle Developer Champion Oracle ACE Director Phil Wilkins Senior Consultant specializing in iPaaS Oracle ACE   Additional Resources Book Excerpt: Implement an API Design-first approach for building APIs [Tutorial] Microservices in a Monolith World, Presentation by Phil Wilkins Video: API-Guided Drone Flight London Oracle Developer Meet-up Two New Articles on API Management and Microservices Podcast: Are Microservices and APIs Becoming SOA 2.0? 
Podcast Show Notes: API Management Roundtable
Podcast: Taking Charge: Meeting SOA Governance Challenges

Related Oracle Code One Sessions

The Seven Deadly Sins of API Design [DEV4921], by Luis Weir
Oracle Cloud Soaring: Live Demo of a Poly-Cloud Microservices Implementation [DEV4979], by Luis Weir, Lucas Jellema, Guido Schmutz

Coming Soon

How has your role as a developer, DBA, or Sysadmin changed? Our next program will focus on the evolution of IT roles and the trends and technologies that are driving the changes.

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via: iTunes, Podbean, Feedburner

"Vibrant and growing all the time!" That's how Andrew Bell, Oracle PaaS API Management Architect at Capgemini, describes the current state of API management. "APIs are the doors to organizations, the...

Blockchain

Keep Calm and Code On: Four Ways an Enterprise Blockchain Platform Can Improve Developer Productivity

A guest post by Sarabjeet (Jay) Chugh, Sr. Director Product Marketing, Oracle Cloud Platform

Situation

You just got a cool new Blockchain project for a client. As you head back to the office, you start to map out the project plan in your mind. Can you meet all of your client's requirements in time? You're not alone in this dilemma.

You attend a blockchain conference the next day, get inspired by engaging talks, and meet fellow developers working on similar projects. A lunchtime chat with a new friend turns into a lengthy conversation about getting started with Blockchain. Now you're bursting with new ideas and ready to get started with your hot new Blockchain coding project. Right? Well, almost...

You go back to your desk and contemplate a plan of action to develop your smart contract or distributed application, thinking through the steps, including ideation, analysis, prototype, coding, and finally building the client-facing application.

Problem

It is then that the reality sets in. You begin thinking beyond proof-of-concept to the production phase, which will require additional things that you will need to design for and build into your solution. Additional things such as:

These things may delay or even prevent you from getting started with building the solution. Ask yourself questions such as: Should I spend time trying to fulfill dependencies of open-source software such as Hyperledger Fabric on my own to start using it to code something meaningful? Do I spend time building integrations of diverse systems of record with Blockchain? Do I figure out how to assemble components such as identity management, compute infrastructure, storage, and management and monitoring systems with Blockchain? How do I integrate my familiar development tools and CI/CD platform without learning new tools? And finally, ask yourself: is it the best use of your time to figure out scaling, security, disaster recovery, point-in-time recovery of the distributed ledger, and the "ilities" like reliability, availability, and scalability?

If the answer to one or more of these is a resounding no, you are not alone. Focusing on the above aspects, though important, will take time away from doing the actual work to meet your client's needs in a timely manner, which can definitely be a source of frustration. But do not despair. Read on to see how an enterprise Blockchain platform such as the one from Oracle can make your life simpler. Imagine productivity savings multiplied hundreds of thousands of times across critical enterprise blockchain applications and chaincode.

What is an Enterprise Blockchain Platform?

The very term "enterprise" typically signals a "large-company, expensive thing" in the hearts and minds of developers. Not so in this case, as it may be more cost effective than spending your expensive developer hours to build, manage, and maintain blockchain infrastructure and its dependencies on your own. As the chart below shows, the top two Blockchain technologies used in proofs of concept have been Ethereum and Hyperledger.

Ethereum has been a platform of choice amid the ICO hype for public blockchain use. However, it has relatively lower performance, and is slower and less mature compared to Hyperledger. It also uses a less secure programming model based on a primitive language called Solidity, which is prone to re-entrancy attacks that have led to prominent hacks like the DAO attack, which lost $50M recently.
Hyperledger Fabric, on the other hand, wins out in terms of maturity, stability, and performance, and is a good choice for enterprise use cases involving the use of permissioned blockchains. In addition, capabilities such as the ones listed in red have been added by vendors such as Oracle that make it simpler to adopt and use while retaining open source compatibility. Let's look at how an enterprise Blockchain platform, such as the one Oracle has built on open-source Hyperledger Fabric, can help boost developer productivity.

How an Enterprise Blockchain Platform Drives Developer Productivity

Enterprise blockchain platforms provide four key benefits that drive greater developer productivity:

Performance at Scale
- Faster consensus with Hyperledger Fabric
- Faster world state DB: record-level locking for concurrency and parallelization of updates to the world state DB
- Parallel execution across channels and smart contracts
- Parallelized validation for commit

Operations Console with Web UI
- Dynamic configuration: nodes, channels
- Chaincode lifecycle: install, instantiate, invoke, upgrade
- Adding organizations
- Monitoring dashboards
- Ledger browser
- Log access for troubleshooting

Resilience and Availability
- Highly available configuration with replicated VMs
- Autonomous monitoring and recovery
- Embedded backup of configuration changes and new blocks
- Zero-downtime patching

Enterprise Development and Integration
- Offline development support and tooling
- DevOps CI/CD integration for chaincode deployment and lifecycle management
- SQL rich queries, which enable writing fewer lines of code and fewer lines to debug
- REST API based integration with SaaS, custom apps, and systems of record
- Node.js, Go, and Java client SDKs
- Plug-and-play integration adapters in Oracle's Integration Cloud

Developers can experience orders of magnitude of productivity gains with a pre-assembled, managed, enterprise-grade, and integrated blockchain platform as compared to assembling it on their own.

Summary

Oracle offers a pre-assembled, open, enterprise-grade blockchain platform, which provides plug-and-play integrations with systems of record and applications, and autonomous AI-driven self-driving, self-repairing, and self-securing capabilities to streamline operations and blockchain functionality. The platform is built with Oracle's years of experience serving enterprises' most stringent use cases and is backed by the expertise of partners trained in Oracle blockchain. The platform rids developers of the hassles of assembling, integrating, or even worrying about performance, resilience, and manageability, which greatly improves productivity.

If you'd like to learn more, register to attend an upcoming webcast (July 16, 9 am PST/12 pm EST). And if you're ready to dive right in, you can sign up for $300 of free credits good for up to 3500 hours of Oracle Autonomous Blockchain Cloud Service usage.
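To make the chaincode side of this concrete, here is a minimal sketch of Hyperledger Fabric chaincode written against the generic fabric-shim Node.js API. It is a plain Fabric example rather than Oracle Blockchain Platform-specific code, and the function names and key/value layout are illustrative only.

'use strict';

const shim = require('fabric-shim');

// Minimal chaincode: stores and reads a single value in the world state.
const SimpleAsset = class {
  async Init(stub) {
    return shim.success();
  }

  async Invoke(stub) {
    const { fcn, params } = stub.getFunctionAndParameters();
    try {
      if (fcn === 'put') {
        // params[0] = key, params[1] = value
        await stub.putState(params[0], Buffer.from(params[1]));
        return shim.success();
      }
      if (fcn === 'get') {
        const value = await stub.getState(params[0]);
        return shim.success(value);
      }
      throw new Error('Received unknown function: ' + fcn);
    } catch (err) {
      return shim.error(err);
    }
  }
};

shim.start(new SimpleAsset());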


DevOps

Build and Deploy Node.js Microservice on Docker using Oracle Developer Cloud

This is the first blog in a series that will help you understand how you can build a Node.js REST microservice application Docker image and push it to Docker Hub using Oracle Developer Cloud Service. The next blog in the series will focus on deploying the container we build here to Oracle Kubernetes Engine on Oracle Cloud Infrastructure. You can read an overview of the Docker functionality in this blog.

Technology Stack Used

Developer Cloud Service – DevOps platform
Node.js Version 6 – For microservice development
Docker – For builds
Docker Hub – Container repository

Setting up the Environment:

Setting up Docker Hub Account: You should create an account on https://hub.docker.com/. Keep the credentials handy for use in the build configuration section of the blog.

Setting up Developer Cloud Git Repository: Now log in to your Oracle Developer Cloud Service project and create a Git repository as shown below. You can give a name of your choice to the Git repository. For the purpose of this blog, I am calling it NodeJSDocker. You can copy the Git repository URL and keep it handy for future use.

Setting up Build VM in Developer Cloud: Now we have to create a VM Template and VM with the Docker software bundle for the execution of the build. Click on the user drop down at the top right of the page. Select "Organization" from the menu. Click on the VM Templates tab and then on the "New Template" button. Give a template name of your choice and select the platform as "Oracle Linux 7", then click the Create button. On creation of the template, click on the "Configure Software" button. Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on "Done" to complete the software configuration. Click on the Virtual Machines tab, then click on the "+New VM" button, enter the number of VM(s) you want to create, and select the VM Template you just created, which would be "DockerTemplate" for our blog.

Pushing Scripts to Git Repository on Oracle Developer Cloud:

Command_prompt:> cd <path to the NodeJS folder>
Command_prompt:> git init
Command_prompt:> git add --all
Command_prompt:> git commit -m "<some commit message>"
Command_prompt:> git remote add origin <Developer cloud Git repository HTTPS URL>
Command_prompt:> git push origin master

Below screen shots are for your reference.

Below is the folder structure description for the code that I have in the Git repository on Oracle Developer Cloud Service.

Code in the Git Repository:

You will need to push the below 3 files to the Developer Cloud hosted Git repository which we have created.

Main.js

This is the main Node.js code file. It contains two simple routes: the first shows a message, and the second, /add, adds two numbers. The application listens on port 80.

var express = require("express");
var bodyParser = require("body-parser");

var app = express();
app.use(bodyParser.urlencoded());
app.use(bodyParser.json());

var router = express.Router();

router.get('/', function(req, res) {
  res.json({"error" : false, "message" : "Hello Abhinav!"});
});

router.post('/add', function(req, res) {
  res.json({"error" : false, "message" : "success", "data" : req.body.num1 + req.body.num2});
});

app.use('/', router);

app.listen(80, function() {
  console.log("Listening at PORT 80");
});

Package.json

In this JSON code snippet we define the Node.js module dependencies.
We also define the start file, which is Main.js for our project, and the name of the application.

{
  "name": "NodeJSMicro",
  "version": "0.0.1",
  "scripts": {
    "start": "node Main.js"
  },
  "dependencies": {
    "body-parser": "^1.13.2",
    "express": "^4.13.1"
  }
}

Dockerfile

This file contains the commands to be executed to build the Docker container with the Node.js code. It starts from the Node.js version 6 Docker image, adds the two files Main.js and package.json cloned from the Git repository, runs npm install to download the dependencies listed in package.json, exposes port 80 for the Docker container, and finally starts the application, which listens on port 80.

FROM node:6
ADD Main.js ./
ADD package.json ./
RUN npm install
EXPOSE 80
CMD [ "npm", "start" ]

Build Configuration:

Click on the "+ New Job" button and, in the dialog which pops up, give the build job a name of your choice (for the purpose of this blog I have given this as "NodeJSMicroDockerBuild") and then select from the dropdown the build template (DockerTemplate) that we had created earlier in the blog.

As part of the build configuration, add Git from the "Add Source Control" dropdown. Now select the repository we created earlier in the blog, which is NodeJSDocker, and the master branch to which we have pushed the code. You may select the checkbox to configure an automatic build trigger on SCM commits.

Now from the Builders tab, select Docker Builder -> Docker Login. In the Docker login form you can leave the Registry host empty as we will be using Docker Hub, which is the default Docker registry for the Developer Cloud Docker Builder. You will have to provide the Docker Hub account username and password in the respective fields of the login form.

In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You can leave the Registry host empty as we are going to use Docker Hub, which is the default registry. Now you just need to give the Image name in the form that gets added and you are all done with the build job configuration. Click on Save to save the build job configuration.

Note: the image name should be in the format <Docker Hub user name>/<Image Name>. For this blog we can give the image name as nodejsmicro.

Then add Docker Push by selecting Docker Builder -> Docker Push from the Builders tab. Here you just need to mention the same image name you used in the Docker Build form, to push the Docker image that is built to the Docker registry, which in this case is Docker Hub.

Once you execute the build, you will be able to see the build in the build queue. Once the build gets executed, the Docker image that gets built is pushed to the Docker registry, which is Docker Hub for our blog. You can log in to your Docker Hub account to see the Docker repository being created and the image being pushed to it, as seen in the screen shot below.

Now you can pull this image anywhere, then create and run the container, and you will have your Node.js microservice code up and running.

You can go ahead and try many other Docker commands, both using the out of the box Docker Builder functionality and also alternatively using the Shell Builder to run your Docker commands.

In the next blog of the series, we will deploy this Node.js microservice container on a Kubernetes cluster in Oracle Kubernetes Engine. Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.
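As a quick way to try the pushed image locally, outside of the Developer Cloud pipeline, something like the following works. It is a sketch: replace <Docker Hub user name> with your own account, and note that the /add route expects a JSON body with num1 and num2.

# Pull the image built by the pipeline and run it, mapping container port 80 to local port 8080.
docker pull <Docker Hub user name>/nodejsmicro
docker run -d --name nodejsmicro -p 8080:80 <Docker Hub user name>/nodejsmicro

# Exercise both routes of the microservice.
curl http://localhost:8080/
curl -X POST -H "Content-Type: application/json" -d '{"num1": 2, "num2": 3}' http://localhost:8080/add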


Lessons From Alpha Zero (part 5): Performance Optimization

Photo by Mathew Schwartz on Unsplash

(Originally published on Medium)

This is the fifth installment in our series on lessons learned from implementing AlphaZero. Check out Part 1, Part 2, Part 3, and Part 4. In this post, we review aspects of our AlphaZero implementation that allowed us to dramatically improve the speed of game generation and training.

Overview

The task of implementing AlphaZero is daunting, not just because the algorithm itself is intricate, but also due to the massive resources the authors employed to do their research: 5000 TPUs were used over the course of many hours to train their algorithm, and that is presumably after a tremendous amount of time was spent determining the best parameters to allow it to train that quickly. By choosing Connect Four as our first game, we hoped to make a solid implementation of AlphaZero while utilizing more modest resources. But soon after starting, we realized that even a simple game like Connect Four could require significant resources to train: in our initial implementation, training would have taken weeks on a single GPU-enabled computer. Fortunately, we were able to make a number of improvements that made our training cycle time shrink from weeks to about a day. In this post I'll go over some of our most impactful changes.

The Bottleneck

Before diving into some of the tweaks we made to reduce AZ training time, let's describe our training cycle. Although the authors of AlphaZero used a continuous and asynchronous process to perform model training and updates, for our experiments we used the following three stage synchronous process, which we chose for its simplicity and debugability:

While (my model is not good enough):
Generate Games: every model cycle, using the most recent model, game play agents generate 7168 games, which equates to about 140–220K game positions.
Train a New Model: based on a windowing algorithm, we sample from historical data and train an improved neural network.
Deploy the New Model: we now take our new model, transform it into a deployable format, and push it into our cloud for the next cycle of training.

Far and away, the biggest bottleneck of this process is game generation, which was taking more than an hour per cycle when we first got started. Because of this, minimizing game generation time became the focus of our attention.

Model Size

AlphaZero is very inference heavy during self-play. In fact, during one of our typical game generation cycles, MCTS requires over 120 million position evaluations. Depending on the size of your model, this can translate to significant GPU time. In the original implementation of AlphaZero, the authors used an architecture where the bulk of computation was performed in 20 residual layers each with 256 filters. This amounts to a model in excess of 90 megabytes, which seemed overkill for Connect Four. Also, using a model of that size was impractical given our initially limited GPU resources. Instead, we started with a very small model, using just 5 layers and 64 filters, just to see if we could make our implementation learn anything at all. As we continued to optimize our pipeline and improve our results, we were able to bump our model size to 20x128 while still maintaining a reasonable game generation speed on our hardware.
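Before moving on to the individual optimizations, here is a minimal sketch of the three-stage synchronous cycle described in The Bottleneck section above. All of the function names and the stopping check are placeholders, not the project's actual pipeline code.

# Hypothetical outline of the synchronous training loop described above.
def training_cycle(model, games_per_cycle=7168):
    while not good_enough(model):                          # placeholder stopping criterion
        games = generate_games(model, games_per_cycle)     # self-play with the most recent model
        window = sample_training_window(games)             # windowing over historical game data
        model = train_new_model(window)                    # fit an improved neural network
        deploy(model)                                      # push the new model for the next cycle
    return model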
Distributed Inference From the get-go, we knew that we would need more than one GPU in order to achieve the training cycle time that we were seeking, so we created software that allowed our Connect 4 game agent to perform remote inference to evaluate positions. This allowed us to scale GPU-heavy inference resources separately from game play resources, which need only CPU.   Parallel Game Generation GPU resources are expensive, so we wanted to make sure that we were saturating them as much as possible during playouts. This turned out to be trickier than we imagined. One of the first optimizations we put in place was to run many games on parallel threads from the same process. Perhaps the largest direct benefit of this is that it allowed us to cache position evaluations, which could be shared amongst different threads. This cut the number of requests getting sent to our remote inference server by more than a factor of 2. Caching was a huge win, but we still wanted to deal with the remaining uncached requests in an efficient manner. To minimize network latency and best leverage GPU parallelization, we combined inference requests from different worker threads into a bucket before sending them to our inference service. The downside to this is that if a bucket was not promptly filled, any calling thread would be stuck waiting until the bucket's timeout expired. Under this scheme, choosing an appropriate inference bucket size and timeout value was very important. We found that bucket fill rate varied throughout the course of a game generation batch, mostly because some games would finish sooner than others, leaving behind fewer and fewer threads to fill the bucket. This caused the final games of a batch to take a long time to complete, all while GPU utilization dwindled to zero. We needed a better way to keep our buckets filled.   Parallel MCTS To help with our unfilled bucket problem, we implemented Parallel MCTS, which was discussed in the AZ paper. Initially we had punted on this detail, as it seemed mostly important for competitive one-on-one game play, where parallel game play is not applicable. After running into the issues mentioned previously, we decided to give it a try. The idea behind Parallel MCTS is to allow multiple threads to take on the work of accumulating tree statistics. While this sounds simple, the naive approach suffers from a basic problem: if N threads all start at the same time and choose a path based on the current tree statistics, they will all choose exactly the same path, thus crippling MCTS' exploration component. To counteract this, AlphaZero uses the concept of Virtual Loss, an algorithm that temporarily adds a game loss to any node that is traversed during a simulation. A lock is used to prevent multiple threads from simultaneously modifying a node's simulation and virtual loss statistics. After a node is visited and a virtual loss is applied, when the next thread visits the same node, it will be discouraged from following the same path. Once a thread reaches a terminal point and backs up its result, this virtual loss is removed, restoring the true statistics from the simulation. With virtual loss in place, we were finally able to achieve >95% GPU utilization during most of our game generation cycle, which was a sign that we were approaching the real limits of our hardware setup. 
Technically, virtual loss adds some degree of exploration to game playouts, as it forces move selection down paths that MCTS may not naturally be inclined to visit, but we never measured any detrimental (or beneficial) effect due to its use.   TensorRT/TensorRT+INT8 Though it was not necessary to use a model quite as large as that described in the AlphaZero paper, we saw better learning from larger models, and so wanted to use the biggest one possible. To help with this, we tried TensorRT, which is a technology created by Nvidia to optimize the performance of model inference. It is easy to convert an existing Tensorflow/Keras model to TensorRT using just a few scripts. Unfortunately, at the time we were working on this, there was no released TensorRT remote serving component, so we wrote our own. With TensorRT's default configuration, we noticed a small increase in inference throughput (~11%). We were pleased by this modest improvement, but were hopeful to see an even larger performance increase by using TensorRT's INT8 mode. INT8 mode required a bit more effort to get going, since when using INT8 you must first generate a calibration file to tell the inference engine what scale factors to apply to your layer activations when using 8-bit approximated math. This calibration is done by feeding a sample of your data into Nvidia's calibration library. Because we observed some variation in the quality of calibration runs, we would attempt calibration against 3 different sets of sample data, and then validate the resulting configuration against hold-out data. Of the three calibration attempts, we chose the one with the lowest validation error. Once our INT8 implementation was in place, we saw an almost 4X increase in inference throughput vs. stock libtensorflow, which allowed us to use larger models than would have otherwise been feasible. One downside of using INT8 is that it can be lossy and imprecise in certain situations. While we didn't observe serious precision issues during the early parts of training, as learning progressed we would observe the quality of inference start to degrade, particularly on our value output. This initially led us to use INT8 only during the very early stages of training. Serendipitously, we were able to virtually eliminate our INT8 precision problem when we began experimenting with increasing the number of convolutional filters in our head networks, an idea we got from Leela Chess. Below is a chart of our value output's mean average error with 32 filters in the value head, vs. the AZ default of 1. We theorize that adding additional cardinality to these layers reduces the variance in the activations, which makes the model easier to accurately quantize. These days, we always perform our game generation with INT8 enabled and see no ill effects even towards the end of AZ training.   Summary By using all of these approaches, we were finally able to train a decent-sized model with high GPU utilization and good cycle time. It was initially looking like it would take weeks to perform a full training run, but now we could train a decent model in less than a day. This was great, but it turned out we were just getting started; in the next article we'll talk about how we tuned AlphaZero itself to get even better learning speed. Part 6 is now out. Thanks to Vish (Ishaya) Abrams and Aditya Prasad.


Chatbots

A Practical Guide to Building Multi-Language Chatbots with the Oracle Bot Platform

Article by Frank Nimphius, Marcelo Jabali - June 2018 Chatbot support for multiple languages is a worldwide requirement. Almost every country has the need to support foreign languages, be it to support immigrants, refugees, tourists, or even employees crossing borders on a daily basis for their jobs. According to the Linguistic Society of America, as of 2009, 6,909 distinct languages had been classified, a number that has since grown. Although no bot needs to support all languages, it is clear that for developers building multi-language bots, understanding natural language in multiple languages is a challenge, especially if the developer does not speak all of the languages he or she needs to support. This article explores Oracle's approach to multi-language support in chatbots. It explains the tooling and practices for you to use and follow to build bots that understand and "speak" foreign languages. Read the full article.   Related Content TechExchange: A Simple Guide and Solution to Using Resource Bundles in Custom Components  TechExchange - Custom Component Development in OMCe – Getting Up and Running Immediately TechExchange - First Step in Training Your Bot


Database

Announcing Oracle APEX 18.1

Oracle Application Express (APEX) 18.1 is now generally available! APEX enables you to develop, design and deploy beautiful, responsive, data-driven desktop and mobile applications using only a browser. This release of APEX is a dramatic leap forward in both the ease of integration with remote data sources and the easy inclusion of robust, high-quality application features. Keeping up with the rapidly changing industry, APEX now makes it easier than ever to build attractive and scalable applications which integrate data from anywhere - within your Oracle database, from a remote Oracle database, or from any REST Service, all with no coding.  And the new APEX 18.1 enables you to quickly add higher-level features which are common to many applications - delivering a rich and powerful end-user experience without writing a line of code. "Over a half million developers are building Oracle Database applications today using Oracle Application Express (APEX).  Oracle APEX is a low code, high productivity app dev tool which combines rich declarative UI components with SQL data access.  With the new 18.1 release, Oracle APEX can now integrate data from REST services with data from SQL queries.  This new functionality is eagerly awaited by the APEX developer community", said Andy Mendelsohn, Executive Vice President of Database Server Technologies at Oracle Corporation.   Some of the major improvements to Oracle Application Express 18.1 include: Application Features It has always been easy to add components to an APEX application - a chart, a form, a report.  But in APEX 18.1, you now have the ability to add higher-level application features to your app, including access control, feedback, activity reporting, email reporting, dynamic user interface selection, and more.  In addition to the existing reporting and data visualization components, you can now create an application with a "cards" report interface, a dashboard, and a timeline report.  The result?  An easily-created, powerful and rich application, all without writing a single line of code. REST Enabled SQL Support Oracle REST Data Services (ORDS) REST-Enabled SQL Services enable the execution of SQL in remote Oracle Databases, over HTTP and REST.  You can POST SQL statements to the service, and the service then runs the SQL statements against the remote Oracle database and returns the result to the client in JSON format (a short curl sketch appears at the end of this post).  In APEX 18.1, you can build charts, reports, calendars, trees and even invoke processes against Oracle REST Data Services (ORDS)-provided REST Enabled SQL Services.  No longer is a database link necessary to include data from remote database objects in your APEX application - it can all be done seamlessly via REST Enabled SQL. Web Source Modules APEX now offers the ability to declaratively access data services from a variety of REST endpoints, including ordinary REST data feeds, REST Services from Oracle REST Data Services, and Oracle Cloud Applications REST Services.  In addition to supporting smart caching rules for remote REST data, APEX also offers the unique ability to directly manipulate the results of REST data sources using industry standard SQL. REST Workshop APEX includes a completely rearchitected REST Workshop, to assist in the creation of REST Services against your Oracle database objects.  The REST definitions are managed in a single repository, and the same definitions can be edited via the APEX REST Workshop, SQL Developer or via documented APIs.  
Users can exploit the data management skills they possess, such as writing SQL and PL/SQL, to define RESTful API services for their database.   The new REST Workshop also includes the ability to generate Swagger documentation against your REST definitions, all with the click of a button. Application Builder Improvements In Oracle Application Express 18.1, wizards have been streamlined with smarter defaults and fewer steps, enabling developers to create components quicker than ever before.  There have also been a number of usability enhancements to Page Designer, including greater use of color and graphics on page elements, and "Sticky Filter", which is used to maintain a specific filter in the property editor.  These features are designed to enhance the overall developer experience and improve development productivity.  APEX Spotlight Search provides quick navigation and a unified search experience across the entire APEX interface. Social Authentication APEX 18.1 introduces a new native authentication scheme, Social Sign-In.  Developers can now easily create APEX applications which can use Oracle Identity Cloud Service, Google, Facebook, generic OpenID Connect and generic OAuth2 as the authentication method, all with no coding. Charts The data visualization engine of Oracle Application Express is powered by Oracle JET (JavaScript Extension Toolkit), a modular open source toolkit based on modern JavaScript, CSS3 and HTML5 design and development principles.  The charts in APEX are fully HTML5 capable and work on any modern browser, regardless of platform or screen size.  These charts provide numerous ways to visualize a data set, including bar, line, area, range, combination, scatter, bubble, polar, radar, pie, funnel, and stock charts.  APEX 18.1 features an upgraded Oracle JET 4.2 engine with updated charts and APIs.  There are also new chart types, including Gantt, Box-Plot and Pyramid, and better support for multi-series, sparse data sets. Mobile UI APEX 18.1 introduces many new UI components to assist in the creation of mobile applications.  Three new component types, ListView, Column Toggle and Reflow Report, are now components which can be used natively with the Universal Theme and are commonly used in mobile applications.  Additional enhancements have been made to the APEX Universal Theme which are mobile-focused, namely, mobile page headers and footers which will remain consistently displayed on mobile devices, and floating item label templates, which optimize the information presented on a mobile screen.  Lastly, APEX 18.1 also includes declarative support for touch-based dynamic actions, tap and double tap, press, swipe, and pan, supporting the creation of rich and functional mobile applications. Font APEX Font APEX is a collection of over 1,000 high-quality icons, many specifically created for use in business applications.  Font APEX in APEX 18.1 includes a new set of high-resolution 32 x 32 icons which include much greater detail; the correctly-sized font will automatically be selected for you, based upon where it is used in your APEX application. Accessibility APEX 18.1 includes a collection of tests in the APEX Advisor which can be used to identify common accessibility issues in an APEX application, including missing headers and titles, and more. This release also deprecates the accessibility modes, as a separate mode is no longer necessary to be accessible. 
Upgrading If you're an existing Oracle APEX customer, upgrading to APEX 18.1 is as simple as installing the latest version.  The APEX engine will automatically be upgraded and your existing applications will look and run exactly as they did in the earlier versions of APEX.     "We believe that APEX-based PaaS solutions provide a complete platform for extending Oracle’s ERP Cloud. APEX 18.1 introduces two new features that make it a landmark release for our customers. REST Service Consumption gives us the ability to build APEX reports from REST services as if the data were in the local database. This makes embedding data from a REST service directly into an ERP Cloud page much simpler. REST enabled SQL allows us to incorporate data from any Cloud or on-premise Oracle database into our Applications. We can’t wait to introduce APEX 18.1 to our customers!", said Jon Dixon, co-founder of JMJ Cloud.   Additional Information Application Express (APEX) is the low code rapid app dev platform which can run in any Oracle Database and is included with every Oracle Database Cloud Service.  APEX, combined with the Oracle Database, provides a fully integrated environment to build, deploy, maintain and monitor data-driven business applications that look great on mobile and desktop devices.  To learn more about Oracle Application Express, visit apex.oracle.com.  To learn more about Oracle Database Cloud, visit cloud.oracle.com/database. 
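To illustrate the REST Enabled SQL feature mentioned above, here is a minimal curl sketch of posting a SQL statement to an ORDS REST Enabled SQL endpoint. The host name, schema alias (demo) and credentials are placeholders, and the exact endpoint path and authentication options should be checked against your own ORDS configuration:

# POST a SQL statement to a REST Enabled SQL endpoint; the result comes back as JSON
curl -i -X POST \
  -u demo_user:demo_password \
  -H "Content-Type: application/sql" \
  --data-binary "select table_name from user_tables" \
  https://example.com/ords/demo/_/sql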


DevOps

Oracle Cloud Infrastructure CLI on Developer Cloud

With our May 2018 release of Oracle Developer Cloud, we have integrated the Oracle Cloud Infrastructure command line interface (referred to as OCIcli from here on) into the build pipeline in Developer Cloud. This blog will help you understand how you can configure and execute OCIcli commands as part of a build job in Developer Cloud. Configuring the Build VM Template for OCIcli You will have to create a build VM with the OCIcli software bundle to be able to execute builds with OCIcli commands. Click on the user drop down at the top right of the page. Select "Organization" from the menu. Click on the VM Templates tab and then on the "New Template" button. Give a template name of your choice and select the platform as "Oracle Linux 7". Then click the Create button. On creation of the template, click on the "Configure Software" button. Select OCIcli from the list of software bundles available for configuration and click on the + sign to add it to the template. You will also have to add the Python3.5 software bundle, which is a dependency for OCIcli. Then click on "Done" to complete the software configuration. Click on the Virtual Machines tab, then click on the "+New VM" button, enter the number of VMs you want to create, and select the VM Template you just created, which would be "OCIcli" for our blog. Build Job Configuration Configure the Tenancy OCID as a build parameter using a String Parameter and give it a name of your choice. I have named it "T" and provided a default value for it, as shown in the screenshot below. In the Builders tab, select an OCIcli Builder and a Unix Shell builder, in that sequence, from the Add Builder drop down. On adding the OCIcli Builder, you will see the form as below. For the OCIcli Builder, you can get the parameters from the OCI console. The screenshots below show where to get each of these form values in the OCI console. The red boxes highlight where you can get the Tenancy OCID and the region for the "Tenancy" and "Region" fields respectively in the OCIcli builder form. For the "User OCID" and "Fingerprint", you need to go to User Settings by clicking the username drop down at the top right of the OCI console. Please refer to the screenshot below. Please refer to the links below to understand the process of generating the private key and configuring the public key for the user in the OCI console. https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3 In the Unix Shell Builder you can try out the command below: oci iam compartment list -c $T This command will list all the compartments in the tenancy whose OCID is given by the variable 'T' that we configured in the Build Parameters tab as a String Parameter.   After the command executes, you can view the output in the console log, as shown below. There are tons of other OCIcli commands that you can run as part of the build pipeline; please refer to this link for the full list (a couple of additional examples are sketched at the end of this post). Happy Coding! **The views expressed in this post are my own and do not necessarily reflect the views of Oracle
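As an addendum, here are a couple more OCIcli commands of the kind you could drop into the Unix Shell builder. This is only a sketch: $T is the Tenancy OCID build parameter configured above, and the compartment OCID is a placeholder you would replace with your own value.

# print the Object Storage namespace for the tenancy
oci os ns get

# list only the compartment names in the tenancy
oci iam compartment list -c $T --query 'data[*].name'

# list the compute instances in a particular compartment (placeholder OCID)
oci compute instance list --compartment-id ocid1.compartment.oc1..aaaa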


DevOps

Oracle Developer Cloud - New Continuous Integration Engine Deep Dive

We introduced our new Build Engine in Oracle Developer Cloud in our April release. This new build engine comes with the capability to define build pipelines visually. Read more about it in my previous blog. In this blog we will delve deeper into some of the functionalities of the Build Pipeline feature of the new CI Engine in Oracle Developer Cloud. Auto Start Auto Start is an option given to the user while creating a build pipeline on Oracle Developer Cloud Service. The screenshot below shows the dialog to create a new Pipeline; it has a checkbox which, when checked, ensures that the pipeline auto starts whenever one of the build jobs in the pipeline is executed externally, triggering the execution of the rest of the build jobs in the pipeline. The next screenshot shows the pipeline for a NodeJS application created in Oracle Developer Cloud Pipelines. The build jobs used in the pipeline are build-microservice, test-microservice and loadtest-microservice. In parallel to the microservice build sequence we have WiremockInstall and WiremockConfigure. Scenarios When Auto Start is enabled for the Pipeline: Scenario 1: If we run the build-microservice build job externally, it will lead to the execution of the test-microservice and loadtest-microservice build jobs, in that order. Note that this does not trigger the execution of the WiremockInstall or WiremockConfigure build jobs, as they are part of a separate sequence. Please refer to the screenshot below, which shows the executed build jobs in green. Scenario 2: If we run the test-microservice build job externally, it will lead to the execution of the loadtest-microservice build job only. Please refer to the screenshot below, which shows the executed build jobs in green. Scenario 3: If we run the loadtest-microservice build job externally, no other build job in the pipeline is executed, across both build sequences. Exclusive Build This enables users to disallow the pipeline's build jobs from being built externally while the build pipeline itself is executing. It is an option given to the user while creating a build pipeline on Oracle Developer Cloud Service. The screenshot below shows the dialog to create a new Pipeline, with a checkbox which needs to be checked to ensure that the build jobs in the pipeline cannot be built in parallel to the pipeline execution. When you run the pipeline you will see the build jobs queued for execution in the Build History. In this case you would see two build jobs queued: one is build-microservice and the other is WiremockInstall, as they start the two parallel sequences of the same pipeline. Now if you try to run any of the build jobs in the pipeline, for example test-microservice, you will be given an error message, as shown in the screenshot below.   Pipeline Instances: If you click the Build Pipeline name link in the Pipelines tab you will be able to see the pipeline instances. A pipeline instance represents a single execution of the pipeline.  The screenshot below shows the pipeline instances with the timestamp of when each was executed. It shows whether the pipeline was Auto Started (hover on the status icon of the pipeline instance) due to an external execution of a build job, or shows the success status if all the build jobs of the pipeline were built successfully. It also shows, in green, the build jobs that executed successfully for that particular pipeline instance. 
The build jobs that did not get executed have a white background.  You also get an option to cancel the pipeline while it is executing, and you may choose to delete the instance after the pipeline has run.   Conditional Build: The visual build pipeline editor in Oracle Developer Cloud has a feature to support conditional builds. You will have to double click the link connecting two build jobs and select one of the conditions given below: Successful: To proceed to the next build job in the sequence if the previous one was a success. Failed: To proceed to the next build job in the sequence if the previous one failed. Test Failed: To proceed to the next build job in the sequence if the tests failed in the previous build job in the pipeline.   Fork and Join: Scenario 1: Fork In this scenario, suppose three other build jobs depend on a build job like build-microservice: "DockerBuild", which builds a deployable Docker image for the code; "terraformBuild", which provisions the instance on Oracle Cloud Infrastructure and deploys the code artifact; and "ArtifactoryUpload", which uploads the generated artifact to Artifactory. In that case you will be able to fork the build jobs as shown below.   Scenario 2: Join If you have a build job test-microservice which depends on two other build jobs, build-microservice, which builds and deploys the application, and WiremockConfigure, which configures the service stub, then you need to create a join in the pipeline as shown in the screenshot below.   You can refer to the Build Pipeline documentation here. Happy Coding!  **The views expressed in this post are my own and do not necessarily reflect the views of Oracle


Community

Pizza, Beer, and Dev Expertise at Your Local Meet-up

Big developer conferences are great places to learn about new trends and technologies, attend technical sessions, and connect with colleagues. But by virtue of their size, their typical location in destination cities, and multi-day schedules, they can require a lot of planning, expense, and time away from work. Meet-ups offer a fantastic alternative. They're easily accessible local events, generally lasting a couple of hours. Meet-ups offer a more human scale and are far less crowded than big conferences, with a far more casual, informal atmosphere that can be much more conducive to learning through Q&A and hands-on activities. One big meet-up advantage is that by virtue of their smaller scale they can be scheduled more frequently. For example, while Oracle ACE Associate Jon-Petter Hjulstad and his colleagues attend the annual Oracle User Group Norway (OUGN) Conference, they wanted to get together more often, three or four times a year. The result is a series of OUGN Integration meet-ups "where we can meet people who work on the same things." As of this podcast two meet-ups have already taken place, with a third scheduled for the end of May. Luis Weir, CTO at Capgemini in the UK and an Oracle ACE Director and Developer Champion, felt a similar motivation. "There's so many events going on and there's so many places where developers can go," Luis says. But sometimes developers want a more relaxed, informal, more approachable atmosphere in which to exchange knowledge. Working with his colleague Phil Wilkins, senior consultant at Capgemini and an Oracle ACE, Luis set out to organize a series of meet-ups that offered more "cool." Phil's goal in the effort was to organize smaller events that were "a little less formal, and a bit more convenient." Bigger, longer events are more difficult to attend because they require more planning on the part of attendees. "It can take quite a bit of effort to organize your day if you're going to be out for a whole day to attend a user group special interest group event," Phil says. But local events scheduled in the evening require much less planning in order to attend. "It's great! You can get out and attend these things and you get to talk to people just as much as you would at a day-time event." For Oracle ACE Ruben Rodriguez Santiago, a Java, ADF, and cloud solution specialist with Avanttic in Spain, the need for meet-ups arose out of a dearth of events focused on Oracle technologies. And those that were available were limited to database and SaaS. "So for me this was a way to get moving and create events for developers," Ruben says. What steps did these meet-up organizers take? What insight have they gained along the way as they continue to organize and schedule meet-up events? You'll learn all that and more in this podcast. Listen!   The Panelists Jon-Petter Hjulstad Department Manager, SYSCO AS    Ruben Rodriguez Santiago Java, ADF, and Cloud Solution Specialist, Avanttic    Luis Weir CTO, Oracle DU, Capgemini    Phil Wilkins Senior Consultant, Capgemini   Additional Resources Oracle User Group Norway (OUGN) Meet-Up Page Customer Experience OUGN SIG Oracle Developer Meet-Up London Slides from Oracle Developer Meetup March 2018 Bangalore Java User Group Meet-Up PaaS Meet-up List Tech Category on Meetup.com Coming Soon What Developers Need to Know About API Monetization Best Practices for API Development Subscribe Never miss an episode! The Oracle Developer Community Podcast is available via: iTunes Podbean Feedburner  


DevOps

Build Oracle Cloud Infrastructure custom Images with Packer on Oracle Developer Cloud

In the April release of Oracle Developer Cloud Service we started supporting Docker and HashiCorp Terraform builds as part of the CI & CD pipeline.  HashiCorp Terraform helps you provision Oracle Cloud Infrastructure instances as part of the build pipeline. But what if you want to provision the instance using a custom image instead of the base image? You need a tool like HashiCorp Packer to script your way into building images. So with Docker build support we can now build Packer-based images as part of the build pipeline in Oracle Developer Cloud. This blog will help you to understand how you can use Docker and Packer together on Developer Cloud to create custom images on Oracle Cloud Infrastructure. About HashiCorp Packer HashiCorp Packer automates the creation of any type of machine image. It embraces modern configuration management by encouraging the use of automated scripts to install and configure the software within your Packer-made images. Packer brings machine images into the modern age, unlocking untapped potential and opening new opportunities. You can read more about HashiCorp Packer on https://www.packer.io/ You can find the details of HashiCorp Packer support for Oracle Cloud Infrastructure here. Tools and Platforms Used Below are the tools and cloud platforms I use for this blog: Oracle Developer Cloud Service: The DevOps platform to build your CI & CD pipeline. Oracle Cloud Infrastructure: The IaaS platform where we will build the image which can be used for provisioning. Packer: The tool for creating custom images on the cloud. We will be doing this for Oracle Cloud Infrastructure, or OCI as it is popularly known; for the rest of this blog I will mostly use the term OCI. Packer Scripts To execute the Packer scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload 3 files to the Git repository. To upload the scripts to the Git repository, you will need to first install the Git CLI on your machine and then use the commands below to upload the code. I was using a Windows machine for the script development, so below is what you need to do on the command line: Pushing Scripts to Git Repository on Oracle Developer Cloud Command_prompt:> cd <path to the Packer script folder> Command_prompt:>git init Command_prompt:>git add --all Command_prompt:>git commit -m "<some commit message>" Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL> Command_prompt:>git push origin master Note: Ensure that the Git repository is created and you have the HTTPS URL for it. Below is the folder structure description for the scripts that I have in the Git Repository on Oracle Developer Cloud Service. Description of the files: oci_api_key.pem – This file is required for OCI access. It contains the private key used for OCI API access. Note: Please refer to the links below for details on the OCI keys. The corresponding public key will also need to be configured for your user in the OCI console. https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3   build.json: This is the only configuration file that you need for Packer. This JSON file contains all the definitions needed for Packer to create an image on Oracle Cloud Infrastructure. I have truncated the ocids and fingerprint for security reasons.   
{ "builders": [ { "user_ocid":"ocid1.user.oc1..aaaaaaaa", "tenancy_ocid": "ocid1.tenancy.oc1..aaaaaaaay", "fingerprint":"29:b1:8b:e4:7a:92:ae", "key_file":"oci_api_key.pem", "availability_domain": "PILZ:PHX-AD-1", "region": "us-phoenix-1", "base_image_ocid": "ocid1.image.oc1.phx.aaaaaaaal", "compartment_ocid": "ocid1.compartment.oc1..aaaaaaaahd", "image_name": "RedisOCI", "shape": "VM.Standard1.1", "ssh_username": "ubuntu", "ssh_password": "welcome1", "subnet_ocid": "ocid1.subnet.oc1.phx.aaaaaaaa", "type": "oracle-oci" } ], "provisioners": [ { "type": "shell", "inline": [ "sleep 30", "sudo apt-get update", "sudo apt-get install -y redis-server" ] } ] } You can give values of your choice for image_name and it is recommended but optional to provide ssh_password. While I have kept ssh_username as “Ubuntu” as my base image OS was Ubuntu. Leave the type and shape as is. The base_image ocid would depend on the region. Different region have different ocid for the base images. Please refer link below to find the ocid for the image as per region. https://docs.us-phoenix-1.oraclecloud.com/images/ Now login into your OCI console to retrieve some of the details needed for the build.json definitions. Below screenshot shows where you can retrieve your tenancy_ocid from. Below screenshot of OCI console shows where you will find the compartment_ocid. Below screenshot of OCI console shows where you will find the user_ocid. You can retrieve the region and availability_domain as shown below. Now select the compartment, which is “packerTest” for this blog, then click on the networking tab and then the VCN you have created. Here you would see a subnet each for the availability_domains. Copy the ocid for the subnet with respect to the availability_domain you have chosen. Dockerfile: This will install Packer in Docker and run the Packer command to create a custom image on OCI. It pulls the packer:full image, then adds the build.json and oci_api_key.pem files the Docker image and then execute the packer build command.   FROM hashicorp/packer:full ADD build.json ./ ADD oci_api_key.pem ./ RUN packer build build.json   Configuring the Build VM With our latest release, you will have to create a build VM with the Docker software bundle, to be able to execute the build for Packer, as we are using Docker to install and run Packer. Click on the user drop down on the right hand top of the page. Select “Organization” from the menu. Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. And then click the Create button. On creation of the template click on “Configure Software” button. Select Docker from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration. Click on the Virtual Machines tab, then click on “+New VM” button and enter the number of VM you want to create and select the VM Template you just created, which would be “DockerTemplate” for our blog.   Build Job Configuration Click on the “+ New Job” button and in the dialog which pops up, give the build job a name of your choice and then select the build template (DockerTemplate) from the dropdown, that we had created earlier in the blog.  As part of the build configuration, add Git from the “Add Source Control” dropdown. And now select the repository and the branch that you have selected. 
You may select the checkbox to configure an automatic build trigger on SCM commits. In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You just need to give the image name in the form that gets added and you are done with the build job configuration. Now click on Save to save the build job configuration. On execution of the build job, the image gets created in OCI in the defined compartment, as shown in the screenshot below. So now you can easily automate custom image creation on Oracle Cloud Infrastructure using Packer as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud. Happy Packing!  **The views expressed in this post are my own and do not necessarily reflect the views of Oracle
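As an addendum: if you want to sanity-check the Packer template locally before pushing it to the Git repository, the following is a minimal sketch. It assumes Docker is installed on your machine, you are in the folder containing build.json, oci_api_key.pem and the Dockerfile, and that the hashicorp/packer image's default entrypoint is the packer binary:

# validate the Packer template syntax using the same image the build will use
docker run --rm -v "$PWD":/work -w /work hashicorp/packer:full validate build.json

# run the same image build that the Developer Cloud build job will perform
docker build -t packer-oci-test .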


DevOps

Infrastructure as Code using Terraform on Oracle Developer Cloud

With our April release, we have started supporting HashiCorp Terraform builds in Oracle Developer Cloud. This blog will help you understand how you can use HashiCorp Terraform in the build pipeline to provision Oracle Cloud Infrastructure as part of your build automation.    Tools and Platforms Used Below are the tools and cloud platforms I use for this blog: Oracle Developer Cloud Service: The DevOps platform to build your CI & CD pipeline. Oracle Cloud Infrastructure: The IaaS platform where we will provision the infrastructure for our usage. Terraform: The tool for provisioning the infrastructure on the cloud. We will be doing this for Oracle Cloud Infrastructure, or OCI as it is popularly known; for the rest of this blog I will use the term OCI.   About HashiCorp Terraform HashiCorp Terraform is a tool which helps you to write, plan and create your infrastructure safely and efficiently. It can manage existing and popular service providers like Oracle, as well as custom in-house solutions. Configuration files describe to HashiCorp Terraform the components needed to run a single application or your entire datacenter. It helps you to build, manage and version your code. To know more about HashiCorp Terraform go to: https://www.terraform.io/   Terraform Scripts To execute the Terraform scripts on Oracle Developer Cloud as part of the build pipeline, you need to upload all the scripts to the Git repository. To upload the scripts to the Git repository, you will need to first install the Git CLI on your machine and then use the commands below to upload the code. I was using a Windows machine for the script development, so below is what you need to do on the command line: Pushing Scripts to Git Repository on Oracle Developer Cloud Command_prompt:> cd <path to the Terraform script folder> Command_prompt:>git init Command_prompt:>git add --all Command_prompt:>git commit -m "<some commit message>" Command_prompt:>git remote add origin <Developer cloud Git repository HTTPS URL> Command_prompt:>git push origin master Below is the folder structure description for the Terraform scripts that I have in the Git repository on Oracle Developer Cloud Service. The Terraform scripts are inside the exampleTerraform folder, and oci_api_key_public.pem and oci_api_key.pem are the OCI keys. In the exampleTerraform folder we have all the files with the "tf" extension along with the env-vars file. You will see the definition of each file later in the blog. In the "userdata" folder you will have the bootstrap shell script which will be executed when the VM first boots up on OCI. Below is the description of each file in the folder and a snippet of its contents: env-vars: This is the most important file, where we set all the environment variables which will be used by the Terraform scripts for accessing and provisioning the OCI instance.
### Authentication details
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaa"
export TF_VAR_user_ocid="ocid1.user.oc1..aaaaaaa"
export TF_VAR_fingerprint="29:b1:8b:e4:7a:92:ae:d5"
export TF_VAR_private_key_path="/home/builder/.terraform.d/oci_api_key.pem"
### Region
export TF_VAR_region="us-phoenix-1"
### Compartment ocid
export TF_VAR_compartment_ocid="ocid1.tenancy.oc1..aaaa"
### Public/private keys used on the instance
export TF_VAR_ssh_public_key=$(cat exampleTerraform/id_rsa.pub)
export TF_VAR_ssh_private_key=$(cat exampleTerraform/id_rsa)
Note: all the ocids above are truncated for security and brevity. 
The screenshots below of the OCI console show where to locate these OCIDs: tenancy_ocid and region, compartment_ocid, and user_ocid. For the keys, point to the path of the RSA files for the SSH connection, which are in the Git repository, and to the OCI API key private pem file, also in the Git repository. variables.tf: In this file we initialize the Terraform variables along with configuring the instance image OCID. This is the OCID of a base image available out of the box on OCI; these OCIDs vary based on the region where your OCI instance will be provisioned. Use this link to learn more about the OCI base images. Here we also configure the path for the bootstrap file, which resides in the userdata folder and will be executed on boot of the OCI machine.
variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}
variable "compartment_ocid" {}
variable "ssh_public_key" {}
variable "ssh_private_key" {}
# Choose an Availability Domain
variable "AD" {
  default = "1"
}
variable "InstanceShape" {
  default = "VM.Standard1.2"
}
variable "InstanceImageOCID" {
  type = "map"
  default = {
    // Oracle-provided image "Oracle-Linux-7.4-2017.12.18-0"
    // See https://docs.us-phoenix-1.oraclecloud.com/Content/Resources/Assets/OracleProvidedImageOCIDs.pdf
    us-phoenix-1 = "ocid1.image.oc1.phx.aaaaaaaa3av7orpsxid6zdpdbreagknmalnt4jge4ixi25cwxx324v6bxt5q"
    //us-ashburn-1 = "ocid1.image.oc1.iad.aaaaaaaaxrqeombwty6jyqgk3fraczdd63bv66xgfsqka4ktr7c57awr3p5a"
    //eu-frankfurt-1 = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaayxmzu6n5hsntq4wlffpb4h6qh6z3uskpbm5v3v4egqlqvwicfbyq"
  }
}
variable "DBSize" {
  default = "50" // size in GBs
}
variable "BootStrapFile" {
  default = "./userdata/bootstrap"
}
compute.tf: The display name, compartment OCID, image to be used, the shape and the network parameters need to be configured here, as shown in the code snippet below.
resource "oci_core_instance" "TFInstance" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  compartment_id = "${var.compartment_ocid}"
  display_name = "TFInstance"
  image = "${var.InstanceImageOCID[var.region]}"
  shape = "${var.InstanceShape}"
  create_vnic_details {
    subnet_id = "${oci_core_subnet.ExampleSubnet.id}"
    display_name = "primaryvnic"
    assign_public_ip = true
    hostname_label = "tfexampleinstance"
  }
  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
  }
  timeouts {
    create = "60m"
  }
}
network.tf: Here we have the Terraform script for creating the VCN, subnet, internet gateway and route table. These are vital for the creation and access of the compute instance that we provision. 
resource "oci_core_virtual_network" "ExampleVCN" { cidr_block = "10.1.0.0/16" compartment_id = "${var.compartment_ocid}" display_name = "TFExampleVCN" dns_label = "tfexamplevcn" } resource "oci_core_subnet" "ExampleSubnet" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" cidr_block = "10.1.20.0/24" display_name = "TFExampleSubnet" dns_label = "tfexamplesubnet" security_list_ids = ["${oci_core_virtual_network.ExampleVCN.default_security_list_id}"] compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" route_table_id = "${oci_core_route_table.ExampleRT.id}" dhcp_options_id = "${oci_core_virtual_network.ExampleVCN.default_dhcp_options_id}" } resource "oci_core_internet_gateway" "ExampleIG" { compartment_id = "${var.compartment_ocid}" display_name = "TFExampleIG" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" } resource "oci_core_route_table" "ExampleRT" { compartment_id = "${var.compartment_ocid}" vcn_id = "${oci_core_virtual_network.ExampleVCN.id}" display_name = "TFExampleRouteTable" route_rules { cidr_block = "0.0.0.0/0" network_entity_id = "${oci_core_internet_gateway.ExampleIG.id}" } } block.tf: The below script defines the boot volumes for the compute instance getting provisioned. resource "oci_core_volume" "TFBlock0" { availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" compartment_id = "${var.compartment_ocid}" display_name = "TFBlock0" size_in_gbs = "${var.DBSize}" } resource "oci_core_volume_attachment" "TFBlock0Attach" { attachment_type = "iscsi" compartment_id = "${var.compartment_ocid}" instance_id = "${oci_core_instance.TFInstance.id}" volume_id = "${oci_core_volume.TFBlock0.id}" } provider.tf: In the provider script the OCI details are set.   provider "oci" { tenancy_ocid = "${var.tenancy_ocid}" user_ocid = "${var.user_ocid}" fingerprint = "${var.fingerprint}" private_key_path = "${var.private_key_path}" region = "${var.region}" disable_auto_retries = "true" } datasources.tf: Defines the data sources used in the configuration # Gets a list of Availability Domains data "oci_identity_availability_domains" "ADs" { compartment_id = "${var.tenancy_ocid}" } # Gets a list of vNIC attachments on the instance data "oci_core_vnic_attachments" "InstanceVnics" { compartment_id = "${var.compartment_ocid}" availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}" instance_id = "${oci_core_instance.TFInstance.id}" } # Gets the OCID of the first (default) vNIC data "oci_core_vnic" "InstanceVnic" { vnic_id = "${lookup(data.oci_core_vnic_attachments.InstanceVnics.vnic_attachments[0],"vnic_id")}" } outputs.tf: It defines the output of the configuration, which is public and private IP of the provisioned instance. # Output the private and public IPs of the instance output "InstancePrivateIP" { value = ["${data.oci_core_vnic.InstanceVnic.private_ip_address}"] } output "InstancePublicIP" { value = ["${data.oci_core_vnic.InstanceVnic.public_ip_address}"] } remote-exec.tf: Uses a null_resource, remote-exec and depends on to execute a command on the instance. 
resource "null_resource" "remote-exec" { depends_on = ["oci_core_instance.TFInstance","oci_core_volume_attachment.TFBlock0Attach"] provisioner "remote-exec" { connection { agent = false timeout = "30m" host = "${data.oci_core_vnic.InstanceVnic.public_ip_address}" user = "ubuntu" private_key = "${var.ssh_private_key}" } inline = [ "touch ~/IMadeAFile.Right.Here", "sudo iscsiadm -m node -o new -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port}", "sudo iscsiadm -m node -o update -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -n node.startup -v automatic", "echo sudo iscsiadm -m node -T ${oci_core_volume_attachment.TFBlock0Attach.iqn} -p ${oci_core_volume_attachment.TFBlock0Attach.ipv4}:${oci_core_volume_attachment.TFBlock0Attach.port} -l >> ~/.bashrc" ] } } Oracle Infrastructure Cloud - Configuration The major configuration that need to be done on OCI is for the security for Terraform to be able work and provision an instance. Click the username on top of the Oracle Cloud Infrastructure console, you will see a drop down, select User Settings from it. Now click on the “Add Public Key” button, to get the dialog where you can copy paste the oci_api_key.pem(the key) in it and click on the Add button. Note: Please refer to the links below for details on OCI key. https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3   Configuring the Build VM Click on the user drop down on the right hand top of the page. Select “Organization” from the menu. Click on the VM Templates tab and then on the “New Template” button. Give a template name of your choice and select the platform as “Oracle Linux 7”. On creation of the template click on “Configure Software” button. Select Terraform from the list of software bundles available for configuration and click on the + sign to add it to the template. Then click on “Done” to complete the Software configuration. Click on the Virtual Machines tab, then click on “+New VM” button and enter the number of VM you want to create and select the VM Template you just created, which would be “terraformTemplate” for our blog. Build Job Configuration As part of the build configuration, add Git from the “Add Source Control” dropdown. And now select the repository and the branch that you have selected. You may select the checkbox to configure automatic build trigger on SCM commits. Select the Unix Shell Builder form the Add Builder dropdown. Then add the script as below. The below script would first configure the environment variables using env-vars. Then copy the oci_api_key.pem and oci_api_key_public.pem to the specified directory. Then execute the Terraform commands to provision the OCI instance. The important commands are terraform init, terraform plan and terraform apply. terraform init – The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. It is safe to run this command multiple times. terraform plan – The terraform plan command is used to create an execution plan.  
After execution, the script prints the IP addresses of the provisioned instance as output and then tries to make an SSH connection to the machine using the RSA keys supplied in the exampleTerraform folder. Configure the Artifact Archiver to archive the terraform.tfstate file which will get generated as part of the build execution. You may set the compression to GZIP or NONE. Post Build Job Execution In the build log you will be able to see the private and public IP addresses of the instance provisioned by the Terraform scripts, and the attempt to make an SSH connection to it. If everything goes fine, the build job should complete successfully.  Now you can go to the Oracle Cloud Infrastructure console to see that the instance has already been created for you, along with the network and boot volumes as defined in the Terraform scripts.   So now you can easily automate provisioning of Oracle Cloud Infrastructure using Terraform as part of your continuous integration & continuous delivery pipeline on Oracle Developer Cloud. Happy Coding!  **The views expressed in this post are my own and do not necessarily reflect the views of Oracle


DevOps

Developer Cloud Service May Release Adds K8S, OCI, Code Editing and More

Just a month after the recent release of Oracle Developer Cloud Service - which added support for pipelines, Docker, and Terraform - we are happy to announce another update to the service that adds even more options to help you extend your DevOps and CI/CD processes to support additional use cases. Here are some highlights of the new version: Extended build server software You can now create build jobs and pipelines that leverage: Kubernetes - use the kubectl command line to manage your Docker containers (a short sketch appears at the end of this post) OCI Command line - to automate provisioning and configuration of Oracle Compute  Java 9 - for your latest Java project deployments Oracle Development Tools - Oracle Forms and Oracle JDeveloper 12.2.3 are now available to automate deployment of Forms and ADF apps   SSH Connection in Build You can now define an SSH connection as part of your build configuration to allow you to securely connect and execute shell scripts on Oracle Cloud Services. In Browser Code Editing and Versioning  A new "pencil" icon lets you edit code in your private Git repositories hosted in Developer Cloud Service directly in your browser. Once you have edited the code, you can commit the changes to your branch directly, providing a commit message. PagerDuty Webhook Continuing our principle of keeping the environment open, we added new webhook support to allow you to send events to the popular PagerDuty solution. Increased Reusability We are making it easier to replicate things that already work for your team. For example, you can now create a new project based on an existing project you exported. You can copy an agile board over to a new one. If you created a useful issue search - you can share it with others in your team. There are many other features that will improve your daily work; have a look at the What's New in DevCS document for more information. Happy development!
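To illustrate the new Kubernetes support, here is a sketch of the kind of Unix Shell build step you could now run. It assumes kubectl is already configured against your cluster and that a deployment.yaml exists in the workspace; the deployment name my-app is hypothetical:

# check the cluster is reachable from the build VM
kubectl get nodes

# apply the manifest stored in the Git repository
kubectl apply -f deployment.yaml

# wait for the rollout of the (hypothetical) my-app deployment to finish
kubectl rollout status deployment/my-app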


Developers

A New Oracle Autonomous Visual Builder Cloud Service - Visual and Coding Combined

We are happy to announce the availability of Oracle Autonomous Visual Builder Cloud Service (VBCS) - Oracle's visual low-code development platform for JavaScript-based applications with built-in autonomous capabilities. Over the past couple of years, the visual development approach of VBCS has made it a very attractive solution for citizen developers, who leveraged the no-code nature of the platform to build their custom applications. Many professional developers also expressed interest in the visual development experience they saw, but they were looking for additional capabilities. Specifically, developers were demanding an option to have direct access to the code that the visual tools created, so they could change it and enhance it with their own custom code to achieve richer behaviors. With the new VBCS version we are addressing these demands by adding direct access to manipulate code, while keeping the low-code characteristics of VBCS. Visual and Code Based Development Combined Just like in previous versions, constructing the UI is done through a visual WYSIWYG layout editor. Existing VBCS users will notice that they now have access to a much richer set of UI components in the component palette. In fact, they now have access to all of the components offered by Oracle JET (Oracle's open-source JavaScript Extension Toolkit). In addition, you can add more components to the palette using the Web Components standard based Oracle JET composite component architecture (CCA). The thing to note about the visual editor is the new "Code" button at the top right; clicking this button will give professional developers direct access to the HTML code that makes up the page layout.  They'll be happy to discover that the code is pure HTML/JavaScript/CSS - which will let them leverage their existing expertise to further enhance and customize it. Developers can directly manipulate that code through the smart code editor, leveraging features such as code insight, syntax highlighting, doc access, and reformatting directly in their browser. The visual development approach is not limited to page layouts. We extend it also to the way you can define business logic. Defining the flow of your logic is done through our new action flow editor, with a collection of operations that you can define in a declarative way, and the ability to invoke your specific JavaScript code for unique functionality. Now that developers have direct access to the code, we also added integration with Git, leveraging the private Git repositories provided through Oracle Developer Cloud Service (DevCS). Teams can now leverage the full set of Agile methodology capabilities of DevCS when working on VBCS applications, including issue tracking, version management, agile planning and code review processes. Mobile and Web Development Unified With the new version of VBCS we further integrated the development experience across both web browser-based and on-device mobile applications.  In the same project you can create both types of applications, leveraging the same development approach, application architecture, UI components, and access to custom business objects and external REST services. Once you are done developing your mobile application, we'll package it for you as an on-device mobile app that you install, test, and run on your devices - leveraging the native look and feel provided by Oracle JET for the various mobile platforms. 
Standard-Based Data Openness

With the new version you can now hook up VBCS to any REST data source with a few button clicks, leveraging a declarative approach to consuming external REST sources in your application. VBCS is able to parse standard Swagger-based service descriptors for easy consumption. Even if you don't have a detailed structure description for a service, the declarative dialog in VBCS makes it easy to define access to any service, including security settings, header and URL parameters, and more. VBCS is smart enough to parse the structure returned from the service and create variables that allow you to access the data in your UI with ease. Let's not forget that VBCS also lets you define your own custom reusable business services. VBCS will create the database objects to store the information in these objects, and will provide you with a powerful, secure set of REST services to allow you to access these objects from both your VBCS and external applications.

Visual Builder Cloud Service Goes Autonomous

Today's Visual Builder Cloud Service release also has built-in autonomous capabilities that automate and eliminate repetitive tasks so you can focus on app design and development. Configuring and provisioning your service is as easy as a single button click. All you need to do is tell us the name you want for your server, and with a click of a button everything is configured for you. You don't need to install and configure your underlying platform - the service automatically provisions a database, an app hosting server, and your full development platform for you. The new autonomous VBCS eliminates any manual tasks for the maintenance of your development and deployment platforms. Once your service is provisioned, we take care of things like patching, updates, and backups for you. Furthermore, autonomous VBCS automatically maintains your mobile app publishing infrastructure. You just click a button and we publish your mobile app as iOS or Android packages, and host your web app on our scalable backend services that host your data and your applications.

But Wait, There Is More

There are many other new features you'll find in the new version of Oracle Visual Builder Cloud Service. Whether you are a seasoned JavaScript expert looking to accelerate your delivery, a developer taking your first steps in the wild world of JavaScript development, or a citizen developer looking to build your business application - Visual Builder has something for you. So take it for a spin - we are sure you are going to enjoy the experience. For more information and to get your free trial, visit us at http://cloud.oracle.com/visual-builder


Community

Oracle Dev Moto Tour 2018

 "Four wheels move the body. Two wheels move the soul."   The 2018 Developers Motorcycle Tour will start their engines on May 8th, rolling through Japan and Europe to visit User Groups, Java Day Tokyo and Code events. Join Stephen Chin, Sebastian Daschner, and other community luminaries to catch up on the latest technologies and products, as well as bikes, food, Sumo, football or anything fun.    Streaming live from every location! Watch their sessions online at @OracleDevs and follow them for updates. For details about schedules, resources, videos, and more through May and June 2018, visit DevTours    Japan Tour: May 2018 In May, the dev tour motorcycle team will travel to various events, including the Java Day Tokyo conference.  Meet Akihiro Nishikawa, Andres Almiray, David Buck, Edson Yanaga, Fernando Badapoulis, Ixchel Ruiz, Kirk Pepperdine, Matthew Gilliard, Sebastian Daschner, and Stephen Chin.   May 8, 2018 Kumamoto Kumamoto JUG May 10, 2018 Fukuoka Fukuoka JUG May 11, 2018 Okayama Okayama JUG May 14, 2018 Osaka Osaka JUG May 15, 2018 Nagoya Nagoya JUG May 17, 2018 Tokyo Java Day Tokyo May 18, 2018 Tokyo JOnsen May 19, 2018 Tokyo JOnsen May 20, 2018 Tokyo JOnsen May 21, 2018 Sendai Sendai JUG May 23, 2018 Sapporo JavaDo May 26, 2018 Tokyo JJUG Event   The European Tour: June 2018 In June, the dev tour motorcycle team will travel to multiple European countries and cities to meet Java and Oracle developers. Depending on the city and the event, which will include the Code Berlin conference, you'll meet Fernando Badapoulis, Nikhil Nanivadekar, Sebastian Daschner, and Stephen Chin.   June 4, 2018 Zurich JUG Switzerland June 5, 2018 Freiburg JUG Freiburg June 6, 2018 Bodensee JUG Bodensee June 7, 2018 Stuttgart JUG Stuttgart June 11, 2018 Berlin JUG BB June 12, 2018 Berlin Oracle Code Berlin June 13, 2018 Hamburg JUG Hamburg June 14, 2018 Hannover JUG Hannover June 15, 2018 Münster JUG Münster June 16, 2018 Köln / Colone JUG Cologne June 17, 2018 Munich JUG Munich  

 "Four wheels move the body. Two wheels move the soul."   The 2018 Developers Motorcycle Tour will start their engines on May 8th, rolling through Japan and Europe to visit User Groups, Java Day Tokyo...

Containers, Microservices, APIs

Oracle Adds New Support for Open Serverless Standards to Fn Project and Key Kubernetes Features to Oracle Container Engine

Open serverless project Fn adds support for broader serverless standardization with CNCF CloudEvents, Serverless Framework support, and OpenCensus for tracing and metrics. Oracle Container Engine for Kubernetes tackles the toughest real-world governance, scale, and management challenges facing K8s users today.

Today at KubeCon + CloudNativeCon Europe 2018, Oracle announced new support for several open serverless standards in its open Fn Project and a set of critical new Oracle Container Engine for Kubernetes features addressing key real-world Kubernetes issues, including governance, security, networking, storage, scale, and manageability. Both the serverless and Kubernetes communities are at an important crossroads in their evolution, and to further its commitment to open serverless standards, Oracle announced that the Fn Project now supports the standards-based projects CloudEvents and the Serverless Framework. Both projects are intended to create interoperable and community-driven alternatives to today's proprietary serverless options.

Solving Real-World Kubernetes Challenges

The New Stack, in partnership with the Cloud Native Computing Foundation (CNCF), recently published a report analyzing the top challenges facing Kubernetes users today. The report found that infrastructure-related issues - specifically security, storage, and networking - had risen to the top, impacting larger companies the most. (Source: The New Stack)

In addition, when evaluating container orchestration, classic non-functional requirements came into play: scaling, manageability, agility, and security. Solving these types of issues will help the Kubernetes project move through the Gartner Hype Cycle "Trough of Disillusionment", up the "Slope of Enlightenment", and onto the promised land of the "Plateau of Productivity." (Source: The New Stack)

Addressing Real-World Kubernetes Challenges

To address these top challenges facing Kubernetes users today, Oracle Container Engine for Kubernetes is tightly integrated with the best-in-class governance, security, networking, and scale of Oracle Cloud Infrastructure (OCI). These are summarized below.

Governance, compliance, and auditing: Identity and Access Management (IAM) for Kubernetes enables DevOps teams not only to control who has access to Kubernetes resources, but also to set policies describing what type of access they have and to which specific resources. This is a crucial element in managing complex organizations: rules are applied to logical groups of users and resources, making it simple to define and administer policies.
- Governance: DevOps teams can set which users have access to which resources, compartments, tenancies, users, and groups for their Kubernetes clusters. Since different teams typically manage different resources through different stages of the development cycle - from development, test, and staging through production - role-based access control (RBAC) is crucial. Two levels of RBAC are provided: (1) at the OCI IaaS infrastructure resource level, defining who can, for example, spin up a cluster, scale it, and/or use it, and (2) at the Kubernetes application level, where fine-grained Kubernetes resource controls are provided.
- Compliance: Container Engine for Kubernetes will support the Payment Card Industry Data Security Standard (PCI DSS), the globally applicable security standard that customers use for a wide range of sensitive workloads, including the storage, processing, and transmission of cardholder data. DevOps teams will be able to run Kubernetes applications on Oracle's PCI-compliant Cloud Infrastructure services.
- Auditing (logging, monitoring): Cluster management audit events are integrated into the OCI Audit Service for consistent and unified collection and visibility.

Scale: Oracle Container Engine is a highly available managed Kubernetes service. The Kubernetes masters are highly available (across availability domains), managed, and secured. Worker clusters are self-healing, can span availability domains, and can be composed of node pools consisting of compute shapes from VMs to bare metal to GPUs.
- GPUs, bare metal, VMs: Oracle Container Engine offers the industry's first and broadest family of Kubernetes compute nodes, supporting everything from small, virtualized environments to very large, dedicated configurations. Users can scale up from basic web apps to high-performance compute models, with network block storage and local NVMe storage options.
- Predictable, high IOPS: Kubernetes node pools can use either VM or bare metal compute with predictable-IOPS block storage and dense I/O VMs. Local NVMe storage provides a range of compute shapes and capacities with high IOPS.
- Kubernetes on NVIDIA Tesla GPUs: Running Kubernetes clusters on bare metal GPUs gives container applications access to the highest performance possible. With no hypervisor overhead, DevOps teams have access to bare metal compute instances on Oracle Cloud Infrastructure with two NVIDIA Tesla P100 GPUs to run CUDA-based workloads, allowing for over 21 TFLOPS of single-precision performance per instance.

Networking: Oracle Container Engine is built on a state-of-the-art, non-blocking Clos network that is not over-subscribed and provides customers with a predictable, high-bandwidth, low-latency network.
- Load balancing: Load balancing is often one of the hardest features to configure and manage, so Oracle has integrated seamlessly with OCI load balancing to allow container-level load balancing. Kubernetes load balancing checks for incoming traffic on the load balancer's IP address and distributes it to a list of backend servers based on a load balancing policy and a health check policy. DevOps teams can define load balancing policies that tell the load balancer how to distribute incoming traffic to the backend servers.
- Virtual Cloud Network: Kubernetes user (worker) nodes are deployed inside a customer's own VCN (virtual cloud network), allowing for secure management of IP addresses, subnets, route tables, and gateways using the VCN.

Storage: Cracking the code on a simple way to manage Kubernetes storage continues to be a major concern for DevOps teams. Two new IaaS Kubernetes storage integrations designed for Oracle Cloud Infrastructure can help, unlocking OCI's industry-leading block storage performance (the highest IOPS per GB of any standard cloud provider offering), cost, and predictability:
- OCI Volume Provisioner: Provided as a Kubernetes deployment, the OCI Volume Provisioner enables dynamic provisioning of Block Volume storage resources for Kubernetes running on OCI. It leverages the OCI Flexvolume driver (see below) to bind storage resources to Kubernetes nodes.
- OCI Flexvolume Driver: This driver was developed to mount OCI block storage volumes to Kubernetes Pods using the flexvolume plugin interface.
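To make the Kubernetes-level RBAC and load-balancer integration described above a little more concrete, here is a minimal sketch using standard kubectl commands against a running cluster; the namespace, user, and deployment names are hypothetical, and the OCI-level IAM policies are managed separately in the OCI console or CLI.

# Kubernetes-level RBAC: give a (hypothetical) user read-only access to the "dev" namespace
kubectl create namespace dev
kubectl create rolebinding dev-view --clusterrole=view --user=alice@example.com --namespace=dev

# Container-level load balancing: exposing a deployment as a Service of type LoadBalancer
# provisions an OCI load balancer in front of the pods
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=LoadBalancer
kubectl get service hello   # EXTERNAL-IP shows the load balancer address once provisioned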
Simplified, Unified Management

- Bundled-in management: By bundling in commonly used Kubernetes utilities, Oracle Container Engine for Kubernetes makes for a familiar and seamless developer experience. This includes built-in support for Helm and Tiller (providing standard Kubernetes package management), the Kubernetes dashboard, and kube-dns.
- Running existing applications with Kubernetes: Kubernetes supports an ever-growing set of workloads that are not necessarily net-new greenfield apps. A Kubernetes Operator is "an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user." Oracle has open-sourced and will soon generally release an Oracle WebLogic Server Kubernetes Operator, which allows WebLogic users to manage WebLogic domains in a Kubernetes environment without forcing application rewrites, retesting, and additional process and cost. WebLogic 12.2.1.3 has also been certified on Kubernetes, and the WebLogic Monitoring Exporter, which exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus and displayed in Grafana, has been released and open sourced.

Fn Project

Open serverless initiatives are progressing within the CNCF, and the Fn Project is actively engaged in and supporting these emerging standards:
- CloudEvents: The Fn Project has announced support for the CloudEvents standard effort. CloudEvents seeks to standardize event data and simplify event declaration and delivery among different applications, platforms, and providers. Until now, developers have lacked a common way of describing serverless events. This not only severely affects the portability of serverless apps but is also a significant drain on developer productivity.
- Serverless Framework: Fn Project, an open source functions-as-a-service and workflow framework, has contributed a FaaS provider to the Serverless Framework to further its mission of multi-cloud and on-premises serverless computing. The new provider allows users of the Serverless Framework to easily build and deploy container-native functions to any Fn cluster while getting the unified developer experience they're accustomed to. For Fn's growing community, the integration provides an additional option for managing functions in a multi-cloud and multi-provider world. "With a rapidly growing community around Fn, offering a first-class integration with the Serverless Framework will help bring our two great communities closer together, providing a 'no lock-in' model of serverless computing to companies of all sizes, from startups to the largest enterprises," says Chad Arimura, VP Software Development, Oracle.
- OpenCensus: Fn is now using the OpenCensus stats, trace, and view APIs across all Fn code. OpenCensus is a single distribution of libraries that automatically collects traces and metrics from your app, displays them locally, and sends them to any analysis tool. OpenCensus has made good decisions in defining its own data formats that allow developers to use any backend (explicitly not having to create their own data structures simply for collection). This allows Fn to easily stay up to date in the ops world without continuously having to make extensive code changes.

For more information, join Chad Arimura and Matt Stephenson on Friday, May 4 for their talk at KubeCon on Operating a Global Scale FaaS on top of Kubernetes.
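As a quick, hedged illustration of the container-native function workflow Fn provides, here is a minimal sketch using the Fn CLI; the app and function names are hypothetical, and exact subcommands can vary between Fn CLI releases (older versions use fn call instead of fn invoke, and fn apps create instead of fn create app).

# Start a local Fn server (requires Docker)
fn start &

# Scaffold a Node.js function and deploy it to a (hypothetical) app named "demo"
fn init --runtime node hello
cd hello
fn create app demo
fn deploy --app demo --local    # --local builds the image without pushing to a registry

# Invoke the deployed function
fn invoke demo hello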


A Quick Look At What's New In Oracle JET v5.0.0

The newest release of Oracle JET was delivered to the community on April 16th, continuing the foundational concept of delivering a toolkit on a consistent and predictable release schedule that application developers can rely on. This is the 24th consecutive on-schedule release for Oracle JET.

This release is primarily a maintenance release, with updates to the underlying open source dependencies where needed and quite a bit of housekeeping, including the removal of previously deprecated APIs. As always, the Release Notes provide the details, and it's highly recommended that you take some time to read through the sections that describe the removed APIs. Most have been under deprecation notice for well over a year; in some cases, APIs are being removed whose deprecation was announced almost four years ago. This will help keep things as clean and lightweight as possible going forward.

One of the first things you'll probably notice is that the Home page now has an option to check out the new Visual Builder Cloud Service. For those who are more familiar and comfortable with a declarative approach to web development, Visual Builder provides a very comprehensive drag-and-drop approach to developing JET-based applications. If you find yourself in a position where you need to get down to the code while working in Visual Builder, the newest release now provides full code-level development as well. Just hit the Code button and you'll find yourself writing real JET code, with code completion, inline documentation, and more. It's the same code that you see in the Cookbook and other sample applications today.

New Ways to Get Started

The Get Started page has also received a bit of a face lift. As the JET community continues to grow, there are more developers looking at JET for the first time, and providing multiple ways to get that first experience is important. You'll now find that you can get started by using Visual Builder as described above, or take a quick look at how JET code is structured with a quick sample available on jsFiddle. Of course, the Command Line Interface (ojet-cli) is still the primary method for getting things off the ground with JET.

Growth and Success

The JET community continues to grow at a rapid pace, and we are proud to have three new Oracle partners/customers added to the Success Stories page in this release. We also added a new Oracle product which is providing tremendous opportunities for cloud startups. Visit the Success Stories page to learn more. If you have a JET application, or your company is using JET and you'd like to be included on the JET Success Stories page, please drop a note in the JET Community Forums.

A Single Source of Truth for Resource Paths

The Oracle JET Command Line Interface itself has added a few new features in this release. One of the most notable is the consolidation of resource path definitions into one configuration file. If you have tried adding 3rd-party libraries to a JET application in the past, you found yourself adding the path to those libraries in up to three different files to make sure things worked in both development and a production build of the application. Everything is now in a single file called "path-mappings.json". Check out the Migration chapter of the Developer's Guide for details on how to work with this new single source of truth for paths.
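For developers taking their first steps with JET, here is a minimal sketch of getting an application off the ground with the ojet-cli; the application name and template are hypothetical, and third-party library paths are registered in the path-mappings.json file described above (in earlier releases this was spread across several configuration files).

# Install the Oracle JET command line interface (assumes Node.js and npm are installed)
npm install -g @oracle/ojet-cli

# Scaffold and serve a new JET application (name and template are hypothetical)
ojet create myjetapp --template=navdrawer
cd myjetapp
ojet serve                # runs a local development server with live reload

# Add a third-party library, then register its path once in the path-mappings file
npm install lodash --save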
Composite Component Architecture (CCA) Continues to Mature

The Composite Component Architecture (CCA) continues to be a major focus of Oracle JET, and each release brings more enhancements to the metadata and structure of the overall architecture. The best place to keep track of what is happening in CCA development is Duncan Mills' blog series. The latest installment covers changes made in the JET v5.0.0 release.

Theming Gets an Update

Theming has always been a significant feature of JET, with the inclusion of SASS (.scss) files for the default Alta theme, themes for the Android, iOS, and Windows platforms, as well as a Theme Builder application to help you build your own theme as needed. In JET v5.0.0 the method for defining the base color scheme has been revised. Take a look at the Theme Changes section of the Release Notes for details, as well as the Theme Builder example on the JET website.

New Task Types in oj-Gantt

The Gantt chart has been gaining features over the last few releases, and this release adds the ability to define new types of tasks such as Summary and Milestone. Continue to watch this component over future releases as it matures to meet more and more use cases.

As always, your comments and constructive feedback are welcome. If you have questions or comments, please engage with the Oracle JET community in the Discussion Forums, or follow @OracleJET on Twitter. On behalf of the entire JET development team, Happy Coding!!


Database

Announcing the General Availability of MySQL 8.0

MySQL adds NoSQL and many new enhancements to the world's most popular open source database:

- NoSQL: Document Store gives developers the flexibility of developing traditional SQL relational applications and NoSQL, schema-free document database applications, eliminating the need for a separate NoSQL document database.
- SQL: Window functions, Common Table Expressions, NOWAIT and SKIP LOCKED, descending indexes, grouping, regular expressions, character sets, cost model, and histograms.
- JSON: Extended syntax, new functions, improved sorting, and partial updates. With JSON table functions you can use the SQL machinery for JSON data.
- GIS: Geography support. Spatial Reference Systems (SRS), as well as SRS-aware spatial datatypes, spatial indexes, and spatial functions.
- Reliability: DDL statements have become atomic and crash safe; metadata is stored in a single, transactional data dictionary.
- Observability: Performance Schema, Information Schema, invisible indexes, error logging.
- Manageability: Persistent configuration variables, undo tablespace management, RESTART command, and new DDL.
- High Availability: InnoDB Cluster delivers an integrated, native HA solution for your databases.
- Security: OpenSSL improvements, new default authentication, SQL roles, breaking up the SUPER privilege, password strength, authorization.
- Performance: Up to 2x faster than MySQL 5.7.

Developer Features

MySQL 8.0 delivers many new features requested by developers in areas such as SQL, JSON, and GIS. Developers also want to be able to store emojis, so UTF8MB4 is now the default character set in 8.0.

NoSQL Document Store
MySQL Document Store gives developers maximum flexibility for developing traditional SQL relational applications and NoSQL, schema-free document database applications, eliminating the need for a separate NoSQL document database. The MySQL Document Store provides multi-document transaction support and full ACID compliance for schema-less JSON documents.

SQL Window Functions
MySQL 8.0 delivers SQL window functions. Similar to grouped aggregate functions, window functions perform a calculation on a set of rows, e.g. COUNT or SUM. But where a grouped aggregate collapses this set of rows into a single row, a window function performs the aggregation for each row in the result set. Window functions come in two flavors: SQL aggregate functions used as window functions, and specialized window functions.

Common Table Expressions
MySQL 8.0 delivers [Recursive] Common Table Expressions (CTEs). Non-recursive CTEs can be described as "improved derived tables", as they allow a derived table to be referenced more than once. A recursive CTE is a set of rows which is built iteratively: from an initial set of rows, a process derives new rows, which grow the set, and those new rows are fed into the process again, producing more rows, and so on, until the process produces no more rows. (Figure: MySQL CTE and Window Functions in MySQL Workbench 8.0)

NOWAIT and SKIP LOCKED
MySQL 8.0 delivers NOWAIT and SKIP LOCKED alternatives in the SQL locking clause. Normally, when a row is locked due to an UPDATE or a SELECT ... FOR UPDATE, any other transaction has to wait to access that locked row. In some use cases there is a need to either return immediately if a row is locked or to ignore locked rows. A locking clause using NOWAIT will never wait to acquire a row lock; instead, the query fails with an error. A locking clause using SKIP LOCKED will never wait to acquire a row lock on the listed tables.
Instead, the locked rows are skipped and not read at all.

Descending Indexes
MySQL 8.0 delivers support for indexes in descending order. Values in such an index are arranged in descending order, and the index is scanned forward. Before 8.0, when a user created a descending index, MySQL actually created an ascending index and scanned it backwards. One benefit is that forward index scans are faster than backward index scans.

GROUPING
MySQL 8.0 delivers GROUPING() (SQL feature T433). The GROUPING() function distinguishes super-aggregate rows from regular grouped rows. GROUP BY extensions such as ROLLUP produce super-aggregate rows where the set of all values is represented by NULL. Using the GROUPING() function, you can distinguish a NULL representing the set of all values in a super-aggregate row from a NULL in a regular row.

JSON
MySQL 8.0 adds new JSON functions and improves performance for sorting and grouping JSON values.
- Extended syntax for ranges in JSON path expressions: For example, SELECT JSON_EXTRACT('[1, 2, 3, 4, 5]', '$[1 to 3]'); results in [2, 3, 4]. The new syntax is a subset of the SQL standard syntax described in SQL:2016, 9.39 SQL/JSON path language: syntax and semantics.
- JSON table functions: JSON_TABLE() creates a relational view of JSON data, enabling the use of the SQL machinery for JSON data. It maps the result of a JSON data evaluation into relational rows and columns. The user can query the result returned by the function as a regular relational table using SQL, e.g. join, project, and aggregate.
- JSON aggregation functions: MySQL 8.0 adds JSON_ARRAYAGG() to generate JSON arrays and JSON_OBJECTAGG() to generate JSON objects. This makes it possible to combine JSON documents in multiple rows into a JSON array or a JSON object.
- JSON merge functions: The JSON_MERGE_PATCH() function implements the semantics of JavaScript (and other scripting languages) specified by RFC 7396, i.e. it removes duplicates by precedence of the second document. For example, JSON_MERGE_PATCH('{"a":1,"b":2}','{"a":3,"c":4}') returns {"a":3,"b":2,"c":4}.
- JSON improved sorting: MySQL 8.0 gives better performance for sorting/grouping JSON values by using variable-length sort keys. Preliminary benchmarks show from 1.2 to 18 times improvement in sorting, depending on the use case.
- JSON partial update: MySQL 8.0 adds support for partial update for the JSON_REMOVE(), JSON_SET(), and JSON_REPLACE() functions. If only some parts of a JSON document are updated, the handler is given information about what was changed, so that the storage engine and replication don't need to write the full document.

GIS
MySQL 8.0 delivers geography support. This includes metadata support for Spatial Reference Systems (SRS), as well as SRS-aware spatial datatypes, spatial indexes, and spatial functions.

Character Sets
MySQL 8.0 makes UTF8MB4 the default character set. UTF8MB4 is the dominant character encoding for the web, and this move will make life easier for the vast majority of MySQL users.

Cost Model
Query optimizer takes data buffering into account: MySQL 8.0 chooses query plans based on knowledge about whether data resides in memory or on disk. This happens automatically; as seen from the end user, there is no configuration involved. Historically, the MySQL cost model has assumed data to reside on spinning disks.
The cost constants associated with looking up data in memory and on disk are now different; thus, the optimizer will choose more optimal access methods for the two cases, based on knowledge of the location of the data.

Optimizer Histograms
MySQL 8.0 implements histogram statistics. With histograms, the user can create statistics on the data distribution for a column in a table, typically for non-indexed columns, which are then used by the query optimizer to find the optimal query plan. The primary use case for histogram statistics is calculating the selectivity (filter effect) of predicates of the form "COLUMN operator CONSTANT".

Reliability

Transactional Data Dictionary
MySQL 8.0 increases reliability by ensuring atomic, crash-safe DDL with the transactional data dictionary. With this, the user is guaranteed that any DDL statement will either be executed fully or not at all. This is particularly important in a replicated environment; otherwise there can be scenarios where masters and slaves (nodes) get out of sync, causing data drift.

Observability

Information Schema (speed-up)
MySQL 8.0 reimplements the Information Schema. In the new implementation the Information Schema tables are simple views on data dictionary tables stored in InnoDB. This is far more efficient than the old implementation, with up to a 100-times speedup.

Performance Schema (speed-up)
MySQL 8.0 speeds up Performance Schema queries by adding more than 100 indexes on Performance Schema tables.

Manageability

INVISIBLE Indexes
MySQL 8.0 adds the capability of toggling the visibility of an index (visible/invisible). An invisible index is not considered by the optimizer when it makes the query execution plan. However, the index is still maintained in the background, so it is cheap to make it visible again. The purpose is to let a DBA or DevOps engineer determine whether an index can be dropped. If you suspect an index is not being used, you first make it invisible, then monitor query performance, and finally remove the index if no query slowdown is experienced.

High Availability
MySQL InnoDB Cluster delivers an integrated, native HA solution for your databases. It tightly integrates MySQL Server with Group Replication, MySQL Router, and MySQL Shell, so you don't have to rely on external tools, scripts, or other components.

Security Features

OpenSSL by Default in Community Edition
MySQL 8.0 unifies on OpenSSL as the default TLS/SSL library for both MySQL Enterprise Edition and MySQL Community Edition.

SQL Roles
MySQL 8.0 implements SQL roles. A role is a named collection of privileges, and its purpose is to simplify user access rights management. You can create roles, grant privileges to roles, grant roles to users, drop roles, and decide which roles are applicable during a session.

Performance
MySQL 8.0 is up to 2x faster than MySQL 5.7. MySQL 8.0 comes with better performance for read/write workloads, IO-bound workloads, and high-contention "hot spot" workloads.

Scaling Read/Write Workloads
MySQL 8.0 scales well on read/write and heavy-write workloads. On intensive read/write workloads we observe better performance starting from as few as 4 concurrent users, and more than 2 times better performance on high loads compared to MySQL 5.7. We can say that while 5.7 significantly improved scalability for read-only workloads, 8.0 significantly improves scalability for read/write workloads. The effect is that MySQL improves hardware utilization (efficiency) for standard server-side hardware (such as systems with 2 CPU sockets).
This improvement is due to a redesign of how InnoDB writes to the redo log. In contrast to the historical implementation, where user threads were constantly fighting to log their data changes, in the new redo log solution user threads are lock-free, redo writing and flushing are managed by dedicated background threads, and the whole redo processing becomes event-driven.

Utilizing IO Capacity (Fast Storage)
MySQL 8.0 allows users to use every storage device to its full power. For example, in testing with Intel Optane flash devices, we were able to deliver 1M point-select QPS in a fully IO-bound workload.

Better Performance on High-Contention Loads ("Hot Rows")
MySQL 8.0 significantly improves the performance of high-contention workloads. A high-contention workload occurs when multiple transactions are waiting for a lock on the same row in a table, causing queues of waiting transactions. Many real-world workloads are not smooth over, say, a day, but have bursts at certain hours. MySQL 8.0 deals much better with such bursts in terms of transactions per second, mean latency, and 95th-percentile latency. The benefit to the end user is better hardware utilization (efficiency), because the system needs less spare capacity and can thus run at a higher average load.

MySQL 8.0 Enterprise Edition
For mission-critical applications, MySQL Enterprise Edition provides the following additional capabilities:
- MySQL Enterprise Backup for full, incremental, and partial backups, point-in-time recovery, and backup compression.
- MySQL Enterprise High Availability for integrated, native HA with InnoDB Cluster.
- MySQL Enterprise Transparent Data Encryption (TDE) for data-at-rest encryption.
- MySQL Enterprise Encryption for encryption, key generation, digital signatures, and other cryptographic features.
- MySQL Enterprise Authentication for integration with existing security infrastructures, including PAM and Windows Active Directory.
- MySQL Enterprise Firewall for real-time protection against database-specific attacks, such as SQL injection.
- MySQL Enterprise Audit for adding policy-based auditing compliance to new and existing applications.
- MySQL Enterprise Monitor for managing your database infrastructure.
- Oracle Enterprise Manager for monitoring MySQL databases from existing OEM implementations.

MySQL Cloud Service
Oracle MySQL Cloud Service is built on MySQL Enterprise Edition and powered by Oracle Cloud, providing an enterprise-grade MySQL database service. It delivers best-in-class management tools, self-service provisioning, elastic scalability, and multi-layer security.

Resources
- MySQL Documentation
- MySQL Downloads
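As a small, hedged illustration of a few of the SQL additions described above (a window function, a CTE, and SKIP LOCKED), here is a minimal sketch run through the mysql command-line client against a MySQL 8.0 server; the schema, table, and column names are hypothetical throwaway examples.

# Assumes a local MySQL 8.0 server; the "demo" schema and "orders" table are made up for illustration.
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
CREATE TABLE orders (id INT PRIMARY KEY, customer_id INT, amount DECIMAL(10,2));
INSERT INTO orders VALUES (1,1,10.00),(2,1,25.00),(3,2,40.00);

-- Window function: per-row aggregation without collapsing the rows
SELECT id, customer_id, amount,
       SUM(amount) OVER (PARTITION BY customer_id) AS customer_total,
       RANK() OVER (ORDER BY amount DESC)          AS amount_rank
FROM orders;

-- Non-recursive CTE, referenced like an "improved derived table"
WITH big_orders AS (SELECT * FROM orders WHERE amount > 20)
SELECT customer_id, COUNT(*) AS big_order_count FROM big_orders GROUP BY customer_id;

-- Skip rows currently locked by another transaction instead of waiting on them
SELECT * FROM orders FOR UPDATE SKIP LOCKED;
SQL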


Chatbots

JavaOne Event Expands with More Tracks, Languages and Communities – and New Name

The JavaOne conference is expanding to create a new, bigger event that's inclusive of more languages, technologies, and developer communities. Expect more talks on Go, Rust, Python, JavaScript, SQL, and R, along with more of the great Java technical content that developers have come to expect. We're calling the new event Oracle Code One, October 22-25 at Moscone West in San Francisco.

Oracle Code One will include a Java technical keynote with the latest information on the Java platform from the architects of the Java team. It will also have the latest details on Java 11, advances in OpenJDK, and other core Java development. We are planning dedicated tracks for server-side Java EE technology, including Jakarta EE (now part of the Eclipse Foundation), Spring, and the latest advances in Java microservices and containers, as well as a wealth of community content on client development, JVM languages, IDEs, test frameworks, and more. As we expand, developers can also expect additional leading-edge topics such as chatbots, microservices, AI, and blockchain. There will also be sessions around our modern open source developer technologies, including Oracle JET, the Fn Project, and OpenJFX.

Finally, one of the things that will continue to make this conference so great is the breadth of community-run activities, such as Oracle Code4Kids workshops for young developers, IGNITE lightning talks run by local JUG leaders, and an array of technology demos and community projects showcased in the Developer Lounge. Expect a grand finale with the Developer Community Keynote to close out this week of fun, technology, and community.

Today, we are launching the call for papers for Oracle Code One, and you can apply now to be part of any of the 11 tracks of content for Java developers, database developers, full stack developers, DevOps practitioners, and community members. I hope you are as excited about this expansion of JavaOne as I am and will join me at the inaugural year of Oracle Code One! Please submit your abstracts here for consideration: https://www.oracle.com/code-one/index.html


Podcasts

Beyond Chatbots: An AI Odyssey

This month the Oracle Developer Community Podcast looks beyond chatbots to explore artificial intelligence - its current capabilities, staggering potential, and the challenges along the way. One of the most surprising comments to emerge from this discussion reveals how a character from a 50-year-old feature film factors into one of the most pressing AI challenges.

According to podcast panelist Phil Gordon, CEO and founder of Chatbox.com, the HAL 9000 computer at the center of Stanley Kubrick's 1968 science fiction classic "2001: A Space Odyssey" is very much on the minds of those now rushing to deploy AI-based solutions. "They have unrealistic expectations of how well AI is going to work and how much it's going to solve out of the box." (And apparently they're willing to overlook HAL's abysmal safety record.) It's easy to see how an AI capable of carrying on a conversation while managing and maintaining all the systems on a complex interplanetary spaceship would be an attractive idea for those who would like to apply similar technology to keeping a modern business on course. But the reality of today's AI is a bit more modest (if less likely to refuse to open the pod bay doors).

In the podcast, Lyudmil Pelov, a cloud solutions architect with Oracle's A-Team, explains that unrealistic expectations about AI have been fed by recent articles that portray AI as far more human-like than is currently possible. "Most people don't understand what's behind the scenes," says Lyudmil. "They cannot understand that the reality of the technology is very different. We have these algorithms that can beat humans at Go, but that doesn't necessarily mean we can find the cure for the next disease." Those leaps forward are possible. "From a practical perspective, however, someone has to apply those algorithms," Lyudmil says.

For podcast panelist Brendan Tierney, an Oracle ACE Director and principal consultant with Oralytics, accessing relevant information from within the organization poses another AI challenge. "When it comes to customer expectations, there's an idea that it's a magic solution, that it will automatically find and discover and save lots of money automatically. That's not necessarily true." But behind that magic is a lot of science. "The general term associated with this is 'data science,'" Brendan explains. "The science to it is that there is a certain amount of experimental work that needs to be done. We need to find out what works best with your data. If you're using a particular technique or algorithm or whatever, it might work for one company, but it might not work best for you. You've got to get your head around the idea that we are in a process of discovery and learning and we need to work out what's best for your data in your organization and processes."

For panelist Joris Schellekens, software engineer at iText, a key issue is that of traceability. "If the AI predicts something or if your system makes some kind of decision, where does that come from? Why does it decide to do that? This is important to be able to explain expectations correctly, but also in case of failure - why does it fail and why does it decide to do this instead of the correct thing?"

Of course, these issues are only a sampling of what is discussed by the experienced developers in this podcast. So plug in and gain insight that just might help you navigate your own AI odyssey.
The Panelists

- Phil Gordon, CEO/founder, Chatbox.com
- Lyudmil Pelov, Oracle A-Team Cloud Architect, Mobile, Cloud and Bot Technologies, Oracle
- Joris Schellekens, Software Engineer, iText
- Brendan Tierney, Consultant, Architect, Author, Oralytics

Additional Resources

- 6 Ways Automated Security Becomes A Developer's Ally
- Three Advances That Will Finally Make Software Self-Healing, Self-Tuning, and Self-Managing
- Podcast: Combating Complexity: Fad, Fashion, and Failure in Software Development

Coming Soon
The Making of a Meet-Up

Subscribe
Never miss an episode! The Oracle Developer Community Podcast is available via:
- iTunes
- Podbean
- Feedburner


Database

Announcing GraalVM: Run Programs Faster Anywhere

Current production virtual machines (VMs) provide high-performance execution of programs only for a specific language or a very small set of languages. Compilation, memory management, and tooling are maintained separately for different languages, violating the 'don't repeat yourself' (DRY) principle. This leads to a larger burden not only for VM implementers, but also for developers, due to inconsistent performance characteristics, tooling, and configuration. Furthermore, communication between programs written in different languages requires costly serialization and deserialization logic. Finally, high-performance VMs are heavyweight processes with a high memory footprint that are difficult to embed.

Several years ago, to address these shortcomings, Oracle Labs started a new research project to explore a novel architecture for virtual machines. Our vision was to create a single VM that would provide high performance for all programming languages, thereby facilitating communication between programs. This architecture would support unified, language-agnostic tooling for better maintainability, and its embeddability would make the VM ubiquitous across the stack. To meet this goal, we have invented a new approach for building such a VM. After years of extensive research and development, we are now ready to present the first production-ready release.

Introducing GraalVM

Today, we are pleased to announce the 1.0 release of GraalVM, a universal virtual machine designed for a polyglot world. GraalVM provides high performance for individual languages and interoperability with zero performance overhead for creating polyglot applications. Instead of converting data structures at language boundaries, GraalVM allows objects and arrays to be used directly by foreign languages. Example scenarios include accessing functionality of a Java library from Node.js code, calling a Python statistical routine from Java, or using R to create a complex SVG plot from data managed by another language. With GraalVM, programmers are free to use whatever language they think is most productive to solve the current task.

GraalVM 1.0 allows you to run:
- JVM-based languages like Java, Scala, Groovy, or Kotlin
- JavaScript (including Node.js)
- LLVM bitcode (created from programs written in e.g. C, C++, or Rust)
- Experimental versions of Ruby, R, and Python

GraalVM can either run standalone, embedded as part of platforms like OpenJDK or Node.js, or even embedded inside databases such as MySQL or the Oracle RDBMS. Applications can be deployed flexibly across the stack via the standardized GraalVM execution environments. In the case of data processing engines, GraalVM directly exposes the data stored in custom formats to the running program without any conversion overhead.

For JVM-based languages, GraalVM offers a mechanism to create precompiled native images with instant startup and low memory footprint. The image generation process runs a static analysis to find all code reachable from the main Java method and then performs a full ahead-of-time (AOT) compilation. The resulting native binary contains the whole program in machine code form for immediate execution. It can be linked with other native programs and can optionally include the GraalVM compiler for complementary just-in-time (JIT) compilation support, to run any GraalVM-based language with high performance.

A major advantage of the GraalVM ecosystem is language-agnostic tooling that is applicable in all GraalVM deployments.
The core GraalVM installation provides a language-agnostic debugger, profiler, and heap viewer. We invite third-party tool developers and language developers to enrich the GraalVM ecosystem using the instrumentation API or the language-implementation API. We envision GraalVM as a language-level virtualization layer that allows leveraging tools and embeddings across all languages.

GraalVM in Production

Twitter is one of the companies already deploying GraalVM in production today, for executing their Scala-based microservices. The aggressive optimizations of the GraalVM compiler reduce object allocations and improve overall execution speed. This results in fewer garbage collection pauses and less computing power necessary for running the platform. See this presentation from a Twitter JVM engineer describing their experiences in detail and how they are using the GraalVM compiler to save money. In the current 1.0 release, we recommend JVM-based languages and JavaScript (including Node.js) for production use, while R, Ruby, Python, and LLVM-based languages are still experimental.

Getting Started

The binary of the GraalVM v1.0 (release candidate) Community Edition (CE), built from the GraalVM open source repository on GitHub, is available here. We are looking for feedback from the community on this release candidate, and we welcome it in the form of GitHub issues or GitHub pull requests. In addition to GraalVM CE, we also provide the GraalVM v1.0 (release candidate) Enterprise Edition (EE) for better security, scalability, and performance in production environments. GraalVM EE is available on Oracle Cloud Infrastructure and can be downloaded from the Oracle Technology Network for evaluation. For production use of GraalVM EE, please contact graalvm-enterprise_grp_ww@oracle.com.

Stay Connected

The latest up-to-date downloads and documentation can be found at www.graalvm.org. Follow our daily development, request enhancements, or report issues via our GitHub repository at www.github.com/oracle/graal. We encourage you to subscribe to these GraalVM mailing lists:
- graalvm-announce@oss.oracle.com
- graalvm-users@oss.oracle.com
- graalvm-dev@oss.oracle.com
We communicate via the @graalvm alias on Twitter and watch for any tweet or Stack Overflow question with the #GraalVM hashtag.

Future

This first release is only the beginning. We are working on improving all aspects of GraalVM, in particular the support for Python, R, and Ruby. GraalVM is an open ecosystem, and we encourage building your own languages or tools on top of it. We want to make GraalVM a collaborative project enabling standardized language execution and a rich set of language-agnostic tooling. Please find more at www.graalvm.org on how to:
- allow your own language to run on GraalVM
- build language-agnostic tools for GraalVM
- embed GraalVM in your own application
We look forward to building this next-generation technology for a polyglot world together with you!
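As a small, hedged sketch of the polyglot and native-image capabilities described above, the following assumes a GraalVM installation whose launchers are on the PATH; the file names are hypothetical, and in some GraalVM distributions the native-image tool must first be installed with gu install native-image.

# Run JavaScript on the GraalVM js launcher with Java interop enabled via --jvm
cat > hello.js <<'EOF'
// Java interop from JavaScript: BigInteger is resolved from the JVM
var BigInteger = Java.type('java.math.BigInteger');
console.log(BigInteger.valueOf(2).pow(100).toString());
EOF
js --jvm hello.js

# Ahead-of-time compile a Java class into a standalone native executable
cat > HelloWorld.java <<'EOF'
public class HelloWorld {
    public static void main(String[] args) { System.out.println("Hello from a native image"); }
}
EOF
javac HelloWorld.java
native-image HelloWorld     # produces ./helloworld with instant startup
./helloworld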


Three Quick Tips API Platform CS - Gateway Installation (Part 3)

Part 2 of the series can be accessed here. Today we keep it short and simple: here are three troubleshooting tips for the Oracle API CS Gateway installation.

1. If, while running the "install" action, you see output something like:

-bash-4.2$ ./APIGateway -f gateway-props.json -a install-configure-start-join
Please enter user name for weblogic domain,representing the gateway node: weblogic
Password:
2018-03-22 17:33:20,342 INFO action: install-configure-start-join
2018-03-22 17:33:20,342 INFO Initiating validation checks for action: install.
2018-03-22 17:33:20,343 WARNING Previous gateway installation found at directory = /u01/oemm
2018-03-22 17:33:20,343 INFO Current cleanup action is CLEAN
2018-03-22 17:33:20,343 INFO Validation complete
2018-03-22 17:33:20,343 INFO Action install is starting
2018-03-22 17:33:20,343 INFO start action: install
2018-03-22 17:33:20,343 INFO Clean started.
2018-03-22 17:33:20,345 INFO Logging to file /u01/oemm/logs/main.log
2018-03-22 17:33:20,345 INFO Outcomes of operations will be accumulated in /u01/oemm/logs/status.log
2018-03-22 17:33:20,345 INFO Clean finished.
2018-03-22 17:33:20,345 INFO Installing Gateway
2018-03-22 17:33:20,718 INFO complete action: install isSuccess: failed detail: {}
2018-03-22 17:33:20,718 ERROR Action install has failed. Detail: {}
2018-03-22 17:33:20,718 WARNING Full-Setup execution incomplete. Please check log file for more details
2018-03-22 17:33:20,719 INFO Execution complete.

the issue could be "/tmp" directory permissions. Check that the /tmp directory, which the OUI installer uses by default, is not mounted with "noexec", "nosuid", or "nodev", and check for other permission issues as well. Another area to investigate is the size allocated to the "/tmp" file system (it should be greater than or equal to 10 GB).

2. If at some point while running any of the installer actions you get an "Invalid JSON object: .... " error, check whether the gateway-master.props file is empty. This can happen if, for example, you execute "ctrl+z" to exit an installer action. The best approach is to back up the gateway-master.json file and replace it in case the above error happens. In the worst case, copy the gateway-master .

3. If the "start" action is unable to start the managed server but the admin server starts OK, try changing the "publishAddress" property's value to the "listenIpAddress" property's value and run install, configure, and start again. In other words, set "publishAddress" = "listenIpAddress".

That is all for now; we will be back soon with more.
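A minimal sketch of how you might verify the /tmp conditions from tip 1 on the gateway host, using standard Linux commands; the remount step assumes /tmp is a separate mount and requires root privileges.

# Check how /tmp is mounted and whether noexec/nosuid/nodev are set
mount | grep ' /tmp '
findmnt -no OPTIONS /tmp

# Check the space available to /tmp (the installer expects at least 10 GB)
df -h /tmp

# If /tmp is mounted noexec, temporarily remount it with exec for the installation
sudo mount -o remount,exec /tmp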


DevOps

Introducing Build Pipeline in Oracle Developer Cloud

With our current release, we are introducing a new build engine in Oracle Developer Cloud. The new build engine also comes with enhanced functionality and a new user interface in the Oracle Developer Cloud Service 'Build' tab for defining build pipelines visually. This was a much-awaited capability in Oracle Developer Cloud from the Continuous Integration and Continuous Delivery perspective.

So what is changing in the Developer Cloud build?

The screenshot below shows the user interface for the new 'Build' tab in Oracle Developer Cloud. A quick glance at it tells you that there is a new 'Pipelines' tab added alongside the 'Jobs' tab. The concept of creating build jobs remains the same; we now have pipelines in addition to the build jobs that you can create.

Creating a build job has gone through a change as well. When you create a build job by clicking the '+New Job' button in the Build tab, you will get a dialog box for creating a new build job. The first screenshot shows the earlier 'New Job' dialog, where you could give the job name and select whether to create a freestyle job or copy an existing build job. The second screenshot shows the latest 'New Job' dialog in Oracle Developer Cloud. It has a job name, a description (which you could previously only set in the build configuration interface), create new/copy existing job options, a checkbox to select 'Use for Merge Request', and - the most noticeable addition - the Software Template dropdown.

Dialog in the old build system:
Dialog in the new build system:

What do these additional fields in the 'New Job' dialog mean?

Description: The job description, which you could previously set only in the build configuration interface. You will still be able to edit it in the build configuration as part of the Settings tab.

Use for Merge Request: By selecting this option, your build will be parameterized to get the Git repo URL, Git repo branch, and Git repo merge id, and to perform the merge as part of the build.

Software Template: With this release you will be using your own Oracle Compute Classic instances to run/execute your build jobs. Earlier, build jobs were executed on an internal pool of compute. This gives you immense flexibility to configure your build machine with the software runtimes you need, using the user interface that we provide as part of Developer Cloud Service. These configurations persist, and the build machines will not be reclaimed, as they run on your own compute instances. This also enables you to run multiple parallel builds without any constraint, by spinning up new compute instances as per your requirements. You will be able to create multiple VM templates with different software configurations and choose among them while creating build jobs. Please use this link to refer to the documentation for configuring Software Templates.

Build Configuration Screen

In the build configuration tab you will now have two tabs, as seen in the screenshot below:
- Build Configuration
- Build Settings

As seen in the screenshot below, the Build Configuration tab has Source Control, Build Parameters, Build Environment, Builders, and Post Build sub-tabs, while the Build Settings tab has sub-tabs such as General, Software, Triggers, and Advanced. Below is a brief description of each tab.

General: As seen in the screenshot below, this is for generic build-job-related details. It is similar to the Main tab which existed previously.
Software: This tab is a new introduction in the build configuration to support the Software Templates for build machines introduced in this release, as described above. It lets you see or change the software template you selected while creating the build job, and also lets you see the software (runtimes) available in the template. Please see the screenshot below for reference.

Triggers: You will be able to add build triggers such as a Periodic Trigger or an SCM Polling Trigger, as shown in the screenshot below. This is similar to the Triggers tab that existed earlier.

Advanced: Consists of build settings related to job-abort conditions, retry count, and adding a timestamp to the console output.

In the Build Configuration Tab

There are four sub-tabs in the Build Configuration tab, as described below:

Source Control: You can add Git as the source control from the 'Add Source Control' dropdown.

Build Parameters: Apart from the existing build parameters such as String Parameter, Password Parameter, Boolean Parameter, and Choice Parameter, there is a new parameter type called Merge Request Parameters. The Merge Request Parameters are added automatically when the 'Use for Merge Request' checkbox is selected while creating the build job. This adds the Git repo URL, Git repo branch, and Git repo merge id as build parameters.

Build Environment: A new build environment setting, SonarQube Settings, has been added apart from the existing Xvfb Wrapper, Copy Artifacts, and Oracle Maven Repository Connection. SonarQube Settings is for static code analysis using the SonarQube tool; I will be publishing a separate blog on SonarQube in Developer Cloud.

Builders: To add build steps. There is an addition to the build steps: Docker Builder, with support to build Docker images and execute any Docker command. (I will be releasing a separate blog on Docker.)

Post Build: To add post-build configurations such as deployment. SonarQube Result Publisher is the new post-build configuration added in the current release.

Pipelines

After creating and configuring the build jobs, you can create a pipeline in the Pipelines tab using these build jobs. You can create a new pipeline using the '+New Pipeline' button; you will see a dialog to create the new pipeline. On creation of the pipeline, you can drag and drop the build jobs in the pipeline visual editor, and sequence and connect the build jobs as per your requirements. You can also add conditions to a connection by double-clicking the link and selecting the condition from the dropdown, as shown in the screenshot below. Once completed, the pipeline will be listed in the Pipelines tab.

You can start the build manually using the play button. You can also configure the pipeline to auto-start when one of its jobs is executed externally.

Stay tuned for more blogs on the latest features and capabilities of Developer Cloud Service. Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle
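As a hedged illustration of what a typical build step might contain once a job is set up this way, here is a minimal Unix Shell Builder script a job could run; the Maven goals are hypothetical, and the merge-request variable name below is a placeholder for whatever parameter name your job actually exposes.

# Hypothetical Unix Shell Builder step: build and package the checked-out sources.
# GIT_REPO_BRANCH is a placeholder for the merge-request branch parameter added to the job.
echo "Building branch: ${GIT_REPO_BRANCH:-master}"

mvn -B clean package            # compile and run unit tests
ls -l target/*.war              # artifact to pick up in a Post Build / deployment step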


DevOps

Building Docker on Oracle Developer Cloud Service

The much awaited Docker build support on Oracle Developer Cloud Service is here. You can now build Docker images and execute Docker commands as part of your Continuous Integration and Continuous Deployment pipeline. This blog describes what you can do with the Docker build support on Developer Cloud Service and gives an understanding of the Docker commands we can run on Developer Cloud as part of a build job.

Note: A series of follow-up blogs will cover using Docker build on Developer Cloud with different technology stacks and usage scenarios.

Build Job Essentials: The prerequisite for running Docker commands or using the Docker Builder step in a build job is selecting a software template that includes Docker as a software bundle. Selecting a template with Docker ensures that the Build VM instantiated from it has the Docker runtime installed, as shown in the screenshot below. The template names may vary in your instance. To learn about the new build system you can read this blog link, and you can also refer to the documentation on configuring the Build VM.

You can verify whether Docker is part of the selected VM by navigating to Build -> <Build Job> -> Build Settings -> Software. You can refer to this link to understand more about the new build interface on Developer Cloud.

Once you have created the build job with the right software template selected, as described above, go to the Builders tab in the build job and click Add Builder. You will see Docker Builder in the dropdown, as shown in the screenshot below. Selecting Docker Builder gives you the Docker command options provided out of the box; you can run any other Docker command as well, by selecting the Unix Shell Builder and writing your Docker command in it. In the screenshot below you can see two commands selected from the Docker Builder menu.

Docker Version – This command interface prints the Docker version installed on your Build VM.

Docker Login – Using this command interface you can log in and create a connection to a Docker registry. By default this is DockerHub, but you can use Quay.io or any other Docker registry available over the internet. If you leave the Registry Host field empty, it connects to DockerHub.

Docker Build – Using this command interface you can build a Docker image in Oracle Developer Cloud. You must have a Dockerfile in the Git repository configured in the build job, and its path must be given in the Dockerfile field; if the Dockerfile resides in the build context root, you can leave the field empty. You must also give the image a name.

Docker Push – Pushes the Docker image built with the Docker Build command interface to a Docker registry. First use Docker Login to create a connection to the registry you want to push to, then use Docker Push with exactly the same image name you gave in the Docker Build step.

Docker rmi – Removes the Docker images we have built.

As mentioned previously, you can run any Docker command in Developer Cloud: if the UI for a command is not provided, use the Unix Shell Builder to write and execute it. In my follow-up blog series I will use a combination of the out-of-the-box command interfaces and the Unix Shell Builder to execute Docker commands and get build tasks accomplished.
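For reference, each of these command interfaces maps to a plain Docker CLI call, so the same steps could also be scripted in a Unix Shell Builder. A minimal sketch: the image and registry names below are placeholders of my own, not values from this post.

docker version                                            # what the Docker Version step prints on the Build VM
docker login -u <registry-user> -p <registry-password>    # DockerHub by default; pass a registry host to use Quay.io or another registry
docker build -t myuser/myimage:latest .                   # expects a Dockerfile in the configured Git repository (here, in the build context root)
docker push myuser/myimage:latest                         # push the image built above, using exactly the same image name
docker rmi myuser/myimage:latest                          # remove the local image once it has been pushed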
So watch out for the upcoming blogs here. Happy Dockering on Developer Cloud!  **The views expressed in this post are my own and do not necessarily reflect the views of Oracle


DevOps

Build and Deploy .Net Code using Oracle Developer Cloud

The much awaited support for building and deploying .Net code on Oracle Cloud using Developer Cloud Service is here. This blog post shows how you can use Oracle Developer Cloud Service to build .Net code and deploy it on Oracle Application Container Cloud, leveraging the newly released Docker build support in Developer Cloud to perform the build.

Technology Stack Used:

Application Stack: .Net for developing ASPX pages
Build: Docker for compiling the .Net code
Packaging Tool: Grunt to package the compiled code
DevOps Cloud Service: Oracle Developer Cloud
Deployment Cloud Service: Oracle Application Container Cloud
OS for Development: Windows 7

.Net application code source: The ASP.Net application that we will build and deploy on Oracle Cloud using Docker can be downloaded from the Git repository on GitHub:

https://github.com/dotnet/dotnet-docker-samples/tree/master/aspnetapp

If you want to clone the GitHub repository, use the command below after installing the Git CLI on your machine:

git clone https://github.com/dotnet/dotnet-docker-samples/

After cloning the repository, use the aspnetapp sample. Below is the folder structure of the cloned aspnetapp. Apart from the four highlighted files in the screenshot below, which are needed for the deployment, all the other files and folders are part of the .Net application.

Note: You may not see the .git folder yet, because the Git repository has not been initialized.

Now we need to initialize a Git repository for the aspnetapp folder, as we will be pushing this code to a Git repository hosted on Oracle Developer Cloud. Run the commands below on your command line after installing the Git CLI and adding it to your path:

Command prompt > cd <the aspnetapp folder>
Command prompt > git init
Command prompt > git add --all
Command prompt > git commit -m "First Commit"

These commands initialize the Git repository locally in the application folder, add all the code in the folder to the local repository with git add --all, and commit the added files with git commit.

Now go to your Oracle Developer Cloud project and create a Git repository for the .Net code to be pushed to. For the purpose of this blog I created the repository by clicking the 'New Repository' button and named it 'DotNetDockerAppl', as shown in the screenshot below; you may name it as you like. Copy the Git repository URL as shown below, then add it as a remote of the local Git repository:

Command prompt > git remote add origin <Developer Cloud Git repository URL>

Then use the command below to push the code to the master branch of the Developer Cloud hosted Git repository:

Command prompt > git push origin master

Deployment related files that need to be created:

Dockerfile

This file is used by the Docker tool to build a Docker image with .Net Core installed, which also includes the .Net application code cloned from the Developer Cloud Git repository. You get a Dockerfile as part of the project; replace the existing Dockerfile content with the script below.

FROM microsoft/aspnetcore-build:2.0
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -r linux-x64
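As an optional sanity check before involving Developer Cloud at all, you can try the Dockerfile locally. A minimal sketch, assuming Docker is installed on your development machine; the local image name is arbitrary and not taken from this post:

docker build -t aspnetapp-local .         # run from the application folder containing the Dockerfile above
docker run --rm aspnetapp-local ls /app   # list the work directory inside the image to confirm the restore/publish steps ran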
In the Dockerfile above we pull the aspnetcore-build:2.0 image, create a work directory, copy the .csproj file and restore dependencies, then copy the rest of the code from the Git repo and finally use the dotnet command to publish the compiled code for the linux-x64 runtime.

manifest.json

This file is essential for deploying the .Net application on Oracle Application Container Cloud.

{
  "runtime": {
    "majorVersion": "2.0.0-runtime"
  },
  "command": "dotnet AspNetAppl.dll"
}

The command attribute in the JSON specifies the dll to be executed by the dotnet command, and the runtime attribute specifies the .Net version used to execute the compiled code.

Gruntfile.js

This file defines the build task and is used by the build to identify the type of deployment artifact to generate (a zip file in this case) and the project files to include in it. For the .Net application we only need to include everything in the publish folder, along with manifest.json, for Application Container Cloud deployment. The folder is set via the cwd attribute and the files to include via the src attribute, as shown in the code snippet below.

/**
 * http://usejsdoc.org/
 */
module.exports = function(grunt) {
  require('load-grunt-tasks')(grunt);
  grunt.initConfig({
    compress: {
      main: {
        options: {
          archive: 'AspNetAppl.zip',
          pretty: true
        },
        expand: true,
        cwd: './publish',
        src: ['./**/*'],
        dest: './'
      }
    }
  });
  grunt.registerTask('default', ['compress']);
};

package.json

Since Grunt is a Node.js based build tool, which we use in this blog to build and package the deployment artifact, we need a package.json file to define the dependencies required for Grunt to execute.

{
  "name": "ASPDotNetAppl",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "dotnet AspNetAppl.dll"
  },
  "dependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-compress": "^1.3.0",
    "grunt-hook": "^0.3.1",
    "load-grunt-tasks": "^3.5.2"
  }
}

Once all the code is pushed to the Git repository hosted on Oracle Developer Cloud, you can browse and verify it by going to the Code tab and selecting the appropriate Git repository and branch in the dropdowns at the top of the file list, as shown in the screenshot below.

Build Job Configuration on Developer Cloud

Below are the build job configuration screenshots for the 'DotNetBuild' job, which builds and deploys the .Net application. Create a build job by clicking the 'New Job' button and give it a name of your choice; for this blog I named it 'DotNetBuild'. You will also need to select a Software Template which contains the Docker and Node.js runtimes. If you do not see the required software template in the dropdown, as shown in the screenshot below, you will have to configure one from the Organization -> VM Template menu; this will start a Build VM with the required software template. To understand and learn more about configuring VMs and VM Templates you can refer to this link.

Now go to the Builders tab, where we configure the build steps. First we add a Unix Shell builder (execute shell) step in which we build the Docker image using the Dockerfile in our Git repository, create a container from it (but do not start the container), copy the compiled code from the container to the build machine, download the Grunt build tool dependencies from the npm registry, and finally run the grunt command to produce the AspNetAppl.zip file that will be deployed on Application Container Cloud.
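A sketch of what that shell step could look like. The container name and the dotnet publish output path are my assumptions, so adjust them to match your project:

# build the image from the Dockerfile in the cloned Git repository
docker build -t aspnetappl .
# create (but do not start) a container so the published output can be copied out of it
docker create --name aspnetappl-build aspnetappl
# copy the compiled code to the build machine; the publish path is an assumption based on the Dockerfile above
docker cp aspnetappl-build:/app/bin/Release/netcoreapp2.0/linux-x64/publish ./publish
# include the ACCS manifest in the folder that Gruntfile.js packages
cp manifest.json ./publish/
# download the Grunt dependencies declared in package.json, then build AspNetAppl.zip
npm install
node_modules/.bin/grunt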
Next, configure the PSM CLI with the credentials for your ACCS instance along with the domain name. Then add another Unix Shell builder step where you provide the psm command to deploy on Application Container Cloud the zip file we generated earlier using the Grunt build tool. Note: all of this is done in the same 'DotNetBuild' build job that we created earlier.

As the last part of the build configuration, in the Post Build tab configure the Artifact Archiver, as shown below, to archive the generated zip file for deployment.

The screenshot below shows the 'DotNet' application deployed in the Application Container Cloud service console. Copy the application URL as shown in the screenshot; the URL will vary for your cloud instance. Use the copied URL to access the deployed .Net application in a browser. It will look like the screenshot below.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle


Database

Introducing the Oracle MySQL Operator for Kubernetes

(Originally published on Medium)

Introduction

Oracle recently open sourced a Kubernetes operator for MySQL that makes running and managing MySQL on Kubernetes easier. The MySQL Operator is a Kubernetes controller that can be installed into any existing Kubernetes cluster. Once installed, it enables users to create and manage production-ready MySQL clusters using a simple declarative configuration format. Common operational tasks such as backing up databases and restoring from an existing backup are made extremely easy. In short, the MySQL Operator abstracts away the hard work of running MySQL inside Kubernetes.

The project started as a way to help internal teams get MySQL running in Kubernetes more easily, but it quickly became clear that many other people might be facing similar issues.

Features

Before we dive into the specifics of how the MySQL Operator works, let's take a quick look at some of the features it offers:

Cluster configuration: there are only two options for how a cluster is configured.
Primary: the group has a single primary server set to read-write mode; all the other members of the group are set to read-only mode.
Multi-Primary: there is no notion of a single primary, and no election procedure is needed since no server plays a special role.

Cluster management
Create and scale MySQL clusters using InnoDB and Group Replication on Kubernetes
When cluster instances die, the MySQL Operator automatically re-joins them into the cluster
Use Kubernetes Persistent Volume Claims to store data on local disk or network attached storage

Backup and restore
Create on-demand backups
Create backup schedules to automatically back up databases to object storage (S3 etc.)
Restore a database from an existing backup

Operations
Run on any Kubernetes cluster (Oracle Cloud Infrastructure, AWS, GCP, Azure)
Prometheus metrics for alerting and monitoring
Self-healing clusters

The Operator Pattern

A Kubernetes Operator is simply a domain-specific controller that can manage, configure and automate the lifecycle of stateful applications. Managing stateful applications, such as databases, caches and monitoring systems running on Kubernetes, is notoriously difficult. By leveraging the power of the Kubernetes API we can build self-managing, self-driving infrastructure by encoding operational knowledge and best practices directly into code. For instance, if a MySQL instance dies, an Operator can react and take the appropriate action to bring the system back online.

How it works

The MySQL Operator makes use of Custom Resource Definitions as a way to extend the Kubernetes API. For instance, we create custom resources for MySQLClusters and MySQLBackups, and users of the MySQL Operator interact with these resource objects. When a user creates a backup, for example, a new MySQLBackup resource is created inside Kubernetes which contains references and information about that backup. The MySQL Operator is, at its core, a simple Kubernetes controller that watches the API server for custom resources relating to MySQL and acts on them.

HA / Production Ready MySQL Clusters

The MySQL Operator is opinionated about the way in which clusters are configured. We build upon InnoDB cluster (which uses Group Replication) to provide a complete high availability solution for MySQL running on Kubernetes.

Examples

The following examples will give you an idea of how the MySQL Operator can be used to manage your MySQL clusters.
Create a MySQL Cluster

Creating a MySQL cluster using the Operator is easy: we define a simple YAML file and submit it directly to Kubernetes via kubectl. The MySQL Operator watches for MySQLCluster resources and takes action by starting up a MySQL cluster.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLCluster
metadata:
  name: mysql-cluster-with-3-replicas
spec:
  replicas: 3

You should now be able to see your cluster running. There are several other options available when creating a cluster, such as specifying a Persistent Volume Claim to define where your data is stored; see the examples directory in the project for more examples.

Create an on-demand backup

We can use the MySQL Operator to create an "on-demand" database backup and upload it to object storage. Create a backup definition and submit it via kubectl (the nesting below is reconstructed from the original flattened listing):

apiVersion: "mysql.oracle.com/v1"
kind: MySQLBackup
metadata:
  name: mysql-backup
spec:
  executor:
    provider: mysqldump
    databases:
      - test
  storage:
    provider: s3
    secretRef:
      name: s3-credentials
    config:
      endpoint: x.compat.objectstorage.y.oraclecloud.com
      region: ociregion
      bucket: mybucket
  clusterRef:
    name: mysql-cluster

You can now list backups via kubectl:

kubectl get mysqlbackups

Or fetch an individual backup:

kubectl get mysqlbackup api-production-snapshot-151220170858 -o yaml

Create a Backup Schedule

Users can attach scheduled backup policies to a cluster so that backups get created on a given cron schedule. A user may create multiple backup schedules attached to a single cluster if required. This example creates a backup of the cluster's test database every hour and uploads it to Oracle Cloud Infrastructure Object Storage.

apiVersion: "mysql.oracle.com/v1"
kind: MySQLBackupSchedule
metadata:
  name: mysql-backup-schedule
spec:
  schedule: '30 * * * *'
  backupTemplate:
    executor:
      provider: mysqldump
      databases:
        - test
    storage:
      provider: s3
      secretRef:
        name: s3-credentials
      config:
        endpoint: x.compat.objectstorage.y.oraclecloud.com
        region: ociregion
        bucket: mybucket
    clusterRef:
      name: mysql-cluster

Roadmap

Some of the features on our roadmap include:
Support for MySQL Enterprise Edition
Support for MySQL Enterprise Backup

Conclusion

The MySQL Operator showcases the power of Kubernetes as a platform. It makes running MySQL inside Kubernetes easy by abstracting complexity and reducing operational burden. Although it is still in very early development, the MySQL Operator already provides a great deal of useful functionality out of the box. Visit https://github.com/oracle/mysql-operator to learn more. We welcome contributions, ideas and feedback from the community. If you want to deploy MySQL inside Kubernetes, we recommend using the MySQL Operator to do the heavy lifting for you.

Links

oracle/mysql-operator: Create, operate and scale self-healing MySQL clusters in Kubernetes (github.com)
MySQL 5.7 Reference Manual, Chapter 20: InnoDB Cluster (dev.mysql.com)
MySQL 5.7 Reference Manual, Chapter 17: Group Replication (dev.mysql.com)
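As a quick addendum to the examples above: assuming kubectl is pointed at the cluster where the operator is installed, the custom resources can be inspected like any other Kubernetes object. A short sketch; the plural resource names and the label selector are my assumptions and may differ in your operator version:

kubectl get mysqlclusters                   # list MySQLCluster resources managed by the operator
kubectl get pods -l mysql-cluster=mysql-cluster-with-3-replicas   # pods backing the 3-replica cluster (label name is an assumption)
kubectl get mysqlbackupschedules            # list scheduled backup definitions
kubectl get mysqlbackups                    # list backups, as shown earlier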


Announcing Terraform support for Oracle Cloud Platform Services

Oracle and HashiCorp are pleased to announce the immediate availability of the Oracle Cloud Platform Terraform provider.

Oracle Cloud Platform Terraform Provider

The initial release of the Oracle Cloud Platform Terraform provider supports the creation and lifecycle management of Oracle Database Cloud Service and Oracle Java Cloud Service instances. With the availability of the Oracle Cloud Platform services support, Terraform's "infrastructure-as-code" configurations can now be defined for deploying standalone Oracle PaaS services, or combined with the Oracle Cloud Infrastructure and Infrastructure Classic services supported by the opc and oci providers for complete infrastructure and application deployment.

Supported PaaS Services

The following Oracle Cloud Platform services are supported by the initial Oracle Cloud Platform (PaaS) Terraform provider; additional services and resources will be added over time.

Oracle Database Cloud Service Instances
Oracle Database Cloud Service Access Rules
Oracle Java Cloud Service Instances
Oracle Java Cloud Service Access Rules

Using the Oracle Cloud Platform Terraform provider

To get started using Terraform to provision Oracle Cloud Platform services, let's look at an example of deploying a single Java Cloud Service instance along with its dependent Database Cloud Service instance.

First we declare the provider definition, providing the account credentials and the appropriate service REST API endpoints. The Identity Domain name, Identity Service ID and REST endpoint URL can be found in the Service details section on the My Services Dashboard.

For IDCS Cloud Accounts, use the Identity Service ID for the identity_domain:

provider "oraclepaas" {
  user              = "example@user.com"
  password          = "Pa55_Word"
  identity_domain   = "idcs-5bb188b5460045f3943c57b783db7ffa"
  database_endpoint = "https://dbaas.oraclecloud.com"
  java_endpoint     = "https://jaas.oraclecloud.com"
}

For Traditional Accounts, use the account Identity Domain Name for the identity_domain:

provider "oraclepaas" {
  user              = "example@user.com"
  password          = "Pa55_Word"
  identity_domain   = "mydomain"
  database_endpoint = "https://dbaas.oraclecloud.com"
  java_endpoint     = "https://jaas.oraclecloud.com"
}

Database Service Instance configuration

The oraclepaas_database_service_instance resource is used to define the Oracle Database Cloud Service instance. A single Terraform database service resource definition can represent configurations ranging from a single-instance Oracle Database Standard Edition deployment to a complete multi-node Oracle Database Enterprise Edition deployment with RAC and Data Guard for high availability and disaster recovery. Instances can also be created from backups or snapshots of another Database Service instance. For this example we'll create a new single-instance database for use with the Java Cloud Service configured further down.

resource "oraclepaas_database_service_instance" "database" {
  name              = "my-terraformed-database"
  description       = "Created by Terraform"
  edition           = "EE"
  version           = "12.2.0.1"
  subscription_type = "HOURLY"
  shape             = "oc1m"
  ssh_public_key    = "${file("~/.ssh/id_rsa.pub")}"

  database_configuration {
    admin_password     = "Pa55_Word"
    backup_destination = "BOTH"
    sid                = "ORCL"
    usable_storage     = 25
  }

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-database-backup"
    cloud_storage_username  = "${var.user}"
    cloud_storage_password  = "${var.password}"
    create_if_missing       = true
  }
}

Let's take a closer look at the configuration settings.
Here we are declaring that this is an Oracle Database 12c Release 2 (12.2.0.1) Enterprise Edition instance with an oc1m (1 OCPU/15 GB RAM) shape and hourly usage metering.

  edition           = "EE"
  version           = "12.2.0.1"
  subscription_type = "HOURLY"
  shape             = "oc1m"

The ssh_public_key is the public key to be provisioned to the instance to allow SSH access.

The database_configuration block sets the initial configuration of the actual database instance to be created in the Database Cloud Service, including the database SID, the initial password, and the initial usable block volume storage for the database.

  database_configuration {
    admin_password     = "Pa55_Word"
    backup_destination = "BOTH"
    sid                = "ORCL"
    usable_storage     = 25
  }

The backup_destination configures whether backups go to the Object Storage Service (OSS), to both object storage and local storage (BOTH), or are disabled (NONE). A backup destination of OSS or BOTH is required for database instances that are used in combination with Java Cloud Service instances. The Object Storage Service location and access credentials are configured in the backups block:

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-database-backup"
    cloud_storage_username  = "${var.user}"
    cloud_storage_password  = "${var.password}"
    create_if_missing       = true
  }

Java Cloud Service Instance

The oraclepaas_java_service_instance resource is used to define the Oracle Java Cloud Service instance. A single Terraform resource definition can represent configurations ranging from a single-instance Oracle WebLogic Server deployment to a complete multi-node Oracle WebLogic cluster with an Oracle Coherence data grid cluster and an Oracle Traffic Director load balancer. Instances can also be created from snapshots of another Java Cloud Service instance. For this example we'll create a new two-node WebLogic cluster with a load balancer, associated with the Database Cloud Service instance defined above.

resource "oraclepaas_java_service_instance" "jcs" {
  name                 = "my-terraformed-java-service"
  description          = "Created by Terraform"
  edition              = "EE"
  service_version      = "12cRelease212"
  metering_frequency   = "HOURLY"
  enable_admin_console = true
  ssh_public_key       = "${file("~/.ssh/id_rsa.pub")}"

  weblogic_server {
    shape = "oc1m"

    managed_servers {
      server_count = 2
    }

    admin {
      username = "weblogic"
      password = "Weblogic_1"
    }

    database {
      name     = "${oraclepaas_database_service_instance.database.name}"
      username = "sys"
      password = "${oraclepaas_database_service_instance.database.database_configuration.0.admin_password}"
    }
  }

  oracle_traffic_director {
    shape = "oc1m"

    listener {
      port         = 8080
      secured_port = 8081
    }
  }

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-java-service-backup"
    auto_generate           = true
  }
}

Let's break this down. Here we are declaring a 12c Release 2 (12.2.1.2) Enterprise Edition Java Cloud Service instance with hourly usage metering.

  edition            = "EE"
  service_version    = "12cRelease212"
  metering_frequency = "HOURLY"

Again, the ssh_public_key is the public key to be provisioned to the instance to allow SSH access.

The weblogic_server block provides the configuration details for the WebLogic Server instances deployed for this Java Cloud Service instance. The weblogic_server definition sets the instance shape, in this case oc1m (1 OCPU/15 GB RAM). The admin block sets the WebLogic Server admin user and initial password.
  admin {
    username = "weblogic"
    password = "Weblogic_1"
  }

The database block connects the WebLogic server to the Database Service instance already defined above. In this example we assume the database and Java service instances are declared in the same configuration, so we can fetch the database configuration values directly:

  database {
    name     = "${oraclepaas_database_service_instance.database.name}"
    username = "sys"
    password = "${oraclepaas_database_service_instance.database.database_configuration.0.admin_password}"
  }

The oracle_traffic_director block configures the load balancer that directs traffic to the managed WebLogic server instances:

  oracle_traffic_director {
    shape = "oc1m"

    listener {
      port         = 8080
      secured_port = 8081
    }
  }

By default the load balancer is configured with the same admin credentials defined in the weblogic_server block; different credentials can be configured if required. If the insecure port is not set, then only the secured_port is enabled.

Finally, similar to the Database Cloud Service instance configuration, the backups block sets the Object Storage Service location for the Java Cloud Service instance backups:

  backups {
    cloud_storage_container = "Storage-${var.domain}/my-terraformed-java-service-backup"
    auto_generate           = true
  }

Provisioning

With the provider and resource definitions configured in a Terraform project (e.g. all in a main.tf file), deploying the above configuration is as simple as:

$ terraform init
$ terraform apply

The terraform init command automatically fetches the latest version of the oraclepaas provider, and terraform apply starts the provisioning. The complete provisioning of the Database and Java Cloud instances can be a long-running operation. To remove the provisioned instances, run terraform destroy. (A variant of these commands that passes the referenced variables on the command line is sketched after the related links below.)

Related Content

Terraform Provider for Oracle Cloud Platform
Terraform Provider for Oracle Cloud Infrastructure
Terraform Provider for Oracle Cloud Infrastructure Classic
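As referenced above, here is a variant of the provisioning commands that supplies the var.domain, var.user and var.password values referenced in the backups blocks on the command line. This is only one of several ways to set them (a terraform.tfvars file works equally well), and the values below simply reuse the example credentials from this post:

terraform init                                                                                    # fetches the oraclepaas provider
terraform plan    -var "domain=mydomain" -var "user=example@user.com" -var "password=Pa55_Word"   # preview the resources to be created
terraform apply   -var "domain=mydomain" -var "user=example@user.com" -var "password=Pa55_Word"   # provision the Database and Java Cloud Service instances
terraform destroy -var "domain=mydomain" -var "user=example@user.com" -var "password=Pa55_Word"   # tear the instances down again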


Cloud

Part II: Data processing pipelines with Spring Cloud Data Flow on Oracle Cloud

This is the 2nd (and final) part of this blog series about Spring Cloud Data Flow on Oracle Cloud. In Part 1 we covered some of the basics and the infrastructure setup (Kafka, MySQL), and at the end of it we had a fully functional Spring Cloud Data Flow server in the cloud — now it's time to put it to use!

Part I: Spring Cloud Dataflow on Oracle Application Container Cloud (medium.com)

In this part, you will:
get a technical overview of the solution and look at some internal details — the whys and hows
build and deploy a data flow pipeline on Oracle Application Container Cloud
and finally, test it out…

Behind the scenes

Before we see things in action, here is an overview so that you understand what you will be doing and get a (rough) idea of why it works the way it does. At a high level, this is how things work in Spring Cloud Data Flow (you can always dive into the documentation for details):

You start by registering applications — these contain the core business logic and deal with how you process the data, e.g. a service which simply transforms the data it receives (from the messaging layer), or an app which pumps user events/activities into a message queue.
You then create a stream definition, where you define the pipeline of your data flow (using the apps which you previously registered), and then deploy it.
(Here is the best part!) Once you deploy the stream definition, the individual apps in the pipeline get automatically deployed to Oracle Application Container Cloud, thanks to our custom Spring Cloud Deployer SPI implementation (this was briefly mentioned in Part 1).

At a high level, the SPI implementation needs to adhere to the contract/interface outlined by org.springframework.cloud.deployer.spi.app.AppDeployer and provide implementations of the deploy, undeploy, status and environmentInfo methods. The implementation thus handles the life cycle of the pipeline/stream-processing applications: creation and deletion, as well as providing status information.

Show time…!

App registration

We will start by registering our stream/data processing applications. As mentioned in Part 1, Spring Cloud Data Flow uses Maven as one of its sources for the applications that need to be deployed as part of the pipelines you build — more details here and here. You can use any Maven repo — we are using the Spring Maven repo since we will be importing their pre-built starter apps. Here is the manifest.json where this is configured:

{
  "runtime": {
    "majorVersion": "8"
  },
  "command": "java -jar spring-cloud-dataflow-server-accs-1.0.0-SNAPSHOT.jar --server.port=$PORT --maven.remote-repositories.repo1.url=http://repo.spring.io/libs-snapshot --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=$OEHPCS_EXTERNAL_CONNECT_STRING --spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=<event_hub_zookeeper_IP>:<port>",
  "notes": "ACCS Spring Cloud Data Flow Server"
}

manifest.json for the Data Flow server on ACCS

Access the Spring Cloud Data Flow dashboard by navigating to the application URL, e.g.
https://SpringCloudDataflowServer-mydomain.apaas.us2.oraclecloud.com/dashboard

Spring Cloud Data Flow dashboard

For the purpose of this blog, we will import two pre-built starter apps:

http
Type — source
Role — pushes data to the message broker
Maven URL — maven://org.springframework.cloud.stream.app:http-source-kafka:1.0.0.BUILD-SNAPSHOT

log
Type — sink
Role — consumes data/events from the message broker
Maven URL — maven://org.springframework.cloud.stream.app:log-sink-kafka:1.0.0.BUILD-SNAPSHOT

There is another category of apps known as processors — not covered here for the sake of simplicity. There are a bunch of these starter apps which make it super easy to get going with Spring Cloud Data Flow!

Importing applications

After app registration, we can go ahead and create our data pipeline. But before we do that, let's quickly glance at what it will do…

Overview of the sample pipeline/data flow

Here is the flow which the pipeline will encapsulate — you will see this in action once you reach the Test Drive section, so keep going!

http app -> Kafka topic
Kafka topic -> log app -> stdout

The http app provides a REST endpoint for us to POST messages to, and these are pushed to a Kafka topic. The log app simply consumes these messages from the Kafka topic and spits them out to stdout — simple!

Create & deploy a pipeline

Let's start by creating the stream — you can pick from the list of source and sink apps which we just imported (http and log). Use the stream definition below — just replace KafkaDemo with the name of the Event Hub Cloud service instance you set up in the infrastructure setup section of Part 1:

http --port=$PORT --app.accs.deployment.services='[{"type": "OEHPCS", "name": "KafkaDemo"}]' | log --app.accs.deployment.services='[{"type": "OEHPCS", "name": "KafkaDemo"}]'

Stream definition

You will see a graphical representation of the pipeline (which is quite simple in our case).

Create (and deploy) the pipeline. The deployment process gets initiated and is reflected on the console.

Deployment in progress…

Go back to the Applications menu in Oracle Application Container Cloud to confirm that the individual app deployments have also been triggered. Open the application details and navigate to the Deployments section to confirm that both apps have a service binding to the Event Hub instance specified in the stream definition.

Service Binding to Event Hub Cloud

After the applications are deployed to Oracle Application Container Cloud, the state of the stream definition changes to deployed and the apps also show up in the Runtime section.

Deployment complete

Spring Cloud Data Flow Runtime menu

Connecting the dots…

Before we jump ahead and test the data pipeline we just created, here are a couple of pictorial representations to summarize how everything connects logically. Individual pipeline components in Spring Cloud Data Flow map to their corresponding applications in Oracle Application Container Cloud — deployed via the custom SPI implementation (discussed above as well as in Part 1).

Spring Cloud Data Flow pipeline to application mapping
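As an aside, the same app registration and stream creation steps can also be scripted with the Spring Cloud Data Flow shell instead of the dashboard. A minimal sketch, reusing the Maven coordinates and stream definition from above; quoting of the nested JSON may need adjusting for your shell version, and KafkaDemo is the Event Hub instance from Part 1:

dataflow:> app register --name http --type source --uri maven://org.springframework.cloud.stream.app:http-source-kafka:1.0.0.BUILD-SNAPSHOT
dataflow:> app register --name log --type sink --uri maven://org.springframework.cloud.stream.app:log-sink-kafka:1.0.0.BUILD-SNAPSHOT
dataflow:> stream create --name DemoStream --definition "http --port=$PORT --app.accs.deployment.services='[{\"type\": \"OEHPCS\", \"name\": \"KafkaDemo\"}]' | log --app.accs.deployment.services='[{\"type\": \"OEHPCS\", \"name\": \"KafkaDemo\"}]'" --deploy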
Here is where the logical connection to Kafka is depicted:

the http app pushes to a Kafka topic
the log app consumes from the Kafka topic and emits the messages to stdout
topics are auto-created in Kafka by default (you can change this), and the naming convention is the stream definition name (DemoStream) and the pipeline app name (http) separated by a dot, i.e. DemoStream.http

Pipeline apps interacting with Kafka

Test drive

Time to test the data pipeline…

Send messages via the http (source) app. POST a few messages to the REST endpoint exposed by the http app (check its URL from the Oracle Application Container Cloud console) — these messages will be sent to a Kafka topic and consumed by the log app:

curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test1
curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test12
curl -X POST https://demostreamhttp-ocloud200.uscom-central-1.oraclecloud.com/ -H 'content-type: text/plain' -d test123

Check the log (sink) service. Download the logs for the log app to confirm: navigate to the application details and check the Logs tab in the Administration section — documentation here.

Check logs

You should see the same messages which you sent to the HTTP endpoint.

Messages from Kafka consumed and sent to stdout

There is another way… You can also validate this directly using Kafka (on Event Hub Cloud) itself — all you need is a custom Access Rule to open port 6667 on the Kafka server VM in Oracle Event Hub Cloud — details here. You can then inspect the Kafka topic directly using the console consumer while POSTing messages to the HTTP endpoint (as mentioned above):

kafka-console-consumer.bat --bootstrap-server <event_hub_kafka_IP>:6667 --topic DemoStream.http

Un-deploy

If you trigger an un-deployment or destroy of the stream definition, it will trigger deletion of the corresponding apps from Oracle Application Container Cloud.

Un-deploy/destroy the definition

Quick recap

That's all for this blog, and it marks the end of this 2-part blog series!

We covered the basic concepts and deployed a Spring Cloud Data Flow server on Oracle Application Container Cloud along with its dependent components: Oracle Event Hub Cloud as the Kafka-based messaging layer, and Oracle MySQL Cloud as the persistent RDBMS store.
We then explored some behind-the-scenes details and made use of our Spring Cloud Data Flow setup to build and deploy a simple data pipeline, along with its basic testing/validation.

Don't forget to check out the tutorials for Oracle Application Container Cloud — there is something for every runtime!

Oracle Application Container Cloud Service — Create Your First Applications (docs.oracle.com)
Other blogs on Application Container Cloud (medium.com)

Cheers!

The views expressed in this post are my own and do not necessarily reflect the views of Oracle.


Podcasts

Podcast: Combating Complexity: Fad, Fashion, and Failure in Software Development

There is little in our lives that does not rely on software. That has been the reality for quite some time, and it will be even more true as self-driving cars and similar technologies become an even greater part of our lives. But as our reliance on software grows, so does the potential for disaster as software becomes increasingly complex. In September 2017 The Atlantic featured “The Coming Software Apocalypse,” an article by James Somers that offers a fascinating and sobering look at how rampant code complexity has caused massive failures in critical software systems, like the 2014 incident that left the entire state of Washington without 9-1-1 emergency call-in services until the problem was traced to software running on a server in Colorado. The article suggests that the core of the complexity problem is that code is too hard to think about. When and how did this happen? “You have to talk about the problem domain,” says Chris Newcombe, “because there are areas where code clearly works fine.” Newcombe, one of the people interviewed for the Atlantic article, is an expert on combating complexity, and since 2014 has been an architect on Oracle’s Bare Metal IaaS team. “I used to work in video games,” Newcombe says. “There is lots of complex code in video games and most of them work fine. But if you're talking about control systems, with significant concurrency or affecting real-world equipment, like cars and planes and rockets or large-scale distribution systems, then we still have a way to go to solve the problem of true reliability. I think it's problem-domain specific. I don't think code is necessarily the problem. The problem is complexity, particularly concurrency and partial failure modes and side effects in the real world.” Java Champion Adam Bien believes that in constrained environments, such as the software found in automobiles, “it's more or less a state machine which could or should be coded differently. So it really depends on the focus or the context. I would say that in enterprise software, code works well. The problem I see is more if you get newer ideas -- how to reshape the existing code quickly. But also coding is not just about code. Whether you write code or draw diagrams, the complexity will remain the same.” Java Champion and microservices expert Chris Richardson agrees that “if you work hard enough, you can deliver software that actually works.” But he questions what is actually meant when software is described as “working well.” “How successful are large software developments?” Richardson asks. “Do they meet requirements on time? Obviously that's a complex issue around project managemen