

Recent Posts

Cloud

Automated Generation For OCI IAM Policies

As a cloud developer evangelist here at Oracle, I often find myself playing around with one or more of our services or offerings on the Oracle Cloud. This of course means I end up working quite a bit in the Identity and Access Management (IAM) section of the OCI Console. It's a pretty straightforward concept, and likely familiar if you've worked with any other cloud provider. I won't give a full overview of IAM here, as it's been covered plenty already and the documentation is concise and easy to understand. But one task that always ends up taking me longer than I'd like is IAM policy generation. The policy syntax in OCI is as follows:

    Allow <subject> to <verb> <resource-type> in <location> where <conditions>

That seems pretty easy to follow - and it is. The issue I often have, though, is actually remembering the values to plug in for the variable sections of the policy. Trying to remember the exact group name, the available verbs and resource types, and the exact compartment name that I want the policy to apply to is troublesome, and it usually ends with me opening two or three tabs to look up exact spellings and case, then flipping over to the docs to get the verb and resource type just right. So I decided to do something to make my life a little easier when it comes to policy generation, and figured I'd share it with others in case I'm not the only one who struggles with this.

So, born out of my frustration and laziness, I present a simple project to help you generate IAM policies for OCI. The tool is intended to be run from the command line and prompts you to make selections for each variable. It gives you choices of available options based on actual values from your OCI account. For example, if you choose to create a policy targeting a specific group, the tool gives you a list of your groups to choose from. Same with verbs and resource types - the tool has a list of them built in and lets you choose which ones you are targeting instead of referring to the IAM policy documentation each time. Here's a video demo of the tool in action:

The code itself isn't a masterpiece - there are hardcoded values for verbs and resource types because those aren't exposed via the OCI CLI or SDK in any way. But it works, and it makes policy generation a bit less painful. The code behind the tool is located on GitHub, so feel free to submit a pull request to keep the tool up to date or enhance it in any way. It's written in Groovy and can be run as a Groovy script, or via java -jar. If you'd rather just get your hands on the binary and try it out, grab the latest release and give it a shot.

The tool uses the OCI CLI behind the scenes to query the OCI API as necessary. You'll need to make sure the OCI CLI is installed and configured on your machine before you generate a policy. I decided to use the CLI as opposed to the SDK in order to minimize external dependencies and keep the project as light as possible while still providing value. Besides, the OCI CLI is pretty awesome, and if you work with the Oracle Cloud you should definitely have it installed and be familiar with it.

Please check out the tool and, as always, feel free to comment below if you have any questions or feedback.
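To make that syntax concrete, here are a couple of illustrative policy statements of the kind the tool generates (the group and compartment names are hypothetical):

    Allow group Developers to manage object-family in compartment Dev
    Allow group Auditors to inspect all-resources in tenancy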


New Features in Oracle Visual Builder for the New Year

We are happy to announce the December 2018 release of Oracle Visual Builder (known as 18.4.5). This version adds several key new features and implements many enhancements to the overall development experience in Oracle's high-productivity JavaScript development platform. Here are some of the new features you can now leverage:

Integration Cloud Service Catalog
If you are using Oracle Integration Cloud to connect and access data from various sources, Visual Builder now makes it even simpler to leverage those integrations. The new integrations service catalog lists integrations that are defined in your Oracle Integration Cloud and lets you easily add them as sources of data and operations for your VB application. This is a nice addition to the existing Oracle SaaS service catalog already available in Oracle VB.

iPad/Tablet Support for Mobile Apps
We extended our mobile packaging capabilities to support specific packaging for iPads in addition to iPhones. In addition, the UI emulator in VB now supports an iPad/tablet-size preview as another option in the screen sizes menu.

Nested Flows
To help you further encapsulate flows, Visual Builder now supports the concept of nested flows. Nested flows allow you to create sub-flows that are contained inside another flow and can be used by various pages in that "master" flow. These sub-flows are then embedded into a flow-container region on a page. At runtime you can switch the sub-flow that is shown in such a region, giving you a more dynamic interface. This encapsulation also helps when multiple developers need to work on various sections of an application - eliminating potential conflicts.

Visual Builder Add-in for Excel
Sometimes neither web nor mobile is the right UI for your customers; maybe they want to work with your data directly from spreadsheets - well, now they can. With the Visual Builder Add-in for Excel plug-in, you can directly access business objects you created in Visual Builder from an Excel spreadsheet and query and manipulate the data. The plug-in gives you a complete development environment embedded in Excel to create interactions with your business objects.

JET 6 Support
Visual Builder now supports the latest Oracle JET 6.0 set of components and capabilities. This applies to both design time and runtime. Note that existing applications will continue to use their current JET version unless you open them with the new version to make modifications - when you do open them, we'll automatically upgrade them to use JET 6.

Vanity URL
Visual Builder lets you define a specific URL that will be used for your published web applications. This means that if you own the URL www.yourname.com, for example, you can specify that your apps will show up under this URL. Check out the application settings for more information on this capability.

But Wait, There's More...
There are many, many other enhancements in every area of Oracle Visual Builder - you can read about them in our what's new book or, even better, just try out Visual Builder and experience it on your own!


Cloud

Controlling Your Cloud - A Look At The Oracle Cloud Infrastructure Java SDK

A few weeks ago our cloud evangelism team got the opportunity to spend some time on site with some amazing developers from one of Oracle's clients in Santa Clara, CA for a 3-day cloud hackfest. During the event, one of the developers mentioned that a challenge his team faced was handling uploads of potentially extremely large files. I've faced this problem before as a developer and it's certainly challenging. The web just wasn't really built for large file transfers (though things have gotten much better in the past few years, as we'll discuss later on). We didn't end up with an opportunity to fully address the issue during the hackfest, but I promised the developer that I would follow up with a solution after digging deeper into the Oracle Cloud Infrastructure APIs once I got back home. So yesterday I got down to digging into the process and engineered a pretty solid demo for that developer on how to achieve large file uploads to OCI Object Storage. But before I show that solution, I wanted to give a basic introduction to working with your Oracle Cloud via the available SDK so that things are easier to follow once we get into some more advanced interactions.

Oracle offers several other SDKs (Python, Ruby and Go), but since I typically write my code in Groovy I went with the Java SDK. Oracle provides a full REST API for working with your cloud, but the SDK provides a nice native solution and abstracts away some of the painful bits of signing your requests and making the HTTP calls, wrapping everything in a nice package that can be bundled within your application.

The Java SDK supports the following OCI services:

Audit
Container Engine for Kubernetes
Core Services (Networking, Compute, Block Volume)
Database
DNS
Email Delivery
File Storage
IAM
Load Balancing
Object Storage
Search
Key Management

Let's take a look at the Java SDK in action, specifically how it can be used to interact with the Object Storage service. The SDK is open source and available on GitHub. I created a very simple web app for this demo. Unfortunately, the SDK is not yet available via Maven (see here), so step one was to download the SDK and include it as a dependency in my application. I use Gradle, so I dropped the JARs into a "libs" directory in the root of my app and declared a dependencies block to make sure that Gradle picked up the local JARs (the key being the "implementation" dependency on the local file tree).

The next step is to create some system properties that we'll need for authentication and some of our service calls. To do this, you'll need to set up some config files locally and generate some key pairs, which can be mildly annoying at first, but once you're set up you're good to go in the future, and you get the added bonus of being set up for the OCI CLI if you want to use it later on. Once I had the config file and keys generated, I put my props into a file in the app root called 'gradle.properties'. Using this properties file and the right key naming convention, Gradle makes the variables available within your build script as system properties. Note that having the variables as system properties in your build script does not make them available within your application; to do that, you can simply pass them in via your 'run' task.

Next, I created a class to manage the provider and service clients. This class only has a single client right now, but adding additional clients for other services in the future would be trivial.
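As a rough sketch of the Gradle side of that setup (assuming the standard application plugin and Gradle's "systemProp." prefix convention in gradle.properties; the property names here are hypothetical, not the post's exact keys):

    // build.gradle (Groovy DSL) - a minimal sketch
    dependencies {
        // pick up the OCI SDK JARs dropped into the local "libs" directory
        implementation fileTree(dir: 'libs', include: ['*.jar'])
    }

    run {
        // forward selected system properties from the build JVM to the app;
        // in gradle.properties, a line like "systemProp.oci.profile=DEFAULT"
        // becomes a system property of the build JVM
        ['oci.configFile', 'oci.profile'].each { name ->
            systemProperty name, System.getProperty(name)
        }
    }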
I then created an 'ObjectService' for working with the Object Storage API. The constructor accepts an instance of the OciClientManager that we looked at above and sets some class variables for things that are common to many of the SDK methods (namespace, bucket name, compartment ID, etc.).

At this point, we're ready to interact with the SDK. As a developer, it definitely feels like an intuitive API and follows a standard request/response model that other cloud providers use in their APIs as well. I found myself often simply guessing what the next method or property might be called, and often being right (or close enough for IntelliSense to guide me to the right place). That's pretty much my benchmark for a great API: if it's intuitive and doesn't get in my way with bloated authentication schemes and such, then I'm going to love working with it. Don't get me wrong, strong authentication and security are assuredly important, but the purpose of an SDK is to hide the complexity and expose a method to use the API in a straightforward manner.

All that said, let's look at using the Object Storage client. We'll go rapid fire here and show how to use the client to do the following actions (with a sample result shown after each code block):

List Buckets
Get A Bucket
List Objects In A Bucket
Get An Object

The 'Get Object' example also returns an InputStream containing the object that can be written to a file. As you can see, the Object Storage API is predictable and consistent. In another post, we'll finally tackle the more complex issue of handling large file uploads via the SDK.
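To give a feel for that request/response model, here's a minimal sketch of what a couple of these calls look like with the SDK's ObjectStorageClient. Treat this as an illustration rather than the post's exact code: the config profile, compartment OCID, bucket, and object names are all assumptions.

    import java.io.InputStream;
    import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
    import com.oracle.bmc.objectstorage.ObjectStorageClient;
    import com.oracle.bmc.objectstorage.requests.GetNamespaceRequest;
    import com.oracle.bmc.objectstorage.requests.GetObjectRequest;
    import com.oracle.bmc.objectstorage.requests.ListBucketsRequest;
    import com.oracle.bmc.objectstorage.responses.GetObjectResponse;
    import com.oracle.bmc.objectstorage.responses.ListBucketsResponse;

    public class ObjectStorageDemo {
        public static void main(String[] args) throws Exception {
            // auth provider backed by the same ~/.oci/config used by the OCI CLI
            ConfigFileAuthenticationDetailsProvider provider =
                    new ConfigFileAuthenticationDetailsProvider("DEFAULT");
            ObjectStorageClient client = new ObjectStorageClient(provider);

            // every Object Storage call needs your tenancy's namespace
            String namespace = client.getNamespace(
                    GetNamespaceRequest.builder().build()).getValue();

            // List Buckets
            ListBucketsResponse buckets = client.listBuckets(
                    ListBucketsRequest.builder()
                            .namespaceName(namespace)
                            .compartmentId("ocid1.compartment.oc1..example") // hypothetical OCID
                            .build());
            buckets.getItems().forEach(b -> System.out.println(b.getName()));

            // Get Object - the response exposes an InputStream with the content
            GetObjectResponse object = client.getObject(
                    GetObjectRequest.builder()
                            .namespaceName(namespace)
                            .bucketName("my-bucket") // hypothetical bucket
                            .objectName("foo.jpg")   // hypothetical object
                            .build());
            try (InputStream is = object.getInputStream()) {
                System.out.println("Content length: " + object.getContentLength());
            }
        }
    }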


Cloud

Controlling Your Cloud - Uploading Large Files To Oracle Object Storage

In my last post, we took an introductory look at working with the Oracle Cloud Infrastructure (OCI) API with the OCI Java SDK. I mentioned that my initial motivation for digging into the SDK was to handle large file uploads to OCI Object Storage, and in this post, we'll do just that.

As I mentioned, HTTP (Hypertext Transfer Protocol) wasn't originally meant to handle large file transfers. Rather, file transfers were typically (and often, still are) handled via FTP (File Transfer Protocol). But web developers deal with globally distributed clients, and FTP requires server setup, custom desktop clients, different firewall rules, and separate authentication, which ultimately means large files end up getting transferred over HTTP/S. BitTorrent can be a better solution if the circumstances allow, but distributing files isn't usually the case that web developers are dealing with. Thankfully, many advances in HTTP over the past several years have made large file transfer much easier to deal with, the main one being multipart ("chunked") file upload. You can read more about Oracle's support for multipart uploading, but to explain it in the simplest possible way: a file is broken up into several pieces ("chunks"), uploaded (at the same time, if necessary), and reassembled into the original file once all of the pieces have been uploaded.

The process to utilize the Java SDK for multipart uploading involves, at a minimum, three steps (here are the JavaDocs for the SDK in case you're playing along at home and want more info):

1. Initiate the multipart upload
2. Upload the individual file parts
3. Commit the upload

The SDK provides methods for all of the steps above, as well as a few additional operations for listing existing multipart uploads, etc. Individual parts can be up to 50 GiB. Using the ObjectClient (see the previous post), the three steps break down as follows.

1. Call ObjectClient.createMultipartUpload(), passing an instance of a CreateMultipartUploadRequest (which contains an instance of CreateMultipartUploadDetails).

To break down step 1, you're just telling the API: "Hey, I want to upload a file. The object name is 'foo.jpg' and its content type is 'image/jpeg'. Can you give me an identifier so I can associate different pieces of that file later on?" And the API will return that to you in the form of a CreateMultipartUploadResponse. Here's the code: So to create the upload, I make a call to /oci/upload-create and pass the objectName and contentType params. I'm invoking it via Postman, but this could just as easily be a fetch() call in the browser. So now we've got an upload identifier for further work (see "uploadId", #2 in the image above). On to step 2 of the process.

2. Call ObjectClient.uploadPart(), passing an instance of UploadPartRequest (including the uploadId, the objectName, a sequential part number, and the file chunk), which receives an UploadPartResponse. The response will contain an "ETag" which we'll need to save, along with the part number, to complete the upload later on.

Here's what the code looks like for step 2: And here's an invocation of step 2 in Postman, which was completed once for each part of the file that I chose to upload. I'll save the ETag values along with each part number for use in the completion step.

Finally, step 3 is to complete the upload.

3. Call ObjectClient.commitMultipartUpload(), passing an instance of CommitMultipartUploadRequest (which contains the object name, the uploadId, and an instance of CommitMultipartUploadDetails - which itself contains an array of CommitMultipartUploadPartDetails).

Sounds a bit complicated, but it's really not. The code tells the story here: When invoked, we get a simple result confirming the completion of the multipart upload commit! If we head over to our bucket in Object Storage, we can see the file details for the uploaded and reassembled file. And if we visit the object via a presigned URL (or directly, if the bucket is public), we can see the image. In this case, a picture of my dog Moses:

As I've hopefully illustrated, the Oracle SDK for multipart upload is pretty straightforward to use once it's broken down into the steps required. There are a number of frontend libraries to assist you with multipart upload once you have the proper backend service in place (in my case, the file was simply broken up using the "split" command on my MacBook).
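Pulling the three steps together, here's a condensed sketch of what the SDK calls might look like. A single part is shown for brevity; the namespace, bucket, and client variables follow the previous post's context, and this is an approximation rather than the post's exact service code.

    import java.io.InputStream;
    import java.util.Collections;
    import com.oracle.bmc.objectstorage.model.CommitMultipartUploadDetails;
    import com.oracle.bmc.objectstorage.model.CommitMultipartUploadPartDetails;
    import com.oracle.bmc.objectstorage.model.CreateMultipartUploadDetails;
    import com.oracle.bmc.objectstorage.requests.*;
    import com.oracle.bmc.objectstorage.responses.*;

    // 1. Initiate the upload and grab the upload ID
    CreateMultipartUploadResponse created = objectClient.createMultipartUpload(
            CreateMultipartUploadRequest.builder()
                    .namespaceName(namespace)
                    .bucketName(bucketName)
                    .createMultipartUploadDetails(CreateMultipartUploadDetails.builder()
                            .object("foo.jpg")
                            .contentType("image/jpeg")
                            .build())
                    .build());
    String uploadId = created.getMultipartUpload().getUploadId();

    // 2. Upload a part (repeat for each chunk, incrementing the part number)
    InputStream chunk = /* stream for this piece of the file */ null;
    UploadPartResponse part = objectClient.uploadPart(
            UploadPartRequest.builder()
                    .namespaceName(namespace)
                    .bucketName(bucketName)
                    .objectName("foo.jpg")
                    .uploadId(uploadId)
                    .uploadPartNum(1)
                    .uploadPartBody(chunk)
                    .build());
    String eTag = part.getETag(); // save this alongside the part number

    // 3. Commit, pairing every part number with its ETag
    objectClient.commitMultipartUpload(
            CommitMultipartUploadRequest.builder()
                    .namespaceName(namespace)
                    .bucketName(bucketName)
                    .objectName("foo.jpg")
                    .uploadId(uploadId)
                    .commitMultipartUploadDetails(CommitMultipartUploadDetails.builder()
                            .partsToCommit(Collections.singletonList(
                                    CommitMultipartUploadPartDetails.builder()
                                            .partNum(1)
                                            .etag(eTag)
                                            .build()))
                            .build())
                    .build());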


Developers

Announcing Oracle Cloud Infrastructure Resource Manager

We are excited to announce a new service, Oracle Cloud Infrastructure Resource Manager, that makes it easy to manage your infrastructure resources on Oracle Cloud Infrastructure. Resource Manager enables you to use infrastructure as code (IaC) to automate provisioning for infrastructure resources such as compute, networking, storage, and load balancing. Using IaC is a DevOps practice that makes it possible to provision infrastructure quickly, reliably, and at any scale. Changes are made in code, not in the target systems. That code can be maintained in a source control system, so it's easy to collaborate, track changes, and document and reverse deployments when required.

HashiCorp Terraform
To describe infrastructure, Resource Manager uses HashiCorp Terraform, an open source project that has become the dominant standard for describing cloud infrastructure. Oracle is making a strong commitment to Terraform and will enable all its cloud infrastructure services to be managed through Terraform. Earlier this year we released the Terraform Provider, and we have started to submit Terraform modules for Oracle Cloud Infrastructure to the Terraform Module Registry. Now we are taking the next step by providing a managed service.

Managed Service
In addition to the provider and modules, Oracle now provides Resource Manager, a fully managed service to operate Terraform. Resource Manager integrates with Oracle Cloud Infrastructure Identity and Access Management (IAM), so you can define granular permissions for Terraform operations. It further provides state locking, gives users the ability to share state, and lets teams collaborate effectively on their Terraform deployments. Most of all, it makes operating Terraform easier and more reliable.

With Resource Manager, you create a stack before you run Terraform actions. Stacks enable you to segregate your Terraform configuration, where a single stack represents a set of Oracle Cloud Infrastructure resources that you want to create together. To create a stack, you define a compartment and upload the Terraform configuration as a zip file containing all the .tf files that define the resources you want to create. You can optionally include a variables.tf file or define your variables in (key, value) format in the console. Each stack individually maps to a Terraform state file that you can download.

After your stack is created, you can run different Terraform actions - plan, apply, and destroy - on that stack. These Terraform actions are called jobs. You can also update the stack by uploading a new zip file, download its configuration, and delete the stack when required.

Plan: Resource Manager parses your configuration and returns an execution plan that lists the Oracle Cloud Infrastructure resources describing the end state.
Apply: Resource Manager creates your stack based on the results of the plan job. After this action is completed, you can see the resources that have been created successfully in the defined compartments.
Destroy: Terraform attempts to delete all the resources in the stack.

You can define permissions on your stacks and jobs through IAM policies, granting only certain users or groups the ability to perform actions like plan, apply, or destroy.

Availability
Resource Manager will become generally available in early 2019. We are currently providing access to selected customers through our Cloud Native Limited Availability Program. The currently available early version offers access to the Compute, Networking, Block Storage, Object Storage, IAM, and Load Balancing services. To learn more about Resource Manager or to request access to the technology, please register.
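For anyone new to Terraform, the variables.tf mentioned above is plain Terraform syntax; a minimal, entirely hypothetical example might look like this:

    variable "compartment_ocid" {
      description = "OCID of the compartment to create resources in"
    }

    variable "region" {
      default = "us-phoenix-1"
    }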


Community

Building the Oracle Code One 2018 Escape Rooms

By Chris Bensen, Cloud Experience Developer at Oracle

I've built a lot of crazy things in my life, but the "Escape Rooms" for Code One 2018 might just be one of the craziest. And funnest! The initial idea for our escape room came from Toni Epple, who had built a Java-based escape room for a German conference. We thought it was rather good, and escape rooms are trendy and fun, so we decided to dial it up to eleven for 2018 Code One attendees. The concept was to have two escape rooms, one with a Java developer theme and one with the superhero theme of the developer keynote, and that's when Duke's Lab and Superhero Escape were born.

We wanted to build a demo that was different from what is normally at a conference and make the rooms feel like real rooms. I actually built the two rooms with 2x4 construction in my driveway. Each room consisted of two eight-foot-cube sections that could be split apart for easy shipping. And shipping wasn't easy, as we only had 1/4" of clearance! Inside, the walls were faux brick to give the Brooklyn, New York look and feel where many of the Marvel comics take place. The faux brick is a particle board product that can be purchased at your favorite local hardware store and is fire retardant, so it's a turnkey solution.

Many escape rooms contain physical puzzles, and with Code One being a conference about programming languages, it seemed fitting to infuse electronics and software into each puzzle. Each room was powered by a 24-volt, 12-amp power supply, the same power supply used to power Ultimaker 3D printers. Using voltage regulators, this was stepped down to 12 volts and, in some cases, 5 and 3.3 volts, depending on the needs. Throughout the room, conduit was run with custom 3D-printed outlets to power each device, using aviation connectors because they are super secure. The project took just over two months to build; over 100 unique 3D-printed parts were created, and four 3D printers ran nearly 24x7 to produce over 400 parts total. Eight Arduinos and five Raspberry Pis ran the rooms with various electronics for sensors, displays, sounds and movement. The custom software was written using Python, Bash, C/C++ and Java.

At the heart of Duke's Lab and the final puzzle is a wooden crate with two locks. The intention was to look like something out of a Bond film or Indiana Jones. Once you open it you are presented with two devices, as seen in the photo below. I wouldn't want to ruin the surprise, but let's just say most people that open the crate get a little heart thump as the countdown timer starts ticking when the crate is opened!

At the heart of Superhero Escape we have The Mighty Thor's hammer Mjölnir, Captain America's shield, and Iron Man's arc reactor. The idea was to bring these three props to life and integrate them into an escape room of super proportions. And given the number of people that solved the puzzle and exited the room with Cap's shield on one arm and Mjölnir in the other, I would say it was a resounding success!

The goal and final puzzle of Superhero Escape is to wield Mjölnir. Mjölnir was held to the floor of the escape room by a very powerful electromagnet. At the heart of the hammer is a piece of solid 1"-thick steel I had custom machined to my specifications, connected to a pipe. The shell is one solid 3D print that took over 24 hours and an entire 1 kilogram of filament. For those that don't know, that is an entire roll. Exactly an entire roll!

As with any project, I learned a lot.
I leveraged all my knowledge of digital fabrication, traditional fabrication, electronics, programming, woodworking and puzzles, and did things I wasn't sure were possible, especially in the timeframe we had. That's what being an Oracle Groundbreaker is all about. And for all those Groundbreakers out there, keep dreaming and learning, because you never know when you'll be asked to draw on every bit of knowledge you have to build something amazing.


Announcing Oracle Functions

Photo by Tim Easley on Unsplash

[First posted on the Oracle Cloud Infrastructure Blog]

At KubeCon 2018 in Seattle, Oracle announced Oracle Functions, a new cloud service that enables enterprises to build and run serverless applications in the cloud. Oracle Functions is a serverless platform that makes it easy for developers to write and deploy code without having to worry about provisioning or managing compute and network infrastructure. Oracle Functions manages all the underlying infrastructure automatically and scales it elastically to service incoming requests. Developers can focus on writing code that delivers business value.

Pay-per-use
Serverless functions change the economic model of cloud computing, as customers are only charged for the resources used while a function is running. There's no charge for idle time! This is unlike the traditional approach of deploying code to a user-provisioned and managed virtual machine or container that is typically running 24x7 and must be paid for even when it's idle. Pay-per-use makes Oracle Functions an ideal platform for intermittent workloads or workloads with spiky usage patterns.

Open Source
Open source has changed the way businesses build software, and the same is true for Oracle. Rather than building yet another proprietary cloud functions platform, Oracle chose to invest in the Apache 2.0 licensed open source Fn Project and build Oracle Functions on Fn. With this approach, code written for Oracle Functions will run on any Fn server. Functions can be deployed to Oracle Functions or to a customer-managed Fn cluster on-prem or even on another cloud platform. That said, the advantage of Oracle Functions is that it's a serverless offering, which eliminates the need for customers to manually manage an Fn cluster or the underlying compute infrastructure. But thanks to open source Fn, customers will always have the choice to deploy their functions to whatever platform offers the best price and performance. We're confident that platform will be Oracle Functions.

Container Native
Unlike most other functions platforms, Oracle Functions is container native, with functions packaged as Docker container images. This approach supports a highly productive developer experience for new users while allowing power users to fully customize their function runtime environment, including installing any required native libraries. The broad Docker ecosystem and the flexibility it offers let developers focus on solving business problems and not on figuring out how to hack around restrictions frequently encountered on proprietary cloud function platforms. Because functions are deployed as Docker containers, Oracle Functions is seamlessly integrated with the Docker Registry v2 compliant Oracle Cloud Infrastructure Registry (OCIR), which is used to store function container images. Like Oracle Functions, OCIR is also both serverless and pay-per-use: you simply build a function and push the container image to OCIR, which charges just for the resources used.

Secure
Security is the top priority for Oracle Cloud services, and Oracle Functions is no different. All access to functions deployed on Oracle Functions is controlled through Oracle Identity and Access Management (IAM), which allows both function management and function invocation privileges to be assigned to specific users and user groups. And once deployed, functions themselves may only access resources on VCNs in their compartment that they have been explicitly granted access to. Secure access is also the default for function container images stored in OCIR. Oracle Functions works with OCIR private registries to ensure that only authorized users are able to access and deploy function containers. In each of these cases, Oracle Functions takes a "secure by default" approach while providing customers full control over their function assets.

Getting Started
Oracle Functions will be generally available in 2019, but we are currently providing access to selected customers through our Cloud Native Limited Availability Program. To learn more about Oracle Functions or to request access, please let us know by registering with this form. You can also learn more about the underlying open source technology used in Oracle Functions at FnProject.io.
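Because Oracle Functions is built on Fn, the developer workflow follows the standard Fn Project CLI. A hypothetical session might look like this (the app and function names are made up, and the managed-service specifics may differ from plain Fn):

    # scaffold a new Java function
    fn init --runtime java hello-fn
    cd hello-fn

    # create an application and deploy the function into it
    fn create app demo-app
    fn deploy --app demo-app

    # invoke the deployed function
    fn invoke demo-app hello-fn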


Cloud

Announcing Oracle Cloud Native Framework at KubeCon North America 2018

This blog was originally published at https://blogs.oracle.com/cloudnative/

At KubeCon + CloudNativeCon North America 2018, Oracle has announced the Oracle Cloud Native Framework - an inclusive, sustainable, and open cloud native development solution with deployment models for public cloud, on premises, and hybrid cloud. The Oracle Cloud Native Framework is composed of the recently-announced Oracle Linux Cloud Native Environment and a rich set of new Oracle Cloud Infrastructure cloud native services, including Oracle Functions, an industry-first, open serverless solution available as a managed cloud service based on the open source Fn Project.

With this announcement, Oracle is the only major cloud provider to deliver and support a unified cloud native solution across managed cloud services and on-premises software, for public cloud (Oracle Cloud Infrastructure), hybrid cloud, and on-premises users, supporting seamless, bi-directional portability of cloud native applications built anywhere on the framework. Because the framework is based on open, CNCF-certified, conformant standards, it will not lock you in: applications built on the Oracle Cloud Native Framework are portable to any Kubernetes-conformant environment, on any cloud or infrastructure.

Oracle Cloud Native Framework – What Is It?
The Oracle Cloud Native Framework provides a supported solution of Oracle Cloud Infrastructure cloud services and Oracle Linux on-premises software based on open, community-driven CNCF projects. These are built on an open Kubernetes foundation – among the first K8s products released and certified last year. Six new Oracle Cloud Infrastructure cloud native services are being announced as part of this solution, building on the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services.

Cloud Native at a Crossroads – Amazing Progress
We should all pause and consider how far the cloud native ecosystem has come – evidenced by the scale, excitement, and buzz around the sold-out KubeCon conference this week and the success and strong foundation that Kubernetes has delivered! We are living in a golden age for developers – a literal "First Wave" of cloud native deployment and technology – being shaped by three forces coming together and creating massive potential:

Culture: The DevOps culture has fundamentally changed the way we develop and deploy software and how we work together in application development teams. With almost a decade's worth of work and metrics to support the methodologies and cultural shifts, it has resulted in many related off-shoots, alternatives, and derivatives including SRE, DevSecOps, AIOps, GitOps, and NoOps (the list will go on, no doubt).

Code: Open source and the projects that have been battle-tested and spun out of webscale organizations like Netflix, Google, Uber, Facebook, and Twitter have been democratized under the umbrella of organizations like the CNCF (Cloud Native Computing Foundation). This grants the same access and opportunities to citizen developers playing or learning at home as it does to enterprise developers in the largest of orgs.

Cloud: Unprecedented compute, network, and storage are available in today's cloud – and that power continues to grow with a never-ending explosion in scale, from bare metal to GPUs and beyond. This unlocks new applications for developers in areas such as HPC apps, Big Data, AI, blockchain, and more.

Cloud Native at a Crossroads – Critical Challenges Ahead
Despite all the progress, we are facing new challenges to reach beyond these first-wave successes. Many developers and teams are being left behind as the culture changes. Open source offers thousands of new choices and options, which on the surface create more complexity than a closed, proprietary path where everything is pre-decided for the developer. The rush towards a single-source cloud model has left many with cloud lock-in issues, resulting in diminished choices and rising costs – the opposite of what open source and cloud are supposed to provide. The challenges below mirror the positive forces above and are reflected in the August 2018 CNCF survey:

Cultural Change for Developers: On-premises, traditional development teams are being left behind. Cultural change is slow and hard.

Complexity: Too many choices, too hard to do yourself (maintain, administer), too much too soon?

Cloud Lock-in: Proprietary, single-source clouds can lock you in with closed APIs, services, and non-portable solutions.

The Cloud Native Second Wave – Inclusive, Sustainable, Open
What's needed is a different approach:

Inclusive: can include cloud and on-prem, modern and traditional, dev and ops, startups and enterprises

Sustainable: managed services versus DIY; open but curated; supported, enterprise-grade infrastructure

Open: truly open, community-driven, and not based on proprietary tech or self-serving OSS extensions

Introducing the Oracle Cloud Native Framework – What's New?
The Oracle Cloud Native Framework spans public cloud, on-premises, and hybrid cloud deployment models – offering choice and uniquely meeting the broad deployment needs of developers. It includes Oracle Cloud Infrastructure Cloud Native Services and the Oracle Linux Cloud Native Environment. On top of the existing Oracle Container Engine for Kubernetes (OKE), Oracle Cloud Infrastructure Registry, and Oracle Container Pipelines services, a rich set of new Oracle Cloud Infrastructure cloud native services has been announced, with services across provisioning, application definition and development, and observability and analysis.

Application Definition and Development
Oracle Functions: A fully managed, highly scalable, on-demand, functions-as-a-service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the open source Fn Project. Multi-tenant and container native, Oracle Functions lets developers focus on writing code to meet business needs without having to manage or even address the underlying infrastructure. Users only pay for execution, not for idle time.

Streaming: Enables applications such as supply chain, security, and IoT to collect data from many sources and process it in real time. Streaming is a highly available, scalable, multi-tenant platform that makes it easy to collect and manage streaming data.

Provisioning
Resource Manager: A managed Oracle Cloud Infrastructure provisioning service based on industry-standard Terraform. Infrastructure as code is a fundamental DevOps pattern, and Resource Manager is an indispensable tool to automate configuration; it increases productivity by managing infrastructure declaratively.

Observability and Analysis
Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. Monitoring provides predefined metrics and dashboards, and also supports a service API to obtain a top-down view of the health, performance, and capacity of the system. The monitoring service includes alarms to track these metrics and act when they vary or exceed defined thresholds, helping users meet service level objectives and avoid interruptions.

Notification Service: A scalable service that broadcasts messages to distributed components, such as email and PagerDuty. Users can easily deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers through a publish-subscribe pattern.

Events: Based on the CNCF CloudEvents standard, Events enables users to react to changes in the state of Oracle Cloud Infrastructure resources, whether initiated by the system or by user action. Events can store information to Object Storage, or they can trigger Functions to take actions, Notifications to inform users, or Streaming to update external services.

Use Cases for the Oracle Cloud Native Framework: Inclusive, Sustainable, Open

Inclusive: The Oracle Cloud Native Framework includes both cloud and on-prem, supports modern and traditional applications, supports both dev and ops, and can be used by startups and enterprises. As an industry, we need to create more on-ramps to the cloud native freeway – in particular by reaching out to teams and technologies and connecting cloud native to what people know and work on every day. The WebLogic Server Operator for Kubernetes is a great example of just that: it enables existing WebLogic applications to easily integrate into and leverage Kubernetes cluster management. As another example, the Helidon project for Java creates a microservice architecture and framework for Java apps to move more quickly to cloud native. Many Oracle Database customers are connecting cloud native applications based on Kubernetes for new web front-ends and AI/big data processing back-ends, and the combination of the Oracle Autonomous Database and OKE creates a new model for self-driving, self-securing, and self-repairing cloud native applications. For example, using Kubernetes service broker and service catalog technology, developers can simply connect Autonomous Transaction Processing applications into OKE services on Oracle Cloud Infrastructure.

Sustainable: The Oracle Cloud Native Framework provides a set of managed cloud services and supported on-premises solutions, open and curated, built on an enterprise-grade infrastructure. New open source projects are popping up every day, and the rate of change of existing projects like Kubernetes is extraordinary. While the landscape grows, the industry and vendors must face the resultant challenge of complexity, as enterprises and teams can only learn, change, and adopt so fast. A unified framework helps reduce this complexity through curation and support. Managed cloud services are the secret weapon to reduce the administration, training, and learning-curve issues enterprises have had to shoulder themselves. While a do-it-yourself approach has been their only choice until recently, managed cloud services such as OKE give developers a chance to leapfrog into cloud native without a long and arduous learning curve. A sustainable model – built on an open, enterprise-grade infrastructure – gives enterprises a secure, performant platform from which to build real hybrid cloud deployments, including these five key hybrid cloud use cases:

Development and DevOps: Dev/test in the cloud, production on-prem.

Application Portability and Migration: Enables bi-directional cloud native application portability (on-prem to cloud, cloud to on-prem) and lift-and-shift migrations. The Oracle MySQL Operator for Kubernetes is an extremely popular solution that simplifies portability and integration of MySQL applications into cloud native tooling. It enables creation and management of production-ready MySQL clusters based on a simple declarative configuration format, including operational tasks such as database backups and restoring from an existing backup. The MySQL Operator simplifies running MySQL inside Kubernetes, enabling further application portability and migrations.

HA/DR: Disaster recovery or high availability sites in the cloud, production on-prem.

Workload-Specific Distribution: Choose where you want to run workloads, on-prem or cloud, based on specific workload type (e.g., based on latency, regulation, new vs. legacy).

Intelligent Orchestration: More advanced hybrid use cases require more sophisticated distributed application intelligence and federation – these include cloud bursting and Kubernetes federation.

Open: Over the course of the last few years, development teams have typically chosen to embrace a single-source cloud model to move fast and reduce complexity – in other words, the quick and easy solution. The price they are paying now is cloud lock-in resulting from proprietary services, closed APIs, and non-portable solutions. This is the exact opposite of where we are headed as an industry – fueled by open source, CNCF-based, and community-driven technologies.

An open ecosystem enables not only a hybrid cloud world but a truly multi-cloud world – and that is the vision that drives the Oracle Cloud Native Framework!


DevOps

Deploy containers on Oracle Container Engine for Kubernetes using Developer Cloud

In my previous blog, I described how to use Oracle Developer Cloud to build a Node.js microservice Docker image and push it to DockerHub. This blog will help you understand how to use Oracle Developer Cloud to deploy that Docker image from DockerHub to Container Engine for Kubernetes.

Container Engine for Kubernetes
Container Engine for Kubernetes is a developer-friendly, container-native, enterprise-ready managed Kubernetes service for running highly available clusters with the control, security, and predictable performance of Oracle Cloud Infrastructure. Visit the following link to learn about Oracle's Container Engine for Kubernetes: https://cloud.oracle.com/containers/kubernetes-engine

Prerequisites for Kubernetes Deployment
Access to an Oracle Cloud Infrastructure (OCI) account
A Kubernetes cluster set up on OCI (this tutorial explains how to set up a Kubernetes cluster on OCI)

Set Up the Environment: Create and Configure Build VM Templates and Build VMs
You'll need to create and configure the Build VM template and Build VM with the required software, which will be used to execute the build job.

Click the user avatar, then select Organization from the menu.
Click VM Templates, then New Template. In the dialog that pops up, enter a template name, such as Kubernetes Template, select "Oracle Linux 7" for the platform, then click the Create button.
After the template has been created, click Configure Software.
Select Kubectl and OCIcli (you'll be asked to add Python3 3.6 as well) from the list of software bundles available for configuration, then click + to add these software bundles to the template. Click the Done button to complete the software configuration for that Build VM template.
From the Virtual Machines page, click +New VM and, in the dialog that pops up, enter the number of VMs you want to create and select the VM Template you just created (Kubernetes Template). Click the Add button to add the VM.

Kubernetes Deployment Scripts
From the Project page, click the + New Repository button to add a new repository. After creating the repository, Developer Cloud will bring you to the Code page, with the NodejsKubernetes repository showing. Click the +File button to create a new file in the repository. (The README file in the repository was created when the project was created.)

Copy the following script into a text editor and save the file as nodejs_micro.yaml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nodejsmicro-k8s-deployment
spec:
  selector:
    matchLabels:
      app: nodejsmicro
  replicas: 1 # deployment runs 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nodejsmicro
    spec:
      containers:
      - name: nodejsmicro
        image: abhinavshroff/nodejsmicro4kube:latest
        ports:
        - containerPort: 80 # endpoint is at port 80 in the container
---
apiVersion: v1
kind: Service
metadata:
  name: nodejsmicro-k8s-service
spec:
  type: NodePort # exposes the service as a node port
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nodejsmicro

Click the Commit button to create the file and commit the code changes. Click the Commit button in the Commit changes dialog that displays. You should see the nodejs_micro.yaml file in the list of files for the NodejsKubernetes.git repository, as shown in the screenshot below.

Configuring the Build Job
Click Build on the navigation bar to display the Build page. Click the +New Job button to create a new build job.
In the New Job dialog box, enter NodejsKubernetesDeploymentBuild for the Job name and, from the Software Template drop-down list, select Kubernetes Template as the Software Template. Then click the Create Job button to create the build job.

After the build job has been created, you'll be brought to the configure screen. Click the Source Control tab and select NodejsKubernetes.git from the repository drop-down list. This is the same repository where you created the nodejs_micro.yaml file. Select master from the Branch drop-down list.

In the Builders tab, click the Add Builder drop-down and select OCIcli Builder from the drop-down list. To see what you need to fill in for each of the input fields in the OCIcli Builder form, and to find out where to retrieve these values, you can either read my "Oracle Cloud Infrastructure CLI on Developer Cloud" blog or the documentation link to the "Access Oracle Cloud Infrastructure Services Using OCIcli" section in Using Oracle Developer Cloud Service. Note: The values in the screenshot below have been obfuscated for security reasons.

Click the Add Builder drop-down list again and select Unix Shell Builder. In the text area of the Unix Shell Builder, add the following script, which downloads the Kubernetes config file and deploys the container on the Container Engine for Kubernetes cluster you created by following the instructions in my previous blog. Click the Save button to save the build job.

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.iad.aaaaaaaaafrgkzrwhtimldhaytgnjqhazdgmbuhc2gemrvmq2w --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl config view
kubectl get nodes
kubectl create -f nodejs_micro.yaml
sleep 120
kubectl get services nodejsmicro-k8s-service
kubectl get pods
kubectl describe pods

This script creates the .kube directory, uses the OCI CLI command oci ce cluster create-kubeconfig to download the Kubernetes cluster config file, then sets the KUBECONFIG environment variable. The kubectl config view and get nodes commands just let you view the cluster configuration and see the node details of the cluster. The create command actually deploys the Docker container on the Kubernetes cluster. We run the get services and get pods commands to retrieve the IP address and the port of the deployed container. Note that the nodejsmicro-k8s-service name was previously configured in the nodejs_micro.yaml file. Note: The cluster OCID in the script above needs to be replaced with your own.

Click the Build Now button to start executing the Kubernetes deployment build. You can click the Build Log icon to view the build execution logs. After the build job executes successfully, you can examine the build log to retrieve the IP address and the port for the service deployed on the Kubernetes cluster. You'll need to look for the IP address and the port under the deployment name you configured in the YAML file.

Use the IP address and the port that you retrieved, in the format shown below, and see the output in your browser:

http://<IP Address>:port/message

Note: The message output you see may differ from what is shown here, based on what you coded in the Node.js REST application that was containerized.
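For a quick check from the command line instead of the browser, something like this (with the placeholders filled in from your build log) should return the service's message:

    curl http://<IP Address>:<port>/message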
So, now you've seen how Oracle Developer Cloud streamlines and simplifies the process of automating the build and deployment of Docker containers on Container Engine for Kubernetes. Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle.


Finding Symmetry

(Originally published on Medium)

Evolving the design of Eclipse Collections through symmetry.

Got Eclipse Collections stickers?

Find the Missing Types
New Eclipse Collections types on the left add to the existing JDK types on the right. Eclipse Collections has a bunch of new types you will not find in the JDK. These types give developers useful functionality that they need. There is an extra cost to supporting additional container types, especially when you factor in having support for primitive types across these types. These missing types are important. They help Eclipse Collections return better return types for iteration patterns.

Type Symmetry
Eclipse Collections has pretty good symmetry between object and primitive types. The missing container types are fixed-size primitive arrays, primitive BiMaps, primitive Multimaps, and some of the primitive Intervals (only IntInterval exists today). String really should only exist as a primitive immutable collection of either char or int. Eclipse Collections has CharAdapter, CodePointAdapter and CodePointList, which provide a rich set of iteration protocols that work with Strings.

API Symmetry
There is still much that can be done to improve the symmetry between the object and primitive APIs. There are some APIs that cannot be replicated without adding new types. For instance, it would be less than desirable to implement a primitive version of groupBy with the current Multimap implementations, because the only option would be to box the primitive Lists, Sets or Bags. Since there are a large number of APIs in Eclipse Collections, I will only draw attention to some of the major APIs that do not currently have symmetry between object and primitive collections. The following methods are missing on the primitive iterables:

groupBy / groupByEach
countBy / countByEach
aggregateBy / aggregateInPlaceBy
partition
reduce / reduceInPlace
toMap
All "With" methods

Of all the missing APIs on primitive collections, perhaps the most subtle and yet glaring difference is the lack of "With" methods. It is not clear if the "With" methods would be as useful for primitive collections as they are with object collections. For some usage examples of the "With" methods on the object collection APIs, read my blog titled "Preposition Preference". The "With" methods allow for more APIs to be used with method references. This is what the signatures for some of the "With" methods might look like on IntList:

<P> boolean anySatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> boolean allSatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> boolean noneSatisfyWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> IntList selectWith(IntObjectPredicate<? super P> predicate, P parameter);
<P> IntList rejectWith(IntObjectPredicate<? super P> predicate, P parameter);

Default Methods to the Rescue
The addition of default methods in Java 8 has been of tremendous help in increasing the symmetry between our object and primitive APIs. In Eclipse Collections 10.x we will be able to leverage default methods even more, as we now have the ability to use container factory classes in interfaces. The following examples show how the default implementations of countBy and countByWith have been optimized using the Bags factory:

default <V> Bag<V> countBy(Function<? super T, ? extends V> function)
{
    return this.countBy(function, Bags.mutable.empty());
}

default <V, P> Bag<V> countByWith(Function2<? super T, ? super P, ? extends V> function, P parameter)
{
    return this.countByWith(function, parameter, Bags.mutable.empty());
}

More on Eclipse Collections API Design
To find out more about the design of the Eclipse Collections API, check out this slide deck and the following presentation. You can also find a set of visualizations of the Eclipse Collections library in this blog post. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.
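As a quick, hypothetical illustration of what the countBy shown above gives you at a call site (a sketch, not from the original post):

    import org.eclipse.collections.api.bag.Bag;
    import org.eclipse.collections.api.list.MutableList;
    import org.eclipse.collections.impl.factory.Lists;

    MutableList<String> words = Lists.mutable.with("one", "two", "three", "two");
    // counts each distinct string length: 3 appears three times, 5 appears once
    Bag<Integer> lengths = words.countBy(String::length);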


Install Spinnaker with Halyard on Kubernetes

(Originally published on Medium)

This article will walk you through the steps to install and set up a Spinnaker instance on Kubernetes that's behind a corporate proxy. We will use Halyard on Docker to manage our Spinnaker deployment. For a super quick installation, you can use Spinnaker's Helm chart.

Prerequisites
Make sure to take care of these prerequisites before installing Spinnaker:
Docker 17.x with proxies configured (click here for OL setup)
A Kubernetes cluster (click here for OL setup)
Helm with RBAC enabled (click here for generic setup)

Install Halyard on Docker
Halyard is used to install and manage a Spinnaker deployment. In fact, all production-grade Spinnaker deployments require Halyard in order to properly configure and maintain Spinnaker. Let's use Docker to install Halyard.

Create a Docker volume or a host directory to hold the persistent data used by Halyard. For the purposes of this article, let's create a host directory and grant users full access:

mkdir halyard && chmod 747 halyard

Halyard needs to interact with your Kubernetes cluster, so we pass the $KUBECONFIG file to it. One way is to mount a host directory into the container that has your Kubernetes cluster details. Let's create the directory "k8s", copy the $KUBECONFIG file into it, and make it visible to the user inside the Halyard container:

mkdir k8s && cp $KUBECONFIG k8s/config && chmod 755 k8s/config

Time to download and run the Halyard Docker image:

docker run -p 8084:8084 -p 9000:9000 \
  --name halyard -d \
  -v /sandbox/halyard:/home/spinnaker/.hal \
  -v /sandbox/k8s:/home/spinnaker/k8s \
  -e http_proxy=http://<proxy_host>:<proxy_port> \
  -e https_proxy=https://<proxy_host>:<proxy_port> \
  -e JAVA_OPTS="-Dhttps.proxyHost=<proxy_host> -Dhttps.proxyPort=<proxy_port>" \
  -e KUBECONFIG=/home/spinnaker/k8s/config \
  gcr.io/spinnaker-marketplace/halyard:stable

Make sure to replace "<proxy_host>" and "<proxy_port>" with your corporate proxy values.

Log in to the "halyard" container to test the connection to your Kubernetes cluster:

docker exec -it halyard bash
kubectl get pods -n spinnaker

Optionally, if you want command completion, run the following inside the halyard container:

source <(hal --print-bash-completion)

Set provider to "Kubernetes"
In Spinnaker terms, to deploy applications we use integrations to specific cloud platforms. We have to configure Halyard and set the cloud provider to Kubernetes v2 (manifest based), since we want to deploy Spinnaker onto a Kubernetes cluster:

hal config provider kubernetes enable

Next we create an account. In Spinnaker, an account is a named credential Spinnaker uses to authenticate against an integration provider - Kubernetes in our case:

hal config provider kubernetes account add <my_account> \
  --provider-version v2 \
  --context $(kubectl config current-context)

Make sure to replace "<my_account>" with an account name of your choice. Save the account name in an environment variable $ACCOUNT. Next, we need to enable Halyard to use artifacts:

hal config features edit --artifacts true

Set deployment type to "distributed"
Halyard supports multiple types of Spinnaker deployments. Let's tell Halyard that we need a distributed deployment of Spinnaker:

hal config deploy edit --type distributed --account-name $ACCOUNT

Set persistent store to "Minio"
Spinnaker needs a persistent store to save the continuous delivery pipelines and other configurations. Halyard lets you choose from multiple storage providers. For the purposes of this article, we will use "Minio".
Let’s use Helm to install a simple instance of Minio. Run the command from outside the Halyard docker container, on a node that has access to your Kubernetes cluster and Helm:

helm install --namespace spinnaker --name minio --set accessKey=<access_key> --set secretKey=<secret_key> stable/minio

Make sure to replace “<access_key>” and “<secret_key>” with values of your choosing. If you are using a local k8s cluster with no real persistent volume support, you can pass “--set persistence.enabled=false” to the previous Helm command. As the flag suggests, if Minio goes down, you will lose your changes.

According to the Spinnaker docs, Minio does not support versioning objects, so let’s disable versioning in the Halyard configuration. Back in the Halyard docker container, run these commands:

mkdir ~/.hal/default/profiles && \
touch ~/.hal/default/profiles/front50-local.yml

Add the following to the front50-local.yml file:

spinnaker.s3.versioning: false

Now run the following command to configure the storage provider:

echo $MINIO_SECRET_KEY | \
hal config storage s3 edit --endpoint http://minio:9000 \
--access-key-id $MINIO_ACCESS_KEY \
--secret-access-key

Make sure to set the $MINIO_ACCESS_KEY and $MINIO_SECRET_KEY environment variables to the <access_key> and <secret_key> values that you used when you installed Minio. Finally, let’s enable the s3 storage provider:

hal config storage edit --type s3

Set version to “latest”

You have to select a specific version of Spinnaker and configure Halyard so it knows which version to deploy. You can view the available versions by running this command:

hal version list

Pick the latest version number from the list (or any other version that you want to deploy) and update Halyard:

hal config version edit --version <version>

Deploy Spinnaker

At this point, Halyard should have all the information that it needs to deploy a Spinnaker instance. Let’s go ahead and deploy Spinnaker by running this command:

hal deploy apply

Note that first time deployments might take a while.

Make Spinnaker reachable

We need to expose the Spinnaker UI and Gateway services in order to interact with the Spinnaker dashboard and start creating pipelines. When we deployed Spinnaker using Halyard, a number of Kubernetes services got created in the “spinnaker” namespace. These services are by default only exposed within the cluster (type “ClusterIP”). Let’s change the service type of the services fronting the UI and API servers of Spinnaker to “NodePort” to make them available to end users outside the Kubernetes cluster.

Edit the “spin-deck” service by running the following command:

kubectl edit svc spin-deck -n spinnaker

Change the type to “NodePort” and optionally specify the port on which you want the service exposed. Here’s a snapshot of the service definition:

...
spec:
  type: NodePort
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
    nodePort: 30900
  selector:
    app: spin
    cluster: spin-deck
  sessionAffinity: None
status:
...

Next, edit the “spin-gate” service by running the following command:

kubectl edit svc spin-gate -n spinnaker

Change the type to “NodePort” and optionally specify the port on which you want the API gateway service exposed. Note that Kubernetes services can be exposed in multiple ways. If you want to expose Spinnaker onto the public internet, you can use a LoadBalancer or an Ingress with https turned on. You should configure authentication to lock out unauthorized users.
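If you would rather script these changes than edit the services interactively, kubectl patch can apply the same type change in one line. A minimal sketch, assuming the node ports used in this article; spin-gate listening on 8084 is also an assumption based on this setup:

kubectl -n spinnaker patch svc spin-deck --type merge \
  -p '{"spec":{"type":"NodePort","ports":[{"port":9000,"targetPort":9000,"nodePort":30900}]}}'
kubectl -n spinnaker patch svc spin-gate --type merge \
  -p '{"spec":{"type":"NodePort","ports":[{"port":8084,"targetPort":8084,"nodePort":30808}]}}'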
Save the node’s hostname or its IP address that will be used to access Spinnaker in an environment variable $SPIN_HOST. Using Halyard, configure the UI and API servers to receive incoming requests:

hal config security ui edit \
--override-base-url "http://$SPIN_HOST:30900"

hal config security api edit \
--override-base-url "http://$SPIN_HOST:30808"

Redeploy Spinnaker so it picks up the configuration changes:

hal deploy apply

You can access the Spinnaker UI at “http://$SPIN_HOST:30900”

Create a “hello-world” application

Let’s take Spinnaker for a spin (pun intended). Using Spinnaker’s UI, let’s create a “hello-world” application. Use the “Actions” drop-down and click “Create Application”:

Once the application is created, navigate to the “Pipelines” tab and click “Configure a new pipeline”:

Now add a new stage to the pipeline to create a manifest based deployment:

Under the “Manifest Configuration”, add the following as the manifest source text:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: '<docker_repository>:5000/helloworld:v1'
        name: hello-world
        ports:
        - containerPort: 80

Replace “<docker_repository>” with the name of your internal docker registry that is made available to your Kubernetes cluster.

Let’s take a quick side tour to create a “helloworld” docker image. We will create a “nginx” based image that hosts an “index.html” file containing:

<h1>Hello World</h1>

We will then create the corresponding “Dockerfile” in the same directory that holds the “index.html” file from the previous step:

FROM nginx:alpine
COPY . /usr/share/nginx/html

Next, we build the docker image by running the following command:

docker build -t <docker_repository>:5000/helloworld:v1 .

Again, make sure to replace “<docker_repository>” with the name of your internal docker registry. Push the docker image to the “<docker_repository>” to make it available to the Kubernetes cluster:

docker push <docker_repository>:5000/helloworld:v1

Back in the Spinnaker UI, let’s manually run the “hello-world” pipeline. After a successful execution, you can drill down into the pipeline instance details:

To quickly test our hello-world app, we can create a manifest based “LoadBalancer” in the Spinnaker UI. Click the “+” icon:

Add the following service definition to create the load balancer:

kind: Service
apiVersion: v1
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31080

Once Spinnaker provisions the load balancer, hit the hello-world app’s URL at “http://$SPIN_HOST:31080” in your browser. Voila! There you have it, “Hello World” is rendered.

Conclusion

Spinnaker is a multi-cloud continuous delivery platform for releasing software with high velocity. We used Halyard to install Spinnaker on a Kubernetes cluster and deployed a simple hello-world pipeline. Of course, we barely scratched the surface in terms of what Spinnaker offers. Head over to the guides to learn more about Spinnaker.


How to Connect a Go Program to Oracle Database using goracle

Given that we just released Go programming language RPMs on Oracle Linux yum server, I figured it would be a good opportunity to take the goracle driver for a spin on Oracle Linux and connect a Go program to Oracle Database. goracle implements a Go database/sql driver for Oracle Database using ODPI-C (Oracle Database Programming Interface for C).

1. Update Yum Configuration

First, make sure you have the most recent Oracle Linux yum server repo file by grabbing it from the source:

$ sudo mv /etc/yum.repos.d/public-yum-ol7.repo /etc/yum.repos.d/public-yum-ol7.repo.bak
$ sudo wget -O /etc/yum.repos.d/public-yum-ol7.repo http://yum.oracle.com/public-yum-ol7.repo

2. Enable Required Repositories to Install Go and Oracle Instant Client

$ sudo yum -y install yum-utils
$ sudo yum-config-manager --enable ol7_developer_golang111 ol7_oracle_instantclient

3. Install Go and Verify

Note that you must also install git so that go get can fetch and build the goracle module.

$ sudo yum -y install git golang
$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/vagrant/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/vagrant/go"
GOPROXY=""
GORACE=""
GOROOT="/usr/lib/golang"
GOTMPDIR=""
GOTOOLDIR="/usr/lib/golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build013415374=/tmp/go-build -gno-record-gcc-switches"
$ go version
go version go1.11.1 linux/amd64

4. Install Oracle Instant Client and Add its Libraries to the Runtime Link Path

Oracle Instant Client is available directly from Oracle Linux yum server. If you are deploying applications using Docker, I encourage you to check out our Oracle Instant Client Docker Image.

sudo yum -y install oracle-instantclient18.3-basic

Before you can make use of Oracle Instant Client, set the runtime link path so that goracle can find the libraries it needs to connect to Oracle Database:

sudo sh -c "echo /usr/lib/oracle/18.3/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig

5. Install the goracle Driver

Following the instructions from the goracle repo on GitHub:

$ go get gopkg.in/goracle.v2

6. Create a Go Program to Test Your Connection

Create a file db.go as follows. Make sure you change the connect string to match your own database.

package main

import (
	"database/sql"
	"fmt"

	_ "gopkg.in/goracle.v2"
)

func main() {
	db, err := sql.Open("goracle", "scott/tiger@10.0.1.127:1521/orclpdb1")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer db.Close()

	rows, err := db.Query("select sysdate from dual")
	if err != nil {
		fmt.Println("Error running query")
		fmt.Println(err)
		return
	}
	defer rows.Close()

	var thedate string
	for rows.Next() {
		// check the Scan error rather than silently ignoring it
		if err := rows.Scan(&thedate); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Printf("The date is: %s\n", thedate)
}

7. Run it!

Time to test your program.

$ go run db.go
The date is: 2018-11-21T23:53:31Z

Conclusion

In this blog post, I showed how you can install the Go programming language and Oracle Instant Client from Oracle Linux yum server and use them together with the goracle driver to connect a Go program to Oracle Database.

References

Oracle OpenWorld 2018 session slides: The Go Language: Principles and Practices for Oracle Database [DEV5047]
go-oracle
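One extra troubleshooting note for step 4 above: if go run fails because the Oracle client libraries cannot be loaded, you can confirm the dynamic linker actually sees Instant Client. A quick sketch, not from the original post:

# list the linker cache and look for the Oracle client library
$ ldconfig -p | grep libclntsh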


From locally running Node application to Cloud based Kubernetes Deployment

(Originally published at technology.amis.nl) In this article I will discuss the steps I had to go through in order to take my locally running Node application — with various hard coded and sometimes secret values — and deploy it on a cloud based Kubernetes cluster. I will discuss the containerization of the application, the replacement of hard coded values with references to environment variables, the Docker container image manipulation, the creation of the Kubernetes yaml files for creating the Kubernetes resources and finally the actual execution of the application.

Background

A few days ago in Tokyo I presented at the local J-JUG event as part of the Oracle Groundbreakers Tour of Asia and Pacific. I had prepared a very nice demo: an update in a cloud based Oracle Database was replicated to another cloud based database — a MongoDB database. In this demo, I first used Twitter as the medium for exchanging the update event and then the Oracle Event Hub (managed Apache Kafka) cloud service. This picture visualizes what I was trying to do:

However, my demo failed. I ran a local Node (JS) application that would be invoked over HTTP from within the Oracle Database — and that would publish to Twitter and Kafka. When I was working on the demo in my hotel room, it was all working just fine. I used ngrok to expose my locally running application on the public internet — a great way to easily integrate local services in cloud-spanning demonstrations. It turned out that use of ngrok was not allowed by the network configuration at the Oracle Japan office where I did my presentation. There was no way I could get my laptop to create the tunnel to the ngrok service that would allow it to hand over the HTTP request from the Oracle Database.

This taught me a lesson. No matter how convenient it may be to run stuff locally — I really should be able to have all components of this demo running in the cloud. And the most obvious way — apart from using a Serverless Function — is to deploy that application on a Kubernetes cluster. Even though I know how to get there — I realized the steps are not as engrained in my head and fingers as they should be — especially if I want to restore my demo to its former glory in less than 30 minutes.

The Action Plan

My demo application — somewhat quickly put together — contains quite a few hard coded values, including confidential settings such as the Kafka Server IP address and Topic name as well as the Twitter App Credentials. The first step I need to take is to remove all these hard coded values from the application code and replace them with references to environment variables.

The second big step is to build a container for and from my application. This container needs to provide the Node runtime, have all npm modules used by the application and contain the application code itself. The container should automatically start the application and expose the proper port. At the end of this step, I should be able to run my application locally in a Docker container — injecting values for the environment variables with the Docker run command.

The third step is the creation of a Container Image from the container — and pushing that image (after meaningful tagging) to a container registry.

Next is the preparation of the Kubernetes resources. My application consists of a Pod and a Service (in Kubernetes terms) that are combined in a Deployment in its own Namespace.
The Deployment makes use of two Secrets — one contains the confidential values for the Kafka Server (IP address and topic name) and the other the Twitter client app credentials. Values from these Secrets are used to set some of the environment variables. Other values are hard coded in the Deployment definition.

After arranging access to a Kubernetes Cluster instance — running in the Oracle Cloud Infrastructure, offered through the Oracle Kubernetes Engine (OKE) service — I can deploy the K8S resources and get the application running. Now, finally, I can point my Oracle Database trigger to the service endpoint on Kubernetes in the cloud and start publishing tweets for all relevant database updates.

At this point, I should — and you likewise after reading the remainder of this article — have a good understanding of how to Kubernetalize a Node application, so that I will never be stymied in my demos by stupid network problems. I want to not even think twice about taking my local application and turning it into a containerized application that is running on Kubernetes.

Note: the sources discussed in this article can be found on GitHub: https://github.com/lucasjellema/groundbreaker-japac-tour-cqrs-via-twitter-and-event-hub/tree/master/db-synch-orcl-2-mongodb-over-twitter-or-kafka.

1. Replace Hard Coded Values with Environment Variable References

My application contained the hard coded values of the Kafka Broker endpoint and my Twitter App credentials secrets. For a locally running application that is barely acceptable. For an application that is deployed in a cloud environment (and whose sources are published on GitHub) that is clearly not a good idea. Any hard coded value is to be removed from the code — replaced with a reference to an environment variable, using the Node expression:

process.env.NAME_OF_VARIABLE

or

process.env['NAME_OF_VARIABLE']

Let’s for now not worry about how these values are set and provided to the Node application. I have created a generic code snippet that will check upon starting the application if all expected environment variables have been defined, and if not, writes a warning to the output:

const REQUIRED_ENVIRONMENT_SETTINGS = [
    {name:"PUBLISH_TO_KAFKA_YN" , message:"with either Y (publish event to Kafka) or N (publish to Twitter instead)"},
    {name:"KAFKA_SERVER" , message:"with the IP address of the Kafka Server to which the application should publish"},
    {name:"KAFKA_TOPIC" , message:"with the name of the Kafka Topic to which the application should publish"},
    {name:"TWITTER_CONSUMER_KEY" , message:"with the consumer key for a set of Twitter client credentials"},
    {name:"TWITTER_CONSUMER_SECRET" , message:"with the consumer secret for a set of Twitter client credentials"},
    {name:"TWITTER_ACCESS_TOKEN_KEY" , message:"with the access token key for a set of Twitter client credentials"},
    {name:"TWITTER_ACCESS_TOKEN_SECRET" , message:"with the access token secret for a set of Twitter client credentials"},
    {name:"TWITTER_HASHTAG" , message:"with the value for the twitter hashtag to use when publishing tweets"},
]

for (var env of REQUIRED_ENVIRONMENT_SETTINGS) {
    if (!process.env[env.name]) {
        console.error(`Environment variable ${env.name} should be set: ${env.message}`);
    } else {
        // convenient for debug; however: this line exposes all environment variable values - including any secret values they may contain
        // console.log(`Environment variable ${env.name} is set to : ${process.env[env.name]}`);
    }
}

This snippet is used in the index.js file in my Node application.
This file also contains several references to process.env — values that used to be hard coded. It seems convenient to use npm start to run the application — for example because it allows us to define environment variables as part of the application start up. When you execute npm start, npm will check the package.json file for a script with key “start”. This script will typically contain something like “node index” or “node index.js”. You can extend this script with the definition of environment variables to be applied before running the Node application, like this (taken from package.json):

"scripts": {
    "start": "(export KAFKA_SERVER=myserver.cloud.com && export KAFKA_TOPIC=cool-topic ) || (set KAFKA_SERVER=myserver.cloud.com && set KAFKA_TOPIC=cool-topic && set TWITTER_CONSUMER_KEY=very-secret ) && node index",
    …
},

Note: we may have to cater for Linux and Windows environments, which treat environment variables differently.

2. Containerize the Node application

In my case, I was working on my Windows laptop, developing and testing the Node application from the Windows command line. Clearly, that is not an ideal environment for building and running a Docker container. What I have done is use Vagrant to run a Virtual Machine with Docker Engine inside. All Docker container manipulation can easily be done inside this Virtual Machine. Check out the Vagrantfile that instructs Vagrant on leveraging VirtualBox to create and run the desired Virtual Machine. Note that the local directory that contains the Vagrantfile and from which the vagrant up command is executed is automatically shared into the VM, mounted as /vagrant.

Note: I have used this article for inspiration for this section of my article: https://nodejs.org/en/docs/guides/nodejs-docker-webapp/ .

Note 2: I use the .dockerignore file to exclude files and directories in the root folder that contains the Dockerfile. Anything listed in .dockerignore is not added to the build context and will not end up in the container.

A Docker container image is built using a Docker build file. The starting point of the Dockerfile is the base image that is subsequently extended. In this case, the base image is node:10.13.0-alpine, a small and recent Node runtime environment. I create a directory /usr/src/app and have Docker set this directory as its focal point for all subsequent actions.

Docker container images are created in layers. Each build step in the Dockerfile adds a layer. If the build is rerun, only layers for steps in the Dockerfile that have changed are rerun and only changed layers are actually uploaded when the image is pushed. Therefore, it is smart to have the steps that change the most at the end of the Dockerfile. In my case, that means that the application sources should be copied to the container image at a very late stage in the build process.

First I only copy the package.json file — assuming this will not change very frequently. Immediately after copying package.json, all node modules are installed into the container image using npm install. Only then are the application sources copied. I have chosen to expose port 8080 from the container — this is an extremely arbitrary decision. However, the environment variable PORT — whose value is read in index.js using process.env.PORT — needs to correspond exactly to whatever port I expose. Finally, the instruction to run the Node application when the container is run — npm start — is passed to the CMD instruction.
Here is the complete Dockerfile:

# note: run docker build in a directory that contains this Docker build file, the package.json file and all your application sources and static files
# this directory should NOT contain the node-modules or any other resources that should not go into the Docker container - unless these are explicitly excluded in a .dockerignore file!
FROM node:10.13.0-alpine

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install

# Bundle app source - copy Node application from the current directory
COPY . .

# the application will be exposed at port 8080
ENV PORT=8080
# so we should expose that port
EXPOSE 8080

# run the application, using npm start (which runs the start script in package.json)
CMD [ "npm", "start" ]

Running docker build — to be exact, I run:

docker build -t lucasjellema/http-to-twitter-app .

— gives the following output:

The container image is created. I can now run the container itself, for example with:

docker run -p 8090:8080 \
-e KAFKA_SERVER=127.1.1.1 \
-e KAFKA_TOPIC=topic \
-e TWITTER_CONSUMER_KEY=818 \
-e TWITTER_CONSUMER_SECRET=secret \
-e TWITTER_ACCESS_TOKEN_KEY=tokenkey \
-e TWITTER_ACCESS_TOKEN_SECRET=secret \
lucasjellema/http-to-twitter-app

The container is running, the app is running, and I should be able to access the application at port 8090 on the Docker host: http://192.168.188.120:8090/about (note: 192.168.188.120 is the IP address exposed by the Virtual Machine managed by Vagrant).

3. Build, Tag and Push the Container Image

In order to run a container on a Kubernetes cluster — or indeed on any other machine than the one on which it was built — this container must be shared or published. The easiest way of doing so is through the use of a Container (Image) Registry, such as Docker Hub. In this case I simply tag the container image with the currently applicable tag of lucasjellema/http-to-twitter-app:0.9:

docker tag lucasjellema/http-to-twitter-app:latest lucasjellema/http-to-twitter-app:0.9

I then push the tagged image to the Docker Hub registry (note: before executing this statement, I have used docker login to connect my session to the Docker Hub):

docker push lucasjellema/http-to-twitter-app:0.9

At this point, the Node application is publicly available for pull — and can be run on any Docker compatible container engine. It does not contain any secrets — all dependencies (such as Twitter credentials and Kafka configuration) need to be injected through environment variable settings.

4. Prepare Kubernetes Resources (Pod, Service, Secrets, Namespace, Deployment)

When the Node application is running on Kubernetes it shall have a number of constituents:

a namespace cqrs-demo to isolate the other artifacts in their own compartment
two secrets to provide the sensitive and dynamic, deployment specific details regarding Kafka and the Twitter client credentials
a Pod for a single container — with the Node application
a Service — to expose the Pod on an (externally) accessible endpoint and guide requests to the port exposed by the Pod
a Deployment http-to-twitter-app — to configure the Pod through a template that is used for scaling and redeployment

The separate namespace cqrs-demo is created with a simple kubectl command:

kubectl create namespace cqrs-demo

The two secrets are two sets of sensitive data entries. Each entry has a key and a value, and the value of course is the sensitive one.
In the case of the application in this article I have ensured that only the secret-objects contain sensitive information. There is no password, endpoint, or credential in any other artifact, so I can freely share the other files — even on GitHub. But not the secrets files. They contain the valuable goods.

Note: even though the secrets may seem encrypted — in this case they are not. They simply contain the base64 representation of the actual values. These base64 values can easily be produced on the Linux command line using:

echo -n '<value>' | base64

The secrets are created from these yaml files:

apiVersion: v1
kind: Secret
metadata:
  name: twitter-app-credentials-secret
  namespace: cqrs-demo
type: Opaque
data:
  CONSUMER_KEY: U0hh
  CONSUMER_SECRET: dT=
  ACCESS_TOKEN_KEY: ODk=
  ACCESS_TOKEN_SECRET: aUZv

and

apiVersion: v1
kind: Secret
metadata:
  name: kafka-server-secret
  namespace: cqrs-demo
type: Opaque
data:
  kafka-server-endpoint: MTI5
  kafka-topic: aWRj

using these kubectl statements:

kubectl create -f ./kafka-secret.yaml
kubectl create -f ./twitter-app-credentials-secret.yaml

The Kubernetes Dashboard displays the two secrets:

And some details for one (but not the sensitive values):

The file k8s-deployment.yml contains the definition of both the service and the deployment — and, through the deployment, indirectly also the pod. The service is defined as type LoadBalancer. On Oracle Kubernetes Engine this results in a special external IP address assigned to this service. That could be considered somewhat wasteful. A more elegant approach would be to use an IngressController — which allows us to handle more than just a single service on an external IP address. For the current example, LoadBalancer will do.

Note: when you run the Kubernetes artifacts on an environment that does not support LoadBalancer — such as minikube — you can change type LoadBalancer to type NodePort. A random port is then assigned to the service and the service will be available on that port on the IP address of the K8S cluster.

The service is exposed externally at port 80 — although other ports would be perfectly fine too. The service connects to the container port with the logical name app-api-port in the cqrs-demo namespace. This port is defined for the http-to-twitter-app container definition in the http-to-twitter-app deployment. Note: multiple containers can be started for this single container definition — depending on the number of replicas specified in the deployment and, for example, on whether (re)deployments are taking place. The service mechanism ensures that traffic is load balanced across all container instances that expose the app-api-port.

kind: Service
apiVersion: v1
metadata:
  name: http-to-twitter-app
  namespace: cqrs-demo
  labels:
    k8s-app: http-to-twitter-app
    kubernetes.io/name: http-to-twitter-app
spec:
  selector:
    k8s-app: http-to-twitter-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: app-api-port
  type: LoadBalancer
  # with type LoadBalancer, an external IP will be assigned - if the K8S provider supports that capability, such as OKE
  # with type NodePort, a port is exposed on the cluster; whether that can be accessed or not depends on the cluster configuration; on Minikube it can be, in many other cases an IngressController may have to be configured

After creating the service, it will take some time (up to a few minutes) before an external IP address is associated with the (load balancer for the) service. Until then, the external IP will be shown as pending.
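You can also watch for the address from the command line. A small sketch, using the service name defined above; the -w flag keeps watching until the EXTERNAL-IP column changes from <pending> to a real address:

kubectl get svc http-to-twitter-app -n cqrs-demo -w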
Below is what it looks like in the dashboard when the external IP has been assigned (although I blurred most of the actual IP address):

The deployment for now specifies just a single replica. It specifies the container image on which the container (instances) in this deployment are based: lucasjellema/http-to-twitter-app:0.9. This is of course the container image that I pushed in the previous section. The container exposes port 8080 (container port) and this port has been given the logical name app-api-port, which we have seen before.

The K8S cluster instance I was using had an issue with DNS translation from domain names to IP addresses. Initially, my application was not working because the url api.twitter.com could not be translated into an IP address. Instead of trying to fix this DNS issue, I have made use of a built-in feature in Kubernetes called hostAliases. This feature allows us to specify DNS entries that are added at runtime to the hosts file in the container. In this case I instruct Kubernetes to inject the mapping between api.twitter.com and its IP address into the hosts file of the container.

Finally, the container template specifies a series of environment variable values. These are injected into the container when it is started. Some of the values for the environment variables are defined literally in the deployment definition. Others consist of references to entries in secrets, for example the value for TWITTER_CONSUMER_KEY, which is derived from the twitter-app-credentials-secret using the CONSUMER_KEY key.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: http-to-twitter-app
  name: http-to-twitter-app
  namespace: cqrs-demo
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: http-to-twitter-app
    spec:
      hostAliases:
      - ip: "104.244.42.66"
        hostnames:
        - "api.twitter.com"
      containers:
      - image: "lucasjellema/http-to-twitter-app:0.9"
        imagePullPolicy: Always
        name: http-to-twitter-app
        ports:
        - containerPort: 8080
          name: app-api-port
          protocol: TCP
        env:
        - name: PUBLISH_TO_KAFKA_YN
          value: "N"
        - name: TWITTER_HASHTAG
          value: "#GroundbreakersTourOrderEvent"
        - name: TWITTER_CONSUMER_KEY
          valueFrom:
            secretKeyRef:
              name: twitter-app-credentials-secret
              key: CONSUMER_KEY
        - name: TWITTER_CONSUMER_SECRET
          valueFrom:
            secretKeyRef:
              name: twitter-app-credentials-secret
              key: CONSUMER_SECRET
        - name: TWITTER_ACCESS_TOKEN_KEY
          valueFrom:
            secretKeyRef:
              name: twitter-app-credentials-secret
              key: ACCESS_TOKEN_KEY
        - name: TWITTER_ACCESS_TOKEN_SECRET
          valueFrom:
            secretKeyRef:
              name: twitter-app-credentials-secret
              key: ACCESS_TOKEN_SECRET
        - name: KAFKA_SERVER
          valueFrom:
            secretKeyRef:
              name: kafka-server-secret
              key: kafka-server-endpoint
        - name: KAFKA_TOPIC
          valueFrom:
            secretKeyRef:
              name: kafka-server-secret
              key: kafka-topic

The deployment in the dashboard:

Details on the Pod:

Given admin privileges, I can inspect the real values of the environment variables that were derived from secrets. The Pod logging is easily accessed as well:

5. Run and Try Out the Application

When the external IP has been allocated to the Service and the Pod is running successfully, the application can be accessed. From the Oracle Database — and also just from any browser:

The public IP address was blurred in the location bar. Note that no port is specified in the URL — because the port will default to 80, and that happens to be the port defined in the service as the port to map to the container's exposed port (8080).
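To follow the application's logging from the command line rather than the dashboard, a sketch using the namespace and deployment name defined above:

kubectl logs -f deployment/http-to-twitter-app -n cqrs-demo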
When the database makes its HTTP request, we can see in the Pod logging that the request is processed:

And I can even verify that the application has actually done what its logging states it has done:

Resources

GitHub sources: https://github.com/lucasjellema/groundbreaker-japac-tour-cqrs-via-twitter-and-event-hub
Kubernetes Cheatsheet for Docker developers: https://technology.amis.nl/2018/09/26/from-docker-run-to-kubectl-apply-quick-kubernetes-cheat-sheet-for-docker-users/
Kubernetes Documentation on Secrets: https://kubernetes.io/docs/concepts/configuration/secret/
Kubernetes Docs on Host Aliases: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
Docker docs on dockerignore: https://docs.docker.com/engine/reference/builder/#dockerignore-file
Kubernetes Docs on Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/


Connecting to Autonomous Database from a Node.js, Python or PHP App in Oracle Cloud Infrastructure

Introduction

In this tutorial I demonstrate how to connect an app written in Python, Node.js or PHP running in Oracle Cloud Infrastructure (OCI) to an Autonomous Transaction Processing (ATP) Database running in Oracle Cloud. To complete these steps, it is assumed you have either a baremetal or VM shape running Oracle Linux with a public IP address in Oracle Cloud Infrastructure, and that you have access to Autonomous Transaction Processing Database Cloud Service. I used Oracle Linux 7.5.

Note: this post has been updated to include optional use of the OCI CLI to download Client Credentials (Wallet) directly.

We've recently added Oracle Instant Client to the Oracle Linux yum mirrors in each OCI region, which has simplified the steps significantly. Previously, installing Oracle Instant Client required either registering a system with ULN or downloading from OTN, each with manual steps to accept license terms. Now you can simply use yum install directly from Oracle Linux running in OCI. For this example, I use a Node.js app, but the same principles apply to Python with cx_Oracle, PHP with php-oci8 or any other language that can connect to Oracle Database with an appropriate connector via Oracle Instant Client.

Overview

Installing Node.js, node-oracledb and Oracle Instant Client
Using Node.js with node-oracledb and Oracle Instant Client to connect to Autonomous Transaction Processing

Installing Node.js, node-oracledb and Oracle Instant Client

Grab the Latest Oracle Linux Yum Mirror Repo File

These steps ensure you have an updated repo file local to your OCI region, with a repo definition for OCI-included software such as Oracle Instant Client. Note that I obtain the OCI region from the instance metadata service via an HTTP endpoint that every OCI instance has access to at the address 169.254.169.254. After connecting to your OCI compute instance via ssh, run the following commands:

cd /etc/yum.repos.d
sudo mv public-yum-ol7.repo public-yum-ol7.repo.bak
export REGION=`curl http://169.254.169.254/opc/v1/instance/ -s | jq -r '.region'| cut -d '-' -f 2`
sudo -E wget http://yum-$REGION.oracle.com/yum-$REGION-ol7.repo

Enable yum repositories for Node.js and Oracle Instant Client

Next, enable the required repositories to install Node.js 10 and Oracle Instant Client:

sudo yum install -y yum-utils
sudo yum-config-manager --enable ol7_developer_nodejs10 ol7_oci_included

Install Node.js, node-oracledb and Oracle Instant Client

To install Node.js 10 from the newly enabled repo, we'll need to make sure the EPEL repo is disabled. Otherwise, Node.js from that repo may be installed and that's not the Node we are looking for. Also, note the name of the node-oracledb package for Node.js 10 is node-oracledb-12c-node10. Oracle Instant Client will be installed automatically as a dependency of node-oracledb:

sudo yum --disablerepo="ol7_developer_EPEL" -y install nodejs node-oracledb-12c-node10

Add Oracle Instant Client to the runtime link path:

sudo sh -c "echo /usr/lib/oracle/18.3/client64/lib > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig

Download and Configure Client Credentials (Wallet)

To connect to ATP via SQL*Net, you'll need Oracle client credentials. An ATP service administrator can download these via the service console (Option 1). See this documentation for more details. Or, you can use the OCI command line interface (CLI) to download them (Option 2).
Option 1: Download Wallet From Service Console

Download the client credentials by clicking on DB Connection on your Autonomous Database's Details page.

Figure 1. Downloading Client Credentials (Wallet) from Autonomous Transaction Processing Service Console

Copy the wallet from the machine to which you've downloaded it to the OCI compute instance. Here I'm copying the file wallet_ATP1.zip from my development machine using scp. Note that I'm using an ssh key file that matches the ssh key I created the instance with.

Note: this next command is run on your development machine to copy the downloaded Wallet zip file to your OCI instance. In my case, wallet_ATP1.zip was downloaded to ~/Downloads on my MacBook.

scp -i ~/.ssh/oci/oci ~/Downloads/wallet_ATP1.zip opc@<OCI INSTANCE PUBLIC IP>:/etc/ORACLE/WALLETS/ATP1

Option 2: Download Wallet Using OCI CLI

Using the latest OCI CLI, you can download client credentials via the command line. Here I'm assigning the Compartment ID to environment variable C, then assigning the Autonomous Database ID to DB_ID by looking it up with the OCI CLI based on its name. Finally, I download the client credentials zip archive using the OCI CLI:

$ export C=`oci iam compartment list --all | jq -r '.data[] ."compartment-id"'`
$ export DB_ID=`oci db autonomous-database list --compartment-id $C | jq -r '.data[] | select( ."db-name" == "sergioblog" ) | .id'`
$ oci db autonomous-database generate-wallet --autonomous-database-id $DB_ID --password MYPASSWORD123 --file wallet_ATP1.zip

With the client credentials (wallet archive) downloaded, unzip it and set the permissions appropriately. First, prepare a location to store the Wallet:

sudo mkdir -pv /etc/ORACLE/WALLETS/ATP1
sudo chown -R opc /etc/ORACLE

Next, unzip it:

cd /etc/ORACLE/WALLETS/ATP1
unzip wallet_ATP1.zip
sudo chmod -R 700 /etc/ORACLE

Edit sqlnet.ora to point to the Wallet location, replacing ?/network/admin. After editing, sqlnet.ora should look something like this:

cat /etc/ORACLE/WALLETS/ATP1/sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/etc/ORACLE/WALLETS/ATP1")))
SSL_SERVER_DN_MATCH=yes

Set the TNS_ADMIN environment variable to point Instant Client to the Oracle configuration directory, as well as NODE_PATH so that the node-oracledb module can be found by our Node.js program:

export TNS_ADMIN=/etc/ORACLE/WALLETS/ATP1
export NODE_PATH=`npm root -g`

Create and run a Node.js Program to Test Connection to ATP

Create a file, select.js, based on the example below. Either assign values to the environment variables NODE_ORACLEDB_USER, NODE_ORACLEDB_PASSWORD, and NODE_ORACLEDB_CONNECTIONSTRING to suit your configuration, or edit the placeholder values USERNAME, PASSWORD and CONNECTIONSTRING in the code below. The former are the username and password you've been given for ATP, and the latter is one of the service descriptors in the $TNS_ADMIN/tnsnames.ora file.
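For example, a sketch only, with assumed values: for a database named sergioblog (as in the OCI CLI lookup above), the wallet's tnsnames.ora will typically contain service descriptors such as sergioblog_high, sergioblog_medium and sergioblog_low:

# hypothetical values; adjust to your own ATP database and credentials
export NODE_ORACLEDB_USER=admin   # see the note on the default username below
export NODE_ORACLEDB_PASSWORD='<your ATP password>'
export NODE_ORACLEDB_CONNECTIONSTRING=sergioblog_high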
The default username for an Autonomous Database is admin.

'use strict';

const oracledb = require('oracledb');

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection({
      user: process.env.NODE_ORACLEDB_USER || "USERNAME",
      password: process.env.NODE_ORACLEDB_PASSWORD || "PASSWORD",
      connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING || "CONNECTIONSTRING"
    });
    let result = await connection.execute("select sysdate from dual");
    console.log(result.rows[0]);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();

Run It!

Let's run our Node.js program. You should see a date returned from the Database:

node select.js
[ 2018-09-13T18:19:54.000Z ]

Important Notes

As there currently isn't a service gateway to connect from Oracle Cloud Infrastructure to Autonomous Transaction Processing, any traffic between these two will count against your network quota.

Conclusion

In this blog post I've demonstrated how to run a Node.js app on an Oracle Linux instance in Oracle Cloud Infrastructure (OCI) and connect it to Autonomous Transaction Processing Database by installing all necessary software — including Oracle Instant Client — directly from yum servers within OCI itself. By offering direct access to essential Oracle software from within Oracle Cloud Infrastructure, without requiring manual steps to accept license terms, we've made it easier for developers to build Oracle-based applications on Oracle Cloud.

References

Create an Autonomous Transaction Processing Instance
Downloading Client Credentials (Wallets)
Node.js for Oracle Linux
Python for Oracle Linux
PHP for Oracle Linux
Connect PHP 7.2 to Oracle Database 12c using Oracle Linux Yum Server


Why Your Developer Story is Important

Stories are a window into life. They can, if they resonate, provide insights into our own lives or the lives of others. They can help us transmit knowledge, pass on traditions, solve present day problems, or allow us to imagine alternate realities. Open Source software is an example of an alternate reality in software development, where proprietary development has been replaced in large part with sharing code that is free and open. How is this relevant, not only to developers but to anyone who works in technology? It is human nature that we continue to want to grow, learn and share.

With this in mind, I started 60 Second Developer Stories and tried it out at various Oracle Code events, at developer conferences, and now at Oracle OpenWorld 2018/Code One. For the latter we had a Video Hangout in the Groundbreakers Hub at Code One, where anyone with a story to share could do so. We livestream the story via Periscope/Twitter, record it, and edit/post it later on YouTube. In the Video Hangout, we use a green screen and, through the miracles of technology, chroma key it in and put in a cool backdrop. Below are some photos of the Video Hangout as well as the ideas we give as suggestions:

Share what you learned on your first job
Share a best coding practice
Explain how a tool or technology works
What have you learned recently about building an App?
Share a work related accomplishment
What's the best decision you ever made?
What's the worst mistake you made and the lesson learned?
What is one thing you learned from a mentor or peer that has really helped you?
Any story that you want to share and the community can benefit from

Here are some FAQs about the 60 Second Developer Story:

Q1. I am too shy, and as this is live, what if I get it wrong?
A1. It is your story, there is no right or wrong. If you mess up, it's not a problem, we can do a retake.

Q2. There are so many stories, how do I pick one?
A2. Share something specific: an event that has a beginning, a middle and an end. Ideally there was a challenge or obstacle and you overcame it. As long as it is meaningful to you, it is worth sharing.

Q3. What if it's not exactly 60 seconds, if it's shorter or longer?
A3. 60 seconds is a guideline. I will usually show you a cue-card to let you know when you have 30 secs. and 15 secs. left. A little bit over or under is not a big deal.

Q4. When can I see the results?
A4. Usually immediately. Whatever Periscope/Twitter handle we are sharing on, plus if you have a personal Twitter handle, we tweet that before you go live, so it will show up on your feed.

Q5. What if I am not a Developer?
A5. We use Developer in a broad sense. It doesn't matter if you are a DBA or Analyst, or whatever. If you are involved with technology and have a story to share, we want to hear it.

Here is an example of a 60 Second Developer Story. We hope to have the Video Hangout at future Oracle Code and other events, and we look forward to you sharing your 60 Second story.


DevOps

New in Developer Cloud - Fn Support and Wercker Integration

Over the weekend we rolled out an update to your Oracle Developer Cloud Service instances which introduces several new features. In this blog we'll quickly review two of them - support for the Fn project and integration with the Wercker CI/CD solution. These new features further enhance the scope of CI/CD functionality that you get in our team development platform.

Project Fn Build Support

Fn is a function-as-a-service open-source platform led by Oracle and available for developers looking to develop portable functions in a variety of languages. If you are not familiar with Project Fn, a good intro on why you should care is this blog, and you can learn more about it through the Fn project's home page on GitHub.

In the latest version of Developer Cloud you have a new option in the build steps menu that helps you define various Fn related commands as part of your build process. So, for example, if your Fn project code is hosted in the Git repository provided by your DevCS project, you can use the build step to automate a process of building and deploying the function you created.

Wercker / Oracle Container Pipelines Integration

A while back Oracle purchased a Docker-native CI/CD solution called Wercker - which is now also offered as part of Oracle Cloud Infrastructure under the name Oracle Container Pipelines. Wercker is focused on offering CI/CD automation for Docker and Kubernetes based microservices. As you probably know, we also offer similar support for Docker and Kubernetes in Developer Cloud Service, which has support for declarative definition of Docker build steps and the ability to run Kubectl scripts in its build pipelines.

If you have an investment in Wercker-based CI/CD and you want a more complete agile/DevOps set of features - such as the functionality offered by Developer Cloud Service (including free private Git repositories, issue tracking, agile boards and more) - now you can integrate the two solutions without losing your investment in Wercker pipelines. For a while now, Oracle Container Pipelines has provided support for picking up code directly from a Git repository hosted in Developer Cloud Service. Now we added support for Developer Cloud Service to invoke pipelines you defined in Wercker directly as part of build jobs and pipelines in Developer Cloud Service. Once you provide DevCS with your personal token for logging into Wercker, you can pick the specific applications and pipelines that you would like to execute as part of your build jobs.

There are several other new features and enhancements in this month's release of Oracle Developer Cloud - you can read about those in our What's New page.
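For context, here is the kind of Fn CLI sequence such a build step can automate - a sketch only, with hypothetical function and application names, using the standard Fn CLI:

# scaffold a function (hypothetical name and runtime)
fn init --runtime java hello-fn
cd hello-fn
# create an application to hold the function, then build and deploy it
fn create app demo-app
fn deploy --app demo-app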


Making an IoT Badge – #badgelife going corporate

By Noel Portugal, Senior Cloud Experience Developer at Oracle

Code Card 2018

For years I've been wanting to create something fun with the almighty esp8266 WiFi chip. I started experimenting with the esp8266 almost exactly four years ago. Back then there were no Arduino, Lua or even MicroPython ports for the chip, only the C Espressif SDK. Today it is fairly easy to write firmware for the ESP given how many documented projects are out there.

IoT Badge by fab-lab.eu

Two years ago I was very close to actually producing something with the esp8266. We, the AppsLab team, partnered with the Oracle Technology Network team (now known as the Oracle Groundbreakers Team) to offer an IoT workshop at Oracle Open World 2016. I reached out to friend-of-the-lab Guido Burger from fab-lab.eu and he came up with a clever design for an IoT badge. This badge was the Swiss Army knife of IoT dev badges/kits. Unfortunately, we ran out of time to actually mass produce this badge and we had to shelve the idea. Instead, we decided that year to use an off-the-shelf NodeMcu to introduce attendees to hardware that can talk to the Cloud. For the next year, we updated the IoT workshop curriculum to use the Wio Node board from Seeedstudio.

Fast forward to 2018. I've been following emerging use cases of e-ink screens, and I started experimenting with them. Then the opportunity came. We needed something to highlight how easy it is to deploy serverless functions with the Fn project. Having a physical device that could retrieve content from the cloud and display it was the perfect answer for me. I reached out to Squarofumi, the creators of Badgy, and we worked together to come up with the right specs for what we ended up calling the Code Card. The Code Card is an IoT badge powered by the esp8266, a rechargeable coin battery, and an e-ink display.

I suggested using the same technique I used to create my smart esp8266 button. When either button A or B is pressed, it sets the esp8266 enable pin to high; the first thing the software then does is keep the pin high until we are done doing an HTTP request and updating the e-ink screen. When we are done, we set the enable pin to low and the chip turns off (not standby). This allows the battery to last much longer.

To make it even easier for busy attendees to get started, I created a web app that was included in the official event app. The Code Card Designer lets you choose from different templates and assign them to a button press (short and long press). You can also choose an icon from some pre-loaded icons on the firmware. Sadly, at the last minute I had to remove one of the coolest features: the ability to upload your own picture. The feature was just not very reliable and often failed. With more time the feature can be re-introduced.

After attendees used the Code Card Designer they were ready for more complex stuff. All they needed to do was connect the Card to their laptops and connect via serial communication. I created a custom Electron Terminal to make it easier to access a custom CLI to change the button endpoints and SSID information. A serverless function or any other endpoint returning the required JSON is all that is needed to start modifying your Card.

(Tweet from Oracle Groundbreakers, @groundbreakers, Oct 24, 2018: "A name and a face! @groundbreakers @Java_Champions @babadopulos #codeone Code Card")

I published the Arduino source code along with other documentation.
It didn’t take long for attendees to start messing around with C code image arrays to change their icons. Lastly, if you paid attention, you can see that we added two Grove headers to connect analog or digital sensors. More fun! Go check out and clone the whole GitHub repo. You can prototype your own “badge” using an off-the-shelf e-ink board similar to this. #badgelife!


Blockchain

Oracle Code One, Day Four Updates and Wrap Up

It’s been an educational, inspirational, and insightful four days at Oracle Code One in San Francisco. This was the first time Oracle Code One and Oracle Open World were run side-by-side. Attendees chose from the 2500 sessions, a majority of them featuring customers and partners who overcame real-world challenges. We also had an exhibition floor with Oracle Code One partners, and the Groundbreakers Hub, where attendees toyed around with Blockchain, IoT, AI and other emerging technologies. Personally, I felt inspired by a team of high school students who used Design Thinking, thermographic cameras, and pattern recognition to help with detecting early-stage cancer.

Java Keynote

The highlight from Day 1 was the Java Keynote. Matthew McCullough from GitHub talked about the importance of building a development community and growing the community one developer at a time. He also shared that Java has been the 2nd most popular language on GitHub, behind JavaScript. Rayfer Hazen, manager of the data pipelines team at GitHub, shared similar views on Java: “Java’s strengths in parallelism and concurrency, its performance, its type system, and its massive ecosystem all make it a really good fit for building data infrastructure.”

Developers from Oracle then unveiled Project Skara, which can be used for the code review and code management practices for the JDK (Java Development Kit).

Georges Saab, Vice President at Oracle, announced the following fulfilled commitments, which were originally made last year:

Making Java more open: remaining closed-source features have been contributed to OpenJDK
Delivering enhancements and innovation faster: Oracle is adopting a predictable 6-month cadence so that developers can access new features sooner
Continuing support for the Java ecosystem: specific releases will be provided with LTS (long-term support)

Mark Reinhold, Chief Architect of the Java Platform, then elaborated on major architectural changes to Java. Though Oracle has moved to a six-month release cadence with certain builds supported long-term (LTS builds), he reiterated that “Java is still free.” Previously closed-source features such as Application Class-Data Sharing, Java Flight Recorder, and Java Mission Control are now available as open source. Mark also showcased Java’s features to improve developer productivity and program performance in the face of evolving programming paradigms. Key projects to meet these two goals include Amber, Loom, Panama, and Valhalla.

Code One Keynote

The Code One Keynote on Tuesday was kicked off by Amit Zavery, Executive Vice President at Oracle, who elaborated on major application trends:

Microservices and Serverless Architectures, which provide better infrastructure efficiency and developer productivity
DevSecOps, with a move to NoOps, which requires a different mindset in engineering teams
The importance of open source, which was also highlighted in Mark Reinhold’s talk at the Java keynote
The need for Digital Assistants, which provide a different interface for interaction, and a different UX requirement
Blockchain-based distributed transactions and ledgers
The importance of embedding AI/ML into applications

Amit also covered Oracle Cloud Platform’s comprehensive portfolio, which spans the application development trends above, as well as other areas. Dee Kumar, Vice President for Marketing and Developer Relations at CNCF, talked about digital transformation, which depends on cloud native computing and open source.
Per Dee, Kubernetes is second only to Linux when measured by number of authors. Dee emphasized that containerization is the first step in becoming a cloud native organization. For organizations considering cloud native technology, the benefits of cloud native projects, per the CNCF bi-annual surveys, include:

Faster deployment time
Improved scalability
Cloud portability

Matt Thompson, Vice President of Developer Engagement and Evangelism, hosted a session about “Building in the Cloud.” Matt Baldwin and Manish Kapur from Oracle conducted live demos featuring chatbots/digital assistants (conversational interfaces), serverless functions, and blockchain ledgers.

Groundbreaker Panel and Award Winners

Also on Tuesday, Stephen Chin led a talk on the Oracle Groundbreakers Awards, through which Oracle seeks to recognize technology innovators. The Groundbreakers Award Winners for 2018 are:

Neha Narkhede: Co-creator of Apache Kafka
Guido van Rossum: Creator of Python
Doug Cutting: Co-creator of Hadoop
Graeme Rocher: Creator of Grails
Charles Nutter: Co-creator of JRuby

In addition, Stephen recognized the Code One stars, individuals who were the best speakers at the conference, evangelists of open source and emerging technologies, and leaders in the community.

Duke’s Choice Award Winners

The Java team, represented by Georges Saab, also announced winners of the Duke’s Choice Awards, which were given to top projects and individuals in the Java community. Award winners included:

Apache NetBeans – Toni Epple, Constantin Drabo, Mark Stephens
ClassGraph – Luke Hutchison
Jelastic – Ruslan Synytsky
JPoint – Bert Jan Schrijver
MicroProfile.io – Josh Juneau
Twitter4J – Yusuke Yamamoto
Project Helidon – Joe DiPol
BgJUG (Bulgarian Java Users Group)

Customer Spotlight

We had customers join us to talk further about their use of Oracle Cloud Platform:

Mitsubishi Electric: Leveraged Oracle Cloud for AI, IoT, SaaS, and PaaS to achieve a 60% increase in operating rate, a 55% decrease in manual processes, and an 85% reduction in floor space
CargoSmart: Used Blockchain to integrate custom ERP and Supply Chain on Oracle Database. Achieved a 65% reduction in the time taken to collect, consolidate, and confirm data.
AllianceData: Moved over 6TB to Oracle Cloud Infrastructure - PeopleSoft, EPM, Exadata, and Windows - thereby saving $1M/year in licensing and support
AkerBP: Achieved elastic scalability with Oracle Cloud, running reports in seconds instead of 20 minutes and eliminating downtimes due to database patching

Groundbreakers Hub

The Groundbreakers Hub featured a number of interesting demos on AI and chatbots, personalized manufacturing leveraging Oracle’s IoT Cloud, robotics, and even early-stage cancer detection. Here are some of the highlights.

Personalized Manufacturing using Oracle IoT Cloud

This was one of the most popular areas in the Hub. Here is how the demo worked:

A robotic arm grabbed a piece of inventory (a coaster) using a camera. The camera used computer vision to detect placement of the coaster.
The arm then moved across and dropped the coaster onto a conveyer belt.
The belt moved past a laser engraver, which engraved custom text, like your name, on the coaster.

Oracle Cloud, including IoT Cloud and SCM (Supply Chain Management) Cloud, was leveraged throughout this process to monitor the production equipment, inventory and engraving. Check out the video clip below.

3D Rendering with Raspberry Pis and Oracle Cloud

Another cool spot was the “Bullet Time” photo booth.
Fifty Raspberry Pis equipped with cameras captured images of me from all around. These images were then sent to the Oracle Cloud to be stitched together, and the final output, a video, was sent to me via SMS.

Cancer Detection by High School Students

We also had high school students from DesignTech, which is supported by the Oracle Education Foundation. Among many projects, these students created a device to detect early-stage cancer using a thermographic (heat-sensitive) camera and a touchscreen display. An impressive high school project!

Summary

Java continues to be a leading development language, and is used extensively at companies such as GitHub. To keep pace with innovation in the industry, Java is moving to a six-month release cadence. Oracle has a keen interest in emerging technologies, such as AI/ML, Blockchain, containers, serverless functions, and DevSecOps/NoOps. Oracle recognized innovators and leaders in the industry through the Groundbreakers Awards and Duke’s Choice Awards. That’s just some of the highlights from Oracle Code One 2018. We look forward to seeing you next time!


All Things Developer at Oracle Code One - Day 3

Community Matters! The Code One Avengers Keynote

When it comes to a code conference, it has to be about the community. Stephen Chin and his superheroes proved that right on stage last night with their Code Avengers keynote. The action-packed superheroes stole the thunder of Code One on Day 3. Some of us were backstage with the superheroes, and the excitement and energy were just phenomenal. We want to tell this story in pictures, but what are these Avengers fighting for? We will, of course, start with Dr. Strange's address to his fellow superheroes of code, which drew more than a quarter million viewers on Twitter. And then his troupe followed! The mural comic strips, animations, screenplay, and cast came together just brilliantly. Congrats to the entire superheroes team! Here are some highlights from the keynote to recap: https://www.facebook.com/OracleCodeOne/videos/2486105168071342/

The Oracle Code One Team Heads to CloudFest18

The remaining thunder was stolen by Portugal. The Man, Beck, and Bleachers at the CloudFest18 rock concert in AT&T Park. Jam-packed with Oracle customers, employees, and partners from TCS, the park was just electric with the music!

Hands-on Labs Kept Rolling!

The NoSQL hands-on lab in action here, delivered by the crew. One API to many NoSQL databases!

The Groundbreakers Hub was Busy!

The Hub was busy with Pepper, more Groundbreaker live interviews, video hangouts, Zip labs, Code Card pickups, bullet time photo booths, superhero escape rooms, Hackergarten, and our favorite Cloud DJ, Sonic Pi! Stephen Chin recaps what's hot at the Hub right here. And a quick run of the bullet time photo booth. Rex Wang in action! Sam Craft, our first Zip lab winner!

Code One Content in Action

Click here for a quick 30-second recap of other things on Day 3 at Oracle Code One.

- Groundbreaker live interview with Jesse Butler and Karthik Gaekwad on cloud native technologies and the Fn project: https://twitter.com/OracleDevs/status/1055169708192751616
- Groundbreaker live interview on AI and ML: https://twitter.com/OracleDevs/status/1055183021316292608
- Groundbreaker live interview on building RESTful APIs: https://twitter.com/OracleDevs/status/1055230551324483584
- Groundbreaker live interview with the design tech school on The All Jacked Up Project: https://twitter.com/OracleDevs/status/1055224092456960000
- Groundbreaker live interview on NetBeans: https://twitter.com/OracleDevs/status/1055206463809843200
- Live video hangouts on diversity in tech and women in tech: https://twitter.com/groundbreakers/status/1055161816333017088 and https://twitter.com/groundbreakers/status/1055147716903239681


All Things Developer at Oracle Code One - Day 2

Live from Oracle Code One - Day Two

There was tons of action today at Oracle Code One. From Zip labs and challenges, to an all-women developer community breakfast, the Duke's Choice Awards, the Oracle Code keynotes, and the debut Groundbreaker Awards, it was all happening at Code One. Pepper was quite busy, and so was the blockchain beer bar!

Zip Labs, Zip Lab Challenges, and Hands-on Labs

Zip labs are running all four days. So, if you want to dabble with the Oracle Cloud, or learn how you can provision the various services, go up to the second floor of Moscone West and sign up for our cloud. You can sign up for a 15-minute lab challenge on Oracle Cloud content and see your name on the leaderboard as the person to beat. Choose from labs including Oracle Autonomous Data Warehouse, Oracle Autonomous Transaction Processing, and Virtual Machines. Hands-on labs are ongoing every day, but the Container Native labs today were quite a hit.

Oracle Women's Leadership Developer Community Breakfast

We hosted a breakfast this morning with several women developers from across the globe. It was quite insightful to learn about their lives and experiences in code.

The Duke's Choice Awards and Groundbreaker Live Interviews

Georges Saab announced the Duke's Choice Award winners at Code One today. Some exciting Groundbreaker live interviews:

- Jim Grisanzio and Gerald Venzl talk about Oracle Autonomous Database
- Bob Rhubart, Ashley Sullivan, and the Design Tech students discuss the Vida Cam Project

The Oracle Code One Keynotes and Groundbreaker Awards in Pictures

Building Next-Gen Cloud Native Apps with Embedded Intelligence, Chatbots, and Containers: Amit Zavery, EVP, PaaS Development, Oracle, talks about how developers can leverage the power of the Oracle Cloud.

Making Cloud Native Computing Universal and Sustainably Harnessing the Power of Open Source: Dee Kumar, VP of CNCF, congratulates Oracle on becoming a Platinum member.

Building for the Cloud: Matt Thompson, Developer Engagement and Evangelism, Oracle Cloud Platform, talks about how a cloud works best: when it is open, secure, and all things productive for the developer.

Demos: Serverless, Chatbots, Blockchain...

Manish Kapur, Director of Product Management for Cloud Platform, showed a cool demo of a new serverless/microservices-based cloud architecture for selling and buying a car. Matt Baldwin talked about the DNA of Blockchain and how it is used in the context of selling and buying a car.

And the Oracle Code One Groundbreaker Awards go to:

Stephen Chin, Director of Developer Community, announced the debut Groundbreaker Awards and moderated a star panel with the winners. We had more than 200K viewers of this panel on the Oracle Code One Twitter live stream today! There were lots of interesting and diverse questions for the panel from the Oracle Groundbreaker Twitter channel. For more information on Oracle Groundbreakers, click here. And now, moving on to Day 3 of Code One!


Cloud

Oracle Database 18c XE on Oracle Cloud Infrastructure: A Mere Yum Install Away

It's a busy week at OpenWorld 2018. So busy that we didn't get around to mentioning that Oracle Database 18c Express Edition is now available on Oracle Cloud Infrastructure (OCI) yum servers! This means it's easy to install this full-featured Oracle Database for developers on an OCI compute shape without incurring any extra networking charges. In this blog post I demonstrate how to install, configure, and connect to Oracle Database 18c XE on OCI.

Installing Oracle Database 18c XE on Oracle Cloud Infrastructure

From a compute shape in OCI, grab the latest version of the repo definition from the yum server local to your region as follows:

cd /etc/yum.repos.d
sudo mv public-yum-ol7.repo public-yum-ol7.repo.bak
export REGION=`curl http://169.254.169.254/opc/v1/instance/ -s | jq -r '.region' | cut -d '-' -f 2`
sudo -E wget http://yum-$REGION.oracle.com/yum-$REGION-ol7.repo

Enable the ol7_oci_included repo:

sudo yum-config-manager --enable ol7_oci_included

Here you see the Oracle Database 18c XE RPM is available in the yum repositories:

$ yum info oracle-database-xe-18c
Loaded plugins: langpacks, ulninfo
Available Packages
Name        : oracle-database-xe-18c
Arch        : x86_64
Version     : 1.0
Release     : 1
Size        : 2.4 G
Repo        : ol7_oci_included/x86_64
Summary     : Oracle 18c Express Edition Database
URL         : http://www.oracle.com
License     : Oracle Corporation
Description : Oracle 18c Express Edition Database

Let's install it.

$ sudo yum install oracle-database-xe-18c
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-xe-18c.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================
 Package                     Arch          Version        Repository                Size
=========================================================================================================
Installing:
 oracle-database-xe-18c      x86_64        1.0-1          ol7_oci_included         2.4 G

Transaction Summary
=========================================================================================================
Install  1 Package

Total download size: 2.4 G
Installed size: 5.2 G
Is this ok [y/d/N]: y
Downloading packages:
oracle-database-xe-18c-1.0-1.x86_64.rpm                                     | 2.4 GB  00:01:13
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-database-xe-18c-1.0-1.x86_64                                         1/1
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure Oracle Database XE, optionally modify the parameters in '/etc/sysconfig/oracle-xe-18c.conf' and then execute '/etc/init.d/oracle-xe-18c configure' as root.
  Verifying  : oracle-database-xe-18c-1.0-1.x86_64                                         1/1

Installed:
  oracle-database-xe-18c.x86_64 0:1.0-1

Complete!
$

Configuring Oracle Database 18c XE

With the software now installed, the next step is to configure it:

$ sudo /etc/init.d/oracle-xe-18c configure
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: **************
Enter SYSTEM user password: ************
Enter PDBADMIN User Password: **************
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete.
For details check the logfiles at: /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name: XE
System Identifier(SID): XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.

Connect to Oracle Database using one of the connect strings:
Pluggable database: instance-20181023-1035/XEPDB1
Multitenant container database: instance-20181023-1035

Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE

Connecting to Oracle Database 18c XE

To connect to the database, use the oraenv script to set the necessary environment variables, entering XE as the ORACLE_SID.

$ . oraenv
ORACLE_SID = [opc] ? XE
ORACLE_BASE environment variable is not being set since this information is not available for the current user ID opc.
You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base has been set to /opt/oracle/product/18c/dbhomeXE
$

Then, connect as usual using sqlplus:

$ sqlplus sys/OpenWorld2018 as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Tue Oct 23 19:13:23 2018
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL> select 1 from dual;

         1
----------
         1

SQL>

Conclusion

Whether you are a developer looking to get started quickly with building applications on your own full-featured Oracle Database, or an ISV prototyping solutions that require an embedded database, installing Oracle Database XE on OCI is an excellent way to get started. With Oracle Database 18c XE available as an RPM inside OCI via yum, it doesn't get any easier.
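One parting tip: the configuration output above also lists a pluggable database service, XEPDB1. If you'd rather work directly in the PDB, a minimal sketch of an EZConnect-style connection, assuming the default listener port of 1521 and the password chosen during configuration, looks like this:

$ sqlplus system/OpenWorld2018@//localhost:1521/XEPDB1

SQL> show con_name

CON_NAME
------------------------------
XEPDB1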


APIs

All Things Developer at Oracle CodeOne - Day 1 Recap

Live from Oracle CodeOne - Day One

A lot of action, energy, and fun here on the first day at Oracle CodeOne 2018. From all the fun at the Developers Exchange to the cool things at the Groundbreakers Hub, we've covered it all for you! So, let's get started! Here's a one-minute recap that tells the day one story, in, well, a minute!

The Groundbreakers Hub

We announced a new developer brand at Oracle CodeOne today, and it is... Groundbreakers, yes, you got it! The Groundbreakers Hub is the lounge for developers, the nerds, the geeks, and the tech enthusiasts: anyone who wants to hang out with the fellow developer community. There's the Groundbreaker live stage, where we got customers talking about their experience with the Oracle Cloud; we got over 30 great stories on record today. Kudos to the interviewers, Javed Mohammed and Bob Rhubart. The video hangouts were a casual, sassy corner to share stories of the code you built, the best app you created, the best developer you met, or the most compelling lesson you've ever learned. Don't forget to chat with Pepper, our chatbot that will tell you what's on at CodeOne, or anything at all. Also, check out the commit mural that commemorates the first annual CodeOne and the new Groundbreaker community. There's some blockchain beer action too! Select the beer you want to taste using the Blockchain app and learn all about its origins!

The Keynotes and Sessions

Keynote: The Future of Java is Today. The BIG keynote first! An all-things-Java keynote by Mark Reinhold (Chief Architect of the Java Platform Group at Oracle) and Georges Saab (VP of Development at Oracle). It was a full house of developers who flocked to a very informative session on how Java is evolving to meet developers' needs: more secure, more stable, and richer. There was a lot of insight into the new enhancements around Java, with recent additions to the language and the platform. Matthew McCullough (Vice President of Field Services, GitHub) and Rafer Hazen (Data Pipelines Engineering Manager, GitHub) also talked about GitHub, Java, and the OpenJDK collaboration. We streamed this live via our social channels, with a viewership of about half a million developers worldwide! Here are some snippets of the backstage excitement from the crew.

Big Session Today: Emerging Trends and Technologies with Modern App Dev. Siddhartha Agarwal took the audience through all things app dev at Oracle: cloud-native application development, DevSecOps, AI and conversational AI, open source software, the Blockchain platform, and more! He was supported by Suhas Uliyar (VP, Bot AI and Mobile Product Management) and Manish Kapur (Director of App Dev, Product Management), who told this modern app dev story via demos.

The Developer's Exchange

Lots of good tech (and swag) on the Oracle Developers Exchange floor that developers could flock to: Pivotal, JFrog, IBM, Red Hat, AppDynamics, Datadog... the list goes on. But check out a few fashionable booths right here. Now, onto day two, Tuesday (10/23)! Lots of keynotes, fireside chats, DJ and music, demos, hubs, and labs await! Thanks to Anaplan, who provided delicious free food, snacks, and drinks to all the visitors who checked in with them!


APIs

All Things Developer at Oracle CodeOne. Spotlight APIs.

Code One - It’s Here! We’re just a few days away from Oracle’s biggest conference for developers, now known as Code One. JavaOne morphed into Code One to extend support for more developer ecosystems: languages, open-source projects, and cloud-native foundations. So, first, the plugs: if you’d like to be a part of the Oracle Code One movement and have not already registered, you can still do it. You can get lost, yes! It’s a large conference with lots of sessions and other moving parts, but we’ve tried to make things simple for you here to plan your calendar. Look through these to find the right tracks and session types for you. There are some exciting keynotes you don’t want to miss: The Future of Java, the Power of Open Source, Building Next-Gen Cloud Native Apps using Emerging Tech, the Groundbreaker Code Avengers sessions, and fireside chats! And now for the fun stuff, because our conference is not complete without that: there’s the CloudFest! Get ready to be up all night with Beck, Portugal. The Man, and Bleachers. And if you are up, get your nerdy kids to the code camp over the weekend. It’s Oracle Code for Kids time, inspiring the next generation of developers!

The prelude to Code One wouldn’t be complete without talking about the Groundbreakers Hub. A few things that you HAVE to check out:

- The Blockchain Beer: try beers that were brewed using Blockchain technology, which enabled the microbrewery to accurately estimate the correct combination of raw materials to create different types of beer. Then vote for your favorite beer on our mobile app; it’s pretty cool!
- The bullet time photo booth.
- The chatbot with Pepper.
- The Code Card: an IoT card that you can program using Fn Project serverless technologies. It will have an embedded WiFi chip, an e-ink screen, and a few fun buttons.

Catch all the Hub action if you're there!

The Tech that Matters to Developers: Powerful APIs

We’ve talked about a lot of tech here, but there are a few things that are closer to the developer’s heart: things that make their life more straightforward, and stuff that they use every hour. One such technology is the API. I am not going to explain what APIs are, because if you are a developer, you know this. APIs are a mechanism that helps you dial down on heavy-duty code and add powerful functionality to a website, app, or platform without extensive coding, including only the API code; there, I said it. But even for developers, it is essential to understand the system of engagement around designing and maintaining sound and healthy APIs. The cleaner the API, the better the associated user experience and the performance of the app or platform in question. Since APIs are reusable, it is essential to understand what goes into making an API an excellent one. And different types of APIs require different types of love.

API Strategy with Business Outcomes

First, there is a class of APIs that power chatbots and the digital experience of customers, where the UX becomes one of the most significant driving factors. Second, APIs help to monetize existing data and assets; here, organizations treat the API as a product, dealing with performance, scale, policy, and governance so that consumers have an API 360 experience. Third and fourth, APIs are used for operational efficiency and cost savings, and they are also used for creating exchange/app ecosystems, like the app stores!
So now, taking these four areas and establishing a business outcome is critical to driving the API strategy. And the API strategy entails good design, as you’ll hear in Robert’s podcast below.

Design Matters Podcast by Robert Wunderlich

Beyond Design: Detailing the API Lifecycle

Once you have followed the principles of good API design and established the documentation based on the business outcome, it comes down to lifecycle management: building, deploying, and governing APIs, managing them for scale and performance, and looping the analytics back in to deliver the expected experience. And then, on the other side, there is consumption, where developers should be able to discover these APIs and start using them. And then there’s the Oracle way with APIs. Vikas Anand, VP of Product Management for SOA, Integration, and API Cloud, tells how this happens.

API 360 Podcast by Vikas Anand

API Action at Code One

A lot is happening there! Hear from the customers directly on how Oracle’s API Cloud has helped them design and manage world-class APIs. Here are a few do-not-miss sessions, but you can always visit the Oracle Code page to discover more. See you there!

- How Rabobank is using APICS to Achieve API Success
- How RTD Connexions and Graco are using APICS to Achieve API Success
- How Ikea is using APICS to Achieve API Success
- Keynote: AI Powered Autonomous Integration Cloud and API Cloud Service
- API Evolution Challenges by NFL
- Evolutionary Tales of API by NFL
- Vector API for Java by Netflix


Cloud

Using Kubernetes in Hybrid Clouds -- Join Us @ Oracle OpenWorld

By now you have probably heard the term cloud native. The Cloud Native Computing Foundation (CNCF) defines cloud native as a set of technologies that “empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” Cloud native is characterized by the use of containers and small, modular services (microservices) which are managed by orchestration software. In the following blog post, we will cover the relationship between containers, Kubernetes, and hybrid clouds. For more on this topic, please join us at Oracle OpenWorld for Kubernetes in an Oracle Hybrid Cloud [BUS5722].

Containers and Kubernetes

In the most recent CNCF survey of 2,400 respondents, use of cloud native technologies in production has grown by over 200%, and 45% of companies run 250 or more containers. Leveraging many containerized applications requires orchestration software that can run, deploy, scale, monitor, manage, and provide high availability for hundreds or thousands of microservices. These microservices are easier and faster to develop and upgrade, since development and updates of each microservice can be completed independently, without affecting the overall application. Once a new version of a microservice is tested, it can then be pushed into production to replace the existing version without any downtime.

Hybrid Clouds

Hybrid clouds can reduce downtime and ensure application availability. For example, in a hybrid cloud model you can leverage an on-premises datacenter for your production workloads, and leverage Availability Domains in Oracle Cloud for your DR deployments to ensure that business operations are not affected by a disaster. Whereas in a traditional on-premises datacenter model you would hire staff to manage each of your geographically dispersed datacenters, you can now offload the maintenance of infrastructure and software to a public cloud vendor such as Oracle Cloud. In turn, this reduces your operational costs of managing multiple datacenter environments.

Why Kubernetes and Hybrid Clouds are like Peanut Butter and Jelly

To make the best use of a hybrid cloud, you need to be able to easily package an application so that it can be deployed anywhere; in other words, you need portability. Docker containers provide the easiest way to do this, since they package the application and its dependencies to run in any environment, whether on-premises datacenters or public clouds. At the same time, they are more efficient than virtual machines (VMs), as they require less compute, memory, and storage. This makes them more economical and faster to deploy than VMs.

Oracle’s Solution for Hybrid Clouds

Oracle Cloud is a public cloud offering with multiple services for containers, including Oracle Container Engine for Kubernetes (OKE). OKE is certified by CNCF, and is managed and maintained by Oracle. With OKE, you can get started quickly with a continuously up-to-date container orchestration platform: just bring your container apps. For hybrid use cases, you can couple Kubernetes in your data center with OKE, and then move or mix workloads as needed. To get more details and real-world insight into OKE and hybrid use cases, please join us at Oracle OpenWorld for the following session, where Jason Looney from Beeline will be presenting with David Cabelus from Oracle Product Management:

Kubernetes in an Oracle Hybrid Cloud [BUS5722]
Wednesday, Oct 24, 4:45 p.m. - 5:30 p.m. | Moscone South - Room 160
David Cabelus, Senior Principal Product Manager, Oracle
Jason Looney, VP of Enterprise Architecture, Beeline
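To make the portability point concrete, here is a minimal, hypothetical Kubernetes Deployment manifest; the image name and labels are illustrative, but a manifest like this runs unchanged on an on-premises cluster or on OKE:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                 # orchestrator keeps three copies running for availability
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        # hypothetical image; the container packages the app and its dependencies
        image: registry.example.com/web-app:1.0
        ports:
        - containerPort: 8080

Applying it with kubectl (for example, kubectl apply -f web-app.yaml) is the same operation against either cluster, which is exactly what makes containers and hybrid clouds such a natural pairing.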


Get going quickly with Command Line Interface for Oracle Cloud Infrastructure using Docker container

Originally published at technology.amis.nl on October 14, 2018.

Oracle Cloud Infrastructure is Oracle’s second-generation infrastructure-as-a-service offering that supports many components, including compute nodes, networks, storage, Kubernetes clusters, and Database as a Service. Oracle Cloud Infrastructure can be administered through a GUI (a browser-based console) as well as through a REST API and the OCI Command Line Interface. Oracle also offers a Terraform provider that allows automated, scripted provisioning of OCI artefacts. This article describes an easy approach to get going with the Command Line Interface for Oracle Cloud Infrastructure, using the oci-cli Docker image. Using a Docker container image and a simple configuration file, oci commands can be executed without having to locally install and update the OCI Command Line Interface (and the Python runtime environment) itself.

These are the steps to get going on a Linux or Mac host that contains a Docker engine:

- Create a new user in OCI (or use an existing user) with appropriate privileges; you need the OCID for the user
- Also make sure you have the name of the region and the OCID for the tenancy on OCI
- Execute a docker run command to prepare the OCI CLI configuration file
- Update the user in OCI with the public key created by the OCI CLI setup action
- Edit the .profile to associate the oci command line instruction on the Docker host with running the OCI CLI Docker image

At that point, you can locally run any OCI CLI command against the specified user and tenant, using nothing but the Docker container that contains the latest version of the OCI CLI and the required runtime dependencies. In more detail, the steps look like this:

Create a new user in OCI (or use an existing user) with appropriate privileges

You can reuse an existing user or create a fresh one, which is what I did. I performed this step in the OCI Console. I then added this user to the group Administrators, and I noted the OCID for this user. Also make sure you have the name of the region and the OCID for the tenancy on OCI.

Execute a docker run command to prepare the OCI CLI configuration file

On the Docker host machine, create a directory to hold the OCI CLI configuration files. These files will be made available to the CLI tool by mounting the directory into the Docker container.

mkdir ~/.oci

Run the following Docker command:

docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci -it stephenpearson/oci-cli:latest setup config

This starts the OCI CLI container in interactive mode, with the ~/.oci directory mounted into the container at /root/.oci, and executes the setup config command on the OCI CLI (see https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/setup/config.html). This command starts a dialog that results in the OCI config file being written to /root/.oci inside the container and to ~/.oci on the Docker host. The dialog also results in a private and a public key file, in that same directory.
Here is the content of the config file that the dialog has generated on the Docker host:

Update the user in OCI with the public key created by the OCI CLI setup action

The contents of the file that contains the public key (~/.oci/oci_api_key_public.pem in this case) should be configured on the OCI user (kubie in this case) as an API Key.

Create a shortcut command for the OCI CLI on the Docker host

We did not install the OCI CLI on the Docker host, but we can still make it possible to run the CLI commands as if we did. If we edit the .profile file to associate the oci command line instruction on the Docker host with running the OCI CLI Docker image, we get the same experience on the host command line as if we had installed the OCI CLI. Edit ~/.profile and add this line:

oci() { docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci stephenpearson/oci-cli:latest "$@"; }

On the Docker host I can now run OCI CLI commands; these are sent to the Docker container, which uses the configuration in ~/.oci for connecting to the OCI instance.

Run OCI CLI commands on the host

We are now set to run OCI CLI commands, even though we did not actually install the OCI CLI and the Python runtime environment. Note: most commands we run will require us to pass the compartment ID of the OCI compartment against which we want to perform an action. It is convenient to set an environment variable with the compartment OCID value and then refer to the variable in all CLI commands. For example:

export COMPARTMENT_ID=ocid1.tenancy.oc1..aaaaaaaaot3ihdt

Now to list all policies in this compartment:

oci iam policy list --compartment-id $COMPARTMENT_ID --all

And to create a new policy, one that I need in order to provision a Kubernetes cluster:

oci iam policy create --name oke-service --compartment-id $COMPARTMENT_ID --statements '[ "allow service OKE to manage all-resources in tenancy" ]' --description 'policy for granting rights on OKE to manage cluster resources'

Or to create a new compartment:

oci iam compartment create --compartment-id $COMPARTMENT_ID --name oke-compartment --description "Compartment for OCI resources created for OKE Cluster"

From here on, it is just regular OCI CLI work, just as if it had been installed locally. But by using the Docker container, we keep our system tidy and we can easily benefit from the latest version of the OCI CLI at all times.

Resources

OCI CLI Command Reference: https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/index.html
Terraform Provider for OCI: https://www.terraform.io/docs/providers/oci/index.html
GitHub repo for OCI CLI Docker: https://github.com/stephenpearson/oci-cli


Java

Matrix Bullet Time Demo Take Two

By Christopher Bensen and Noel Portugal, Cloud Experience Developers

If you are attending Oracle Code One or OpenWorld 2018, you will be happy to hear that the Matrix Bullet Time Demo will be there. You can experience it by coming to Moscone West, in the Developer Exchange, inside the Groundbreakers Hub. Last year we went into the challenges of building the Matrix Bullet Time Demo (https://developer.oracle.com/java/bullet-time). A lot of problems were encountered after that article was published, so this year we pulled the demo out of storage, dusted it off, and began refurbishing it so it could make a comeback. The first challenge was trying to remember how it all worked.

Let's back up a bit and describe what we've built here so you don't have to read the previous article. The idea is to create a demo that takes a simultaneous photo from cameras placed around a subject, then stitches these photos together to form a movie. The intended final effect is for it to appear as though the camera is moving around a subject frozen in time. To do this we used 60 individual Raspberry Pi 3 single-board computers with Raspberry Pi cameras.

Besides all of the technical challenges, there are some logistical challenges. When set up, the demo is huge! It forms a ten-foot-diameter circle and needs even more space for the mounting system. Not only is it huge, it's delicate: wires big and small go everywhere. Fifteen Raspberry Pi 3s are mounted to each of the four lighting track gantries, and they are precarious at best. And to top it off, we have to transport this demo to wherever we are going to set it up, and back. An absolutely massive crate was built that requires an entire truck. Because of these logistical challenges, the demo was only used at OpenWorld and the keynote at JavaOne.

Last year at OpenWorld the demo was not working for the full length of the show. One of the biggest reasons is that aligning 60 cameras to a single point is difficult at best, and impossible with a precariously delicate mounting system. So software image stabilization was written, which was done by Richard Bair on the floor under the demo.

If you read the previous article about Bullet Time, then you'd know a lighting track system was used to provide power. One of the benefits of using a lighting track system is that it handles power distribution. You provide the 120-volt AC input power to the track, and it carries that power through copper wires built into the track. At any point where you want to have a light, you use a mount designed for the track system, which transfers the power through the mount to the light. In our case, a 48-volt DC power supply sends 20 amps through the wires designed for 120 volts AC, and each camera has a small voltage regulator to step that down to the 5 volts DC required for a Raspberry Pi. The brilliance of this system is that it makes it easy to send power, with the camera shutter release and photo transfer handled over WiFi. Unfortunately, WiFi is unreliable at a conference (there are far too many devices jamming up the spectrum), so that required running individual Ethernet cables to each camera, which is what we were trying to avoid by using the lighting track system. So we ended up with an Ethernet harness strapped to the track. Once we opened up the crate and set up Bullet Time, only one camera was not functioning.

On the software side there are four parts:

1. A tablet that the user interacts with, providing a name and an optional mobile number, plus a button to start the countdown to take the photo.
2. The Java server, which receives the countdown and sends out a UDP packet telling the Raspberry Pi cameras to take a photo. The server also receives the photos and stitches them together to make the video.
3. Python code running on each Raspberry Pi, which listens for a UDP packet telling it to take a photo and where to send it.
4. The cloud software, which uploads the video to a YouTube channel and sends a text message with the link to the user.

The overall system works like this:

1. The user would input their name on the Oracle JavaScript Extension Toolkit (Oracle JET) web UI we built for this demo, which is running on a Microsoft Surface tablet.
2. The user would then click a button on the Oracle JET web UI to start a 10-second countdown.
3. The web UI would invoke a REST API on the Java server to start the countdown.
4. After a 10-second delay, the Java server would send a multicast message to all the Raspberry Pi units at the same moment, instructing them to take a picture.
5. Each camera would take a picture and send the picture data back up to the server.
6. The server would make any adjustments necessary to the picture (see below), and then, using FFMPEG, the server would turn those 60 images into an MP4 movie.
7. The server would respond to the Oracle JET web UI's REST request with a link to the completed movie.
8. The Oracle JET web UI would display the movie.

In general, this system worked really well. The primary challenge that we encountered was getting all 60 cameras to focus on exactly the same point in space. If the cameras were not precisely focused on the same point, then it would seem like the "virtual" camera (the resulting movie) would jump all over the place. One camera might be pointed a little higher, the next a little lower, the next a little left, and the next rotated a little. This would create a disturbing "bouncy" effect in the movie. We took two approaches to solve this.

First, each Raspberry Pi camera was mounted with a series of adjustable parts, such that we could manually visit each Raspberry Pi and adjust the yaw, pitch, and roll of each camera. We would place a tripod with a pyramid target mounted to it in the center of the camera helix as a focal point, and, using a hand-held HDMI monitor, we visited each camera to manually adjust the cameras as best we could to line them all up on the pyramid target. Even so, this was only a rough adjustment, and the resulting videos were still very bouncy.

The next approach was a software-based approach to adjusting the translation (pitch and yaw) and rotation (roll) of the camera images. We created a JavaFX app to help configure each camera with settings for how much translation and rotation were necessary to perfectly line up each camera on the same exact target point. Within the app, we would take a picture from the camera. We would then click the target location, and the software would know how much it had to adjust the x and y axes for that point to end up in the dead center of each image. Likewise, we would rotate the image to line it up relative to a "horizon" line that was superimposed on the image. We had to visit each of the 60 cameras to perform both the physical and virtual configuration. Then at runtime, the server would query the cameras to get their adjustments. Then, when images were received from the cameras (see step 6 above), we used the Java 2D API to transform those images according to the translation and rotation values previously configured.
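The transform itself is conceptually simple with the Java 2D API. Here is a minimal sketch, not the original demo code, of how a per-camera correction could be applied; the offsets (dx, dy) in pixels and the rotation in degrees stand in for the calibration values described above:

import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class FrameAligner {

    /**
     * Applies a per-camera correction: shift the image by (dx, dy) so the
     * target point lands in the center, and rotate by 'degrees' around the
     * image center to level the horizon.
     */
    public static BufferedImage align(BufferedImage src, double dx, double dy, double degrees) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        AffineTransform t = new AffineTransform();
        // rotate around the center of the frame, then apply the translation
        t.rotate(Math.toRadians(degrees), src.getWidth() / 2.0, src.getHeight() / 2.0);
        t.translate(dx, dy);
        g.drawImage(src, t, null);
        g.dispose();
        return out;
    }
}

With each of the 60 frames run through a method like this before being handed to FFMPEG, the "virtual" camera stays locked on the same point as it sweeps around the subject.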
We also had to crop the images, so we adjusted each Raspberry Pi camera to take the highest resolution image possible and then cropped it to 1920x1080 for the resulting hi-def movie. If we were to build Bullet Time version 2.0, we'd make a few changes, such as powering the Raspberry Pis using PoE, replacing the lighting track with a stronger, less flexible rolled-aluminum square tube in eight sections rather than four, and upgrading the camera module with a better lens. But overall this is a fun project with a great user experience.


APIs

Microservices From Dev To Deploy, Part 3: Local Deployment & The Angular UI

In this series, we're taking a look at how microservice applications are built. In part 1 we learned about Helidon, the new open source framework from Oracle, and how it can be used with both Java and Groovy in either a functional, reactive style or a more traditional MicroProfile manner. Part 2 acknowledged that some dev teams have different strengths and preferences, and that one team in our fictional scenario used NodeJS with the ExpressJS framework to develop their microservice. Yet another team in the scenario chose Fn, another awesome Oracle open source technology, to add serverless to the application architecture. Here is an architecture diagram to help you better visualize the overall picture:

It may be a contrived and silly scenario, but I think it properly represents the diversity of skills and preferences that are the true reality of many teams building software today. Our ultimate destination in this journey is a deployment of all the divergent pieces of this application on the Oracle Cloud, and we're nearly at that point. But before we get there, let's take a look at how all of the backend services that have been developed come together in a unified frontend.

Before we get started, if you're playing along at home you might want to first make sure you have access to a local Kubernetes cluster. For testing purposes, I've built my own cluster using a few Raspberry Pis (following the instructions here), but you can get a local testing environment up and running with minikube pretty quickly. Don't forget to install kubectl; you'll need the command line tools to work with the cluster that you set up.

With the environment set up, let's revisit Chris' team, who you might recall from part 1 built out a weather service backend using Groovy with Helidon SE. The Gradle 'assemble' task gives them their JAR file for deployment, but Helidon also includes a few other handy features: a Docker build file and a Kubernetes YAML template to speed up deploying to a K8s cluster. When you use the Maven archetype (as Michiko's team did in part 1), the files are automatically copied to the 'target' directory along with the JAR; but since Chris' team is using Groovy with Gradle, they had to make a slight modification to the build script to copy the templates and slightly modify the paths within them. The build.gradle script they used now includes the following tasks:

task copyDocker(type: Copy) {
    from "src/main/docker"
    into "build"
    doLast {
        def d = new File('build/Dockerfile')
        def dfile = d.text.replaceAll('\\$\\{project.artifactId\\}', project.name)
        dfile = dfile.replaceAll("COPY ${project.name}", "COPY libs/${project.name}")
        d.write(dfile)
    }
}

task copyK8s(type: Copy) {
    from "src/main/k8s"
    into "build"
    doLast {
        def a = new File('build/app.yaml')
        def afile = a.text.replaceAll('\\$\\{project.artifactId\\}', project.name)
        a.write(afile)
    }
}

copyLibs.dependsOn jar
copyDocker.dependsOn jar
copyK8s.dependsOn jar
assemble.dependsOn copyLibs
assemble.dependsOn copyDocker
assemble.dependsOn copyK8s

So now, when Chris' team performs a local build, they receive a fully functional Dockerfile and app.yaml file to help them quickly package the service into a Docker container and deploy that container to a Kubernetes cluster.
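In shell terms, that packaging and deployment might look something like the following sketch; the registry host and image tag are hypothetical, stand-ins for whatever registry your cluster pulls from:

# build the JAR; the Helidon Dockerfile and app.yaml templates land in build/
gradle assemble

# package the service into a container image and push it to a registry
docker build -t registry.example.com/weather-service:1.0 build/
docker push registry.example.com/weather-service:1.0

# create the Kubernetes deployment from the generated template
kubectl create -f build/app.yaml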
The process now becomes:

1. Write code
2. Test code
3. Build the JAR (gradle assemble)
4. Build the Docker container (docker build / docker tag)
5. Push to the Docker registry (docker push)
6. Create the Kubernetes deployment (kubectl create)

Which, if condensed into a quick screencast, looks something like this:

When the process is repeated for the rest of the backend services, the frontend team led by Ava is able to integrate the backend services into the Angular 6 frontend that they have been working on. They start by specifying the deployed backend base URLs in their environment.ts file. Angular uses this file to provide a flexible way to manage global application variables that have different values per environment. For example, an environment.prod.ts file can have its own set of production-specific values that will be substituted when an `ng build --prod` is performed. The default environment.ts is used if no environment is specified, so the team uses that file for development and has set it up with the following values:

export const environment = {
  production: false,
  stockApiBaseUrl: 'http://192.168.0.160:31002',
  weatherApiBaseUrl: 'http://192.168.0.160:31000',
  quoteApiBaseUrl: 'http://192.168.0.160:31001',
  catApiBaseUrl: 'http://localhost:31004',
};

The team then creates services corresponding to each microservice. Here's the weather.service.ts:

import {Injectable} from '@angular/core';
import {HttpClient} from '@angular/common/http';
import {environment} from '../../environments/environment';

@Injectable({
  providedIn: 'root'
})
export class WeatherService {
  private baseUrl: string = environment.weatherApiBaseUrl;

  constructor(
    private http: HttpClient,
  ) { }

  getWeatherByCoords(coordinates) {
    return this.http
      .get(`${this.baseUrl}/weather/current/lat/${coordinates.lat}/lon/${coordinates.lon}`);
  }
}

And they call the services from the view component:

getWeather() {
  this.weather = null;
  this.weatherLoading = true;
  this.locationService.getLocation().subscribe((result) => {
    const response: any = result;
    const loc: Array<string> = response.loc.split(',');
    const lat: string = loc[0];
    const long: string = loc[1];
    console.log(loc);
    this.weatherService.getWeatherByCoords({lat: lat, lon: long})
      .subscribe(
        (weather) => {
          this.weather = weather;
        },
        (error) => {},
        () => {
          this.weatherLoading = false;
        }
      );
  });
}

Once they've completed this for all of the services, the corporate vision of a throwback homepage is starting to look like a reality:

In three posts we've followed TechCorp's journey in developing an internet homepage application from idea, to backend service creation, and on to integrating the backend with a modern JavaScript-based frontend built with Angular 6. In the next post of this series we will see how this technologically diverse application can be deployed to Oracle's Cloud.


Containers, Microservices, APIs

Microservices From Dev To Deploy, Part 2: Node/Express and Fn Serverless

In our last post, we were introduced to a fictional company called TechCorp, run by an entrepreneur named Lydia whose goal is to bring the world back to the glory days of the internet homepage. Lydia's global team of remarkable developers are implementing her vision with a microservice architecture, and we learned about Chris and Michiko, who have teams in London and Tokyo. These teams built out a weather and a quote service using Helidon, a microservice framework by Oracle. Chris' team used Helidon SE with Groovy, and Michiko's team chose Java with Helidon MP. In this post, we'll look at Murielle and her Bangalore crew, who are building a stock service using NodeJS with Express, and Dominic and the Melbourne squad, who have the envious task of building out a random cat image service with Java and Oracle Fn (a serverless technology).

It's clear Helidon makes both functional and MicroProfile-style services straightforward to implement. But, despite what I personally may have thought five years ago, it is getting impossible to ignore that NodeJS has exploded in popularity. Stack Overflow's most recent survey shows over 69% of respondents selecting JavaScript as the "Most Popular Technology" among Programming, Scripting and Markup Languages, and Node comes in atop the "Framework" category with greater than 49% of respondents preferring it. It's a given that people are using JavaScript on the frontend, and it's more and more likely that they are taking advantage of it on the backend, so it's no surprise that Murielle's team decided to use Node with Express to build out the stock service.

We won't dive too deep into the Express plumbing for this service, but let's have a quick look at the method to retrieve the stock quote:

var express = require('express');
var router = express.Router();
var config = require('config');
var fetch = require("node-fetch");

/* GET stock quote */
/* jshint ignore:start */
router.get('/quote/:symbol', async (req, res, next) => {
  const symbol = req.param('symbol');
  const url = `${config.get("api.baseUrl")}/?function=GLOBAL_QUOTE&symbol=${symbol}&apikey=${config.get("api.apiKey")}`;
  try {
    const response = await fetch(url);
    const json = await response.json();
    res.send(json);
  } catch (error) {
    res.send(JSON.stringify(error));
  }
});
/* jshint ignore:end */

module.exports = router;

Using fetch (in an async manner), this method calls the stock quote API, passes along the symbol that it received via the URL parameters, and returns the stock quote as a JSON string to the consumer. Here's how that might look when we hit the service locally:

Murielle's team can expand the service in the future to provide historical data, cryptocurrency lookups, or whatever the business needs demand, but for now it provides a current quote based on the symbol it receives. The team creates a Dockerfile and a Kubernetes config file for deployment, which we'll take a look at in the future.

Dominic's team down in Melbourne has been doing a lot of work with serverless technologies. Since they've been tasked with a priority feature (random cat images), they feel that serverless is the way to go to deliver it, and they set about using Fn to build the service. It might seem out of place to consider serverless in a microservice architecture, but it undoubtedly has a place and fulfills the stated goals of the microservice approach: flexible, scalable, focused, and rapidly deployable.
Dominic’s team has done all the research on serverless and Fn and is ready to get to work, so the developers installed a local Fn server and followed the quickstart for Java to scaffold out a function. Once the project was ready to go, Dominic’s team modified the func.yaml file to set up some configuration for the project, notably apiBaseUrl and apiKey:

schema_version: 20180708
name: cat-svc
version: 0.0.47
runtime: java
build_image: fnproject/fn-java-fdk-build:jdk9-1.0.70
run_image: fnproject/fn-java-fdk:jdk9-1.0.70
cmd: codes.recursive.cat.CatFunction::handleRequest
format: http
config:
  apiBaseUrl: https://api.thecatapi.com/v1
  apiKey: [redacted]
triggers:
- name: cat
  type: http
  source: /random

The CatFunction class is basic. A setUp() method, annotated with @FnConfiguration, gives access to the function context (which contains the config info from the YAML file) and initializes the variables for the function. Then the handleRequest() method makes the HTTP call, again using a client library called Unirest, and returns the JSON containing the link to the crucial cat image.

public class CatFunction {

    private String apiBaseUrl;
    private String apiKey;

    @FnConfiguration
    public void setUp(RuntimeContext ctx) {
        apiBaseUrl = ctx.getConfigurationByKey("apiBaseUrl").orElse("");
        apiKey = ctx.getConfigurationByKey("apiKey").orElse("");
    }

    public OutputEvent handleRequest(String input) throws UnirestException {
        String url = apiBaseUrl + "/images/search?format=json";
        HttpResponse<JsonNode> response = Unirest
                .get(url)
                .header("Content-Type", "application/json")
                .header("x-api-key", apiKey)
                .asJson();
        OutputEvent out = OutputEvent.fromBytes(
                response.getBody().toString().getBytes(),
                OutputEvent.Status.Success,
                "application/json"
        );
        return out;
    }
}

To test the function, the team deploys it locally with:

fn deploy --app cat-svc --local

And tests that it is working:

curl -i \
  -H "Content-Type: application/json" \
  http://localhost:8080/t/cat-svc/random

Which produces:

HTTP/1.1 200 OK
Content-Length: 112
Content-Type: application/json
Fn_call_id: 01CRGBAH56NG8G00RZJ0000001
Xxx-Fxlb-Wait: 502.0941ms
Date: Fri, 28 Sep 2018 15:04:05 GMT

[{"id":"ci","categories":[],"url":"https://24.media.tumblr.com/tumblr_lz8xmo6xYV1r0mbi6o1_500.jpg","breeds":[]}]

Success! Dominic’s team created the cat service before lunch and spent the rest of the day looking at random cat pictures.

Now that all four teams have implemented their respective services using various technologies, you might be asking yourself why it was necessary to implement such trivial services on the backend instead of calling the third-party APIs directly from the frontend. There are several reasons; let's take a look at just a few of them.

First, third-party APIs can be unreliable and/or rate limited. By proxying each API through their own backend, the teams can apply caching and rate limiting of their own design, reducing demand on the third-party API and working around potential downtime or rate limits for a service over which they have limited or no control.

Second, the teams are given the luxury of controlling the data before it's sent to the client. If it is allowed within the API terms, and the business needs require them to supplement the data with other third-party or user data, they can reduce the client's CPU, memory, and bandwidth demands by augmenting or modifying the data before it even gets to the client.
Finally, CORS restrictions in the browser can be circumvented by calling the API from the server (and if you've ever had CORS block your HTTP calls in the browser, you can definitely appreciate this!).

TechCorp has now completed the initial microservice development sprint of their project. In the next post, we'll look at how these four services can be deployed to a local Kubernetes cluster, and we'll also dig into the Angular frontend of the application.
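As a footnote to the first reason above, here is a minimal sketch of what response caching in front of the stock route might look like; the one-minute TTL and all names are illustrative, not part of TechCorp's actual code:

// simple in-memory cache middleware (illustrative sketch)
const cache = new Map();
const TTL_MS = 60 * 1000; // serve cached quotes for up to one minute

function cacheQuotes(req, res, next) {
  const hit = cache.get(req.originalUrl);
  if (hit && Date.now() - hit.time < TTL_MS) {
    return res.send(hit.body); // cache hit: skip the third-party API entirely
  }
  // wrap res.send so the outgoing body is stored before being sent
  const send = res.send.bind(res);
  res.send = (body) => {
    cache.set(req.originalUrl, { body: body, time: Date.now() });
    return send(body);
  };
  next();
}

// usage: router.get('/quote/:symbol', cacheQuotes, async (req, res, next) => { ... });

A sketch like this shields the upstream quote API from repeated lookups of the same symbol, which is exactly the kind of control you give up when the browser calls the third-party API directly.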


Containers, Microservices, APIs

Microservices From Dev To Deploy, Part 1: Getting Started With Helidon

Microservices are undoubtedly popular. There have been plenty of great posts on this blog that explain the advantages of using a microservice approach to building applications (the “why you should use them”). And the reasons are plentiful: the flexibility to let your teams implement different services with their language/framework of choice, independent deployments, scalability, and improved build and test times are among the many factors that make a microservice approach preferable to many dev teams nowadays. It’s really not much of a discussion anymore, as studies have shown that nearly 86% of respondents believe that a microservice approach will be their default architecture within the next five years. As I mentioned, the question of “why microservices” has long been answered, so in this short blog series I’d like to answer the question of “how” to implement microservices in your organization: specifically, how Oracle technologies can help your dev team implement a maintainable, scalable solution that is easy to test, develop, and deploy for your microservice applications.

To keep things interesting, I thought I’d come up with a fictional scenario that we can follow as we take this journey. Let’s imagine that a completely fabricated startup called TechCorp has just secured $150M in seed funding for their brilliant new project. TechCorp’s founder Lydia is very nostalgic, and she longs for the “good old days” when 56k modems screeched and buzzed their way down the on-ramp to the “interwebs,” and she’s convinced BigCity Venture Capital that personalized homepages are about to make a comeback in a major way. You remember those, right? Weather, financials, news, even inspiring quotes and funny cat pictures to brighten your day. With funding secured, Lydia set about creating a multinational corporation with several teams of “rock star” developers across the globe. Lydia and her CTO Raj know all about microservices and plan on having their teams split up and tackle individual portions of the backend to take advantage of their strengths and ensure a flexible and reliable architecture.

Team #1:
Location: London
Team Lead: Chris
Focus: Weather Service
Language: Groovy
Framework: Oracle Helidon SE with Gradle

Team #2:
Location: Tokyo
Team Lead: Michiko
Focus: Quote Service
Language: Java
Framework: Oracle Helidon MP with Maven

Team #3:
Location: Bangalore
Team Lead: Murielle
Focus: Stock Service
Language: JavaScript/Node
Framework: Express

Team #4:
Location: Melbourne
Team Lead: Dominic
Focus: Cat Picture Service
Language: Java
Framework: Oracle Fn (Serverless)

Team #5:
Location: Atlanta
Team Lead: Ava
Focus: Frontend
Language: JavaScript/TypeScript
Framework: Angular 6

As you can see, Lydia has put together quite a globally diverse group of teams with a wide-ranging set of skills and experience. You’ll also notice some non-Oracle technologies in their selections, which you might find odd in a blog post focused on Oracle technology, but that’s indicative of many software companies these days. Rarely do teams focus solely on a single company’s stack anymore. While we’d love it if they did, the reality is that teams typically have strengths and preferences that come into play.
I’ll show you in this series how Oracle’s new open source Helidon framework and the Fn serverless project can be leveraged to build microservices and serverless functions, and also how a team can deploy its entire stack to Oracle’s cloud regardless of the language or framework used to build the services that comprise the application.  We’ll dive slightly deeper into Helidon than an introductory post would, so you might want to first read this introductory blog post and the tutorial before you read the rest of this post.

Let’s begin with Team #1, who has been tasked with building out the backend for retrieving a user’s local weather.  They’re a Groovy team, but they’ve heard good things about Oracle’s new microservice framework Helidon, so they’ve chosen to use this project as an opportunity to learn the new framework and see how well it works with Groovy and Gradle as a build tool.  Team lead Chris has read through the Helidon tutorial and created a new application using the quickstart examples, so his first task is to transform the generated Java application into a Groovy application.  The first step for Chris is to create a Gradle build file and make sure that it includes all of the necessary Helidon dependencies as well as a Groovy dependency.  Chris also adds a ‘copyLibs’ task to make sure that all of the dependencies end up where they need to be when the project is built.  The build.gradle file looks like this:

apply plugin: 'java'
apply plugin: 'maven'
apply plugin: 'groovy'
apply plugin: 'application'

mainClassName = 'codes.recursive.weather.Main'

group = 'codes.recursive.weather'
version = '1.0-SNAPSHOT'
description = """A simple weather microservice"""

sourceSets.main.resources.srcDirs = [
    "src/main/groovy",
    "src/main/resources"
]

sourceCompatibility = 1.8
targetCompatibility = 1.8
tasks.withType(JavaCompile) {
    options.encoding = 'UTF-8'
}

ext {
    helidonversion = '0.10.0'
}

repositories {
    maven { url "http://repo.maven.apache.org/maven2" }
    mavenLocal()
    mavenCentral()
}

configurations {
    localGroovyConf
}

dependencies {
    localGroovyConf localGroovy()
    compile 'org.codehaus.groovy:groovy-all:3.0.0-alpha-3'
    compile "io.helidon:helidon-bom:${project.helidonversion}"
    compile "io.helidon.webserver:helidon-webserver-bundle:${project.helidonversion}"
    compile "io.helidon.config:helidon-config-yaml:${project.helidonversion}"
    compile "io.helidon.microprofile.metrics:helidon-metrics-se:${project.helidonversion}"
    compile "io.helidon.webserver:helidon-webserver-prometheus:${project.helidonversion}"
    compile group: 'com.mashape.unirest', name: 'unirest-java', version: '1.4.9'
    testCompile 'org.junit.jupiter:junit-jupiter-api:5.1.0'
}

// define a custom task to copy all dependencies in the runtime classpath
// into build/libs/libs
// uses built-in Copy task
task copyLibs(type: Copy) {
    from configurations.runtime
    into 'build/libs/libs'
}

// add it as a dependency of built-in task 'assemble'
// (copyDocker and copyK8s are not defined in this file; see the note below)
copyLibs.dependsOn jar
copyDocker.dependsOn jar
copyK8s.dependsOn jar
assemble.dependsOn copyLibs
assemble.dependsOn copyDocker
assemble.dependsOn copyK8s

// default jar configuration
// set the main classpath
jar {
    archiveName = "${project.name}.jar"
    manifest {
        attributes (
            'Main-Class': "${mainClassName}",
            'Class-Path': configurations.runtime.files.collect { "libs/$it.name" }.join(' ')
        )
    }
}

With the build script set up, Chris’ team goes about building the application.  Helidon SE makes it pretty easy to build out a simple service.
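One note on the build file above: it wires copyDocker and copyK8s tasks into the assemble task, but those two tasks aren’t shown in the post.  Presumably they are simple Copy tasks along the lines of this sketch; the source and destination paths here are my assumption, not the post’s code:

task copyDocker(type: Copy) {
    // assumed location of the Dockerfile generated by the Helidon quickstart
    from 'src/main/docker'
    into 'build'
}

task copyK8s(type: Copy) {
    // assumed location of the generated Kubernetes app.yaml
    from 'src/main/k8s'
    into 'build'
}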
To get started you only really need a few classes.  First is Main.groovy (notice that the Gradle script points mainClassName at Main.groovy), which creates the server, sets up routing, configures error handling and optionally sets up metrics for the server.  Here’s the entire Main.groovy:

final class Main {

    private Main() { }

    private static Routing createRouting() {
        MetricsSupport metricsSupport = MetricsSupport.create()
        MetricRegistry registry = RegistryFactory
                .getRegistryFactory()
                .get()
                .getRegistry(MetricRegistry.Type.APPLICATION)

        return Routing.builder()
                .register("/weather", new WeatherService())
                .register(metricsSupport)
                .error(NotFoundException.class, { req, res, ex ->
                    res.headers().contentType(MediaType.APPLICATION_JSON)
                    res.status(404).send(new JsonGenerator.Options().build().toJson(ex))
                })
                .error(Exception.class, { req, res, ex ->
                    ex.printStackTrace()
                    res.headers().contentType(MediaType.APPLICATION_JSON)
                    res.status(500).send(new JsonGenerator.Options().build().toJson(ex))
                })
                .build()
    }

    static void main(final String[] args) throws IOException {
        startServer()
    }

    protected static WebServer startServer() throws IOException {
        // load logging configuration
        LogManager.getLogManager().readConfiguration(
                Main.class.getResourceAsStream("/logging.properties"))

        // by default this will pick up application.yaml from the classpath
        Config config = Config.create()

        // get webserver config from the "server" section of application.yaml
        ServerConfiguration serverConfig = ServerConfiguration.fromConfig(config.get("server"))

        WebServer server = WebServer.create(serverConfig, createRouting())

        // start the server and print some info
        server.start().thenAccept({ NettyWebServer ws ->
            println "Web server is running at http://${config.get("server").get("host").asString()}:${config.get("server").get("port").asString()}"
        })

        // server threads are not daemon; no need to block, just react
        server.whenShutdown().thenRun({
            Unirest.shutdown()
            println "Web server has been shut down. Goodbye!"
        })

        return server
    }
}

Helidon SE uses a YAML file named application.yaml, located in src/main/resources, for configuration.  You can store server-related config as well as any application variables in this file.  Chris’ team puts a few variables related to the weather API in this file:

app:
  apiBaseUrl: "https://api.openweathermap.org/data/2.5"
  apiKey: "[redacted]"
server:
  port: 8080
  host: 0.0.0.0

Looking back at the Main class, notice in createRouting() where the endpoint “/weather” is registered and pointed at the WeatherService.  That’s the class that’ll do all the heavy lifting when it comes to getting weather data.  Helidon SE services implement the Service interface, which defines an update() method that is used to establish sub-routes for the given service and point those sub-routes at private methods of the service class.  Here’s what Chris’ team came up with for the update() method:

void update(Routing.Rules rules) {
    rules
        .any(this::countAccess as Handler)
        .get("/current/city/{city}", this::getByLocation as Handler)
        .get("/current/id/{id}", this::getById as Handler)
        .get("/current/lat/{lat}/lon/{lon}", this::getByLatLon as Handler)
        .get("/current/zip/{zip}", this::getByZip as Handler)
}

Chris’ team creates four different routes under “/weather”, giving the consumer the ability to get the current weather in four separate ways (by city, id, lat/lon or zip code).  Note that since we’re using Groovy we have to cast the method references to io.helidon.webserver.Handler or we’ll get an exception.
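The routing rules above also send every request through a countAccess handler that the post doesn’t show.  Presumably it just increments a metric and passes the request along; here is a minimal sketch, assuming the service holds a reference to the application MetricRegistry (the metric name is illustrative):

private void countAccess(ServerRequest request, ServerResponse response) {
    // bump a simple access counter, then hand the request to the
    // next matching route so the actual weather handler still runs
    registry.counter("weatherService.accessCounter").inc()
    request.next()
}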
We’ll take a quick look at just one of those methods, getByZip():

private void getByZip(ServerRequest request, ServerResponse response) {
    def zip = request.path().param("zip")
    def weather = getWeather([ (ZIP): zip ])
    response.headers().contentType(MediaType.APPLICATION_JSON)
    response.send(weather.getBody().getObject().toString())
}

The getByZip() method grabs the zip parameter from the request and calls getWeather(), which uses a client library called Unirest to make an HTTP call to the chosen weather API and returns the current weather to getByZip(), which in turn sends the response to the browser as JSON:

private HttpResponse<JsonNode> getWeather(Map params) {
    return Unirest
            .get("${baseUrl}/weather?${params.collect { it }.join('&')}&appid=${apiKey}")
            .asJson()
}

As you can see, each service method gets passed two arguments when called by the router: the request and the response (as you might have guessed if you’ve worked with a microservice framework before).  These arguments allow the developer to grab URL parameters, form data or headers from the request, and to set the status, body or headers on the response as necessary.  Once the team builds out the entire weather service they are ready to execute the Gradle run task to see everything working in the browser.  Cloudy in London?  A shocking weather development!

There’s obviously more to Helidon SE, but as you can see it doesn’t take a lot of code to get a basic microservice up and running.  We’ll take a look at deploying the services in a later post, but Helidon makes that step trivial with baked-in support for generating Dockerfiles and Kubernetes config files.

Let’s switch gears now and look at Michiko’s team, who was tasked with building out a backend to return random quotes, since no personalized homepage would be complete without such a feature.  The Tokyo team prefers to code in Java and uses Maven to manage compilation and dependencies.  Michiko and team also decided to use Helidon but, given their expertise with the MicroProfile family of APIs, they went with Helidon MP over the more reactive, functional style of SE because it provides recognizable APIs like JAX-RS and CDI that they have been using for years.  Like Chris’ team, they rapidly scaffold out a skeleton application with the MP quickstart archetype and set out configuring their Main.java class.
The main method of that class calls startServer(), which is slightly different from the SE version but accomplishes the same task: starting up the application server using a config file (this one named microprofile-config.properties and located in src/main/resources/META-INF):

protected static Server startServer() throws IOException {
    // load logging configuration
    LogManager.getLogManager().readConfiguration(
            Main.class.getResourceAsStream("/logging.properties"));

    // Server will automatically pick up configuration from
    // microprofile-config.properties
    Server server = Server.create();
    server.start();
    return server;
}

Next, they create a beans.xml file in src/main/resources/META-INF so the CDI implementation can pick up their classes:

<?xml version="1.0" encoding="UTF-8"?>
<beans>
</beans>

Then they create the JAX-RS application, adding the resource class(es) as needed:

@ApplicationScoped
@ApplicationPath("/")
public class QuoteApplication extends Application {
    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> set = new HashSet<>();
        set.add(QuoteResource.class);
        return Collections.unmodifiableSet(set);
    }
}

And they create the QuoteResource class:

@Path("/quote")
@RequestScoped
public class QuoteResource {

    private static String apiBaseUrl = null;

    @Inject
    public QuoteResource(@ConfigProperty(name = "app.api.baseUrl") final String apiBaseUrl) {
        if (this.apiBaseUrl == null) {
            this.apiBaseUrl = apiBaseUrl;
        }
    }

    @SuppressWarnings("checkstyle:designforextension")
    @Path("/random")
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String getRandomQuote() throws UnirestException {
        String url = apiBaseUrl + "/posts?filter[orderby]=rand&filter[posts_per_page]=1";
        HttpResponse<JsonNode> quote = Unirest.get(url).asJson();
        return quote.getBody().toString();
    }
}

Notice the use of constructor injection to get a configuration property, and the simple annotations for the path, HTTP method and content type of the response.  The getRandomQuote() method again uses Unirest to make a call to the quote API and returns the result as a JSON string.  Running mvn package and executing the resulting JAR starts the application, and the service is up and returning random quotes from the /quote/random endpoint.

Michiko’s team has successfully built the initial implementation of their quote microservice on a flexible foundation that will allow the service to grow over time as the user base expands and additional funding rolls in from the excited investors!  As with the SE version, Helidon MP generates a Dockerfile and a Kubernetes app.yaml file to assist the team with deployment.  We’ll look at deployment in a later post in this series.

In this post, we talked about a fictitious startup getting into microservices for their heavily funded internet homepage application.  We looked at the Helidon microservice framework, which provides a reactive, functional-style version as well as a MicroProfile version more suited to Java EE developers who are comfortable with JAX-RS and CDI.  Lydia’s teams are moving rapidly to get their backend architecture built out and are well on their way to implementing her vision for TechCorp.  In the next post, we’ll look at how Murielle and Dominic’s teams build out their services, and in future posts we’ll see how all of the teams ultimately test and deploy the services into production.
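One file the post mentions but never shows is the MP configuration itself.  Here is a minimal sketch of microprofile-config.properties; the base URL is a placeholder for whatever quote API the team actually used, not a value from the post:

# hypothetical src/main/resources/META-INF/microprofile-config.properties
app.api.baseUrl=https://quotesondesign.com/wp-json
# standard Helidon MP server settings
server.port=8080
server.host=0.0.0.0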


Oracle Offline Persistence Toolkit — After Request Sync Listener

Originally published at andrejusb.blogspot.com

In my previous post, we learned how to handle replay conflicts — Oracle Offline Persistence Toolkit — Reacting to Replay Conflict.  Another important thing to know is how to handle the response from a request that was replayed during sync (we are talking here about PATCH).  This is not as obvious as handling the response from a direct REST call in a callback (there is no callback for a response that is synchronized later).  You might wonder why you would need to handle the response after a successful sync at all.  There can be multiple reasons: for instance, you may want to read the returned value and update the value stored on the client.  The listener is registered in the Persistence Manager configuration by adding an event listener of type syncRequest for the given endpoint.  In the listener code we get the response, read the change indicator value (it was updated on the backend and the new value is returned in the response) and store it locally on the client.  Additionally, we maintain an array mapping change indicator values to updated row IDs (in my next post I will explain why this is needed).  The after-request listener must return a promise.  At runtime, when a request sync is executed, you should see a message printed in the log showing the new change indicator value.  Double-check the payload to confirm that the request was submitted with the previous value, then check the response: you will see the new value for the change indicator (the same as in the after-request listener).  Sample code can be downloaded from the GitHub repository.
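The original post illustrates each of these steps with screenshots of the actual code and log output.  As a rough sketch of the overall shape of such a listener (the endpoint path, payload property names, and bookkeeping here are my assumptions, not the post’s code):

// register an after-request listener for the 'syncRequest' event
// on a given endpoint (the endpoint path here is illustrative)
var syncManager = persistenceManager.getSyncManager();
syncManager.addEventListener('syncRequest', afterRequestListener, '/Employees');

function afterRequestListener(event) {
  // read the replayed PATCH response and store the new change
  // indicator value on the client; the listener must return a promise
  return event.response.clone().json().then(function (payload) {
    console.log('New change indicator: ' + payload.ChangeIndicator);
    // update the locally cached row and record the mapping of change
    // indicator value to updated row ID here
    return { action: 'continue' };
  });
}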


Generic Docker Container Image for running and live reloading a Node application based on a GitHub Repo

Originally published at technology.amis.nl

My desire: find a way to run a Node application from a Git(Hub) repository using a generic Docker container, and be able to refresh the running container on the fly whenever the sources in the repo are updated.  The process of producing containers for each application, and upon each change of the application, is too cumbersome and time-consuming for certain situations, including rapid development/test cycles and live demonstrations.  I am looking for a convenient way to run a Node application anywhere I can run a Docker container, without having to build and push a container image, and to continuously update the running application in mere seconds rather than minutes.  This article describes what I created to address that requirement.

The key ingredient in the story is nodemon, a tool that monitors a file system for any changes in a Node.js application and automatically restarts the server when there are such changes.  What I had to put together:

- a generic Docker container based on the official Node image, with npm and a git client inside
- adding nodemon (to monitor the application sources)
- adding a background Node application that can refresh from the Git repository, upon an explicit request, based on a job schedule, or triggered by a Git webhook
- defining an environment variable GITHUB_URL for the URL of the source Git repository for the Node application
- adding a startup script that, when the container is run for the first time, clones the repo specified through GITHUB_URL and runs the application with nodemon, and on subsequent restarts just runs the application with nodemon

I have been struggling a little bit with the Docker syntax and operations (CMD vs RUN vs ENTRYPOINT) and the Linux bash shell scripts, and I am sure my result can be improved upon.  The Dockerfile that builds the Docker container with all generic elements looks like this:

FROM node:8

# copy the Node Reload server - exposed at port 4500
COPY package.json /tmp
COPY server.js /tmp
RUN cd /tmp && npm install
EXPOSE 4500

RUN npm install -g nodemon

COPY startUpScript.sh /tmp
COPY gitRefresh.sh /tmp
# RUN (not CMD) so the scripts are made executable at build time;
# only the last CMD in a Dockerfile takes effect, so CMD would never run these
RUN chmod +x /tmp/startUpScript.sh
RUN chmod +x /tmp/gitRefresh.sh

ENTRYPOINT ["sh", "/tmp/startUpScript.sh"]

Feel free to pick any other Node base image from https://hub.docker.com/_/node/, for example node:10.

The startUpScript that is executed whenever the container is started up, and that takes care of the initial cloning of the Node application from the Git(Hub) URL to directory /tmp/app and the running of that application using nodemon, is shown below.  Note the trick (inspired by StackOverflow) to run a block of logic only when the container is run for the very first time.

#!/bin/sh
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
    cd /tmp
    # prepare the actual Node app from GitHub
    mkdir app
    git clone $GITHUB_URL app
    cd app
    # install dependencies for the Node app
    npm install
    # start both the reload app and (using nodemon) the actual Node app
    cd ..
    (echo "starting reload app") &
    (echo "start reload"; npm start; echo "reload app finished") &
    cd app; echo "starting nodemon for app cloned from $GITHUB_URL"; nodemon
else
    echo "-- Not first container startup --"
    cd /tmp
    (echo "starting reload app and nodemon") &
    (echo "start reload"; npm start; echo "reload app finished") &
    cd app; echo "starting nodemon for app cloned from $GITHUB_URL"; nodemon
fi

The startup script runs the live reloader application in the background, using (echo "start reload"; npm start) &.  That final ampersand (&) takes care of running the command in the background.  The npm start command runs the server.js file in /tmp.  This server listens on port 4500 for requests.  When a request is received at /reload, the application executes the gitRefresh.sh shell script, which performs a git pull in the /tmp/app directory where the git clone of the repository was targeted.

const RELOAD_PATH = '/reload'
const GITHUB_WEBHOOK_PATH = '/github/push'

var http = require('http');
var server = http.createServer(function (request, response) {
    console.log(`method ${request.method} and url ${request.url}`)
    if (request.method === 'GET' && request.url === RELOAD_PATH) {
        console.log(`reload request starting at ${new Date().toISOString()}...`);
        refreshAppFromGit();
        response.write(`RELOADED!!${new Date().toISOString()}`);
        response.end();
        console.log('reload request handled...');
    } else if (request.method === 'POST' && request.url === GITHUB_WEBHOOK_PATH) {
        let body = [];
        request.on('data', (chunk) => {
            body.push(chunk);
        }).on('end', () => {
            body = Buffer.concat(body).toString();
            // at this point, `body` has the entire request body stored in it as a string
            console.log(`GitHub WebHook event handling starting ${new Date().toISOString()}...`);
            ... (see code in GitHub Repo https://github.com/lucasjellema/docker-node-run-live-reload/blob/master/server.js)
            console.log("This commit involves changes to the Node application, so let's perform a git pull ")
            refreshAppFromGit();
            response.write('handled');
            response.end();
            console.log(`GitHub WebHook event handling complete at ${new Date().toISOString()}`);
        });
    } else {
        // respond
        response.write('Reload is live at path ' + RELOAD_PATH);
        response.end();
    }
});
server.listen(4500);
console.log('Server running and listening at Port 4500');

var shell = require('shelljs');
var pwd = shell.pwd()
console.info(`current dir ${pwd}`)

function refreshAppFromGit() {
    if (shell.exec('./gitRefresh.sh').code !== 0) {
        shell.echo('Error: Git Pull failed');
        shell.exit(1);
    }
}

Using the node-run-live-reload image

Now that you know a little about the inner workings of the image, let me show you how to use it (also see the instructions here: https://github.com/lucasjellema/docker-node-run-live-reload).  To build the image yourself, clone the GitHub repo and run docker build -t "node-run-live-reload:0.1" . using, of course, your own image tag if you like.  I have pushed the image to Docker Hub as lucasjellema/node-run-live-reload:0.1.
You can use this image like this:

docker run --name express -p 3011:3000 -p 4505:4500 -e GITHUB_URL=https://github.com/shapeshed/express_example -d lucasjellema/node-run-live-reload:0.1

In the terminal window we can get the logging from within the container using docker logs express --follow.  After the application has been cloned from GitHub, npm has installed the dependencies, and nodemon has started the application, we can access it at <host>:3011 (because of the port mapping in the docker run command).  When the application sources are updated in the GitHub repository, we can use a GET request (from curl or the browser) to <host>:4505 to refresh the container with the latest application definition.  If the logging from the container indicates that the git pull returned no new sources, no files have changed, and nodemon will not restart the application in that case.  One requirement at this moment for this generic container to work is that the Node application has a package.json with a scripts.start entry in its root directory; nodemon expects that entry as the instruction on how to run the application.  This same package.json is used with npm install to install the required libraries for the Node application.

Summary

To summarize what this article has introduced: if you want to run a Node application whose sources are available in a GitHub repository, then all you need is a Docker host, and these are your steps:

- Pull the Docker image: docker pull lucasjellema/node-run-live-reload:0.1 (this image currently contains the Node 8 runtime, npm, nodemon, a git client and the reloader application).  Alternatively: build and tag the container yourself.
- Run the container image, passing the GitHub URL of the repo containing the Node application; specify the required port mappings for the Node application and the reloader (port 4500): docker run --name express -p 3011:3000 -p 4500:4500 -e GITHUB_URL=<GIT HUB REPO URL> -d lucasjellema/node-run-live-reload:0.1
- When the container is started, it will clone the Node application from GitHub; using npm install, the dependencies for the application are installed, and using nodemon the application is started (with the sources monitored so the application restarts upon changes).
- Now the application can be accessed at the host running the Docker container, on the port mapped in the docker run command.
- With an HTTP request to the /reload endpoint, the reloader application in the container is instructed to git pull the sources from the GitHub repository and run npm install to fetch any changed or added dependencies; if any sources were changed, nodemon will automatically restart the Node application, and the upgraded application can be accessed.

Note: alternatively, a webhook trigger can be configured.  This makes it possible to automatically trigger the application reload facility upon commits to the GitHub repo.  Just like a regular CD pipeline, this means running Node applications can be automatically upgraded.

Next Steps

Some next steps I am contemplating with this generic container image — and I welcome your pull requests — include:

- allow an automated periodic application refresh to be configured through an environment variable on the container (and/or through a call to an endpoint on the reload application), instructing the reloader to do a git pull every X seconds
- use https://www.npmjs.com/package/simple-git instead of shelljs plus a local Git client (this could allow usage of a lighter base image, e.g. node-slim instead of node)
- force a restart of the Node application, even if it has not changed at all
- allow for alternative application startup scenarios besides running the scripts.start entry in the package.json in the root of the application

Resources

- GitHub repository with the resources for this article, including the Dockerfile to build the container: https://github.com/lucasjellema/docker-node-run-live-reload
- My article on my previous attempt at creating a generic Docker container for running a Node application from GitHub: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/
- Article and documentation on nodemon: https://medium.com/lucjuggery/docker-in-development-with-nodemon-d500366e74df and https://github.com/remy/nodemon#nodemon
- NPM module shelljs that allows shell commands to be executed from Node applications: https://www.npmjs.com/package/shelljs
