
Step Up to Modern Cloud Development

Recent Posts

Why Your Developer Story is Important

Stories are a window into life. If they resonate, they can provide insights into our own lives or the lives of others. They can help us transmit knowledge, pass on traditions, solve present-day problems, or imagine alternate realities. Open source software is an example of an alternate reality in software development, where proprietary development has been replaced in large part by sharing code that is free and open. Why is this relevant not only to developers but to anyone who works in technology? Because it is human nature to keep wanting to grow, learn, and share.

With this in mind, I started 60 Second Developer Stories and tried them out at various Oracle Code events, at developer conferences, and now at Oracle OpenWorld 2018/Code One. For the latter we had a Video Hangout in the Groundbreakers Hub at Code One, where anyone with a story to share could do so. We livestream the story via Periscope/Twitter, record it, and edit and post it later on YouTube. In the Video Hangout we use a green screen and, through the miracles of technology, chroma-key in a cool backdrop. Below are some photos of the Video Hangout as well as the ideas we offer as suggestions:

- Share what you learned on your first job
- Share a best coding practice
- Explain how a tool or technology works
- What have you learned recently about building an app?
- Share a work-related accomplishment
- What's the best decision you ever made?
- What's the worst mistake you made, and the lesson learned?
- What is one thing you learned from a mentor or peer that has really helped you?
- Any story that you want to share and that the community can benefit from

Here are some FAQs about the 60 Second Developer Story:

Q1. I am too shy, and as this is live, what if I get it wrong?
A1. It is your story; there is no right or wrong. If you mess up, it's not a problem, we can do a retake.

Q2. There are so many stories, how do I pick one?
A2. Share something specific: an event with a beginning, a middle, and an end. Ideally there was a challenge or obstacle and you overcame it. As long as it is meaningful to you, it is worth sharing.

Q3. What if it's not exactly 60 seconds, if it's shorter or longer?
A3. 60 seconds is a guideline. I will usually show you a cue card to let you know when you have 30 seconds and 15 seconds left. A little bit over or under is not a big deal.

Q4. When can I see the results?
A4. Usually immediately. We tweet from whatever Periscope/Twitter handle we are sharing on, plus your personal Twitter handle if you have one, before you go live, so it will show up on your feed.

Q5. What if I am not a developer?
A5. We use "developer" in a broad sense. It doesn't matter if you are a DBA or an analyst or whatever. If you are involved with technology and have a story to share, we want to hear it.

Here is an example of a 60 Second Developer Story. We hope to have the Video Hangout at future Oracle Code and other events, and we look forward to hearing your 60 Second story.


DevOps

New in Developer Cloud - Fn Support and Wercker Integration

Over the weekend we rolled out an update to your Oracle Developer Cloud Service instances which introduces several new features. In this blog we'll quickly review two of them: support for the Fn project and integration with the Wercker CI/CD solution. These new features further enhance the scope of CI/CD functionality that you get in our team development platform.

Project Fn Build Support

Fn is a function-as-a-service open-source platform led by Oracle and available for developers looking to develop portable functions with a variety of languages. If you are not familiar with Project Fn, a good intro on why you should care is this blog, and you can learn more about it through the Fn project's home page on GitHub. In the latest version of Developer Cloud you have a new option in the build steps menu that helps you define various Fn-related commands as part of your build process. So, for example, if your Fn project code is hosted in the Git repository provided by your DevCS project, you can use the build step to automate the process of building and deploying the function you created.

Wercker / Oracle Container Pipelines Integration

A while back Oracle purchased a Docker-native CI/CD solution called Wercker, which is now also offered as part of Oracle Cloud Infrastructure under the name Oracle Container Pipelines. Wercker is focused on offering CI/CD automation for Docker- and Kubernetes-based microservices. As you probably know, we also offer similar support for Docker and Kubernetes in Developer Cloud Service, which supports declarative definition of Docker build steps and the ability to run kubectl scripts in its build pipelines. If you have an investment in Wercker-based CI/CD and you want a more complete agile/DevOps set of features, such as the functionality offered by Developer Cloud Service (including free private Git repositories, issue tracking, agile boards and more), you can now integrate the two solutions without losing your investment in Wercker pipelines. For a while now, Oracle Container Pipelines has supported picking up code directly from a Git repository hosted in Developer Cloud Service. Now we have added support for Developer Cloud Service to invoke pipelines you defined in Wercker directly as part of build jobs and pipelines in Developer Cloud Service. Once you provide DevCS with your personal token for logging into Wercker, you can pick the specific applications and pipelines that you would like to execute as part of your build jobs.

There are several other new features and enhancements in this month's release of Oracle Developer Cloud; you can read about those on our What's New page.
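If you want to experiment with Fn locally before wiring it into a Developer Cloud build step, a minimal sketch with the standard Fn CLI might look like the following. The function and app names here are placeholders chosen for illustration; only the fn commands themselves come from the Fn project tooling.

```bash
# Install the Fn CLI and start a local Fn server (assumes Docker is running)
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh
fn start &

# Scaffold a new function (hypothetical name) and deploy it to a local app
fn init --runtime java hello-fn
cd hello-fn
fn create app demo-app
fn deploy --app demo-app --local

# Invoke the deployed function to verify it works
fn invoke demo-app hello-fn
```

The same build-and-deploy sequence is what the new Fn build step in Developer Cloud can automate against code stored in your project's Git repository.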


Making an IoT Badge – #badgelife going corporate

By Noel Portugal, Senior Cloud Experience Developer at Oracle

Code Card 2018

For years I've been wanting to create something fun with the almighty esp8266 WiFi chip. I started experimenting with the esp8266 almost exactly four years ago. Back then there were no Arduino, Lua, or even MicroPython ports for the chip, only the C Espressif SDK. Today it is fairly easy to write firmware for the ESP given how many documented projects are out there.

IoT Badge by fab-lab.eu

Two years ago I was very close to actually producing something with the esp8266. We, the AppsLab team, partnered with the Oracle Technology Network team (now known as the Oracle Groundbreakers Team) to offer an IoT workshop at Oracle Open World 2016. I reached out to friend-of-the-lab Guido Burger from fab-lab.eu and he came up with a clever design for an IoT badge. This badge was the Swiss Army knife of IoT dev badges/kits. Unfortunately, we ran out of time to mass-produce this badge and we had to shelve the idea. Instead, we decided that year to use an off-the-shelf NodeMcu to introduce attendees to hardware that can talk to the cloud. For the next year, we updated the IoT workshop curriculum to use the Wio Node board from Seeedstudio.

Fast forward to 2018. I've been following emerging use cases of e-ink screens, and I started experimenting with them. Then the opportunity came: we needed something to highlight how easy it is to deploy serverless functions with the Fn project. Having a physical device that could retrieve content from the cloud and display it was the perfect answer for me. I reached out to Squarofumi, the creators of Badgy, and we worked together to come up with the right specs for what we ended up calling the Code Card. The Code Card is an IoT badge powered by the esp8266, a rechargeable coin battery, and an e-ink display.

I suggested using the same technique I used to create my smart esp8266 button. When either button A or B is pressed, it sets the esp8266 enable pin to high; the first thing the software does is keep the pin high until we are done making an HTTP request and updating the e-ink screen. When we are done, we set the enable pin to low and the chip turns off (not standby). This allows the battery to last much longer.

To make it even easier for busy attendees to get started, I created a web app that was included in the official event app. The Code Card Designer lets you choose from different templates and assign them to a button press (short and long press). You can also choose an icon from the pre-loaded icons on the firmware. Sadly, at the last minute I had to remove one of the coolest features: the ability to upload your own picture. The feature was just not very reliable and often failed. With more time the feature can be re-introduced.

After attendees used the Code Card Designer they were ready for more complex stuff. All they needed to do was connect the Card to their laptops and connect via serial communication. I created a custom Electron terminal to make it easier to access a custom CLI for changing the button endpoints and SSID information. A serverless function, or any other endpoint returning the required JSON, is all that is needed to start modifying your Card.

[Embedded tweet from Oracle Groundbreakers (@groundbreakers), Oct 24, 2018: "A name and a face! @groundbreakers @Java_Champions @babadopulos #codeone Code Card"]

I published the Arduino source code along with other documentation. It didn't take long for attendees to start messing around with C code image arrays to change their icons. Lastly, if you paid attention you can see that we added two Grove headers to connect analog or digital sensors. More fun! Go check out and clone the whole GitHub repo. You can prototype your own "badge" using an off-the-shelf e-ink board similar to this. #badgelife!


Blockchain

Oracle Code One, Day Four Updates and Wrap Up

It’s been an educational, inspirational, and insightful four days at Oracle Code One in San Francisco. This was the first time Oracle Code One and Oracle Open World were run side by side. Attendees chose from the 2,500 sessions, a majority of them featuring customers and partners who overcame real-world challenges. We also had an exhibition floor with Oracle Code One partners, and the Groundbreakers Hub, where attendees toyed around with Blockchain, IoT, AI, and other emerging technologies. Personally, I felt inspired by a team of high school students who used Design Thinking, thermographic cameras, and pattern recognition to help with detecting early-stage cancer.

Java Keynote

The highlight from Day 1 was the Java Keynote. Matthew McCullough from GitHub talked about the importance of building a development community and growing the community one developer at a time. He also shared that Java has been the second most popular language on GitHub, behind JavaScript. Rafer Hazen, manager of the data pipelines team at GitHub, shared similar views on Java: “Java’s strengths in parallelism and concurrency, its performance, its type system, and its massive ecosystem all make it a really good fit for building data infrastructure.” Developers from Oracle then unveiled Project Skara, which can be used for the code review and code management practices for the JDK (Java Development Kit). Georges Saab, Vice President at Oracle, announced the following fulfilled commitments, which were originally made last year:

- Making Java more open: remaining closed-source features have been contributed to OpenJDK
- Delivering enhancements and innovation faster: Oracle is adopting a predictable six-month cadence so that developers can access new features sooner
- Continuing support for the Java ecosystem: specific releases will be provided with LTS (long-term support)

Mark Reinhold, Chief Architect of the Java Platform, then elaborated on major architectural changes to the Java platform. Though Oracle has moved to a six-month release cadence with certain builds supported long term (LTS builds), he reiterated that “Java is still free.” Previously closed-source features such as Application Class-Data Sharing, Java Flight Recorder, and Java Mission Control are now available as open source. Mark also showcased Java’s features to improve developer productivity and program performance in the face of evolving programming paradigms. Key projects to meet these two goals include Amber, Loom, Panama, and Valhalla.

Code One Keynote

The Code One Keynote on Tuesday was kicked off by Amit Zavery, Executive Vice President at Oracle, who elaborated on major application trends:

- Microservices and serverless architectures, which provide better infrastructure efficiency and developer productivity
- DevSecOps, with a move to NoOps, which requires a different mindset in engineering teams
- The importance of open source, which was also highlighted in Mark Reinhold’s talk at the Java keynote
- The need for digital assistants, which provide a different interface for interaction and a different UX requirement
- Blockchain-based distributed transactions and ledgers
- The importance of embedding AI/ML into applications

Amit also covered Oracle Cloud Platform’s comprehensive portfolio, which spans the application development trends above as well as other areas. Dee Kumar, Vice President for Marketing and Developer Relations at CNCF, talked about digital transformation, which depends on cloud native computing and open source. Per Dee, Kubernetes is second only to Linux when measured by number of authors. Dee emphasized that containerization is the first step in becoming a cloud native organization. For organizations considering cloud native technology, the benefits of cloud native projects, per the CNCF bi-annual surveys, include:

- Faster deployment time
- Improved scalability
- Cloud portability

Matt Thompson, Vice President of Developer Engagement and Evangelism, hosted a session about “Building in the Cloud.” Matt Baldwin and Manish Kapur from Oracle conducted live demos featuring chatbots/digital assistants (conversational interfaces), serverless functions, and blockchain ledgers.

Groundbreaker Panel and Award Winners

Also on Tuesday, Stephen Chin led a talk on the Oracle Groundbreakers Awards, through which Oracle seeks to recognize technology innovators. The Groundbreakers Award winners for 2018 are:

- Neha Narkhede: Co-creator of Apache Kafka
- Guido van Rossum: Creator of Python
- Doug Cutting: Co-creator of Hadoop
- Graeme Rocher: Creator of Grails
- Charles Nutter: Co-creator of JRuby

In addition, Stephen recognized the Code One Stars: individuals who were the best speakers at the conference, evangelists of open source and emerging technologies, and leaders in the community.

Duke’s Choice Award Winners

The Java team, represented by Georges Saab, also announced winners of the Duke’s Choice Awards, which were given to top projects and individuals in the Java community. Award winners included:

- Apache NetBeans: Toni Epple, Constantin Drabo, Mark Stephens
- ClassGraph: Luke Hutchison
- Jelastic: Ruslan Synytsky
- JPoint: Bert Jan Schrijver
- MicroProfile.io: Josh Juneau
- Twitter4J: Yusuke Yamamoto
- Project Helidon: Joe DiPol
- BgJUG (Bulgarian Java Users Group)

Customer Spotlight

We had customers join us to talk further about their use of Oracle Cloud Platform:

- Mitsubishi Electric: Leveraged Oracle Cloud for AI, IoT, SaaS, and PaaS to achieve a 60% increase in operating rate, a 55% decrease in manual processes, and an 85% reduction in floor space
- CargoSmart: Used Blockchain to integrate custom ERP and Supply Chain on Oracle Database, achieving a 65% reduction in the time taken to collect, consolidate, and confirm data
- Alliance Data: Moved over 6 TB to Oracle Cloud Infrastructure (PeopleSoft, EPM, Exadata, and Windows), thereby saving $1M/year in licensing and support
- Aker BP: Achieved elastic scalability with Oracle Cloud, running reports in seconds instead of 20 minutes and eliminating downtime due to database patching

Groundbreakers Hub

The Groundbreakers Hub featured a number of interesting demos on AI and chatbots, personalized manufacturing leveraging Oracle’s IoT Cloud, robotics, and even early-stage cancer detection. Here are some of the highlights.

Personalized Manufacturing using Oracle IoT Cloud

This was one of the most popular areas in the Hub. Here is how the demo worked: a robotic arm grabbed a piece of inventory (a coaster) using a camera, which used computer vision to detect placement of the coaster. The arm then moved across and dropped the coaster onto a conveyor belt. The belt moved past a laser engraver, which engraved custom text, like your name, on the coaster. Oracle Cloud, including IoT Cloud and SCM (Supply Chain Management) Cloud, was leveraged throughout this process to monitor the production equipment, inventory, and engraving. Check out the video clip below.

3D Rendering with Raspberry Pis and Oracle Cloud

Another cool spot was the “Bullet Time” photo booth. Using fifty Raspberry Pis equipped with cameras, images were captured around me. These images were then sent to the Oracle Cloud to be stitched together. The final output, a video, was sent to me via SMS.

Cancer Detection by High School Students

We also had high school students from DesignTech, which is supported by the Oracle Education Foundation. Among many projects, these students created a device to detect early-stage cancer using a thermographic (heat-sensitive) camera and a touchscreen display. An impressive high school project!

Summary

Java continues to be a leading development language, and it is used extensively at companies such as GitHub. To keep pace with innovation in the industry, Java is moving to a six-month release cadence. Oracle has a keen interest in emerging technologies such as AI/ML, Blockchain, containers, serverless functions, and DevSecOps/NoOps. Oracle recognized innovators and leaders in the industry through the Groundbreakers Awards and Duke’s Choice Awards.

That’s just some of the highlights from Oracle Code One 2018. We look forward to seeing you next time!


All Things Developer at Oracle Code One - Day 3

Community Matters! The Code One Avengers Keynote

When it comes to a code conference, it has to be about the community. Stephen Chin and his superheroes proved that right on stage last night with their Code Avengers keynote. The action-packed superheroes stole the thunder of Code One on Day 3. Some of us were backstage with the superheroes, and the excitement and energy were just phenomenal. We want to tell this story in pictures, but what are these Avengers fighting for? We will, of course, start with Dr. Strange's address to his fellow superheroes of code, which brought more than a quarter million viewers on Twitter. And then his troupe followed! The mural comic strips, animations, screenplay, and cast came together just brilliantly. Congrats to the entire superheroes team! Here are some highlights from the keynote to recap: https://www.facebook.com/OracleCodeOne/videos/2486105168071342/

The Oracle Code One Team Heads to CloudFest18

The remaining thunder was stolen by Portugal. The Man, Beck, and Bleachers at the CloudFest18 rock concert at AT&T Park. Jam-packed with Oracle customers, employees, and partners from TCS, the park was just electric with power-packed music!

Hands-on Labs Kept Rolling!

The NoSQL hands-on lab was in action, delivered by the crew. One API to many NoSQL databases!

The Groundbreakers Hub was Busy!

The Hub was busy with Pepper, more Groundbreaker live interviews, video hangouts, Zip labs, Code Card pickups, bullet time photo booths, superhero escape rooms, Hackergarten, and our favorite Cloud DJ, Sonic Pi! Stephen Chin recaps what's hot at the Hub right here. And a quick run of the bullet time photo booth: Rex Wang in action! Sam Craft, our first Zip lab winner!

Code One Content in Action

Click here for a quick 30-second recap of other things on Day 3 at Oracle Code One.

- Groundbreaker live interview with Jesse Butler and Karthik Gaekwad on cloud native technologies and the Fn project: https://twitter.com/OracleDevs/status/1055169708192751616
- Groundbreaker live interview on AI and ML: https://twitter.com/OracleDevs/status/1055183021316292608
- Groundbreaker live interview on building RESTful APIs: https://twitter.com/OracleDevs/status/1055230551324483584
- Groundbreaker live interview with the design tech school on The All Jacked Up Project: https://twitter.com/OracleDevs/status/1055224092456960000
- Groundbreaker live interview on NetBeans: https://twitter.com/OracleDevs/status/1055206463809843200
- Live video hangouts on diversity in tech and women in tech: https://twitter.com/groundbreakers/status/1055161816333017088 and https://twitter.com/groundbreakers/status/1055147716903239681


All Things Developer at Oracle Code One - Day 2

Live from Oracle Code One - Day Two

There was tons of action today at Oracle Code One. From Zip labs and challenges to an all-women developer community breakfast, the Duke's Choice Awards, the Oracle Code keynotes, and the debut Groundbreaker Awards, it was all happening at Code One. Pepper was quite busy, and so was the blockchain beer bar!

Zip Labs, Zip Lab Challenges, and Hands-on Labs

Zip labs are running all four days. So, if you want to dabble with the Oracle Cloud, or learn how you can provision the various services, go up to the second floor of Moscone West and sign up for our cloud. You can sign in for a 15-minute lab challenge on Oracle Cloud content and see your name on the leaderboard as the person to beat. Choose from labs including Oracle Autonomous Data Warehouse, Oracle Autonomous Transaction Processing, and Virtual Machines. Lots of hands-on labs are running every day, but the Container Native labs today were quite a hit.

Oracle Women's Leadership Developer Community Breakfast

We hosted a breakfast this morning with several women developers from across the globe. It was quite insightful to learn about their lives and experiences in code.

The Duke's Choice Awards and Groundbreaker Live Interviews

Georges Saab announced the Duke's Choice Award winners at Code One today. Some exciting Groundbreaker live interviews:

- Jim Grisanzio and Gerald Venzl talk about Oracle Autonomous Database
- Bob Rhubart, Ashley Sullivan, and the Design Tech students discuss the Vida Cam Project

The Oracle Code One Keynotes and Groundbreaker Awards in Pictures

Building Next-Gen Cloud Native Apps with Embedded Intelligence, Chatbots, and Containers: Amit Zavery, EVP, PaaS Development, Oracle, talks about how developers can leverage the power of the Oracle Cloud.

Making Cloud Native Computing Universal and Sustainably Harnessing the Power of Open Source: Dee Kumar, VP of CNCF, congratulates Oracle on becoming a Platinum member.

Building for the Cloud: Matt Thompson, Developer Engagement and Evangelism, Oracle Cloud Platform, talks about how a cloud works best when it is open, secure, and all things productive for the developer.

Demos: Serverless, Chatbots, Blockchain...

Manish Kapur, Director of Product Management for Cloud Platform, showed a cool demo of a new serverless/microservices-based cloud architecture for selling and buying a car. Matt Baldwin talked about the DNA of Blockchain and how it is used in the context of selling and buying a car.

And the Oracle Code One Groundbreaker Awards go to:

Stephen Chin, Director of Developer Community, announces the debut Groundbreaker Awards and moderates a star panel with the winners. We had more than 200K viewers of this panel on the Oracle Code One Twitter live stream today! There were lots of interesting and diverse questions for the panel from the Oracle Groundbreaker Twitter channel. For more information on Oracle Groundbreakers, click here.

And now, moving on to Day 3 of Code One!


Cloud

Oracle Database 18c XE on Oracle Cloud Infrastructure: A Mere Yum Install Away

It's a busy week at OpenWorld 2018. So busy that we didn't get around to mentioning that Oracle Database 18c Express Edition is now available on Oracle Cloud Infrastructure (OCI) yum servers! This means it's easy to install this full-featured Oracle Database for developers on an OCI compute shape without incurring any extra networking charges. In this blog post I demonstrate how to install, configure, and connect to Oracle Database 18c XE on OCI.

Installing Oracle Database 18c XE on Oracle Cloud Infrastructure

From a compute shape in OCI, grab the latest version of the repo definition from the yum server local to your region as follows:

```
cd /etc/yum.repos.d
sudo mv public-yum-ol7.repo public-yum-ol7.repo.bak
export REGION=`curl http://169.254.169.254/opc/v1/instance/ -s | jq -r '.region'| cut -d '-' -f 2`
sudo -E wget http://yum-$REGION.oracle.com/yum-$REGION-ol7.repo
```

Enable the ol7_oci_included repo:

```
sudo yum-config-manager --enable ol7_oci_included
```

Here you see the Oracle Database 18c XE RPM is available in the yum repositories:

```
$ yum info oracle-database-xe-18c
Loaded plugins: langpacks, ulninfo
Available Packages
Name        : oracle-database-xe-18c
Arch        : x86_64
Version     : 1.0
Release     : 1
Size        : 2.4 G
Repo        : ol7_oci_included/x86_64
Summary     : Oracle 18c Express Edition Database
URL         : http://www.oracle.com
License     : Oracle Corporation
Description : Oracle 18c Express Edition Database
```

Let's install it:

```
$ sudo yum install oracle-database-xe-18c
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-xe-18c.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================
 Package                      Arch          Version        Repository                Size
=========================================================================================================
Installing:
 oracle-database-xe-18c       x86_64        1.0-1          ol7_oci_included          2.4 G

Transaction Summary
=========================================================================================================
Install  1 Package

Total download size: 2.4 G
Installed size: 5.2 G
Is this ok [y/d/N]: y
Downloading packages:
oracle-database-xe-18c-1.0-1.x86_64.rpm                                  | 2.4 GB  00:01:13
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-database-xe-18c-1.0-1.x86_64                                         1/1
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure Oracle Database XE, optionally modify the parameters in '/etc/sysconfig/oracle-xe-18c.conf'
and then execute '/etc/init.d/oracle-xe-18c configure' as root.
  Verifying  : oracle-database-xe-18c-1.0-1.x86_64                                         1/1

Installed:
  oracle-database-xe-18c.x86_64 0:1.0-1

Complete!
$
```

Configuring Oracle Database 18c XE

With the software now installed, the next step is to configure it:

```
$ sudo /etc/init.d/oracle-xe-18c configure
Specify a password to be used for database accounts. Oracle recommends that the password entered
should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case
character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: **************
Enter SYSTEM user password: ************
Enter PDBADMIN User Password: **************
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete.
For details check the logfiles at: /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: instance-20181023-1035/XEPDB1
     Multitenant container database: instance-20181023-1035
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE
```

Connecting to Oracle Database 18c XE

To connect to the database, use the oraenv script to set the necessary environment variables, entering XE as the ORACLE_SID:

```
$ . oraenv
ORACLE_SID = [opc] ? XE
ORACLE_BASE environment variable is not being set since this
information is not available for the current user ID opc.
You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base has been set to /opt/oracle/product/18c/dbhomeXE
$
```

Then, connect as usual using sqlplus:

```
$ sqlplus sys/OpenWorld2018 as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Tue Oct 23 19:13:23 2018
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0

SQL> select 1 from dual;

         1
----------
         1

SQL>
```

Conclusion

Whether you are a developer looking to get started quickly with building applications on your own full-featured Oracle Database, or an ISV prototyping solutions that require an embedded database, installing Oracle Database XE on OCI is an excellent way to get started. With Oracle Database 18c XE available as an RPM inside OCI via yum, it doesn't get any easier.
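As a closing tip, you can also connect directly to the pluggable database (XEPDB1) listed in the connect strings earlier. This is only a sketch: it assumes the default listener port of 1521 and uses the SYSTEM password you chose during configuration, neither of which is shown in the output above.

```
$ sqlplus system/YourPassword@//localhost:1521/XEPDB1

SQL> show con_name

CON_NAME
------------------------------
XEPDB1
```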


APIs

All Things Developer at Oracle CodeOne - Day 1 Recap

Live from Oracle CodeOne - Day One

A lot of action, energy, and fun here on the first day at Oracle CodeOne 2018. From all the fun at the Developers Exchange to the cool things at the Groundbreakers Hub, we've covered it all for you! So, let's get started! Here's a one-minute recap that tells the day one story in, well, a minute!

The Groundbreakers Hub

We announced a new developer brand at Oracle CodeOne today, and it is... Groundbreakers - yes, you got it! The Groundbreakers Hub is the lounge for developers, the nerds, the geeks, and the tech enthusiasts - anyone who wants to hang out with the fellow developer community. There's the Groundbreaker live stage, where we got customers talking about their experience with the Oracle Cloud, and we got over 30 great stories on record today. Kudos to the interviewers, Javed Mohammed and Bob Rhubart. The video hangouts were a casual, sassy corner to share stories of the code you built, the best app you created, the best developer you met, or the most compelling lesson you've ever learned. Don't forget to chat with Pepper, our chatbot that will tell you what's on at CodeOne, or anything at all. Also, check out the commit mural that commemorates the first annual CodeOne and the new Groundbreaker community. There's some blockchain beer action too! Select the beer you want to taste using the Blockchain app and learn all about its origins!

The Keynotes and Sessions

Keynote: The Future of Java is Today

The BIG keynote first! The Future of Java is today: an all-things-Java keynote by Mark Reinhold (Chief Architect of the Java Platform Group at Oracle) and Georges Saab (VP Development at Oracle). It was a full house of developers who flocked to a very informative session on the evolution of Java to meet developers' needs by becoming more secure, stable, and rich. There was a lot of insight into the new enhancements around Java, with recent additions to the languages and the platform. Matthew McCullough (Vice President of Field Services, GitHub) and Rafer Hazen (Data Pipelines Engineering Manager, GitHub) also talked about GitHub, Java, and the OpenJDK collaboration. We streamed this live via our social channels with a viewership of about half a million developers worldwide! Here are some snippets of the backstage excitement from the crew.

Big Session Today: Emerging Trends and Technologies with Modern App Dev

Siddhartha Agarwal took the audience through all things app dev at Oracle: cloud-native application development, DevSecOps, AI and conversational AI, open source software, the Blockchain platform, and more! He was supported by Suhas Uliyar (VP, Bot AI and Mobile Product Management) and Manish Kapur (Director of App Dev, Product Management) to tell this modern app dev story via demos.

The Developer's Exchange

Lots of good tech (and swag) on the Oracle Developers Exchange floor that developers could flock to. Pivotal, JFrog, IBM, Red Hat, AppDynamics, Datadog... the list goes on. But check out a few fashionable booths right here.

Now, onto day two - Tuesday (10/23)! Lots of keynotes, fireside chats, DJ and music, demos, hubs, and labs await! Thanks to Anaplan - they provided delicious free food, snacks, and drinks to all the visitors who checked in with them!


APIs

All Things Developer at Oracle CodeOne. Spotlight APIs.

Code One - It’s Here!

We’re just a few days away from Oracle’s biggest conference for developers, now known as Code One. JavaOne morphed into Code One to extend support for more developer ecosystems: languages, open-source projects, and cloud-native foundations. So, first, the plugs: if you’d like to be a part of the Oracle Code One movement and have not already registered, you can still do it. You can get lost, yes! It’s a large conference with lots of sessions and other moving parts, but we’ve tried to make things simple for you here to plan your calendar. Look through these to find the right tracks and session types for you. There are some exciting keynotes you don’t want to miss: The Future of Java, the Power of Open Source, Building Next-Gen Cloud Native Apps using Emerging Tech, the Groundbreaker Code Avengers sessions, and fireside chats!

And now for the fun stuff, because our conference is not complete without that: there’s CloudFest! Get ready to be up all night with Beck, Portugal. The Man, and Bleachers. And if you are up, get your nerdy kids to the code camp over the weekend. It’s Oracle Code for Kids time, inspiring the next generation of developers!

The prelude to Code One wouldn’t be complete without talking about the Groundbreakers Hub. A few things that you HAVE to check out: the blockchain beer - try beers that were brewed using Blockchain technology, which enabled the microbrewery to accurately estimate the correct combination of raw materials to create different types of beer, then vote for your favorite beer on our mobile app - it’s pretty cool! Experience the bullet time photo booth, the chatbot with Pepper, and the Code Card (an IoT card that you can program using Fn project serverless technologies; it has an embedded WiFi chip, an e-ink screen, and a few fun buttons). Catch all the hub action if you're there!

The Tech that Matters to Developers: Powerful APIs

We’ve talked about a lot of tech here, but there are a few things that are closer to the developer’s heart: things that make their life more straightforward and that they use every hour. One such technology is the API. I am not going to explain what APIs are, because if you are a developer, you know this. APIs are a mechanism that helps you dial down on heavy-duty code and add powerful functionality to a website, app, or platform without extensive coding, by including only the API code - there, I said it. But even for developers, it is essential to understand the system of engagement around designing and maintaining sound and healthy APIs. The cleaner the API, the better the associated user experience and performance of the app or platform in question. Since APIs are reusable, it is essential to understand what goes into making an API an excellent one. And different types of APIs require different types of love.

API Strategy with Business Outcomes

First, there is a class of APIs that powers chatbots and the digital experience of customers, where the UX becomes one of the most significant driving factors. Second, APIs help to monetize existing data and assets; here there are organizations treating the API as a product and dealing with performance, scale, policy, and governance so that consumers have an API 360 experience. Third and fourth, APIs are used for operational efficiency and cost savings, and they are also used for creating exchange/app systems like the app stores!

So now, taking these four areas and establishing a business outcome is critical to driving the API strategy. And the API strategy entails good design, as you’ll hear in Robert’s podcast below.

Design Matters Podcast by Robert Wunderlich

Beyond Design - Detailing the API Lifecycle

Once you have followed the principles of good API design and established the documentation based on the business outcome, it comes down to lifecycle management: building, deploying, and governing APIs, managing them for scale and performance, and looping the analytics back to deliver the expected experience. And then, on the other side, there is consumption, where developers should be able to discover these APIs and start using them. And then there’s the Oracle way with APIs. Vikas Anand, VP of Product Management for SOA, Integration, and API Cloud, tells how this happens.

API 360 Podcast by Vikas Anand

API Action at Code One

A lot is happening there! Hear from customers directly on how Oracle’s API Cloud has helped them design and manage world-class APIs. Here are a few do-not-miss sessions, but you can always visit the Oracle Code page to discover more. See you there!

- How Rabobank is using APICS to Achieve API Success
- How RTD Connexions and Graco are using APICS to Achieve API Success
- How Ikea is using APICS to Achieve API Success
- Keynote: AI Powered Autonomous Integration Cloud and API Cloud Service
- API Evolution Challenges by NFL
- Evolutionary Tales of API by NFL
- Vector API for Java by Netflix


Cloud

Using Kubernetes in Hybrid Clouds -- Join Us @ Oracle OpenWorld

By now you have probably heard of the term cloud native. The Cloud Native Computing Foundation (CNCF) defines cloud native as a set of technologies that “empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” Cloud native is characterized by the use of containers and small, modular services (microservices) that are managed by orchestration software. In the following blog post, we will cover the relationship between containers, Kubernetes, and hybrid clouds. For more on this topic, please join us at Oracle OpenWorld for Kubernetes in an Oracle Hybrid Cloud [BUS5722].

Containers and Kubernetes

In the most recent CNCF survey of 2,400 respondents, use of cloud native technologies in production has grown by over 200%, and 45% of companies run 250 or more containers. Leveraging many containerized applications requires orchestration software that can run, deploy, scale, monitor, manage, and provide high availability for hundreds or thousands of microservices. These microservices are easier and faster to develop and upgrade, since development and updates of each microservice can be completed independently without affecting the overall application. Once a new version of a microservice is tested, it can then be pushed into production to replace the existing version without any downtime.

Hybrid Clouds

Hybrid clouds can reduce downtime and ensure application availability. For example, in a hybrid cloud model you can leverage an on-premises datacenter for your production workloads and leverage Availability Domains in Oracle Cloud for your DR deployments, to ensure that business operations are not affected by a disaster. Whereas in a traditional on-premises datacenter model you would hire staff to manage each of your geographically dispersed datacenters, you can now offload the maintenance of infrastructure and software to a public cloud vendor such as Oracle Cloud. In turn, this reduces your operational costs of managing multiple datacenter environments.

Why Kubernetes and Hybrid Clouds are like Peanut Butter and Jelly

To make the best use of a hybrid cloud, you need to be able to easily package an application so that it can be deployed anywhere; in other words, you need portability. Docker containers provide the easiest way to do this, since they package the application and its dependencies to be run in any environment, whether on-premises datacenters or public clouds. At the same time, they are more efficient than virtual machines (VMs), as they require less compute, memory, and storage resources. This makes them more economical and faster to deploy than VMs.

Oracle’s Solution for Hybrid Clouds

Oracle Cloud is a public cloud offering that provides multiple services for containers, including Oracle Container Engine for Kubernetes (OKE). OKE is certified by CNCF, and is managed and maintained by Oracle. With OKE, you can get started quickly with a continuously up-to-date container orchestration platform: just bring your container apps. For hybrid use cases, you can couple Kubernetes in your data center with OKE, and then move workloads or mix workloads as needed.

To get more details and real-world insight into OKE and hybrid use cases, please join us at Oracle OpenWorld for the following session, where Jason Looney from Beeline will be presenting with David Cabelus from Oracle Product Management:

Kubernetes in an Oracle Hybrid Cloud [BUS5722]
Wednesday, Oct 24, 4:45 p.m. - 5:30 p.m. | Moscone South - Room 160
David Cabelus, Senior Principal Product Manager, Oracle
Jason Looney, VP of Enterprise Architecture, Beeline
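If you want to try the OKE side of the hybrid story before the session, the practical point about portability is that the same manifests and kubectl workflow apply whether the cluster runs on-premises or in OKE. Here is a minimal sketch, assuming you already have an OKE cluster and a configured OCI CLI; the cluster OCID, context name, and manifest filename are placeholders for illustration.

```bash
# Fetch a kubeconfig for the managed OKE cluster (cluster OCID is a placeholder)
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..example \
  --file ~/.kube/config

# The same manifests used against an on-premises cluster can be applied to OKE
kubectl config use-context <oke-context-name>
kubectl apply -f deployment.yaml
kubectl get pods
```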


Get going quickly with Command Line Interface for Oracle Cloud Infrastructure using Docker container

Originally published at technology.amis.nl on October 14, 2018.

Oracle Cloud Infrastructure is Oracle’s second-generation infrastructure-as-a-service offering that supports many components, including compute nodes, networks, storage, Kubernetes clusters, and Database as a Service. Oracle Cloud Infrastructure can be administered through a GUI (a browser-based console) as well as through a REST API and the OCI Command Line Interface. Oracle also offers a Terraform provider that allows automated, scripted provisioning of OCI artefacts. This article describes an easy approach to get going with the Command Line Interface for Oracle Cloud Infrastructure, using the oci-cli Docker image. With a Docker container image and a simple configuration file, oci commands can be executed without having to locally install and update the OCI Command Line Interface (and the Python runtime environment) itself.

These are the steps to get going on a Linux or Mac host that contains a Docker engine:

- Create a new user in OCI (or use an existing user) with appropriate privileges; you need the OCID for the user
- Also make sure you have the name of the region and the OCID for the tenancy on OCI
- Execute a docker run command to prepare the OCI CLI configuration file
- Update the user in OCI with the public key created by the OCI CLI setup action
- Edit the .profile to associate the oci command line instruction on the Docker host with running the OCI CLI Docker image

At that point, you can locally run any OCI CLI command against the specified user and tenant, using nothing but the Docker container that contains the latest version of the OCI CLI and the required runtime dependencies. In more detail, the steps look like this:

Create a new user in OCI (or use an existing user) with appropriate privileges

You can reuse an existing user or create a fresh one, which is what I did. This step I performed in the OCI Console. I then added this user to the group Administrators, and I noted the OCID for this user. Also make sure you have the name of the region and the OCID for the tenancy on OCI.

Execute a docker run command to prepare the OCI CLI configuration file

On the Docker host machine, create a directory to hold the OCI CLI configuration files. These files will be made available to the CLI tool by mounting the directory into the Docker container.

```
mkdir ~/.oci
```

Run the following Docker command:

```
docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci -it stephenpearson/oci-cli:latest setup config
```

This starts the OCI CLI container in interactive mode, with the ~/.oci directory mounted into the container at /root/.oci, and executes the setup config command on the OCI CLI (see https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/setup/config.html). This command starts a dialog that results in the OCI config file being written to /root/.oci inside the container and to ~/.oci on the Docker host. The dialog also results in a private and a public key file in that same directory. Here is the content of the config file that the dialog has generated on the Docker host:

Update the user in OCI with the public key created by the OCI CLI setup action

The contents of the file that contains the public key (~/.oci/oci_api_key_public.pem in this case) should be configured on the OCI user (kubie in this case) as an API Key.

Create a shortcut command for the OCI CLI on the Docker host

We did not install the OCI CLI on the Docker host, but we can still make it possible to run the CLI commands as if we did. If we edit the .profile file to associate the oci command line instruction on the Docker host with running the OCI CLI Docker image, we get the same experience on the host command line as if we had installed the OCI CLI. Edit ~/.profile and add this line:

```
oci() { docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci stephenpearson/oci-cli:latest "$@"; }
```

On the Docker host I can now run oci cli commands that will be sent to the Docker container, which uses the configuration in ~/.oci for connecting to the OCI instance.

Run OCI CLI commands on the host

We are now set to run OCI CLI commands, even though we did not actually install the OCI CLI and the Python runtime environment. Note: most commands we run will require us to pass the compartment ID of the OCI compartment against which we want to perform an action. It is convenient to set an environment variable with the compartment OCID value and then refer to that variable in all CLI commands. For example:

```
export COMPARTMENT_ID=ocid1.tenancy.oc1..aaaaaaaaot3ihdt
```

Now to list all policies in this compartment:

```
oci iam policy list --compartment-id $COMPARTMENT_ID --all
```

And to create a new policy, one that I need in order to provision a Kubernetes cluster:

```
oci iam policy create --name oke-service --compartment-id $COMPARTMENT_ID \
  --statements '["allow service OKE to manage all-resources in tenancy"]' \
  --description 'policy for granting rights on OKE to manage cluster resources'
```

Or to create a new compartment:

```
oci iam compartment create --compartment-id $COMPARTMENT_ID --name oke-compartment \
  --description "Compartment for OCI resources created for OKE Cluster"
```

From here on, it is just regular OCI CLI work, just as if it had been installed locally. But by using the Docker container, we keep our system tidy and we can easily benefit from the latest version of the OCI CLI at all times.

Resources

- OCI CLI Command Reference: https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/index.html
- Terraform Provider for OCI: https://www.terraform.io/docs/providers/oci/index.html
- GitHub repo for OCI CLI Docker: https://github.com/stephenpearson/oci-cli


Java

Matrix Bullet Time Demo Take Two

By Christopher Bensen and Noel Portugal, Cloud Experience Developers

If you are attending Oracle Code One or Open World 2018, you will be happy to hear that the Matrix Bullet Time demo will be there. You can experience it by coming to Moscone West, in the Developer Exchange and inside the Groundbreakers Hub. Last year we went into the challenges of building the Matrix Bullet Time demo (https://developer.oracle.com/java/bullet-time). A lot of problems were encountered after that article was published, so this year we pulled the demo out of storage, dusted it off, and began refurbishing it so it could make a comeback. The first challenge was trying to remember how it all worked.

Let's back up a bit and describe what we've built here so you don't have to read the previous article. The idea is to create a demo that takes simultaneous photos from cameras placed around a subject and stitches those photos together to form a movie. The intended final effect is for it to appear as though the camera is moving around a subject frozen in time. To do this we used 60 individual Raspberry Pi 3 single-board computers with Raspberry Pi cameras.

Besides all of the technical challenges, there are some logistical challenges. When set up, the demo is huge! It forms a ten-foot-diameter circle and needs even more space for the mounting system. Not only is it huge, it's delicate. Wires big and small are going everywhere. Fifteen Raspberry Pi 3s are mounted to each of the four lighting-track gantries, and they are precarious at best. And to top it off, we have to transport this demo to wherever we are going to set it up and back. An absolutely massive crate was built that requires an entire truck. Because of these logistical challenges the demo was only used at Open World and the keynote at JavaOne.

Last year at Open World the demo was not working for the full length of the show. One of the biggest reasons is that aligning 60 cameras to a single point is difficult at best, and impossible with a precariously delicate mounting system. So software image stabilization was written, by Richard Bair, on the floor under the demo.

If you read the previous article about Bullet Time, you'd know a lighting track system was used to provide power. One of the benefits of using a lighting track system is that it handles power distribution. You provide the 120-volt AC input power to the track and it carries that power through copper wires built into the track. At any point where you want to have a light, you use a mount designed for the track system, which transfers the power through the mount to the light. A 48-volt DC power supply sends 20 amps through the wires designed for 120 volts AC. Each camera has a small voltage regulator to step down to the 5 volts DC required for a Raspberry Pi. The brilliance of this system is that it is easy to send power, while the shutter release of the cameras and the transfer of the photos happen over WiFi. Unfortunately, WiFi is unreliable at a conference (there are far too many devices jamming up the spectrum), so that required running individual Ethernet cables to each camera, which is what we were trying to avoid by using the lighting track system. So we ended up with an Ethernet harness strapped to the track. Once we opened up the crate and set up Bullet Time, only one camera was not functioning.

On the software side there are four parts:

1. A tablet that the user interacts with, providing a name and optional mobile number and a button to start the countdown to take the photo.
2. The Java server, which receives the countdown and sends out a UDP packet to the Raspberry Pi cameras to take a photo. The server also receives the photos and stitches them together to make the video.
3. Python code running on each Raspberry Pi, which listens for a UDP packet telling it to take a photo and where to send it.
4. The cloud software, which uploads the video to a YouTube channel. A text message with the link is then sent to the user.

The overall system works like this:

1. The user inputs their name on the Oracle JavaScript Extension Toolkit (Oracle JET) web UI we built for this demo, which runs on a Microsoft Surface tablet.
2. The user then clicks a button on the Oracle JET web UI to start a 10-second countdown.
3. The web UI invokes a REST API on the Java server to start the countdown.
4. After a 10-second delay, the Java server sends a multicast message to all the Raspberry Pi units at the same moment, instructing them to take a picture.
5. Each camera takes a picture and sends the picture data back up to the server.
6. The server makes any adjustments necessary to the pictures (see below) and then, using FFMPEG, turns those 60 images into an MP4 movie.
7. The server responds to the Oracle JET web UI's REST request with a link to the completed movie.
8. The Oracle JET web UI displays the movie.

In general, this system worked really well. The primary challenge that we encountered was getting all 60 cameras to focus on exactly the same point in space. If the cameras were not precisely focused on the same point, then it would seem like the "virtual" camera (the resulting movie) would jump all over the place. One camera might be pointed a little higher, the next a little lower, the next a little left, and the next rotated a little. This would create a disturbing "bouncy" effect in the movie. We took two approaches to solve this.

First, each Raspberry Pi camera was mounted with a series of adjustable parts, so that we could manually visit each Raspberry Pi and adjust the yaw, pitch, and roll of each camera. We would place a tripod with a pyramid target mounted to it in the center of the camera helix as a focal point, and using a hand-held HDMI monitor we visited each camera to manually adjust the cameras as best we could to line them all up on the pyramid target. Even so, this was only a rough adjustment and the resulting videos were still very bouncy.

The next approach was a software-based approach to adjusting the translation (pitch and yaw) and rotation (roll) of the camera images. We created a JavaFX app to help configure each camera with settings for how much translation and rotation were necessary to perfectly line up each camera on the same exact target point. Within the app, we would take a picture from the camera. We would then click the target location, and the software would know how much it had to adjust the x and y axes for that point to end up in the dead center of each image. Likewise, we would rotate the image to line it up relative to a "horizon" line that was superimposed on the image. We had to visit each of the 60 cameras to perform both the physical and virtual configuration. Then at runtime, the server would query the cameras to get their adjustments. When images were received from the cameras (see step 6 above), we used the Java 2D API to transform those images according to the translation and rotation values previously configured. We also had to crop the images, so we adjusted each Raspberry Pi camera to take the highest-resolution image possible, and then we cropped it to 1920x1080 for the resulting hi-def movie.

If we were to build Bullet Time version 2.0 we'd make a few changes, such as powering the Raspberry Pis using PoE, replacing the lighting track with a stronger, less flexible rolled-aluminum square tube in eight sections rather than four, and upgrading the camera module with a better lens. But overall this is a fun project with a great user experience.
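For readers curious what the image-to-movie part of step 6 looks like in practice, it can be reproduced with a plain FFMPEG invocation. This is only an illustrative sketch, not the demo's actual command: the frame filenames, frame rate, and output name are assumptions, since the article does not list the options used.

```bash
# Assume the 60 adjusted, cropped frames were written as frame_01.jpg .. frame_60.jpg.
# Encode them into an H.264 MP4, playing back the sweep around the subject at 30 frames/sec.
ffmpeg -framerate 30 -i frame_%02d.jpg \
       -c:v libx264 -pix_fmt yuv420p \
       bullet-time.mp4
```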


APIs

Microservices From Dev To Deploy, Part 3: Local Deployment & The Angular UI

In this series, we're taking a look at how microservice applications are built.  In part 1 we learned about the new open source framework from Oracle called Helidon and learned how it can be used with both Java and Groovy in either a functional, reactive style or a more traditional Microprofile manner.  Part 2 acknowledged that some dev teams have different strengths and preferences and that one team in our fictional scenario used NodeJS with the ExpressJS framework to develop their microservice.  Yet another team in the scenario chose to use Fn, another awesome Oracle open source technology to add serverless to the application architecture.  Here is an architecture diagram to help you better visualize the overall picture: It may be a contrived and silly scenario, but I think it properly represents the diversity of skills and preferences that are the true reality of many teams that are building software today.  Our ultimate path in this journey is how all of the divergent pieces of this application come together in a deployment on the Oracle Cloud and we're nearly at that point.  But before we get there, let's take a look at how all of these backend services that have been developed come together in a unified frontend. Before we get started, if you're playing along at home you might want to first make sure you have access to a local Kubernetes cluster.  For testing purposes, I've built my own cluster using a few Raspberry Pi's (following the instructions here), but you can get a local testing environment up and running with minikube pretty quickly.  Don't forget to install kubectl, you'll need the command line tools to work with the cluster that you set up. With the environment set up, let's revisit Chris' team who you might recall from part 1 have built out a weather service backend using Groovy with Helidon SE.  The Gradle 'assemble' task gives them their JAR file for deployment, but Helidon also includes a few other handy features: a docker build file and a Kubernetes yaml template to speed up deploying to a K8S cluster.  When you use the Maven archetype (as Michiko's team did in part 1) the files are automatically copied to the 'target' directory along with the JAR, but since Chris' team is using Groovy with Gradle, they had to make a slight modification to the build script to copy the templates and slightly modify the paths within them.  The build.gradle script they used now includes the following tasks: task copyDocker(type:Copy) { from "src/main/docker" into "build" doLast { def d = new File( 'build/Dockerfile' ) def dfile = d.text.replaceAll('\\$\\{project.artifactId\\}', project.name) dfile = dfile.replaceAll("COPY ${project.name}", "COPY libs/${project.name}") d.write(dfile) } } task copyK8s(type:Copy) { from "src/main/k8s" into "build" doLast { def a = new File( 'build/app.yaml' ) def afile = a.text.replaceAll('\\$\\{project.artifactId\\}', project.name) a.write(afile) } } copyLibs.dependsOn jar copyDocker.dependsOn jar copyK8s.dependsOn jar assemble.dependsOn copyLibs assemble.dependsOn copyDocker assemble.dependsOn copyK8s So now, when Chris' team performs a local build they receive a fully functional Dockerfile and app.yaml file to help them quickly package the service into a Docker container and deploy that container to a Kubernetes cluster.  
The process now becomes: Write Code Test Code Build JAR (gradle assemble) Build Docker Container (docker build / docker tag) Push To Docker Registry (docker push) Create Kubernetes Deployment (kubectl create) Which, if condensed into a quick screencast, looks something like this: When the process is repeated for the rest of the backend services the frontend team led by Ava are now are able to integrate the backend services into the Angular 6 frontend that they have been working on.  They start by specifying the deployed backend base URLs in their environment.ts file.  Angular uses this file to provide a flexible way to manage global application variables that have different values per environment.  For example, an environment.prod.ts file can have it's own set of production specific values that will be substituted when a `ng build --prod` is performed.  The default environment.ts is used if no environment is specified so the team uses that file for development and have set it up with the following values: export const environment = { production: false, stockApiBaseUrl: 'http://192.168.0.160:31002', weatherApiBaseUrl: 'http://192.168.0.160:31000', quoteApiBaseUrl: 'http://192.168.0.160:31001', catApiBaseUrl: 'http://localhost:31004', }; The team then creates services corresponding to each microservice.  Here's the weather.service.ts: import {Injectable} from '@angular/core'; import {HttpClient} from '@angular/common/http'; import {environment} from '../../environments/environment'; @Injectable({ providedIn: 'root' }) export class WeatherService { private baseUrl: string = environment.weatherApiBaseUrl; constructor( private http: HttpClient, ) { } getWeatherByCoords(coordinates) { return this.http .get(`${this.baseUrl}/weather/current/lat/${coordinates.lat}/lon/${coordinates.lon}`); } } And call the services from the view component. getWeather() { this.weather = null; this.weatherLoading = true; this.locationService.getLocation().subscribe((result) => { const response: any = result; const loc: Array<string> = response.loc.split(','); const lat: string = loc[0]; const long: string = loc[1]; console.log(loc) this.weatherService.getWeatherByCoords({lat: lat, lon: long}) .subscribe( (weather) => { this.weather = weather; }, (error) => {}, () => { this.weatherLoading = false; } ); }); } Once they've completed this for all of the services, the corporate vision of a throwback homepage is starting to look like a reality: In three posts we've followed TechCorp's journey to developing an internet homepage application from idea, to backend service creation and onto integrating the backend with a modern JavaScript based frontend built with Angular 6.  In the next post of this series we will see how this technologically diverse application can be deployed to Oracle's Cloud.


Containers, Microservices, APIs

Microservices From Dev To Deploy, Part 2: Node/Express and Fn Serverless

.syntaxhighlighter table td.gutter div.line { padding: 0 .5em 0 1em!important; } In our last post, we were introduced to a fictional company called TechCorp run by an entrepreneur named Lydia whose goal it is to bring back the world back to the glory days of the internet homepage. Lydia’s global team of remarkable developers are implementing her vision with a microservice architecture and we learned about Chris and Michiko who have teams in London and Tokyo.  These teams built out a weather and quote service using Helidon, a microservice framework by Oracle.  Chris’ team used Helidon SE with Groovy and Michiko’s team chose Java with Helidon MP.  In this post, we’ll look at Murielle and her Bangalore crew who are building a stock service using NodeJS with Express and Dominic and the Melbourne squad who have the envious task of building out a random cat image service with Java Oracle Fn (a serverless technology). It’s clear Helidon makes both functional and Microprofile style services straight-forward to implement.  But, despite what I personally may have thought 5 years ago it is getting impossible to ignore that NodeJS has exploded in popularity.  Stack Overflow’s most recent survey shows over 69% of respondents selecting JavaScript as the “Most Popular Technology” among Programming, Scripting and Markup Languages and Node comes in atop the “Framework” category with greater than 49% of the respondents preferring it.  It’s a given that people are using JavaScript on the frontend and it’s more and more likely that they are taking advantage of it on the backend, so it’s no surprise that Murielle’s team decided to use Node with Express to build out the stock service.     We won’t dive too deep into the Express plumbing for this service, but let’s have a quick look at the method to retrieve the stock quote: var express = require('express'); var router = express.Router(); var config = require('config'); var fetch = require("node-fetch"); /* GET stock quote */ /* jshint ignore:start */ router.get('/quote/:symbol', async (req, res, next) => { const symbol = req.param('symbol'); const url = `${config.get("api.baseUrl")}/?function=GLOBAL_QUOTE&symbol=${symbol}&apikey=${config.get("api.apiKey")}`; try { const response = await fetch(url); const json = await response.json(); res.send(json); } catch (error) { res.send(JSON.stringify(error)); } }); /* jshint ignore:end */ module.exports = router; Using fetch (in an async manner), this method calls the stock quote API and passes along the symbol that it received via the URL parameters and returns the stock quote as a JSON string to the consumer.  Here’s how that might look when we hit the service locally: Murielle’s team can expand the service in the future to provide historical data, cryptocurrency lookups, or whatever the business needs demand, but for now it provides a current quote based on the symbol it receives.  The team creates a Dockerfile and Kubernetes config file for deployment which we’ll take a look at in the future.   Dominic’s team down in Melbourne has been doing a lot of work with serverless technologies.  Since they’ve been tasked with a priority feature – random cat images – they feel that serverless is the way to go do deliver this feature and set about using Fn to build the service.  It might seem out of place to consider serverless in a microservice architecture, but it undoubtedly has a place and fulfills the stated goals of the microservice approach:  flexible, scalable, focused and rapidly deployable.  
Dominic’s team has done all the research on serverless and Fn and is ready to get to work, so the developers installed a local Fn server and followed the quickstart for Java to scaffold out a function.   Once the project was ready to go Dominic’s team modified the func.yaml file to set up some configuration for the project, notably the apiBaseUrl and apiKey: schema_version: 20180708 name: cat-svc version: 0.0.47 runtime: java build_image: fnproject/fn-java-fdk-build:jdk9-1.0.70 run_image: fnproject/fn-java-fdk:jdk9-1.0.70 cmd: codes.recursive.cat.CatFunction::handleRequest format: http config: apiBaseUrl: https://api.thecatapi.com/v1 apiKey: [redacted] triggers: - name: cat type: http source: /random The CatFunction class is basic.  A setUp() method, annotated with @FnConfiguration gives access to the function context which contains the config info from the YAML file and initializes the variables for the function.  Then the handleRequest() method makes the HTTP call, again using a client library called Unirest, and returns the JSON containing the link to the crucial cat image.   public class CatFunction { private String apiBaseUrl; private String apiKey; @FnConfiguration public void setUp(RuntimeContext ctx) { apiBaseUrl = ctx.getConfigurationByKey("apiBaseUrl").orElse(""); apiKey = ctx.getConfigurationByKey("apiKey").orElse(""); } public OutputEvent handleRequest(String input) throws UnirestException { String url = apiBaseUrl + "/images/search?format=json"; HttpResponse<JsonNode> response = Unirest .get(url) .header("Content-Type", "application/json") .header("x-api-key", apiKey) .asJson(); OutputEvent out = OutputEvent.fromBytes( response.getBody().toString().getBytes(), OutputEvent.Status.Success, "application/json" ); return out; } } To test the function, the team deploys the function locally with: fn deploy --app cat-svc –local And tests that it is working: curl -i \ -H "Content-Type: application/json" \ http://localhost:8080/t/cat-svc/random Which produces: HTTP/1.1 200 OK Content-Length: 112 Content-Type: application/json Fn_call_id: 01CRGBAH56NG8G00RZJ0000001 Xxx-Fxlb-Wait: 502.0941ms Date: Fri, 28 Sep 2018 15:04:05 GMT [{"id":"ci","categories":[],"url":"https://24.media.tumblr.com/tumblr_lz8xmo6xYV1r0mbi6o1_500.jpg","breeds":[]}] Success!  Dominic’s team created the cat service before lunch and spent the rest of the day looking at random cat pictures.   Now that all 4 teams have implemented their respective services using various technologies, you might be asking yourself why it was necessary to implement such trivial services on the backend instead of calling the third-party APIs directly from the front end.  There are several reasons but let's take a look at just a few of them:   One reason to implement this functionality via a server-based backend is that third-party APIs can be unreliable and/or rate limited.  By proxying the API through their own backend, the teams are able to take advantage of caching and rate limiting of their own design to prevent the demand on the third-party API and get around potential downtime or rate limiting for a service that they have limited or no control over.     Secondly, the teams are given the luxury of controlling the data before it’s sent to the client.  If it is allowed within the API terms and the business needs require them to supplement the data with other third-party or user data they can reduce the client CPU, memory, and bandwidth demands by augmenting or modifying the data before it even gets to the client.   
Finally, CORS restrictions in the browser can be circumvented by calling the API from the server (and if you've ever had CORS block your HTTP calls in the browser you can definitely appreciate this!).   TechCorp has now completed the initial microservice development sprint of their project.  In the next post, we’ll look at how these 4 services can be deployed to a local Kubernetes cluster and we'll also dig into the Angular front end of the application.  
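As a footnote to the caching and rate-limiting point above, here is a very small sketch of what "caching of their own design" might look like inside the Java-based cat service. The class name, cache duration, key, and fetch method are all hypothetical; a real service would more likely reach for a caching library or an external cache, but the idea is the same: a client request only hits the rate-limited third-party API when the cached response has expired.

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

// Tiny time-based cache: serve a previously fetched third-party response until it
// expires, so that not every client request hits the rate-limited upstream API.
public class TtlCache<T> {

    private static final class Entry<T> {
        final T value;
        final Instant expiresAt;
        Entry(T value, Instant expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Duration ttl;
    private final ConcurrentMap<String, Entry<T>> entries = new ConcurrentHashMap<>();

    public TtlCache(Duration ttl) {
        this.ttl = ttl;
    }

    public T get(String key, Supplier<T> loader) {
        Entry<T> hit = entries.get(key);
        if (hit != null && Instant.now().isBefore(hit.expiresAt)) {
            return hit.value;                                  // still fresh: no upstream call
        }
        T value = loader.get();                                // call the third-party API
        entries.put(key, new Entry<>(value, Instant.now().plus(ttl)));
        return value;
    }
}

// Hypothetical usage inside a service method:
//   TtlCache<String> cache = new TtlCache<>(Duration.ofSeconds(30));
//   String json = cache.get("random-cat", () -> fetchRandomCatJson());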


Containers, Microservices, APIs

Microservices From Dev To Deploy, Part 1: Getting Started With Helidon

.syntaxhighlighter table td.gutter div.line { padding: 0 .5em 0 1em!important; } Microservices are undoubtedly popular.  There have been plenty of great posts on this blog that explain the advantages of using a microservice approach to building applications (or “why you should use them”).  And the reasons are plentiful:  flexibility to allow your teams to implement different services with their language/framework of choice, independent deployments, and scalability, and improved build and test times are among the many factors that make a microservice approach preferable to many dev teams nowadays.  It’s really not much of a discussion anymore as studies have shown that nearly 86% of respondents believe that a microservice approach will be their default architecture within the next 5 years.  As I mentioned, the question of “why microservices” has long been answered, so in this short blog series, I’d like to answer the question of “how” to implement microservices in your organization. Specifically, how Oracle technologies can help your dev team implement a maintainable, scalable and easy to test, develop, and deploy solution for your microservice applications. To keep things interesting I thought I’d come up with a fictional scenario that we can follow as we take this journey.  Let’s imagine that a completely fabricated startup called TechCorp has just secured $150M in seed funding for their brilliant new project.  TechCorp’s founder Lydia is very nostalgic and she longs for the “good old days” when 56k modems screeched and buzzed their way down the on-ramp to the “interwebs” and she’s convinced BigCity Venture Capital that personalized homepages are about to make a comeback in a major way.  You remember those, right?  Weather, financials, news – even inspiring quotes and funny cat pictures to brighten your day.  With funding secured Lydia set about creating a multinational corporation with several teams of “rock star” developers across the globe.  Lydia and her CTO Raj know all about microservices and plan on having their teams split up and tackle individual portions of the backend to take advantage of their strengths and ensure a flexible and reliable architecture. Team #1: Location:  London Team Lead:  Chris Focus:  Weather Service Language:  Groovy Framework:  Oracle Helidon SE with Gradle Team #2: Location:  Tokyo Team Lead:  Michiko Focus:  Quote Service Language:  Java Framework:  Oracle Helidon MP with Maven Team #3: Location:  Bangalore Team Lead:  Murielle Focus:  Stock Service Language:  JavaScript/Node Framework:  Express Team #4: Location:  Melbourne Team Lead:  Dominic Focus:  Cat Picture Service Language:  Java Framework Oracle Fn (Serverless) Team #5 Location:  Atlanta Team Lead:  Ava Focus:  Frontend Language:  JavaScript/TypeScript Framework:  Angular 6 As you can see, Lydia has put together quite a globally diverse group of teams with a wide-ranging set of skills and experience.  You’ll also notice some non-Oracle technologies in their selections which you might find odd in a blog post focused on Oracle technology, but that’s indicative of many software companies these days.  Rarely do teams focus solely on a single company’s stack anymore.  While we’d love it if they did, the reality is that teams typically have strengths and preferences that come into play.  
I’ll show you in this series how Oracle’s new open source Helidon framework and Fn Serverless project can be leveraged to build microservices and serverless functions, but also how a team can deploy their entire stack to Oracle’s cloud regardless of the language or framework used to build the services that comprise their application.  We'll dive slightly deeper into Helidon than an introductory post, so you might want to first read this introductory blog post and the tutorial before you read the rest of this post. Let’s begin with Team #1 who has been tasked with building out the backend for retrieving a user’s local weather.  They’re a Groovy team, but they’ve heard good things about Oracle’s new microservice framework Helidon so they’ve chosen to use this new project as an opportunity to learn the new framework and see how well it works with Groovy and Gradle as a build tool.  Team lead Chris has read through the Helidon tutorial and created a new application using the quickstart examples so his first task is to transform the Java application that was created into a Groovy application.  The first step for Chris, in this case, is to create a Gradle build file and make sure that it includes all of the necessary Helidon dependencies as well as a Groovy dependency.  Chris also adds a ‘copyLibs’ task to make sure that all of the dependencies end up where they need to when the project is built.  The build.gradle file looks like this: apply plugin: 'java' apply plugin: 'maven' apply plugin: 'groovy' apply plugin: 'application' mainClassName = 'codes.recursive.weather.Main' group = 'codes.recursive.weather' version = '1.0-SNAPSHOT' description = """A simple weather microservice""" sourceSets.main.resources.srcDirs = [ "src/main/groovy", "src/main/resources" ] sourceCompatibility = 1.8 targetCompatibility = 1.8 tasks.withType(JavaCompile) { options.encoding = 'UTF-8' } ext { helidonversion = '0.10.0' } repositories { maven { url "http://repo.maven.apache.org/maven2" } mavenLocal() mavenCentral() } configurations { localGroovyConf } dependencies { localGroovyConf localGroovy() compile 'org.codehaus.groovy:groovy-all:3.0.0-alpha-3' compile "io.helidon:helidon-bom:${project.helidonversion}" compile "io.helidon.webserver:helidon-webserver-bundle:${project.helidonversion}" compile "io.helidon.config:helidon-config-yaml:${project.helidonversion}" compile "io.helidon.microprofile.metrics:helidon-metrics-se:${project.helidonversion}" compile "io.helidon.webserver:helidon-webserver-prometheus:${project.helidonversion}" compile group: 'com.mashape.unirest', name: 'unirest-java', version: '1.4.9' testCompile 'org.junit.jupiter:junit-jupiter-api:5.1.0' } // define a custom task to copy all dependencies in the runtime classpath // into build/libs/libs // uses built-in Copy task copyLibs(type: Copy) { from configurations.runtime into 'build/libs/libs' } // add it as a dependency of built-in task 'assemble' copyLibs.dependsOn jar copyDocker.dependsOn jar copyK8s.dependsOn jar assemble.dependsOn copyLibs assemble.dependsOn copyDocker assemble.dependsOn copyK8s // default jar configuration // set the main classpath jar { archiveName = "${project.name}.jar" manifest { attributes ('Main-Class': "${mainClassName}", 'Class-Path': configurations.runtime.files.collect { "libs/$it.name" }.join(' ') ) } } With the build script set up Chris’ team goes about building the application.  Helidon SE makes it pretty easy to build out a simple service.  
To get started you only really need a few classes:  A Main.groovy (notice that the Gradle script indentifies the mainClassName with a path to Main.groovy) which creates the server, sets up routing, configures error handling and optionally sets up metrics for the server.  Here’s the entire Main.groovy: final class Main { private Main() { } private static Routing createRouting() { MetricsSupport metricsSupport = MetricsSupport.create() MetricRegistry registry = RegistryFactory .getRegistryFactory() .get() .getRegistry(MetricRegistry.Type.APPLICATION) return Routing.builder() .register("/weather", new WeatherService()) .register(metricsSupport) .error( NotFoundException.class, {req, res, ex -> res.headers().contentType(MediaType.APPLICATION_JSON) res.status(404).send(new JsonGenerator.Options().build().toJson(ex)) }) .error( Exception.class, {req, res, ex -> ex.printStackTrace() res.headers().contentType(MediaType.APPLICATION_JSON) res.status(500).send(new JsonGenerator.Options().build().toJson(ex)) }) .build() } static void main(final String[] args) throws IOException { startServer() } protected static WebServer startServer() throws IOException { // load logging configuration LogManager.getLogManager().readConfiguration( Main.class.getResourceAsStream("/logging.properties")) // By default this will pick up application.yaml from the classpath Config config = Config.create() // Get webserver config from the "server" section of application.yaml ServerConfiguration serverConfig = ServerConfiguration.fromConfig(config.get("server")) WebServer server = WebServer.create(serverConfig, createRouting()) // Start the server and print some info. server.start().thenAccept( { NettyWebServer ws -> println "Web server is running at http://${config.get("server").get("host").asString()}:${config.get("server").get("port").asString()}" }) // Server threads are not demon. NO need to block. Just react. server.whenShutdown().thenRun({ it -> Unirest.shutdown() println "Web server has been shut down. Goodbye!" }) return server } } Heldion SE uses a YAML file located in src/main/resources (named application.yaml) for configuration.  You can store server related config, as well as any application variables in this file.  Chris’ team puts a few variables related to the API in this file: app: apiBaseUrl: "https://api.openweathermap.org/data/2.5" apiKey: "[redacted]" server: port: 8080 host: 0.0.0.0 Looking back at the Main class, notice on line 13 where the endpoint “/weather” is registered and pointed at the WeatherService. That’s the class that’ll do all the heavy lifting when it comes to getting weather data.  Helidon SE services implement the Service interface.  This class has an update() method that is used to establish sub-routes for the given service and point those sub-routes at private methods of the service class.  Here’s what Chris’ team came up with for the update() method: void update(Routing.Rules rules) { rules .any(this::countAccess as Handler) .get("/current/city/{city}", this::getByLocation as Handler) .get("/current/id/{id}", this::getById as Handler) .get("/current/lat/{lat}/lon/{lon}", this::getByLatLon as Handler) .get("/current/zip/{zip}", this::getByZip as Handler) } Chris’ team creates 4 different routes under “/weather” giving the consumer the ability to get the current weather in 4 separate ways (by city, id, lat/lon or zip code).  Note that since we’re using Groovy we have to cast the method references as io.helidon.webserver.Handler or we’ll get an exception.  
We’ll take a quick look at just one of those methods, getByZip(): private void getByZip(ServerRequest request, ServerResponse response) { def zip = request.path().param("zip") def weather = getWeather([ (ZIP): zip ]) response.headers().contentType(MediaType.APPLICATION_JSON) response.send(weather.getBody().getObject().toString()) } The getByZip() method grabs the zip parameter from the request and calls getWeather(), which uses a client library called Unirest to make an HTTP call to the chosen weather API and returns the current weather to getByZip() which sends the response to the browser as JSON: private HttpResponse<JsonNode> getWeather(Map params) { return Unirest .get("${baseUrl}/weather?${params.collect { it }.join('&')}&appid=${apiKey}") .asJson() } As you can see, each service method gets passed two arguments when called by the router – the request and response (as you might have guessed if you’ve worked with a microservice framework before).  These arguments allow the developer to grab URL parameters, form data or headers from the request and set the status, body or headers into the response as necessary.  Once the team builds out the entire weather service they are ready to execute the Gradle run task to see everything working in the browser. Cloudy in London?  A shocking weather development! There’s obviously more to Helidon SE, but as you can see it doesn’t take a lot of code to get a basic microservice up and running. We’ll take a look at deploying the services in a later post, but Helidon makes that step trivial with baked in support for generating Dockerfiles and Kubernetes config files.  Let’s switch gears now and look at Michiko’s team who was tasked with building out a backend to return random quotes since no personalized homepage would be complete without such a feature.  The Tokyo team prefers to code in Java and they use Maven to manage compilation and dependencies.  They are quite familiar with the Microprofile family of APIs.  Michiko and team also decided to use Helidon, but with their Microprofile expertise, they decided to go with Helidon MP over the more reactive functional style of SE because it provides recognizable APIs like JAX-RS and CDI that they have been using for years.  Like Chris’ team, they rapidly scaffold out a skeleton application with the MP quickstart archetype and set out configuring their Main.java class.  
The main method of that class calls startServer() which is slightly different from the SE method, but accomplishes the same task – starting up the application server using a config file (this one named microprofile-config.properties and located in /src/main/resources/META-INF): protected static Server startServer() throws IOException { // load logging configuration LogManager.getLogManager().readConfiguration( Main.class.getResourceAsStream("/logging.properties")); // Server will automatically pick up configuration from // microprofile-config.properties Server server = Server.create(); server.start(); return server; } Next, they create a beans.xml file in /src/main/resources/META-INF so the CDI implementation can pick up their classes: <!--?xml version="1.0" encoding="UTF-8"?--> <beans> </beans> Create the JAX-RS application, adding the resource class(es) as needed: @ApplicationScoped @ApplicationPath("/") public class QuoteApplication extends Application { @Override public Set<Class<?>> getClasses() { Set<Class<?>> set = new HashSet<>(); set.add(QuoteResource.class); return Collections.unmodifiableSet(set); } } And create the QuoteResource class: @Path("/quote") @RequestScoped public class QuoteResource { private static String apiBaseUrl = null; @Inject public QuoteResource(@ConfigProperty(name = "app.api.baseUrl") final String apiBaseUrl) { if (this.apiBaseUrl == null) { this.apiBaseUrl = apiBaseUrl; } } @SuppressWarnings("checkstyle:designforextension") @Path("/random") @GET @Produces(MediaType.APPLICATION_JSON) public String getRandomQuote() throws UnirestException { String url = apiBaseUrl + "/posts?filter[orderby]=rand&filter[posts_per_page]=1"; HttpResponse<JsonNode> quote = Unirest.get(url).asJson(); return quote.getBody().toString(); } } Notice the use of constructor injection to get a configuration property and the simple annotations for the path, HTTP method and content type of the response. The getRandomQuote() method again uses Unirest to make a call to the quote API and return the result as a JSON string.  Running the mvn package task and executing the resulting JAR starts the application running and results in the following: Michiko’s team has successfully built the initial implementation of their quote microservice on a flexible foundation that will allow the service to grow with time as the user base expands and additional funding rolls in from the excited investors!  As with the SE version, Helidon MP generates a Dockerfile and Kubernetes app.yaml file to assist the team with deployment.  We’ll look at deployment in a later post in this series. In this post, we talked about a fictitious startup getting into microservices for their heavily funded internet homepage application.  We looked at the Helidon microservice framework which provides a reactive, functional style version as well as a Microprofile version more suited to Java EE developers who are comfortable with JAX-RS and CDI.  Lydia’s teams are moving rapidly to get their backend architecture built out and are well on their way to implementing her vision for TechCorp.  In the next post, we’ll look at how Murielle and Dominic’s teams build out their services and in future posts we’ll see how all of the teams ultimately test and deploy the services into production.
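For completeness, here is one quick way to smoke-test both services from plain Java once they are running locally, using the JDK 11 java.net.http client. The weather service port matches the application.yaml shown earlier; the quote service port (8081) and the zip code are assumptions made for this sketch, so adjust them to your own setup.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SmokeTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Weather service (Helidon SE) -- port 8080 as set in application.yaml above.
        HttpRequest weather = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/weather/current/zip/90210"))
                .GET()
                .build();

        // Quote service (Helidon MP) -- port 8081 is an assumption for this sketch.
        HttpRequest quote = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/quote/random"))
                .GET()
                .build();

        System.out.println(client.send(weather, HttpResponse.BodyHandlers.ofString()).body());
        System.out.println(client.send(quote, HttpResponse.BodyHandlers.ofString()).body());
    }
}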


Oracle Offline Persistence Toolkit — After Request Sync Listener

Originally published at andrejusb.blogspot.com  In my previous post, we learned how to handle replay conflict — Oracle Offline Persistence Toolkit — Reacting to Replay Conflict. Another important thing to know is how to handle the response from a request that was replayed during sync (we are talking here about PATCH). It is not as obvious as handling the response from a direct REST call in a callback (there is no callback for a response that is synchronized later). You may wonder why you would need to handle the response after a successful sync. There could be multiple reasons — for instance, you may want to read the returned value and update the value stored on the client. The listener is registered in the Persistence Manager configuration by adding an event listener of type syncRequest for a given endpoint. In the listener code, we get the response, read the change indicator value (it was updated on the backend and the new value is returned in the response) and store it locally on the client. Additionally, we maintain an array mapping the change indicator value to the updated row ID (in my next post I will explain why this is needed). The after-request listener must return a promise. At runtime — when the request sync is executed — you should see a message printed in the log showing the new change indicator value. Double-check the payload to make sure the request was submitted with the previous value, then check the response; you will see the new value for the change indicator (the same as in the after-request listener). Sample code can be downloaded from the GitHub repository


Generic Docker Container Image for running and live reloading a Node application based on a GitHub Repo

Originally published at technology.amis.nl My desire: find a way to run a Node application from a Git(Hub) repository using a generic Docker container and be able to refresh the running container on the fly whenever the sources in the repo are updated. The process of producing containers for each application and upon each change of the application is too cumbersome and time consuming for certain situations — including rapid development/test cycles and live demonstrations. I am looking for a convenient way to run a Node application anywhere I can run a Docker container — without having to build and push a container image — and to continuously update the running application in mere seconds rather than minutes. This article describes what I created to address that requirement. Key ingredient in the story: nodemon — a tool that monitors a file system for any changes in a node.js application and automatically restarts the server when there are such changes. What I had to put together: a generic Docker container based on the official Node image — with npm and a git client inside adding nodemon (to monitor the application sources) adding a background Node application that can refresh from the Git repository — upon an explicit request, based on a job schedule and triggered by a Git webhook defining an environment variable GITHUB_URL for the url of the source Git repository for the Node application adding a startup script that runs when the container is ran first (clone from Git repo specified through GITHUB_URL and run application with nodemon) or restarted (just run application with nodemon) I have been struggling a little bit with the Docker syntax and operations (CMD vs RUN vs ENTRYPOINT) and the Linux bash shell scripts — and I am sure my result can be improved upon. The Dockerfile that builds the Docker container with all generic elements looks like this: FROM node:8 #copy the Node Reload server - exposed at port 4500 COPY package.json /tmp COPY server.js /tmp RUN cd tmp && npm install EXPOSE 4500 RUN npm install -g nodemon COPY startUpScript.sh /tmp COPY gitRefresh.sh /tmp CMD ["chmod", "+x", "/tmp/startUpScript.sh"] CMD ["chmod", "+x", "/tmp/gitRefresh.sh"] ENTRYPOINT ["sh", "/tmp/startUpScript.sh"] Feel free to pick any other node base image — from https://hub.docker.com/_/node/. For example: node:10. The startUpScript that is executed whenever the container is started up — that takes care of the initial cloning of the Node application from the Git(Hub) URL to directory /tmp/app and the running of that application using nodemon is shown below. Note the trick (inspired by StackOverflow) to run a script only when the container is ran for the very first time. #!/bin/sh CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER" if [ ! -e $CONTAINER_ALREADY_STARTED ]; then touch $CONTAINER_ALREADY_STARTED echo "-- First container startup --" # YOUR_JUST_ONCE_LOGIC_HERE cd /tmp # prepare the actual Node app from GitHub mkdir app git clone $GITHUB_URL app cd app #install dependencies for the Node app npm install #start both the reload app and (using nodemon) the actual Node app cd .. 
(echo "starting reload app") & (echo "start reload";npm start; echo "reload app finished") & cd app; echo "starting nodemon for app cloned from $GITHUB_URL"; nodemon else echo "-- Not first container startup --" cd /tmp (echo "starting reload app and nodemon") & (echo "start reload";npm start; echo "reload app finished") & cd app; echo "starting nodemon for app cloned from $GITHUB_URL"; nodemon fi The startup script runs the live reloader application in the background — using (echo “start reload”;npm start)&. That final ampersand (&) takes care of running the command in the background. This npm start command runs the server.js file in /tmp. This server listens at port 4500 for requests. When a request is received at /reload, the application will execute the gitRefresh.sh shell script that performs a git pull in the /tmp/app directory where the git clone of the repository was targeted.   const RELOAD_PATH = '/reload' const GITHUB_WEBHOOK_PATH = '/github/push' var http = require('http'); var server = http.createServer(function (request, response) { console.log(`method ${request.method} and url ${request.url}`) if (request.method === 'GET' && request.url === RELOAD_PATH) { console.log(`reload request starting at ${new Date().toISOString()}...`); refreshAppFromGit(); response.write(`RELOADED!!${new Date().toISOString()}`); response.end(); console.log('reload request handled...'); } else if (request.method === 'POST' && request.url === GITHUB_WEBHOOK_PATH) { let body = []; request.on('data', (chunk) => { body.push(chunk);}) .on('end', () => { body = Buffer.concat(body).toString(); // at this point, `body` has the entire request body stored in it as a string console.log(`GitHub WebHook event handling starting ${new Date().toISOString()}...`); ... (see code in GitHub Repo https://github.com/lucasjellema/docker-node-run-live-reload/blob/master/server.js console.log("This commit involves changes to the Node application, so let's perform a git pull ") refreshAppFromGit(); response.write('handled'); response.end(); console.log(`GitHub WebHook event handling complete at ${new Date().toISOString()}`); }); } else { // respond response.write('Reload is live at path '+RELOAD_PATH); response.end(); } }); server.listen(4500); console.log('Server running and listening at Port 4500'); var shell = require('shelljs'); var pwd = shell.pwd() console.info(`current dir ${pwd}`) function refreshAppFromGit() { if (shell.exec('./gitRefresh.sh').code !== 0) { shell.echo('Error: Git Pull failed'); shell.exit(1); } else { } } Using the node-run-live-reload image Now that you know a little about the inner workings of the image, let me show you how to use it (also see instructions here: https://github.com/lucasjellema/docker-node-run-live-reload). To build the image yourself, clone the GitHub repo and run docker build -t "node-run-live-reload:0.1" . using of course your own image tag if you like. I have pushed the image to Docker Hub as lucasjellema/node-run-live-reload:0.1. 
You can use this image like this: docker run --name express -p 3011:3000 -p 4505:4500 -e GITHUB_URL=https://github.com/shapeshed/express_example -d lucasjellema/node-run-live-reload:0.1 In the terminal window — we can get the logging from within the container using docker logs express --follow After the application has been cloned from GitHub, npm has installed the dependencies and nodemon has started the application, we can access it at <host>:3011 (because of the port mapping in the docker run command): When the application sources are updated in the GitHub repository, we can use a GET request (from CURL or the browser) to <host>:4505 to refresh the container with the latest application definition: The logging from the container indicates that a git pull was performed — and returned no new sources: Because there are no changed files, nodemon will not restart the application in this case. One requirement at this moment for this generic container to work is that the Node application has a package.json with a scripts.start entry in its root directory; nodemon expects that entry as instruction on how to run the application. This same package.json is used with npm install to install the required libraries for the Node application. Summary The next figure gives an overview of what this article has introduced. If you want to run a Node application whose sources are available in a GitHub repository, then all you need is a Docker host and these are your steps: Pull the Docker image: docker pull lucasjellema/node-run-live-reload:0.1 (this image currently contains the Node 8 runtime, npm, nodemon, a git client and the reloader application)  Alternatively: build and tag the container yourself. Run the container image, passing the GitHub URL of the repo containing the Node application; specify required port mappings for the Node application and the reloader (port 4500): docker run –name express -p 3011:3000 -p 4500:4500 -e GITHUB_URL=<GIT HUB REPO URL> -d lucasjellema/node-run-live-reload:0.1 When the container is started, it will clone the Node application from GitHub Using npm install, the dependencies for the application are installed Using nodemon the application is started (and the sources are monitored so to restart the application upon changes) Now the application can be accessed at the host running the Docker container on the port as mapped per the docker run command With an HTTP request to the /reload endpoint, the reloader application in the container is instructed to git pull the sources from the GitHub repository and run npm install to fetch any changed or added dependencies if any sources were changed, nodemon will now automatically restart the Node application the upgraded Node application can be accessed Note: alternatively, a WebHook trigger can be configured. This makes it possible to automatically trigger the application reload facility upon commits to the GitHub repo. Just like a regular CD pipeline this means running Node applications can be automatically upgraded. Next Steps Some next steps I am contemplating with this generic container image — and I welcome your pull requests — include: allow an automated periodic application refresh to be configured through an environment variable on the container (and/or through a call to an endpoint on the reload application) instructing the reloader to do a git pull every X seconds. use https://www.npmjs.com/package/simple-git instead of shelljs plus local Git client (this could allow usage of a lighter base image — e.g. 
node-slim instead of node) force a restart of the Node application — even it is not changed at all allow for alternative application startup scenarios besides running the scripts.start entry in the package.json in the root of the application Resources GitHub Repository with the resources for this article — including the Dockerfile to build the container: https://github.com/lucasjellema/docker-node-run-live-reload My article on my previous attempt at creating a generic Docker container for running a Node application from GitHub: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/ Article and Documentation on nodemon: https://medium.com/lucjuggery/docker-in-development-with-nodemon-d500366e74df and https://github.com/remy/nodemon#nodemon NPM module shelljs that allows shell commands to be executed from Node applications: https://www.npmjs.com/package/shelljs


Cloud

Autonomous Database: Creating an Autonomous Transaction Processing Instance

In this post I’m going to demonstrate how quick and easy one can create an Autonomous Transaction Processing, short ATP, instance of Oracle’s Autonomous Database Cloud Services. Oracle’s ATP launched on the 7th of August 2018 and is the general purpose flavor of the Oracle Autonomous Database. My colleague SQLMaria (also known as Maria Colgan  ) has already done a great job explaining the difference between the Autonomous Transaction Processing and the Autonomous Data Warehouse services. She has also written another post on what one can expect from Oracle Autonomous Transaction Processing. I highly recommend reading both her articles first for a better understanding of the offerings. Last but not least, you can try ATP yourself today via the Oracle Free Cloud Trial. Now let’s get started. Provisioning an ATP service is, as said above, quick and easy. tl;dr To create an instance you just have to follow these three simple steps: Log into the Oracle Cloud Console and choose "Autonomous Transaction Processing" from the menu. Click "Create Autonomous Transaction Processing" Specify the name, the amount of CPU and storage, the administrator password and hit "Create Autonomous Transaction Processing" Creating an ATP instance In order to create an ATP environment you first have to logon to the Oracle Cloud Console. From there, click on the top left menu and choose “Autonomous Transaction Processing“. On the next screen you will see all your ATP databases, in my case none, because I haven’t created any yet. Hit the “Create Autonomous Transaction Processing” button. A new window will open that asks you about the display and database name, the amount of CPUs and storage capacity, as well as the administrator password and the license to use. The display name is what you will see in the cloud console once your database service is created. The database name is the name of the database itself that you will later connect to from your applications. You can use the same name for both or different ones. In my case I will use a different name for the database than for the service. The minimum CPU and storage count is 1, which is what I’m going for. Don’t forget that scaling the CPUs and/or storage up and down is fully online with Oracle Autonomous Database and transparent to the application. So even if you don’t know yet exactly how many CPUs or TBs of storage you need, you can always change that later on which no outages! Next you have to specify the password for the admin user. The admin user is a database user with administrative privileges that allows you to create other users and perform various other tasks. Last but not least, you have to choose which license model you want to use. The choice is either bringing your own license, i.e. “My organization already owns Oracle Database software licenses“, sometimes also referred to as “BYOL” or “Bring Your Own License“, which means that you do already have some unused Oracle Database licenses that you would like to reuse for your Autonomous Transaction Processing instance. This is usually done if you want to migrate your on-premises databases into the cloud and want to leverage the fact that you have already bought Oracle Database licenses in the past. The other option is to subscribe to new Oracle Database software licenses as part of the provisioning. This option is usually used if you want to have a new database cloud service that doesn’t replace an existing database. 
Once you have made your choice, it’s time to hit the “Create Autonomous Transaction Processing“. Your database is now being provisioned. Once the state changes to Green – Available, your database is up and running. Clicking on the name of the service will provide you with further details. Congratulations, you have just created your first Autonomous Transaction Processing Database Cloud Service. Make sure you also check out the Autonomous Transaction Processing Documentation. Originally published at geraldonit.com on August 28, 2018.
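Once the instance is available, connecting from a Java application is mostly a matter of downloading the client credentials (wallet) from the service console and pointing the JDBC thin driver at it. The sketch below is not from the original post: it assumes a service name of mydb_high, a wallet unzipped to /opt/wallet, and an Oracle JDBC driver (18.3 or later) with its companion security jars on the classpath — adjust these to your own environment.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import oracle.jdbc.pool.OracleDataSource;

public class AtpSmokeTest {
    public static void main(String[] args) throws Exception {
        OracleDataSource ods = new OracleDataSource();
        // TNS alias from the wallet's tnsnames.ora; TNS_ADMIN points at the unzipped wallet.
        ods.setURL("jdbc:oracle:thin:@mydb_high?TNS_ADMIN=/opt/wallet");
        ods.setUser("ADMIN");
        ods.setPassword("<the administrator password chosen above>");

        try (Connection conn = ods.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 'Connected to ATP' FROM dual")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}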


Revisiting the Performance & Scalability of Java Applications that use RDBMSes

Originally published at db360.blogspot.com. A re-edition of my recent blog post. There is an abundant literature on Java performance (books, articles, blogs, websites, and so on); a Google search returns more than 5 millions hits. To name a few, the Effective Java programming language guide, Java Performance the definitive guide, Java performance tuning newsletter and its associated website.  This blog post revisits the known best practices for speeding up and scaling database operations for Java applications then discusses new mechanisms such as database proxies, and the Asynchronous Database Access (ADBA) proposal. Speeding up RDBMS operations in Java apps Optimizing database operations for Java applications includes: speeding up database connectivity, speeding up SQL statements processing, optimizing network traffic, and in-place processing. Speeding up Database Connectivity Connection establishment is the most expensive database operation; the obvious optimization that Java developers have been using for ages is connection pooling which avoid creating connections at runtime (unless you exhaust the pool capacity. Client-side Connection Pools Java connection pools such as the Apache Commons DBCP, C3P0, the Oracle Universal Connection Pool (UCP) and so on, are libraries to be used as part of your stand-alone Java applications or as part of the datasource of Java EE containers e.g.,Tomcat, Weblogic, WebSphere and others. Java EE containers embed their own connection pools but also allow 3rd party pools (e.g., using Using UCP with Tomcat, Using UCP with Weblogic).    Most Java applications use client-side or mid-tier connection pools to support small and medium workloads however, these pools are confined to the JRE/JDK instance (i.e., can’t be shared beyond their boundaries) and unpractical when deploying tens of thousands of mid-tiers or Web servers. Even with a very small pool size on each web tier, the RDBMS server is overwhelmed by thens of thousands of pre-allocated connections that are idle more than 90% of the time. Proxy Connection Pools  Database proxies such as MySQL Router, the Oracle Database Connection Manager in Traffic Director Mode (CMAN-TDM), NGINX and others, are proxy servers that sit between the database clients (i.e., Java apps, Web tiers) and the RDBMS. These allow thousands of mid-tiers to share a common connection pool. See database proxy in the second part of this blog. The Oracle database furnishes in addition, database-side connection pools such as the Shared Servers, and the Database Resident Connection Pool (DRCP); these will not be discussed in this post. Misc. Connection Optimizations Other connection optimization features include: deferring connection health check, and de-prioritization of failed nodes. Deferring Connection Health Check  The ability of a connection pool to defer the health checking of connections for a defined period of time, fastens connection check-out (i.e., getConnection() returns faster).   De-prioritization of Failed Nodes In a multi-instances clustered database environment such as Oracle RAC, this feature assigns a low priority to a failed instance for a defined period of time (iow, avoids attempts to get connections from the failed instance) thereby speeding up connection check-out. 
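As a concrete starting point for the client-side pooling discussed above, here is a minimal UCP datasource configuration. The URL, credentials, and pool sizes are placeholders, and the idle-trust setting is shown only to illustrate the deferred health-check idea — treat it as a sketch and check your UCP version's documentation for the exact knobs available.

import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpExample {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/myservice");   // placeholder URL
        pds.setUser("hr");
        pds.setPassword("hr");

        // Keep the pool small; connections are created up front, not at runtime.
        pds.setInitialPoolSize(5);
        pds.setMinPoolSize(5);
        pds.setMaxPoolSize(20);

        // Example of deferring connection health checks: trust recently used idle
        // connections for 30 seconds instead of re-validating on every check-out.
        pds.setSecondsToTrustIdleConnection(30);

        try (Connection conn = pds.getConnection()) {
            System.out.println("Got a pooled connection: " + conn.getMetaData().getURL());
        }
    }
}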
Speeding up SQL Statements Processing Processing a SQL statement requires several steps including: parsing (at least once), binding variables, executing, fetching resultSets (if a query), and COMMITting or ROLLBACKing the transaction (if a DML i.e., Insert, Update, or Delete).  JDBC furnishes several APIs/knobs for optimizing SQL statements processing including: Prepared Statements, Statements Caching, and ResultSets caching. Disabling Default COMMIT Auto-COMMITTING each DML is the default/implicit transaction mode in JDBC . Unless this mode corresponds to your desire, you should explicitly disable it on the connection object and demarcate your transactions (DML + Queries) with explicit COMMIT or ROLLBACK calls. i.e., conn.setAutoCommit(false); Prepared Statements Ahead of its execution, a SQL statement must be parsed, if not already. Parsing (i.e., hard parsing) is the most expensive operation when processing a SQL statement. The best practice consists in using Prepared Statements which are parsed once then reused many times on subsequent invocations, after setting new values for the bind variables. A security byproduct of Prepared Statements is the prevention of SQL injection. Statements Caching The JDBC driver may be directed to automatically cache SQL statements (PreparedStatements and CallableStatements) on smt.close(). On subsequent invocation of the same statements, the driver directs the RDBMS to use an existing statement (i.e., “use statement #2) without sending the statement string, thereby avoiding soft parsing (lexical analysis, syntactic parsing) and potentially a network roundtrip. Implicit statement caching is enabled either on the connection object or the datasource object (note: the statement cache is an array per physical connection). ResultSets Caching with Change Notification — the Hard Way Caching JDBC result sets avoids re-executing the corresponding SQL query, resulting in dramatic Java applications performance. RDBMSes allow caching ResultSet at the server side however, applications needs roundtrips to the database to retrieve it. This topic is discussed at length in chapter 15th of the Oracle database performance tuning guide. Optimizing further, these result set can be pushed to the drivers (Java, C/C++, PHP, C#, and so on) and consumed by the applications without database roundtrips. What if the ResultSets become stale, out of sync with the actual data in the RDBMS table? RDBMSes furnish mechanisms for maintaining the ResultSets up to date thereby ensuring the consistency of the cached ResultSets. For example, the Oracle database’s Query Change Notifications allows registering a SQL query with the RDBMS and receiving notifications when committed DMLs from other threads render the ResultSets out of sync. Java applications may explicitly implement ResultSet caching with change notification through the following steps: 0) Prerequisites: the server-side ResultSet caching must be enabled and database user schema must be granted the “CHANGE NOTIFICATION” privilege.  e.g., grant change notification to HR; // might need your DBA’s help. 1) Create “a registration” on the connection object // Creating a registration for Query Change Notiication // OracleConnection conn = ods.getConnection(); Properties prop = new Properties(); prop.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true"); prop.setProperty(OracleConnection.DCN_QUERY_CHANGE_NOTIFICATION,"true"); // ... DatabaseChangeRegistration dcr = conn.registerDatabaseChangeNotifictaion(prop); // ...   
2) Associate a query with the registration Statement stmt = conn.createStatement(); // associating the query with the registration ((OracleStatement)stmt).setDatabaseChangeRegistration(dcr); /* * any query that will be executed with the 'stmt' object will be associated with * the registration 'dcr' until 'stmt' is closed or * '((OracleStatement)stmt).setDatabaseChangeRegistration(null);' is executed. */ ... 3) Listen to the notification /* * Attach a listener to the registration. * Note: DCNListener is a custom listener and not a predefined or standard * listener */ DCNListener list = new DCNListener(); dcr.addListener(list); catch(SQLException ex) { /* * if an exception occurs, we need to close the registration in order * to interrupt the thread otherwise it will be hanging around. */ if(conn != null) conn.unregisterDatabaseChangeNotification(dcr); throw ex; } See more details in chapter 26 of the Oracle JDBC Developers guide. ResultSets Caching with Change Notification — the Easier Way You may, preferably, enable client-side ResulSet caching with invalidation in a much easier way, using the following steps (available with the Oracle JDBC driver release 18.3 and up) 1) Set the following parameters in the database configuration file a.k.a. INIT.ORA. CLIENT_RESULT_CACHE_SIZE=100M // example: maximum cache size, in bytes CLIENT_RESULT_CACHE_LAG=1000 // example: maximum delay for refreshing the cache (msec) 2) Set the JDBC connection property oracle.jdbc.enableQueryResultCache to true (the default). 3) add the following hint to the SQL query string “/*+ RESULT_CACHE */” SELECT /*+ RESULT_CACHE */ product_name, unit_price FROM PRODUCTS WHERE unit_price > 100 If changing the Java/JDBC source code to add the SQL hint is not an option, you can instruct the RDBMS to cache the ResultSets of all queries related to a specific table, either at table creation (the default mode) or later (force mode); this is called “Table Annotation”. /* Table Annotation at creation time */ CREATE TABLE products (...) RESULT_CACHE (MODE DEFAULT); /* Table annotation at runtime ALTER TABLE products RESULT_CACHE (MODE FORCE); The Oracle RDBMS furnishes views such as the V$RESULT_CACHE_STATISTICS and a CLIENT_RESULT_CACHE_STATS$ table for monitoring the effectiveness of ResultSet caching. See section 15 in the performance tuning guide for more details on configuring the server-side result set cache Array Fetch Array fetching is a must when retrieving a large number of rows from a ResultSet. The fetch size can be specified on the Statement, or the PreparedStatement, or the CallableStatement, or the ResultSet objects. Example: pstmt.setFetchSize(20); When using the Oracle database, this array size is capped by the RDBMS’s internal buffer known as Session Data Unit (SDU). The SDU buffer is used for transferring data from the tables to the client, over the network. The size of this buffer, in bytes, can be specified in JDBC URL as illustrated hereafter. or at the service level in Net Services configuration files sqlnet.ora and tnsnames.ora. There is a hard limit depending on the RDBMS release: 2MB with DB 12c and up, 64K with DB 11.2, and 32K with DB pre-11.2.  In summary, even if you set the array fetch to a large number, it cannot retrieve more data than the SDU permits, for each roundtrip. Array DML (Update Batch) The JDBC specification allows sending a batch of the same DML operations (i.e., array INSERTs, array UPDATEs, array DELETE) for sequential execution at the server, thereby reducing network round-trips. 
Update Batching consists of explicitly invoking the addBatch method, which adds a statement to an array of operations, then explicitly calling the executeBatch method to send the batch, as in the following example. // Array INSERT PreparedStatement pstmt = conn.prepareStatement("INSERT INTO employees VALUES(?, ?)"); pstmt.setInt(1, 2000); pstmt.setString(2, "Milo Mumford"); pstmt.addBatch(); pstmt.setInt(1, 3000); pstmt.setString(2, "Sulu Simpson"); pstmt.addBatch(); int[] updateCounts = pstmt.executeBatch(); ... Optimizing Network Traffic Here are two mechanisms that will help optimize the network traffic between your Java code and the RDBMS: network compression and session multiplexing. Network Data Compression The ability to compress data transmitted between the Java applications and the RDBMS over LAN or WAN reduces the volume of data, the transfer time, and the number of roundtrips. // Enabling Network Compression in Java prop.setProperty("oracle.net.networkCompression", "on"); // Optional configuration for setting the client compression threshold. prop.setProperty("oracle.net.networkCompressionThreshold", "1024"); ds.setConnectionProperties(prop); ds.setURL(url); Connection conn = ds.getConnection(); // ... Session Multiplexing The Oracle database Connection Manager, a.k.a. CMAN, furnishes the ability to funnel multiple database connections over a single network connection, thereby saving OS resources. See more details in the Net Services Administrator's Guide. In-Place Processing As seen earlier, SQL statement processing involves a number of roundtrips between the database client (i.e., the Java mid-tier/web server) and the RDBMS. If you move the Java code close to, or into, the RDBMS session/process, you cut the network traffic, which constitutes a large part of the latency. Okay, stored procedures are old-fashioned, so seventies, but modern data processing frameworks such as Hadoop or Spark collocate the processing and the data for low latency. If your goal is efficiency, you ought to consider using Java stored procedures, here and there, for data-bound modules. I discussed the pros and cons of stored procedures in chapter 1 of my book. I’d add that in a modern microservices-based architecture, REST-wrapped stored procedures are a good design choice for data-bound services. All RDBMSes furnish stored procedures in various languages, including proprietary procedural languages, Java, JavaScript, PHP, Perl, Python, and TCL. The Oracle database furnishes Java and PL/SQL stored procedures. Java in the database is one of the best unsung Oracle database gems; see some code samples on GitHub. Scaling Out Java Workloads In the second part of this blog post, I will discuss the various mechanisms for scaling out Java workloads, including sharded and multitenant databases, database proxies, and the asynchronous Java database access API proposal. Horizontal Scaling of Java Applications with Sharded Databases Sharded databases — horizontal partitioning of tables across several databases — have been around for a while. Java applications that use sharded databases must: (i) define which fields to use as the sharding key (and super sharding key); (ii) set the values, build the key, then request a connection to the datasource. Java SE 9 furnishes the standard APIs for building the sharding and super-sharding keys. DataSource ds = new MyDataSource(); // ShardingKey shardingKey = ds.createShardingKeyBuilder() .subkey("abc", JDBCType.VARCHAR) .subkey(94002, JDBCType.INTEGER) .build(); // ...
Connection con = ds.createConnectionBuilder() .shardingKey(shardingKey) .build(); Without further optimization, all shard-aware connection requests go to a central mechanism which maintains the map or topology of the shard keys, thereby incurring one additional hop per request. The Oracle Universal Connection Pool (UCP) has been enhanced to transparently collect all the keys that map to a specific shard. Once UCP gets the key ranges, it directs connection requests to the appropriate shard, based on the shard key. Scaling Multi-Tenant Java Applications Fully multi-tenant Java applications must use a multi-tenant RDBMS where a group of tenants, or each tenant, has its own database (its own pluggable database, or PDB, in Oracle’s parlance). With tens of thousands of tenants using (tens of) thousands of databases (or PDBs), a naive approach would allocate a pool per database; we have witnessed naive architectures with a connection pool per tenant. The Oracle UCP has been enhanced to use a single pool for all (tens of) thousands of databases. Upon a connection request to a specific database, if there is no free/available connection already attached to that database, UCP transparently repurposes an idle connection in the pool that was attached to another database, re-attaching it to this one, thereby allowing a small set of pooled connections to service all tenants. See the UCP documentation for more details on using one datasource per tenant or a single datasource for all tenants. Database Proxy Proxies are man-in-the-middle software running between the database and its clients, e.g., Java applications. There are several proxy offerings on the market; to name a few: MySQL Router, the Oracle Database Connection Manager in Traffic Director Mode (CMAN-TDM), ProxySQL, and so on. CMAN-TDM is new in Oracle Database 18c; it is an extension of the existing Oracle Connection Manager, a.k.a. CMAN, and furnishes the following new capabilities: it is fully transparent to applications; it routes database traffic to the right instance (planned); it hides planned and unplanned database outages to support zero application downtime; it optimizes database session usage and application performance; and it enhances database security. CMAN-TDM is client agnostic; in other words, it supports all database client applications including Java, C, C++, .NET, Node.js, Python, Ruby, and R. Java applications connect to CMAN-TDM which, in its turn, connects to the database using the latest driver and libraries, then transparently furnishes the quality of service that the application would get only if it were using the latest driver and APIs. See more details on the CMAN landing page and in the Net Services documentation linked from that page. The Asynchronous Database Access API (ADBA) The existing JDBC API leads to blocked threads, thread scheduling, and contention; it is not suitable for reactive applications or high-throughput, large-scale deployments. There are third-party asynchronous Java database access libraries, but the Java community needs a standard API where user threads submit database operations and return. The new API proposal is based on the java.util.concurrent.CompletionStage interface; it is available for download from the OpenJDK sandbox @ http://tinyurl.com/java-async-db. The API implementation takes care of executing the operations and completing the CompletableFutures. You can get a sense of the ADBA API through the latest presentation and examples.
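For readers who have not used UCP before, here is a minimal datasource sketch; it assumes ucp.jar and the Oracle JDBC driver are on the classpath, and the connect string, credentials, and pool sizes are illustrative only. The point is that one shared pool serves the whole application rather than one pool per tenant or per PDB.

import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpPoolSketch {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/pdb1"); // hypothetical connect string
        pds.setUser("app_user");        // hypothetical credentials
        pds.setPassword("app_password");

        // One shared pool, sized for the application as a whole
        pds.setInitialPoolSize(5);
        pds.setMinPoolSize(5);
        pds.setMaxPoolSize(50);

        try (Connection conn = pds.getConnection()) {
            System.out.println("Connected to " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}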
ADBA over JDBC (AoJ) In order to help the community get a feel for ADBA, a trial/functional version of it (with no asynchronous behavior) that runs over JDBC — which we are calling AoJ, for ADBA over JDBC — is available @ https://github.com/oracle/oracle-db-examples/tree/master/java/AoJ. I encourage the reader to play with the AoJ examples. With the announcement of Project Loom, which will bring fibers and continuations to the JVM, we will once again revisit the performance and scalability of Java applications that use RDBMSes.
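To make the motivation for ADBA concrete, the following sketch approximates the non-blocking style with plain CompletableFuture over ordinary (blocking) JDBC. To be clear, this is not the ADBA API itself, just an illustration of handing database work to a dedicated executor so the caller's thread returns immediately; the table and column names are hypothetical.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class AsyncStyleSketch {

    private final DataSource ds;
    private final ExecutorService dbExecutor = Executors.newFixedThreadPool(8);

    public AsyncStyleSketch(DataSource ds) {
        this.ds = ds;
    }

    // The caller gets a CompletableFuture back immediately; the blocking JDBC
    // work is confined to the dedicated executor.
    public CompletableFuture<Integer> countEmployeesAsync(int deptId) {
        return CompletableFuture.supplyAsync(() -> {
            String sql = "SELECT COUNT(*) FROM employees WHERE department_id = ?";
            try (Connection conn = ds.getConnection();
                 PreparedStatement pstmt = conn.prepareStatement(sql)) {
                pstmt.setInt(1, deptId);
                try (ResultSet rs = pstmt.executeQuery()) {
                    rs.next();
                    return rs.getInt(1);
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }, dbExecutor);
    }
}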

Originally published at db360.blogspot.com. A re-edition of my recent blog post. There is an abundant literature on Java performance (books, articles, blogs, websites, and so on); a Google search...

Developers

Podcast: Developer Evolution: What's rockin’ roles in IT?

The good news is that the US Bureau of Labor Statistics predicts 24% growth in software developer jobs through 2026. That’s well above average. The outlook for Database administrators certainly isn’t bleak, but with projected job growth of 11% to 2026, that’s less than half the growth projected for developers. Job growth for System administrators, at 6% through 2026, is considered average by the BLS. So while the news is positive all around, developers certainly have an advantage. Each of these roles certainly has separate and distinct responsibilities. But why is the outlook so much better for developers, and what does this say about what’s happening in the IT ecosystem? "More than ever," says Oracle Developer Champion Rolando Carrasco, "institutions, organizations, and governments are keen to generate a new crop of developers that can help them to create something new." In today's business climate competition is tough, and a high premium is placed on innovation. "But developers have a lot of tools, a lot of abilities within reach, and the opportunity to make something that can make a competitive difference." But the role of the developer is morphing into something new, according to Oracle ACE Director Martin Giffy D'Souza. "In the next couple of years we're also going to see that the typical developer is not going to be the traditional developer that went to school, or the script kiddies that just got into the business. We're going to see what is called the citizen developer. We're going to see a lot more people transition to that simply because it adds value to their job. Those people are starting to hit the limits of writing VBA macros in Excel and they want to write custom apps. I think that's what we're going to see more and more of, because we already know there's a developer job shortage." But why is the job growth for developers outpacing that for DBAs and SysAdmins? "If you take it at a very high level, devs produce things," Martin says. "They produce value. They produce products. DBAs and IT people are maintainers. They’re both important, but the more products and solutions we can create," the more value to the business. Oracle ACE Director Mark Rittman has spent the last couple of years working as a product manager in a start-up, building a tech platform. "I never saw a DBA there," he admits. "It was at the point that if I were to try to explain what a DBA was to people there, all of whom are uniformly half my age, they wouldn't know what I was talking about. That's because the platforms people use these days, within the Oracle ecosystem or Google or Amazon or whatever, it's all very much cloud, and it's all very much NoOps, and it's very much the things that we used to spend ages worrying about." This frees developers to do what they do best. "There are far fewer people doing DBA work and SysAdmin work," Mark says. "That’s all now in the cloud. And that also means that developers can also develop now. I remember, as a BI developer working on projects, it was surprising how much of my time was spent just getting the system working in the first place, installing things, configuring things, and so on. Probably 75% of every project was just getting the thing to actually work." Where some roles may vanish altogether, others will transform. DBAs have become data engineers or infrastructure engineers, according to Mark. 
"So there are engineers around and there are developers around," he observes, "but I think administrator is a role that, unless you work for one of the big cloud companies in one of those big data centers, is largely kind of managed away now." Phil Wilkins, an Oracle ACE, has witnessed the changes. DBAs in particular, as well as network people focused on infrastructure, have been dramatically affected by cloud computing, and the ground is still shaking. "With the rise and growth in cloud adoption these days, you're going to see the low level, hard core technical skills that the DBAs used to bring being concentrated into the cloud providers, where you're taking a database as a service. They're optimizing the underlying infrastructure, making sure the database is running. But I'm just chucking data at it, so I don't care about whether the storage is running efficiently or not. The other thing is that although developers now get a get more freedom, and we've got NoSQL and things like that, we're getting more and more computing power, and it's accelerating at such a rate now that, where 10 years ago we used to have to really worry about the tuning and making sure the database was performant, we can now do a lot of that computing on an iPhone. So why are we worrying when we've got huge amounts of cloud and CPU to the bucketload? These comments represent just a fraction of the conversation captured in this latest Oracle Developer Community Podcast, in which the panelists dive deep into the forces that are shaping and re-shaping roles, and discuss their own concerns about the trends and technologies that are driving that evolution. Listen! The Panelists Rolando Carrasco Oracle Developer Champion Oracle ACE Co-owner, Principal SOA Architect, S&P Solutions Martin Giffy D'Souza Oracle ACE Director Director of Innovation, Insum Solutions   Mark Rittman Oracle ACE Director Chief Executive Officer, MJR Analytics   Phil Wilkins Oracle ACE Senior Consultant, Capgemini 5 Related Oracle Code One Sessions The Future of Serverless is Now: Ci/CD for the Oracle Fn Project, by Rolando Carrasco and Leonardo Gonzalez Cruz [DEV5325] Other Related Content Podcast: Are Microservices and APIs Becoming SOA 2.0? Vibrant and Growing: The Current State of API Management Video: 2 Minute Integration with Oracle Integration Cloud Service It's Always Time to Change Coming Soon The next program, coming on Sept 5, will feature a discussion of "DevOps to NoOps," featuring panelists Baruch Sadogursky, Davide Fiorentino, Bert Jan Schrijver, and others TBA. Stay tuned! Subscribe Never miss an episode! The Oracle Developer Community Podcast is available via: iTunes Podbean Feedburner

The good news is that the US Bureau of Labor Statistics predicts 24% growth in software developer jobs through 2026. That’s well above average. The outlook for Database administrators certainly isn’t...

DevOps

What's New in Oracle Developer Cloud Service - August 2018

Over the weekend we updated Oracle Developer Cloud Service - your cloud-based DevOps and Agile platform - with a new release (18.3.3) adding some key new features that will improve the way you develop and release software on the Oracle Cloud. Here is a quick rundown of the key new capabilities added this month. Environments A new top level section in Developer Cloud Service now allows you to define "Environments" - a collection of cloud services that you bundle together under one name. Once you have an environment defined, you'll be able to see the status of your environment on the home page of your project. You can, for example, define development, test, and production environments - and see the status of each one at a glance. This is the first step in a set of future DevCS features that will help you manage software artifacts across environments more easily. Project Templates When you create a new project in DevCS you can base it on a template. Up until this release you were limited to templates created by Oracle; now you can define your own templates for your company. Templates can include default artifacts such as wiki pages, default Git repositories, and even build and deployment steps. This is very helpful for companies that aim to standardize development across development teams, as well as for teams that have repeating patterns of development. Wiki Enhancements The wiki in DevCS is a very valuable mechanism for your team to share information, and we just added a bunch of enhancements that will make collaboration in your team even better. You can now watch specific wiki pages or sections, which will notify you whenever someone updates those pages. We also added support for commenting on wiki pages - helping you conduct virtual discussions on their content. More These are just some of the new features in Developer Cloud Service. All of these features are part of the free functionality that Developer Cloud Service provides to Oracle Cloud customers. Take them for a spin and let us know what you think. For information on additional new features, check out the What's New in Developer Cloud Service documentation. Got technical questions? Ask them on our Cloud Customer Connect community page.  

Over the weekend we updated Oracle Developer Cloud Service - your cloud based DevOps and Agile platform - with a new release (18.3.3) adding some key new features that will improve the way you develop...

Auto-updatable, self-contained CLI with Java 11

.cb11splash{display:none;} (Originally published on Medium) Introduction Over the course of the last 11 months, we have seen two major releases of Java — Java 9 and Java 10. Come September, we will get yet another release in the form of Java 11, all thanks to the new 6 month release train. Each new release introduces exciting features to assist the modern Java developer. Let’s take some of these features for a spin and build an auto-updatable, self-contained command line interface. The minimum viable feature-set for our CLI is defined as follows: Display the current bitcoin price index by calling the free coin desk API Check for new updates and if available, auto update the CLI Ship the CLI with a custom Java runtime image to make it self-contained Prerequisites To follow along, you will need a copy of JDK 11 early-access build. You will also need the latest version (4.9 at time of writing) of gradle. Of course, you can use your preferred way of building Java applications. Though not required, familiarity with JPMS and JLink can be helpful since we are going to use the module system to build a custom runtime image. Off we go We begin by creating a class that provides the latest bitcoin price index. Internally, it reads a configuration file to get the URL of the coin desk REST API and builds an http client to retrieve the latest price. This class makes use of the new fluent HTTP client classes that are part of “java.net.http” module. var bpiRequest = HttpRequest.newBuilder() .uri(new URI(config.getProperty("bpiURL"))) .GET() .build(); var bpiApiClient = HttpClient.newHttpClient(); bpiApiClient .sendAsync(bpiRequest, HttpResponse.BodyHandlers.ofString()) .thenApply(response -> toJson(response)) .thenApply(bpiJson -> bpiJson.getJsonObject("usd").getString("rate")); Per Java standards, this code is actually very concise. We used the new fluent builders to create a GET request, call the API, convert the response into JSON, and pull the current bitcoin price in USD currency. In order to build a modular jar and set us up to use “jlink”, we need to add a “module-info.java” file to specify the CLI’s dependencies on other modules. module ud.bpi.cli { requires java.net.http; requires org.glassfish.java.json; } From the code snippet, we observe that our CLI module requires the http module shipped in Java 11 and an external JSON library. Now, let’s turn our attention to implement an auto-updater class. This class should provide a couple of methods. One method to talk to a central repository and check for the availability of newer versions of the CLI and another method to download the latest version. The following snippet shows how easy it is to use the new HTTP client interfaces to download remote files. CompletableFuture update(String downloadToFile) { try { HttpRequest request = HttpRequest.newBuilder() .uri(new URI("http://localhost:8080/2.zip")) .GET() .build(); return HttpClient.newHttpClient() .sendAsync(request, HttpResponse.BodyHandlers .ofFile(Paths.get(downloadToFile))) .thenApply(response -> { unzip(response.body()); return true; }); } catch (URISyntaxException ex) { return CompletableFuture.failedFuture(ex); } } The new predefined HTTP body handlers in Java 11 can convert a response body into common high-level Java objects. We used the HttpResponse.BodyHandlers.ofFile() method to download a zip file that contains the latest version of our CLI. Let’s put these classes together by using a launcher class. It provides an entry point to our CLI and implements the application flow. 
Right when the application starts, this class calls its launch() method that will check for new updates. void launch() { var autoUpdater = new AutoUpdater(); try { if (autoUpdater.check().get()) { System.exit(autoUpdater.update().get() ? 100 : -1); } } catch (InterruptedException | ExecutionException ex) { throw new RuntimeException(ex); } } As you can see, if a new version of the CLI is available, we download the new version and exit the JVM by passing in a custom exit code 100. A simple wrapper script will check for this exit code and rerun the CLI. #!/bin/sh ... start EXIT_STATUS=$? if [ ${EXIT_STATUS} -eq 100 ]; then start fi And finally, we will use “jlink” to create a runtime image that includes all the necessary pieces to execute our CLI. jlink is a new command line tool provided by Java that will look at the options passed to it to assemble and optimize a set of modules and their dependencies into a custom runtime image. In the process, it builds a custom JRE — thereby making our CLI self-contained. jlink --module-path build/libs/:${JAVA_HOME}/jmods \ --add-modules ud.bpi.cli,org.glassfish.java.json \ --launcher bpi=ud.bpi.cli/ud.bpi.cli.Launcher \ --output images Let’s look at the options that we passed to jlink: “ module-path” tells jlink to look into the specified folders that contain java modules “ add-modules” tells jlink which user-defined modules are to be included in the custom image “launcher” is used to specify the name of the script that will be used to start our CLI and the full path to the class that contains the main method of the application “output” is used to specify the folder name that holds the newly created self-contained custom image When we run our first version of the CLI and there are no updates available, the CLI prints something like this: Say we release a new version (2) of the CLI and push it to the central repo. Now, when you rerun the CLI, you will see something like this: Voila! The application sees that a new version is available and auto-updates itself. It then restarts the CLI. As you can see, the new version adds an up/down arrow indicator to let the user know how well the bitcoin price index is doing. Head over to GitHub to grab the source code and experiment with it.
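One piece the snippets above reference but do not show is the toJson(response) helper. Here is a minimal sketch of how it could look using the javax.json API from the Glassfish JSON module the CLI already requires; the "usd"/"rate" field names simply mirror the pipeline shown earlier, so treat them as assumptions about the response shape.

import java.io.StringReader;
import java.net.http.HttpResponse;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class JsonHelper {

    // Parses the raw response body into a JsonObject
    static JsonObject toJson(HttpResponse<String> response) {
        try (JsonReader reader = Json.createReader(new StringReader(response.body()))) {
            return reader.readObject();
        }
    }

    // Pulls the USD rate out of the parsed document, mirroring the thenApply(...) step above
    static String usdRate(JsonObject bpiJson) {
        return bpiJson.getJsonObject("usd").getString("rate");
    }
}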

(Originally published on Medium) Introduction Over the course of the last 11 months, we have seen two major releases of Java — Java 9 and Java 10. Come September, we will get yet another release in the...

Running Spring Tool Suite and other GUI applications from a Docker container

Originally published at javaoraclesoa.blogspot.com Running an application within a Docker container helps in isolating the application from the host OS. Running GUI applications like for example an IDE from a Docker container, can be challenging. I’ll explain several of the issues you might encounter and how to solve them. For this I will use Spring Tool Suite as an example. The code (Dockerfile and docker-compose.yml) can also be found here. Due to (several) security concerns, this is not recommended in a production environment. Running a GUI from a Docker container In order to run a GUI application from a Docker container and display its GUI on the host OS, several steps are needed; Which display to use? The container needs to be aware of the display to use. In order to make the display available, you can pass the DISPLAY environment variable to the container. docker-compose describes the environment/volume mappings/port mappings and other things of docker containers. This makes it easier to run containers in a quick and reproducible way and avoids long command lines. docker-compose You can do this by providing it in a docker-compose.yml file. See for example below. The environment indicates the host DISPLAY variable is passed as DISPLAY variable to the container. Docker In a Docker command (when not using docker-compose), you would do this with the -e flag or with — env. For example; docker run — env DISPLAY=$DISPLAY containername Allow access to the display The Docker container needs to be allowed to present its screen on the Docker host. This can be done by executing the following command: xhost local:root After execution, during the session, root is allowed to use the current users display. Since the Docker daemon runs as root, Docker containers (in general!) now can use the current users display. If you want to persist this, you should add it to a start-up script. Sharing the X socket The last thing to do is sharing the X socket (don’t ask me details but this is required…). This can be done by defining a volume mapping in your Docker command line or docker-compose.yml file. For Ubuntu this looks like you can see in the image below. Spring Tool Suite from a Docker container In order to give a complete working example, I’ll show how to run Spring Tool Suite from a Docker container. In this example I’m using the Docker host JVM instead of installing a JVM inside the container. If you want to have the JVM also inside the container (instead of using the host JVM), look at the following and add that to the Dockerfile. As a base image I’m using an official Ubuntu image. 
I’ve used the following Dockerfile: FROM ubuntu:18.04 MAINTAINER Maarten Smeets <maarten.smeets@amis.nl> ARG uid LABEL nl.amis.smeetsm.ide.name="Spring Tool Suite" nl.amis.smeetsm.ide.version="3.9.5" ADD https://download.springsource.com/release/STS/3.9.5.RELEASE/dist/e4.8/spring-tool-suite-3.9.5.RELEASE-e4.8.0-linux-gtk-x86_64.tar.gz /tmp/ide.tar.gz RUN adduser --uid ${uid} --disabled-password --gecos '' develop RUN mkdir -p /opt/ide && \ tar zxvf /tmp/ide.tar.gz --strip-components=1 -C /opt/ide && \ ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre && \ chown -R develop:develop /opt/ide && \ mkdir /home/develop/ws && \ chown develop:develop /home/develop/ws && \ mkdir /home/develop/.m2 && \ chown develop:develop /home/develop/.m2 && \ rm /tmp/ide.tar.gz && \ apt-get update && \ apt-get install -y libxslt1.1 libswt-gtk-3-jni libswt-gtk-3-java && \ apt-get autoremove -y && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* && \ rm -rf /tmp/* USER develop:develop WORKDIR /home/develop ENTRYPOINT /opt/ide/sts-3.9.5.RELEASE/STS -data /home/develop/ws The specified packages are required to be able to run STS inside the container and create the GUI to display on the host. I’ve used the following docker-compose.yml file: version: '3' services: sts: build: context: . dockerfile: Dockerfile args: uid: ${UID} container_name: "sts" volumes: - /tmp/.X11-unix:/tmp/.X11-unix - /home/develop/ws:/home/develop/ws - /home/develop/.m2:/home/develop/.m2 - /usr/lib/jvm/java-10-oracle:/usr/lib/jvm/java-10-oracle - /etc/java-10-oracle:/etc/java-10-oracle environment: - DISPLAY user: develop ports: - "8080:8080" Notice this docker-compose file has some dependencies on the host OS. It expects a JDK 10 to be installed in /usr/lib/jvm/java-10-oracle with configuration in /etc/java-10-oracle. Also it expects /home/develop/ws and /home/develop/.m2 to be present on the host to be mapped to the container. The .X11-unix mapping was already mentioned as needed to allow a GUI screen to be displayed. There are also some other things which are important to notice in this file. User id First, the way a non-privileged user is created inside the container. This user is created with a user id (uid) which is supplied as a parameter. Why did I do that? Files in mapped volumes which are created by the container user will be created with the uid which the user inside the container has. This will cause issues if the user inside the container has a different uid than the user outside of the container. Suppose I run the container under a user develop. This user on the host has a uid of 1002. Inside the container there is also a user develop, with a uid of 1000. Files on a mapped volume are created with uid 1000, the uid of the user in the container. On the host, however, uid 1000 is a different user. These files created by the container cannot be accessed by the develop user on the host (with uid 1002). In order to avoid this, I’m creating a develop user inside the container with the same uid as the user outside of the container (the user in the docker group which gave the command to start the container). Workspace folder and Maven repository When working with Docker containers, it is a common practice to avoid storing state inside the container. State can be various things. I consider the STS application workspace folder and the Maven repository among them. This is why I’ve created the folders inside the container and mapped them in the docker-compose file to the host. 
They will use folders with the same name (/home/develop/.m2 and /home/develop/ws) on the host. Java My Docker container with only Spring Tool Suite was big enough already without having a more than 300Mb JVM inside of it (on Linux Java 10 is almost double the size of Java 8). I’m using the host JVM instead. I installed the host JVM on my Ubuntu development VM as described here. In order to use the host JVM inside the Docker container, I needed to do 2 things: Map 2 folders to the container: And map the JVM path to the JRE folder onder STS: ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre. Seeing it work First allow access to the display: xhost local:root Next make available the variable UID: export UID=$UID Then build: docker-compose build Building sts Step 1/10 : FROM ubuntu:18.04 — -> 735f80812f90 Step 2/10 : MAINTAINER Maarten Smeets <maarten.smeets@amis.nl> — -> Using cache — -> 69177270763e Step 3/10 : ARG uid — -> Using cache — -> 85c9899e5210 Step 4/10 : LABEL nl.amis.smeetsm.ide.name=”Spring Tool Suite” nl.amis.smeetsm.ide.version=”3.9.5" — -> Using cache — -> 82f56ab07a28 Step 5/10 : ADD https://download.springsource.com/release/STS/3.9.5.RELEASE/dist/e4.8/spring-tool-suite-3.9.5.RELEASE-e4.8.0-linux-gtk-x86_64.tar.gz /tmp/ide.tar.gz — -> Using cache — -> 61ab67d82b0e Step 6/10 : RUN adduser — uid ${uid} — disabled-password — gecos ‘’ develop — -> Using cache — -> 679f934d3ccd Step 7/10 : RUN mkdir -p /opt/ide && tar zxvf /tmp/ide.tar.gz — strip-components=1 -C /opt/ide && ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre && chown -R develop:develop /opt/ide && mkdir /home/develop/ws && chown develop:develop /home/develop/ws && rm /tmp/ide.tar.gz && apt-get update && apt-get install -y libxslt1.1 libswt-gtk-3-jni libswt-gtk-3-java && apt-get autoremove -y && apt-get clean && rm -rf /var/lib/apt/lists/* && rm -rf /tmp/* — -> Using cache — -> 5e486a4d6dd0 Step 8/10 : USER develop:develop — -> Using cache — -> c3c2b332d932 Step 9/10 : WORKDIR /home/develop — -> Using cache — -> d8e45440ce31 Step 10/10 : ENTRYPOINT /opt/ide/sts-3.9.5.RELEASE/STS -data /home/develop/ws — -> Using cache — -> 2d95751237d7 Successfully built 2d95751237d7 Successfully tagged t_sts:latest Next run: docker-compose up When you run a Spring Boot application on port 8080 inside the container, you can access it on the host on port 8080 with for example Firefox.

Originally published at javaoraclesoa.blogspot.com Running an application within a Docker container helps in isolating the application from the host OS. Running GUI applications like for example an IDE...

Text Classification with Deep Neural Network in TensorFlow — Simple Explanation

.cb11splash{display:none;} (Originally published on andrejusb.blogspot.com) Text classification implementation with TensorFlow can be simple. One of the areas where text classification can be applied — chatbot text processing and intent resolution. I will describe step by step in this post, how to build TensorFlow model for text classification and how classification is done. Please refer to my previous post related to similar topic — Contextual Chatbot with TensorFlow, Node.js and Oracle JET — Steps How to Install and Get It Working. I would recommend to go through this great post about chatbot implementation — Contextual Chatbots with Tensorflow. Complete source code is available in GitHub repo (refer to the steps described in the blog referenced above). Text classification implementation: Step 1: Preparing Data Tokenise patterns into array of words Lower case and stem all words. Example: Pharmacy = pharm. Attempt to represent related words Create list of classes — intents Create list of documents — combination between list of patterns and list of intents Python implementation: Step 2: Preparing TensorFlow Input [X: [0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, …N], Y: [0, 0, 1, 0, 0, 0, …M]] [X: [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, …N], Y: [0, 0, 0, 1, 0, 0, …M]] Array representing pattern with 0/1. N = vocabulary size. 1 when word position in vocabulary is matching word from pattern Array representing intent with 0/1. M = number of intents. 1 when intent position in list of intents/classes is matching current intent Python implementation: Step 3: Training Neural Network Use tflearn — deep learning library featuring a higher-level API for TensorFlow Define X input shape — equal to word vocabulary size Define two layers with 8 hidden neurones — optimal for text classification task (based on experiments) Define Y input shape — equal to number of intents Apply regression to find the best equation parameters Define Deep Neural Network model (DNN) Run model.fit to construct classification model. Provide X/Y inputs, number of epochs and batch size Per each epoch, multiple operations are executed to find optimal model parameters to classify future input converted to array of 0/1 Batch size: Smaller batch size requires less memory. Especially important for datasets with large vocabulary Typically networks train faster with smaller batches. Weights and network parameters are updated after each propagation The smaller the batch the less accurate estimate of the gradient (function which describes the data) could be Python implementation: Step 4: Initial Model Testing Tokenise input sentence — split it into array of words Create bag of words (array with 0/1) for the input sentence — array equal to the size of vocabulary, with 1 for each word found in input sentence Run model.predict with given bag of words array, this will return probability for each intent Python implementation: Step 5: Reuse Trained Model For better reusability, it is recommended to create separate TensorFlow notebook, to handle classification requests We can reuse previously created DNN model, by loading it with TensorFlow pickle Python implementation: Step 6: Text Classification Define REST interface, so that function will be accessible outside TensorFlow Convert incoming sentence into bag of words array and run model.predict Consider results with probability higher than 0.25 to filter noise Return multiple identified intents (if any), together with assigned probability Python implementation:

(Originally published on andrejusb.blogspot.com) Text classification implementation with TensorFlow can be simple. One of the areas where text classification can be applied — chatbot text processing...

Oracle Load Balancer Classic configuration with Terraform

(Originally published on Medium) This article provides an introduction to using the Load Balancer resources to provision and configure an Oracle Cloud Infrastructure Load Balancer Classic instance using Terraform When using the Load Balancer Classic resources with the opc Terraform Provider the  lbaas_endpoint  attribute must be set in the provider configuration. provider "opc" { version = "~> 1.2" user = "${var.user}" password = "${var.password}" identity_domain = "${var.compute_service_id}" endpoint = "${var.compute_endpoint}" lbaas_endpoint = "https://lbaas-1111111.balancer.oraclecloud.com" } First we create the main Load Balancer instance resource. The Server Pool, Listener and Policy resources will be created as child resources associated to this instance. resource "opc_lbaas_load_balancer" "lb1" { name = "examplelb1" region = "uscom-central-1" description = "My Example Load Balancer" scheme = "INTERNET_FACING" permitted_methods = ["GET", "HEAD", "POST"] ip_network = "/Compute-${var.domain}/${var.user}/ipnet1" } To define the set of servers the load balancer will be directing traffic to we create a Server Pool, sometimes referred to as an origin server pool. Each server is defined by the combination of the target IP address, or hostname, and port. For the brevity of this example we’ll assume we already have a couple instances on an existing IP Network with a web service running on port  8080  resource "opc_lbaas_server_pool" "serverpool1" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "serverpool1" servers = ["192.168.1.2:8080", "192.168.1.3:8080"] vnic_set = "/Compute-${var.domain}/${var.user}/vnicset1" } The Listener resource defines what incoming traffic the Load Balancer will direct to a specific server pool. Multiple Server Pools and Listeners can be defined for a single Load Balancer instance. For now we’ll assume all the traffic is HTTP, both to the load balancer and between the load balancer and the server pool. We’ll look at securing traffic with HTTPS later. In this example the load balancer is managing inbound requests for a site  http://mywebapp.example.com  and directing them to the server pool we defined above. resource "opc_lbaas_listener" "listener1" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "http-listener" balancer_protocol = "HTTP" port = 80 virtual_hosts = ["mywebapp.example.com"] server_protocol = "HTTP" server_pool = "${opc_lbaas_server_pool.serverpool1.uri}" policies = [ "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}", ] } Policies are used to define how the Listener processes the incoming traffic. In the Listener definition we are referencing a Load Balancing Mechanism Policy to set how the load balancer allocates the traffic across the available servers in the server pool. Additional policy type could also be defined to control session affinity of resource "opc_lbaas_policy" "load_balancing_mechanism_policy" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "roundrobin" load_balancing_mechanism_policy { load_balancing_mechanism = "round_robin" } } With that, our first basic Load Balancer configuration is complete. Well almost. The last step is to configure the DNS CNAME record to point the source domain name (e.g. mywebapp.example.com ) to the canonical host name of load balancer instance. The exact steps to do this will be dependent on your DNS provider. To get the  canonical_host_name add the following output. 
output "canonical_host_name" { value = "${opc_lbaas_load_balancer.lb1.canonical_host_name}" } Helpful Hint: if you are just creating the load balancer for testing and you don’t have access to a DNS name you can redirect, a workaround is to set the  virtual host  in the listener configuration to the load balancers canonical host name, you can then use the canonical host name directly for the inbound service URL, e.g. resource "opc_lbaas_listener" "listener1" { ... virtual_hosts = [ "${opc_lbaas_load_balancer.lb1.canonical_host_name}" ] ... } Configuring the Load Balancer for HTTPS There are two separate aspects to configuring the Load Balancer for HTTPS traffic, the first is to enable inbound HTTPS requests to the Load Balancer, often referred to as SSL or TLS termination or offloading. The second is the use of HTTPS for traffic between the Load Balancer and the servers in the origin server pool. HTTPS SSL/TLS Termination To configure the Load Balancer listener to accept inbound HTTPS requests for encrypted traffic between the client and the Load Balancer, create a Server Certificate providing the PEM encoded certificate and private key, and the concatenated set of PEM encoded certificates for the CA certification chain. resource "opc_lbaas_certificate" "cert1" { name = "server-cert" type = "SERVER" private_key = "${var.private_key_pem}" certificate_body = "${var.cert_pem}" certificate_chain = "${var.ca_cert_pem}" } Now update the existing, or create a new listener for HTTPS resource "opc_lbaas_listener" "listener2" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "https-listener" balancer_protocol = "HTTPS" port = 443 certificates = ["${opc_lbaas_certificate.cert1.uri}"] virtual_hosts = ["mywebapp.example.com"] server_protocol = "HTTP" server_pool = "${opc_lbaas_server_pool.serverpool1.uri}" policies = [ "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}", ] } Note that the server pool protocol is still HTTP, in this configuration traffic is only encrypted between the client and the load balancer. HTTP to HTTPS redirect A common pattern required for many web applications is to ensure that any initial incoming requests over HTTP are redirected to HTTPS for secure site communication. To do this we can we can update the original HTTP listeners we created above with a new redirect policy resource "opc_lbaas_policy" "redirect_policy" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "example_redirect_policy" redirect_policy { redirect_uri = "https://${var.dns_name}" response_code = 301 } } resource "opc_lbaas_listener" "listener1" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "http-listener" balancer_protocol = "HTTP" port = 80 virtual_hosts = ["mywebapp.example.com"] server_protocol = "HTTP" server_pool = "${opc_lbaas_server_pool.serverpool1.uri}" policies = [ "${opc_lbaas_policy.redirect_policy.uri}", ] } HTTPS between Load Balancer and Server Pool HTTPS between the Load Balancer and Server Pool should be used if the server pool is accessed over the Public Internet, and can also be used for extra security when accessing servers within the Oracle Cloud Infrastructure over the private IP Network. This configuration assumes the backend servers are already configured to server their content over HTTPS. To configure the Load Balancer to communicate securely with the backend servers create a Trusted Certificate, providing the PEM encoded Certificate and CA authority certificate chain for the backend servers. 
resource "opc_lbaas_certificate" "cert2" { name = "trusted-cert" type = "TRUSTED" certificate_body = "${var.cert_pem}" certificate_chain = "${var.ca_cert_pem}" } Next create a Trusted Certificate Policy referencing the Trusted Certificate resource "opc_lbaas_policy" "trusted_certificate_policy" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "example_trusted_certificate_policy" trusted_certificate_policy { trusted_certificate = "${opc_lbaas_certificate.cert2.uri}" } } And finally update the listeners server pool configuration to HTTPS, adding the trusted certificate policy resource "opc_lbaas_listener" "listener2" { load_balancer = "${opc_lbaas_load_balancer.lb1.id}" name = "https-listener" balancer_protocol = "HTTPS" port = 443 certificates = ["${opc_lbaas_certificate.cert1.uri}"] virtual_hosts = ["mywebapp.example.com"] server_protocol = "HTTPS" server_pool = "${opc_lbaas_server_pool.serverpool1.uri}" policies = [ "${opc_lbaas_policy.load_balancing_mechanism_policy.uri}", "${opc_lbaas_policy.trusted_certificate_policy.uri} ] } More Information Example Terraform configuration for Load Balancer Classic Getting Started with Oracle Cloud Infrastructure Load Balancing Classic Terraform Provider for Oracle Cloud Infrastructure Classic

(Originally published on Medium) This article provides an introduction to using the Load Balancer resources to provision and configure an Oracle Cloud Infrastructure Load Balancer Classic instance...

Developers

A Quick Look At What's New In Oracle JET v5.1.0

On June 18th, the v5.1.0 release of Oracle JET was made available. It was the 25th consecutive on-schedule release for Oracle JET. Details on the release schedule are provided here in the FAQ. As indicated by the release number, v5.1.0 is a minor release, aimed at tweaking and consolidating features throughout the toolkit. As in other recent releases, new features have been added to support development of composite components, following the Composite Component Architecture (CCA). For details, see the entry on the new Template Slots in Duncan Mills's blog. Also, take note of the new design time metadata, as described in the release notes. Aside from the work done in the CCA area, the key new features and enhancements to be aware of in the release are listed below, sorted alphabetically by component:
oj-chart: New "data" attribute. Introduces new attributes, slots, and custom elements.
oj-film-strip: New "looping" attribute. Specifies filmstrip navigation behavior, bounded ("off") or looping ("page").
oj-form-layout: Enhanced content flexibility. Removes restrictions on the types of children allowed in the "oj-form-layout" component.
oj-gantt: New "dnd" attribute and "ojMove" event. Provides new support for moving tasks via drag and drop.
oj-label-value: New component. Provides enhanced layout flexibility for the "oj-form-layout" component.
oj-list-view: Enhanced "itemTemplate" slot. Supports including the <LI> element in the template.
oj-swipe-actions: New component. Provides a declarative way to add swipe-to-reveal functionality to items in the "oj-list-view" component.
For all the details on the items above, see the release notes. Note: Be aware that in Oracle JET 7.0.0, support for Yeoman and Grunt will be removed from generator-oraclejet and ojet-cli. As a consequence, the ojet-cli will be the only way to use the Oracle JET tooling, e.g., to create new Oracle JET projects from that point on. Therefore, if you haven't transitioned from Yeoman and Grunt to ojet-cli yet, e.g., to command line calls such as "ojet create", take some time to move in that direction before the 7.0.0 release. As always, your comments and constructive feedback are welcome. If you have questions, or comments, please engage with the Oracle JET Community in the Discussion Forums and also follow @OracleJET on Twitter. For organizations using Oracle JET in production, you're invited to be highlighted on the Oracle JET site, with the latest addition being a brand new Customer Success Story by Capgemini. On behalf of the entire Oracle JET development team: "Happy coding!"

On June 18th, the v5.1.0 release of Oracle JET was made available. It was the 25th consecutive on-schedule release for Oracle JET. Details on the release schedule are provided here in the FAQ. As indicat...

APIs

Vibrant and Growing: The Current State of API Management

"Vibrant and growing all the time!" That's how Andrew Bell, Oracle PaaS API Management Architect at Capgemini, describes the current state of API management. "APIs are the doors to organizations, the means by which organizations connect to one another, connect their processes to one another, and streamline those processes to meet customer needs. The API environment is growing rapidly as we speak," Bell says. "API management today is quite crucial," says Bell's Capgemini colleague Sander Rensen, an Oracle PaaS lead and architect, "especially for clients who want to go on a journey of a digital transformation. For our clients, the ability to quickly find APIs and subscribe to them is a very crucial part of digital transformation. "It's not just the public-facing view of APIs," observes Oracle ACE Phil Wilkins, a senior Capgemini consultant specializing in iPaaS. "People are realizing that APIs are an easier, simpler way to do internal decoupling. If I expose my back-end system in a particular way to another part of the organization — the same organization — I can then mask from you how I'm doing transformation or innovation or just trying to keep alive a legacy system while we try and improve our situation," Wilkins explains. "I think that was one of the original aspirations of WSDL and technologies like that, but we ended up getting too fine-grained and tying WSDLs to end products. Then the moment the product changed that WSDL changed and you broke the downstream connections." Luis Weir, CTO of Capgemini's Oracle delivery unit and an Oracle Developer Champion and ACE Director, is just as enthusiastic about the state of API management, but see's a somewhat rocky road ahead for some organizations. "APIs are one thing, but the management of those APIs is something entirely different," Weir explains "API management is something that we're doing quite heavily, but I don't think all organizations have actually realized the importance of the full lifecycle management of the APIs. Sometimes people think of API management as just an API gateway. That’s an important capability, but there is far more to it," Weir wonders if organizations understand what it means to manage an API throughout its entire lifecycle. Bell, Rensen, Wilkins, and Weir are the authors of Implementing Oracle API Platform Cloud Service, now available from Packt Publishing, and as you'll hear in this podcast, they bring considerable insight and expertise to this discussion of what's happening in API management. The conversation goes beyond the current state of API management to delve into architectural implications, API design, and how working in SOA may have left you with some bad habits. Listen! This program was recorded on June 27, 2018. The Panelists Andrew Bell Oracle PaaS API Management Architect, Capgemini     Sander Rensen Oracle PaaS Lead and Architect, Capgemini     Luis Weir CTO, Oracle DU, Capgemini Oracle Developer Champion Oracle ACE Director Phil Wilkins Senior Consultant specializing in iPaaS Oracle ACE   Additional Resources Book Excerpt: Implement an API Design-first approach for building APIs [Tutorial] Microservices in a Monolith World, Presentation by Phil Wilkins Video: API-Guided Drone Flight London Oracle Developer Meet-up Two New Articles on API Management and Microservices Podcast: Are Microservices and APIs Becoming SOA 2.0? 
Podcast Show Notes: API Management Roundtable Podcast: Taking Charge: Meeting SOA Governance Challenges Related Oracle Code One Sessions The Seven Deadly Sins of API Design [DEV4921], by Luis Weir Oracle Cloud Soaring: Live Demo of a Poly-Cloud Microservices Implementation [DEV4979], by Luis Weir, Lucas Jellema, Guido Schmutz   Coming Soon How has your role as a developer, DBA, or Sysadmin changed? Our next program will focus on the evolution of IT roles and the trends and technologies that are driving the changes. Subscribe Never miss an episode! The Oracle Developer Community Podcast is available via: iTunes Podbean Feedburner

"Vibrant and growing all the time!" That's how Andrew Bell, Oracle PaaS API Management Architect at Capgemini, describes the current state of API management. "APIs are the doors to organizations, the...

Blockchain

Keep Calm and Code On: Four Ways an Enterprise Blockchain Platform Can Improve Developer Productivity

A guest post by Sarabjeet (Jay) Chugh, Sr. Director Product Marketing, Oracle Cloud Platform Situation You just got a cool new Blockchain project for a client. As you head back to the office, you start to map out the project plan in your mind. Can you meet all of your client’s requirements in time? You're not alone in this dilemma. You attend a blockchain conference the next day and get inspired by engaging talks, meet fellow developers working on similar projects. A lunchtime chat with a new friend turns into a lengthy conversation about getting started with Blockchain. Now you’re bursting with new ideas and ready to get started with your hot new Blockchain coding project. Right? Well almost… You go back to your desk and contemplate a plan of action to develop your smart contract or distributed application, thinking through the steps, including ideation, analysis, prototype, coding, and finally building the client-facing application. Problem It is then that the reality sets in. You begin thinking beyond proof-of-concept to the production phase that will require additional things that you will need to design for and build into your solution. Additional things such as:   These things may delay or even prevent you from getting started with building the solution. Ask yourself the questions such as: Should I spend time trying to fulfill dependencies of open-source software such as Hyperledger Fabric on my own to start using it to code something meaningful? Do I spend time building integrations of diverse systems of record with Blockchain? Do I figure out how to assemble components such as Identity management, compute infrastructure, storage, management & monitoring systems to Blockchain? How do I integrate my familiar development tools & CI/CD platform without learning new tools? And finally, ask yourself, Is it the best use of your time to figure out scaling, security, disaster recovery, point in time recovery of distributed ledger, and the “illities” like reliability, availability, and scalability? If the answer to one or more of these is a resounding no, you are not alone. Focusing on the above aspects, though important, will take time away from doing the actual work to meet your client’s needs in a timely manner, which can definitely be a source of frustration. But do not despair. You need to read on about how an enterprise Blockchain platform such as the one from Oracle can make your life simpler. Imagine productivity savings multiplied hundreds of thousands of times across critical enterprise blockchain applications and chaincode. What is an Enterprise Blockchain Platform? The very term “enterprise”  typically signals a “large-company, expensive thing” in the hearts and minds of developers. Not so in this case, as it may be more cost effective than spending your expensive developer hours to build, manage, and maintain blockchain infrastructure and its dependencies on your own. As the chart below shows, the top two Blockchain technologies used in proofs of concept have been Ethereum and Hyperledger.   Ethereum has been a platform of choice among the ICO hype for public blockchain use. However, it has relatively lower performance, is slower and less mature compared to Hyperledger. It also uses a less secure programming model based on a primitive language called Solidity, which is prone to re-entrant attacks that has led to prominent hacks like the DOA attack that lost $50M recently.   
Hyperledger Fabric, on the other hand, wins out in terms of maturity, stability, performance, and is a good choice for enterprise use cases involving the use of permissioned blockchains. In addition, capabilities such as the ones listed in Red have been added by vendors such as Oracle that make it simpler to adopt and use and yet retain the open source compatibility. Let’s look at how enterprise Blockchain platform, such as the one Oracle has built that is based on open-source Hyperledger Fabric can help boost developer productivity. How an Enterprise Blockchain Platform Drives Developer Productivity Enterprise blockchain platforms provide four key benefits that drive greater developer productivity:   Performance at Scale Faster consensus with Hyperledger Fabric Faster world state DB - record level locking for concurrency and parallelization of updates to world state DB Parallel execution across channels, smart contracts Parallelized validation for commit Operations Console with Web UI Dynamic Configuration – Nodes, Channels Chaincode Lifecycle – Install, Instantiate, Invoke, Upgrade Adding Organizations Monitoring dashboards Ledger browser Log access for troubleshooting Resilience and Availability Highly Available configuration with replicated VMs Autonomous Monitoring & Recovery Embedded backup of configuration changes and new blocks Zero-downtime patching Enterprise Development and Integration Offline development support and tooling DevOps CI/CD integration for chaincode deployment, and lifecycle management SQL rich queries, which enable writing fewer lines of code, fewer lines to debug REST API based integration with SaaS, custom apps, systems of record Node.js, GO, Java client SDKs Plug-and-Play integration adapters in Oracle’s Integration Cloud Developers can experience orders of magnitude of productivity gains with pre-assembled, managed, enterprise-grade, and integrated blockchain platform as compared assembling it on their own. Summary Oracle offers a pre-assembled, open, enterprise-grade blockchain platform, which provides plug-and-play integrations with systems of records and applications and autonomous AI-driven self-driving, self-repairing, and self-securing capabilities to streamline operations and blockchain functionality. The platform is built with Oracle’s years of experience serving enterprise’s most stringent use cases and is backed by expertise of partners trained in Oracle blockchain. The platform rids developers of the hassles of assembling, integrating, or even worrying about performance, resilience, and manageability that greatly improves productivity. If you’d like to learn more, Register to attend an upcoming webcast (July 16, 9 am PST/12 pm EST). And if your ready to dive right in you can sign up for $300 of free credits good for up to 3500 hours of Oracle Autonomous Blockchain Cloud Service usage.


DevOps

Build and Deploy Node.js Microservice on Docker using Oracle Developer Cloud

This is the first blog in a series that will help you understand how you can build a Node.js REST microservice Docker image and push it to Docker Hub using Oracle Developer Cloud Service. The next blog in the series will focus on deploying the container we build here to Oracle Kubernetes Engine on Oracle Cloud Infrastructure. You can read an overview of the Docker functionality in this blog.

Technology Stack Used
Developer Cloud Service - DevOps platform
Node.js version 6 - for microservice development
Docker - for builds
Docker Hub - container repository

Setting up the Environment:
Setting up the Docker Hub Account: You should create an account on https://hub.docker.com/. Keep the credentials handy for use in the build configuration section of the blog.
Setting up the Developer Cloud Git Repository: Now log in to your Oracle Developer Cloud Service project and create a Git repository as shown below. You can give the Git repository a name of your choice; for the purpose of this blog, I am calling it NodeJSDocker. You can copy the Git repository URL and keep it handy for future use.
Setting up the Build VM in Developer Cloud: Now we have to create a VM template and VM with the Docker software bundle for the execution of the build. Click the user drop-down at the top right of the page and select "Organization" from the menu. Click the VM Templates tab and then the "New Template" button. Give the template a name of your choice, select "Oracle Linux 7" as the platform, and then click the Create button. Once the template is created, click the "Configure Software" button. Select Docker from the list of software bundles available for configuration and click the + sign to add it to the template. Then click "Done" to complete the software configuration. Click the Virtual Machines tab, then click the "+New VM" button, enter the number of VMs you want to create, and select the VM template you just created, which would be "DockerTemplate" for our blog.

Pushing Scripts to the Git Repository on Oracle Developer Cloud:
Command_prompt:> cd <path to the NodeJS folder>
Command_prompt:> git init
Command_prompt:> git add --all
Command_prompt:> git commit -m "<some commit message>"
Command_prompt:> git remote add origin <Developer Cloud Git repository HTTPS URL>
Command_prompt:> git push origin master
The screenshots below are for your reference. Below is the folder structure description for the code that I have in the Git repository on Oracle Developer Cloud Service.

Code in the Git Repository:
You will need to push the following three files to the Developer Cloud hosted Git repository we created.

Main.js
This is the main Node.js code, which contains two simple routes: the first one returns a greeting message, and the second one, /add, adds two numbers. The application listens on port 80.

var express = require("express");
var bodyParser = require("body-parser");
var app = express();
app.use(bodyParser.urlencoded());
app.use(bodyParser.json());
var router = express.Router();
router.get('/', function(req, res) {
  res.json({"error" : false, "message" : "Hello Abhinav!"});
});
router.post('/add', function(req, res) {
  res.json({"error" : false, "message" : "success", "data" : req.body.num1 + req.body.num2});
});
app.use('/', router);
app.listen(80, function() {
  console.log("Listening at PORT 80");
});

Package.json
In this JSON code snippet we define the Node.js module dependencies.
We also define the start file, which is Main.js for our project, and the name of the application.

{
  "name": "NodeJSMicro",
  "version": "0.0.1",
  "scripts": {
    "start": "node Main.js"
  },
  "dependencies": {
    "body-parser": "^1.13.2",
    "express": "^4.13.1"
  }
}

Dockerfile
This file contains the commands to be executed to build the Docker container with the Node.js code. It starts from the Node.js version 6 Docker image, then adds the two files Main.js and package.json cloned from the Git repository, runs npm install to download the dependencies listed in package.json, exposes port 80 for the Docker container, and finally starts the application, which listens on port 80.

FROM node:6
ADD Main.js ./
ADD package.json ./
RUN npm install
EXPOSE 80
CMD [ "npm", "start" ]

Build Configuration:
Click the "+ New Job" button and, in the dialog which pops up, give the build job a name of your choice (for the purpose of this blog I have named it "NodeJSMicroDockerBuild"), and then select the build template (DockerTemplate) that we created earlier in the blog from the dropdown.
As part of the build configuration, add Git from the "Add Source Control" dropdown, and select the repository we created earlier in the blog, which is NodeJSDocker, and the master branch to which we have pushed the code. You may select the checkbox to configure an automatic build trigger on SCM commits.
Now, from the Builders tab, select Docker Builder -> Docker Login. In the Docker Login form you can leave the Registry host empty, as we will be using Docker Hub, which is the default Docker registry for the Developer Cloud Docker Builder. You will have to provide the Docker Hub account username and password in the respective fields of the login form.
In the Builders tab, select Docker Builder -> Docker Build from the Add Builder dropdown. You can leave the Registry host empty, as we are going to use Docker Hub, which is the default registry. Now you just need to give the image name in the form that gets added, and you are all done with the build job configuration. Click Save to save the build job configuration.
Note: The image name should be in the format <Docker Hub user name>/<Image Name>. For this blog we can give the image name as nodejsmicro, prefixed with your Docker Hub user name as described in the note above.
Then add Docker Push by selecting Docker Builder -> Docker Push from the Builders tab. Here you just need to mention the image name, the same as in the Docker Build form, to push the Docker image that is built to the Docker registry, which in this case is Docker Hub.
Once you execute the build, you will be able to see the build in the build queue. Once the build has executed, the Docker image that gets built is pushed to the Docker registry, which is Docker Hub for our blog. You can log in to your Docker Hub account to see the Docker repository being created and the image being pushed to it, as seen in the screenshot below.
Now you can pull this image anywhere, then create and run the container, and you will have your Node.js microservice code up and running.
You can go ahead and try many other Docker commands, both using the out-of-the-box Docker Builder functionality and, alternatively, using the Shell Builder to run your Docker commands.
In the next blog of the series, we will deploy this Node.js microservice container on a Kubernetes cluster in Oracle Kubernetes Engine. Happy Coding!
**The views expressed in this post are my own and do not necessarily reflect the views of Oracle
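Once the image is on Docker Hub, a quick way to verify the microservice is to pull and run the container with port 80 published, and then exercise the /add route. Below is a small, illustrative Node.js smoke test; the host and port are assumptions (a container running locally with port 80 mapped), so adjust them to your setup.

// test-add.js - a small smoke test for the /add route; assumes the container
// is running locally with port 80 published (adjust host/port as needed).
const http = require('http');

const payload = JSON.stringify({ num1: 2, num2: 3 });

const req = http.request({
  hostname: 'localhost',
  port: 80,
  path: '/add',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(payload)
  }
}, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  // Expected response from Main.js: {"error":false,"message":"success","data":5}
  res.on('end', () => console.log('Response:', body));
});

req.on('error', (err) => console.error('Request failed:', err.message));
req.write(payload);
req.end();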


Lessons From Alpha Zero (part 5): Performance Optimization

Photo by Mathew Schwartz on Unsplash (Originally published on Medium) This is the fifth installment in our series on lessons learned from implementing AlphaZero. Check out Part 1, Part 2, Part 3, and Part 4. In this post, we review aspects of our AlphaZero implementation that allowed us to dramatically improve the speed of game generation and training.

Overview
The task of implementing AlphaZero is daunting, not just because the algorithm itself is intricate, but also due to the massive resources the authors employed to do their research: 5000 TPUs were used over the course of many hours to train their algorithm, and that is presumably after a tremendous amount of time was spent determining the best parameters to allow it to train that quickly. By choosing Connect Four as our first game, we hoped to make a solid implementation of AlphaZero while utilizing more modest resources. But soon after starting, we realized that even a simple game like Connect Four could require significant resources to train: in our initial implementation, training would have taken weeks on a single GPU-enabled computer. Fortunately, we were able to make a number of improvements that shrank our training cycle time from weeks to about a day. In this post I'll go over some of our most impactful changes.

The Bottleneck
Before diving into some of the tweaks we made to reduce AlphaZero training time, let's describe our training cycle. Although the authors of AlphaZero used a continuous and asynchronous process to perform model training and updates, for our experiments we used the following three-stage synchronous process, which we chose for its simplicity and debuggability. While (my model is not good enough):
Generate Games: every model cycle, using the most recent model, game play agents generate 7168 games, which equates to about 140–220K game positions.
Train a New Model: based on a windowing algorithm, we sample from historical data and train an improved neural network.
Deploy the New Model: we take our new model, transform it into a deployable format, and push it into our cloud for the next cycle of training.
Far and away, the biggest bottleneck of this process is game generation, which was taking more than an hour per cycle when we first got started. Because of this, minimizing game generation time became the focus of our attention.

Model Size
AlphaZero is very inference-heavy during self-play. In fact, during one of our typical game generation cycles, MCTS requires over 120 million position evaluations. Depending on the size of your model, this can translate to significant GPU time. In the original implementation of AlphaZero, the authors used an architecture where the bulk of computation was performed in 20 residual layers, each with 256 filters. This amounts to a model in excess of 90 megabytes, which seemed like overkill for Connect Four. Also, using a model of that size was impractical given our initially limited GPU resources. Instead, we started with a very small model, using just 5 layers and 64 filters, just to see if we could make our implementation learn anything at all. As we continued to optimize our pipeline and improve our results, we were able to bump our model size to 20x128 (20 layers with 128 filters) while still maintaining a reasonable game generation speed on our hardware.
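For readers who prefer code to prose, here is a compact, runnable sketch of the three-stage synchronous cycle described above (generate games, train, deploy). It is illustrative only: the helper functions are stubs, the stopping condition is simulated with a fixed cycle count, and none of the names come from our actual codebase.

// A sketch of the generate -> train -> deploy loop, with stubbed helpers.
// All names, counts, and the stopping condition are illustrative only.
const GAMES_PER_CYCLE = 7168;          // from the prose: roughly 140-220K positions

function generateGames(model) {        // stub: self-play with the current model
  console.log(`generating ${GAMES_PER_CYCLE} games with model v${model.version}`);
  return { positions: 180000 };
}

function trainModel(model, data) {     // stub: sample a window of history, fit a new net
  console.log(`training on ${data.positions} sampled positions`);
  return { version: model.version + 1 };
}

function deployModel(model) {          // stub: convert to a servable format, push to the cloud
  console.log(`deploying model v${model.version} for the next cycle`);
}

let model = { version: 0 };
let cyclesRemaining = 3;               // stand-in for "while my model is not good enough"
while (cyclesRemaining-- > 0) {
  const data = generateGames(model);   // 1. game generation (the bottleneck)
  model = trainModel(model, data);     // 2. train an improved network
  deployModel(model);                  // 3. deploy for the next cycle
}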
Distributed Inference
From the get-go, we knew that we would need more than one GPU in order to achieve the training cycle time that we were seeking, so we created software that allowed our Connect Four game agent to perform remote inference to evaluate positions. This allowed us to scale GPU-heavy inference resources separately from game play resources, which only need CPUs.

Parallel Game Generation
GPU resources are expensive, so we wanted to make sure that we were saturating them as much as possible during playouts. This turned out to be trickier than we imagined. One of the first optimizations we put in place was to run many games on parallel threads from the same process. Perhaps the largest direct benefit of this is that it allowed us to cache position evaluations, which could be shared amongst different threads. This cut the number of requests getting sent to our remote inference server by more than a factor of 2. Caching was a huge win, but we still wanted to deal with the remaining uncached requests in an efficient manner. To minimize network latency and best leverage GPU parallelization, we combined inference requests from different worker threads into a bucket before sending them to our inference service. The downside to this is that if a bucket was not promptly filled, any calling thread would be stuck waiting until the bucket's timeout expired. Under this scheme, choosing an appropriate inference bucket size and timeout value was very important. We found that the bucket fill rate varied throughout the course of a game generation batch, mostly because some games would finish sooner than others, leaving behind fewer and fewer threads to fill the bucket. This caused the final games of a batch to take a long time to complete, all while GPU utilization dwindled to zero. We needed a better way to keep our buckets filled.

Parallel MCTS
To help with our unfilled bucket problem, we implemented parallel MCTS, which was discussed in the AlphaZero paper. Initially we had punted on this detail, as it seemed mostly important for competitive one-on-one game play, where parallel game play is not applicable. After running into the issues mentioned previously, we decided to give it a try. The idea behind parallel MCTS is to allow multiple threads to take on the work of accumulating tree statistics. While this sounds simple, the naive approach suffers from a basic problem: if N threads all start at the same time and choose a path based on the current tree statistics, they will all choose exactly the same path, thus crippling MCTS' exploration component. To counteract this, AlphaZero uses the concept of virtual loss, an algorithm that temporarily adds a game loss to any node that is traversed during a simulation. A lock is used to prevent multiple threads from simultaneously modifying a node's simulation and virtual loss statistics. After a node is visited and a virtual loss is applied, the next thread to visit the same node will be discouraged from following the same path. Once a thread reaches a terminal point and backs up its result, this virtual loss is removed, restoring the true statistics from the simulation. With virtual loss in place, we were finally able to achieve >95% GPU utilization during most of our game generation cycle, which was a sign that we were approaching the real limits of our hardware setup.
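To make the virtual loss idea concrete, here is a minimal sketch of selection and backup with virtual loss, in the spirit of the description above rather than a transcription of our actual code. The node shape, the exploration constant, and the single-threaded framing (no lock shown) are simplifications and assumptions for illustration.

// Minimal sketch: UCT-style child selection that applies a virtual loss on the
// way down, so concurrent simulations are steered away from the same path.
// Illustrative only; a real implementation guards these updates with a lock.
const VIRTUAL_LOSS = 1;

function selectChild(node) {
  let best = null;
  let bestScore = -Infinity;
  for (const child of node.children) {
    // Treat pending (virtual) visits as losses when scoring the child.
    const visits = child.visits + child.virtualLoss;
    const meanValue = visits > 0 ? (child.valueSum - child.virtualLoss) / visits : 0;
    const explore = child.prior * Math.sqrt(node.visits) / (1 + visits);
    const score = meanValue + 1.5 * explore;   // 1.5 is an assumed exploration constant
    if (score > bestScore) { bestScore = score; best = child; }
  }
  // Apply virtual loss so the next thread sees this path as less attractive.
  best.virtualLoss += VIRTUAL_LOSS;
  return best;
}

function backup(path, value) {
  // Remove the virtual loss and record the real simulation result.
  for (const node of path) {
    node.virtualLoss -= VIRTUAL_LOSS;
    node.visits += 1;
    node.valueSum += value;
  }
}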
Technically, virtual loss adds some degree of exploration to game playouts, as it forces move selection down paths that MCTS may not naturally be inclined to visit, but we never measured any detrimental (or beneficial) effect due to its use.

TensorRT/TensorRT+INT8
Though it was not necessary to use a model quite as large as that described in the AlphaZero paper, we saw better learning from larger models, and so wanted to use the biggest one possible. To help with this, we tried TensorRT, a technology created by Nvidia to optimize the performance of model inference. It is easy to convert an existing TensorFlow/Keras model to TensorRT using just a few scripts. Unfortunately, at the time we were working on this, there was no released TensorRT remote serving component, so we wrote our own. With TensorRT's default configuration, we noticed a small increase in inference throughput (~11%). We were pleased by this modest improvement, but were hopeful to see an even larger performance increase by using TensorRT's INT8 mode. INT8 mode required a bit more effort to get going, since when using INT8 you must first generate a calibration file to tell the inference engine what scale factors to apply to your layer activations when using 8-bit approximated math. This calibration is done by feeding a sample of your data into Nvidia's calibration library. Because we observed some variation in the quality of calibration runs, we would attempt calibration against 3 different sets of sample data, and then validate the resulting configuration against hold-out data. Of the three calibration attempts, we chose the one with the lowest validation error. Once our INT8 implementation was in place, we saw an almost 4X increase in inference throughput vs. stock libtensorflow, which allowed us to use larger models than would have otherwise been feasible. One downside of using INT8 is that it can be lossy and imprecise in certain situations. While we didn't observe serious precision issues during the early parts of training, as learning progressed we would observe the quality of inference start to degrade, particularly on our value output. This initially led us to use INT8 only during the very early stages of training. Serendipitously, we were able to virtually eliminate our INT8 precision problem when we began experimenting with increasing the number of convolutional filters in our head networks, an idea we got from Leela Chess. Below is a chart of our value output's mean average error with 32 filters in the value head vs. the AlphaZero default of 1. We theorize that adding additional cardinality to these layers reduces the variance in the activations, which makes the model easier to accurately quantize. These days, we always perform our game generation with INT8 enabled and see no ill effects, even towards the end of AlphaZero training.

Summary
By using all of these approaches, we were finally able to train a decent-sized model with high GPU utilization and good cycle time. It initially looked like it would take weeks to perform a full training run, but now we could train a decent model in less than a day. This was great, but it turned out we were just getting started: in the next article we'll talk about how we tuned AlphaZero itself to get even better learning speed. Part 6 is now out. Thanks to Vish (Ishaya) Abrams and Aditya Prasad.


Chatbots

A Practical Guide to Building Multi-Language Chatbots with the Oracle Bot Platform

Article by Frank Nimphius, Marcelo Jabali - June 2018. Chatbot support for multiple languages is a worldwide requirement. Almost every country has a need to support foreign languages, be it to serve immigrants, refugees, tourists, or even employees crossing borders on a daily basis for their jobs. According to the Linguistic Society of America, as of 2009, 6,909 distinct languages were classified, a number that has grown since then. Although no bot needs to support all of these languages, it is clear that for developers building multi-language bots, understanding natural language in multiple languages is a challenge, especially if the developer does not speak all of the languages he or she needs to implement support for. This article explores Oracle's approach to multi-language support in chatbots. It explains the tooling and practices you can use and follow to build bots that understand and "speak" foreign languages. Read the full article.

Related Content
TechExchange: A Simple Guide and Solution to Using Resource Bundles in Custom Components
TechExchange - Custom Component Development in OMCe – Getting Up and Running Immediately
TechExchange - First Step in Training Your Bot


Database

Announcing Oracle APEX 18.1

Oracle Application Express (APEX) 18.1 is now generally available! APEX enables you to develop, design, and deploy beautiful, responsive, data-driven desktop and mobile applications using only a browser. This release of APEX is a dramatic leap forward in both the ease of integration with remote data sources and the easy inclusion of robust, high-quality application features. Keeping up with the rapidly changing industry, APEX now makes it easier than ever to build attractive and scalable applications which integrate data from anywhere - within your Oracle database, from a remote Oracle database, or from any REST service - all with no coding. And the new APEX 18.1 enables you to quickly add higher-level features which are common to many applications, delivering a rich and powerful end-user experience without writing a line of code.
"Over a half million developers are building Oracle Database applications today using Oracle Application Express (APEX). Oracle APEX is a low code, high productivity app dev tool which combines rich declarative UI components with SQL data access. With the new 18.1 release, Oracle APEX can now integrate data from REST services with data from SQL queries. This new functionality is eagerly awaited by the APEX developer community", said Andy Mendelsohn, Executive Vice President of Database Server Technologies at Oracle Corporation.
Some of the major improvements to Oracle Application Express 18.1 include:

Application Features
It has always been easy to add components to an APEX application - a chart, a form, a report. But in APEX 18.1, you now have the ability to add higher-level application features to your app, including access control, feedback, activity reporting, email reporting, dynamic user interface selection, and more. In addition to the existing reporting and data visualization components, you can now create an application with a "cards" report interface, a dashboard, and a timeline report. The result? An easily created, powerful, and rich application, all without writing a single line of code.

REST Enabled SQL Support
Oracle REST Data Services (ORDS) REST-Enabled SQL Services enable the execution of SQL in remote Oracle databases, over HTTP and REST. You can POST SQL statements to the service, and the service then runs the SQL statements against the Oracle database and returns the result to the client in JSON format. In APEX 18.1, you can build charts, reports, calendars, trees, and even invoke processes against Oracle REST Data Services (ORDS)-provided REST Enabled SQL Services. No longer is a database link necessary to include data from remote database objects in your APEX application - it can all be done seamlessly via REST Enabled SQL.

Web Source Modules
APEX now offers the ability to declaratively access data services from a variety of REST endpoints, including ordinary REST data feeds, REST Services from Oracle REST Data Services, and Oracle Cloud Applications REST Services. In addition to supporting smart caching rules for remote REST data, APEX also offers the unique ability to directly manipulate the results of REST data sources using industry-standard SQL.

REST Workshop
APEX includes a completely rearchitected REST Workshop, to assist in the creation of REST Services against your Oracle database objects. The REST definitions are managed in a single repository, and the same definitions can be edited via the APEX REST Workshop, SQL Developer, or via documented APIs.
Users can exploit the data management skills they possess, such as writing SQL and PL/SQL, to define RESTful API services for their database. The new REST Workshop also includes the ability to generate Swagger documentation against your REST definitions, all with the click of a button.

Application Builder Improvements
In Oracle Application Express 18.1, wizards have been streamlined with smarter defaults and fewer steps, enabling developers to create components quicker than ever before. There have also been a number of usability enhancements to Page Designer, including greater use of color and graphics on page elements, and "Sticky Filter", which is used to maintain a specific filter in the property editor. These features are designed to enhance the overall developer experience and improve development productivity. APEX Spotlight Search provides quick navigation and a unified search experience across the entire APEX interface.

Social Authentication
APEX 18.1 introduces a new native authentication scheme, Social Sign-In. Developers can now easily create APEX applications which can use Oracle Identity Cloud Service, Google, Facebook, generic OpenID Connect, and generic OAuth2 as the authentication method, all with no coding.

Charts
The data visualization engine of Oracle Application Express is powered by Oracle JET (JavaScript Extension Toolkit), a modular open-source toolkit based on modern JavaScript, CSS3, and HTML5 design and development principles. The charts in APEX are fully HTML5 capable and work on any modern browser, regardless of platform or screen size. These charts provide numerous ways to visualize a data set, including bar, line, area, range, combination, scatter, bubble, polar, radar, pie, funnel, and stock charts. APEX 18.1 features an upgraded Oracle JET 4.2 engine with updated charts and APIs. There are also new chart types, including Gantt, Box-Plot, and Pyramid, and better support for multi-series, sparse data sets.

Mobile UI
APEX 18.1 introduces many new UI components to assist in the creation of mobile applications. Three new component types - ListView, Column Toggle, and Reflow Report - can now be used natively with the Universal Theme and are commonly used in mobile applications. Additional enhancements have been made to the APEX Universal Theme which are mobile-focused, namely mobile page headers and footers which remain consistently displayed on mobile devices, and floating item label templates, which optimize the information presented on a mobile screen. Lastly, APEX 18.1 also includes declarative support for touch-based dynamic actions (tap and double tap, press, swipe, and pan), supporting the creation of rich and functional mobile applications.

Font APEX
Font APEX is a collection of over 1,000 high-quality icons, many specifically created for use in business applications. Font APEX in APEX 18.1 includes a new set of high-resolution 32 x 32 icons which include much greater detail, and the correctly sized font will automatically be selected for you, based upon where it is used in your APEX application.

Accessibility
APEX 18.1 includes a collection of tests in the APEX Advisor which can be used to identify common accessibility issues in an APEX application, including missing headers and titles, and more. This release also deprecates the accessibility modes, as a separate mode is no longer necessary for applications to be accessible.
Upgrading
If you're an existing Oracle APEX customer, upgrading to APEX 18.1 is as simple as installing the latest version. The APEX engine will automatically be upgraded and your existing applications will look and run exactly as they did in the earlier versions of APEX.
"We believe that APEX-based PaaS solutions provide a complete platform for extending Oracle's ERP Cloud. APEX 18.1 introduces two new features that make it a landmark release for our customers. REST Service Consumption gives us the ability to build APEX reports from REST services as if the data were in the local database. This makes embedding data from a REST service directly into an ERP Cloud page much simpler. REST enabled SQL allows us to incorporate data from any Cloud or on-premise Oracle database into our Applications. We can't wait to introduce APEX 18.1 to our customers!", said Jon Dixon, co-founder of JMJ Cloud.

Additional Information
Application Express (APEX) is the low code rapid app dev platform which can run in any Oracle Database and is included with every Oracle Database Cloud Service. APEX, combined with the Oracle Database, provides a fully integrated environment to build, deploy, maintain and monitor data-driven business applications that look great on mobile and desktop devices. To learn more about Oracle Application Express, visit apex.oracle.com. To learn more about Oracle Database Cloud, visit cloud.oracle.com/database.
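As a small, hypothetical illustration of the REST Enabled SQL feature described above, the sketch below POSTs a SQL statement to an ORDS REST-Enabled SQL endpoint from Node.js. The host, schema path, table, and credentials are placeholders, and the /_/sql path with the application/sql content type reflects the ORDS convention at the time of writing; check the ORDS documentation for your version before relying on it.

// query-via-rest-sql.js - illustrative sketch only; replace host, path, and
// credentials with values from your own ORDS / REST-Enabled SQL setup.
const https = require('https');

const sql = 'select deptno, dname from dept';   // hypothetical table

const req = https.request({
  hostname: 'myhost.example.com',
  path: '/ords/hr/_/sql',                 // conventional REST-Enabled SQL endpoint
  method: 'POST',
  auth: 'hr:hr_password',                 // a REST-enabled schema user
  headers: { 'Content-Type': 'application/sql' }
}, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  // ORDS returns the result set as JSON, which APEX can consume as if it were local data.
  res.on('end', () => console.log(body));
});

req.on('error', (err) => console.error('Request failed:', err.message));
req.write(sql);
req.end();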


DevOps

Oracle Cloud Infrastructure CLI on Developer Cloud

With our May 2018 release of Oracle Developer Cloud, we have integrated the Oracle Cloud Infrastructure command line interface (referred to as OCIcli in this blog) into the build pipeline in Developer Cloud. This blog will help you understand how you can configure and execute OCIcli commands as part of a build job in a Developer Cloud build pipeline.

Configuring the Build VM Template for OCIcli
You will have to create a build VM with the OCIcli software bundle to be able to execute builds with OCIcli commands. Click the user drop-down at the top right of the page and select "Organization" from the menu. Click the VM Templates tab and then the "New Template" button. Give the template a name of your choice, select "Oracle Linux 7" as the platform, and then click the Create button. Once the template is created, click the "Configure Software" button. Select OCIcli from the list of software bundles available for configuration and click the + sign to add it to the template. You will also have to add the Python 3.5 software bundle, which is a dependency for OCIcli. Then click "Done" to complete the software configuration. Click the Virtual Machines tab, then click the "+New VM" button, enter the number of VMs you want to create, and select the VM template you just created, which would be "OCIcli" for our blog.

Build Job Configuration
Configure the Tenancy OCID as a build parameter using a String Parameter and give it a name of your choice. I have named it "T" and have provided a default value, as shown in the screenshot below. In the Builders tab, select OCIcli Builder and a Unix Shell builder, in this sequence, from the Add Builder drop-down. On adding the OCIcli Builder, you will see the form shown below. You can get the parameters for the OCIcli Builder from the OCI console; the screenshots below show where to find each of these form values. The red boxes highlight where you can get the Tenancy OCID and the region for the "Tenancy" and "Region" fields, respectively, in the OCIcli Builder form. For the "User OCID" and "Fingerprint" you need to go to User Settings by clicking the username drop-down at the top right of the OCI console; please refer to the screenshot below. Please refer to the links below to understand the process of generating the private key and configuring the public key for the user in the OCI console.
https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm
https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How
https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#How3
In the Unix Shell builder you can try out the command below:
oci iam compartment list -c $T
This command lists all the compartments in the tenancy whose OCID is given by the variable 'T', which we configured in the Build Parameters tab as a String Parameter. After the command executes, you can view the output in the console log, as shown below. There are tons of other OCIcli commands that you can run as part of the build pipeline. Please refer to this link for the full list. Happy Coding!
**The views expressed in this post are my own and do not necessarily reflect the views of Oracle
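Because the OCI CLI prints JSON, a Unix Shell builder step can also post-process its output with any tool available on the build VM. Below is a hypothetical sketch that runs the same compartment-list command from Node.js and prints compartment names; it assumes the oci binary is on the PATH and that the build parameter T is available as an environment variable (both are assumptions about your build VM setup).

// list-compartments.js - a sketch of post-processing OCIcli JSON output.
// Assumes the oci binary is on the PATH and the tenancy OCID is exposed
// in the T environment variable (as in the build parameter above).
const { execSync } = require('child_process');

const tenancyOcid = process.env.T;
if (!tenancyOcid) {
  console.error('Set the T environment variable to the tenancy OCID.');
  process.exit(1);
}

// The CLI prints a JSON document with a "data" array of compartment records.
const raw = execSync(`oci iam compartment list -c ${tenancyOcid}`, { encoding: 'utf8' });
const compartments = JSON.parse(raw).data || [];

for (const c of compartments) {
  console.log(`${c.name}\t${c.id}`);
}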


DevOps

Oracle Developer Cloud - New Continuous Integration Engine Deep Dive

We introduced our new Build Engine in Oracle Developer Cloud in our April release. This new build engine comes with the capability to define build pipelines visually. Read more about it in my previous blog. In this blog we will delve deeper into some of the functionality of the Build Pipeline feature of the new CI engine in Oracle Developer Cloud.

Auto Start
Auto Start is an option given to the user while creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog for creating a new pipeline; it includes a checkbox which, when checked, ensures that the pipeline auto starts when one of the build jobs in the pipeline is executed externally, triggering the execution of the rest of the build jobs in the pipeline. The screenshot below shows the pipeline for the Node.js application created in Oracle Developer Cloud Pipelines. The build jobs used in the pipeline are build-microservice, test-microservice, and loadtest-microservice. In parallel to the microservice build sequence we have WiremockInstall and WiremockConfigure.
Scenarios when Auto Start is enabled for the pipeline:
Scenario 1: If we run the build-microservice build job externally, it will lead to the execution of the test-microservice and loadtest-microservice build jobs, in that order. Note that this does not trigger the execution of the WiremockInstall or WiremockConfigure build jobs, as they are part of a separate sequence. Please refer to the screenshot below, which shows only the executed build jobs in green.
Scenario 2: If we run the test-microservice build job externally, it will lead to the execution of the loadtest-microservice build job only. Please refer to the screenshot below, which shows only the executed build jobs in green.
Scenario 3: If we run the loadtest-microservice build job externally, no other build job in the pipeline is executed, across both build sequences.

Exclusive Build
This option lets users disallow the pipeline's jobs from being built externally in parallel with the execution of the build pipeline. It is an option given to the user while creating a build pipeline in Oracle Developer Cloud Service. The screenshot below shows the dialog for creating a new pipeline; it includes a checkbox which, when checked, ensures that the build jobs in the pipeline cannot be built in parallel with the pipeline's execution. When you run the pipeline, you will see the build jobs queued for execution in the Build History. In this case you will see two build jobs queued: one is build-microservice and the other is WiremockInstall, as they start the two parallel sequences of the same pipeline. Now if you try to run any of the build jobs in the pipeline, for example test-microservice, you will be given an error message, as shown in the screenshot below.

Pipeline Instances
If you click the build pipeline name link in the Pipelines tab, you will be able to see the pipeline instances. A pipeline instance represents a single execution of the pipeline. The screenshot below shows the pipeline instances with the timestamp of each execution. It indicates whether the pipeline was auto started (hover over the status icon of the pipeline instance) due to an external execution of a build job, or shows the success status if all the build jobs of the pipeline were built successfully. It also shows, in green, the build jobs that executed successfully for that particular pipeline instance.
The build jobs that did not get executed have a white background. You also get an option to cancel the pipeline while it is executing, and you may choose to delete the instance after the pipeline has finished.

Conditional Build
The visual build pipeline editor in Oracle Developer Cloud supports conditional builds. You will have to double-click the link connecting two build jobs and select one of the conditions given below:
Successful: To proceed to the next build job in the sequence if the previous one was a success.
Failed: To proceed to the next build job in the sequence if the previous one failed.
Test Failed: To proceed to the next build job in the sequence if the tests failed in the previous build job in the pipeline.

Fork and Join
Scenario 1: Fork. In this scenario you have a build job like build-microservice on which the other three build jobs depend: "DockerBuild", which builds a deployable Docker image for the code; "terraformBuild", which provisions the instance on Oracle Cloud Infrastructure and deploys the code artifact; and the "ArtifactoryUpload" build job, which uploads the generated artifact to Artifact