
News, tips, partners, and perspectives for the Oracle Linux operating system and upstream Linux kernel work

Recent Posts

Linux

Oracle Linux 8 Advanced System Administration Certification Exam

Thanks to Craig McBride for this blog. Training Tuesday Edition - 14

In response to increasing interest from the Oracle Linux user community in an updated certification exam, we are pleased to announce the availability of the certification exam for Oracle Linux 8. The Oracle Linux 8 Advanced System Administration Certification Exam is now available: https://education.oracle.com/oracle-linux-8-advanced-system-administration/pexam_1Z0-106

By passing this exam, a certified individual demonstrates fluency in, and a solid understanding of, the skills required to deploy, configure, and administer an Oracle Linux 8 production server environment. Reap the benefits of earning an Oracle Certification:
- Expand your knowledge base and validate your skills to appeal to potential employers.
- Broaden your network and join 1.8 million Oracle Certified professionals.
- Gain exposure to a wide variety of important features, functions, and tasks to use on the job.

Participants should have the following prerequisite knowledge before taking the certification exam:
- Configuring the Linux boot process and systemd
- System configuration options
- Installing and maintaining software packages
- Automating tasks
- Oracle Ksplice
- User and group administration
- File systems and swap configuration
- Configuring LVM, MD, and RAID devices
- Network configuration
- OpenSSH
- Managing Linux security
- System logging
- System monitoring
- Pluggable Authentication Modules (PAM)
- Access Control Lists (ACLs) and encrypted block devices
- Control groups (cgroups)
- Container concepts, Podman, Kubernetes
- Linux Auditing System

A combination of Oracle training and hands-on experience (attained via labs and/or field experience) provides the best preparation for passing the exam. Access useful training and product documents on the websites below:
- Oracle Linux website
- Oracle Linux 8 training videos
- Oracle Linux 8 product documentation


Announcements

Announcing the Unbreakable Enterprise Kernel Release 5 Update 5 for Oracle Linux

The Unbreakable Enterprise Kernel (UEK) for Oracle Linux provides the latest open source innovations, key optimizations, and security for enterprise cloud and on-premises workloads. It is the Linux kernel that powers Oracle Cloud and Oracle Engineered Systems such as Oracle Exadata Database Machine, as well as Oracle Linux on 64-bit Intel and AMD or 64-bit Arm platforms. UEK Release 5 does not disable any features that are enabled in RHCK. Additional features are enabled to provide support for key functional requirements, and patches are applied to improve performance and optimize the kernel.

What's New?

UEK R5 Update 5 can be recognized by a release number starting with 4.14.35-2047.500.9.1. Notable changes:
- Page clearing optimizations. Optimizations to the code that handles page cache clearance can improve performance in KVM for large guests, which can result in much quicker start-up times; the changes are localized to Intel's next-generation Ice Lake server hardware platform.
- File systems and storage. Assorted security fixes, bug fixes, and back-ports for btrfs, CIFS, ext4, NFS, OCFS2, and XFS.
- RDMA:
  - Improvements to RDS failover/failback performance. RDS handling of failover or failback is improved to boost performance.
  - Improved tracing on RDS for debugging. Tracepoints have been added to RDS code for support within eBPF and DTrace, to replace legacy debugging mechanisms.
  - RDMA bug fixes and optimizations. General bug fixes and optimizations for RDMA are also included in this update, including the resolution of a bug to properly handle RDMA cancel requests.
- Security:
  - securityfs interface for Secure Boot lockdown mode added. The lockdown file for the securityfs interface (/sys/kernel/security/lockdown) now allows for reading and setting the Secure Boot lockdown state.
- Driver updates. Working in close cooperation with hardware and storage vendors, Oracle has updated several device drivers from the versions in mainline Linux 4.14.35.
Further details are available in section "1.2.1 Notable Driver Features and Updates" of the Release Notes. The Unbreakable Enterprise Kernel Release 5 Update 5 (UEK R5U5) is based on mainline kernel version 4.14.35. Through actively monitoring upstream check-ins and collaborating with partners and customers, Oracle continues to improve and apply critical bug and security fixes to UEK R5 for Oracle Linux. This update includes several new features, added functionality, and bug fixes across a range of subsystems. For more details on these and other new features and changes, please consult the Release Notes for UEK R5 Update 5.

Security (CVE) Fixes

A full list of CVEs fixed in this release can be found in the Release Notes for UEK R5 Update 5.

Supported Upgrade Path

Customers can upgrade existing Oracle Linux 7 servers using the Unbreakable Linux Network or the Oracle Linux yum server by pointing to the "UEK Release 5" yum channel.

Software Download

Oracle Linux can be downloaded, used, and distributed free of charge, and updates and errata are freely available. This allows organizations to decide which systems require a support subscription and makes Oracle Linux an ideal choice for development, testing, and production systems. Users decide which support coverage is best for each system individually, while keeping all systems up to date and secure. Customers with Oracle Linux Premier Support also receive access to zero-downtime kernel updates using Oracle Ksplice.

Compatibility

To minimize impact on interoperability during releases, the Oracle Linux team works closely with third-party vendors that have hardware and software dependencies on kernel modules. The kernel ABI for UEK R5 will remain unchanged in all subsequent updates to the initial release. Oracle Linux maintains full user-space compatibility with Red Hat Enterprise Linux (RHEL), which is independent of the kernel version that is running underneath the operating system.
Existing applications in user space will continue to run unmodified on the Unbreakable Enterprise Kernel Release 5, and no re-certifications are needed for RHEL-certified applications.

About Oracle Linux

Oracle Linux is an open and complete operating environment that helps accelerate digital transformation. It delivers leading performance and security for hybrid and multi-cloud deployments. Oracle Linux is 100% application binary compatible with Red Hat Enterprise Linux. And, with an Oracle Linux Support subscription, customers have access to award-winning Oracle support resources and Linux support specialists, zero-downtime patching with Ksplice, cloud native tools such as Kubernetes and Kata Containers, KVM virtualization and Oracle Linux Virtualization Manager, DTrace, clustering tools, Oracle Linux Manager, Oracle Enterprise Manager, and lifetime support. All this and more is included in a single cost-effective support offering. Unlike many other commercial Linux distributions, Oracle Linux is easy to download and completely free to use, distribute, and update. Further information is available at oracle.com/linux.
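As noted above, UEK R5U5 kernels are identified by release numbers beginning with 4.14.35-2047.500, and the securityfs lockdown file lists the available lockdown modes with the active one in brackets. As a purely illustrative sketch (plain Python, not an Oracle tool), both details can be checked from user space:

```python
import platform

UEK_R5U5_PREFIX = "4.14.35-2047.500"

def is_uek_r5u5(release: str) -> bool:
    """True if a `uname -r`-style string belongs to UEK R5 Update 5."""
    return release.startswith(UEK_R5U5_PREFIX)

def active_lockdown_mode(contents: str) -> str:
    """Pick the bracketed (active) mode out of the lockdown file contents.

    On kernels with the lockdown interface, /sys/kernel/security/lockdown
    reads like "[none] integrity confidentiality".
    """
    for token in contents.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    raise ValueError("no active lockdown mode in %r" % contents)

# platform.release() returns the same string as `uname -r`
print(is_uek_r5u5(platform.release()))
print(active_lockdown_mode("[none] integrity confidentiality"))  # -> none
```

Reading the real lockdown file requires a kernel built with the securityfs lockdown interface; the sample string above stands in for its contents.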


Linux

Oracle Linux Virtualization Manager: Managing Storage made easy with short training videos

Thanks to Craig McBride for this blog. Training Tuesday Edition - 13

In this week’s blog, we present you with a set of short videos on managing storage in Oracle Linux Virtualization Manager. These videos introduce you to storage concepts and demonstrate how to create, attach, and maintain various storage types. In these videos, you will learn about storage domains that hold resources used by virtual machines. These resources include disk images, ISO files, templates, and snapshots. Supported storage types include file-based storage such as Network File System (NFS) or other POSIX-compliant file systems. Oracle Linux Virtualization Manager also supports block-based storage types such as Internet Small Computer System Interface (iSCSI) and Fibre Channel Protocol (FCP) storage. You can also use locally attached storage for storage domains.

Here’s the list of videos and the time it takes for you to complete each one:
- Managing iSCSI Storage with Oracle Linux Virtualization Manager (8 Min)
- Managing NFS Storage with Oracle Linux Virtualization Manager (8 Min)
- Uploading Resources to a Data Storage Domain with Oracle Linux Virtualization Manager (5 Min)

You can also enroll in the Oracle Linux Virtualization Manager Managing Storage learning path and take the following quiz: Oracle Linux Virtualization Manager Managing Storage Quiz

Be sure to come back for our next edition, which will cover the release of the Oracle Linux 8 Advanced System Administration certification exam. Read more about the products, download datasheets, and access other useful documents on the websites below:
- Oracle Linux website
- Oracle Virtualization website
- Oracle Linux Virtualization Manager training videos
- Oracle Linux Virtualization Manager product documentation


Linux

Oracle Linux Virtualization Manager: Managing Virtual Machines made easy with short training videos

Thanks to Craig McBride for this blog. Training Tuesday Edition - 12

In last week’s Training Tuesday blog, we introduced you to the first in a series of training videos on Oracle Linux Virtualization Manager. Today, we continue with the second set of free, short videos on managing virtual machines (VMs). These videos demonstrate deployment of VMs through the Administration Portal and VM Portal graphical interfaces. In this set of videos, you learn about the physical, logical, and virtual components needed to create and run virtual machines. You learn how to use templates to simplify the deployment of similar virtual machines and how to create and use Open Virtual Appliance (OVA) files in Oracle Linux Virtualization Manager.

Here’s the list of videos and the time it takes for you to complete each one:
- Creating a Virtual Machine in Oracle Linux Virtualization Manager (10 Min)
- Creating a Template from a Virtual Machine in Oracle Linux Virtualization Manager (8 Min)
- Creating a Virtual Machine from a Template in Oracle Linux Virtualization Manager (5 Min)
- Export VMs and Templates as OVAs in Oracle Linux Virtualization Manager (6 Min)

You can also enroll in the Oracle Linux Virtualization Manager managing virtual machines learning path and take the following quiz: Oracle Linux Virtualization Manager Managing Virtual Machines Quiz

Be sure to come back for our next edition, which will cover managing storage. Read more about the products, download datasheets, and access other useful documents on the websites below:
- Oracle Linux website
- Oracle Virtualization website
- Oracle Linux Virtualization Manager training videos
- Oracle Linux Virtualization Manager product documentation


Announcements

Bowmicro builds successful cloud service with Oracle Linux

In this article, we are excited to share how Bowmicro offers clients enterprise-grade SaaS and IaaS solutions built with Oracle Linux and Virtualization. Bowmicro is a leading software company that provides managed data center cloud solutions and professional services in China. The company is a nationally certified and fully licensed high-tech company. These accreditations help assure customers that they are working with a reputable and reliable business with the proper technical expertise. In a competitive market, Bowmicro has a distinct advantage by offering hosted cloud solutions at affordable rates. They work with customers from all sectors, including government, financial, telecommunications, and internet providers. Bowmicro also has many clients in the rapidly growing Chinese small and midsize business (SMB) market.

Bowmicro is a longstanding member of the Oracle PartnerNetwork (OPN). They work with and use many Oracle technologies to build their managed cloud platforms. For example, Bowmicro’s SaaS offering includes Oracle PeopleSoft among other applications. Underlying these SaaS instances is an IaaS foundation built using Oracle x86 Servers, Oracle Linux, and Oracle VM, which can support third-party solutions in addition to Oracle workloads. This infrastructure layer is also available separately to Bowmicro's clients as a standalone IaaS offering. These combined SaaS and IaaS solutions provide a highly secure, highly available architecture for a low Total Cost of Ownership (TCO).

Oracle x86 Servers allow customers to run workloads on industry-standard servers with high security and performance. End-to-end Oracle engineering and trusted boot capabilities increase workload security. These are the same servers used to build Oracle Cloud Infrastructure and Oracle Engineered Systems. Oracle x86 Servers include Oracle Linux and virtualization software at no extra charge, helping service providers such as Bowmicro eliminate hidden infrastructure expenses.
If you want to learn more about how Bowmicro implemented their solutions, read this paper or check out this customer profile.

Resources:
- Oracle Linux website
- Oracle Virtualization website


Linux

Oracle Linux Virtualization Manager: Administration and Deployment made easy with short training videos

Thanks to Craig McBride for this post. Training Tuesday Edition - 11

In this week's Training Tuesday blog, we begin the first in a series of blogs about Oracle Linux Virtualization Manager training videos. Each blog provides pointers to free, short videos that you can take at your own pace to get a better understanding of the product. Oracle Linux Virtualization Manager is a server virtualization management platform, based on the oVirt open source project, that can be easily deployed to configure, monitor, and manage an Oracle Linux Kernel-based Virtual Machine (KVM) environment with enterprise-grade performance and support from Oracle. This environment also includes management and cloud native computing tools, as well as the operating system, delivering leading performance and security for hybrid and multi-cloud deployments. This first blog focuses on the administration of the Oracle Linux Virtualization Manager environment as well as the deployment of hosts and architectural resources. There is also a short quiz that you can take at the end of this learning path to test your knowledge and reinforce your learning experience. Here’s the list of videos and the time it takes for you to complete each one.
In less than an hour, you can complete administration and deployment training:
- Managing Local Users and Groups with Command Line for Oracle Linux Virtualization Manager (9 Min)
- Managing Roles and Permissions with Oracle Linux Virtualization Manager (12 Min)
- Adding a KVM Compute Host with Oracle Linux Virtualization Manager (8 Min)
- Creating a Virtual Machine Network in Oracle Linux Virtualization Manager (7 Min)
- Backing up and Restoring an Oracle Linux Virtualization Manager (9 Min)
- Upgrade the Oracle Linux Virtualization Manager KVM Host (8 Min)

You can also enroll in the Oracle Linux Virtualization Manager Administration and Deployment learning path and take the following quiz: Oracle Linux Virtualization Manager Administration and Deployment Quiz

Be sure to come back for our next edition, which will cover managing virtual machines.

Resources:
- Oracle Linux website
- Oracle Virtualization website
- Oracle Linux Virtualization Manager training videos
- Oracle Linux Virtualization Manager product documentation


Linux

Cloud Native Patterns: a free ebook for developers

Building cloud native applications is a challenging undertaking, especially considering the rapid evolution of cloud native computing. But it’s also very liberating and rewarding. You can develop new patterns and practices where the limitations of hardware-dependent models, geography, and size no longer exist. This approach to technology can make cloud application developers more agile and efficient, even as it reduces deployment costs and increases independence from cloud service providers.

Oracle is one of the few cloud vendors to also have a long history of providing enterprise software. Wearing both software developer and cloud service provider hats, we understand the complexity of transforming on-premises applications into cloud native applications. Removing that complexity for customers is a guiding tenet at Oracle.

To help developers advance their learning, the Oracle Linux team is offering Cloud Native Patterns, a free ebook. It’s a three-chapter excerpt of Cornelia Davis’ Cloud Native Patterns: Designing change-tolerant software. The ebook is intended to help you navigate key elements of the cloud native landscape, determine potential pitfalls in your environment, and streamline your cloud adoption. In these three chapters, you’ll learn about important cloud native concepts such as data-driven applications and shorter user feedback cycles. You’ll also learn why repeatability and continuous delivery can help you succeed. Get your free ebook today. With thanks to Oracle Publishers Program member Manning Publications.

Resources: Read more about the products, download datasheets, and access other useful documents on the websites below.
- Oracle Linux
- Oracle VM VirtualBox
- Oracle Virtualization


Linux

Oracle Linux 8: Containers made easy with short training videos

Thanks to Craig McBride for this post. Training Tuesday Edition - 10

Container technology provides a means for developers and system administrators to build and package applications together with libraries, binaries, and configuration files so they can run independently from the host operating system and kernel version. You can run the same container application, unchanged, on laptops, data center virtual machines, and in a cloud environment. Learn how to create and maintain containers, images, and pods on Oracle Linux 8 with the Podman, Buildah, and Skopeo tools. Podman provides a lightweight utility to run and manage Open Container Initiative (OCI) compatible containers and can reuse existing container images designed for Kubernetes and Oracle Linux Cloud Native Environment. Buildah is a utility for creating OCI-compatible container images. Skopeo is a utility for managing container images on remote container registries, making it possible to inspect a container image's contents without first downloading it.

Here’s the list of videos and the time it takes for you to complete each one:
- Install Podman, Buildah, and Skopeo on Oracle Linux 8 (6 Min)
- Pulling an Image using Podman on Oracle Linux 8 (5 Min)
- Running a Container using Podman on Oracle Linux 8 (9 Min)
- Using Bind Mounts with Podman Containers on Oracle Linux 8 (8 Min)
- Using Volumes for Podman Container Storage on Oracle Linux 8 (8 Min)
- Use a Dockerfile with Podman on Oracle Linux 8 (9 Min)

New videos are added on an ongoing basis, so check back often. To reinforce your learning about Podman, access free Podman hands-on lab exercises via docs.oracle.com/learn:
- Get Started with Podman
- Use Storage with Podman Containers

In next week's Training Tuesday blog, we will continue on the theme of virtualization with the first in a series of blogs about Oracle Linux Virtualization Manager training videos.
Watch the previous Training Tuesday episodes:
- Oracle Linux 8: Installation made easy with free videos
- Oracle Linux 8: Administration made easy with free videos
- Oracle Linux 8: Package Management made easy with free videos
- Oracle Linux 8: Networking made easy with free videos
- Oracle Linux 8: Oracle Ksplice made easy with short training videos
- Oracle Linux 8: Remote Management made easy with short training videos
- Oracle Linux 8: Disk Management made easy with short training videos
- Oracle Linux 8: Kernel-based Virtual Machine (KVM) Virtualization made easy with short training videos
- Oracle Linux 8: Oracle VM VirtualBox made easy with short training videos

Resources:
- Oracle Linux website
- Oracle Linux 8 training videos
- Oracle Linux 8 product documentation


Linux

Oracle Linux 8: Oracle VM VirtualBox made easy with short training videos

Thanks to Craig McBride for this post. Training Tuesday Edition - 9

A popular tool for developers and users creating cloud and local applications without the overhead of using a full server environment, Oracle VM VirtualBox runs on standard x86 desktop and laptop computers. It allows users to set up multi-platform virtual machine environments for software development, testing, and general-purpose operating system (OS) virtualization, with optional runtime encryption. Software engineers can develop for cloud native environments from within Oracle VM VirtualBox VMs directly on their Windows, Mac OS, Linux, and Oracle Solaris machines, making it easier to create multi-tier applications with just a standard laptop. Oracle VM VirtualBox also enables users to create and update virtual machines locally, including the OS and applications, and then package them into an industry-standard file format for easy distribution and cloud deployment in conjunction with Oracle Linux KVM or other server virtualization solutions. Oracle VM VirtualBox allows users to run nearly any standard x86 OS, hosting applications that are not available natively on their systems.

Here’s the list of videos and the time it takes for you to complete each one:
- Install VirtualBox 6.1 on Oracle Linux 8 (6 Min)
- Install VirtualBox 6.1 Extension Pack on Oracle Linux 8 (5 Min)
- Create a Linux VM using VirtualBox 6.1 on Oracle Linux 8 (10 Min)
- Install VirtualBox 6.1 Guest Additions (Linux VM) on Oracle Linux 8 (6 Min)
- Export a Linux VM guest using VirtualBox 6.1 on Oracle Linux 8 (6 Min)

Be sure to come back for our next edition, which will cover containers. New videos are added on an ongoing basis, so check back often.
Watch the previous Training Tuesday episodes:
- Oracle Linux 8: Installation made easy with free videos
- Oracle Linux 8: Administration made easy with free videos
- Oracle Linux 8: Package Management made easy with free videos
- Oracle Linux 8: Networking made easy with free videos
- Oracle Linux 8: Oracle Ksplice made easy with short training videos
- Oracle Linux 8: Remote Management made easy with short training videos
- Oracle Linux 8: Disk Management made easy with short training videos
- Oracle Linux 8: Kernel-based Virtual Machine (KVM) Virtualization made easy with short training videos

This is our last video training blog for 2020. Happy Holidays, and join us for Training Tuesdays in 2021.

Resources:
- Oracle Linux website
- Oracle Linux 8 training videos
- Oracle Linux 8 product documentation
- Oracle VM VirtualBox
- Oracle VM VirtualBox 6.1 Release


Announcements

Enhancing Oracle Linux 8 Management with Oracle Linux Manager 2.10

Oracle is pleased to introduce Oracle Linux Manager 2.10, a management tool based on the Spacewalk project that provides IT managers with what they need to manage their Oracle Linux environment. This release includes all of the features you have come to appreciate in Spacewalk 2.10, with incremental enhancements for Oracle Linux 8 clients. Oracle Linux Manager 2.10 replaces the previous Spacewalk 2.10 release. Oracle Linux Manager incorporates significant enhancements to support Oracle Linux 8 clients and module-enabled repositories.

Errata and package upgrade handling for channels with modules

Oracle Linux Manager works smarter, reporting errata and package upgrades only when a channel has modules and streams defined, and can install those errata. Previously, Spacewalk would incorrectly report that errata and package upgrades were applicable from any stream for each module installed on a client. Oracle Linux Manager filters the listed errata and package upgrades to only those relevant to the module:stream combination installed on the client and correctly installs the packages needed for individual systems, groups of systems, and system sets. Note that errata handling and filtering is designed for Oracle Linux. This functionality is available via the Web UI, the spacecmd system_listerrata, system_applyerrata, system_listupgrades, and system_upgradepackage commands, the XMLRPC channel.software.listErrata methods, and spacewalk-report commands.

Module support when cloning software channels

Oracle Linux Manager clones the module metadata when cloning software channels, which ensures that the clone channel has the correct module metadata matching the state of the channel when it was cloned. This functionality works with modular channels from any vendor's distribution, not just Oracle Linux. This functionality is available via the Web UI, spacecmd softwarechannel_clone, and the XMLRPC API channel.software.clone method.
Module support for channel life cycle management

The updated spacewalk-channel-manage-lifecycle tool in Oracle Linux Manager copies and updates module metadata when manipulating channel life cycles, which ensures that the module metadata is always in sync with the channel package state. Specifically:
- Initializing a modular channel copies the source channel's module metadata
- Promoting a modular channel copies missing module metadata from the source channel to the target channel
- Promoting a modular channel with the clear option set overwrites the target channel's module metadata with the source channel's metadata
- Creating an archive of a channel copies the module metadata from the specified channel into the new archive channel
- Rolling back a channel to an archive copies the module metadata from the archive channel to the target channel

This functionality works with modular content from any vendor's distribution, including Oracle Linux. New XMLRPC API namespace methods channel.software.clear and channel.software.rollbackChannel have been added to support this functionality. The API documentation has been updated as part of this release.

Module support for copying packages between channels

When adding packages from one modular channel to another, Oracle Linux Manager updates the target channel's module metadata with the module:stream combinations corresponding to the packages being added. Additionally, Oracle Linux Manager will automatically add any other packages listed in each module:stream that are not present in the target channel. Via the Web UI, Oracle Linux Manager 2.10 provides a summary of the number of packages that were automatically added to complete module:stream requirements.

Switching or upgrading from Spacewalk to Oracle Linux Manager 2.10

The Oracle Linux Manager Installation Guide provides step-by-step instructions on converting from Spacewalk 2.7 or 2.10 to Oracle Linux Manager 2.10.
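The module-aware errata filtering described in the errata handling section above can be pictured with a small, purely illustrative Python sketch. The data structures and advisory IDs here are hypothetical, not Oracle Linux Manager's actual API; the point is simply that only errata matching a module:stream enabled on the client (plus non-modular errata) are reported:

```python
def relevant_errata(errata, enabled_streams):
    """Filter errata to those matching an enabled module:stream on the client.

    errata          -- list of dicts with "id", "module", "stream" keys
                       (module/stream of None means a non-modular erratum)
    enabled_streams -- set of (module, stream) pairs enabled on the client
    """
    return [
        e for e in errata
        if e["module"] is None
        or (e["module"], e["stream"]) in enabled_streams
    ]

# Hypothetical advisories: two streams of the same module plus one
# non-modular erratum.
advisories = [
    {"id": "ERRATUM-A", "module": "php", "stream": "7.4"},
    {"id": "ERRATUM-B", "module": "php", "stream": "7.2"},
    {"id": "ERRATUM-C", "module": None, "stream": None},
]
client_streams = {("php", "7.4")}

# Only the php:7.4 erratum and the non-modular one apply to this client.
print([e["id"] for e in relevant_errata(advisories, client_streams)])
# -> ['ERRATUM-A', 'ERRATUM-C']
```

Before this filtering, a Spacewalk-style listing would have reported both php errata regardless of which stream the client had enabled.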
Resources:
- Oracle Linux Manager Documentation

For more information, visit oracle.com/linux.


Linux

Multiprocess QEMU: Breaking up is hard to do

QEMU is the backbone of virtualization on Linux, providing control plane and emulation services for guest VMs. One of the most common complaints about QEMU stems from its monolithic nature: one process that does both control and emulation exposes more "surface area" that we, in turn, have to protect from security vulnerabilities. Well, perhaps no longer, as multi-process QEMU has now been accepted into the QEMU code base. Congratulations to Jag Raman, Elena Ufimtseva, and John Johnson on taking this code from idea to reality. This patchset represents the culmination of more than two years of difficult work and a dozen patchset submissions, starting from the original conference submission by Konrad Wilk at KVM Forum in 2017, through the first RFC in 2018, to the successful v12 patchset. This is a major accomplishment!

How does this help security? In addition to minimizing the attack surface for guests, it also isolates any emulated devices. If an attacker gains control of an emulated device, their access will be constrained by the strict SELinux controls on that device. This change also allows developers to write QEMU backends in their preferred language, like Rust or Go.

Read the full patch submission and responses at lists.gnu.org/qemu-devel

From: Jagannathan Raman
Subject: [PATCH v12 00/19] Initial support for multi-process Qemu
Date: Tue, 1 Dec 2020 15:22:35 -0500

To touch upon the history of this project, we posted the Proof Of Concept patches before the BoF session in 2018. Subsequently, we have posted 11 versions on the qemu-devel mailing list.
You can find them by following the links below ([1] - [11]). The following people contributed to the design and implementation of this project: Jagannathan Raman, Elena Ufimtseva, John G Johnson, Stefan Hajnoczi, Konrad Wilk, and Kanth Ghatraju. We would like to thank the QEMU community for your feedback on the design and implementation of this project.

Qemu wiki page: https://wiki.qemu.org/Features/MultiProcessQEMU

For the full concept writeup about QEMU multi-process, please refer to docs/devel/qemu-multiprocess.rst. Also see docs/qemu-multiprocess.txt for usage information. We welcome all your ideas, concerns, and questions for this patchset. Thank you!

[POC]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg566538.html
[1]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg602285.html
[2]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg624877.html
[3]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg642000.html
[4]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg655118.html
[5]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg682429.html
[6]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg697484.html
[7]: https://patchew.org/QEMU/cover.1593273671.git.elena.ufimtseva@oracle.com/
[8]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg727007.html
[9]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg734275.html
[10]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg747638.html
[11]: https://www.mail-archive.com/qemu-devel@nongnu.org/msg750972.html

Thank you!
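To picture why splitting device emulation into its own process helps, here is a toy Python sketch. It is purely illustrative and far simpler than the real multi-process QEMU protocol: a fake "device" with a couple of MMIO-style registers runs in a separate process and only talks to the front end over a pipe, so compromising the device model does not hand an attacker the main process.

```python
from multiprocessing import Pipe, Process

def device_emulator(conn):
    """Toy 'remote device' running in its own process.

    Multi-process QEMU isolates emulated devices along these lines: the
    device process can additionally be confined by SELinux, so its
    compromise is contained.
    """
    registers = {0x0: 0xCAFE, 0x4: 0xBEEF}  # fake MMIO registers
    while True:
        op, addr, value = conn.recv()
        if op == "read":
            conn.send(registers.get(addr, 0))
        elif op == "write":
            registers[addr] = value
            conn.send(None)  # acknowledge the write
        elif op == "quit":
            break

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    proc = Process(target=device_emulator, args=(child_end,))
    proc.start()
    # The front end only ever exchanges messages; it never shares memory
    # or code with the device process.
    parent_end.send(("read", 0x0, None))
    print(hex(parent_end.recv()))  # -> 0xcafe
    parent_end.send(("quit", None, None))
    proc.join()
```

The real patchset uses a purpose-built message protocol between the QEMU front end and the remote device process; this sketch only conveys the process-isolation idea.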


Linux

The Maple Tree, a new data structure for Linux

Last week, Liam Howlett posted the first version of the Maple Tree to the linux-kernel mailing list. The Maple Tree, a collaboration between Liam Howlett and Matthew Wilcox, introduces a B-tree based range-locked tree which could significantly reduce unnecessary contention on the memory management subsystem -- with the eventual goal of perhaps removing mmap_sem entirely. They have been working on this for a year over at github.com/oracle/linux-uek and I'm really excited to see this project sent out for comment and review! Read the RFC at lore.kernel.org The maple tree is an RCU-safe range based B-tree designed to use modern processor cache efficiently. There are a number of places in the kernel where a non-overlapping range-based tree would be beneficial, especially one with a simple interface. The first user covered in this patch set is the vm_area_struct rbtree in the mm_struct, with the long term goal of reducing the contention on the mmap_sem. The tree has a branching factor of 10 for non-leaf nodes and 16 for leaf nodes. With the increased branching factor, it is significantly shorter than the rbtree, so it has fewer cache misses. As you can see below, the performance is getting very close even without the vmacache. I am looking for ways to optimize the vma code to make this even faster and would like input. As this tree requires allocations to provide RCU functionality, it is worth avoiding expanding the tree only to contract it in the next step. This patch set is based on 5.10-rc1. Please note that 35821 lines of this patch are test code. It is important to note that this is an RFC and there are limitations around what is currently supported. The current status of this release is:
- **No support for 32 bit or nommu builds**
- There is a performance regression with regards to kernel builds from -3% to -5% of system time. Elapsed time is within 1 second of 5.10-rc1.
- Removal of the vmacache
- The mm struct uses the maple tree to store and retrieve all VMAs
- Decrease in performance in the following micro-benchmarks:
  - will-it-scale brk1, which tests the insert speed of a vma as it is written: -25 to -33%. The test does not seem to test what it is meant to test.
  - kernbuild (~-4 to -5% system, less than -1% elapsed Amean)
- Increase in performance in the following micro-benchmarks in Hmean:
  - will-it-scale malloc1-processes (~1-9%)
  - will-it-scale malloc1-threads (~29-71%)
  - will-it-scale pthread_mutex1-threads (~0-16%)
  - will-it-scale signal1-processes (2-17%)
  - will-it-scale brk1-threads **This test doesn't make sense, disregard**
- Converted the following to use the advanced maple tree interface:
  - brk()
  - mmap_region()
  - do_munmap()
  - dup_mmap()
- Currently operating in non-RCU mode. Once enabled, RCU mode will be enabled only when more than one thread is active on a task. At that point the maple tree will enable RCU lookup of VMAs.

The long term goal of the maple tree is to reduce mmap_sem contention by removing users that cause contention one by one. This may lead to the lock being completely removed at some point. A secondary goal is to provide the kernel with a fast RCU tree with a simple interface and to clean up the mm code by removing the integration of the tree within the code and structures themselves. This patch set is based on 5.10-rc1. Link: https://github.com/oracle/linux-uek/releases/tag/howlett%... Implementation details: Each node is 256 bytes and has multiple node types. The current implementation uses two node types and has a branching factor of 16 for leaf/non-alloc nodes and 10 for internal alloc nodes. The advantage of the internal alloc nodes is the ability to track the largest sub-tree gap for each entry. Even with the removal of the vmacache, the benchmarks are getting close to the rbtree, but I'd like some help in regards to making things faster.
The tree is at a disadvantage in needing to allocate storage space for internal use, whereas the rbtree stored its data within the vm_area_struct. This is a necessary trade-off to move to RCU mode in the future.
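The interface described above -- store a value for a non-overlapping range, then look an address up to find the entry covering it, as mm does for VMAs -- can be illustrated with a minimal, hedged Python sketch. This is not the kernel implementation (which is C, and a wide B-tree with the branching factors quoted above); a sorted list stands in for the tree purely for brevity, and the VMA names are made up for the example.

```python
import bisect

class RangeStore:
    """Toy non-overlapping range store illustrating the maple tree's
    interface: map ranges [start, last] to entries, look up by address.
    The real maple tree is a cache-efficient B-tree; a sorted list is
    used here only to keep the sketch short."""

    def __init__(self):
        self._starts = []   # sorted range start addresses
        self._ranges = []   # parallel list of (start, last, entry)

    def store(self, start, last, entry):
        # Assumes the new range does not overlap an existing one.
        i = bisect.bisect_left(self._starts, start)
        self._starts.insert(i, start)
        self._ranges.insert(i, (start, last, entry))

    def load(self, addr):
        # Find the rightmost range starting at or before addr.
        i = bisect.bisect_right(self._starts, addr) - 1
        if i >= 0:
            start, last, entry = self._ranges[i]
            if start <= addr <= last:
                return entry
        return None   # gap: no range covers this address

# Storing hypothetical VMAs by address range, as mm would:
vmas = RangeStore()
vmas.store(0x400000, 0x401fff, "text")
vmas.store(0x600000, 0x600fff, "data")
print(vmas.load(0x400abc))  # -> text
print(vmas.load(0x500000))  # -> None (unmapped gap)
```

A lookup is a single descent keyed by address; in the kernel this is what replaces walking the vm_area_struct rbtree.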


Linux

How long does your IO take?

Oracle Linux engineer Rajan Shanmugavelu illustrates how to analyse disk IO latency using DTrace. There are times when, despite a highly available, fault-tolerant storage architecture, disk IO takes abnormally long to complete, potentially causing outages at different levels in a data center. This becomes critical in a cluster with multiple nodes that use a disk heartbeat for health checks. Utilities like iostat and sar provide some information at a higher level. The service time 'svctm' column in both iostat and sar shows the latency from the host, the amount of time spent on the wire between the HBA port and the target/lun. Sometimes the service time 'svctm' provided by Linux iostat and sar is not reliable. Dynamic Tracing (DTrace) allows one to measure latency at a more granular level, such as measuring elapsed time at the adapter driver layer. With this we can find out where in the whole driver stack more cycles are spent. This can be done on the running Linux kernel without having to install an instrumented driver or requiring a reboot of the system. Below is an example of DTrace measuring latency in the QLogic FC qla2xxx driver from the time the SCSI command is queued until it is completed from the target. This measures every IO that is sent down the FC channel. We can also filter for commands that are taking an abnormally long time to complete; here we are interested in SCSI commands that have taken more than 25 milliseconds. Anything less than 15 milliseconds is normal per the SCSI specification standard for a spindle disk. The DTrace utility on Oracle Linux can be installed by following the instructions on Getting Started With DTrace; the kernel DTrace provider modules can be dynamically loaded.

#!/usr/sbin/dtrace -s
/*
 * qla.d DTrace script to measure latency of IO in the FC adapter qla2xxx layer.
 * Requires 'fbt' provider to be loaded (run 'sudo modprobe fbt').
 */
fbt::qla2xxx_queuecommand:entry
{
    qla_cmnd = (struct scsi_cmnd *)arg1;
    qla_starttime = (int64_t)timestamp;
}

fbt::qla2x00_sp_compl:entry
/qla_cmnd == ((srb_t *)arg0)->u.scmd.cmd &&
 (this->elapsed = ((int64_t)timestamp - qla_starttime)/1000000) > 25/
{
    this->scsi_device = (struct scsi_device *)qla_cmnd->device;
    this->device = (struct device) this->scsi_device->sdev_gendev;
    this->dev_name = stringof(this->device.kobj.name);
    this->disk_name = stringof(qla_cmnd->request->rq_disk->disk_name);
    printf("%Y cmnd 0x%p opcode %xh on [%s] %s took %3dms",
        walltimestamp, qla_cmnd, qla_cmnd->cmnd[0],
        this->dev_name, this->disk_name, this->elapsed);
    qla_cmnd = NULL;
    qla_starttime = 0;
}

The DTrace qla.d script has 2 fbt (function boundary tracing) probes. The 'qla2xxx_queuecommand:entry' probe stores the SCSI command, which is the second argument to this routine. We also store the nanosecond timestamp as qla_starttime. The probe 'qla2x00_sp_compl:entry' fires upon SCSI command completion, which gets called for every IO. This is the place to insert the DTrace predicate, which restricts the code block to execute only when the condition specified in the predicate is met. Here we check that the SCSI command matches the one set at qla2xxx_queuecommand and that the elapsed time is greater than 25 milliseconds. To match the SCSI command with the one set at qla2xxx_queuecommand, we need to get the SCSI command upon entering the qla2x00_sp_compl routine, whose first argument is a pointer of type 'srb_t'. The SCSI command resides in the union member u.scmd.cmd of the srb_t structure. DTrace allows one to traverse the structure members of the argument with ((srb_t *)arg0)->u.scmd.cmd. Then we calculate the elapsed time as the built-in nanosecond timer variable timestamp minus the start time set at qla2xxx_queuecommand. If the elapsed time is greater than 25ms and the SCSI commands match, then we execute the code block to show details.
Below is output from a system with a simulated slow disk:

# dtrace -s qla.d
dtrace: script 'qla.d' matched 2 probes
CPU ID FUNCTION:NAME
18 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:17:33 cmnd 0xffff881e33796800 opcode 28h on [1:0:0:1] sdc took 214ms
18 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:17:33 cmnd 0xffff881e337971c0 opcode 28h on [1:0:0:1] sdc took 55ms
18 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:20:45 cmnd 0xffff881e33cea700 opcode 28h on [1:0:0:245] sddw took 438ms
6 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:20:52 cmnd 0xffff881e35959040 opcode 28h on [2:0:1:259] sduh took 268ms
6 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:21:18 cmnd 0xffff881cec68b740 opcode 28h on [2:0:1:243] sdtz took 31ms
6 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:22:26 cmnd 0xffff881cdf960000 opcode 2ah on [2:0:1:103] sdsf took 53ms
6 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:23:16 cmnd 0xffff881e30737500 opcode 28h on [2:0:1:193] sdta took 525ms
18 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:24:11 cmnd 0xffff881def666180 opcode 2ah on [1:0:0:175] sdh took 118ms
18 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:24:50 cmnd 0xffff881e2629db00 opcode 28h on [1:0:0:253] sdea took 190ms
6 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:25:01 cmnd 0xffff88006b78de40 opcode 28h on [2:0:1:135] sdrx took 559ms
18 71529 qla2x00_sp_compl:entry 2020 Dec 3 20:25:26 cmnd 0xffff881cbc718000 opcode 28h on [1:0:0:73] sdao took 133ms

The DTrace command takes a -s option on the command line to specify the location of the D program source file to be compiled. There are other options that can be specified along with the -s option (see the dtrace(1) man page). The output shows the probe being fired when the predicate conditions are met. The printf statement in the DTrace code block shows the timestamp in human readable form, the SCSI command address, and how long it took to complete the IO. The device name shows details such as on which Nexus the delay occurred.
It shows whether the delay is due to a specific device/lun or a specific target, or whether it is channel wide or across all channels. We can also find which SCSI command was on the wire for such a long time. DTrace allows one to access structure members with type casting, which enables one to read the values of the members. They are not writable. The dev_name, or Nexus, shows the SCSI host number (which is also the Fibre Channel port), the target, and the lun number. The disk name is the name of the device under the /dev/ directory. The opcode shows whether it was a read (28h), a write (2ah), a Test Unit Ready (0h), or some other SCSI command opcode that got delayed, and the time it took to complete the IO.

# dtrace -qs qla.d
2020 Dec 8 19:22:55 cmnd 0xffff881cd08b71c0 opcode 28h on [1:0:0:213] sddg took 35ms
2020 Dec 8 19:26:37 cmnd 0xffff881e24e93400 opcode 28h on [2:0:0:189] sdno took 462ms
2020 Dec 8 19:27:53 cmnd 0xffff881cd8f93a80 opcode 28h on [2:0:0:219] sdod took 29ms
2020 Dec 8 21:04:39 cmnd 0xffff881c5c0c0340 opcode 2ah on [2:0:0:41] sdks took 44ms
2020 Dec 8 22:16:08 cmnd 0xffff881f3a96e4c0 opcode 28h on [1:0:0:235] sddr took 529ms
2020 Dec 8 22:27:47 cmnd 0xffff881e2de109c0 opcode 28h on [2:0:0:91] sdlr took 578ms
2020 Dec 9 00:33:38 cmnd 0xffff881e00a5aa40 opcode 28h on [1:0:0:43] sdz took 61ms
2020 Dec 9 00:54:48 cmnd 0xffff881dc7846800 opcode 28h on [1:0:0:195] sdcx took 408ms

Running the DTrace script with the '-q' option enables quiet mode, which displays only what is specified in the printf() statement. The non-quiet mode also shows the number of probes being traced, which CPU ran the probe routine, the probe ID, and the probe function name.
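The core logic of qla.d -- record a timestamp when a command is queued, and on completion report it only if it matches and its elapsed time exceeds the 25 ms threshold -- can be sketched outside DTrace. This is a toy Python model of the predicate, with made-up command IDs, not the driver instrumentation itself:

```python
# Toy model of the qla.d predicate: record a timestamp when a command
# is queued; on completion, report it only if elapsed time exceeds a
# threshold (25 ms in the script). DTrace uses its nanosecond
# `timestamp` variable; plain integers stand in for it here.
THRESHOLD_MS = 25
inflight = {}  # command id -> queue timestamp (ns)

def queuecommand(cmd_id, now_ns):
    # Models the fbt qla2xxx_queuecommand:entry probe body.
    inflight[cmd_id] = now_ns

def complete(cmd_id, now_ns):
    # Models the fbt qla2x00_sp_compl:entry probe plus its predicate.
    start = inflight.pop(cmd_id, None)
    if start is None:
        return None                    # predicate: command must match
    elapsed_ms = (now_ns - start) / 1_000_000
    if elapsed_ms > THRESHOLD_MS:      # predicate: slower than 25 ms
        return elapsed_ms
    return None                        # fast IO: probe body not run

queuecommand("cmd1", 0)
queuecommand("cmd2", 0)
print(complete("cmd1", 214_000_000))   # -> 214.0 (slow, reported)
print(complete("cmd2", 5_000_000))     # -> None  (fast, filtered out)
```

The DTrace predicate does exactly this filtering in kernel context, so the probe body (and its printf) only runs for the slow completions.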


Linux

Oracle Linux 8: Kernel-based Virtual Machine (KVM) Virtualization made easy with short training videos

Thanks to Craig McBride for this post. Training Tuesday Edition - 8 This week’s blog presents a set of short videos on how to use Kernel-based Virtual Machine (KVM) virtualization on Oracle Linux 8. The KVM code was first announced in 2006 and was merged into the mainline Linux kernel as part of version 2.6.20, in February 2007. Therefore, KVM is part of Linux. KVM is an open-source type-1 (bare-metal) hypervisor that permits a system running Oracle Linux 8 to host multiple virtual machines (VMs) or guests. These VMs use the system's physical computing resources to virtualize an operating system as a regular Linux user-space process. In these videos, we cover installation, management, creation, and other aspects related to using KVM virtualization on Oracle Linux 8. Here’s the list of videos and the time it takes for you to complete each one: Enabling KVM on Oracle Linux 8 (6 Min) Setup Cockpit Web Console to manage KVM on Oracle Linux 8 (5 Min) Using Cockpit to create KVM VMs on Oracle Linux 8 (7 Min) SSH into a NAT based KVM VM on Oracle Linux 8 (3 Min) Create a Network Bridge for KVM on Oracle Linux 8 (8 Min) Using a Network Bridge with KVM VMs on Oracle Linux 8 (7 Min) Switch a KVM VM from NAT to Bridged on Oracle Linux 8 (5 Min) Convert and Deploy a VirtualBox VM to KVM on Oracle Linux 8 (9 Min) Add a disk to an existing KVM VM on Oracle Linux 8 (7 Min) Be sure to come back for our next edition, which will cover virtualization using Oracle VM VirtualBox. New videos are added on an ongoing basis so check back often. 
Watch the previous Training Tuesday episodes: Oracle Linux 8: Installation made easy with free videos Oracle Linux 8: Administration made easy with free videos Oracle Linux 8: Package Management made easy with free videos Oracle Linux 8: Networking made easy with free videos Oracle Linux 8: Oracle Ksplice made easy with short training videos Oracle Linux 8: Remote Management made easy with short training videos Oracle Linux 8: Disk Management made easy with short training videos Resources: Oracle Linux website Oracle Linux 8 training videos


Linux

Need a stable, RHEL-compatible alternative to CentOS? Three reasons to consider Oracle Linux

If you are reading this blog, you are probably a CentOS user and are in the position where you need to look at alternatives going forward. Switching to Oracle Linux is easy, so here are a few reasons why you should consider it. Free to download, use, and distribute — for more than 14 years Since the debut of Oracle Linux release 4 in 2006, it has been completely free to use and easy to download. Major and update releases have been free for more than 14 years. Errata releases have been freely available since 2012. Free source code, free binaries, free updates, free errata, freely redistributable, and free to use in production — all without having to sign any documents with Oracle and without any need to remove trademarks and copyrights. Oracle Linux has an equivalent release for every major Red Hat Enterprise Linux (RHEL) version: 4, 5, 6, 7, and most recently 8. Oracle Linux releases consistently track Red Hat, with errata typically released within 24 hours, update releases usually available within five business days, and major version releases within three months. This makes Oracle Linux an ideal choice for development, testing, and production use. You decide which support coverage, if any, is best for each individual system while keeping all systems up-to-date and secure. Compatible and enhanced Oracle Linux is 100% application binary compatible with Red Hat Enterprise Linux (RHEL). Since the 2006 launch, enterprises have been running Oracle Linux with no compatibility bugs logged. Oracle offers a choice of two kernels: the Unbreakable Enterprise Kernel (UEK) for Oracle Linux or the Red Hat Compatible Kernel (RHCK). Both are supported by Oracle. UEK offers extensive performance and scalability improvements to the process scheduler, memory management, file systems, and the networking stack. We believe the source for the kernel you run should be easy to pick apart.
That’s why we publish the UEK source code, complete with the original mainline changelog and our changelog, on GitHub. Whether running on UEK or RHCK, Oracle Linux remains fully compatible with RHEL. Designed for all workloads Oracle Linux is designed for all workloads and a broad range of systems, including x86 and Arm. Oracle Linux is designed to optimally support both Oracle and third-party applications on your choice of Oracle or third-party hardware, Oracle Cloud Infrastructure, or another public cloud. It’s easy to access, so try it out. FAQ on switching from CentOS to Oracle Linux Download the script to switch from CentOS to Oracle Linux Installation media and updates are freely available from the Oracle Linux yum server. UEK source code on GitHub For more information visit oracle.com/linux.


Linux

Oracle Linux 8: Disk Management made easy with short training videos

Thanks to Craig McBride for this post. Training Tuesday Edition - 7 This week’s blog presents a set of short videos on how to manage disk storage for your Oracle Linux 8 systems. Partitioning disks, creating file systems, and mounting file systems is an essential skill needed to provide storage for users, applications, and data. In these videos, you’ll learn: How to divide, or partition, your storage devices into smaller segments. The standard naming convention used to access disk partitions. The extended, or Ext, file system types and how to create and attach, or mount, a file system to a file hierarchy. The file systems table, or fstab file, and how to configure file systems to be automatically mounted at boot time. How to use swap space to help free your system’s physical memory when it becomes exhausted. Here’s the list of videos and the time it takes for you to complete each one: Disk Partitioning with fdisk on Oracle Linux 8 (8 Min) Using Ext2, Ext3, and Ext4 File Systems on Oracle Linux 8 (8 Min) Mounting File Systems Oracle Linux 8 (9 Min) Using fstab to Mount Devices on Oracle Linux 8 (9 Min) Using Swap Space on Oracle Linux 8 (8 Min) Be sure to come back for our next edition, which will cover Kernel-based Virtual Machine (KVM) Virtualization. New videos are added on an ongoing basis so check back often. Watch the previous Training Tuesday episodes: Oracle Linux 8: Installation made easy with free videos Oracle Linux 8: Administration made easy with free videos Oracle Linux 8: Package Management made easy with free videos Oracle Linux 8: Networking made easy with free videos Oracle Linux 8: Oracle Ksplice made easy with short training videos Oracle Linux 8: Remote Management made easy with short training videos Resources: Oracle Linux website Oracle Linux 8 training videos Oracle Linux 8 product documentation
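The fstab file mentioned above has a simple, fixed layout: one device per line, with six whitespace-separated fields described in fstab(5). As an illustration of that layout, here is a small Python parser over a hypothetical fstab fragment (the device names and UUID are made up for the example):

```python
# Each /etc/fstab line has six whitespace-separated fields, per
# fstab(5): device, mount point, filesystem type, mount options,
# dump flag, and fsck pass number.
FIELDS = ("device", "mountpoint", "fstype", "options", "dump", "passno")

def parse_fstab(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        entries.append(dict(zip(FIELDS, line.split())))
    return entries

# Hypothetical fstab fragment: an ext4 data volume and a swap device.
sample = """
# /etc/fstab: static file system information
UUID=example-uuid /data ext4 defaults 0 2
/dev/mapper/ol-swap none swap defaults 0 0
"""
for e in parse_fstab(sample):
    print(e["device"], "->", e["mountpoint"], f"({e['fstype']})")
```

A dump flag of 0 and a passno of 2 are typical for a non-root data filesystem; swap entries use a mount point of none and a passno of 0, which is the pattern the fstab video walks through.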


Linux

Oracle Linux 8: Remote Management made easy with short training videos

Thanks to Craig McBride for this post. Training Tuesday Edition - 6 This week’s blog presents a set of short videos on how to establish secure connections to remote Oracle Linux 8 systems. With secure connections, all traffic transmitted over the wire is encrypted and protected from password sniffing attacks and other outside monitoring. In these videos, you’ll learn how to use the OpenSSH suite of connectivity tools used for remote login and management. These videos cover OpenSSH features including ProxyJump, tunneling capabilities, key management, authentication methods, and configuration options. You will also learn how to create an SSH tunnel for securely connecting to a VNC Server for remote administration and how to enable and securely use the Gnome 3 Screen Sharing feature. Here’s the list of videos and the time it takes for you to complete each one: Using OpenSSH ProxyJump (7 Min) Using ssh and client-side config with Oracle Linux 8 (7 Min) Using ssh tunnels with Oracle Linux 8 (9 Min) Provision user and ssh key with Ansible on Oracle Linux 8 (7 Min) Install and configure VNC Server on Oracle Linux 8 (6 Min) Using Gnome 3 screen sharing on Oracle Linux 8 (7 Min) Be sure to come back for our next edition, which will cover disk management. New videos are added on an ongoing basis so check back often. Resources: Oracle Linux Oracle Linux 8 training videos Oracle Linux 8 product documentation Oracle Linux 8 enhancing system security Watch the previous Training Tuesday episodes: Oracle Linux 8: Installation made easy with free videos Oracle Linux 8: Administration made easy with free videos Oracle Linux 8: Package Management made easy with free videos Oracle Linux 8: Networking made easy with free videos Oracle Linux 8: Oracle Ksplice made easy with free videos


Announcements

Announcing Oracle Linux Cloud Native Environment Release 1.2

Oracle is pleased to announce the general availability of Oracle Linux Cloud Native Environment Release 1.2. This release includes several enhancements focused on improving the security and compliance of customer environments. Release 1.2 also includes new versions of core components, including Kubernetes, CRI-O, Kata Containers, and Istio. Oracle Linux Cloud Native Environment is an integrated suite of software components for the development and management of cloud-native applications. Based on the Cloud Native Computing Foundation and Open Container Initiative standards, Oracle Linux Cloud Native Environment delivers a simplified framework for installations, updates, upgrades, and configuration of key features for orchestrating microservices. New features and enhancements Highlights of this release include: FIPS 140-2 Compliant Kubernetes when run on Oracle Linux 8, as the Kubernetes Module is now compiled with OpenSSL. FIPS mode must be enabled at the host OS level. SELinux support for both permissive and enforcing modes on all Oracle Linux Cloud Native Environment nodes. Bare metal and virtual Oracle Cloud Infrastructure instances that meet the minimum hardware requirements are now supported for all node types. Oracle Linux 8 can be used as the base operating system for control plane, worker, and operator nodes. Unbreakable Enterprise Kernel (UEK) Release 6 is supported for control plane, worker, and operator nodes on both Oracle Linux 7 and Oracle Linux 8. In addition, UEK R6 support on Oracle Linux 7 has been backported to releases 1.1.7 and 1.0.9. Additional enhancements to enable advanced cluster deployments include: TLS customization to allow the selection of specific TLS cipher suites and both minimum and maximum supported TLS version for both the olcne-agent and Kubernetes. NAT support to allow worker nodes only accessible behind a NAT gateway to be added to a new or existing cluster. 
Install-time configuration of the olcne-agent that runs on all nodes of an Oracle Linux Cloud Native Environment cluster to use a specific username or UID instead of the default. For a full list of individual software component versions included with this release, please review the Release Notes. Installation and upgrade Oracle Linux Cloud Native Environment is installed using packages from the Unbreakable Linux Network or the Oracle Linux yum server as well as container images from the Oracle Container Registry. Existing deployments can be upgraded in place using the olcnectl module update command. For more information on installing Oracle Linux Cloud Native Environment, please see Getting Started. For information on updating or upgrading, please see Updates and Upgrades. Support for Oracle Linux Cloud Native Environment Support for Oracle Linux Cloud Native Environment is included with an Oracle Linux Premier Support subscription. Oracle Linux Premier Support is included with Oracle Cloud Infrastructure subscriptions and Oracle Premier Support for Systems at no additional cost. Additional resources Oracle.com/linux Oracle Linux Cloud Native Environment documentation Oracle Linux Cloud Native Environment training
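The TLS customization described above (pinning minimum and maximum protocol versions and restricting cipher suites) is configured through olcnectl options, not through application code; but the effect of such pinning can be illustrated generically with Python's standard ssl module. This is purely an illustration of the concept, not the Oracle Linux Cloud Native Environment mechanism:

```python
import ssl

# Generic illustration of TLS pinning: restrict a server context to
# TLS 1.2-1.3 and a narrow cipher list. This mirrors in spirit what
# the olcne-agent/Kubernetes TLS customization does, but the actual
# product settings are passed via olcnectl, not this API.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1
ctx.maximum_version = ssl.TLSVersion.TLSv1_3
# Cipher suite selection (applies to TLS 1.2 and below in OpenSSL):
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!MD5")
```

Connections negotiated outside the pinned range or cipher list simply fail the handshake, which is the behavior a hardened cluster wants.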


Ksplice

Oracle Linux 8: Oracle Ksplice made easy with short training videos

Thanks to Craig McBride for this post. Training Tuesday Edition - 5 This week’s blog includes a series of short videos on using Oracle Ksplice with Oracle Linux 8. Oracle Ksplice allows you to install the latest kernel and key user-space security and bug fix updates while the system is running. You don’t need to coordinate with users to schedule system down time or stop running applications. This means that with Ksplice you don’t need to reboot your systems to install kernel or user-space updates. In these videos, you will learn how to register an Oracle Linux 8 system for online mode updates with the Unbreakable Linux Network (ULN) and install the Oracle Ksplice packages. You will also learn the command line interface for the two Oracle Ksplice clients: The Ksplice Enhanced Client supports kernel updates as well as glibc and openssl user-space updates and uses ksplice commands. The Ksplice Uptrack Client only supports updating the kernel and uses uptrack commands. Here are the links to the video series: Enabling Oracle Ksplice (online mode) using Oracle Linux 8 (9 Min) Running ksplice commands (online mode) on Oracle Linux 8 (4 Min) Running uptrack commands (online mode) on Oracle Linux 8 (4 Min) Uninstalling Oracle Ksplice (online mode) using Oracle Linux 8 (6 Min) Be sure to come back for our next edition, which will cover Remote Management. New videos are added on an ongoing basis so check back often. Resources: Oracle Linux Oracle Linux 8 training videos Oracle Linux 8 product documentation Watch the previous Training Tuesday episodes: I: Oracle Linux 8: Installation made easy with free videos II: Oracle Linux 8: Administration made easy with free videos III: Oracle Linux 8: Package Management made easy with free videos IV: Oracle Linux 8: Networking made easy with free videos


Linux

How to setup WireGuard on Oracle Linux

Oracle Linux engineer William Kucharski provides an introduction to the VPN protocol WireGuard. WireGuard has received a lot of attention of late as a new, easier-to-use VPN mechanism, and it has now been added to Unbreakable Enterprise Kernel 6 Update 1 as a technology preview. But what is it, and how do I use it? What is WireGuard? WireGuard is described by its developers as: an extremely simple yet fast and modern VPN that utilizes state-of-the-art cryptography. It aims to be faster, simpler, leaner, and more useful than IPsec, while avoiding the massive headache. (You can read the full statement from the developers here.) Though IPsec works well and is indeed a standard for secure communication, it can be difficult to configure and use for those not familiar with network administration. By comparison, WireGuard is reasonably easy to set up, and "aims to be as easy to configure and deploy as SSH." Linus Torvalds paid it perhaps the ultimate compliment on LKML not too long before the code was merged into the 5.6 kernel: Can I just once again state my love for it and hope it gets merged soon? Maybe the code isn’t perfect, but I’ve skimmed it, and compared to the horrors that are OpenVPN and IPSec, it’s a work of art. If you are curious about the inner workings of WireGuard, you can read the protocol in the original technical whitepaper. If you prefer video, a nice session on WireGuard was given at the 2018 Linux Plumbers Conference in Vancouver, viewable here. How does it work? Quoting its authors: WireGuard associates tunnel IP addresses with public keys and remote endpoints. When the interface sends a packet to a peer, it does the following: This packet is meant for 192.168.30.8. Which peer is that? Let me look... Okay, it's for peer ABCDEFGH. (Or if it's not for any configured peer, drop the packet.) Encrypt entire IP packet using peer ABCDEFGH's public key. What is the remote endpoint of peer ABCDEFGH? Let me look...
Okay, the endpoint is UDP port 53133 on host 216.58.211.110. Send encrypted bytes from step 2 over the Internet to 216.58.211.110:53133 using UDP. When the interface receives a packet, this happens: I just got a packet from UDP port 7361 on host 98.139.183.24. Let's decrypt it! It decrypted and authenticated properly for peer LMNOPQRS. Okay, let's remember that peer LMNOPQRS's most recent Internet endpoint is 98.139.183.24:7361 using UDP. Once decrypted, the plain-text packet is from 192.168.43.89. Is peer LMNOPQRS allowed to be sending us packets as 192.168.43.89? If so, accept the packet on the interface. If not, drop it. Behind the scenes there is much happening to provide proper privacy, authenticity, and perfect forward secrecy, using state-of-the-art cryptography. Let's see a sample configuration! The following assumes you have WireGuard installed on the machines you've decided to use as your client and server, and that the two machines can connect to one another. You can verify WireGuard is installed by using the following commands:

rpm -qa | grep wireguard
modinfo wireguard

The first should show that the package wireguard-tools is installed and the second should show information on the wireguard kernel module. For the sake of simplicity, I will demonstrate a configuration using IPv4 addresses, though the parameters in the setup files will support IPv6 addresses. Assume the current IP addresses for the two systems' eno1 interfaces are:

10.0.0.1 server
10.0.0.2 client

and we want to use WireGuard addresses of:

192.168.2.1 server
192.168.2.2 client

you would follow these steps: Server Configuration, Part One Start by generating a crypto key pair: a public key and a private key. Run the following commands on the machine you've selected as your server as root.
If your system does not allow logins as root, add sudo commands as necessary:

cd /etc/wireguard
umask 077
wg genkey | tee privatekey | wg pubkey > publickey

This will generate the initial private and public crypto keys needed to start the tunnel and store them in the files privatekey and publickey respectively. Next, edit the file /etc/sysctl.conf and make the following changes (note that depending upon your prior configuration, these values may have already been set elsewhere in the file and you may need to edit those lines appropriately):

net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1

Then use the sysctl command to make the system reread the /etc/sysctl.conf configuration file:

sysctl -p

(You can safely ignore any errors that are output about files that sysctl cannot stat.) Client Configuration As you did for the server, you will need to generate a crypto key pair:

cd /etc/wireguard
umask 077
wg genkey | tee privatekey | wg pubkey > publickey

Now edit the file /etc/wireguard/wg0.conf to read:

[Interface]
Address = 192.168.2.2/24
SaveConfig = true
ListenPort = 60477
PrivateKey = <contents of /etc/wireguard/privatekey>

[Peer]
PublicKey = <contents of the server's /etc/wireguard/publickey file>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = 10.0.0.1:51820

Server Configuration Part Two and Bringup Edit the file /etc/wireguard/wg0.conf so that it looks like this:

[Interface]
Address = 192.168.2.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eno1 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eno1 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eno1 -j MASQUERADE
ListenPort = 51820
PrivateKey = <contents of /etc/wireguard/privatekey>

[Peer]
PublicKey = <contents of the client's /etc/wireguard/publickey file>
AllowedIPs = 192.168.2.2/32
Endpoint = 10.0.0.2:60477

That's it! Run the wg-quick command to start the server:

wg-quick up wg0

You will see output something like:

[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip link set mtu 1420 up dev wg0
[#] ip -4 route add 192.168.2.2/32 dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eno1 -j MASQUERADE

At this point, you can confirm the status of the interface by using the wg command, which will generate output like:

# wg
interface: wg0
  public key: <server public key>
  private key: (hidden)
  listening port: 51820

peer: <client public key>
  endpoint: 10.129.135.30:60477
  allowed ips: 192.168.2.2/32

Client Bringup The next step is to activate the secure tunnel that will tunnel all of your client's network traffic, encrypted, through the server. Be sure to be on the console when you perform this operation. As all network traffic will now be routed through the tunnel, if you run these commands while connected via ssh you will lose your connection and will not be able to reconnect except by logging in on the machine's console.
As on the server, use the wg-quick command:

wg-quick up wg0

You will see output that looks like:

[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 192.168.2.2/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] ip -6 route add ::/0 dev wg0 table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] ip6tables-restore -n
[#] ip -4 route add 0.0.0.0/0 dev wg0 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n

Again, you can check on the status of the client using the wg command, which will generate output similar to:

# wg
interface: wg0
  public key: <client public key>
  private key: (hidden)
  listening port: 60477
  fwmark: 0xca6c

peer: <server public key>
  endpoint: 10.0.0.1:51820
  allowed ips: 0.0.0.0/0, ::/0

Use

At this point, the tunnel should be up and functioning, and you should be able to issue network commands from your client machine and have them operate as usual, except that traffic will be going through the WireGuard tunnel to the server. Once you have performed network operations from the client, the wg command will show usage data in addition to the configuration information it showed earlier. For example, a ping might look like this:

# ping oracle.com
PING oracle.com (137.254.16.101) 56(84) bytes of data.
64 bytes from bigip-ocoma-cms-adc.oracle.com (137.254.16.101): icmp_seq=1 ttl=240 time=41.9 ms
64 bytes from bigip-ocoma-cms-adc.oracle.com (137.254.16.101): icmp_seq=2 ttl=240 time=42.0 ms

--- oracle.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 41.900/41.950/42.000/0.050 ms

# wg
interface: wg0
  public key: <client public key>
  private key: (hidden)
  listening port: 60477
  fwmark: 0xca6c

peer: <server public key>
  endpoint: 10.0.0.1:51820
  allowed ips: 0.0.0.0/0, ::/0
  latest handshake: 3 seconds ago
  transfer: 1.56 KiB received, 756 B sent

You can see that the latest handshake was three seconds ago, with 1.56 KiB received from and 756 bytes sent to the tunneled connection.

Client Teardown

To close the tunnel and restore normal network operation, use the wg-quick command:

# wg-quick down wg0
[#] wg showconf wg0
[#] ip -4 rule delete table 51820
[#] ip -4 rule delete table main suppress_prefixlength 0
[#] ip -6 rule delete table 51820
[#] ip -6 rule delete table main suppress_prefixlength 0
[#] ip link delete dev wg0
[#] iptables-restore -n
[#] ip6tables-restore -n

Server Teardown

To shut down the WireGuard server, once again use the wg-quick command:

# wg-quick down wg0
[#] wg showconf wg0
[#] ip link delete dev wg0
[#] iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eno1 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eno1 -j MASQUERADE

Conclusion

As with any network protocol, the connection details are precise, but WireGuard is much easier to configure and use than IPsec. Further, clients are available for other operating systems, allowing you to provide secure communications for an entire organization far more easily than with other methods. This factor alone may make WireGuard a de facto standard for VPN creation in the near future.
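The "is peer LMNOPQRS allowed to be sending us packets as 192.168.43.89?" check described at the start of this article, and configured here through AllowedIPs, is WireGuard's cryptokey routing. The concept can be sketched in a few lines of Python; this is an illustration of the idea using the example peer and addresses from above, not WireGuard's actual implementation:

```python
import ipaddress

# Toy model of a cryptokey routing table: each peer (identified by its
# public key) is associated with the AllowedIPs ranges it may use.
allowed_ips = {
    "LMNOPQRS": [ipaddress.ip_network("192.168.43.0/24")],
}

def accept_packet(peer, src_addr):
    """Accept a decrypted packet only if its inner source address falls
    within one of the sending peer's AllowedIPs ranges."""
    src = ipaddress.ip_address(src_addr)
    return any(src in net for net in allowed_ips.get(peer, []))

print(accept_packet("LMNOPQRS", "192.168.43.89"))  # allowed source: accept
print(accept_packet("LMNOPQRS", "10.1.2.3"))       # not allowed: drop
```

The same table is consulted in the outbound direction to pick which peer a plain-text destination address should be encrypted to, which is why the client config above routes everything (0.0.0.0/0, ::/0) to the server peer.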

Oracle Linux engineer William Kucharski provides an introduction to the VPN protocol WireGuard.

Linux

Oracle Linux 8: Networking made easy with free videos

Thanks to Craig McBride for this post. Training Tuesday Edition - 4

This week's blog presents a set of free, short videos on performing network configuration functions on Oracle Linux 8. Being able to configure networks is an essential skill for accessing programs, storage, and data on remote systems. This video series also covers the firewall configuration required to keep your networks safe and secure from intruders.

Oracle Linux 8 handles network communications through software configuration files and the network interface cards (NICs) in your system. NetworkManager is the default networking service in Oracle Linux 8 and includes a command-line tool, nmcli, to create, display, edit, delete, activate, and deactivate network connections. You can use the ip command to display the status of a network interface, configure network properties, or debug and tune the network.

A firewall serves as the computer equivalent of a physical wall, gate, or fence to keep intruders out and protect what's inside. The default firewall service in Oracle Linux 8 is firewalld. In Oracle Linux 8, nftables replaced iptables, and firewalld interacts with nftables.

Here's the list of videos and the time it takes to complete each one:

Network Configuration Files on Oracle Linux 8 (7 Min)
Using NetworkManager CLI (nmcli) on Oracle Linux 8 (7 Min)
Using the ip command on Oracle Linux 8 (9 Min)
Introduction to using firewalld on Oracle Linux 8 (8 Min)
Using nftables on Oracle Linux 8 (7 Min)

Be sure to come back for our next edition, which will cover Oracle Ksplice. New videos are added on an ongoing basis, so check back often.

Resources:
Oracle Linux
Oracle Linux 8 training videos
Oracle Linux 8 product documentation

Watch the previous Training Tuesday episodes:
I: Oracle Linux 8: Installation made easy with free videos
II: Oracle Linux 8: Administration made easy with free videos
III: Oracle Linux 8: Package Management made easy with free videos


Linux

RackWare: A solution for moving workloads to Oracle Linux KVM

RackWare has certified its RackWare Management Module (RMM) hybrid cloud management solution for Oracle Linux KVM on both Oracle Linux 7 and 8. RMM is also available on Oracle Cloud Infrastructure. RMM’s high level of automation uniquely differentiates it and helps customers reduce labor costs related to the deployment and management of IT applications. Customers are looking for an enterprise KVM solution as an alternative to an expensive proprietary virtualization deployment. They are also looking for an easier migration path to the cloud. One solution is RackWare's RMM. RMM is a hypervisor-agnostic file system-based replication technology that can help migrate workloads from other hypervisors to Oracle Linux KVM. RackWare supports a range of use cases for workloads running on bare metal servers, virtual machines, or containers. Customers can move Windows, Linux, or Kubernetes/Container deployments from one data center or cloud to another data center or cloud. For more information on RackWare solutions for deployment with Oracle Linux KVM on premises and in Oracle Cloud Infrastructure, visit: RackWare certifications on Oracle Linux RackWare and Oracle RackWare Enables Container Migration to Oracle Cloud Container Engine for Kubernetes Migration and Disaster Recovery in the Oracle Cloud with RackWare Oracle.com/linux Oracle.com/virtualization  


Announcements

Announcing the release of Oracle Linux 8 Update 3

Oracle is pleased to announce the availability of Oracle Linux 8 Update 3 for the 64-bit Intel and AMD (x86_64) and 64-bit Arm (aarch64) platforms. Oracle Linux brings the latest open source innovations and business-critical performance and security optimizations for cloud and on-premises deployment.

Oracle Linux maintains user space compatibility with Red Hat Enterprise Linux (RHEL), which is independent of the kernel version that underlies the operating system. Existing applications in user space will continue to run unmodified on Oracle Linux 8 Update 3 with the Unbreakable Enterprise Kernel Release 6 (UEK R6), and no re-certifications are needed for applications already certified with Red Hat Enterprise Linux 8 or Oracle Linux 8.

Oracle Linux 8 Update 3 includes UEK R6 on the installation image, along with the Red Hat Compatible Kernel (RHCK). For new installations, UEK R6 is enabled and installed by default and is the default kernel on first boot. UEK R6, the kernel developed, built, and tested by Oracle and based on the mainline Linux kernel 5.4, delivers more innovation than other commercial Linux kernels.
The Oracle Linux 8 Update 3 release includes:

Installer
- Improved support for NVDIMM devices
- Improved support for IPv6 static configurations
- The installation program uses LUKS2 by default for encrypted containers
- The graphical installation program includes the "root password" and "user creation settings" in the Installation Summary screen

Red Hat Compatible Kernel (RHCK)
- The lshw command provides additional CPU information
- /dev/random and /dev/urandom conditionally powered by the Kernel Crypto API DRBG
- Extended Berkeley Packet Filter added for kernel virtual machines (KVM); libbpf support now available
- Mellanox ConnectX-6 Dx network adapter included and driver automatically loaded
- TSX disabled by default on Intel CPUs that support disabling it

Dynamic Programming Languages
- Ruby 2.7.1 module stream
- nodejs:14 module stream
- python38:3.8 module stream
- php:7.4 module stream
- nginx:1.18 module stream
- perl:5.30 module stream
- squid:4 module stream updated to version 4.11
- httpd:2.4 module stream enhanced
- git packages updated to version 2.27

Infrastructure Services
- BIND, updated to version 9.11, provides increased reliability on systems that have multiple CPU cores, more detailed error detection, and improvements to other tools
- PowerTOP updated to version 2.12
- Tuned updated to version 2.14.0
- tcpdump, updated to version 4.9.3, includes bug and security fixes
- iperf3 includes enhancement, bug, and security fixes, and introduces support for SSL

Security
- GnuTLS updated to version 3.6.14
- Libreswan updated to version 3.32
- libseccomp library updated to version 2.4.3
- libkcapi updated to version 1.2.0
- libssh library updated to version 0.9.4
- setools updated to version 4.3.0
- stunnel updated to version 5.56

SCAP and OpenSCAP improvements
- OpenSCAP updated to version 1.3.3
- The SCAP Workbench tool can generate results-based remediation from tailored profiles
- scap-security-guide packages updated to version 0.1.50

SELinux improvements
- fapolicyd packages updated to version 1.0
- Individual CephFS files and directories can include SELinux labels

Additional updates
- NVMe/TCP available as a Technology Preview on RHCK and fully supported on UEK R6
- GCC Toolset release 10 available as an Application Stream
- Pacemaker updated to release 2.0.4
- Virtualization improvements focused on the KVM hypervisor

Oracle Linux 8 Update 3 includes the following kernel packages:

- kernel-uek-5.4.17-2011.7.4 for the x86_64 and aarch64 platforms: the Unbreakable Enterprise Kernel Release 6, which is the default kernel.
- kernel-4.18.0-221 for the x86_64 platform: the latest Red Hat Compatible Kernel (RHCK).

For more details about these and other new features and changes, please consult the Oracle Linux 8 Update 3 Release Notes and Oracle Linux 8 Documentation.

Oracle Linux downloads

Individual RPM packages are available on the Unbreakable Linux Network (ULN) and the Oracle Linux yum server. ISO installation images are available for download from the Oracle Linux yum server, and container images are available via Oracle Container Registry, GitHub Container Registry, and Docker Hub.

Oracle Linux can be downloaded, used, and distributed free of charge, and all updates and errata are freely available. Customers decide which of their systems require a support subscription. This makes Oracle Linux an ideal choice for development, testing, and production systems. The customer decides which support coverage is best for each individual system while keeping all systems up to date and secure. For more information about Oracle Linux, please visit www.oracle.com/linux.
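Because the installation image ships both kernels, a quick way to tell which one a system booted is to look at the release string from uname -r: UEK builds carry a "uek" tag in the release field, while RHCK builds do not. Here is a small illustrative Python sketch; the sample strings are hypothetical full release names based on the package versions listed above (the exact suffixes on a given system may differ):

```python
def classify_kernel(release):
    """Rough classification of an Oracle Linux 8 kernel release string:
    UEK builds embed 'uek' in the release field; RHCK builds do not."""
    return "UEK" if "uek" in release else "RHCK"

# Hypothetical release strings for the two kernel packages named above.
print(classify_kernel("5.4.17-2011.7.4.el8uek.x86_64"))  # -> UEK
print(classify_kernel("4.18.0-221.el8.x86_64"))          # -> RHCK
```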


Events

WEBINAR: 5 ways to simplify your digital transformation - An analyst view

Sriram Subramanian, Research Director, IDC

Register today:
NA/LAT: December 10th, 10 am PST | 1 pm EST
EMEA: December 15th, 10:00 am GMT | 11:00 am CET
JAPAC: December 15th, 11:00 am GST

Without the right foundation, many digital transformation (DX) projects keep growing in complexity and end up failing. What criteria are you using to ensure your technology investments actually help achieve your business goals? Join our panel discussion to learn from an IDC analyst about their research and recommendations for implementing successful DX projects.

Topics covered:
Trends in Enterprise Business Applications
Acceleration through hybrid cloud infrastructure
A solid foundation for digital transformation
5 recommendations on how to simplify your DX projects

Featured speakers:
Sriram Subramanian, Research Director, IDC Infrastructure Systems, Platforms and Technologies Group
Robert Shimp, Oracle Group VP, Infrastructure Software and Product Strategy
Karen Sigman, Oracle VP, Linux and Virtualization Marketing

Register today for this insightful webinar.


Announcements

Announcing the Unbreakable Enterprise Kernel Release 6 Update 1 for Oracle Linux

The Unbreakable Enterprise Kernel (UEK) for Oracle Linux provides the latest open source innovations, key optimizations, and security to cloud and on-premises workloads. It is the Linux kernel that powers Oracle Cloud and Oracle Engineered Systems such as Oracle Exadata Database Machine, as well as Oracle Linux on 64-bit Intel and AMD or 64-bit Arm platforms.

UEK Release 6 maintains compatibility with the Red Hat Compatible Kernel (RHCK) and does not disable any features that are enabled in RHCK. Additional features are enabled to provide support for key functional requirements, and patches are applied to improve performance and optimize the kernel.

What's New?

The Unbreakable Enterprise Kernel Release 6 Update 1 (UEK R6U1) for Oracle Linux is based on the mainline kernel version 5.4. Through actively monitoring upstream check-ins and collaborating with partners and customers, Oracle continues to improve and apply critical bug and security fixes to UEK R6. This update includes several new features, added functionality, and bug fixes across a range of subsystems. UEK R6U1 can be recognized by a release number starting with 5.4.17-2036.

Notable changes:

- Padata replaces ktask. Padata is a mechanism by which the kernel can farm jobs out to be done in parallel on multiple CPUs while optionally retaining their ordering. Oracle initially contributed padata to the mainline kernel, continues to provide ongoing development of the padata implementation upstream, and helped advance padata as the framework for parallelizing CPU-intensive work in the kernel, replacing ktask.
- Improvements, bug fixes, and security fixes for the Btrfs, CIFS, ext4, NFS, OCFS2, and XFS filesystems.

Drivers

- The AMD-TEE drivers, amdtee and tee, are new additions in this release and include mainline kernel updates for the AMD Milan CPU family.
- The Atheros 802.11n HTC drivers are updated with security fixes, including CVE-2019-19073.
- The Broadcom BCM573xx network driver, bnxt_en, is available at version 1.10.1 and includes vendor-supplied patches and updates.
- The Intel Ethernet Connection E800 Series Linux driver, ice, is updated to version 0.8.2-k to enable support for newer Intel 800-Series Ethernet controllers and PCIe cards.
- The Broadcom Emulex LightPulse Fibre Channel SCSI driver, lpfc, is updated to version 12.8.0.3 with vendor-supplied patches and bug fixes.
- The Broadcom MegaRAID SAS driver, megaraid_sas, is updated to version 07.714.04.00-rc1.
- The LSI MPT Fusion SAS 3.0 device driver, mpt3sas, is updated to version 34.100.00.00 to include vendor-supplied patches.
- The QLogic Fibre Channel HBA driver, qla2xxx, is updated to version 10.01.00.25-k and includes a large number of vendor-supplied patches.
- The Realtek RTL8152/RTL8153-based USB Ethernet adapter driver, r8152, is updated to version 1.10.11 with upstream kernel patches.
- The Intel VMD (Volume Management Device) driver, vmd, version 0.6, is added to this kernel release and enables serviceability of NVMe devices.

Tech-Preview features

- Core scheduling is a feature enabled in the kernel to limit trusted tasks to running concurrently on CPU cores that share compute resources, helping mitigate certain categories of 'core shared cache' processor bugs that could cause data leakage and other related vulnerabilities.
- WireGuard is a faster and more secure replacement for IPsec and OpenVPN. New networks are being built with modern cryptography from WireGuard rather than legacy technologies like IPsec and OpenVPN. WireGuard is enabled as a technical preview in UEK R6U1 and introduces the wireguard kernel module at version 1.0.20200712.
- NFS v4.2 Server Side Copy functionality is back-ported from the upstream kernel and provides mechanisms that allow an NFS client to copy file data on a server, or between two servers, without the data being transmitted back and forth over the network through the NFS client.
For details on these and other new features and changes, please consult the Release Notes for the UEK R6 Update 1. Security (CVE) Fixes A full list of CVEs fixed in this release can be found in the Release Notes for the UEK R6U1. Software Download Oracle Linux can be downloaded, used, and distributed free of charge and all updates and errata are freely available. This allows organizations to decide which systems require a support subscription and makes Oracle Linux an ideal choice for development, testing, and production systems. The user decides which support coverage is the best for each system individually, while keeping all systems up-to-date and secure. Customers with Oracle Linux Premier Support also receive access to zero-downtime kernel updates using Oracle Ksplice. Compatibility UEK R6 Update 1 is fully compatible with the UEK R6 GA release. The kernel ABI for UEK R6 remains unchanged in all subsequent updates to the initial release. About Oracle Linux Oracle Linux is an open and complete operating environment that helps accelerate digital transformation. It delivers leading performance and security for hybrid and multicloud deployments. Oracle Linux is 100% application binary compatible with Red Hat Enterprise Linux. And, with an Oracle Linux Support subscription, customers have access to award-winning Oracle support resources and Linux support specialists, zero-downtime patching with Ksplice, cloud native tools such as Kubernetes and Kata Containers, KVM virtualization and oVirt-based virtualization manager, DTrace, clustering tools, Spacewalk, Oracle Enterprise Manager, and lifetime support. All this and more is included in a single cost-effective support offering. Unlike many other commercial Linux distributions, Oracle Linux is easy to download and completely free to use, distribute, and update.


Linux

QEMU Live Update

In this blog, Oracle Linux kernel engineers Steve Sistare and Mark Kanda present QEMU Live Update.

The ability to update software with critical bug fixes and security mitigations while minimizing downtime is extremely important to customers and cloud service providers. In this blog post, we present QEMU Live Update, a new method for updating a running QEMU instance to a new version while minimizing the impact to the VM guest. The guest pauses briefly, for less than 100 milliseconds in our prototype, without loss of internal state or external connections.

Live Update uses resources more efficiently than Live Migration. The latter ties up the source and target hosts and consumes more memory and network bandwidth, and does so for an indeterminate period of time that depends on when the copy phase converges. Live Migration is prohibitively expensive if large local storage must be copied across the network to the target.

Implementation

Live Update preserves the guest state across an exec of a new QEMU binary. It does so by leveraging QEMU's live migration vmstate framework. We enhance QEMU's existing functionality for saving and restoring VM state to allow a guest to be quickly suspended and resumed. The guest RAM is preserved across the exec and mapped at the same virtual address via a proposed madvise option called MADV_DOEXEC. This option preserves the physical pages and virtual mappings of a memory range and works for MAP_ANON memory. Briefly, madvise sets a flag in each vma struct covering the range, and exec copies flagged vma's from the old mm struct to the new mm, much like fork. See the patch for details.

The live update sequence consists of updating the QEMU binary, pausing the guest, saving the VM state, exec'ing the new QEMU binary, restoring the VM state, and resuming the guest. This implementation requires changes to QEMU and the Linux memory management framework, but no changes are required in system libraries or the KVM kernel module.
Two new QEMU QMP/HMP commands are utilized: cprsave and cprload.

cprsave

cprsave pauses the guest to prevent further modifications to guest RAM and block devices, and saves the VM state to a file. Unlike the existing savevm command, cprsave supports any type of guest image and block device. cprsave has two modes of operation: restart, for updating QEMU only, and reboot, for updating and rebooting the host kernel. Reboot is discussed later.

With cprsave restart, the address and length of the RAM blocks are saved as environment variables, and the RAM is tagged with the MADV_DOEXEC option to preserve it across the exec. Finally, the new QEMU binary is exec'd with the original command-line arguments. After exec, QEMU reads the environment variables to find the RAM blocks, rather than allocating memory as it normally would.

cprload

cprload recreates the VM using the file produced by cprsave. Guest block devices are used as-is, so the contents of such devices must not be modified between the cprsave and cprload operations. If the VM was running when cprsave was executed, VM execution will be resumed.

External Connections

External connections, such as the guest console, QMP connections, and vhost devices, are preserved across the update. Upon cprsave, the associated file descriptors' close-on-exec flags are cleared and the descriptors are saved as environment variables. Upon restart, QEMU finds the file descriptor environment variables, reuses them by associating them with the corresponding devices, and skips the related configuration steps.

VFIO

VFIO PCI devices are preserved in a similar manner. At creation time, the QEMU VFIO file descriptors (container, group, device, eventfd) are saved as environment variables. Upon cprsave, the vmstate MSI message area is saved, and all preserved file descriptors' close-on-exec flags are cleared.
Upon restart, QEMU finds the file descriptor environment variables, reuses them, and skips the related configuration steps for the preserved areas (such as device and IOMMU state). Finally, upon cprload, the MSI data is loaded from the file, the preserved irq eventfd's are attached to the new KVM instance, and the guest is resumed. The hardware device itself is not quiesced during the restart, and pending DMA requests continue to execute, reading from and writing to guest memory. This is safe because MADV_DOEXEC preserves the guest memory in place.

Example

The following is an example of updating QEMU from v4.2.0 to v4.2.1 on Oracle Linux 7 using the HMP version of cprsave restart. A QEMU software update is performed while the guest is running to minimize downtime.

Host Kernel Update and Reboot

Many critical fixes can be applied by updating only QEMU, or by ksplice'ing the host kernel and its kvm module. However, if you need to completely update the host kernel, we provide a method for doing so, using cprsave with the reboot mode argument. In this mode, cprsave saves state to a file and exits. You then kexec boot a new kernel and issue cprload. The guest RAM must be backed by a persistent shared memory file, such as device DAX or a /dev/shm file that is preserved across kexec via Anthony Yznaga's proposed PKRAM kernel patches.

VFIO devices can be preserved if the guest provides an agent that implements suspend to RAM, such as qemu-ga. To update, you first issue guest-suspend-ram to the agent, and the guest drivers' suspend methods flush outstanding requests and re-initialize to a reset state -- the same state reached after the host reboots. Thus, when the guest resumes, the guest and host agree on the state. Connections from the guest kernel to the outside world survive the reboot. The guest pause time is longer than for restart mode and depends heavily on the boot time of the kernel and the prerequisite userland services.
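The descriptor-preservation technique described above -- clear a descriptor's close-on-exec flag, record its number in an environment variable, and look it up again in the new process after exec -- can be demonstrated outside QEMU with a short Python sketch. This illustrates only the mechanism; it is not QEMU's code, and the PRESERVED_FD variable name is invented for the example:

```python
import os
import subprocess
import sys

# Parent: create a pipe and hand the write end across an exec boundary.
r, w = os.pipe()
os.set_inheritable(w, True)  # clear close-on-exec so the fd survives exec
env = dict(os.environ, PRESERVED_FD=str(w))  # stash the fd number, QEMU-style

# "New binary": a fresh interpreter finds the fd via the environment
# variable and reuses it instead of re-creating the connection.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import os; os.write(int(os.environ['PRESERVED_FD']), b'state intact')"],
    env=env, close_fds=False)  # keep inheritable fds open across the exec
child.wait()

os.close(w)
print(os.read(r, 64).decode())  # -> state intact
```

The pipe plays the role of a guest console or vhost connection: the kernel object behind the descriptor is never torn down, so the peer on the other end never notices the "update."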
Example

The following is an example of updating the host kernel on Oracle Linux 7 using the HMP version of cprsave reboot.

For more information

For more details, see the slides from our recent KVM Forum presentation. Soon, recordings of the 2020 sessions will be available on the KVM Forum YouTube channel. We are busily working to bring this functionality to the Linux community. We submitted the first version of the QEMU patches to the qemu-devel email list, and we are working on version 2. Anthony submitted the Linux patches for the madvise option and PKRAM to the Linux kernel email list. Stay tuned for updates.


Linux

Oracle Linux 8: Package Management made easy with free videos

Blog created by Craig McBride. Training Tuesday Edition - 3

Welcome back to Training Tuesdays. In this week's edition, we are talking about performing software package management on Oracle Linux 8. Software package management is an essential skill needed to keep your Oracle Linux 8 system up to date with the latest software enhancements, bug fixes, and security patches.

Oracle Linux 8 includes DNF utilities to perform package management. DNF replaces YUM, which was used in previous versions of Oracle Linux. In this 3-part video series, we cover how to use DNF, how to install the latest version of the Unbreakable Enterprise Kernel (UEK) for Oracle Linux, and how to install the Extra Packages for Enterprise Linux (EPEL) software repository.

DNF on Oracle Linux 8 (16 Min)
Installing the Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (4 Min)
Installing the EPEL repository on Oracle Linux 8 (2 Min)

Be sure to come back for our next Training Tuesday edition, which will cover networking classes. New training is being added regularly, so bookmark this blog and check back often.

Resources:
Oracle Linux
Oracle Linux 8 training videos
Oracle Linux 8 product documentation

Watch the previous Training Tuesday episodes:
I: Oracle Linux 8: Installation made easy with free videos
II: Oracle Linux 8: Administration made easy with free videos


Linux

Multithreaded Struct Page Initialization

Oracle Linux kernel developer Daniel Jordan contributes this post on the initial support for multithreaded jobs in padata.

The last padata blog described unbinding padata jobs from specific CPUs. This post will cover padata's initial support for multithreading CPU-intensive kernel paths, which takes us to the memory management system.

The Bottleneck

During boot, the kernel needs to initialize all its page structures so they can be freed to the buddy allocator and put to good use. This became expensive as memory sizes grew into the terabytes, so in 2015 Linux got a new feature called deferred struct page initialization that brought the time down on NUMA machines. Instead of a single thread doing all the work, that thread only initialized a small subset of the pages early on, and then per-node threads did the rest later. This helped significantly on systems with many nodes, saving hundreds of seconds on a 24 TB server.

However, it left some performance on the table for machines with many cores but not enough nodes to take full advantage of deferred init as it was initially implemented. One of the machines I tested had 2 nodes and 768 GB of memory, and its pages took 1.7 seconds to be initialized, by far the largest component of the 4 seconds it took to boot the kernel. That may seem like a small amount of time in absolute terms, but it matters in a few different cases, as explained in this changelog: optimizing deferred init maximizes availability for large-memory systems and allows spinning up short-lived VMs as needed without having to leave them running. It also benefits bare metal machines hosting VMs that are sensitive to downtime. In projects such as VMM Fast Restart, where guest state is preserved across kexec reboot, it helps prevent application and network timeouts in the guests.

So there was a need to use more than one thread per node to take full advantage of system memory bandwidth on machines where memory was concentrated over relatively few nodes.
The Timing

Deferred init turned out to be a good place to start upstreaming support for multithreaded kernel jobs because of how early it happens. This is before userspace is ready, when there is no other significant activity in the system because it is waiting for page initialization to finish. That allowed delaying many of the prerequisites that the community has deemed necessary for starting these jobs from userspace.

These prereqs have come up a few times in the past. They blocked attempts at adding similar functionality for page migration and page zapping, and the community raised them again in the initial versions of this work. The concerns involve both the extent to which the extra threads, known as helpers, respect the resource controls of the main thread, which initiates the job, and whether these extra threads will unduly interfere with other activity on the system.

In the first case, the resource that matters for page init threads is CPU consumption, which can be restricted with cgroup's CPU subsystem. The CPU controller, however, only becomes active after boot is finished, so respecting it during page init is not necessary. And in the second case, there is no concern about interfering with other tasks on the system because the page init threads run when the rest of the system is largely idle and waiting for page init to finish.

For now, because all the multithreading functions are only used during boot, they are all currently marked with __init so that the kernel can both free the text after boot and enforce that no callers can use them afterward until the proper restrictions are in place.

The Implementation

For this first step in adding multithreading support, the implementation is thankfully fairly simple. padata, an existing framework that assigns single threads to many small jobs, grew to support assigning many threads to single large jobs.
To multithread such a job, the user defines a struct padata_mt_job:

    struct padata_mt_job {
            void (*thread_fn)(unsigned long start, unsigned long end,
                              void *arg);
            void *fn_arg;
            unsigned long start;
            unsigned long size;
            unsigned long align;
            unsigned long min_chunk;
            int max_threads;
    };

The job description contains basic information including a pointer to the thread function, an argument to that function containing any required shared data, and the start and size of the job. start and size are in job-specific units. For deferred init, the unit is page frame numbers (PFNs) to be initialized. A user may pass an alignment, which is useful in the page init case for avoiding cacheline bouncing of page section data between threads.

The remaining two fields require a bit more explanation. The first, min_chunk, describes the minimum amount of work that is appropriate for one helper thread to do in one call to the thread function. Like start and size, it is in job-specific units. min_chunk is a hint to keep an inordinate number of threads from being started for too little work, which could hurt performance. During page init, a job is started for each of the deferred PFN ranges, and some of those ranges may be small enough to warrant starting fewer threads than the other job parameters would otherwise allow.

The second, max_threads, is simply a cap on the number of threads that can be started for that job. It was not obvious at the beginning of the project what number would work best for all systems, and there was some discussion upstream of setting it to the number of cores on the node, which had performed better than using all SMT CPUs on the node in workloads similar to page init. However, performance testing across several recent CPU types found, surprisingly, that more threads always produced greater speedups, albeit with diminishing returns. Since the system is otherwise idle during page init, though, it made sense to take full advantage of the CPUs.
With the job defined, the page init code starts it with padata_do_multithreaded. padata internally decides how many threads to start, taking care to assign work in amounts small enough to load balance between helpers, so they finish at roughly the same time, but large enough to minimize management overhead. The function waits for the job to complete before returning.

Multithreaded page init is only available on kernels configured with DEFERRED_STRUCT_PAGE_INIT, and since performance testing has only been done on x86 systems, that is the only architecture where the feature is currently available. Other architectures are free to override deferred_page_init_max_threads with the per-node thread counts right for them.

The Results

Here are the numbers from all the systems tested. This is the data that led to using all SMT CPUs on a node.

    Intel(R) Xeon(R) Platinum 8167M CPU @ 2.00GHz (Skylake, bare metal)
      2 nodes * 26 cores * 2 threads = 104 CPUs
      384G/node = 768G memory

                   kernel boot                deferred init
                   ------------------------   ------------------------
      node% (thr)  speedup  time_ms (stdev)   speedup  time_ms (stdev)
            (  0)       --  4089.7 (  8.1)         --  1785.7 (  7.6)
        2% (  1)     1.7%   4019.3 (  1.5)       3.8%  1717.7 ( 11.8)
       12% (  6)    34.9%   2662.7 (  2.9)      79.9%   359.3 (  0.6)
       25% ( 13)    39.9%   2459.0 (  3.6)      91.2%   157.0 (  0.0)
       37% ( 19)    39.2%   2485.0 ( 29.7)      90.4%   172.0 ( 28.6)
       50% ( 26)    39.3%   2482.7 ( 25.7)      90.3%   173.7 ( 30.0)
       75% ( 39)    39.0%   2495.7 (  5.5)      89.4%   190.0 (  1.0)
      100% ( 52)    40.2%   2443.7 (  3.8)      92.3%   138.0 (  1.0)

    Intel(R) Xeon(R) CPU E5-2699C v4 @ 2.20GHz (Broadwell, kvm guest)
      1 node * 16 cores * 2 threads = 32 CPUs
      192G/node = 192G memory

                   kernel boot                deferred init
                   ------------------------   ------------------------
      node% (thr)  speedup  time_ms (stdev)   speedup  time_ms (stdev)
            (  0)       --  1988.7 (  9.6)         --  1096.0 ( 11.5)
        3% (  1)     1.1%   1967.0 ( 17.6)       0.3%  1092.7 ( 11.0)
       12% (  4)    41.1%   1170.3 ( 14.2)      73.8%   287.0 (  3.6)
       25% (  8)    47.1%   1052.7 ( 21.9)      83.9%   177.0 ( 13.5)
       38% ( 12)    48.9%   1016.3 ( 12.1)      86.8%   144.7 (  1.5)
       50% ( 16)    48.9%   1015.7 (  8.1)      87.8%   134.0 (  4.4)
       75% ( 24)    49.1%   1012.3 (  3.1)      88.1%   130.3 (  2.3)
      100% ( 32)    49.5%   1004.0 (  5.3)      88.5%   125.7 (  2.1)

    Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz (Haswell, bare metal)
      2 nodes * 18 cores * 2 threads = 72 CPUs
      128G/node = 256G memory

                   kernel boot                deferred init
                   ------------------------   ------------------------
      node% (thr)  speedup  time_ms (stdev)   speedup  time_ms (stdev)
            (  0)       --  1680.0 (  4.6)         --   627.0 (  4.0)
        3% (  1)     0.3%   1675.7 (  4.5)      -0.2%   628.0 (  3.6)
       11% (  4)    25.6%   1250.7 (  2.1)      67.9%   201.0 (  0.0)
       25% (  9)    30.7%   1164.0 ( 17.3)      81.8%   114.3 ( 17.7)
       36% ( 13)    31.4%   1152.7 ( 10.8)      84.0%   100.3 ( 17.9)
       50% ( 18)    31.5%   1150.7 (  9.3)      83.9%   101.0 ( 14.1)
       75% ( 27)    31.7%   1148.0 (  5.6)      84.5%    97.3 (  6.4)
      100% ( 36)    32.0%   1142.3 (  4.0)      85.6%    90.0 (  1.0)

    AMD EPYC 7551 32-Core Processor (Zen, kvm guest)
      1 node * 8 cores * 2 threads = 16 CPUs
      64G/node = 64G memory

                   kernel boot                deferred init
                   ------------------------   ------------------------
      node% (thr)  speedup  time_ms (stdev)   speedup  time_ms (stdev)
            (  0)       --  1029.3 ( 25.1)         --   240.7 (  1.5)
        6% (  1)    -0.6%   1036.0 (  7.8)      -2.2%   246.0 (  0.0)
       12% (  2)    11.8%    907.7 (  8.6)      44.7%   133.0 (  1.0)
       25% (  4)    13.9%    886.0 ( 10.6)      62.6%    90.0 (  6.0)
       38% (  6)    17.8%    845.7 ( 14.2)      69.1%    74.3 (  3.8)
       50% (  8)    16.8%    856.0 ( 22.1)      72.9%    65.3 (  5.7)
       75% ( 12)    15.4%    871.0 ( 29.2)      79.8%    48.7 (  7.4)
      100% ( 16)    21.0%    813.7 ( 21.0)      80.5%    47.0 (  5.2)

Server-oriented distros that enable deferred page init sometimes run in small VMs, and they still benefit even though the fraction of boot time saved is smaller:

    AMD EPYC 7551 32-Core Processor (Zen, kvm guest)
      1 node * 2 cores * 2 threads = 4 CPUs
      16G/node = 16G memory

                   kernel boot                deferred init
                   ------------------------   ------------------------
      node% (thr)  speedup  time_ms (stdev)   speedup  time_ms (stdev)
            (  0)       --   716.0 ( 14.0)         --    49.7 (  0.6)
       25% (  1)     1.8%    703.0 (  5.3)      -4.0%    51.7 (  0.6)
       50% (  2)     1.6%    704.7 (  1.2)      43.0%    28.3 (  0.6)
       75% (  3)     2.7%    696.7 ( 13.1)      49.7%    25.0 (  0.0)
      100% (  4)     4.1%    687.0 ( 10.4)      55.7%    22.0 (  0.0)

    Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz (Haswell, kvm guest)
      1 node * 2 cores * 2 threads = 4 CPUs
      14G/node = 14G memory

                   kernel boot                deferred init
                   ------------------------   ------------------------
      node% (thr)  speedup  time_ms (stdev)   speedup  time_ms (stdev)
            (  0)       --   787.7 (  6.4)         --   122.3 (  0.6)
       25% (  1)     0.2%    786.3 ( 10.8)      -2.5%   125.3 (  2.1)
       50% (  2)     5.9%    741.0 ( 13.9)      37.6%    76.3 ( 19.7)
       75% (  3)     8.3%    722.0 ( 19.0)      49.9%    61.3 (  3.2)
      100% (  4)     9.3%    714.7 (  9.5)      56.4%    53.3 (  1.5)

Future Work

This post described page init, the first user of padata's support for multithreaded jobs. All future users will need to be aware of various resource controls, such as cgroup's CPU and cpuset controllers, sched_setaffinity, and NUMA memory policy. There are also some draft patches written that will be part of the next phase, such as ones to run helpers at the highest nice level on the system to avoid disturbing other tasks. The plan for the immediate future is to get the CPU controller ready to throttle kernel threads.


Linux

Share your experience! Review Oracle Linux on TrustRadius

Today, Oracle is announcing a partnership with TrustRadius to gather feedback from real-life Oracle Linux users. TrustRadius is one of the most trusted review sites for business technology. Optimized for content quality and data integrity, they help buyers make better product decisions based on unbiased and insightful reviews.

Customers choose Oracle Linux to improve security, reduce downtime, simplify operations, and save operating costs by switching from other operating environments. Each TrustRadius review is vetted for quality, depth, and detail by their research team. They do not sell your information to third parties, so you can submit a review with confidence. Check out what your peers are saying:

Holman Cárdenas, M.Eng, TOGAF®, ITIL®, Information Technology Architect, Ministry of Justice / Ministère de la Justice – Québec Government: "It comes pre-tuned and optimized for Oracle databases out of the box. Very secure and stable. Easier to install and configure compared to other Linux versions. The experience we had with the support was very positive (although we almost never had to call for support, since the system is pretty stable)." Read the full review

Jose de la Cruz Malena, CIO (Chief Information Officer), Cooperativa Vega Real, Financial Services: "The positive impact of implementing Oracle Linux is that it has reduced our costs of licences by 60%. With one Linux server, we consolidated 3 Windows servers, giving us more memory available, better use of processors, and less storage needed for the same tasks." Read the full review

Please take a few minutes to share your experience with your peers. And don't forget to look at what your peers are saying about the technologies you are interested in.


Linux

Oracle Linux 8: Administration made easy with free videos

Blog created by Craig McBride. Training Tuesday Edition - 2

Now that you've had a chance to learn about Oracle Linux 8 installation – you did check out the prior blog, right? – you'll want to continue learning Oracle Linux 8 by delving into the next set of free, short videos on common administration tasks you can perform on Oracle Linux 8. These videos are applicable for deployment via on-premises systems or Oracle Cloud Infrastructure instances. You can learn step-by-step how to:

- configure the system date and time
- automate tasks
- dynamically load and unload kernel modules
- configure users and groups
- configure networking

You can also explore the proc and sysfs file systems to view and configure system hardware and system processes. Here's the list of videos and the time it takes to complete each one:

- System Configuration Date and Time (7 min)
- System Configuration Proc File System (6 min)
- System Configuration Sysfs File System (7 min)
- Oracle Linux 8 Automating Tasks Cron Utility (7 min)
- Oracle Linux 8 Automating Tasks Anacron, At, and Batch Utilities (6 min)
- Oracle Linux 8 Kernel Module Configuration (7 min)
- Oracle Linux 8 Users and Groups (12 min)
- Network Configuration Files on Oracle Linux 8 (7 min)
- Using NetworkManager CLI (nmcli) on Oracle Linux 8 (7 min)
- Using the ip command on Oracle Linux 8 (9 min)

Be sure to come back for our next edition, which will cover package management classes. New videos are added on an ongoing basis, so check back often.

Resources:
- Oracle Linux
- Oracle Linux 8 training videos
- Oracle Linux 8 product documentation
- Training Tuesday Edition - 1 - Oracle Linux 8 installation made easy with free videos


Oracle Linux 8 - Installation made easy with free videos

Training Tuesday Edition - 1. Blog written by Craig McBride

With "work from home" mandates and fewer opportunities to attend in-person classes, you might be looking for training opportunities you can start on today. We all need some help getting started on developing our skills. To make it easy for you, we've put together a series of blogs where you'll find free, short videos that you can take at your own pace to get a better understanding of Oracle Linux 8. You can develop skills to use and administer Oracle Linux 8 on Oracle Cloud Infrastructure, on-premises, or in hybrid environments.

This first blog focuses on the installation and boot process. You can learn step-by-step how to complete an Oracle Linux 8 installation for on-premises deployment and how to create an Oracle Linux 8 instance on Oracle Cloud Infrastructure. You can also learn about the boot process and how to configure different services to start at boot time.

Here's the list of videos and the time it takes to complete each one. In less than an hour, you can complete installation training:

- Installing Oracle Linux 8 (8 min)
- Install Oracle Linux 8 on Oracle Cloud Infrastructure (4 min)
- BIOS Firmware Bootloader Process on Oracle Linux 8 (7 min)
- GRUB 2 on Oracle Linux 8 (10 min)
- Unified Extensible Firmware Interface on Oracle Linux 8 (8 min)
- systemd System and Service Manager on Oracle Linux 8 (9 min)
- systemd Target Units on Oracle Linux 8 (7 min)

Be sure to come back for our next edition, which will cover administration classes. New videos are added on an ongoing basis, so check back often.

Resources:
- Oracle Linux
- Oracle Linux 8 training videos
- Oracle Linux 8 product documentation


Announcements

IRI Certifies Voracity with Oracle Linux

The Oracle Linux and Virtualization Alliance team welcomes IRI, The CoSort Company, and its Voracity data management platform to our ISV ecosystem. Voracity enables customers to marshal data without the cost or complexity of multiple tools. IRI has certified and supports Voracity on Oracle Linux 7 and 8, providing a rich set of performance and security features for Oracle DBAs, big data architects, and data privacy teams.

IRI Voracity combines data discovery, integration, migration, governance, and analytics in a managed metadata framework built on Eclipse. It leverages the proven power of IRI CoSort or Hadoop MR2, Spark, Spark Stream, Storm, and Tez. IRI Voracity runs on Oracle Cloud Infrastructure, enabling modern PaaS and SaaS options for SMB and enterprise customers seeking faster, more affordable, and highly secure cloud execution of ETL jobs, plus data masking and synthesis, data quality and migration, and data wrangling for analytics.

On Oracle Cloud Infrastructure or on premises, customers can use IRI Voracity to perform, speed, and combine critical lifecycle activities including:

- Data Discovery - classifying, diagramming, profiling, and searching of structured, semi-structured, and unstructured data sources
- Data Integration - individually optimized, task-consolidated same-pass E, T, and L operations, plus CDC, slowly changing dimensions, and ways to speed or leave any legacy ETL platform
- Data Migration - conversion of data types, file formats, and database platforms, plus incremental or bulk data replication and federation
- Data Governance - PII data masking and re-ID risk scoring, DB subsetting, synthetic test data generation, data validation, cleansing, and enrichment, master and metadata management, etc.
- Analytics - embedded reporting, integrations with KNIME and cloud analytic platforms, and data wrangling to speed time-to-display in BI tools

Visit oracle.com/linux and www.iri.com for more information.


Announcements

Unbreakable Linux Network (ULN) IP address changing on October 30, 2020

Unbreakable Linux Network (ULN) will be undergoing planned maintenance beginning on October 30th, 2020, starting at 6pm Pacific time. This planned maintenance event is scheduled to be completed by 10pm Pacific time on the same date. During this maintenance, the content delivery component of the Unbreakable Linux Network will move to a new IP address.

Servers that need to download content from ULN and that reside behind a proxy or firewall limiting outbound connectivity based on destination IP address will be blocked by that proxy or firewall when attempting to connect. This includes:

- servers that are directly registered to ULN
- servers configured to use uln-yum-mirror to synchronise content from ULN
- Spacewalk Server instances that synchronise content from ULN
- Oracle Enterprise Manager instances that synchronise content from ULN

Servers that are configured as downstream clients of a uln-yum-mirror instance, or that are registered to a local Spacewalk instance, and that do not require traversal of the restrictive proxy or firewall, are not affected.

Prior to October 30, 2020, customers must modify outbound proxy or firewall rules to allow outbound SSL/TLS connections on port 443 to communicate with linux-update.oracle.com using the destination IP address 138.1.51.46. For more information, please see My Oracle Support Doc ID 2720318.1.
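The exact change depends on your proxy or firewall product, but as one hedged illustration (an assumption about the environment, not part of the announcement): on a Linux gateway that filters outbound traffic with iptables, a rule along these lines would permit the new destination.

```shell
# Hypothetical example for an iptables-based filtering gateway:
# allow outbound TLS (TCP port 443) to ULN's new content-delivery address.
iptables -A OUTPUT -p tcp -d 138.1.51.46 --dport 443 -j ACCEPT
```

Commercial firewalls and proxies will have their own equivalent of this allow rule; consult your vendor's documentation.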


Linux Kernel Development

Check out the Oracle talks at KVM Forum 2020

The annual KVM Forum conference is next week. It brings together the world's leading experts on Linux virtualization technology to present their latest work. The conference is virtual this year, with live attendance from October 28-30, or check out the recordings once they are available: https://events.linuxfoundation.org/kvm-forum. A good number of engineers from the Oracle Linux kernel development team will be presenting their work at the forum.

Alexandre Chartre presents KVM Address Space Isolation, a kernel enhancement that provides a separate kernel address space for KVM when running virtual machines. This provides an extra level of protection against speculative execution exploits, improving security for all, and was also a hot topic at the Linux Plumbers Conference earlier this year.

Ankur Arora tells us how to optimize lock operations in a virtualized kernel while retaining the ability to modify the optimization if conditions on the host change, such as after a live migration. See his talk, Changing Paravirt Lock-ops for a Changing World.

Annie Li talks about Implementing SR-IOV Failover for Windows Guests During Migration, which builds on the failover operation defined by the virtio specification and enables live migration for Windows clients.

Steve Sistare presents enhancements to QEMU and the Linux kernel that allow the QEMU management process to be updated to the latest version while keeping the guest alive. This enables critical bug fixes, security mitigations, and new features without rebooting the guest. See QEMU Live Update.

Joao Martins guides us Towards an Alternative Memory Architecture. Memory assigned to guests is still tracked by page structs in the host kernel, which is wasteful. With modifications to the DAX sub-system, guest memory can instead be backed by DAX segments, eliminating this overhead.

This is cool and useful stuff, check it out!


Linux

Vinchin Backup & Recovery is now tested and supported with Oracle Linux Virtualization Manager

Oracle is pleased to announce that Vinchin, a provider of data protection solutions for enterprises, has tested and will support customers running its Backup & Recovery solution with Oracle Linux KVM and Oracle Linux Virtualization Manager. This means that you can easily and efficiently back up and restore virtual machines running on Oracle Linux Virtualization Manager with Vinchin Backup & Recovery.

Vinchin offers a modern and secure IT infrastructure solution that delivers high availability and scalability to drive transformative business outcomes for customers. Oracle Linux Virtualization Manager is a server virtualization management platform based on the oVirt open-source project. It can be easily deployed to configure, monitor, and manage an Oracle Linux Kernel-based Virtual Machine (KVM) environment with support from Oracle.

Vinchin has provided its oVirt-based backup solution for several years and has customers throughout China, Europe, and the Americas. Vinchin now supports its reliable backup and disaster recovery solution for customers running Oracle Linux KVM and Oracle Linux Virtualization Manager.

Last but not least, teaming with the Oracle Linux alliance group to verify compatibility with Oracle Linux Virtualization Manager has given Vinchin confidence and built trust in this collaboration, which may lead to solutions for other Oracle products. Watch the companies' respective blogs for future updates.

Visit us at oracle.com/linux and stay connected:
- twitter.com/oraclelinux
- facebook.com/oraclelinux
- blogs.oracle.com/linux
- youtube.com/oraclelinuxchannel
- vinchin.com/en/
- vinchin.com/en/product/vm-backup-and-recovery.html


Announcements

Announcing the release of Oracle Linux 7 Update 9

Oracle is pleased to announce the general availability of Oracle Linux 7 Update 9, which includes Unbreakable Enterprise Kernel (UEK) Release 6 as the default kernel. Oracle Linux brings the latest open source innovations and business-critical performance and security optimizations for cloud and on-premises deployment.

Oracle Linux maintains user space compatibility with Red Hat Enterprise Linux (RHEL), which is independent of the kernel version that underlies the operating system. Existing applications in user space will continue to run unmodified on Oracle Linux 7 Update 9 with UEK Release 6, and no re-certifications are needed for applications already certified with Red Hat Enterprise Linux 7 or Oracle Linux 7.

Oracle Linux 7 Update 9 is available on 64-bit Arm (aarch64) and 64-bit AMD/Intel (x86-64) based systems. It ships with the following kernel packages, which include bug fixes, security fixes, and enhancements:

- UEK Release 6 (kernel-uek-5.4.17-2011.6.2.el7uek) for x86-64 and aarch64
- Red Hat Compatible Kernel (RHCK) (kernel-3.10.0-1136.el7) for x86-64 only

Notable new features

New features and changes for the Red Hat Compatible Kernel (RHCK):

- EDAC driver for Intel ICX systems added: The Error Detection and Correction (EDAC) driver has been added for Intel ICX systems in this release. This driver enables error detection on these systems and reports any errors to the EDAC subsystem.
- Mellanox ConnectX-6 Dx network adapter support added: Oracle Linux 7 Update 9 adds the PCI IDs of the Mellanox ConnectX-6 Dx network adapter to the mlx5_core driver.

UEK Release 6 is based on the mainline Linux kernel 5.4, supplying more innovation than other commercial Linux kernels:

- Arm: Enhanced support for the Arm (aarch64) platform, including improvements in the areas of security and virtualization.
- Cgroup v2: UEK R6 includes all Cgroup v2 features, along with several enhancements.
- ktask: ktask is a framework for parallelizing CPU-intensive work in the kernel. It can be used to speed up large tasks on systems with available CPU power, where a task would otherwise be single-threaded.
- Parallelized kswapd: Page replacement is handled in the kernel asynchronously by kswapd, and synchronously by direct reclaim. When free pages within the zone free list are low, kswapd scans pages to determine whether there are unused pages that can be evicted to free up space for new pages. This optimization improves performance by avoiding direct reclaims, which can be resource intensive and time consuming.
- Kexec firmware signing: The option to check and validate a kernel image signature is enabled in UEK R6. When kexec is used to load a kernel from within UEK R6, kernel image signature checking and validation can be enforced so that a system only loads a signed and validated kernel image.
- Memory management: Several performance enhancements have been implemented in the kernel's memory management code to improve the efficiency of clearing pages and cache, as well as enhancements to fault management and reporting.
- NVDIMM: NVDIMM feature updates have been implemented so that persistent memory can be used as traditional RAM.
- NVMe improvements: NVMe over Fabrics TCP host and target drivers have been added. Multipath support and passthrough command support have been added. NVMe namespace support is extended to include Namespace Write Protect and Asynchronous Namespace Access.
- DTrace: DTrace support is enabled and has been re-implemented to use the Berkeley Packet Filter (BPF) that is integrated into the Linux kernel.
- OCFS2: Support for the OCFS2 file system is enabled.
- Btrfs: Support for the Btrfs file system is enabled, including the ability to select Btrfs as a file system type when formatting devices.
- NFS: Enhancements and new features that improve NFS performance.
- Zero copy networking: Network performance enhancements and new technology for building faster networking products.

New features and changes independent of the kernel:

- DIF/DIX (T10 PI) support for specific hardware: SCSI T10 DIF/DIX is fully supported on hardware that has been qualified by the vendor, provided that the vendor also fully supports the particular host bus adapter (HBA) and storage array configuration.
- FreeRDP updated to version 2.1.1: The FreeRDP implementation of the Remote Desktop Protocol (RDP) is updated from version 2.0.0 to version 2.1.1 in this release. This version includes new RDP options for the current Microsoft Windows terminal server version. Several security issues are also fixed in FreeRDP 2.1.1.
- Pacemaker updated to version 1.1.23: The Pacemaker cluster resource manager is updated in this release to version 1.1.23, which provides numerous bug fixes over the previous version.

Further information is available in the Release Notes for Oracle Linux 7 Update 9.

Oracle Linux downloads

Individual RPM packages are available on the Unbreakable Linux Network (ULN) and the Oracle Linux yum server. ISO installation images are available for download from the Oracle Linux yum server, and container images are available via Oracle Container Registry, GitHub Container Registry, and Docker Hub.

Oracle Linux can be downloaded, used, and distributed free of charge, and all updates and errata are freely available. Customers decide which of their systems require a support subscription, making Oracle Linux an ideal choice for development, testing, and production systems: the customer chooses the support coverage that is best for each individual system while keeping all systems up to date and secure. Customers with Oracle Linux Premier Support also receive support for additional Linux programs, including Gluster Storage, Oracle Linux Software Collections, and zero-downtime updates using Oracle Ksplice.
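Since both kernels can be installed side by side, a quick way to tell which one a system booted is to inspect the kernel release string, which for UEK carries a "uek" tag (a hypothetical convenience snippet, not from the release notes):

```shell
# Classify a kernel release string: UEK kernels carry "uek" in the name,
# e.g. 5.4.17-2011.6.2.el7uek, while RHCK looks like 3.10.0-1136.el7.
kernel_flavor() {
  case "$1" in
    *uek*) echo "UEK" ;;
    *)     echo "RHCK (or another kernel)" ;;
  esac
}

# On a running system:
kernel_flavor "$(uname -r)"
```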
For more information about Oracle Linux, please visit www.oracle.com/linux.


Oracle Linux sessions at Open Source Summit Europe 2020

Open Source Summit connects the open source ecosystem under one roof. It covers cornerstone open source technologies; helps ecosystem leaders navigate open source transformation; and delves into the newest technologies and latest trends touching open source. It is an extraordinary opportunity for cross-pollination between the developers, sysadmins, DevOps professionals, IT architects, and business and community leaders driving the future of technology. Check out the Oracle Linux sessions at this event and register today:

Tuesday, October 27, 13:00 GMT
DTrace: Leveraging the Power of BPF - Kris Van Hees, Oracle Corp.

BPF and the overall tracing infrastructure in the kernel have improved tremendously and provide a powerful framework for tracing tools. DTrace is a well known and versatile tracing tool that is being re-implemented to make use of BPF and kernel tracing facilities. The goal of this open source project (hosted on GitHub) is to provide a full-featured implementation of DTrace, leveraging the power of BPF to provide well known functionality. The presentation will provide an update on the progress of the re-implementation project. Kris will share some of the lessons learned along the way, highlighting how BPF provides the building blocks to implement a complex tracing tool. He will provide examples of creative techniques that showcase the power of BPF as an execution engine. Like any project, the re-implementation of DTrace has not been without pitfalls, and Kris will highlight some of the limitations and unsolved problems the development team has encountered.

Wednesday, October 28, 13:00 GMT
The Compact C Type (CTF) Debugging Format in the GNU Toolchain: Progress Report - Elena Zannoni & Nicholas Alcock, Oracle

The Compact C Type Format (CTF) is a reduced form of debug information describing the type of C entities such as structures, unions, etc. It has been ported to Linux (from Solaris) and used to reduce the size of the debugging information for the Linux kernel and DTrace. It was extended to remove limits and add support for additional parts of the C type system. Last year, we integrated it into GCC and GNU binutils and added support for dumping CTF data in ELF objects and some support for linking CTF data into a final executable (and presented at this conference). This linking support was preliminary: it was slow and the CTF was large. Since last year, the libctf library and ld in binutils have gained the ability to properly deduplicate CTF with little performance hit: output CTF in linked ELF objects is now often smaller than the CTF in any input .o file. The libctf API has also improved, with support for new features, better error reporting, and a much-improved CTF iterator. This talk will provide an overview of CTF and the novel type deduplication algorithm used to reduce CTF size, and discuss the other contributions of CTF to the toolchain, such as compiler and debugger support.

Wednesday, October 28, 18:30 GMT
KVM Address Space Isolation - Alexandre Chartre, Oracle

First investigations into Kernel Address Space Isolation (ASI) were presented at Linux Plumbers and KVM Forum last year. Kernel Address Space Isolation aims to mitigate some CPU hyper-threading data leaks possible with speculative execution attacks (like L1 Terminal Fault (L1TF) and Microarchitectural Data Sampling (MDS)). In particular, Kernel Address Space Isolation will provide a separate kernel address space for KVM when running virtual machines, in order to protect against a malicious guest VM attacking the host kernel using speculative execution attacks. Several RFCs implementing this solution have been submitted. This presentation will describe the current state of the Kernel Address Space Isolation proposal, focusing on its usage with KVM, in particular the page table mapping requirements and the performance impact.
Thursday, October 29, 16:00 GMT
Changing Paravirt Lock-ops for a Changing World - Ankur Arora, Oracle

Paravirt ops are set in stone once a guest has booted. As an example, we might expose `KVM_HINTS_REALTIME` to a guest, and this hint is expected to stay true for the lifetime of the guest. However, events in a guest's life, like changed host conditions or migration, might mean that it would be more optimal to revoke this hint. This talk discusses two aspects of this revocation: first, support for a revocable `KVM_HINTS_REALTIME` and, second, work done in the paravirt ops subsystem to dynamically modify spinlock-ops.

Friday, October 30, 14:00 GMT
QEMU Live Update - Steven J. Sistare, Oracle

The ability to update software with critical bug fixes and security mitigations while minimizing downtime is valued highly by customers and providers. In this talk, Steve presents a new method for updating a running instance of QEMU to a new version while minimizing the impact on the VM guest. The guest pauses briefly, for less than 200 msec in the prototype, without loss of internal state or external connections. The old QEMU process exec's the new QEMU binary, and preserves anonymous guest RAM at the same virtual address via a proposed Linux madvise variant. Descriptors for external connections are preserved, and VFIO pass-through devices are supported by preserving the VFIO device descriptors and attaching them to a new KVM instance after exec. The update method requires code changes to QEMU, but no changes are required in system libraries or the KVM kernel module.

Open Source Summit connects the open source ecosystem under one roof. It covers cornerstone open source technologies; helps ecosystem leaders to navigate open source transformation; and delves into...

Oracle Cloud Infrastructure

Creating an SSH Key Pair on the Linux Command Line for OCI Access

Introduction

SSH is the standard for command-line access to Linux systems. Oracle Linux Tips and Tricks: Using SSH is a good initial read. When creating an Oracle Cloud Infrastructure (OCI) instance, a public SSH key must be provided in the web interface to enable password-less SSH access to the new instance. The question is: how do you produce the public SSH key needed? This post shows how to do that on the Linux command line, where the ssh-keygen command is used to generate the necessary key pair.

Starting Up

Open a terminal in your Linux desktop GUI and make sure that you are logged in as the user account (e.g. my_user; avoid using the root account for general security reasons) that you will use to access the new Oracle Cloud Infrastructure instance via SSH. Run ssh-keygen:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/my_user/.ssh/id_rsa):

Give a name to the key pair to be generated (e.g. my_ssh_key):

Enter file in which to save the key (/home/my_user/.ssh/id_rsa): my_ssh_key
Enter passphrase (empty for no passphrase):

Do not provide a passphrase; skip both prompts by pressing Enter:

Enter same passphrase again:
Your identification has been saved in my_ssh_key.
Your public key has been saved in my_ssh_key.pub.
The key fingerprint is:
SHA256:tXpJNaug8iUdIEVCM+7WHX8gqS/AfRi//tUKanA1Eo8 my_user@my_desktop
The key's randomart image is:
+---[RSA 2048]----+
|   .=.o          |
|   . =  ..       |
|    o o ++o o    |
|   o + BE=++ o   |
|    = = So+.o    |
|   . ..=.* + .   |
|    . +o* = . .  |
|     o =.o o .   |
|      ..o.. .    |
+----[SHA256]-----+

The file my_ssh_key.pub will have been created in your current directory.
$ cat my_ssh_key.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCkDBM0WOv+AzboCPaqhr8cAN/G
HBoclnR+Gvo9x4JZA9gPYQIhCgGet4E8YgcWLwa0tDrZJvg/DuVMfQ0oA2JiaWHN
W54lrfuACJVdF/8wZGKpgK5vnd7/pcAIZ9r6rdeaDyFSMEscNwX3pjEnkMp92ykQ
tO4rmxnHtqefsvh+O4i4DT4EQE0bUanLriYs59K1XMkA2bIUvnjjD7ILKyNqVeYK
hu5w/iS72+9l0U6nfifbyzy4VbqtOI1uU8bvdqeL7J6okTQjeJl/fW2tha//pNbm
/nTVyLOOdYXxmAZ8zXX7r6X4pZE5lmbmowk3AZTojlI7MTrYOKuQcxsusUJ my_user@my_desktop

Providing Key Information to the Oracle Cloud Infrastructure Instance

While creating the Oracle Cloud Infrastructure instance, in the "Add SSH Keys" section, choose "PASTE PUBLIC KEYS" and copy/paste the contents of the public key file (alternatively, you can upload the file). After the instance is created, use the ssh command with the private key to access it (where <ip_addr> is the IP address of the new Oracle Cloud Infrastructure instance):

$ ssh -i my_ssh_key opc@<ip_addr>
The authenticity of host '<ip_addr> (<ip_addr>)' can't be established.
ECDSA key fingerprint is SHA256:qD2zZE5hO0TYYEMQdDpSPz5izTuaFslwZiMOZp7kwDc.
ECDSA key fingerprint is MD5:ea:c3:e8:61:e9:29:7a:df:ae:b6:43:ad:5b:71:f7:90.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '<ip_addr>' (ECDSA) to the list of known hosts.
[opc@<ip_addr> ~]$

Summary

To access an Oracle Cloud Infrastructure instance via SSH from a Linux desktop, use the ssh-keygen command to generate the necessary SSH key pair and provide the public key to the Oracle Cloud Infrastructure instance as described.
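As an optional sanity check before pasting a key into the OCI console, the structure of an OpenSSH public key line can be verified programmatically. A minimal sketch in Python (standard library only; the function name is mine, not part of any tool mentioned above):

```python
import base64
import struct

def openssh_key_type(pubkey_line):
    """Return the algorithm name embedded in an OpenSSH public key line.

    A well-formed line looks like "<type> <base64-blob> [comment]".
    The first length-prefixed field inside the decoded blob repeats the
    key type, so the two can be cross-checked.
    """
    fields = pubkey_line.strip().split()
    if len(fields) < 2:
        raise ValueError("not an OpenSSH public key line")
    blob = base64.b64decode(fields[1])
    # The blob starts with a 4-byte big-endian length, then the type string.
    (length,) = struct.unpack(">I", blob[:4])
    embedded = blob[4:4 + length].decode("ascii")
    if embedded != fields[0]:
        raise ValueError(f"key type mismatch: {fields[0]} vs {embedded}")
    return embedded
```

For a key generated as above, `openssh_key_type(open("my_ssh_key.pub").read())` would return "ssh-rsa"; a truncated or mangled paste raises an error instead.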


Announcements

Announcing updated Oracle Linux Templates for Oracle Linux KVM

Oracle is pleased to announce updated Oracle Linux Templates for Oracle Linux KVM and Oracle Linux Virtualization Manager. Oracle Linux Templates for Oracle Linux KVM provide an innovative approach to deploying a fully configured software stack by offering pre-installed and pre-configured software images. Use of Oracle Linux Templates eliminates installation and configuration costs and reduces ongoing maintenance costs, helping organizations achieve faster time to market and lower cost of operations.

New templates include:

Oracle Linux 7 Update 8 Template
Unbreakable Enterprise Kernel 5 Update 4 (kernel-uek-4.14.35-2025.400.8)
8GB of RAM
37GB OS virtual disk

Oracle Linux 8 Update 2 Template
Unbreakable Enterprise Kernel 6 (kernel-uek-5.4.17-2011.4.4)
8GB of RAM
37GB OS virtual disk

The new Oracle Linux Templates for Oracle Linux KVM and Oracle Linux Virtualization Manager supply powerful automation. These templates are built on cloud-init, the same technology used today on Oracle Cloud Infrastructure, and include improvements and regression fixes.

Downloading Oracle Linux Templates for Oracle Linux KVM

Oracle Linux Templates for Oracle Linux KVM are available on the yum.oracle.com website in the "Oracle Linux Virtual Machine" download section.
Further information

The Oracle Linux 7 Template for Oracle Linux KVM allows you to configure different options on the first boot of your Virtual Machine. The cloud-init options configured on the Oracle Linux 7 Template are:

VM Hostname: define the Virtual Machine hostname
Configure Timezone: define the Virtual Machine timezone (from an existing list)
Authentication
  Username: define a custom Linux user on the Virtual Machine
  Password / Verify Password: define the password for the custom Linux user on the Virtual Machine
  SSH Authorized Keys: SSH authorized keys for password-less access to the Virtual Machine
  Regenerate SSH Keys: option to regenerate the Virtual Machine host SSH keys
Networks
  DNS Servers: define the Domain Name Servers for the Virtual Machine
  DNS Search Domains: define the DNS search domains for the Virtual Machine
  In-guest Network Interface Name: define the virtual NIC device name for the Virtual Machine (e.g. eth0)
Custom Script: execute a custom script at the end of the cloud-init configuration process

These options can be easily managed in the "Oracle Linux Virtualization Manager" web interface by editing the Virtual Machine and enabling the "Cloud-Init/Sysprep" option. Further details on how to import and use the Oracle Linux 7 Template for Oracle Linux KVM are available in this technical article on Simon Coter's Oracle Blog.

Oracle Linux KVM & Virtualization Manager Support

Support for Oracle Linux Virtualization Manager is available to customers with an Oracle Linux Premier Support subscription. Refer to the Oracle Unbreakable Linux Network for additional resources on Oracle Linux support.

Oracle Linux Virtualization Manager Resources
Oracle Linux Resources
Oracle Virtualization Resources
Oracle Linux yum server
Oracle Linux Virtualization Manager Training
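For reference, the first-boot settings listed above map onto a standard cloud-init user-data file. A hypothetical fragment (all values illustrative, not taken from the template):

```yaml
#cloud-config
hostname: ol7-vm01
timezone: Europe/Rome
users:
  - name: devops
    passwd: <hashed-password>
    lock_passwd: false
    ssh_authorized_keys:
      - ssh-rsa AAAA... user@desktop
# Regenerate the host SSH keys on first boot
ssh_deletekeys: true
runcmd:
  - echo "first boot configuration done" >> /var/log/first-boot.log
```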


Announcements

Announcing the release of Oracle Linux 7 Update 9 Beta

Oracle is pleased to announce the availability of the Oracle Linux 7 Update 9 Beta Release for the 64-bit Intel and AMD (x86_64) and 64-bit Arm (aarch64) platforms. Oracle Linux 7 Update 9 Beta is an update release that includes bug fixes, security fixes, and enhancements. The beta release allows Oracle partners and customers to test these capabilities before Oracle Linux 7 Update 9 becomes generally available. It is 100% application binary compatible with Red Hat Enterprise Linux 7 Update 9 Beta.

Updates include:

An improved SCAP security guide for Oracle Linux 7
Updated device drivers for both UEK and the Red Hat Compatible Kernel
The Wayland display server protocol, now available as a technology preview
An updated virt-v2v release that now supports Ubuntu and Debian conversions from VMware to Oracle Linux KVM

The Oracle Linux 7 Update 9 Beta Release includes the following kernel packages:

kernel-uek-5.4.17-2011.4.4 for the x86_64 and aarch64 platforms - the Unbreakable Enterprise Kernel Release 6, which is the default kernel.
kernel-3.10.0-1136 for the x86_64 platform - the latest Red Hat Compatible Kernel (RHCK).

To get started with the Oracle Linux 7 Update 9 Beta Release, you can perform a fresh installation by using the ISO images available for download from Oracle Technology Network, or you can upgrade an existing Oracle Linux 7 installation by using the beta channels for Oracle Linux 7 Update 9 on the Oracle Linux yum server or the Unbreakable Linux Network (ULN).
# vi /etc/yum.repos.d/oracle-linux-ol7.repo

[ol7_beta]
name=Oracle Linux $releasever Update 9 Beta ($basearch)
baseurl=https://yum$ociregion.oracle.com/repo/OracleLinux/OL7/beta/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[ol7_optional_beta]
name=Oracle Linux $releasever Update 9 Beta ($basearch) Optional
baseurl=https://yum$ociregion.oracle.com/repo/OracleLinux/OL7/optional/beta/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

If your instance is running on Oracle Cloud Infrastructure (OCI), the "$ociregion" variable is automatically set so that OCI yum mirrors are used. Modify the yum channel settings to enable the Oracle Linux 7 Update 9 Beta channels, then perform the upgrade:

# yum update

After the upgrade completes, reboot the system and you will have Oracle Linux 7 Update 9 Beta running:

[root@v2v-app: ~]# cat /etc/oracle-release
Oracle Linux Server release 7.9

This release is provided for development and test purposes only and is not covered by Oracle Linux support. Beta releases cannot be used in production, and no support will be provided to customers running beta releases in production environments. Further technical details and known issues for the Oracle Linux 7 Update 9 Beta Release are available on the Oracle Community - Oracle Linux and UEK Preview space. The Oracle Linux team welcomes your questions and feedback on the Oracle Linux 7 Update 9 Beta Release. You may contact the team at oraclelinux-info_ww_grp@oracle.com or post your questions and comments on the Oracle Linux and UEK Preview space on the Oracle Community.


Announcements

Announcing the release of Spacewalk 2.10 for Oracle Linux

Oracle is pleased to announce the release of Spacewalk 2.10 Server for Oracle Linux 7, along with updated Spacewalk 2.10 clients for Oracle Linux 7 and Oracle Linux 8. Client support is also provided for Oracle Linux 6 and Oracle Linux 5 (for extended support customers only). In addition to numerous fixes and other small enhancements, the Spacewalk 2.10 release includes the following significant features:

Spacewalk can now sync and distribute Oracle Linux 8 content, including support for mirroring a repository that contains module metadata. The module metadata can then be made available to downstream clients.
Python 2 packages are no longer required on systems that have Python 3 as the default.
It is now possible to manage errata severity via the Spacewalk server.
The dwr package has been updated to version 3.0.2 to fix security vulnerabilities.

Updated API calls:

errata.create/setDetails: provides the capability for managing severities.
system.schedulePackageRemoveByNevra: supports the removal of packages that are not in the database.

For more details on this release, including additional new features and changes, please consult the Spacewalk 2.10 Release Notes.

Limited support for Oracle Linux 8 clients

Spacewalk 2.10 Server can mirror a repository that contains module and AppStream metadata and make that metadata available to downstream clients. This feature is sufficient to support an Oracle Linux 8 client when using the dnf tool. However, the Spacewalk 2.10 web interface and API are not AppStream or module aware and therefore have limited support for managing Oracle Linux 8 clients. Please review section 1.4 of the Spacewalk 2.10 Release Notes for a comparison of the Spacewalk functionality that is available to each Oracle Linux client version.
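The updated API calls are reached over Spacewalk's XML-RPC interface. A hedged sketch of setting an erratum's severity from Python, using the errata.setDetails call mentioned above (the URL, credentials, advisory name, and the exact "severity" field name in the details map are assumptions for illustration):

```python
import xmlrpc.client

def set_errata_severity(server_url, user, password, advisory, severity):
    """Set the severity of an erratum via the Spacewalk XML-RPC API.

    Severity management through errata.setDetails is the 2.10 feature
    described above; field naming here is illustrative.
    """
    client = xmlrpc.client.ServerProxy(server_url)
    key = client.auth.login(user, password)  # session key for later calls
    try:
        return client.errata.setDetails(key, advisory, {"severity": severity})
    finally:
        client.auth.logout(key)
```

A call might look like `set_errata_severity("https://spacewalk.example.com/rpc/api", "admin", "secret", "ELSA-2020-1234", "Important")`.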


Announcements

Shoe Carnival Increases Security and Availability with Oracle Ksplice

In this article, we will discuss how Shoe Carnival increased their IT systems security and availability using Oracle Ksplice. Shoe Carnival, Inc. is one of the nation’s largest family footwear retailers, offering a broad assortment of moderately priced dress, casual and athletic footwear for men, women and children, with an emphasis on national name brands. The company operates 390 stores in 35 states and Puerto Rico, and offers online shopping. In keeping with the carnival spirit of rewarding surprises, Shoe Carnival offers its customers chances to win various coupons and discounts. Customers can spontaneously win while spinning the carnival wheel in the store or redeeming a promotional offer. These specials encourage customers to make a purchase. Customers are also eligible to earn loyalty rewards via a “Shoe Perks” membership. This loyalty program allows them to earn points with each purchase and receive exclusive offers. Members can redeem points and awards either in store or when shopping online. Shoe Carnival is focused on customer service and giving its clients a positive experience. To this end, its technology infrastructure plays a very important role in supporting customer-facing operations. In each of its 395 stores, Shoe Carnival has two servers supporting the register systems. These systems are critical for helping to ensure business runs smoothly. A store clerk uses this system to look up a customer’s loyalty account, apply appropriate discounts, and ultimately complete sales. When the system is down, it can impede Shoe Carnival’s ability to provide a high-quality customer experience. Previously, Shoe Carnival was running its systems on Red Hat. They switched to Oracle Linux for several reasons, including increased availability and security, improved support, and lower overall costs. Security is top of mind for retailers handling customer information.
In particular, there are many compliance and regulatory mandates for handling a customer’s personal and financial payment information. To protect against cyber security threats, the Shoe Carnival IT team needs to regularly patch and update its Linux operating systems (OS) with the latest fixes. Previously, it was a struggle to update all servers in a timely fashion while avoiding service disruptions; Oracle Ksplice now allows them to do automated live patching without any downtime. Ksplice has enabled Shoe Carnival to reduce planned downtime by more than 20%. The automated features have also saved up to 35% of administrator time per system. Lastly, as a long-standing Oracle Database and Oracle Exadata customer, Shoe Carnival found value in using the same support vendor for its OS. Using Oracle Linux Premier Support has yielded 50% faster support ticket resolution. We are proud to help customers like Shoe Carnival increase IT systems security and availability, enabling them to deliver an improved customer experience. Watch this video to learn more!


Linux

Pella Optimizes IT Infrastructure and Reduces License Costs With Oracle Linux and Virtualization

In this article, we will discuss how Pella transformed their IT infrastructure with a newly virtualized environment. The Pella Corporation is a privately held window and door manufacturing company headquartered in Pella, Iowa. They have manufacturing and sales operations in a number of locations in the United States. Pella Corporation employs more than 8,000 people, with 17 manufacturing sites and 200 showrooms throughout the United States and select regions of Canada. Pella’s continuous business growth proved to be a big challenge for the IT department. As the company’s needs increased, its older infrastructure, which was based on physical Unix servers, struggled to keep pace. Pella needed a more flexible platform that would allow it to easily build out capacity and improve functionality. This presented a unique opportunity for the IT team, which wanted a reliable infrastructure that could support current capacity and easily expand to accommodate growth while keeping costs to a minimum. For these reasons, the IT team decided to move to a virtualized x86-server environment. As a longtime Oracle customer, Pella was already using Oracle applications and Oracle Database, and was therefore inclined to evaluate Oracle’s virtualization and Linux solutions to facilitate its IT transformation. Oracle Linux was an obvious choice for Pella, primarily because it is optimized for existing Oracle workloads. Pella also decided to virtualize the environment with Oracle VM, mainly for the license structure advantages: with Oracle VM, Pella is able to pin CPUs to specific VMs, which in turn translates to savings on licensing costs for Oracle applications. Today, Pella’s IT department uses Oracle Linux in 95% of its Linux environment and Oracle VM to run all of its Linux VMs. This combination has proven to be very advantageous for multiple reasons.
First, Pella has significantly reduced its IT costs, including a savings of 75% on licensing costs, by switching to a virtualized environment. It also spent nearly 5x less than it would have on purchasing physical hardware. Second, Pella reduced CPU utilization: in its previous physical Unix environment, utilization was 50%; it is now down to 5%. Third, Pella has simplified its operations and streamlined its support ticket process. Because they run a large number of Oracle workloads, the team has been able to use one portal to share tickets between the DBA, Linux, and applications teams. Having a single vendor has improved their support experience and ensured their mission-critical applications run at their best. With its new IT platform based on Oracle Linux and Oracle VM, Pella now has the ability to scale up and out as needed. It also has a reliable platform to support its manufacturing. We are proud to help customers like Pella transform their IT landscapes! Watch this video to learn more.


Linux

Getting Started With The Oracle Cloud Infrastructure Python SDK

In a recent blog post I illustrated how to use the OCI Command Line Interface (CLI) in shell scripts. While the OCI CLI is comprehensive and powerful, it may not be the best solution when you need to handle a lot of data in shell scripts. In such cases, using a programming language such as Python and the Oracle Cloud Infrastructure Python SDK makes more sense. Data manipulation is much easier, although the API is, as expected, more complex. In an attempt to demystify the use of the OCI Python SDK, I have re-written and improved the sample oci-provision.sh shell script in Python. This sample project is named oci-compute and is published on GitHub. This blog post highlights the key concepts of the OCI Python SDK, and together with the oci-compute sample code it should help you get started easily.

About oci-compute

The oci-compute tool does everything oci-provision.sh does, better and faster, and adds some capabilities:

List available Platform, Custom and Marketplace images
Create Compute instances from a Platform, Custom or Marketplace image; a cloud-init file can be specified to run custom scripts during instance configuration
List, start, stop and terminate Compute instances

Command line syntax and parameter naming are similar to the OCI CLI tool. See the project README for more information on usage and configuration. I use this tool on a daily basis to manage OCI Compute instances from the command line.

OCI Python SDK installation

At the time of this writing, the SDK supports Python versions 3.5 and 3.6 and can be easily installed using pip, preferably in a Python virtual environment. Installation and required dependencies are described in detail in the documentation.

oci-compute installation

The oci-compute utility is distributed as a Python package. The setup.py file lists the SDK as a dependency; installing the tool will automatically pull in the SDK if it is not already installed.
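Both the SDK and oci-compute read their settings from the ~/.oci/config file discussed below. It is a plain INI file; a minimal sketch with placeholder values (the field names are the standard OCI configuration keys, the values are illustrative):

```ini
[DEFAULT]
user=ocid1.user.oc1..<unique_id>
fingerprint=<api_key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_id>
region=us-ashburn-1
```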
See the README file for detailed installation steps; in short, it is as simple as creating a virtual environment and running:

$ pip3 install .

The package is split into two main parts:

cli.py: handles command line parsing using the Click package. It defines all the commands, sub-commands and their parameters, instantiates the OciCompute class and invokes its methods.
oci_compute.py: defines the OciCompute class which interacts with the OCI SDK. This is the most interesting part of this project.

OCI SDK key concepts

This section describes the key concepts used by the OCI SDK.

Configuration

The first step in using the OCI SDK is to create a configuration dictionary (a Python dict). While you can build it manually, you will typically use the oci.config.from_file API call to load it from a configuration file. The default configuration file is ~/.oci/config. It is worth noting that the OCI CLI uses the same configuration file and provides a command to create it:

$ oci setup config

For oci-compute, the configuration file is loaded during class initialization:

self._config = oci.config.from_file(config_file, profile)

API Service Clients

The OCI API is organized into Services, and for each Service you have to instantiate a Service Client. For example, our oci-compute package uses the following Services:

Compute Service (part of Core Services): to manage Compute resources (provision and manage compute hosts).
Virtual Network Service (part of Core Services): to manage networking components (virtual cloud network, subnet, ...).
Identity Service: to manage users, groups, compartments, and policies.
Marketplace Service: to manage applications in the Oracle Cloud Infrastructure Marketplace.

We instantiate the Service Clients during class initialization:

# Instantiate clients
self._compute_client = oci.core.ComputeClient(self._config)
self._identity_client = oci.identity.IdentityClient(self._config)
self._virtual_network_client = oci.core.VirtualNetworkClient(self._config)
self._marketplace_client = oci.marketplace.MarketplaceClient(self._config)

Models

Models allow you to create the objects needed by API calls. For example, to use an image from the Marketplace, we need to subscribe to the Application Catalog. This is done with the ComputeClient create_app_catalog_subscription method, which takes a CreateAppCatalogSubscriptionDetails object as parameter. We use the corresponding model, oci.core.models.CreateAppCatalogSubscriptionDetails, to create that object. In oci-compute:

app_catalog_subscription_detail = oci.core.models.CreateAppCatalogSubscriptionDetails(
    compartment_id=compartment_id,
    listing_id=app_catalog_listing_agreements.listing_id,
    listing_resource_version=app_catalog_listing_agreements.listing_resource_version,
    oracle_terms_of_use_link=app_catalog_listing_agreements.oracle_terms_of_use_link,
    eula_link=app_catalog_listing_agreements.eula_link,
    signature=app_catalog_listing_agreements.signature,
    time_retrieved=app_catalog_listing_agreements.time_retrieved
)
self._compute_client.create_app_catalog_subscription(app_catalog_subscription_detail).data

Pagination

All list operations are paginated; that is, they return a single page of data and you need to call the method again to get additional pages. The pagination module allows you, amongst other things, to retrieve all data in a single API call. For example, to list the available images in a compartment we could do:

response = self._compute_client.list_images(compartment_id)

which will only return the first page of data.
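Conceptually, an all-pages helper just loops over a paginated call until the page token is exhausted. A dependency-free sketch of the idea (the response attributes `data` and `next_page` are loosely modeled on the OCI SDK's response objects; names are illustrative):

```python
def get_all_results(list_page, *args, **kwargs):
    """Call a paginated list function repeatedly and concatenate all pages.

    `list_page` is assumed to accept a `page` keyword (an opaque token,
    None for the first page) and to return an object with `.data` (the
    items of that page) and `.next_page` (None when no pages remain).
    """
    items = []
    page = None
    while True:
        response = list_page(*args, page=page, **kwargs)
        items.extend(response.data)
        page = response.next_page
        if page is None:
            return items
```

The SDK's pagination module does this (and more) for you, as shown next.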
To get all images at once we instead do:

response = oci.pagination.list_call_get_all_results(self._compute_client.list_images, compartment_id)

The first parameter to list_call_get_all_results is the paginated list method; subsequent parameters are those of the list method itself.

Waiters and composite operations

To wait for an operation to complete (e.g. wait until an instance is started), you can use the wait_until function. Alternatively, there are convenience classes in the SDK, the CompositeOperation classes, which perform an action on a resource and wait for it to enter a particular state. The following code snippet shows how to start an instance and wait until it is up and running:

compute_client_composite_operations = oci.core.ComputeClientCompositeOperations(self._compute_client)
compute_client_composite_operations.instance_action_and_wait_for_state(
    instance_id=instance_id,
    action='START',
    wait_for_states=[oci.core.models.Instance.LIFECYCLE_STATE_RUNNING])

Error handling

A complete list of exceptions raised by the SDK is available in the exception handling section of the documentation. In short, if your API calls are valid (correct parameters, ...), the main exception you should care about is ServiceError, which is raised when a service returns an error response, that is, a non-2xx HTTP status. For the sake of simplicity and clarity in the sample code, oci-compute does not capture most exceptions; Service Errors will result in a Python stack traceback. A simple piece of code where we do have to consider the ServiceError exception is illustrated here:

for vnic_attachment in vnic_attachments:
    try:
        vnic = self._virtual_network_client.get_vnic(vnic_attachment.vnic_id).data
    except oci.exceptions.ServiceError:
        vnic = None
    if vnic and vnic.is_primary:
        break

Putting it all together

The oci-compute sample code should be self-explanatory, but let's walk through what happens when, for example,
oci-compute provision platform --operating-system "Oracle Linux" --operating-system-version 7.8 --display-name ol78

is invoked. First of all, the CLI parser instantiates an OciCompute object. This is done once at the top level, for any oci-compute command:

ctx.obj['oci'] = OciCompute(config_file=config_file, profile=profile, verbose=verbose)

The OciCompute class initialization will:

Load the OCI configuration from file
Instantiate the Service Clients

The Click package then invokes the provision_platform function, which in turn calls the OciCompute.provision_platform method. We use oci.core.ComputeClient.list_images to retrieve the most recent Platform Image matching the given operating system and version:

images = self._compute_client.list_images(
    compartment_id,
    operating_system=operating_system,
    operating_system_version=operating_system_version,
    shape=shape,
    sort_by='TIMECREATED',
    sort_order='DESC').data
if not images:
    self._echo_error("No image found")
    return None
image = images[0]

We then call OciCompute._provision_image for the actual provisioning. This method uses all of the key concepts explained earlier.
Pagination is used to retrieve the Availability Domains, using the Identity Client list_availability_domains method:

availability_domains = oci.pagination.list_call_get_all_results(
    self._identity_client.list_availability_domains,
    compartment_id
).data

The VCN and subnet are retrieved using the Virtual Network Client (list_vcns and list_subnets methods).

Metadata is populated with the SSH public key and, if provided, a cloud-init file:

# Metadata with the ssh keys and the cloud-init file
metadata = {}
with open(ssh_authorized_keys_file) as ssh_authorized_keys:
    metadata['ssh_authorized_keys'] = ssh_authorized_keys.read()
if cloud_init_file:
    metadata['user_data'] = oci.util.file_content_as_launch_instance_user_data(cloud_init_file)

Models are used to create the instance launch details (the oci.core.models.InstanceSourceViaImageDetails, oci.core.models.CreateVnicDetails and oci.core.models.LaunchInstanceDetails classes):

instance_source_via_image_details = oci.core.models.InstanceSourceViaImageDetails(image_id=image.id)
create_vnic_details = oci.core.models.CreateVnicDetails(subnet_id=subnet.id)
launch_instance_details = oci.core.models.LaunchInstanceDetails(
    display_name=display_name,
    compartment_id=compartment_id,
    availability_domain=availability_domain.name,
    shape=shape,
    metadata=metadata,
    source_details=instance_source_via_image_details,
    create_vnic_details=create_vnic_details)

The last step is to use the launch_instance_and_wait_for_state composite operation to actually provision the instance and wait until it is available:

compute_client_composite_operations = oci.core.ComputeClientCompositeOperations(self._compute_client)
response = compute_client_composite_operations.launch_instance_and_wait_for_state(
    launch_instance_details,
    wait_for_states=[oci.core.models.Instance.LIFECYCLE_STATE_RUNNING],
    waiter_kwargs={'wait_callback': self._wait_callback})

We use the optional waiter callback to display a simple progress indicator.

oci-compute demo

Short demo of the oci-compute tool:
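Under the hood, the wait_until and composite-operation helpers used above boil down to polling a resource's lifecycle state. A dependency-free sketch of the pattern (function name and defaults are mine, not the SDK's):

```python
import time

def wait_for_state(get_state, wanted, timeout=300, interval=2, sleep=time.sleep):
    """Poll get_state() until it returns a state in `wanted`.

    Raises TimeoutError if the resource does not reach a wanted state
    within `timeout` seconds; `sleep` is injectable for testing.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_state()
        if state in wanted:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"still in state {state!r} after {timeout}s")
        sleep(interval)
```

With the SDK you would pass something like `lambda: client.get_instance(instance_id).data.lifecycle_state` as the getter; the CompositeOperations classes wrap this action-plus-wait pattern for you.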
Conclusion In this post, I’ve shown how to use oci-compute to easily provision and manage your OCI Compute Instances from the command line as well as how to create your own Python scripts using the Oracle Cloud Infrastructure Python SDK.


Announcements

Oracle Linux container images now available on GitHub Container Registry

Oracle is pleased to announce the availability of Oracle Linux container images on GitHub Container Registry as part of our ongoing commitment to cultivating, supporting, and promoting popular open source technologies that customers can confidently deploy in business-critical environments. GitHub has quickly become one of the world's leading software development platforms and is now Oracle's preferred repository for open source software. Oracle has made many open source projects available on GitHub to make it easy for developers to access the source for Oracle-contributed software. We also use GitHub repositories to work with engineers and developers at partner companies and in the open source community. We've been publishing the official Oracle Linux container images to the GitHub Container Registry since its public beta launch in September 2020. When GitHub added multi-arch support later that month, we started publishing both the amd64 and arm64v8 variants of our official images.

Using the GitHub Container Registry

There are two ways to use the Oracle Linux images published to GitHub Container Registry: you can either pull the image or use it as the base image in a Dockerfile. For example, if you want to pull the oraclelinux:7-slim image from GitHub Container Registry, you can run:

# docker pull ghcr.io/oracle/oraclelinux:7-slim

or

$ podman pull ghcr.io/oracle/oraclelinux:7-slim

This will automatically pull the correct variant for your architecture. Alternatively, you can reference the image directly from within a Dockerfile, e.g.

FROM ghcr.io/oracle/oraclelinux:7-slim

Finding the Oracle Linux container images on GitHub

As part of our ongoing effort to provide as much content as possible to developers working with Oracle Linux, we've been publishing the Oracle Linux official container image tarballs on GitHub since December 2014, with a complete change log to ensure developers are aware of any changes that may affect a downstream image.
These tarballs are built by Oracle and are used to build the official images on Docker Hub, Oracle Container Registry, and GitHub Container Registry.

Customer Support

Customers running Oracle Linux in production can benefit from Oracle support. Oracle offers Basic and Premier support subscriptions for Oracle Linux, and subscription customers get easy access to support via the My Oracle Support portal. Support for the Oracle Linux container images is included with both Oracle Linux Basic and Premier support subscriptions.

Community Support

For Oracle Linux users without an Oracle Linux support subscription, the following resources are available:

Opening an issue at https://github.com/oracle/container-images/issues.
The Containers and Orchestration category in the Oracle Applications and Infrastructure Community.

Resources

In addition to the links in this article, please visit Oracle.com/Linux for more information or to chat with an Oracle Linux representative.


Partners

Noesis Solutions Certifies its Optimus Process Integration and Design Optimization Software with Oracle Linux

We are pleased to introduce Noesis Solutions’ Optimus into the ecosystem of ISV applications certified with Oracle Linux. Noesis recently certified its Optimus 2020.1 release with Oracle Linux 6 and 7. Optimus is an industry-leading process integration and design optimization (PIDO) software platform, bundling a powerful range of capabilities for engineering process integration, design space exploration, engineering optimization, and robustness and reliability. These PIDO technologies help direct engineering simulations toward design candidates that can outsmart the competition while taking relevant design constraints into account, effectively implementing an objectives-driven engineering process. Optimus' advanced workflow technologies offer the unique capability to intuitively automate engineering processes by capturing the related simulation workflow. These workflows free users from repetitive manual model changes, data processing, and performance evaluation tasks. Optimus simulation workflows can be executed in local environments or on cloud infrastructure. Optimus also offers effective data mining technologies that help engineering teams gain deeper insights and visualize the design space in a limited time window, helping them make informed decisions. Noesis Solutions is an engineering innovation company that works with manufacturers in engineering-intensive industries. Specialized in solutions that enable objectives-driven draft-to-craft engineering processes, its software products and services help customers adopt a targeted development strategy to resolve their toughest multidisciplinary engineering challenges.


Linux Toolchain & Tracing

DTrace for the Application Developer - Counting Function Calls

This blog entry was provided by Ruud van der Pas

Introduction

DTrace is often positioned as an operating system analysis tool for system administrators, but it has wider uses than this. In particular, application developers may find some of its features useful when trying to understand a performance problem. In this article we show how DTrace can be used to print a list of the user-defined functions that are called by the target executable, and how often these functions are called. The solution presented below works for a multithreaded application, and the function call counts are given for each thread.

Motivation

There are several reasons why it may be helpful to know how often functions are called:

Identify candidates for compiler-based inlining. With inlining, the function call is replaced by the source code of that function. This eliminates the overhead associated with calling a function and also provides additional opportunities for the compiler to better optimize the code. The downsides are an increase in register usage and a potentially reduced benefit from the instruction cache. This is why inlining works best on small functions that are called very often.

Test coverage. Although much more sophisticated tools exist for this, for example gcov, function call counts can be useful to quickly verify whether a function is called at all. Note that gcov requires the executable to be instrumented and the source to be compiled with the appropriate options.

In case the function call counts vary across the threads of a multithreaded program, there may be a load imbalance. The counts can also be used to verify which functions are executed by a single thread only.

Target Audience

No background in DTrace is assumed. All DTrace features and constructs used are explained. It is expected that the reader has some familiarity with developing applications, knows how to run an executable, and has some basic understanding of shell scripts.
The DTrace Basics

DTrace provides dynamic tracing of both the operating system kernel and user processes. Kernel and process activities can be observed across all running processes, or be restricted to a specific process, command, or executable. There is no need to recompile or have access to the source code of the process(es) being monitored.

A probe is a key concept in DTrace. Probes define the events that are available to the user to trace. For example, a probe can be used to trace the entry to a specific system call. The user specifies the probe(s) to monitor, and the simple D language is available to program the action(s) to be taken when an event occurs. DTrace is safe, unintrusive, and supports kernel as well as application observability.

DTrace probes are organized in sets called providers. The name of a provider is used in the definition of a probe, and the user can bind one or more tracing actions to any of the probes that have been provided. A list of all of the available probes on the system is obtained using the -l option of the dtrace command that is used to invoke DTrace. An example is shown below, but only snippets of the output are listed, because on this system there are over 110,000 probes.
# dtrace -l
    ID   PROVIDER            MODULE                     FUNCTION NAME
     1     dtrace                                                BEGIN
     2     dtrace                                                END
     3     dtrace                                                ERROR
<lines deleted>
    16    profile                                                tick-1000
    17    profile                                                tick-5000
    18    syscall            vmlinux                        read entry
    19    syscall            vmlinux                        read return
    20    syscall            vmlinux                       write entry
    21    syscall            vmlinux                       write return
<lines deleted>
   656       perf            vmlinux         syscall_trace_enter sys_enter
   657       perf            vmlinux      syscall_slow_exit_work sys_exit
   658       perf            vmlinux            emulate_vsyscall emulate_vsyscall
   659   lockstat            vmlinux intel_put_event_constraints spin-release
   660   lockstat            vmlinux       intel_stop_scheduling spin-release
   661   lockstat            vmlinux      uncore_pcibus_to_physid spin-release
<lines deleted>
  1023      sched            vmlinux         __sched_setscheduler dequeue
  1024   lockstat            vmlinux         tg_set_cfs_bandwidth spin-release
  1025      sched            vmlinux                activate_task enqueue
  1026      sched            vmlinux              deactivate_task dequeue
  1027       perf            vmlinux               ttwu_do_wakeup sched_wakeup
  1028      sched            vmlinux          do_set_cpus_allowed enqueue
<many more lines deleted>
155184        fbt         xt_comment                   comment_mt return
155185        fbt         xt_comment              comment_mt_exit entry
155186        fbt         xt_comment              comment_mt_exit return
163711    profile                                                 profile-99
163712    profile                                                 profile-1003
#

Each probe in this output is identified by a system-dependent numeric identifier and four fields with unique values:

provider - The name of the DTrace provider that is publishing this probe.
module - If this probe corresponds to a specific program location, the name of the kernel module, library, or user-space program in which the probe is located.
function - If this probe corresponds to a specific program location, the name of the kernel, library, or executable function in which the probe is located.
name - A name that provides some idea of the probe's semantic meaning, such as BEGIN, END, entry, return, enqueue, or dequeue.

All probes have a provider name and a probe name, but some probes, such as the BEGIN, END, ERROR, and profile probes, do not specify a module and function field. This type of probe does not instrument any specific program function or location.
Instead, these probes refer to a more abstract concept. For example, the BEGIN probe always triggers at the start of the tracing process.

Wild cards in probe descriptions are supported. An empty field in the probe description is equivalent to * and therefore matches any possible value for that field. For example, to trace the entry to the malloc() function in libc.so.6 in a process with PID 365, the pid365:libc.so.6:malloc:entry probe can be used. To probe the malloc() function in this process regardless of the specific library it is part of, either the pid365::malloc:entry or pid365:*:malloc:entry probe can be used.

Upon invocation of DTrace, probe descriptions are matched to determine which probes should have an action associated with them and need to be enabled. A probe is said to fire when the event it represents is triggered.

The user defines the actions to be taken when a probe fires. These need to be written in the D language, which is specific to DTrace, but readers with some programming experience will find it easy to learn. Different actions may be specified for different probe descriptions. While these actions can be specified at the command line, in this article we put all the probes and associated actions in a file. This D program, or script, by convention has the extension ".d".

Aggregations are important in DTrace. Since they play a key role in this article, we add a brief explanation here. The syntax for an aggregation is @user_defined_name[keys] = aggregation_function(). An example of an aggregation function is sum(arg). It takes a scalar expression as an argument and returns the total value of the specified expressions. For readers who would like to learn more about aggregations, we recommend reading the section on aggregations in the Oracle Linux DTrace Guide. This section also includes a list of the available aggregation functions.
Testing Environment and Installation Instructions

The experiments reported on here have been conducted in an Oracle Cloud Infrastructure ("OCI") instance running Oracle Linux. The following kernel has been used:

$ uname -srvo
Linux 4.14.35-1902.3.1.el7uek.x86_64 #2 SMP Mon Jun 24 21:25:29 PDT 2019 GNU/Linux
$

The 1.6.4 version of the D language and the 1.2.1 version of DTrace have been used:

$ sudo dtrace -Vv
dtrace: Sun D 1.6.4
This is DTrace 1.2.1
dtrace(1) version-control ID: e543f3507d366df6ffe3d4cff4beba2d75fdb79c
libdtrace version-control ID: e543f3507d366df6ffe3d4cff4beba2d75fdb79c
$

DTrace is available on Oracle Linux and can be installed through the following yum command:

$ sudo yum install dtrace-utils

After the installation has completed, please check your search path! DTrace is invoked through the dtrace command in /usr/sbin. Unfortunately there is a different tool with the same name in /usr/bin. You can verify that the path is correct with the following command:

$ which dtrace
/usr/sbin/dtrace
$

Oracle Linux is not the only operating system that supports DTrace. It actually has its roots in the Oracle Solaris operating system, but it is also available on macOS and Windows. DTrace is also supported on other Linux-based operating systems; for example, this blog article outlines how DTrace can be used on Fedora.

Counting Function Calls

In this section we show how DTrace can be used to count function calls. Various D programs are shown, successively refining the functionality.

The Test Program

In the experiments below, a multithreaded version of the multiplication of a matrix with a vector is used. The program is written in C and the algorithm has been parallelized using the Pthreads API. This is a relatively simple test program and makes it easy to verify that the call counts are correct. Below is an example of a job that multiplies a 1000x500 matrix with a vector of length 500 using 4 threads.
The output echoes the matrix sizes, the number of threads used, and the time it took to perform the multiplication:

$ ./mxv.par.exe -m 1000 -n 500 -t 4
Rows = 1000 columns = 500 threads = 4 time mxv = 510 (us)
$

A First DTrace Program

The D program below lists all functions that are called when executing the target executable. It also shows how often these functions have been executed. Line numbers have been added for ease of reference:

 1 #!/usr/sbin/dtrace -s
 2
 3 #pragma D option quiet
 4
 5 BEGIN {
 6 printf("\n======================================================================\n");
 7 printf(" Function Call Count Statistics\n");
 8 printf("======================================================================\n");
 9 }
10 pid$target:::entry
11 {
12 @all_calls[probefunc,probemod] = count();
13 }
14 END {
15 printa(@all_calls);
16 }

The first line invokes DTrace and uses the -s option to indicate that a D program follows. At line 3, a pragma is used to suppress some information DTrace prints by default. The BEGIN probe spans lines 5-9. This probe is executed once at the start of the tracing and is ideally suited to initialize variables and, as in this case, print a banner. At line 10 we use the pid provider to enable tracing of a user process. The target process is either specified using a particular process id (e.g. pid365), or through the $target macro variable, which expands to the process id of the command specified on the command line. The latter form is used here. The pid provider offers the flexibility to trace any command, which in this case is the execution of the matrix-vector multiplication executable. We use wild cards for the module name and function. The probe name is entry, which means that this probe fires upon entering any function of the target process. Lines 11 and 13 contain the mandatory curly braces that enclose the actions taken. In this case there is only one action, at line 12. Here, the count() aggregation function is used.
It returns how often it has been called. Note that this is on a per-probe basis, so this line counts how often each probe fires. The result is stored in an aggregation with the name @all_calls. Since this is an aggregation, the name has to start with the "@" symbol. The aggregation is indexed by the probefunc and probemod built-in DTrace variables. These expand to the function name that caused the probe to trigger and the module this function is part of. This means that line 12 counts how many times each function of the parent process is executed, together with the library or executable this function is part of. The END probe spans lines 14-16. Recall that this probe is executed upon termination of the tracing. Although aggregations are automatically printed upon termination, we explicitly print the aggregation using the printa function. The function and module name(s), plus the respective counts, are printed.

Below is the output of a run using the matrix-vector program. It is assumed that the D program shown above is stored in a file with the name fcalls.d. Note that root privileges are needed to use DTrace; this is why we use the sudo tool to execute the D program. By default the DTrace output is mixed with the program output. The -o option is used to store the DTrace output in a separate file. The -c option is used to specify the command or executable that needs to be traced. Since we pass options to the executable, quotes are needed to delimit the full command.
Since the full output contains 149 lines, only some snippets are shown here:

$ sudo ./fcalls.d -c "./mxv.par.exe -m 1000 -n 500 -t 4" -o fcalls.out
$ cat fcalls.out
======================================================================
 Function Call Count Statistics
======================================================================
  _Exit                        libc.so.6                          1
  _IO_cleanup                  libc.so.6                          1
  _IO_default_finish           libc.so.6                          1
  _IO_default_setbuf           libc.so.6                          1
  _IO_file_close               libc.so.6                          1
<many more lines deleted>
  init_data                    mxv.par.exe                        1
  main                         mxv.par.exe                        1
<many more lines deleted>
  driver_mxv                   mxv.par.exe                        4
  getopt                       libc.so.6                          4
  madvise                      libc.so.6                          4
  mempcpy                      ld-linux-x86-64.so.2               4
  mprotect                     libc.so.6                          4
  mxv_core                     mxv.par.exe                        4
  pthread_create@@GLIBC_2.2.5  libpthread.so.0                    4
<many more lines deleted>
  _int_free                    libc.so.6                       1007
  malloc                       libc.so.6                       1009
  _int_malloc                  libc.so.6                       1012
  cfree                        libc.so.6                       1015
  strcmp                       ld-linux-x86-64.so.2            1205
  __drand48_iterate            libc.so.6                     500000
  drand48                      libc.so.6                     500000
  erand48_r                    libc.so.6                     500000
$

The output lists every function that is part of the dynamic call tree of this program, the module it is part of, and how many times the function is called. The list is sorted by default with respect to the function call count. The functions from module mxv.par.exe are part of the user source code. The other functions are from shared libraries. We know that some of these, e.g. drand48(), are called directly by the application, but the majority of these library functions are called indirectly. To make things a little more complicated, a function like malloc() is called directly by the application, but may also be executed by library functions deeper in the call tree. From the above output we cannot make such a distinction. Note that the DTrace functions stack() and/or ustack() could be used to get call stacks showing the execution path(s) where the calls originate from.
In many cases this feature is used to zoom in on a specific part of the execution flow and is therefore restricted to a limited set of probes.

A Refined DTrace Program

While the D program shown above is correct, the list of all functions that are called is quite long, even for this simple application. Another drawback is that many probes trigger, slowing down program execution. In the second version of our D program, we'd like to restrict the list to user functions called from the executable mxv.par.exe. We also want to format the output, print a header, and display the function list in alphabetical order. The modified version of the D program is shown below:

 1 #!/usr/sbin/dtrace -s
 2
 3 #pragma D option quiet
 4 #pragma D option aggsortkey=1
 5 #pragma D option aggsortkeypos=0
 6
 7 BEGIN {
 8 printf("\n======================================================================\n");
 9 printf(" Function Call Count Statistics\n");
10 printf("======================================================================\n");
11 }
12 pid$target:a.out::entry
13 {
14 @call_counts_per_function[probefunc] = count();
15 }
16 END {
17 printf("%-40s %12s\n\n", "Function name", "Count");
18 printa("%-40s %@12lu\n", @call_counts_per_function);
19 }

Two additional pragmas appear at lines 4-5. The pragma at line 4 enables sorting the aggregations by a key, and the next one sets the key to the first field, the name of the function that triggered the probe. The BEGIN probe is unchanged, but the probe spanning lines 12-15 has two important differences compared to the similar probe used in the first version of our D program. At line 12, we use a.out for the name of the module. This is an alias for the module name in the pid probe. It is replaced with the name of the target executable, or command, to be traced. In this way, the D program does not rely on a specific name for the target.
The second change is at line 14, where the use of the probemod built-in variable has been removed because it is no longer needed. By design, only functions from the target executable trigger this probe now. The END probe has also been modified. At line 17, a statement has been added to print the header. The printa statement at line 18 has been extended with a format string to control the layout. This string is optional, but ideally suited to print (a selection of) the fields of an aggregation. We know the first field is a string and the result is a 64-bit unsigned integer, hence the use of the %s and %lu formats. The difference compared to a regular printf format string in C/C++ is the use of the "@" symbol. This is required when printing the result of an aggregation function.

Below is the output using the modified D program. The command to invoke this script is exactly the same as before.

======================================================================
 Function Call Count Statistics
======================================================================
Function name                                   Count

allocate_data                                       1
check_results                                       1
determine_work_per_thread                           4
driver_mxv                                          4
get_user_options                                    1
get_workload_stats                                  1
init_data                                           1
main                                                1
mxv_core                                            4
my_timer                                            2
print_all_results                                   1

The first thing to note is that with 11 entries, the list is much shorter. By design, the list is sorted alphabetically with respect to the function name. Since we no longer trace every function called, the tracing overhead has also been reduced substantially.

A DTrace Program with Support for Multithreading

With the above D program one can easily see how often our functions are executed. Although our goal of counting user function calls has been achieved, we'd like to go a little further.
In particular, we want to provide statistics on the multithreading characteristics of the target application:

Print the name of the executable that has been traced, as well as the total number of calls to user-defined functions.
Print how many function calls each thread executed. This shows whether all threads execute approximately the same number of function calls.
Print a function list with the call counts for each thread. This allows us to identify those functions executed sequentially and also provides a detailed comparison to verify load balancing at the level of individual functions.

The D program that implements this additional functionality is shown below.

 1 #!/usr/sbin/dtrace -s
 2
 3 #pragma D option quiet
 4 #pragma D option aggsortkey=1
 5 #pragma D option aggsortkeypos=0
 6
 7 BEGIN {
 8 printf("\n======================================================================\n");
 9 printf(" Function Call Count Statistics\n");
10 printf("======================================================================\n");
11 }
12 pid$target:a.out:main:return
13 {
14 executable_name = execname;
15 }
16 pid$target:a.out::entry
17 {
18 @total_call_counts = count();
19 @call_counts_per_function[probefunc] = count();
20 @call_counts_per_thr[tid] = count();
21 @call_counts_per_function_and_thr[probefunc,tid] = count();
22 }
23 END {
24 printf("\n============================================================\n");
25 printf("Name of the executable : %s\n" , executable_name);
26 printa("Total function call counts : %@lu\n", @total_call_counts);
27
28 printf("\n============================================================\n");
29 printf(" Aggregated Function Call Counts\n");
30 printf("============================================================\n");
31 printf("%-40s %12s\n\n", "Function name", "Count");
32 printa("%-40s %@12lu\n", @call_counts_per_function);
33
34 printf("\n============================================================\n");
35 printf(" Function Call Counts Per Thread\n");
36 printf("============================================================\n");
37 printf("%6s %12s\n\n", "TID", "Count");
38 printa("%6d %@12lu\n", @call_counts_per_thr);
39
40 printf("\n============================================================\n");
41 printf(" Thread Level Function Call Counts\n");
42 printf("============================================================\n");
43 printf("%-40s %6s %10s\n\n", "Function name", "TID", "Count");
44 printa("%-40s %6d %@10lu\n", @call_counts_per_function_and_thr);
45 }

The first 11 lines are unchanged. Lines 12-15 define an additional probe that looks remarkably similar to the probe we have used so far, but there is an important difference. The wild card for the function name is gone; instead we specify main explicitly. That means this probe fires only for the main program. This is exactly what we want here, because this probe is only used to capture the name of the executable, which is available through the built-in variable execname. Another minor difference is that this probe triggers upon the return from this function. This is purely for demonstration purposes; the same result would be obtained if the trigger was on the entry to this function. One may wonder why we do not capture the name of the executable in the BEGIN probe. After all, it fires at the start of the tracing process and only once. The issue is that at this point in the tracing, execname does not return the name of the executable, but the file name of the D program.

The probe used in the previous version of the D program has been extended to gather more statistics. There are now four aggregations, at lines 18-21:

At line 18 we simply increment the counter each time this probe triggers. In other words, aggregation @total_call_counts contains the total number of function calls. The statement at line 19 is identical to what was used in the previous version of this probe.
At line 20, the tid built-in variable is used as the key into an aggregation called @call_counts_per_thr. This variable contains the integer id of the thread triggering the probe. The count() aggregation function is used as the value. Therefore this statement counts how many function calls a specific thread has executed. Another aggregation, called @call_counts_per_function_and_thr, is used at line 21. Here we use both the probefunc and tid built-in variables as a key. Again the count() aggregation function is used as the value. In this way we break down the number of calls from the function(s) triggering this probe by the thread id.

The END probe is more extensive than before and spans lines 23-45. There are no new features or constructs, though. The aggregations are printed in a similar way and the "@" symbol is used in the format string to print the results of the aggregations. The results of this D program are shown below.

======================================================================
 Function Call Count Statistics
======================================================================

============================================================
Name of the executable : mxv.par.exe
Total function call counts : 21

============================================================
 Aggregated Function Call Counts
============================================================
Function name                                   Count

allocate_data                                       1
check_results                                       1
determine_work_per_thread                           4
driver_mxv                                          4
get_user_options                                    1
get_workload_stats                                  1
init_data                                           1
main                                                1
mxv_core                                            4
my_timer                                            2
print_all_results                                   1

============================================================
 Function Call Counts Per Thread
============================================================
   TID        Count

 20679           13
 20680            2
 20681            2
 20682            2
 20683            2

============================================================
 Thread Level Function Call Counts
============================================================
Function name                             TID      Count

allocate_data                           20679          1
check_results                           20679          1
determine_work_per_thread               20679          4
driver_mxv                              20680          1
driver_mxv                              20681          1
driver_mxv                              20682          1
driver_mxv                              20683          1
get_user_options                        20679          1
get_workload_stats                      20679          1
init_data                               20679          1
main                                    20679          1
mxv_core                                20680          1
mxv_core                                20681          1
mxv_core                                20682          1
mxv_core                                20683          1
my_timer                                20679          2
print_all_results                       20679          1

Right below the header, the name of the executable (mxv.par.exe) and the total number of function calls (21) are printed. This is followed by the same table we saw before. The second table is titled "Function Call Counts Per Thread". The data confirms that 5 threads have been active. There is one master thread and it creates the other four threads. The thread ids are in the range 20679-20683. Note that these numbers are not fixed; a subsequent run most likely shows different numbers. What is presumably the main thread executes 13 function calls. The other four threads execute two function calls each. These numbers don't tell us much about what is really going on under the hood, and this is why we generate a third table, titled "Thread Level Function Call Counts". The data is sorted with respect to the function names. What we see in this table is that the main thread executes all functions other than driver_mxv and mxv_core. These two functions are executed by the four threads that have been created. We also see that function determine_work_per_thread is called four times by the main thread. This function is used to compute the amount of work to be executed by each thread. In a more scalable design, this should be handled by the individual threads. Function my_timer is executed twice by the main thread, because it is called at the start and end of the matrix-vector multiplication. While this table shows the respective thread ids, it is not immediately clear which function(s) each thread executes.
It is not difficult to create a table that shows the sorted thread ids in the first column and the function names, as well as the respective counts, next to the ids. This is left as an exercise to the reader.

There is one more thing we would like to mention. While the focus has been on the user-written functions, there is no reason why other functions cannot be included. For example, we know this program uses the Pthreads library libpthread.so. In case functions from this library should be counted as well, a one-line addition to the main probe is sufficient:

1 pid$target:a.out::entry,
2 pid$target:libpthread.so:pthread_*:entry
3 {
4 @total_call_counts = count();
5 @call_counts_per_function[probefunc] = count();
6 @call_counts_per_thr[tid] = count();
7 @call_counts_per_function_and_thr[probefunc,tid] = count();
8 }

The differences are in lines 1-2. Since we want to use the same actions for both probes, we simply place them back to back, separated by a comma. The second probe specifies the module (libpthread.so), but instead of tracing all functions from this library, for demonstration purposes we use a wild card to select only function names starting with pthread_.

Additional Reading Material

The above examples, plus the high-level coverage of the DTrace concepts and terminology, are hopefully sufficient to get started. More details are beyond the scope of this article, but luckily, DTrace is very well documented. For example, the Oracle Linux DTrace Guide covers DTrace in detail and includes many short code fragments. In case more information is needed, there are many other references and examples. Regarding the latter, the Oracle DTrace Tutorial contains a variety of example programs.


Announcements

Oracle’s Linux Team Wishes the Java Community a Happy 25th

Thanks to Kurt Goebel and Van Okamura for their help with this post.

From one open source community to another, Oracle’s Linux team would like to congratulate the Java community on its 25th anniversary! Java has an impressive history. It was a breakthrough in programming languages, allowing developers to write once and have code run anywhere. And it has enabled developers to create a myriad of innovative solutions that help run our world. Read Georges Saab’s post to learn more. Both open source technologies, Java and Linux benefit from communities that collectively drive their advancement. While the technologies are different, there are areas where they work together and complement each other. One area is Java’s support for Linux HugePages. Using Linux HugePages can improve system performance by reducing the resources needed to manage memory. Less overhead in the system means more resources are available for Java and the Java application, which can make both run faster. Another area is OpenJDK. It has shipped with Linux distributions continuously, and every Linux distribution has Java support out of the box. Linux was and is ubiquitous on a wide range of hardware platforms, and as part of Linux, OpenJDK and Java also run on many different hardware architectures. This helped bring a Java ecosystem to embedded devices. Today, Java and Linux are used in virtually all industries and on everything from laptops to data centers, clouds to satellites, game consoles to scientific supercomputers. Here's to 25 more years of being moved by Java. From all of us (and Oracle Tux), we wish the Java community continued success. #MovedbyJava #OracleLinux


Linux

IT Convergence Improves End User Experience with Quicker Server Builds, Improved SLAs and Reduced Support Costs

IT Convergence is a global applications services provider. For the past 20 years, it has offered customers Oracle solutions, such as enterprise applications like Oracle E-Business Suite. This article explores how IT Convergence has built servers faster, improved its SLAs, and reduced support costs since moving to a hosted cloud services environment running on Oracle Linux and Oracle VM. As an Oracle Platinum Partner, IT Convergence has a comprehensive service offering across all three pillars of the cloud (IaaS, PaaS, SaaS). It can build, manage, and optimize customer solutions. Additionally, it can provide connectivity into Oracle Cloud Infrastructure. These solutions create value for thousands of customers globally, including one-third of the Fortune 500. Before IT Convergence moved its environment to Oracle Linux and Oracle VM, it had a hybrid environment running Red Hat Enterprise Linux and VMware. Upon choosing Oracle Linux, IT Convergence decided to use the Unbreakable Enterprise Kernel (UEK) for Oracle Linux as it proved particularly fast with Oracle E-Business Suite. This video interview explains how easy and painless the conversion to Oracle Linux and Oracle VM was for IT Convergence. Its teams were able to convert 2000 servers online, without any downtime or reboots, within three months. This move to Oracle Linux and Oracle VM has resulted in several benefits. By using Oracle VM and Oracle VM Templates, IT Convergence can now build servers more rapidly for its customers. What previously took 20 hours of manual work now takes the team about two hours. Oracle VM Templates are self-contained, pre-configured virtual machines of key Oracle technologies. Each Oracle VM Template is packaged using Oracle best practices, which helps eliminate installation and configuration costs, reduces risk, and dramatically shortens deployment time. Other benefits from migrating to Oracle Linux and Oracle VM are related to technical support.
IT Convergence was supporting multiple operating systems (OS) and hypervisor solutions. This added complexity when attempting to resolve support tickets across different vendors. Specifically, significant time was spent on root cause analysis. Consequently, the operations team was not always able to complete a support ticket within its two-hour SLA window. Given that the rest of IT Convergence’s stack is largely Oracle, from the database to the applications level, using Oracle Linux and Oracle VM simplified its vendor portfolio. It also unified support across the applications, OS, and hypervisors. Now, any support tickets go through a single vendor for resolution. This has improved the team’s overall technical support SLA capabilities. Additionally, by moving to Oracle Linux and Oracle VM Premier Support, IT Convergence saved approximately $100,000 annually. These support cost savings in turn allow IT Convergence to offer more competitive pricing to its customers. A win-win! We are proud to help customers like IT Convergence improve their operational capabilities and customer offerings. Watch this video to learn more!


Linux

Getting Started With The Vagrant Libvirt Provider For Oracle Linux

Introduction As recently announced by Sergio, we now support the libvirt provider for our Oracle Linux Vagrant Boxes. The libvirt provider is a good alternative to the virtualbox one when you already use KVM on your host, as KVM and VirtualBox virtualization are mutually exclusive. It is also a good choice when running Vagrant on Oracle Cloud Infrastructure. This blog post will guide you through the simple steps needed to use these new boxes on your Oracle Linux host (Release 7 or 8).

Virtualization

Virtualization is easily installed using the Virtualization Host package group. On Oracle Linux 7, first enable the ol7_kvm_utils channel to get recent versions of the packages:

sudo yum-config-manager --enable ol7_kvm_utils

After installing the packages, start the libvirtd service and add your user to the libvirt group:

sudo yum group install "Virtualization Host"
sudo systemctl enable --now libvirtd
sudo usermod -a -G libvirt opc

Do not forget to re-login to activate the group change for your user!

Vagrant

We need to install HashiCorp Vagrant as well as the Vagrant Libvirt Provider contributed plugin:

# Vagrant itself:
sudo yum install https://releases.hashicorp.com/vagrant/2.2.9/vagrant_2.2.9_x86_64.rpm
# Libraries needed for the plugin:
sudo yum install libxslt-devel libxml2-devel libvirt-devel \
  libguestfs-tools-c ruby-devel gcc make

Oracle Linux 8: at the time of this writing there is a compatibility issue between system libraries and the ones embedded with Vagrant.
Run the following script as root to update the Vagrant libraries:

#!/usr/bin/env bash
# Description: override krb5/libssh libraries in Vagrant embedded libraries
set -e
# Get pre-requisites
dnf -y install \
  libxslt-devel libxml2-devel libvirt-devel \
  libguestfs-tools-c ruby-devel \
  gcc byacc make cmake gcc-c++
mkdir -p vagrant-build
cd vagrant-build
dnf download --source krb5-libs libssh
# krb5
rpm2cpio krb5-1.17-*.src.rpm | cpio -idmv krb5-1.17.tar.gz
tar xzf krb5-1.17.tar.gz
pushd krb5-1.17/src
./configure
make
cp -a lib/crypto/libk5crypto.so.3* /opt/vagrant/embedded/lib64/
popd
# libssh
rpm2cpio libssh-0.9.0-*.src.rpm | cpio -imdv libssh-0.9.0.tar.xz
tar xJf libssh-0.9.0.tar.xz
mkdir build
pushd build
cmake ../libssh-0.9.0 -DOPENSSL_ROOT_DIR=/opt/vagrant/embedded
make
cp lib/libssh* /opt/vagrant/embedded/lib64/
popd

We are now ready to install the plugin (as your non-privileged user):

vagrant plugin install vagrant-libvirt

Firewall

The libvirt provider uses NFS to mount the /vagrant shared folder in the guest. Your firewall must be configured to allow the NFS traffic between the host and the guest.

Oracle Linux 7

You can allow NFS traffic in your default zone:

sudo firewall-cmd --permanent --add-service=nfs3
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo systemctl restart firewalld

Alternatively you can add the libvirt bridge to your trusted zone:

sudo firewall-cmd --zone=trusted --add-interface=virbr1
sudo systemctl restart firewalld

Oracle Linux 8

With Oracle Linux 8, the libvirt bridge is automatically added to the libvirt zone.
Traffic must be allowed in that zone:

sudo firewall-cmd --permanent --zone libvirt --add-service=nfs3
sudo firewall-cmd --permanent --zone libvirt --add-service=mountd
sudo firewall-cmd --permanent --zone libvirt --add-service=rpc-bind
sudo systemctl restart firewalld

Privilege considerations

To configure NFS, Vagrant requires root privileges when you start/stop guest instances. Unless you are happy to enter your password on every vagrant up, you should consider enabling password-less sudo for your user. Alternatively, you can enable fine-grained sudoers access as described in the Root Privilege Requirement section of the Vagrant documentation.

Using libvirt boxes

Your first libvirt guest

You are now ready to use libvirt-enabled boxes!

mkdir ol7
cd ol7
vagrant init oraclelinux/7 https://oracle.github.io/vagrant-boxes/boxes/oraclelinux/7.json
vagrant up

Libvirt configuration

While the libvirt provider exposes quite a lot of configuration parameters, most Vagrantfiles will run with little or no modification. Typically, where you have this for VirtualBox:

config.vm.provider "virtualbox" do |vb|
  vb.cpus = 4
  vb.memory = 4096
end

you will need this for libvirt:

config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 4
  libvirt.memory = 4096
end

The Oracle vagrant-boxes repository is being updated to support the new libvirt boxes.

Tips and tricks

Virsh

The virsh command can be used to monitor the libvirt resources. By default vagrant-libvirt uses the qemu:///system URI to connect to the KVM hypervisor, and images are stored in the default storage pool.
Example:

[opc@bommel ~]$ vagrant global-status
id      name               provider state   directory
--------------------------------------------------------------------------------------------------
7ec55b3 ol7-vagrant        libvirt  shutoff /home/opc/src/vagrant-boxes/OracleLinux/7
3fd9dd9 registry           libvirt  shutoff /home/opc/src/vagrant-boxes/ContainerRegistry
c716711 ol7-docker-engine  libvirt  running /home/opc/src/vagrant-boxes/DockerEngine
6a0cb46 worker1            libvirt  running /home/opc/src/vagrant-boxes/OLCNE
a262a29 worker2            libvirt  running /home/opc/src/vagrant-boxes/OLCNE
538e659 master1            libvirt  running /home/opc/src/vagrant-boxes/OLCNE
b6d2661 ol6-vagrant        libvirt  running /home/opc/src/vagrant-boxes/OracleLinux/6
41aaa7e oracle-19c-vagrant libvirt  running /home/opc/src/vagrant-boxes/OracleDatabase/19.3.0

[opc@bommel ~]$ virsh -c qemu:///system list --all
 Id   Name                             State
-------------------------------------------------
 23   DockerEngine_ol7-docker-engine   running
 24   OLCNE_worker1                    running
 25   OLCNE_worker2                    running
 26   OLCNE_master1                    running
 30   6_ol6-vagrant                    running
 31   19.3.0_oracle-19c-vagrant        running
 -    7_ol7-vagrant                    shut off
 -    ContainerRegistry_registry       shut off

[opc@bommel ~]$ virsh -c qemu:///system vol-list --pool default
 Name                                                        Path
-----------------------------------------------------------------------------------------------------------------------------------------------
 19.3.0_oracle-19c-vagrant.img                               /var/lib/libvirt/images/19.3.0_oracle-19c-vagrant.img
 6_ol6-vagrant.img                                           /var/lib/libvirt/images/6_ol6-vagrant.img
 7_ol7-vagrant.img                                           /var/lib/libvirt/images/7_ol7-vagrant.img
 ContainerRegistry_registry-vdb.qcow2                        /var/lib/libvirt/images/ContainerRegistry_registry-vdb.qcow2
 ContainerRegistry_registry.img                              /var/lib/libvirt/images/ContainerRegistry_registry.img
 DockerEngine_ol7-docker-engine-vdb.qcow2                    /var/lib/libvirt/images/DockerEngine_ol7-docker-engine-vdb.qcow2
 DockerEngine_ol7-docker-engine.img                          /var/lib/libvirt/images/DockerEngine_ol7-docker-engine.img
 ol7-latest_vagrant_box_image_0.img                          /var/lib/libvirt/images/ol7-latest_vagrant_box_image_0.img
 OLCNE_master1.img                                           /var/lib/libvirt/images/OLCNE_master1.img
 OLCNE_worker1.img                                           /var/lib/libvirt/images/OLCNE_worker1.img
 OLCNE_worker2.img                                           /var/lib/libvirt/images/OLCNE_worker2.img
 oraclelinux-VAGRANTSLASH-6_vagrant_box_image_6.10.130.img   /var/lib/libvirt/images/oraclelinux-VAGRANTSLASH-6_vagrant_box_image_6.10.130.img
 oraclelinux-VAGRANTSLASH-6_vagrant_box_image_6.10.132.img   /var/lib/libvirt/images/oraclelinux-VAGRANTSLASH-6_vagrant_box_image_6.10.132.img
 oraclelinux-VAGRANTSLASH-7_vagrant_box_image_7.7.17.img     /var/lib/libvirt/images/oraclelinux-VAGRANTSLASH-7_vagrant_box_image_7.7.17.img
 oraclelinux-VAGRANTSLASH-7_vagrant_box_image_7.8.135.img    /var/lib/libvirt/images/oraclelinux-VAGRANTSLASH-7_vagrant_box_image_7.8.135.img

Removing box image

The vagrant box remove command removes the box from the user's .vagrant directory, but not from the storage pool. Use virsh to clean up the pool:

[opc@bommel ~]$ vagrant box list
oraclelinux/6 (libvirt, 6.10.130)
oraclelinux/6 (libvirt, 6.10.132)
oraclelinux/7 (libvirt, 7.8.131)
oraclelinux/7 (libvirt, 7.8.135)
[opc@bommel ~]$ vagrant box remove oraclelinux/6 --provider libvirt --box-version 6.10.130
Removing box 'oraclelinux/6' (v6.10.130) with provider 'libvirt'...
Vagrant-libvirt plugin removed box only from your LOCAL ~/.vagrant/boxes directory
From Libvirt storage pool you have to delete image manually(virsh, virt-manager or by any other tool)
[opc@bommel ~]$ virsh -c qemu:///system vol-delete --pool default oraclelinux-VAGRANTSLASH-6_vagrant_box_image_6.10.130.img
Vol oraclelinux-VAGRANTSLASH-6_vagrant_box_image_6.10.130.img deleted

Libvirt CPU emulation mode

The default libvirt CPU emulation mode is host-model, that is: the guest inherits capabilities from the host.
Should the guest not start in this mode, you can override it using the custom mode, e.g.:

config.vm.provider :libvirt do |libvirt|
  libvirt.cpu_mode = 'custom'
  libvirt.cpu_model = 'Skylake-Server-IBRS'
  libvirt.cpu_fallback = 'allow'
end

You can list the available CPU models with virsh cpu-models x86_64.

Storage

By default, the Vagrant Libvirt provider will use the default libvirt storage pool, which stores images in /var/lib/libvirt/images. The storage_pool_name option allows you to use any other pool/location. Example: on the libvirt side, create a pool:

[opc@bommel ~]$ virsh -c qemu:///system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # pool-define-as vagrant dir --target /data/vagrant
Pool vagrant defined

virsh # pool-start vagrant
Pool vagrant started

virsh # pool-autostart vagrant
Pool vagrant marked as autostarted

In your Vagrantfile, set the storage_pool_name option:

config.vm.provider :libvirt do |libvirt|
  libvirt.storage_pool_name = 'vagrant'
end

Vagrant Libvirt defaults

If you have site-specific options, instead of modifying all your Vagrantfiles, you can define them globally in ~/.vagrant.d/Vagrantfile (see Load Order and Merging). E.g.:

# Vagrant local defaults
Vagrant.configure(2) do |config|
  config.vm.provider :libvirt do |libvirt|
    libvirt.cpu_mode = 'custom'
    libvirt.cpu_model = 'Skylake-Server-IBRS'
    libvirt.cpu_fallback = 'allow'
    libvirt.storage_pool_name = 'vagrant'
  end
end

VirtualBox and libvirt on the same host

You cannot run VirtualBox and libvirt guests at the same time, but you can still have both installed and switch from one to the other, provided there is no guest VM running when you switch. The only thing you have to do is to stop/start their respective services, e.g. to switch from VirtualBox to libvirt:

systemctl stop vboxdrv.service
systemctl start libvirtd.service

Screencast
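Returning to the password-less sudo option mentioned in the privilege considerations above: a minimal sketch of a sudoers drop-in is shown below. The user name opc is an assumption taken from the earlier usermod example; note this grants broad rights, so the fine-grained Cmnd_Alias approach from the Vagrant documentation's Root Privilege Requirement section is the safer choice.

```
# /etc/sudoers.d/vagrant -- illustrative sketch only.
# Lets the (assumed) opc user run commands without a password prompt,
# so vagrant up can configure NFS exports unattended.
# Prefer the fine-grained Cmnd_Alias entries from the Vagrant docs.
opc ALL=(ALL) NOPASSWD: ALL
```

Install such a file with visudo -f /etc/sudoers.d/vagrant so the syntax is validated before it takes effect.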


Linux

Using rclone to copy data in and out of Oracle Cloud Object Storage

Introduction In this blog post I’ll show how to use rclone on Oracle Linux with the free object storage services included in Oracle Cloud Free Tier. The free tier includes 20 GiB of object storage. Rclone is a command line program to sync files and directories to and from various cloud-based storage services. Oracle Cloud Object Storage is Amazon S3 compatible, so I’ll use Rclone’s S3 capabilities to move data between my local Oracle Linux system and object storage. One way to configure Rclone is to run rclone config and step through a series of questions, adding Oracle Cloud Object Storage as an S3 compatible provider. Instead, I’m going to use Oracle Cloud Infrastructure’s Cloud Shell to gather the relevant data and construct what’s ultimately a small configuration file. The high-level steps are:

On Oracle Cloud Infrastructure:
Create an object storage bucket
Create an Access Key/Secret Key pair
Gather relevant values for Rclone configuration

On your local Linux system:
Install Rclone
Create an Rclone config file

Create Object Storage Bucket

One of the benefits of Cloud Shell is that it includes pre-configured OCI client tools, so you can begin using the command line interface without any configuration steps.

Accessing OCI Cloud Shell

Starting in Cloud Shell, set up environment variables to make running subsequent commands easier. The following stores your region, S3 compartment OCID, and storage namespace in environment variables. I'm using both JMESPath and jq to parse JSON for illustration purposes.

export R=$(curl -s http://169.254.169.254/opc/v1/instance/ | jq -r '.region')
export C=$(oci os ns get-metadata --query 'data."default-s3-compartment-id"' --raw-output)
export N=$(oci os ns get | jq -r '.data')

Cloud Shell in action

To create a storage bucket:

oci os bucket create --name mybucket --compartment-id $C

Create an Access Key/Secret Key pair

The Amazon S3 Compatibility API relies on a signing key called a Customer Secret Key.
export U=$(oci os bucket list --compartment-id=$C --query 'data [?"name"==`mybucket`] | [0]."created-by"' --raw-output)
oci iam customer-secret-key create --display-name storagekey --user-id $U
export K=$(oci iam customer-secret-key list --user-id $U | jq -r '[.data[] | select (."display-name"=="storagekey")][0]."id"')

In the response from oci iam customer-secret-key create, id corresponds to the access key and key represents the secret key. Make a note of the key immediately, because it will not be shown to you again! Finally, gather up the relevant values for the Rclone configuration. Remember to copy the secret key and save it somewhere. Run the following to collect and display the information you need:

echo "ACCESS KEY: $K"; echo "SECRET KEY: check your notes"; echo "NAMESPACE: $N"; echo "REGION: $R"

Set Up Linux System with Rclone

Over to the local system on which Rclone will be used to move files to and from object storage.

Install Rclone

To install Rclone:

$ sudo yum install -y oracle-epel-release-el7 && sudo yum install -y rclone

Create the Rclone Configuration File

In your home directory, create a file, .rclone.conf, using the contents below, replacing the values you gathered earlier:

[myobjectstorage]
type = s3
provider = Other
env_auth = false
access_key_id = <ACCESS KEY>
secret_access_key = <SECRET KEY>
endpoint = <NAMESPACE>.compat.objectstorage.<REGION>.oraclecloud.com

Note that if the storage bucket you created is not in your home region, you may also need to add this entry to the [myobjectstorage] stanza:

region = <REGION>

Running Rclone

You are now ready to start copying files to object storage. The following copies a file, myfile.txt, to object storage. You can show the contents of object storage using rclone ls.

$ echo `date` > myfile.txt
$ rclone copy myfile.txt myobjectstorage:/mybucket
$ rclone ls myobjectstorage:/mybucket

Conclusion

Rclone is a useful command line utility to interact with, among other types, S3 compatible cloud-based object storage.
Oracle Cloud Object Storage has an S3 compatible API. In this blog post, I showed how to install Rclone from Oracle Linux yum server and configure it using free Oracle Cloud Object Storage. References Rclone documentation
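As a small sanity check on the configuration step above, the endpoint line can be assembled from the namespace and region gathered in Cloud Shell before it goes into .rclone.conf. The namespace and region values below are placeholders, not real ones:

```shell
# Assemble the S3-compatible endpoint from the namespace ($N) and
# region ($R) collected earlier. Placeholder values shown here.
N="mynamespace"
R="us-ashburn-1"
ENDPOINT="${N}.compat.objectstorage.${R}.oraclecloud.com"
echo "$ENDPOINT"
# prints mynamespace.compat.objectstorage.us-ashburn-1.oraclecloud.com
```

Comparing this output against the endpoint in the config file catches the most common mistake, namely swapping the namespace and region segments.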


Linux

Oracle Linux Vagrant Boxes Now Include Catalog Data and Add Support for libvirt Provider

Introduction We recently made some changes to the way we publish Oracle Linux Vagrant Boxes. First, we added boxes for the libvirt provider, for use with KVM. Second, we added Vagrant catalog data in JSON format.

Vagrant Box Catalog Data

With the catalog data in place, instead of launching vagrant environments using a URL to a box file, you launch them by pointing to a JSON file. For example:

$ vagrant init oraclelinux/7 https://oracle.github.io/vagrant-boxes/boxes/oraclelinux/7.json
$ vagrant up
$ vagrant ssh

This creates a Vagrantfile that includes the following two lines, which cause the most recently published Oracle Linux 7 box to be downloaded (if needed) and started:

config.vm.box = "oraclelinux/7"
config.vm.box_url = "https://oracle.github.io/vagrant-boxes/boxes/oraclelinux/7.json"

Using this catalog-based approach to referencing Vagrant boxes adds version awareness and the ability to update boxes. During vagrant up you’ll be notified when a newer version of a box is available for your environment:

$ vagrant up; vagrant ssh
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'oraclelinux/7' version '7.7.15' is up to date...
==> default: A newer version of the box 'oraclelinux/7' for provider 'virtualbox' is
==> default: available! You currently have version '7.7.15'. The latest is version
==> default: '7.8.128'. Run `vagrant box update` to update.

To update a box:

$ vagrant box update
==> default: Checking for updates to 'oraclelinux/7'
    default: Latest installed version: 7.7.15
    default: Version constraints:
    default: Provider: virtualbox
==> default: Updating 'oraclelinux/7' with provider 'virtualbox' from version
==> default: '7.7.15' to '7.8.128'...
==> default: Loading metadata for box 'https://oracle.github.io/vagrant-boxes/boxes/oraclelinux/7.json'
==> default: Adding box 'oraclelinux/7' (v7.8.128) for provider: virtualbox
    default: Downloading: https://yum.oracle.com/boxes/oraclelinux/ol7/ol7u8-virtualbox-b128.box
    default: Calculating and comparing box checksum...
==> default: Successfully added box 'oraclelinux/7' (v7.8.128) for 'virtualbox'!

To check if a later version of a box is available:

$ vagrant box outdated --global
* 'rpmcheck' for 'virtualbox' wasn't added from a catalog, no version information
* 'oraclelinux/7' for 'virtualbox' is outdated! Current: 7.7.15. Latest: 7.8.128
* 'oraclelinux/6' for 'virtualbox' is outdated! Current: 6.10.13. Latest: 6.10.127
* 'oraclelinux/6' for 'virtualbox' is outdated! Current: 6.8.3. Latest: 6.10.127

Another benefit of using catalog data to install Oracle Linux Vagrant boxes is that checksums are automatically verified after download.

Vagrant Boxes for libvirt Provider

With the newly released boxes for the libvirt provider you can create Oracle Linux Vagrant environments using KVM as the hypervisor. In this blog post, Philippe explains how to get started.

Conclusion

In this blog post, I discussed changes we made to the way we publish Oracle Linux Vagrant boxes and showed how to use Vagrant box catalog data to install and manage box versions.
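For reference, a Vagrant box catalog file follows Vagrant's box metadata format: a box name plus a list of versions, each listing one or more providers with a download URL and checksum. The fragment below is an illustrative sketch of that structure; the version number, URL, and checksum are placeholders, not the actual contents of the published 7.json.

```json
{
  "name": "oraclelinux/7",
  "versions": [
    {
      "version": "7.8.128",
      "providers": [
        {
          "name": "virtualbox",
          "url": "https://yum.oracle.com/boxes/oraclelinux/ol7/ol7u8-virtualbox-b128.box",
          "checksum_type": "sha256",
          "checksum": "<sha256 of the box file>"
        }
      ]
    }
  ]
}
```

It is the checksum fields in this file that let Vagrant verify downloads automatically, and the versions array that powers the vagrant box update and vagrant box outdated workflows shown above.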


Announcements

Updated Oracle Database images now available in the Oracle Cloud Marketplace

Oracle is pleased to announce the availability of updated Oracle Database images in the Oracle Cloud Marketplace. By leveraging these images, you have the option to automatically deploy a fully functional database environment by pasting a simple cloud-config script; the deployment allows for basic customization of the environment, and further configuration, like adding extra disks or NICs, is always possible post-deployment. The framework allows for simple cleanup and re-deployment, either via the Marketplace interface (terminate the instance and re-launch), or by cleaning up the instance from within and re-deploying the same instance with changed settings (see Usage Info below). To introduce the different customization options available with the Oracle Database images, we also created a dedicated document with examples of customized Oracle Database deployments. The deployed instance is based on the following software stack:

Oracle Cloud Infrastructure native instance
Oracle Linux 7.8 with UEK5 (Unbreakable Enterprise Kernel, Release 5) Update 3
Updated Oracle Database 12cR2, 18c and 19c with the April 2020 Critical Patch Update

For further information:

Oracle Database deployment on Oracle Cloud Infrastructure
Oracle Database on Oracle Cloud Marketplace
Oracle Cloud Marketplace
Oracle Cloud: Try it for free
Oracle Database Templates for Oracle VM


Announcements

Announcing the release of Oracle Linux 8 Update 2

Oracle is pleased to announce the general availability of Oracle Linux 8 Update 2. Individual RPM packages are available on the Unbreakable Linux Network (ULN) and the Oracle Linux yum server. ISO installation images will soon be available for download from the Oracle Software Delivery Cloud, and Docker images will soon be available via Oracle Container Registry and Docker Hub. Starting with Oracle Linux 8 Update 2, the Unbreakable Enterprise Kernel Release 6 (UEK R6) is included on the installation image along with the Red Hat Compatible Kernel (RHCK). For new installations, UEK R6 is enabled and installed as the default kernel on first boot. UEK R6 is a heavily tested and optimized operating system kernel for Oracle Linux 7 Update 7 and later, and Oracle Linux 8 Update 1 and later. The kernel is developed, built, and tested on 64-bit Arm (aarch64) and 64-bit AMD/Intel (x86-64) platforms. UEK R6 is based on the mainline Linux kernel version 5.4 and includes driver updates, bug fixes, and security fixes; additional features are enabled to provide support for key functional requirements, and patches are applied to improve performance and optimize the kernel for use in enterprise operating environments. Oracle Linux 8 Update 2 ships with:

UEK R6 (kernel-uek-5.4.17-2011.1.2.el8uek) for x86_64 (Intel & AMD) and aarch64 (Arm) platforms
RHCK (kernel-4.18.0-193.el8) for the x86_64 (Intel & AMD) platform

Both kernels include bug fixes, security fixes, and enhancements.
Notable New Features for All Architectures

Unbreakable Enterprise Kernel Release 6 (UEK6)
For information about UEK6, please refer to the UEK6 announcement.

Red Hat Compatible Kernel (RHCK)
"kexec-tools" documentation now includes Kdump FCoE target support
"numactl" manual page updated to clarify information about memory usage
"rngd" can run with non-root privileges
Secure Boot available by default

Compiler and Development Toolset (available as Application Streams)
Clang toolset updated to version 9.0.0
Rust toolset updated to version 1.39
Go toolset updated to 1.13.4
GCC Toolset 9: GCC version updated to 9.2.1, GDB version updated to 8.3
For further GCC Toolset updates, please check the Oracle Linux 8 Update 2 release notes

Database
Oracle Linux 8 Update 2 ships with version 8.0 of the MySQL database

Dynamic Programming Languages, Web
"maven:3.6" module stream available
Python 3.8 is provided by a new python38 module; Python 3.6 continues to be supported in Oracle Linux 8. The introduction of Python 3.8 in Oracle Linux 8 Update 2 requires that you specify which version of "mod_wsgi" you want to install, as Python 3.6 is also supported in this release.
"perl-LDAP" and "perl-Convert-ASN1" packages are now released as part of Oracle Linux 8 Update 2

Infrastructure Services
"bind" updated to version 9.11.13
"tuned" updated to version 2.13

Networking
"eBPF" for the Traffic Control kernel subsystem supported (previously available as a technology preview)
"firewalld" updated to version 0.8
Podman, Buildah, and Skopeo Container Tools are now supported on both UEK R6 and RHCK

Security
"audit" updated to version 3.0-0.14; several improvements introduced between kernel version 4.18 (RHCK) and version 5.4 (UEK R6) of Audit
"lvmdbusd" service confined by SELinux
"openssl-pkcs11" updated to version 0.4.10
"rsyslog" updated to version 8.1911.0
SCAP Security Guide includes ACSC (Australian Cyber Security Centre) Essential Eight support
SELinux
setools-gui and setools-console-analyses packages included
SELinux improved to enable confined users to manage user session services
semanage export able to display customizations related to permissive domains
semanage includes capability for listing and modifying SCTP and DCCP ports
"sudo" updated to version 1.8.29-3
"udica" is now capable of adding new allow rules generated from SELinux denials to an existing container policy

Virtualization
Nested Virtual Machine (VM) capability added; this enhancement enables an Oracle Linux 7 or Oracle Linux 8 VM running on an Oracle Linux 8 physical host to perform as a hypervisor and host its own VMs. Note: On AMD64 systems, nested KVM virtualization continues to be a Technology Preview.
virt-manager application deprecated; Oracle recommends using the Cockpit web console to manage virtualization in a GUI

Cockpit Web Console
Web console sessions are automatically logged out after 15 minutes of inactivity
Option for logging into the web console with a TLS client certificate added
Creating a new file system in the web console requires a specified mount point
Virtual Machines management page improvements

Important changes in this release

UEK R6 brings back support for the Btrfs and OCFS2 file systems. These are not available while using the Red Hat Compatible Kernel (RHCK).

Further Information on Oracle Linux 8

For more details about these and other new features and changes, please consult the Oracle Linux 8 Update 2 Release Notes and Oracle Linux 8 Documentation. Oracle Linux can be downloaded, used, and distributed free of charge, and all updates and errata are freely available. Customers decide which of their systems require a support subscription. This makes Oracle Linux an ideal choice for development, testing, and production systems. The customer decides which support coverage is best for each individual system while keeping all systems up to date and secure.
Customers with Oracle Linux Premier Support also receive support for additional Linux programs, including Gluster Storage, Oracle Linux Software Collections, and zero-downtime kernel updates using Oracle Ksplice. Application Compatibility Oracle Linux maintains user-space compatibility with Red Hat Enterprise Linux (RHEL), which is independent of the kernel version that underlies the operating system. Existing applications in user space will continue to run unmodified on the Unbreakable Enterprise Kernel Release 6 (UEK R6) and no re-certifications are needed for RHEL certified applications. For more information about Oracle Linux, please visit www.oracle.com/linux.


Announcements

NTT Data Intellilink - Powering Mission Critical Workloads with Oracle Linux

This article highlights customer NTT Data Intellilink and its use of Oracle Linux. As an NTT DATA group company, it aims to provide value for its customers through the design, implementation, and operation of mission-critical information and communication systems platforms built with the latest technologies. Within NTT Data Intellilink, there is a business unit which focuses on providing customers with Oracle solutions, support, and implementation services using a wide range of Oracle products such as Oracle Database, Oracle Fusion Middleware, Oracle Linux, and Oracle Engineered Systems. These solutions are deployed on premises or in Oracle Cloud. NTT Data Intellilink had been using Red Hat Enterprise Linux before switching to Oracle Linux with the Unbreakable Enterprise Kernel (UEK). This change resulted in multiple benefits, which they speak about in this video. These include optimized workload performance, improved support across the entire stack, increased security, and a 50% reduction in overall costs. NTT Data Intellilink also found that Oracle's flexible support contracts were easier to manage. Additionally, NTT Data Intellilink has improved its systems management experience by leveraging Oracle Enterprise Manager and Oracle Ksplice across its portfolio. Ksplice, Oracle's zero-downtime patching technology, allows patches and critical bug fixes to be applied without taking systems down. Both are included at no additional cost with Oracle Linux Premier Support. We are proud to enable customers like NTT Data Intellilink to deliver mission-critical systems at a lower cost. Watch this video to learn more!


Announcements

Staying Ahead of Cyberthreats: Protecting Your Linux Systems with Oracle Ksplice

In this recently published white paper, "Staying Ahead of Cyberthreats: Protecting Your Linux Systems with Oracle Ksplice," we explain why regular operating system patching is so important and how Oracle Ksplice can help better protect your Linux systems. In the face of increasingly sophisticated cyberthreats, protecting IT systems has become vitally important. To help administrators more easily and regularly apply Linux updates, Oracle Ksplice offers an automated zero-downtime solution that simplifies the patching process. Ksplice allows users to automate patching of the Linux kernel, both Xen and KVM hypervisors, and critical user space libraries. It is currently the only solution to offer user space patching. Ksplice also offers several other customer benefits, which are explained in the white paper. Additionally, you will find links to customer videos that highlight the value Ksplice is providing in production environments. Customers with an Oracle Linux Premier Support subscription have access to Ksplice at no additional cost. It is available for both on-premises and cloud deployments. We hope you have a chance to learn more by reading the white paper and listening to what customers are saying about Ksplice.


Announcements

Announcing the release of Oracle Linux 7 Security Technical Implementation Guide (STIG) OpenSCAP profile

On February 28, 2020, the Defense Information Systems Agency (DISA) released the Oracle Linux 7 Security Technical Implementation Guide (STIG) Release 1 Version 1 (R1V1). Oracle has implemented the published STIG in Security Content Automation Protocol (SCAP) format and included it in the latest release of the scap-security-guide package for Oracle Linux 7. This can be used in conjunction with the OpenSCAP tool shipped with Oracle Linux to validate a server against the published implementation guide. The validation process can also suggest, and in some cases automatically apply, remediation where compliance is not met.

Running a STIG compliance scan with OpenSCAP

To validate a server against the published profile, install the OpenSCAP scanner tool and the SCAP Security Guide content:

# yum install openscap scap-security-guide
Loaded plugins: ovl, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package openscap.x86_64 0:1.2.17-9.0.3.el7 will be installed
...
Dependencies Resolved
================================================================================
 Package              Arch    Version            Repository   Size
================================================================================
Installing:
 openscap             x86_64  1.2.17-9.0.3.el7   ol7_latest   3.8 M
 scap-security-guide  noarch  0.1.46-11.0.2.el7  ol7_latest   7.9 M
Installing for dependencies:
 libxslt              x86_64  1.1.28-5.0.1.el7   ol7_latest   241 k
 openscap-scanner     x86_64  1.2.17-9.0.3.el7   ol7_latest    62 k
 xml-common           noarch  0.6.3-39.el7       ol7_latest    26 k

Transaction Summary
================================================================================
Install  2 Packages (+3 Dependent packages)
...
Installed:
  openscap.x86_64 0:1.2.17-9.0.3.el7  scap-security-guide.noarch 0:0.1.46-11.0.2.el7
...
Complete!
To confirm you have the STIG profile available, run:

# oscap info --profile stig /usr/share/xml/scap/ssg/content/ssg-ol7-xccdf.xml
Document type: XCCDF Checklist
Profile
    Title: DISA STIG for Oracle Linux 7
        Id: stig
        Description: This profile contains configuration checks that align to the DISA STIG for Oracle Linux V1R1.

To start an evaluation of the host against the profile, run:

# oscap xccdf eval --profile stig \
    --results /tmp/`hostname`-ssg-results.xml \
    --report /var/www/html/`hostname`-ssg-results.html \
    --cpe /usr/share/xml/scap/ssg/content/ssg-ol7-cpe-dictionary.xml \
    /usr/share/xml/scap/ssg/content/ssg-ol7-xccdf.xml
WARNING: This content points out to the remote resources. Use `--fetch-remote-resources' option to download them.
WARNING: Skipping https://linux.oracle.com/security/oval/com.oracle.elsa-all.xml.bz2 file which is referenced from XCCDF content
Title   Remove User Host-Based Authentication Files
Rule    no_user_host_based_files
Result  pass

Title   Remove Host-Based Authentication Files
Rule    no_host_based_files
Result  pass

Title   Uninstall rsh-server Package
Rule    package_rsh-server_removed
Result  pass
...

The results will be saved to /tmp/hostname-ssg-results.xml, and a human-readable report will be saved to /var/www/html/hostname-ssg-results.html as well. For further details on additional options for running OpenSCAP compliance checks, including ways to generate a full security guide from SCAP content, please see the Oracle Linux 7 Security Guide. For details on methods to automate OpenSCAP scanning using Spacewalk, please see the Spacewalk for Oracle Linux: Client Life Cycle Management Guide. For community-based support, please visit the Oracle Linux space on the Oracle Groundbreakers Community.
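Once a scan has produced a results XML file, it can also be post-processed outside of oscap. The following Python snippet is an illustrative sketch (not part of the official tooling) that tallies the rule outcomes in an XCCDF results file; it matches on local tag names because the XCCDF namespace URI varies between schema versions.

```python
"""Summarize rule results from an OpenSCAP XCCDF results file (sketch)."""
from collections import Counter
import xml.etree.ElementTree as ET


def local_name(tag: str) -> str:
    """Strip a '{namespace}' prefix from an ElementTree tag."""
    return tag.rsplit('}', 1)[-1]


def summarize_results(xml_text: str) -> Counter:
    """Count rule outcomes (pass, fail, notapplicable, ...) in XCCDF results."""
    counts = Counter()
    for elem in ET.fromstring(xml_text).iter():
        # XCCDF records one rule-result element per evaluated rule.
        if local_name(elem.tag) == 'rule-result':
            for child in elem:
                if local_name(child.tag) == 'result':
                    counts[(child.text or '').strip()] += 1
    return counts
```

Pointing this at the /tmp/...-ssg-results.xml file produced above gives a quick pass/fail tally without opening the HTML report.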


Announcements

Announcing Oracle Linux Cloud Native Environment Release 1.1

Oracle is pleased to announce the general availability of Oracle Linux Cloud Native Environment Release 1.1. This release includes several new features for cluster management, updates to the existing Kubernetes module, and introduces new Helm and Istio modules. Oracle Linux Cloud Native Environment is an integrated suite of software and tools for the development and management of cloud-native applications. Based on the Open Container Initiative (OCI) and Cloud Native Computing Foundation (CNCF) standards, Oracle Linux Cloud Native Environment delivers a simplified framework for installations, updates, upgrades, and configuration of key features for orchestrating microservices.

New features and notable changes

Several improvements and enhancements have been made to the installation and management of Oracle Linux Cloud Native Environment, including: Cluster installation: the olcnectl module install command automatically installs and configures any required RPM packages and services. Load balancer installation: the olcnectl module install command automatically deploys a software load balancer when the --virtual-ip parameter is provided. Cluster upgrades: the olcnectl module update command can update module components. For multi-master deployments, this is done with no cluster service downtime. Cluster scaling: the olcnectl module update command can add and remove both master and worker nodes in a running cluster.

Updated Kubernetes module

Oracle Linux Cloud Native Environment Release 1.1 includes Kubernetes Release 1.17.4. Please review the Release Notes for a list of the significant administrative and API changes between Kubernetes 1.14.8 and 1.17.

New Helm module

Helm is a package manager for Kubernetes that simplifies the task of deploying and managing software inside Kubernetes clusters. In this release, the Helm module is not supported for general use but is required and supported to deploy the Istio module.
New Istio module

Istio is a fully featured service mesh for deploying microservices into Kubernetes clusters. Istio can handle most aspects of microservice management, including identity, authentication, transport security, and metric scraping. The Istio module includes embedded instances of the Prometheus monitoring and Grafana graphing tools, which are automatically configured with specific dashboards to better understand Istio-managed workloads. For more information about installing and using the Istio module, see Service Mesh.

Installation and upgrade

Oracle Linux Cloud Native Environment is installed using packages from the Unbreakable Linux Network or the Oracle Linux yum server, as well as container images from the Oracle Container Registry. Existing deployments can be upgraded in place using the olcnectl module update command. For more information on installing or upgrading Oracle Linux Cloud Native Environment, please see Getting Started.

Support for Oracle Linux Cloud Native Environment

Support for Oracle Linux Cloud Native Environment is included with an Oracle Linux Premier Support subscription.

Documentation and training

Oracle Linux Cloud Native Environment documentation
Oracle Linux Cloud Native Environment training


Linux

What’s new for NFS in Unbreakable Enterprise Kernel Release 6?

Oracle Linux kernel engineer Calum Mackay provides some insight into the new features for NFS in release 6 of the Unbreakable Enterprise Kernel (UEK). UEK R6 is based on the upstream long-term stable Linux kernel v5.4, and introduces many new features compared to the previous version, UEK R5, which is based on the upstream stable Linux kernel v4.14. In this blog, we look at what has been improved in the UEK R6 NFS client & server implementations.

Server-side Copy (NFSv4.2 clients & servers)

UEK R6 adds initial experimental support for parts of the NFSv4.2 server-side copy (SSC) mechanism. This feature considerably increases efficiency when copying a file between two locations on a server, via NFS. Without SSC, this operation requires that the NFS client use READ requests to read all the file's data, then WRITE requests to write it back to the server as a new file, with every byte travelling over the network twice. With SSC, the NFS client may use one of two new NFSv4.2 operations to ask the server to perform the copy locally, on the server itself, without the file data needing to traverse the network at all. Obviously this will be enormously faster.

1. NFS COPY

NFS COPY is a new operation which can be used by the client to request that the server locally copy a range of bytes from one file to another, or indeed the entire file. However, NFS COPY requires use of the copy_file_range client system call. Currently, no bundled utilities in Linux distributions appear to make use of this system call, but an application may easily be written or converted to use it. As soon as support for the copy_file_range system call is added to client utilities, they will be able to make use of the NFSv4.2 COPY operation. Note that NFS COPY does not require any special support within the NFS server filesystem itself.

2. NFS CLONE

The new NFS CLONE operation allows clients to ask the server to use the exported filesystem's reflink mechanism to create a copy-on-write clone of the file, elsewhere within the same server filesystem. NFS CLONE requires the use of client utilities that support reflink; currently cp includes this support, with its --reflink option. In addition, NFS CLONE requires that the NFS server filesystem supports the reflink operation. The filesystems available in Oracle Linux NFS servers that support reflink are btrfs, OCFS2 & XFS. NFS CLONE is much faster even than NFS COPY, since it uses copy-on-write, on the NFS server, to clone the file, provided the source and destination files are within the same filesystem. Note that in some cases the server filesystem may need to have been originally created with reflink support, especially if it was created on Oracle Linux 7 or earlier.

The NFSv4.2 SSC design specifies both intra-server and inter-server operations. UEK R6 supports intra-server operations, i.e. the source and destination files exist on the same NFS server. Support for inter-server SSC (copies between two NFS servers) will be added in the future. Use of these features requires that both NFS client and server support NFSv4.2 SSC; currently, server support for SSC is only available with Linux NFS servers. As an example of the performance gains possible with NFSv4.2 SSC, here are timings for copying a 2GB file between two locations on an NFS server, over a relatively slow network:

Method                                                        Time
Traditional NFS READ/WRITE                                    5 mins 22 seconds
NFSv4.2 COPY (via custom app using copy_file_range syscall)   12 seconds

SSC is specific to NFS version 4.2 or greater.
In Oracle Linux 7, NFSv4.2 is supported (provided the latest UEK kernel and userland packages are installed), but it is not the default NFS version used by an NFS client when mounting filesystems. By default, an OL7 NFS client will mount using NFSv4.1 (provided the NFS server supports it). An NFSv4.2 mount may be performed on an OL7 client as follows:

# command line:
mount -o vers=4.2 server:/export /mnt

# /etc/fstab:
server:/export /mnt nfs noauto,vers=4.2 0 0

An Oracle Linux 8 NFS client will mount using NFSv4.2 by default. Just like OL7, if the NFS server does not support that, the OL8 client will try successively lower NFS versions until it finds one that the server supports.

Multiple TCP connections per NFS server (NFSv4.1 and later clients)

For NFSv4.1 and later mounts over TCP, a new nconnect mount option enables an NFS client to set up multiple TCP connections, using the same client network interface, to the same NFS server. This may improve total throughput in some cases, particularly with bonded networks. Multiple transports allow hardware parallelism on the network path to be fully exploited. However, there are also improvements even using just one NIC, thanks to various efficiency savings. Using multiple connections will help most when a single TCP connection is saturated while the network itself and the server still have capacity. It will not help if the network itself is saturated, and it will still be bounded by the performance of the storage at the NFS server. Enhanced statistics reporting has been added to report on all transports when using multiple connections.

Improved handling of soft mounts (NFSv4 clients)

NOTE: we do not recommend the use of the soft and rw mount options together (and remember that rw is the default) unless you fully understand the implications, including possible data loss or corruption. By default, i.e.
without the soft mount option, NFS mounts are described as hard, which means that NFS operations will not time out in the case of an unresponsive NFS server or network partition. NFS operations, including READ and WRITE, will wait indefinitely, until the NFS server is again reachable. In particular, this means that any such affected NFS filesystem cannot be unmounted, and the NFS client system itself cannot be cleanly shut down, until the NFS server responds. When an NFS filesystem is mounted with the soft mount option, NFS operations will time out after a certain period (based on the timeo and retrans mount options), and the associated system calls (e.g. read, write, fsync, etc.) will return an EIO error to the application. The NFS filesystem may be unmounted, and the NFS client system may be cleanly shut down. This might sound like a useful feature, but it can cause problems, which can be especially severe in the case of rw (read-write) filesystems, because of the following: Client applications often don't expect to get EIO from file I/O request system calls, and may not handle it appropriately. NFS uses asynchronous I/O, which means that the first client write system call isn't necessarily going to return the error, which may instead get reported by a subsequent write, or close, or perhaps only by a subsequent fsync, which the client might not even perform; close may not be guaranteed to report the error, either. Obviously, reporting the error via a subsequent write/fsync makes it harder for the application to deal with correctly. Write interleaving may mean that the NFS client kernel can't always precisely track which file descriptors are involved, so the error may not even be guaranteed to be delivered, or not delivered via the right descriptor on close/fsync.
It's important to realize that the above issues may result in NFS WRITE operations being lost when using the soft mount option, resulting in file corruption and data loss, depending on how well the client application handles these situations. For that reason, it is dangerous to use the soft mount option with rw mounted filesystems, even with UEK R6, unless you are fully aware of how your application(s) handle EIO errors from file I/O request system calls. In UEK R6, the handling of soft mounts with NFSv4 has been improved, in particular: Reducing the risk of false-positive timeouts, e.g. in the case where the NFS server is merely congested. Faster failover of NFS READ and WRITE operations after a timeout. Better association of errors with process/fd. A new optional additional softerr mount option to return ETIMEDOUT (instead of EIO) to the application after a timeout, so that applications written to be aware of this may better differentiate between the timeout case (e.g. to drive a failover response) and other I/O errors, and may choose a different recovery action for each. Mounts using only the soft mount option will see the other improvements, but timeout errors will still be returned to the application as EIO. Existing applications not written specifically to deal with ETIMEDOUT/EIO will still benefit to an extent from the general improvements when using the soft mount option with NFSv4, as follows: The client kernel will give the server longer to reply, without returning EIO to the application, as long as the network connection remains connected, for example if the server is badly congested. Swifter handling of real timeouts, and better correlation of error to process file descriptor. Be aware that the same caveats still apply: it is still dangerous to use soft with rw mounts unless you fully understand the implications, and all client applications are correctly written to handle the issues.
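To illustrate the distinction that softerr introduces, here is a hedged Python sketch of an application-side error handler. The recovery action names (failover, abort) are illustrative placeholders, not prescribed behavior, and the fsync call follows the advice above that asynchronous NFS WRITE errors may only surface on a later fsync.

```python
"""Sketch: differentiating NFS soft-mount errors on the application side.

With softerr, a server timeout surfaces as ETIMEDOUT (a clear failover
candidate); with plain soft mounts the same condition arrives as a bare
EIO, indistinguishable from other I/O failures.
"""
import errno
import os


def classify_nfs_error(exc: OSError) -> str:
    """Map an I/O error to a recovery action (action names are placeholders)."""
    if exc.errno == errno.ETIMEDOUT:
        return 'failover'   # softerr: server unresponsive, e.g. try a replica
    if exc.errno == errno.EIO:
        return 'abort'      # plain soft timeout, or some other I/O error
    return 'reraise'


def careful_write(path: str, data: bytes) -> str:
    """Write and fsync, so delayed NFS WRITE errors are reported here."""
    try:
        with open(path, 'wb') as f:
            f.write(data)
            os.fsync(f.fileno())   # force asynchronous WRITE errors to surface now
        return 'ok'
    except OSError as exc:
        action = classify_nfs_error(exc)
        if action == 'reraise':
            raise
        return action
```

An application structured this way can at least react deliberately to a timeout rather than treating every EIO identically.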
If you are in any doubt about whether your applications behave correctly in the face of EIO & ETIMEDOUT errors, do not use soft rw mounts.

New knfsd file descriptor cache (NFSv3 servers)

UEK R6 NFSv3 servers benefit from a new knfsd file descriptor cache, so that the NFS server's kernel doesn't have to perform internal open and close calls for each NFSv3 READ or WRITE. This can speed up I/O in some cases. It also replaces the readahead cache. When an NFSv3 READ or WRITE request comes in to an NFS server, knfsd initially opens a new file descriptor, then performs the read/write, and finally closes the fd. While this is often a relatively inexpensive thing to do for most local server filesystems, it is usually less so for FUSE, clustered, networked and other filesystems with a slow open routine that are being exported by knfsd. This improvement attempts to reduce some of that cost by caching open file descriptors so that they may be reused by other incoming NFSv3 READ/WRITE requests for the same file.

Performance

General: Much work has been done to further improve the performance of NFS & RPC over RDMA transports. The performance of RPC requests has been improved by removing BH (bottom-half soft IRQ) spinlocks.

NFS clients: Optimization of the default readahead size, to suit modern NFS block sizes and server disk latencies. RPC client parallelization optimizations. Performance optimizations for NFSv4 LOOKUP operations and delegations, including not unnecessarily returning NFSv4 delegations, and locking improvements. Support for the statx mask & query flags to enable optimizations when the user is requesting only attributes that are already up to date in the inode cache, or is specifying AT_STATX_DONT_SYNC.

NFS servers: Removal of an artificial limit on NFSv4.1 performance imposed by capping the number of outstanding RPC requests from a single client. An increased limit on concurrent NFSv4.1 clients, i.e. stopping a few greedy clients from using up all the NFS server's session slots.
Diagnostics

To improve debugging and diagnosability, a large number of ftrace events have been added. Work to follow will include having a subset of these events optionally enabled during normal production, to aid fixing problems on first occurrence without adversely impacting performance. Information is now exposed about NFSv4 state held by servers on behalf of clients. This is especially important for NFSv4 OPEN calls, which were previously invisible to user space on the server, unlike locks (/proc/locks) and local processes' opens (/proc/pid/). A new directory (/proc/fs/nfsd/clients/) is added, with subdirectories for each active NFSv4 client. Each subdirectory has an info file with some basic information to help identify the client, and a states/ directory that lists the OPEN state held by that client. This also allows forced revocation of client state. For NFSv3/NLM, the lock code has been cleaned up and modified to show the pid of lockd as the owner of NLM locks.

Miscellaneous

NFS clients: Finer-grained NFSv4 attribute checking. For NFS mounts over RDMA, the port=20049 (sic) mount option is now the default.

NFS servers: Locking and data structure improvements for the duplicate request cache (DRC) and other caches. Improvements for running NFS servers in containers, including replacing the global duplicate reply cache with separate caches per network namespace; it is now possible to run separate NFS server processes in each network namespace, each with their own set of exports.

NFSv3 clients and servers: Improved handling of correctness and reporting of NFS WRITE errors, on both NFSv3 clients and servers. This is especially important given that NFS WRITE operations are generally done asynchronously to application write system calls.

Summary

In this blog we've looked at the changes and new features relating to NFS & RPC, for both clients and servers, available in the latest Unbreakable Enterprise Kernel Release 6.
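As an appendix to the Diagnostics changes above: the new /proc/fs/nfsd/clients/ layout can be read with ordinary file operations. The sketch below takes the base path as a parameter (so it can be exercised against any similarly laid-out directory tree) and simply collects each client's info file; it assumes only the one-subdirectory-per-client, info-file-per-subdirectory structure described in the post.

```python
"""Sketch: enumerate active NFSv4 clients via the new nfsd proc interface."""
import os


def list_nfsd_clients(base='/proc/fs/nfsd/clients'):
    """Return {client_id: info_text} for each client subdirectory under base."""
    clients = {}
    if not os.path.isdir(base):
        return clients   # not an NFS server, or a kernel without this interface
    for entry in sorted(os.listdir(base)):
        info_path = os.path.join(base, entry, 'info')
        if os.path.isfile(info_path):
            with open(info_path) as f:
                clients[entry] = f.read()
    return clients
```

Run on an NFS server with active NFSv4 clients, this returns one entry per client; elsewhere it returns an empty mapping rather than failing.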


Announcements

Downloading Oracle Linux ISO Images

Updated to incorporate new download options and changes in Software Delivery Cloud. This post summarizes options to download Oracle Linux installation media.

Introduction

There are several types of downloads: Full ISO image: contains everything needed to boot a system and install Oracle Linux. This is the most common download. UEK Boot ISO image: contains everything that is required to boot a system with Unbreakable Enterprise Kernel (UEK) and start an installation. Boot ISO image: contains everything that is required to boot a system with the Red Hat Compatible Kernel (RHCK) and start an installation. Source DVD ISO images: ISO images containing all the source RPMs for Oracle Linux. Other types of images: depending on the release of Oracle Linux, there are optional ISO images with additional drivers, etc. See also the documentation for Oracle Linux 7 and Oracle Linux 8 for more details on obtaining and preparing installation media.

Download Oracle Linux from Oracle Linux Yum Server

If all you need is an ISO image to perform an installation of a recent Oracle Linux release, your best bet is to download directly from Oracle Linux yum server. From here you can directly download full ISO images and boot ISO images for the last few updates of Oracle Linux 8, 7 and 6, for both x86_64 and Arm (aarch64). No registration or sign-in required. Start here.

Download from Oracle Software Delivery Cloud

Oracle Software Delivery Cloud is the official source to download all Oracle software, including Oracle Linux. Unless you are looking for older releases of Oracle Linux or complementary downloads other than the regular installation ISO, it's probably quicker and easier to download from Oracle Linux yum server. That said, to download from Oracle Software Delivery Cloud, start here and sign in. Choose one of the following methods to obtain your product: If your product is included in the Popular Downloads window, then select that product to add it to the cart.
If your product is not included in the Popular Downloads window, then do the following: Type "Oracle Linux 7" or "Oracle Linux 8" in the search box, then click Search. From the search results list, select the product you want to download to add it to the cart. Note that for these instructions, there is no difference between a Release (REL) and a Download Package (DLP). Click Checkout. From the Platform/Languages drop-down list, select your system's platform, then click Continue. On the next page, review and accept the terms of the licenses, then click Continue. Next, you have several options to download the files you are interested in.

Directly by Clicking on the File Link

If you only need one or two of the files and don't anticipate any download hiccups that require stopping and resuming a download, simply click on the filename, e.g. V995537-01.iso.

[Image showing how to download a single file]

Using a Download Manager

Use a download manager if you want to download multiple files at the same time or pause and resume file downloads. A download manager can come in handy when you are having trouble completing a download, or want to queue up several files for unattended downloading. Remember to de-select any files you are not interested in.

[Image of download manager]

Using wget

If you want to download directly to a system with access to a command line only, use the WGET option to download a shell script.

Download from Unofficial Mirrors

In addition to the locations listed above, Oracle Linux ISOs can be downloaded from several unofficial mirror sites. Note that these sites are not endorsed by Oracle, but you can verify the downloaded files using the procedure outlined below.

Remember to Verify

Oracle Linux downloads can be verified to ensure that they are exactly the downloads as published by Oracle and that they were downloaded without any corruption. For checksum files, signing keys, and steps to verify the integrity of your downloads, see these instructions.
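Checksum verification can also be scripted. The sketch below assumes the common sha256sum line format ("<digest>  <filename>"); confirm the exact file format and digest type against the instructions linked above.

```python
"""Sketch: verify a downloaded ISO against a sha256sum-style checksum file."""
import hashlib
import os


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-GB ISOs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()


def verify_download(iso_path: str, checksum_file: str) -> bool:
    """True if iso_path's digest matches its entry in checksum_file."""
    name = os.path.basename(iso_path)
    with open(checksum_file) as f:
        for line in f:
            parts = line.split()
            # sha256sum prefixes binary-mode filenames with '*'.
            if len(parts) >= 2 and parts[1].lstrip('*') == name:
                return parts[0].lower() == sha256_of(iso_path)
    return False   # no entry found for this file
```

Note that a matching checksum confirms integrity; authenticity additionally requires checking the signature on the checksum file itself, per the linked instructions.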
Downloading Oracle Linux Source Code

To download Oracle Linux source code, use the steps described under Download from Oracle Software Delivery Cloud to obtain Source DVD ISOs. Alternatively, you can find individual source RPMs on oss.oracle.com/sources or the Oracle Linux yum server.


Announcements

Announcing the release of Oracle Linux 7 Update 8

Oracle is pleased to announce the general availability of Oracle Linux 7 Update 8. Individual RPM packages are available on the Unbreakable Linux Network (ULN) and the Oracle Linux yum server. ISO installation images will soon be available for download from the Oracle Software Delivery Cloud, and Docker images are available via Oracle Container Registry and Docker Hub. Oracle Linux 7 Update 8 ships with the following kernel packages, which include bug fixes, security fixes, and enhancements: Unbreakable Enterprise Kernel (UEK) Release 5 for x86-64 and aarch64: kernel-uek-4.14.35-1902.300.11.el7uek. Red Hat Compatible Kernel (RHCK) for x86-64 only: kernel-3.10.0-1127.el7.

Notable new features for all architectures

The Oracle Linux 7 Update 8 ISO includes the latest Unbreakable Enterprise Kernel Release 5 Update 3. Unbreakable Enterprise Kernel Release 6 is available via an Oracle Linux 7 yum channel. SELinux enhancements for Tomcat domain access and graphical login sessions. rsyslog has a new option for managing letter-case preservation by using the FROMHOST property for the imudp and imtcp modules. The Pacemaker concurrent-fencing cluster property defaults to true, speeding up recovery in a large cluster where multiple nodes are fenced. Further information is available in the Release Notes for Oracle Linux 7 Update 8.

Application Compatibility

Oracle Linux maintains user space compatibility with Red Hat Enterprise Linux (RHEL), which is independent of the kernel version that underlies the operating system. Existing applications in user space will continue to run unmodified on Oracle Linux 7 Update 8 with UEK Release 5, and no re-certifications are needed for applications already certified with Red Hat Enterprise Linux 7 or Oracle Linux 7.

About Oracle Linux

The Oracle Linux operating environment delivers leading performance, scalability and reliability for business-critical workloads deployed on premises or in the cloud. Oracle Linux is the basis of Oracle Autonomous Linux and runs Oracle Gen 2 Cloud.
Unlike many other commercial Linux distributions, Oracle Linux is easy to download and completely free to use, distribute, and update. Oracle Linux Support offers access to award-winning Oracle support resources and Linux support specialists; zero-downtime updates using Ksplice; additional management tools such as Oracle Enterprise Manager and Spacewalk; and lifetime support, all at a low cost. For more information about Oracle Linux, please visit www.oracle.com/linux.


Announcements

Announcing Oracle Linux Virtualization Manager 4.3

Oracle is pleased to announce the general availability of Oracle Linux Virtualization Manager, release 4.3. This server virtualization management platform can be easily deployed to configure, monitor, and manage an Oracle Linux Kernel-based Virtual Machine (KVM) environment with enterprise-grade performance and support from Oracle. This release is based on the 4.3.6 release of the open source oVirt project. New Features with Oracle Linux Virtualization Manager 4.3 In addition to the base virtualization management features required to operate your data center, notable features added with the Oracle Linux Virtualization Manager 4.3 release include: Self-Hosted Engine: The oVirt Self-Hosted Engine is a hyper-converged solution in which the oVirt engine runs on a virtual machine on the hosts managed by that engine. The virtual machine is created as part of the host configuration, and the engine is installed and configured in parallel to the host configuration process. The primary benefit of the Self-Hosted Engine is that it requires less hardware to deploy an instance of the Oracle Linux Virtualization Manager as the engine runs as a virtual machine, not on physical hardware. Additionally, the engine is configured to be highly available. If the Oracle Linux host running the engine virtual machine goes into maintenance mode, or fails unexpectedly, the virtual machine will be migrated automatically to another Oracle Linux host in the environment. Gluster File System 6.0: oVirt has been integrated with GlusterFS, an open source scale-out distributed filesystem, to provide a hyper-converged solution where both compute and storage are provided from the same hosts. Gluster volumes residing on the hosts are used as storage domains in oVirt to store the virtual machine images. Oracle Linux Virtualization Manager is run as the Self Hosted Engine within a virtual machine on these hosts. GlusterFS 6.0 is released as an Oracle Linux 7 program. 
Virt-v2v: The virt-v2v tool converts a single guest from another hypervisor to run on Oracle Linux KVM. It can read Linux and Windows virtual machines running on Oracle VM or other hypervisors, and convert them to KVM machines managed by Oracle Linux Virtualization Manager. New guest OS support: Oracle Linux Virtualization Manager guest operating system support has been extended to include Oracle Linux 8, Red Hat Enterprise Linux 8, CentOS 8, SUSE Linux Enterprise Server (SLES) 12 SP5 and SLES 15 SP1. oVirt 4.3 features and bug fixes: Improved performance when running Windows as a guest OS. Included with this release are the latest Oracle VirtIO Drivers for Microsoft Windows. A higher level of security, with the TLSv1 and TLSv1.1 protocols now disabled for vdsm communications. Numerous engine, vdsm, and UI bug fixes. More information on these features can be found in the Oracle Linux Virtualization Manager Document Library, which has been updated for this release. Visit the Oracle Linux Virtualization Manager Training website for videos, documents, other useful links, and further information on setting up and managing this solution. Oracle Linux Virtualization Manager allows enterprise customers to continue supporting their on-premises data center deployments with the KVM hypervisor available on Oracle Linux 7 Update 7 with the Unbreakable Enterprise Kernel Release 5. This 4.3 release is an update release for Oracle Linux Virtualization Manager 4.2.

Getting Started

Oracle Linux Virtualization Manager 4.3 can be installed from the Oracle Linux yum server or the Oracle Unbreakable Linux Network. Customers that have already deployed Oracle Linux Virtualization Manager 4.2 can upgrade to 4.3 using these same sites.
Two new channels have been created in the Oracle Linux 7 repositories that users will access to install or update Oracle Linux Virtualization Manager:

- oVirt 4.3 - base packages required for Oracle Linux Virtualization Manager
- oVirt 4.3 Extra Packages - additional packages for Oracle Linux Virtualization Manager

Oracle Linux 7 Update 7 hosts can be installed with installation media (ISO images) available from the Oracle Software Delivery Cloud. Step-by-step instructions to download the Oracle Linux 7 Update 7 ISO can be found on the Oracle Linux Community website. Using the "Minimal Install" option during the installation process sets up a base KVM system, which can then be updated using the KVM Utilities channel in the Oracle Linux 7 repositories. These KVM enhancements and other important packages for your Oracle Linux KVM host can be installed from the Oracle Linux yum server and the Oracle Unbreakable Linux Network:

- Latest - latest packages released for Oracle Linux 7
- UEK Release 5 - latest Unbreakable Enterprise Kernel Release 5 packages for Oracle Linux 7
- KVM Utilities - KVM enhancements (QEMU and libvirt) for Oracle Linux 7
- Optional Latest - latest optional packages released for Oracle Linux 7
- Gluster 6 Packages - latest Gluster 6 packages for Oracle Linux 7

Both Oracle Linux Virtualization Manager and Oracle Linux can be downloaded, used, and distributed free of charge, and all updates and errata are freely available.

Oracle Linux Virtualization Manager Support

Support for Oracle Linux Virtualization Manager is available to customers with an Oracle Linux Premier Support subscription. Refer to the Oracle Linux 7 License Information User Manual for information about Oracle Linux support levels.

Oracle Linux Virtualization Manager Resources

Oracle Linux Resources
Oracle Virtualization Resources
Oracle Linux yum server
Oracle Linux Virtualization Manager Training
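On an Oracle Linux 7 Update 7 host, installing the manager from the channels described above might look like the following sketch (the oracle-ovirt-release-el7 package name should be checked against the current installation guide):

```shell
# Set up the oVirt 4.3 repositories, install the engine packages,
# then run the interactive configuration tool.
sudo yum install -y oracle-ovirt-release-el7
sudo yum install -y ovirt-engine
sudo engine-setup
```

engine-setup walks through database, firewall, and certificate configuration interactively; rerunning it later applies configuration changes.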



Oracle Linux Learning Library: Start On a Video Path Now

The Oracle Linux Learning Library provides learning paths adapted to different environments and infrastructures. These free, video-based learning paths let you start training at any time, from anywhere, and advance at your own pace. Learning paths are enhanced on an ongoing basis. Get started today on the learning path that suits your needs and interests:

Linux on Oracle Cloud Infrastructure: See how to use Linux to deliver powerful compute and networking performance with a comprehensive portfolio of infrastructure and platform cloud services.

Oracle Linux Cloud Native Environment: Learn how you can deploy the software and tools to develop microservices-based applications in line with open standards and specifications.

Oracle Linux 8: This learning path is being built out so you can develop skills to use Linux on Oracle Cloud Infrastructure, on premises, or on other public clouds. Become savvy with this operating system that is free to use, free to distribute, free to update, and easy to download.

Oracle Linux Virtualization Manager: Use the available resources to adopt this open source distributed server virtualization solution. Gain proficiency in deploying, configuring, monitoring, and managing an Oracle Linux Kernel-based Virtual Machine (KVM) environment with enterprise-grade performance.

Resources: Oracle Linux product documentation, Oracle Cloud Infrastructure product documentation, Oracle Linux Virtualization Manager product documentation



Announcing the Unbreakable Enterprise Kernel Release 6 for Oracle Linux

Oracle is pleased to announce the general availability of the Unbreakable Enterprise Kernel Release 6 for Oracle Linux. The Unbreakable Enterprise Kernel (UEK) for Oracle Linux provides the latest open source innovations and business-critical performance and security optimizations for cloud and on-premise deployment. It is the Linux kernel that powers Oracle Gen 2 Cloud and Oracle Engineered Systems such as Oracle Exadata Database Machine. Oracle Linux with UEK is available on the x86-64 and 64-bit Arm (aarch64) architectures.

Notable UEK6 new features and enhancements:

- Linux 5.4 kernel: Based on the mainline Linux kernel version 5.4, this release includes many upstream enhancements.
- Arm: Enhanced support for the Arm (aarch64) platform, including improvements in the areas of security and virtualization.
- Cgroup v2: Cgroup v2 functionality was first introduced in UEK R5 to enable the CPU controller functionality. UEK R6 includes all cgroup v2 features, along with several enhancements.
- ktask: ktask is a framework for parallelizing CPU-intensive work in the kernel. It can be used to speed up large tasks on systems with available CPU power, where a task would otherwise be single-threaded in user space.
- Parallelized kswapd: Page replacement is handled in the kernel asynchronously by kswapd and synchronously by direct reclaim. When free pages within the zone free list are low, kswapd scans pages to determine whether there are unused pages that can be evicted to free up space for new pages. Parallelizing this work improves performance by avoiding direct reclaims, which can be resource intensive and time consuming.
- Kexec firmware signing: The option to check and validate a kernel image signature is enabled in UEK R6. When kexec is used to load a kernel from within UEK R6, kernel image signature checking and validation can be enforced to ensure that a system only loads a signed and validated kernel image.
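As an illustrative sketch of the cgroup v2 interface mentioned above (generic upstream usage, not specific to UEK R6; assumes a unified cgroup hierarchy mounted at /sys/fs/cgroup and root privileges):

```shell
# Create a child group, delegate the cpu controller to children of the
# root group, cap the group at 50% of one CPU, and move this shell in.
mkdir /sys/fs/cgroup/demo
echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control
echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max   # 50ms quota per 100ms period
echo $$ > /sys/fs/cgroup/demo/cgroup.procs
```

The single unified hierarchy and the cpu.max quota/period pair are the main interface differences from cgroup v1's per-controller hierarchies.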
- Memory management: Several performance enhancements have been implemented in the kernel's memory management code to improve the efficiency of clearing pages and cache, along with enhancements to fault management and reporting.
- NVDIMM: NVDIMM feature updates have been implemented so that persistent memory can be used as traditional RAM.
- DTrace: DTrace support is enabled and has been re-implemented to use the Berkeley Packet Filter (BPF) that is integrated into the Linux kernel.
- OCFS2: Support for the OCFS2 file system is enabled.
- Btrfs: Support for the Btrfs file system is enabled, and Btrfs can be selected as a file system type when formatting devices.

Important UEK6 changes in this release: The following sections describe the important changes in the Unbreakable Enterprise Kernel Release 6 (UEK R6) relative to UEK R5.

Core Kernel Functionality

- High-performance asynchronous I/O with io_uring: io_uring is a fast, scalable asynchronous I/O interface for both buffered and unbuffered I/O. It also supports asynchronous polled I/O. A user space library, liburing, provides basic functionality for applications, with helpers that allow applications to easily set up an io_uring instance and submit and complete I/O.
- NVDIMM: Persistent memory can now be used as traditional RAM. Furthermore, fixes were implemented around the security-related commands within libnvdimm to allow the use of keys where payload data is filled with zero values, so that secure operations can continue to take place where a zero-key is in use.

Cryptography

- Simplified key description management: Keys and keyrings are more namespace aware.
- Zstandard compression: Zstandard compression (zstd) is added to crypto and compress.

Filesystems

- Btrfs: Btrfs continues to be supported. Several improvements and patches have been applied in this update, including support for swap files, Zstandard compression, and various performance improvements.
- ext4: 64-bit timestamps have been added to the superblock fields.
- OCFS2: OCFS2 continues to be supported. Several improvements and patches have been applied in this update, including support for the 'nowait' AIO feature and support on Arm platforms.
- XFS: A new online health reporting infrastructure with a user space ioctl provides metadata health status after online fsck. Support has been added for fallocated swap files and for swap files on real-time devices. Various performance improvements have also been made.
- NFS: Performance improvements and enhancements have been made to RPC and the NFS client and server components.

Memory Management

- TLB flushing code is improved to avoid unnecessary flushes and to reduce TLB shootdowns.
- Memory management is enhanced to improve throughput by clearing huge pages more optimally.
- Page cache efficiency is improved by using the more efficient XArray data type.
- Fragmentation avoidance algorithms are improved, and compaction and defragmentation times are faster.
- Improvements have been implemented to the handling of Transparent Huge Page faults and to provide better reporting on Transparent Huge Page status.

Networking

- TCP Early Departure Time: The TCP stack now uses the Early Departure Time model, instead of the As Fast As Possible model, for sending packets. This brings several performance gains, as it resolves a limitation in the original TCP/IP framework and introduces the scheduled release of packets to overcome hardware limitations and bottlenecks.
- Generic Receive Offload (GRO): GRO is enabled for the UDP protocol.
- TLS Receive: UEK R5 enabled the kernel to send TLS messages. This release enables the kernel to also receive TLS messages.
- Zero-copy TCP Receive: UEK R5 introduced a zero-copy TCP feature for sending packets to the network. The UEK R6 release enables receive functionality for zero-copy TCP.
- Packet Filtering: nftables is now the default backend for firewall rules.
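To illustrate the nftables backend mentioned above, a minimal ruleset might be built like this (standard upstream nft usage, not UEK-specific; requires root):

```shell
# Create a table and an input chain hooked into the input path,
# then accept inbound SSH traffic and display the resulting ruleset.
nft add table inet filter
nft 'add chain inet filter input { type filter hook input priority 0; policy accept; }'
nft add rule inet filter input tcp dport 22 accept
nft list ruleset
```

The inet family covers both IPv4 and IPv6 in a single table, one of the main simplifications over the older iptables/ip6tables split.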
- BPF-based network filtering (bpfilter) is also added in this release.
- Express Data Path (XDP): XDP, a flexible, minimal, kernel-based packet transport for high-speed networking, has been added.

Security

- Lockdown mode: Lockdown mode is improved. This release distinguishes between the integrity and confidentiality modes. When Secure Boot is enabled in UEK R6, lockdown integrity mode is enforced by default.
- IBRS: Indirect Branch Restricted Speculation (IBRS) continues to be supported for processors that do not have built-in hardware mitigations for speculative execution side channel vulnerabilities.
- Improved protection in world-writable directories: UEK R6 discourages spoofing attacks by disallowing the opening of FIFOs or regular files not owned by the user in world-writable sticky directories, such as /tmp.
- Arm KASLR: Kernel virtual address randomization is enabled by default for Arm platforms.
- aarch64 pointer authentication: Adds primitives that can be used to mitigate certain classes of memory stack corruption attacks on Arm platforms.

Storage, Virtualization, and Driver Updates

- NVMe: NVMe over Fabrics TCP host and target drivers have been added, along with support for multi-path and passthrough commands.
- VirtIO: The VirtIO PMEM feature adds a VirtIO-based asynchronous flush mechanism and presents simulated persistent memory to a guest, allowing it to bypass the guest page cache. A VirtIO-IOMMU para-virtualized driver is also added in this release, allowing IOMMU requests over the VirtIO transport without emulating page tables.
- Arm platform: Guests on Arm aarch64 platform systems include pointer authentication (Armv8.3) and Scalable Vector Extension (SVE) support.
- Device drivers: UEK R6 supports a large number of hardware server platforms and devices. In close cooperation with hardware and storage vendors, Oracle has updated several device drivers from the versions in mainline Linux 5.4.
A complete list of the driver modules and versions included in UEK R6 is provided in the Release Notes appendix, "Appendix B, Driver Modules in Unbreakable Enterprise Kernel Release 6 (x86_64)".

Security (CVE) Fixes

A full list of CVEs fixed in this release can be found in the Release Notes for UEK R6.

Supported Upgrade Path

Customers can upgrade existing Oracle Linux 7 and Oracle Linux 8 servers using the Unbreakable Linux Network or the Oracle Linux yum server by pointing to the "UEK Release 6" yum channel.

Software Download

Oracle Linux can be downloaded, used, and distributed free of charge, and updates and errata are freely available. This allows organizations to decide which systems require a support subscription and makes Oracle Linux an ideal choice for development, testing, and production systems. The user decides which support coverage is best for each system individually, while keeping all systems up to date and secure. Customers with Oracle Linux Premier Support also receive access to zero-downtime kernel updates using Oracle Ksplice.

About Oracle Linux

The Oracle Linux operating environment delivers leading performance, scalability, and reliability for business-critical workloads deployed on premises or in the cloud. Oracle Linux is the basis of Oracle Autonomous Linux and runs Oracle Gen 2 Cloud. Unlike many other commercial Linux distributions, Oracle Linux is easy to download and completely free to use, distribute, and update. Oracle Linux Support offers access to award-winning Oracle support resources and Linux support specialists; zero-downtime updates using Ksplice; additional management tools such as Oracle Enterprise Manager and Spacewalk; and lifetime support, all at a low cost.
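As a sketch of the upgrade path described above on an Oracle Linux 7 host (the ol7_UEKR6 channel name is an assumption; confirm it against the repo list published on the Oracle Linux yum server):

```shell
# Enable the UEK Release 6 channel and install the new kernel.
sudo yum-config-manager --enable ol7_UEKR6
sudo yum install -y kernel-uek
# Reboot into the new kernel, then verify the running version:
#   uname -r   (a UEK R6 kernel reports a 5.4-based release string)
sudo reboot
```

On ULN-registered systems, subscribing the system to the corresponding UEK Release 6 channel replaces the yum-config-manager step.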



Announcing the Unbreakable Enterprise Kernel Release 5 Update 3 for Oracle Linux

The Unbreakable Enterprise Kernel (UEK) for Oracle Linux provides the latest open source innovations and key optimizations and security fixes to enterprise cloud workloads. It is the Linux kernel that powers Oracle Cloud and Oracle Engineered Systems such as Oracle Exadata Database Machine, as well as Oracle Linux on Intel-64, AMD-64, or Arm hardware.

What's New?

UEK R5 Update 3 is based on the mainline kernel version 4.14.35. Through actively monitoring upstream check-ins and collaboration with partners and customers, Oracle continues to improve and apply critical bug and security fixes to the Unbreakable Enterprise Kernel (UEK) R5 for Oracle Linux. This update includes several new features, added functionality, and bug fixes across a range of subsystems. UEK R5 Update 3 can be recognized by a release number starting with 4.14.35-1902.300.

Notable changes:

- 64-bit Arm (aarch64) architecture: Significant improvements have been made to a number of drivers, through vendor contributions, for better support on embedded 64-bit Arm platforms.
- Core kernel functionality: UEK R5U3 provides equivalent core kernel functionality to UEK R5U2, making use of the same upstream mainline kernel release, with additional patches to enhance existing functionality and provide some minor bug fixes and security improvements.
- On-Demand Paging: On-Demand Paging (ODP) is a virtual memory management technique to ease memory registration.
- File system and storage fixes: In XFS, a deadlock bug that caused the file system to freeze and not release locks has been fixed. In CIFS, an upstream patch was applied to resolve an issue that could cause POSIX lock leakages and system crashes.
- Virtualization and QEMU: A minor bug fix was applied to KVM code, in line with upstream fixes, to resolve a trivial testing issue with certain versions of QEMU on some hardware.
- Driver updates:
In close cooperation with hardware and storage vendors, Oracle has updated several device drivers from the versions in mainline Linux 4.14.35; further details are provided in Appendix A (Driver Modules in Unbreakable Enterprise Kernel Release 5 Update 3) of the Release Notes. For more details on these and other new features and changes, please consult the Release Notes for UEK R5 Update 3.

Security (CVE) Fixes

A full list of CVEs fixed in this release can be found in the Release Notes for UEK R5 Update 3.

Supported Upgrade Path

Customers can upgrade existing Oracle Linux 7 servers using the Unbreakable Linux Network or the Oracle Linux yum server by pointing to the "UEK Release 5" yum channel.

Software Download

Oracle Linux can be downloaded, used, and distributed free of charge, and all updates and errata are freely available. This allows organizations to decide which systems require a support subscription and makes Oracle Linux an ideal choice for development, testing, and production systems. The user decides which support coverage is best for each system individually, while keeping all systems up to date and secure. Customers with Oracle Linux Premier Support also receive access to zero-downtime kernel updates using Oracle Ksplice.

Compatibility

UEK R5 Update 3 is fully compatible with the UEK R5 GA release. The kernel ABI for UEK R5 remains unchanged in all subsequent updates to the initial release.

About Oracle Linux

The Oracle Linux operating system is engineered for an open cloud infrastructure. It delivers leading performance, scalability, and reliability for enterprise SaaS and PaaS workloads as well as traditional enterprise applications. Oracle Linux Support offers access to award-winning Oracle support resources and Linux support specialists; zero-downtime updates using Ksplice; additional management tools such as Oracle Enterprise Manager and Spacewalk; and lifetime support, all at a low cost.
And unlike many other commercial Linux distributions, Oracle Linux is easy to download and completely free to use, distribute, and update. Oracle tests the UEK intensively with demanding Oracle workloads, and recommends the UEK for Oracle deployments and all other enterprise deployments.

Resources – Oracle Linux

Documentation: Oracle Linux
Software Download: Oracle Linux
Blogs: Oracle Linux Blog, Oracle Virtualization Blog
Community Pages: Oracle Linux
Social Media: Oracle Linux on YouTube, Oracle Linux on Facebook, Oracle Linux on Twitter
Data Sheets, White Papers, Videos, Training, Support & more: Oracle Linux
Product Training and Education: Oracle Linux - education.oracle.com/linux
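A tiny shell helper along these lines (hypothetical, not from the post) can check whether a kernel release string corresponds to UEK R5 Update 3, whose release numbers start with 4.14.35-1902.300 as noted above:

```shell
# Return success when the given kernel release string is a UEK R5U3 build.
is_uek_r5u3() {
  case "$1" in
    4.14.35-1902.300*) return 0 ;;
    *) return 1 ;;
  esac
}

# Typical use on a running host:
#   is_uek_r5u3 "$(uname -r)" && echo "Running UEK R5 Update 3"
```

This is just string matching on the release number prefix; it does not query package metadata.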



Connect PHP 7 to Oracle Database using packages from Oracle Linux Yum Server

Note: This post was updated to include the latest available release of PHP, as well as the simplified installation instructions for Oracle Instant Client introduced starting with 19c.

We recently added PHP 7.4 to our repos on Oracle Linux yum server. These repos also include the PHP OCI8 extension to connect your PHP applications to Oracle Database. In this post I describe the steps to install PHP 7.4, PHP OCI8, and Oracle Instant Client on Oracle Linux to connect PHP to Oracle Database. For this blog post, I used a free Autonomous Database included in Oracle Cloud Free Tier.

Install Oracle Instant Client

Oracle Instant Client RPMs are also available on Oracle Linux yum server. To access them, install the oracle-release-el7 package first to set up the appropriate repositories:

$ sudo yum -y install oracle-release-el7
$ sudo yum -y install oracle-instantclient19.5-basic

If you want to be able to use SQL*Plus (this can come in handy for some sanity checks), install the SQL*Plus RPM also:

$ sudo yum -y install oracle-instantclient19.5-sqlplus

Create a Schema and Install the HR Sample Objects (Optional)

You can use any schema you already have in your database. I'm going to use the HR schema from the Oracle Database Sample Schemas on github.com. If you already have a schema with database objects to work with, you can skip this step.
$ yum -y install git
$ git clone https://github.com/oracle/db-sample-schemas.git
$ cd db-sample-schemas/human_resources

As SYSTEM (or ADMIN, if you are using Autonomous Database), create a user PHPTEST:

SQL> grant connect, resource, create view to phptest identified by <YOUR DATABASE PASSWORD>;
SQL> alter user PHPTEST quota 5m on USERS;

If you are using Autonomous Database like I am, change the tablespace above to DATA:

SQL> alter user phptest quota 5m on DATA;

As the PHPTEST user, run the scripts hr_cre.sql and hr_popul.sql to create and populate the HR database objects:

SQL> connect phptest/<YOUR DATABASE PASSWORD>@<YOUR CONNECT STRING>
SQL> @hr_cre.sql
SQL> @hr_popul.sql

Install PHP and PHP OCI8

To install PHP 7.4, make sure you have the latest oracle-php-release-el7 package installed first:

$ sudo yum install -y oracle-php-release-el7

Next, install PHP and the PHP OCI8 extension corresponding to the Oracle Instant Client installed earlier:

$ sudo yum -y install php php-oci8-19c

Running the following PHP code snippet should verify that we can connect PHP to the database and bring back data. Make sure you replace the schema and connect string as appropriate. Create a file emp.php based on the code above. Run it!

$ php emp.php

This should produce the following:

King Kochhar De Haan Hunold Ernst Austin Pataballa Lorentz Greenberg Faviet Chen Sciarra ...
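The emp.php snippet itself is not reproduced in this excerpt, but a minimal sketch along these lines (an assumption, built from the standard PHP OCI8 calls; the credentials and connect string are placeholders you must replace) can be written from the shell:

```shell
# Write a hypothetical emp.php that queries last names from the HR
# EMPLOYEES table via OCI8; edit the placeholders before running it.
cat > emp.php <<'EOF'
<?php
$conn = oci_connect('phptest', 'YOUR_DATABASE_PASSWORD', 'YOUR_CONNECT_STRING');
if (!$conn) {
    $e = oci_error();
    trigger_error(htmlentities($e['message']), E_USER_ERROR);
}
$stid = oci_parse($conn, 'SELECT last_name FROM employees');
oci_execute($stid);
while (($row = oci_fetch_array($stid, OCI_ASSOC)) != false) {
    echo $row['LAST_NAME'], "\n";
}
oci_free_statement($stid);
oci_close($conn);
EOF
```

With valid credentials, `php emp.php` would then print one last name per line, matching the output shown above.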
