
News, tips, partners, and perspectives for the Oracle Solaris operating system

Recent Posts

Announcements

Announcing Oracle Solaris 11.4 SRU 14 Images in OCI Marketplace

Following on to last week's release of Oracle Solaris 11.4 SRU 14, we've updated the Solaris listing in the Oracle Cloud Infrastructure (OCI) Marketplace to make pre-built 11.4 SRU 14 images available, in addition to the Solaris 11.4 GA images previously available. This makes it easier than ever to use the latest Solaris version in OCI!

With this update we've also taken the opportunity to re-organize our listing a bit. Rather than the separate virtual machine and bare metal listings we made available previously, they've all been consolidated into a single Oracle Solaris 11.4 listing. Within that listing, you can select the version of the image that you want to deploy. Here's a shot of the main marketplace listing showing the choices.

Once you select the image and compartment and click Launch Instance, the OCI console will take you to the Create Compute Instance page to allow you to fill in the rest of the details of the instance you'd like to launch, such as the shape, networks, and SSH keys.

The Solaris images can also be selected from the Create Compute Instance page when you click the Change Image Source button. In the Browse All Images overlay, select Oracle Images and then scroll to find Oracle Solaris 11.4. Click the checkbox to select it, which will default to the 11.4.14 VM image. To select a different version, click the arrow to the right of the listing name, which will expand the choices as shown below. Choose whichever version is desired, then check that you've reviewed the terms and conditions and click the Select Image button, which will return you to the Create Compute Instance page to complete the remaining details of the instance. (As an aside, the Browse All Images page does not display the correct terms and conditions for Oracle Solaris; OCI is aware of the bug and we hope it will be fixed soon.
Meantime, rest assured that the Terms of Use for Oracle Solaris on the Oracle Cloud Infrastructure Marketplace apply no matter which path you use to select these images.)

One new item to note in the SRU 14 images is that they are now configured to use the Oracle Solaris support package repository by default, since that's where you'll need to obtain any additional Solaris components that are not included in the base image. However, since the support repository requires support credentials to access, you'll first need to configure the certificate and key obtained from the registration application in the image. That's done by uploading the certificate and key files to the instance, then running:

# pkg set-publisher -k <key file> -c <certificate file> solaris

It's easy to automate this setup using a userdata script supplied at instance creation; see my blog post on automatic configuration for an example.

We expect to update the Solaris listing on a regular basis with additional SRUs. Most likely we will update quarterly to align with the SRUs that are designated as Critical Patch Updates (CPUs), but we'd welcome comments on the frequency that would be useful to customers.

One last thing to note is that we've enhanced the marketplace listing's usage instructions with more details and links to my past blog posts that show how to use various Solaris and OCI features together. For convenience, I'm reproducing those below. We'll look forward to comments and suggestions on using Solaris in OCI!

Usage Instructions

Ensure you have an SSH key available to provide when launching the image, then ssh to the opc user on the instance using that key. To install additional software from the Oracle Solaris Support Repository, you will need to register for a user key and certificate.
For more information on using Solaris with OCI, see the following posts on the Oracle Solaris blog:

How to Use OCI Storage Resources with Solaris
Using Solaris Zones on Oracle Cloud Infrastructure
Automatic Configuration of Solaris OCI Guests
How to Install the OCI CLI on Oracle Solaris


Reimplementing a Solaris command in Python gained a 17x performance improvement over C

As a result of fixing a memory allocation issue in the /usr/bin/listusers command, which had caused problems when it was converted to 64-bit, I decided to investigate whether this ancient C code could be improved by conversion to Python. The C code was largely untouched since 1988 and was around 800 lines long; it was written in an era when the number of users was fairly small and probably lived in the local /etc/passwd file or a smallish NIS server.

It turns out that the algorithm implementing listusers is basically some simple set manipulation. With no arguments, listusers just dumps a sorted list of all users in the nameservice; with -l and -g it filters down the list of users and groups.

My rewrite of listusers in Python 3 turned out to be roughly a tenth of the number of lines of code, since Python includes native set manipulation whereas the C code had implemented the set operations via linked lists. But Python would be slower, right? Turns out it isn't, and in fact for some of my datasets (which had over 100,000 users in them) it was 17 times faster. I also made sure that the Python version doesn't pull the entire nameservice into memory when it knows it is going to be filtering based on the -l and -g options.

It also turned out to be much easier to fix a long-standing bug where listusers did not properly expand nested groups. Nested groups did not exist as a concept when the original C code was written, but with LDAP they became possible.

It also turns out that 100 lines of Python 3 code will be a lot easier to maintain going forward - not that I expect listusers to need updating again, given it survived several decades in its original form!

Update: A few of the comments asked about the availability of the Python version. The Python implementation ships as the listusers command in Oracle Solaris 11.4 SRU 9 and higher. Since we ship the /usr/bin/listusers file as the Python code, you can see it by just viewing the file in an editor.
Note though that it is not open source and is covered by the Oracle Solaris licenses. As for the use of Python 3.5, that will automatically change over to Python 3.7 in a future SRU. At the time I did this work, Python 3.5 was the latest I had available. Since this is a very simple Python program, all we need to do to port it is update our Makefile to target 3.7 instead of 3.5. We will be doing this as part of a much bigger sweep through the Oracle Solaris code as we migrate away from Python 2.7.

Darren
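The core of the set-based algorithm described above can be sketched in a few lines of Python. This is an illustrative toy, not the shipped listusers code; the user/group data below is made up, and the real command reads from the nameservice rather than a dictionary:

```python
# Toy data standing in for the nameservice: user -> set of groups.
# (The real listusers pulls this from passwd/group via NSS.)
USERS = {
    "alice": {"staff", "eng"},
    "bob":   {"staff"},
    "carol": {"ops"},
}

def list_users(logins=None, groups=None):
    """No filters: all users, sorted. logins/groups mimic -l and -g."""
    selected = set(USERS)
    if logins is not None:
        selected &= set(logins)                  # set intersection, like -l
    if groups is not None:
        selected = {u for u in selected
                    if USERS[u] & set(groups)}   # member of any listed group, like -g
    return sorted(selected)

print(list_users())                        # ['alice', 'bob', 'carol']
print(list_users(groups=["staff"]))        # ['alice', 'bob']
```

Native set intersection and comprehensions are exactly what the original C code spent hundreds of lines of linked-list bookkeeping on.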


Announcing Oracle Solaris 11.4 SRU 14

Today we are releasing SRU 14, the October 2019 CPU, for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces the following enhancements:

- All Sun Solaris perl modules are available for perl 5.26
- rsyslog has been updated to 8.1907.0
- Apache Tomcat has been updated to 8.5.45

The following components have also been updated to address security issues:

- oniguruma has been updated to 6.9.3
- poppler has been updated to 0.79.0
- Nghttp2 library has been updated to 1.39.2
- rdesktop has been updated to 1.8.4
- Apache Web Server has been updated to 2.4.41
- libpng10 has been updated to 1.0.69
- libpng12 has been updated to 1.2.59
- libpng14 has been updated to 1.4.22
- MySQL 5.6 has been updated to 5.6.45
- MySQL 5.7 has been updated to 5.7.27
- expat has been updated to 2.2.7
- Mercurial has been updated to 4.9.1
- Firefox has been updated to 60.9.0esr
- Django has been updated to 1.11.23
- Wireshark has been updated to 2.6.11
- Subversion has been updated to 1.10.6
- Thunderbird has been updated to 60.9.0
- python 2.7, libxslt, libtiff, gnome, and openjpeg have also received security fixes

Full details of this SRU can be found in My Oracle Support Doc 2597515.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).


No More LSUs: Move to Solaris 11.4

After January 2020 you will not get any more LSUs (Limited Support Updates) for Solaris 11.3, so it is time to move to Solaris 11.4. If this is old news to you, you can safely stop reading this article; we have stated it publicly, e.g. in MOS doc 2382427.1.

The LSU releases were never an equivalent to the SRU (Support Repository Update) releases we continue to provide; they were always ports of just a subset of the Solaris 11.4 fixes. The only intent of offering the LSU train was to give our customers more time to upgrade to 11.4, since with Solaris 11.4 we switched to a continuous release model. This "continuous release model" continuously provides feature enhancements in parallel with bug fixes to Solaris 11.4; for details see Oracle Solaris Moving to a Continuous Delivery Model (January 2017) or "Continuous Delivery, Really?" (December 2017).

Why do we stop the LSU train now? One main driver is Python. Solaris, like many other software products, makes extensive use of the Python programming language, and it used to use Python 2.7 a lot. But Python 2.7 is coming to an end, cf. the "official" announcement "PEP 373 -- Python 2.7 Release Schedule", or the nicely animated countdown clock. After January 1st 2020, Python 2.7 is considered "unsupported", and without upstream support we cannot rely on Python 2.7, and thus had to move Solaris 11 to Python 3. This migration has been happening in Solaris 11.4 and will continue in future 11.4 SRUs. Oracle has also published the end date for the LSU train for Solaris 11.3 in its support docs, cf. the MOS doc "Oracle Solaris 11.3 Limited Support Updates (LSU) Index".

How can you check if you can seamlessly upgrade your Solaris 11.3 environment to Solaris 11.4? With the release of Solaris 11.4 we also delivered a tool that allows you to check if your environment could have issues when running under Solaris 11.4.
This tool is commonly called the "Enterprise Health Check", and the MOS doc Pre-Update Enterprise Health Check Tool provides an execution example. So do not be afraid of upgrading to Solaris 11.4 even if you tried before and faced challenges. We are well aware that we too aggressively removed functionality from Solaris 11.4 when we first released it (wrongly assuming it was no longer used). We have brought back a number of features, e.g. libucb, which was missing from the first release of Solaris 11.4; with the current SRU 13 there should be no more issues of that kind, and we consider any occurrence of such an issue worth an SR.


Oracle Solaris 10

Another Update on Oracle Java and Oracle Solaris

Since our last update on Oracle Solaris and Oracle Java we've been getting more questions, so we wanted to write a quick blog to share the latest on this topic.

There are a lot of applications out there running on Oracle Java SE 8 and Oracle Java SE 11; these are mature and widely adopted platforms. As you can see on the Oracle Java SE Support Roadmap, Oracle is committed to supporting these releases until at least 2025 and 2026 respectively. In addition, this page notes: "Oracle Java SE product dates are provided as examples to illustrate the support policies. Customers should refer to the Oracle Lifetime Support Policy for the most up-to-date information. Timelines may differ for Oracle Products with a Java SE dependency."

This is important given the Oracle Applications Unlimited commitment. This essentially says that if you are running on-premises applications covered under Oracle Applications Unlimited (see the list included here), Oracle is committed to offering Oracle Premier Support until at least 2030, and that includes the Oracle Java SE releases used by these applications, even if the Oracle Java SE support timelines appear shorter.

Lastly, our long-term commitment to deliver innovation on Oracle Solaris continues, with Oracle Premier Support offered on Oracle Solaris 11 until 2031, and Oracle Extended Support until 2034.

The bottom line: Oracle is committed to supporting on-premises applications and platforms for your business until 2030 and beyond.


Oracle Solaris 11

What is Automatic Boot Environment Naming and why is it important?

Ever since the initial delivery of the Image Packaging System (IPS) during the development of Oracle Solaris 11, it has been tightly integrated with Boot Environments; this provides a safe, reliable upgrade/rollback facility. Naming of the boot environments was left to the administrator.

How do you choose a "good" name for your BE? If you are following the Oracle Solaris 11 SRU train, then using the release and SRU number as part of the BE name is a good start. The name of a BE created during a 'pkg update' can be chosen in advance by using the --be-name option. However, if you either don't upgrade to every SRU, or something in your pkg image configuration means it can't do an upgrade to the SRU but can do some update, then you need to predict in advance what the BE name should be, and you might get it wrong.

You can always rename the BE after the package operation. Doing that turns out to be less easy than it ideally should be, because beadm won't allow you to rename the current or active-on-next-boot BE. So you would have to reactivate the current BE, rename the new BE, and then reactivate the new BE again. I had been using exactly that sequence of operations for a few years on our internal development builds, selecting the BE name based on knowing what uname -v would produce. The value of uname -v now comes from a protected file in /etc/versions that is delivered by pkg:/release/name. For some background on how uname is set, see Pete's "What's in a uname?" blog post.

Surely we can do better than that? After all, when pkg is the one creating the BE, it already knows it has an upgrade solution and knows exactly what it will be delivering! The first obvious answer is to just have pkg do the same BE rename as it finishes.
While that would work, it is a little messy, and it means we don't know the name of the BE until the very end of the 'pkg update'. That is fine in most cases, but not when you run 'pkg update -nv' to find out what it would do, because in that case we won't see the /etc/versions/uts_version file.

So what we need is a way of having a suggested BE name stored in package metadata, and this is exactly what I chose to do. As of Oracle Solaris 11.4.13 we have delivered the metadata in pkg:/release/name and changes to the pkg command so that it will automatically use a BE name that matches what 'uname -v' will return. If you wish, you can still choose your own boot environment names. Since this needs the new pkg command support that is delivered in 11.4.13, you won't see this activate until you upgrade from 11.4.13 to a later SRU. When you do, it will look a bit like this:

root@S11-4-SRU:~# pkg update entire@latest
            Packages to install:  27
             Packages to update: 298
            Mediators to change:   1
       Create boot environment: Yes
Create backup boot environment:  No

DOWNLOAD                          PKGS         FILES    XFER (MB)    SPEED
Completed                      325/325     8001/8001  903.6/903.6   4.2M/s

PHASE                                          ITEMS
Removing old actions                       1850/1850
Installing new actions                     7181/7181
Updating modified actions                  3883/3883
Updating package state database                 Done
Updating package cache                       298/298
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           5/5

A clone of 11.4.13.4.0 exists and has been updated and activated.
On the next boot the Boot Environment be://rpool/11.4.14.4.0 will be
mounted on '/'. Reboot when ready to switch to this updated BE.

Now what if the OS would just do the update for you? Would that be useful?

-- Darren
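For reference, the manual rename dance described above looks roughly like this; the BE names here are illustrative examples, not output from a real system:

```shell
# Reactivate the currently running BE, so the newly created BE is no
# longer "active on next boot" and beadm will allow renaming it.
beadm activate 11.4.13.4.0

# Rename the BE that pkg created to match what `uname -v` will report.
beadm rename solaris-1 11.4.14.4.0

# Reactivate the renamed BE so it is booted next.
beadm activate 11.4.14.4.0
```

With the automatic BE naming delivered in 11.4.13, this whole sequence becomes unnecessary for routine SRU updates.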


Announcing Oracle Solaris 11.4 SRU 13

Today we are releasing SRU 13 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces the following enhancements:

- Hotplug framework support for hot removal of SR-IOV PCIe devices in an IOR configuration. Two new LDoms manager ldm(1M) sub-commands, "evacuate-io" and "restore-io", are added to support device removal and subsequent replacement. For more information, see the Oracle VM Server for SPARC 3.6 Administration Guide.
- Explorer 19.3.1 is now available
- The timezone data has been updated to 2019b
- Version-based automatic BE names on pkg update
- UDP now has an sd_recv_uio entry point to improve performance
- A privileged user can request smaller granularity timers on a per-process basis

The following components have also been updated to address security issues:

- ImageMagick has been updated to 6.9.10-57
- wget has been updated to 1.20.3
- bzip2 has been updated to 1.0.7
- nss has been updated to 3.45
- hex has been updated to 0.20.1
- Ghostscript has been updated to 9.27
- libxslt and tcpdump have also been updated

Full details of this SRU can be found in My Oracle Support Doc 2587604.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).


Announcements

Great News About Extended Support for Oracle Solaris 10 OS

We've got you covered on your way to the Oracle Solaris 11 OS

The Oracle Solaris 11 Operating System (OS) platform provides major improvements over earlier releases in areas critical to our customers, including gains in security, compliance, performance, virtualization, and observability. There are important business reasons for you to adopt the Oracle Solaris 11 OS release, including that it is much easier to manage and it can help to lower your cost to run and maintain your IT systems. Oracle’s continuous delivery process for the Oracle Solaris 11 OS enables customers to implement new features and improvements with simplicity and consistency.

We recognize that many of our customers have a deep investment in applications running on the Oracle Solaris 10 OS, and realize that you may need more time to move to the Oracle Solaris 11 OS while maintaining the security of your Oracle Solaris 10 environments as much as possible. To help you transition to the Oracle Solaris 11 OS we are providing:

Three additional years of Extended Support for the Oracle Solaris 10 OS at the same rates. The new end date for Extended Support is January 2024.

Options for migrating your applications:

The best option is moving your application to the full Oracle Solaris 11 OS stack. This provides you with the most benefits, newest features, and the latest security for systems running Oracle Solaris. You can choose to deploy Oracle Solaris 11.4 in virtualized environments using Oracle Solaris Zones or Kernel Zones, Oracle VM Server for SPARC logical domains, or a combination of them, giving you unmatched flexibility and efficiency. You also have the ability to run natively without virtualization, if required. The Oracle Solaris Binary Application Guarantee provides the confidence that you can move applications simply and consistently to Oracle Solaris 11.4 from previous releases of the Oracle Solaris OS.
A good compromise option is moving Oracle Solaris 10 environments into Oracle Solaris 10 Zones running on the Oracle Solaris 11 OS. This provides you with an easier conversion path by maintaining applications, basically unchanged, while benefiting from the functionality and security of the Oracle Solaris 11 kernel. This option allows you more time to move your applications to the Oracle Solaris 11 OS—see the preferred option above.

For Oracle Solaris 10 environments running in SPARC servers, the option that requires the least amount of modifications to applications is to simply move them to guest domains (LDOMs) managed by Oracle VM Server for SPARC. While the guest domains can run the Oracle Solaris 10 OS, they are managed by a control domain running the Oracle Solaris 11 OS. Although this option allows applications to be moved unchanged to Oracle Solaris 10 domains, it leverages only a fraction of the benefits offered by the new features, functionality, and security of the Oracle Solaris 11 OS.

Oracle is committed to our enterprise customers and offers you easy paths to the Oracle Solaris 11 OS for your Oracle Solaris based applications. We continue to provide migration tools that help you make the move and allow you to take advantage of new features, latest security, and easier management that the Oracle Solaris 11 OS offers for your systems. Start your move to the Oracle Solaris 11 platform today! Check these resources:

Oracle and Sun System Software: Lifetime Support Policy
Oracle Solaris Continuous Delivery
Oracle Solaris Guarantee Program
What’s New in Solaris 11.4
Guide for Migrating to Oracle Solaris 11.4
Oracle Solaris 10 - Lift and Shift
Oracle Solaris 10 Support Explained
Announcing Oracle Solaris 11.4 SRU12
Oracle Solaris 11.4 Released for General Availability
Overview of Oracle Solaris Release Cadence
Oracle Solaris Blog


Migrating to Oracle Solaris 10 Branded Zones from Older Oracle Solaris 10 Environments

If you are running Oracle Solaris 10, you have several options to modernize your environment. In most cases, you can run your application directly on Oracle Solaris 11 and get the best performance, security, and availability. If the application absolutely requires a Solaris 10 environment, you can lift and shift to Oracle Solaris 10 guest domains or, preferably, migrate to Oracle Solaris 10 branded zones running on Oracle Solaris 11 on modern SPARC hardware.

I blogged earlier about how to perform Physical-to-Virtual (P2V) and Virtual-to-Virtual (V2V) migrations to Oracle Solaris 10 guest domains running on Oracle Solaris 11. The blogs are available here: LDOM V2V Migration, LDOM P2V Migration.

Today's blog introduces another P2V migration. This time we describe how to migrate to Oracle Solaris 10 branded zones in 3 easy steps:

1) Assess -- Run the zonep2vchk command to identify possible configuration issues that might impact the migration.
2) Lift -- Run the flarcreate command to create an archive file (FLAR) of the source environment.
3) Shift -- Run the zoneadm and zonecfg commands to instantiate the source environment into the Oracle Solaris 10 branded zone on the target system.

The following diagram illustrates the lift and shift process.

Support for Older Versions of Oracle Solaris 10

Until recently, the lift and shift utilities only supported migrating source systems that were running Oracle Solaris 10 Update 9 or later to Oracle Solaris 10 branded zones. Oracle Solaris 11.4 SRU 12 provides an improved zoneadm option that enables you to easily migrate from any Oracle Solaris 10 system. The new option automatically applies the required patches in the Oracle Solaris 10 branded zone on the target system, so there is no need to bring the source system to Oracle Solaris 10 Update 9 or later to initiate the migration. In addition, you don't need to use a separate staging environment.
Other than that, the migration is performed as described earlier in this blog. To migrate an older Oracle Solaris 10 environment (earlier than Update 9), use the zoneadm install -P patch_dir option on the target system. patch_dir is the path to the directory that contains the Solaris 10 Patchset required to update the new Solaris 10 branded zone. While you can choose to use any recent Solaris 10 CPU OS Patchset or Recommended OS Patchset, we recommend you use the most recent Solaris 10 Recommended OS Patchset. Consult MOS Doc ID 1273718.1 for information on the availability of the latest Solaris 10 patches.

This new feature clones the Zone Boot Environment (ZBE), applies the specified patches, and promotes the patched ZBE to the new boot environment for the zone, as shown in the following diagram.

If you have UFS as the root file system and SVM as the volume manager on the source machine, migration to Oracle Solaris 10 branded zones modernizes your root file system to ZFS. ZFS has better performance, scalability, and simplified administration. On the target system, your compute environment is also modernized because you are running on an Oracle Solaris 11.4 global zone and the Oracle Solaris 11.4 kernel. This delivers better availability, security, and performance for your application while preserving the Solaris 10 environment that your application requires. You can consolidate many workloads into Oracle Solaris 10 branded zones as shown below.

It Works

We validated the end-to-end migration procedure using an Oracle Database use case running on UFS and SVM on an Oracle Solaris 10 source system. After the migration, the Oracle Database is up and running in an Oracle Solaris 10 branded zone on a ZFS root file system. This migration process is documented in the guide titled Lift and Shift Guide - Migrating Workloads from Oracle Solaris 10 SPARC Systems to Oracle Solaris 10 Branded Zones.
The full collection of Lift and Shift guides is available in the Lift and Shift Documentation Library. We will continue to improve our migration tools and documentation, and we welcome your feedback.
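As a rough sketch, the three steps plus the new -P option might look like the following. The zone name, archive path, and patch directory here are illustrative placeholders, not values from the guide:

```shell
# 1) Assess: on the Solaris 10 source system, check for migration blockers.
zonep2vchk

# 2) Lift: create a FLAR archive of the source environment.
flarcreate -n s10-system /net/fileserver/export/s10-system.flar

# 3) Shift: on the Solaris 11.4 target, configure and install the branded zone.
zonecfg -z s10-zone create -t SYSsolaris10
zoneadm -z s10-zone install -a /net/fileserver/export/s10-system.flar \
    -u -P /export/s10patches   # -P applies the Solaris 10 patch set automatically
```

Consult the Lift and Shift guide referenced above for the exact options appropriate to your configuration.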


See it all at Oracle OpenWorld: Oracle Solaris 11.4 WebUI, Software in Silicon, Compliance, SPARC M8

HOL4974: Pulling Analytics Data from Oracle Solaris Through the REST API (Moscone West, Room 3023, Monday 09/16, 1pm-2pm)

At this year's Oracle OpenWorld the Solaris Product Management team will host a Hands-on-Lab (HOL) that brings together a lot of innovations Oracle has recently added to its portfolio: we will demonstrate the fastest CPU, the SPARC M8 with its Software in Silicon extensions, Oracle Solaris 11.4 and the Oracle Solaris Analytics/WebUI load and performance monitoring, the Compliance Framework to ensure your systems adhere to standards relevant to security and reliability, and the REST-based API provided by Oracle Solaris' RAD daemon.

Software in Silicon and the SPARC M8 CPU

If you have ever wondered what this Software in Silicon is all about, this Hands-on-Lab will give you the chance to see it and touch it. We have SPARC M8 based systems on-site running Oracle databases; you will get access to them in a dedicated environment, together with a load generation scenario that puts the database under load and leverages the Software in Silicon infrastructure. Furthermore, you will learn how you can monitor the CPU's activity with regard to database transactions and the use of the Software in Silicon circuitry through the Oracle Solaris WebUI. The Oracle Solaris WebUI presents a lot of detailed information on CPU activity, including Software in Silicon activity and database transaction activity, and gives you graphical insight into what's happening on the system, now or historically.

Oracle Solaris Compliance Framework

This Hands-on-Lab will also contain a section on the practical use of Oracle Solaris' Compliance Framework. You will learn how you can adapt the predefined compliance assessments to your own needs. Oracle Solaris 11.4 now allows you to "tailor" a given assessment in the sense that you don't need to run all the tests every time; compliance assessments can be reduced in size and runtime by "deactivating" tests.
We will demonstrate how tests are deactivated, how compliance assessments are run at a regular cadence, and how results are displayed in the Oracle Solaris WebUI.

Oracle Solaris 11.4 WebUI

This Hands-on-Lab will teach you how to perform more complex monitoring tasks with much less effort. We will demonstrate how you can view and link database activity (e.g. measured by tx/sec) to CPU activity and core utilization, disk I/O activity, and the use of the Software in Silicon DAX hardware accelerator. We'll show you how you can create your own custom view, called a Sheet, that combines all this info in one glance. You not only have a graphical frontend to all that performance data; any data that is visualized inside the WebUI can also be retrieved by means of a command line interface and a REST interface. Furthermore, the Remote Administration Daemon (RAD) allows administrative tasks to be performed through a REST API; we will share some examples of how these are used, both from a low-level command line interface and through more sophisticated applications like Postman. This opens Oracle Solaris monitoring and management up to any 3rd party tools that support a REST API.

Come and join us on Monday, just after lunch. If you're interested in more information about Oracle Solaris and SPARC systems, here's a blog with the list of sessions at Oracle OpenWorld this year.


Oracle Solaris 11

Expanding the Library

It's now been one year since the GA release of Oracle Solaris 11.4. During that time, we've released 12 Support Repository Updates (SRUs), with the latest, SRU 12 (aka 11.4.12), coming out last week. These releases have included hundreds of updates to the bundled FOSS packages, over 2500 bug fixes, and over 300 enhancements.

One area a few of us have been working on lately is making improvements to the standard C library, libc, that increase compatibility with the Linux & BSD libraries and make developing for Solaris easier. These include:

- I expanded our set of common memory allocation interfaces with malloc_usable_size() and reallocarray() in SRU 10, and reallocf() in SRU 12. These are explained in detail later in this post.
- In SRU 12, Ali added implementations of explicit_memset() and explicit_bzero(), which do the same things as memset() and bzero(), but under names that the compiler won't optimize out if it thinks the memory is never read after these calls.
- Ali also added the qsort_r() function in SRU 12, which allows passing data to the comparison function as an argument instead of relying on global pointers, which isn't multi-thread safe. We've followed the form proposed for standardization, as found in glibc, which differs in the order of arguments from the one found in FreeBSD 12.
- Darren added support in SRU 12 for the MADV_DONTDUMP flag to madvise() to exclude a memory region from a core dump, using the functionality already implemented as the MC_CORE_PRUNE_OUT flag to memcntl().
- Ali also updated <sys/mman.h> in SRU 12 to use void * instead of caddr_t in the arguments to madvise(), memcntl(), and mincore(). Since the C standard allows promoting any pointer type to a void * in function arguments, this should be safe unless your code has its own declaration of these functions that uses caddr_t. If necessary, you can #define __USE_LEGACY_PROTOTYPES__ to have the header use the old caddr_t prototypes instead.

We're not stopping there.
Implementations of mkostemp() and mkostemps() are currently in testing for a future SRU, and we're evaluating what to do next. My contributions were in our common memory allocators, and the effects go beyond just increasing compatibility.

The simplest of the new functions is reallocf(), which takes the same arguments as realloc(), but if the allocation fails, it frees the old pointer, so that every caller doesn't have to replicate the same boilerplate code to save the old pointer and free it manually on failure. Forgetting to do this is a very common error, as many static analysers will report, so this makes it easier to clean your code of such issues at a low cost. You may note in the SRU 12 ReadMe that we indeed used this function to clean up a couple of our own bugs:

    29405632 code to get isalist leaks memory if realloc() fails
    29405650 mkdevalloc frees wrong pointer when realloc() fails

reallocarray() is also fairly simple: like realloc() it can either allocate new memory or change the size of an existing allocation, but instead of a single size value, it takes the size of an object and the number of objects to allocate space for, like calloc(), and makes sure there isn't an integer overflow when the two are multiplied together. This saves adding integer overflow checks to many callers, and is easier to audit for correctness: one overflow check in libc instead of hundreds or more spread across a code base. I first encountered reallocarray() not long after spending months working to eradicate integer overflow bugs from the protocol handling code in X11 servers and clients. Reading about OpenBSD introducing reallocarray() made it seem like a much better way to solve these problems than all the hand-coded checks we'd added to the X code base. (Though it couldn't replace all of them, since many didn't involve memory allocations.)
Adding an implementation of reallocarray() to the Xserver and then to libX11, along with a macro to provide a simple mallocarray() equivalent that just called reallocarray() with a NULL pointer for the old value, helped prove the usefulness of having this function. As adoption spread, I nominated it for standardization, and added it to the Solaris libc as well.

The interface that was most complicated and took the most work was malloc_usable_size(). The other two could be implemented as simple wrappers around any realloc() implementation, without knowing the details of the allocation internals, and thus need just a single version in libc. However, since malloc_usable_size() has to look into the internal data structures of the memory allocator to determine the size of an allocated block, it needs a different implementation for each memory allocator. As noted in the Alternative Implementations section of the malloc(3C) man page, we currently ship 8 different implementations in Oracle Solaris 11.4: in the libc, libadimalloc, libbsdmalloc, libmalloc, libmapmalloc, libmtmalloc, libumem, and watchmalloc libraries. While there are some commonalities, there are also some major differences, so this involved several completely different sets of code across all these libraries.

While this seems like a lot of work for a rarely used function (though the few users are rather important, like the Mozilla applications and several memory/address sanitizers for systems without ADI support), it turned out to be a very worthwhile addition, as it allowed writing a common set of unit tests across all 8 malloc implementations. These tests not only helped confirm the fixes for some long-standing bugs, but also found bugs we hadn't known about in various corner cases, either by checking that allocation sizes are in the expected range, or by using malloc_usable_size() as a reliable way to check whether or not a pointer is still allocated after given operations.
For instance, a test that boils down to:

    p2 = malloc(73);
    assert(p2 != NULL);
    s = malloc_usable_size(p2);
    /* request a size that realloc can't possibly allocate */
    p = realloc(p2, SIZE_MAX - 4096);
    assert(p == NULL);
    assert(malloc_usable_size(p2) == s);

found that some of our realloc() implementations would first increase the size of the block to include any following free blocks from the memory arena and, if that wasn't enough, tried allocating a new block, but never restored the original block size if that failed, leading to wasted memory. The ancient bsdmalloc actually freed the old block before trying to allocate a new one, and if that failed, left the old block unexpectedly freed. Bugs fixed as part of this include:

    29242273 libmalloc: SEGV in realloc of sizes near SIZE_MAX
    29497791 libmalloc can waste memory when realloc() fails
    15243567 libbsdmalloc doesn't set errno when it should
    29242228 libbsdmalloc has incomplete set of malloc replacement symbols
    29497790 libbsdmalloc can free old pointer when realloc() fails
    15225938 realloc() in libmapmalloc gets SIGSEGV when cannot allocate enough memory
    29503888 libmapmalloc can waste memory when realloc() fails

So you can see we've both strengthened our compatibility story and hardened our code in the process, making Oracle Solaris 11.4 better for both developers and users.


Announcements

Oracle Solaris Now Available in the OCI Marketplace

We're pleased to announce that Oracle Solaris 11.4 images are now available in the Oracle Cloud Infrastructure (OCI) Marketplace!  This means that customers of OCI IaaS can run Solaris workloads in the cloud with full support from Oracle, which is included in your IaaS subscription.  There are two images, one for bare metal and another for virtual machines, and they're available in all OCI regions.

Using the Solaris images is very simple.  In the OCI console, select the Marketplace item in the OCI console main menu.  In the All Applications directory, you can use filters to help find the Oracle Solaris listings, as in this screenshot: Selecting one of the Solaris items will bring you to its listing screen: From the listing page, you can acknowledge the terms & conditions and select the compartment for deployment.  Clicking Launch Instance will take you to the Create Compute Instance page: I've not shown the remainder of the create instance page here, but you'll need to select the shape, provide an ssh key, and select the virtual network to which the instance will be connected.  Once the instance is launched, you'll be able to ssh in as the opc user within a couple of minutes.  That's all it takes to get started with Oracle Solaris in OCI!

You'll likely also want to review my prior blogs on using Solaris in OCI to make best use of the cloud:

- Automatic Configuration of Solaris OCI Guests
- How to Use OCI Storage Resources with Solaris
- Using Solaris Zones on OCI
- How to Install the OCI CLI on Oracle Solaris

The images come with a relatively minimal set of packages, defined by the solaris-cloud-guest group package, so most likely you'll need to add to them to run your applications; just use the pkg command to install what you need.  In particular, you may wish to add the system/locale package in order to obtain UTF-8 locales for the major languages supported by Solaris.
You'll also likely want to update to the current Support Repository Update (SRU); just follow the standard instructions for accessing the Oracle Solaris support repository. We plan to update these images on a regular basis to include recent SRUs, and welcome both questions and feedback on the image content.


Announcing Oracle Solaris 11.4 SRU 12

Today we are releasing SRU 12 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces the following enhancements:

- Python 3.7 (3.7.3) has been added to Oracle Solaris 11.4
- Device ids data files updated to current versions as of 2019.06
- Deliver XML Security library
- Deliver Lasso library
- Deliver mod_auth_mellon for Apache 2.4
- GCC 9.1 has been added to Oracle Solaris 11.4
- Add support for new SSD BearCove Plus
- Add support for new HDD LEO-B 14TB
- New Python packages have been added: atomicwrites, attrs, hypothesis, pathlib2, pluggy, scandir

Updated versions of:

- bash has been updated to 5.0 patchlevel 3
- Node.js has been updated to 8.16.0
- cryptography has been updated to 2.5
- jsonrpclib has been updated to 0.4.0
- coverage has been updated to 4.5.2
- MarkupSafe has been updated to 1.1.0
- Pygments has been updated to 2.3.1
- pyOpenSSL has been updated to 19.0.0
- hg-git has been updated to version 0.8.12
- py has been updated to 1.8.0
- pytest has been updated to 4.4.0
- zope.interface has been updated to 4.6.0
- setuptools_scm has been updated to 3.3.3
- boto has been updated to 2.49.0
- mock has been updated to 3.0.5
- psutil has been updated to 5.6.2
- astroid has been updated to 2.2.5
- lazy-object-proxy has been updated to 1.4.1
- pylint has been updated to 2.3.1
- sqlparse has been updated to 0.3.0
- pluggy has been updated to 0.9.0
- graphviz has been updated to 2.40.1
- less has been updated to 551

The following components have also been updated to address security issues:

- MySQL 5.6 has been updated to 5.6.44
- MySQL 5.7 has been updated to 5.7.26
- BIND has been updated to 9.11.8
- vim has been updated to version 8.1.1561
- irssi has been updated to 1.1.3
- Apache Tomcat has been updated to 8.5.42
- Thunderbird has been updated to 60.8.0
- Python 2.7 has been updated to 2.7.16
- Python 3.4 has been updated to 3.4.10
- Python 3.5 has been updated to 3.5.7
- Firefox has been updated to 60.8.0esr
- Wireshark has been updated to 2.6.10
- glib and xscreensaver have also been updated

New options have been added to the following utilities in the core OS:

- ps(1) can now limit the output of the last column: -W  When the output is to a terminal, lines are truncated to the width of the screen.
- ps has an option to show the associated service: ps -e -o pid,user,fmri | tail
- prstat can now monitor a service using a contract ID: -x  Report the associated SMF service for a given process.
- prstat now includes an option to print a process's zone name: -Z  Report information about processes and zones. In this mode, prstat displays separate reports about processes and zones at the same time.
- truss(1) has an option to print only syscalls which returned with an error: -N  Report only system calls which have returned an error.

libc compatibility enhancements with Linux:

- madvise() MADV_DONTDUMP support for Linux compatibility
- libc now provides explicit_bzero() and explicit_memset()
- libc now provides reallocf()
- libc now provides qsort_r()

Full details of this SRU can be found in My Oracle Support Doc 2577197.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).


Announcements

Oracle Solaris Sessions at Oracle OpenWorld 2019

This year Oracle OpenWorld will take place from Monday, Sep. 16 to Thursday, Sep. 19 at the Moscone Center in San Francisco, and as in previous years many sessions will cover Oracle Solaris topics. Oracle Solaris will also be covered in a hands-on lab session on the afternoon of the first day, where we will demonstrate how to use REST to gather information about an Oracle Solaris instance. Here is a list of these sessions, with the presenters and when they will take place:

Oracle Solaris and SPARC: Consistent, Simple, Secure (Session PRO4962)
In this session learn about the new capabilities in Oracle Solaris 11.4 and the current and future strategy for Oracle Solaris. See how Oracle Solaris and SPARC provide a consistent platform for your mission-critical workloads and why customers are moving to Oracle Solaris 11 today. Learn how our continuous delivery model makes it easier for you to adopt new capabilities, simplifies maintenance, and helps you better secure your data center while protecting your existing application investments. Discover the advanced features in Oracle SPARC that help secure your systems and applications, deliver zero-overhead end-to-end encryption, and accelerate the performance of Oracle Database and enterprise applications. (Bill Nesheim, Senior Vice President, Oracle Solaris Development, on Tuesday, Sep. 17, 12:30pm-1:15pm and 3:15pm-4:00pm)

Pulling Analytics Data from Oracle Solaris Through the REST API (Session HOL4974)
With the introduction of the StatsStore in Oracle Solaris 11.4, it's now possible to gather very rich data on application and system performance from a single location. Being able to pull this data into a data collection tool such as Nagios, Splunk, or Grafana adds huge value when monitoring and maintaining your systems.
This hands-on lab explores how you can use the integrated REST API to collect this data from Oracle Solaris into a regular open source tool such as Nagios, and even collect combined database and system data. (Joost Pronk Van Hoogeveen, Senior Principal Product Strategy Manager, and Martin Mueller, Senior Principal Software Engineer, on Monday, Sep. 16, 1:00pm-2:00pm)

Oracle Solaris: Simple, Flexible, Fast Virtualization (Session 4972)
In this session hear about the latest developments in Oracle Solaris virtualization. Learn about the new features in the recent Oracle Solaris 11.4 release, including the ability to define inter-VM dependencies on boot-up, single-step system evacuation, and storage live migration. Also learn about further enhancements to unique features such as read-only VMs (Immutable Zones), running VMs from shared storage, and the powerful network virtualization. (Joost Pronk Van Hoogeveen, Senior Principal Product Strategy Manager, on Tuesday, Sep. 17, 3:15pm-4:00pm)

Protect Your Database and App Infrastructure with Oracle Solaris Support (Session THT5014)
You have invested in your core business applications. In this session learn how Oracle Solaris gives you assurance that your investment is protected under Oracle Premier Support for Systems and Oracle Premier Support for Operating Systems. Oracle Solaris delivers peace of mind through Oracle Solaris 11 continuous delivery and the Oracle Solaris binary application guarantee. (Tamara Laborde, Premier Support Product Director, on Tuesday, Sep. 17, 4:30pm-5:00pm)

Oracle Solaris: Lift and Shift to New Platforms (Session PRO4969)
Many customers still have critical applications on older versions of Oracle Solaris. In this session learn how you can use Oracle Solaris tools to migrate your workloads to newer platforms, both systems and Oracle Solaris versions, greatly reducing costs and simplifying operations.
Running on newer platforms also provides even better security tools to meet the latest security and compliance rules. (Joost Pronk Van Hoogeveen, Senior Principal Product Strategy Manager, on Monday, Sep. 16, 11:15am-12:00pm)

Oracle Solaris: Simplifying Security and Compliance for the Enterprise (Session PRO4964)
Oracle Solaris is engineered for security at every level. It allows you to mitigate risk and prove on-premises and cloud compliance easily, so you can spend time innovating. In this session learn how Oracle combines the power of industry-standard security features, unique security and antimalware capabilities, and multinode compliance management tools for low-risk application deployments and cloud infrastructure. (Darren Moffat, Senior Software Architect, Oracle, on Tuesday, Sep. 17, 5:15pm-6:00pm)

Securing and Simplifying Data Management with Oracle Solaris and ZFS (Session PRO4971)
ZFS is the premier enterprise file system in the world, integrating compression, deduplication, and hardware-accelerated encryption seamlessly into the file system. In this session learn how you can take advantage of Oracle Solaris ZFS to secure your data from malware and ransomware, simplify your data management, significantly lower storage costs, and improve performance. (Cindy Swearingen, Principal Product Manager, on Tuesday, Sep. 17, 3:15pm-4:00pm)

Oracle Solaris: Continuous Observability of Systems and Applications (Session PRO4970)
In this session learn about the Oracle Solaris StatsStore, which provides powerful and actionable data that helps you identify trends and isolate system issues across the entire stack, from the hardware to OS, virtualization, middleware, and apps. See a live demo showing how you can use the web dashboard to view resource consumption and isolate issues. In addition, see how you can get additional details through standard and customizable sheets that can be modified to fit your specific monitoring and business needs.
(Rudolf Rotheneder, CEO, cons4you GmbH, Joost Pronk Van Hoogeveen, Senior Principal Product Strategy Manager, and Martin Mueller, Product Manager, on Tuesday, Sep. 17, 11:15am-12:00pm)

Oracle Solaris Virtualization Best Practices: Learn to Love Your System Again (Session TIP5014)
This session provides Oracle Solaris virtualization technology best practices from a top Oracle Support expert. Learn how to fully maximize your virtual environment, including performance optimization, redundancy, and availability. These best practices are key to using the complementary solutions of Oracle VM Server for SPARC and Oracle Solaris Zones, and loving their strength and capabilities. (Dave Ryder, Senior Principal Technical Support Engineer, on Tuesday, Sep. 17, 12:30pm-1:15pm)

Top Five Strategies to Optimize Support for Your Database and Apps Infrastructure (Session PRO5003)
Oracle is committed to protecting your investment in Oracle Solaris infrastructure and is mindful of your mission-critical environments and resources. The Oracle Solaris 11 continuous delivery model and Oracle Solaris binary application guarantee, combined with the virtualization features of Oracle Solaris and SPARC systems, provide you with powerful options to manage, move, and consolidate legacy application environments. This session explores five strategies to protect your investment in core business applications, increase security, modernize your infrastructure, and utilize Oracle Premier Support as a key asset to your operations. (Tamara Laborde, Premier Support Product Director, and Renato Ribeiro, Sr. Director, Infrastructure Technologies, on Wednesday, Sep. 18, 4:45pm-5:30pm)


Oracle Solaris 11

Using REST to Find Your Oracle Solaris Packages

In my previous blog on the REST API in Oracle Solaris I showed how you can enable the remote REST interface on Oracle Solaris and connect to it. I also mentioned that the REST API is layered on top of the Remote Administration Daemon (RAD), so it gives you access to all its modules. In this blog I want to highlight the IPS RAD module and how you can use it through the REST interface to perform some common IPS tasks. Where in the previous blog I showed the initial REST calls using curl and Python, this time I'll be using Postman, an API development tool that can help you quickly explore a REST API. The nice thing is that you can save the different calls, and you can also create a test run in case you want to run a series of calls in sequence. In general, once they've figured out what they want to do with the API, folks will move to more programmatic methods like Python, especially if you need to recurse through a bunch of things and react to them. But for testing and exploring, Postman works great, as in this case where I'll just be showing a few commands. In this blog I'll show how you can use the /api/com.oracle.solaris.rad.ips/1.0 IPS API to get a list of the installed packages, search for a specific package, and get its info. There are also other tasks you can do, like installing and removing a package, or triggering a system update to move to a new SRU, but that would make the post very long, so I'll write about them in a later post. Note that the documentation for the full REST API is available through the Web Dashboard by installing the webui-docs package.
Authenticating

Of course we first need to authenticate like before, but because I'm using Postman I thought I'd show how I'm doing this: Note that I'm using centrally set variables {{server}}, {{username}}, and {{password}} in the URL and the JSON payload, in part to obscure them, but most importantly so that if I want to connect to a different server or under a different user ID I can change them once and all my URLs and payloads are updated where appropriate. Equally, if you use a Postman test run you can load these in to fit your test.

Getting the List of Installed Packages

The first real step is to connect and ask for the list of installed packages. We do this by calling the info method on the PkgImage interface of the IPS RAD module, using _rad_method to indicate we want to invoke a method. This is a PUT call, so we also need to supply a JSON payload, but initially we will just send an empty { }. And thus the result looks like this: As you can see, the URI /api/com.oracle.solaris.rad.ips/1.0/PkgImage/%2F/_rad_method/info (where %2F stands for /) will give you something that looks like this:

{
  "status": "success",
  "payload": [
    { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/3073" },
    { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/3585" },
    { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/4097" },
    { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/4609" },
    { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/5121" },
    ...
    { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/320513" }
  ]
}

So you see this is a very long list of references, each connected to a PkgInfo object. The next step is to have a look at what these objects look like.

Checking the Package Info

To look at a specific PkgInfo object we append ?_rad_detail to the URI.
So taking a random reference, 227329, we have a look with a GET call, where the /api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/227329?_rad_detail URI gives:

{
  "status": "success",
  "payload": {
    "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/227329",
    "PkgInfo": {
      "pkg_name": "system/io/infiniband/ib-device-mgt-agent",
      "summary": "InfiniBand Device Manager Agent",
      "description": "The InfiniBand Device Manager Agent (IBDMA) implements limited portions of the target (agent) side of the InfiniBand Device Management class as described in InfiniBand Architecture Specification, Volume 1: Release 1.2.1. IBDMA responds to incoming Device Management Datagrams (MADS) by enumerating available target-side InfiniBand services.",
      "category": "System/Hardware",
      "state": "Installed",
      "renamed_to": null,
      "publisher": "solaris",
      "last_install_time": null,
      "last_update_time": null,
      "size": "44.84 kB",
      "fmri": { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgFmri/_rad_reference/227585" },
      "licenses": []
    }
  }
}

And as you can see, this looks a lot like the info you'd get from a regular pkg info command, no surprise I guess. The data is essentially a list of dictionaries.

Search for and Get Info on a Specific Package

Now if you want to search for a specific package or packages and get its/their info, you need to change the JSON payload to reflect which package(s) you're looking for. In this case I'm looking for the entire package:

{ "pkg_fmri_patterns": [ "entire" ] }

And this gives us a much smaller response:

{
  "status": "success",
  "payload": [
    { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgInfo/_rad_reference/2049" }
  ]
}

Here's the screenshot from Postman: We can now use this reference to get further details like we did in the example above. Note that these references are dynamic and aren't necessarily bound to a specific package, especially across different connections/authentications/reboots.
Also note that the info method gives us more ways to filter the result, but for that I'll refer to the documentation. Another way to search for a specific package using the PkgImage interface, getting a PkgFmri reference directly, is to use the list_packages method on the / image path. This results in the following URI: /api/com.oracle.solaris.rad.ips/1.0/PkgImage/%2F/_rad_method/list_packages/ where we use a similar JSON payload as before:

{ "pkg_fmri_patterns": [ "entire" ] }

With a PUT call resulting in:

{
  "status": "success",
  "payload": [
    { "href": "/api/com.oracle.solaris.rad.ips/1.0/PkgFmri/_rad_reference/2049" }
  ]
}

Again seen in Postman: On the PkgFmri interface we also have individual methods to get a specific piece of information. For example, if we use this URI: /api/com.oracle.solaris.rad.ips/1.0/PkgFmri/_rad_reference/2049/_rad_method/get_fmri with a PUT and an empty JSON payload ({ }), the response is:

{
  "status": "success",
  "payload": "pkg://solaris/entire@11.4,5.11-11.4.6.0.1.4.0:20190201T214604Z"
}

You can see that we're running 11.4 SRU 6 on this system. By the way, this interface only gives information that is stored in the package FMRI, like the package name, the publisher, the version, the timestamp, and things like that.

Summary

In the examples above I've essentially started with the PkgImage interface and then used the RAD methods info and list_packages as ways to start looking for packages. This can be refined by altering the JSON payload you send with the PUT call. I've only used one, "pkg_fmri_patterns", but there are many others that you can use with the different methods. For example, with the info method I can use "info_local": <boolean> and "info_remote": <boolean> to refine my search to either the system or the repository. Similarly, the list_packages method has "list_upgradable": <boolean> to give you a list of packages that are upgradable, which is equivalent to the CLI pkg list -u option.
So there's much more you can do with this than I've shown. There are other things, like installing and uninstalling packages, initiating a package update, and changing the package publisher, but these will have to wait for a future blog post. With that I'll close off. Enjoy.


Announcements

FIPS 140-2 for Oracle Solaris 11.4

I'm pleased to be able to announce progress on our latest FIPS 140 validation of the Cryptographic Framework in Oracle Solaris.  The version of the Cryptographic Framework in Oracle Solaris 11.4 is now listed as an Implementation Under Test for FIPS 140-2.  Previous releases of Oracle Solaris 11 have been validated to FIPS 140-2 on SPARC and x86 hardware.

For Oracle Solaris 11.4, the userspace implementation of the Cryptographic Framework underwent a fairly significant re-architecture and re-implementation at its lower levels.  One of the main motivations for this work was to assist with FIPS 140 validations by keeping the "FIPS 140 module boundary" as small as possible, but also to provide an API with lower overhead that is easier to use.  This helps address the issues I raised in a previous blog post about the impact a FIPS 140-2 validation can have on a cryptographic library.

In previous releases of Oracle Solaris 11 the "FIPS 140 module boundary" in userspace was at the PKCS #11 library layer; the target of validation for Oracle Solaris 11.4 is the libucrypto library, a lightweight cryptographic API that the PKCS #11 layer is built on top of.  The API for libucrypto is essentially the same as that used in the kernel cryptographic framework in all versions of Oracle Solaris.  As in previous validations, the use of hardware acceleration capabilities in the processor is included in the validated configuration.

Further updates will be posted when the CAVP (Cryptographic Algorithm Validation Program) certificates for each of the ciphers are available and when the final CMVP certificate is issued.  Timing of those is dependent on many parties, including Oracle, the validation lab, and NIST.

Darren


Oracle Solaris 11

Oracle Solaris Kernel Zones and Large Pages

One of the things that makes Oracle Solaris unique is how easily and transparently it can deal with different memory page sizes, officially called Multiple Page Size Support (MPSS), introduced eons ago in Oracle Solaris 9. That is to say, support for using larger memory pages than the standard page size (4KB on x86 and 8KB on SPARC); anything larger than this smallest page size is considered a "large page". The reason these large pages are important is performance. As systems grew their memory over time, using only the smallest pages to allocate application memory created a huge bottleneck in the memory translation hardware in the CPUs (like the TLB), and having a way to reference larger chunks of memory in one go became incredibly important. This resulted in the different CPU architectures supporting various larger page sizes: Intel now supports 4KB, 2MB, and 1GB pages, and SPARC supports 8KB, 64KB, 512KB, 4MB, 32MB, 256MB, 2GB, and 16GB pages. This means that if you have a very large application using many GBs or even TBs of memory, you can give it 16GB pages and use relatively few TLB entries, in contrast to the enormous number needed with the smallest pages, which results in TLB thrashing when you're running through the memory in your application. So using large pages can give a huge performance boost to large-memory applications. Other OSes allow their use too, but configuring and using them is often clunky, or requires a reboot of the OS. The ability to dynamically allocate pages of different sizes, with no admin steps necessary, makes Oracle Solaris unique. However, to quote the late, great Johan Cruijff, "Every advantage has its disadvantage" (well, he actually said "Every disadvantage has its advantage", but that's essentially the same), and in the case of large pages this is no different. One of the main problems is that in order to use a page that large, you (the kernel) need to free up that amount of memory contiguously, i.e.
one large uninterrupted block of memory. So when the application is started, it requests this memory and the kernel needs to find it and probably free it up; that is, move other smaller pages elsewhere so it can create the large page. It's analogous to parking a huge truck and needing to move some parked cars to another spot to create one large parking spot for the truck. The scales are just different than with cars and trucks: if the truck/large page is 2GB, there could be quite a few 64KB or 8KB cars/pages that need to be cleared. Now, this all just takes some time, and if all the cars/pages can be moved, it'll just make starting the application a little slower. However, sometimes certain cars/pages can't be moved, and in that case that area can't be used for a truck/large page, even if the total memory would show enough space available. And the larger the page, the higher the chance this happens. (This is one of the reasons we introduced Virtual Memory 2, a.k.a. VM2, in Oracle Solaris 11.1 to help deal with this; maybe something for a future blog.) When a system/VM boots, hardly any of its memory is in use, so everything is still nice, pristine, and empty, and allocating a bunch of large pages for an application, say an Oracle Database, poses no problems; it's like having a near-empty parking lot. However, if the system has been running for a (long) while, you can expect memory to be used all over the place. Not a problem: the applications that are running already have their allocations, all is good. Now if you stop the application for some reason, for example to patch it, this contiguous memory is freed up, and while the application isn't there anything can use it. When the application is restarted, the kernel will try to move the "squatters" out again, but sometimes it can't. Some pages aren't relocatable, for example kernel pages or certain ZFS pages.
And in that case the kernel may not be able to find the large page size it's looking for and may need to fall back to handing out smaller pages instead, impacting performance. Over the different releases of Oracle Solaris (updates and SRUs) we've introduced more and more techniques to combat this: reducing the chance it happens by grouping page-size allocations (in VM 2), reducing the use of unmovable pages, cleaning them up better, or shrinking the area of memory they can use. An example of the latter is limiting the size of the ZFS ARC cache.

Now to Kernel Zones. When you boot a Kernel Zone, you're essentially booting a very large application (one that happens to be a Type 2 hypervisor running a kernel and applications of its own), and to the hosting kernel it's just one application. When we start the Kernel Zone, it wants to use the largest page size possible, so that it in turn can hand these large pages to its own applications. Going back to the parking-lot analogy, think of it as a super-large truck, or an event tent, that in turn has cars and trucks inside it. Again, this is no problem at the initial boot of the hosting OS, but after a while you may run into trouble rebooting your Kernel Zone: if you've taken it down, run some applications in the host OS or used the filesystem a lot, and then want to boot it again.

To help control this, Oracle Solaris 11.4 SRU 1 introduced Memory Reservation Pools (see the Memory Reservation Pools section of the solaris-kz(7) man page). With these you can reserve part of the system memory to be used only by Kernel Zones. Nothing else can use this memory; it's blocked off and reserved, essentially guaranteeing that the Kernel Zones can reboot and always have enough memory, as long as you've reserved enough for all the Kernel Zones you want to run. Joerg Moellenkamp wrote an excellent blog on how to use it and verify that the memory has been allocated.
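To make the two effects above concrete, here is a small toy model in plain Python. This is not Solaris code and not any Solaris interface; the page sizes come from the text above, and everything else (the 64GB region, the 16-page map) is an illustrative assumption. It first shows how many TLB entries a region needs at different page sizes, then shows how a few unmovable pages can block a large-page allocation even when most memory is free:

```python
# Toy illustration (not a Solaris interface): TLB-entry savings from
# large pages, and fragmentation blocking a large-page allocation.

def tlb_entries(region_bytes, page_bytes):
    """TLB entries needed to map a region with one page size (ceiling division)."""
    return -(-region_bytes // page_bytes)

GB = 1024 ** 3
region = 64 * GB  # e.g. a large database's memory footprint (assumed size)
for label, size in [("8KB", 8 * 1024), ("4MB", 4 * 1024 ** 2), ("16GB", 16 * GB)]:
    print(f"{label:>5} pages: {tlb_entries(region, size):>9,} TLB entries")

def can_allocate_large_page(page_map, pages_per_large):
    """True if some aligned run of small pages is entirely free (1 = free)."""
    n = pages_per_large
    return any(all(page_map[i:i + n]) for i in range(0, len(page_map) - n + 1, n))

# 16 small pages, 12 of them free, but one unmovable page (a "parked car")
# sits in every aligned 4-page window: no 4-page large page can be built.
fragmented = [1, 1, 1, 0] * 4
print(can_allocate_large_page(fragmented, 4))  # False, despite 12 free pages

# If those pages could be relocated and packed together, it would fit:
compacted = [0] * 4 + [1] * 12
print(can_allocate_large_page(compacted, 4))   # True
```

The 64GB region needs over eight million 8KB entries but only four 16GB entries, and the second half is exactly the parking-lot problem: plenty of free space in total, yet no contiguous aligned run for the truck.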
At the last few events we did, we'd been getting questions on this topic, and I wanted to write a small blog pointing to Joerg's post; while writing it, I realized I wanted to add a bit more of the backstory, to hopefully give a better feel for what's going on. Hence the long intro to the link to Joerg's blog.


Oracle Solaris 11

Managing Oracle Solaris through REST

This blog is intended as an introduction to managing Oracle Solaris 11 through a REST interface, that is to say, managing it through http/https GET/PUT/POST/DELETE requests. Over the last few years the REST API has become a very popular way to talk to and manage services on the network. This is partly because pretty much anything can talk http/https (read: can talk over http/https and/or has an http/https server built in), so you can equally talk to and between applications, operating systems, firmware, and hardware devices; and partly because http and https as a transport can easily move through firewalls, avoiding special requests to open ports for management channels. This is especially useful in a world where things are on-prem, in the cloud, or a combination of the two. It also often has the advantage that you don't need to create a special agent that you must install, certify, maintain, and patch.

Even though connecting through REST is interesting, it's just another transport, and a transport is only interesting if there's something interesting to do over it. To keep this post from getting too long, it will concentrate on setting up and connecting through the REST interface; future posts will explore the things you can connect to and manage.

Note: there are many other ways to manage Oracle Solaris beyond the classic Command Line Interface (CLI). There are Oracle tools like Oracle Ops Center and Oracle Enterprise Manager, and popular open source change management tools like Puppet, Chef, and Ansible. All of these are options for Oracle Solaris; we give you the choice to use the tool that best fits your company's tool chain.

The ability to use REST to connect to Oracle Solaris was first introduced in 11.3, but with 11.4 we've added many more things you can manage, making it a pretty complete monitoring and management option for Oracle Solaris.
The key thing to understand is that in Oracle Solaris the REST interface is built on top of our built-in Remote Administration Daemon, a.k.a. RAD, and the REST interface essentially exposes the RAD API through REST. This means that as we add extra modules to RAD, we transparently get extra functionality in the REST interface. And because this is essentially just another extension of the RAD interface, we tend to talk about RAD/REST and also set it up as a RAD service.

Requirements

To connect over RAD/REST we require the secure https transport, and for this to work your server (i.e. the system you're connecting to) must have a valid SSL certificate in place so the client knows it can be trusted. Many production systems will have a certificate in place issued either by a public CA (Certificate Authority) or by an internal CA managed by the company (I'll call both of these a managed CA). However, quite often the systems you're testing on won't have a certificate from a managed CA; for example, you might be running Oracle Solaris in VirtualBox on your laptop. In that case, to still be able to connect to the server, you'll need to copy a certificate created by the host CA over to the system you're connecting from. By default in Oracle Solaris 11.4, the identity:cert service acts as the host CA and creates the certificates using the hostname and DNS name information it can find after installation. It puts them in /etc/certs/localhost/, and it's host.crt that you want to copy across (host.key should remain in place). It's important to realize that the names the certificate was created with must match the name the client sees; if they don't match, you'll have to create a certificate by hand and put it in the same location.

The other thing you'll need to do is enable the RAD remote service. This is an SMF service that allows you to connect remotely to the system over REST.
By default this service is turned off, in conformance with our secure-by-default policy of only enabling the bare minimum of remotely accessible, strongly authenticated services. In the future we might choose to add the RAD remote service to this bare-minimum list. Once the service is on, it'll stay on until you turn it off. Note that in Oracle Solaris 11.3 you needed to configure a custom SMF service before you could enable it; with the release of Oracle Solaris 11.4 there is a prebuilt SMF service, called rad:remote, that you only need to enable. In the Oracle Solaris 11.3 case you chose your own service parameters, like the port number. Also note that if you've created your own certificate, you'll need to make sure it's either placed in the location the rad:remote service expects or that you change the service configuration to point to your certificates. Once you have the certificates created and copied and the SMF service started, you can use your favorite client tool (curl, Python, Postman) to start talking to the server.

Setting Up the RAD/REST Service

This blog assumes you're using Oracle Solaris 11.4; if you're using Oracle Solaris 11.3 you'll need to create and configure your own rad:remote service first. For more details, read the RAD Developer's Guide.
Enabling rad:remote

On the server, enable the rad:remote service and check that it's running:

root@test_server:~# svcadm enable rad:remote
root@test_server:~# svcs -l rad:remote
fmri         svc:/system/rad:remote
name         Remote Administration Daemon
enabled      true
state        online
next_state   none
state_time   Mon Jun 24 22:22:37 2019
logfile      /var/svc/log/system-rad:remote.log
restarter    svc:/system/svc/restarter:default
contract_id  918
manifest     /lib/svc/manifest/system/rad.xml
dependency   require_all/refresh svc:/system/identity:cert (online)
dependency   require_all/none svc:/milestone/multi-user (online)
dependency   require_all/none svc:/system/filesystem/minimal:default (online)

Now that it's running, you can connect to the system remotely.

Copying the Certs

In case your system is using a certificate issued by the host CA (like in my case), you'll need to copy it across to your client system first:

-bash-4.4$ scp testuser@test_server.example.com:/etc/certs/localhost/host.crt .
Password:
host.crt                                      100% 1147   666.9KB/s   00:00

Now we can use host.crt in all our REST conversations with this server.

Testing the Connection

Before we connect to the server, I need to make a short detour to talk about the two authentication methods Oracle Solaris supports. The first one was introduced in Oracle Solaris 11.3 and uses a single step with a JSON payload that contains both username and password. It looks something like this:

{
    "username": "testuser",
    "password": "your_password",
    "scheme": "pam",
    "preserve": true,
    "timeout": -1
}

This authentication method can be found at /api/authentication/1.0/Session/. The second method was introduced in Oracle Solaris 11.4 and is a two-step process where you first send the username and, once this is met with success, you send the password.
The two JSON payloads would look something like this:

{
    "username": "testuser",
    "preserve": true
}

and:

{
    "value": {
        "pam": {"responses": ["your_password"]},
        "generation": 1
    }
}

This second authentication method can be found at /api/authentication/2.0/Session/. It was added to allow for more advanced authentication methods, like two-factor authentication, for JavaScript-based apps like the Oracle Solaris WebUI. For our purposes the first one works fine. I've put the JSON data in a file called login.json and will refer to it in the coming examples.

Using Curl

On the client, from the directory I put the certificate in, I can run:

-bash-4.4$ curl -c cookie.txt -X POST --cacert host.crt --header 'Content-Type:application/json' --data '@login.json' https://test_server.example.com:6788/api/authentication/1.0/Session/
{
    "status": "success",
    "payload": {
        "href": "/api/com.oracle.solaris.rad.authentication/1.0/Session/_rad_reference/2560"
    }
}

Note that I'm using a cookie.txt file to save the session cookies. The response shows success, plus an href if I need it. I can now, for example, ask the SMF RAD module to list the RAD services it has:

-bash-4.4$ curl -b cookie.txt --cacert host.crt -H 'Content-Type:application/json' -X GET https://test_server.example.com:6788/api/com.oracle.solaris.rad.smf/1.0/Service/system%2Frad/instances
{
    "status": "success",
    "payload": [
        "local",
        "remote"
    ]
}

Note again that I'm referring to the cookie.txt file to use the current session's cookie. You can see it knows of a local and a remote service. Now to check the current status of the rad:remote service:

-bash-4.4$ curl -b cookie.txt --cacert host.crt -H 'Content-Type:application/json' -X GET https://test_server.example.com:6788/api/com.oracle.solaris.rad.smf/1.0/Instance/system%2Frad,remote/state
{
    "status": "success",
    "payload": "ONLINE"
}

And the service is online; not a surprise, I guess.

Using Python

To illustrate how to do the same using Python, I have a short example script.
Note this is a very basic script with hardly any error handling and pretty ugly code; the point is to illustrate the way to connect. Here's my code:

import requests
import json

config_filename = "login.json"
try:
    with open(config_filename, 'r') as f:
        config_json = json.load(f)
except Exception as e:
    print(e)

# Build session
with requests.Session() as s:
    # Login to server
    login_url = "https://test_server.example.com:6788/api/authentication/1.0/Session"
    print("logging in to the Server")
    r = s.post(login_url, json=config_json, verify='host.crt')
    print("The status code is: " + str(r.status_code))
    print("The return text is: " + r.text)

    # Get list of all SMF instances of the RAD module
    query_url0 = "https://test_server.example.com:6788/api/com.oracle.solaris.rad.smf/1.0/Service/system%2Frad/instances"
    print("Getting the list of SMF instances of the RAD module")
    r = s.get(query_url0)
    print("The status code is: " + str(r.status_code))
    print("The return text is: " + r.text)

    # Getting the status of the rad:remote module
    query_url1 = "https://test_server.example.com:6788/api/com.oracle.solaris.rad.smf/1.0/Instance/system%2Frad,remote/state"
    print("Getting the status of the rad:remote module")
    r = s.get(query_url1)
    print("The status code is: " + str(r.status_code))
    print("The return text is: " + r.text)

And this is what it returns:

-bash-4.4$ python sample.py
logging in to the Server
The status code is: 201
The return text is: {
    "status": "success",
    "payload": {
        "href": "/api/com.oracle.solaris.rad.authentication/1.0/Session/_rad_reference/3840"
    }
}
Getting the list of SMF instances of the RAD module
The status code is: 200
The return text is: {
    "status": "success",
    "payload": [
        "local",
        "remote"
    ]
}
Getting the status of the rad:remote module
The status code is: 200
The return text is: {
    "status": "success",
    "payload": "ONLINE"
}

You see the same result.
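As an aside, here is a small sketch of building the two JSON payloads for the two-step 11.4 authentication method described earlier. The helper names are mine, not part of any Solaris API; the field names mirror the JSON shown above. With requests, you would POST the first payload to /api/authentication/2.0/Session/ and, on success, POST the second to the session href the server returns:

```python
# Hedged sketch: build the two payloads for the two-step (11.4-style)
# RAD/REST authentication. Helper names are illustrative, not a Solaris API.
import json

def step_one_payload(username, preserve=True):
    """First request: announce the username only."""
    return {"username": username, "preserve": preserve}

def step_two_payload(password, generation=1):
    """Second request: answer the PAM challenge with the password."""
    return {"value": {"pam": {"responses": [password]},
                      "generation": generation}}

print(json.dumps(step_one_payload("testuser"), indent=2))
print(json.dumps(step_two_payload("your_password"), indent=2))
```

Keeping the payload construction in helpers like these makes it easy to extend later, for example for the multi-step two-factor flows the 2.0 endpoint was designed for.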
More Documentation

So now you know how to connect, and the next question that tends to arise is where to find an explanation of the full RAD/REST API. There are of course the online docs on the RAD interface, which also have a section on REST. But this is by no means a complete description of the API. Plus, the API is dynamic: as we add RAD modules, there are more endpoints to talk to. To address this we've included a documentation package in Oracle Solaris 11.4 called webui-docs that, when added to the system, will give you an extra application in the Oracle Solaris WebUI. Once it's installed you'll see "Solaris Documentation" as an option below "Solaris Analytics" and "Solaris Dashboard" in the "Applications" pull-down menu. Select it and you'll see a link to "Solaris APIs"; clicking this will bring you to the full REST API description of all the RAD modules on that system.

This blog is the starter of a series we want to do about the RAD/REST interface. Others will probably go into a specific RAD module or how to combine information between RAD modules. So stay tuned.


Announcing Oracle Solaris 11.4 SRU10

Today we are releasing SRU 10 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces the following enhancements:
- Inclusion of PHP 7.3.2, which includes xdebug 2.7.0
- SCAT 5.5.1 has been added
- GNU Parallel has been updated to 20190322
- libvdpau has been updated to 1.2
- pcre2 has been updated to 10.32
- harfbuzz has been updated to 2.3.1
- util-macros has been updated to 1.19.2
- pixman has been updated to 0.38.0 and libSM to 1.2.3
- libogg has been updated to 1.3.3
- libevent has been updated to v2.1.8
- libdrm has been updated to 2.4.97
- xorg-server has been updated to 1.20.3
- GNU Emacs has been updated to 26.1
- xcb-proto has been updated to 1.13
- libxcb has been updated to 1.13.1
- libgpg-error has been updated to 1.36
- gzip has been updated to 1.10
- oniguruma has been updated to 6.8.2
- tcl has been updated to 8.6

The following components have also been updated to address security issues:
- sqlite has been updated to 3.28.0
- poppler has been updated to 0.75.0
- ntp has been updated to 4.2.8p13
- Thunderbird has been updated to 60.7.0
- Wireshark has been updated to 2.6.9
- ICU has been updated to 63.1
- Firefox has been updated to 60.7.0esr
- Imagemagick has been updated to 6.9.10-34
- Updates to the nvidia driver & gnuplot

Full details of this SRU can be found in My Oracle Support Doc 2555953.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).


Announcing Oracle Solaris 11.4 SRU9

Today we are releasing SRU 9 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces the following enhancements:
- HMP has been updated to 2.4.6, and includes support for CX5 25G and Aura8 AIC
- Solaris now shows the appropriate year for the Japanese Era name change
- Explorer 19.2 is now available
- The timezone data has been updated to 2018i
- Inclusion of new python modules (tempora, portend, setuptools_scm) to support CherryPy
- Migration of graphviz PHP bindings to version 7
- gmime 3.2.3 has been added to ship alongside the existing gmime 2.6.23

Updated versions of:
- Ruby has been updated to 2.6
- Node.js has been updated to 8.15.1
- pycups has been updated to 1.9.74
- pinentry has been updated to 1.1.0
- libgpg-error has been updated to 1.31
- tmux has been updated to 2.8
- Firefox has been updated to 60.6.3esr
- GNU coreutils has been updated to 8.30
- Automake has been updated to 1.16.1

The following components have also been updated to address security issues:
- gnupg has been updated to 2.2.8, plus inclusion of libassuan 2.5.1 and gpgme 1.11.1
- libgcrypt has been updated to 1.8.3
- webkitgtk has been updated to 2.22.6
- binutils has been updated to 2.32
- Apache Web Server has been updated to 2.4.39
- Apache Tomcat has been updated to 8.5.39
- Wireshark has been updated to 2.6.8
- pcre has been updated to 8.42
- Thunderbird has been updated to 60.6.1
- Updated versions of setuptools (39.x) and pip (10.x)
- django has been updated to 1.11.20

Full details of this SRU can be found in My Oracle Support Doc 2547299.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).


Oracle Solaris 11

Using GNOME 3 in Oracle Solaris 11.4

When we first updated the GNOME desktop in our Solaris development builds from 2.30 to 3.18, we started a wiki page for our internal test users and developers to share tips, tricks, and troubleshooting info with each other. We kept it up as we moved to GNOME 3.24 for the Oracle Solaris 11.4 release. Now that our customers have been using it for a while, we've seen some of the same issues come up, so I've published many of the items from that page to Oracle Support Document 2541799.1 and have listed them below as well. The version in My Oracle Support is likely to stay more up to date, as all our support engineers can update it, but for now most of the content is the same.

Oracle Solaris 11.4 provides the GNOME 3 desktop, an upgrade from the GNOME 2.30 desktop provided in Oracle Solaris 11.0 through 11.3. The initial Oracle Solaris 11.4 releases include GNOME 3.24, and this may be upgraded to new releases in future Solaris 11.4 Support Repository Updates (SRUs).

Documentation

GNOME has extensive online help integrated into the product, though you may need to install the documentation/gnome/* packages to get all of it. Upstream docs include:
- GNOME 3.24 User Help
- GNOME 3.24 SysAdmin Guide
- GNOME 3 Cheat Sheet

Oracle has also provided documentation of the accessibility features in the Oracle Solaris 11.4 Desktop Accessibility Guide.

Tips

- Many additional "power user" settings beyond what's in the base settings panel are available by starting the "Tweak Tool" application, either by selecting "Tweak Tool" in the application selector or running /usr/bin/gnome-tweak-tool directly.
- On the new login screen, if you want to choose your session type (the default modern GNOME Shell, GNOME Classic, or xterm), you must first enter your username, hit enter, and then, before entering your password, click the gear icon next to the login button.
- If you're using remote display (VNC, ILOM KVMS, etc.) or a less powerful graphics device, you may get better performance by going into Tweak Tool, selecting the "Appearance" tab, and turning off animations.
- GNOME Terminal has hidden the new tab option. Use shift-ctrl-t to open a new tab (or shift-ctrl-n for a new window). For the new GNOME 3 method of opening tabs, see http://worldofgnome.org/opening-a-new-terminal-tabwindow-in-gnome-3-12/
- If you prefer having simple minimize and maximize buttons on window title bars instead of having to right-click on them to bring up a menu, start Tweak Tool, go to the "Windows" tab, and turn on the "Maximize" and "Minimize" buttons under "Titlebar Buttons".
- To change the image used on the gdm login screen, see: https://wiki.archlinux.org/index.php/GDM#Log-in_screen_background_image
- To change the background image on the desktop, run Tweak Tool and go to Desktop --> Background Location. Specify a graphic file (jpg, png, gif, etc.) or, for special transition effects, an XML file. It can be in any directory. Also specify the mode ("Spanned" mode stretches the image to fit the screen).
- To prevent the screen-saver from locking the screen, run:
  gsettings set org.gnome.desktop.screensaver lock-enabled false
  gsettings get org.gnome.desktop.screensaver lock-enabled
- To prevent gnome-shell from dimming and then turning off the display, run:
  gsettings set org.gnome.desktop.session idle-delay 0
  gsettings get org.gnome.desktop.session idle-delay
- If you want a lock screen button/icon in the top panel, install the Lock Screen extension.
- gnome-terminal does not include ":" in the default list of word-separator characters, which means double-clicking on a full URL does not work; you have to click and drag. To fix this, run:
  PROFILE_ID=$(gsettings get org.gnome.Terminal.ProfilesList list | awk -F\' '{print $2}')
  dconf write /org/gnome/terminal/legacy/profiles:/:${PROFILE_ID}/word-char-exceptions '@ms "-,.;/?%&#_=+@~·:"'
- If you have a Sun keyboard and can't get keys like Copy and Paste to work: run Tweak Tool, click on the Typing tab, click on the triangle next to "Maintain key compatibility with old Solaris keycodes", and click on the "Sun Key compatibility" button.
- If gnome-terminal isn't starting (e.g., "Terminal exited with status 8"), it probably means your locale isn't set to what gnome-terminal wants. gnome-terminal-server won't exec unless LANG is set to a UTF-8 locale, such as en_US.UTF-8. Ditto for LC_ALL if it is set. (See GNOME bug 732127 - CLOSED WONTFIX, and the GNOME FAQ entry.)
- If you want to set up your desktop :0 display for remote access (aka vino), you may need to first set up the network interface for it. The GUI "Sharing" app under system tools does not function correctly on Solaris.
  gsettings set org.gnome.Vino network-interface net0
  /usr/lib/vino-server &
- If windows are all black when running inside VirtualBox, disable 3D acceleration in the VirtualBox Display settings for the VM.
- To set Emacs mode for various apps (e.g., Firefox):
  gsettings set org.gnome.desktop.interface gtk-key-theme Emacs
- ALT-TAB now switches between applications rather than between windows. If you want to embrace this, note that you can use ALT-' to switch between instances of an application. If you want to return to the more familiar behavior, you can switch it back: Activities -> Show Applications -> Settings -> Keyboard -> Shortcuts -> Navigation, then clear 'Switch Applications' and add 'ALT-TAB' to 'Switch Windows'.
- To reduce the GIANT size of desktop icons, go to Applications -> Favorites -> Files. Click on the array-of-dots icon at the upper right, click on the "Size" radio button, then scroll the size slider to the left (or to the right to make the icons even BIGGER!).
- To allow detaching the VIM-ATTENTION dialog window that pops up when two gvim sessions try to edit the same file, run /usr/bin/dconf-editor, go to org -> gnome -> shell -> overrides, and disable "attach-modal-dialogs".
- To disable the auto-maximize when you drag a window to the top of the workspace, disable "edge-tiling" in the same config panel.
- To install additional fonts, see How to Install a New Font on Oracle Solaris 11 (Doc ID 2484242.1).

Troubleshooting

"Oh No! Something has gone wrong." full-screen dialog: what can trigger it and tips to get rid of it.

Files to check for log messages on Solaris:
- /var/svc/log/application-graphical-login-gdm:default.log
- /var/log/gdm/:0-greeter.log
- /var/log/Xorg.0.log

Places you can enable more debugging info:
- Edit /usr/bin/gnome-session and add --debug to the /usr/lib/gnome-session-binary flags, then look in /var/adm/messages
- Edit /etc/gdm/custom.conf and uncomment Enable=true under [debug], then restart gdm (this enables more debug messages in both the gdm logs and the Xorg logs)

If gdm is not displaying the login screen, besides checking the above log files, check your PAM configuration. How to use the pam_debug file (Oracle Support Doc ID 1007720.1) may help with this.

If there is an error in /var/svc/log/application-graphical-login-gdm:default.log suggesting that gdm didn't start because it couldn't find the gnome-initial-setup user, add InitialSetupEnable=False to the daemon section in /etc/gdm/custom.conf and restart the gdm service.

gdm will hang if there are multiple users in /etc/passwd with the same uid. This is a known outstanding issue in the community: https://bugzilla.redhat.com/show_bug.cgi?id=1354075

Various failures can be caused by corruption or mismatches in files in $HOME/.cache; logging out, moving that directory aside or removing it, and then logging back in can help in some situations. For problems on the login screen, you may need to do this in /var/lib/gdm/.cache as well, since that screen runs as the gdm user. $HOME/.pulse contents have also caused gnome-shell issues that resulted in an unusable desktop (black screens).

- Debugging application icon and .desktop file usage in GNOME shell
- GTK+ widget/style inspector

Sites with more information

Additional reference sites:
- ArchLinux Wiki: GNOME
- GNOME Shell FAQ
- GNOME Shell Cheat Sheet

Useful gnome-shell extensions:
- Workspace Grid gives you side-to-side workspace movement as well as up and down.
- Top GNOME 3 Shell Extensions reviews 9 popular extensions for gnome-shell.


Announcing Oracle Solaris 11.4 SRU8

Today we are releasing SRU 8 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces the following enhancements:
- Integration of 28060039 introduced an issue where any firmware update/query command would log ereports, and repeated execution of such commands led to a faulty/degraded NIC. The issue has been addressed in this SRU.
- The UCB (libucb, librpcsoc, libdbm, libtermcap, and libcurses) libraries have been reinstated for Oracle Solaris 11.4
- Re-introduction of the fc-fabric service
- ibus has been updated to 1.5.19

The following components have also been updated to address security issues:
- NTP has been updated to 4.2.8p12
- Firefox has been updated to 60.6.0esr
- BIND has been updated to 9.11.6
- OpenSSL has been updated to 1.0.2r
- MySQL has been updated to 5.6.43 & 5.7.25
- libxml2 has been updated to 2.9.9
- libxslt has been updated to 1.1.33
- Wireshark has been updated to 2.6.7
- ncurses has been updated to 6.1.0.20190105
- Apache Web Server has been updated to 2.4.38
- perl 5.22
- pkg.depot

Full details of this SRU can be found in My Oracle Support Doc 2529131.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).


Events

Coming To a Place Near You - Presentations on Oracle Solaris

In the coming months we have some really nice in-person events coming your way. First, I wanted to mention that we will be holding two Oracle Solaris TechDays: on April 29th in Dubai and on May 14th in Prague. These TechDays are all-day events with in-depth presentations and demonstrations of various Oracle Solaris technologies and features, and how to use them. What also makes these TechDays special is that the presentations are given by Bill Nesheim, the VP of Engineering for Oracle Solaris, and his top engineers, giving you a unique opportunity to interact with them directly. So these are both a full day of technical presentations on Oracle Solaris.

I also wanted to point out that there are other events happening in other locations. These events will have a combination of presentations on various Oracle Systems products, like the Private Cloud Appliance (PCA), Oracle SPARC systems, the Oracle ZS-7 storage system, and Oracle Solaris, as well as a presentation on how to best refresh your estate to lower costs and risk. The events will be on April 9th in Stockholm, April 9th in Rome, April 10th in Helsinki, April 25th in Dallas TX, and April 25th in Utrecht. So if you live close to one of these events and are interested, you are cordially invited to register and attend. Going forward, more events will be organized; to find them you can always check the Oracle Events website and search on "Solaris".

And finally, for those who don't live close to these locations, we'll also be holding a Virtual Seminar on Continuous Innovation with Oracle Solaris 11.4 that anybody from anywhere can join. This Virtual Seminar is a more technical, in-depth 2-hour session on various Oracle Solaris 11.4 technologies, with examples of how they can be used. The invite and registration page will follow shortly. [Update] Here's the link to the Virtual Seminar on April 25th.


Oracle Developer Studio

Building Opensource on Solaris 11.4

I love opensource software, I use it every day, but sometimes a component I need is either missing from Oracle Solaris or present at the wrong version. We often hear the same requests from our customers too: "Please can I have version x.y of opensource software FOO on Oracle Solaris 11.4?" Oracle Solaris depends heavily on opensource software, and we try to keep it reasonably up to date while being careful not to break anything. However, we don't have every piece of opensource available, so I thought I'd address this by writing a blog to show how simple it is to use our Userland build infrastructure to build a piece of opensource I actually wanted to use this week.

So what's the software? Well, I use cscope a fair bit. Internally we have two versions, cscope and cscope-fast. The former came to us when we licensed AT&T System V Unix, and we have evolved it ever since. The latter is a fork that optimises search speed (at the expense of being slower to generate the index file). cscope-fast isn't that good at handling Python either, which I am doing a lot more of these days. I noticed that in 2001 SCO, who bought the rights to AT&T System V Unix, had opensourced cscope under the BSD license, so I thought it was worth seeing how it compared, both in terms of speed and how it handles Python.

It might now be worth a brief discussion of what "userland" actually is and how it works. Userland is our way of importing and building opensource software in a simple, repeatable way. It does so by downloading the source of a project (either as a tarball, or by using the source code management system to take a copy), then applying any patches we need to get it to work on Oracle Solaris (thankfully fewer and fewer these days), and then building it. It can even generate IPS packages if you create a package manifest.
I've done everything in 11.4 GA (General Availability) using the Oracle Technology Network (OTN) versions of Oracle Developer Studio and Oracle Solaris, but if you have a support contract you should obviously use the latest versions you can. I went to OTN and registered for the Developer Studio package repository (I won't cover that here; it's pretty straightforward and documented on the OTN site). You need to register to get certificates and keys to access it. So I booted up my 11.4 GA VM and installed the following packages to get started:

$ sudo pkg install flex git gcc-7

I knew I was going to need gcc and git; I only found out later that I needed flex to build cscope. I decided to install Developer Studio 12.6 (the OTN version of Studio 12.4 doesn't install on 11.4, because 11.4 removed the Python 2.6 interpreter; you can get a patched version of Studio 12.4 if you really need it):

$ sudo pkg install --accept developerstudio-126

We need Studio for the userland build infrastructure's setup step; we won't be using it to build cscope itself. OK, so now we're ready to start. First, create a directory where you want to do the work, then clone the repository:

$ git clone https://github.com/oracle/solaris-userland.git
Cloning into 'solaris-userland'...
remote: Enumerating objects: 1251, done.
remote: Counting objects: 100% (1251/1251), done.
remote: Compressing objects: 100% (683/683), done.
remote: Total 91074 (delta 677), reused 935 (delta 530), pack-reused 89823
Receiving objects: 100% (91074/91074), 137.57 MiB | 8.05 MiB/s, done.
Resolving deltas: 100% (34629/34629), done.

Now we have a local copy to work with. Next we need to set up the build environment.
Because we're not inside Oracle and are using Studio 12.6, we need to change a couple of things. In make-rules/shared-macros.mk, find the SPRO_VROOT definition and change it to

SPRO_VROOT ?= $(SPRO_ROOT)/developerstudio12.6

and remove the line starting with INTERNAL_ARCHIVE_MIRROR=. In make-rules/ips-buildinfo.mk, find the CANONICAL_REPO line (it's currently the last one) and change it to point to the Oracle Solaris release repository:

CANONICAL_REPO ?= http://pkg.oracle.com/solaris/release/

Now we can set up the build environment:

$ gmake setup
/bin/mkdir -p /export/home/chris/userland-src/solaris-userland/i386
Generating component list...
Generating component dependencies...
Generating component list...
Generating component dependencies...
/bin/mkdir -p /export/home/chris/userland-src/solaris-userland/i386/logs
/bin/mkdir -p /export/home/chris/userland-src/solaris-userland/i386/home
/usr/bin/pkgrepo create file:/export/home/chris/userland-src/solaris-userland/i386/repo
/usr/bin/pkgrepo add-publisher -s file:/export/home/chris/userland-src/solaris-userland/i386/repo nightly
/usr/bin/pkgrepo add-publisher -s file:/export/home/chris/userland-src/solaris-userland/i386/repo userland-localizable
/usr/bin/pkgrepo create file:/export/home/chris/userland-src/solaris-userland/i386/repo.experimental
/usr/bin/pkgrepo add-publisher -s file:/export/home/chris/userland-src/solaris-userland/i386/repo.experimental nightly
/usr/bin/pkgrepo add-publisher -s file:/export/home/chris/userland-src/solaris-userland/i386/repo.experimental userland-localizable
building tools...
/usr/gnu/bin/make -C ../tools clean
make[1]: Entering directory '/export/home/chris/userland-src/solaris-userland/tools'
make[1]: Leaving directory '/export/home/chris/userland-src/solaris-userland/tools'
/usr/gnu/bin/make -C ../tools setup
make[1]: Entering directory '/export/home/chris/userland-src/solaris-userland/tools'
make[1]: Leaving directory '/export/home/chris/userland-src/solaris-userland/tools'
Generating pkglint(1) cache from CANONICAL_REPO http://pkg.oracle.com/solaris/release/...

The last bit takes a while. Now create a directory for cscope:

$ mkdir components/cscope
$ cd components/cscope

We need to create a Makefile. This is what mine looks like:

$ cat Makefile
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64
COMPILER=gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= cscope
COMPONENT_VERSION= 15.9
COMPONENT_SRC= $(COMPONENT_NAME)-$(COMPONENT_VERSION)
IPS_COMPONENT_VERSION= $(COMPONENT_VERSION)
BUILD_VERSION= 1
COMPONENT_PROJECT_URL= https://cscope.sourceforge.net
#COMPONENT_BUGDB= utility/cscope
#COMPONENT_ANITYA_ID= 1865
COMPONENT_ARCHIVE= cscope-15.9.tar.gz
COMPONENT_ARCHIVE_URL= https://sourceforge.net/projects/cscope/files/latest/download
COMPONENT_ARCHIVE_HASH=
COMPONENT_MAKE_JOBS= 1
BUILD_STYLE= configure
TEST_TARGET= $(NO_TESTS)
include $(WS_MAKE_RULES)/common.mk

REQUIRED_PACKAGES += library/ncurses

This is about as small as I could make it. You need the download location for the tarball, plus the package and component names; I only found out later that I needed to add a dependency on library/ncurses to get packaging to resolve the dependency correctly. Later on you will want to populate COMPONENT_ARCHIVE_HASH to be sure you're getting the file you intended. BUILD_STYLE tells the makefiles to run configure within the build directory. We will also need to create a package manifest, but if we wait until the component builds cleanly we can use gmake sample-manifest to get something close to usable. So, assuming I've got the Makefile right, I should be able to build it:

$ gmake install
<stuff deleted>

This is gmake fetching the source for us:

/export/home/chris/userland-src/solaris-userland/tools/userland-fetch --file cscope-15.9.tar.gz --url 'https://sourceforge.net/projects/cscope/files/latest/download'
Source cscope-15.9.tar.gz... not found, skipping file copy
Source https://sourceforge.net/projects/cscope/files/latest/download... downloading... validating signature... skipped (no signature URL) validating hash...
skipping (no hash)
hash is: sha256:c5505ae075a871a9cd8d9801859b0ff1c09782075df281c72c23e72115d9f159
/usr/bin/touch cscope-15.9.tar.gz
<More stuff deleted>

This is it running configure (and where I found I needed flex installed):

(cd /export/home/chris/userland-src/solaris-userland/components/cscope/build/amd64 ; /usr/bin/env CONFIG_SHELL="/bin/bash" PKG_CONFIG_PATH="/usr/lib/amd64/pkgconfig" CC="/usr/gcc/7/bin/gcc" CXX="/usr/gcc/7/bin/g++" PATH="/usr/bin/amd64:/usr/bin:/usr/gnu/bin" CC_FOR_BUILD="/usr/gcc/7/bin/gcc -m64" CXX_FOR_BUILD="/usr/gcc/7/bin/g++ -m64" CPPFLAGS="-m64" "ac_cv_func_realloc_0_nonnull=yes" "NM=/usr/gnu/bin/nm" INTLTOOL_PERL="/usr/perl5/5.22/bin/perl" CFLAGS="-m64 -O3" CXXFLAGS="-m64 -O3" LDFLAGS="" /bin/bash \
/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/configure --prefix=/usr --mandir=/usr/share/man --bindir=/usr/bin --sbindir=/usr/sbin --libdir=/usr/lib/amd64 )
checking for a BSD-compatible install... /usr/bin/ginstall -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/gmkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking build system type... x86_64-pc-solaris2.11
checking host system type... x86_64-pc-solaris2.11
checking for style of include used by make... GNU
checking for gcc... /usr/gcc/7/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
<More stuff deleted>

Here it's starting gmake install in the build area:

make[4]: Entering directory '/export/home/chris/userland-src/solaris-userland/components/cscope/build/amd64/src'
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I..
-I/usr/include/ncurses -m64 -m64 -O3 -MT fscanner.o -MD -MP -MF .deps/fscanner.Tpo -c -o fscanner.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/fscanner.c
mv -f .deps/fscanner.Tpo .deps/fscanner.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT egrep.o -MD -MP -MF .deps/egrep.Tpo -c -o egrep.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/egrep.c
mv -f .deps/egrep.Tpo .deps/egrep.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT alloc.o -MD -MP -MF .deps/alloc.Tpo -c -o alloc.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/alloc.c
mv -f .deps/alloc.Tpo .deps/alloc.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT basename.o -MD -MP -MF .deps/basename.Tpo -c -o basename.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/basename.c
mv -f .deps/basename.Tpo .deps/basename.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT build.o -MD -MP -MF .deps/build.Tpo -c -o build.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/build.c
mv -f .deps/build.Tpo .deps/build.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I..
-I/usr/include/ncurses -m64 -m64 -O3 -MT command.o -MD -MP -MF .deps/command.Tpo -c -o command.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/command.c
mv -f .deps/command.Tpo .deps/command.Po

This ends up creating a prototype directory under $(BUILD) that mirrors the directory structure of the installed package. This is very useful, as pkg(7) can use it to generate a manifest for us. At this point we can run cscope from the prototype area, but that's not really very interesting; we want IPS packages to install. Fortunately, now that we have the proto area, we can run gmake sample-manifest:

$ gmake sample-manifest
/export/home/chris/userland-src/solaris-userland/tools/manifest-generate \
/export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386 | \
/usr/bin/pkgmogrify -D PERL_ARCH=i86pc-solaris-thread-multi-64 -D PERL_VERSION=5.22 -D IPS_COMPONENT_RE_VERSION=15\\.9 -D COMPONENT_RE_VERSION=15\\.9 -D PYTHON_2.7_ONLY=# -D PYTHON_3.4_ONLY=# -D PYTHON_3.5_ONLY=# -D SQ=\' -D DQ=\" -D Q=\" -I/export/home/chris/userland-src/solaris-userland/components/cscope -D SOLARIS_11_3_ONLY="#" -D SOLARIS_11_5_ONLY="#" -D SOLARIS_11_3_4_ONLY="" -D SOLARIS_11_4_5_ONLY="" -D SOLARIS_11_4_ONLY="" -D PY3_ABI3_NAMING="#" -D PY3_CYTHON_NAMING="" -D ARC_CASE="" -D TPNO="" -D BUILD_VERSION="1" -D OS_RELEASE="5.11" -D SOLARIS_VERSION="2.11" -D PKG_SOLARIS_VERSION="11.5" -D CONSOLIDATION="userland" -D CONSOLIDATION_CHANGESET="" -D CONSOLIDATION_REPOSITORY_URL="https://github.com/oracle/solaris-userland.git" -D COMPONENT_VERSION="15.9" -D IPS_COMPONENT_VERSION="15.9" -D HUMAN_VERSION="" -D COMPONENT_ARCHIVE_URL="https://sourceforge.net/projects/cscope/files/latest/download" -D COMPONENT_PROJECT_URL="https://cscope.sourceforge.net" -D COMPONENT_NAME="cscope" -D HG_REPO="" -D HG_REV="" -D HG_URL="" -D GIT_COMMIT_ID="" -D GIT_REPO="" -D GIT_TAG="" -D MACH="i386" -D MACH32="i86" -D MACH64="amd64" -D PUBLISHER="nightly" -D
PUBLISHER_LOCALIZABLE="userland-localizable" /dev/fd/0 /export/home/chris/userland-src/solaris-userland/transforms/generate-cleanup | \
sed -e '/^$/d' -e '/^#.*$/d' | /usr/bin/pkgfmt | \
cat /export/home/chris/userland-src/solaris-userland/transforms/manifest-metadata-template - >/export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-generated.p5m

So we have a manifest in /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-generated.p5m. It's not usable as generated, but it's close. For me it was a bit hit and miss, adding and removing things until it worked. The diffs I ended up with are:

$ diff -u /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-generated.p5m cscope.p5m
--- /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-generated.p5m 2019-03-28 16:51:42.034736910 +0000
+++ cscope.p5m 2019-03-28 15:58:57.122806976 +0000
@@ -23,26 +23,17 @@
 # Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
 #
+ default mangler.man.stability volatile>
 set name=pkg.fmri \
-    value=pkg:/$(IPS_PKG_NAME)@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
-set name=pkg.summary value="XXX Summary XXX"
-set name=com.oracle.info.description value="XXX Description XXX"
-set name=com.oracle.info.tpno value=$(TPNO)
+    value=pkg:/site/developer/cscope@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
+set name=pkg.summary value=cscope
+set name=com.oracle.info.description value="Source Browser"
 set name=info.classification \
-    value="org.opensolaris.category.2008:XXX Classification XXX"
+    value=org.opensolaris.category.2008:Development/System
 set name=info.source-url value=$(COMPONENT_ARCHIVE_URL)
 set name=info.upstream-url value=$(COMPONENT_PROJECT_URL)
 set name=org.opensolaris.arc-caseid value=PSARC/YYYY/XXX
-set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
-
-license $(COPYRIGHT_FILE) license='$(COPYRIGHTS)'
-
 file path=usr/bin/cscope
 file path=usr/bin/ocs
 file path=usr/share/man/man1/cscope.1
+license BSD license=BSD

So I set the category, package name, summary, description, and license manually. I also added a transform for the man page; this is required. A quick point about package names: I've put it in a hierarchy that Oracle will never publish packages in (site). We also need to consider the license. You can't delete it, so I copied the license from the cscope home page into a file called BSD in the cscope component directory.
Remove the generated package manifest (otherwise the build will try to use it instead), run pkgfmt on your manifest to make it acceptable to pkg, and then simply:

$ gmake publish
/usr/bin/env; /usr/bin/pkgdepend generate \
-m -d /export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386/mangled -d /export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386 -d /export/home/chris/userland-src/solaris-userland/components/cscope/build -d /export/home/chris/userland-src/solaris-userland/components/cscope -d cscope-15.9 -d /export/home/chris/userland-src/solaris-userland/licenses /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-cscope.mangled >/export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-cscope.depend
/usr/bin/pkgdepend resolve -e /export/home/chris/userland-src/solaris-userland/components/cscope/build/resolve.deps -m /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-cscope.depend
/usr/bin/touch /export/home/chris/userland-src/solaris-userland/components/cscope/build/.resolved-i386
<More stuff deleted>
/usr/bin/pkgsend -s file:/export/home/chris/userland-src/solaris-userland/i386/repo publish --fmri-in-manifest --no-catalog -d /export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386/mangled -d /export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386 -d /export/home/chris/userland-src/solaris-userland/components/cscope/build -d /export/home/chris/userland-src/solaris-userland/components/cscope -d cscope-15.9 -d /export/home/chris/userland-src/solaris-userland/licenses -T \*.py /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-cscope.depend.res
pkg://nightly/developer/cscope@15.9,1:20190328T160907Z PUBLISHED
<More stuff deleted>

Right, we have packages in a repository, so we can install them:

$ sudo pkg set-publisher -p
file:/export/home/chris/userland-src/solaris-userland/i386/repo
pkg set-publisher:
  Added publisher(s): nightly
  Updated publisher(s): userland-localizable
$ sudo pkg install cscope
           Packages to install:
       Create boot environment: No
Create backup boot environment: No

DOWNLOAD       PKGS   FILES   XFER (MB)   SPEED
Completed       1/1     4/4     0.2/0.2   --

PHASE                             ITEMS
Installing new actions            18/18
Updating package state database    Done
Updating package cache              0/0
Updating image state               Done
Creating fast lookup database      Done
Updating package cache              4/4

So let's run it on cscope itself and see what it looks like:

$ cd cscope-15.9/src
$ cscope -R

So there you have it: cscope, cscoping the cscope source.
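One loose end from the Makefile earlier: COMPONENT_ARCHIVE_HASH was left empty, and userland-fetch printed the sha256 it computed for the download. You can generate the value to paste into the Makefile yourself. A minimal sketch, using a stand-in file (the /tmp path is hypothetical; this assumes GNU coreutils sha256sum — on Solaris you could equally use /usr/bin/digest -a sha256):

```shell
# Stand-in for the downloaded archive; in reality this would be
# the cscope-15.9.tar.gz that userland-fetch downloaded.
printf 'archive-bytes' > /tmp/archive.tar.gz

# userland expects the value in "sha256:<hex>" form, matching what
# userland-fetch printed ("hash is: sha256:c5505ae0...").
echo "COMPONENT_ARCHIVE_HASH= sha256:$(sha256sum /tmp/archive.tar.gz | awk '{print $1}')"
```

With the hash populated, a later gmake install will refuse an archive whose contents have changed, which is the point of recording it.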


Oracle Developer Studio

Building OpenJDK 12 using JDK 8

Guest Author: Petr Sumbera

As you may know, OpenJDK 12 was recently released, and you might want to try it on Oracle Solaris. There are already some great blog posts out by Martin Mueller and Andrew Watkins describing how to build recent OpenJDK versions from source. But there is an issue: to build OpenJDK you always need the previous version as a boot JDK. Basically, to build OpenJDK 12 you need OpenJDK 11, and so on. For SPARC it's rather easy, as you can download Java SE Development Kit 11 for Oracle Solaris on SPARC — but not for Oracle Solaris on x64. There you will need to start with JDK 8, which is bundled with Oracle Solaris.

Below are copy-and-paste instructions which should build all OpenJDK versions from 9 through the latest 12. You just need Oracle Solaris 11.4 with the system header files, Mercurial, and JDK 8 installed:

pkg install mercurial system/header jdk-8

You might also want to check whether you have facet.devel set to True or False. When set to False there are no C header files on the system, so the correct setting is True:

pkg facet | grep ^devel && pkg change-facet facet.devel=True

Note that a setting of None defaults to True, i.e. it has the same effect as True.

As a compiler you will need Oracle Solaris Studio 12.4 (ideally the latest update). The installation guide is here. After a successful build you should find your fresh OpenJDK tar archives in the openjdk*/build/solaris-*-server-release/bundles/ directory.

So now copy and paste the following:

cat > build.sh <<EOF
set -xe

# Oracle Developer Studio 12.4 and JDK 8 are needed.
STUDIO="/opt/solarisstudio12.4/bin"
JDK="/usr/jdk/instances/jdk1.8.0/bin/"

# OpenJDK 12 has some issues:
# - https://bugs.openjdk.java.net/browse/JDK-8211081
# - shenandoahgc doesn't build on Solaris i386 (and is not supported on sparc)
CONFIG_OPTS_JDK12="--with-jvm-features=-shenandoahgc --disable-warnings-as-errors"

# EM_486 is no longer defined since Solaris 11.4
function fixsrc1 {
  FILE=os_solaris.cpp
  for f in hotspot/src/os/solaris/vm/\$FILE src/hotspot/os/solaris/\$FILE; do
    [ ! -f "\$f" ] || gsed -i 's/EM_486/EM_IAMCU/' "\$f"
  done
}

# caddr32_t is already defined in Solaris 11
function fixsrc2 {
  FILE=src/java.base/solaris/native/libnio/ch/DevPollArrayWrapper.c
  for f in "\$FILE" "jdk/\$FILE"; do
    [ ! -f "\$f" ] || gsed -i '/typedef.*caddr32_t;/d' "\$f"
  done
}

for VERSION in {9..12}; do
  hg clone http://hg.openjdk.java.net/jdk-updates/jdk\${VERSION}u openjdk\${VERSION}
  cd openjdk\${VERSION}/

  # OpenJDK 9 uses a script to download nested repositories
  test -f get_source.sh && bash get_source.sh

  # Some source changes are needed to build OpenJDK on Solaris 11.4.
  fixsrc1 ; fixsrc2

  [[ \${VERSION} -lt 12 ]] || CONFIGURE_OPTIONS="\${CONFIG_OPTS_JDK12}"

  # Oracle Solaris Studio 12.4 may fail while building hotspot-gtest (_stdio_file.h issue)
  PATH="\$JDK:\$STUDIO:/usr/bin/" bash ./configure --disable-hotspot-gtest \${CONFIGURE_OPTIONS}
  gmake bundles 2>&1 | tee build.log

  RELEASE_DIR=\`grep "^Finished building target" build.log | cut -d \' -f 4\`
  JDK="\`pwd\`/build/\${RELEASE_DIR}/images/jdk/bin/"
  cd ..
done
EOF

bash build.sh

This should correctly build all the versions of OpenJDK from 9 to 12. Have fun.
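One subtlety in the script above: because the here-document delimiter (EOF) is unquoted, the shell expands $ at the time build.sh is written, which is why every variable that must expand when the script later runs is escaped as \$. A minimal illustration of the difference (demo.sh, OUTER, and INNER are hypothetical names for this sketch):

```shell
# OUTER expands while the heredoc is being written out;
# \$INNER survives as a literal $INNER inside demo.sh and
# is only expanded when demo.sh itself runs.
OUTER=now
cat > /tmp/demo.sh <<EOF
echo "written-time: $OUTER"
echo "run-time: \$INNER"
EOF
INNER=later bash /tmp/demo.sh
```

This prints "written-time: now" followed by "run-time: later" — the same mechanism that lets build.sh bake in its comments while deferring \$VERSION, \$JDK, and friends to build time.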


Announcements

Registration is Open for Oracle OpenWorld and Oracle Code One San Francisco 2019

Register now for Oracle OpenWorld and Oracle Code One San Francisco. These concurrent events are happening September 16-19, 2019 at Moscone Center. By registering now, you can take advantage of the Super Saver rate before it expires on April 20, 2019.

This year at Oracle OpenWorld San Francisco, you’ll learn how to do more with your applications, adopt new technologies, and network with product experts and peers. Don’t miss the opportunity to experience:

Future technology firsthand, such as Oracle Autonomous Database, blockchain, and artificial intelligence
New connections, by meeting some of the brightest technologists from some of the world’s most compelling companies
Technical superiority, by taking home new skills and getting an insider’s look at the latest Oracle technology
Lasting memories, while experiencing all that Oracle has to offer, including many opportunities to unwind and have some fun

At Oracle Code One, the most inclusive developer conference on the planet, come learn, experiment, and build with us. You can participate in discussions on Linux, Java, Go, Rust, Python, JavaScript, SQL, R, and more. See how you can shape the future and break new ground. Join deep-dive sessions and hands-on labs covering leading-edge technology such as blockchain, chatbots, microservices, and AI. Experience cloud development technology in the Groundbreakers Hub, featuring workshops and other live, interactive experiences and demos.

Register Now and Save! Now is the best time to register for these popular conferences and take us up on the Super Saver rate. Then be sure to check back in early May 2019 for the full content catalog, where you will see the breadth and depth of our sessions. You can also sign up to be notified when the content catalog goes live.

Register now for Oracle OpenWorld San Francisco 2019
Register now for Oracle Code One San Francisco 2019

We look forward to seeing you in September!


Announcements

Future of Python on Solaris

If you are an administrator of Oracle Solaris 11 systems, you will very likely be aware that several of the critical installation and management tools are implemented in Python rather than the more traditional C and/or shell script. Python is often a good choice as a systems development language, and Oracle Solaris is not alone in choosing to use it more extensively.

If you have done any programming with Python, or have installed other Python tools, you are probably aware that there are compatibility issues between the Python 2.x and 3.x language runtimes. In Oracle Solaris 11.4.7 we deliver multiple releases of Python: 2.7, 3.4, and 3.5. Future updates to the Oracle Solaris support repository will change the exact version mix. The Python community will no longer be supporting or providing security and other fixes for the 2.7 environment from 2020, which is currently just about 9 months away.

Python Versions

A default installation of Oracle Solaris 11.4 will currently have at least Python 2.7 and a version of Python 3; /usr/bin/python by default runs Python 2.7. There are also paths that allow explicitly picking a version, which is very useful for the #! line of a script: e.g. /usr/bin/python2, /usr/bin/python3, or the even more specific /usr/bin/python3.5.

The packaging system allows you to choose which version of Python /usr/bin/python represents; there are two package mediators used to control this. The 'python' mediator selects which version /usr/bin/python represents, and the 'python3' mediator selects which minor version of the 3.x release train the /usr/bin/python3 link points to. In a future support repository update we will change the default to be a version of Python 3.
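Mediated links behave like centrally managed symlinks: /usr/bin/python3 points at whichever minor version the mediator currently selects (on a real system you would change it with something like pkg set-mediator -V 3.5 python3). A toy sketch of the underlying idea — the /tmp/med directory and the stub "interpreters" are hypothetical, purely to illustrate the link switching:

```shell
# Two stand-in interpreters that just report their version
mkdir -p /tmp/med/bin
printf '#!/bin/sh\necho 3.4\n' > /tmp/med/bin/python3.4
printf '#!/bin/sh\necho 3.5\n' > /tmp/med/bin/python3.5
chmod +x /tmp/med/bin/python3.4 /tmp/med/bin/python3.5

# The "mediator": repoint the unversioned link at the chosen minor version
ln -sf python3.5 /tmp/med/bin/python3
/tmp/med/bin/python3
```

Running the stub prints 3.5; repointing the link with `ln -sf python3.4 /tmp/med/bin/python3` flips every consumer of the unversioned path at once — which is exactly why scripts that care should pin the major.minor in their #! line instead.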
Python Modules & Solaris Tools

The Oracle Solaris 11 packaging system (IPS: the Image Packaging System) is one of the critical OS tools implemented in Python. In current releases of 11.4 it still uses Python 2.7, but we have also delivered a parallel version for testing that uses Python 3.4. We don't expect to have a single support repository update where we switch all of the OS Python consumers over to Python 3 in one release.

When we implement operating system tools in Python, the #! line should always be precise about which major.minor version the script is expected to run with. Our package dependencies then ensure the correct versions of pkg:/runtime/python are installed.

As a result of using Python to implement parts of the OS, we deliver a fairly large number of Python modules from open source communities. When we deliver a Python module via IPS it is always installed below /usr/lib/pythonX.Y/vendor-packages/. The intent of the vendor-packages area is to be like the core Python site-packages area, but to make it clear that these modules came as part of the OS and should be updated with the OS tools. This provides some level of isolation between OS-installed modules and administrator/developer use of tools such as the 'pip' package installer for Python.

Sharing the Python Installation

We have seen some recent cases where use of 'pip' to add additional modules, or different versions of modules delivered to vendor-packages, has broken critical system administration tools such as /usr/bin/pkg and /usr/bin/beadm. The reason is that vendor-packages is really just an extension of the core Python site-packages concept. While it is possible to instruct the Python interpreter to ignore the user and site packages areas for isolation reasons, that doesn't help when the OS core tools are implemented in Python and need access to content installed in the vendor-packages area.
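Python itself offers partial isolation switches on the interpreter command line: -s disables the per-user site-packages directory, and -S skips site processing entirely (which, as discussed later, would also lose vendor-packages). A quick way to confirm what -s does, as a sketch:

```shell
# With -s, Python marks the per-user site directory as disabled;
# site.ENABLE_USER_SITE reports the effective setting.
python3 -s -c 'import site; print(site.ENABLE_USER_SITE)'
```

This prints False, whereas a plain `python3 -c ...` would normally report True — the per-user area under ~/.local is simply never added to sys.path when -s is in effect.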
Oracle Solaris is not the only operating system impacted by this problem; other OS vendors are developing their own solutions, and some of the concepts and recommendations you will find are similar.

Firstly, like others, we strongly recommend the use of Python virtual environments when using pip or other Python install tools to add modules not delivered by Oracle Solaris packages. If you wish to leverage the fact that Oracle Solaris has already included some of the modules you need, you can pass '--system-site-packages' when you create the virtual environment. That allows pip/python to reference the system site-packages, and thus find the vendor-packages area, but it won't install content into /usr/lib/pythonX.Y/site-packages. For example:

$ python3 -m venv --system-site-packages /usr/local/mytool
$ source /usr/local/mytool/bin/activate
(mytool) $ python
Python 3.5.6 (default, Mar 7 2019, 08:31:32) [C] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-sunos5', '/usr/lib/python3.5/lib-dynload', '/usr/local/mytool/lib/python3.5/site-packages', '/usr/lib/python3.5/site-packages', '/usr/lib/python3.5/vendor-packages']
>>>

We can see from the above that the Python module search path includes the system and 'mytool' application areas, as well as the Solaris IPS-delivered content in vendor-packages. If we now run 'pip install', it will add any modules it needs to the site-packages directory below 'mytool' rather than the one under /usr/lib/python3.5.

Hardening the shared Python Installation

So that we can keep providing open source Python modules for administrator/developer use, and continue to use Python for the OS itself, we need a solution that protects the Python modules in vendor-packages from being overridden by administrator-installed ones in site-packages.
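You can confirm where pip would place modules in such an environment by asking the venv's own interpreter for its install path. A sketch — /tmp/mytool is a stand-in location, and --without-pip is used only to keep the example fast (sysconfig works either way):

```shell
python3 -m venv --without-pip --system-site-packages /tmp/mytool

# 'purelib' is the directory that 'pip install' targets for
# pure-Python modules: it lives inside the venv, not under /usr/lib.
/tmp/mytool/bin/python -c "import sysconfig; print(sysconfig.get_path('purelib'))"
```

The printed path is of the form /tmp/mytool/lib/pythonX.Y/site-packages, demonstrating that installs land inside the virtual environment even though the system site-packages (and hence vendor-packages) remain importable.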
We don't wish to duplicate whole Python environments, since, as I mentioned above, we currently use multiple Python versions for the OS itself. The solution we came up with is based on the following requirements:

pip and other install tools should not be able to add incompatible versions of modules to site-packages that confuse core OS tools
pip uninstall must not be able to remove packages from vendor-packages instead of site-packages; this can happen because 'vendor-packages' is higher than 'site-packages' in the Python module search path
pip and other Python installers should work in a virtualenv and remain working for the system 'site-packages' area
the solution should not require patching the Python runtime or pip to enforce the above

We will deliver parts of the solution over time, as they are ready and tested to our satisfaction; there should be no impact to any of your tooling that uses the Python we deliver.

The first change we will deliver is to further harden the core Python tools that we author against content in the 'site-packages' area that is potentially toxic to our tools. Ideally we would have been able to just pass -S as part of the #! line of the script (our tools already use -s to ignore the per-user 'site-packages'), but that would mean the vendor-packages area is never set up, since it depends on /usr/lib/pythonX.Y/site-packages/vendor-packages.pth being processed. So instead we deliver a new module, 'solaris.no_site_packages', that removes the system site-packages while leaving vendor-packages. This module only makes sense for Oracle Solaris core components; in particular, any third-party FOSS tools that are in Python do not include it.

We are still investigating the best solution to ensure that tools such as pip cannot uninstall IPS-delivered content in vendor-packages.
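The post doesn't show solaris.no_site_packages itself, but the idea — prune 'site-packages' entries from sys.path while keeping 'vendor-packages' — can be sketched conceptually. This is an illustration of the mechanism, not the actual Oracle module:

```shell
python3 - <<'EOF'
import sys

# Conceptual stand-in for solaris.no_site_packages: drop any
# site-packages entries from the module search path. Entries ending
# in vendor-packages are unaffected, since the suffix differs.
sys.path = [p for p in sys.path
            if not p.rstrip('/').endswith('site-packages')]

# After pruning, no site-packages directory remains importable.
print(any(p.endswith('site-packages') for p in sys.path))
EOF
```

This prints False. A real implementation would run at interpreter startup (e.g. imported at the top of each core tool), so that an admin's `pip install` into site-packages can never shadow the IPS-delivered modules the tool depends on.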
Patching the version of pip we deliver would only be a partial solution, since it would only work until an admin runs 'pip install --upgrade pip'; that would effectively remove any patch we made to ignore the vendor-packages area. Even with that restriction we may still apply a patch to pip. We could also deliver an /etc/pip.conf file that forces the use of virtual environments, as described in the Python 'venv' documentation. However, doing that could have unexpected side effects if you are managing to safely use 'pip install' to place content into the system 'site-packages' area.

Instead, we intend to use a feature of Oracle Solaris that allows us to tag files so they cannot be removed even by the root user (even when running with all privileges) — a feature I described in a blog post 11 years ago! IPS can already deliver system attributes (sysattr) for files. Currently we only use that in a single package (pkg://solaris/release/name), to ensure that the version files containing the value used for 'uname -v' are not easy to "accidentally" change; for that we add sysattr=readonly to the package manifest for those files. We are exploring the possibility of using the 'nounlink', or maybe even the 'immutable', attribute to protect all of the Python modules delivered by IPS to the vendor-packages directory. In theory we could do this for all content delivered via IPS that is not explicitly marked with the preserve attribute, but we probably won't go that far initially; instead we will focus on hardening the Python environment.

In summary, Python on Oracle Solaris is very much here to stay as /usr/bin/python, and we are working to ensure that you can continue to use the Python modules we deliver, while providing an environment where our core tools implemented in Python are safe from any additional Python modules added to the system.


Perspectives

How to Install the OCI CLI on Oracle Solaris

Following on to my previous posts on using Oracle Solaris with Oracle Cloud Infrastructure (OCI)... All modern clouds provide a fancy browser console that lets you easily manage any of your cloud resources.  For power users, though, the browser console is often the slowest way to get things done.  Plus, if you're trying to leverage the cloud as part of other automation, which has often been written as traditional Unix shell scripts, you may need a command line interface (CLI).  Oracle Cloud Infrastructure (OCI) provides a comprehensive CLI, the oci command, written in Python, as is true of many recent system tools.  The CLI and the underlying Python SDK are very portable, but have some dependencies on other Python modules at particular versions, so they can be somewhat complex to install on any particular OS platform.  Fortunately, Python has a powerful module known as virtualenv that makes it fairly simple to install and run a Python application and its dependencies in isolation from other Python programs that may have conflicting requirements.  This is especially an issue with modern enterprise operating systems such as Solaris, which use Python for system tools and don't necessarily update all of their modules as quickly as the cloud SDKs require.  OCI makes use of virtualenv to provide an easy installation setup for any Unix-like operating system, including Solaris 11.4.  There are just a couple of extra things we need to do in order for it to work on Solaris, so without further ado, here's a step-by-step guide to installing the OCI CLI on Solaris 11.4.  First, install Oracle Developer Studio's C compiler - any recent version should do; I've tested 12.4, 12.5 and 12.6.
Once you've obtained credentials for the package repository and configured the solarisstudio publisher, the command is simply: pkg install --accept developer/developerstudio-126/cc. Ensure the C compiler command is in your path: PATH=$PATH:/opt/developerstudio12.6/bin. Next, install the system headers and pkg-config tool: pkg install system/header developer/build/pkg-config. Set the correct include path in your environment: export CFLAGS=$(pkg-config --cflags libffi). Now follow the installation instructions from the OCI CLI documentation. Once you download and execute the basic install script, you'll have to answer several prompts regarding the installation locations for the OCI CLI components; after that it will download a series of dependencies, then build and install them to the specified location. This takes just a couple of minutes. I've included a transcript of a session below so you can see what a successful run of the OCI install script looks like.  Happy CLI-ing!

{cli-114} bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 6283 100 6283 0 0 49472 0 --:--:-- --:--:-- --:--:-- 49472 Downloading Oracle Cloud Infrastructure CLI install script from https://raw.githubusercontent.com/oracle/oci-cli/6dc61e3b5fd2781c5afff2decb532c24969fa6bf/scripts/install/install.py to /tmp/oci_cli_install_tmp__Xfw. ######################################################################### 100.0% System version of Python must be either a Python 2 version >= 2.7.5 or a Python 3 version >= 3.5.0. Running install script. python /tmp/oci_cli_install_tmp__Xfw < /dev/tty -- Verifying Python version. -- Python version 2.7.14 okay. ===> In what directory would you like to place the install? (leave blank to use '/home/dminer/lib/oracle-cli'): /export/oci/lib -- Install directory '/export/oci/lib' is not empty and may contain a previous installation.
===> Remove this directory? (y/N): y -- Deleted '/export/oci/lib'. -- Creating directory '/export/oci/lib'. -- We will install at '/export/oci/lib'. ===> In what directory would you like to place the 'oci' executable? (leave blank to use '/home/dminer/bin'): /export/oci/bin -- The executable will be in '/export/oci/bin'. ===> In what directory would you like to place the OCI scripts? (leave blank to use '/home/dminer/bin/oci-cli-scripts'): /export/oci/bin/oci-cli-scripts -- Creating directory '/export/oci/bin/oci-cli-scripts'. -- The scripts will be in '/export/oci/bin/oci-cli-scripts'. -- Downloading virtualenv package from https://github.com/pypa/virtualenv/archive/15.0.0.tar.gz. -- Downloaded virtualenv package to /tmp/tmpGBgVu5/15.0.0.tar.gz. -- Checksum of /tmp/tmpGBgVu5/15.0.0.tar.gz OK. -- Extracting '/tmp/tmpGBgVu5/15.0.0.tar.gz' to '/tmp/tmpGBgVu5'. -- Executing: ['/usr/bin/python', 'virtualenv.py', '--python', '/usr/bin/python', '/export/oci/lib'] Already using interpreter /usr/bin/python New python executable in /export/oci/lib/bin/python Installing setuptools, pip, wheel...done. -- Executing: ['/export/oci/lib/bin/pip', 'install', '--cache-dir', '/tmp/tmpGBgVu5', 'oci_cli', '--upgrade'] DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. 
Collecting oci_cli Downloading https://files.pythonhosted.org/packages/3b/8e/dd495ec5eff6bbe8e97cb84db9a1f1407fe5ee0e901f5f2833bb47920ae3/oci_cli-2.5.4-py2.py3-none-any.whl (3.7MB) 100% |████████████████████████████████| 3.7MB 3.3MB/s Collecting PyYAML==3.13 (from oci_cli) Downloading https://files.pythonhosted.org/packages/9e/a3/1d13970c3f36777c583f136c136f804d70f500168edc1edea6daa7200769/PyYAML-3.13.tar.gz (270kB) 100% |████████████████████████████████| 276kB 21.1MB/s Collecting configparser==3.5.0 (from oci_cli) Downloading https://files.pythonhosted.org/packages/7c/69/c2ce7e91c89dc073eb1aa74c0621c3eefbffe8216b3f9af9d3885265c01c/configparser-3.5.0.tar.gz Collecting python-dateutil==2.7.3 (from oci_cli) Downloading https://files.pythonhosted.org/packages/cf/f5/af2b09c957ace60dcfac112b669c45c8c97e32f94aa8b56da4c6d1682825/python_dateutil-2.7.3-py2.py3-none-any.whl (211kB) 100% |████████████████████████████████| 215kB 4.0MB/s Collecting pytz==2016.10 (from oci_cli) Downloading https://files.pythonhosted.org/packages/f5/fa/4a9aefc206aa49a4b5e0a72f013df1f471b4714cdbe6d78f0134feeeecdb/pytz-2016.10-py2.py3-none-any.whl (483kB) 100% |████████████████████████████████| 491kB 14.4MB/s Collecting cryptography==2.4.2 (from oci_cli) Downloading https://files.pythonhosted.org/packages/f3/39/d3904df7c56f8654691c4ae1bdb270c1c9220d6da79bd3b1fbad91afd0e1/cryptography-2.4.2.tar.gz (468kB) 100% |████████████████████████████████| 471kB 5.1MB/s Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... 
done Collecting click==6.7 (from oci_cli) Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB) 100% |████████████████████████████████| 71kB 1.6MB/s Collecting cx-Oracle==7.0 (from oci_cli) Downloading https://files.pythonhosted.org/packages/b7/70/03dbb0f055ee97f7ddb6c6f11668f23a97b5884fdf4826a006ef91c5085c/cx_Oracle-7.0.0.tar.gz (281kB) 100% |████████████████████████████████| 286kB 19.0MB/s Collecting retrying==1.3.3 (from oci_cli) Downloading https://files.pythonhosted.org/packages/44/ef/beae4b4ef80902f22e3af073397f079c96969c69b2c7d52a57ea9ae61c9d/retrying-1.3.3.tar.gz Collecting six==1.11.0 (from oci_cli) Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl Collecting arrow==0.10.0 (from oci_cli) Downloading https://files.pythonhosted.org/packages/54/db/76459c4dd3561bbe682619a5c576ff30c42e37c2e01900ed30a501957150/arrow-0.10.0.tar.gz (86kB) 100% |████████████████████████████████| 92kB 2.1MB/s Collecting jmespath==0.9.3 (from oci_cli) Downloading https://files.pythonhosted.org/packages/b7/31/05c8d001f7f87f0f07289a5fc0fc3832e9a57f2dbd4d3b0fee70e0d51365/jmespath-0.9.3-py2.py3-none-any.whl Collecting certifi (from oci_cli) Downloading https://files.pythonhosted.org/packages/60/75/f692a584e85b7eaba0e03827b3d51f45f571c2e793dd731e598828d380aa/certifi-2019.3.9-py2.py3-none-any.whl (158kB) 100% |████████████████████████████████| 163kB 4.5MB/s Collecting httpsig-cffi==15.0.0 (from oci_cli) Downloading https://files.pythonhosted.org/packages/93/f5/c9a213c0f906654c933f1192148d8aded2022678ad6bce8803d3300501c6/httpsig_cffi-15.0.0-py2.py3-none-any.whl Collecting pyOpenSSL==18.0.0 (from oci_cli) Downloading https://files.pythonhosted.org/packages/96/af/9d29e6bd40823061aea2e0574ccb2fcf72bfd6130ce53d32773ec375458c/pyOpenSSL-18.0.0-py2.py3-none-any.whl (53kB) 100% 
|████████████████████████████████| 61kB 14.1MB/s Collecting oci==2.2.3 (from oci_cli) Downloading https://files.pythonhosted.org/packages/d9/e1/1301df1b5ae84aa01188e5c42b41c1ad44739449ff3af2ab81790a952bb3/oci-2.2.3-py2.py3-none-any.whl (2.0MB) 100% |████████████████████████████████| 2.0MB 4.0MB/s Collecting terminaltables==3.1.0 (from oci_cli) Downloading https://files.pythonhosted.org/packages/9b/c4/4a21174f32f8a7e1104798c445dacdc1d4df86f2f26722767034e4de4bff/terminaltables-3.1.0.tar.gz Collecting idna<2.7,>=2.5 (from oci_cli) Downloading https://files.pythonhosted.org/packages/27/cc/6dd9a3869f15c2edfab863b992838277279ce92663d334df9ecf5106f5c6/idna-2.6-py2.py3-none-any.whl (56kB) 100% |████████████████████████████████| 61kB 12.6MB/s Collecting enum34; python_version < "3" (from cryptography==2.4.2->oci_cli) Downloading https://files.pythonhosted.org/packages/c5/db/e56e6b4bbac7c4a06de1c50de6fe1ef3810018ae11732a50f15f62c7d050/enum34-1.1.6-py2-none-any.whl Collecting cffi!=1.11.3,>=1.7 (from cryptography==2.4.2->oci_cli) Downloading https://files.pythonhosted.org/packages/64/7c/27367b38e6cc3e1f49f193deb761fe75cda9f95da37b67b422e62281fcac/cffi-1.12.2.tar.gz (453kB) 100% |████████████████████████████████| 460kB 5.1MB/s Collecting asn1crypto>=0.21.0 (from cryptography==2.4.2->oci_cli) Downloading https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl (101kB) 100% |████████████████████████████████| 102kB 1.6MB/s Collecting ipaddress; python_version < "3" (from cryptography==2.4.2->oci_cli) Downloading https://files.pythonhosted.org/packages/fc/d0/7fc3a811e011d4b388be48a0e381db8d990042df54aa4ef4599a31d39853/ipaddress-1.0.22-py2.py3-none-any.whl Collecting pycparser (from cffi!=1.11.3,>=1.7->cryptography==2.4.2->oci_cli) Downloading https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz (158kB) 100% 
|████████████████████████████████| 163kB 4.7MB/s Building wheels for collected packages: cryptography Building wheel for cryptography (PEP 517) ... done Stored in directory: /tmp/tmpGBgVu5/wheels/13/ad/1b/94b787de6776646c28a03dc2f4a6387e3ab375533028c58195 Successfully built cryptography Building wheels for collected packages: PyYAML, configparser, cx-Oracle, retrying, arrow, terminaltables, cffi, pycparser Building wheel for PyYAML (setup.py) ... done Stored in directory: /tmp/tmpGBgVu5/wheels/ad/da/0c/74eb680767247273e2cf2723482cb9c924fe70af57c334513f Building wheel for configparser (setup.py) ... done Stored in directory: /tmp/tmpGBgVu5/wheels/a3/61/79/424ef897a2f3b14684a7de5d89e8600b460b89663e6ce9d17c Building wheel for cx-Oracle (setup.py) ... done Stored in directory: /tmp/tmpGBgVu5/wheels/31/db/58/a89e912df33e3545643a49cd8bcfe0f513d101b9d115cbeae4 Building wheel for retrying (setup.py) ... done Stored in directory: /tmp/tmpGBgVu5/wheels/d7/a9/33/acc7b709e2a35caa7d4cae442f6fe6fbf2c43f80823d46460c Building wheel for arrow (setup.py) ... done Stored in directory: /tmp/tmpGBgVu5/wheels/ce/4f/95/64541c7466fd88ffe72fda5164f8323c91d695c9a77072c574 Building wheel for terminaltables (setup.py) ... done Stored in directory: /tmp/tmpGBgVu5/wheels/30/6b/50/6c75775b681fb36cdfac7f19799888ef9d8813aff9e379663e Building wheel for cffi (setup.py) ... done Stored in directory: /tmp/tmpGBgVu5/wheels/bb/f8/22/e3e8d9dd87e0cc6df8201325bd0ae815e701d1ef2b95571cf2 Building wheel for pycparser (setup.py) ... 
done Stored in directory: /tmp/tmpGBgVu5/wheels/f2/9a/90/de94f8556265ddc9d9c8b271b0f63e57b26fb1d67a45564511 Successfully built PyYAML configparser cx-Oracle retrying arrow terminaltables cffi pycparser Installing collected packages: PyYAML, configparser, six, python-dateutil, pytz, enum34, idna, pycparser, cffi, asn1crypto, ipaddress, cryptography, click, cx-Oracle, retrying, arrow, jmespath, certifi, httpsig-cffi, pyOpenSSL, oci, terminaltables, oci-cli Successfully installed PyYAML-3.13 arrow-0.10.0 asn1crypto-0.24.0 certifi-2019.3.9 cffi-1.12.2 click-6.7 configparser-3.5.0 cryptography-2.4.2 cx-Oracle-7.0.0 enum34-1.1.6 httpsig-cffi-15.0.0 idna-2.6 ipaddress-1.0.22 jmespath-0.9.3 oci-2.2.3 oci-cli-2.5.4 pyOpenSSL-18.0.0 pycparser-2.19 python-dateutil-2.7.3 pytz-2016.10 retrying-1.3.3 six-1.11.0 terminaltables-3.1.0 ===> Modify profile to update your $PATH and enable shell/tab completion now? (Y/n): n -- If you change your mind, add 'source /export/oci/lib/lib/python2.7/site-packages/oci_cli/bin/oci_autocomplete.sh' to your rc file and restart your shell to enable tab completion. -- You can run the CLI with '/export/oci/bin/oci'. -- Installation successful. -- Run the CLI with /export/oci/bin/oci --help
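The transcript above installs the CLI inside a virtualenv. If you ever need to check from Python whether a given interpreter is the system one or lives inside an environment like the one created here, the test is a simple prefix comparison. This is a generic sketch, not part of the OCI installer:

```python
import sys

def in_virtualenv(prefix=None, base_prefix=None):
    """Return True if the given (or current) interpreter runs inside a
    venv/virtualenv.  Inside an environment, sys.prefix points at the
    environment directory while sys.base_prefix still names the system
    installation, so the two differ."""
    if prefix is None:
        prefix = sys.prefix
    if base_prefix is None:
        base_prefix = getattr(sys, "base_prefix", prefix)
    return prefix != base_prefix
```
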


Perspectives

Because the NSA Says So

This post is in part tongue-in-cheek, and in part serious. Last week someone told me that even the NSA (yes, the National Security Agency of the USA) now advises to upgrade to Oracle Solaris 11.4, and my initial thought was "sure they do", but I was swiftly pointed to this very recent Cybersecurity Advisory they had put out (which also acts as guidance for the various government agencies). And sure enough, it states: "Upgrade to Solaris® 11.4 from earlier versions of Solaris® and apply the latest Support Repository Update (SRU)." It also says that the best way to keep your systems and applications safe is to keep up to date with the latest SRU, or at least "every third SRU", which we call a Critical Patch Update, a.k.a. a CPU. They state that this "contains critical fixes for multiple vulnerabilities, including those documented as Common Vulnerabilities and Exposure (CVE®2) entries". Of course we don't hold any critical fixes back for each CPU; we release them in the next SRU available. But if you can't update the system more often than every quarter, the CPUs are the best SRUs to go for. By the way, when we say the CPU comes out "every quarter", we mean it's the SRU that comes out in January, April, July, and October. So if you have restricted update possibilities in your schedule it's best to focus on these dates, even if you only update once every 6 or 12 months. The simplest way to do this is to continuously update this package every month: # pkg update solaris-11-cpu@latest It is also good to note that applying an older SRU or CPU doesn't help, as it doesn't "mature" over time; it is a point-in-time version of what we think is the best, most secure, most stable version of Oracle Solaris. So when applying an SRU or CPU please always consider the newest one. You could choose to apply the latest CPU even if there are newer SRUs out, but holding off doesn't make a version better. This of course holds for Oracle Solaris versions and updates too.
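The quarterly cadence above (January, April, July, October) is easy to encode when planning update windows. A small illustrative sketch, not an Oracle tool:

```python
# CPU (Critical Patch Update) SRUs ship in January, April, July and October.
CPU_MONTHS = (1, 4, 7, 10)

def next_cpu_month(month):
    """Return the month number (1-12) of the next CPU on or after
    the given month, wrapping around to January if needed."""
    for m in CPU_MONTHS:
        if m >= month:
            return m
    return CPU_MONTHS[0]  # past October: next CPU is next January
```
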
So if you're still running Oracle Solaris 10 or maybe Oracle Solaris 11.3, unless it is absolutely necessary or there's some technical reason holding you back, we strongly advise moving/updating to Oracle Solaris 11.4. And if you're running an earlier version of Oracle Solaris 11, this is as easy as applying an SRU, and it gives you an easy way to roll back if necessary.  Oracle Solaris 11.4 and its SRUs/CPUs are simply the best, most secure version of our Operating System. And now if you get pushback "why should we move to 11.4?" you can say: "Because the NSA says so."


Announcing Oracle Solaris 11.4 SRU7

Today we are releasing SRU 7 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1. This SRU introduces the following enhancements:

- Oracle VM Server for SPARC 3.6.1: Reliability improvements in LDoms Manager startup, synchronization with Hypervisor and SP, and several 'ldm' subcommands; improvements in Oracle VM Tools to better enable moving existing workloads onto modern HW, OS & virtualized environments; Live Migration reliability improvements and improved messaging when problems occur. For more information, see the Oracle VM Server for SPARC 3.6.1 Release Notes
- The Ninja build system has been added to Oracle Solaris
- GNU awk has been updated to 4.2.1
- FFTW has been updated to 3.3.8
- The Nghttp2 library has been updated to 1.35.0
- tigervnc has been updated to 1.9.0
- A fix for 'pkg info' no longer showing Last Install/Update Time after a catalog refresh
- Many fixes and improvements for mdb and PTP

There has been a latent bug in the '/usr/bin/grep' command such that, when used with the '-w' option, it did not correctly report on the patterns it was supposed to match. This has been present for a while. The bug is now fixed; however, this may mean that more results are returned to any script that uses the command. See the README for examples of the behaviour.

The following components have also been updated to address security issues:

- dnsmasq has been updated to 2.80
- webkitgtk has been updated to 2.22.5
- NSS has been updated to 4.38
- libdwarf has been updated to 20190104
- GNU Tar has been updated to 1.31
- libarchive has been updated to 3.3.3
- readline has been updated to 7.0
- Firefox has been updated to 60.5.1esr
- curl has been updated to 7.64.0
- mercurial has been updated to 4.7.1
- Python: python 3.4 has been updated to 3.4.9; python 3.5 has been updated to 3.5.6
- golang, openjpeg & openssh

Full details of this SRU can be found in My Oracle Support Doc 2517183.1.
For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).
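The 'grep -w' behaviour mentioned above — matching the pattern only as a complete word — corresponds to word-boundary anchors in regular expressions. A quick illustration of the intended semantics in Python (a sketch for explanation, not the grep fix itself):

```python
import re

def word_match(pattern, line):
    """Emulate grep -w for a fixed string: the pattern must match as a
    whole word, i.e. be delimited by non-word characters or line ends."""
    return re.search(r"\b" + re.escape(pattern) + r"\b", line) is not None
```
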


Oracle Solaris 11

Recent Blogs on Oracle Solaris

On another forum, the other day, Alan Coopersmith shared a nice list of recent blogs on Oracle Solaris and related topics, and I thought that this would be good to share on this blog roll too. It has blogs from this blog roll as well as many others, and it gives a nice insight into the recent activity in this area as well as the new features delivered via the "Continuous Innovation" process in the 11.4 SRUs. Here's the list:

- Update on Oracle Java on Oracle Solaris — https://blogs.oracle.com/solaris/update-on-oracle-java-on-oracle-solaris
- OpenJDK on Solaris/SPARC — https://blogs.oracle.com/solaris/openjdk-on-solarissparc-v2
- Building OpenJDK 12 on Solaris 11 x86_64 — http://notallmicrosoft.blogspot.com/2019/02/building-openjdk-12-on-solaris-11-x8664.html
- Configuring auditing for select files in Oracle Solaris 11.4 — http://blog.moellenkamp.org/archives/42-Configuring-auditing-for-select-files-in-Oracle-Solaris-11.4.html
- Oracle Solaris Repository Reorganisation — https://blogs.oracle.com/solaris/oracle-solaris-repository-reorganisation-v2
- Customize Network Configuration with Solaris Automated Installs — https://blogs.oracle.com/solaris/customize-network-configuration-with-solaris-automated-installs-v2
- Customize network configuration in a Golden Image — https://blogs.oracle.com/solaris/customize-network-configuration-in-a-golden-image
- Migrate Oracle Solaris 10 Physical Systems with ZFS root to Guest Domains in 3 Easy Steps — https://blogs.oracle.com/solaris/migrate-oracle-solaris-10-physical-systems-with-zfs-root-to-guest-domains-in-3-easy-steps
- Oracle Solaris 10 in the Oracle Cloud Infrastructure — https://blogs.oracle.com/solaris/oracle-solaris-10-in-the-oracle-cloud-infrastructure
- A brief history of the admins time — http://blog.moellenkamp.org/archives/43-A-brief-history-of-the-admins-time.html
- Audit Annotations — http://blog.moellenkamp.org/archives/32-Less-known-Solaris-Feature-Audit-Annotations.html
- Discover the New Oracle Database Sheet — https://blogs.oracle.com/solaris/discover-the-new-oracle-database-sheet
- Database performance sheet in Solaris Dashboard in 11.4 SRU 6 — http://blog.moellenkamp.org/archives/33-Database-performance-overview-for-Solaris-11.4-SRU6.html
- Latency distribution with iostat — http://blog.moellenkamp.org/archives/34-Latency-distribution-with-iostat.html
- Filesystem latencies with fsstat — http://blog.moellenkamp.org/archives/36-Filesystem-latencies-with-fsstat..html
- Periodic scrubs of ZFS filesystems — http://blog.moellenkamp.org/archives/37-Periodic-scrubs-of-ZFS-filesystems.html
- Adding datasets to running zones — http://blog.moellenkamp.org/archives/44-Adding-datasets-to-running-zones.html
- Memory Reservation Pools in Oracle Solaris 11.4 SRU1 — http://blog.moellenkamp.org/archives/38-Memory-Reservation-Pools-in-Oracle-Solaris-11.4-SRU1.html
- Labeled Sandboxes — http://blog.moellenkamp.org/archives/39-Labeled-Sandboxes.html
- DTrace %Y print format with nanoseconds — http://milek.blogspot.com/2019/03/dtrace-y-print-format-with-nanoseconds.html
- RAID-Z improvements and cloud device support — http://milek.blogspot.com/2018/11/raid-z-improvements-and-cloud-device.html
- Solaris: Spectre v2 & Meltdown fixes — http://milek.blogspot.com/2018/10/solaris-spectre-v2-meltdown-fixes.html

And then for some more fun:

- The Story of Sun Microsystems PizzaTool — https://medium.com/@donhopkins/the-story-of-sun-microsystems-pizzatool-2a7992b4c797
- DTrace on ... what ??? Err, DTrace on Windows — http://blog.moellenkamp.org/archives/48-DTrace-on-...-what-Err,-DTrace-on-Windows.html

As you can see, many of these blogs are by Joerg Moellenkamp, so if you still don't have enough you can find much more at: http://blog.moellenkamp.org Enjoy.


Announcements

Discover the New Oracle Database Sheet

As part of the Oracle Solaris 11.4 release we introduced the new StatsStore and Dashboard functionality, allowing you to collect and analyze stats from all over the system that Oracle Solaris is running on. Some of these stats come from the Oracle Solaris kernel and some from userland applications. One of the interesting things about the design of the StatsStore and Dashboard, however, is that it also allows you to collect data from other sources, like applications running on Oracle Solaris. As part of our Continuous Delivery strategy we're taking new functionality from our development gate of the next Oracle Solaris update and releasing it as part of the Oracle Solaris 11.4 SRUs. An example of this is the release of the new Database Stats Sheet as part of Oracle Solaris 11.4 SRU 6. It uses this ability of the StatsStore to collect stats from userland applications and shows them in a new sheet of the Oracle Solaris Dashboard. To make this as easy as possible we've introduced new commands, statcfg(1) and rdbms-stat(1), that combine to do all the configuration for you. This configures and starts a new SMF service that connects with the Oracle Database, pulls key performance data from the Database, and stores it in the StatsStore. To do this it creates a new set of stats in the StatsStore and defines what they are. Additionally it creates a new Database sheet that gives a default view of these Database stats together with certain stats coming from Oracle Solaris, giving the administrator a unique graphical representation of this combined data. This blog will give an example of how this is all configured and show what the default Database sheet looks like.

Configuring the Database Wallet

One of the key steps in setting this up is to configure the Oracle Database Wallet so the SMF service can access the V$ tables in the database.
The key thing is that the database user you configure in the wallet must have the SYSDBA connect privilege for the database SID you want to monitor. This is something the database admin will have to do for you if it isn't already allowed. For this example I'll use a database setup that we used for our Hands on Lab at Oracle OpenWorld last year, where we have a container database (CDB) and a pluggable database (PDB). We'll be configuring the wallet for this PDB. Before we add anything to the wallet we'll need to ensure that the tnsnames.ora file has the appropriate alias and connection string to connect to the database SID. By default it's located in $ORACLE_HOME/network/admin/, but you could also put it somewhere else and have a symbolic link in this default directory that points to it. Here's our tnsnames.ora file:

-bash-4.4$ cat $ORACLE_HOME/network/admin/tnsnames.ora
mypdb = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1) ) )

So the SID for the PDB is pdb1, but the alias we've created is called mypdb. We'll also need to amend the sqlnet.ora file to make sure it points to where we want to locate our wallet file when something connects to the database. It too is located in $ORACLE_HOME/network/admin/ by default, but can be located somewhere else if you want. Here's our sqlnet.ora:

-bash-4.4$ cat $ORACLE_HOME/network/admin/sqlnet.ora
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/export/home/oracle/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE

Now we can create the wallet:

-bash-4.4$ mkstore -wrl $HOME/wallet -create
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
Enter password:
Enter password again:

Now that we have the wallet, we can add keys to it. Here we add the key for our mypdb instance and check that it's stored correctly:

-bash-4.4$ mkstore -wrl $HOME/wallet -createCredential mypdb sys "<my_database_password>"
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
-bash-4.4$ mkstore -wrl $HOME/wallet -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
List credential (index: connect_string username)
1: mypdb sys

Note that you would put your own database password instead of "<my_database_password>" and that this password is now stored behind entry #1. We can now check that it all works correctly:

-bash-4.4$ sqlplus /@mypdb as SYSDBA
SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 28 05:15:34 2019
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL>

It works! We can connect through the wallet. This is essentially what the SMF process will do.

Installing the statcfg and oracle-rdbms-stats Packages

Now to install the packages for the new Database sheet. The main new package that pulls data into the StatsStore is pkg:/service/system/statcfg; however, there's an additional package that configures statcfg and understands how to pull data in from an Oracle Database. This package is called pkg://solaris/service/oracle-rdbms-stats, or oracle-rdbms-stats for short. And because it depends on the statcfg package, we only need to add the oracle-rdbms-stats package and it will pull statcfg in.
Become root and install the package:

-bash-4.4$ su -
Password:
Oracle Corporation SunOS 5.11 11.4 January 2019
root@HOL3797-19:~# pkg install oracle-rdbms-stats
Packages to install: 6
Mediators to change: 1
Services to change: 1
Create boot environment: No
Create backup boot environment: No
DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 6/6 92/92 146.3/146.3 82.8M/s
PHASE ITEMS
Installing new actions 208/208
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Updating package cache 1/1

Note that it pulled in a total of 6 packages, so 4 more besides the statcfg package. At this point we're ready to configure the connection.

Configuring the New Service with statcfg

Now we're ready for the final steps. First, still as the root role, give the user oracle the authorizations to write into and update the StatsStore:

root@HOL3797-19:~# usermod -A +solaris.sstore.update.res,solaris.sstore.write oracle
UX: usermod: oracle is currently logged in, some changes may not take effect until next login.

We can ignore this last message, as the thing we care about is the new oracle-database-stats service and its process running as user oracle; when the service is started it will automatically pick up these authorizations. Now we configure the new service. Because we're still running the root role, the environment settings needed to find the database aren't set, and so the database connection string and the wallet wouldn't be found either.
So we set ORACLE_HOME:

root@HOL3797-19:~# export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1

Then we use statcfg(1) to configure the new service, plus we make sure the new service also has ORACLE_HOME set:

root@HOL3797-19:~# /usr/bin/statcfg oracle-rdbms -u oracle -g oinstall -s mypdb -c mypdb
Creating a new service instance for 'mypdb'
root@HOL3797-19:~# svccfg -s oracle-database-stats:mypdb setenv ORACLE_HOME /u01/app/oracle/product/12.2.0/dbhome_1

At this point you can check whether the service has started successfully with either svcs -x or svcs -l oracle-database-stats:mypdb. The first will show if any services are having issues, and the second will show you the current status of the new oracle-database-stats:mypdb service. Note: There was an early glitch where, after installing the new packages, sstored wasn't restarted, resulting in a problem with the new //:class.app/oracle/rdbms class not being added to the StatsStore. This is easy to fix by restarting the sstore service. If oracle-database-stats:mypdb has gone into maintenance because of this, you can easily clear it once the sstore service is restarted. This should be addressed soon, if not already by the time you read this.

Looking at the new Database Sheet

Running the statcfg command not only created and configured a new service that pulls data from the Oracle Database, it also creates a new sheet for the Web Dashboard; to look at this, connect to the Dashboard on port 6787. Once there, navigate to the page that shows you the overview of all the sheets you can choose from and you should see these two added: The one on the right is dedicated to the new oracle-database-stats:mypdb service and if you click on it, it should look something like this: This is a screenshot of my database running a swingbench workload to show what type of information you get to see.
The top two graphs come from the Oracle Database (the v$system_event and v$sysstat tables) and give an insight into the activity inside the database; the next is a graph showing all the Oracle processes and their memory size (these are mainly the database shadow processes sharing memory). The other graphs show general system stats. All of these come out of the StatsStore. (I've zoomed in on the top graph quite a lot to get a nice picture; otherwise you only see the one or two peaks and hardly the small stuff in between.) This sheet is a basic example that is provided to give initial insights into what is happening inside and outside the database. The nice thing about the Dashboard is that you can create your own new sheet, either by copying this sheet and editing it or by creating a brand new one, and this way you can combine these database stats with other stats to fit your needs. But that's for another blog. Alternatively you can also use the REST interface to pull the stats out into the central monitoring tools of your choice. This too is something for another blog. Note 1: It is good to know that this functionality is focused on local databases. When configuring tnsnames.ora you can set the connection string to connect to a database on a remote system, and the configured oracle-database-stats service will then pull in stats from this remote database, but the StatsStore won't have any OS-based stats to complement it, and the automatically created Web Dashboard sheet won't show correlated data. Note 2: There's no additional license required to gather this data, so this will still work even if you don't have the Diagnostics Pack enabled. Of course, this functionality is not built to replace the Diagnostics Pack and the things it gives you (like AWR reports); it's there to give you current and historic data from the database and the OS as a starting point to better investigate where possible issues are.
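As a sketch of the REST approach mentioned above: the Dashboard serves stats over HTTPS on port 6787, so pulling a stat from an external monitoring tool boils down to building a URL for the stat's SSID and issuing a GET. The endpoint path below is a placeholder, not the documented sstore API; consult the Oracle Solaris analytics documentation for the real resource names:

```python
from urllib.parse import quote

def stat_url(host, ssid, port=6787, api_root="/api/sstore"):
    """Build a URL for fetching one stat from the StatsStore REST
    service.  api_root is a placeholder path; the SSID is
    percent-encoded since it contains '/' and ':' characters."""
    return "https://{}:{}{}/{}".format(host, port, api_root, quote(ssid, safe=""))
```
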



OpenJDK on Solaris/SPARC

Building OpenJDK Java Releases on Oracle Solaris/SPARC

Starting with Java SE 8, Oracle created the concept of Long-Term-Support (LTS) releases, which are expected to be used and supported for an extended amount of time and for production purposes. Oracle Java SE 8 and Java SE 11 are LTS releases and are available and supported for Oracle Solaris, subject to licensing and support requirements detailed here https://blogs.oracle.com/solaris/update-on-oracle-java-on-oracle-solaris and here https://www.oracle.com/technetwork/java/java-se-support-roadmap.html. Oracle Java SE 9 and Java SE 10 were considered a cumulative set of implementation enhancements of Java SE 8. Once a new feature release is made available, any previous non-LTS releases are considered superseded. The LTS releases are designed to be used in production environments, while the non-LTS releases provide access to new features ahead of their appearance in an LTS release. The vast majority of users rely on Java SE 8 and Java SE 11 for their production use. However, Oracle also makes available OpenJDK builds that allow users to test and deploy the latest Java innovations. If you find a need to run a development release of Java on Oracle Solaris, you can access the freely available OpenJDK Java releases http://openjdk.java.net/, under an open source license http://openjdk.java.net/legal/gplv2+ce.html. This article describes how to build a Java 12 JDK from OpenJDK sources on a recent Oracle Solaris 11.4 SRU 6 installation on an Oracle SPARC T8-1 system.

Preparing the Operating System

The first step in preparing the OS installation for development purposes is to check that the right facet is set:

root@t8-1-001:~# pkg facet
FACET VALUE SRC
devel True local
...

Make sure that the "devel" facet is set to true (if it weren't, packages like system/header would not install the full set of include files, and thus one would not be able to compile).
In case your operating system installation has it set to false, a simple pkg change-facet "facet.devel=True" will fix this; Oracle Solaris will install all the missing files, among them those you would otherwise miss in /usr/include.

The OpenJDK website maintains a list of suggested and sometimes tested build environments at https://wiki.openjdk.java.net/display/Build/Supported+Build+Platforms; for most Java versions Oracle Solaris Studio 12.4 has been used. We will also use version 12.4, although the latest version according to the list of build environments should work as well. The build process also needs a few more development tools, namely the GNU binutils suite, the Oracle Solaris assembler, and Mercurial to clone the development tree from OpenJDK:

root@t8-1-001:~# pkg install mercurial developer/assembler cc@12.4 c++@12.4 autoconf gnu-binutils
...

Building a JDK from OpenJDK sources also requires a so-called "boot JDK", which usually is a JDK one version older than the JDK that is to be built. In our case we need a JDK 11, which is available from Oracle's Technology Network pages, https://www.oracle.com/technetwork/java/javase/downloads/jdk11-downloads-5066655.html. It need not be installed as root; in our example it was installed right into the home directory of a user "oracle" used to build the JDK 12. Now the system is ready to be used as a build environment for OpenJDK.

Building the JDK

An OpenJDK build is similar to many other open source builds: one first has to "configure" the environment and then do the actual build. Note that it is important to use the GNU variant of make, gmake; the "normal" make is not able to process the Makefiles generated during the "configure" step. But we haven't downloaded the sources yet; these are provided as a Mercurial repository.
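Taken together, the preparation described above boils down to a few commands (a sketch; the boot-JDK archive name and install location are assumptions based on the example user "oracle"):

```shell
# Deliver the full development headers
pkg change-facet "facet.devel=True"

# Install the build tools mentioned above
pkg install mercurial developer/assembler cc@12.4 c++@12.4 autoconf gnu-binutils

# Unpack the JDK 11 boot JDK into the build user's home directory
# (archive name assumed; downloaded separately from Oracle Technology Network)
tar xzf jdk-11.0.2_solaris-sparcv9_bin.tar.gz -C /export/home/oracle/
```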
To download the source tree, the repository has to be cloned:

oracle@t8-1-001:~$ hg clone http://hg.openjdk.java.net/jdk/jdk12 jdk12-build

This clones the OpenJDK 12 build tree into the directory "jdk12-build". The process takes a few minutes, depending on the speed of your network connection. The configure step is straightforward and sets up the build directory for the actual build process:

oracle@t8-1-001:~/jdk12-build$ chmod +x configure
oracle@t8-1-001:~/jdk12-build$ ./configure --with-devkit=/opt/solarisstudio12.4/ --with-boot-jdk=/export/home/oracle/jdk-11.0.2/

If one now issued a gmake, one would run into a compile error: src/hotspot/os/solaris/os_solaris.cpp references the undefined constant "EM_486". Oracle Solaris 11.4 removed EM_486 from sys/elf.h; one of the ways to fix this is to delete the corresponding line in os_solaris.cpp. Now the source tree is ready for building OpenJDK 12 on Oracle Solaris 11.4, and gmake will build a Java 12 JDK. The result will end up in the directory build/solaris-sparcv9-server-release/jdk/ inside the build directory jdk12-build.

Testing the JDK

Now that you have successfully built Java 12 you might want to test it. The resulting JDK ended up in build/solaris-sparcv9-server-release/jdk/ in your build directory, jdk12-build in our example. Simply execute

oracle@t8-1-001:~/jdk12-build$ ./build/solaris-sparcv9-server-release/jdk/bin/java -version
openjdk version "12-internal" 2019-03-19
OpenJDK Runtime Environment (build 12-internal+0-adhoc..jdk12-build)
OpenJDK 64-Bit Server VM (build 12-internal+0-adhoc..jdk12-build, mixed mode)

to check that your newly built java can do something. Another way would be to compile a "Hello World" Java program:

oracle@t8-1-001:~$ cat HelloWorld.java
public class HelloWorld {
    public static void main(String[] args) {
        // Prints "Hello, World" to the terminal window.
        System.out.println("Hello, World");
    }
}

Compiling and executing it holds no big surprises:

oracle@t8-1-001:~$ ./jdk12-build/build/solaris-sparcv9-server-release/jdk/bin/javac HelloWorld.java
oracle@t8-1-001:~$ ./jdk12-build/build/solaris-sparcv9-server-release/jdk/bin/java HelloWorld
Hello, World
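The EM_486 fix described above amounts to deleting a single line from src/hotspot/os/solaris/os_solaris.cpp. Sketched here on a small stand-in excerpt, using GNU sed syntax (on Solaris itself you may need gsed for the -i option):

```shell
# Create a stand-in excerpt of os_solaris.cpp (the real file is much larger;
# the exact line content differs between OpenJDK revisions)
printf '%s\n' \
  '    case EM_SPARCV9:' \
  '    case EM_486:' \
  '    case EM_AMD64:' > os_solaris_excerpt.cpp

# Delete the line that references the EM_486 constant removed from sys/elf.h
sed -i '/EM_486/d' os_solaris_excerpt.cpp

# The excerpt no longer mentions EM_486
cat os_solaris_excerpt.cpp
```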



Update on Oracle Java on Oracle Solaris

With the recent changes to the Oracle Java SE 8 licensing and support, which also affect the Oracle Java SE that is shipped with Oracle Solaris, we've been getting some questions. So to help clear things up we decided to write a short blog about this and other questions we get around running Java on Oracle Solaris.

The key thing with regards to Oracle Java SE licensing and support is that starting in January 2019 any commercial/production use of Oracle Java will require a commercial license, even if the Oracle Java being used was shipped as part of an Operating System like Oracle Solaris (see MOS Doc. ID 2370065.1). Now the reality is that a large majority of the use of Oracle Java SE is with applications like Oracle WebLogic Server, and the licenses and support on these not only cover this commercial use, they often come with their own specific Oracle Java install to ensure all the patch levels are in sync. In that case they're not even using the Oracle Java that's bundled with Oracle Solaris, so for them nothing really changed.

However, those customers that are using the Oracle Java SE that ships with Oracle Solaris, for example because they like how easily it installs and updates, will now need to pay attention to their licenses. From January 2019 onward, the Oracle Solaris license only covers the commercial use of the Oracle Java it ships for Oracle Solaris components that use Oracle Java, that is to say the applications that ship as part of Oracle Solaris, so for its own use. Any other use is permitted as long as the right license (and support) is acquired. Again, not too many folks will probably be impacted, because for example many customers that used to use Oracle Java SE for things like system agents have since moved to a language like Python for this task, but it is still important to understand and check.
Another question that we've been getting is around which Java releases we include with Oracle Solaris. We're focused on Long-Term-Support (LTS) releases of Java for maximum investment protection, consistency, and stability. This article has a good explanation of Oracle's plans for LTS and non-LTS releases. Oracle Solaris is a production platform where the focus is on running the LTS releases of Oracle Java SE, such as Oracle Java SE 8. As this release is fully supported and sufficient for use by Oracle Solaris components, it is used as the system default version. The reality is that many customers have their production applications built on Oracle Java SE 8 and are only now starting to look at moving to Oracle Java SE 11 for their new applications, which means they will probably still be running Oracle Java SE 8 for a long time. The good news is that the Oracle Java SE 8 release will be supported with Oracle Premier Support at least until 2022, and Oracle Java SE 11 will be supported even longer.

By the way, for those customers that would like to try or use Java 12 on Oracle Solaris, this is still possible of course if you build it yourself with OpenJDK. A pretty straightforward task, as outlined in Martin Müller's excellent blog about how to do this. The benefit here is that you're much more in control of which exact version you compile and use, which is sometimes very important when using non-LTS releases. Similarly, of course, you could build the OpenJDK version of Java 9 or 10 if you'd still be interested. More information with regards to the Oracle Java SE releases can be found in this nice FAQ blog by Sharat Chander.


Announcing Oracle Solaris 11.4 SRU 6

Today we are releasing SRU 6 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1. This SRU introduces the following enhancements:

- Analytics Database Sheet - This allows you to use the Oracle Solaris System Web Interface to view a high-level overview of Oracle Database performance and problems on that system. Further details can be found at https://docs.oracle.com/cd/E37838_01/html/E56520/dbsheet.html.
- HMP has been updated to 2.4.5
- RabbitMQ has been updated to 3.7.8 and includes hex 0.18.2 and elixir 1.7.3
- Erlang has been updated to 21.0
- Samba has been updated to 4.9.3
- tracker has been updated to 1.12.0
- irssi has been updated to 1.1.2
- Explorer 19.1 is now available
- The Java 8 and Java 7 packages have been updated.

The following customer associated bugs in Core OS have been fixed:

- nxe panics with lm_hw_attn.c (1400): Condition Failed!
- Dtrace %Y format should allow decimal places
- mpathadm Access State not consistent with RTPG command response from NetApp
- Aborted NDMP session lingers, preventing dataset destruction
- logadm default still allows system memory to fill up
- Avoiding shell execution is too aggressive
- kclient fails to properly interact with Active Directory Forests
- svc:/network/ipf2pf:default SMF goes into maintenance state after S11.4 OS boots

The following components have also been updated to address security issues:

- Firefox has been updated to 60.5.0 ESR
- perl has been updated to 5.26.3 & 5.22
- ruby has been updated to 2.3.8
- libtiff has been updated to 4.0.10
- LFTP has been updated to 4.8.4
- mod_jk has been updated to 1.2.46
- Python 2.7 has been updated to 2.7.15
- git has been updated to 2.19.2
- Samba has been updated to 4.9.3
- tracker has been updated to 1.12.0
- OpenSSH has been updated to 7.7p1
- Wireshark has been updated to 2.6.6

Full details of this SRU can be found in My Oracle Support Doc 2507241.1.
For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).
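For a system already on Oracle Solaris 11.4 and pointed at the support repository, picking up the SRU is a single pkg update (the exact entire@ version string for SRU 6 is an assumption here; the SRU README on MOS has the authoritative one):

```shell
# Update everything to the latest SRU available in the configured repository
pkg update

# Or pin the update to a specific SRU by constraining the 'entire' package
# (version string assumed; see the SRU README)
pkg update entire@11.4-11.4.6.0.1.4.0
```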



Customize network configuration in a Golden Image

Many of our customers use golden images of Solaris, with fully installed and configured applications, to deploy their production systems. They create a golden image by capturing a snapshot of their staging system's files in a Solaris Unified Archive. Solaris normally clears the staging system's network configuration during golden image installation. This makes sense given that each production system will have a different IP address and often resides in a different site with a unique subnet prefix and default route. There are times, however, when you might want parts of your network configuration to survive golden image deployment. Now that Solaris 11.4 stores its persistent network configuration in SMF, you can do this by creating custom installation packages to establish the network configuration settings you wish to preserve.

In a previous blog, I showed how I used the Automated Installer to replicate my SPARC staging system's network configuration on other systems. I simplified the effort by using SMF to extract the working configuration in SC-profile format. In this blog, I'll start with the staging system as it was previously configured and will use the same SC-profiles I generated for that blog. I'll show how I packaged and installed one of the SC-profiles so that the common part of my network configuration was able to survive golden image deployment. The process I'm using works exactly the same on X86 systems.

Do the following on the staging system

I split the staging system's configuration in my previous blog into three SC-profile files. One of these, the labnet-profile.xml file, set up a pair of high speed ethernet links configured to support 9000 byte jumbo frames in a link aggregation. This is the part of the network configuration information I'll be preserving in my golden image.
root@headbanger:~# dladm LINK CLASS MTU STATE OVER aggr0 aggr 9000 up net0 net1 net0 phys 9000 up -- net1 phys 9000 up -- net2 phys 1500 up -- net3 phys 1500 up -- sp-phys0 phys 1500 up -- Place the network configuration profile in a directory from which to build a package. root@headbanger:~# mkdir proto root@headbanger:~# cp labnet-profile.xml proto/. Generate the initial package manifest from the files under the proto directory. root@headbanger:~# pkgsend generate proto | \ pkgfmt > labnet-profile.p5m Use a text editor to add descriptive package metadata to the manifest, adjust the destination directory of the SC-profile file, and add a package actuator to automatically import the SC-profile upon installation. This example places the SC-profile file in etc/svc/profile/node/ which will cause SMF to import the profile into its node-profile layer. The changes I made are shown in blue. labnet-profile.p5m set name=pkg.fmri value=labnet-profile@1.0 set name=pkg.summary value="lab net configuration" set name=pkg.description \ value="My sample lab system net configuration" file labnet-profile.xml \ path=etc/svc/profile/node/labnet-profile.xml \ owner=root group=bin mode=0644 \ restart_fmri=svc:/system/manifest-import:default Build the new package in a temporary package repository. root@headbanger:~# pkgrepo create /tmp/lab-repo root@headbanger:~# pkgrepo -s /tmp/lab-repo set publisher/prefix=lab root@headbanger:~# pkgsend -s /tmp/lab-repo publish \ -d proto labnet-profile.p5m Create a package archive from the contents of the package repository. root@headbanger:~# pkgrecv -s /tmp/lab-repo \ -a -d lab.p5p labnet-profile The package I created can be installed directly from this archive. The archive can also be easily copied and installed on other systems. This particular package could even be installed directly on an X86 staging system. Install the package just created on the staging system. 
root@headbanger:~# pkg set-publisher -g lab.p5p lab root@headbanger:~# pkg install labnet-profile Complete development and testing of the staging system. Create the unified archive containing the golden image. Note that the staging system's network interfaces must be running with access to the site's package servers when creating this golden image. root@headbanger:~# archiveadm create /var/tmp/labsys.uar Copy the golden image from the staging system to a distribution site. I used "http://example-ai.example.com/datapool/labsys.uar" for this example. Do the following on the AI server Solaris provides multiple options for deploying a unified archive on other systems. I chose to deploy my golden image from my lab's install server onto node beatnik. Create an installation manifest that points to the golden image. netclone-manifest.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="netclone" auto_reboot="true"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="ARCHIVE"> <source> <file uri="http://example-ai.example.com/datapool/labsys.uar"/> </source> <software_data action="install"> <name>global</name> </software_data> </software> </ai_instance> </auto_install> Create the install service using A Solaris install image that is based the version of Solaris installed on the golden image, The manifest created here, The base SC-profile from my previous blog, and The beatnik-specific SC-profile from my previous blog. Note that there is no need to create a profile with labnet-profile.xml. That configuration information will be installed with the unified archive. I used node beatnik for my production server example. 
# installadm create-service -y -n netclone-sparc \
    -a sparc \
    -p solaris=http://pkg.oracle.com/solaris/release/ \
    -s install-image/solaris-auto-install@latest
# installadm create-manifest -n netclone-sparc \
    -d -m netclone-manifest \
    -f netclone-manifest.xml
# installadm create-profile -n netclone-sparc \
    -p base-profile \
    -f base-profile.xml
# installadm create-client -n netclone-sparc \
    -e 0:0:5e:0:53:24
# installadm create-profile -n netclone-sparc \
    -p beatnik-profile \
    -f beatnik-profile.xml \
    -c mac=0:0:5e:0:53:24

Do the following on the production servers

Log into the console and start the installation.

{0} ok boot net:dhcp - install

Log in after installation completes. The dladm command output verifies that the definition of the aggr link from the unified archive was preserved.

root@beatnik:~# dladm
LINK CLASS MTU STATE OVER
aggr0 aggr 9000 up net0 net1
net0 phys 9000 up --
net1 phys 9000 up --
net2 phys 1500 up --
net3 phys 1500 up --
sp-phys0 phys 1500 up --

The svccfg listprop command shown here indicates, in the third column, which SMF database layers the different parts of beatnik's datalink configuration come from.
root@beatnik:~# svccfg -s datalink-management:default listprop -l all datalinks datalinks application sysconfig-profile datalinks application node-profile datalinks application manifest datalinks/aggr0 datalink-aggr sysconfig-profile datalinks/aggr0 datalink-aggr node-profile datalinks/aggr0/aggr-mode astring node-profile dlmp datalinks/aggr0/force boolean node-profile false datalinks/aggr0/key count node-profile 0 datalinks/aggr0/lacp-mode astring node-profile off datalinks/aggr0/lacp-timer astring node-profile short datalinks/aggr0/media astring node-profile Ethernet datalinks/aggr0/num-ports count node-profile 2 datalinks/aggr0/policy astring node-profile L4 datalinks/aggr0/ports astring node-profile "net0" "net1" datalinks/aggr0/probe-ip astring sysconfig-profile 198.51.100.64+ 198.51.100.1 datalinks/net0 datalink-phys node-profile datalinks/net0/devname astring admin i40e0 datalinks/net0/loc astring admin /SYS/MB datalinks/net0/media astring admin Ethernet datalinks/net0/mtu count admin 9000 datalinks/net0/mtu count node-profile 9000 datalinks/net1 datalink-phys node-profile datalinks/net1/devname astring admin i40e1 datalinks/net1/loc astring admin /SYS/MB datalinks/net1/media astring admin Ethernet datalinks/net1/mtu count admin 9000 datalinks/net1/mtu count node-profile 9000 datalinks/net2 datalink-phys admin datalinks/net2/devname astring admin i40e2 datalinks/net2/loc astring admin /SYS/MB datalinks/net2/media astring admin Ethernet datalinks/net3 datalink-phys admin datalinks/net3/devname astring admin i40e3 datalinks/net3/loc astring admin /SYS/MB datalinks/net3/media astring admin Ethernet datalinks/sp-phys0 datalink-phys admin datalinks/sp-phys0/devname astring admin usbecm2 datalinks/sp-phys0/media astring admin Ethernet In this case, the layers correspond to the following sources. I defined it this way to show how various parts of the configuration from different sources fit together. 
manifest - from Solaris service manifests
node-profile - from my package in the golden image
sysconfig-profile - from profiles defined on the AI server
admin - automatically generated on first boot
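Since listprop's -l option accepts a layer name as well as "all", you can limit the output to a single layer from the listing, e.g. (a sketch based on the layer names shown above):

```shell
# Show only the properties contributed by the golden image's package,
# i.e. the node-profile layer of the aggr0 datalink configuration
svccfg -s datalink-management:default listprop -l node-profile datalinks/aggr0
```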



Customize Network Configuration with Solaris Automated Installs

The first item in Solaris 11.4's list of new networking features was "Migration of persistent network configuration to SMF". Does it really matter where the network configuration is stored? Absolutely! Solaris is often installed in large data centers where manual installation and configuration of each individual system is not practical. Solaris 11's Automated Installer along with Solaris' System Management Facility (SMF) were developed to make installation and configuration of multiple systems significantly easier. Having Solaris network configuration stored outside SMF left a significant gap in the configurations these tools supported.

When using the Automated Installer, you define system configurations in System Configuration (SC) profile files. SMF loads these SC-profiles into its database during first boot of a freshly installed system. The network/install service, which existed before Solaris 11.4, initializes the system's network configuration from its SMF-based configuration parameters. This service only supports creation of IP addresses over the system's existing physical interfaces. There are a great many other Solaris network features commonly used in data centers, such as jumbo frames, link aggregation, vlans, and flows, that cannot be configured by the network/install service. Now that the dladm(8), flowadm(8), ipadm(8), netcfg(8), and route(8) utilities store their persistent information in SMF, you can directly configure network services in SC-profiles to handle anything these utilities support.

SC-profiles use an XML based format to define service configuration information. If you're like me, you would rather avoid the effort it takes to learn the detailed syntax rules for network configuration. Fortunately, you don't have to. I find an easier approach is to use the network utilities to set up the configuration I want on a live staging system.
I use the SMF's svccfg(8) utility to capture the running configuration in XML format, then create my SC-profiles directly from the svccfg generated files. The instructions below show how I built SC-profiles to automatically install SPARC systems with a pair of high speed ethernet links supporting 9000 byte jumbo frames in a link aggregation. The process works exactly the same on X86 systems. Step 1: Install staging system without external interfaces configured I started by installing the staging system with no external network interfaces configured. Solaris automatically configures the following on first boot after the install completes. dladm link name to physical device mappings loopback IP interface (lo0) IP interface to the service processor (sp-phys) These details could cause unwanted problems if they were included in my final SC-profile. For example, physical device names are unlikely to be the same on all systems where I expect to apply my custom network configuration. I'll be capturing this initial network configuration without my changes to identify what parts of the full configuration I should remove later. Do the following on the AI Server. Use a text editor to create the AI manifest file. The manifest in this example installs the large server group from a solaris repository. Replace "http://pkg.oracle.com/solaris/release/" with a local copy of the solaris repository if one is available. Feel free to list any other packages desired. 
base-manifest.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="lab" auto_reboot="true"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="IPS"> <source> <publisher name="solaris"> <origin name="http://pkg.oracle.com/solaris/release/"/> </publisher> </source> <software_data action="install"> <name>pkg:/entire@latest</name> <name>pkg:/group/system/solaris-large-server</name> </software_data> </software> </ai_instance> </auto_install> Use sysconfig(8) to generate the initial SC-profile file with login, language, and timezone information you normally used at the site. Be sure to select "No network" on the initial Network Configuration screen. This example used the Computer Name "headbanger" for the example staging system. # sysconfig create-profile -o headbanger (fill in sysconfig's on-screen menus) # mv headbanger/sc_profile.xml headbanger/base-profile.xml Use installadm(8) to configure the install server. The staging system in this example is a SPARC system with MAC address 0:0:5e:0:53:d2. # installadm create-service -y -n nettest-sparc \ -a sparc \ -p solaris=http://pkg.oracle.com/solaris/release/ \ -s install-image/solaris-auto-install@latest # installadm create-manifest -n nettest-sparc \ -d -m base-manifest \ -f base-manifest.xml # installadm create-profile -n nettest-sparc \ -p base-profile \ -f base-profile.xml # installadm create-client -n nettest-sparc \ -e 0:0:5e:0:53:d2 Do the following on the staging system. Log into the system's service processor, connect to the system console, and start a network boot to launch the installation. 
-> start /SP/console
(halt operating system if currently running)
{0} ok boot net:dhcp - install

When first boot completes, verify no external interfaces have been configured yet.

root@headbanger:~# dladm
LINK CLASS MTU STATE OVER
net0 phys 1500 up --
net1 phys 1500 up --
net2 phys 1500 up --
net3 phys 1500 up --
sp-phys0 phys 1500 up --
root@headbanger:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
sp-phys0 ip ok -- --
sp-phys0/v4 static ok -- 203.0.113.77/24

Capture the initial first-boot configuration in SC-profile format for later reference.

root@headbanger:~# svccfg extract -l admin \
    network/datalink-management:default \
    > initial-profile.xml
root@headbanger:~# svccfg extract -l admin \
    network/ip-interface-management:default \
    >> initial-profile.xml

Step 2: Configure network on staging system using Solaris utilities

Use Solaris network configuration utilities to create the desired configuration.

root@headbanger:~# dladm set-linkprop -p mtu=9000 net0
root@headbanger:~# dladm set-linkprop -p mtu=9000 net1
root@headbanger:~# dladm create-aggr -m dlmp -l net0 -l net1 aggr0
root@headbanger:~# ipadm create-ip aggr0
root@headbanger:~# ipadm create-addr -a 198.51.100.63/24 aggr0 aggr0/v4
root@headbanger:~# ipadm create-addr -T addrconf \
    -p stateless=no,stateful=no aggr0 aggr0/v6
root@headbanger:~# ipadm create-addr \
    -a 2001:db8:414:60bc::3f/64 aggr0 aggr0/v6a
root@headbanger:~# route -p add default 198.51.100.1
add net default: gateway 198.51.100.1
root@headbanger:~# dladm set-linkprop \
    -p probe-ip=198.51.100.63+198.51.100.1 aggr0

Verify the configuration was defined as intended.
root@headbanger:~# dladm LINK CLASS MTU STATE OVER aggr0 aggr 9000 up net0 net1 net0 phys 9000 up -- net1 phys 9000 up -- net2 phys 1500 up -- net3 phys 1500 up -- sp-phys0 phys 1500 up -- root@headbanger:~# dladm show-linkprop -p mtu LINK PROPERTY PERM VALUE EFFECTIVE DEFAULT POSSIBLE aggr0 mtu rw 9000 9000 1500 576-9706 net0 mtu rw 9000 9000 1500 576-9706 net1 mtu rw 9000 9000 1500 576-9706 net2 mtu rw 1500 1500 1500 576-9706 net3 mtu rw 1500 1500 1500 576-9706 sp-phys0 mtu rw 1500 1500 1500 1500 root@headbanger:~# dladm show-aggr -Sn LINK PORT FLAGS STATE TARGETS XTARGETS aggr0 net0 u--3 active 198.51.100.1 net1 -- net1 u-2- active -- net0 root@headbanger:~# ipadm NAME CLASS/TYPE STATE UNDER ADDR aggr0 ip ok -- -- aggr0/v4 static ok -- 198.51.100.63/24 aggr0/v6 addrconf ok -- fe80::8:20ff:fe67:6164/10 aggr0/v6a static ok -- 2001:db8:414:60bc::3f/64 lo0 loopback ok -- -- lo0/v4 static ok -- 127.0.0.1/8 lo0/v6 static ok -- ::1/128 sp-phys0 ip ok -- -- sp-phys0/v4 static ok -- 203.0.113.77/24 root@headbanger:~# netstat -nr Routing Table: IPv4 Destination Gateway Flags Ref Use Interface -------------------- -------------------- ----- ----- ---------- --------- default 198.51.100.1 UG 2 31 198.51.100.0 198.51.100.63 U 3 73854 aggr0 127.0.0.1 127.0.0.1 UH 4 68 lo0 203.0.113.0 203.0.113.77 U 3 131889 sp-phys0 Routing Table: IPv6 Destination/Mask Gateway Flags Ref Use If ------------------------- --------------------------- ----- --- ------- ----- ::1 ::1 UH 2 24 lo0 2001:db8:414:60bc::/64 -- U 2 0 aggr0 2001:db8:414:60bc::/64 2001:db8:414:60bc::3f U 2 0 aggr0 fe80::/10 fe80::8:20ff:fe67:6164 U 3 0 aggr0 default fe80::200:5eff:fe00:530c UG 2 0 aggr0 Reboot the staging system and retest to ensure the configuration was persistently applied. Step 3: Capture the custom network configuration from SMF Use the SMF svccfg(8) utility to extract the current configuration from the modified network services. 
root@headbanger:~# svccfg extract -l admin \
    network/datalink-management:default \
    >> labnet-profile.xml
root@headbanger:~# svccfg extract -l admin \
    network/ip-interface-management:default \
    >> labnet-profile.xml

Edit the labnet-profile.xml file to:

- Delete replicated service bundle overhead between the extracted information of each service.
- Delete entries that exist in the initial-profile.xml file saved previously in Step 1.

labnet-profile.xml

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='labnet-profile'>
  <service name='network/datalink-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='datalinks' type='application'>
        <property_group name='aggr0' type='datalink-aggr'>
          <propval name='aggr-mode' type='astring' value='dlmp'/>
          <propval name='force' type='boolean' value='false'/>
          <propval name='key' type='count' value='0'/>
          <propval name='lacp-mode' type='astring' value='off'/>
          <propval name='lacp-timer' type='astring' value='short'/>
          <propval name='media' type='astring' value='Ethernet'/>
          <propval name='num-ports' type='count' value='2'/>
          <propval name='policy' type='astring' value='L4'/>
          <propval name='probe-ip' type='astring' value='198.51.100.63+198.51.100.1'/>
          <property name='ports' type='astring'>
            <astring_list>
              <value_node value='net0'/>
              <value_node value='net1'/>
            </astring_list>
          </property>
        </property_group>
        <property_group name='net0' type='datalink-phys'>
          <propval name='devname' type='astring' value='i40e0'/>
          <propval name='loc' type='astring' value='/SYS/MB'/>
          <propval name='media' type='astring' value='Ethernet'/>
          <propval name='mtu' type='count' value='9000'/>
        </property_group>
        <property_group name='net1' type='datalink-phys'>
          <propval name='devname' type='astring' value='i40e1'/>
          <propval name='loc' type='astring' value='/SYS/MB'/>
          <propval name='media' type='astring' value='Ethernet'/>
          <propval name='mtu' type='count' value='9000'/>
        </property_group>
        <property_group name='net2' type='datalink-phys'>
          <propval name='devname' type='astring' value='i40e2'/>
          <propval name='loc' type='astring' value='/SYS/MB'/>
          <propval name='media' type='astring' value='Ethernet'/>
        </property_group>
        <property_group name='net3' type='datalink-phys'>
          <propval name='devname' type='astring' value='i40e3'/>
          <propval name='loc' type='astring' value='/SYS/MB'/>
          <propval name='media' type='astring' value='Ethernet'/>
        </property_group>
        <property_group name='sp-phys0' type='datalink-phys'>
          <propval name='devname' type='astring' value='usbecm2'/>
          <propval name='media' type='astring' value='Ethernet'/>
        </property_group>
      </property_group>
      <property_group name='linkname-policy' type='application'>
        <propval name='initialized' type='boolean' value='true'/>
      </property_group>
    </instance>
  </service>
</service_bundle>
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='extract'>
  <service name='network/ip-interface-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='interfaces' type='application'>
        <property_group name='aggr0' type='interface-ip'>
          <property name='address-family' type='astring'>
            <astring_list>
              <value_node value='ipv4'/>
              <value_node value='ipv6'/>
            </astring_list>
          </property>
          <property_group name='v4' type='address-static'>
            <propval name='ipv4-address' type='astring' value='198.51.100.63'/>
            <propval name='prefixlen' type='count' value='24'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
          <property_group name='v6' type='address-addrconf'>
            <propval name='interface-id' type='astring' value='::'/>
            <propval name='prefixlen' type='count' value='0'/>
            <propval name='stateful' type='astring' value='no'/>
            <propval name='stateless' type='astring' value='no'/>
          </property_group>
          <property_group name='v6a' type='address-static'>
            <propval name='ipv6-address' type='astring' value='2001:db8:414:60bc::3f'/>
            <propval name='prefixlen' type='count' value='64'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
        </property_group>
        <property_group name='lo0' type='interface-loopback'>
          <property name='address-family' type='astring'>
            <astring_list>
              <value_node value='ipv4'/>
              <value_node value='ipv6'/>
            </astring_list>
          </property>
          <property_group name='v4' type='address-static'>
            <propval name='ipv4-address' type='astring' value='127.0.0.1'/>
            <propval name='prefixlen' type='count' value='8'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
          <property_group name='v6' type='address-static'>
            <propval name='ipv6-address' type='astring' value='::1'/>
            <propval name='prefixlen' type='count' value='128'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
        </property_group>
        <property_group name='sp-phys0' type='interface-ip'>
          <property name='address-family' type='astring'>
            <astring_list>
              <value_node value='ipv4'/>
              <value_node value='ipv6'/>
            </astring_list>
          </property>
          <property_group name='v4' type='address-static'>
            <propval name='ipv4-address' type='astring' value='203.0.113.77'/>
            <propval name='prefixlen' type='count' value='24'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
        </property_group>
      </property_group>
      <property_group name='ipmgmtd' type='application'>
        <propval name='datastore_version' type='integer' value='2047'/>
      </property_group>
      <property_group name='static-routes' type='application'>
        <property_group name='route-1' type='static-route'>
          <propval name='destination' type='astring' value='default'/>
          <propval name='family' type='astring' value='inet'/>
          <propval name='gateway' type='astring' value='198.51.100.1'/>
        </property_group>
      </property_group>
    </instance>
  </service>
</service_bundle>

Step 4: Reinstall staging system with full network configuration

The combined contents of base-profile.xml from Step 1 and labnet-profile.xml from Step 3 could be used as the SC-profile for autoconfiguring headbanger on a fresh install. However, my goal is to easily replicate this configuration to many systems. I made replication easier by moving system-specific details from these two files to a smaller system-specific SC-profile file. This is all just cutting and pasting existing XML text blocks.

headbanger-profile.xml

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='headbanger-profile'>
  <service name='system/identity' type='service' version='0'>
    <instance name='node' enabled='true'>
      <property_group name='config' type='application'>
        <propval name='nodename' type='astring' value='headbanger'/>
      </property_group>
    </instance>
  </service>
  <service name='network/datalink-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='datalinks' type='application'>
        <property_group name='aggr0' type='datalink-aggr'>
          <propval name='probe-ip' type='astring' value='198.51.100.63+198.51.100.1'/>
        </property_group>
      </property_group>
    </instance>
  </service>
  <service name='network/ip-interface-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='interfaces' type='application'>
        <property_group name='aggr0' type='interface-ip'>
          <property_group name='v4' type='address-static'>
            <propval name='ipv4-address' type='astring' value='198.51.100.63'/>
            <propval name='prefixlen' type='count' value='24'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
          <property_group name='v6a' type='address-static'>
            <propval name='ipv6-address' type='astring' value='2001:db8:414:60bc::3f'/>
            <propval name='prefixlen' type='count' value='64'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
        </property_group>
      </property_group>
      <property_group name='static-routes' type='application'>
        <property_group name='route-1' type='static-route'>
          <propval name='destination' type='astring' value='default'/>
          <propval name='family' type='astring' value='inet'/>
          <propval name='gateway' type='astring' value='198.51.100.1'/>
        </property_group>
      </property_group>
    </instance>
  </service>
</service_bundle>

The two original SC-profile files now contain:

base-profile.xml

<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/config-user">
    <instance enabled="true" name="default">
      <property_group type="application" name="root_account">
        <propval type="astring" name="login" value="root"/>
        <propval type="astring" name="password" value="$5$rounds=10000$HJbjRmpw$JehxKTyBZxAbZmq0HN7gi367n8b4tcs1GkdgzaRbjk6"/>
        <propval type="astring" name="type" value="normal"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/timezone">
    <instance enabled="true" name="default">
      <property_group type="application" name="timezone">
        <propval type="astring" name="localtime" value="US/Eastern"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/environment">
    <instance enabled="true" name="init">
      <property_group type="application" name="environment">
        <propval type="astring" name="LANG" value="C"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/keymap">
    <instance enabled="true" name="default">
      <property_group type="system" name="keymap">
        <propval type="astring" name="layout" value="US-English"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/console-login">
    <instance enabled="true" name="default">
      <property_group type="application" name="ttymon">
        <propval type="astring" name="terminal_type" value="sun-color"/>
      </property_group>
    </instance>
  </service>
</service_bundle>

labnet-profile.xml

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='labnet-profile'>
  <service name='network/datalink-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='datalinks' type='application'>
        <property_group name='aggr0' type='datalink-aggr'>
          <propval name='aggr-mode' type='astring' value='dlmp'/>
          <propval name='force' type='boolean' value='false'/>
          <propval name='key' type='count' value='0'/>
          <propval name='lacp-mode' type='astring' value='off'/>
          <propval name='lacp-timer' type='astring' value='short'/>
          <propval name='media' type='astring' value='Ethernet'/>
          <propval name='num-ports' type='count' value='2'/>
          <propval name='policy' type='astring' value='L4'/>
          <property name='ports' type='astring'>
            <astring_list>
              <value_node value='net0'/>
              <value_node value='net1'/>
            </astring_list>
          </property>
        </property_group>
        <property_group name='net0' type='datalink-phys'>
          <propval name='mtu' type='count' value='9000'/>
        </property_group>
        <property_group name='net1' type='datalink-phys'>
          <propval name='mtu' type='count' value='9000'/>
        </property_group>
      </property_group>
    </instance>
  </service>
  <service name='network/ip-interface-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='interfaces' type='application'>
        <property_group name='aggr0' type='interface-ip'>
          <property name='address-family' type='astring'>
            <astring_list>
              <value_node value='ipv4'/>
              <value_node value='ipv6'/>
            </astring_list>
          </property>
          <property_group name='v6' type='address-addrconf'>
            <propval name='interface-id' type='astring' value='::'/>
            <propval name='prefixlen' type='count' value='0'/>
            <propval name='stateful' type='astring' value='no'/>
            <propval name='stateless' type='astring' value='no'/>
          </property_group>
        </property_group>
      </property_group>
    </instance>
  </service>
</service_bundle>

Do the following on the AI server.
Copy the three profile files to the AI server, then use installadm(8) to apply them to the original nettest-sparc configuration. installadm's ability to apply multiple SC-profiles to a single service, selected by client criteria, is extremely handy for this situation.

# installadm update-profile -n nettest-sparc \
    -p base-profile \
    -f base-profile.xml
# installadm create-profile -n nettest-sparc \
    -p labnet-profile \
    -f labnet-profile.xml
# installadm create-profile -n nettest-sparc \
    -p headbanger-profile \
    -f headbanger-profile.xml \
    -c mac=0:0:5e:0:53:d2

Do the following on the staging system.

Log into the staging system's console and reinstall.

root@headbanger:~# halt
{0} ok boot net:dhcp - install

This repeats the installation of the staging system as before, but with the full network configuration automatically applied on first boot. After installation is complete, log in and verify the system is configured as intended.

Step 5: Repeat installation on other systems

Duplicating the network configuration on other systems is relatively easy now. In this example, I will recreate the configuration on node "beatnik".

Do the following on the AI server.

Start by copying the headbanger-profile.xml from Step 4 to beatnik-profile.xml, then use an editor to provide beatnik's unique configuration details (the nodename, probe-ip, IPv4 address, and IPv6 address values below).
beatnik-profile.xml

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='beatnik-profile'>
  <service name='system/identity' type='service' version='0'>
    <instance name='node' enabled='true'>
      <property_group name='config' type='application'>
        <propval name='nodename' type='astring' value='beatnik'/>
      </property_group>
    </instance>
  </service>
  <service name='network/datalink-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='datalinks' type='application'>
        <property_group name='aggr0' type='datalink-aggr'>
          <propval name='probe-ip' type='astring' value='198.51.100.64+198.51.100.1'/>
        </property_group>
      </property_group>
    </instance>
  </service>
  <service name='network/ip-interface-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='interfaces' type='application'>
        <property_group name='aggr0' type='interface-ip'>
          <property_group name='v4' type='address-static'>
            <propval name='ipv4-address' type='astring' value='198.51.100.64'/>
            <propval name='prefixlen' type='count' value='24'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
          <property_group name='v6a' type='address-static'>
            <propval name='ipv6-address' type='astring' value='2001:db8:414:60bc::40'/>
            <propval name='prefixlen' type='count' value='64'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
        </property_group>
      </property_group>
      <property_group name='static-routes' type='application'>
        <property_group name='route-1' type='static-route'>
          <propval name='destination' type='astring' value='default'/>
          <propval name='family' type='astring' value='inet'/>
          <propval name='gateway' type='astring' value='198.51.100.1'/>
        </property_group>
      </property_group>
    </instance>
  </service>
</service_bundle>

Use installadm(8) on the install server to load beatnik's installation details.
# installadm create-client -n nettest-sparc \
    -e 0:0:5e:0:53:24
# installadm create-profile -n nettest-sparc \
    -p beatnik-profile \
    -f beatnik-profile.xml \
    -c mac=0:0:5e:0:53:24

The following installadm list command shows how selection criteria are used to identify which SC-profile files are applied during the installation of each system.

# installadm list -p -n nettest-sparc
Service Name   Profile Name        Environment  Criteria
------------   ------------        -----------  --------
nettest-sparc  base-profile        install      none
                                   system
               labnet-profile      install      none
                                   system
               headbanger-profile  install      mac=0:0:5e:0:53:d2
                                   system
               beatnik-profile     install      mac=0:0:5e:0:53:24
                                   system

Do the following on other systems targeted for the same configuration.

Log into the console and start the installation.

{0} ok boot net:dhcp - install

The full network configuration will automatically be applied on first boot of beatnik, just as it was on headbanger.
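Copying and hand-editing the profile works, but the per-host differences are only a few values (nodename, probe-ip, IPv4 and IPv6 addresses), so the cloning step can also be scripted. Here is a minimal, hypothetical sketch using sed over a trimmed fragment of headbanger-profile.xml; the file names and substituted values follow the examples in this post, and you would adjust them per host:

```shell
# Trimmed fragment standing in for the full headbanger-profile.xml
cat > headbanger-profile.xml <<'EOF'
<propval name='nodename' type='astring' value='headbanger'/>
<propval name='probe-ip' type='astring' value='198.51.100.63+198.51.100.1'/>
<propval name='ipv4-address' type='astring' value='198.51.100.63'/>
<propval name='ipv6-address' type='astring' value='2001:db8:414:60bc::3f'/>
EOF

# Stamp out beatnik's profile by substituting the host-specific values.
sed -e "s/headbanger/beatnik/" \
    -e "s/198\.51\.100\.63/198.51.100.64/g" \
    -e "s/2001:db8:414:60bc::3f/2001:db8:414:60bc::40/" \
    headbanger-profile.xml > beatnik-profile.xml

cat beatnik-profile.xml
```

The same loop could be driven from a host/address table to generate one SC-profile per node before registering them with installadm create-profile.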


Migrate Oracle Solaris 10 Physical Systems with ZFS root to Guest Domains in 3 Easy Steps

We are happy to announce that we now offer a new tool called ldmp2vz(1M) that can migrate Oracle Solaris 10 source systems with ZFS root. This tool has two commands: ldmp2vz_collect(1M) and ldmp2vz_convert(1M). These two commands enable migration of a sun4u or sun4v Oracle Solaris 10 physical machine to an Oracle Solaris 10 guest domain on a system running Oracle Solaris 11 in the control domain, as shown in this diagram.

Migration of the system is accomplished in three easy steps.

1. Collection: Run ldmp2vz_collect(1M) on the source machine to collect the system image and configuration data. The resulting image file can be transmitted to the target system's control domain or stored on an NFS server that is available to the target machine.

2. Preparation: Create a new Oracle Solaris 10 guest domain on the SPARC target machine that is running Oracle Solaris 11 in the control domain, using any supported method. You can use ovmtdeploy(8), Jumpstart, or an install from the Oracle Solaris DVD.

3. Conversion: Run ldmp2vz_convert(1M) inside the newly created Oracle Solaris 10 guest domain on the target machine. This command uses the system image from the source machine and restores the root file system and configuration data using Live Upgrade technology. This process also replaces sun4u packages with the corresponding sun4v versions if needed.

The Oracle Solaris 10 guest domain on the target machine is now ready for application and data migration. We documented a comprehensive end-to-end migration use case with an Oracle Solaris 10 source system that has a Database zone running Oracle Single Instance Database 12.1.0.2 and a Web zone running the Apache web server. Application data on the source system is stored on ZFS, UFS, and ASM raw disks, and the document explains the process for transferring the data from the source machine to the Oracle Solaris 10 guest domain on the target system.
The ldmp2vz(1M) tool is available in Oracle VM Server for SPARC 3.2 Patch 151934-06. The migration use case document, titled Lift and Shift Guide – Migrating Workloads from Oracle Solaris 10 (ZFS) SPARC Systems to Oracle Solaris 10 Guest Domains, is available in the Lift and Shift Documentation Library.


Oracle Solaris Cluster

Cluster File System with ZFS: General Deployment Questions

See also: Cluster File System with ZFS: Introduction and Configuration

- Is there a picture or theory of operation explaining the working model of Cluster File System with ZFS?
- Will this scale? (i.e. to 4 nodes, 40 nodes, 400 nodes, 4000 nodes?)
- Is it safe for diversity? (i.e. 3 datacenters, located in different cities or continents?)
- What is the performance implication when you add a node?
- Can multiple nodes drop dead and the application continue to run without losing data?
- I am trying to determine the use cases for clustering with ZFS... (i.e. is it a "Cloud Solution" where we could lose a datacenter and still have applications continue to run in the remaining datacenters?)

Is there a picture or theory of operation explaining the working model of Cluster File System with ZFS?

Here is one such diagram that explains the working model of Cluster File System, or Proxy File System (PxFS), with ZFS at a high level.

From the diagram above, it can be observed that all the data flow/transactions go through one machine. PxFS is a layer that sits above the real file system. The PxFS client talks to the PxFS server layer on the primary node (whose state is checkpointed to secondary nodes). The PxFS server layer then talks to the ZFS file system, which in turn talks to the physical storage. Note that this is a very high-level architecture with many other components involved. The block labeled "Cluster Communication Subsystem" exists within each node of the cluster.

Will this scale? (i.e. to 4 nodes, 40 nodes, 400 nodes, 4000 nodes?)

See https://docs.oracle.com/cd/E69294_01/html/E69310/bacfbadd.html#CLCONbacbbbbb

Excerpts from the link above:

Supported Configurations. Depending on your platform, Oracle Solaris Cluster software supports the following configurations:

SPARC: Oracle Solaris Cluster software supports from one to 16 cluster nodes in a cluster.
Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of SPARC-based systems. See Oracle Solaris Cluster Topologies in Concepts for Oracle Solaris Cluster 4.4 for the supported configurations.

x86: Oracle Solaris Cluster software supports from one to eight cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of x86-based systems. See Oracle Solaris Cluster Topologies in Concepts for Oracle Solaris Cluster 4.4 for the supported configurations.

As the diagram in the first answer shows, all client requests go through the PxFS server: the more client nodes there are, the more read/write operations land on the PxFS primary. Scalability is therefore limited by the server's bandwidth.

Is it safe for diversity? (i.e. 3 datacenters, located in different cities or continents?)

Yes. This feature can be used in campus clusters, where the nodes of a cluster are located in different rooms, and with the Solaris Cluster Disaster Recovery Framework, which provides multi-cluster and multi-site capability. A cluster file system only exists within a cluster instance; depending on the distance, the nodes of one cluster instance can be spread between two locations. See https://docs.oracle.com/cd/E69294_01/html/E69325/z40001987941.html#scrolltoc

There is support for replicating ZFS-based cluster file systems between clusters at different geographies, with service availability managed by the disaster recovery framework. See https://docs.oracle.com/cd/E69294_01/html/E69466/index.html for more information.

What is the performance implication when you add a node?

Adding a node has no inherent impact on the cluster file system; the effect depends on the activity being performed on the cluster file system.
If the new node joins the cluster as a secondary, all existing PxFS information for the file systems is checkpointed to it, which affects performance. But if the new node joins when the cluster already has the desired number of secondaries, it is added as a spare, and no existing data from the PxFS primary is checkpointed to it.

Can multiple nodes drop dead and the application continue to run without losing data?

File system access continues across a node failure. File and directory operations completed during a node failure maintain the semantics of a local ZFS file system, and any state change that occurred on a globally mounted cluster file system is preserved after a failover. A multiple-node failure is handled like a single-node failure.

I am trying to determine the use cases for clustering with ZFS... (i.e. is it a "Cloud Solution" where we could lose a datacenter and still have applications continue to run in the remaining datacenters?)

Some of the use cases are:

1) Live migration support for the Solaris Cluster HA-LDOM data service, for which global access is a requirement.
2) Shared file system access for multi-tier applications such as SAP and Oracle E-Business Suite.
3) Shared file systems for multiple instances of the same application deployed in the cluster.
4) ZFS snapshot replication for cluster file systems with ZFS, managed by the Solaris Cluster Disaster Recovery Framework.

As the diagram shows, PxFS is a layer above the underlying file system. If you need multiple nodes to access the same file system and performance is not the main requirement, Solaris Cluster File System is the way to go. If the application has heavy read/write requirements, it is better to use highly available local file systems.


Python Lambda Functions - Quick Notes

Lambda functions:

- are anonymous functions, with no function name but with a function body or definition
- are useful when a small function is expected to be called once or just a few times
- are useful as arguments to a higher-order function that takes other functions as arguments. For example, to filter out some data from a list based on some condition, a lambda function can be more concise than writing a full-fledged function to accomplish the same task.
- are defined using the lambda keyword in Python
- can accept any number of arguments (zero or more)
- can have only one expression, which gets evaluated and returned

Syntax: lambda argument(s): expression

e.g., a trivial example:

    def areaofcircle(radius):
        return math.pi * radius * radius

can be written as:

    circlearea = lambda radius: math.pi * radius * radius

In this example, the lambda function accepts a lone argument, radius; the function evaluates the lambda expression (π · radius²) to calculate the area of a circle and returns the result to the caller. The identifier "circlearea" is assigned the function object that the lambda expression creates, so circlearea can be called like any normal function. e.g., circlearea(5)

Another trivial example converts the first character of each word in a list to uppercase:

    >>> fullname = ["john doe", "richard roe", "janie q"]
    >>> fullname
    ['john doe', 'richard roe', 'janie q']
    >>> list(map(lambda name: name.title(), fullname))
    ['John Doe', 'Richard Roe', 'Janie Q']

(In Python 2, map() returned a list directly; in Python 3 it returns an iterator, hence the list() call.)

Alternative URL: Python Lambda Functions @technopark02.blogspot.com
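The "filter out some data" use case mentioned above can be shown concretely. This short sketch (not from the original post) passes lambdas to two common higher-order functions, filter() and sorted():

```python
nums = [3, 41, 6, 7, 18, 25]

# filter: keep only the elements for which the lambda returns True
evens = list(filter(lambda n: n % 2 == 0, nums))
print(evens)        # [6, 18]

# sorted with a key function: order full names by last name
names = ["john doe", "richard roe", "janie q"]
by_last = sorted(names, key=lambda name: name.split()[-1])
print(by_last)      # ['john doe', 'janie q', 'richard roe']
```

A list comprehension ([n for n in nums if n % 2 == 0]) is often an alternative to filter(), but the key= idiom with sorted(), min(), and max() is where lambdas are hard to beat for brevity.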


Oracle Solaris Cluster support for Oracle E-Business Suite 12.2.8

We are very pleased to announce further support for Oracle E-Business Suite 12.2.8 on the following Oracle Solaris Cluster versions:

- Oracle Solaris Cluster 4.3 SRU 3 or later on Oracle Solaris 11.3
- Oracle Solaris Cluster 4.4 or later on Oracle Solaris 11.4

Deploying Oracle E-Business Suite 12.2 with Oracle Solaris Cluster provides the ability to take advantage of the zone cluster feature, which provides security isolation, application fault isolation, and individual resource management within a zone cluster. Note, Oracle offers the ability to license much of its software, including the Oracle Database, based on the quantity of CPUs that will run the software. When performance goals can be met with only a subset of the computer's CPUs, it may be appropriate to limit licensing costs by using a processor-based licensing metric. The document Hard Partitioning With Oracle Solaris Zones explains the different Solaris features that can be used to limit software licensing costs when a processor-based metric is used.

In the deployment example below, the Oracle E-Business Suite 12.2.8 DB-Tier and Mid-Tiers have been deployed within separate zone clusters. Furthermore, the Primary Application Tier and its WebLogic Administration Server have been installed within an Oracle Solaris Cluster failover resource group using a logical host. This means that if the physical node or zone cluster node hosting the Primary Application Tier and WebLogic Administration Server fails, Oracle Solaris Cluster fails the Primary Application Tier over to the other zone cluster node. Oracle Solaris Cluster detects a zone cluster node or physical node failure within seconds and automatically fails the logical host over to another zone cluster node, where the Primary Application Tier services are automatically started again. Typically, the WebLogic Administration Server is available again within 2-3 minutes after a zone cluster node or physical node failure.
The following diagram shows a typical Oracle E-Business 12.2 deployment on Oracle Solaris Cluster 4.3 with Oracle Solaris 11, using Oracle Solaris Zone Clusters. With this deployment example, after Oracle E-Business Suite 12.2.8 was installed and configured on Oracle Solaris Cluster 4.3, a version update was performed from the latest version of 4.3 to Oracle Solaris Cluster 4.4 and Oracle Solaris 11.4, using the rolling upgrade method, in order to take advantage of new features offered by Oracle Solaris Cluster 4.4. One new feature is support for immutable zone clusters. Once running on Oracle Solaris Cluster 4.4 and Oracle Solaris 11.4, both Oracle Solaris zone clusters were configured as immutable zone clusters in order to restrict access with read-only roots. A read-only zone cluster can be configured by setting the file-mac-profile property. Using a read-only zone cluster root expands the secure runtime boundary.

The following example provides simple steps to configure immutable zone clusters for db-zc and app-zc. Note, a zone cluster node reboot is required to implement the immutable zone cluster configuration. However, in order to maintain the availability of the Oracle E-Business Suite 12.2.8 services, a rolling zone cluster node reboot is performed. As shown in the diagram above, the Primary Application Tier contains the HTTP Server, which represents the single Web Entry Point. As such, when the app-zc zone cluster node running the HTTP Server is rebooted, a brief HTTP Server outage occurs. Typically it takes less than 1 minute before Oracle Solaris Cluster has restarted the HTTP Server on the surviving app-zc zone cluster node. Note, already-connected clients may not notice that small outage, as their sessions remain connected, provided they did not submit a request during the outage.
Verify that zone clusters db-zc and app-zc are running on each physical node.

root@node1:~# clzc status db-zc app-zc

=== Zone Clusters ===

--- Zone Cluster Status ---

Name     Brand     Node Name   Zone Host Name   Status   Zone Status
----     -----     ---------   --------------   ------   -----------
db-zc    solaris   node1       vdbnode1         Online   Running
                   node2       vdbnode2         Online   Running
app-zc   solaris   node1       vappnode1        Online   Running
                   node2       vappnode2        Online   Running

root@node1:~#

Configure immutable zone clusters for db-zc and app-zc.

root@node1:~# clzc configure db-zc
clzc:db-zc> set file-mac-profile=fixed-configuration
clzc:db-zc> exit
root@node1:~#
root@node1:~# clzc configure app-zc
clzc:app-zc> set file-mac-profile=fixed-configuration
clzc:app-zc> exit
root@node1:~#

Selectively reboot each db-zc zone cluster node as an immutable zone cluster node.

Verify that the Oracle RAC Database instances are running.

root@node1:~# zlogin db-zc 'su - oracle -c "srvctl status db -d vis"'
Oracle Corporation      SunOS 5.11      11.4    November 2018
Instance VIS1 is running on node vdbnode1
Instance VIS2 is running on node vdbnode2
root@node1:~#

Reboot zone cluster db-zc on node1.

root@node1:~# clzc reboot -n node1 db-zc
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "db-zc"...
root@node1:~#

Once complete, verify that the Oracle RAC Database instances are running, as shown above, before continuing.

Reboot zone cluster db-zc on node2.

root@node1:~# clzc reboot -n node2 db-zc
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "db-zc"...
root@node1:~#

Once complete, verify that the Oracle RAC Database instances are running, as shown above, before continuing.

Selectively reboot each app-zc zone cluster node as an immutable zone cluster node.

Verify that the Oracle E-Business Suite 12.2.8 services are running.
root@node1:~# clrs status -Z app-zc

Reboot zone cluster app-zc on node1.

root@node1:~# clzc reboot -n node1 app-zc
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "app-zc"...
root@node1:~#

Once complete, verify that the Oracle E-Business Suite 12.2.8 services are running, as shown above, before continuing.

Reboot zone cluster app-zc on node2.

root@node1:~# clzc reboot -n node2 app-zc
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "app-zc"...
root@node1:~#

Once complete, immutable zone clusters providing read-only roots now underpin the Oracle E-Business Suite 12.2.8 services.

As noted above, Oracle Solaris Cluster provides the ability to take advantage of the zone cluster feature to provide security isolation, application fault isolation, and individual resource management within a zone cluster. In this context, it is possible to consolidate other Oracle E-Business Suite deployments within separate zone clusters. A future article will expand this deployment example to include Business Continuity for Oracle E-Business Suite 12.2. While there are many use cases that could be considered, a largely passive standby site, within a primary and standby Disaster Recovery configuration, could be used for development/test Oracle E-Business Suite deployments. Those deployments would be quiesced before a takeover or switchover of the production Oracle E-Business Suite 12.2.8 to the standby site. Such other Oracle E-Business Suite deployments could be deployed within separate zone clusters and would be isolated from the production Oracle E-Business Suite 12.2.8 deployment.
For more information, please refer to the following:

Oracle Solaris Cluster 4.4 Release Notes
How to Deploy Oracle RAC on an Exclusive-IP Oracle Solaris Zones Cluster
How to Configure a Zone Cluster to Be Immutable
Oracle Solaris Cluster Data Service for Oracle E-Business Suite as of Release 12.2 Guide
Hard Partitioning With Oracle Solaris Zones


Perspectives

Programming in C: Few Tidbits #8

1) Function Pointers

Declaring Function Pointers

Similar to a variable declared as a pointer to some data type, a variable can also be declared to be a pointer to a function. Such a variable stores the address of a function that can later be called through that function pointer. In other words, function pointers point to executable code rather than to data, as typical pointers do.

e.g.,

void (*func_ptr)();

In the above declaration, func_ptr is a variable that can point to a function that takes no arguments and returns nothing (void). The parentheses around the function pointer cannot be removed; doing so makes the declaration that of a function returning a void pointer. The declaration by itself doesn't point to anything, so a value has to be assigned to the function pointer, typically the address of the target function to be executed.

Assigning Function Pointers

If a function named dummy was already defined, the following assignment makes the func_ptr variable point to the function dummy.

e.g.,

void dummy() { ; }
func_ptr = dummy;

In the above example, the function's name was used to assign that function's address to the function pointer. Using the address-of operator (&) is another way.

e.g.,

void dummy() { ; }
func_ptr = &dummy;

The two sample assignments above highlight the fact that, similar to arrays, a function's address can be obtained either by using the address operator (&) or by simply specifying the function name; the use of the address operator is therefore optional. Here's an example proving that.
% cat funcaddr.c
#include <stdio.h>

void foo() { ; }

void main()
{
    printf("Address of function foo without using & operator = %p\n", foo);
    printf("Address of function foo using & operator = %p\n", &foo);
}

% cc -o funcaddr funcaddr.c
% ./funcaddr
Address of function foo without using & operator = 10b6c
Address of function foo using & operator = 10b6c

Using Function Pointers

Once we have a function pointer variable pointing to a function, we can call the function it points to using that variable as if it were the actual function name. Dereferencing the function pointer is optional, just as the & operator is optional during assignment; the dereferencing happens automatically if not done explicitly.

e.g., The following two function calls are equivalent and exhibit the same behavior.

func_ptr();
(*func_ptr)();

Complete Example

Here is one final example for the sake of completeness. This example demonstrates the execution of a couple of arithmetic functions using function pointers. The same example also highlights the optional use of the & operator and of pointer dereferencing.

% cat funcptr.c
#include <stdio.h>

int add(int first, int second)
{
    return (first + second);
}

int multiply(int first, int second)
{
    return (first * second);
}

void main()
{
    int (*func_ptr)(int, int);                      /* declaration */

    func_ptr = add;                                 /* assignment (function name) */
    printf("100+200 = %d\n", (*func_ptr)(100,200)); /* execution (explicit dereference) */

    func_ptr = &multiply;                           /* assignment (& operator) */
    printf("100*200 = %d\n", func_ptr(100,200));    /* execution (auto dereference) */
}

% cc -o funcptr funcptr.c
% ./funcptr
100+200 = 300
100*200 = 20000

A Few Practical Uses of Function Pointers

Function pointers are convenient and useful when writing functions that sort data. The Standard C Library includes the qsort() function to sort data of any type (integers, floats, strings). The last argument to qsort() is a function pointer to the comparison function.
Function pointers are also useful for writing callback functions, where a function (executable code) is passed as an argument to another function that is expected to execute it (call back the function sent as an argument) at some point. In both examples above, function pointers are used to pass functions as arguments to other functions.

In some cases function pointers can make code cleaner and more readable. For example, an array of function pointers may simplify a large switch statement.

2) Printing Unicode Characters

Here's one possible way: make use of wide characters. Wide character strings can represent Unicode character values (code points), and the standard C library provides wide-character functions.

Include the header file wchar.h
Set a proper locale to support wide characters
Print the wide character(s) using standard printf and the "%ls" format specifier, or using wprintf to output formatted wide characters

The following rudimentary code sample prints random currency symbols and a name in Telugu script using both printf and wprintf function calls.

% cat unicode.c
#include <wchar.h>
#include <locale.h>
#include <stdio.h>

int main()
{
    setlocale(LC_ALL, "en_US.UTF-8");
    wprintf(L"\u20AC\t\u00A5\t\u00A3\t\u00A2\t\u20A3\t\u20A4");
    wchar_t wide[4] = { 0x0C38, 0x0C30, 0x0C33, 0 };
    printf("\n%ls", wide);
    wprintf(L"\n%ls", wide);
    return 0;
}

% cc -o unicode unicode.c
% ./unicode
€ ¥ £ ¢ ₣ ₤
సరళ
సరళ

Here is one website where numerical values for various Unicode characters can be found.


Perspectives

Using Solaris Zones on Oracle Cloud Infrastructure

Continuing my series of posts on how to use Oracle Solaris in Oracle Cloud Infrastructure (OCI), I'll next explore using Solaris Zones in OCI. In this post I assume you're somewhat familiar with zones already, as they've been around since we released Solaris 10 in 2005.

Before diving in, there's a little terminology to review. The original zones introduced in Solaris 10 are known as non-global zones; they share a kernel with the global zone but otherwise appear to applications as a separate instance of Solaris. More recently, we introduced kernel zones in Solaris 11.2. These run a separate kernel, with specialized network and I/O paths, and behave more like a paravirtualized virtual machine. There are also Solaris 10 branded zones, which emulate a Solaris 10 environment on a Solaris 11 kernel. All of these brands of zones provide a Solaris-native virtualization environment with minimal overhead that can help you get more out of your OCI compute resources. The image at the start of this post, from the Solaris 11.4 documentation, shows a complex zones environment that might be built anywhere, including in OCI, but this post provides a basic how-to for getting started with non-global and kernel zones in the OCI environment.

Setup

As a first step, you need to deploy Solaris 11.4 as a bare metal or virtual machine instance in OCI. See my earlier post on how to import the Solaris 11.4 images into your tenancy if you haven't already done so. As you create the instance, I'd recommend customizing the boot volume size to a value larger than the default 50 GB in order to accommodate the extra storage required for the zones.
After the instance is booted, you'll need to ssh in as the opc user and toggle the root pool's autoexpand property to allow ZFS to see the extra space beyond 50 GB (this is a workaround for an issue with autoexpand on the root pool):

# zpool set autoexpand=off rpool;sleep 15;zpool set autoexpand=on rpool

Next, you'll need to use the OCI console or CLI to add a second VNIC to the instance, which we'll use for the zone's network access. When you're done, the instance's Attached VNICs section in the OCI console should look something like this:

Configuring a Non-Global Zone on Bare Metal

Start by setting the default VLAN tag on net0 to ensure its traffic is tagged correctly:

# dladm set-linkprop -p default-tag=0 net0

Now it's time to configure the zone. The key requirement is to configure the anet resource to use the VLAN tag, IP address, and MAC address assigned to the VNIC by OCI; otherwise no network traffic will be allowed to or from the zone. Note that you need the prefix length from the VCN configuration. We also set the default router in the zone configuration to ensure the zone can reach resources outside its local link (by convention, the router is the first address on the network).

# zonecfg -z ngz1
Use 'create' to begin configuring a new zone.
zonecfg:ngz1> create
create: Using system default template 'SYSdefault'
zonecfg:ngz1> info
zonename: ngz1
brand: solaris
anet 0:
        linkname: net0
        configure-allowed-address: true
zonecfg:ngz1> select anet 0
zonecfg:ngz1:anet> set allowed-address=100.106.200.4/23
zonecfg:ngz1:anet> set mac-address=00:00:17:00:BA:64
zonecfg:ngz1:anet> set vlan-id=1
zonecfg:ngz1:anet> set defrouter=100.106.200.1
zonecfg:ngz1:anet> end
zonecfg:ngz1> info
zonename: ngz1
brand: solaris
anet 0:
        linkname: net0
        allowed-address: 100.106.200.4/23
        configure-allowed-address: true
        defrouter: 100.106.200.1
        link-protection: "mac-nospoof, ip-nospoof"
        mac-address: 00:00:17:00:ba:64
        vlan-id: 1
zonecfg:ngz1> commit
zonecfg:ngz1> exit

Configuring a Non-Global Zone on a Virtual Machine

When using a VM as the global zone, the process is slightly different, as the hypervisor exposes the VNIC to the guest automatically. This makes the zone configuration somewhat simpler: we don't have to worry about VLAN tags (the hypervisor handles those) and can just delegate the physical link as a net resource and configure the allowed address and router. First, here's the OCI VNIC configuration for our VM:

To configure the zone:

# zonecfg -z ngz1
Use 'create' to begin configuring a new zone.
zonecfg:ngz1> create
create: Using system default template 'SYSdefault'
zonecfg:ngz1> info
zonename: ngz1
brand: solaris
anet 0:
        linkname: net0
        configure-allowed-address: true
zonecfg:ngz1> remove anet 0
zonecfg:ngz1> add net
zonecfg:ngz1:net> set physical=net1
zonecfg:ngz1:net> set allowed-address=100.106.196.11/23
zonecfg:ngz1:net> set defrouter=100.106.196.1
zonecfg:ngz1:net> end
zonecfg:ngz1> info
zonename: ngz1
brand: solaris
net 0:
        allowed-address: 100.106.196.11/23
        physical: net1
        defrouter: 100.106.196.1
zonecfg:ngz1> commit
zonecfg:ngz1> exit

Once the zone is configured, install it as usual with zoneadm -z ngz1 install, then boot it, log in via zlogin, and verify the network works.
Configuring a Kernel Zone

Kernel zones are not supported in OCI's virtual machines, so you'll need a bare metal host to run them. First ensure that you've installed the kernel zone brand package, which isn't included in the Solaris image for OCI:

# pkg install brand-solaris-kz

In this example, we're reusing the same VNIC for the kernel zone as was used for the non-global zone example above. Kernel zones don't support setting the IP configuration in the zone configuration, so we only set the MAC address and VLAN tag here. The correct IP address, network prefix, and router must be provided either in the SMF profile used to configure the kernel zone or interactively to the system configuration tool that runs at the kernel zone's first boot. Here's the zone configuration:

# zonecfg -z kz1
Use 'create' to begin configuring a new zone.
zonecfg:kz1> create -t SYSsolaris-kz
zonecfg:kz1> info
zonename: kz1
brand: solaris-kz
hostid: 0x2bab3170
anet 0:
        configure-allowed-address: true
        id: 0
device 0:
        storage.template: dev:/dev/zvol/dsk/%{global-rootzpool}/VARSHARE/zones/%{zonename}/disk%{id}
        storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/kz1/disk0
        id: 0
        bootpri: 0
virtual-cpu:
        ncpus: 4
capped-memory:
        physical: 4G
        pagesize-policy: largest-available
zonecfg:kz1> select anet 0
zonecfg:kz1:anet> set mac-address=00:00:17:00:BA:64
zonecfg:kz1:anet> set vlan-id=1
zonecfg:kz1:anet> end
zonecfg:kz1> commit
zonecfg:kz1> exit

As before, proceed to install the zone and boot it using the zoneadm command. There's plenty more to explore in using zones with OCI; consult the zones documentation for more details on their capabilities.


Oracle Solaris Cluster

Oracle Solaris Cluster Centralized Installation

Oracle Solaris Cluster Centralized Installation is available with Oracle Solaris Cluster 4.4. The tool, called the "centralized installer", provides complete cluster software package installation from one of the cluster nodes. Centralized installation offers a wizard-style installer, clinstall, in both text-based and GUI modes to improve ease of use. The centralized installer can be used for new cluster installs, when adding new nodes to an existing cluster, or to install additional cluster software on an existing cluster.

How to install Oracle Solaris Cluster packages using clinstall

Refer to https://docs.oracle.com/cd/E69294_01/html/E69313/gsrid.html#scrolltoc for prerequisites before you begin the installation.

Restore external access to remote procedure call (RPC) communication on all cluster nodes.

phys-schost# svccfg -s svc:/network/rpc/bind setprop config/local_only = false
phys-schost# svcadm refresh network/rpc/bind:default

Set the location of the Oracle Solaris Cluster 4.4 package repository on all nodes.

phys-schost# pkg set-publisher -p file:///mnt/repo

Install the package ha-cluster/system/preinstall on all nodes.

phys-schost# pkg install --accept ha-cluster/system/preinstall

Determine the cluster node that will be used as the control node to issue installation commands, and verify that the control node can reach each of the other cluster nodes.

Authorize acceptance of cluster installation commands by the control node. Perform this step on all the nodes.

phys-schost# clauth enable -n phys-control

From the control node, launch the clinstall installer. To launch the graphical user interface (GUI), use the -g option.

phys-control# clinstall -g

To launch the interactive text-based utility, use the -t option.

phys-control# clinstall -t

Example 1: Cluster initial installation from an IPS repository using the interactive text-based utility, clinstall -t.
Select initial installation.

phys-control# clinstall -t

  *** Main Menu ***

    1) Initial Installation
    2) Additional Cluster Package Installation

    ?) Help with menu options
    q) Quit

    Option:  1

Specify the nodes on which the software packages need to be installed.

  >>> Enter nodes to install Oracle Solaris Cluster package <<<

    This Oracle Solaris Cluster release supports up to 16 nodes.
    Type the names of the nodes you want to install with Oracle Solaris
    Cluster software packages. Enter one node name per line. When
    finished, type Control-D:

    Node name:  phys-control
    Node name (Control-D to finish):  phys-schost
    Node name (Control-D to finish):  ^D

    This is the complete list of nodes:
        phys-control
        phys-schost

    Is it correct (y/n) ?  y

Checking all nodes...
Checking "phys-control"...
NAME (PUBLISHER)                                  VERSION                    IFO
ha-cluster/system/preinstall (ha-cluster)         4.4-0.21.0                 i--
Solaris 11.4.0.15.0 i386...ready
Checking "phys-schost"...Solaris 11.4.0.15.0 i386...ready
All nodes are ready for the cluster installation.

Press Enter to continue...

Select the installation source, either an IPS repository or an ISO file, usually downloaded from the Oracle support website.

Select the source of Oracle Solaris Cluster software

    1) Install from IPS Repository
    2) Install from ISO Image File

    Option:  1

Enter the IPS repository URI [https://pkg.oracle.com/ha-cluster/release]:

Accessing the repository for the cluster components ... [|]

Select the cluster component that you want to install.
Select an Oracle Solaris Cluster component that you want to install:

       Package                        Description
    1) ha-cluster-framework-minimal   OSC Framework minimal group package
    2) ha-cluster-framework-full      OSC Framework full group package
    3) ha-cluster-full                OSC full installation group package
    4) ha-cluster/system/manager      OSC Manager

    q) Done

    Option: 1

The package "ha-cluster-framework-minimal" is selected to install on the following nodes:
    phys-control,phys-schost

Do you want to continue with the installation (y/n) ?  y

Upon completion, the cluster packages in the ha-cluster-framework-minimal group package are installed on phys-control and phys-schost. The administrator can now proceed with the initial cluster configuration.

Example 2: Additional cluster package installation from an IPS repository using the text-based utility.

phys-control# clinstall -t

  *** Main Menu ***

    1) Initial Installation
    2) Additional Cluster Package Installation

    ?) Help with menu options
    q) Quit

    Option:  2

  >>> Enter nodes to install Oracle Solaris Cluster package <<<

    This Oracle Solaris Cluster release supports up to 16 nodes.
    Type the names of the nodes you want to install with Oracle Solaris
    Cluster software packages. Enter one node name per line. When
    finished, type Control-D:

    Node name:  phys-control
    Node name (Control-D to finish):  phys-schost
    Node name (Control-D to finish):  ^D

    This is the complete list of nodes:
        phys-control
        phys-schost

    Is it correct (y/n) ?  y

Checking all nodes...
Checking "phys-control"...
NAME (PUBLISHER)                            VERSION      IFO
ha-cluster/system/preinstall (ha-cluster)   4.4-0.21.0   i--
Solaris 11.4.0.15.0 i386...ready
Checking "phys-schost"...Solaris 11.4.0.15.0 i386...ready
All nodes are ready for the cluster installation.

Press Enter to continue...
Select the source of Oracle Solaris Cluster software

    1) Install from IPS Repository
    2) Install from ISO Image File

    Option:  1

Enter the IPS repository URI [https://pkg.oracle.com/ha-cluster/release]:

Accessing the repository for the cluster components ... [|]

Select an Oracle Solaris Cluster component that you want to install

       Package                        Description
    1) *data-service                  Select an Oracle Solaris Cluster data-service to i
    2) ha-cluster-data-services-full  OSC Data Services full group package
    3) ha-cluster-framework-full      OSC Framework full group package
    4) ha-cluster/system/manager      OSC Manager
    5) ha-cluster-geo-full            OSC Disaster Recovery Framework full group package
    6) ha-cluster-full                OSC full installation group package

    q) Done

    Option:  5

The package "ha-cluster-geo-full" is selected to install on the following nodes:
    phys-control, phys-schost

Do you want to continue with the installation (y/n) ?  y

Upon completion, the Oracle Solaris Cluster Disaster Recovery Framework software is installed on phys-control and phys-schost. The administrator can now proceed to configure the disaster recovery framework on the cluster.

Example 3: Cluster initial installation from an ISO image using the graphical user interface (GUI), clinstall with the -g option.

phys-control# clinstall -g

Upon completion, the cluster packages in the ha-cluster-full group package are installed on phys-control and phys-schost. The administrator can now proceed with the initial cluster configuration.


Oracle Solaris 10

Oracle Solaris 10 in the Oracle Cloud Infrastructure

With the release of the October 2018 Solaris 10 Extended Support Recommended patch set, you can now run Solaris 10 in Oracle Cloud. I thought it would be good to document the main steps for getting an image you can run in OCI.

The high-level steps are:

Create a Solaris 10 image using VirtualBox and patch it with the October 2018 patch set
Unconfigure it and shut it down
Upload it to the Oracle Cloud object storage
Create a custom image from the object you've just uploaded
Create a compute instance from the custom image
Boot it up and perform configuration tasks

Creating a Solaris 10 image

1) Download Solaris 10. I chose to download the Oracle VM VirtualBox template, which comes preconfigured and installed with Solaris 10 1/13, the last update release of Solaris 10. You could equally install from the ISO; just make sure you pick vmdk as the disk image format.

2) Install VirtualBox on any suitable x86 host and operating system. I'm using Oracle Linux 7, which is configured to kickstart in our lab, but you could download it from Oracle at https://www.oracle.com/linux. One reason I picked Oracle Linux 7 is to make it easier to run the OCI tools for uploading images to Oracle Cloud Infrastructure. VirtualBox can be downloaded from http://virtualbox.org, or better, it's in the Oracle Linux 7 yum repositories; just make sure the addons and developer repos are enabled in the file /etc/yum.repos.d/public-yum-ol7.repo, then run

# yum install VirtualBox-5.2.x86_64

3) Import the VirtualBox template you downloaded above using the Import Appliance menu. On the Import Virtual Appliance dialog I increased the amount of memory and changed the location of the root disk. I also changed the imported appliance to run with USB 1.1, as I haven't got the extension pack installed, but you probably should install that anyway.
When it comes up it'll be using DHCP, so you should be able to just select DHCP during the usual sysconfig phase, select the timezone and root password, and it'll eventually come up with a desktop login. Now you can see we've got Solaris 10 up and running. For good measure I updated the guest additions; they're installed anyway, but at an older version, so it works better with the new versions.

4) The next step is to download the recommended patch set, specifically the October 2018 patch set, which contains some fixes needed to work in OCI. It is available from https://updates.oracle.com/download/28797770.html and is 2.1 GB in size, so it'll take some time to download. Then simply extract the zip archive, change directory into the directory you've just extracted, and run

./installpatchset --s10patchset

(If I was doing this on a real machine I'd probably create a new Live Upgrade boot environment and patch that, having scoured the README.) Currently 403 patches are analysed; that will change over time.

Shut it down and prepare it for use in OCI

When you reboot, be sure to read the messages at the end of the patch install phase, and the README, particularly this section: "If the "-B" Live Upgrade flag is used, then the luactivate command will need to be run, and either an init(1M) or a shutdown(1M) will be needed to complete activation of the boot environment. A reboot(1M) will not complete activation of a boot environment following an luactivate."

Two more things to do before we shut down, though: first, remove the SUNWvboxguest package; second, sys-unconfig the VM, so we get to configure it properly on reboot.

# pkgrm SUNWvboxguest
# sys-unconfig

Upload it to Oracle Cloud object storage

We now have a suitably patched Solaris 10 image, ready to upload to Oracle Cloud. To do this you need to have the OCI tools installed on your Linux machine; doing that will be the subject of another blog.
But there's pretty good documentation here too (which is all I followed to create this blog). Assuming you now have the OCI tools working and in your path, you upload the disk image using this command:

$ oci os object put --bucket-name images --file Solaris10_1-13-disk1.vmdk

It'll use the keys you've configured and uploaded to the console, and is surprisingly quick to upload the image: given this disk file is ~20 GB in size, it only took about 10 minutes to upload.

[oci@ol7-1]]# oci os object put --bucket-name images --file Solaris10_1-13-disk1.vmdk
Upload ID: c94aaf0d-a0e2-d3d1-7fb6-5aed125c3921
Split file into 145 parts for upload.
Uploading object  [###############################-----]   87%  0d 00:01:26

Once it's there you can see it in the object storage pane of the OCI console, and critically you need to get the object storage URI to allow you to create a custom image of it.

Create a custom image

Then go to the compute section and create a custom image, selecting the "Emulated" mode and giving it the object storage URI. It takes a while for the image to be created, but once it is you can deploy it multiple times.

Create a compute instance

Now go to the Compute menu and create an instance. The key things at this stage are to select the custom image you just created and an appropriate VM shape. You will then be shown a page like this one. And finally you can use VNC to connect to the console by using your RSA public key and creating a console connection. If you select the "Connect with VNC" option from the three dots on the right of the console connection, it gives you the command to set up an ssh tunnel from your system to the console.

Boot it up and perform configuration tasks

You connect with vncviewer :5900 and you'll see the VM has panicked. Solaris 10 uses an older version of GRUB, which can't easily find the root disk if the device configuration changes, so we need to trick it into finding the rpool. To do this you can boot the failsafe archive and mount the rpool.
Then you touch /a/reconfigure and reboot; next time through, the system should boot up correctly. It does take a while after loading the ORACLE SOLARIS image for the system to actually boot, so don't panic if you see a blue screen for a while before seeing the SunOS Release boot messages.

Of course we remembered to sys-unconfig before shutting the VM down, so we will have to run through the sysconfig setup. Just remember to set it up as DHCP. You do get asked for nameservice info; you will probably want to use the local DNS resolver at 169.254.169.254. Oracle Cloud also has lots of more specific options for managing your own DNS records and zones. If you forget to remove the SUNWvboxguest package, the X server will fail to start.

And there you have it, Oracle Solaris 10 running in OCI.


Oracle Solaris Cluster

Cluster File System with ZFS - Introduction and Configuration

Oracle Solaris Cluster, as of the 4.3 release, has support for Cluster File System with UFS and for ZFS as a failover file system. For application deployments that require accessing the same file system across multiple nodes, ZFS as a failover file system is not a suitable solution. ZFS is the preferred data storage management on Oracle Solaris 11 and has many advantages over UFS. With Oracle Solaris Cluster 4.4, you can now have both ZFS and global access to ZFS file systems: Oracle Solaris Cluster 4.4 has added support for Cluster File System with ZFS.

With this new feature, you can make ZFS file systems accessible from multiple nodes and run applications from the same file systems simultaneously on those nodes. It must be noted that a zpool for globally mounted ZFS file systems is not actually a global ZFS pool; instead, a Cluster File System layer on top of ZFS makes the file systems of the ZFS pool globally accessible. The following procedures explain and illustrate a couple of methods to bring up this configuration.

How to create a zpool for globally mounted ZFS file systems:

1) Identify the shared device to be used for ZFS pool creation.

To configure a zpool for globally mounted ZFS file systems, choose one or more multi-hosted devices from the output of the cldevice show command.

phys-schost-1# cldevice show | grep Device

In the following example, the entries for DID devices /dev/did/rdsk/d1 and /dev/did/rdsk/d4 show that those devices are connected only to phys-schost-1 and phys-schost-2 respectively, while /dev/did/rdsk/d2 and /dev/did/rdsk/d3 are accessible by both nodes of this two-node cluster, phys-schost-1 and phys-schost-2. In this example, DID device /dev/did/rdsk/d3 with device name c1t6d0 will be used for global access by both nodes.
# cldevice show | grep Device

=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d1
  Full Device Path:                               phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name:                                /dev/did/rdsk/d2
  Full Device Path:                               phys-schost-1:/dev/rdsk/c0t6d0
  Full Device Path:                               phys-schost-2:/dev/rdsk/c0t6d0
DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                               phys-schost-1:/dev/rdsk/c1t6d0
  Full Device Path:                               phys-schost-2:/dev/rdsk/c1t6d0
DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                               phys-schost-2:/dev/rdsk/c1t6d1

2) Create a ZFS pool on the DID device(s) that you chose.

phys-schost-1# zpool create HAzpool c1t6d0
phys-schost-1# zpool list
NAME      SIZE  ALLOC   FREE  CAP  DEDUP    HEALTH  ALTROOT
HAzpool  49.8G  2.22G  47.5G   4%  1.00x    ONLINE  /

3) Create ZFS file systems on the pool.

phys-schost-1# zfs create -o mountpoint=/global/fs1 HAzpool/fs1
phys-schost-1# zfs create -o mountpoint=/global/fs2 HAzpool/fs2

4) Create files to show global access to the file systems.

Copy some files to the newly created file systems. These files will be used in the procedures below to demonstrate that the file systems are globally accessible from all cluster nodes.

phys-schost-1# cp /usr/bin/ls /global/fs1/
phys-schost-1# cp /usr/bin/date /global/fs2/
phys-schost-1# ls -al /global/fs1/ /global/fs2/
/global/fs1/:
total 120
drwxr-xr-x   3 root     root           4 Oct  8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct  8 23:21 ..
-r-xr-xr-x   1 root     root       57576 Oct  8 23:22 ls
/global/fs2/:
total 7
drwxr-xr-x   3 root     root           4 Oct  8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct  8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct  8 23:22 date

At this point, the ZFS file systems of the zpool are accessible only on the node where the zpool is imported. There are two ways to configure a zpool for globally mounted ZFS file systems.

Method 1: Using a device group

You would use this method when the requirement is only to provide global access to the ZFS file systems, and it is not yet known how HA services will be created using the file systems or which cluster resource groups will be created.

1) Create a device group of the same name as the zpool you created in step 2 above, of type zpool, with poolaccess set to global.

phys-schost-1# cldevicegroup create -p poolaccess=global -n \
phys-schost-1,phys-schost-2 -t zpool HAzpool

Note: The device group must have the same name, HAzpool, as chosen for the pool. The poolaccess property is set to global to indicate that the file systems of this pool will be globally accessible across the nodes of the cluster.

2) Bring the device group online.

phys-schost-1# cldevicegroup online HAzpool

3) Verify the configuration.
phys-schost-1# cldevicegroup show

=== Device Groups ===

Device Group Name:                              HAzpool
  Type:                                           ZPOOL
  failback:                                       false
  Node List:                                      phys-schost-1, phys-schost-2
  preferenced:                                    false
  autogen:                                        false
  numsecondaries:                                 1
  ZFS pool name:                                  HAzpool
  poolaccess:                                     global
  readonly:                                       false
  import-at-boot:                                 false
  searchpaths:                                    /dev/dsk

phys-schost-1# cldevicegroup status

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary         Secondary       Status
-----------------     -------         ---------       ------
HAzpool               phys-schost-1   phys-schost-2   Online

In this configuration, the zpool is imported on the node that is primary for the zpool device group, but the file systems in the zpool are mounted globally.

Execute the files copied earlier to the file systems, from a different node. It can be observed that the file systems are mounted globally and accessible across all nodes.

phys-schost-2# /global/fs1/ls -al /global/fs2
total 56
drwxr-xr-x   3 root     root           4 Oct  8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct  8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct  8 23:22 date
phys-schost-2# /global/fs2/date
Fri Oct  9 04:08:59 PDT 2018

You can also verify that a newly created ZFS file system is immediately accessible from all nodes by executing the commands below.
From the cldevicegroup status above, it can be observed that phys-schost-1 is the primary node for the device group. Execute the below command on the primary node:

phys-schost-1# zfs create -o mountpoint=/global/fs3 HAzpool/fs3

Then, from a different node, verify that the file system is accessible.

phys-schost-2# df -h /global/fs3
file system             Size   Used  Available Capacity  Mounted on
HAzpool/fs3             47G    40K        47G     1%    /global/fs3

Method-2: Using HAStoragePlus resource

You would typically use this method when you have planned how HA services in resource groups will use the globally mounted file systems, and expect dependencies from resources managing the application on a resource managing the file systems. A device group of type zpool with poolaccess set to global is created when an HAStoragePlus resource is created with the GlobalZpools property defined, if such a device group does not already exist.

1) Create an HAStoragePlus resource for a zpool for globally mounted file systems and bring it online.

Note: The resource group can be scalable or failover as needed by the configuration.

phys-schost-1# clresourcegroup create hasp-rg
phys-schost-1# clresource create -t HAStoragePlus -p \
GlobalZpools=HAzpool -g hasp-rg hasp-rs
phys-schost-1# clresourcegroup online -eM hasp-rg

2) Verify the configuration.
phys-schost-1# clrs status hasp-rs

=== Cluster Resources ===

Resource Name       Node Name            State        Status Message
-------------       ---------            -----        --------------
hasp-rs             phys-schost-1        Online       Online
                    phys-schost-2        Offline      Offline

phys-schost-1# cldevicegroup show

=== Device Groups ===

Device Group Name:                               HAzpool
  Type:                                          ZPOOL
  failback:                                      false
  Node List:                                     phys-schost-1, phys-schost-2
  preferenced:                                   false
  autogen:                                       true
  numsecondaries:                                1
  ZFS pool name:                                 HAzpool
  poolaccess:                                    global
  readonly:                                      false
  import-at-boot:                                false
  searchpaths:                                   /dev/dsk

phys-schost-1# cldevicegroup status

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary           Secondary       Status
-----------------     -------           ---------       ------
HAzpool               phys-schost-1     phys-schost-2   Online

Execute the files copied in step 3 of the previous section from a different node. It can be observed that the file systems are mounted globally and accessible across all nodes.

phys-schost-2# /global/fs1/ls -al /global/fs2
total 56
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct 8 23:22 date
phys-schost-2# /global/fs2/date
Fri Oct 9 04:23:26 PDT 2018

You can also verify that a newly created ZFS file system is immediately accessible from all nodes by executing the below commands.
From the cldevicegroup status above, it can be observed that phys-schost-1 is the primary node for the device group. Execute the below command on the primary node:

phys-schost-1# zfs create -o mountpoint=/global/fs3 HAzpool/fs3

Then, from a different node, verify that the file system is accessible.

phys-schost-2# df -h /global/fs3
file system             Size   Used  Available Capacity  Mounted on
HAzpool/fs3             47G    40K        47G     1%    /global/fs3

How to configure a zpool for globally mounted ZFS file systems for an already registered device group:

There might be situations where a system administrator used Method 1 to meet the global-access requirement so that application administrators could install the application, but later finds a need for an HAStoragePlus resource for HA services deployment. In those situations there is no need to undo the steps done in Method 1 and redo them with Method 2: HAStoragePlus also supports zpools for globally mounted ZFS file systems whose zpool device groups were already registered manually. The following steps illustrate how this configuration can be achieved.

1) Create an HAStoragePlus resource with an existing zpool for globally mounted ZFS file systems and bring it online.

Note: The resource group can be scalable or failover as needed by the configuration.

phys-schost-1# clresourcegroup create hasp-rg
phys-schost-1# clresource create -t HAStoragePlus -p \
GlobalZpools=HAzpool -g hasp-rg hasp-rs
phys-schost-1# clresourcegroup online -eM hasp-rg

2) Verify the configuration.
phys-schost-1# clresource show -p GlobalZpools hasp-rs

  GlobalZpools:                                 HAzpool

phys-schost-1# cldevicegroup status

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary           Secondary       Status
-----------------     -------           ---------       ------
HAzpool               phys-schost-1     phys-schost-2   Online

Note: When the HAStoragePlus resource is deleted, the zpool device group is not automatically deleted. While the zpool device group exists, the ZFS file systems in the zpool will be mounted globally when the device group is brought online (# cldevicegroup online HAzpool).

For more information, see How to Configure a ZFS Storage Pool for Cluster-wide Global Access Without HAStoragePlus. For information on how to use HAStoragePlus to manage ZFS pools for global access by data services, see How to Set Up an HAStoragePlus Resource for a Global ZFS Storage Pool in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.
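Scripts that act on a zpool device group often need to know which node is currently primary, as shown in the cldevicegroup status tables above. The following is a minimal, hedged Python sketch (not part of Oracle Solaris Cluster) that extracts the primary node from status output of that shape; the sample text is hard-coded here, but in practice you would capture it with subprocess.

```python
# Hedged sketch: pull the primary node for a device group out of
# `cldevicegroup status`-style output, as shown in the listings above.
# The sample text is hard-coded for illustration only.

SAMPLE_STATUS = """\
=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary           Secondary       Status
-----------------     -------           ---------       ------
HAzpool               phys-schost-1     phys-schost-2   Online
"""

def primary_node(status_text, group):
    """Return the Primary column for the row whose first field is `group`."""
    for line in status_text.splitlines():
        fields = line.split()
        if fields and fields[0] == group:
            return fields[1]          # column after the device group name
    return None

print(primary_node(SAMPLE_STATUS, "HAzpool"))  # phys-schost-1
```

This relies only on the whitespace-separated column layout of the status table, so it would need adjusting if a device group name itself contained spaces.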


Oracle Announces 2018 Oracle Excellence Awards – Congratulations to our “Leadership in Infrastructure Transformation" Winners

We are pleased to announce the 2018 Oracle Excellence Awards "Leadership in Infrastructure Transformation" winners. This elite group of recipients includes customers and partners who are using Oracle Infrastructure Technologies to accelerate innovation and drive business transformation by increasing agility, lowering costs, and reducing IT complexity. This year, our 10 award recipients were selected from among hundreds of nominations. The winners represent 5 different countries (Austria, Russia, Turkey, Sweden, United States) and 6 different industries (Communications, Financial, Government, Manufacturing, Technology, Transportation). Winners must use at least one, or a combination, of the following for category qualification: Oracle Linux; Oracle Solaris; Oracle Virtualization (VM, VirtualBox); Oracle Private Cloud Appliance; Oracle SuperCluster; Oracle SPARC; and Oracle Storage, Tape/Disk. Oracle is pleased to honor these leaders who have delivered value to their organizations through the use of multiple Oracle technologies, resulting in reduced cost of IT operations, improved time to deployment, and performance and end-user productivity gains. This year's winners are Michael Polepchuk, Deputy Chief Information Officer, BCS Global Markets; Brian Young, Vice President, Cerner; Brian Bream, CTO, Collier IT; Rudolf Rotheneder, CEO, cons4u GmbH; Heidi Ratini, Senior Director of Engineering, IT Convergence; Philip Adams, Chief Technology Officer, Lawrence Livermore National Labs; JK Pareek, Vice President, Global IT and CIO, Nidec Americas Holding Corporation; Baris Findik, CIO, Pegasus Airlines; Michael Myhrén, Senior DBA and Senior Systems Engineer, and Charles Mongeon, Vice President, Data Center Solutions and Services, TELUS Corporation. More information on these winners can be found at https://www.oracle.com/corporate/awards/leadership-in-infrastructure-transformation/winners.html


Oracle Solaris/SPARC and Cloud Adjacent Solutions at Oracle Open World 2018

If you want to know which sessions at Oracle Open World 2018 are about Oracle Solaris and SPARC, you can find them in the Oracle Solaris and SPARC Focus On document. Monday at 11:30am in Moscone South, Bill Nesheim, Senior Vice President, Oracle Solaris Development; Masood Heydari, SVP, Hardware Development; and Brian Bream, Chief Technology Officer, Collier IT, will be speaking about what's happening with Oracle Solaris and SPARC, and how they are being used in real-world, mission-critical applications to deploy Oracle Solaris/SPARC applications cloud adjacent, giving you all the consistency, simplicity, and security of Oracle Solaris/SPARC and the monetary advantages of cloud.

Oracle Solaris and SPARC Update: Security, Simplicity, Performance [PRM3358]

Oracle Solaris and SPARC technologies are developed through Oracle's unique vision of coengineering operating systems and hardware together with Oracle Database, Java, and enterprise applications. Discover the advanced features that secure application data with less effort, implement end-to-end encryption, streamline compliance assessment, simplify management of virtual machines, and accelerate performance for Oracle Database and middleware. In this session learn how SPARC/Solaris engineered systems and servers deliver continuous innovation and investment protection to organizations in their journey to the cloud, see examples of cloud-ready infrastructure, and learn Oracle's strategy for future enhancements for Oracle Solaris and SPARC systems.

Masood Heydari, SVP, Hardware Development, Oracle
Brian Bream, Chief Technology Officer, Vaske Computer, Inc.
Bill Nesheim, Senior Vice President, Oracle Solaris Development, Oracle

Monday, Oct 22, 11:30 a.m. - 12:15 p.m. | Moscone South - Room 206

To find out more, join us at Oracle Open World 2018!


Oracle Solaris Security at OOW18

Oracle Open World 2018 is next week! We're busy creating session slides, making Hands-on-Labs, and constructing demos. But I wanted to take a few moments to highlight one of our sessions. Darren Moffat, our Oracle Solaris Architect, will be joined by Thorsten Muehlmann from VP Bank AG. They will be speaking about how Oracle Solaris helps simplify securing your data center, and about the new security capabilities of Oracle Solaris 11.4:

Oracle Solaris: Simplifying Security and Compliance for On-Premises and the Cloud [PRO1787]

Oracle Solaris is engineered for security at every level. It allows you to mitigate risk and prove on-premises and cloud compliance easily, so you can spend time innovating. In this session learn how Oracle combines the power of industry-standard security features, unique security and antimalware capabilities, and multinode compliance management tools for low-risk application deployments and cloud infrastructure.

Thorsten Muehlmann, Senior Systems Engineer, VP Bank AG
Darren Moffat, Senior Software Architect, Oracle

Monday, Oct 22, 10:30 a.m. - 11:15 a.m. | Moscone South - Room 216

Want to know more about Oracle Solaris Application Sandboxing, multi-node compliance, per-file auditing, or how a customer uses Oracle Solaris security capabilities in a real-world, mission-critical data center? Join them on Monday at 10:30am at Oracle Open World!


Oracle Solaris Cluster

Exclusive-IP Zone Cluster - Automatic Network Configuration

Prior to the 4.4 release of Oracle Solaris Cluster (OSC), it was not possible to perform automatic public network configuration for an Exclusive-IP Zone Cluster (ZC) by specifying a System Configuration (SC) profile to the clzonecluster 'install' command. To illustrate this, let us consider installation of a typical ZC with a separate IP stack and two data-links to achieve the network redundancy needed to run HA services. The data-links, which are vnics previously created in the global zone, are configured as part of an IPMP group that is needed to host the LogicalHostname or SharedAddress resource IP address. The zone cluster was configured as shown by the clzc 'export' command output below.

root@clusterhost1:~# clzc export zc1
create -b
set zonepath=/zones/zc1
set brand=solaris
set autoboot=false
set enable_priv_net=true
set enable_scalable_svc=false
set file-mac-profile=none
set ip-type=exclusive
add net
set address=192.168.10.10
set physical=auto
end
add attr
set name=cluster
set type=boolean
set value=true
end
add node
set physical-host=clusterhost1
set hostname=zc1-host-1
add net
set physical=vnic3
end
add net
set physical=vnic0
end
add privnet
set physical=vnic1
end
add privnet
set physical=vnic2
end
end
add node
set physical-host=clusterhost2
set hostname=zc1-host-2
add net
set physical=vnic3
end
add net
set physical=vnic0
end
add privnet
set physical=vnic1
end
add privnet
set physical=vnic2
end
end

In OSC 4.3, after installing the ZC with an SC profile and booting it up, the ZC will be in the Online Running state but without the public network configuration. The following ipadm(1M) commands are needed to set up the static network configuration in each non-global zone of the ZC.
root@zc1-host-1:~# ipadm create-ip vnic0
root@zc1-host-1:~# ipadm create-ip vnic3
root@zc1-host-1:~# ipadm create-ipmp -i vnic0 -i vnic3 sc_ipmp0
root@zc1-host-1:~# ipadm create-addr -T static -a 192.168.10.11/24 sc_ipmp0/v4

root@zc1-host-2:~# ipadm create-ip vnic0
root@zc1-host-2:~# ipadm create-ip vnic3
root@zc1-host-2:~# ipadm create-ipmp -i vnic0 -i vnic3 sc_ipmp0
root@zc1-host-2:~# ipadm create-addr -T static -a 192.168.10.12/24 sc_ipmp0/v4

In OSC 4.4 it is now possible to build an SC profile such that no manual steps are required to complete the network configuration, and all the zones of the ZC can boot up to the Online Running state upon first boot of the ZC. How is this made possible in OSC 4.4 on Solaris 11.4? The clzonecluster(8CL) command can recognize sections of the SC profile XML that are applicable to individual zones of the ZC, delimited by <instances_for_node node_name="ZCNodeName"></instances_for_node> XML tags. Sections of the SC profile that are not within these tags apply to all the zones of the ZC. Solaris 11.4 now supports arbitrarily complex network configurations in SC profiles. The following is a snippet of the SC profile that can be used for our typical ZC configuration, derived from the template /usr/share/auto_install/sc_profiles/ipmp_network.xml. The section of the SC profile that is common to all the zones of the ZC is not included in this snippet.
<instances_for_node node_name="zc1-host-1">
  <service version="1" name="system/identity">
    <instance enabled="true" name="node">
      <property_group name="config">
        <propval name="nodename" value="zc1-host-1"/>
      </property_group>
    </instance>
  </service>
  <service name="network/ip-interface-management" version="1" type="service">
    <instance name="default" enabled="true">
      <property_group name="interfaces" type="application">
        <!-- vnic0 interface configuration -->
        <property_group name="vnic0" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- vnic3 interface configuration -->
        <property_group name="vnic3" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- IPMP interface configuration -->
        <property_group name="sc_ipmp0" type="interface-ipmp">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <property name="under-interfaces" type="astring">
            <astring_list>
              <value_node value="vnic0"/>
              <value_node value="vnic3"/>
            </astring_list>
          </property>
          <!-- IPv4 static address -->
          <property_group name="data1" type="address-static">
            <propval name="ipv4-address" type="astring" value="192.168.10.11"/>
            <propval name="prefixlen" type="count" value="24"/>
            <propval name="up" type="astring" value="yes"/>
          </property_group>
        </property_group>
      </property_group>
    </instance>
  </service>
</instances_for_node>
<instances_for_node node_name="zc1-host-2">
  <service version="1" name="system/identity">
    <instance enabled="true" name="node">
      <property_group name="config">
        <propval name="nodename" value="zc1-host-2"/>
      </property_group>
    </instance>
  </service>
  <service name="network/ip-interface-management" version="1" type="service">
    <instance name="default" enabled="true">
      <property_group name="interfaces" type="application">
        <!-- vnic0 interface configuration -->
        <property_group name="vnic0" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- vnic3 interface configuration -->
        <property_group name="vnic3" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- IPMP interface configuration -->
        <property_group name="sc_ipmp0" type="interface-ipmp">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <property name="under-interfaces" type="astring">
            <astring_list>
              <value_node value="vnic0"/>
              <value_node value="vnic3"/>
            </astring_list>
          </property>
          <!-- IPv4 static address -->
          <property_group name="data1" type="address-static">
            <propval name="ipv4-address" type="astring" value="192.168.10.12"/>
            <propval name="prefixlen" type="count" value="24"/>
            <propval name="up" type="astring" value="yes"/>
          </property_group>
        </property_group>
      </property_group>
    </instance>
  </service>
</instances_for_node>

You can find the complete SC profile here. Some fields, like the zone host names, IP addresses, and encrypted passwords, need to be substituted in this file. Other changes can be made to this profile for different configurations, e.g., configuring an Active-Standby IPMP group instead of the Active-Active IPMP configuration shown in this example. We can use this SC profile to install and boot our ZC as shown below.
root@clusterhost1:~# clzc install -c /var/tmp/zc1_config.xml zc1
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc1"...
root@clusterhost1:~# clzc boot zc1
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zc1"...

After a short duration, we will see the ZC in the Online Running state.

root@clusterhost1:~# clzc status zc1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name      Zone Host Name   Status   Zone Status
----   -----     ---------      --------------   ------   -----------
zc1    solaris   clusterhost1   zc1-host-1       Online   Running
                 clusterhost2   zc1-host-2       Online   Running

root@zc1-host-1:~# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
clprivnet1        ip         ok           --         --
   clprivnet1/?   static     ok           --         172.16.3.65/26
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         127.0.0.1/8
   lo0/v6         static     ok           --         ::1/128
sc_ipmp0          ipmp       ok           --         --
   sc_ipmp0/data1 static     ok           --         192.168.10.11/24
vnic0             ip         ok           sc_ipmp0   --
vnic1             ip         ok           --         --
   vnic1/?        static     ok           --         172.16.3.129/26
vnic2             ip         ok           --         --
   vnic2/?        static     ok           --         172.16.3.193/26
vnic3             ip         ok           sc_ipmp0   --

root@zc1-host-2:~# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
clprivnet1        ip         ok           --         --
   clprivnet1/?   static     ok           --         172.16.3.66/26
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         127.0.0.1/8
   lo0/v6         static     ok           --         ::1/128
sc_ipmp0          ipmp       ok           --         --
   sc_ipmp0/data1 static     ok           --         192.168.10.12/24
vnic0             ip         ok           sc_ipmp0   --
vnic1             ip         ok           --         --
   vnic1/?        static     ok           --         172.16.3.130/26
vnic2             ip         ok           --         --
   vnic2/?        static     ok           --         172.16.3.194/26
vnic3             ip         ok           sc_ipmp0   --

With this feature it is easy to create templates for SC profiles that can be used for fast deployments of ZCs without the need for administrator intervention to complete system configuration of the zones in the ZC. For more details on the SC profile configuration in 11.4, refer to Customizing Automated Installations With Manifests and Profiles. For cluster documentation, refer to clzonecluster(8cl).
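The per-node sections of the SC profile shown above follow a fixed pattern, differing only in host name and address, so they lend themselves to programmatic generation. Below is a hedged Python sketch (not an Oracle tool) that builds just the address-static property group of one node's section with xml.etree.ElementTree; element and attribute names follow the profile snippet above, and the result is a fragment you would merge into a complete profile.

```python
# Hedged sketch: generate the static-address property group of an SC
# profile node section (matching the XML structure shown above).
# This emits only a fragment, not a full, installable profile.
import xml.etree.ElementTree as ET

def address_group(ipv4, prefixlen):
    """Build <property_group name="data1" type="address-static"> ... </property_group>."""
    pg = ET.Element("property_group", name="data1", type="address-static")
    ET.SubElement(pg, "propval", name="ipv4-address",
                  type="astring", value=ipv4)
    ET.SubElement(pg, "propval", name="prefixlen",
                  type="count", value=str(prefixlen))
    ET.SubElement(pg, "propval", name="up", type="astring", value="yes")
    return pg

frag = ET.tostring(address_group("192.168.10.11", 24), encoding="unicode")
print(frag)
```

Generating the XML with a real XML library rather than string templates avoids exactly the kind of malformed-attribute typo that is easy to introduce when editing a large profile by hand.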


Oracle Solaris Cluster

Oracle SuperCluster: Brief Introduction to osc-interdom

Target Audience: Oracle SuperCluster customers

The primary objective of this blog post is to provide some information about this obscure tool to inquisitive users and customers who might have noticed the osc-interdom service and the namesake package and wondered what they are for. The SuperCluster InterDomain Communication Tool, osc-interdom, is an infrastructure framework and a service that runs on Oracle SuperCluster products to provide flexible monitoring and management capabilities across SuperCluster domains. It provides the means to inspect and enumerate the components of a SuperCluster so that other components can fulfill their roles in managing the SuperCluster. The framework also allows commands to be executed from a control domain to take effect across all domains on the server node (e.g., a PDom on M8) and, optionally, across all servers (e.g., other M8 PDoms in the cluster) on the system. SuperCluster Virtual Assistant (SVA), ssctuner, exachk, and Oracle Enterprise Manager (EM) are some of the consumers of the osc-interdom framework.

Installation and Configuration

The interdom framework requires the osc-interdom package from the exa-family repository to be installed and enabled on all types of domains in the SuperCluster. In order to enable communication between domains in the SuperCluster, interdom must be configured on all domains that need to be part of the inter-domain communication channel. In other words, it is not a requirement for all domains in the cluster to be part of the osc-interdom configuration. It is possible to exclude some domains from the comprehensive interdom directory, either during initial configuration or at a later time. Also, once the interdom directory configuration has been built, it can be refreshed or rebuilt at any time.
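Conceptually, the interdom directory described above is a store of per-component properties that other SuperCluster tools query. The following is a tiny, hedged Python model of that idea; every name in it is invented for illustration and does not reflect the actual osc-interdom implementation.

```python
# Hedged conceptual model (all names invented): a directory of
# SuperCluster components keyed by a unique ID, each holding
# timestamped properties that consumers can query.
import time

class ComponentDirectory:
    def __init__(self):
        self._components = {}     # component id -> {prop: {name, value, mtime}}

    def update(self, comp_id, prop, value):
        """Record a property value with a modification timestamp."""
        self._components.setdefault(comp_id, {})[prop] = {
            "name": prop, "value": value, "mtime": int(time.time())}

    def get_data(self, comp_id, prop):
        """Return one property record for one component."""
        return {prop: self._components[comp_id][prop]}

    def ids(self):
        return list(self._components)

d = ComponentDirectory()
d.update("db8c979d-4149-452f-8737-c857e0dc9eb0", "hostname", "alpha")
print(d.get_data("db8c979d-4149-452f-8737-c857e0dc9eb0", "hostname")["hostname"]["value"])  # alpha
```

The point of the model is only to show the shape of the data consumers work with: an ID-keyed map of named, timestamped values, which matches the records the real tool returns.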
Since installing and configuring osc-interdom is automated as part of the SuperCluster installation and configuration processes, it is unlikely that anyone at the customer site needs to perform those tasks manually.

# svcs osc-interdom
STATE          STIME    FMRI
online         22:24:13 svc:/site/application/sysadmin/osc-interdom:default

Domain Registry and Command Line Interface (CLI)

Configuring interdom results in a Domain Registry. The purpose of the registry is to provide an accurate and up-to-date database of all SuperCluster domains and their characteristics. oidcli, located in the /opt/oracle.supercluster/bin directory, is a simple command line interface for the domain registry. The oidcli utility can be used to query the interdom domain registry for data associated with different components in the SuperCluster. Each component maps to a domain in the SuperCluster, and each component is uniquely identified by a UUID. The SuperCluster Domain Registry is stored on the Master Control Domain (MCD). The "master" is usually the first control domain in the SuperCluster. Since the domain registry is on the master control domain, oidcli is expected to be run on the MCD to query the data. When running from other domains, the -a option must be specified along with the management IP address of the master control domain. Keep in mind that the data returned by oidcli is meant for other SuperCluster tools that have the ability to interpret the data correctly and coherently; humans looking at the same data may need some extra effort to digest and understand it.

eg.,

# cd /opt/oracle.supercluster/bin
# ./oidcli -h
Usage: oidcli [options] dir |
       [options] [...] invalidate|get_data|get_value
       , other component ID or 'all'
       (e.g. 'hostname', 'control_uuid') or 'all'
       NOTE: get_value must request single

Options:
  -h, --help  show this help message and exit
  -p          Output in PrettyPrinter format
  -a ADDR     TCP address/hostname (and optional ',') for connection
  -d          Enable debugging output
  -w W        Re-try for up to seconds for success. Use 0 for no wait.
              Default: 1801.0.

List all components (domains)

# ./oidcli -p dir
[ ['db8c979d-4149-452f-8737-c857e0dc9eb0', True],
  ['4651ac93-924e-4990-8cf9-83be556eb667', True],
  ..
  ['945696fb-97f1-48e3-aa20-8c8baf198ea8', True],
  ['4026d670-61db-425e-834a-dfc45ff9a533', True]]

List the hostname of all domains

# ./oidcli -p get_data all hostname
db8c979d-4149-452f-8737-c857e0dc9eb0: { 'hostname': { 'mtime': 1538089861,
    'name': 'hostname',
    'value': 'alpha'}}
3cfc9039-2157-4b62-ac69-ea3d85f2a19f: { 'hostname': { 'mtime': 1538174309,
    'name': 'hostname',
    'value': 'beta'}}
...

List all available properties for all domains

# ./oidcli -p get_data all all
db8c979d-4149-452f-8737-c857e0dc9eb0: { 'banner_name': { 'mtime': 1538195164,
    'name': 'banner_name',
    'value': 'SPARC M7-4'},
  'comptype': { 'mtime': 1538195164,
    'name': 'comptype',
    'value': 'LDom'},
  'control_uuid': { 'mtime': 1538195164,
    'name': 'control_uuid',
    'value': 'Unknown'},
  'guests': { 'mtime': 1538195164,
    'name': 'guests',
    'value': None},
  'host_domain_chassis': { 'mtime': 1538195164,
    'name': 'host_domain_chassis',
    'value': 'AK00251676'},
  'host_domain_name': { 'mtime': 1538284541,
    'name': 'host_domain_name',
    'value': 'ssccn1-io-alpha'},
...

Query a specific property from a specific domain

# ./oidcli -p get_data 4651ac93-924e-4990-8cf9-83be556eb667 mgmt_ipaddr
mgmt_ipaddr: { 'mtime': 1538143865,
  'name': 'mgmt_ipaddr',
  'value': ['xx.xxx.xxx.xxx', 20, 'scm_ipmp0']}

The domain registry is persistent and updated hourly. When accurate and up-to-date data is needed, it is recommended to query the registry with the --no-cache option.
eg.,

# ./oidcli -p get_data --no-cache 4651ac93-924e-4990-8cf9-83be556eb667 load_average
load_average: { 'mtime': 1538285043,
  'name': 'load_average',
  'value': [0.01171875, 0.0078125, 0.0078125]}

The mtime attribute in all the examples above represents a UNIX timestamp.

Debug Mode

By default, the osc-interdom service runs in non-debug mode. Running the service in debug mode logs more detail to the osc-interdom service log. In general, if the osc-interdom service is transitioning to the maintenance state, switching to debug mode may provide a few additional clues.

To check whether debug mode is enabled, run:

svcprop -c -p config osc-interdom | grep debug

To enable debug mode, run:

svccfg -s sysadmin/osc-interdom:default setprop config/oidd_debug '=' true
svcadm restart osc-interdom

Finally, check the service log for debug messages. The output of the svcs -L osc-interdom command points to the location of the osc-interdom service log.

Documentation

Similar to the SuperCluster resource allocation engine, osc-resalloc, the interdom framework is mostly meant for automated tools with little or no human interaction. Consequently, there are no references to osc-interdom in the SuperCluster Documentation Set.

Related:

Oracle SuperCluster: A Brief Introduction to osc-resalloc
Oracle SuperCluster: A Brief Introduction to osc-setcoremem
Non-Interactive osc-setcoremem
Oracle Solaris Tools for Locality Observability

Acknowledgments/Credit: Tim Cook
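The pretty-printed records shown above are close to Python literal syntax once the leading "property:" label is split off, so a consuming script can often parse them with the standard library. A hedged Python sketch, using a hard-coded sample that mirrors the load_average example (in practice you would capture the command output yourself):

```python
# Hedged sketch: parse an oidcli -p (PrettyPrinter) record like the ones
# shown above. After stripping the leading "label:" the remainder is a
# Python dict literal, which ast.literal_eval can load safely.
import ast

SAMPLE = """load_average: { 'mtime': 1538285043,
  'name': 'load_average',
  'value': [0.01171875, 0.0078125, 0.0078125]}"""

def parse_record(text):
    """Return {label: record-dict} for one pretty-printed record."""
    label, _, body = text.partition(":")
    return {label.strip(): ast.literal_eval(body.strip())}

rec = parse_record(SAMPLE)
print(rec["load_average"]["value"][0])  # 0.01171875
```

ast.literal_eval only accepts literals (no function calls or names), which makes it a safer choice than eval for data of this shape.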


Announcements

Oracle Solaris at Oracle Open World 2018

Oracle Open World is coming to San Francisco October 22-25! With it being less than a month away, I wanted to preview Oracle Solaris this year. With the release of Oracle Solaris 11.4 on August 28, 2018, we have much to talk about. For information about the future of Oracle Solaris and SPARC, Bill Nesheim, Senior Vice President of Oracle Solaris Development, is joined by Masood Heydari, Senior Vice President of Hardware Development, and Brian Bream, Chief Technology Officer of Vaske Computer, in our Oracle Solaris and SPARC Roadmap session. Brian will be discussing his unique solution for delivering Oracle SPARC solutions adjacent to Oracle Cloud, giving you the best of both worlds: cloud infrastructure for your scale-out needs, and Oracle Solaris and SPARC hosted for your mission-critical needs. You can get hands-on with the new cloud-scale capabilities in Oracle Solaris 11.4 in two Hands-on-Labs. "Get the Auditors Off Your Back: One Stop Cloud Compliance Validation" will show you how to set up your Oracle Solaris systems to meet compliance, lock them down so they can't be tampered with, centrally manage their compliance from a single system, and even see compliance benchmark trends over time, meeting your auditors' requirements. In our second Hands-on-Lab, "The CAT Scan for Oracle Solaris: StatsStore and Web Dashboard," you will utilize the new Oracle Solaris System Web Interface and StatsStore to gain unique insight into not just the current state of a system but also its historic state. You'll see how the Oracle Solaris 11.4 System Web Interface automatically correlates audit and system events to allow you to diagnose and root-cause potential system issues quickly and easily. To find out more about Oracle Solaris and SPARC sessions and Hands-on-Labs at Oracle Open World 2018, see our Focus on Oracle Solaris and SPARC document. See you at Oracle Open World 2018!


Announcing Oracle Solaris 11.4 SRU1

Today we're releasing the first SRU for Oracle Solaris 11.4! This is the next installment in our ongoing support train for Oracle Solaris 11, and there will be no further Oracle Solaris 11.3 SRUs delivered to the support repository. Due to the timing of our releases, and some fixes being in Oracle Solaris 11.3 SRU35 but not in 11.4, not all customers on Oracle Solaris 11.3 SRU35 were able to update to Oracle Solaris 11.4 when it was released. SRU1 includes all of these fixes, and customers can now update to Oracle Solaris 11.4 SRU1 via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces Memory Reservation Pools for Kernel Zones. On some more heavily loaded systems that experience memory contention or sufficiently fragmented memory, it can be difficult for an administrator to guarantee that Kernel Zones can allocate all the memory they need to boot or even reboot. By allowing memory to be reserved ahead of time, early during boot when it is more likely that enough memory of the desired pagesize is available, the impact of overall system memory usage on a Kernel Zone's ability to boot or reboot is minimized. Further details on using this feature are in the SRU readme file.

Other fixes of note in this SRU include:

- vim has been updated to 8.1.0209
- mailman has been updated to 2.1.29
- Updated Intel tool in HMP
- Samba has been updated to 4.8.4
- Kerberos 5 has been updated to 1.16.1
- webkitgtk+ has been updated to 2.18.6
- curl has been updated to 7.61.0
- sqlite has been updated to 3.24.0
- Apache Tomcat has been updated to 8.5.32
- Thunderbird has been updated to 52.9.1
- Apache Web Server has been updated to 2.4.34
- Wireshark has been updated to 2.6.2
- MySQL has been updated to 5.7.23
- OpenSSL has been updated to 1.0.2p

Full details of this SRU can be found in My Oracle Support Doc 2449090.1.
For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).
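As a quick illustration of how SRU versions show up on a system, the SRU number is encoded in the branch portion of the 'entire' incorporation's package FMRI. The FMRI string below is a hypothetical example (the exact branch version on a real system may differ); on a live system you would obtain it from 'pkg list -H -v entire':

```shell
# Hypothetical FMRI for the 'entire' incorporation on an 11.4 SRU1 system
fmri="pkg://solaris/entire@11.4-11.4.1.0.1.3.0"

# The SRU number is the third field of the branch version: 11.4.<SRU>...
sru=$(echo "$fmri" | sed 's/.*@11\.4-11\.4\.\([0-9]*\)\..*/\1/')
echo "SRU: $sru"
```

The same sed pattern works for any 11.4 SRU, since the branch version convention stays fixed across SRUs.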


Oracle Solaris 11

How to Build an Input Method Engine for Oracle Solaris 11.4

Contributed by: Pavel Heimlich and Ales Cernosek

Oracle Solaris 11.4 delivers a modern and extensible enterprise desktop environment. The desktop environment supports and delivers an IBus (Intelligent Input Bus) runtime environment and libraries that enable customers to easily customize and build open source IBus input method engines to suit their preferences. This article explores the step-by-step process of building one of the available IBus input methods, KKC, a Japanese input method engine. We'll start with the Oracle Solaris Userland build infrastructure and use it to build the KKC dependencies and KKC itself, and then publish the input method IPS packages to a local publisher. A similar approach can be used for building and delivering input method engines for other languages.

Please note that the components built may no longer be current; newer versions may have been released since these steps were written. You should check for and use the latest versions available.

We'll start by installing all the build system dependencies:

# sudo pkg exact-install developer/opensolaris/userland system/input-method/ibus group/system/solaris-desktop
# sudo reboot

To make this task as easy as possible, we'll reuse the Userland build infrastructure. This also has the advantage of using the same compiler, build flags, and other defaults as the rest of the GNOME desktop. First, clone the Userland workspace:

# git clone https://github.com/oracle/solaris-userland gate

Use your Oracle Solaris 11.4 IPS publisher; set this variable to empty:

# export CANONICAL_REPO=http://pkg.oracle.com/...
Do not use the internal Userland archive mirror:

# export INTERNAL_ARCHIVE_MIRROR=''

Set a few other variables and prepare the build environment:

# export COMPILER=gcc
# export PUBLISHER=example
# export OS_VERSION=11.4
# cd gate
# git checkout 8b36ec131eb42a65b0f42fc0d0d71b49cfb3adf3
# gmake setup

We are going to need the intermediary packages installed on the build system, so let's add the build publisher:

# pkg set-publisher -g `uname -p`/repo example

KKC has a couple of dependencies. The first of them is marisa-trie. Create a folder for it:

# mkdir components/marisa
# cd components/marisa

And create a Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= marisa
COMPONENT_VERSION= 0.2.4
COMPONENT_PROJECT_URL= https://github.com/s-yata/marisa-trie
COMPONENT_ARCHIVE_HASH= \
    sha256:67a7a4f70d3cc7b0a85eb08f10bc3eaf6763419f0c031f278c1f919121729fb3
COMPONENT_ARCHIVE_URL= https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/marisa-trie/marisa-0.2.4.tar.gz
TPNO= 26209

REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime
REQUIRED_PACKAGES += system/library/gcc/gcc-c++-runtime
REQUIRED_PACKAGES += system/library/math

TEST_TARGET= $(NO_TESTS)

COMPONENT_POST_INSTALL_ACTION += \
    cd $(SOURCE_DIR)/bindings/python; \
    CC=$(CC) CXX=$(CXX) CFLAGS="$(CFLAGS) -I$(SOURCE_DIR)/lib" LDFLAGS=-L$(PROTO_DIR)$(USRLIB) $(PYTHON) setup.py install --install-lib $(PROTO_DIR)/$(PYTHON_LIB);

# to avoid libtool breaking build of libkkc
COMPONENT_POST_INSTALL_ACTION += rm -f $(PROTO_DIR)$(USRLIB)/libmarisa.la;

include $(WS_MAKE_RULES)/common.mk

Most of the build process is defined in shared macros from the Userland workspace, so building marisa is now as easy as running the following command:

# gmake install

Copy the marisa copyright file so that the package follows Oracle standards:

# cat marisa-0.2.4/COPYING > marisa.copyright
Create the marisa.p5m package manifest as follows:

#
# Copyright (c) 2015, 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability uncommitted>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/marisa@$(BUILD_VERSION)
set name=pkg.summary value="Marisa library"
set name=pkg.description \
    value="Marisa - Matching Algorithm with Recursively Implemented StorAge library"
set name=com.oracle.info.description \
    value="Marisa - Matching Algorithm with Recursively Implemented StorAge library"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/marisa-trie/marisa-0.2.4.tar.gz
set name=info.upstream-url value=https://github.com/s-yata/marisa-trie
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/bin/marisa-benchmark
file path=usr/bin/marisa-build
file path=usr/bin/marisa-common-prefix-search
file path=usr/bin/marisa-dump
file path=usr/bin/marisa-lookup
file path=usr/bin/marisa-predictive-search
file path=usr/bin/marisa-reverse-lookup
file path=usr/include/marisa.h
file path=usr/include/marisa/agent.h
file path=usr/include/marisa/base.h
file path=usr/include/marisa/exception.h
file path=usr/include/marisa/iostream.h
file path=usr/include/marisa/key.h
file path=usr/include/marisa/keyset.h
file path=usr/include/marisa/query.h
file path=usr/include/marisa/scoped-array.h
file path=usr/include/marisa/scoped-ptr.h
file path=usr/include/marisa/stdio.h
file path=usr/include/marisa/trie.h
link path=usr/lib/$(MACH64)/libmarisa.so target=libmarisa.so.0.0.0
link path=usr/lib/$(MACH64)/libmarisa.so.0 target=libmarisa.so.0.0.0
file path=usr/lib/$(MACH64)/libmarisa.so.0.0.0
file path=usr/lib/$(MACH64)/pkgconfig/marisa.pc
link path=usr/lib/libmarisa.so target=libmarisa.so.0.0.0
link path=usr/lib/libmarisa.so.0 target=libmarisa.so.0.0.0
file path=usr/lib/libmarisa.so.0.0.0
file path=usr/lib/pkgconfig/marisa.pc
file path=usr/lib/python2.7/vendor-packages/64/_marisa.so
file path=usr/lib/python2.7/vendor-packages/_marisa.so
file path=usr/lib/python2.7/vendor-packages/marisa-0.0.0-py2.7.egg-info
file path=usr/lib/python2.7/vendor-packages/marisa.py
license marisa.copyright license="marisa, LGPLv2.1"

Generate the IPS package:

# gmake publish

It's necessary to install the package, as it's a prerequisite for building other packages:

# sudo pkg install marisa
# cd ..

The next step is to build the KKC library:

# mkdir libkkc
# cd libkkc

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= libkkc
COMPONENT_VERSION= 0.3.5
COMPONENT_PROJECT_URL= https://github.com/ueno/libkkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.gz
COMPONENT_ARCHIVE_HASH= \
    sha256:89b07b042dae5726d306aaa1296d1695cb75c4516f4b4879bc3781fe52f62aef
COMPONENT_ARCHIVE_URL= $(COMPONENT_PROJECT_URL)/releases/download/v$(COMPONENT_VERSION)/$(COMPONENT_ARCHIVE)
TPNO= 26171

TEST_TARGET= $(NO_TESTS)

REQUIRED_PACKAGES += example/system/input-method/library/marisa
REQUIRED_PACKAGES += library/glib2
REQUIRED_PACKAGES += library/json-glib
REQUIRED_PACKAGES += library/desktop/libgee
REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime

export LD_LIBRARY_PATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)
export PYTHONPATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(PYTHON_LIB)
CPPFLAGS += -I$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/usr/include
LDFLAGS += -L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)

# for gsed - metadata
PATH=$(GNUBIN):$(USRBINDIR)

include $(WS_MAKE_RULES)/common.mk

# some of this is likely unnecessary
CONFIGURE_OPTIONS += --enable-introspection=no
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../marisa/build/$(MACH$(BITS))

# to avoid libtool breaking build of ibus-kkc
COMPONENT_POST_INSTALL_ACTION = rm -f $(PROTO_DIR)$(USRLIB)/libkkc.la

# to rebuild configure for libtool fix and fix building json files
COMPONENT_PREP_ACTION = \
    (cd $(@D) ; $(AUTORECONF) -m --force -v; gsed -i 's@test -f ./$$<@test -f $$<@' data/rules/rule.mk)

And build the component:

# gmake install

Prepare the copyright file:

# cat libkkc-0.3.5/COPYING > libkkc.copyright

Create the libkkc.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/libkkc@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="libkkc - Kana Kanji input library"
set name=pkg.description \
    value="libkkc - Japanese Kana Kanji conversion input method library"
set name=com.oracle.info.description value="libkkc - Kana Kanji input library"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://github.com/ueno/libkkc/releases/download/v0.3.5/libkkc-0.3.5.tar.gz
set name=info.upstream-url value=https://github.com/ueno/libkkc
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/bin/kkc
file path=usr/bin/kkc-package-data
dir path=usr/include/libkkc
file path=usr/include/libkkc/libkkc.h
dir path=usr/lib/$(MACH64)/libkkc
link path=usr/lib/$(MACH64)/libkkc.so target=libkkc.so.2.0.0
link path=usr/lib/$(MACH64)/libkkc.so.2 target=libkkc.so.2.0.0
file path=usr/lib/$(MACH64)/libkkc.so.2.0.0
file path=usr/lib/$(MACH64)/pkgconfig/kkc-1.0.pc
dir path=usr/lib/libkkc
link path=usr/lib/libkkc.so target=libkkc.so.2.0.0
link path=usr/lib/libkkc.so.2 target=libkkc.so.2.0.0
file path=usr/lib/libkkc.so.2.0.0
file path=usr/lib/pkgconfig/kkc-1.0.pc
dir path=usr/share/libkkc
dir path=usr/share/libkkc/rules
dir path=usr/share/libkkc/rules/act
dir path=usr/share/libkkc/rules/act/keymap
file path=usr/share/libkkc/rules/act/keymap/default.json
file path=usr/share/libkkc/rules/act/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/act/keymap/hiragana.json
file path=usr/share/libkkc/rules/act/keymap/katakana.json
file path=usr/share/libkkc/rules/act/keymap/latin.json
file path=usr/share/libkkc/rules/act/keymap/wide-latin.json
file path=usr/share/libkkc/rules/act/metadata.json
dir path=usr/share/libkkc/rules/act/rom-kana
file path=usr/share/libkkc/rules/act/rom-kana/default.json
dir path=usr/share/libkkc/rules/azik
dir path=usr/share/libkkc/rules/azik-jp106
dir path=usr/share/libkkc/rules/azik-jp106/keymap
file path=usr/share/libkkc/rules/azik-jp106/keymap/default.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/hiragana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/katakana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/latin.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/wide-latin.json
file path=usr/share/libkkc/rules/azik-jp106/metadata.json
dir path=usr/share/libkkc/rules/azik-jp106/rom-kana
file path=usr/share/libkkc/rules/azik-jp106/rom-kana/default.json
dir path=usr/share/libkkc/rules/azik/keymap
file path=usr/share/libkkc/rules/azik/keymap/default.json
file path=usr/share/libkkc/rules/azik/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/azik/keymap/hiragana.json
file path=usr/share/libkkc/rules/azik/keymap/katakana.json
file path=usr/share/libkkc/rules/azik/keymap/latin.json
file path=usr/share/libkkc/rules/azik/keymap/wide-latin.json
file path=usr/share/libkkc/rules/azik/metadata.json
dir path=usr/share/libkkc/rules/azik/rom-kana
file path=usr/share/libkkc/rules/azik/rom-kana/default.json
dir path=usr/share/libkkc/rules/default
dir path=usr/share/libkkc/rules/default/keymap
file path=usr/share/libkkc/rules/default/keymap/default.json
file path=usr/share/libkkc/rules/default/keymap/direct.json
file path=usr/share/libkkc/rules/default/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/default/keymap/hiragana.json
file path=usr/share/libkkc/rules/default/keymap/katakana.json
file path=usr/share/libkkc/rules/default/keymap/latin.json
file path=usr/share/libkkc/rules/default/keymap/wide-latin.json
file path=usr/share/libkkc/rules/default/metadata.json
dir path=usr/share/libkkc/rules/default/rom-kana
file path=usr/share/libkkc/rules/default/rom-kana/default.json
dir path=usr/share/libkkc/rules/kana
dir path=usr/share/libkkc/rules/kana/keymap
file path=usr/share/libkkc/rules/kana/keymap/default.json
file path=usr/share/libkkc/rules/kana/keymap/direct.json
file path=usr/share/libkkc/rules/kana/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/kana/keymap/hiragana.json
file path=usr/share/libkkc/rules/kana/keymap/katakana.json
file path=usr/share/libkkc/rules/kana/keymap/latin.json
file path=usr/share/libkkc/rules/kana/keymap/wide-latin.json
file path=usr/share/libkkc/rules/kana/metadata.json
dir path=usr/share/libkkc/rules/kana/rom-kana
file path=usr/share/libkkc/rules/kana/rom-kana/default.json
dir path=usr/share/libkkc/rules/kzik
dir path=usr/share/libkkc/rules/kzik/keymap
file path=usr/share/libkkc/rules/kzik/keymap/default.json
file path=usr/share/libkkc/rules/kzik/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/kzik/keymap/hiragana.json
file path=usr/share/libkkc/rules/kzik/keymap/katakana.json
file path=usr/share/libkkc/rules/kzik/keymap/latin.json
file path=usr/share/libkkc/rules/kzik/keymap/wide-latin.json
file path=usr/share/libkkc/rules/kzik/metadata.json
dir path=usr/share/libkkc/rules/kzik/rom-kana
file path=usr/share/libkkc/rules/kzik/rom-kana/default.json
dir path=usr/share/libkkc/rules/nicola
dir path=usr/share/libkkc/rules/nicola/keymap
file path=usr/share/libkkc/rules/nicola/keymap/default.json
file path=usr/share/libkkc/rules/nicola/keymap/direct.json
file path=usr/share/libkkc/rules/nicola/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/nicola/keymap/hiragana.json
file path=usr/share/libkkc/rules/nicola/keymap/katakana.json
file path=usr/share/libkkc/rules/nicola/keymap/latin.json
file path=usr/share/libkkc/rules/nicola/keymap/wide-latin.json
file path=usr/share/libkkc/rules/nicola/metadata.json
dir path=usr/share/libkkc/rules/nicola/rom-kana
file path=usr/share/libkkc/rules/nicola/rom-kana/default.json
dir path=usr/share/libkkc/rules/tcode
dir path=usr/share/libkkc/rules/tcode/keymap
file path=usr/share/libkkc/rules/tcode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tcode/keymap/hiragana.json
file path=usr/share/libkkc/rules/tcode/keymap/katakana.json
file path=usr/share/libkkc/rules/tcode/keymap/latin.json
file path=usr/share/libkkc/rules/tcode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tcode/metadata.json
dir path=usr/share/libkkc/rules/tcode/rom-kana
file path=usr/share/libkkc/rules/tcode/rom-kana/default.json
dir path=usr/share/libkkc/rules/trycode
dir path=usr/share/libkkc/rules/trycode/keymap
file path=usr/share/libkkc/rules/trycode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/trycode/keymap/hiragana.json
file path=usr/share/libkkc/rules/trycode/keymap/katakana.json
file path=usr/share/libkkc/rules/trycode/keymap/latin.json
file path=usr/share/libkkc/rules/trycode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/trycode/metadata.json
dir path=usr/share/libkkc/rules/trycode/rom-kana
file path=usr/share/libkkc/rules/trycode/rom-kana/default.json
dir path=usr/share/libkkc/rules/tutcode
dir path=usr/share/libkkc/rules/tutcode-touch16x
dir path=usr/share/libkkc/rules/tutcode-touch16x/keymap
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/hiragana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/katakana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/latin.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tutcode-touch16x/metadata.json
dir path=usr/share/libkkc/rules/tutcode-touch16x/rom-kana
file path=usr/share/libkkc/rules/tutcode-touch16x/rom-kana/default.json
dir path=usr/share/libkkc/rules/tutcode/keymap
file path=usr/share/libkkc/rules/tutcode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tutcode/keymap/hiragana.json
file path=usr/share/libkkc/rules/tutcode/keymap/katakana.json
file path=usr/share/libkkc/rules/tutcode/keymap/latin.json
file path=usr/share/libkkc/rules/tutcode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tutcode/metadata.json
dir path=usr/share/libkkc/rules/tutcode/rom-kana
file path=usr/share/libkkc/rules/tutcode/rom-kana/default.json
dir path=usr/share/libkkc/templates
dir path=usr/share/libkkc/templates/libkkc-data
file path=usr/share/libkkc/templates/libkkc-data/Makefile.am
file path=usr/share/libkkc/templates/libkkc-data/configure.ac.in
dir path=usr/share/libkkc/templates/libkkc-data/data
file path=usr/share/libkkc/templates/libkkc-data/data/Makefile.am
dir path=usr/share/libkkc/templates/libkkc-data/data/models
file path=usr/share/libkkc/templates/libkkc-data/data/models/Makefile.sorted2
file path=usr/share/libkkc/templates/libkkc-data/data/models/Makefile.sorted3
dir path=usr/share/libkkc/templates/libkkc-data/data/models/sorted2
file path=usr/share/libkkc/templates/libkkc-data/data/models/sorted2/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/sorted3
file path=usr/share/libkkc/templates/libkkc-data/data/models/sorted3/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/text2
file path=usr/share/libkkc/templates/libkkc-data/data/models/text2/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/text3
file path=usr/share/libkkc/templates/libkkc-data/data/models/text3/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/tools
file path=usr/share/libkkc/templates/libkkc-data/tools/Makefile.am
file path=usr/share/libkkc/templates/libkkc-data/tools/genfilter.py
file path=usr/share/libkkc/templates/libkkc-data/tools/sortlm.py
file path=usr/share/locale/ja/LC_MESSAGES/libkkc.mo
file path=usr/share/vala/vapi/kkc-1.0.deps
file path=usr/share/vala/vapi/kkc-1.0.vapi
license libkkc.copyright license="libkkc, GPLmix" \
    com.oracle.info.description="libkkc - Kana Kanji input library" \
    com.oracle.info.name=pci.ids com.oracle.info.tpno=26171 \
    com.oracle.info.version=0.3.5

Use the following command to create the IPS package:

# gmake publish
# cd ..

Then install the package so that it's available for dependency resolution:

# sudo pkg install libkkc

Next, build libkkc-data:

# mkdir libkkc-data
# cd libkkc-data

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= libkkc-data
COMPONENT_VERSION= 0.2.7
COMPONENT_PROJECT_URL= https://github.com/ueno/libkkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.xz
COMPONENT_ARCHIVE_HASH= \
    sha256:9e678755a030043da68e37a4049aa296c296869ff1fb9e6c70026b2541595b99
COMPONENT_ARCHIVE_URL= https://github.com/ueno/libkkc/releases/download/v0.3.5/$(COMPONENT_ARCHIVE)
TPNO= 26171

TEST_TARGET= $(NO_TESTS)

export LD_LIBRARY_PATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/$(USRLIB)
export PYTHONPATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(PYTHON_LIB)

include $(WS_MAKE_RULES)/common.mk

CONFIGURE_ENV += MARISA_CFLAGS="-I$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/usr/include"
CONFIGURE_ENV += MARISA_LIBS="-L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB) -lmarisa"

Build libkkc-data:

# gmake install

Prepare the copyright file:

# cat libkkc-data-0.2.7/COPYING > libkkc-data.copyright

Create the libkkc-data.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/libkkc-data@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="libkkc-data - Kana Kanji input library data"
set name=pkg.description \
    value="libkkc-data - data for Japanese Kana Kanji conversion input method library"
set name=com.oracle.info.description \
    value="libkkc-data - Kana Kanji input library data"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://bitbucket.org/libkkc/libkkc-data/downloads/libkkc-data-0.2.7.tar.xz
set name=info.upstream-url value=https://bitbucket.org/libkkc/libkkc-data
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
dir path=usr/lib/$(MACH64)/libkkc/models
dir path=usr/lib/$(MACH64)/libkkc/models/sorted3
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.1gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.1gram.index
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.2gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.2gram.filter
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.3gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.3gram.filter
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.input
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/metadata.json
dir path=usr/lib/libkkc/models
dir path=usr/lib/libkkc/models/sorted3
file path=usr/lib/libkkc/models/sorted3/data.1gram
file path=usr/lib/libkkc/models/sorted3/data.1gram.index
file path=usr/lib/libkkc/models/sorted3/data.2gram
file path=usr/lib/libkkc/models/sorted3/data.2gram.filter
file path=usr/lib/libkkc/models/sorted3/data.3gram
file path=usr/lib/libkkc/models/sorted3/data.3gram.filter
file path=usr/lib/libkkc/models/sorted3/data.input
file path=usr/lib/libkkc/models/sorted3/metadata.json
license libkkc-data.copyright license="libkkc-data, GPLv3" \
    com.oracle.info.description="libkkc - Kana Kanji input library language data" \
    com.oracle.info.name=usb.ids com.oracle.info.tpno=26171 \
    com.oracle.info.version=0.2.7

Use the following command to create the IPS package:

# gmake publish
# cd ..

Then install the package so that it's available for dependency resolution:

# sudo pkg install libkkc-data

Finally, create the KKC package:

# mkdir ibus-kkc
# cd ibus-kkc

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= ibus-kkc
COMPONENT_VERSION= 1.5.22
COMPONENT_PROJECT_URL= https://github.com/ueno/ibus-kkc
IBUS-KKC_PROJECT_URL= https://github.com/ueno/ibus-kkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.gz
COMPONENT_ARCHIVE_HASH= \
    sha256:22fe2552f08a34a751cef7d1ea3c088e8dc0f0af26fd7bba9cdd27ff132347ce
COMPONENT_ARCHIVE_URL= $(COMPONENT_PROJECT_URL)/releases/download/v$(COMPONENT_VERSION)/$(COMPONENT_ARCHIVE)
TPNO= 31503

TEST_TARGET= $(NO_TESTS)

REQUIRED_PACKAGES += system/input-method/ibus
REQUIRED_PACKAGES += example/system/input-method/library/libkkc
REQUIRED_PACKAGES += example/system/input-method/library/libkkc-data
REQUIRED_PACKAGES += library/desktop/gtk3
REQUIRED_PACKAGES += library/desktop/libgee
REQUIRED_PACKAGES += library/json-glib
REQUIRED_PACKAGES += library/glib2
# for marisa
REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime
REQUIRED_PACKAGES += system/library/gcc/gcc-c++-runtime
REQUIRED_PACKAGES += system/library/math

CPPFLAGS += -I$(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)/usr/include
LDFLAGS += "-L$(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)$(USRLIB)"
LDFLAGS += "-L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)"

include $(WS_MAKE_RULES)/common.mk

CONFIGURE_ENV += PATH=$(GNUBIN):$(USRBINDIR)
CONFIGURE_OPTIONS += --libexecdir=$(USRLIBDIR)/ibus
#CONFIGURE_OPTIONS += --enable-static=no
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../libkkc/build/$(MACH$(BITS))/libkkc
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../marisa/build/$(MACH$(BITS))

# to rebuild configure for libtool fix
COMPONENT_PREP_ACTION = \
    (cd $(@D) ; $(AUTORECONF) -m --force -v)

PKG_PROTO_DIRS += $(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)
PKG_PROTO_DIRS += $(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)
PKG_PROTO_DIRS += $(COMPONENT_DIR)/../libkkc-data/build/prototype/$(MACH)

And build the component:

# gmake install

Prepare the copyright file:

# cat ibus-kkc-1.5.22/COPYING > ibus-kkc.copyright

Create the ibus-kkc.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/ibus/kkc@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="IBus Japanese IME - kkc"
set name=pkg.description value="Japanese Kana Kanji input engine for IBus"
set name=com.oracle.info.description value="ibus kkc - Kana kanji input engine"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url value=$(COMPONENT_ARCHIVE_URL)
set name=info.upstream-url value=$(COMPONENT_PROJECT_URL)
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/lib/ibus/ibus-engine-kkc mode=0555
file path=usr/lib/ibus/ibus-setup-kkc mode=0555
file path=usr/share/applications/ibus-setup-kkc.desktop
dir path=usr/share/ibus-kkc
dir path=usr/share/ibus-kkc/icons
file path=usr/share/ibus-kkc/icons/ibus-kkc.svg
file path=usr/share/ibus/component/kkc.xml
file path=usr/share/locale/ja/LC_MESSAGES/ibus-kkc.mo
license ibus-kkc.copyright license="ibus-kkc, GPLmix"
depend type=require fmri=example/system/input-method/library/libkkc-data

Perform the final build:

# gmake publish

Use the following command to install the package from the build's IPS repository:

# sudo pkg install ibus/kkc

Other input method engines can be built in a similar way: for example, ibus-hangul for Korean, ibus-chewing or ibus-table for Chinese, and others.
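Stepping back, every component in this article followed the same Userland pattern: create a component directory, add a Makefile and a .p5m manifest, then run gmake install, gmake publish, and pkg install before moving on to the next component. A minimal sketch of that loop is shown below; the component names come from this article, while the echo stands in for the actual gmake/pkg invocations, which need the full build environment described above:

```shell
# Build order matters: each component is published and installed
# before the next one that depends on it is built.
for comp in marisa libkkc libkkc-data ibus-kkc; do
  # In the real workflow, inside components/$comp you would run:
  #   gmake install && gmake publish && sudo pkg install <package-fmri>
  echo "building and publishing: $comp"
done
```

Keeping this strict ordering is what lets REQUIRED_PACKAGES dependencies such as example/system/input-method/library/marisa resolve when the later components are built.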


Perspectives

How to Use OCI Storage Resources with Solaris

After getting started by importing an Oracle Solaris image into your Oracle Cloud Infrastructure (OCI) tenant and launching an instance, you'll likely want to use other OCI resources to run your applications. In this post, I'll provide a quick cheat sheet for using the storage resources: block volumes and file storage. I'm not going to cover the basics of creating these objects in OCI, which are covered well by the documentation. This post just shows how to do the Solaris-specific things that the OCI console will otherwise tell you for Linux guests.

Block Volumes

Once you've created a block volume and attached it to your Solaris guest in the OCI console, you still need to do a small amount of work on the Solaris guest to let it see the storage you've attached. The OCI console displays the Linux iSCSI commands; you just need to translate them to the Solaris equivalents. Here's an example of the Linux commands:

sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -p 169.254.2.2:3260
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -p 169.254.2.2:3260 -l

While the Solaris command for these tasks is also iscsiadm, it uses a different command line design, so we have to translate. Using the iqn and address:port strings from the first command above, we define a static target and then enable static discovery:

sudo iscsiadm add static-config iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144,169.254.2.2:3260
sudo iscsiadm modify discovery --static enable

At this point, the device will be available and can be seen in the format utility:

opc@instance-20180904-1408:~$ sudo format </dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c2d0 <QEMU HAR-QM0000-0001-48.83GB>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@0,0
       1. c3t0d0 <ORACLE-BlockVolume-1.0-1.00TB>
          /iscsi/disk@0000iqn.2015-12.com.oracleiaas%3Ade785f06-2288-4f2f-8ef7-f0c2c25d61440001,1

And now it's trivial to create a ZFS pool on the storage:

opc@instance-20180904-1408:~$ sudo zpool create tank c3t0d0
opc@instance-20180904-1408:~$ sudo zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        tank      ONLINE       0     0     0
          c3t0d0  ONLINE       0     0     0

errors: No known data errors

Since the iscsiadm command automatically persists the configuration you enter, the storage (and the ZFS pool) will still be visible after the guest is rebooted.

File Storage

The OCI file storage service provides a file store that's accessible over NFS, and it's fully interoperable with clients using NFSv3. Once you've created a file system and associated a mount target, the OCI console will show you the commands to install the NFS client, create a mount point directory, and then mount the file system. The Solaris images we've provided already include the NFS client, so all you need to do is create the mount point directory and mount the file system. The Solaris commands for those tasks are the same as on Linux, so you can directly copy and paste them from the OCI console. To make the mount persistent, you'll need to either add the mount entry to /etc/vfstab or configure the automounter. I won't provide a tutorial on those topics here, but refer you to the Solaris documentation.
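For the /etc/vfstab route, a sketch of what that setup could look like is below. The mount target address, export path, and mount point are placeholders you'd replace with the values shown in the OCI console; everything is shown as prompt-style lines, matching the examples above:

```shell
# Create the mount point and do a one-off NFSv3 mount (placeholder values):
#   sudo mkdir -p /mnt/myfilesystem
#   sudo mount -F nfs -o vers=3 10.0.0.2:/myfilesystem /mnt/myfilesystem
#
# Hypothetical /etc/vfstab entry to make the mount persistent
# (fields: device to mount, device to fsck, mount point, FS type,
#  fsck pass, mount at boot, mount options):
#
#   10.0.0.2:/myfilesystem  -  /mnt/myfilesystem  nfs  -  yes  vers=3
```

This is a configuration sketch, not a tested recipe; consult the Solaris vfstab documentation for the authoritative field descriptions.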


Oracle Solaris 11

Random Solaris & Shell Command Tips — kstat, tput, sed, digest

The examples shown in this blog post were created and executed on a Solaris system. Some of these tips and examples are applicable to all *nix systems.

Digest of a File

One of the typical uses of a computed digest is to check whether a file has been compromised or tampered with. The digest utility can be used to calculate the digest of files. On Solaris, the -l option lists all cryptographic hash algorithms available on the system. e.g.,

% digest -l
sha1
md5
sha224
sha256
..
sha3_224
sha3_256
..

The -a option can be used to specify the hash algorithm while computing the digest. e.g.,

% digest -v -a sha1 /usr/lib/libc.so.1
sha1 (/usr/lib/libc.so.1) = 89a588f447ade9b1be55ba8b0cd2edad25513619

Multiline Shell Script Comments

The shell treats any line that starts with the '#' symbol as a comment and ignores such lines completely. (The line on top that starts with #! is an exception.) The shell has no built-in multiline comment mechanism. While the # symbol is useful for marking single-line comments, it becomes laborious and is not well suited for commenting out a large number of contiguous lines. One possible way to achieve multiline comments in a shell script is to rely on a combination of the shell built-in ':' and a here-document code block. It may not be the most attractive solution, but it gets the work done. The ':' (colon) built-in does nothing beyond expanding its arguments and returning true, so the here-document fed to it is effectively ignored. e.g.,

% cat -n multiblock_comment.sh
     1  #!/bin/bash
     2
     3  echo 'not a commented line'
     4  #echo 'a commented line'
     5  echo 'not a commented line either'
     6  : <<'MULTILINE-COMMENT'
     7  echo 'beginning of a multiline comment'
     8  echo 'second line of a multiline comment'
     9  echo 'last line of a multiline comment'
    10  MULTILINE-COMMENT
    11  echo 'yet another "not a commented line"'

% ./multiblock_comment.sh
not a commented line
not a commented line either
yet another "not a commented line"

tput Utility to Jazz Up Command Line User Experience

The tput command can help make command line terminals look interesting.
tput can be used to change the color of text, apply effects (bold, underline, blink, ..), move cursor around the screen, get information about the status of the terminal and so on. In addition to improving the command line experience, tput can also be used to improve the interactive experience of scripts by showing different colors and/or text effects to users. eg., % tput bold <= bold text % date Thu Aug 30 17:02:57 PDT 2018 % tput smul <= underline text % date Thu Aug 30 17:03:51 PDT 2018 % tput sgr0 <= turn-off all attributes (back to normal) % date Thu Aug 30 17:04:47 PDT 2018 Check the man page of terminfo for a complete list of capabilities to be used with tput.   Processor Marketing Name On systems running Solaris, processor's marketing or brand name can be extracted with the help of kstat utility. cpu_info module provides information related to the processor(s) on the system. eg., On SPARC: % kstat -p cpu_info:1:cpu_info1:brand cpu_info:1:cpu_info1:brand SPARC-M8 On x86/x64: % kstat -p cpu_info:1:cpu_info1:brand cpu_info:1:cpu_info1:brand Intel(r) Xeon(r) CPU L5640 @ 2.27GHz In the above example, cpu_info is the module. 1 is the instance number. cpu_info1 is the name of the section and brand is the statistic in focus. Note that cpu_info module has only one section cpu_info1. Therefore it is fine to skip the section name portion (eg., cpu_info:1::brand). To see the complete list of statistics offered by cpu_info module, simply run kstat cpu_info:1.   Consolidating Multiple sed Commands sed utility allows specifying multiple editing commands on the same command line. (in other words, it is not necessary to pipe multiple sed commands). The editing commands need to be separated with a semicolon (;) eg., The following two commands are equivalent and yield the same output. % prtconf | grep Memory | sed 's/Megabytes/MB/g' | sed 's/ size//g' Memory: 65312 MB % prtconf | grep Memory | sed 's/Megabytes/MB/g;s/ size//g' Memory: 65312 MB
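Since prtconf is Solaris-specific, here is a portable sketch of the same sed consolidation that runs on any *nix system; the sample input string is a made-up stand-in for the prtconf | grep Memory output:

```shell
# Hypothetical stand-in for `prtconf | grep Memory` output.
input="Memory size: 65312 Megabytes"

# Two piped sed invocations ...
piped=$(echo "$input" | sed 's/Megabytes/MB/g' | sed 's/ size//g')

# ... versus a single sed with semicolon-separated editing commands.
combined=$(echo "$input" | sed 's/Megabytes/MB/g;s/ size//g')

echo "$piped"
echo "$combined"
```

Both pipelines print the same line, "Memory: 65312 MB"; the consolidated form simply avoids spawning a second sed process.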


Perspectives

Getting Started with Oracle Solaris 11.4 on Oracle Cloud Infrastructure (OCI)

With today's release of Oracle Solaris 11.4, we're making pre-built images available for use in Oracle Cloud Infrastructure (OCI).  The images aren't part of the official OCI image catalog at this time, but using them is easy; just follow these steps, which are the same as in my previous post on the 11.4 beta images.

Log in to your OCI console and select Compute->Custom Images from the main menu; this will display the Images page. Press the blue Import Image button to display the Import Image dialog. In the dialog, select a compartment into which the image will be imported, and enter a name, such as "Solaris 11.4".  Select Linux for the operating system, since OCI doesn't yet know about Solaris; that will avoid any special handling that OCI has for Windows images.  At this point, choose which image you wish to import:

Bare Metal: Copy this link and paste it into the Object Storage URL field.  Select QCOW2 as the Image Type, and Native Mode as the Launch Mode. Enter any tags you wish to apply, and then press Import Image.

Virtual Machine: Copy this link and paste it into the Object Storage URL field.  Select VMDK as the Image Type, and Emulated Mode as the Launch Mode.  Enter any tags you wish to apply, and then press Import Image.

It'll take a few minutes for OCI to copy the image from object storage into your tenant's image repository.  Once that's complete, you can launch an instance using the image.  First, one tip: if you've imported the Bare Metal image, go to its Image Details page and press the Edit Details button.  In the Edit Image Details dialog that comes up, there's a Compatible Shapes list; you'll find that all of the shapes have a blue checkmark.  Uncheck all of the VM shapes and then Save the image.  The reason is that Solaris is not capable of booting in OCI's native virtual machine shapes at this time, and this will prevent anyone who uses the image from inadvertently launching a VM that won't be accessible.  
We're working on running Solaris under OCI's native VM technology, but since it's not ready yet, we've made the emulated mode image available for now.

When creating an instance, select Custom Image as the boot volume type and select the image you've imported along with a compatible shape.  You'll need to supply an ssh key in order to log in to the instance once it's started; when creating a VM, it's necessary to click the Show Advanced Options link to access the SSH Keys settings.

After you start an instance, log in using ssh opc@<instance ip>.  The image contains a relatively minimal Solaris installation suitable for bootstrapping into a cloud environment - this is the solaris-cloud-guest group package.  You'll likely need to install more software to do anything beyond some simple exploration; to add more Solaris packages, just use the pkg command to install from the Solaris release repository.

Now that you've got an instance running, there's a lot more you can do with it, including saving any modifications you make as a new Custom Image of your own that you can then redeploy directly to a new instance (note, though, that at this point a modified bare metal image will only be deployable to bare metal, and a VM image will only be deployable to a VM).  Leave a comment here, post on the Solaris 11 community forum, or catch me @dave_miner on Twitter if you have topic suggestions or questions.  And of course check out my previous post on automatically configuring Solaris guests in OCI.


Announcements

Oracle Solaris 11.4 Released for General Availability

Oracle Solaris 11.4: The trusted business platform.

I'm pleased to announce the release of Oracle Solaris 11.4. Of the four releases of Oracle Solaris that I've been involved in, this is the best one yet! Oracle Solaris is the trusted business platform that you depend on. Oracle Solaris 11 gives you consistent compatibility, is simple to use, and is designed to always be secure.

Some fun facts about Oracle Solaris 11.4:

There have been 175 development builds to get us to Oracle Solaris 11.4.
We've tested Oracle Solaris 11.4 for more than 30 million machine hours.
Over 50 customers have already put Oracle Solaris 11.4 into production, and it already has more than 3000 applications certified to run on it.
Oracle Solaris 11.4 is the first and, currently, the only operating system that has completed UNIX® V7 certification.

What's new

Consistently Compatible

That last number in the fun facts is interesting because it is a small subset of the applications that will run on Oracle Solaris 11.4.  It doesn't include applications that run on Oracle Solaris 11 but were designed and built for Oracle Solaris 10 (or even 8 and 9, for that matter). One of the reasons why Oracle Solaris is trusted by so many large companies and governments around the world to run their most mission-critical applications is our consistency. One of the key capabilities of Oracle Solaris is the Oracle Solaris Application Compatibility Guarantee. For close to 20 years now, we have guaranteed that Oracle Solaris will run applications built on previous releases of Oracle Solaris, and we continue to keep that promise today. Additionally, we've made it easier than ever to migrate your Oracle Solaris 10 workloads to Oracle Solaris 11. We've enhanced our migration tools and documentation to make moving from Oracle Solaris 10 to Oracle Solaris 11 on modern hardware simple, all in an effort to save you money. 
Simple to Use

Of course, with every release of Oracle Solaris, we work hard to make life simpler for our users. This release is no different. We've included several new features in Oracle Solaris 11.4 that make it easier than ever to manage.

The coolest of those new features is our new Observability Tools System Web Interface. The System Web Interface brings together several key observability technologies, including the new StatsStore data, audit events, and FMA events, into a centralized, customizable, browser-based interface that allows you to see current and past system behavior at a glance. James McPherson did an excellent job of writing all about the Web Interface here. He also wrote about what we collect by default here. Of course, you can also add your own data to be collected and customize the interface as you like.  And if you want to export the data to some other application like a spreadsheet or database, my colleague Joost Pronk wrote a blog on how to get the data into a CSV format file. For more information, you can read our Observability Tools documentation.

The Service Management Framework has been enhanced to allow you to automatically monitor and restart critical applications and services. Thejaswini Kodavur shows you how to use our new SMF goal services.

We've made managing and updating Oracle Solaris Zones and the applications you run inside them simpler than ever. We started by supplying you with the ability to evacuate a system of all of its Zones with just one command. Oh, and you can bring them all back with just one command too. Starting with Oracle Solaris 11.4, you can now build intra-Zone dependencies and have the dependent Zones boot in the correct order, allowing you to automatically boot and restart complex application stacks in the correct order. Jan Pechanec wrote a nice how-to blog for you to get started. Joost Pronk wrote a community article going into the details more deeply. 
In Oracle Solaris 11.4, we give you one of the most requested features to make ZFS management even simpler than it already is: Cindy Swearingen talks about how Oracle Solaris now gives you ZFS device removal. Oracle Solaris is designed from the ground up to be simple to manage, saving you time.

Always Secure

Oracle Solaris is consistently compatible and simple to use, but if it is one thing above all others, it is focused on security, and in Oracle Solaris 11.4 we give you even more security capabilities to make getting and staying secure and compliant easy.

We start with multi-node compliance. In Oracle Solaris 11.4, you can now either push a compliance assessment to all systems with a single command and review the results in a single report, or set up your systems to regularly generate their compliance reports and push them to a central server, where they can also be viewed as a single report.  This makes maintaining compliance across your data center even easier. You can find out more about multi-node compliance here.

But how do you keep your systems compliant once they are made compliant? One of the most straightforward ways is to take advantage of Immutable Zones (this includes the Global Zone).  Immutable Zones prevent even system administrators from writing to the system, yet still allow patches and updates via IPS. This is done via a trusted path. However, this also means that your configuration management tools like Puppet and Chef aren't able to write to the Zone to apply required configuration changes.  In Oracle Solaris 11.4, we added trusted path services. Now, you can create your own services, like Puppet and Chef, that can be placed on the trusted path, allowing them to make the requisite changes while keeping the system/zone immutable and protected. 
Oracle Solaris Zones, especially Immutable Zones, are an incredibly useful tool for building isolation into your environment to protect applications and your data center from cyber attack, or even just administrative error.  However, sometimes a Zone is too much.  You really just want to be able to isolate applications on a system, or within a Zone or VM. For this, we give you Application Sandboxing.  It allows you to isolate an application, or isolate applications from each other.  Sandboxes provide additional separation of applications and reduce the risk of unauthorized data access. You can read more about it in Darren Moffat's blog, here. Oracle Solaris 11 is engineered to help you get and stay secure and compliant, reducing your risk.

Updating

In order to make your transition from Oracle Solaris 11.3 to Oracle Solaris 11.4 as smooth as possible, we've included a new compliance benchmark that will tell you if there are any issues, such as old, unsupported device drivers, unsupported software, or hardware that isn't supported by Oracle Solaris 11.4. To install this new benchmark, update to Oracle Solaris 11.3 SRU 35 and run:

# pkg install update-check

Then, to run the check, you simply run:

# compliance assess -b ehc-update -a 114update
# compliance report -a 114update -o ./114update.html

You can then use Firefox to view the report. My personal system started out as an Oracle Solaris 11.1 system and has been upgraded over the years to an Oracle Solaris 11.3 system. As you can see, I had some failures. These were some old device drivers and old versions of software like gcc-3, the iperf benchmarking tool, etc. The compliance report tells you exactly what needs to happen to resolve the failures.  The device drivers weren't needed any longer, and I uninstalled them per the report's instructions. The report said the software would be removed automatically during upgrade. 
Oracle Solaris Cluster 4.4

Of course, with each update of Oracle Solaris 11, we release a new version of Oracle Solaris Cluster so you can upgrade in lock-step, providing a smooth transition for your HA environments. You can read about what's new in Oracle Solaris Cluster 4.4 in the What's New, and find out more from the Data Sheet, the Oracle Technology Network, and our documentation.

Try it out

You can download Oracle Solaris 11.4 now from the Oracle Technology Network (OTN) for bare metal or VirtualBox, from My Oracle Support (MOS), and from our repository at pkg.oracle.com. Take a moment to check out our new OTN page, and you can engage with other Oracle Solaris users and engineers on the Oracle Solaris Community page.

UNIX® and UNIX® V7 are registered trademarks of The Open Group.


Zones Delegated Restarter and SMF Goals

Managing Zones in Oracle Solaris 11.3

In Oracle Solaris 11.3, Zones are managed by the Zones service svc:/system/zones:default. The service performs the autobooting and shutdown of Zones on system boot and shutdown according to each zone's configuration. The service boots, in parallel, all Zones configured with autoboot=true, and shuts down, also in parallel, all running Zones with their corresponding autoshutdown option: halt, shutdown, or suspend. See the zonecfg(1M) manual page in 11.3 for more information.

The management mechanism existing in 11.3 is sufficient for systems running a small number of Zones. However, the growing size of systems and the increasing number of Zones on them require a more sophisticated mechanism. The Zones service in 11.3 has the following shortcomings:

Very limited integration with SMF; that also means zones may not depend on other services and vice versa.
No threshold tunable for the number of Zones booted in parallel.
No integration with FMA.
No mechanism to prioritize the Zones booting order.
No mechanism for providing information on when a zone is considered up and running.

This blog post describes enhancements brought in by 11.4 that address these shortcomings of the Zones service in 11.3.

Zones Delegated Restarter and Goals in Oracle Solaris 11.4

To solve the shortcomings outlined in the previous section, Oracle Solaris 11.4 brings the Zones Delegated Restarter (ZDR) to manage the Zones infrastructure and autobooting, and SMF Goals. Each zone aside from the Global Zone is modeled as an SMF instance of the service svc:/system/zones/zone:<zonename>, where the name of the instance is the name of the zone. Note that the Zones configuration was not moved into the SMF repository. For an explanation of what an SMF restarter is, see the section "Restarters" in the smf(7) manual page. 
Zone SMF Instances

The ZDR replaces the existing shell script in the Zones service method, /lib/svc/method/svc-zones, with the restarter daemon /usr/lib/zones/svc.zones. See the svc.zones(8) manual page for more information. The restarter runs under the existing Zones service svc:/system/zones:default.

A zone SMF instance is created at zone creation time. The instance is marked as incomplete for zones in the configured state. On zone install, attach, and clone, the zone instance is marked as complete. Conversely, on zone detach or uninstall, the zone instance is marked incomplete. The zone instance is deleted when the zone is deleted via zonecfg(8). An example of listing the Zones instances:

$ svcs svc:/system/zones/zone
STATE     STIME    FMRI
disabled  12:42:55 svc:/system/zones/zone:tzone1
online    16:29:47 svc:/system/zones/zone:s10

$ zoneadm list -vi
ID NAME    STATUS    PATH                  BRAND     IP
0  global  running   /                     solaris   shared
1  s10     running   /system/zones/s10     solaris10 excl
-  tzone1  installed /system/zones/tzone1  solaris   excl

On start-up, the ZDR creates a zone SMF instance for any zone (save for the Global Zone) that does not have one but is supposed to. Likewise, if there is a zone SMF instance that does not have a corresponding zone, the restarter will remove the instance. The ZDR is responsible for setting up the infrastructure necessary for each zone, spawning a zoneadmd(8) daemon for each zone, and restarting the daemon when necessary. There is a running zoneadmd for each zone in a state greater than configured on the system. ZDR-generated messages related to a particular zone are logged to /var/log/zones/<zonename>.messages, which is where zoneadmd logs as well. Failures during the infrastructure setup for a particular zone will place the zone into the maintenance state. A svcadm clear on the zone instance triggers the ZDR to retry. 
SMF Goal Services

SMF goals provide a mechanism by which a notification can be generated if a zone is incapable of autobooting to a fully online state (i.e. unless an admin intervenes). With their integration into the ZDR, they can be used to address one of the shortcomings mentioned above: that we had no mechanism for providing information on when a zone is considered up and running.

A goal service is a service with the general/goal-service=true property setting. Such a service enters the maintenance state if its dependencies can never be satisfied. Goal services in maintenance automatically leave that state once their dependencies are satisfiable. The goal services failure mechanism is entirely encompassed in the SMF dependency graph engine. Any service can be marked as a goal service.

We also introduced a new synthetic milestone modeled as a goal service, svc:/milestone/goals:default. The purpose of this new milestone is to provide a clear, unambiguous, and well-defined point where we consider the system up and running. The dependencies of milestone/goals should be configured to represent the mission-critical services for the system. There is one dependency by default:

root@vzl-143:~# svcs -d milestone/goals
STATE   STIME    FMRI
online  18:35:49 svc:/milestone/multi-user-server:default

While goals are ZDR agnostic, they are a fundamental requirement of the ZDR, which uses the states of the milestone/goals services to determine the state of each non-global zone. To change the dependency, use svcadm goals:

# svcadm goals milestone/multi-user-server:default network/http:apache24
# svcs -d milestone/goals
STATE   STIME    FMRI
online  18:35:49 svc:/milestone/multi-user-server:default
online  19:53:32 svc:/network/http:apache24

To reset (clear) the dependency service set to the default, use svcadm goals -c.

Zone SMF Instance State

The ZDR is notified of the state of the milestone/goals service of each non-global zone that supports it. 
The zone instance state of each non-global zone will match the state of its milestone/goals. Kernel Zones that support the milestone/goals service (i.e. those with 11.4+ installed) use internal auxiliary states to report back to the host. Kernel Zones that do not support milestone/goals are considered online when their state is running and their auxiliary state is hotplug-cpu. Zone SMF instances mapping to solaris10(7) branded Zones have their state driven by the exit code of the zoneadm command.

If a zone's milestone/goals is absent or disabled, the ZDR will treat the zone as not having support for milestone/goals. The ZDR can also be instructed to ignore milestone/goals and move the zone SMF instance to the online state based only on the success of zoneadm boot; if zoneadm boot fails, the zone SMF instance is placed into maintenance. The switch is controlled by the following SMF property under the ZDR service instance, svc:/system/zones:default:

config/track-zone-goals = true | false

For example, this feature might be useful to providers of VMs to different tenants (IaaS) that do not care about what is running on the VMs, but only whether those VMs are accessible to their tenants.

Zone Dependencies

The set of SMF FMRIs that make up the zone dependencies is defined by a new zonecfg resource, smf-dependency. It is comprised of fmri and grouping properties. All SMF dependencies for a zone will be of type service and have restart_on none -- we do not want zones being shut down or restarted because of a faulty flip-flopping dependency. An example of setting dependencies for a zone:

add smf-dependency
set fmri=svc:/application/frobnicate:default
end
add smf-dependency
set fmri=svc:/system/zones/zone:appfirewall
end
add smf-dependency
set fmri=svc:/system/zones/zone:dataload
set grouping=exclude_all
end

The default for grouping is require_all. See the smf(7) manual page for other dependency grouping types. 
Zone Config autoboot Configuration

The zone instance SMF property general/enabled corresponds to the zone configuration property autoboot and is stored in the SMF repository. The existing Zones interfaces:

zonecfg(1M) export
zoneadm(1M) detach -n

stay unmodified and contain autoboot in their output. Also, the RAD interfaces accessing the property do not change from 11.3.

Zones Boot Order

There are two ways to establish the zones boot order. One is determined by the SMF dependencies of a zone SMF instance (see smf-dependency above). The other is an assigned boot priority for a zone. Once the SMF dependencies are satisfied for a zone, the zone is placed in a queue according to its priority. The ZDR then boots zones from the highest to the lowest boot priority in the queue. The new zonecfg property is boot-priority:

set boot-priority={ high | default | low }

Note that boot ordering based on an assigned boot priority is best-effort and thus non-deterministic. It is not guaranteed that all zones with a higher boot priority will be booted before all zones with a lower boot priority. If your configuration requires deterministic behavior, use SMF dependencies.

Zones Concurrent Boot and Suspend/Resume Limits

The ZDR can limit the number of zones concurrently booting up or shutting down, and suspending or resuming. The maximum number of concurrent boot and suspend/resume operations is determined by the following properties on the ZDR service instance:

$ svccfg -s system/zones:default listprop config/concurrent*
config/concurrent-boot-shutdown   count 0
config/concurrent-suspend-resume  count 0

0 or the absence of a value means there is no limit imposed by the restarter. If the value is N, the restarter will attempt to boot at most N zones in parallel. The booting process of a non-global zone is considered complete when the milestone/goals of the zone is reached. 
If the milestone/goals cannot be reached, the zone SMF instance will be placed into maintenance and the booting process for that zone will be deemed complete from the ZDR perspective. Kernel Zones that do not support milestone/goals are considered up when the zone auxiliary state hotplug-cpu is set. KZs with goal support use private auxiliary states to report back to the host. solaris10 branded zones are considered up when the zoneadm boot command returns.

Integration with FMA

This requirement was automatically achieved by fully integrating the Zones framework with SMF.

Example

Let's have a very simplistic example with zones jack, joe, lady, master, and yesman<0-9>. Now, the master zone depends on lady, lady depends on both jack and joe, and we do not care much about when the yesman<0-9> zones boot up.

+------+          +------+               +--------+
| jack |<------+--| lady |<--------------| master |
+------+      /   +------+               +--------+
             /
+------+    /
| joe  |<--+
+------+

+---------+        +---------+
| yesman0 |  ....  | yesman9 |
+---------+        +---------+

Let's not tax the system excessively when booting, so we set the boot concurrency to 2 for this example. Also, let's assume we need a running web server in jack, so we add that one to the goals milestone. Based on the environment we have, we choose to assign the high boot priority to jack and joe, keep lady and master at the default priority, and put all yesman zones at the low boot priority. 
To achieve all of the above, this is what we need to do:

# svccfg -s system/zones:default setprop config/concurrent-boot-shutdown=2
# zlogin jack svcadm goals svc:/milestone/multi-user-server:default apache24
# zonecfg -z jack "set boot-priority=high"
# zonecfg -z joe "set boot-priority=high"
# zonecfg -z lady "add smf-dependency; set fmri=svc:/system/zones/zone:jack; end"
# zonecfg -z lady "add smf-dependency; set fmri=svc:/system/zones/zone:joe; end"
# zonecfg -z master "add smf-dependency; set fmri=svc:/system/zones/zone:lady; end"
# for i in $(seq 0 9); do zonecfg -z yesman$i "set boot-priority=low"; done

During the boot, you may see something like the following. As mentioned above, the boot priority is best-effort; also, given that we have dependencies, some yesman zones will boot up before some higher-priority zones. You will see that at any given moment during the boot, only two zones are booting up in parallel (the '*' denotes a service in a state transition, see svcs(1)), as we set the boot concurrency above to 2.

$ svcs -o STATE,FMRI -s FMRI system/zones/*
STATE     FMRI
offline*  svc:/system/zones/zone:jack
online    svc:/system/zones/zone:joe
offline   svc:/system/zones/zone:lady
offline   svc:/system/zones/zone:master
offline   svc:/system/zones/zone:yesman0
offline   svc:/system/zones/zone:yesman1
offline   svc:/system/zones/zone:yesman2
offline*  svc:/system/zones/zone:yesman3
online    svc:/system/zones/zone:yesman4
online    svc:/system/zones/zone:yesman5
offline   svc:/system/zones/zone:yesman6
offline   svc:/system/zones/zone:yesman7
offline   svc:/system/zones/zone:yesman8
online    svc:/system/zones/zone:yesman9

Conclusion

With the Zones Delegated Restarter introduced in 11.4, we resolved several shortcomings of the Zones framework in 11.3. There is always room for additional enhancements, making the boot ordering based on boot priorities more deterministic, for example. We are open to any feedback you might have on this new Zones Delegated Restarter feature.
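The concurrency limit described above (config/concurrent-boot-shutdown) is a ZDR feature, but the underlying bounded-concurrency idea (run at most N tasks at once, queueing the rest) can be sketched on any *nix system with xargs -P. This is a generic illustration only, not how svc.zones is implemented; the "zone" names and the log file name are made up:

```shell
# Six pretend "zones" to boot; at most 2 boot jobs run concurrently (-P 2),
# mirroring config/concurrent-boot-shutdown=2. Each job just logs its name;
# a real boot task would go where the echo is.
printf 'zone%d\n' 1 2 3 4 5 6 |
  xargs -P 2 -I{} sh -c 'echo "booted {}"' > booted.log

# All six eventually complete; the completion order is nondeterministic,
# much like the best-effort ordering of the ZDR priority queue.
sort booted.log
```

xargs -P is not POSIX but is supported by both GNU and BSD xargs; the point is simply that a fixed worker limit bounds parallelism without changing the set of work done.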


Oracle Solaris 11.3 SRU 35 released

Earlier today we released Oracle Solaris 11.3 SRU 35. It's available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support .

This SRU introduces the following enhancements:

Compliance Update Check: allows users to verify the system is not using features which are no longer supported in newer releases
Oracle VM Server for SPARC has been updated to version 3.5.0.3. More details can be found in the Oracle VM Server for SPARC 3.5.0.3 Release Notes.
The Java 8, Java 7, and Java 6 packages have been updated
Explorer 18.3 is now available
libepoxy has been added to Oracle Solaris
gnu-gettext has been updated to 0.19.8
bison has been updated to 3.0.4
at-spi2-atk has been updated to 2.24.0
at-spi2-core has been updated to 2.24.0
gtk+3 has been updated to 3.18.0
Fixed the missing magick/static.h header in the ImageMagick manifest

The following components have also been updated to address security issues:

python has been updated to 3.4.8
BIND has been updated to 9.10.6-P1
Apache Tomcat has been updated to 8.5.3
kerberos 5 has been updated to 1.16.1
Wireshark has been updated to 2.6.2
Thunderbird has been updated to 52.9.1
libvorbis has been updated to 1.3.6
MySQL has been updated to 5.7.23
gdk-pixbuf, libtiff, jansson, procmail, libgcrypt, libexif

Full details of this SRU can be found in My Oracle Support Doc 2437228.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).


Recovering a Missing Zone After a Repeated Upgrade From Oracle Solaris 11.3 to 11.4

Recap of the Problem

As I explained in my past Shared Zone State in Oracle Solaris 11.4 blog post, if you update to Oracle Solaris 11.4 from 11.3, boot the new 11.4 BE, create/install new zones there, then boot back to 11.3, update again to 11.4, and boot that second 11.4 BE, the zones you created and installed in the first 11.4 BE will no longer be shown in the zoneadm list -c output when booted from the second 11.4 BE. Those zones are missing because the second update from 11.3 to 11.4 replaced the shared zones index file, storing the original one containing the zone entries in a /var/share/zones/index.json.backup.<date>--<time> file. However, I did not explain how to get such zones back, and that is what I'm going to show in this blog post.

There are two pieces missing: the zones are not in the shared index file, and the zones do not have their zone configurations in the current BE.

The Recovery Solution

The fix is quite easy, so let's show how to recover one zone. Either export the zone config from the first 11.4 BE via zonecfg -z <zonename> export > <exported-config> and import it in the other 11.4 BE via zonecfg -z <zonename> -f <exported-config>, or just manually create the zone in the second 11.4 BE with the same configuration. Example:

BE-AAA# zonecfg -z xxx export > xxx.conf
BE-BBB# zonecfg -z xxx -f xxx.conf

In this specific case of multiple updates to 11.4, you could also manually copy <mounted-1st-11.4-be>/etc/zones/<zonename>.xml from the first 11.4 BE (use beadm mount <1st-11.4-be-name> /a to mount it), but note that that is not a supported way to do things, as in general configurations from different system versions may not be compatible. If that is the case, the configuration update is done during the import or on the first boot. However, in this blog entry, I will cheat and use a simple cp(1) since I know that the configuration file is compatible with the BE I'm copying it into. The described recovery solution is brands(7) agnostic. 
Example

The example that follows recovers a missing zone, uar. Each BE is denoted by a different shell prompt.

root@s11u4_3:~# zonecfg -z uar create
root@s11u4_3:~# zoneadm -z uar install
root@s11u4_3:~# zoneadm list -cv
ID NAME    STATUS    PATH                  BRAND   IP
0  global  running   /                     solaris shared
-  tzone1  installed /system/zones/tzone1  solaris excl
-  uar     installed /system/zones/uar     solaris excl
root@s11u4_3:~# beadm activate sru35.0.3
root@s11u4_3:~# reboot -f

root@S11-3-SRU:~# pkg update --be-name=s11u4_3-b -C0 --accept entire@11.4-11.4.0.0.1.3.0
...
root@S11-3-SRU:~# reboot -f

root@s11u4_3-b:~# svcs -xv
svc:/system/zones-upgrade:default (Zone config upgrade after first boot)
 State: degraded since Fri Aug 17 13:39:53 2018
Reason: Degraded by service method: "Unexpected situation during zone index conversion to JSON."
   See: http://support.oracle.com/msg/SMF-8000-VE
   See: /var/svc/log/system-zones-upgrade:default.log
Impact: Some functionality provided by the service may be unavailable.
root@s11u4_3-b:~# zoneadm list -cv
ID NAME    STATUS    PATH                  BRAND   IP
0  global  running   /                     solaris shared
-  tzone1  installed /system/zones/tzone1  solaris excl
root@s11u4_3-b:~# beadm mount s11u4_3 /a
root@s11u4_3-b:~# cp /a/etc/zones/uar.xml /etc/zones/
root@s11u4_3-b:~# zonecfg -z uar create
Zone uar does not exist but its configuration file does. To reuse it, use -r; create anyway to overwrite it (y/[n])? n
root@s11u4_3-b:~# zonecfg -z uar create -r
Zone uar does not exist but its configuration file does; do you want to reuse it (y/[n])? y
root@s11u4_3-b:~# zoneadm list -cv
ID NAME    STATUS     PATH                  BRAND   IP
0  global  running    /                     solaris shared
-  tzone1  installed  /system/zones/tzone1  solaris excl
-  uar     configured /system/zones/uar     solaris excl
root@s11u4_3-b:~# zoneadm -z uar attach -u
Progress being logged to /var/log/zones/zoneadm.20180817T134924Z.uar.attach
Zone BE root dataset: rpool/VARSHARE/zones/uar/rpool/ROOT/solaris-0
Updating image format
Image format already current. 
Updating non-global zone: Linking to image /. Updating non-global zone: Syncing packages. Packages to update: 527 Services to change: 2 ... ... Result: Attach Succeeded. Log saved in non-global zone as /system/zones/uar/root/var/log/zones/zoneadm.20180817T134924Z.uar.attach root@s11u4_3-b:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - tzone1 installed /system/zones/tzone1 solaris excl - uar installed /system/zones/uar solaris excl Conclusion This situation of missing zones on multiple updates from 11.3 to 11.4 is inherently part of the change from a BE specific zone indexes in 11.3 to a shared index in 11.4. You should only encounter it if you go back from 11.4 to 11.3 and update again to 11.4. We assume such situations will not happen often. The final engineering consensus during the design was that while users mostly keep going forward, i.e. update to greater system versions and then go back not, if they happen to go back to 11.3 and update again to 11.4, they would expect the same list of zones as they had on the 11.3 BE they used last for the 11.4 update.


Oracle Solaris 11

Trapped by Older Software

Help! I am trapped by my Old Application Which Cannot Change. I need to update my Oracle Solaris 11 system to a later Update and/or Support Repository Update (SRU), but I find that upon the update my favourite FOSS component has been removed. My Old Application Which Cannot Change has a dependency upon it and so no longer starts. Help!

Oracle Solaris, by default, will ensure that the software installed on the system is up to date. This includes the removal (uninstall) of software that is obsolete. Packages can be marked as obsolete because the owning community no longer supports them, because they have been replaced by another package or a later major version, or for some other valid reason (we are very careful about the removal of software components). However, by design, the tension between keeping the operating system up to date and allowing for exceptions is addressed by Oracle Solaris's packaging system, via the 'version locks' within it.

Taking an example: Oracle Solaris 11.3 SRU 20 obsoleted Python 2.6, because that version of Python had reached End of Life with the Python community. So updating a system beyond that SRU will result in Python 2.6 being removed. But what happens if you actually need to use Python 2.6 because of some application dependency? The first thing is to check with the application vendor whether there is a later version that supports the newer version of Python; if so, consider updating to that later version. But maybe there is no later release of the application. In that instance, how do you get python-26 onto your system?
Follow the steps below:

1. Identify the version of the required package, using pkg list -af <name of package>. For example:
   pkg list -af runtime/python-26
2. Identify whether the package has dependencies that need to be installed:
   pkg contents -r -t depend runtime/python-26@2.6.8-0.175.3.15.0.4.0
   The python-26 package is interesting as it has a conditional dependency upon the package runtime/tk-8, such that it depends upon library/python/tkinter-26. So if tk-8 is installed then tkinter-26 will also need to be installed.
3. Identify the incorporation that locks the package(s):
   pkg search depend:incorporate:runtime/python-26
4. Using the information from the previous step, find the relevant lock(s):
   pkg contents -m userland-incorporation | egrep 'runtime/python-26|python/tkinter-26'
5. Unlock the package(s):
   pkg change-facet version-lock.runtime/python-26=false version-lock.library/python/tkinter-26=false
6. Update the package(s) to the version identified in the first step:
   pkg update runtime/python-26@2.6.8-0.175.3.15.0.4.0
   There is no need to worry about tkinter-26 here because the dependency within the python-26 package will cause it to be installed.
7. Freeze the package(s) so that further updates will not remove them. Put a comment on the freeze to indicate why the package is installed:
   pkg freeze -c 'Needed for Old Application' runtime/python-26
8. If required, update the system to the later SRU or Oracle Solaris Update:
   pkg update

Here is another complete example, using the current Oracle Solaris 11.4 Beta and Java 7:

# pkg list -af jre-7
NAME (PUBLISHER)       VERSION        IFO
runtime/java/jre-7     1.7.0.999.99   --o
runtime/java/jre-7     1.7.0.191.8    ---
runtime/java/jre-7     1.7.0.181.9    ---
runtime/java/jre-7     1.7.0.171.11   ---
runtime/java/jre-7     1.7.0.161.13   ---
...
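The unlock/update/freeze sequence in steps 5-7 can be collected into one small helper. This is a sketch of my own, not an official tool: it assumes the package FMRI and version-lock facet name were already identified via the earlier steps, and the echo prefix makes it a dry run so you can review the pkg commands before executing them for real.

```shell
#!/bin/sh
# Dry-run sketch: print the pkg commands needed to pin one obsoleted
# package. Remove the "echo" prefixes to execute them for real.
# The FMRI and lock names below are illustrative.
pin_package() {
    fmri="$1"       # e.g. runtime/python-26@2.6.8-0.175.3.15.0.4.0
    lock="$2"       # e.g. version-lock.runtime/python-26
    name="${fmri%%@*}"   # strip the version to get the package name
    echo pkg change-facet "$lock=false"
    echo pkg update "$fmri"
    echo pkg freeze -c "'Needed for Old Application'" "$name"
}

pin_package "runtime/python-26@2.6.8-0.175.3.15.0.4.0" "version-lock.runtime/python-26"
```

Reviewing the printed commands before running them is deliberate: a change-facet or freeze typo is much easier to catch on screen than to undo afterwards.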
# pkg search depend:incorporate:runtime/java/jre-7
INDEX       ACTION VALUE                                PACKAGE
incorporate depend runtime/java/jre-7@1.7.0.999.99,5.11 pkg:/consolidation/java-7/java-7-incorporation@1.7.0.999.99-0
# pkg contents -m java-7-incorporation | grep jre-7
depend fmri=runtime/java/jre-7@1.7.0.999.99,5.11 type=incorporate

Oh. There is no lock. What should we do now? Is there a lock on the Java 7 incorporation that can be used? Yes! See the results of the search in the first command below. So we can unlock that one and install Java 7.

# pkg search depend:incorporate:consolidation/java-7/java-7-incorporation
INDEX       ACTION VALUE                                                  PACKAGE
incorporate depend consolidation/java-7/java-7-incorporation@1.7.0.999.99 pkg:/entire@11.4-11.4.0.0.1.12.0
# pkg contents -m entire | grep java-7-incorporation
depend fmri=consolidation/java-7/java-7-incorporation type=require
depend facet.version-lock.consolidation/java-7/java-7-incorporation=true fmri=consolidation/java-7/java-7-incorporation@1.7.0.999.99 type=incorporate
# pkg change-facet version-lock.consolidation/java-7/java-7-incorporation=false
            Packages to change: 1
     Variants/Facets to change: 1
       Create boot environment: No
Create backup boot environment: Yes

PHASE                                          ITEMS
Removing old actions                             1/1
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1
# pkg list -af java-7-incorporation
NAME (PUBLISHER)                              VERSION          IFO
consolidation/java-7/java-7-incorporation     1.7.0.999.99-0   i--
consolidation/java-7/java-7-incorporation     1.7.0.191.8-0    ---
consolidation/java-7/java-7-incorporation     1.7.0.181.9-0    ---
....
# pkg install --accept jre-7@1.7.0.191.8 java-7-incorporation@1.7.0.191.8-0
           Packages to install: 2
            Packages to update: 1
       Create boot environment: No
Create backup boot environment: Yes

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                3/3       881/881    71.8/71.8  5.2M/s

PHASE                                          ITEMS
Removing old actions                             4/4
Installing new actions                     1107/1107
Updating modified actions                        2/2
Updating package state database                 Done
Updating package cache                           1/1
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1
# pkg freeze -c 'Needed for Old Application' java-7-incorporation
consolidation/java-7/java-7-incorporation was frozen at 1.7.0.191.8-0:20180711T215211Z
# pkg freeze
NAME                                      VERSION                        DATE                     COMMENT
consolidation/java-7/java-7-incorporation 1.7.0.191.8-0:20180711T215211Z 07 Aug 2018 14:50:34 UTC Needed for Old Application

A couple of points about the above example. When installing the required version of Java, the corresponding incorporation also had to be installed at the correct version. The freeze has been applied to the Java 7 incorporation because that is the package that controls the Java 7 package version. The default version of Java remains Java 8, but that can be changed via the use of mediators (see pkg(1) and look for mediator), as the next steps show.
# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b12, mixed mode)
# /usr/jdk/instances/jdk1.7.0/bin/java -version
java version "1.7.0_191"
Java(TM) SE Runtime Environment (build 1.7.0_191-b08)
Java HotSpot(TM) Server VM (build 24.191-b08, mixed mode)
# pkg set-mediator -V 1.7 java
            Packages to change: 3
           Mediators to change: 1
       Create boot environment: No
Create backup boot environment: Yes

PHASE                                          ITEMS
Removing old actions                             2/2
Updating modified actions                        3/3
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1
# java -version
java version "1.7.0_191"
Java(TM) SE Runtime Environment (build 1.7.0_191-b08)
Java HotSpot(TM) Server VM (build 24.191-b08, mixed mode)

Another example of unlocking packages is in the article More Tips for Updating Your Oracle Solaris 11 System from the Oracle Support Repository. In summary, Oracle Solaris 11 provides a single method to update all the operating system software, via pkg update, but additionally allows for exceptions so that legacy applications can keep running.


Oracle Solaris 11

Solaris 11: High-Level Steps to Create an IPS Package

Keywords: Solaris package IPS+Repository pkg

1 Work on the Directory Structure

Start by organizing the package contents (files) into the same directory structure that you want on the installed system. In the following example the directory was organized in such a manner that, when the package is installed, the software is copied to the /opt/myutils directory.

# tree opt
opt
`-- myutils
    |-- docs
    |   |-- README.txt
    |   `-- util_description.html
    |-- mylib.py
    |-- util1.sh
    |-- util2.sh
    `-- util3.sh

Create a directory to hold the software in the desired layout. Let us call this "workingdir"; this directory will be specified in subsequent steps to generate the package manifest and finally the package itself. Move the top-level software directory into the "workingdir".

# mkdir workingdir
# mv opt workingdir
# tree -fai workingdir/
workingdir
workingdir/opt
workingdir/opt/myutils
workingdir/opt/myutils/docs
workingdir/opt/myutils/docs/README.txt
workingdir/opt/myutils/docs/util_description.html
workingdir/opt/myutils/mylib.py
workingdir/opt/myutils/util1.sh
workingdir/opt/myutils/util2.sh
workingdir/opt/myutils/util3.sh

2 Generate the Package Manifest

The package manifest provides metadata such as the package name, description, version, classification, and category, along with the files and directories included and any dependencies that need to be installed for the target package. The manifest of an existing package can be examined with the help of the pkg contents subcommand.

The pkgsend generate command generates the manifest. It takes "workingdir" as input. Piping the output through pkgfmt makes the manifest readable.
# pkgsend generate workingdir | pkgfmt > myutilspkg.p5m.1
# cat myutilspkg.p5m.1
dir path=opt owner=root group=bin mode=0755
dir path=opt/myutils owner=root group=bin mode=0755
dir path=opt/myutils/docs owner=root group=bin mode=0755
file opt/myutils/docs/README.txt path=opt/myutils/docs/README.txt owner=root group=bin mode=0644
file opt/myutils/docs/util_description.html path=opt/myutils/docs/util_description.html owner=root group=bin mode=0644
file opt/myutils/mylib.py path=opt/myutils/mylib.py owner=root group=bin mode=0755
file opt/myutils/util1.sh path=opt/myutils/util1.sh owner=root group=bin mode=0644
file opt/myutils/util2.sh path=opt/myutils/util2.sh owner=root group=bin mode=0644
file opt/myutils/util3.sh path=opt/myutils/util3.sh owner=root group=bin mode=0644

3 Add Metadata to the Package Manifest

Note that the package manifest is currently missing attributes such as name and description (metadata). Those attributes can be added directly to the generated manifest; however, the recommended approach is to rely on the pkgmogrify utility to make changes to an existing manifest. Create a text file with the missing package attributes:

# cat mypkg_attr
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global

The set name=variant.opensolaris.zone value=global action restricts the package installation to the global zone. To make the package installable in both global and non-global zones, either specify a set name=variant.opensolaris.zone value=global value=nonglobal action in the package manifest, or do not reference the variant.opensolaris.zone variant at all in the manifest.

Now merge the metadata with the manifest generated in the previous step.
# pkgmogrify myutilspkg.p5m.1 mypkg_attr | pkgfmt > myutilspkg.p5m.2
# cat myutilspkg.p5m.2
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global
dir path=opt owner=root group=bin mode=0755
dir path=opt/myutils owner=root group=bin mode=0755
dir path=opt/myutils/docs owner=root group=bin mode=0755
file opt/myutils/docs/README.txt path=opt/myutils/docs/README.txt owner=root group=bin mode=0644
file opt/myutils/docs/util_description.html \
    path=opt/myutils/docs/util_description.html owner=root group=bin mode=0644
file opt/myutils/mylib.py path=opt/myutils/mylib.py owner=root group=bin mode=0755
file opt/myutils/util1.sh path=opt/myutils/util1.sh owner=root group=bin mode=0644
file opt/myutils/util2.sh path=opt/myutils/util2.sh owner=root group=bin mode=0644
file opt/myutils/util3.sh path=opt/myutils/util3.sh owner=root group=bin mode=0644

4 Evaluate & Generate Dependencies

Generate the dependencies so they will be part of the manifest. It is recommended to rely on the pkgdepend utility for this task rather than declaring depend actions manually, to minimize inaccuracies.

# pkgdepend generate -md workingdir myutilspkg.p5m.2 | pkgfmt > myutilspkg.p5m.3

At this point, ensure that the manifest has all the dependencies listed. If not, declare the missing dependencies manually.

5 Resolve Package Dependencies

This step might take a while to complete.

# pkgdepend resolve -m myutilspkg.p5m.3

6 Verify the Package

By this time the package manifest should be essentially complete. Check and validate it manually, or (recommended) using the pkglint utility, for consistency and any possible errors.

# pkglint myutilspkg.p5m.3.res

7 Publish the Package

For the purpose of demonstration, let's go with the simplest option to publish the package: a local file-based repository.
Create the local file-based repository using the pkgrepo command, and set the default publisher for the newly created repository.

# pkgrepo create my-repository
# pkgrepo -s my-repository set publisher/prefix=mypublisher

Finally, publish the target package with the help of the pkgsend command.

# pkgsend -s my-repository publish -d workingdir myutilspkg.p5m.3.res
pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z
PUBLISHED
# pkgrepo info -s my-repository
PUBLISHER   PACKAGES STATUS UPDATED
mypublisher 1        online 2018-07-04T01:41:57.414014Z

8 Validate the Package

Finally, validate that the published package has been packaged properly by test-installing it.

# pkg set-publisher -p my-repository
# pkg publisher
# pkg install myutils
# pkg info myutils
             Name: myutils
          Summary: Utilities package
      Description: Utilities package
            State: Installed
        Publisher: mypublisher
          Version: 3.0
    Build Release: 5.11
           Branch: 0
   Packaging Date: Wed Jul 04 01:41:57 2018
Last Install Time: Wed Jul 04 01:45:05 2018
             Size: 49.00 B
             FMRI: pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z
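To get a feel for what step 2 produces, here is a rough, simplified stand-in for pkgsend generate written in plain shell: it emits one dir action per directory and one file action per file under a working directory. The real tool also records owner, group, and mode attributes (and computes content hashes at publish time), so treat this purely as an illustration of the manifest's shape, not a replacement.

```shell
#!/bin/sh
# Illustrative approximation of `pkgsend generate workingdir`:
# print dir and file actions for everything under the given directory.
gen_actions() {
    ( cd "$1" || exit 1
      # one "dir path=..." action per directory (skipping "." itself)
      find . -type d ! -name . | sed 's|^\./||; s|^|dir path=|'
      # one "file <src> path=<dest>" action per regular file
      find . -type f | sed 's|^\./\(.*\)$|file \1 path=\1|' )
}
```

Running gen_actions workingdir over the layout from step 1 would print lines such as dir path=opt/myutils and file opt/myutils/util1.sh path=opt/myutils/util1.sh, mirroring the structure of the real manifest shown above.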


Perspectives

Automatic Configuration of Solaris OCI Guests

Once you've gone through the basics of setting up an Oracle Solaris guest on Oracle Cloud Infrastructure (OCI), covered in my previous post, you will likely wonder how you can customize guests automatically at launch. You can always create specific custom images by hand, but that has two problems: you're doing a lot of work by hand, and then you have to manage the custom images as well; once created, they become objects with a lifecycle of their own. The natural desire of system admins is to write some scripts to automate the work, and then run those scripts at first boot. That allows just managing the scripts and applying them when booting an instance of Solaris. Let's use setting up the Solaris 11.4 beta publisher as an example. Here's a template for a script that can automate applying your certificate and key to access the Solaris 11.4 beta publisher:

#!/bin/ksh
#
# userdata script to set up the pkg publisher for Solaris beta, install packages

MARKER=/var/tmp/userdata_marker

# If we've already run before in this instance, exit
[[ -f $MARKER ]] && exit 0

# Save key and certificate to files for use by pkg
cat <<EOF >/system/volatile/pkg.oracle.com.certificate.pem
# replace with contents of downloaded pkg.oracle.com.certificate.pem
EOF
cat <<EOF >/system/volatile/pkg.oracle.com.key.pem
# replace with contents of downloaded pkg.oracle.com.key.pem
EOF

# Wait for DNS configuration, as cloudbase-init intentionally doesn't wait
# for the nameservice milestone
while [[ $(svcs -H -o state dns/client) != "online" ]]; do
    sleep 5
done

pkg set-publisher -G '*' -g https://pkg.oracle.com/solaris/beta \
    -c /system/volatile/pkg.oracle.com.certificate.pem \
    -k /system/volatile/pkg.oracle.com.key.pem solaris

# Publisher is set up; install additional packages here if desired
# pkg install ...

# Leave a marker that this script has run
touch $MARKER

Copy this script, modify it by pasting in the contents of the certificate and key files you've downloaded from pkg-register.oracle.com, and save it.
Now, select Create Instance in the OCI Console, and select your Solaris 11.4 beta image as the boot volume. Paste or select your ssh key, and then as a Startup Script select or paste your modified copy of the template script above (note: if you're using the emulated VM image you'll need to click Show Advanced Options to access these two fields). Select a virtual cloud network for the instance, and then click Create Instance to start the launch process. Once the image is launched and you're able to ssh in, you can verify that the package repository is correctly configured using "pkg search".

There are lots of things you might do in such a startup script: install software, enable services, create user accounts, or anything else required to get an application running on a cloud instance. Note, though, that the script will run at every boot, not just the first one, so your script must either be idempotent or ensure that it runs only once. The pkg operations in the example script are idempotent, but I've included a simple run-once mechanism to optimize it.

Debugging Startup Script Problems

There are two components to the startup script mechanism. OCI provides a metadata service that publishes the startup script you provide, and Solaris includes the cloudbase-init service that downloads the metadata and applies it; the script is known as a userdata script. If your script doesn't work, you can examine the cloudbase-init service log using the command sudo svcs -Lv cloudbase-init. By default, cloudbase-init only reports the exit status of the userdata script, which likely isn't enough to tell you what happened, since scripts generally can't provide specific error codes for every possible problem.
You can enable full debug logging for cloudbase-init by modifying its config/debug property:

svccfg -s cloudbase-init:default setprop config/debug=boolean: true
svcadm refresh cloudbase-init
svcadm restart cloudbase-init

The log will now include all output sent to stdout or stderr from the script.
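As an aside, the run-once guard used in the userdata template above can be factored into a small reusable helper that you call at the top of any startup script. This is a sketch; the default marker path is an arbitrary choice of mine, and the helper exits the script silently on repeat boots.

```shell
#!/bin/sh
# Run the rest of a userdata script only once per instance:
# exit quietly if the marker file exists, create it otherwise.
run_once() {
    marker="${1:-/var/tmp/userdata_marker}"
    [ -f "$marker" ] && exit 0   # already ran on a previous boot
    touch "$marker"              # record that this boot did the work
}
```

A script that begins with run_once (optionally passing its own marker path) then performs its setup exactly once, no matter how many times the instance reboots.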


Perspectives

Getting Started with Solaris 11.4 Beta Images for Oracle Cloud Infrastructure

A question that's coming up more and more often among Oracle Solaris customers is, "Can I run Solaris workloads in Oracle Cloud Infrastructure (OCI)?". Previously, it's only been possible by deploying your own OVM hypervisor in the OCI bare metal infrastructure, or running guests in the OCI Classic infrastructure. As of today, I'm pleased that we can add bare metal and virtual machines in OCI as options. With the release of the latest refresh to the Oracle Solaris 11.4 Beta, we're providing pre-built images for use in OCI. The images aren't part of the official OCI image catalog at this time, but using them is easy; just follow these steps:

1. Log in to your OCI console and select Compute->Custom Images from the main menu; this will display the Images page.
2. Press the blue Import Image button. This will display the Import Image dialog.
3. In the dialog, select a compartment into which the image will be imported, and enter a name, such as "Solaris 11.4 Beta". Select Linux for the operating system, since OCI doesn't yet know about Solaris; that will avoid any special handling that OCI has for Windows images.
4. At this point, choose which image you wish to import:
   - Bare Metal: Copy this link and paste it into the Object Storage URL field. Select QCOW2 as the Image Type, and Native Mode as the Launch Mode. Enter any tags you wish to apply, and then press Import Image.
   - Virtual Machine: Copy this link and paste it into the Object Storage URL field. Select VMDK as the Image Type, and Emulated Mode as the Launch Mode. Enter any tags you wish to apply, and then press Import Image.

It'll take a few minutes for OCI to copy the image from object storage into your tenant's image repository. Once that's complete, you can launch an instance using the image. First, one tip: if you've imported the Bare Metal image, you should go to its Image Details page and press the Edit Details button. In the Edit Image Details dialog that comes up, there's a Compatible Shapes list.
You'll find that all of the shapes have a blue checkmark.  You should uncheck all of the VM shapes and then Save the image.  The reason is that Solaris is not capable of booting in OCI's native virtual machine shapes at this time and this will prevent anyone who uses that image from inadvertently launching a VM that won't be accessible.  We're working on running Solaris under OCI's native VM technology, but since it's not ready yet, we've made the emulated mode image available for now. When creating an instance, select Custom Image as the boot volume type and select the image you've imported along with a compatible shape.  You'll need to supply an ssh key in order to login to the instance once it's started; when creating a VM, it's necessary to click the Show Advanced Options link to access the SSH Keys settings. After you start an instance, login using ssh opc@<instance ip>.  The image contains a relatively minimal Solaris installation suitable for bootstrapping into a cloud environment - this is the solaris-cloud-guest group package.  You'll likely need to install more software to do anything beyond some simple exploration; to add more Solaris packages, head on over to pkg-register.oracle.com and download a key and certificate to access the Oracle Solaris 11.4 Beta repository, following the instructions there to configure pkg. Now that you've got an instance running, there's a lot more you can do with it, including saving any modifications you make as a new Custom Image of your own that you can then redeploy directly to a new instance (note, though, that at this point a modified bare metal image will only be deployable to bare metal, and a VM image will only be deployable to a VM).  I'll post some how-to's for common tasks in the coming days, including deploying zones, creating your own images to move workloads into OCI, and using Terraform to orchestrate deployments.  
Leave a comment here, post on the Solaris Beta community forum, or catch me @dave_miner on Twitter if you have topic suggestions or questions. Update: There's one problem with the VM image - if you create a new boot environment either directly or via a pkg operation and then reboot (even if you don't activate the new boot environment), the VM will end up in a panic loop.  To avoid this, run the following command after you've logged into your VM: sudo bootadm change-entry -i 0 kargs="-B enforce-prot-exec=off"    


Oracle Solaris 11

Oracle Solaris 11.4 Open Beta Refresh 2

As we continue to work toward the release of Oracle Solaris 11.4, we present to you our third release of the Oracle Solaris 11.4 open beta. You can download it here, or, if you're already running a previous version of the Oracle Solaris 11.4 beta, make sure your system is pointing to the beta repo (https://pkg.oracle.com/solaris/beta/) as its publisher and type 'pkg update'. This will be the last Oracle Solaris 11.4 open beta, as we are nearing release and are now going to focus our energies entirely on preparing Oracle Solaris 11.4 for general availability.

The key focus of Oracle Solaris 11.4 is to bring new capabilities while maintaining application compatibility, to help you modernize and secure your infrastructure while protecting your application investment. This release is specifically focused on quality and application compatibility, making your transition to Oracle Solaris 11.4 seamless. The refresh includes updates to 56 popular open source libraries and utilities, a new compliance(8) "explain" subcommand which provides details on the compliance checks performed against the system for a given benchmark, and a variety of other performance and security enhancements. In addition, this refresh delivers Kernel Page Table Isolation for x86 systems, which is important in addressing the Meltdown security vulnerability affecting some x86 CPUs. This update also includes an updated version of Oracle VM Server for SPARC, with improvements in console security and live migration, and introduces a LUN masking capability to simplify storage provisioning to guests. We're excited about the content and capability of this update, and you'll be seeing more about specific features and capabilities in the Oracle Solaris blog in the coming days.
As you try out the software in your own environment and with your own applications please continue to give us feedback through the Oracle Solaris Beta Community Forum at https://community.oracle.com/community/server_&_storage_systems/solaris/solaris-beta


Solaris

Python: Exclusive File Locking on Solaris

Solaris doesn't lock open files automatically (and not just Solaris: most *nix operating systems behave this way). In general, when a process is about to update a file, the process is responsible for checking existing locks on the target file, acquiring a lock, and releasing it after updating the file. However, given that not all processes cooperate and adhere to this mechanism (advisory locking), for various reasons, such non-conforming practice may lead to problems such as inconsistent or invalid data, mainly triggered by race conditions.

Serialization is one possible solution to prevent this, where only one process is allowed to update the target file at any time. It can be achieved with the help of the file locking mechanism on Solaris, as well as on the majority of other operating systems. On Solaris, a file can be locked for exclusive access by any process with the help of the fcntl() system call, which provides control over open files. It can be used for finer-grained control over the locking -- for instance, we can specify whether or not to make the call block while requesting an exclusive or shared lock.

The following rudimentary Python code demonstrates how to acquire an exclusive lock on a file, making all other processes wait to get access to the file in question.

% cat xflock.py
#!/bin/python
import fcntl, time

f = open('somefile', 'a')
print 'waiting for exclusive lock'
fcntl.flock(f, fcntl.LOCK_EX)
print 'acquired lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')
time.sleep(10)
f.close()
print 'released lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')

Running the above code in two terminal windows at the same time shows the following.
Terminal 1:

% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:36
released lock at 2018-06-30 22:25:46

Terminal 2:

% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:46
released lock at 2018-06-30 22:25:56

Notice that the process running in the second terminal was blocked waiting to acquire the lock until the process running in the first terminal released its exclusive lock.

Non-Blocking Attempt

If the requirement is not to block on exclusive lock acquisition, this can be achieved by performing a bitwise OR of the LOCK_EX (acquire exclusive lock) and LOCK_NB (do not block when locking) operations. In other words, the statement fcntl.flock(f, fcntl.LOCK_EX) becomes fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB), so the process will either get the lock or move on without blocking. Be aware that an IOError is raised when a lock cannot be acquired in non-blocking mode. It is therefore the responsibility of the application developer to catch the exception and deal with the situation properly. The behavior changes as shown below after the inclusion of fcntl.LOCK_NB in the sample code above.

Terminal 1:

% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:42:34
released lock at 2018-06-30 22:42:44

Terminal 2:

% ./xflock.py
waiting for exclusive lock
Traceback (most recent call last):
  File "./xflock.py", line 5, in <module>
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
IOError: [Errno 11] Resource temporarily unavailable
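Putting the non-blocking variant together with the exception handling it requires yields a small helper. This is a sketch of mine, not from the original post; it is written so it also works on Python 3, where the failure surfaces as BlockingIOError, a subclass of OSError:

```python
import fcntl

def try_exclusive_lock(f):
    """Try to take an exclusive flock() lock on the open file object f
    without blocking. Returns True on success, False if the lock is
    already held on another open file description."""
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except (IOError, OSError):
        # EAGAIN/EACCES: someone else holds the lock; don't wait for it
        return False
```

While one open file description holds the lock, a second open of the same file will get False back from this helper instead of blocking, which is exactly the Terminal 2 failure shown above, handled gracefully.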


Oracle Solaris 11

Automated management of the Solaris Audit trail

The Solaris audit_binfile(7) module for auditd provides the ability to specify by age (in hours, days, months, etc.) or by file size when to close the currently active audit trail file and start a new one. This is intended to be used to ensure no single audit file grows too large. What it doesn't do is provide a mechanism to automatically age out old audit records from closed audit files after a period of time. Using the SMF periodic service feature (svc.periodicd) and the auditreduce(8) record selection and merging facilities, we can very easily build some automation.

For this example I'm going to assume that the period can be expressed in days alone; that makes implementing this as an SMF periodic service, and the resulting conversion of that policy into arguments for auditreduce(8), nice and easy.

First create the method script in /lib/svc/method/site-audit-manage (making sure it is executable):

#!/bin/sh
/usr/sbin/auditreduce -D $(hostname) -a $(gdate -d "$1 days ago" +%Y%m%d)

This tells auditreduce to merge all of the closed audit files from N days ago into one new file, where N is specified as the first argument. Then we can use svcbundle(8) to turn that into a periodic service:

# svcbundle -i -s service-property=config:days:count:90 -s interval=month -s day_of_month=1 \
    -s start-method="/lib/svc/method/site-audit-manage %{config/days}" -s service-name=site/audit-manage

That creates and installs a new periodic SMF service that will run on the first day of the month and invoke the above method script with 90 as the number of days.
If we later want to change the policy to 180 days, we can do that with the svccfg command thus:

# svccfg -s site/audit-manage setprop config/days = 180
# svccfg -s site/audit-manage refresh

Note that the method script uses the GNU coreutils gdate command for the easy conversion of "N days ago". It is delivered in pkg:/file/gnu-coreutils, which is installed by default for the solaris-large-server and solaris-desktop group packages but not for solaris-small-server or solaris-minimal, so you may need to add it manually.  
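If installing gnu-coreutils isn't an option, the same "N days ago" date string can be computed with the Python standard library. This is a sketch of an alternative helper, not part of the service above; the auditreduce invocation itself would be unchanged.

```python
#!/usr/bin/env python
# Sketch: compute the same YYYYMMDD value that the method script gets from
# `gdate -d "N days ago" +%Y%m%d`, using only the Python standard library.
import sys
from datetime import date, timedelta

def days_ago(n, today=None):
    """Return YYYYMMDD for the day n days before today."""
    today = today or date.today()
    return (today - timedelta(days=n)).strftime("%Y%m%d")

if __name__ == "__main__":
    n = int(sys.argv[1]) if len(sys.argv) > 1 else 90
    print(days_ago(n))
```

The printed value can then be passed to auditreduce -a in place of the gdate output.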



Python on Solaris

Our colleagues in the Oracle Linux organisation have a nice writeup of their support for Python, and how to get cx_Oracle installed so you can access an Oracle Database. I thought it would be useful to provide an equivalent guide for Oracle Solaris, so here it is. Oracle Solaris has a long history of involvement with Python, starting at least 15 years ago (if not more!). Our Image Packaging System is about 94-95% Python, and we've got about 440k LoC (lines of code) written in Python directly in the ON consolidation. When you look at the Userland consolidation, however, that list grows considerably. From a practical point of view, you cannot install Oracle Solaris without using Python, nor can you have a supportable installation unless you have this system-delivered Python and a whole lot of packages in /usr/lib/python2.7/vendor-packages. We are well aware of the imminent end of support for Python 2.7, so work is underway on migrating not just our modules and commands, but also our tooling -- so that we're not stuck when 2020 arrives. So how does one find which libraries and modules we ship, without trawling through P5M files in the Userland gate? Simply search the Oracle Solaris IPS publisher, either using the web interface (at https://pkg.oracle.com) or using the command line:

$ pkg search -r \<python\>

which gives you a lot of package names. You'll notice that we version them via a suffix, so while you do get a few screenfuls of output, the list is about 423 packages long. Then installation is very simple:

# pkg install <name-of-package>

just like you would for any other package. 
I've made mention of this before, but I think it bears repeating: we make it very, very easy for you to install cx_Oracle and Instant Client so you can connect to the Oracle Database:

# pkg install -v cx_oracle
           Packages to install:  7
           Mediators to change:  1
     Estimated space available: 22.67 GB
Estimated space to be consumed:  1.01 GB
       Create boot environment: No
Create backup boot environment: No
          Rebuild boot archive: No

Changed mediators:
  mediator instantclient:
    version: None -> 12.2 (vendor default)

Changed packages:
solaris
  consolidation/instantclient/instantclient-incorporation
    None -> 12.2.0.1.0-4
  database/oracle/instantclient-122
    None -> 12.2.0.1.0-4
  developer/oracle/odpi
    None -> 2.1.0-11.5.0.0.0.21.0
  library/python/cx_oracle
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-27
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-34
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-35
    None -> 6.1-11.5.0.0.0.21.0

Then it's a simple matter of firing up your preferred Python version, uttering import cx_Oracle, and away you go. Much like this:

>>> import cx_Oracle
>>> tns = """ORCLDNFS=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbkz)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcldnfs)))"""
>>> user = "admin"
>>> passwd = "welcome1"
>>> cnx = cx_Oracle.connect(user, passwd, tns)
>>> stmt = "select wait_class from v$system_event group by wait_class"
>>> curs = cnx.cursor()
>>> curs.execute(stmt).fetchall()
[('Concurrency',), ('User I/O',), ('System I/O',), ('Scheduler',), ('Configuration',), ('Other',), ('Application',), ('Queueing',), ('Idle',), ('Commit',), ('Network',)]

Simple!

Some notes on best practices for Python on Oracle Solaris

While we do aim to package and deliver useful packages, it does happen that perhaps there's a package you need which we don't ship, or of which we ship an older version. How do you get past that problem in a fashion which doesn't affect your system installation? 
Unsurprisingly, the answer is not specific to Oracle Solaris: use Python virtual environments. While you could certainly use $ pip install --user, you can still run afoul of incorrect versions of modules being loaded. Using a virtual environment is cheap, fits in very well with the concept of containerization, and makes the task of producing reproducible builds (aka deterministic compilation) much simpler. We use a similar concept when we're building ON, Solaris Userland and Solaris IPS. For further information about Python packaging, please visit this tutorial, and review this article on Best Practices for Python dependency management, which I've found to be one of the best-written explanations of what to do and why. If you have other questions about using Python in Oracle Solaris, please pop in to the Solaris Beta forum and let us know.
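As a concrete illustration of the virtual-environment advice above, the standard-library venv module (Python 3) can create an isolated environment in one call. The directory name demo-venv is an assumption for the sketch; pip is skipped here to keep it fast and offline-safe.

```python
#!/usr/bin/env python3
# Minimal sketch: create an isolated virtual environment with the
# standard-library venv module. "demo-venv" is an illustrative path.
import os
import venv

env_dir = "demo-venv"
venv.EnvBuilder(with_pip=False).create(env_dir)

# The environment gets its own python binary and site-packages, so
# packages installed into it cannot affect the system-delivered
# /usr/lib/python*/vendor-packages.
print(os.path.exists(os.path.join(env_dir, "bin", "python")))
```

Activating the environment (`. demo-venv/bin/activate`) and running pip inside it keeps experimental module versions entirely out of the supportable system installation.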



Solaris 11.4: 10 Good-to-Know Features, Enhancements or Changes

[Admins] Device Removal From a ZFS Storage Pool

In addition to removing hot spares, cache and log devices, Solaris 11.4 has support for removal of top-level virtual data devices (vdev) from a zpool, with the exception of RAID-Z pools. It is also possible to cancel a remove operation that is in progress. This enhancement will come in handy especially when dealing with overprovisioned and/or misconfigured pools. Ref: ZFS: Removing Devices From a Storage Pool for examples.

[Developers & Admins] Bundled Software

Bundled software packages include Python 3.5, Oracle Instant Client 12.2, MySQL 5.7, Cython (C-Extensions for Python), the cx_Oracle Python module, the Go compiler, clang (C language family frontend for LLVM) and so on. cx_Oracle is a Python module that enables accessing Oracle Database 12c and 11g from Python applications. The Solaris packaged version 5.2 can be used with Python 2.7 and 3.4. Depending on the type of Solaris installation, not every software package may get installed by default, but the above-mentioned packages can be installed from the package repository on demand. e.g.,

# pkg install pkg:/developer/golang-17
# go version
go version devel a30c3bd1a7fcc6a48acfb74936a19b4c Fri Dec 22 01:41:25 GMT 2017 solaris/sparc64

[Security] Isolating Applications with Sandboxes

Sandboxes are isolated environments where users can run applications protected from other processes on the system, without giving those applications full access to the rest of the system. Put another way, application sandboxing is one way to protect users, applications and systems by limiting the privileges of an application to its intended functionality, thereby reducing the risk of system compromise. Sandboxing joins Logical Domains (LDoms) and Zones in extending the isolation mechanisms available on Solaris. Sandboxes are suitable for constraining both privileged and unprivileged applications. Temporary sandboxes can be created to execute untrusted processes. 
Only administrators with the Sandbox Management rights profile (privileged users) can create persistent, uniquely named sandboxes with specific security attributes. The unprivileged command sandbox can be used to create temporary or named sandboxes to execute applications in a restricted environment. The privileged command sandboxadm can be used to create and manage named sandboxes. To install the security/sandboxing package, run:

# pkg install sandboxing
 -OR-
# pkg install pkg:/security/sandboxing

Ref: Configuring Sandboxes for Project Isolation for details. Also see: Oracle Multitenant: Isolation in Oracle Database 12c Release 2 (12.2)

New Way to Find SRU Level

uname -v was enhanced to include the SRU level. Starting with the release of Solaris 11.4, uname -v reports the Solaris patch version in the format "11.<update>.<sru>.<build>.<patch>".

# uname -v
11.4.0.12.0

The above output translates to Solaris 11 Update 4 SRU 0 Build 12 Patch 0.

[Cloud] Service to Perform Initial Configuration of Guest Operating Systems

The cloudbase-init service on Solaris will help speed up guest VM deployment in a cloud infrastructure by performing initial configuration of the guest OS. Initial configuration tasks typically include user creation, password generation, networking configuration, SSH keys and so on. The cloudbase-init package is not installed by default on Solaris 11.4. Install the package only into VM images that will be deployed in cloud environments by running:

# pkg install cloudbase-init

Device Usage Information

The release of Solaris 11.4 makes it easy to identify the consumers of busy devices. Busy devices are those devices that are opened or held by a process or kernel module. Having access to device usage information helps with certain hotplug or fault management tasks. For example, a busy device cannot be hotplugged; knowing how a device is currently being used helps users resolve the related issues. 
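The "11.<update>.<sru>.<build>.<patch>" string can also be decoded mechanically. Here is a minimal sketch; the hard-coded sample string stands in for real uname -v output (on a live 11.4 system you could read os.uname() instead).

```python
# Sketch: decode the Solaris 11.4 `uname -v` version string described above,
# e.g. "11.4.0.12.0" -> Solaris 11 Update 4 SRU 0 Build 12 Patch 0.
def parse_solaris_version(v):
    major, update, sru, build, patch = (int(x) for x in v.split("."))
    return {"major": major, "update": update, "sru": sru,
            "build": build, "patch": patch}

info = parse_solaris_version("11.4.0.12.0")  # sample string, not live output
print("Solaris %(major)d Update %(update)d SRU %(sru)d "
      "Build %(build)d Patch %(patch)d" % info)
```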
On Solaris 11.4, prtconf -v shows the PIDs of processes using different devices. e.g.,

# prtconf -v
...
    Device Minor Nodes:
        dev=(214,72)
            dev_path=/pci@300/pci@2/usb@0/hub@4/storage@2/disk@0,0:a
                spectype=blk type=minor
                nodetype=ddi_block:channel
                dev_link=/dev/dsk/c2t0d0s0
            dev_path=/pci@300/pci@2/usb@0/hub@4/storage@2/disk@0,0:a,raw
                spectype=chr type=minor
                nodetype=ddi_block:channel
                dev_link=/dev/rdsk/c2t0d0s0
        Device Minor Opened By:
            proc='fmd' pid=1516
                cmd='/usr/lib/fm/fmd/fmd'
                user='root[0]'
...

[Developers] Support for C11 (C standard revision)

Solaris 11.4 includes support for the C11 programming language standard: ISO/IEC 9899:2011 Information technology - Programming languages - C. Note that the C11 standard is not yet part of the Single UNIX Specification. Solaris 11.4 supports C11 in addition to C99 to provide customers with C11 support ahead of its inclusion in a future UNIX specification. That means developers can write C programs using the newest available C programming language standard on Solaris 11.4 (and later).

pfiles on a coredump

pfiles, a /proc debugging utility, has been enhanced in Solaris 11.4 to provide details about the file descriptors opened by a crashed process in addition to the files opened by a live process. In other words, "pfiles core" now works.

Privileged Command Execution History

A new command, admhist, was included in Solaris 11.4 to show, in human-readable form, successful system administration commands that are likely to have modified the system state. It is similar to the shell builtin "history". e.g., the following command displays the system administration events that occurred on the system today:

# admhist -d "today" -v
... 
2018-05-31 17:43:21.957-07:00 root@pitcher.dom.com cwd=/ /usr/bin/sparcv9/python2.7 /usr/bin/64/python2.7 /usr/bin/pkg -R /zonepool/p6128-z1/root/ --runid=12891 remote --ctlfd=8 --progfd=13
2018-05-31 17:43:21.959-07:00 root@pitcher.dom.com cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2018-05-31 17:43:22.413-07:00 root@pitcher.dom.com cwd=/ /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg install sandboxing
2018-05-31 17:43:22.415-07:00 root@pitcher.dom.com cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2018-05-31 18:59:52.821-07:00 root@pitcher.dom.com cwd=/root /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg search cloudbase-init
..

It is possible to narrow the results by date, time, zone and audit tag. Ref: the admhist(8) man page.

[Developers] Process Control Library

Solaris 11.4 includes a new process control library, libproc, which provides a high-level interface to the features of the /proc interface. The library also provides access to information, such as symbol tables, that is useful when examining and controlling processes and threads. A controlling process using libproc can typically:

- Grab another process by suspending its execution
- Examine the state of that process
- Examine or modify the address space of the grabbed process
- Make that process execute system calls on behalf of the controlling process, and
- Release the grabbed process to continue execution

Ref: the libproc(3LIB) man page for an example and details.


Easily Migrate to Oracle Solaris 11 on New SPARC Hardware

We have been working very hard to make it easy for you to migrate your applications to newer, faster SPARC hardware and Oracle Solaris 11. This post provides an overview of the process and the tools that automate the migration. Migration helps you modernize IT assets, lower infrastructure costs through consolidation, and improve performance. Oracle SPARC T8 servers, SPARC M8 servers, and Oracle SuperCluster M8 Engineered Systems serve as perfect consolidation platforms for migrating legacy workloads running on old systems. Applications migrated to faster hardware and Oracle Solaris 11 will automatically deliver better performance without requiring any architecture or code changes. You can migrate your operating environment and applications using both physical-to-virtual (P2V) and virtual-to-virtual (V2V) tools. The target environment can be configured either with Oracle VM for SPARC (LDoms) or with Oracle Solaris Zones on the new hardware. You can also migrate to the Dedicated Compute Classic - SPARC Model 300 in Oracle Compute Cloud and benefit from Cloud capabilities.

Migration Options

In general there are two options for migration.

1) Lift and Shift of Applications to Oracle Solaris 11. The application on the source system is re-hosted on new SPARC hardware running Oracle Solaris 11. If your application is running on Oracle Solaris 10 on the source system, lift and shift of the application is preferred where possible, because a full Oracle Solaris 11 stack will perform better and is easier to manage. With the Oracle Solaris Binary Application Guarantee, you will get the full benefits of OS modernization while still preserving your application investment.

2) Lift and Shift of the Whole System. The operating environment and application running on the system are lifted as-is and re-hosted in an LDom or Oracle Solaris Zone on target hardware running Oracle Solaris 11 in the control domain or global zone. 
If you are running Oracle Solaris 10 on the source system and your application has dependencies on Solaris 10 services, you can migrate either to an Oracle Solaris 10 Branded Zone or to an Oracle Solaris 10 guest domain on the target. Oracle Solaris 10 Branded Zones help you maintain an Oracle Solaris 10 environment for the application while taking advantage of Oracle Solaris 11 technologies in the global zone on the new SPARC hardware.

Migration Phases

There are three key phases in migration planning and execution.

1) Discovery. This includes discovery and assessment of existing physical and virtual machines, their current utilization levels, and dependencies between systems hosting multi-tier applications or running highly available (HA) Oracle Solaris Cluster type configurations. This phase helps you identify the candidate systems for migration and the dependency order for performing the migrations.

2) Size the Target Environment. This requires capacity planning of the target environment to accommodate the incoming virtual machines, taking into account the resource utilization levels on the source machine, the performance characteristics of the modern target hardware running Oracle Solaris 11, and the cost savings that result from higher performance.

3) Execute the Migration. Migration can be accomplished using P2V and V2V tools for LDoms and Oracle Solaris Zones. We are continually enhancing migration tools and publishing supporting documentation. As a first step in this exercise, we are releasing LDom V2V tools that help users migrate Oracle Solaris 10 or Oracle Solaris 11 guest domains running on old SPARC systems to modern hardware running Oracle Solaris 11 in the control domain. One of the migration scenarios is illustrated here.

Three commands are used to perform the LDom V2V migration.

1) ovmtcreate runs on the source machine to create an Open Virtualization Appliance (OVA) file, called an OVM Template. 
2) ovmtdeploy runs on the target machine to deploy the guest domain.

3) ovmtconfig runs on the target machine to configure the guest domain.

In the documented example use case, validation is performed using an Oracle Database workload. Database service health is monitored using Oracle Enterprise Manager (EM) Database Express.

Migration Resources

We have a Lift and Shift Guide that documents the end-to-end migration use case and a White Paper that provides an overview of the process. Both documents are available at the Lift and Shift Documentation Library. Stay tuned for more updates on the tools and documentation for LDom and Oracle Solaris Zone migrations, both for on-premises deployments and to the SPARC Model 300 in Oracle Compute Cloud. Oracle Advanced Customer Services (ACS) offers SPARC Solaris Migration services and can assist you with migration planning and execution using the tools developed by Solaris Engineering.


Scheduled Pool Scrubs in Oracle Solaris ZFS

Recommended best practices for protecting your data with ZFS include using ECC memory, configuring pool redundancy and hot spares, and always having current backups of critical data. Because storage devices can fail over time, pool scrubs are also recommended to identify and resolve data inconsistencies caused by failing devices or other issues. Additionally: Data inconsistencies can occur over time, and the earlier such issues are identified and resolved, the higher overall data availability will be. Disks with bad data blocks can be identified during a routine pool scrub and dealt with before multiple disk failures become a risk. The Oracle Solaris 11.4 release includes a new pool property for scheduling a pool scrub and also introduces a read-only property for monitoring when the last pool scrub occurred. Ongoing pool scrubs are recommended for routine pool maintenance. The general best practice is to scrub once per month, or once per quarter for data center quality drives. This new feature enables you to more easily schedule routine pool scrubs. If you install a new Solaris 11.4 system or upgrade your existing Solaris 11 system to Solaris 11.4, a new scrubinterval pool property is set to 30 days (1 month) by default. For example:

% zpool get scrubinterval export
NAME    PROPERTY       VALUE  SOURCE
export  scrubinterval  1m     default

If you have multiple pools on your system, the default scheduled scrubs are staggered so that not all scrubs begin at the same time. You can specify your own scrubinterval in days, weeks, or months. If scrubinterval is set to manual, this feature is disabled. The read-only lastscrub property identifies the start time of the last scrub as follows:

% zpool get lastscrub export
NAME    PROPERTY   VALUE   SOURCE
export  lastscrub  Apr_03  local

A pool scrub runs in the background and at a low priority. 
When a scrub is scheduled using this feature, a best effort is made not to impact an existing scrub or resilver operation, and the scheduled scrub might be cancelled if those operations are already running. Any running scrub (scheduled or manually started) can be cancelled by using the following command:

# zpool scrub -s tank
# zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub canceled on Mon Apr 16 13:23:00 2018
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0

errors: No known data errors

In summary, pool scrubs are an important part of routine pool maintenance, identifying and repairing any data inconsistencies. ZFS scheduled scrubs provide a way to automate pool scrubs in your environment.  
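Given the lastscrub and scrubinterval values shown earlier, you can estimate when the next scheduled scrub is due. This is a purely illustrative sketch, not a ZFS interface; the month unit is approximated as 30 days, matching the "30 days (1 month)" default described above.

```python
# Illustrative sketch (not a ZFS API): estimate the next scheduled scrub
# from a lastscrub date and a scrubinterval value in days/weeks/months,
# mirroring the property values shown above ("1m", "30d", ...).
from datetime import datetime, timedelta

def next_scrub(last, interval):
    """last: datetime of last scrub; interval: e.g. '30d', '4w', '1m'."""
    n, unit = int(interval[:-1]), interval[-1]
    days = {"d": 1, "w": 7, "m": 30}[unit]  # month approximated as 30 days
    return last + timedelta(days=n * days)

print(next_scrub(datetime(2018, 4, 3), "1m").date())  # -> 2018-05-03
```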


Solaris 11.4: Three Zones Related Changes in 3 Minutes or Less

[ 1 ] Automatic Live Migration of Kernel Zones using the sysadm Utility

Live migrate (evacuate) all kernel zones from a host system onto other systems, temporarily or permanently, with the help of the new sysadm(8) utility. In addition, it is possible to evacuate all zones, including kernel zones that are not running and native solaris zones in the installed state. If the target host (that is, the host the zone will be migrated to) meets all evacuation requirements, set it as the destination host for one or more migrating kernel zones by setting the SMF service property evacuation/target:

svccfg -s svc:/system/zones/zone:<migrating-zone> setprop evacuation/target=ssh://<dest-host>

Put the source host in maintenance mode using the sysadm utility to prevent non-running zones from attaching, booting, or migrating in zones from other hosts:

sysadm maintain <options>

Migrate the zones to their destination host(s) by running sysadm's evacuate subcommand:

sysadm evacuate <options>

Complete the system maintenance work, then end maintenance mode on the source host:

sysadm maintain -e

Optionally, bring the evacuated zones back to the source host. Please refer to Evacuating Oracle Solaris Kernel Zones for detailed steps.

[ 2 ] Moving Solaris Zones across Different Storage URIs

Starting with the release of Solaris 11.4, zoneadm's move subcommand can be used to change the zonepath without moving the Solaris zone installation. In addition, the same command can be used to move a zone from: a local file system to shared storage; shared storage to a local file system; and one shared storage location to another.

[ 3 ] ZFS Dataset Live Zone Reconfiguration

Live Zone Reconfiguration (LZR) is the ability to make changes to a running Solaris native zone configuration permanently or temporarily. In other words, LZR avoids rebooting the target zone. Solaris 11.3 already has support for reconfiguring resources such as dedicated CPUs, capped memory and automatic networks (anets). 
Solaris 11.4 extends LZR support to ZFS datasets. With the release of Solaris 11.4, privileged users are able to add or remove ZFS datasets dynamically to and from a Solaris native zone without the need to reboot the zone. e.g.,

# zoneadm list -cv
  ID NAME     STATUS      PATH                BRAND    IP
   0 global   running     /                   solaris  shared
   1 tstzone  running     /zonepool/tstzone   solaris  excl

Add a ZFS filesystem to the running zone, tstzone:

# zfs create zonepool/testfs
# zonecfg -z tstzone "info dataset"
# zonecfg -z tstzone "add dataset; set name=zonepool/testfs; end; verify; commit"
# zonecfg -z tstzone "info dataset"
dataset:
        name: zonepool/testfs
        alias: testfs
# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Adding dataset name=zonepool/testfs
zone 'tstzone': Applying the changes
# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist
# zlogin tstzone "zpool import testfs"
# zlogin tstzone "zfs list testfs"
NAME     USED  AVAIL  REFER  MOUNTPOINT
testfs    31K  1.63T    31K  /testfs

Remove a ZFS filesystem from the running zone, tstzone:

# zonecfg -z tstzone "remove dataset name=zonepool/testfs; verify; commit"
# zonecfg -z tstzone "info dataset"
# zlogin tstzone "zpool export testfs"
# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Removing dataset name=zonepool/testfs
zone 'tstzone': Applying the changes
# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist
# zfs destroy zonepool/testfs
#

A summary of LZR support for resources and properties in native and kernel zones can be found in this page.


Shared Zone State in Oracle Solaris 11.4

Overview

Since Oracle Solaris 11.4, the state of Zones on the system is kept in a shared database in /var/share/zones/, meaning a single database is accessed from all boot environments (BEs). Up until Oracle Solaris 11.3, however, each BE kept its own local copy in /etc/zones/index, and the individual copies were never synced across BEs. This article provides some history, explains why we moved to the shared zone state database, and describes what it means for administrators when updating from 11.3 to 11.4.

Keeping Zone State in 11.3

In Oracle Solaris 11.3, the state of zones is associated separately with every global zone BE in a local text database, /etc/zones/index. The zoneadm and zonecfg commands then operate on a specific copy based on which BE is booted. In a world of systems being migrated between hosts, having a local zone state database in every BE constitutes a problem if we, for example, update to a new BE and then migrate a zone before booting into the newly updated BE. When we eventually boot the new BE, the zone will end up in the unavailable state (the system recognizes that the shared storage is already in use and puts the zone into that state), suggesting that it could possibly be attached. However, as the zone was already migrated, an admin expects the zone to be in the configured state instead. The 11.3 implementation may also lead to a situation where all BEs on a system represent the same Solaris instance (see below for the definition of a Solaris instance), and yet every BE can be linked to a non-global Zone BE (ZBE) for a zone of the same name with the ZBE containing an unrelated Solaris instance. Such a situation happens on 11.3 if we reinstall a chosen non-global zone in each BE.

Solaris Instance

A Solaris instance represents a group of related IPS images. Such a group is created when a system is installed. One installs a system from the media or an install server, via "zoneadm install" or "zoneadm clone", or from a clone archive. 
Subsequent system updates add new IPS images to the same image group representing the same Solaris instance.

Uninstalling a Zone in 11.3

In 11.3, uninstalling a non-global zone (i.e., the native solaris(5) branded zone) means deleting the ZBEs linked to the presently booted BE and updating the state of the zone in /etc/zones/index. Often only one ZBE is deleted; ZBEs linked to other BEs are not affected, and destroying a BE only destroys the ZBE(s) linked to that BE. Presently the only supported way to completely uninstall a non-global zone from a system is to boot each BE and uninstall the zone from there. For Kernel Zones (KZs) installed on ZVOLs, each BE that has the zone in its index file is linked to the ZVOL via a <dataset>.gzbe:<gzbe_uuid> attribute. Uninstalling a Kernel Zone on a ZVOL removes that BE-specific attribute, and only if no other BE is linked to the ZVOL is the ZVOL deleted during the KZ uninstall or BE destroy. In 11.3, the only supported way to completely uninstall a KZ from a system was to boot every BE and uninstall the KZ from there. What can happen when one reinstalls zones during the normal system life cycle is depicted in the following pictures. Each color represents a unique Solaris instance. The picture below shows a usual situation with solaris(5) branded zones. After the system is installed, two zones are installed into BE-1. Following that, on every update, the zones are updated as part of the normal pkg update process. The next picture, though, shows what happens if zones are reinstalled during the normal life cycle of the system. In BE-3, Zone X was reinstalled, while Zone Y was reinstalled in BE-2, BE-3, and BE-4 (but not in BE-5). That leads to a situation where there are two different instances of Zone X on the system, and four different Zone Y instances. Which zone instance is used depends on which BE the system is booted into. 
Note that the system itself and any zone always represent different Solaris instances.

Undesirable Impact of the 11.3 Behavior

The described behavior could lead to undesired situations in 11.3: With multiple ZBEs present, if a non-global zone is reinstalled, we end up with ZBEs under the zone's rpool/VARSHARE dataset representing Solaris instances that are unrelated and yet share the same zone name. That leads to the possible migration problems mentioned in the first section. If a Kernel Zone is used in multiple BEs and the KZ is uninstalled and then re-installed, the installation fails with an error message that the ZVOL is in use in other BEs. The only supported way to uninstall a zone is to boot into every BE on the system and uninstall the zone from there; with multiple BEs, that is definitely a suboptimal solution.

Sharing the Zone State across BEs in 11.4

As already stated in the Overview section, in Oracle Solaris 11.4 the system shares the zone state across all BEs. The shared zone state resolves the issues mentioned in the previous sections. In most situations, nothing changes for users and administrators, as there were no changes in the existing interfaces (some were extended, though). The major implementation change was to move the local, line-oriented textual database /etc/zones/index to the shared directory /var/share/zones/ and store it in JSON format. However, as before, the location and format of the database are just implementation details and are not part of any supported interface. To be precise, we did EOF the zoneadm -R <root> mark interface as part of the Shared Zone State project; it was already of very little use, and then only in some rare maintenance situations. Also, let us be clear that all 11.3 systems use and will continue to use the local zone state index /etc/zones/index. We have no plans to update 11.3 to use the shared zone state database. 
Changes between 11.3 and 11.4 with Regard to Keeping the Zone State

With the introduction of the shared zone state, you can run into situations that were previously either just not possible or could only have been created via some unsupported behavior.

Creating a new zone on top of an existing configuration

When deleting a zone, the zone record is removed from the shared index database and the zone configuration is deleted from the present BE. Mounting all other non-active BEs and removing the configuration files from there would be quite time consuming, so those files are left behind. That means if one later boots into one of those previously non-active BEs and tries to create the zone there (which does not exist, as we removed it from the shared database before), zonecfg may hit an already existing configuration file. We extended the zonecfg interface so that you have a choice to overwrite the configuration or reuse it:

root# zonecfg -z zone1 create
Zone zone1 does not exist but its configuration file does. To reuse it, use -r; create anyway to overwrite it (y/[n])? n
root# zonecfg -z zone1 create -r
Zone zone1 does not exist but its configuration file does; do you want to reuse it (y/[n])? y

Existing Zone without a configuration

If an admin creates a zone, the zone record is put into the shared index database. If the system is later booted into a BE that existed before the zone was created, that BE will not have the zone configuration file (unless left behind before, obviously) but the zone will be known, as the zone state database is shared across all 11.4 BEs. In that case, the zone state in such a BE will be reported as incomplete (note that not even the zone brand is known, as that is also part of the zone configuration). 
root# zoneadm list -cv
  ID NAME     STATUS      PATH   BRAND    IP
   0 global   running     /      solaris  shared
   - zone1    incomplete  -      -        -

When listing the auxiliary state, you will see that the zone has no configuration:

root# zoneadm list -sc
NAME     STATUS      AUXILIARY STATE
global   running
zone1    incomplete  no-config

If you want to remove that zone, the -F option is needed. As removing the zone will make it invisible from all BEs, possibly including those with a usable zone configuration, we introduced a new option to make sure an administrator will not accidentally remove such zones.

root# zonecfg -z newzone delete
Use -F to delete an existing zone without a configuration file.
root# zonecfg -z newzone delete -F
root# zoneadm -z newzone list
zoneadm: zone 'newzone': No such zone exists

Updating to 11.4 while a shared zone state database already exists

When the system is updated from 11.3 to 11.4 for the first time, the shared zone database is created from the local /etc/zones/index on the first boot into the 11.4 BE. A new svc:/system/zones-upgrade:default service instance takes care of that. If the system is then brought back to 11.3 and updated to 11.4 again, the system (i.e. the service instance mentioned above) will find an existing shared index when first booting into this new 11.4 BE, and if the two indexes differ, there is a conflict that must be taken care of. In order not to add more complexity to the update process, the rule is that on every update from 11.3 to 11.4, any existing shared zone index database /var/share/zones/index.json is overwritten with data converted from the /etc/zones/index file of the 11.3 BE the system was updated from, if the data differs. If there were no changes to the zones in between the 11.4 updates, either in any older 11.4 BEs or in the 11.3 BE we are again updating to 11.4 from, there are no changes in the shared zone index database, and the existing shared database needs no overwrite. However, if there were changes to the zones, e.g.
a zone was created, installed, uninstalled, or detached, the old shared index database is saved on boot, the new index is installed, and the service instance svc:/system/zones-upgrade:default is put into the degraded state. As svcs -xv in 11.4 now reports services in the degraded state as well, that serves as a hint to the system administrator to go and check the service log:

root# beadm list
BE          Flags Mountpoint Space  Policy Created
--          ----- ---------- -----  ------ -------
s11_4_01    -     -          2.93G  static 2018-02-28 08:59
s11u3_sru24 -     -          12.24M static 2017-10-06 13:37
s11u3_sru31 NR    /          23.65G static 2018-04-24 02:45
root# zonecfg -z newzone create
root# pkg update entire@latest
root# reboot -f
root#

root# beadm list
BE Name     Flags Mountpoint Space  Policy Created
----------- ----- ---------- ------ ------ ----------------
s11_4_01    -     -          2.25G  static 2018-02-28 08:59
s11u3_sru24 -     -          12.24M static 2017-10-06 13:37
s11u3_sru31 -     -          2.36G  static 2018-04-24 02:45
s11_4_03    NR    /          12.93G static 2018-04-24 04:24
root# svcs -xv
svc:/system/zones-upgrade:default (Zone config upgrade after first boot)
 State: degraded since April 24, 2018 at 04:32:58 AM PDT
Reason: Degraded by service method: "Unexpected situation during zone index conversion to JSON."
   See: http://support.oracle.com/msg/SMF-8000-VE
   See: /var/svc/log/system-zones-upgrade:default.log
Impact: Some functionality provided by the service may be unavailable.
root# tail /var/svc/log/system-zones-upgrade:default.log
[ 2018 Apr 24 04:32:50 Enabled. ]
[ 2018 Apr 24 04:32:56 Executing start method ("/lib/svc/method/svc-zones-upgrade"). ]
Converting /etc/zones/index to /var/share/zones/index.json.
Newly generated /var/share/zones/index.json differs from the previously existing one.
Forcing the degraded state.
Please compare current /var/share/zones/index.json with the original one saved as /var/share/zones/index.json.backup.2018-04-24--04-32-57, then clear the service.
Moving /etc/zones/index to /etc/zones/index.old-format.
Creating old format skeleton /etc/zones/index.
[ 2018 Apr 24 04:32:58 Method "start" exited with status 103. ]
[ 2018 Apr 24 04:32:58 "start" method requested degraded state: "Unexpected situation during zone index conversion to JSON" ]
root# svcadm clear svc:/system/zones-upgrade:default
root# svcs svc:/system/zones-upgrade:default
STATE          STIME    FMRI
online          4:45:58 svc:/system/zones-upgrade:default

If you diff(1)'ed the backed-up JSON index against the present one, you would see that the zone newzone was added. The new index could also be missing some zones that were created before. Based on the index diff output, the administrator can create or remove zones on the system as necessary, using the standard zonecfg(8) command, possibly reusing existing configurations as shown above. Also note that the degraded state here did not mean any degradation of functionality; its sole purpose was to notify the admin about the situation.

Do not use BEs as a Backup Technology for Zones

The previous implementation in 11.3 allowed for using BEs with linked ZBEs as a backup solution for zones. That means that if a zone was uninstalled in the current BE, one could usually boot into an older BE or, in the case of non-global zones, try to attach and update another existing ZBE from the current BE using the attach -z <ZBE> -u/-U subcommand. With the current implementation, uninstalling a zone means uninstalling the zone from the system, and that means uninstalling all its ZBEs (or its ZVOL in the case of Kernel Zones) as well. If you uninstall a zone in 11.4, it is gone. If an admin used the previous implementation also as a convenient backup solution, we recommend using archiveadm instead, whose functionality also provides for backing up zones.

Future Enhancements

An obvious future enhancement would be shared zone configurations across BEs. However, it is not on our short-term plan at this point, nor can we guarantee this functionality will ever be implemented.
One thing is clear - it would be a more challenging task than the shared zone state.
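Returning to the index-conversion service for a moment: the comparison its log asks the administrator to make boils down to a set difference over zone names. A small illustrative sketch in Python (not the service's actual code; it assumes a hypothetical JSON layout with a top-level "zones" list of objects carrying a "name" key):

```python
import json

# Hedged sketch: report which zones were added or removed between the
# backed-up index.json and the current one. The real index.json schema
# is an implementation detail; this layout is assumed for illustration.
def index_diff(old_json, new_json):
    old = {z["name"] for z in json.loads(old_json)["zones"]}
    new = {z["name"] for z in json.loads(new_json)["zones"]}
    return sorted(new - old), sorted(old - new)

old = '{"zones": [{"name": "global"}]}'
new = '{"zones": [{"name": "global"}, {"name": "newzone"}]}'
added, removed = index_diff(old, new)
print("added:", added, "removed:", removed)
```

The two resulting lists correspond directly to the zonecfg create / delete actions the administrator would take after clearing the degraded service.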


Solaris 10 Extended Support Patches & Patchsets Released!

On Tuesday, April 17, we released the first batch of Solaris 10 patches and patchsets under Solaris 10 Extended Support. There were a total of 24 Solaris 10 patches, including kernel updates, and 4 patchsets released on MOS! Solaris 10 Extended Support will run through January 2021. Scott Lynn put together a very informative blog on Solaris 10 Extended Support detailing the benefits that customers can get by purchasing Extended Support for Solaris 10; see https://blogs.oracle.com/solaris/oracle-solaris-10-support-explained. Those of you who took advantage of our previous Extended Support offerings for Solaris 8 and Solaris 9 will notice that we've changed things around a little with Solaris 10 Extended Support: previously, we did not publish any updates to the Solaris 10 Recommended Patchsets during the Extended Support period. This meant that the Recommended Patchsets remained available to all customers with Premier Operating Systems support, as all the patches the patchsets contained had Operating Systems entitlement requirements. Moving forward with Solaris 10 Extended Support, the decision has been made to continue to update the Recommended Patchsets through the Solaris 10 Extended Support period. This means customers that purchase Solaris 10 Extended Support get the benefit of continued Recommended Patchset updates as patches that meet the criteria for inclusion in the patchsets are released. During the Solaris 10 Extended Support period, the updates to the Recommended Patchsets will contain patches that require a Solaris 10 Extended Support contract, so the Solaris 10 Recommended Patchsets will also require a Solaris 10 Extended Support contract during this period.
For customers that do not wish to avail themselves of Extended Support and would like to access the last Recommended Patchsets created prior to the beginning of Extended Support for Solaris 10, the January 2018 Critical Patch Updates (CPUs) for Solaris 10 will remain available to those with Premier Operating Systems support. The CPU Patchsets are rebranded versions of the Recommended Patchset on the CPU dates; the patches included in the CPUs are identical to the Recommended Patchset released on those CPU dates, but the CPU READMEs are updated to reflect their use as CPU resources. CPU patchsets are archived and remain available via MOS at later dates so that customers can easily align to their desired CPU baseline at any time. A further benefit that only Solaris 10 Extended Support customers will receive is access to newly created CPU Patchsets for Solaris 10 through the Extended Support period. The following table provides a quick reference to the recent Solaris 10 patchsets that have been released, including the support contract required to access them (each row links to Patchset Details, a README, and a Download on MOS):

Patchset Name                                  Support Contract Required
Recommended OS Patchset for Solaris 10 SPARC   Extended Support
Recommended OS Patchset for Solaris 10 x86     Extended Support
CPU OS Patchset 2018/04 Solaris 10 SPARC       Extended Support
CPU OS Patchset 2018/04 Solaris 10 x86         Extended Support
CPU OS Patchset 2018/01 Solaris 10 SPARC       Operating Systems Support
CPU OS Patchset 2018/01 Solaris 10 x86         Operating Systems Support

Please reach out to your local sales representative if you wish to get more information on the benefits of purchasing Extended Support for Solaris 10.


Oracle Solaris ZFS Device Removal

At long last, we provide the ability to remove a top-level VDEV from a ZFS storage pool in the upcoming Solaris 11.4 Beta refresh release. For many years, our recommendation was to create a pool based on current capacity requirements and then grow the pool to meet increasing capacity needs by adding VDEVs or by replacing smaller LUNs with larger LUNs. It is trivial to add capacity or replace smaller LUNs with larger LUNs, sometimes with just one simple command. The simplicity of ZFS is one of its great strengths! I still recommend the practice of creating a pool that meets current capacity requirements and then adding capacity when needed. However, if you need to repurpose pool devices in an over-provisioned pool, or if you accidentally misconfigure a pool device, you now have the flexibility to resolve these scenarios. Review the following practical considerations when using this new feature, which should be the exception rather than the rule for pool configuration on production systems:

A virtual (pseudo) device is created to move the data off the removed pool devices, so the pool must have enough space to absorb the creation of the pseudo device
Only top-level VDEVs can be removed from mirrored or RAIDZ pools
Individual devices can be removed from striped pools
Pool device misconfigurations can be corrected

A few implementation details, in case you were wondering:

No additional steps are needed to remap the removed devices
Data from the removed devices is allocated to the remaining devices, but this is not a way to rebalance all data on pool devices
Reads of the reallocated data are done from the pseudo device until those blocks are freed
Some levels of indirection are needed to support this operation, but they should not impact performance or increase memory requirements

See the examples below.

Repurpose Pool Devices

The following pool, tank, has low space consumption, so one VDEV is removed.
# zpool list tank
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank   928G  28.1G  900G   3%  1.00x  ONLINE  -

# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0

errors: No known data errors

# zpool remove tank mirror-1
# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices are being removed.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Sun Apr 15 20:58:45 2018
        28.1G scanned
        3.07G resilvered at 40.9M/s, 21.83% done, 4m35s to go
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror-1  REMOVING     0     0     0
            c1t7d0  REMOVING     0     0     0
            c5t3d0  REMOVING     0     0     0

errors: No known data errors

Run the zpool iostat command to verify that data is being written to the remaining VDEV.

# zpool iostat -v tank 5
                             capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
tank                       28.1G   900G      9    182   932K  21.3M
  mirror-0                 14.1G   450G      1    182  7.90K  21.3M
    c3t2d0                     -      -      0     28  4.79K  21.3M
    c4t2d0                     -      -      0     28  3.92K  21.3M
  mirror-1                     -      -      8    179   924K  21.2M
    c1t7d0                     -      -      1     28   495K  21.2M
    c5t3d0                     -      -      1     28   431K  21.2M
-------------------------  -----  -----  -----  -----  -----  -----

                             capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
tank                       28.1G   900G      0    967      0  60.0M
  mirror-0                 14.1G   450G      0    967      0  60.0M
    c3t2d0                     -      -      0     67      0  60.0M
    c4t2d0                     -      -      0     68      0  60.4M
  mirror-1                     -      -      0      0      0      0
    c1t7d0                     -      -      0      0      0      0
    c5t3d0                     -      -      0      0      0      0
-------------------------  -----  -----  -----  -----  -----  -----

Misconfigured Pool Device

In this case, a device was intended to be added as a cache device but was added as a single device. The problem is identified and resolved.
# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0

errors: No known data errors

# zpool add rzpool c3t3d0
vdev verification failed: use -f to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
Unable to build pool from specified devices: invalid vdev configuration

# zpool add -f rzpool c3t3d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
          c3t3d0    ONLINE       0     0     0

errors: No known data errors

# zpool remove rzpool c3t3d0
# zpool add rzpool cache c3t3d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: resilvered 0 in 1s with 0 errors on Sun Apr 15 21:09:35 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
        cache
          c3t3d0    ONLINE       0     0     0

In summary, Solaris 11.4 includes a handy new option for repurposing pool devices and resolving pool misconfiguration errors.


Oracle Solaris 11.4 Open Beta Refreshed!

On January 30, 2018, we released the Oracle Solaris 11.4 Open Beta. It has been quite successful. Today, we are announcing that we've refreshed the 11.4 Open Beta. This refresh includes new capabilities and additional bug fixes (over 280 of them) as we drive to the General Availability release of Oracle Solaris 11.4. Some new features in this release are:

ZFS Device Removal
ZFS Scheduled Scrub
SMB 3.1.1
Oracle Solaris Cluster Compliance checking
ssh-ldap-getpubkey

Also, the Oracle Solaris 11.4 Beta refresh includes the changes to mitigate CVE-2017-5753, otherwise known as Spectre Variant 1, for Firefox, the NVIDIA Graphics driver, and the Solaris kernel (see MOS docs on SPARC and x86 for more information). Additionally, new bundled software includes gcc 7.3, libidn2, and qpdf 7.0.0, plus more than 45 new bundled software versions. Before I go further, I have to say: the following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. I want to take a few minutes to address some questions I've been getting that the upcoming release of Oracle Solaris 11.4 has sparked. Oracle Solaris 11.4 runs on Oracle SPARC and x86 systems released since 2011, but not on certain older systems that had been supported in Solaris 11.3 and earlier. Specifically, systems not supported in Oracle Solaris 11.4 include those based on the SPARC T1, T2, and T3 processors, or the SPARC64 VII+ and earlier based "Sun4u" systems such as the SPARC Enterprise M4000.
To allow customers time to migrate to newer hardware, we intend to provide critical security fixes as necessary on top of the last SRU delivered for 11.3 for the following year. These updates will not provide the same level of content as regular SRUs and are intended solely as a transition vehicle. Customers using newer hardware are encouraged to update to Oracle Solaris 11.4 and subsequent Oracle Solaris 11 SRUs as soon as practical. Another question I've been getting quite a bit is about the release frequency and strategy for Oracle Solaris 11. After much discussion, internally and externally with you, our customers, about our current continuous delivery release strategy, we are going forward with our current strategy with some minor changes:

Oracle Solaris 11 update releases will be released every year, in approximately the first quarter of our fiscal year (that's June, July, August for most people).
New features will be made available as they are ready to ship in whatever is the next available and appropriate delivery vehicle. This could be an SRU, a CPU, or a new release.
Oracle Solaris 11 update releases will contain the following content: all new features previously released in the SRUs between the releases; any new features that are ready at the time of release; Free and Open Source Software updates (i.e. new versions of FOSS); End of Features and End of Life hardware.

This should make our releases more predictable, maintain the reliability you've come to depend on, and provide new features to you rapidly, allowing you to test and deploy them faster. Oracle Solaris 11.4 is secure, simple, and cloud-ready, and compatible with all your existing Oracle Solaris 11.3 and earlier applications. Go give the latest beta a try. You can download it here.


Oracle Solaris 11.3 SRU 31

We've just released Oracle Solaris 11.3 SRU 31. This is the April Critical Patch Update and contains some important security fixes as well as enhancements to Oracle Solaris. SRU31 is now available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support. The following components have been updated to address security issues:

The Solaris kernel has been updated to mitigate CVE-2017-5753, aka Spectre v1. See Oracle Solaris on SPARC - Spectre (CVE-2017-5753, CVE-2017-5715) and Meltdown (CVE-2017-5754) Vulnerabilities (Doc ID 2349278.1) and Oracle Solaris on x86 - Spectre (CVE-2017-5753, CVE-2017-5715) and Meltdown (CVE-2017-5754) Vulnerabilities (Doc ID 2383531.1) for more information.
Apache Tomcat has been updated to 8.5.28
Firefox has been updated to 52.7.3esr
Thunderbird has been updated to 52.7.0
unzip has been updated to 6.1 beta c23
NTP has been updated to 4.2.8p11
TigerVNC has been updated to 1.7.1
PHP has been updated to 5.6.34 and 7.1.15
MySQL has been updated to 5.5.59 and 5.6.39
irssi has been updated to 1.0.7

Security fixes are also included for quagga, gimp, GNOME remote desktop, vinagre, and NSS. These enhancements have also been added:

Oracle VM Server for SPARC has been updated to version 3.5.0.2. For more information including What's New, Bug Fixes, and Known Issues, see the Oracle VM Server for SPARC 3.5.0.2 Release Notes.
The TigerVNC update introduces the new fltk component in Oracle Solaris 11.3
libidn has been updated to 2.0.4
pam_list now supports wildcards and comment lines
The Java 8, Java 7, and Java 6 packages have been updated. See Note 5 for the location and details on how to update Java. For more information and bugs fixed, see the Java 8 Update 172, Java 7 Update 181, and Java 6 Update 191 Release Notes.
Full details of this SRU can be found in My Oracle Support Doc 2385753.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).


One SMF Service to Monitor the Rest!

Contributed by: Thejaswini Kodavur

Have you ever wondered if there was a single service that monitors all your other services and makes administration easier? If so, then “SMF goal services”, a new feature of Oracle Solaris 11.4, is here to provide a single, unambiguous, and well-defined point at which one can consider the system up and running. You can choose your customized, mission-critical services and link them together into a single SMF service in one step. This SMF service is called a goal service. It can be used to monitor the health of your system upon boot. This makes administration much easier, as monitoring each of the services individually is no longer required! There are two ways in which you can make your services part of a goal service.

1. Using the supplied Goal Service

By default, an Oracle Solaris 11.4 system provides a goal service called “svc:/milestone/goals:default”. This goal service has a dependency on the service “svc:/milestone/multi-user-server:default” by default. You can set your mission-critical service on the default goal service as below:

# svcadm goals system/my-critical-service-1:default

Note: This is a set/clear interface, so the above command clears the dependency on “svc:/milestone/multi-user-server:default”. In order to set the dependency on both services, use:

# svcadm goals svc:/milestone/multi-user-server:default \
system/my-critical-service-1:default

2. Creating your own Goal Service

Oracle Solaris 11.4 allows you to create your own goal service and set your mission-critical services as its dependencies. Follow the steps below to create and use a goal service.
Create a usual SMF service using svcbundle(8), svccfg(8), svcadm(8):

# svcbundle -o new-gs.xml -s service-name=milestone/new-gs -s start-method=":true"
# cp new-gs.xml /lib/svc/manifest/site/new-gs.xml
# svccfg validate /lib/svc/manifest/site/new-gs.xml
# svcadm restart svc:/system/manifest-import
# svcs new-gs
STATE          STIME    FMRI
online          6:03:36 svc:/milestone/new-gs:default

To make this SMF service a goal service, set the property general/goal-service=true:

# svcadm disable svc:/milestone/new-gs:default
# svccfg -s svc:/milestone/new-gs:default setprop general/goal-service=true
# svcadm enable svc:/milestone/new-gs:default

Now you can set dependencies in the newly created goal service using the -g option, as below:

# svcadm goals -g svc:/milestone/new-gs:default system/critical-service-1:default \
system/critical-service-2:default

Note: If you omit the -g option, you set the dependencies on the system-provided default goal service, i.e. svc:/milestone/multi-user-server:default.

On system boot, if one of your critical services does not come online, the goal service goes into the maintenance state.

# svcs -d milestone/new-gs
STATE          STIME    FMRI
disabled        5:54:31 svc:/system/critical-service-2:default
online         Feb_19   svc:/system/critical-service-1:default
# svcs milestone/new-gs
STATE          STIME    FMRI
maintenance     5:54:30 svc:/milestone/new-gs:default

Note: You can use the -d option of svcs(1) to check the dependencies of your goal service. Once all of the dependent services come online, your goal service will also come online. For goal services to be online, they are expected to have all their dependencies satisfied.
# svcs -d milestone/new-gs
STATE          STIME    FMRI
online         Feb_19   svc:/system/critical-service-1:default
online          5:56:39 svc:/system/critical-service-2:default
# svcs milestone/new-gs
STATE          STIME    FMRI
online          5:56:39 svc:/milestone/new-gs:default

Note: For more information, refer to "Goal Services" in smf(7) and the goals subcommand in svcadm(8). The goal service “milestone/new-gs” is your new single SMF service with which you can monitor all of your other mission-critical services! Thus, a goal service acts as the headquarters that monitors the rest of your services.
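The state transitions shown in the transcripts reduce to a simple rule: a goal service is online only when every one of its dependencies is online. A toy Python model of that rule (this is not SMF code; the service names and states are illustrative):

```python
# Toy model of a goal service: "online" only when every dependency is
# online, otherwise "maintenance", mirroring the svcs output above.
def goal_state(dependency_states):
    ok = all(s == "online" for s in dependency_states.values())
    return "online" if ok else "maintenance"

deps = {"system/critical-service-1:default": "online",
        "system/critical-service-2:default": "disabled"}
print(goal_state(deps))  # one dependency is down

deps["system/critical-service-2:default"] = "online"
print(goal_state(deps))  # all dependencies satisfied
```

The value of the real feature is that SMF evaluates this rule for you at boot, so a single svcs check on the goal service answers "is the system up?".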



Oracle Solaris 11.4 beta progress on LP64 conversion

Back in 2014, I posted Moving Oracle Solaris to LP64 bit by bit describing work we were doing then. In 2015, I provided an update covering Oracle Solaris 11.3 progress on LP64 conversion. Now that we've released the Oracle Solaris 11.4 Beta to the public, you can see that the ratio of ILP32 to LP64 programs in /usr/bin and /usr/sbin in the full Oracle Solaris package repositories has shifted dramatically in 11.4:

Release       32-bit      64-bit      total
Solaris 11.0  1707 (92%)  144 (8%)    1851
Solaris 11.1  1723 (92%)  150 (8%)    1873
Solaris 11.2  1652 (86%)  271 (14%)   1923
Solaris 11.3  1603 (80%)  379 (19%)   1982
Solaris 11.4  169 (9%)    1769 (91%)  1938

That's over 70% more of the commands shipped in the OS which can use ADI to stop buffer overflows on SPARC, take advantage of more registers on x86, have more address space available for ASLR to choose from, are ready for timestamps and dates past 2038, and receive the other benefits of 64-bit software described in previous blogs. And while we continue to provide more features for 64-bit programs, such as making ADI support available in the libc malloc, we aren't abandoning 32-bit programs either. A change that just missed our first beta release, but is coming in a later refresh of our public beta, will make it easier for 32-bit programs to use file descriptors > 255 with stdio calls, relaxing a long-held limitation of the 32-bit Solaris ABI. This work was years in the making, and over 180 engineers in the Solaris organization contributed to it, plus even more who came before to make all the FOSS projects we ship and the libraries we provide 64-bit ready so we could make this happen. We thank all of them for making it possible to bring this to you now.


Recent Blogs Round-Up

We're seeing blogs about the Solaris 11.4 Beta show up through different channels like Twitter and Facebook, which means you might have missed some of these, so we thought it would be good to do a round-up. This also means you might have already seen some of them, but hopefully there are some nice new ones among them.

Glenn Faden wrote about:
Authenticated Rights Profiles
Sharing Sensitive Data
Monitoring Access to Sensitive Data
OpenLDAP Support
Using the Oracle Solaris Account Manager BUI
Protecting Sensitive Data

Cindy Swearingen wrote about the Data Management Features

Enrico Perla wrote about libc:malloc meets ADIHEAP

Thorsten Mühlmann wrote a few nice ones recently:
Solaris is dead – long lives Solaris
Privileged Command Execution History Reporting
Per File Auditing
Improved Debugging with pfiles Enhancement
Monitoring File System Latency with fsstat

Eli Kleinman wrote a pretty complete set of blogs on Analytics/StatsStore:
Part 1 on how to configure analytics
Part 2 on how to configure the client capture stat process
Part 3 on how to publish the client captured stats
Part 4 on configuring/accessing the Web Dashboard/UI
Part 5 on capturing Solaris 11.4 Analytics by using the Remote Administration Daemon (RAD)

Andrew Watkins wrote about Setting up Sendmail / SASL to handle SMTP AUTH

Marcel Hofstetter wrote about:
Fast (asynchronous) ZFS destroy
How JomaSoft VDCF can help with updating from 11.3 to 11.4

Rod Evans wrote about elfdiff

And for those interested in linker and ELF related improvements/changes/details, Ali Bahrami published a very complete set of blogs:
kldd: ldd Style Analysis For Solaris Kernel Modules - The new kldd ELF utility brings ldd style analysis to kernel modules.
Core File Enhancements for elfdump - Solaris 11.4 comes with a number of enhancements that allow the elfdump utility to display a wealth of information that was previously hidden in Solaris core files. Best of all, this comes without a significant increase in core file size.
ELF Program Header Names - Starting with Solaris 11.4, program headers in Solaris ELF objects have explicit names associated with them. These names are used by libproc, elfdump, elfedit, pmap, pmadvise, and mdb to eliminate some of the guesswork that goes into looking at process mappings.
ELF Section Compression - In cooperation with the GNU community, we are happy and proud to bring standard ELF section compression APIs to libelf. This builds on our earlier work in 2012 (Solaris 11 Update 2) to standardize ELF compression at the file format level. Now, others can easily access that functionality.
ld -ztype, and Kernel Modules That Know What They Are - Solaris Kernel Modules (kmods) are now explicitly tagged as such, and are treated as final objects.
Regular Expression and Glob Matching for Mapfiles - Pattern matching using regular expressions, globbing, or plain string comparisons brings new expressive power to Solaris mapfiles.
New CRT Objects. (Or: What Are CRT Objects?) - Publicly documented and committed CRT objects for Solaris.
Goodbye (And Good Riddance) to -mt and -D_REENTRANT - A long-awaited simplification of the process of building multithreaded code, one of the final projects delivered to Solaris by Roger Faulkner, made possible by his earlier work on thread unification that landed in Solaris 10.
Weak Filters: Dealing With libc Refactoring Over The Years - Weak filters allow the link-editor to discard unnecessary libc filters as dependencies, because you can't always fix the Makefile.
Where Did The 32-Bit Linkers Go? - In Solaris 11 Update 4 (and Solaris 11 Update 3), the 32-bit version of the link-editor, and related linking utilities, are gone.

We hope you enjoy.



Oracle Solaris 11.3 SRU 29 Released

We've just released Oracle Solaris 11.3 SRU 29. It contains some important security fixes and enhancements. SRU29 is now available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support. Features included in this SRU:

libdax support on x86: this feature enables the use of DAX query operations on x86 platforms. The ISV and open source communities can now develop DAX programs on x86 platforms. An application developed on x86 can be executed on SPARC with no modifications, and the libdax API will choose the DAX operations supported by the platform. The libdax library on x86 uses software emulation and does not require any change to user-developed applications.
Oracle VM Server for SPARC has been updated to version 3.5.0.1. For more information including What's New, Bug Fixes, and Known Issues, see the Oracle VM Server for SPARC 3.5.0.1 Release Notes.
The Java 8, Java 7, and Java 6 packages have been updated. For more information and bugs fixed, see the Java 8 Update 162, Java 7 Update 171, and Java 6 Update 181 Release Notes.

The SRU also updates the following components, which have security fixes:

p7zip has been updated to 16.02
Firefox has been updated to 52.6.0esr
ImageMagick has been updated to 6.9.9-30
Thunderbird has been updated to 52.6.0
libtiff has been updated to 4.0.9
Wireshark has been updated to 2.4.4
The NVIDIA driver has been updated
irssi has been updated to 1.0.6
BIND has been updated to 9.10.5-P3

Full details of this SRU can be found in My Oracle Support Doc 2361795.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).

posix_spawn() as an actual system call

History

As a developer, there are always those projects where it is hard to find a way forward. Drop the project for now and find another, if only to rest your eyes and gain a new insight into the temporarily abandoned project. This is how I embarked on posix_spawn() as an actual system call, which you will find in Oracle Solaris 11.4.

The original library implementation of posix_spawn() uses vfork(), but why care about the old address space if you are not going to use it? Or, worse, stop all the other threads in the process and not start them again until exec succeeds or exit() is called? As I had already written kernel modules for nefarious reasons to run executables directly from the kernel, I decided to benchmark the simple "make process, execute /bin/true" against posix_spawn() from the library. Even with two threads, posix_spawn() scaled poorly: additional threads did not allow a large number of additional spawns per second.

Starting a new process

All ways to start a new process need to copy a number of process properties: file descriptors, credentials, priorities, resource controls, etc.

The original way to start a new process is fork(); you need to mark all the pages as copy-on-write (O(n) in the number of pages in the process), so this gets more and more expensive as the process gets larger. In Solaris we also reserve all the needed swap; a large process calling fork() doubles its swap requirement.

In BSD, vfork() was introduced; it borrows the address space and was cheap when it was invented. In much larger processes with hundreds of threads, it became more and more of a bottleneck. Dynamic linking also throws a spanner in the works: what you can do between vfork() and the final exec() is extremely limited.

In the standards universe, posix_spawn() was invented; it was aimed mostly at small embedded systems, and only a limited number of specific actions can be performed before the new executable is run.
As it was part of the standard, Solaris grew its own copy, built on top of vfork(). It has, of course, the same problems as vfork(); but because it is implemented in the library, we can be sure we steer clear of all the other vfork() pitfalls.

Native spawn(2) call

The native spawn(2) system call introduced in Oracle Solaris 11.4 shares a lot of code with forkx(2) and execve(2). It mostly avoids doing those unneeded operations:

- do not stop all threads
- do not copy any data about the current executable
- do not clear all watchpoints (vfork())
- do not duplicate the address space (fork())
- no need to handle shared memory segments
- do not copy one or more of the threads (fork1/forkall); create a new one instead
- do not copy all file pointers
- no need to restart all the threads held earlier

The exec() call copies the arguments from its own address space, but by the time spawn(2) needs them, it is already in a new process. So early in the spawn(2) system call we copy the environment vector and the arguments and save them away. The data blob is given to the child, and the parent waits until the child is about to return from the system call in the new process, or until it decides that it can't actually exec and calls exit instead.

A process can call spawn(2) in all its threads, and the concurrency is only limited by locks that need to be held briefly when processes are created.

The performance win depends on the application; you won't win anything unless you use posix_spawn(). I was very happy to see that our standard shell uses posix_spawn() to start new processes, as do popen(3c) and system(3c), so the call is well tested. The more threads you have, the bigger the win. Stopping a thread is expensive, especially if it is held up in a system call. The world used to stop, but now it just continues.

Support in truss(1), mdb(1)

When developing a new system call, special attention needs to be given to proc(5) and truss(1) interaction.
The spawn(2) system call is no exception, only it is much harder to get right; support is also needed in debuggers, or they won't see a new process starting. This includes mdb(1) as well as truss(1). They also need to learn that when spawn(2) succeeds, they are stopping in a completely different executable; we may also have crossed a privilege boundary, e.g., when spawning su(8) or ping(8).