
News, tips, partners, and perspectives for the Oracle Solaris operating system

Recent Posts

Announcing Oracle Solaris 11.4 SRU8

Today we are releasing SRU 8 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces the following enhancements:

- Integration of bug 28060039 introduced an issue where firmware update/query commands would log ereports, and repeated execution of such commands could leave a NIC marked faulty or degraded. The issue has been addressed in this SRU.
- The UCB libraries (libucb, librpcsoc, libdbm, libtermcap, and libcurses) have been reinstated for Oracle Solaris 11.4
- Re-introduction of the fc-fabric service
- ibus has been updated to 1.5.19

The following components have also been updated to address security issues:

- NTP has been updated to 4.2.8p12
- Firefox has been updated to 60.6.0esr
- BIND has been updated to 9.11.6
- OpenSSL has been updated to 1.0.2r
- MySQL has been updated to 5.6.43 & 5.7.25
- libxml2 has been updated to 2.9.9
- libxslt has been updated to 1.1.33
- Wireshark has been updated to 2.6.7
- ncurses has been updated to 6.1.0.20190105
- Apache Web Server has been updated to 2.4.38
- perl 5.22
- pkg.depot

Full details of this SRU can be found in My Oracle Support Doc 2529131.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).
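Applying the SRU from the support repository is a normal packaging operation; a minimal sketch, assuming your system's publisher is already pointed at the support repository (the boot-environment name below is purely illustrative):

# Preview what the update would do, then apply it to a fresh boot environment
# pkg update -nv
# pkg update --be-name solaris-11_4_8

Because the update lands in a new boot environment, you can always boot back into the previous one if something goes wrong.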


Events

Coming To a Place Near You - Presentations on Oracle Solaris

In the coming months we have some really nice in-person events coming your way. The first thing I wanted to mention is that we will be holding two Oracle Solaris TechDays: on April 29th in Dubai and on May 14th in Prague. These TechDays are all-day events with in-depth presentations and demonstrations on various Oracle Solaris technologies and features, and how to use them. What also makes these TechDays special is that the presentations are given by Bill Nesheim -- the VP of Engineering for Oracle Solaris -- and his top engineers, giving you a unique opportunity to interact with them directly. So these are both a full day of technical presentations on Oracle Solaris.

Then I also wanted to point out that there are events happening in other locations. These events will have a combination of presentations on various Oracle Systems products like the Private Cloud Appliance (PCA), Oracle SPARC systems, the Oracle ZS-7 storage system, and Oracle Solaris, as well as a presentation on how to best refresh your estate to lower costs and risk. The events will be on April 9th in Stockholm, on April 9th in Rome, on April 10th in Helsinki, on April 25th in Dallas TX, and on April 25th in Utrecht. So if you live close to these events and are interested, you are cordially invited to register and attend. Going forward, more events will be organized; to find these you can always check the Oracle Events website and search on "Solaris".

And finally, I wanted to point out that for those who don't live close to these locations, we'll also be holding a Virtual Seminar on Continuous Innovation with Oracle Solaris 11.4 that anybody from anywhere can join. This Virtual Seminar is a more technical, in-depth 2-hour session on various Oracle Solaris 11.4 technologies with examples of how they could be used. The invite and registration page will follow shortly. [Update] Here's the link to the Virtual Seminar on April 25th.


Oracle Developer Studio

Building Opensource on Solaris 11.4

I love opensource software, I use it every day, but sometimes a component I need is either not present in, or at the wrong version in, Oracle Solaris. We often hear the same requests from our customers too: "Please can I have version x.y of opensource software FOO on Oracle Solaris 11.4." Oracle Solaris depends heavily on opensource software and we try to keep it reasonably up to date, while being careful not to break anything. However we don't have every piece of opensource available, so I thought I'd address this by writing a blog to show how simple it is to use our Userland build infrastructure to build a piece of opensource I actually wanted to use this week.

So what's the software? Well, I use cscope a fair bit. Internally we have two versions, cscope and cscope-fast. The former came to us when we licensed AT&T System V Unix, and we have evolved it ever since. The latter is a fork to optimise search speed (at the expense of being slower to generate the index file). cscope-fast isn't that good at doing Python either, which I am doing a lot more of these days. I noticed that in 2001 SCO, who bought the rights to AT&T System V Unix, had opensourced cscope under the BSD license, so I thought it was worth seeing how it compared, both in terms of speed and how it handles Python.

It might now be worth a brief discussion about what "userland" actually is and how it works. Userland is our way of importing and building opensource software in a simple, repeatable way. It does so by downloading the source of a project (either as a tarball, or by using the source code management system to take a copy), then applying any patches we need to get it to work on Oracle Solaris (thankfully fewer and fewer these days), and then building it. It can even generate IPS packages if you create a package manifest.

I've done everything in 11.4 GA (General Availability) using the Oracle Technology Network (OTN) versions of Oracle Developer Studio and Oracle Solaris, but if you have a support contract you should obviously use the latest versions you can. I went to OTN and registered for the Developer Studio package repository (I won't cover that here, it's pretty straightforward and documented on the OTN site). You need to register to get certificates and keys to access it.

So I booted up my 11.4 GA VM and installed the following packages to get me started:

$ sudo pkg install flex git gcc-7

I know I'm going to need gcc and git; I only found out later I needed flex to build cscope. I decided to install Developer Studio 12.6 (the OTN version of Studio 12.4 doesn't install on 11.4, as 11.4 removed the Python 2.6 interpreter; you can get a patched version of Studio 12.4 if you really need to):

$ sudo pkg install --accept developerstudio-126

We need Studio to do the setup for the userland build infrastructure; we won't be using it to build cscope.

OK, so now we're ready to start. First off, create a directory where you want to do the work and clone the repository there:

$ git clone https://github.com/oracle/solaris-userland.git
Cloning into 'solaris-userland'...
remote: Enumerating objects: 1251, done.
remote: Counting objects: 100% (1251/1251), done.
remote: Compressing objects: 100% (683/683), done.
remote: Total 91074 (delta 677), reused 935 (delta 530), pack-reused 89823
Receiving objects: 100% (91074/91074), 137.57 MiB | 8.05 MiB/s, done.
Resolving deltas: 100% (34629/34629), done.

Now we have a local copy to work with. Next we need to set up the build environment.
Because we're not inside Oracle, and are using Studio 12.6, we need to change a couple of things. In make-rules/shared-macros.mk, find the SPRO_VROOT definition and change it to

SPRO_VROOT ?= $(SPRO_ROOT)/developerstudio12.6

and remove the line starting INTERNAL_ARCHIVE_MIRROR=. In make-rules/ips-buildinfo.mk, find the CANONICAL_REPO line (it's currently the last one) and change it to point to the Oracle Solaris release repo:

CANONICAL_REPO ?= http://pkg.oracle.com/solaris/release/

Now we can set up the build environment:

$ gmake setup
/bin/mkdir -p /export/home/chris/userland-src/solaris-userland/i386
Generating component list...
Generating component dependencies...
Generating component list...
Generating component dependencies...
/bin/mkdir -p /export/home/chris/userland-src/solaris-userland/i386/logs
/bin/mkdir -p /export/home/chris/userland-src/solaris-userland/i386/home
/usr/bin/pkgrepo create file:/export/home/chris/userland-src/solaris-userland/i386/repo
/usr/bin/pkgrepo add-publisher -s file:/export/home/chris/userland-src/solaris-userland/i386/repo nightly
/usr/bin/pkgrepo add-publisher -s file:/export/home/chris/userland-src/solaris-userland/i386/repo userland-localizable
/usr/bin/pkgrepo create file:/export/home/chris/userland-src/solaris-userland/i386/repo.experimental
/usr/bin/pkgrepo add-publisher -s file:/export/home/chris/userland-src/solaris-userland/i386/repo.experimental nightly
/usr/bin/pkgrepo add-publisher -s file:/export/home/chris/userland-src/solaris-userland/i386/repo.experimental userland-localizable
building tools...
/usr/gnu/bin/make -C ../tools clean
make[1]: Entering directory '/export/home/chris/userland-src/solaris-userland/tools'
make[1]: Leaving directory '/export/home/chris/userland-src/solaris-userland/tools'
/usr/gnu/bin/make -C ../tools setup
make[1]: Entering directory '/export/home/chris/userland-src/solaris-userland/tools'
make[1]: Leaving directory '/export/home/chris/userland-src/solaris-userland/tools'
Generating pkglint(1) cache from CANONICAL_REPO http://pkg.oracle.com/solaris/release/...

The last bit takes a while. Now create a directory for cscope:

$ mkdir components/cscope
$ cd components/cscope

We need to create a Makefile. This is what mine looks like:

$ cat Makefile
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64
COMPILER=gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= cscope
COMPONENT_VERSION= 15.9
COMPONENT_SRC= $(COMPONENT_NAME)-$(COMPONENT_VERSION)
IPS_COMPONENT_VERSION= $(COMPONENT_VERSION)
BUILD_VERSION= 1
COMPONENT_PROJECT_URL= https://cscope.sourceforge.net
#COMPONENT_BUGDB= utility/cscope
#COMPONENT_ANITYA_ID= 1865
COMPONENT_ARCHIVE= cscope-15.9.tar.gz
COMPONENT_ARCHIVE_URL= https://sourceforge.net/projects/cscope/files/latest/download
COMPONENT_ARCHIVE_HASH=
COMPONENT_MAKE_JOBS= 1
BUILD_STYLE= configure
TEST_TARGET= $(NO_TESTS)
include $(WS_MAKE_RULES)/common.mk

REQUIRED_PACKAGES += library/ncurses

This is about as small as I could make it. You need the location from which to download the tarball, plus the package and component names; I only found out later that I needed to add a dependency on library/ncurses to get packaging to resolve the dependency correctly. Later on you will want to populate COMPONENT_ARCHIVE_HASH to be sure you're getting the file you intended. BUILD_STYLE tells the makefiles to run configure within the build directory. We will need to create a package manifest, but if we wait till we're building cleanly we can use gmake sample-manifest to get something close to usable.

So assuming I've got the Makefile right, I should be able to build it:

$ gmake install
<stuff deleted>

This is gmake getting the source for us:

/export/home/chris/userland-src/solaris-userland/tools/userland-fetch --file cscope-15.9.tar.gz --url 'https://sourceforge.net/projects/cscope/files/latest/download'
Source cscope-15.9.tar.gz... not found, skipping file copy
Source https://sourceforge.net/projects/cscope/files/latest/download... downloading... validating signature... skipped (no signature URL) validating hash... skipping (no hash)
hash is: sha256:c5505ae075a871a9cd8d9801859b0ff1c09782075df281c72c23e72115d9f159
/usr/bin/touch cscope-15.9.tar.gz
<More stuff deleted>

This is it running configure (and where I found I needed flex installed):

(cd /export/home/chris/userland-src/solaris-userland/components/cscope/build/amd64 ; /usr/bin/env CONFIG_SHELL="/bin/bash" PKG_CONFIG_PATH="/usr/lib/amd64/pkgconfig" CC="/usr/gcc/7/bin/gcc" CXX="/usr/gcc/7/bin/g++" PATH="/usr/bin/amd64:/usr/bin:/usr/gnu/bin" CC_FOR_BUILD="/usr/gcc/7/bin/gcc -m64" CXX_FOR_BUILD="/usr/gcc/7/bin/g++ -m64" CPPFLAGS="-m64" "ac_cv_func_realloc_0_nonnull=yes" "NM=/usr/gnu/bin/nm" INTLTOOL_PERL="/usr/perl5/5.22/bin/perl" CFLAGS="-m64 -O3" CXXFLAGS="-m64 -O3" LDFLAGS="" /bin/bash \
/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/configure --prefix=/usr --mandir=/usr/share/man --bindir=/usr/bin --sbindir=/usr/sbin --libdir=/usr/lib/amd64 )
checking for a BSD-compatible install... /usr/bin/ginstall -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/gmkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking build system type... x86_64-pc-solaris2.11
checking host system type... x86_64-pc-solaris2.11
checking for style of include used by make... GNU
checking for gcc... /usr/gcc/7/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
<More stuff deleted>

Here it's starting gmake install in the build area:

make[4]: Entering directory '/export/home/chris/userland-src/solaris-userland/components/cscope/build/amd64/src'
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT fscanner.o -MD -MP -MF .deps/fscanner.Tpo -c -o fscanner.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/fscanner.c
mv -f .deps/fscanner.Tpo .deps/fscanner.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT egrep.o -MD -MP -MF .deps/egrep.Tpo -c -o egrep.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/egrep.c
mv -f .deps/egrep.Tpo .deps/egrep.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT alloc.o -MD -MP -MF .deps/alloc.Tpo -c -o alloc.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/alloc.c
mv -f .deps/alloc.Tpo .deps/alloc.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT basename.o -MD -MP -MF .deps/basename.Tpo -c -o basename.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/basename.c
mv -f .deps/basename.Tpo .deps/basename.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT build.o -MD -MP -MF .deps/build.Tpo -c -o build.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/build.c
mv -f .deps/build.Tpo .deps/build.Po
/usr/gcc/7/bin/gcc -DHAVE_CONFIG_H -I. -I/export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src -I.. -I/usr/include/ncurses -m64 -m64 -O3 -MT command.o -MD -MP -MF .deps/command.Tpo -c -o command.o /export/home/chris/userland-src/solaris-userland/components/cscope/cscope-15.9/src/command.c
mv -f .deps/command.Tpo .deps/command.Po

This ends up creating a prototype directory under $(BUILD) to mirror the directory structure of the installed package. This is very useful as pkg(7) can use this to generate a manifest for us. So at this point we can run cscope from the prototype area, but that's not really very interesting; we want IPS packages to install.
Well, fortunately now that we have the proto area, we can run gmake sample-manifest:

$ gmake sample-manifest
/export/home/chris/userland-src/solaris-userland/tools/manifest-generate \
/export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386 | \
/usr/bin/pkgmogrify -D PERL_ARCH=i86pc-solaris-thread-multi-64 -D PERL_VERSION=5.22 -D IPS_COMPONENT_RE_VERSION=15\\.9 -D COMPONENT_RE_VERSION=15\\.9 -D PYTHON_2.7_ONLY=# -D PYTHON_3.4_ONLY=# -D PYTHON_3.5_ONLY=# -D SQ=\' -D DQ=\" -D Q=\" -I/export/home/chris/userland-src/solaris-userland/components/cscope -D SOLARIS_11_3_ONLY="#" -D SOLARIS_11_5_ONLY="#" -D SOLARIS_11_3_4_ONLY="" -D SOLARIS_11_4_5_ONLY="" -D SOLARIS_11_4_ONLY="" -D PY3_ABI3_NAMING="#" -D PY3_CYTHON_NAMING="" -D ARC_CASE="" -D TPNO="" -D BUILD_VERSION="1" -D OS_RELEASE="5.11" -D SOLARIS_VERSION="2.11" -D PKG_SOLARIS_VERSION="11.5" -D CONSOLIDATION="userland" -D CONSOLIDATION_CHANGESET="" -D CONSOLIDATION_REPOSITORY_URL="https://github.com/oracle/solaris-userland.git" -D COMPONENT_VERSION="15.9" -D IPS_COMPONENT_VERSION="15.9" -D HUMAN_VERSION="" -D COMPONENT_ARCHIVE_URL="https://sourceforge.net/projects/cscope/files/latest/download" -D COMPONENT_PROJECT_URL="https://cscope.sourceforge.net" -D COMPONENT_NAME="cscope" -D HG_REPO="" -D HG_REV="" -D HG_URL="" -D GIT_COMMIT_ID="" -D GIT_REPO="" -D GIT_TAG="" -D MACH="i386" -D MACH32="i86" -D MACH64="amd64" -D PUBLISHER="nightly" -D PUBLISHER_LOCALIZABLE="userland-localizable" /dev/fd/0 /export/home/chris/userland-src/solaris-userland/transforms/generate-cleanup | \
sed -e '/^$/d' -e '/^#.*$/d' | /usr/bin/pkgfmt | \
cat /export/home/chris/userland-src/solaris-userland/transforms/manifest-metadata-template - >/export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-generated.p5m

So we have a manifest in /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-generated.p5m. It's not usable as-is, but it's close. For me it was a bit hit and miss adding and removing things till it worked. The diffs I ended up with are:

$ diff -u /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-generated.p5m cscope.p5m
--- /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-generated.p5m 2019-03-28 16:51:42.034736910 +0000
+++ cscope.p5m 2019-03-28 15:58:57.122806976 +0000
@@ -23,26 +23,17 @@
 # Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
 #
+<transform file path=usr/.*/man/.+ -> default mangler.man.stability volatile>
 set name=pkg.fmri \
-    value=pkg:/$(IPS_PKG_NAME)@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
-set name=pkg.summary value="XXX Summary XXX"
-set name=com.oracle.info.description value="XXX Description XXX"
-set name=com.oracle.info.tpno value=$(TPNO)
+    value=pkg:/site/developer/cscope@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
+set name=pkg.summary value=cscope
+set name=com.oracle.info.description value="Source Browser"
 set name=info.classification \
-    value="org.opensolaris.category.2008:XXX Classification XXX"
+    value=org.opensolaris.category.2008:Development/System
 set name=info.source-url value=$(COMPONENT_ARCHIVE_URL)
 set name=info.upstream-url value=$(COMPONENT_PROJECT_URL)
 set name=org.opensolaris.arc-caseid value=PSARC/YYYY/XXX
-set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
-
-license $(COPYRIGHT_FILE) license='$(COPYRIGHTS)'
-
 file path=usr/bin/cscope
 file path=usr/bin/ocs
 file path=usr/share/man/man1/cscope.1
+license BSD license=BSD

So I set the category, package name, summary, description and license manually. I also added a transform for the man page; this is required. A quick point about package names: I've put it in a hierarchy that Oracle will never publish packages in (site). We also need to consider the license. You can't delete it, so I copied the license from the cscope home page into a file called BSD in the cscope component directory.

Remove the generated package manifest (else it'll try to use that instead), run pkgfmt on your manifest to make it acceptable to pkg, and simply:

$ gmake publish
/usr/bin/env; /usr/bin/pkgdepend generate \
-m -d /export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386/mangled -d /export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386 -d /export/home/chris/userland-src/solaris-userland/components/cscope/build -d /export/home/chris/userland-src/solaris-userland/components/cscope -d cscope-15.9 -d /export/home/chris/userland-src/solaris-userland/licenses /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-cscope.mangled >/export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-cscope.depend
/usr/bin/pkgdepend resolve -e /export/home/chris/userland-src/solaris-userland/components/cscope/build/resolve.deps -m /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-cscope.depend
/usr/bin/touch /export/home/chris/userland-src/solaris-userland/components/cscope/build/.resolved-i386
<More stuff deleted>
/usr/bin/pkgsend -s file:/export/home/chris/userland-src/solaris-userland/i386/repo publish --fmri-in-manifest --no-catalog -d /export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386/mangled -d /export/home/chris/userland-src/solaris-userland/components/cscope/build/prototype/i386 -d /export/home/chris/userland-src/solaris-userland/components/cscope/build -d /export/home/chris/userland-src/solaris-userland/components/cscope -d cscope-15.9 -d /export/home/chris/userland-src/solaris-userland/licenses -T \*.py /export/home/chris/userland-src/solaris-userland/components/cscope/build/manifest-i386-cscope.depend.res
pkg://nightly/developer/cscope@15.9,1:20190328T160907Z PUBLISHED
<More stuff deleted>

Right, we have packages in a repository; we can install it:

$ sudo pkg set-publisher -p file:/export/home/chris/userland-src/solaris-userland/i386/repo
pkg set-publisher: Added publisher(s): nightly
Updated publisher(s): userland-localizable
$ sudo pkg install cscope
           Packages to install:  1
       Create boot environment: No
Create backup boot environment: No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                1/1           4/4      0.2/0.2    --

PHASE                                          ITEMS
Installing new actions                         18/18
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           4/4

So let's run it on cscope and see what it looks like:

$ cd cscope-15.9/src
$ cscope -R

So there you have it: cscope, cscoping cscope source.
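One loose end from the Makefile above: the fetch step printed the archive's hash, so you can now pin it by populating COMPONENT_ARCHIVE_HASH with the value userland-fetch reported (the sha256: prefix is the format the tools print; verify it against the file you actually downloaded):

COMPONENT_ARCHIVE_HASH= sha256:c5505ae075a871a9cd8d9801859b0ff1c09782075df281c72c23e72115d9f159

With that in place, a future fetch should refuse an archive that doesn't match the recorded hash.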


Oracle Developer Studio

Building OpenJDK 12 using JDK 8

Guest Author: Petr Sumbera

As you may know, OpenJDK 12 was recently released, and you might want to try it on Oracle Solaris. There are already some great blog posts out by Martin Mueller and Andrew Watkins describing how to build recent OpenJDK versions from sources. But there is an issue: to build OpenJDK you will always need the previous version as a boot JDK. Basically, to build OpenJDK 12 you will need OpenJDK 11, and so on. For SPARC it's rather easy, as you can download the Java SE Development Kit 11 for Oracle Solaris on SPARC. But not for Oracle Solaris on x64. So you will need to start with JDK 8, which is bundled with Oracle Solaris.

Following are copy-and-paste instructions which should build all OpenJDK versions from 9 through the latest 12. You just need Oracle Solaris 11.4 with system header files, Mercurial and JDK 8 installed:

pkg install mercurial system/header jdk-8

You might also want to check whether you have the facet.devel set to True or False. When set to False there are no C header files on the system, so the correct setting is True:

pkg facet | grep ^devel && pkg change-facet facet.devel=True

Note that a setting of None should default to True, i.e. have the same effect as True.

As a compiler you will need Oracle Solaris Studio 12.4 (ideally the latest update). The installation guide is here. After a successful build you should find your fresh OpenJDK tar archives in the openjdk*/build/solaris-*-server-release/bundles/ directory.

So now copy and paste the following:

cat > build.sh <<EOF
set -xe

# Oracle Developer Studio 12.4 and JDK 8 are needed.
STUDIO="/opt/solarisstudio12.4/bin"
JDK="/usr/jdk/instances/jdk1.8.0/bin/"

# OpenJDK 12 has some issues:
# - https://bugs.openjdk.java.net/browse/JDK-8211081
# - shenandoahgc doesn't build on Solaris i386 (and is not supported on sparc)
CONFIG_OPTS_JDK12="--with-jvm-features=-shenandoahgc --disable-warnings-as-errors"

# EM_486 is no longer defined since Solaris 11.4
function fixsrc1 {
  FILE=os_solaris.cpp
  for f in hotspot/src/os/solaris/vm/\$FILE src/hotspot/os/solaris/\$FILE; do
    [ ! -f "\$f" ] || gsed -i 's/EM_486/EM_IAMCU/' "\$f"
  done
}

# caddr32_t is already defined in Solaris 11
function fixsrc2 {
  FILE=src/java.base/solaris/native/libnio/ch/DevPollArrayWrapper.c
  for f in "\$FILE" "jdk/\$FILE"; do
    [ ! -f "\$f" ] || gsed -i '/typedef.*caddr32_t;/d' "\$f"
  done
}

for VERSION in {9..12}; do
  hg clone http://hg.openjdk.java.net/jdk-updates/jdk\${VERSION}u openjdk\${VERSION}
  cd openjdk\${VERSION}/

  # OpenJDK 9 uses a script to download nested repositories
  test -f get_source.sh && bash get_source.sh

  # Some source changes are needed to build OpenJDK on Solaris 11.4.
  fixsrc1 ; fixsrc2

  [[ \${VERSION} -lt 12 ]] || CONFIGURE_OPTIONS="\${CONFIG_OPTS_JDK12}"

  # Oracle Solaris Studio 12.4 may fail while building hotspot-gtest (_stdio_file.h issue)
  PATH="\$JDK:\$STUDIO:/usr/bin/" bash ./configure --disable-hotspot-gtest \${CONFIGURE_OPTIONS}
  gmake bundles 2>&1 | tee build.log

  RELEASE_DIR=\`grep "^Finished building target" build.log | cut -d \' -f 4\`
  JDK="\`pwd\`/build/\${RELEASE_DIR}/images/jdk/bin/"
  cd ..
done
EOF
bash build.sh

This should correctly build all the versions of OpenJDK from 9 to 12. Have fun.
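If you want a quick sanity check of what was built, each iteration leaves behind a full JDK image, the same one the script uses to seed the boot JDK for the next version. A sketch (the exact release directory name comes from the grep in the script above, so yours may differ):

$ openjdk12/build/solaris-x86_64-server-release/images/jdk/bin/java -version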


Announcements

Registration is Open for Oracle OpenWorld and Oracle Code One San Francisco 2019

Register now for Oracle OpenWorld and Oracle Code One San Francisco. These concurrent events are happening September 16-19, 2019 at Moscone Center. By registering now, you can take advantage of the Super Saver rate before it expires on April 20, 2019.

This year at Oracle OpenWorld San Francisco, you’ll learn how to do more with your applications, adopt new technologies, and network with product experts and peers. Don’t miss the opportunity to experience:

- Future technology firsthand, such as Oracle Autonomous Database, blockchain, and artificial intelligence
- New connections by meeting some of the brightest technologists from some of the world’s most compelling companies
- Technical superiority by taking home new skills and getting an insider’s look at the latest Oracle technology
- Lasting memories while experiencing all that Oracle has to offer, including many opportunities to unwind and have some fun

At Oracle Code One, the most inclusive developer conference on the planet, come learn, experiment, and build with us. You can participate in discussions on Linux, Java, Go, Rust, Python, JavaScript, SQL, R, and more. See how you can shape the future and break new ground. Join deep-dive sessions and hands-on labs covering leading-edge technology such as blockchain, chatbots, microservices, and AI. Experience cloud development technology in the Groundbreakers Hub, featuring workshops and other live, interactive experiences and demos.

Register Now and Save! Now is the best time to register for these popular conferences and take us up on the Super Saver rate. Then be sure to check back in early May 2019 for the full content catalog, where you will see the breadth and depth of our sessions. You can also sign up to be notified when the content catalog goes live.

Register now for Oracle OpenWorld San Francisco 2019
Register now for Oracle Code One San Francisco 2019

We look forward to seeing you in September!


Announcements

Future of Python on Solaris

If you are an administrator of Oracle Solaris 11 systems you will very likely be aware that several of the critical installation and management tools are implemented in Python rather than the more traditional C and/or shell script. Python is often a good choice as a systems development language, and Oracle Solaris is not alone in choosing to use it more extensively. If you have done any programming with Python or have installed other Python tools, you are probably aware that there are compatibility issues between the Python 2.x and 3.x language runtimes. In Oracle Solaris 11.4.7 we deliver multiple releases of Python: 2.7, 3.4, 3.5. Future updates to the Oracle Solaris support repository will change the exact version mix. The Python community will no longer be supporting and providing security or other fixes for the 2.7 environment from the year 2020. That is currently just about 9 months away.

Python Versions

A default installation of Oracle Solaris 11.4 will currently have at least Python 2.7 and a version of Python 3; /usr/bin/python by default will run Python 2.7. There are also paths that allow explicitly picking a version, which is very useful for the #! line of a script: e.g. /usr/bin/python2, /usr/bin/python3, or the even more specific /usr/bin/python3.5. The packaging system allows you to choose which version of Python /usr/bin/python represents; there are two package mediators used to control this. The 'python' mediator selects what /usr/bin/python represents, and the 'python3' mediator selects which minor version of the 3.x release train the /usr/bin/python3 link points to. In a future support repository update we will change the default to be a version of Python 3.

Python Modules & Solaris Tools

The Oracle Solaris 11 package tools (IPS: Image Packaging System) are one of the critical OS tools implemented in Python; in current releases of 11.4 this is still using Python 2.7, but we have also delivered a parallel version for testing that uses Python 3.4. We don't expect to have a single support repository update where we switch over all of the OS Python consumers to Python 3 in one release. When we implement operating system tools using Python, the #! line should always be precise as to which major.minor version the script is expected to run with. Our package dependencies then ensure the correct versions of pkg:/runtime/python are installed.

As a result of using Python to implement parts of the OS, we deliver a fairly large number of Python modules from open source communities. When we deliver a Python module via IPS it is always installed below /usr/lib/pythonX.Y/vendor-packages/. The intent of the vendor-packages area is to be like the core Python site-packages area, but to make it clear these modules came as part of the OS and should be updated with the OS tools. This provides some level of isolation between OS-installed modules and administrator/developer use of tools such as the 'pip' package installer for Python.

Sharing the Python Installation

We have seen some recent cases where the use of 'pip' to add additional modules, or different versions of modules delivered to vendor-packages, has broken critical system administration tools such as /usr/bin/pkg and /usr/bin/beadm. The reason for this is that vendor-packages is really just an extension of the core Python site-packages concept. While it is possible to instruct the Python interpreter to ignore the user and site packages areas for isolation reasons, that doesn't help when the OS core tools are implemented in Python and need access to content installed in the vendor-packages area. Oracle Solaris is not the only operating system impacted by this problem; other OS vendors are developing their own solutions, and some of the concepts and recommendations you will find are similar.

Firstly, like others, we strongly recommend the use of Python Virtual Environments when using pip or other Python install tools to add modules not delivered by Oracle Solaris packages. If you wish to leverage the fact that Oracle Solaris has already included some of the modules you need, you can use '--system-site-packages' when you create the virtual environment. Using that allows pip/python to reference the system site-packages, and thus find the vendor-packages area, but it won't install content into /usr/lib/pythonX.Y/site-packages. For example:

$ python3 -m venv --system-site-packages /usr/local/mytool
$ source /usr/local/mytool/bin/activate
(mytool) $ python
Python 3.5.6 (default, Mar  7 2019, 08:31:32) [C] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/usr/lib/python35.zip', '/usr/lib/python3.5', '/usr/lib/python3.5/plat-sunos5', '/usr/lib/python3.5/lib-dynload', '/usr/local/mytool/lib/python3.5/site-packages', '/usr/lib/python3.5/site-packages', '/usr/lib/python3.5/vendor-packages']
>>>

We can see from the above that the Python module search path includes the system and the 'mytool' application area, as well as the Solaris IPS-delivered content in vendor-packages. If we now run 'pip install' it will add any modules it needs to the site-packages directory below 'mytool' rather than the one under /usr/lib/python3.5.

Hardening the Shared Python Installation

So that we can keep providing open source Python modules for administrator/developer use and continue to use Python for the OS itself, we need a solution that protects the Python modules in vendor-packages from being overridden by administrator-installed ones in site-packages. We don't wish to duplicate whole Python environments, since as I mentioned above we currently use multiple Python versions for the OS itself. The solution we have come up with is based on the following requirements:

- pip and other install tools should not be able to add incompatible versions of modules to site-packages that confuse core OS tools
- pip uninstall must not be able to remove packages from vendor-packages instead of site-packages (this happens because 'vendor-packages' is higher than 'site-packages' in the Python module search path)
- pip and other Python installers should work in a virtualenv and remain working for the system 'site-packages' area
- the solution should not require patching the Python runtime or pip to enforce the above

We will be delivering parts of the solution over time, as they are ready and tested to our satisfaction; there should be no impact to any of your tooling that uses the Python we deliver.

The first change we will deliver is to further harden the core Python tools that we author against content in the 'site-packages' area that is potentially toxic to our tools. Ideally we would have been able to just pass -S as part of the #! line of the script; our tools already use -s to ignore the per-user 'site-packages', but that would mean the vendor-packages area is never set up, since it depends on /usr/lib/pythonX.Y/site-packages/vendor-packages.pth being added. So instead we deliver a new module, 'solaris.no_site_packages', that removes the system site-packages while leaving vendor-packages. This module only makes sense for Oracle Solaris core components; in particular, any 3rd party FOSS tools that are in Python do not include this.

We are still investigating the best solution to ensure that tools such as pip cannot uninstall IPS-delivered content in vendor-packages. Patching the version of pip we deliver would only be a partial solution, since it would only work until an admin runs 'pip install --upgrade pip', as that would effectively remove any patch we made to ignore the vendor-packages area. Even with that restriction we may still apply a patch to pip. We could potentially deliver an /etc/pip.conf file that forces the use of virtual environments, as described in the Python virtual-env documentation. However, doing that could have unexpected side effects if you are managing to safely use 'pip install' to place content into the system 'site-packages' area. Instead we intend to use a feature of Oracle Solaris that allows us to tag files as not removable even by the root user (even when running with all privileges), a feature I described in this blog post 11 years ago! IPS can already deliver system attributes (sysattr) for files. Currently we only use that in a single package (pkg://solaris/release/name) to ensure that the version files containing the value used for 'uname -v' are not easy to "accidentally" change; for that we add sysattr=readonly to the package manifest for those files. We are exploring the possibility of using the 'nounlink' or maybe even the 'immutable' attribute to protect all of the Python modules delivered by IPS to the vendor-packages directory. In theory we could do this for all content delivered via IPS that is not explicitly marked with the preserve attribute, but we probably won't go that far initially and will instead focus on hardening the Python environment.

In summary, Python on Oracle Solaris is very much here to stay as /usr/bin/python, and we are working to ensure that you can continue to use the Python modules we deliver while providing an environment where our core tools implemented in Python are safe from any additional Python modules added to the system.
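As a concrete illustration of the mediators mentioned above, you can inspect and change them with the packaging tools; a minimal sketch (the version numbers are whichever 3.x releases your update actually delivers):

$ pkg mediator python python3
# pkg set-mediator -V 3.5 python3

The first command shows what /usr/bin/python and /usr/bin/python3 currently resolve to; the second repoints the python3 link at the 3.5 release train.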


Perspectives

How to Install the OCI CLI on Oracle Solaris

Following on to my previous posts on using Oracle Solaris with Oracle Cloud Infrastructure (OCI)...

All modern clouds provide a fancy browser console that lets you easily manage any of your cloud resources. For power users, though, the browser console is often the slowest way to get things done. Plus, if you're trying to leverage the cloud as part of other automation, which has often been written as traditional Unix shell scripts, you may need a command line interface (CLI). Oracle Cloud Infrastructure (OCI) provides a comprehensive CLI, the oci command, written in Python, as is true of many recent system tools. The CLI and the underlying Python SDK are very portable, but have some dependencies on other Python modules at particular versions, so they can be somewhat complex to install on any particular OS platform. Fortunately, Python has a powerful module known as virtualenv that makes it fairly simple to install and run a Python application and its dependencies in isolation from other Python programs that may have conflicting requirements. This is especially an issue with modern enterprise operating systems such as Solaris, which use Python for system tools and don't necessarily update all of their modules as quickly as the cloud SDKs require. OCI makes use of virtualenv to provide an easy installation setup for any Unix-like operating system, including Solaris 11.4. There are just a couple of extra things we need to do in order for it to work on Solaris, so without further ado, here's a step-by-step guide to installing the OCI CLI on Solaris 11.4:

1. Install Oracle Developer Studio's C compiler - any recent version should do; I've tested 12.4, 12.5 and 12.6. Once you've obtained credentials for the package repository and configured the solarisstudio publisher, the command is simply:

pkg install --accept developer/developerstudio-126/cc

2. Ensure the C compiler command is in your path:

PATH=$PATH:/opt/developerstudio12.6/bin

3. Install the system headers and pkg-config tool:

pkg install system/header developer/build/pkg-config

4. Set the correct include path in your environment:

export CFLAGS=$(pkg-config --cflags libffi)

5. Now follow the installation instructions from the OCI CLI documentation.

Once you download and execute the basic install script, you'll have to answer several prompts regarding the installation locations for the OCI CLI components. After that it will download a series of dependencies, then build and install them to the specified location; this takes just a couple of minutes. I've included a transcript of a session below so you can see what a successful run of the OCI install script looks like. Happy CLI-ing!

{cli-114} bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6283  100  6283    0     0  49472      0 --:--:-- --:--:-- --:--:-- 49472
Downloading Oracle Cloud Infrastructure CLI install script from https://raw.githubusercontent.com/oracle/oci-cli/6dc61e3b5fd2781c5afff2decb532c24969fa6bf/scripts/install/install.py to /tmp/oci_cli_install_tmp__Xfw.
######################################################################### 100.0%
System version of Python must be either a Python 2 version >= 2.7.5 or a Python 3 version >= 3.5.0.
Running install script.
python /tmp/oci_cli_install_tmp__Xfw < /dev/tty
-- Verifying Python version.
-- Python version 2.7.14 okay.
===> In what directory would you like to place the install? (leave blank to use '/home/dminer/lib/oracle-cli'): /export/oci/lib
-- Install directory '/export/oci/lib' is not empty and may contain a previous installation.
===> Remove this directory? (y/N): y
-- Deleted '/export/oci/lib'.
-- Creating directory '/export/oci/lib'.
-- We will install at '/export/oci/lib'.
===> In what directory would you like to place the 'oci' executable? (leave blank to use '/home/dminer/bin'): /export/oci/bin
-- The executable will be in '/export/oci/bin'.
===> In what directory would you like to place the OCI scripts? (leave blank to use '/home/dminer/bin/oci-cli-scripts'): /export/oci/bin/oci-cli-scripts
-- Creating directory '/export/oci/bin/oci-cli-scripts'.
-- The scripts will be in '/export/oci/bin/oci-cli-scripts'.
-- Downloading virtualenv package from https://github.com/pypa/virtualenv/archive/15.0.0.tar.gz.
-- Downloaded virtualenv package to /tmp/tmpGBgVu5/15.0.0.tar.gz.
-- Checksum of /tmp/tmpGBgVu5/15.0.0.tar.gz OK.
-- Extracting '/tmp/tmpGBgVu5/15.0.0.tar.gz' to '/tmp/tmpGBgVu5'.
-- Executing: ['/usr/bin/python', 'virtualenv.py', '--python', '/usr/bin/python', '/export/oci/lib']
Already using interpreter /usr/bin/python
New python executable in /export/oci/lib/bin/python
Installing setuptools, pip, wheel...done.
-- Executing: ['/export/oci/lib/bin/pip', 'install', '--cache-dir', '/tmp/tmpGBgVu5', 'oci_cli', '--upgrade']
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Collecting oci_cli
  Downloading https://files.pythonhosted.org/packages/3b/8e/dd495ec5eff6bbe8e97cb84db9a1f1407fe5ee0e901f5f2833bb47920ae3/oci_cli-2.5.4-py2.py3-none-any.whl (3.7MB)
    100% |████████████████████████████████| 3.7MB 3.3MB/s
Collecting PyYAML==3.13 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/9e/a3/1d13970c3f36777c583f136c136f804d70f500168edc1edea6daa7200769/PyYAML-3.13.tar.gz (270kB)
    100% |████████████████████████████████| 276kB 21.1MB/s
Collecting configparser==3.5.0 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/7c/69/c2ce7e91c89dc073eb1aa74c0621c3eefbffe8216b3f9af9d3885265c01c/configparser-3.5.0.tar.gz
Collecting python-dateutil==2.7.3 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/cf/f5/af2b09c957ace60dcfac112b669c45c8c97e32f94aa8b56da4c6d1682825/python_dateutil-2.7.3-py2.py3-none-any.whl (211kB)
    100% |████████████████████████████████| 215kB 4.0MB/s
Collecting pytz==2016.10 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/f5/fa/4a9aefc206aa49a4b5e0a72f013df1f471b4714cdbe6d78f0134feeeecdb/pytz-2016.10-py2.py3-none-any.whl (483kB)
    100% |████████████████████████████████| 491kB 14.4MB/s
Collecting cryptography==2.4.2 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/f3/39/d3904df7c56f8654691c4ae1bdb270c1c9220d6da79bd3b1fbad91afd0e1/cryptography-2.4.2.tar.gz (468kB)
    100% |████████████████████████████████| 471kB 5.1MB/s
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing wheel metadata ... done
Collecting click==6.7 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
    100% |████████████████████████████████| 71kB 1.6MB/s
Collecting cx-Oracle==7.0 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/b7/70/03dbb0f055ee97f7ddb6c6f11668f23a97b5884fdf4826a006ef91c5085c/cx_Oracle-7.0.0.tar.gz (281kB)
    100% |████████████████████████████████| 286kB 19.0MB/s
Collecting retrying==1.3.3 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/44/ef/beae4b4ef80902f22e3af073397f079c96969c69b2c7d52a57ea9ae61c9d/retrying-1.3.3.tar.gz
Collecting six==1.11.0 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Collecting arrow==0.10.0 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/54/db/76459c4dd3561bbe682619a5c576ff30c42e37c2e01900ed30a501957150/arrow-0.10.0.tar.gz (86kB)
    100% |████████████████████████████████| 92kB 2.1MB/s
Collecting jmespath==0.9.3 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/b7/31/05c8d001f7f87f0f07289a5fc0fc3832e9a57f2dbd4d3b0fee70e0d51365/jmespath-0.9.3-py2.py3-none-any.whl
Collecting certifi (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/60/75/f692a584e85b7eaba0e03827b3d51f45f571c2e793dd731e598828d380aa/certifi-2019.3.9-py2.py3-none-any.whl (158kB)
    100% |████████████████████████████████| 163kB 4.5MB/s
Collecting httpsig-cffi==15.0.0 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/93/f5/c9a213c0f906654c933f1192148d8aded2022678ad6bce8803d3300501c6/httpsig_cffi-15.0.0-py2.py3-none-any.whl
Collecting pyOpenSSL==18.0.0 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/96/af/9d29e6bd40823061aea2e0574ccb2fcf72bfd6130ce53d32773ec375458c/pyOpenSSL-18.0.0-py2.py3-none-any.whl (53kB)
    100% |████████████████████████████████| 61kB 14.1MB/s
Collecting oci==2.2.3 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/d9/e1/1301df1b5ae84aa01188e5c42b41c1ad44739449ff3af2ab81790a952bb3/oci-2.2.3-py2.py3-none-any.whl (2.0MB)
    100% |████████████████████████████████| 2.0MB 4.0MB/s
Collecting terminaltables==3.1.0 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/9b/c4/4a21174f32f8a7e1104798c445dacdc1d4df86f2f26722767034e4de4bff/terminaltables-3.1.0.tar.gz
Collecting idna<2.7,>=2.5 (from oci_cli)
  Downloading https://files.pythonhosted.org/packages/27/cc/6dd9a3869f15c2edfab863b992838277279ce92663d334df9ecf5106f5c6/idna-2.6-py2.py3-none-any.whl (56kB)
    100% |████████████████████████████████| 61kB 12.6MB/s
Collecting enum34; python_version < "3" (from cryptography==2.4.2->oci_cli)
  Downloading https://files.pythonhosted.org/packages/c5/db/e56e6b4bbac7c4a06de1c50de6fe1ef3810018ae11732a50f15f62c7d050/enum34-1.1.6-py2-none-any.whl
Collecting cffi!=1.11.3,>=1.7 (from cryptography==2.4.2->oci_cli)
  Downloading https://files.pythonhosted.org/packages/64/7c/27367b38e6cc3e1f49f193deb761fe75cda9f95da37b67b422e62281fcac/cffi-1.12.2.tar.gz (453kB)
    100% |████████████████████████████████| 460kB 5.1MB/s
Collecting asn1crypto>=0.21.0 (from cryptography==2.4.2->oci_cli)
  Downloading https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl (101kB)
    100% |████████████████████████████████| 102kB 1.6MB/s
Collecting ipaddress; python_version < "3" (from cryptography==2.4.2->oci_cli)
  Downloading https://files.pythonhosted.org/packages/fc/d0/7fc3a811e011d4b388be48a0e381db8d990042df54aa4ef4599a31d39853/ipaddress-1.0.22-py2.py3-none-any.whl
Collecting pycparser (from cffi!=1.11.3,>=1.7->cryptography==2.4.2->oci_cli)
  Downloading https://files.pythonhosted.org/packages/68/9e/49196946aee219aead1290e00d1e7fdeab8567783e83e1b9ab5585e6206a/pycparser-2.19.tar.gz (158kB)
    100% |████████████████████████████████| 163kB 4.7MB/s
Building wheels for collected packages: cryptography
  Building wheel for cryptography (PEP 517) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/13/ad/1b/94b787de6776646c28a03dc2f4a6387e3ab375533028c58195
Successfully built cryptography
Building wheels for collected packages: PyYAML, configparser, cx-Oracle, retrying, arrow, terminaltables, cffi, pycparser
  Building wheel for PyYAML (setup.py) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/ad/da/0c/74eb680767247273e2cf2723482cb9c924fe70af57c334513f
  Building wheel for configparser (setup.py) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/a3/61/79/424ef897a2f3b14684a7de5d89e8600b460b89663e6ce9d17c
  Building wheel for cx-Oracle (setup.py) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/31/db/58/a89e912df33e3545643a49cd8bcfe0f513d101b9d115cbeae4
  Building wheel for retrying (setup.py) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/d7/a9/33/acc7b709e2a35caa7d4cae442f6fe6fbf2c43f80823d46460c
  Building wheel for arrow (setup.py) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/ce/4f/95/64541c7466fd88ffe72fda5164f8323c91d695c9a77072c574
  Building wheel for terminaltables (setup.py) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/30/6b/50/6c75775b681fb36cdfac7f19799888ef9d8813aff9e379663e
  Building wheel for cffi (setup.py) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/bb/f8/22/e3e8d9dd87e0cc6df8201325bd0ae815e701d1ef2b95571cf2
  Building wheel for pycparser (setup.py) ... done
  Stored in directory: /tmp/tmpGBgVu5/wheels/f2/9a/90/de94f8556265ddc9d9c8b271b0f63e57b26fb1d67a45564511
Successfully built PyYAML configparser cx-Oracle retrying arrow terminaltables cffi pycparser
Installing collected packages: PyYAML, configparser, six, python-dateutil, pytz, enum34, idna, pycparser, cffi, asn1crypto, ipaddress, cryptography, click, cx-Oracle, retrying, arrow, jmespath, certifi, httpsig-cffi, pyOpenSSL, oci, terminaltables, oci-cli
Successfully installed PyYAML-3.13 arrow-0.10.0 asn1crypto-0.24.0 certifi-2019.3.9 cffi-1.12.2 click-6.7 configparser-3.5.0 cryptography-2.4.2 cx-Oracle-7.0.0 enum34-1.1.6 httpsig-cffi-15.0.0 idna-2.6 ipaddress-1.0.22 jmespath-0.9.3 oci-2.2.3 oci-cli-2.5.4 pyOpenSSL-18.0.0 pycparser-2.19 python-dateutil-2.7.3 pytz-2016.10 retrying-1.3.3 six-1.11.0 terminaltables-3.1.0
===> Modify profile to update your $PATH and enable shell/tab completion now? (Y/n): n
-- If you change your mind, add 'source /export/oci/lib/lib/python2.7/site-packages/oci_cli/bin/oci_autocomplete.sh' to your rc file and restart your shell to enable tab completion.
-- You can run the CLI with '/export/oci/bin/oci'.
-- Installation successful.
-- Run the CLI with /export/oci/bin/oci --help
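Before the CLI can talk to your tenancy you still need an API signing key and a config file, which the CLI can generate interactively. A minimal sketch, using the install location chosen above:

$ /export/oci/bin/oci setup config

That walks you through entering your tenancy and user OCIDs and region, and generating an API signing key; after that, something like '/export/oci/bin/oci iam region list' makes a good first smoke test.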


Perspectives

Because the NSA Says So

This post is in part tongue-in-cheek, and in part serious. Last week someone told me that even the NSA (yes, the National Security Agency of the USA) now advises upgrading to Oracle Solaris 11.4, and my initial thought was "sure they do", but I was swiftly pointed to this very recent Cybersecurity Advisory they had put out (which also acts as guidance for the various government agencies). And sure enough, it states: "Upgrade to Solaris® 11.4 from earlier versions of Solaris® and apply the latest Support Repository Update (SRU)."

It also says that the best way to keep your systems and applications safe is to keep up to date with the latest SRU, or at least "every third SRU", which we call a Critical Patch Update, a.k.a. a CPU. They state that this "contains critical fixes for multiple vulnerabilities, including those documented as Common Vulnerabilities and Exposure (CVE®) entries". Of course we don't hold any critical fixes back for each CPU; we release them in the next SRU available. But if you can't update the system more often than every quarter, the CPUs are the best SRUs to go for. By the way, when we say the CPU comes out "every quarter", we mean it's the SRU that comes out in January, April, July, and October. So if you have restricted update possibilities in your schedule it's best to focus on these dates, even if you only update once every 6 or 12 months. The simplest way to do this is to regularly update this package:

# pkg update solaris-11-cpu@latest

It is also good to note that applying an older SRU or CPU doesn't help: an SRU doesn't "mature" over time; it is a point-in-time version of what we think is the best, most secure, most stable version of Oracle Solaris. So when applying an SRU or CPU, please always consider the newest one. You could choose to apply the latest CPU even if there are newer SRUs out, but holding off doesn't make a version better.

This of course holds for Oracle Solaris versions and updates too. So if you're still running Oracle Solaris 10 or maybe Oracle Solaris 11.3, unless there's some technical reason holding you back, we strongly advise you to move/update to Oracle Solaris 11.4. And if you're running an earlier version of Oracle Solaris 11, this is as easy as applying an SRU, and it gives you an easy way to roll back if necessary. Oracle Solaris 11.4 and its SRUs/CPUs are simply the best, most secure version of our operating system. And now if you get pushback on "why should we move to 11.4?" you can say: "Because the NSA says so."
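If you want to check where a system stands before updating, the CPU baseline is itself a package you can query; a small sketch using the package named above:

# Show which CPU version is currently installed, if any
$ pkg list solaris-11-cpu
# Preview what moving to the latest CPU would pull in
# pkg update -nv solaris-11-cpu@latest

The -n flag makes pkg do a dry run, so you can review the update plan before committing to it.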


Announcing Oracle Solaris 11.4 SRU7

Today we are releasing SRU 7 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces the following enhancements:

- Oracle VM Server for SPARC 3.6.1:
  - Reliability improvements in LDoms Manager startup, synchronization with Hypervisor and SP, and several 'ldm' subcommands
  - Improvement in Oracle VM Tools to better enable moving existing workloads onto modern HW, OS & virtualized environments
  - Live Migration reliability improvements and improved messaging when problems occur
  - For more information, see the Oracle VM Server for SPARC 3.6.1 Release Notes
- The Ninja build system has been added to Oracle Solaris
- GNU awk has been updated to 4.2.1
- FFTW has been updated to 3.3.8
- The Nghttp2 library has been updated to 1.35.0
- tigervnc has been updated to 1.9.0
- Fix for 'pkg info' no longer showing Last Install/Update Time after a catalog refresh
- Many fixes and improvements for mdb and PTP

There has been a latent bug in the '/usr/bin/grep' command such that, when used with the '-w' option, it did not correctly report on the patterns it was supposed to match. This has been present for a while. The bug is now fixed; however, this may mean that more results are returned to any script that uses the command. See the README for examples of the behaviour.

The following components have also been updated to address security issues:

- dnsmasq has been updated to 2.80
- webkitgtk has been updated to 2.22.5
- NSS has been updated to 4.38
- libdwarf has been updated to 20190104
- GNU Tar has been updated to 1.31
- libarchive has been updated to 3.3.3
- readline has been updated to 7.0
- Firefox has been updated to 60.5.1esr
- curl has been updated to 7.64.0
- mercurial has been updated to 4.7.1
- Python: python 3.4 has been updated to 3.4.9, and python 3.5 has been updated to 3.5.6
- golang, openjpeg & openssh

Full details of this SRU can be found in My Oracle Support Doc 2517183.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).
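To see what the '-w' fix means in practice, here is a hypothetical example (not taken from the SRU README) of what whole-word matching is supposed to return:

$ printf 'foo bar\nfoobar\n' | /usr/bin/grep -w foo
foo bar

With -w, 'foo' must match as a complete word, so 'foobar' is excluded; scripts that previously saw too few matches from the buggy -w may now see additional, correct ones.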

Oracle Solaris 11

Recent Blogs on Oracle Solaris

On another forum the other day, Alan Coopersmith shared a nice list of recent blogs on Oracle Solaris and related topics, and I thought this would be good to share on this blog roll too. It has blogs from this blog roll as well as many others, and it gives a nice insight into the recent activity in this area as well as the new features delivered via the "Continuous Innovation" process in the 11.4 SRUs. Here's the list:
Update on Oracle Java on Oracle Solaris — https://blogs.oracle.com/solaris/update-on-oracle-java-on-oracle-solaris
OpenJDK on Solaris/SPARC — https://blogs.oracle.com/solaris/openjdk-on-solarissparc-v2
Building OpenJDK 12 on Solaris 11 x86_64 — http://notallmicrosoft.blogspot.com/2019/02/building-openjdk-12-on-solaris-11-x8664.html
Configuring auditing for select files in Oracle Solaris 11.4 — http://blog.moellenkamp.org/archives/42-Configuring-auditing-for-select-files-in-Oracle-Solaris-11.4.html
Oracle Solaris Repository Reorganisation — https://blogs.oracle.com/solaris/oracle-solaris-repository-reorganisation-v2
Customize Network Configuration with Solaris Automated Installs — https://blogs.oracle.com/solaris/customize-network-configuration-with-solaris-automated-installs-v2
Customize network configuration in a Golden Image — https://blogs.oracle.com/solaris/customize-network-configuration-in-a-golden-image
Migrate Oracle Solaris 10 Physical Systems with ZFS root to Guest Domains in 3 Easy Steps — https://blogs.oracle.com/solaris/migrate-oracle-solaris-10-physical-systems-with-zfs-root-to-guest-domains-in-3-easy-steps
Oracle Solaris 10 in the Oracle Cloud Infrastructure — https://blogs.oracle.com/solaris/oracle-solaris-10-in-the-oracle-cloud-infrastructure
A brief history of the admins time — http://blog.moellenkamp.org/archives/43-A-brief-history-of-the-admins-time.html
Audit Annotations — http://blog.moellenkamp.org/archives/32-Less-known-Solaris-Feature-Audit-Annotations.html
Discover the New Oracle Database Sheet — https://blogs.oracle.com/solaris/discover-the-new-oracle-database-sheet
Database performance sheet in Solaris Dashboard in 11.4 SRU 6 — http://blog.moellenkamp.org/archives/33-Database-performance-overview-for-Solaris-11.4-SRU6.html
Latency distribution with iostat — http://blog.moellenkamp.org/archives/34-Latency-distribution-with-iostat.html
Filesystem latencies with fsstat — http://blog.moellenkamp.org/archives/36-Filesystem-latencies-with-fsstat..html
Periodic scrubs of ZFS filesystems — http://blog.moellenkamp.org/archives/37-Periodic-scrubs-of-ZFS-filesystems.html
Adding datasets to running zones — http://blog.moellenkamp.org/archives/44-Adding-datasets-to-running-zones.html
Memory Reservation Pools in Oracle Solaris 11.4 SRU1 — http://blog.moellenkamp.org/archives/38-Memory-Reservation-Pools-in-Oracle-Solaris-11.4-SRU1.html
Labeled Sandboxes — http://blog.moellenkamp.org/archives/39-Labeled-Sandboxes.html
DTrace %Y print format with nanoseconds — http://milek.blogspot.com/2019/03/dtrace-y-print-format-with-nanoseconds.html
RAID-Z improvements and cloud device support — http://milek.blogspot.com/2018/11/raid-z-improvements-and-cloud-device.html
Solaris: Spectre v2 & Meltdown fixes — http://milek.blogspot.com/2018/10/solaris-spectre-v2-meltdown-fixes.html
And then for some more fun:
The Story of Sun Microsystems PizzaTool — https://medium.com/@donhopkins/the-story-of-sun-microsystems-pizzatool-2a7992b4c797
DTrace on ... what ??? Err, DTrace on Windows — http://blog.moellenkamp.org/archives/48-DTrace-on-...-what-Err,-DTrace-on-Windows.html
As you can see many of these blogs are by Joerg Moellenkamp, so if you still don't have enough you can find much more at http://blog.moellenkamp.org. Enjoy.

Announcements

Discover the New Oracle Database Sheet

As part of the Oracle Solaris 11.4 release we introduced the new StatsStore and Dashboard functionality, allowing you to collect and analyze stats from all over the system that Oracle Solaris is running on. Some of these stats come from the Oracle Solaris kernel and some from userland applications. One of the interesting things about the design of the StatsStore and Dashboard, however, is that it also allows you to collect data from other sources, like applications running on Oracle Solaris. As part of our Continuous Delivery strategy we're taking new functionality from the development gate of the next Oracle Solaris update and releasing it as part of the Oracle Solaris 11.4 SRUs. An example of this is the release of the new Database Stats Sheet as part of Oracle Solaris 11.4 SRU6. It uses this ability of the StatsStore to collect stats from userland applications and shows them in a new sheet of the Oracle Solaris Dashboard. To make this as easy as possible we've introduced new commands, statcfg(1) and rdbms-stat(1), that combine to do all the configuration for you. This configures and starts a new SMF service that connects with the Oracle Database, pulls key performance data from the Database, and stores it in the StatsStore. To do this it creates a new set of stats in the StatsStore and defines what they are. Additionally, it creates a new Database sheet that gives a default view of these Database stats together with certain stats coming from Oracle Solaris, giving the administrator a unique graphical representation of this combined data. This blog will give an example of how this is all configured and show what the default Database sheet looks like.
Configuring the Database Wallet
One of the key steps in setting this up is to configure the Oracle Database Wallet so the SMF service can access the V$ tables in the database. The key thing is that the database user you configure in the wallet must have the SYSDBA connect privilege for the database SID you want to monitor. This is something the database admin will have to set up for you if it isn't already allowed. For this example I'll use a database setup that we used for our Hands on Lab at Oracle OpenWorld last year, where we have a container database (CDB) and a pluggable database (PDB). We'll be configuring the wallet for this PDB. Before we add anything to the wallet we'll need to ensure that the tnsnames.ora file has the appropriate alias and connection string to connect to the database SID. By default it's located in $ORACLE_HOME/network/admin/, but you could also put it somewhere else and have a symbolic link in this default directory that points to it. Here's our tnsnames.ora file:
-bash-4.4$ cat $ORACLE_HOME/network/admin/tnsnames.ora
mypdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )
So the SID for the PDB is pdb1 but the alias we've created is called mypdb. We'll also need to amend the sqlnet.ora file to make sure it points to where we want to locate our wallet file when something connects to the database. It too is located in $ORACLE_HOME/network/admin/ by default, but can be located somewhere else if you want. Here's our sqlnet.ora:
-bash-4.4$ cat $ORACLE_HOME/network/admin/sqlnet.ora
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/export/home/oracle/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE
Now we can create the wallet:
-bash-4.4$ mkstore -wrl $HOME/wallet -create
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
Enter password:
Enter password again:
Now that we have the wallet we can add keys to it. Here we add the key for our mypdb instance and check that it's stored correctly:
-bash-4.4$ mkstore -wrl $HOME/wallet -createCredential mypdb sys "<my_database_password>"
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
-bash-4.4$ mkstore -wrl $HOME/wallet -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
List credential (index: connect_string username)
1: mypdb sys
Note that you would put your own database password instead of "<my_database_password>" and that this password is now stored behind entry #1. We can now check that it all works correctly:
-bash-4.4$ sqlplus /@mypdb as SYSDBA
SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 28 05:15:34 2019
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL>
It works! We can connect through the wallet. This is essentially what the SMF process will do.
Installing the statcfg and oracle-rdbms-stats Packages
Now to install the packages for the new Database sheet. The main new package that pulls data into the StatsStore is pkg:/service/system/statcfg; however, there's an additional package that configures statcfg and understands how to pull data in from an Oracle Database. This package is called pkg://solaris/service/oracle-rdbms-stats, or oracle-rdbms-stats for short. And because it's dependent upon the statcfg package we only need to add the oracle-rdbms-stats package and it will pull statcfg in. Become root and install the package:
-bash-4.4$ su -
Password:
Oracle Corporation SunOS 5.11 11.4 January 2019
root@HOL3797-19:~# pkg install oracle-rdbms-stats
           Packages to install:  6
           Mediators to change:  1
            Services to change:  1
       Create boot environment: No
Create backup boot environment: No
DOWNLOAD        PKGS       FILES    XFER (MB)    SPEED
Completed        6/6       92/92  146.3/146.3  82.8M/s
PHASE                             ITEMS
Installing new actions          208/208
Updating package state database    Done
Updating package cache              0/0
Updating image state               Done
Creating fast lookup database      Done
Updating package cache              1/1
Note it pulled in a total of 6 packages, so four more besides oracle-rdbms-stats and statcfg. At this point we're ready to configure the connection.
Configuring the New Service with statcfg
Now we're ready for the final steps. First, still as the root role, give the user oracle the authorizations to write into and update the StatsStore:
root@HOL3797-19:~# usermod -A +solaris.sstore.update.res,solaris.sstore.write oracle
UX: usermod: oracle is currently logged in, some changes may not take effect until next login.
We can ignore this last message, as the thing we care about is the new oracle-database-stats service and its process running as user oracle, and when the service is started it will automatically pick up these authorizations. Now we configure the new service.
Because we're still running as the root role, the environment settings needed to find the database aren't set, so the database connection string and the wallet wouldn't be found either. So we set ORACLE_HOME:
root@HOL3797-19:~# export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
Then we use statcfg(1) to configure the new service, plus we make sure the new service also has ORACLE_HOME set:
root@HOL3797-19:~# /usr/bin/statcfg oracle-rdbms -u oracle -g oinstall -s mypdb -c mypdb
Creating a new service instance for 'mypdb'
root@HOL3797-19:~# svccfg -s oracle-database-stats:mypdb setenv ORACLE_HOME /u01/app/oracle/product/12.2.0/dbhome_1
At this point you can check whether the service has successfully started with either svcs -x or svcs -l oracle-database-stats:mypdb. The first will show whether any services are having issues and the second will show you the current status of the new oracle-database-stats:mypdb service. Note: There was an early glitch where, after installing the new packages, sstored wasn't restarted, resulting in the new //:class.app/oracle/rdbms class not being added to the StatsStore. This is easy to fix by restarting the sstore service. If oracle-database-stats:mypdb has gone into maintenance because of this, you can easily clear it once the sstore service is restarted. This should be addressed soon, if not already by the time you read this.
Looking at the new Database Sheet
Running the statcfg command not only creates and configures a new service that pulls data from the Oracle Database, it also creates a new sheet for the Web Dashboard. To look at this, connect to the Dashboard on port 6787. Once there, navigate to the page that shows you the overview of all the sheets you can choose from and you should see these two added: The one on the right is dedicated to the new oracle-database-stats:mypdb service and if you click on it, it should look something like this: This is a screenshot of my database running a swingbench workload to show what type of information you get to see. The top two graphs are coming from the Oracle Database (the v$system_event and v$sysstat tables) and they give an insight into the activity inside the database, and the next is a graph showing all the Oracle processes and their memory size (these are mainly the database shadow processes sharing memory). The other graphs are showing general system stats. All of these are coming out of the StatsStore. (I've zoomed in on the top graph quite a lot to get a nice picture, otherwise you only see the one or two peaks and hardly the small stuff in-between.) This sheet is a basic example that is provided to give initial insights into what is happening inside and outside the database. The nice thing about the Dashboard is that you can create your own new sheet, either by copying this sheet and editing it or by creating a brand new one, and this way you can combine these database stats with other stats to fit your needs. But that's for another blog. Alternatively you can also use the REST interface to pull the stats out into the central monitoring tools of your choice. This too is something for another blog. Note 1: It is good to know that this functionality is focused on local databases.
Even though, when configuring tnsnames.ora, you can set the connection string to connect to a database on a remote system, and the configured oracle-database-stats service will then pull in stats from this remote database, the StatsStore won't have any OS-based stats to complement them. Plus the Web Dashboard sheet that is automatically created also won't show correlated data. Note 2: There's no additional license required to gather this data, so this will still work even if you don't have the Diagnostics Pack enabled. Of course, this functionality is not built to replace the Diagnostics Pack and the things it gives you (like AWR reports); it's there to give you current and historic data from the database and the OS as a starting point to better investigate where possible issues are.
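As a last sanity check you can also pull the new stats from the command line with sstore(1). A minimal sketch, assuming the //:class.app/oracle/rdbms class mentioned in the note above and that wildcarded SSIDs work here as they do for the system-provided classes (the exact resource names will differ per instance):
$ sstore list '//:class.app/oracle/rdbms//:*'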

Oracle Solaris 11

OpenJDK on Solaris/SPARC

Building OpenJDK Java Releases on Oracle Solaris/SPARC
Starting with Java SE 8, Oracle created the concept of Long-Term-Support (LTS) releases, which are expected to be used and supported for an extended amount of time and for production purposes. Oracle Java SE 8 and Java SE 11 are LTS releases and are available and supported for Oracle Solaris, subject to the licensing and support requirements detailed here https://blogs.oracle.com/solaris/update-on-oracle-java-on-oracle-solaris and here https://www.oracle.com/technetwork/java/java-se-support-roadmap.html. Oracle Java SE 9 and Java SE 10 were considered a cumulative set of implementation enhancements of Java SE 8. Once a new feature release is made available, any previous non-LTS release is considered superseded. The LTS releases are designed to be used in production environments, while the non-LTS releases provide access to new features ahead of their appearance in an LTS release. The vast majority of users rely on Java SE 8 and Java SE 11 for their production use. However, Oracle also makes available OpenJDK builds that allow users to test and deploy the latest Java innovations. If you find a need to run a development release of Java on Oracle Solaris, you can access the freely available OpenJDK Java releases http://openjdk.java.net/, under an open source license http://openjdk.java.net/legal/gplv2+ce.html. This article will describe how to build a Java 12 JDK from OpenJDK sources on a recent Oracle Solaris 11.4 SRU6 installation on an Oracle SPARC T8-1 system.
Preparing the Operating System
The first step in preparing the OS installation for development purposes is to check that the right facet is installed:
root@t8-1-001:~# pkg facet
FACET  VALUE  SRC
devel  True   local
...
Make sure that the "devel" facet is set to true (if it weren't, packages like system/header would not install the full set of include files and thus one would not be able to compile). In case your operating system installation has it set to false, a simple pkg change-facet "facet.devel=True" will fix this; Oracle Solaris will install all the missing files, among others those you would otherwise miss in /usr/include. The OpenJDK website maintains a list of suggested and sometimes tested build environments at https://wiki.openjdk.java.net/display/Build/Supported+Build+Platforms; for most Java versions Oracle Solaris Studio 12.4 has been used. We will also use version 12.4, although the latest version according to the list of build environments should work as well. The build process also needs a few more development tools, namely the GNU binutils suite, the Oracle Solaris assembler, and mercurial to clone the development tree from OpenJDK:
root@t8-1-001:~# pkg install mercurial developer/assembler cc@12.4 c++@12.4 autoconf gnu-binutils
...
Building a JDK from OpenJDK sources also requires a so-called "boot JDK", which is usually a JDK one version older than the JDK that is to be built. In our case we need a JDK 11, which is available from Oracle's Technology Network pages, https://www.oracle.com/technetwork/java/javase/downloads/jdk11-downloads-5066655.html. It doesn't need to be installed as root; in our example it was installed right into the home directory of a user "oracle" used to build the JDK 12. Now the system is ready to be used as a build environment for OpenJDK.
Building the JDK
An OpenJDK build is similar to many other open source builds: one first has to "configure" the environment, and then do the actual build.
Note that it is important to use the GNU variant of make, gmake; the "normal" make is not able to process the Makefiles generated during the "configure" step. But we haven't downloaded the sources yet; these are provided as a mercurial repository. To download the source tree the repository has to be cloned:
oracle@t8-1-001:~$ hg clone http://hg.openjdk.java.net/jdk/jdk12 jdk12-build
This way the OpenJDK 12 build tree is cloned into the directory "jdk12-build". This process takes a few minutes, depending on the speed of your network connection. The configure step is straightforward and sets up the build directory for the actual build process:
oracle@t8-1-001:~/jdk12-build$ chmod +x configure
oracle@t8-1-001:~/jdk12-build$ ./configure --with-devkit=/opt/solarisstudio12.4/ --with-boot-jdk=/export/home/oracle/jdk-11.0.2/
If one now issued gmake, one would run into a compile error: src/hotspot/os/solaris/os_solaris.cpp references the undefined constant "EM_486". Oracle Solaris 11.4 removed EM_486 from sys/elf.h; one of the ways to fix this is to delete the corresponding line in os_solaris.cpp. Now the source tree is ready for building OpenJDK 12 on Oracle Solaris 11.4, and gmake will build a Java 12 JDK. The result will end up in the directory build/solaris-sparcv9-server-release/jdk/ inside the build directory jdk12-build.
Testing the JDK
Now that you have successfully built Java 12 you might want to test it. The resulting JDK 12 ended up in build/solaris-sparcv9-server-release/jdk/ in your build directory, jdk12-build in our example. Simply execute
oracle@t8-1-001:~/jdk12-build$ ./build/solaris-sparcv9-server-release/jdk/bin/java -version
openjdk version "12-internal" 2019-03-19
OpenJDK Runtime Environment (build 12-internal+0-adhoc..jdk12-build)
OpenJDK 64-Bit Server VM (build 12-internal+0-adhoc..jdk12-build, mixed mode)
to check that your newly built java can do something. Another way would be to compile a "Hello World" Java program:
oracle@t8-1-001:~$ cat HelloWorld.java
public class HelloWorld {
    public static void main(String[] args) {
        // Prints "Hello, World" to the terminal window.
        System.out.println("Hello, World");
    }
}
Compiling and executing it holds no big surprises:
oracle@t8-1-001:~$ ./jdk12-build/build/solaris-sparcv9-server-release/jdk/bin/javac HelloWorld.java
oracle@t8-1-001:~$ ./jdk12-build/build/solaris-sparcv9-server-release/jdk/bin/java HelloWorld
Hello, World
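As an aside, the EM_486 workaround described above can also be applied non-interactively. A one-liner sketch, assuming GNU sed is installed (removing the line is only safe because Oracle Solaris 11.4 no longer defines EM_486 at all):
oracle@t8-1-001:~/jdk12-build$ gsed -i '/EM_486/d' src/hotspot/os/solaris/os_solaris.cpp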

Solaris

Update on Oracle Java on Oracle Solaris

With the recent changes to the Oracle Java SE 8 licensing and support, which also affect the Oracle Java SE that is shipped with Oracle Solaris, we've been getting some questions. So to help clear things up we decided to write a short blog about this and other questions we get around running Java on Oracle Solaris. The key thing with regards to Oracle Java SE licensing and support is that starting in January 2019 any commercial/production use of Oracle Java will require a commercial license, even if the Oracle Java being used was shipped as part of an Operating System like Oracle Solaris (see MOS Doc. ID 2370065.1). Now the reality is that a large majority of the use of Oracle Java SE is with applications like Oracle WebLogic Server, and the licenses and support on these not only cover this commercial use, they often come with their own specific Oracle Java install to ensure all the patch levels are in sync. So in that case they're not even using the Oracle Java that's bundled with Oracle Solaris, and here nothing really changed, even if they use the Oracle Java SE that's shipped with the Operating System, Oracle Solaris. However, those customers that are using the Oracle Java SE that ships with Oracle Solaris, for example because they like how easily it installs and updates, will now need to pay attention to their licenses. From January 2019 onward the Oracle Solaris license only covers the commercial use of the Oracle Java it ships for Oracle Solaris components that use Oracle Java, that is to say the applications that ship as part of Oracle Solaris, so for its own use. Any other use is permitted as long as the right license (and support) is acquired. Again, not too many folks will probably be impacted here, because for example many customers that used to use Oracle Java SE for things like system agents have now moved to a language like Python for this task, but it's still important to understand and check. Another question that we've been getting is around which Java releases we include with Oracle Solaris. We're focused on Long-Term-Support (LTS) releases of Java for maximum investment protection, consistency, and stability. This article has a good explanation of Oracle's plans for LTS and non-LTS releases. Oracle Solaris is a production platform where the focus is on running the LTS releases of Oracle Java SE, such as Oracle Java SE 8. As this release is fully supported and sufficient for use by Oracle Solaris components, it is used as the system default version. The reality is that many customers have their production applications built on Oracle Java SE 8 and are only now starting to look at moving to Oracle Java SE 11 for their new applications, which means they will probably still be running Oracle Java SE 8 for a long time. The good news is that the Oracle Java SE 8 release will be supported with Oracle Premier Support at least until 2022, and Oracle Java SE 11 will be supported even longer. By the way, for those customers that would like to try/use Java 12 on Oracle Solaris, this is still possible of course if you build it yourself with OpenJDK. A pretty straightforward task, as outlined in Martin Müller's excellent blog about how to do this. The benefit here is that you're much more in control of which exact version you compile and use, which is sometimes very important when using non-LTS releases. Similarly of course you could build the OpenJDK version of Java 9 or 10 if you'd still be interested.
More information with regards to the Oracle Java SE releases can be found in this nice FAQ blog by Sharat Chander.

Announcing Oracle Solaris 11.4 SRU6

Today we are releasing the SRU 6 for Oracle Solaris 11.4. It is available via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1. This SRU introduces the following enhancements: Analytics Database Sheet - This allows you to use the Oracle Solaris System Web Interface to view a high-level overview of Oracle Database performance and problems on that system. Further details can be found at https://docs.oracle.com/cd/E37838_01/html/E56520/dbsheet.html. HMP has been updated to 2.4.5 RabbitMQ has been updated to 3.7.8 and includes hex 0.18.2 and elixir 1.7.3 Erlang has been updated to 21.0 Samba has been updated to 4.9.3 tracker has been updated to 1.12.0 irssi has been updated to 1.1.2 Explorer 19.1 is now available The Java 8 and Java 7 packages have been updated. The following customer-associated bugs in the Core OS have been fixed: nxe panics with lm_hw_attn.c (1400): Condition Failed! DTrace %Y format should allow decimal places mpathadm Access State not consistent with RTPG command response from NetApp Aborted NDMP session lingers, preventing dataset destruction logadm default still allows system memory to fill up Avoiding shell execution is too aggressive kclient fails to properly interact with Active Directory Forests svc:/network/ipf2pf:default SMF goes into maintenance state after S11.4 OS boots The following components have also been updated to address security issues: Firefox has been updated to 60.5.0 ESR perl has been updated to 5.26.3 & 5.22 ruby has been updated to 2.3.8 libtiff has been updated to 4.0.10 LFTP has been updated to 4.8.4 mod_jk has been updated to 1.2.46 Python 2.7 has been updated to 2.7.15 git has been updated to 2.19.2 Samba has been updated to 4.9.3 tracker has been updated to 1.12.0 OpenSSH has been updated to 7.7p1 Wireshark has been updated to 2.6.6 Full details of this SRU can be found in My Oracle Support Doc 2507241.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).
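As a side note, if you need to confirm which SRU a system ended up on after updating, the branch version of the 'entire' package encodes it. A quick sketch (the exact branch string will vary; for SRU 6 it starts with 11.4.6):
$ pkg info entire | grep Branch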

Oracle Solaris 11

Customize network configuration in a Golden Image

Many of our customers use golden images of Solaris with fully installed and configured applications to deploy their production systems. They create a golden image by capturing a snapshot of their staging system's files in a Solaris Unified Archive. Solaris normally clears the staging system's network configuration during golden image installation. This makes sense given that each production system will have a different IP address and often resides in a different site with a unique subnet prefix and default route. There are times however when you might want parts of your network configuration to survive golden image deployment. Now that Solaris 11.4 stores its persistent network configuration in SMF, you can do this by creating custom installation packages to establish the network configuration settings you wish preserved. In a previous blog, I showed how I used the Automated Installer to replicate my SPARC staging system's network configuration on other systems. I simplified the effort by using SMF to extract the working configuration in SC-profile format. In this blog, I'll start with the staging system as it was previously configured and will use the same SC-profiles I generated for that blog. I'll show how I packaged and installed one of the SC-profiles so that the common part of my network configuration was able to survive golden image deployment. The process I'm using works exactly the same on X86 systems.
Do the following on the staging system
I split the staging system's configuration in my previous blog into three SC-profile files. One of these, the labnet-profile.xml file, sets up a pair of high speed ethernet links configured to support 9000 byte jumbo frames in a link aggregation. This is the part of the network configuration information I'll be preserving in my golden image.
root@headbanger:~# dladm
LINK      CLASS  MTU   STATE  OVER
aggr0     aggr   9000  up     net0 net1
net0      phys   9000  up     --
net1      phys   9000  up     --
net2      phys   1500  up     --
net3      phys   1500  up     --
sp-phys0  phys   1500  up     --
Place the network configuration profile in a directory from which to build a package.
root@headbanger:~# mkdir proto
root@headbanger:~# cp labnet-profile.xml proto/.
Generate the initial package manifest from the files under the proto directory.
root@headbanger:~# pkgsend generate proto | \
    pkgfmt > labnet-profile.p5m
Use a text editor to add descriptive package metadata to the manifest, adjust the destination directory of the SC-profile file, and add a package actuator to automatically import the SC-profile upon installation. This example places the SC-profile file in etc/svc/profile/node/ which will cause SMF to import the profile into its node-profile layer. The changes I made are the added metadata, the adjusted path, and the actuator in the manifest below.
labnet-profile.p5m
set name=pkg.fmri value=labnet-profile@1.0
set name=pkg.summary value="lab net configuration"
set name=pkg.description \
    value="My sample lab system net configuration"
file labnet-profile.xml \
    path=etc/svc/profile/node/labnet-profile.xml \
    owner=root group=bin mode=0644 \
    restart_fmri=svc:/system/manifest-import:default
Build the new package in a temporary package repository.
root@headbanger:~# pkgrepo create /tmp/lab-repo
root@headbanger:~# pkgrepo -s /tmp/lab-repo set publisher/prefix=lab
root@headbanger:~# pkgsend -s /tmp/lab-repo publish \
    -d proto labnet-profile.p5m
Create a package archive from the contents of the package repository.
root@headbanger:~# pkgrecv -s /tmp/lab-repo \
    -a -d lab.p5p labnet-profile
The package I created can be installed directly from this archive.
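Before copying the archive around, it can be worth double-checking what actually went into it. A quick sketch, assuming pkg(1) accepts a package archive path as a '-g' source the way it accepts a repository URI:
root@headbanger:~# pkg list -af -g lab.p5p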
The archive can also be easily copied and installed on other systems. This particular package could even be installed directly on an X86 staging system. Install the package just created on the staging system.
root@headbanger:~# pkg set-publisher -g lab.p5p lab
root@headbanger:~# pkg install labnet-profile
Complete development and testing of the staging system. Create the unified archive containing the golden image. Note that the staging system's network interfaces must be running with access to the site's package servers when creating this golden image.
root@headbanger:~# archiveadm create /var/tmp/labsys.uar
Copy the golden image from the staging system to a distribution site. I used "http://example-ai.example.com/datapool/labsys.uar" for this example.
Do the following on the AI server
Solaris provides multiple options for deploying a unified archive on other systems. I chose to deploy my golden image from my lab's install server onto node beatnik. Create an installation manifest that points to the golden image.
netclone-manifest.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="netclone" auto_reboot="true"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="ARCHIVE"> <source> <file uri="http://example-ai.example.com/datapool/labsys.uar"/> </source> <software_data action="install"> <name>global</name> </software_data> </software> </ai_instance> </auto_install>
Create the install service using: a Solaris install image based on the version of Solaris installed on the golden image, the manifest created here, the base SC-profile from my previous blog, and the beatnik-specific SC-profile from my previous blog. Note that there is no need to create a profile with labnet-profile.xml; that configuration information will be installed with the unified archive. I used node beatnik for my production server example.
# installadm create-service -y -n netclone-sparc \
    -a sparc \
    -p solaris=http://pkg.oracle.com/solaris/release/ \
    -s install-image/solaris-auto-install@latest
# installadm create-manifest -n netclone-sparc \
    -d -m netclone-manifest \
    -f netclone-manifest.xml
# installadm create-profile -n netclone-sparc \
    -p base-profile \
    -f base-profile.xml
# installadm create-client -n netclone-sparc \
    -e 0:0:5e:0:53:24
# installadm create-profile -n netclone-sparc \
    -p beatnik-profile \
    -f beatnik-profile.xml \
    -c mac=0:0:5e:0:53:24
Do the following on the production servers
Log into the console and start the installation.
{0} ok boot net:dhcp - install
Log in after installation completes. The dladm command output verifies that the definition of the aggr link from the unified archive was preserved.
root@beatnik:~# dladm
LINK      CLASS  MTU   STATE  OVER
aggr0     aggr   9000  up     net0 net1
net0      phys   9000  up     --
net1      phys   9000  up     --
net2      phys   1500  up     --
net3      phys   1500  up     --
sp-phys0  phys   1500  up     --
The svccfg listprop command shown here indicates in the third column which SMF database layers different parts of beatnik's datalink configuration come from.
root@beatnik:~# svccfg -s datalink-management:default listprop -l all datalinks
datalinks                    application     sysconfig-profile
datalinks                    application     node-profile
datalinks                    application     manifest
datalinks/aggr0              datalink-aggr   sysconfig-profile
datalinks/aggr0              datalink-aggr   node-profile
datalinks/aggr0/aggr-mode    astring         node-profile       dlmp
datalinks/aggr0/force        boolean         node-profile       false
datalinks/aggr0/key          count           node-profile       0
datalinks/aggr0/lacp-mode    astring         node-profile       off
datalinks/aggr0/lacp-timer   astring         node-profile       short
datalinks/aggr0/media        astring         node-profile       Ethernet
datalinks/aggr0/num-ports    count           node-profile       2
datalinks/aggr0/policy       astring         node-profile       L4
datalinks/aggr0/ports        astring         node-profile       "net0" "net1"
datalinks/aggr0/probe-ip     astring         sysconfig-profile  198.51.100.64+198.51.100.1
datalinks/net0               datalink-phys   node-profile
datalinks/net0/devname       astring         admin              i40e0
datalinks/net0/loc           astring         admin              /SYS/MB
datalinks/net0/media         astring         admin              Ethernet
datalinks/net0/mtu           count           admin              9000
datalinks/net0/mtu           count           node-profile       9000
datalinks/net1               datalink-phys   node-profile
datalinks/net1/devname       astring         admin              i40e1
datalinks/net1/loc           astring         admin              /SYS/MB
datalinks/net1/media         astring         admin              Ethernet
datalinks/net1/mtu           count           admin              9000
datalinks/net1/mtu           count           node-profile       9000
datalinks/net2               datalink-phys   admin
datalinks/net2/devname       astring         admin              i40e2
datalinks/net2/loc           astring         admin              /SYS/MB
datalinks/net2/media         astring         admin              Ethernet
datalinks/net3               datalink-phys   admin
datalinks/net3/devname       astring         admin              i40e3
datalinks/net3/loc           astring         admin              /SYS/MB
datalinks/net3/media         astring         admin              Ethernet
datalinks/sp-phys0           datalink-phys   admin
datalinks/sp-phys0/devname   astring         admin              usbecm2
datalinks/sp-phys0/media     astring         admin              Ethernet
In this case, the layers correspond to the following sources. I defined it this way to show how various parts of the configuration from different sources fit together.
manifest            from Solaris service manifests
node-profile        from my package in the golden image
sysconfig-profile   from profiles defined on the AI server
admin               automatically generated on first boot
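If you only care about a single property, the same command narrows down nicely. For example, using the exact service and property names from the output above, the following shows just the layered values of the aggregation's probe address:
root@beatnik:~# svccfg -s datalink-management:default listprop -l all datalinks/aggr0/probe-ip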

Oracle Solaris 11

Customize Network Configuration with Solaris Automated Installs

The first item in Solaris 11.4’s list of new networking features was "Migration of persistent network configuration to SMF". Does it really matter where the network configuration is stored? Absolutely! Solaris is often installed in large data centers where manual installation and configuration of each individual system is not practical. Solaris 11's Automated Installer along with Solaris' System Management Facility (SMF) were developed to make installation and configuration of multiple systems significantly easier. Having Solaris network configuration stored outside SMF left a significant gap in the configurations these tools supported. When using the Automated Installer, you define system configurations in System Configuration (SC) profile files. SMF loads these SC-profiles into its database during first boot of a freshly installed system. The network/install service, which existed before Solaris 11.4, initializes the system’s network configuration from its SMF based configuration parameters. This service only supports creation of IP addresses over the system's existing physical interfaces. There are a great many other Solaris network features commonly used in data centers, such as jumbo frames, link aggregation, VLANs, and flows, that cannot be configured by the network/install service. Now that the dladm(8), flowadm(8), ipadm(8), netcfg(8), and route(8) utilities store their persistent information in SMF, you can directly configure network services in SC-profiles to handle anything these utilities support. SC-profiles use an XML based format to define service configuration information. If you're like me, you would rather avoid the effort it takes to learn the detailed syntax rules for network configuration. Fortunately, you don't have to. I find an easier approach is to use the network utilities to set up the configuration I want on a live staging system. I use SMF's svccfg(8) utility to capture the running configuration in XML format, then create my SC-profiles directly from the svccfg generated files. The instructions below show how I built SC-profiles to automatically install SPARC systems with a pair of high speed ethernet links supporting 9000 byte jumbo frames in a link aggregation. The process works exactly the same on X86 systems.
Step 1: Install staging system without external interfaces configured
I started by installing the staging system with no external network interfaces configured. Solaris automatically configures the following on first boot after the install completes: dladm link name to physical device mappings, the loopback IP interface (lo0), and the IP interface to the service processor (sp-phys). These details could cause unwanted problems if they were included in my final SC-profile. For example, physical device names are unlikely to be the same on all systems where I expect to apply my custom network configuration. I'll be capturing this initial network configuration without my changes to identify what parts of the full configuration I should remove later. Do the following on the AI Server. Use a text editor to create the AI manifest file. The manifest in this example installs the large server group from a solaris repository. Replace "http://pkg.oracle.com/solaris/release/" with a local copy of the solaris repository if one is available. Feel free to list any other packages desired.
base-manifest.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install> <ai_instance name="lab" auto_reboot="true"> <target> <logical> <zpool name="rpool" is_root="true"> <filesystem name="export" mountpoint="/export"/> <filesystem name="export/home"/> <be name="solaris"/> </zpool> </logical> </target> <software type="IPS"> <source> <publisher name="solaris"> <origin name="http://pkg.oracle.com/solaris/release/"/> </publisher> </source> <software_data action="install"> <name>pkg:/entire@latest</name> <name>pkg:/group/system/solaris-large-server</name> </software_data> </software> </ai_instance> </auto_install>
Use sysconfig(8) to generate the initial SC-profile file with the login, language, and timezone information you normally use at the site. Be sure to select "No network" on the initial Network Configuration screen. This example uses the Computer Name "headbanger" for the staging system.
# sysconfig create-profile -o headbanger
(fill in sysconfig's on-screen menus)
# mv headbanger/sc_profile.xml headbanger/base-profile.xml
Use installadm(8) to configure the install server. The staging system in this example is a SPARC system with MAC address 0:0:5e:0:53:d2.
# installadm create-service -y -n nettest-sparc \
    -a sparc \
    -p solaris=http://pkg.oracle.com/solaris/release/ \
    -s install-image/solaris-auto-install@latest
# installadm create-manifest -n nettest-sparc \
    -d -m base-manifest \
    -f base-manifest.xml
# installadm create-profile -n nettest-sparc \
    -p base-profile \
    -f base-profile.xml
# installadm create-client -n nettest-sparc \
    -e 0:0:5e:0:53:d2
Do the following on the staging system. Log into the system's service processor, connect to the system console, and start a network boot to launch the installation.
-> start /SP/console
(halt operating system if currently running)
{0} ok boot net:dhcp - install
When first boot completes, verify no external interfaces have been configured yet.
root@headbanger:~# dladm
LINK      CLASS  MTU   STATE  OVER
net0      phys   1500  up     --
net1      phys   1500  up     --
net2      phys   1500  up     --
net3      phys   1500  up     --
sp-phys0  phys   1500  up     --
root@headbanger:~# ipadm
NAME           CLASS/TYPE  STATE  UNDER  ADDR
lo0            loopback    ok     --     --
   lo0/v4      static      ok     --     127.0.0.1/8
   lo0/v6      static      ok     --     ::1/128
sp-phys0       ip          ok     --     --
   sp-phys0/v4 static      ok     --     203.0.113.77/24
Capture the initial first-boot configuration in SC-profile format for later reference.
root@headbanger:~# svccfg extract -l admin \
    network/datalink-management:default \
    > initial-profile.xml
root@headbanger:~# svccfg extract -l admin \
    network/ip-interface-management:default \
    >> initial-profile.xml
Step 2: Configure network on staging system using Solaris utilities
Use Solaris network configuration utilities to create the desired configuration.
root@headbanger:~# dladm set-linkprop -p mtu=9000 net0
root@headbanger:~# dladm set-linkprop -p mtu=9000 net1
root@headbanger:~# dladm create-aggr -m dlmp -l net0 -l net1 aggr0
root@headbanger:~# ipadm create-ip aggr0
root@headbanger:~# ipadm create-addr -a 198.51.100.63/24 aggr0
aggr0/v4
root@headbanger:~# ipadm create-addr -T addrconf \
    -p stateless=no,stateful=no aggr0
aggr0/v6
root@headbanger:~# ipadm create-addr \
    -a 2001:db8:414:60bc::3f/64 aggr0
aggr0/v6a
root@headbanger:~# route -p add default 198.51.100.1
add net default: gateway 198.51.100.1
root@headbanger:~# dladm set-linkprop \
    -p probe-ip=198.51.100.63+198.51.100.1 aggr0
Verify the configuration was defined as intended.
root@headbanger:~# dladm
LINK      CLASS  MTU   STATE  OVER
aggr0     aggr   9000  up     net0 net1
net0      phys   9000  up     --
net1      phys   9000  up     --
net2      phys   1500  up     --
net3      phys   1500  up     --
sp-phys0  phys   1500  up     --
root@headbanger:~# dladm show-linkprop -p mtu
LINK      PROPERTY  PERM  VALUE  EFFECTIVE  DEFAULT  POSSIBLE
aggr0     mtu       rw    9000   9000       1500     576-9706
net0      mtu       rw    9000   9000       1500     576-9706
net1      mtu       rw    9000   9000       1500     576-9706
net2      mtu       rw    1500   1500       1500     576-9706
net3      mtu       rw    1500   1500       1500     576-9706
sp-phys0  mtu       rw    1500   1500       1500     1500
root@headbanger:~# dladm show-aggr -Sn
LINK      PORT   FLAGS  STATE   TARGETS       XTARGETS
aggr0     net0   u--3   active  198.51.100.1  net1
--        net1   u-2-   active  --            net0
root@headbanger:~# ipadm
NAME           CLASS/TYPE  STATE  UNDER  ADDR
aggr0          ip          ok     --     --
   aggr0/v4    static      ok     --     198.51.100.63/24
   aggr0/v6    addrconf    ok     --     fe80::8:20ff:fe67:6164/10
   aggr0/v6a   static      ok     --     2001:db8:414:60bc::3f/64
lo0            loopback    ok     --     --
   lo0/v4      static      ok     --     127.0.0.1/8
   lo0/v6      static      ok     --     ::1/128
sp-phys0       ip          ok     --     --
   sp-phys0/v4 static      ok     --     203.0.113.77/24
root@headbanger:~# netstat -nr
Routing Table: IPv4
  Destination         Gateway             Flags  Ref   Use      Interface
-------------------- -------------------- ----- ----- ---------- ---------
default              198.51.100.1         UG    2     31
198.51.100.0         198.51.100.63        U     3     73854      aggr0
127.0.0.1            127.0.0.1            UH    4     68         lo0
203.0.113.0          203.0.113.77         U     3     131889     sp-phys0
Routing Table: IPv6
  Destination/Mask         Gateway                    Flags  Ref  Use      If
------------------------- -------------------------- ----- --- ------- -----
::1                       ::1                         UH    2   24      lo0
2001:db8:414:60bc::/64    --                          U     2   0       aggr0
2001:db8:414:60bc::/64    2001:db8:414:60bc::3f       U     2   0       aggr0
fe80::/10                 fe80::8:20ff:fe67:6164      U     3   0       aggr0
default                   fe80::200:5eff:fe00:530c    UG    2   0       aggr0
Reboot the staging system and retest to ensure the configuration was persistently applied.
Step 3: Capture the custom network configuration from SMF
Use the SMF svccfg(8) utility to extract the current configuration from the modified network services.
root@headbanger:~# svccfg extract -l admin \
    network/datalink-management:default \
    >> labnet-profile.xml
root@headbanger:~# svccfg extract -l admin \
    network/ip-interface-management:default \
    >> labnet-profile.xml
Edit the labnet-profile.xml file to: delete replicated service bundle overhead between the extracted information of each service, and delete entries that exist in the initial-profile.xml file saved previously in Step 1.
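One low-tech way to spot the entries that need deleting is to compare the two extracts before editing. A minimal sketch (assuming both files were extracted the same way, so a plain line diff is meaningful):
root@headbanger:~# diff initial-profile.xml labnet-profile.xml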
labnet-profile.xml <?xml version='1.0'?> <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'> <service_bundle type='profile' name='labnet-profile'> <service name='network/datalink-management' type='service' version='0'> <instance name='default' enabled='true'> <property_group name='datalinks' type='application'> <property_group name='aggr0' type='datalink-aggr'> <propval name='aggr-mode' type='astring' value='dlmp'/> <propval name='force' type='boolean' value='false'/> <propval name='key' type='count' value='0'/> <propval name='lacp-mode' type='astring' value='off'/> <propval name='lacp-timer' type='astring' value='short'/> <propval name='media' type='astring' value='Ethernet'/> <propval name='num-ports' type='count' value='2'/> <propval name='policy' type='astring' value='L4'/> <propval name='probe-ip' type='astring' value='198.51.100.63+198.51.100.1'/> <property name='ports' type='astring'> <astring_list> <value_node value='net0'/> <value_node value='net1'/> </astring_list> </property> </property_group> <property_group name='net0' type='datalink-phys'> <propval name='devname' type='astring' value='i40e0'/> <propval name='loc' type='astring' value='/SYS/MB'/> <propval name='media' type='astring' value='Ethernet'/> <propval name='mtu' type='count' value='9000'/> </property_group> <property_group name='net1' type='datalink-phys'> <propval name='devname' type='astring' value='i40e1'/> <propval name='loc' type='astring' value='/SYS/MB'/> <propval name='media' type='astring' value='Ethernet'/> <propval name='mtu' type='count' value='9000'/> </property_group> <property_group name='net2' type='datalink-phys'> <propval name='devname' type='astring' value='i40e2'/> <propval name='loc' type='astring' value='/SYS/MB'/> <propval name='media' type='astring' value='Ethernet'/> </property_group> <property_group name='net3' type='datalink-phys'> <propval name='devname' type='astring' value='i40e3'/> <propval name='loc' type='astring' value='/SYS/MB'/> <propval name='media' type='astring' value='Ethernet'/> </property_group> <property_group name='sp-phys0' type='datalink-phys'> <propval name='devname' type='astring' value='usbecm2'/> <propval name='media' type='astring' value='Ethernet'/> </property_group> </property_group> <property_group name='linkname-policy' type='application'> <propval name='initialized' type='boolean' value='true'/> </property_group> </instance> </service> </service_bundle> <?xml version='1.0'?> <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'> <service_bundle type='profile' name='extract'> <service name='network/ip-interface-management' type='service' version='0'> <instance name='default' enabled='true'> <property_group name='interfaces' type='application'> <property_group name='aggr0' type='interface-ip'> <property name='address-family' type='astring'> <astring_list> <value_node value='ipv4'/> <value_node value='ipv6'/> </astring_list> </property> <property_group name='v4' type='address-static'> <propval name='ipv4-address' type='astring' value='198.51.100.63'/> <propval name='prefixlen' type='count' value='24'/> <propval name='up' type='astring' value='yes'/> </property_group> <property_group name='v6' type='address-addrconf'> <propval name='interface-id' type='astring' value='::'/> <propval name='prefixlen' type='count' value='0'/> <propval name='stateful' type='astring' value='no'/> <propval name='stateless' type='astring' value='no'/> </property_group> <property_group name='v6a' type='address-static'> <propval 
name='ipv6-address' type='astring' value='2001:db8:414:60bc::3f'/> <propval name='prefixlen' type='count' value='64'/> <propval name='up' type='astring' value='yes'/> </property_group> </property_group> <property_group name='lo0' type='interface-loopback'> <property name='address-family' type='astring'> <astring_list> <value_node value='ipv4'/> <value_node value='ipv6'/> </astring_list> </property> <property_group name='v4' type='address-static'> <propval name='ipv4-address' type='astring' value='127.0.0.1'/> <propval name='prefixlen' type='count' value='8'/> <propval name='up' type='astring' value='yes'/> </property_group> <property_group name='v6' type='address-static'> <propval name='ipv6-address' type='astring' value='::1'/> <propval name='prefixlen' type='count' value='128'/> <propval name='up' type='astring' value='yes'/> </property_group> </property_group> <property_group name='sp-phys0' type='interface-ip'> <property name='address-family' type='astring'> <astring_list> <value_node value='ipv4'/> <value_node value='ipv6'/> </astring_list> </property> <property_group name='v4' type='address-static'> <propval name='ipv4-address' type='astring' value='203.0.113.77'/> <propval name='prefixlen' type='count' value='24'/> <propval name='up' type='astring' value='yes'/> </property_group> </property_group> </property_group> <property_group name='ipmgmtd' type='application'> <propval name='datastore_version' type='integer' value='2047'/> </property_group> <property_group name='static-routes' type='application'> <property_group name='route-1' type='static-route'> <propval name='destination' type='astring' value='default'/> <propval name='family' type='astring' value='inet'/> <propval name='gateway' type='astring' value='198.51.100.1'/> </property_group> </property_group> </instance> </service> </service_bundle> Step 4: Reinstall staging system with full network configuration The combined contents of base-profile.xml from Step 1 and labnet-profile.xml from step 3 could be used as the SC-profile for autoconfiguring headbanger on a fresh install. However, my goal is to easily replicate this configuration to many systems. I made replication easier by moving system-specific details from these two files to a smaller system-specific SC-profile file. This is all just cutting and pasting existing XML text blocks. 
headbanger-profile.xml <?xml version='1.0'?> <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'> <service_bundle type='profile' name='headbanger-profile'> <service name='system/identity' type='service' version='0'> <instance name='node' enabled='true'> <property_group name='config' type='application'> <propval name='nodename' type='astring' value='headbanger'/> </property_group> </instance> </service> <service name='network/datalink-management' type='service' version='0'> <instance name='default' enabled='true'> <property_group name='datalinks' type='application'> <property_group name='aggr0' type='datalink-aggr'> <propval name='probe-ip' type='astring' value='198.51.100.63+198.51.100.1'/> </property_group> </property_group> </instance> </service> <service name='network/ip-interface-management' type='service' version='0'> <instance name='default' enabled='true'> <property_group name='interfaces' type='application'> <property_group name='aggr0' type='interface-ip'> <property_group name='v4' type='address-static'> <propval name='ipv4-address' type='astring' value='198.51.100.63'/> <propval name='prefixlen' type='count' value='24'/> <propval name='up' type='astring' value='yes'/> </property_group> <property_group name='v6a' type='address-static'> <propval name='ipv6-address' type='astring' value='2001:db8:414:60bc::3f'/> <propval name='prefixlen' type='count' value='64'/> <propval name='up' type='astring' value='yes'/> </property_group> </property_group> </property_group> <property_group name='static-routes' type='application'> <property_group name='route-1' type='static-route'> <propval name='destination' type='astring' value='default'/> <propval name='family' type='astring' value='inet'/> <propval name='gateway' type='astring' value='198.51.100.1'/> </property_group> </property_group> </instance> </service> </service_bundle> The two original SC-profile files now contain: base-profile.xml <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type="profile" name="sysconfig"> <service version="1" type="service" name="system/config-user"> <instance enabled="true" name="default"> <property_group type="application" name="root_account"> <propval type="astring" name="login" value="root"/> <propval type="astring" name="password" value="$5$rounds=10000$HJbjRmpw$JehxKTyBZxAbZmq0HN7gi367n8b4tcs1GkdgzaRbjk6"/> <propval type="astring" name="type" value="normal"/> </property_group> </instance> </service> <service version="1" type="service" name="system/timezone"> <instance enabled="true" name="default"> <property_group type="application" name="timezone"> <propval type="astring" name="localtime" value="US/Eastern"/> </property_group> </instance> </service> <service version="1" type="service" name="system/environment"> <instance enabled="true" name="init"> <property_group type="application" name="environment"> <propval type="astring" name="LANG" value="C"/> </property_group> </instance> </service> <service version="1" type="service" name="system/keymap"> <instance enabled="true" name="default"> <property_group type="system" name="keymap"> <propval type="astring" name="layout" value="US-English"/> </property_group> </instance> </service> <service version="1" type="service" name="system/console-login"> <instance enabled="true" name="default"> <property_group type="application" name="ttymon"> <propval type="astring" name="terminal_type" value="sun-color"/> </property_group> </instance> </service> </service_bundle> labnet-profile.xml <?xml 
version='1.0'?> <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'> <service_bundle type='profile' name='labnet-profile'> <service name='network/datalink-management' type='service' version='0'> <instance name='default' enabled='true'> <property_group name='datalinks' type='application'> <property_group name='aggr0' type='datalink-aggr'> <propval name='aggr-mode' type='astring' value='dlmp'/> <propval name='force' type='boolean' value='false'/> <propval name='key' type='count' value='0'/> <propval name='lacp-mode' type='astring' value='off'/> <propval name='lacp-timer' type='astring' value='short'/> <propval name='media' type='astring' value='Ethernet'/> <propval name='num-ports' type='count' value='2'/> <propval name='policy' type='astring' value='L4'/> <property name='ports' type='astring'> <astring_list> <value_node value='net0'/> <value_node value='net1'/> </astring_list> </property> </property_group> <property_group name='net0' type='datalink-phys'> <propval name='mtu' type='count' value='9000'/> </property_group> <property_group name='net1' type='datalink-phys'> <propval name='mtu' type='count' value='9000'/> </property_group> </property_group> </instance> </service> <service name='network/ip-interface-management' type='service' version='0'> <instance name='default' enabled='true'> <property_group name='interfaces' type='application'> <property_group name='aggr0' type='interface-ip'> <property name='address-family' type='astring'> <astring_list> <value_node value='ipv4'/> <value_node value='ipv6'/> </astring_list> </property> <property_group name='v6' type='address-addrconf'> <propval name='interface-id' type='astring' value='::'/> <propval name='prefixlen' type='count' value='0'/> <propval name='stateful' type='astring' value='no'/> <propval name='stateless' type='astring' value='no'/> </property_group> </property_group> </property_group> </instance> </service> </service_bundle>
Do the following on the AI server. Copy the three profile files to the AI server, then use installadm(8) to apply them to the original nettest-sparc configuration. I found installadm's ability to use multiple SC-profiles on a single service with client selection criteria to be extremely handy for this situation.
# installadm update-profile -n nettest-sparc \
    -p base-profile \
    -f base-profile.xml
# installadm create-profile -n nettest-sparc \
    -p labnet-profile \
    -f labnet-profile.xml
# installadm create-profile -n nettest-sparc \
    -p headbanger-profile \
    -f headbanger-profile.xml \
    -c mac=0:0:5e:0:53:d2
Do the following on the staging system. Log into the staging system's console and reinstall.
root@headbanger:~# halt
{0} ok boot net:dhcp - install
This repeats the installation of the staging system as before, but with the full network configuration automatically applied on first boot. After installation is complete, log in and verify the system is configured as intended.
Step 5. Repeat installation on other systems
Duplicating the network configuration on other systems is relatively easy now. In this example, I will recreate the configuration on node "beatnik". Do the following on the AI server. Start by copying the headbanger-profile.xml from step 4 to beatnik-profile.xml, then use an editor to provide beatnik's unique configuration details. The changed values are beatnik's nodename, probe-ip, and IP addresses in the profile below.
beatnik-profile.xml

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='profile' name='beatnik-profile'>
  <service name='system/identity' type='service' version='0'>
    <instance name='node' enabled='true'>
      <property_group name='config' type='application'>
        <propval name='nodename' type='astring' value='beatnik'/>
      </property_group>
    </instance>
  </service>
  <service name='network/datalink-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='datalinks' type='application'>
        <property_group name='aggr0' type='datalink-aggr'>
          <propval name='probe-ip' type='astring' value='198.51.100.64+198.51.100.1'/>
        </property_group>
      </property_group>
    </instance>
  </service>
  <service name='network/ip-interface-management' type='service' version='0'>
    <instance name='default' enabled='true'>
      <property_group name='interfaces' type='application'>
        <property_group name='aggr0' type='interface-ip'>
          <property_group name='v4' type='address-static'>
            <propval name='ipv4-address' type='astring' value='198.51.100.64'/>
            <propval name='prefixlen' type='count' value='24'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
          <property_group name='v6a' type='address-static'>
            <propval name='ipv6-address' type='astring' value='2001:db8:414:60bc::40'/>
            <propval name='prefixlen' type='count' value='64'/>
            <propval name='up' type='astring' value='yes'/>
          </property_group>
        </property_group>
      </property_group>
      <property_group name='static-routes' type='application'>
        <property_group name='route-1' type='static-route'>
          <propval name='destination' type='astring' value='default'/>
          <propval name='family' type='astring' value='inet'/>
          <propval name='gateway' type='astring' value='198.51.100.1'/>
        </property_group>
      </property_group>
    </instance>
  </service>
</service_bundle>

Use installadm(8) on the install server to load beatnik's installation details.

# installadm create-client -n nettest-sparc \
    -e 0:0:5e:0:53:24
# installadm create-profile -n nettest-sparc \
    -p beatnik-profile \
    -f beatnik-profile.xml \
    -c mac=0:0:5e:0:53:24

The following installadm list command shows how selection criteria are used to identify which SC-profile files are applied during the installation of each system.

# installadm list -p -n nettest-sparc
Service Name   Profile Name        Environment  Criteria
------------   ------------        -----------  --------
nettest-sparc  base-profile        install      none
                                   system
               labnet-profile      install      none
                                   system
               headbanger-profile  install      mac=0:0:5e:0:53:d2
                                   system
               beatnik-profile     install      mac=0:0:5e:0:53:24
                                   system

Do the following on other systems targeted for the same configuration. Log into the console and start the installation.

{0} ok boot net:dhcp - install

The full network configuration will automatically be applied on first boot of beatnik just as it was on headbanger.
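Once beatnik is up, a quick spot-check of the applied configuration might look like the following sketch, assuming the aggr0/net0/net1 names used in the profiles above:

root@beatnik:~# dladm show-aggr
root@beatnik:~# dladm show-linkprop -p mtu net0
root@beatnik:~# ipadm show-addr
root@beatnik:~# netstat -rn

The aggregation, the 9000-byte MTU, the static addresses, and the default route should all match what the profiles specify.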


Migrate Oracle Solaris 10 Physical Systems with ZFS root to Guest Domains in 3 Easy Steps

We are happy to announce that we now offer a new tool called ldmp2vz(1M) that can migrate Oracle Solaris 10 source systems with ZFS root. This tool has two commands: ldmp2vz_collect(1M) and ldmp2vz_convert(1M). Together they enable migration of a sun4u or sun4v Oracle Solaris 10 physical machine to an Oracle Solaris 10 guest domain on a system running Oracle Solaris 11 in the control domain, as shown in this diagram.

Migration of the system is accomplished in three easy steps.

1. Collection: Run ldmp2vz_collect(1M) on the source machine to collect the system image and configuration data. The resulting image file can be transmitted to the target system's control domain or stored on an NFS server that is available on the target machine.

2. Preparation: Create a new Oracle Solaris 10 guest domain on the SPARC target machine that is running Oracle Solaris 11 in the control domain, using any supported method. You can use ovmtdeploy(8), JumpStart, or install from the Oracle Solaris DVD.

3. Conversion: Run ldmp2vz_convert(1M) inside the newly created Oracle Solaris 10 guest domain on the target machine. This command uses the system image from the source machine and restores the root file system and configuration data using Live Upgrade technology. This process also replaces sun4u packages with the corresponding sun4v versions if needed.

The Oracle Solaris 10 guest domain on the target machine is now ready for application and data migration. We documented a comprehensive end-to-end migration use case with an Oracle Solaris 10 source system that has a Database zone running Oracle Single Instance Database 12.1.0.2 and a Web zone running the Apache web server. Application data on the source system is stored on ZFS, UFS, and ASM raw disks, and the document explains the process for transferring the data from the source machine to the Oracle Solaris 10 guest domain on the target system.

The ldmp2vz(1M) tool is available in Oracle VM Server for SPARC 3.2 Patch 151934-06. The migration use case document, titled Lift and Shift Guide – Migrating Workloads from Oracle Solaris 10 (ZFS) SPARC Systems to Oracle Solaris 10 Guest Domains, is available in the Lift and Shift Documentation Library.


Oracle Solaris Cluster

Cluster File System with ZFS: General Deployment Questions

Cluster File System with ZFS: Introduction and Configuration

Is there a picture or theory of operation explaining the working model of Cluster File System with ZFS?
Will this scale? (i.e. to 4 nodes, 40 nodes, 400 nodes, 4000 nodes?)
Is it safe for diversity? (i.e. 3 datacenters, located in different cities or continents?)
What is the performance implication when you add a node?
Can multiple nodes drop dead and the application continue to run without losing data?
I am trying to determine the use cases for clustering with ZFS... (i.e. is it a "Cloud Solution" where we could lose a datacenter and still have applications continue to run in the remaining datacenters?)

Is there a picture or theory of operation explaining the working model of Cluster File System with ZFS?

Here is one such diagram that explains the working model of Cluster File System, or Proxy File System (PxFS), with ZFS at a high level. From the diagram above, it can be observed that all the data flow/transactions go through one machine. PxFS is a layer that sits above the real file system: the PxFS client talks to the PxFS server layer on the primary node (whose state is checkpointed to secondary nodes), the PxFS server layer talks to the ZFS file system, and ZFS talks to the physical storage. Note that this is a very high level architecture and has many other components involved. The block that shows "Cluster Communication Subsystem" exists within each node of the cluster.

Will this scale? (i.e. to 4 nodes, 40 nodes, 400 nodes, 4000 nodes?)

See https://docs.oracle.com/cd/E69294_01/html/E69310/bacfbadd.html#CLCONbacbbbbb. Excerpts from the link above:

Supported Configurations. Depending on your platform, Oracle Solaris Cluster software supports the following configurations:
SPARC: Oracle Solaris Cluster software supports from one to 16 cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of SPARC-based systems. See Oracle Solaris Cluster Topologies in Concepts for Oracle Solaris Cluster 4.4 for the supported configurations.
x86: Oracle Solaris Cluster software supports from one to eight cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of x86-based systems. See Oracle Solaris Cluster Topologies in Concepts for Oracle Solaris Cluster 4.4 for the supported configurations.

As the diagram in the first answer shows, all client requests go through the PxFS server: the more client nodes there are, the more read/write operations land on the PxFS primary. Scalability is therefore limited by the primary server's bandwidth.

Is it safe for diversity? (i.e. 3 datacenters, located in different cities or continents?)

Yes. This feature can be used in campus clusters, where the nodes of a cluster can be located in different rooms, and with the Solaris Cluster Disaster Recovery Framework, which provides multi-cluster and multi-site capability. A cluster file system only exists within a cluster instance; depending on the distance, the nodes of one cluster instance can be spread between two locations. See https://docs.oracle.com/cd/E69294_01/html/E69325/z40001987941.html#scrolltoc. There is support for replication of cluster file systems based on ZFS between clusters which can be at different geographies, with service availability managed by the disaster recovery framework.
See https://docs.oracle.com/cd/E69294_01/html/E69466/index.html for more information.

What is the performance implication when you add a node?

Adding a node has no impact on the cluster file system as such; it depends on the activity being performed on the cluster file system. If the new node joins the cluster as a secondary, all the existing PxFS state for the file systems is checkpointed to the new secondary, which affects performance during the join. But if the new node joins when the cluster already has the desired number of secondaries, the new node is added as a spare, which means no existing data from the PxFS primary is checkpointed to it.

Can multiple nodes drop dead and the application continue to run without losing data?

File system access will continue across a node failure. Completion of file and directory operations during a node failure maintains the semantics of a local ZFS file system. Any file system state change that occurred on a globally mounted cluster file system is preserved after a failover operation. A multiple-node failure scenario behaves like a single-node failure.

I am trying to determine the use cases for clustering with ZFS... (i.e. is it a "Cloud Solution" where we could lose a datacenter and still have applications continue to run in the remaining datacenters?)

Some of the use cases are:
1) Live migration support for the Solaris Cluster HA-LDOM data service, for which global access is a requirement.
2) Shared file system access for multi-tier applications like SAP and Oracle E-Business Suite.
3) Shared file systems for multiple instances of the same application deployed in the cluster.
4) ZFS snapshot replication for cluster file systems with ZFS managed by the Solaris Cluster Disaster Recovery Framework.

As you can observe from the diagram, PxFS is a layer above the underlying file system. If you have a requirement for multiple nodes to be able to access 'the same' file system and performance is not the main requirement, Solaris Cluster File System is the way to go. If the application has heavy read/write requirements, then it is always better to use highly available local file systems.


Python Lambda Functions - Quick Notes

Lambda functions:

- are anonymous functions with no function name but with a function body or definition
- are useful when a small function is expected to be called once or just a few times
- are useful as arguments to a higher-order function that takes other functions as arguments; for example, to filter out some data from a list based on some condition, a lambda function might be more concise and simpler than writing a full-fledged function to accomplish the same task
- are defined using the lambda keyword in Python
- can accept any number of arguments (zero or more)
- can have only one expression, which gets evaluated and returned

Syntax:

lambda argument(s): expression

eg., A trivial example.

def areaofcircle(radius):
    return math.pi * radius * radius

can be written as:

circlearea = lambda radius: math.pi * radius * radius

In this example, the lambda function accepts a lone argument, radius, and evaluates the lambda expression (π * radius²) to calculate the area of a circle and return the result to the caller. The identifier "circlearea" is assigned the function object the lambda expression creates, so circlearea can be called like any normal function. eg., circlearea(5)

Another trivial example that converts the first character of each word in a list to uppercase.

>>> fullname = ["john doe", "richard roe", "janie q"]
>>> fullname
['john doe', 'richard roe', 'janie q']
>>> map(lambda name: name.title(), fullname)
['John Doe', 'Richard Roe', 'Janie Q']

(Note: the interactive output shown is from Python 2. In Python 3, map() returns an iterator, so wrap the call in list() to see the same list.)

Alternative URL: Python Lambda Functions @technopark02.blogspot.com
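Since filtering is mentioned above as a motivating case, here is a minimal sketch of lambdas with filter() and as the key to a higher-order function; the sample data is made up for illustration.

numbers = [3, 41, 6, 7, 22]

# keep only the even numbers; the lambda expresses the condition
evens = list(filter(lambda n: n % 2 == 0, numbers))
print(evens)  # [6, 22]

# sort names by last name; the lambda is the key function
fullname = ["john doe", "richard roe", "janie q"]
print(sorted(fullname, key=lambda name: name.split()[-1]))
# ['john doe', 'janie q', 'richard roe']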


Oracle Solaris Cluster support for Oracle E-Business Suite 12.2.8

We are very pleased to announce further support for Oracle E-Business Suite 12.2.8 on the following Oracle Solaris Cluster versions:

Oracle Solaris Cluster 4.3 SRU 3 or later on Oracle Solaris 11.3
Oracle Solaris Cluster 4.4 or later on Oracle Solaris 11.4

Deploying Oracle E-Business Suite 12.2 with Oracle Solaris Cluster provides the ability to take advantage of the zone cluster feature to provide security isolation, application fault isolation, as well as individual resource management within a zone cluster. Note: Oracle offers the ability to license much of its software, including the Oracle Database, based on the quantity of CPUs that will run the software. When performance goals can be met with only a subset of the computer's CPUs, it may be appropriate to limit licensing costs by using a processor-based licensing metric. The document Hard Partitioning With Oracle Solaris Zones explains the different Solaris features that can be used to limit software licensing costs when a processor-based metric is used.

In the deployment example below, the Oracle E-Business Suite 12.2.8 DB-Tier and Mid-Tiers have been deployed within separate zone clusters. Furthermore, the Primary Application Tier and its WebLogic Administration Server have been installed within an Oracle Solaris Cluster failover resource group using a logical host. This means that if the physical node or zone cluster node hosting the Primary Application Tier and WebLogic Administration Server fails, Oracle Solaris Cluster will fail over the Primary Application Tier to the other zone cluster node. Oracle Solaris Cluster will detect a zone cluster node or physical node failure within seconds and will automatically fail over the logical host to another zone cluster node, where the Primary Application Tier services are automatically started again. Typically, the WebLogic Administration Server is available again within 2-3 minutes after a zone cluster node or physical node failure.

The following diagram shows a typical Oracle E-Business 12.2 deployment on Oracle Solaris Cluster 4.3 with Oracle Solaris 11, using Oracle Solaris Zone Clusters.

With this deployment example, after Oracle E-Business Suite 12.2.8 was installed and configured on Oracle Solaris Cluster 4.3, a version update was performed from the latest version of 4.3, using the rolling upgrade method, to Oracle Solaris Cluster 4.4 and Oracle Solaris 11.4 in order to take advantage of new features offered by Oracle Solaris Cluster 4.4. One new feature is support for immutable zone clusters. Once running on Oracle Solaris Cluster 4.4 and Oracle Solaris 11.4, both Oracle Solaris zone clusters were configured as immutable zone clusters in order to restrict access with read-only roots. A read-only zone cluster can be configured by setting the file-mac-profile property. Using a read-only zone cluster root expands the secure runtime boundary.

The following example provides simple steps to configure immutable zone clusters for db-zc and app-zc. Note: a zone cluster node reboot is required to implement the immutable zone cluster configuration. However, in order to maintain the availability of Oracle E-Business Suite 12.2.8 services, a rolling zone cluster node reboot is performed. As shown in the diagram above, the Primary Application Tier contains the HTTP Server, which represents the single Web Entry Point. As such, when the app-zc zone cluster node running the HTTP Server is rebooted, a small HTTP Server interruption will occur.
Typically, it takes less than 1 minute before Oracle Solaris Cluster has restarted the HTTP Server on the surviving zone cluster node for app-zc. Note: already-connected clients may not notice that small outage, provided they did not press Enter during it, as their session remains connected.

Verify that zone clusters db-zc and app-zc are running on each physical node.

root@node1:~# clzc status db-zc app-zc

=== Zone Clusters ===

--- Zone Cluster Status ---

Name     Brand     Node Name   Zone Host Name   Status   Zone Status
----     -----     ---------   --------------   ------   -----------
db-zc    solaris   node1       vdbnode1         Online   Running
                   node2       vdbnode2         Online   Running
app-zc   solaris   node1       vappnode1        Online   Running
                   node2       vappnode2        Online   Running

root@node1:~#

Configure immutable zone clusters for db-zc and app-zc.

root@node1:~# clzc configure db-zc
clzc:db-zc> set file-mac-profile=fixed-configuration
clzc:db-zc> exit
root@node1:~#
root@node1:~# clzc configure app-zc
clzc:app-zc> set file-mac-profile=fixed-configuration
clzc:app-zc> exit
root@node1:~#

Selectively reboot each db-zc zone cluster node as an immutable zone cluster node.

Verify that the Oracle RAC Database instances are running.

root@node1:~# zlogin db-zc 'su - oracle -c "srvctl status db -d vis"'
Oracle Corporation      SunOS 5.11      11.4    November 2018
Instance VIS1 is running on node vdbnode1
Instance VIS2 is running on node vdbnode2
root@node1:~#

Reboot zone cluster db-zc on node1.

root@node1:~# clzc reboot -n node1 db-zc
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "db-zc"...
root@node1:~#

Once complete, verify that the Oracle RAC Database instances are running, as shown above, before continuing. Reboot zone cluster db-zc on node2.

root@node1:~# clzc reboot -n node2 db-zc
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "db-zc"...
root@node1:~#

Once complete, verify that the Oracle RAC Database instances are running, as shown above, before continuing.

Selectively reboot each app-zc zone cluster node as an immutable zone cluster node.

Verify that the Oracle E-Business Suite 12.2.8 services are running.

root@node1:~# clrs status -Z app-zc

Reboot zone cluster app-zc on node1.

root@node1:~# clzc reboot -n node1 app-zc
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "app-zc"...
root@node1:~#

Once complete, verify that the Oracle E-Business Suite 12.2.8 services are running, as shown above, before continuing. Reboot zone cluster app-zc on node2.

root@node1:~# clzc reboot -n node2 app-zc
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "app-zc"...
root@node1:~#

Once complete, immutable zone clusters providing read-only roots now underpin the Oracle E-Business Suite 12.2.8 services. As noted above, Oracle Solaris Cluster provides the ability to take advantage of the zone cluster feature to provide security isolation, application fault isolation, as well as individual resource management within a zone cluster. In this context, it is possible to consolidate other Oracle E-Business Suite deployments within separate zone clusters. A future article will expand this deployment example to include Business Continuity for Oracle E-Business Suite 12.2.
While there are many use cases that could be considered, a largely passive standby site in a primary/standby Disaster Recovery configuration could host development and test Oracle E-Business Suite deployments. Those deployments would be quiesced before a takeover or switchover brings the production Oracle E-Business Suite 12.2.8 to the standby site. Such other Oracle E-Business Suite deployments could be deployed within separate zone clusters and would be isolated from the production Oracle E-Business Suite 12.2.8 deployment. For more information please refer to the following:

Oracle Solaris Cluster 4.4 Release Notes
How to Deploy Oracle RAC on an Exclusive-IP Oracle Solaris Zones Cluster
How to Configure a Zone Cluster to Be Immutable
Oracle Solaris Cluster Data Service for Oracle E-Business Suite as of Release 12.2 Guide
Hard Partitioning With Oracle Solaris Zones
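As an aside to the rolling-reboot procedure above, it can be reassuring to confirm the immutable setting was committed before rebooting each node. A minimal sketch using clzc export to dump the zone cluster configuration; the grep output shown is assumed, not captured from the deployment above:

root@node1:~# clzc export db-zc | grep file-mac-profile
set file-mac-profile=fixed-configuration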


Perspectives

Programming in C: Few Tidbits #8

1) Function Pointers

Declaring Function Pointers

Similar to a variable declared as a pointer to some data type, a variable can also be declared to be a pointer to a function. Such a variable stores the address of a function that can later be called using that function pointer. In other words, function pointers point to executable code rather than to data like typical pointers. eg.,

void (*func_ptr)();

In the above declaration, func_ptr is a variable that can point to a function that takes no arguments and returns nothing (void). The parentheses around the function pointer cannot be removed; doing so makes the declaration a function that returns a void pointer. The declaration itself won't point to anything, so a value has to be assigned to the function pointer, which is typically the address of the target function to be executed.

Assigning Function Pointers

If a function by the name dummy was already defined, the following assignment makes the func_ptr variable point to the function dummy. eg.,

void dummy() { ; }
func_ptr = dummy;

In the above example, the function's name was used to assign that function's address to the function pointer. Using the address-of operator (&) is another way. eg.,

void dummy() { ; }
func_ptr = &dummy;

The two sample assignments above highlight the fact that, similar to arrays, a function's address can be obtained either by using the address operator (&) or by simply specifying the function name - hence the use of the address operator is optional. Here's an example proving that.

% cat funcaddr.c
#include <stdio.h>

void foo() { ; }

void main()
{
    printf("Address of function foo without using & operator = %p\n", foo);
    printf("Address of function foo using & operator = %p\n", &foo);
}

% cc -o funcaddr funcaddr.c
% ./funcaddr
Address of function foo without using & operator = 10b6c
Address of function foo using & operator = 10b6c

Using Function Pointers

Once we have a function pointer variable pointing to a function, we can call the function that it points to using that function pointer variable as if it were the actual function name. Dereferencing the function pointer is optional, similar to using the & operator during function pointer assignment; the dereferencing happens automatically if not done explicitly. eg., the following two function calls are equivalent, and exhibit the same behavior.

func_ptr();
(*func_ptr)();

Complete Example

Here is one final example for the sake of completeness. This example demonstrates the execution of a couple of arithmetic functions using function pointers. The same example also highlights the optional use of the & operator and pointer dereferencing.

% cat funcptr.c
#include <stdio.h>

int add(int first, int second)
{
    return (first + second);
}

int multiply(int first, int second)
{
    return (first * second);
}

void main()
{
    int (*func_ptr)(int, int);                      /* declaration */

    func_ptr = add;                                 /* assignment (auto func address) */
    printf("100+200 = %d\n", (*func_ptr)(100,200)); /* execution (dereferencing) */

    func_ptr = &multiply;                           /* assignment (func address using &) */
    printf("100*200 = %d\n", func_ptr(100,200));    /* execution (auto dereferencing) */
}

% cc -o funcptr funcptr.c
% ./funcptr
100+200 = 300
100*200 = 20000

Few Practical Uses of Function Pointers

Function pointers are convenient and useful while writing functions that sort data. The Standard C Library includes the qsort() function to sort data of any type (integers, floats, strings). The last argument to qsort() is a function pointer pointing to the comparison function (see the sketch at the end of this post).
Function pointers are useful for writing callback functions, where a function (executable code) is passed as an argument to another function that is expected to execute the argument (call back the function sent as an argument) at some point. In both examples above, function pointers are used to pass functions as arguments to other functions. In some cases function pointers may make the code cleaner and more readable; for example, an array of function pointers may simplify a large switch statement.

2) Printing Unicode Characters

Here's one possible way: make use of wide characters. Wide character strings can represent Unicode character values (code points), and the standard C library provides wide-character functions.

Include the header file wchar.h
Set a proper locale to support wide characters
Print the wide character(s) using standard printf and the "%ls" format specifier, or using wprintf to output formatted wide characters

The following rudimentary code sample prints random currency symbols and a name in Telugu script using both printf and wprintf function calls.

% cat -n unicode.c
     1  #include <wchar.h>
     2  #include <locale.h>
     3  #include <stdio.h>
     4
     5  int main()
     6  {
     7      setlocale(LC_ALL,"en_US.UTF-8");
     8      wprintf(L"\u20AC\t\u00A5\t\u00A3\t\u00A2\t\u20A3\t\u20A4");
     9      wchar_t wide[4]={ 0x0C38, 0x0C30, 0x0C33, 0 };
    10      printf("\n%ls", wide);
    11      wprintf(L"\n%ls", wide);
    12      return 0;
    13  }
% cc -o unicode unicode.c
% ./unicode
€       ¥       £       ¢       ₣       ₤
సరళ
సరళ

Here is one website where numerical values for various Unicode characters can be found.
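As a follow-up to the qsort() use case mentioned in the function pointer section above, here is a minimal, self-contained sketch; the array contents are made up for illustration.

#include <stdio.h>
#include <stdlib.h>

/* comparison function handed to qsort() through a function pointer */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids the overflow risk of x - y */
}

int main(void)
{
    int v[] = { 42, 7, 19, 3 };
    size_t n = sizeof v / sizeof v[0];

    qsort(v, n, sizeof v[0], cmp_int);  /* cmp_int decays to its address */

    for (size_t i = 0; i < n; i++)
        printf("%d ", v[i]);            /* prints: 3 7 19 42 */
    printf("\n");
    return 0;
}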


Perspectives

Using Solaris Zones on Oracle Cloud Infrastructure

Continuing my series of posts on how to use Oracle Solaris in Oracle Cloud Infrastructure (OCI), I'll next explore using Solaris Zones in OCI. In this post I assume you're somewhat familiar with zones already, as they've been around since we released Solaris 10 in 2005. Before diving in, there's a little terminology to review. The original zones introduced in Solaris 10 are known as non-global zones, which share a kernel with the global zone but otherwise appear to applications as a separate instance of Solaris. More recently, we introduced kernel zones in Solaris 11.2. These run a separate kernel, with specialized network and I/O paths, and behave more like a paravirtualized virtual machine. There are also Solaris 10 branded zones, which emulate a Solaris 10 environment on a Solaris 11 kernel. All of the brands of zones provide a Solaris-native virtualization environment with minimal overhead that can help you get more out of your OCI compute resources. The image at the start of this post from the Solaris 11.4 documentation shows a complex zones environment that might be built anywhere, including in OCI, but this post provides a basic how-to for getting started with non-global and kernel zones in the OCI environment.

Setup

As a first step, you need to deploy Solaris 11.4 as a bare metal or virtual machine instance in OCI. See my earlier post on how to import the Solaris 11.4 images into your tenancy if you haven't already done so. As you create the instance, I'd recommend customizing the boot volume size to a larger value than its default 50 GB in order to accommodate the extra storage required for the zones. After the instance is booted, you'll need to ssh in as the opc user and toggle the root pool's autoexpand property to allow ZFS to see the extra space beyond 50 GB (this is a workaround for an issue with autoexpand on the root pool):

# zpool set autoexpand=off rpool;sleep 15;zpool set autoexpand=on rpool

Next, you'll need to use the OCI console or CLI to add a second VNIC to the instance that we'll use for the zone's network access. When you're done, the instance's Attached VNICs section in the OCI console should look something like this:

Configuring a Non-Global Zone on Bare Metal

Start by setting the default vlan tag on net0 to ensure its traffic is tagged correctly:

# dladm set-linkprop -p default-tag=0 net0

Now it's time to configure the zone. The key requirement is to configure the anet resource to use the VLAN tag, IP and MAC addresses assigned to the VNIC by OCI, otherwise there won't be any network traffic allowed to or from the zone. Note that you need the prefix length from the VCN configuration. We also set the default router in the zone configuration to ensure it can reach resources outside its local link (by convention the router is the first address on the network).

# zonecfg -z ngz1
Use 'create' to begin configuring a new zone.
zonecfg:ngz1> create
create: Using system default template 'SYSdefault'
zonecfg:ngz1> info
zonename: ngz1
brand: solaris
anet 0:
        linkname: net0
        configure-allowed-address: true
zonecfg:ngz1> select anet 0
zonecfg:ngz1:anet> set allowed-address=100.106.200.4/23
zonecfg:ngz1:anet> set mac-address=00:00:17:00:BA:64
zonecfg:ngz1:anet> set vlan-id=1
zonecfg:ngz1:anet> set defrouter=100.106.200.1
zonecfg:ngz1:anet> end
zonecfg:ngz1> info
zonename: ngz1
brand: solaris
anet 0:
        linkname: net0
        allowed-address: 100.106.200.4/23
        configure-allowed-address: true
        defrouter: 100.106.200.1
        link-protection: "mac-nospoof, ip-nospoof"
        mac-address: 00:00:17:00:ba:64
        vlan-id: 1
zonecfg:ngz1> commit
zonecfg:ngz1> exit

Configuring a Non-Global Zone on a Virtual Machine

When using a VM as the global zone, the process is slightly different, as the hypervisor exposes the VNIC to the guest automatically. This makes the zone configuration somewhat simpler: we don't have to worry about VLAN tags (the hypervisor handles that) and can just delegate that physical link as a net resource and configure the allowed address and router. First, here's the OCI VNIC configuration for our VM:

To configure the zone:

# zonecfg -z ngz1
Use 'create' to begin configuring a new zone.
zonecfg:ngz1> create
create: Using system default template 'SYSdefault'
zonecfg:ngz1> info
zonename: ngz1
brand: solaris
anet 0:
        linkname: net0
        configure-allowed-address: true
zonecfg:ngz1> remove anet 0
zonecfg:ngz1> add net
zonecfg:ngz1:net> set physical=net1
zonecfg:ngz1:net> set allowed-address=100.106.196.11/23
zonecfg:ngz1:net> set defrouter=100.106.196.1
zonecfg:ngz1:net> end
zonecfg:ngz1> info
zonename: ngz1
brand: solaris
net 0:
        allowed-address: 100.106.196.11/23
        physical: net1
        defrouter: 100.106.196.1
zonecfg:ngz1> commit
zonecfg:ngz1> exit

Once the zone is configured, install it as usual with zoneadm -z ngz1 install, then boot it, log in via zlogin, and verify the network works.

Configuring a Kernel Zone

Kernel zones are not supported by OCI's virtual machines, so you'll need a bare metal host to run them. First ensure that you've installed the kernel zone brand package, which isn't included in the Solaris image for OCI:

# pkg install brand-solaris-kz

In this example, we're reusing the same VNIC for the kernel zone as was used for the non-global zone example above. Kernel zones don't support setting the IP configuration in the zone configuration, so we only set the MAC address and VLAN tag here. The correct IP address, network prefix, and router must be provided either in the SMF profile used to configure the kernel zone or interactively to the system configuration tool that will run at first boot of the kernel zone. Here's the zone configuration:

# zonecfg -z kz1
Use 'create' to begin configuring a new zone.
zonecfg:kz1> create -t SYSsolaris-kz
zonecfg:kz1> info
zonename: kz1
brand: solaris-kz
hostid: 0x2bab3170
anet 0:
        configure-allowed-address: true
        id: 0
device 0:
        storage.template: dev:/dev/zvol/dsk/%{global-rootzpool}/VARSHARE/zones/%{zonename}/disk%{id}
        storage: dev:/dev/zvol/dsk/rpool/VARSHARE/zones/kz1/disk0
        id: 0
        bootpri: 0
virtual-cpu:
        ncpus: 4
capped-memory:
        physical: 4G
        pagesize-policy: largest-available
zonecfg:kz1> select anet 0
zonecfg:kz1:anet> set mac-address=00:00:17:00:BA:64
zonecfg:kz1:anet> set vlan-id=1
zonecfg:kz1:anet> end
zonecfg:kz1> commit
zonecfg:kz1> exit

As before, proceed to install the zone and boot it using the zoneadm command.
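For completeness, a minimal sketch of that install-and-boot sequence for the kernel zone configured above; zlogin -C attaches to the console, where the system configuration tool runs on first boot:

# zoneadm -z kz1 install
# zoneadm -z kz1 boot
# zlogin -C kz1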
There's plenty more to explore in using zones with OCI; consult the zones documentation for more details on their capabilities.


Oracle Solaris Cluster

Oracle Solaris Cluster Centralized Installation

Oracle Solaris Cluster Centralized Installation is available with Oracle Solaris Cluster 4.4. The tool, called the "Centralized installer", provides complete cluster software package installation from one of the cluster nodes. Centralized installation offers a wizard-style installer, "clinstall", in both text-based and GUI modes to improve ease of use. The Centralized installer can be used during new cluster installs, when adding new nodes to an existing cluster, or to install additional cluster software on an existing cluster.

How to install Oracle Solaris Cluster packages using clinstall

Refer to https://docs.oracle.com/cd/E69294_01/html/E69313/gsrid.html#scrolltoc for prerequisites before you begin the installation.

Restore external access to remote procedure call (RPC) communication on all cluster nodes.

phys-schost# svccfg -s svc:/network/rpc/bind setprop config/local_only = false
phys-schost# svcadm refresh network/rpc/bind:default

Set the location of the Oracle Solaris Cluster 4.4 package repository on all nodes.

phys-schost# pkg set-publisher -p file:///mnt/repo

Install the package "ha-cluster/system/preinstall" on all nodes.

phys-schost# pkg install --accept ha-cluster/system/preinstall

Determine the cluster node that will be used as the control node to issue installation commands, and verify that the control node can reach each of the other cluster nodes to install. Authorize acceptance of cluster installation commands by the control node; perform this step on all the nodes.

phys-schost# clauth enable -n phys-control

From the control node, launch the clinstall installer. To launch the graphical user interface (GUI), use the -g option.

phys-control# clinstall -g

To launch the interactive text-based utility, use the -t option.

phys-control# clinstall -t

Example 1: Cluster initial installation from an IPS repository using the interactive text-based utility (clinstall -t).

Select initial installation.

phys-control# clinstall -t

*** Main Menu ***

1) Initial Installation
2) Additional Cluster Package Installation
?) Help with menu options
q) Quit

Option: 1

Specify the nodes on which the software packages need to be installed.

>>> Enter nodes to install Oracle Solaris Cluster package <<<

    This Oracle Solaris Cluster release supports up to 16 nodes.
    Type the names of the nodes you want to install with Oracle Solaris
    Cluster software packages. Enter one node name per line. When
    finished, type Control-D:

    Node name:  phys-control
    Node name (Control-D to finish):  phys-schost
    Node name (Control-D to finish):  ^D

This is the complete list of nodes:

    phys-control
    phys-schost

Is it correct (y/n) ?  y

Checking all nodes...
Checking "phys-control"...NAME (PUBLISHER)                          VERSION     IFO
ha-cluster/system/preinstall (ha-cluster)                   4.4-0.21.0  i--
Solaris 11.4.0.15.0 i386...ready
Checking "phys-schost"...Solaris 11.4.0.15.0 i386...ready
All nodes are ready for the cluster installation.

Press Enter to continue...

Select the installation source: either an IPS repository or an ISO file, which is usually downloaded from the Oracle support website.

Select the source of Oracle Solaris Cluster software

1) Install from IPS Repository
2) Install from ISO Image File

Option:  1

Enter the IPS repository URI [https://pkg.oracle.com/ha-cluster/release]:

Accessing the repository for the cluster components ... [|]

Select the cluster component that you want to install.
Select an Oracle Solaris Cluster component that you want to install:

   Package                        Description
1) ha-cluster-framework-minimal   OSC Framework minimal group package
2) ha-cluster-framework-full      OSC Framework full group package
3) ha-cluster-full                OSC full installation group package
4) ha-cluster/system/manager      OSC Manager
q) Done

Option: 1

The package, "ha-cluster-framework-minimal" is selected to install on the following nodes.
phys-control,phys-schost

Do you want to continue with the installation (y/n) ?  y

Upon completion, the cluster packages in the ha-cluster-framework-minimal group package are installed on phys-control and phys-schost. The administrator can now proceed to do the initial cluster configuration.

Example 2: Additional cluster package installation from an IPS repository using the text-based utility.

phys-control# clinstall -t

*** Main Menu ***

1) Initial Installation
2) Additional Cluster Package Installation
?) Help with menu options
q) Quit

Option:  2

>>> Enter nodes to install Oracle Solaris Cluster package <<<

    This Oracle Solaris Cluster release supports up to 16 nodes.
    Type the names of the nodes you want to install with Oracle Solaris
    Cluster software packages. Enter one node name per line. When
    finished, type Control-D:

    Node name:  phys-control
    Node name (Control-D to finish):  phys-schost
    Node name (Control-D to finish):  ^D

This is the complete list of nodes:

    phys-control
    phys-schost

Is it correct (y/n) ?  y

Checking all nodes...
Checking "phys-control"...NAME (PUBLISHER)    VERSION      IFO
ha-cluster/system/preinstall (ha-cluster)   4.4-0.21.0   i--
Solaris 11.4.0.15.0 i386...ready
Checking "phys-schost"...Solaris 11.4.0.15.0 i386...ready
All nodes are ready for the cluster installation.

Press Enter to continue...

Select the source of Oracle Solaris Cluster software

1) Install from IPS Repository
2) Install from ISO Image File

Option:  1

Enter the IPS repository URI [https://pkg.oracle.com/ha-cluster/release]:

Accessing the repository for the cluster components ... [|]

Select an Oracle Solaris Cluster component that you want to install

   Package                        Description
1) *data-service                  Select an Oracle Solaris Cluster data-service to install
2) ha-cluster-data-services-full  OSC Data Services full group package
3) ha-cluster-framework-full      OSC Framework full group package
4) ha-cluster/system/manager      OSC Manager
5) ha-cluster-geo-full            OSC Disaster Recovery Framework full group package
6) ha-cluster-full                OSC full installation group package
q) Done

Option:  5

The package, "ha-cluster-geo-full" is selected to install on the following nodes.
phys-control, phys-schost

Do you want to continue with the installation (y/n) ?  y

Upon completion, the Solaris Cluster Disaster Recovery Framework software is installed on phys-control and phys-schost. The administrator can now proceed to configure the disaster recovery framework on the cluster.

Example 3: Cluster initial installation from an ISO image using the graphical user interface (GUI), clinstall with the -g option.

phys-control# clinstall -g

Upon completion, the cluster packages in the ha-cluster-full group package are installed on phys-control and phys-schost. The administrator can now proceed to do the initial cluster configuration.
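As a quick sanity check after any of the examples above, you can confirm that the chosen group package landed on each node. A minimal sketch, assuming the Example 1 package and node names:

phys-control# pkg list ha-cluster-framework-minimal
phys-schost# pkg list ha-cluster-framework-minimal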


Oracle Solaris 10

Oracle Solaris 10 in the Oracle Cloud Infrastructure

With the release of the October 2018 Solaris 10 Extended Support Recommended patch set, you can now run Solaris 10 in Oracle Cloud. I thought it would be good to document the main steps for getting an image you can run in OCI. The high-level steps are:

Create a Solaris 10 image using VirtualBox and get it patched with the October 2018 patch set
Unconfigure it and shut it down
Upload it to the Oracle Cloud object storage
Create a custom image from that object you've just uploaded
Create a compute instance of the custom image
Boot it up and perform configuration tasks

Creating a Solaris 10 image

1) Download Solaris 10. I chose to download the Oracle VM VirtualBox template, which is preconfigured and installed with Solaris 10 1/13, the last update release of Solaris 10. You could equally install from the ISO; just make sure you pick vmdk as the disk image format.

2) Install VirtualBox on any suitable x86 host and operating system. I'm using Oracle Linux 7, which is configured to kickstart in our lab, but you could download it from Oracle at https://www.oracle.com/linux. One reason I picked Oracle Linux 7 is to make it easier to run the OCI tools for uploading images to Oracle Cloud Infrastructure. VirtualBox can be downloaded from http://virtualbox.org, or better, it's in the Oracle Linux 7 yum repositories; just make sure the addons and developer repos are enabled in the file /etc/yum.repos.d/public-yum-ol7.repo, then run

# yum install VirtualBox-5.2.x86_64

3) Import the VirtualBox template you downloaded above using the import appliance menu. On the Import Virtual Appliance dialog I've increased the amount of memory and changed the location of the root disk. I also changed the imported appliance to run with USB 1.1 as I haven't got the extension pack installed, but you probably should install that anyway. When it comes up it'll be using dhcp, so you should be able to just select dhcp during the usual sysconfig phase, select the timezone and root password, and it'll eventually come up with a desktop login. Now you can see we've got Solaris 10 up and running. For good measure I updated the guest additions. They're installed anyway, but at an older version, so it works better with the new versions.

4) The next step is to download the recommended patch set, specifically the October 2018 patch set, which contains some fixes needed to work in OCI. These are available from https://updates.oracle.com/download/28797770.html. It is 2.1GB in size, so it'll take some time. Then we simply extract the zip archive, change directory into the directory you've just extracted it to, and run

./installpatchset --s10patchset

(If I were doing this on a real machine I'd probably create a new Live Upgrade boot environment and patch that, having scoured the README.) Currently 403 patches are analysed; that will change over time.

Shut it down and prepare it for use in OCI

When you reboot, be sure to read the messages at the end of the patch install phase, and the README. Particularly this section: "If the "-B" Live Upgrade flag is used, then the luactivate command will need to be run, and either an init(1M) or a shutdown(1M) will be needed to complete activation of the boot environment. A reboot(1M) will not complete activation of a boot environment following an luactivate."

Two more things to do before we shut down though: first, remove the SUNWvboxguest package; second, sys-unconfig the VM, so we get to configure it properly on reboot.
# pkgrm SUNWvboxguest
# sys-unconfig

Upload it to Oracle Cloud object storage

So we now have a suitably patched Solaris 10 image, ready to upload to Oracle Cloud. To do this you need to have the OCI tools installed on your Linux machine. Doing that will be the subject of another blog, but there's pretty good documentation here too (which is all I followed to create this blog). Assuming you now have the OCI tools working and in your path, you upload the disk image using this command:

$ oci os object put --bucket-name images --file Solaris10_1-13-disk1.vmdk

It'll use the keys you've configured and uploaded to the console, and it is surprisingly quick to upload the image - given this disk file is ~20GB in size, it only took about 10 minutes to upload.

[oci@ol7-1]# oci os object put --bucket-name images --file Solaris10_1-13-disk1.vmdk
Upload ID: c94aaf0d-a0e2-d3d1-7fb6-5aed125c3921
Split file into 145 parts for upload.
Uploading object  [###############################-----]   87%  0d 00:01:26

Once it's there you can see it in the object storage pane of the OCI console. And critically, you need to get the object storage URI to allow you to create a custom image of it.

Create a custom image

Then go to the compute section and create a custom image, selecting the "Emulated" mode and giving it the object storage URI. It takes a while for the image to be created, but once it is you can deploy it multiple times.

Create a compute instance

Now go to the Compute menu and create an instance. The key things about this stage are to select the custom image you just created and an appropriate VM shape. You will then be shown a page like this one. And finally, you can use VNC to connect to the console by using your RSA public key and creating a console connection. If you select the "connect with VNC" option from the 3 dots on the right of the console connection, it gives you the command to set up an ssh tunnel from your system to the console.

Boot it up and perform configuration tasks

You connect with vncviewer :5900 and you'll see the VM has panicked. Solaris 10 uses an older version of grub, which can't easily find the root disk if the device configuration changes, so we need to trick it into finding the rpool. To do this you can boot the failsafe archive and mount the rpool. Then you touch /a/reconfigure and reboot; next time through, the system should boot up correctly. It does take a while after loading the ORACLE SOLARIS image for the system to actually boot, so don't panic if you see a blue screen for a while before seeing the SunOS Release boot messages. Of course we remembered to sys-unconfig before shutting the VM down, so we will have to run through the sysconfig setup. Just remember to set it up as DHCP. You do get asked for nameservice info; you will probably want to use the local DNS resolver at 169.254.169.254. Oracle Cloud also has lots of more specific options for managing your own DNS records and zones. If you forget to remove the SUNWvboxguest package, the X server will fail to start. And there you have it, Oracle Solaris 10 running in OCI.
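For reference, a sketch of that failsafe recovery sequence. The exact prompts vary; the assumption here is that the failsafe boot discovers the installed root pool and offers to mount it on /a, as Solaris 10 failsafe normally does:

# (from the GRUB menu, choose the Solaris failsafe entry)
# (answer 'y' when asked to mount the discovered boot environment read-write on /a)
# touch /a/reconfigure
# reboot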


Oracle Solaris Cluster

Cluster File System with ZFS - Introduction and Configuration

Oracle Solaris Cluster, as of the 4.3 release, has support for:

Cluster File System with UFS
ZFS as a failover file system

For application deployments that require accessing the same file system across multiple nodes, ZFS as a failover file system is not a suitable solution. ZFS is the preferred data storage management on Oracle Solaris 11 and has many advantages over UFS. With Oracle Solaris Cluster 4.4, you can now have both ZFS and global access to ZFS file systems: Oracle Solaris Cluster 4.4 has added support for Cluster File System with ZFS. With this new feature, you can make ZFS file systems accessible from multiple nodes and run applications from the same file systems simultaneously on those nodes. It must be noted that a zpool for globally mounted ZFS file systems does not actually mean a global ZFS pool; instead, there is a Cluster File System layer on top of ZFS that makes the file systems of the ZFS pool globally accessible. The following procedures explain and illustrate a couple of methods to bring up this configuration.

How to create a zpool for globally mounted ZFS file systems:

1) Identify the shared device to be used for ZFS pool creation.

To configure a zpool for globally mounted ZFS file systems, choose one or more multi-hosted devices from the output of the cldevice show command.

phys-schost-1# cldevice show | grep Device

In the following example, the entries for DID devices /dev/did/rdsk/d1 and /dev/did/rdsk/d4 show that those devices are connected only to phys-schost-1 and phys-schost-2 respectively, while /dev/did/rdsk/d2 and /dev/did/rdsk/d3 are accessible by both nodes of this two-node cluster, phys-schost-1 and phys-schost-2. In this example, DID device /dev/did/rdsk/d3 with device name c1t6d0 will be used for global access by both nodes.

# cldevice show | grep Device

=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d1
  Full Device Path:                               phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name:                                /dev/did/rdsk/d2
  Full Device Path:                               phys-schost-1:/dev/rdsk/c0t6d0
  Full Device Path:                               phys-schost-2:/dev/rdsk/c0t6d0
DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                               phys-schost-1:/dev/rdsk/c1t6d0
  Full Device Path:                               phys-schost-2:/dev/rdsk/c1t6d0
DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                               phys-schost-2:/dev/rdsk/c1t6d1

2) Create a ZFS pool for the DID device(s) that you chose.

phys-schost-1# zpool create HAzpool c1t6d0
phys-schost-1# zpool list
NAME      SIZE  ALLOC   FREE  CAP  DEDUP    HEALTH  ALTROOT
HAzpool  49.8G  2.22G  47.5G   4%  1.00x    ONLINE  /

3) Create ZFS file systems on the pool.

phys-schost-1# zfs create -o mountpoint=/global/fs1 HAzpool/fs1
phys-schost-1# zfs create -o mountpoint=/global/fs2 HAzpool/fs2

4) Create files to show global access of the file systems.

Copy some files to the newly created file systems. These files will be used in the procedures below to demonstrate that the file systems are globally accessible by all cluster nodes.

phys-schost-1# cp /usr/bin/ls /global/fs1/
phys-schost-1# cp /usr/bin/date /global/fs2/
phys-schost-1# ls -al /global/fs1/ /global/fs2/
/global/fs1/:
total 120
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       57576 Oct 8 23:22 ls
/global/fs2/:
total 7
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct 8 23:22 date

At this point the ZFS file systems of the zpool are accessible only on the node where the zpool is imported. There are two ways of configuring a zpool for globally mounted ZFS file systems.

Method-1: Using a device group:

You would use this method when the requirement is only to provide global access to the ZFS file systems, and it is not yet known how HA services will be created using the file systems or which cluster resource groups will be created.

1) Create a device group of the same name as the zpool you created in step 2 above, of type zpool, with poolaccess set to global.

phys-schost-1# cldevicegroup create -p poolaccess=global -n \
phys-schost-1,phys-schost-2 -t zpool HAzpool

Note: The device group must have the same name, HAzpool, as chosen for the pool. The poolaccess property is set to global to indicate that the file systems of this pool will be globally accessible across the nodes of the cluster.

2) Bring the device group online.

phys-schost-1# cldevicegroup online HAzpool

3) Verify the configuration.

phys-schost-1# cldevicegroup show

=== Device Groups ===

Device Group Name:                              HAzpool
  Type:                                           ZPOOL
  failback:                                       false
  Node List:                                      phys-schost-1, phys-schost-2
  preferenced:                                    false
  autogen:                                        false
  numsecondaries:                                 1
  ZFS pool name:                                  HAzpool
  poolaccess:                                     global
  readonly:                                       false
  import-at-boot:                                 false
  searchpaths:                                    /dev/dsk

phys-schost-1# cldevicegroup status

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary         Secondary       Status
-----------------     -------         ---------       ------
HAzpool               phys-schost-1   phys-schost-2   Online

In these configurations, the zpool is imported on the node that is primary for the zpool device group, but the file systems in the zpool are mounted globally. Execute the files copied in step 4 of the previous section from a different node. It can be observed that the file systems are mounted globally and accessible across all nodes.

phys-schost-2# /global/fs1/ls -al /global/fs2
total 56
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct 8 23:22 date
phys-schost-2# /global/fs2/date
Fri Oct 9 04:08:59 PDT 2018

You can also verify that a newly created ZFS file system is immediately accessible from all nodes by executing the commands below. From the cldevicegroup status above, it can be observed that phys-schost-1 is the primary node for the device group. Execute the below command on the primary node:

phys-schost-1# zfs create -o mountpoint=/global/fs3 HAzpool/fs3

Then from a different node, verify that the file system is accessible.
phys-schost-2# df -h /global/fs3
Filesystem             Size   Used  Available Capacity  Mounted on
HAzpool/fs3             47G    40K        47G     1%    /global/fs3

Method-2: Using an HAStoragePlus resource:

You would typically use this method when you have planned how HA services in resource groups will use the globally mounted file systems, and you expect dependencies from the resources managing the application on a resource managing the file systems. A device group of type zpool with poolaccess set to global is created when an HAStoragePlus resource is created and the GlobalZpools property is defined, if such a device group is not already created.

1) Create an HAStoragePlus resource for a zpool for globally mounted file systems and bring it online.

Note: The resource group can be scalable or failover as needed by the configuration.

phys-schost-1# clresourcegroup create hasp-rg
phys-schost-1# clresource create -t HAStoragePlus -p \
GlobalZpools=HAzpool -g hasp-rg hasp-rs
phys-schost-1# clresourcegroup online -eM hasp-rg

2) Verify the configuration.

phys-schost-1# clrs status hasp-rs

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
hasp-rs             phys-schost-1  Online       Online
                    phys-schost-2  Offline      Offline

phys-schost-1# cldevicegroup show

=== Device Groups ===

Device Group Name:                              HAzpool
  Type:                                           ZPOOL
  failback:                                       false
  Node List:                                      phys-schost-1, phys-schost-2
  preferenced:                                    false
  autogen:                                        true
  numsecondaries:                                 1
  ZFS pool name:                                  HAzpool
  poolaccess:                                     global
  readonly:                                       false
  import-at-boot:                                 false
  searchpaths:                                    /dev/dsk

phys-schost-1# cldevicegroup status

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary         Secondary       Status
-----------------     -------         ---------       ------
HAzpool               phys-schost-1   phys-schost-2   Online

Execute the files copied in step 4 of the previous section from a different node. It can be observed that the file systems are mounted globally and accessible across all nodes.

phys-schost-2# /global/fs1/ls -al /global/fs2
total 56
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct 8 23:22 date
phys-schost-2# /global/fs2/date
Fri Oct 9 04:23:26 PDT 2018

You can also verify that a newly created ZFS file system is immediately accessible from all nodes by executing the commands below. From the cldevicegroup status above, it can be observed that phys-schost-1 is the primary node for the device group. Execute the below command on the primary node:

phys-schost-1# zfs create -o mountpoint=/global/fs3 HAzpool/fs3

Then from a different node, verify that the file system is accessible.
phys-schost-2# df -h /global/fs3 file system             Size   Used  Available Capacity  Mounted on HAzpool/fs3             47G    40K        47G     1%    /global/fs3 How to configure a zpool for globally mounted ZFS file systems for an already registered device group: There might be situations where a system administrator had used Method 1 to meet the global access requirement for application admins to install the application but later finds a requirement for an HAStoragePlus resource for HA services deployment. In those situations, there is no need to undo the steps done in Method 1 and redo it over in Method 2. HAStoragePlus also supports zpools for globally mounted ZFS file systems for already manually registered zpool device groups. The following steps illustrate how this configuration can be achieved. 1) Create HAStoragePlus resource with an existing zpool for globally mounted ZFS file systems and bring it online. Note: The resource group can be scalable or failover as needed by the configuration. phys-schost-1# clresourcegroup create hasp-rg phys-schost-1# clresource create -t HAStoragePlus -p \ GlobalZpools=HAzpool -g hasp-rg hasp-rs phys-schost-1# clresourcegroup online -eM hasp-rg 2) Verify the configuration. phys-schost-1~# clresource show -p GlobalZpools hasp-rs   GlobalZpools:                                 HAzpool phys-schost-1# cldevicegroup status === Cluster Device Groups === --- Device Group Status --- Device Group Name     Primary     Secondary     Status -----------------     -------     ---------     ------ HAzpool               phys-schost-1     phys-schost-2       Online Note: When the HAStoragePlus resource is deleted, the zpool device group is not automatically deleted. While the zpool device group exists, the ZFS filesystems in the zpool will be mounted globally when the device group is brought online (# cldevicegroup online HAzpool). For more information, see How to Configure a ZFS Storage Pool for Cluster-wide Global Access Without HAStoragePlus For information on how to use HAStoragePlus to manage ZFS pools for global access by data services, see  How to Set Up an HAStoragePlus Resource for a Global ZFS Storage Pool in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.
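To see the behavior described in that Note in action, here is a minimal sketch using the resource and group names from the examples above (standard Oracle Solaris Cluster commands; verify the exact sequence against your release):

# Remove the HAStoragePlus resource; the zpool device group remains.
phys-schost-1# clresource disable hasp-rs
phys-schost-1# clresource delete hasp-rs
phys-schost-1# clresourcegroup delete hasp-rg

# The file systems can still be mounted globally through the device group:
phys-schost-1# cldevicegroup online HAzpool

# The device group primary can be switched while global access is retained:
phys-schost-1# cldevicegroup switch -n phys-schost-2 HAzpool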

Oracle Solaris Cluster as of the 4.3 release has support for Cluster File System with UFS and ZFS as a failover file system. For application deployments that require accessing the same file system across multiple...

Oracle Announces 2018 Oracle Excellence Awards – Congratulations to our "Leadership in Infrastructure Transformation" Winners

We are pleased to announce the 2018 Oracle Excellence Awards "Leadership in Infrastructure Transformation" winners. This elite group of recipients includes customers and partners who are using Oracle Infrastructure Technologies to accelerate innovation and drive business transformation by increasing agility, lowering costs, and reducing IT complexity. This year, our 10 award recipients were selected from among hundreds of nominations. The winners represent 5 different countries (Austria, Russia, Turkey, Sweden, and the United States) and 6 different industries (Communications, Financial, Government, Manufacturing, Technology, and Transportation). Winners must use at least one, or a combination, of the following for category qualification: Oracle Linux, Oracle Solaris, Oracle Virtualization (VM, VirtualBox), Oracle Private Cloud Appliance, Oracle SuperCluster, Oracle SPARC, or Oracle Storage (Tape/Disk). Oracle is pleased to honor these leaders who have delivered value to their organizations through the use of multiple Oracle technologies, resulting in reduced cost of IT operations, improved time to deployment, and performance and end-user productivity gains. This year's winners are Michael Polepchuk, Deputy Chief Information Officer, BCS Global Markets; Brian Young, Vice President, Cerner; Brian Bream, CTO, Collier IT; Rudolf Rotheneder, CEO, cons4u GmbH; Heidi Ratini, Senior Director of Engineering, IT Convergence; Philip Adams, Chief Technology Officer, Lawrence Livermore National Labs; JK Pareek, Vice President, Global IT and CIO, Nidec Americas Holding Corporation; Baris Findik, CIO, Pegasus Airlines; and Michael Myhrén, Senior DBA and Senior Systems Engineer, and Charles Mongeon, Vice President, Data Center Solutions and Services (TELUS Corporation). More information on these winners can be found at https://www.oracle.com/corporate/awards/leadership-in-infrastructure-transformation/winners.html

We are pleased to announce the 2018 Oracle Excellence Awards “Leadership in Infrastructure Transformation" Winners. This elite group of recipients includes customers and partners who are using Oracle...

Oracle Solaris/SPARC and Cloud Adjacent Solutions at Oracle Open World 2018

If you want to know which sessions at Oracle Open World 2018 are about Oracle Solaris and SPARC, you can find them in the Oracle Solaris and SPARC Focus On document. Monday at 11:30am, in Moscone South - Room 214, Bill Nesheim, Senior Vice President, Oracle Solaris Development, Masood Heydari, SVP, Hardware Development, and Brian Bream, Chief Technology Officer, Collier-IT, will be speaking about what's happening with Oracle Solaris and SPARC, and how they are being used in real-world, mission-critical deployments that place Oracle Solaris/SPARC applications cloud adjacent, giving you all the consistency, simplicity, and security of Oracle Solaris/SPARC along with the monetary advantages of cloud.

Oracle Solaris and SPARC Update: Security, Simplicity, Performance [PRM3358]
Oracle Solaris and SPARC technologies are developed through Oracle's unique vision of coengineering operating systems and hardware together with Oracle Database, Java, and enterprise applications. Discover the advanced features that secure application data with less effort, implement end-to-end encryption, streamline compliance assessment, simplify management of virtual machines, and accelerate performance for Oracle Database and middleware. In this session learn how SPARC/Solaris engineered systems and servers deliver continuous innovation and investment protection to organizations in their journey to the cloud, see examples of cloud-ready infrastructure, and learn Oracle's strategy for future enhancements for Oracle Solaris and SPARC systems.
Masood Heydari, SVP, Hardware Development, Oracle
Brian Bream, Chief Technology Officer, Vaske Computer, Inc.
Bill Nesheim, Senior Vice President, Oracle Solaris Development, Oracle
Monday, Oct 22, 11:30 a.m. - 12:15 p.m. | Moscone South - Room 206

To find out more, join us at Oracle Open World 2018!

If you want to know what sessions at Oracle Open World 2018 are about Oracle Solaris and SPARC, you can find them in the Oracle Solaris and SPARC Focus on Document. Monday at 11:30am, in Moscone South...

Oracle Solaris Security at OOW18

Oracle Open World 2018 is next week! We're busy creating session slides, making Hands-on-Labs and constructing demos. But I wanted to take a few moments to highlight one of our sessions. Darren Moffat, our Oracle Solaris Architect, will be joined by Thorsten Muehlmann from VP Bank AG. They will be speaking about how Oracle Solaris helps simplify securing your data center and the new security capabilities of Oracle Solaris 11.4:

Oracle Solaris: Simplifying Security and Compliance for On-Premises and the Cloud [PRO1787]
Oracle Solaris is engineered for security at every level. It allows you to mitigate risk and prove on-premises and cloud compliance easily, so you can spend time innovating. In this session learn how Oracle combines the power of industry-standard security features, unique security and antimalware capabilities, and multinode compliance management tools for low-risk application deployments and cloud infrastructure.
Thorsten Muehlmann, Senior Systems Engineer, VP Bank AG
Darren Moffat, Senior Software Architect, Oracle
Monday, Oct 22, 10:30 a.m. - 11:15 a.m. | Moscone South - Room 216

Want to know more about Oracle Solaris Application Sandboxing, multinode compliance or per-file auditing, or how a customer uses Oracle Solaris security capabilities in a real-world, mission-critical data center? Join them on Monday at 10:30am at Oracle Open World!

Oracle Open World 2018 is next week! We're busy creating session slides, making Hands-on-Labs and constructing demos. But I wanted to take a few moments to highlight one of our sessions. Darren Moffat,...

Oracle Solaris Cluster

Exclusive-IP Zone Cluster - Automatic Network Configuration

Prior to the 4.4 release of Oracle Solaris Cluster (OSC), it was not possible to perform automatic public network configuration for an Exclusive-IP Zone Cluster (ZC) by specifying a System Configuration (SC) profile to the clzonecluster 'install' command. To illustrate this, let us consider installation of a typical ZC with a separate IP stack and two data-links, to achieve the network redundancy needed to run HA services. The data-links, which are vnics previously created in the global zone, are configured as part of an IPMP group that is needed to host the LogicalHostname or SharedAddress resource IP address. The zone cluster was configured as shown by the clzc 'export' command output below.

root@clusterhost1:~# clzc export zc1
create -b
set zonepath=/zones/zc1
set brand=solaris
set autoboot=false
set enable_priv_net=true
set enable_scalable_svc=false
set file-mac-profile=none
set ip-type=exclusive
add net
set address=192.168.10.10
set physical=auto
end
add attr
set name=cluster
set type=boolean
set value=true
end
add node
set physical-host=clusterhost1
set hostname=zc1-host-1
add net
set physical=vnic3
end
add net
set physical=vnic0
end
add privnet
set physical=vnic1
end
add privnet
set physical=vnic2
end
end
add node
set physical-host=clusterhost2
set hostname=zc1-host-2
add net
set physical=vnic3
end
add net
set physical=vnic0
end
add privnet
set physical=vnic1
end
add privnet
set physical=vnic2
end
end

In OSC 4.3, after installing the ZC with an SC profile and booting it up, the ZC will be in the Online Running state but without the public network configuration. The following ipadm(1M) commands are needed to set up the static network configuration in each non-global zone of the ZC.

root@zc1-host-1:~# ipadm create-ip vnic0
root@zc1-host-1:~# ipadm create-ip vnic3
root@zc1-host-1:~# ipadm create-ipmp -i vnic0 -i vnic3 sc_ipmp0
root@zc1-host-1:~# ipadm create-addr -T static -a 192.168.10.11/24 sc_ipmp0/v4

root@zc1-host-2:~# ipadm create-ip vnic0
root@zc1-host-2:~# ipadm create-ip vnic3
root@zc1-host-2:~# ipadm create-ipmp -i vnic0 -i vnic3 sc_ipmp0
root@zc1-host-2:~# ipadm create-addr -T static -a 192.168.10.12/24 sc_ipmp0/v4

In OSC 4.4 it is now possible to build an SC profile such that no manual steps are required to complete the network configuration, and all the zones of the ZC can come up in the "Online Running" state upon first boot of the ZC. How is this made possible in OSC 4.4 on Solaris 11.4? The clzonecluster(8CL) command can recognize sections of the SC profile XML that are applicable to individual zones of the ZC when those sections are placed within <instances_for_node node_name="ZCNodeName"></instances_for_node> XML tags. Sections of the SC profile that are not within these XML tags apply to all the zones of the ZC. Solaris 11.4 now supports arbitrarily complex network configurations in SC profiles. The following is a snippet of the SC profile that can be used for our typical ZC configuration; it is derived from the template /usr/share/auto_install/sc_profiles/ipmp_network.xml. The section of the SC profile which is common to all the zones of the ZC has not been included in this snippet.

<instances_for_node node_name="zc1-host-1">
  <service version="1" name="system/identity">
    <instance enabled="true" name="node">
      <property_group name="config">
        <propval name="nodename" value="zc1-host-1"/>
      </property_group>
    </instance>
  </service>
  <service name="network/ip-interface-management" version="1" type="service">
    <instance name="default" enabled="true">
      <property_group name="interfaces" type="application">
        <!-- vnic0 interface configuration -->
        <property_group name="vnic0" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- vnic3 interface configuration -->
        <property_group name="vnic3" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- IPMP interface configuration -->
        <property_group name="sc_ipmp0" type="interface-ipmp">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <property name="under-interfaces" type="astring">
            <astring_list>
              <value_node value="vnic0"/>
              <value_node value="vnic3"/>
            </astring_list>
          </property>
          <!-- IPv4 static address -->
          <property_group name="data1" type="address-static">
            <propval name="ipv4-address" type="astring" value="192.168.10.11"/>
            <propval name="prefixlen" type="count" value="24"/>
            <propval name="up" type="astring" value="yes"/>
          </property_group>
        </property_group>
      </property_group>
    </instance>
  </service>
</instances_for_node>
<instances_for_node node_name="zc1-host-2">
  <service version="1" name="system/identity">
    <instance enabled="true" name="node">
      <property_group name="config">
        <propval name="nodename" value="zc1-host-2"/>
      </property_group>
    </instance>
  </service>
  <service name="network/ip-interface-management" version="1" type="service">
    <instance name="default" enabled="true">
      <property_group name="interfaces" type="application">
        <!-- vnic0 interface configuration -->
        <property_group name="vnic0" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- vnic3 interface configuration -->
        <property_group name="vnic3" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- IPMP interface configuration -->
        <property_group name="sc_ipmp0" type="interface-ipmp">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <property name="under-interfaces" type="astring">
            <astring_list>
              <value_node value="vnic0"/>
              <value_node value="vnic3"/>
            </astring_list>
          </property>
          <!-- IPv4 static address -->
          <property_group name="data1" type="address-static">
            <propval name="ipv4-address" type="astring" value="192.168.10.12"/>
            <propval name="prefixlen" type="count" value="24"/>
            <propval name="up" type="astring" value="yes"/>
          </property_group>
        </property_group>
      </property_group>
    </instance>
  </service>
</instances_for_node>

You can find the complete SC profile here. Some fields, like the zone host names, IP addresses, and encrypted passwords, need to be substituted in this file. Other changes can be made to this profile for a different configuration, e.g., configuring an Active-Standby IPMP group instead of the Active-Active IPMP configuration shown in this example. We can use this SC profile to install and boot our ZC as shown below.

root@clusterhost1:~# clzc install -c /var/tmp/zc1_config.xml zc1
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc1"...
root@clusterhost1:~# clzc boot zc1
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zc1"...

After a short duration, we will see the ZC in the Online Running state.

root@clusterhost1:~# clzc status zc1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name      Zone Host Name   Status   Zone Status
----   -----     ---------      --------------   ------   -----------
zc1    solaris   clusterhost1   zc1-host-1       Online   Running
                 clusterhost2   zc1-host-2       Online   Running

root@zc1-host-1:~# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
clprivnet1        ip         ok           --         --
   clprivnet1/?   static     ok           --         172.16.3.65/26
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         127.0.0.1/8
   lo0/v6         static     ok           --         ::1/128
sc_ipmp0          ipmp       ok           --         --
   sc_ipmp0/data1 static     ok           --         192.168.10.11/24
vnic0             ip         ok           sc_ipmp0   --
vnic1             ip         ok           --         --
   vnic1/?        static     ok           --         172.16.3.129/26
vnic2             ip         ok           --         --
   vnic2/?        static     ok           --         172.16.3.193/26
vnic3             ip         ok           sc_ipmp0   --

root@zc1-host-2:~# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
clprivnet1        ip         ok           --         --
   clprivnet1/?   static     ok           --         172.16.3.66/26
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         127.0.0.1/8
   lo0/v6         static     ok           --         ::1/128
sc_ipmp0          ipmp       ok           --         --
   sc_ipmp0/data1 static     ok           --         192.168.10.12/24
vnic0             ip         ok           sc_ipmp0   --
vnic1             ip         ok           --         --
   vnic1/?        static     ok           --         172.16.3.130/26
vnic2             ip         ok           --         --
   vnic2/?        static     ok           --         172.16.3.194/26
vnic3             ip         ok           sc_ipmp0   --

With this feature it is easy to create SC profile templates that can be used for fast deployment of ZCs without the need for administrator intervention to complete the system configuration of the zones in the ZC. For more details on the SC profile configuration in 11.4, refer to Customizing Automated Installations With Manifests and Profiles. For cluster documentation, refer to clzonecluster(8cl).
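As an additional sanity check after first boot, the standard Solaris IPMP observability command can confirm the group came up as intended inside each ZC node (a minimal sketch; the output shown is illustrative):

root@zc1-host-1:~# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
sc_ipmp0    sc_ipmp0    ok        --        vnic3 vnic0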

Prior to the 4.4 release of Oracle Solaris Cluster (OSC), it was not possible to perform automatic public network configuration for  Exclusive-IP Zone Cluster (ZC) by specifying a System Configuration...

Oracle Solaris Cluster

Oracle SuperCluster: Brief Introduction to osc-interdom

Target Audience: Oracle SuperCluster customers

The primary objective of this blog post is to provide some related information on this obscure tool to inquisitive users/customers who might have noticed the osc-interdom service and the namesake package and wondered what they are for. The SuperCluster InterDomain Communication Tool, osc-interdom, is an infrastructure framework and a service that runs on Oracle SuperCluster products to provide flexible monitoring and management capabilities across SuperCluster domains. It provides the means to inspect and enumerate the components of a SuperCluster so that other components can fulfill their roles in managing the SuperCluster. The framework also allows commands to be executed from a control domain to take effect across all domains on the server node (e.g., a PDom on M8) and, optionally, across all servers (e.g., other M8 PDoms in the cluster) on the system. SuperCluster Virtual Assistant (SVA), ssctuner, exachk, and Oracle Enterprise Manager (EM) are some of the consumers of the osc-interdom framework.

Installation and Configuration

The interdom framework requires that the osc-interdom package from the exa-family repository be installed and enabled on all types of domains in the SuperCluster. In order to enable communication between domains in the SuperCluster, interdom must be configured on all domains that need to be part of the inter-domain communication channel. In other words, it is not a requirement for all domains in the cluster to be part of the osc-interdom configuration. It is possible to exclude some domains from the comprehensive interdom directory, either during initial configuration or at a later time. Also, once the interdom directory configuration has been built, it can be refreshed or rebuilt at any time. Since installing and configuring osc-interdom is automated and part of the SuperCluster installation and configuration processes, it is unlikely that anyone at a customer site needs to perform those tasks manually.

# svcs osc-interdom
STATE          STIME    FMRI
online         22:24:13 svc:/site/application/sysadmin/osc-interdom:default

Domain Registry and Command Line Interface (CLI)

Configuring interdom results in a Domain Registry. The purpose of the registry is to provide an accurate and up-to-date database of all SuperCluster domains and their characteristics. oidcli is a simple command line interface for the domain registry; the oidcli utility is located in the /opt/oracle.supercluster/bin directory. oidcli can be used to query the interdom domain registry for data that is associated with different components in the SuperCluster. Each component maps to a domain in the SuperCluster, and each component is uniquely identified by a UUID. The SuperCluster Domain Registry is stored on the Master Control Domain (MCD). The "master" is usually the first control domain in the SuperCluster. Since the domain registry is on the master control domain, it is expected that oidcli is run on the MCD to query the data. When running from other domains, option -a must be specified along with the management IP address of the master control domain. Keep in mind that the data returned by oidcli is meant for other SuperCluster tools that have the ability to interpret the data correctly and coherently; humans looking at the same data may need some extra effort to digest and understand it.

e.g.,

# cd /opt/oracle.supercluster/bin
# ./oidcli -h
Usage: oidcli [options] dir |
       [options] invalidate|get_data|get_value <uuid> [<key> ...]
         <uuid>: component ID or 'all'
         <key>:  property name (e.g. 'hostname', 'control_uuid') or 'all'
                 NOTE: get_value must request a single <key>
Options:
  -h, --help  show this help message and exit
  -p          Output in PrettyPrinter format
  -a ADDR     TCP address/hostname (and optional ',<port>') for connection
  -d          Enable debugging output
  -w W        Re-try for up to <W> seconds for success. Use 0 for no wait.
              Default: 1801.0

List all components (domains):

# ./oidcli -p dir
[ ['db8c979d-4149-452f-8737-c857e0dc9eb0', True],
  ['4651ac93-924e-4990-8cf9-83be556eb667', True],
  ..
  ['945696fb-97f1-48e3-aa20-8c8baf198ea8', True],
  ['4026d670-61db-425e-834a-dfc45ff9a533', True]]

List the hostname of all domains:

# ./oidcli -p get_data all hostname
db8c979d-4149-452f-8737-c857e0dc9eb0:
  { 'hostname': { 'mtime': 1538089861, 'name': 'hostname', 'value': 'alpha'}}
3cfc9039-2157-4b62-ac69-ea3d85f2a19f:
  { 'hostname': { 'mtime': 1538174309, 'name': 'hostname', 'value': 'beta'}}
...

List all available properties for all domains:

# ./oidcli -p get_data all all
db8c979d-4149-452f-8737-c857e0dc9eb0:
  { 'banner_name': { 'mtime': 1538195164, 'name': 'banner_name', 'value': 'SPARC M7-4'},
    'comptype': { 'mtime': 1538195164, 'name': 'comptype', 'value': 'LDom'},
    'control_uuid': { 'mtime': 1538195164, 'name': 'control_uuid', 'value': 'Unknown'},
    'guests': { 'mtime': 1538195164, 'name': 'guests', 'value': None},
    'host_domain_chassis': { 'mtime': 1538195164, 'name': 'host_domain_chassis', 'value': 'AK00251676'},
    'host_domain_name': { 'mtime': 1538284541, 'name': 'host_domain_name', 'value': 'ssccn1-io-alpha'},
...

Query a specific property from a specific domain:

# ./oidcli -p get_data 4651ac93-924e-4990-8cf9-83be556eb667 mgmt_ipaddr
mgmt_ipaddr:
  { 'mtime': 1538143865, 'name': 'mgmt_ipaddr', 'value': ['xx.xxx.xxx.xxx', 20, 'scm_ipmp0']}

The domain registry is persistent and updated hourly. When accurate and up-to-date data is needed, it is recommended to query the registry with the --no-cache option.

e.g.,

# ./oidcli -p get_data --no-cache 4651ac93-924e-4990-8cf9-83be556eb667 load_average
load_average:
  { 'mtime': 1538285043, 'name': 'load_average', 'value': [0.01171875, 0.0078125, 0.0078125]}

The mtime attribute in all examples above represents a UNIX timestamp.

Debug Mode

By default, the osc-interdom service runs in non-debug mode. Running the service in debug mode logs more details to the osc-interdom service log. In general, if the osc-interdom service is transitioning to the maintenance state, switching to debug mode may provide a few additional clues.

To check if debug mode is enabled, run:

svcprop -c -p config osc-interdom | grep debug

To enable debug mode, run:

svccfg -s sysadmin/osc-interdom:default setprop config/oidd_debug '=' true
svcadm restart osc-interdom

Finally, check the service log for debug messages. The output of svcs -L osc-interdom points to the location of the osc-interdom service log.

Documentation

Similar to the SuperCluster resource allocation engine, osc-resalloc, the interdom framework is mostly meant for automated tools, with little or no human interaction. Consequently, there are no references to osc-interdom in the SuperCluster Documentation Set.

Related:
Oracle SuperCluster: A Brief Introduction to osc-resalloc
Oracle SuperCluster: A Brief Introduction to osc-setcoremem
Non-Interactive osc-setcoremem
Oracle Solaris Tools for Locality Observability

Acknowledgments/Credit: Tim Cook
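A compact way to follow the service log mentioned in the Debug Mode section is to combine svcs -L with tail (a generic shell idiom, not an osc-interdom feature; the log path varies by system):

# tail -f "$(svcs -L osc-interdom)"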

Target Audience: Oracle SuperCluster customers The primary objective of this blog post is to provide some related information on this obscure tool to inquisitive users/customers as they might have...

Announcements

Oracle Solaris at Oracle Open World 2018

Oracle Open World is coming to San Francisco October 22-25! With it being less than a month away, I wanted to preview Oracle Solaris this year. With the release of Oracle Solaris 11.4 on August 28, 2018, we have much to talk about. For information about the future of Oracle Solaris and SPARC, Bill Nesheim, Senior Vice President of Oracle Solaris Development, is joined by Masood Heydari, Senior Vice President of Hardware Development, and Brian Bream, Chief Technology Officer of Vaske Computer, in our Oracle Solaris and SPARC Roadmap Session. Brian will be discussing his unique solution for delivering Oracle SPARC solutions adjacent to Oracle Cloud, giving you the best of both worlds: cloud infrastructure for your scale-out needs, and Oracle Solaris and SPARC hosted for your mission-critical needs. You can get hands-on with the new cloud-scale capabilities in Oracle Solaris 11.4 with two Hands-on-Labs. "Get the Auditors Off Your Back: One Stop Cloud Compliance Validation" will show you how to set up your Oracle Solaris systems to meet compliance, lock them down so they can't be tampered with, centrally manage their compliance from a single system, and even see compliance benchmark trends over time, meeting your auditors' requirements. In our second Hands-on-Lab, "The CAT Scan for Oracle Solaris: StatsStore and Web Dashboard," you will use the new Oracle Solaris System Web Interface and StatsStore to gain unique insight not just into the current state of a system, but also into its historic state. You'll see how the Oracle Solaris 11.4 System Web Interface automatically correlates audit and system events to allow you to diagnose and root-cause potential system issues quickly and easily. To find out more about Oracle Solaris and SPARC sessions and Hands-on-Labs at Oracle Open World 2018, see our Focus on Oracle Solaris and SPARC document. See you at Oracle Open World 2018!

Oracle Open World is coming to San Francisco October 22-25! With it being less than a month away, I wanted to preview Oracle Solaris this year. With the release of Oracle Solaris 11.4 on August...

Announcing Oracle Solaris 11.4 SRU1

Today we're releasing the first SRU for Oracle Solaris 11.4! This is the next installment in our ongoing support train for Oracle Solaris 11, and there will be no further Oracle Solaris 11.3 SRUs delivered to the support repository. Due to the timing of our releases and some fixes being in Oracle Solaris 11.3 SRU35 but not in 11.4, not all customers on Oracle Solaris 11.3 SRU35 were able to update to Oracle Solaris 11.4 when it was released. SRU1 includes all these fixes, and customers can now update to Oracle Solaris 11.4 SRU1 via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces 'Memory Reservation Pools for Kernel Zones'. On some more heavily loaded systems that experience memory contention or sufficiently fragmented memory, it can be difficult for an administrator to guarantee that Kernel Zones can allocate all the memory they need to boot or even reboot. By allowing memory to be reserved ahead of time, early during boot when it is more likely that enough memory of the desired pagesize is available, the impact of overall system memory usage on the ability of a Kernel Zone to boot/reboot is minimized. Further details on using this feature are in the SRU readme file.

Other fixes of note in this SRU include:
- vim has been updated to 8.1.0209
- mailman has been updated to 2.1.29
- Updated Intel tool in HMP
- Samba has been updated to 4.8.4
- Kerberos 5 has been updated to 1.16.1
- webkitgtk+ has been updated to 2.18.6
- curl has been updated to 7.61.0
- sqlite has been updated to 3.24.0
- Apache Tomcat has been updated to 8.5.32
- Thunderbird has been updated to 52.9.1
- Apache Web Server has been updated to 2.4.34
- Wireshark has been updated to 2.6.2
- MySQL has been updated to 5.7.23
- OpenSSL has been updated to 1.0.2p

Full details of this SRU can be found in My Oracle Support Doc 2449090.1. For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).

Today we're releasing the first SRU for Oracle Solaris 11.4! This is the next installment in our ongoing support train for Oracle Solaris 11 and there will be no further Oracle Solaris 11.3 SRUs...

Oracle Solaris 11

How To build an Input Method Engine for Oracle Solaris 11.4

Contributed by: Pavel Heimlich and Ales Cernosek

Oracle Solaris 11.4 delivers a modern and extensible enterprise desktop environment. The desktop environment supports and delivers an IBus (Intelligent Input Bus) runtime environment and libraries that enable customers to easily customize and build Open Source IBus Input Method Engines to suit their preference. This article explores the step-by-step process of building one of the available IBus input methods - KKC, a Japanese input method engine. We'll start with the Oracle Solaris Userland build infrastructure and use it to build the KKC dependencies and KKC itself, and publish the Input Method IPS packages to a local publisher. A similar approach can be used for building and delivering other input method engines for other languages. Please note the components built may no longer be current; newer versions may have been released since these steps were made available. You should check for and use the latest version available.

We'll start with installing all the build system dependencies:

# sudo pkg exact-install developer/opensolaris/userland system/input-method/ibus group/system/solaris-desktop
# sudo reboot

To make this task as easy as possible, we'll reuse the Userland build infrastructure. This also has the advantage of using the same compiler, build flags and other defaults as the rest of the GNOME desktop. First clone the Userland workspace:

# git clone https://github.com/oracle/solaris-userland gate

Use your Oracle Solaris 11.4 IPS publisher:

# export CANONICAL_REPO=http://pkg.oracle.com/...

Do not use the internal Userland archive mirror; set this variable to empty:

# export INTERNAL_ARCHIVE_MIRROR=''

Set a few other variables and prepare the build environment:

# export COMPILER=gcc
# export PUBLISHER=example
# export OS_VERSION=11.4
# cd gate
# git checkout 8b36ec131eb42a65b0f42fc0d0d71b49cfb3adf3
# gmake setup

We are going to need the intermediary packages installed on the build system, so let's add the build publisher:

# pkg set-publisher -g `uname -p`/repo example

KKC has a couple of dependencies. The first of them is marisa-trie. Create a folder for it:

# mkdir components/marisa
# cd components/marisa

And create a Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= marisa
COMPONENT_VERSION= 0.2.4
COMPONENT_PROJECT_URL= https://github.com/s-yata/marisa-trie
COMPONENT_ARCHIVE_HASH= \
    sha256:67a7a4f70d3cc7b0a85eb08f10bc3eaf6763419f0c031f278c1f919121729fb3
COMPONENT_ARCHIVE_URL= https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/marisa-trie/marisa-0.2.4.tar.gz
TPNO= 26209

REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime
REQUIRED_PACKAGES += system/library/gcc/gcc-c++-runtime
REQUIRED_PACKAGES += system/library/math

TEST_TARGET= $(NO_TESTS)

COMPONENT_POST_INSTALL_ACTION += \
    cd $(SOURCE_DIR)/bindings/python; \
    CC=$(CC) CXX=$(CXX) CFLAGS="$(CFLAGS) -I$(SOURCE_DIR)/lib" LDFLAGS=-L$(PROTO_DIR)$(USRLIB) $(PYTHON) setup.py install --install-lib $(PROTO_DIR)/$(PYTHON_LIB);

# to avoid libtool breaking build of libkkc
COMPONENT_POST_INSTALL_ACTION += rm -f $(PROTO_DIR)$(USRLIB)/libmarisa.la;

include $(WS_MAKE_RULES)/common.mk

Most of the build process is defined in shared macros from the Userland workspace, so building marisa is now as easy as running the following command:

# gmake install

Copy the marisa copyright file so that the package follows Oracle standards:

# cat marisa-0.2.4/COPYING > marisa.copyright

Create the marisa.p5m package manifest as follows:

#
# Copyright (c) 2015, 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability uncommitted>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/marisa@$(BUILD_VERSION)
set name=pkg.summary value="Marisa library"
set name=pkg.description \
    value="Marisa - Matching Algorithm with Recursively Implemented StorAge library"
set name=com.oracle.info.description \
    value="Marisa - Matching Algorithm with Recursively Implemented StorAge library"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/marisa-trie/marisa-0.2.4.tar.gz
set name=info.upstream-url value=https://github.com/s-yata/marisa-trie
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/bin/marisa-benchmark
file path=usr/bin/marisa-build
file path=usr/bin/marisa-common-prefix-search
file path=usr/bin/marisa-dump
file path=usr/bin/marisa-lookup
file path=usr/bin/marisa-predictive-search
file path=usr/bin/marisa-reverse-lookup
file path=usr/include/marisa.h
file path=usr/include/marisa/agent.h
file path=usr/include/marisa/base.h
file path=usr/include/marisa/exception.h
file path=usr/include/marisa/iostream.h
file path=usr/include/marisa/key.h
file path=usr/include/marisa/keyset.h
file path=usr/include/marisa/query.h
file path=usr/include/marisa/scoped-array.h
file path=usr/include/marisa/scoped-ptr.h
file path=usr/include/marisa/stdio.h
file path=usr/include/marisa/trie.h
link path=usr/lib/$(MACH64)/libmarisa.so target=libmarisa.so.0.0.0
link path=usr/lib/$(MACH64)/libmarisa.so.0 target=libmarisa.so.0.0.0
file path=usr/lib/$(MACH64)/libmarisa.so.0.0.0
file path=usr/lib/$(MACH64)/pkgconfig/marisa.pc
link path=usr/lib/libmarisa.so target=libmarisa.so.0.0.0
link path=usr/lib/libmarisa.so.0 target=libmarisa.so.0.0.0
file path=usr/lib/libmarisa.so.0.0.0
file path=usr/lib/pkgconfig/marisa.pc
file path=usr/lib/python2.7/vendor-packages/64/_marisa.so
file path=usr/lib/python2.7/vendor-packages/_marisa.so
file path=usr/lib/python2.7/vendor-packages/marisa-0.0.0-py2.7.egg-info
file path=usr/lib/python2.7/vendor-packages/marisa.py
license marisa.copyright license="marisa, LGPLv2.1"

Generate the IPS package:

# gmake publish

It's necessary to install the package, as it's a prerequisite for building other packages:

# sudo pkg install marisa
# cd ..

The next step is to build the KKC library:

# mkdir libkkc
# cd libkkc

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= libkkc
COMPONENT_VERSION= 0.3.5
COMPONENT_PROJECT_URL= https://github.com/ueno/libkkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.gz
COMPONENT_ARCHIVE_HASH= \
    sha256:89b07b042dae5726d306aaa1296d1695cb75c4516f4b4879bc3781fe52f62aef
COMPONENT_ARCHIVE_URL= $(COMPONENT_PROJECT_URL)/releases/download/v$(COMPONENT_VERSION)/$(COMPONENT_ARCHIVE)
TPNO= 26171

TEST_TARGET= $(NO_TESTS)

REQUIRED_PACKAGES += example/system/input-method/library/marisa
REQUIRED_PACKAGES += library/glib2
REQUIRED_PACKAGES += library/json-glib
REQUIRED_PACKAGES += library/desktop/libgee
REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime

export LD_LIBRARY_PATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)
export PYTHONPATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(PYTHON_LIB)
CPPFLAGS += -I$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/usr/include
LDFLAGS += -L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)

# for gsed - metadata
PATH=$(GNUBIN):$(USRBINDIR)

include $(WS_MAKE_RULES)/common.mk

# some of this is likely unnecessary
CONFIGURE_OPTIONS += --enable-introspection=no
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../marisa/build/$(MACH$(BITS))

# to avoid libtool breaking build of ibus-kkc
COMPONENT_POST_INSTALL_ACTION = rm -f $(PROTO_DIR)$(USRLIB)/libkkc.la

# to rebuild configure for libtool fix and fix building json files
COMPONENT_PREP_ACTION = \
    (cd $(@D) ; $(AUTORECONF) -m --force -v; gsed -i 's@test -f ./$$<@test -f $$<@' data/rules/rule.mk)

And build the component:

# gmake install

Prepare the copyright file:

# cat libkkc-0.3.5/COPYING > libkkc.copyright

Create the libkkc.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/libkkc@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="libkkc - Kana Kanji input library"
set name=pkg.description \
    value="libkkc - Japanese Kana Kanji conversion input method library"
set name=com.oracle.info.description value="libkkc - Kana Kanji input library"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://github.com/ueno/libkkc/releases/download/v0.3.5/libkkc-0.3.5.tar.gz
set name=info.upstream-url value=https://github.com/ueno/libkkc
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/bin/kkc
file path=usr/bin/kkc-package-data
dir path=usr/include/libkkc
file path=usr/include/libkkc/libkkc.h
dir path=usr/lib/$(MACH64)/libkkc
link path=usr/lib/$(MACH64)/libkkc.so target=libkkc.so.2.0.0
link path=usr/lib/$(MACH64)/libkkc.so.2 target=libkkc.so.2.0.0
file path=usr/lib/$(MACH64)/libkkc.so.2.0.0
file path=usr/lib/$(MACH64)/pkgconfig/kkc-1.0.pc
dir path=usr/lib/libkkc
link path=usr/lib/libkkc.so target=libkkc.so.2.0.0
link path=usr/lib/libkkc.so.2 target=libkkc.so.2.0.0
file path=usr/lib/libkkc.so.2.0.0
file path=usr/lib/pkgconfig/kkc-1.0.pc
dir path=usr/share/libkkc
dir path=usr/share/libkkc/rules
dir path=usr/share/libkkc/rules/act
dir path=usr/share/libkkc/rules/act/keymap
file path=usr/share/libkkc/rules/act/keymap/default.json
file path=usr/share/libkkc/rules/act/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/act/keymap/hiragana.json
file path=usr/share/libkkc/rules/act/keymap/katakana.json
file path=usr/share/libkkc/rules/act/keymap/latin.json
file path=usr/share/libkkc/rules/act/keymap/wide-latin.json
file path=usr/share/libkkc/rules/act/metadata.json
dir path=usr/share/libkkc/rules/act/rom-kana
file path=usr/share/libkkc/rules/act/rom-kana/default.json
dir path=usr/share/libkkc/rules/azik
dir path=usr/share/libkkc/rules/azik-jp106
dir path=usr/share/libkkc/rules/azik-jp106/keymap
file path=usr/share/libkkc/rules/azik-jp106/keymap/default.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/hiragana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/katakana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/latin.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/wide-latin.json
file path=usr/share/libkkc/rules/azik-jp106/metadata.json
dir path=usr/share/libkkc/rules/azik-jp106/rom-kana
file path=usr/share/libkkc/rules/azik-jp106/rom-kana/default.json
dir path=usr/share/libkkc/rules/azik/keymap
file path=usr/share/libkkc/rules/azik/keymap/default.json
file path=usr/share/libkkc/rules/azik/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/azik/keymap/hiragana.json
file path=usr/share/libkkc/rules/azik/keymap/katakana.json
file path=usr/share/libkkc/rules/azik/keymap/latin.json
file path=usr/share/libkkc/rules/azik/keymap/wide-latin.json
file path=usr/share/libkkc/rules/azik/metadata.json
dir path=usr/share/libkkc/rules/azik/rom-kana
file path=usr/share/libkkc/rules/azik/rom-kana/default.json
dir path=usr/share/libkkc/rules/default
dir path=usr/share/libkkc/rules/default/keymap
file path=usr/share/libkkc/rules/default/keymap/default.json
file path=usr/share/libkkc/rules/default/keymap/direct.json
file path=usr/share/libkkc/rules/default/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/default/keymap/hiragana.json
file path=usr/share/libkkc/rules/default/keymap/katakana.json
file path=usr/share/libkkc/rules/default/keymap/latin.json
file path=usr/share/libkkc/rules/default/keymap/wide-latin.json
file path=usr/share/libkkc/rules/default/metadata.json
dir path=usr/share/libkkc/rules/default/rom-kana
file path=usr/share/libkkc/rules/default/rom-kana/default.json
dir path=usr/share/libkkc/rules/kana
dir path=usr/share/libkkc/rules/kana/keymap
file path=usr/share/libkkc/rules/kana/keymap/default.json
file path=usr/share/libkkc/rules/kana/keymap/direct.json
file path=usr/share/libkkc/rules/kana/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/kana/keymap/hiragana.json
file path=usr/share/libkkc/rules/kana/keymap/katakana.json
file path=usr/share/libkkc/rules/kana/keymap/latin.json
file path=usr/share/libkkc/rules/kana/keymap/wide-latin.json
file path=usr/share/libkkc/rules/kana/metadata.json
dir path=usr/share/libkkc/rules/kana/rom-kana
file path=usr/share/libkkc/rules/kana/rom-kana/default.json
dir path=usr/share/libkkc/rules/kzik
dir path=usr/share/libkkc/rules/kzik/keymap
file path=usr/share/libkkc/rules/kzik/keymap/default.json
file path=usr/share/libkkc/rules/kzik/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/kzik/keymap/hiragana.json
file path=usr/share/libkkc/rules/kzik/keymap/katakana.json
file path=usr/share/libkkc/rules/kzik/keymap/latin.json
file path=usr/share/libkkc/rules/kzik/keymap/wide-latin.json
file path=usr/share/libkkc/rules/kzik/metadata.json
dir path=usr/share/libkkc/rules/kzik/rom-kana
file path=usr/share/libkkc/rules/kzik/rom-kana/default.json
dir path=usr/share/libkkc/rules/nicola
dir path=usr/share/libkkc/rules/nicola/keymap
file path=usr/share/libkkc/rules/nicola/keymap/default.json
file path=usr/share/libkkc/rules/nicola/keymap/direct.json
file path=usr/share/libkkc/rules/nicola/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/nicola/keymap/hiragana.json
file path=usr/share/libkkc/rules/nicola/keymap/katakana.json
file path=usr/share/libkkc/rules/nicola/keymap/latin.json
file path=usr/share/libkkc/rules/nicola/keymap/wide-latin.json
file path=usr/share/libkkc/rules/nicola/metadata.json
dir path=usr/share/libkkc/rules/nicola/rom-kana
file path=usr/share/libkkc/rules/nicola/rom-kana/default.json
dir path=usr/share/libkkc/rules/tcode
dir path=usr/share/libkkc/rules/tcode/keymap
file path=usr/share/libkkc/rules/tcode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tcode/keymap/hiragana.json
file path=usr/share/libkkc/rules/tcode/keymap/katakana.json
file path=usr/share/libkkc/rules/tcode/keymap/latin.json
file path=usr/share/libkkc/rules/tcode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tcode/metadata.json
dir path=usr/share/libkkc/rules/tcode/rom-kana
file path=usr/share/libkkc/rules/tcode/rom-kana/default.json
dir path=usr/share/libkkc/rules/trycode
dir path=usr/share/libkkc/rules/trycode/keymap
file path=usr/share/libkkc/rules/trycode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/trycode/keymap/hiragana.json
file path=usr/share/libkkc/rules/trycode/keymap/katakana.json
file path=usr/share/libkkc/rules/trycode/keymap/latin.json
file path=usr/share/libkkc/rules/trycode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/trycode/metadata.json
dir path=usr/share/libkkc/rules/trycode/rom-kana
file path=usr/share/libkkc/rules/trycode/rom-kana/default.json
dir path=usr/share/libkkc/rules/tutcode
dir path=usr/share/libkkc/rules/tutcode-touch16x
dir path=usr/share/libkkc/rules/tutcode-touch16x/keymap
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/hiragana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/katakana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/latin.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tutcode-touch16x/metadata.json
dir path=usr/share/libkkc/rules/tutcode-touch16x/rom-kana
file path=usr/share/libkkc/rules/tutcode-touch16x/rom-kana/default.json
dir path=usr/share/libkkc/rules/tutcode/keymap
file path=usr/share/libkkc/rules/tutcode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tutcode/keymap/hiragana.json
file path=usr/share/libkkc/rules/tutcode/keymap/katakana.json
file path=usr/share/libkkc/rules/tutcode/keymap/latin.json
file path=usr/share/libkkc/rules/tutcode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tutcode/metadata.json
dir path=usr/share/libkkc/rules/tutcode/rom-kana
file path=usr/share/libkkc/rules/tutcode/rom-kana/default.json
dir path=usr/share/libkkc/templates
dir path=usr/share/libkkc/templates/libkkc-data
file path=usr/share/libkkc/templates/libkkc-data/Makefile.am
file path=usr/share/libkkc/templates/libkkc-data/configure.ac.in
dir path=usr/share/libkkc/templates/libkkc-data/data
file path=usr/share/libkkc/templates/libkkc-data/data/Makefile.am
dir path=usr/share/libkkc/templates/libkkc-data/data/models
file path=usr/share/libkkc/templates/libkkc-data/data/models/Makefile.sorted2
file path=usr/share/libkkc/templates/libkkc-data/data/models/Makefile.sorted3
dir path=usr/share/libkkc/templates/libkkc-data/data/models/sorted2
file path=usr/share/libkkc/templates/libkkc-data/data/models/sorted2/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/sorted3
file path=usr/share/libkkc/templates/libkkc-data/data/models/sorted3/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/text2
file path=usr/share/libkkc/templates/libkkc-data/data/models/text2/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/text3
file path=usr/share/libkkc/templates/libkkc-data/data/models/text3/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/tools
file path=usr/share/libkkc/templates/libkkc-data/tools/Makefile.am
file path=usr/share/libkkc/templates/libkkc-data/tools/genfilter.py
file path=usr/share/libkkc/templates/libkkc-data/tools/sortlm.py
file path=usr/share/locale/ja/LC_MESSAGES/libkkc.mo
file path=usr/share/vala/vapi/kkc-1.0.deps
file path=usr/share/vala/vapi/kkc-1.0.vapi
license libkkc.copyright license="libkkc, GPLmix" \
    com.oracle.info.description="libkkc - Kana Kanji input library" \
    com.oracle.info.name=pci.ids com.oracle.info.tpno=26171 \
    com.oracle.info.version=0.3.5

Use the following command to create the IPS package:

# gmake publish
# cd ..

Then install the package so that it's available for dependency resolution:

# sudo pkg install libkkc

Next build libkkc-data:

# mkdir libkkc-data
# cd libkkc-data

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= libkkc-data
COMPONENT_VERSION= 0.2.7
COMPONENT_PROJECT_URL= https://github.com/ueno/libkkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.xz
COMPONENT_ARCHIVE_HASH= \
    sha256:9e678755a030043da68e37a4049aa296c296869ff1fb9e6c70026b2541595b99
COMPONENT_ARCHIVE_URL= https://github.com/ueno/libkkc/releases/download/v0.3.5/$(COMPONENT_ARCHIVE)
TPNO= 26171

TEST_TARGET= $(NO_TESTS)

export LD_LIBRARY_PATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/$(USRLIB)
export PYTHONPATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(PYTHON_LIB)

include $(WS_MAKE_RULES)/common.mk

CONFIGURE_ENV += MARISA_CFLAGS="-I$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/usr/include"
CONFIGURE_ENV += MARISA_LIBS="-L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB) -lmarisa"

Build libkkc-data:

# gmake install

Prepare the copyright file:

# cat libkkc-data-0.2.7/COPYING > libkkc-data.copyright

Create the libkkc-data.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/libkkc-data@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="libkkc-data - Kana Kanji input library data"
set name=pkg.description \
    value="libkkc-data - data for Japanese Kana Kanji conversion input method library"
set name=com.oracle.info.description \
    value="libkkc-data - Kana Kanji input library data"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://bitbucket.org/libkkc/libkkc-data/downloads/libkkc-data-0.2.7.tar.xz
set name=info.upstream-url value=https://bitbucket.org/libkkc/libkkc-data
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
dir path=usr/lib/$(MACH64)/libkkc/models
dir path=usr/lib/$(MACH64)/libkkc/models/sorted3
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.1gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.1gram.index
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.2gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.2gram.filter
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.3gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.3gram.filter
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.input
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/metadata.json
dir path=usr/lib/libkkc/models
dir path=usr/lib/libkkc/models/sorted3
file path=usr/lib/libkkc/models/sorted3/data.1gram
file path=usr/lib/libkkc/models/sorted3/data.1gram.index
file path=usr/lib/libkkc/models/sorted3/data.2gram
file path=usr/lib/libkkc/models/sorted3/data.2gram.filter
file path=usr/lib/libkkc/models/sorted3/data.3gram
file path=usr/lib/libkkc/models/sorted3/data.3gram.filter
file path=usr/lib/libkkc/models/sorted3/data.input
file path=usr/lib/libkkc/models/sorted3/metadata.json
license libkkc-data.copyright license="libkkc-data, GPLv3" \
    com.oracle.info.description="libkkc - Kana Kanji input library language data" \
    com.oracle.info.name=usb.ids com.oracle.info.tpno=26171 \
    com.oracle.info.version=0.2.7

Use the following command to create the IPS package:

# gmake publish
# cd ..

Then install the package so that it's available for dependency resolution:

# sudo pkg install libkkc-data

Finally, create the KKC package:

# mkdir ibus-kkc
# cd ibus-kkc

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= ibus-kkc
COMPONENT_VERSION= 1.5.22
COMPONENT_PROJECT_URL= https://github.com/ueno/ibus-kkc
IBUS-KKC_PROJECT_URL= https://github.com/ueno/ibus-kkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.gz
COMPONENT_ARCHIVE_HASH= \
    sha256:22fe2552f08a34a751cef7d1ea3c088e8dc0f0af26fd7bba9cdd27ff132347ce
COMPONENT_ARCHIVE_URL= $(COMPONENT_PROJECT_URL)/releases/download/v$(COMPONENT_VERSION)/$(COMPONENT_ARCHIVE)
TPNO= 31503

TEST_TARGET= $(NO_TESTS)

REQUIRED_PACKAGES += system/input-method/ibus
REQUIRED_PACKAGES += example/system/input-method/library/libkkc
REQUIRED_PACKAGES += example/system/input-method/library/libkkc-data
REQUIRED_PACKAGES += library/desktop/gtk3
REQUIRED_PACKAGES += library/desktop/libgee
REQUIRED_PACKAGES += library/json-glib
REQUIRED_PACKAGES += library/glib2
# for marisa
REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime
REQUIRED_PACKAGES += system/library/gcc/gcc-c++-runtime
REQUIRED_PACKAGES += system/library/math

CPPFLAGS += -I$(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)/usr/include
LDFLAGS += "-L$(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)$(USRLIB)"
LDFLAGS += "-L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)"

include $(WS_MAKE_RULES)/common.mk

CONFIGURE_ENV += PATH=$(GNUBIN):$(USRBINDIR)
CONFIGURE_OPTIONS += --libexecdir=$(USRLIBDIR)/ibus
#CONFIGURE_OPTIONS += --enable-static=no
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../libkkc/build/$(MACH$(BITS))/libkkc
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../marisa/build/$(MACH$(BITS))

# to rebuild configure for libtool fix
COMPONENT_PREP_ACTION = \
    (cd $(@D) ; $(AUTORECONF) -m --force -v)

PKG_PROTO_DIRS += $(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)
PKG_PROTO_DIRS += $(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)
PKG_PROTO_DIRS += $(COMPONENT_DIR)/../libkkc-data/build/prototype/$(MACH)

And build the component:

# gmake install

Prepare the copyright file:

# cat ibus-kkc-1.5.22/COPYING > ibus-kkc.copyright

Create the ibus-kkc.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/ibus/kkc@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="IBus Japanese IME - kkc"
set name=pkg.description value="Japanese Kana Kanji input engine for IBus"
set name=com.oracle.info.description value="ibus kkc - Kana kanji input engine"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url value=$(COMPONENT_ARCHIVE_URL)
set name=info.upstream-url value=$(COMPONENT_PROJECT_URL)
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/lib/ibus/ibus-engine-kkc mode=0555
file path=usr/lib/ibus/ibus-setup-kkc mode=0555
file path=usr/share/applications/ibus-setup-kkc.desktop
dir path=usr/share/ibus-kkc
dir path=usr/share/ibus-kkc/icons
file path=usr/share/ibus-kkc/icons/ibus-kkc.svg
file path=usr/share/ibus/component/kkc.xml
file path=usr/share/locale/ja/LC_MESSAGES/ibus-kkc.mo
license ibus-kkc.copyright license="ibus-kkc, GPLmix"
depend type=require fmri=example/system/input-method/library/libkkc-data

Perform the final build:

# gmake publish

Use the following command to install the package from the build's IPS repository:

# sudo pkg install ibus/kkc

Other input method engines can be built in a similar way: for example, ibus-hangul for Korean, ibus-chewing or ibus-table for Chinese, and others.
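Once the ibus/kkc package is installed, a quick way to confirm the engine is registered is the standard ibus CLI (a hedged sketch; engine names and output vary by version):

# List registered engines and look for kkc:
$ ibus list-engine | grep -i kkc
# Then enable it interactively via the setup tool:
$ ibus-setup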

Contributed by: Pavel Heimlich and Ales Cernosek Oracle Solaris 11.4 delivers a modern and extensible enterprise desktop environment. The desktop environment supports and delivers an IBus (Intelligent...

Perspectives

How to Use OCI Storage Resources with Solaris

After getting started by importing an Oracle Solaris image into your Oracle Cloud Infrastructure (OCI) tenant and launching an instance, you'll likely want to use other OCI resources to run your applications. In this post, I'll provide a quick cheat sheet for using the storage resources: block volumes and file storage. I'm not going to cover the basics of creating these objects in OCI, which is covered well by the documentation. This post just shows how to do the Solaris-specific things that the OCI console will otherwise tell you for Linux guests.

Block Volumes

Once you've created a block volume and attached it to your Solaris guest in the OCI console, you still need to do a small amount of work on the Solaris guest to let it see the storage you've attached. The OCI console will display the Linux iSCSI commands; you just need to translate them to the Solaris equivalents. Here's an example of the Linux commands:

sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -p 169.254.2.2:3260
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144 -p 169.254.2.2:3260 -l

While the Solaris command for these tasks is also iscsiadm, it uses a different command line design, so we have to translate. Using the iqn string and address:port strings from the first command above, we define a static target, and then enable static discovery.

sudo iscsiadm add static-config iqn.2015-12.com.oracleiaas:de785f06-2288-4f2f-8ef7-f0c2c25d6144,169.254.2.2:3260
sudo iscsiadm modify discovery --static enable

At this point, the device will be available and can be seen in the format utility:

opc@instance-20180904-1408:~$ sudo format </dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c2d0 <QEMU HAR-QM0000-0001-48.83GB>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@0,0
       1. c3t0d0 <ORACLE-BlockVolume-1.0-1.00TB>
          /iscsi/disk@0000iqn.2015-12.com.oracleiaas%3Ade785f06-2288-4f2f-8ef7-f0c2c25d61440001,1

And now it's trivial to create a ZFS pool on the storage:

opc@instance-20180904-1408:~$ sudo zpool create tank c3t0d0
opc@instance-20180904-1408:~$ sudo zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        tank      ONLINE       0     0     0
          c3t0d0  ONLINE       0     0     0

errors: No known data errors

Since the iscsiadm command automatically persists the configuration you enter, the storage (and the ZFS pool) will still be visible after the guest is rebooted.

File Storage

The OCI file storage service provides a file store that's accessible over NFS, and it's fully interoperable with clients using NFSv3. Once you've created a file system and associated a mount target, the OCI console will show you the commands to install the NFS client, create a mount point directory, and then mount the file system. The Solaris images we've provided already include the NFS client, so all you need to do is create a mount point directory and mount the file system. The Solaris commands for those tasks are the same as on Linux, so you can directly copy and paste those from the OCI console. To make the mount persistent, you'll either need to add the mount entry to /etc/vfstab or configure the automounter. I won't provide a tutorial on those topics here, but refer you to the Solaris documentation.
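For completeness, here is a minimal sketch of those NFS steps; the mount target address 10.0.0.5 and export path /fs1 are made-up placeholders, so substitute the values the OCI console shows for your mount target:

# Create a mount point and mount the file system over NFSv3:
opc@instance-20180904-1408:~$ sudo mkdir -p /mnt/fs1
opc@instance-20180904-1408:~$ sudo mount -F nfs -o vers=3 10.0.0.5:/fs1 /mnt/fs1

A matching /etc/vfstab entry to make the mount persistent would look like this (the device-to-fsck and fsck-pass fields are '-' for NFS):

10.0.0.5:/fs1  -  /mnt/fs1  nfs  -  yes  vers=3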


Oracle Solaris 11

Random Solaris & Shell Command Tips — kstat, tput, sed, digest

The examples shown in this blog post were created and executed on a Solaris system. Some of these tips and examples are applicable to all *nix systems.

Digest of a File

One of the typical uses of a computed digest is to check if a file has been compromised or tampered with. The digest utility can be used to calculate the digest of files. On Solaris, the -l option lists all cryptographic hash algorithms available on the system. eg.,

% digest -l
sha1
md5
sha224
sha256
..
sha3_224
sha3_256
..

The -a option can be used to specify the hash algorithm while computing the digest. eg.,

% digest -v -a sha1 /usr/lib/libc.so.1
sha1 (/usr/lib/libc.so.1) = 89a588f447ade9b1be55ba8b0cd2edad25513619

Multiline Shell Script Comments

The shell treats any line that starts with the '#' symbol as a comment and ignores such lines completely. (The line on top that starts with #! is an exception.) From what I understand, there is no multiline comment mechanism in the shell. While the # symbol is useful for marking single-line comments, it becomes laborious and is not well suited to commenting a large number of contiguous lines. One possible way to achieve multiline comments in a shell script is to rely on a combination of the shell built-in ':' and a here-document code block. It may not be the most attractive solution, but it gets the work done: ':' is a no-op built-in that returns true, and the here-document feeds it the commented-out block, which is therefore never executed. eg.,

% cat -n multiblock_comment.sh
     1  #!/bin/bash
     2
     3  echo 'not a commented line'
     4  #echo 'a commented line'
     5  echo 'not a commented line either'
     6  : <<'MULTILINE-COMMENT'
     7  echo 'beginning of a multiline comment'
     8  echo 'second line of a multiline comment'
     9  echo 'last line of a multiline comment'
    10  MULTILINE-COMMENT
    11  echo 'yet another "not a commented line"'
% ./multiblock_comment.sh
not a commented line
not a commented line either
yet another "not a commented line"

tput Utility to Jazz Up Command Line User Experience

The tput command can help make command line terminals look interesting. tput can be used to change the color of text, apply effects (bold, underline, blink, ..), move the cursor around the screen, get information about the status of the terminal, and so on. In addition to improving the command line experience, tput can also be used to improve the interactive experience of scripts by showing different colors and/or text effects to users. eg.,

% tput bold      <= bold text
% date
Thu Aug 30 17:02:57 PDT 2018
% tput smul      <= underline text
% date
Thu Aug 30 17:03:51 PDT 2018
% tput sgr0      <= turn off all attributes (back to normal)
% date
Thu Aug 30 17:04:47 PDT 2018

Check the man page of terminfo for a complete list of capabilities to be used with tput.

Processor Marketing Name

On systems running Solaris, the processor's marketing or brand name can be extracted with the help of the kstat utility. The cpu_info module provides information related to the processor(s) on the system. eg.,

On SPARC:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      SPARC-M8

On x86/x64:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      Intel(r) Xeon(r) CPU L5640 @ 2.27GHz

In the above example, cpu_info is the module, 1 is the instance number, cpu_info1 is the name of the section, and brand is the statistic in focus. Note that the cpu_info module has only one section, cpu_info1, so it is fine to skip the section name portion (eg., cpu_info:1::brand). To see the complete list of statistics offered by the cpu_info module, simply run kstat cpu_info:1.
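Since kstat -p emits the statistic name and its value separated by a tab, the brand string is easy to consume from a script. A small sketch (the cut field relies on the -p output format shown above):

#!/bin/ksh
# capture the processor brand string in a shell variable;
# kstat -p prints "module:instance:name:statistic<TAB>value"
brand=$(kstat -p cpu_info:1::brand | cut -f2)
echo "This system runs on: $brand"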
Consolidating Multiple sed Commands

The sed utility allows specifying multiple editing commands on the same command line; in other words, it is not necessary to pipe multiple sed commands. The editing commands need to be separated with a semicolon (;). eg., the following two commands are equivalent and yield the same output:

% prtconf | grep Memory | sed 's/Megabytes/MB/g' | sed 's/ size//g'
Memory: 65312 MB

% prtconf | grep Memory | sed 's/Megabytes/MB/g;s/ size//g'
Memory: 65312 MB
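For what it's worth, the same consolidation can also be written with one -e option per editing command, which some find easier to read when the commands themselves contain semicolons:

% prtconf | grep Memory | sed -e 's/Megabytes/MB/g' -e 's/ size//g'
Memory: 65312 MB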


Perspectives

Getting Started with Oracle Solaris 11.4 on Oracle Cloud Infrastructure (OCI)

With today's release of Oracle Solaris 11.4, we're making pre-built images available for use in Oracle Cloud Infrastructure (OCI). The images aren't part of the official OCI image catalog at this time, but using them is easy; just follow these steps, which are the same as in my previous post on the 11.4 beta images.

1. Login to your OCI console and select Compute->Custom Images from the main menu; this will display the Images page.
2. Press the blue Import Image button. This will display the Import Image dialog.
3. In the dialog, select a compartment into which the image will be imported, and enter a name, such as "Solaris 11.4". Select Linux for the operating system, since OCI doesn't yet know about Solaris and that will avoid any special handling that OCI has for Windows images.
4. At this point, choose which image you wish to import:
   Bare Metal: Copy this link and paste it into the Object Storage URL field. Select QCOW2 as the Image Type, and Native Mode as the Launch Mode. Enter any tags you wish to apply, and then press Import Image.
   Virtual Machine: Copy this link and paste it into the Object Storage URL field. Select VMDK as the Image Type, and Emulated Mode as the Launch Mode. Enter any tags you wish to apply, and then press Import Image.

It'll take a few minutes for OCI to copy the image from object storage into your tenant's image repository. Once that's complete, you can launch an instance using the image. First, one tip: if you've imported the Bare Metal image, you should go to its Image Details page and press the Edit Details button. In the Edit Image Details dialog that comes up, there's a Compatible Shapes list. You'll find that all of the shapes have a blue checkmark. You should uncheck all of the VM shapes and then Save the image. The reason is that Solaris is not capable of booting in OCI's native virtual machine shapes at this time, and this will prevent anyone who uses that image from inadvertently launching a VM that won't be accessible. We're working on running Solaris under OCI's native VM technology, but since it's not ready yet, we've made the emulated mode image available for now.

When creating an instance, select Custom Image as the boot volume type and select the image you've imported along with a compatible shape. You'll need to supply an ssh key in order to login to the instance once it's started; when creating a VM, it's necessary to click the Show Advanced Options link to access the SSH Keys settings.

After you start an instance, login using ssh opc@<instance ip>. The image contains a relatively minimal Solaris installation suitable for bootstrapping into a cloud environment - this is the solaris-cloud-guest group package. You'll likely need to install more software to do anything beyond some simple exploration; to add more Solaris packages, just use the pkg command to install from the Solaris release repository.

Now that you've got an instance running, there's a lot more you can do with it, including saving any modifications you make as a new Custom Image of your own that you can then redeploy directly to a new instance (note, though, that at this point a modified bare metal image will only be deployable to bare metal, and a VM image will only be deployable to a VM). Leave a comment here, post on the Solaris 11 community forum, or catch me @dave_miner on Twitter if you have topic suggestions or questions. And of course check out my previous post on automatically configuring Solaris guests in OCI.
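As a quick, purely illustrative example of that pkg step, installing the Apache web server from the release repository looks like the line below; swap in whatever packages your application actually needs:

opc@instance-20180904-1408:~$ sudo pkg install web/server/apache-24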


Announcements

Oracle Solaris 11.4 Released for General Availability

Oracle Solaris 11.4: The trusted business platform. I'm pleased to announce the release of Oracle Solaris 11.4. Of the four releases of Oracle Solaris that I've been involved in, this is the best one yet! Oracle Solaris is the trusted business platform that you depend on. Oracle Solaris 11 gives you consistent compatibility, is simple to use and is designed to always be secure.

Some fun facts about Oracle Solaris 11.4:

There have been 175 development builds to get us to Oracle Solaris 11.4.
We've tested Oracle Solaris 11.4 for more than 30 million machine hours.
Over 50 customers have already put Oracle Solaris 11.4 into production, and it already has more than 3000 applications certified to run on it.
Oracle Solaris 11.4 is the first and, currently, the only operating system that has completed UNIX® V7 certification.

What's new

Consistently Compatible

That last number in the fun facts is interesting because it is a small subset of the applications that will run on Oracle Solaris 11.4. It doesn't include applications that will run on Oracle Solaris 11 that were designed and built for Oracle Solaris 10 (nor 8 and 9, for that matter). One of the reasons why Oracle Solaris is trusted by so many large companies and governments around the world to run their most mission-critical applications is our consistency. One of the key capabilities of Oracle Solaris is the Oracle Solaris Application Compatibility Guarantee. For close to 20 years now, we have guaranteed that Oracle Solaris will run applications built on previous releases of Oracle Solaris, and we continue to keep that promise today. Additionally, we've made it easier than ever to migrate your Oracle Solaris 10 workloads to Oracle Solaris 11. We've enhanced our migration tools and documentation to make moving from Oracle Solaris 10 to Oracle Solaris 11 on modern hardware simple. All in an effort to save you money.

Simple to Use

Of course, with every release of Oracle Solaris, we work hard to make life simpler for our users. This release is no different. We've included several new features in Oracle Solaris 11.4 that make it easier than ever to manage. The coolest of those new features is our new Observability Tools System Web Interface. The System Web Interface brings together several key observability technologies, including the new StatsStore data, audit events and FMA events, into a centralized, customizable browser-based interface that allows you to see current and past system behavior at a glance. James McPherson did an excellent job of writing all about the Web Interface here. He also wrote about what we collect by default here. Of course, you can also add your own data to be collected and customize the interface as you like. And if you want to export the data to some other application like a spreadsheet or database, my colleague Joost Pronk wrote a blog on how to get the data into a CSV format file. For more information, you can read about it all in our Observability Tools documentation.

The Service Management Framework has been enhanced to allow you to automatically monitor and restart critical applications and services. Thejaswini Kodavur shows you how to use our new SMF goal services.

We've made managing and updating Oracle Solaris Zones and the applications you run inside them simpler than ever. We started by supplying you with the ability to evacuate a system of all of its Zones with just one command. Oh, and you can bring them all back with just one command too; a rough sketch of that workflow follows.
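The outline below assumes the sysadm(8) maintenance and evacuate subcommands and their flags; treat it as a sketch and check the sysadm(8) manual page on your system before relying on it:

# sysadm maintain -s -m "planned maintenance"    (put the host into maintenance mode)
# sysadm evacuate -v                             (evacuate the zones in one command)
# sysadm evacuate -r                             (after maintenance: bring them all back)
# sysadm maintain -e                             (end maintenance mode)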
Starting with Oracle Solaris 11.4, you can now build intra-Zone dependencies and have the dependent Zones boot in the correct order, allowing you to automatically boot and restart complex application stacks in the correct order. Jan Pechanec wrote a nice how-to blog for you to get started. Joost Pronk wrote a community article going into the details more deeply.

In Oracle Solaris 11.4, we give you one of the most requested features to make ZFS management even simpler than it already is: Cindy Swearingen talks about how Oracle Solaris now gives you ZFS device removal. Oracle Solaris is designed from the ground up to be simple to manage, saving you time.

Always Secure

Oracle Solaris is consistently compatible and is simple to use, but if it is one thing above all others, it is focused on security, and in Oracle Solaris 11.4 we give you even more security capabilities to make getting and staying secure and compliant easy.

We start with multi-node compliance. In Oracle Solaris 11.4, you can now set up compliance to either push a compliance assessment to all systems with a single command and review the results in a single report, or set up your systems to regularly generate their compliance reports and push them to a central server where they can also be viewed via a single report. This makes maintaining compliance across your data center even easier. You can find out more about multi-node compliance here.

But how do you keep your systems compliant once they are made compliant? One of the most straightforward ways is to take advantage of Immutable Zones (this includes the Global Zone). Immutable Zones prevent even system administrators from writing to the system, yet still allow patches and updates via IPS. This is done via a trusted path. However, this also means that your configuration management tools like Puppet and Chef aren't able to write to the Zone to apply required configuration changes. In Oracle Solaris 11.4, we added trusted path services: now you can create your own services, like Puppet and Chef, that can be placed on the trusted path, allowing them to make the requisite changes while keeping the system/zone immutable and protected.

Oracle Solaris Zones, especially Immutable Zones, are an incredibly useful tool for building isolation into your environment to protect applications and your data center from cyber attack or even just administrative error. However, sometimes a Zone is too much; you really just want to be able to isolate applications on a system or within a Zone or VM. For this, we give you Application Sandboxing. It allows you to isolate an application, or isolate applications from each other. Sandboxes provide additional separation of applications and reduce the risk of unauthorized data access. You can read more about it in Darren Moffat's blog, here.

Oracle Solaris 11 is engineered to help you get and stay secure and compliant, reducing your risk.

Updating

In order to make your transition from Oracle Solaris 11.3 to Oracle Solaris 11.4 as smooth as possible, we've included a new compliance benchmark that will tell you if there are any issues, such as old, unsupported device drivers, unsupported software, or hardware that isn't supported by Oracle Solaris 11.4.
To install this new benchmark, update to Oracle Solaris 11.3 SRU 35 and run:

# pkg install update-check

Then to run the check, you simply run:

# compliance assess -b ehc-update -a 114update
# compliance report -a 114update -o ./114update.html

You can then use Firefox to view the report. My personal system started out as an Oracle Solaris 11.1 system and has been upgraded over the years to an Oracle Solaris 11.3 system. As you can see, I had some failures. These were some old device drivers and old versions of software like gcc-3, the iperf benchmarking tool, etc. The compliance report tells you exactly what needs to happen to resolve the failures. The device drivers weren't needed any longer, and I uninstalled them per the report's instructions. The report said the software would be removed automatically during upgrade.

Oracle Solaris Cluster 4.4

Of course, with each update of Oracle Solaris 11, we release a new version of Oracle Solaris Cluster so you can upgrade in lock-step to provide a smooth transition for your HA environments. You can read about what's new in Oracle Solaris Cluster 4.4 in the What's New, and find out more from the Data Sheet, the Oracle Technology Network and our documentation.

Try it out

You can download Oracle Solaris 11.4 now from the Oracle Technology Network (OTN) for bare metal or VirtualBox, from My Oracle Support (MOS), and from our repository at pkg.oracle.com. Take a moment to check out our new OTN page, and you can engage with other Oracle Solaris users and engineers on the Oracle Solaris Community page.

UNIX® and UNIX® V7 are registered trademarks of The Open Group.


Zones Delegated Restarter and SMF Goals

Managing Zones in Oracle Solaris 11.3

In Oracle Solaris 11.3, Zones are managed by the Zones service svc:/system/zones:default. The service performs the autobooting and shutdown of Zones on system boot and shutdown according to each zone's configuration. The service boots, in parallel, all Zones configured with autoboot=true, and shuts down, also in parallel, all running Zones with their corresponding autoshutdown option: halt, shutdown, or suspend. See the zonecfg(1M) manual page in 11.3 for more information.

The management mechanism existing in 11.3 is sufficient for systems running a small number of Zones. However, the growing size of systems and the increasing number of Zones on them require a more sophisticated mechanism. The issue is the following set of shortcomings:

Very limited integration with SMF; that also means zones may not depend on other services and vice versa.
No threshold tunable for the number of Zones booted in parallel.
No integration with FMA.
No mechanism to prioritize the Zones booting order.
No mechanism for providing information on when a zone is considered up and running.

This blog post describes enhancements brought in by 11.4 that address these shortcomings of the Zones service in 11.3.

Zones Delegated Restarter and Goals in Oracle Solaris 11.4

To solve the shortcomings outlined in the previous section, Oracle Solaris 11.4 brings the Zones Delegated Restarter (ZDR) to manage the Zones infrastructure and autobooting, and SMF Goals. Each zone aside from the Global Zone is modeled as an SMF instance of the service svc:/system/zones/zone:<zonename>, where the name of the instance is the name of the zone. Note that the Zones configuration was not moved into the SMF repository. For an explanation of what an SMF restarter is, see the section "Restarters" in the smf(7) manual page.

Zone SMF Instances

The ZDR replaces the existing shell script, /lib/svc/method/svc-zones, in the Zones service method with the restarter daemon, /usr/lib/zones/svc.zones. See the svc.zones(8) manual page for more information. The restarter runs under the existing Zones service svc:/system/zones:default.

A zone SMF instance is created at zone creation time. The instance is marked as incomplete for zones in the configured state. On zone install, attach, and clone, the zone instance is marked as complete. Conversely, on zone detach or uninstall, the zone instance is marked incomplete. The zone instance is deleted when the zone is deleted via zonecfg(8). An example of listing the zone instances:

$ svcs svc:/system/zones/zone
STATE     STIME    FMRI
disabled  12:42:55 svc:/system/zones/zone:tzone1
online    16:29:47 svc:/system/zones/zone:s10
$ zoneadm list -vi
  ID NAME    STATUS     PATH                  BRAND      IP
   0 global  running    /                     solaris    shared
   1 s10     running    /system/zones/s10     solaris10  excl
   - tzone1  installed  /system/zones/tzone1  solaris    excl

On start-up, the ZDR creates a zone SMF instance for any zone (save for the Global Zone) that does not have one but is supposed to. Likewise, if there is a zone SMF instance that does not have a corresponding zone, the restarter will remove the instance. The ZDR is responsible for setting up the infrastructure necessary for each zone, spawning a zoneadmd(8) daemon for each zone, and restarting the daemon when necessary. There is a running zoneadmd for each zone in a state greater than configured on the system. ZDR-generated messages related to a particular zone are logged to /var/log/zones/<zonename>.messages, which is where zoneadmd logs as well.
Failures during the infrastructure setup for a particular zone will place the zone into the maintenance state. A svcadm clear on the zone instance triggers the ZDR to re-try.

SMF Goal Services

SMF goals provide a mechanism by which a notification can be generated if a zone is incapable of auto-booting to a fully online state (i.e. unless an admin intervenes). With their integration into the ZDR, they address one of the shortcomings mentioned above: that there was no mechanism for providing information on when a zone is considered up and running.

A goal service is a service with the general/goal-service=true property setting. Such a service enters the maintenance state if its dependencies can never be satisfied. Goal services in maintenance automatically leave that state once their dependencies are satisfiable. The goal services failure mechanism is entirely encompassed in the SMF dependency graph engine. Any service can be marked as a goal service.

We also introduced a new synthetic milestone modeled as a goal service, svc:/milestone/goals:default. The purpose of this new milestone is to provide a clear, unambiguous, and well-defined point where we consider the system up and running. The dependencies of milestone/goals should be configured to represent the mission-critical services for the system. There is one dependency by default:

root@vzl-143:~# svcs -d milestone/goals
STATE   STIME    FMRI
online  18:35:49 svc:/milestone/multi-user-server:default

While goals are ZDR agnostic, they are a fundamental requirement of the ZDR, which uses the states of the milestone/goals services to determine the state of each non-global zone. To change the dependency, use svcadm goals:

# svcadm goals milestone/multi-user-server:default network/http:apache24
# svcs -d milestone/goals
STATE   STIME    FMRI
online  18:35:49 svc:/milestone/multi-user-server:default
online  19:53:32 svc:/network/http:apache24

To reset (clear) the dependency service set to the default, use svcadm goals -c.

Zone SMF Instance State

The ZDR is notified of the state of the milestone/goals service of each non-global zone that supports it, and the zone instance state of each non-global zone will match the state of its milestone/goals. Kernel Zones that support the milestone/goals service (i.e. those with 11.4+ installed) use internal auxiliary states to report back to the host. Kernel Zones that do not support milestone/goals are considered online when their state is running and their auxiliary state is hotplug-cpu. Zone SMF instances mapping to solaris10(7) branded Zones will have their state driven by the exit code of the zoneadm command. If a zone's milestone/goals is absent or disabled, the ZDR will treat the zone as not having support for milestone/goals.

The ZDR can be instructed to ignore milestone/goals for the purpose of moving the zone SMF instance to the online state based only on the success of zoneadm boot -- if zoneadm boot fails, the zone SMF instance is placed into maintenance. The switch is controlled by the following SMF property under the ZDR service instance, svc:/system/zones:default:

config/track-zone-goals = true | false

For example, this feature might be useful to providers of VMs to different tenants (IaaS) that do not care about what is running on the VMs but only care whether those VMs are accessible to their tenants; see the sketch below.
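As a hedged illustration of flipping that switch for such an IaaS-style host, the commands below set the property named above and refresh the restarter; the boolean: type tag is shown explicitly, and it's worth verifying the current value with listprop first, as this is a sketch rather than a prescribed procedure:

# svccfg -s svc:/system/zones:default listprop config/track-zone-goals
# svccfg -s svc:/system/zones:default setprop config/track-zone-goals = boolean: false
# svcadm refresh svc:/system/zones:default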
Zone Dependencies

The set of SMF FMRIs that make up the zone dependencies is defined by a new zonecfg resource, smf-dependency. It is comprised of fmri and grouping properties. All SMF dependencies for a zone will be of type service and have restart_on none -- we do not want zones being shut down or restarted because of a faulty, flip-flopping dependency. An example of setting dependencies for a zone:

add smf-dependency
set fmri=svc:/application/frobnicate:default
end
add smf-dependency
set fmri=svc:/system/zones/zone:appfirewall
end
add smf-dependency
set fmri=svc:/system/zones/zone:dataload
set grouping=exclude_all
end

The default for grouping is require_all. See the smf(7) manual page for other dependency grouping types.

Zone Config autoboot Configuration

The zone instance SMF property general/enabled corresponds to the zone configuration property autoboot and is stored in the SMF repository. The existing Zones interfaces, zonecfg(1M) export and zoneadm(1M) detach -n, stay unmodified and contain autoboot in their output. Also, the RAD interfaces accessing the property do not change from 11.3.

Zones Boot Order

There are two ways to establish the zones boot order. One is determined by the SMF dependencies of a zone SMF instance (see smf-dependency above). The other is an assigned boot priority for a zone: once the SMF dependencies are satisfied for a zone, the zone is placed in a queue according to its priority, and the ZDR then boots zones from the highest to the lowest boot priority in the queue. The new zonecfg property is boot-priority:

set boot-priority={ high | default | low }

Note that the boot ordering based on assigned boot priority is best-effort and thus non-deterministic. It is not guaranteed that all zones with a higher boot priority will be booted before all zones with a lower boot priority. If your configuration requires deterministic behavior, use SMF dependencies.

Zones Concurrent Boot and Suspend/Resume Limits

The ZDR can limit the number of concurrent zones booting up or shutting down, and suspending or resuming. The maximum number of concurrent boot and suspend/resume operations is determined by the following properties on the ZDR service instance:

$ svccfg -s system/zones:default listprop config/concurrent*
config/concurrent-boot-shutdown   count  0
config/concurrent-suspend-resume  count  0

0 or the absence of a value means there is no limit imposed by the restarter. If the value is N, the restarter will attempt to boot in parallel at most N zones. The booting process of a non-global zone will be considered complete when the milestone/goals of the zone is reached. If the milestone/goals cannot be reached, the zone SMF instance will be placed into maintenance and the booting process for that zone will be deemed complete from the ZDR perspective. Kernel Zones that do not support milestone/goals are considered up when the zone auxiliary state hotplug-cpu is set; KZs with the goal support use private auxiliary states to report back to the host. solaris10 branded zones will be considered up when the zoneadm boot command returns.

Integration with FMA

This requirement was automatically achieved by fully integrating the Zones framework with SMF.

Example

Let's have a very simplistic example with zones jack, joe, lady, master, and yesman<0-9>. The master zone depends on lady, lady depends on both jack and joe, and we do not care much about when the yesman<0-9> zones boot up.

+------+        +------+        +--------+
| jack |<---+---| lady |<-------| master |
+------+   /    +------+        +--------+
+------+  /
| joe  |<+
+------+

+---------+     +---------+
| yesman0 | ... | yesman9 |
+---------+     +---------+

Let's not tax the system excessively when booting, so we set the boot concurrency to 2 for this example. Also, let's assume we need a running web server in jack, so we add that one to the goals milestone. Based on the environment we have, we choose to assign the high boot priority to jack and joe, keep lady and master at the default priority, and put all the yesman zones at the low boot priority. To achieve all of the above, this is what we need to do:

# svccfg -s system/zones:default setprop config/concurrent-boot-shutdown=2
# zlogin jack svcadm goals svc:/milestone/multi-user-server:default apache24
# zonecfg -z jack "set boot-priority=high"
# zonecfg -z joe "set boot-priority=high"
# zonecfg -z lady "add smf-dependency; set fmri=svc:/system/zones/zone:jack; end"
# zonecfg -z lady "add smf-dependency; set fmri=svc:/system/zones/zone:joe; end"
# zonecfg -z master "add smf-dependency; set fmri=svc:/system/zones/zone:lady; end"
# for i in $(seq 0 9); do zonecfg -z yesman$i "set boot-priority=low"; done

During the boot, you may see something like the following. As mentioned above, the boot priority is best effort; also, given we have dependencies, some yesman zones will boot up before some higher-priority zones. You will see that at any given moment during the boot, only two zones are being booted up in parallel (the '*' denotes a service in a state transition, see svcs(1)), as we set the boot concurrency above to 2.

$ svcs -o STATE,FMRI -s FMRI system/zones/*
STATE     FMRI
offline*  svc:/system/zones/zone:jack
online    svc:/system/zones/zone:joe
offline   svc:/system/zones/zone:lady
offline   svc:/system/zones/zone:master
offline   svc:/system/zones/zone:yesman0
offline   svc:/system/zones/zone:yesman1
offline   svc:/system/zones/zone:yesman2
offline*  svc:/system/zones/zone:yesman3
online    svc:/system/zones/zone:yesman4
online    svc:/system/zones/zone:yesman5
offline   svc:/system/zones/zone:yesman6
offline   svc:/system/zones/zone:yesman7
offline   svc:/system/zones/zone:yesman8
online    svc:/system/zones/zone:yesman9

Conclusion

With the Zones Delegated Restarter introduced in 11.4, we resolved several shortcomings of the Zones framework in 11.3. There is always room for additional enhancements, for example making the boot ordering based on boot priorities more deterministic. We are open to any feedback you might have on this new Zones Delegated Restarter feature.


Oracle Solaris 11.3 SRU 35 released

Earlier today we released Oracle Solaris 11.3 SRU 35. It's available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support .

This SRU introduces the following enhancements:

Compliance Update Check: allows users to verify the system is not using features which are no longer supported in newer releases
Oracle VM Server for SPARC has been updated to version 3.5.0.3. More details can be found in the Oracle VM Server for SPARC 3.5.0.3 Release Notes.
The Java 8, Java 7, and Java 6 packages have been updated
Explorer 18.3 is now available
libepoxy has been added to Oracle Solaris
gnu-gettext has been updated to 0.19.8
bison has been updated to 3.0.4
at-spi2-atk has been updated to 2.24.0
at-spi2-core has been updated to 2.24.0
gtk+3 has been updated to 3.18.0
Fixed the missing magick/static.h header in the ImageMagick manifest

The following components have also been updated to address security issues:

python has been updated to 3.4.8
BIND has been updated to 9.10.6-P1
Apache Tomcat has been updated to 8.5.3
kerberos 5 has been updated to 1.16.1
Wireshark has been updated to 2.6.2
Thunderbird has been updated to 52.9.1
libvorbis has been updated to 1.3.6
MySQL has been updated to 5.7.23
gdk-pixbuf, libtiff, jansson, procmail, libgcrypt, libexif

Full details of this SRU can be found in My Oracle Support Doc 2437228.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).


Recovering a Missing Zone After a Repeated Upgrade From Oracle Solaris 11.3 to 11.4

Recap of the Problem

As I explained in my past Shared Zone State in Oracle Solaris 11.4 blog post, if you update to Oracle Solaris 11.4 from 11.3, boot the new 11.4 BE, create/install new zones there, then boot back to 11.3, update again to 11.4, and boot that second 11.4 BE, the zones you created and installed in the first 11.4 BE will no longer be shown in the zoneadm list -c output when you boot from the second 11.4 BE. Those zones are missing because the second update from 11.3 to 11.4 replaced the shared zones index file, storing the original one containing the zone entries to a /var/share/zones/index.json.backup.<date>--<time> file. However, I did not explain how to get such zones back, and that is what I'm going to show in this blog post. There are two pieces missing: the zones are not in the shared index file, and the zones do not have their zone configurations in the current BE.

The Recovery Solution

The fix is quite easy, so let's show how to recover one zone. Either export the zone config from the first 11.4 BE via zonecfg -z <zonename> export > <exported-config> and import it in the other 11.4 BE via zonecfg -z <zonename> -f <exported-config>, or just manually create the zone in the second 11.4 BE with the same configuration. Example:

BE-AAA# zonecfg -z xxx export > xxx.conf
BE-BBB# zonecfg -z xxx -f xxx.conf

In this specific case of multiple updates to 11.4, you could also manually copy <mounted-1st-11.4-be>/etc/zones/<zonename>.xml from the first 11.4 BE (use beadm mount <1st-11.4-be-name> /a to mount it), but note that that's not a supported way to do things, as in general configurations from different system versions may not be compatible. If that is the case, the configuration update is done during the import or on the first boot. However, in this blog entry, I will cheat and use a simple cp(1), since I know that the configuration file is compatible with the BE I'm copying it into. The described recovery solution is brands(7) agnostic.

Example

The example that follows recovers a missing zone, uar. Each BE is denoted by a different shell prompt.

root@s11u4_3:~# zonecfg -z uar create
root@s11u4_3:~# zoneadm -z uar install
root@s11u4_3:~# zoneadm list -cv
  ID NAME    STATUS     PATH                  BRAND    IP
   0 global  running    /                     solaris  shared
   - tzone1  installed  /system/zones/tzone1  solaris  excl
   - uar     installed  /system/zones/uar     solaris  excl
root@s11u4_3:~# beadm activate sru35.0.3
root@s11u4_3:~# reboot -f

root@S11-3-SRU:~# pkg update --be-name=s11u4_3-b -C0 --accept entire@11.4-11.4.0.0.1.3.0
...
root@S11-3-SRU:~# reboot -f

root@s11u4_3-b:~# svcs -xv
svc:/system/zones-upgrade:default (Zone config upgrade after first boot)
 State: degraded since Fri Aug 17 13:39:53 2018
Reason: Degraded by service method: "Unexpected situation during zone index conversion to JSON."
   See: http://support.oracle.com/msg/SMF-8000-VE
   See: /var/svc/log/system-zones-upgrade:default.log
Impact: Some functionality provided by the service may be unavailable.
root@s11u4_3-b:~# zoneadm list -cv
  ID NAME    STATUS     PATH                  BRAND    IP
   0 global  running    /                     solaris  shared
   - tzone1  installed  /system/zones/tzone1  solaris  excl
root@s11u4_3-b:~# beadm mount s11u4_3 /a
root@s11u4_3-b:~# cp /a/etc/zones/uar.xml /etc/zones/
root@s11u4_3-b:~# zonecfg -z uar create
Zone uar does not exist but its configuration file does. To reuse it, use -r;
create anyway to overwrite it (y/[n])? n
root@s11u4_3-b:~# zonecfg -z uar create -r
Zone uar does not exist but its configuration file does; do you want to reuse it (y/[n])? y
root@s11u4_3-b:~# zoneadm list -cv
  ID NAME    STATUS      PATH                  BRAND    IP
   0 global  running     /                     solaris  shared
   - tzone1  installed   /system/zones/tzone1  solaris  excl
   - uar     configured  /system/zones/uar     solaris  excl
root@s11u4_3-b:~# zoneadm -z uar attach -u
Progress being logged to /var/log/zones/zoneadm.20180817T134924Z.uar.attach
    Zone BE root dataset: rpool/VARSHARE/zones/uar/rpool/ROOT/solaris-0
                   Updating image format
Image format already current.
  Updating non-global zone: Linking to image /.
  Updating non-global zone: Syncing packages.
Packages to update: 527
Services to change:   2
...
...
Result: Attach Succeeded.
Log saved in non-global zone as /system/zones/uar/root/var/log/zones/zoneadm.20180817T134924Z.uar.attach
root@s11u4_3-b:~# zoneadm list -cv
  ID NAME    STATUS     PATH                  BRAND    IP
   0 global  running    /                     solaris  shared
   - tzone1  installed  /system/zones/tzone1  solaris  excl
   - uar     installed  /system/zones/uar     solaris  excl

Conclusion

This situation of missing zones on multiple updates from 11.3 to 11.4 is inherently part of the change from BE-specific zone indexes in 11.3 to a shared index in 11.4. You should only encounter it if you go back from 11.4 to 11.3 and update again to 11.4, and we assume such situations will not happen often. The final engineering consensus during the design was that users mostly move forward, i.e. update to greater system versions rather than going back; if they do happen to go back to 11.3 and update again to 11.4, they would expect the same list of zones as they had on the 11.3 BE they last used for the 11.4 update.


Oracle Solaris 11

Trapped by Older Software

Help! I am trapped by my Old Application Which Cannot Change. I need to update my Oracle Solaris 11 system to a later Update and/or Support Repository Update (SRU), but I find upon the update my favourite FOSS component has been removed. My Old Application Which Cannot Change has a dependency upon it and so no longer starts. Help!

Oracle Solaris, by default, will ensure that the software installed on the system is up to date. This includes the removal (uninstall) of software that is obsolete. Packages can be marked as obsolete because the owning community no longer supports them, because they have been replaced by another package or a later major version, or for some other valid reason (we are very careful about the removal of software components). However, by design, the contention between wanting to keep the operating system up to date and allowing for exceptions is addressed by Oracle Solaris's packaging system. This is performed via the 'version locks' within it.

Taking an example: Oracle Solaris 11.3 SRU 20 obsoleted Python 2.6, because that version of Python had been End-of-Lifed by the Python community. And so updating a system beyond that SRU will result in Python 2.6 being removed. But what happens if you actually need to use Python 2.6 because of some application dependency? Well, the first thing is to check with the application vendor to see if there is a later version that supports the newer version of Python; if so, consider updating to that later version. But maybe there is no later release of the application, so in this instance how do you get python-26 onto your system? Follow the steps below:

1. Identify the version of the required package, using pkg list -af <name of package>. For example:
   pkg list -af runtime/python-26
2. Identify if the package has dependencies that need to be installed:
   pkg contents -r -t depend runtime/python-26@2.6.8-0.175.3.15.0.4.0
   The python-26 package is interesting as it has a conditional dependency upon the package runtime/tk-8, such that it depends upon library/python/tkinter-26. So if tk-8 is installed, then tkinter-26 will need to be installed.
3. Identify the incorporation that locks the package(s):
   pkg search depend:incorporate:runtime/python-26
4. Using the information in the previous step, find the relevant lock(s):
   pkg contents -m userland-incorporation | egrep 'runtime/python-26|python/tkinter-26'
5. Unlock the package(s):
   pkg change-facet version-lock.runtime/python-26=false version-lock.library/python/tkinter-26=false
6. Update the package(s) to the version identified in the first step:
   pkg update runtime/python-26@2.6.8-0.175.3.15.0.4.0
   No need to worry about tkinter-26 here, because the dependency within the python-26 package will cause it to be installed.
7. Freeze the package(s) so that further updates will not remove them, and put a comment on the freeze to indicate why the package is installed:
   pkg freeze -c 'Needed for Old Application' runtime/python-26
8. If required, update the system to the later SRU or Oracle Solaris Update:
   pkg update

Another complete example, using the current Oracle Solaris 11.4 Beta and Java 7:

# pkg list -af jre-7
NAME (PUBLISHER)     VERSION       IFO
runtime/java/jre-7   1.7.0.999.99  --o
runtime/java/jre-7   1.7.0.191.8   ---
runtime/java/jre-7   1.7.0.181.9   ---
runtime/java/jre-7   1.7.0.171.11  ---
runtime/java/jre-7   1.7.0.161.13  ---
...
# pkg search depend:incorporate:runtime/java/jre-7
INDEX       ACTION VALUE                                 PACKAGE
incorporate depend runtime/java/jre-7@1.7.0.999.99,5.11  pkg:/consolidation/java-7/java-7-incorporation@1.7.0.999.99-0
# pkg contents -m java-7-incorporation | grep jre-7
depend fmri=runtime/java/jre-7@1.7.0.999.99,5.11 type=incorporate

Oh. There is no lock. What should we do now? Is there a lock on the Java 7 incorporation that can be used? Yes! See the results of the searching in the first command below. So we can unlock that one and install Java 7.

# pkg search depend:incorporate:consolidation/java-7/java-7-incorporation
INDEX       ACTION VALUE                                                   PACKAGE
incorporate depend consolidation/java-7/java-7-incorporation@1.7.0.999.99  pkg:/entire@11.4-11.4.0.0.1.12.0
# pkg contents -m entire | grep java-7-incorporation
depend fmri=consolidation/java-7/java-7-incorporation type=require
depend facet.version-lock.consolidation/java-7/java-7-incorporation=true fmri=consolidation/java-7/java-7-incorporation@1.7.0.999.99 type=incorporate
# pkg change-facet version-lock.consolidation/java-7/java-7-incorporation=false
            Packages to change:   1
     Variants/Facets to change:   1
       Create boot environment:  No
Create backup boot environment: Yes

PHASE                            ITEMS
Removing old actions               1/1
Updating package state database   Done
Updating package cache             0/0
Updating image state              Done
Creating fast lookup database     Done
Updating package cache             1/1
# pkg list -af java-7-incorporation
NAME (PUBLISHER)                           VERSION         IFO
consolidation/java-7/java-7-incorporation  1.7.0.999.99-0  i--
consolidation/java-7/java-7-incorporation  1.7.0.191.8-0   ---
consolidation/java-7/java-7-incorporation  1.7.0.181.9-0   ---
....
# pkg install --accept jre-7@1.7.0.191.8 java-7-incorporation@1.7.0.191.8-0
           Packages to install:   2
            Packages to update:   1
       Create boot environment:  No
Create backup boot environment: Yes

DOWNLOAD       PKGS     FILES   XFER (MB)  SPEED
Completed       3/3   881/881   71.8/71.8 5.2M/s

PHASE                            ITEMS
Removing old actions               4/4
Installing new actions       1107/1107
Updating modified actions          2/2
Updating package state database   Done
Updating package cache             1/1
Updating image state              Done
Creating fast lookup database     Done
Updating package cache             1/1
# pkg freeze -c 'Needed for Old Application' java-7-incorporation
consolidation/java-7/java-7-incorporation was frozen at 1.7.0.191.8-0:20180711T215211Z
# pkg freeze
NAME                                       VERSION                         DATE                      COMMENT
consolidation/java-7/java-7-incorporation  1.7.0.191.8-0:20180711T215211Z  07 Aug 2018 14:50:34 UTC  Needed for Old Application

A couple of points about the above example. When installing the required version of Java, the corresponding incorporation at the correct version needs to be installed as well. The freeze has been applied to the Java 7 incorporation because that is the package that controls the Java 7 package version. The default version of Java remains Java 8, but that can be changed as per the next steps below via the use of mediators (see pkg(1) and look for mediator).
# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b12, mixed mode)
# /usr/jdk/instances/jdk1.7.0/bin/java -version
java version "1.7.0_191"
Java(TM) SE Runtime Environment (build 1.7.0_191-b08)
Java HotSpot(TM) Server VM (build 24.191-b08, mixed mode)
# pkg set-mediator -V 1.7 java
           Packages to change:   3
          Mediators to change:   1
       Create boot environment:  No
Create backup boot environment: Yes

PHASE                            ITEMS
Removing old actions               2/2
Updating modified actions          3/3
Updating package state database   Done
Updating package cache             0/0
Updating image state              Done
Creating fast lookup database     Done
Updating package cache             1/1
# java -version
java version "1.7.0_191"
Java(TM) SE Runtime Environment (build 1.7.0_191-b08)
Java HotSpot(TM) Server VM (build 24.191-b08, mixed mode)

Another example of unlocking packages is in the article More Tips for Updating Your Oracle Solaris 11 System from the Oracle Support Repository. In summary, Oracle Solaris 11 provides a single method to update all the operating system software via a pkg update, but additionally allows for exceptions to permit legacy applications to run.
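One last hedged sketch: if the Old Application is eventually retired, the exception can be unwound by lifting the freeze and restoring the version lock (package and facet names taken from the Java 7 example above):

# pkg unfreeze java-7-incorporation
# pkg change-facet version-lock.consolidation/java-7/java-7-incorporation=true
# pkg update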


Oracle Solaris 11

Solaris 11: High-Level Steps to Create an IPS Package

Keywords: Solaris package IPS+Repository pkg

1 Work on Directory Structure

Start by organizing the package contents (files) into the same directory structure that you want on the installed system. In the following example, the directory was organized in such a manner that installing the package results in the software being copied to the /opt/myutils directory. eg.,

# tree opt
opt
`-- myutils
    |-- docs
    |   |-- README.txt
    |   `-- util_description.html
    |-- mylib.py
    |-- util1.sh
    |-- util2.sh
    `-- util3.sh

Create a directory to hold the software in the desired layout. Let us call this "workingdir"; this directory will be specified in subsequent steps to generate the package manifest and finally the package itself. Move the top level software directory to the "workingdir".

# mkdir workingdir
# mv opt workingdir
# tree -fai workingdir/
workingdir
workingdir/opt
workingdir/opt/myutils
workingdir/opt/myutils/docs
workingdir/opt/myutils/docs/README.txt
workingdir/opt/myutils/docs/util_description.html
workingdir/opt/myutils/mylib.py
workingdir/opt/myutils/util1.sh
workingdir/opt/myutils/util2.sh
workingdir/opt/myutils/util3.sh

2 Generate Package Manifest

The package manifest provides metadata such as package name, description, version, classification & category, along with the files and directories included and the dependencies, if any, that need to be installed for the target package. The manifest of an existing package can be examined with the help of the pkg contents subcommand. The pkgsend generate command generates the manifest; it takes "workingdir" as input. Piping the output through pkgfmt makes the manifest readable.

# pkgsend generate workingdir | pkgfmt > myutilspkg.p5m.1
# cat myutilspkg.p5m.1
dir path=opt owner=root group=bin mode=0755
dir path=opt/myutils owner=root group=bin mode=0755
dir path=opt/myutils/docs owner=root group=bin mode=0755
file opt/myutils/docs/README.txt path=opt/myutils/docs/README.txt owner=root group=bin mode=0644
file opt/myutils/docs/util_description.html path=opt/myutils/docs/util_description.html owner=root group=bin mode=0644
file opt/myutils/mylib.py path=opt/myutils/mylib.py owner=root group=bin mode=0755
file opt/myutils/util1.sh path=opt/myutils/util1.sh owner=root group=bin mode=0644
file opt/myutils/util2.sh path=opt/myutils/util2.sh owner=root group=bin mode=0644
file opt/myutils/util3.sh path=opt/myutils/util3.sh owner=root group=bin mode=0644

3 Add Metadata to Package Manifest

Note that the package manifest is currently missing attributes such as name and description (metadata). Those attributes can be added directly to the generated manifest; however, the recommended approach is to rely on the pkgmogrify utility to make changes to an existing manifest. Create a text file with the missing package attributes. eg.,

# cat mypkg_attr
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global

The set name=variant.opensolaris.zone value=global action restricts the package installation to the global zone. To make the package installable in both global and non-global zones, either specify the set name=variant.opensolaris.zone value=global value=nonglobal action in the package manifest, or do not have any references to the variant.opensolaris.zone variant at all in the manifest.

Now merge the metadata with the manifest generated in the previous step.
# pkgmogrify myutilspkg.p5m.1 mypkg_attr | pkgfmt > myutilspkg.p5m.2
# cat myutilspkg.p5m.2
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global
dir path=opt owner=root group=bin mode=0755
dir path=opt/myutils owner=root group=bin mode=0755
dir path=opt/myutils/docs owner=root group=bin mode=0755
file opt/myutils/docs/README.txt path=opt/myutils/docs/README.txt owner=root group=bin mode=0644
file opt/myutils/docs/util_description.html \
    path=opt/myutils/docs/util_description.html owner=root group=bin mode=0644
file opt/myutils/mylib.py path=opt/myutils/mylib.py owner=root group=bin mode=0755
file opt/myutils/util1.sh path=opt/myutils/util1.sh owner=root group=bin mode=0644
file opt/myutils/util2.sh path=opt/myutils/util2.sh owner=root group=bin mode=0644
file opt/myutils/util3.sh path=opt/myutils/util3.sh owner=root group=bin mode=0644

4 Evaluate & Generate Dependencies

Generate the dependencies so they will be part of the manifest. It is recommended to rely on the pkgdepend utility for this task, rather than declaring depend actions manually, to minimize inaccuracies. eg.,

# pkgdepend generate -md workingdir myutilspkg.p5m.2 | pkgfmt > myutilspkg.p5m.3

At this point, ensure that the manifest has all the dependencies listed. If not, declare the missing dependencies manually.

5 Resolve Package Dependencies

This step might take a while to complete. eg.,

# pkgdepend resolve -m myutilspkg.p5m.3

6 Verify the Package

By this time the package manifest should pretty much be complete. Check and validate it manually, or using the pkglint utility (recommended), for consistency and any possible errors.

# pkglint myutilspkg.p5m.3.res

7 Publish the Package

For the purpose of demonstration, let's go with the simplest option to publish the package: a local file-based repository. Create the local file-based repository using the pkgrepo command, and set the default publisher for the newly created repository.

# pkgrepo create my-repository
# pkgrepo -s my-repository set publisher/prefix=mypublisher

Finally, publish the target package with the help of the pkgsend command.

# pkgsend -s my-repository publish -d workingdir myutilspkg.p5m.3.res
pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z
PUBLISHED
# pkgrepo info -s my-repository
PUBLISHER   PACKAGES STATUS UPDATED
mypublisher 1        online 2018-07-04T01:41:57.414014Z

8 Validate the Package

Finally, validate whether the published package has been packaged properly by test installing it.

# pkg set-publisher -p my-repository
# pkg publisher
# pkg install myutils
# pkg info myutils
             Name: myutils
          Summary: Utilities package
      Description: Utilities package
            State: Installed
        Publisher: mypublisher
          Version: 3.0
    Build Release: 5.11
           Branch: 0
   Packaging Date: Wed Jul 04 01:41:57 2018
Last Install Time: Wed Jul 04 01:45:05 2018
             Size: 49.00 B
             FMRI: pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z
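When you later need to ship an update, the same flow repeats with a bumped pkg.fmri. A brief sketch under that assumption (the 3.1 version string and intermediate file names are hypothetical):

# edit mypkg_attr so it reads: set name=pkg.fmri value=myutils@3.1,5.11-0
# regenerate, merge, resolve and lint as in steps 2-6 above, producing e.g. myutilspkg31.p5m.3.res
# pkgsend -s my-repository publish -d workingdir myutilspkg31.p5m.3.res
# pkg refresh
# pkg update myutils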


Perspectives

Automatic Configuration of Solaris OCI Guests

Once you've gone through the basics of setting up an Oracle Solaris guest on Oracle Cloud Infrastructure (OCI) covered in my previous post, you will likely wonder how you can customize guests automatically at launch. You can always create specific custom images by hand, but that has two problems: you're doing a lot of work by hand, and then you have to manage the custom images as well; once created, they become objects with a lifecycle of their own. The natural desire of system admins is to write some scripts to automate the work, and then run those scripts at first boot. That allows just managing the scripts and applying them when booting an instance of Solaris. Let's use setting up the Solaris 11.4 beta publisher as an example.

Here's a template for a script that can automate applying your certificate and key to access the Solaris 11.4 beta publisher:

#!/bin/ksh
#
# userdata script to setup pkg publisher for Solaris beta, install packages

MARKER=/var/tmp/userdata_marker

# If we've already run before in this instance, exit
[[ -f $MARKER ]] && exit 0

# Save key and certificate to files for use by pkg
cat <<EOF >/system/volatile/pkg.oracle.com.certificate.pem
# replace with contents of downloaded pkg.oracle.com.certificate.pem
EOF
cat <<EOF >/system/volatile/pkg.oracle.com.key.pem
# replace with contents of downloaded pkg.oracle.com.key.pem
EOF

# Wait for DNS configuration, as cloudbase-init intentionally doesn't wait
# for nameservice milestone
while [[ $(svcs -H -o state dns/client) != "online" ]]; do
    sleep 5
done

pkg set-publisher -G '*' -g https://pkg.oracle.com/solaris/beta \
    -c /system/volatile/pkg.oracle.com.certificate.pem \
    -k /system/volatile/pkg.oracle.com.key.pem solaris

# Publisher is set up, install additional packages here if desired
# pkg install ...

# Leave marker that this script has run
touch $MARKER

Copy this script, modify it by pasting in the contents of the certificate and key files you've downloaded from pkg-register.oracle.com, and save it. Now, select Create Instance in the OCI Console, and select your Solaris 11.4 beta image as the boot volume. Paste or select your ssh key, and then as a Startup Script select or paste your modified copy of the template script above (note: if you're using the emulated VM image, you'll need to click Show Advanced Options to access these two fields). Select a virtual cloud network for the instance, and then click Create Instance to start the launch process. Once the image is launched and you're able to ssh in, you can verify that the package repository is correctly configured using "pkg search".

There are lots of possible things you might do in such a startup script: install software, enable services, create user accounts, or any other things required to get an application running on a cloud instance. Note, though, that the script will run at every boot, not just the first one, so your script must either be idempotent or ensure that it runs only once. The pkg operations in the example script are idempotent, but I've included a simple run-once mechanism to optimize it.

Debugging Startup Script Problems

There are two components to the startup script mechanism. OCI provides a metadata service that publishes the startup script you provide, and Solaris includes the cloudbase-init service that downloads the metadata and applies it; the script is known as a userdata script. If your script doesn't work, you can examine the cloudbase-init service log using the command sudo svcs -Lv cloudbase-init.
By default, cloudbase-init only reports the exit status of the userdata script, which likely isn't enough to tell you what happened, since scripts generally can't provide specific error codes for every possible problem. You can enable full debug logging for cloudbase-init by modifying its config/debug property:

svccfg -s cloudbase-init:default setprop config/debug = boolean: true
svcadm refresh cloudbase-init
svcadm restart cloudbase-init

The log will now include all output sent to stdout or stderr from the script.
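If you'd rather inspect what the instance actually received, the metadata service can be queried from within the guest. The 169.254.169.254 address is OCI's standard instance metadata endpoint; the paths below follow OCI's v1 metadata layout and are worth double-checking against the OCI documentation:

$ curl http://169.254.169.254/opc/v1/instance/            (full instance metadata document, JSON)
$ curl http://169.254.169.254/opc/v1/instance/metadata/   (includes the userdata you supplied)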


Perspectives

Getting Started with Solaris 11.4 Beta Images for Oracle Cloud Infrastructure

A question that's coming up more and more often among Oracle Solaris customers is, "Can I run Solaris workloads in Oracle Cloud Infrastructure (OCI)?". Previously, it's only been possible by deploying your own OVM hypervisor in the OCI bare metal infrastructure, or by running guests in the OCI Classic infrastructure. As of today, I'm pleased that we can add bare metal and virtual machines in OCI as options. With the release of the latest refresh to the Oracle Solaris 11.4 Beta, we're providing pre-built images for use in OCI. The images aren't part of the official OCI image catalog at this time, but using them is easy; just follow these steps:

1. Login to your OCI console and select Compute->Custom Images from the main menu; this will display the Images page.
2. Press the blue Import Image button. This will display the Import Image dialog.
3. In the dialog, select a compartment into which the image will be imported, and enter a name, such as "Solaris 11.4 Beta". Select Linux for the operating system, since OCI doesn't yet know about Solaris and that will avoid any special handling that OCI has for Windows images.
4. At this point, choose which image you wish to import:
   Bare Metal: Copy this link and paste it into the Object Storage URL field. Select QCOW2 as the Image Type, and Native Mode as the Launch Mode. Enter any tags you wish to apply, and then press Import Image.
   Virtual Machine: Copy this link and paste it into the Object Storage URL field. Select VMDK as the Image Type, and Emulated Mode as the Launch Mode. Enter any tags you wish to apply, and then press Import Image.

It'll take a few minutes for OCI to copy the image from object storage into your tenant's image repository. Once that's complete, you can launch an instance using the image. First, one tip: if you've imported the Bare Metal image, you should go to its Image Details page and press the Edit Details button. In the Edit Image Details dialog that comes up, there's a Compatible Shapes list. You'll find that all of the shapes have a blue checkmark. You should uncheck all of the VM shapes and then Save the image. The reason is that Solaris is not capable of booting in OCI's native virtual machine shapes at this time, and this will prevent anyone who uses that image from inadvertently launching a VM that won't be accessible. We're working on running Solaris under OCI's native VM technology, but since it's not ready yet, we've made the emulated mode image available for now.

When creating an instance, select Custom Image as the boot volume type and select the image you've imported along with a compatible shape. You'll need to supply an ssh key in order to login to the instance once it's started; when creating a VM, it's necessary to click the Show Advanced Options link to access the SSH Keys settings.

After you start an instance, login using ssh opc@<instance ip>. The image contains a relatively minimal Solaris installation suitable for bootstrapping into a cloud environment - this is the solaris-cloud-guest group package. You'll likely need to install more software to do anything beyond some simple exploration; to add more Solaris packages, head on over to pkg-register.oracle.com and download a key and certificate to access the Oracle Solaris 11.4 Beta repository, following the instructions there to configure pkg.
Now that you've got an instance running, there's a lot more you can do with it, including saving any modifications you make as a new Custom Image of your own that you can then redeploy directly to a new instance (note, though, that at this point a modified bare metal image will only be deployable to bare metal, and a VM image will only be deployable to a VM). I'll post some how-to's for common tasks in the coming days, including deploying zones, creating your own images to move workloads into OCI, and using Terraform to orchestrate deployments. Leave a comment here, post on the Solaris Beta community forum, or catch me @dave_miner on Twitter if you have topic suggestions or questions.

Update: There's one problem with the VM image - if you create a new boot environment, either directly or via a pkg operation, and then reboot (even if you don't activate the new boot environment), the VM will end up in a panic loop. To avoid this, run the following command after you've logged into your VM:

sudo bootadm change-entry -i 0 kargs="-B enforce-prot-exec=off"


Oracle Solaris 11

Oracle Solaris 11.4 Open Beta Refresh 2

As we continue to work toward the release of Oracle Solaris 11.4, we present to you our third release of the Oracle Solaris 11.4 open beta. You can download it here, or, if you're already running a previous version of the Oracle Solaris 11.4 beta, make sure your system is pointing to the beta repository (https://pkg.oracle.com/solaris/beta/) as its publisher and type 'pkg update'.

This will be the last Oracle Solaris 11.4 open beta, as we are nearing release and are now going to focus our energies entirely on preparing Oracle Solaris 11.4 for general availability. The key focus of Oracle Solaris 11.4 is to bring new capabilities while maintaining application compatibility, to help you modernize and secure your infrastructure while protecting your application investment. This release is specifically focused on quality and application compatibility, making your transition to Oracle Solaris 11.4 seamless.

The refresh includes updates to 56 popular open source libraries and utilities, a new compliance(8) "explain" subcommand which provides details on the compliance checks performed against the system for a given benchmark, and a variety of other performance and security enhancements. In addition, this refresh delivers Kernel Page Table Isolation for x86 systems, which is important in addressing the Meltdown security vulnerability affecting some x86 CPUs. This update also includes an updated version of Oracle VM Server for SPARC, with improvements in console security and live migration, and introduces a LUN masking capability to simplify storage provisioning to guests.

We're excited about the content and capability of this update, and you'll be seeing more about specific features and capabilities in the Oracle Solaris blog in the coming days. As you try out the software in your own environment and with your own applications, please continue to give us feedback through the Oracle Solaris Beta Community Forum at https://community.oracle.com/community/server_&_storage_systems/solaris/solaris-beta
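For an existing beta system, the update sequence is short. A sketch, assuming your image already trusts the beta publisher's key and certificate:

# pkg set-publisher -G '*' -g https://pkg.oracle.com/solaris/beta/ solaris
# pkg update
# reboot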


Solaris

Python: Exclusive File Locking on Solaris

Solaris doesn't lock open files automatically (and not just Solaris - most *nix operating systems behave this way). In general, when a process is about to update a file, the process is responsible for checking existing locks on the target file, acquiring a lock, and releasing it after updating the file. However, given that not all processes cooperate and adhere to this mechanism (advisory locking), for various reasons, such non-conforming practice may lead to problems such as inconsistent or invalid data, mainly triggered by race conditions. Serialization is one possible solution to prevent this, where only one process is allowed to update the target file at any time. It can be achieved with the help of the file locking mechanism on Solaris, as on the majority of other operating systems.

On Solaris, a file can be locked for exclusive access by any process with the help of the fcntl() system call, which provides control over open files. It can be used for finer-grained control over the locking - for instance, we can specify whether or not the call should block while requesting an exclusive or shared lock. The following rudimentary Python code demonstrates how to acquire an exclusive lock on a file, making all other processes wait to get access to the file in question.

% cat -n xflock.py
     1  #!/bin/python
     2  import fcntl, time
     3  f = open('somefile', 'a')
     4  print 'waiting for exclusive lock'
     5  fcntl.flock(f, fcntl.LOCK_EX)
     6  print 'acquired lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')
     7  time.sleep(10)
     8  f.close()
     9  print 'released lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')

Running the above code in two terminal windows at the same time shows the following.

Terminal 1:
% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:36
released lock at 2018-06-30 22:25:46

Terminal 2:
% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:46
released lock at 2018-06-30 22:25:56

Notice that the process running in the second terminal was blocked waiting to acquire the lock until the process running in the first terminal released the exclusive lock.

Non-Blocking Attempt

If the requirement is not to block on exclusive lock acquisition, it can be achieved by performing a bitwise OR of the LOCK_EX (acquire exclusive lock) and LOCK_NB (do not block when locking) operations. In other words, the statement fcntl.flock(f, fcntl.LOCK_EX) becomes fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB), so the process will either get the lock or move on without blocking. Be aware that an IOError will be raised when a lock cannot be acquired in non-blocking mode. Therefore, it is the responsibility of the application developer to catch the exception and deal with the situation properly. The behavior changes as shown below after the inclusion of fcntl.LOCK_NB in the sample code above.

Terminal 1:
% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:42:34
released lock at 2018-06-30 22:42:44

Terminal 2:
% ./xflock.py
waiting for exclusive lock
Traceback (most recent call last):
  File "./xflock.py", line 5, in <module>
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
IOError: [Errno 11] Resource temporarily unavailable
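One way to handle that failure is to catch the exception and either retry later or move on. Here's a minimal sketch along the lines of the sample above; the errno check and messages are illustrative additions, not part of the original script:

import errno, fcntl

f = open('somefile', 'a')
try:
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError as e:
    if e.errno == errno.EAGAIN:
        # somebody else holds the lock; retry later or give up
        print 'file is locked by another process'
    else:
        raise
else:
    # safe to update the file while holding the lock
    fcntl.flock(f, fcntl.LOCK_UN)
f.close()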


Oracle Solaris 11

Automated management of the Solaris Audit trail

The Solaris audit_binfile(7) module for auditd provides the ability to specify, by age (in hours, days, months, etc.) or by file size, when to close the currently active audit trail file and start a new one. This is intended to be used to ensure no single audit file grows too large. What it doesn't do is provide a mechanism to automatically age out old audit records from closed audit files after a period of time. Using the SMF periodic service feature (svc.periodicd) and the auditreduce(8) record selection and merging facilities, we can very easily build some automation.

For this example I'm going to assume that the period can be expressed in terms of days alone; that makes implementing this as an SMF periodic service, and the resulting conversion of that policy into arguments for auditreduce(8), nice and easy.

First create the method script in /lib/svc/method/site-audit-manage (making sure it is executable):

#!/bin/sh
/usr/sbin/auditreduce -D $(hostname) -a $(gdate -d "$1 days ago" +%Y%m%d)

This tells auditreduce to merge all of the closed audit files from N days ago into one new file, where N is specified as the first argument. Then we can use svcbundle(8) to turn that into a periodic service:

# svcbundle -i -s service-property=config:days:count:90 -s interval=month -s day_of_month=1 \
    -s start-method="/lib/svc/method/site-audit-manage %{config/days}" -s service-name=site/audit-manage

That creates and installs a new periodic SMF service that will run on the first day of the month and invoke the above method script with 90 as the number of days. If we later want to change the policy to 180 days, we can do that with the svccfg command:

# svccfg -s site/audit-manage setprop config/days = 180
# svccfg -s site/audit-manage refresh

Note that the method script uses the GNU coreutils gdate command to do the easy conversion of "N days ago". It is delivered in pkg:/file/gnu-coreutils; this package is installed by default with the solaris-large-server and solaris-desktop group packages, but not with solaris-small-server or solaris-minimal, so you may need to add it manually.
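Before relying on the schedule, it's worth making the script executable and checking the service is in place. A sketch - and note that running the method script by hand really does merge and remove the old audit files, so the 90 here should be whatever retention you actually chose:

# chmod +x /lib/svc/method/site-audit-manage
# /lib/svc/method/site-audit-manage 90
# svcs site/audit-manage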


Oracle Solaris 11

Python on Solaris

Our colleagues in the Oracle Linux organisation have a nice writeup of their support for Python, and how to get cx_Oracle installed so you can access an Oracle Database. I thought it would be useful to provide an equivalent guide for Oracle Solaris, so here it is.

Oracle Solaris has a long history of involvement with Python, starting at least 15 years ago (if not more!). Our Image Packaging System is about 94-95% Python, and we've got about 440k LoC (lines of code) written in Python directly in the ON consolidation. When you look at the Userland consolidation, however, that list grows considerably. From a practical point of view, you cannot install Oracle Solaris without using Python, nor can you have a supportable installation unless you have the system-delivered Python and a whole lot of packages in /usr/lib/python2.7/vendor-packages. We are well aware of the imminent end of support for Python 2.7, so work is underway on migrating not just our modules and commands, but also our tooling - so that we're not stuck when 2020 arrives.

So how does one find which libraries and modules we ship, without trawling through P5M files in the Userland gate? Simply search the Oracle Solaris IPS publisher, either using the web interface (at https://pkg.oracle.com) or using the command line:

$ pkg search -r \<python\>

which gives you a lot of package names. You'll notice that we version them via a suffix, so while you do get a few screenfuls of output, the list is about 423 packages long. Then to install one it's very simple:

# pkg install <name-of-package>

just like you would for any other package. I've made mention of this before, but I think it bears repeating: we make it very, very easy for you to install cx_Oracle and Instant Client so you can connect to the Oracle Database:

# pkg install -v cx_oracle
Packages to install: 7
Mediators to change: 1
Estimated space available: 22.67 GB
Estimated space to be consumed: 1.01 GB
Create boot environment: No
Create backup boot environment: No
Rebuild boot archive: No

Changed mediators:
  mediator instantclient:
    version: None -> 12.2 (vendor default)

Changed packages:
solaris
  consolidation/instantclient/instantclient-incorporation
    None -> 12.2.0.1.0-4
  database/oracle/instantclient-122
    None -> 12.2.0.1.0-4
  developer/oracle/odpi
    None -> 2.1.0-11.5.0.0.0.21.0
  library/python/cx_oracle
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-27
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-34
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-35
    None -> 6.1-11.5.0.0.0.21.0

Then it's a simple matter of firing up your preferred Python version, uttering import cx_Oracle, and away you go. Much like this:

>>> import cx_Oracle
>>> tns = """ORCLDNFS=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbkz)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcldnfs)))"""
>>> user = "admin"
>>> passwd = "welcome1"
>>> cnx = cx_Oracle.connect(user, passwd, tns)
>>> stmt = "select wait_class from v$system_event group by wait_class"
>>> curs = cnx.cursor()
>>> curs.execute(stmt).fetchall()
[('Concurrency',), ('User I/O',), ('System I/O',), ('Scheduler',), ('Configuration',), ('Other',), ('Application',), ('Queueing',), ('Idle',), ('Commit',), ('Network',)]

Simple!

Some notes on best practices for Python on Oracle Solaris

While we do aim to package and deliver useful packages, it does happen that perhaps there's a package you need which we don't ship, or of which we ship an older version.
How do you get past that problem in a fashion that doesn't affect your system installation? Unsurprisingly, the answer is not specific to Oracle Solaris: use Python virtual environments. While you could certainly use

$ pip install --user

you can still run afoul of incorrect versions of modules being loaded. Using a virtual environment is cheap, fits in very well with the concept of containerization, and makes the task of producing reproducible builds (aka deterministic compilation) much simpler. We use a similar concept when we're building ON, Solaris Userland, and Solaris IPS. For further information about Python packaging, please visit this tutorial, and review this article on Best Practices for Python dependency management, which I've found to be one of the best written explanations of what to do and why. If you have other questions about using Python in Oracle Solaris, please pop in to the Solaris Beta forum and let us know.
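Creating one takes seconds. A minimal sketch, assuming the bundled Python 3 and a hypothetical project directory; the venv path and package name are just for illustration:

$ python3 -m venv ~/venvs/myproject
$ source ~/venvs/myproject/bin/activate
(myproject) $ pip install cx_Oracle        # installs into the venv only, not /usr/lib
(myproject) $ deactivate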


Oracle Solaris 11

Solaris 11.4: 10 Good-to-Know Features, Enhancements or Changes

[Admins] Device Removal From a ZFS Storage Pool

In addition to removing hot spares, cache, and log devices, Solaris 11.4 has support for removal of top-level virtual data devices (vdevs) from a zpool, with the exception of RAID-Z pools. It is also possible to cancel an in-progress remove operation. This enhancement will come in handy especially when dealing with overprovisioned and/or misconfigured pools. Ref: ZFS: Removing Devices From a Storage Pool for examples.

[Developers & Admins] Bundled Software

Bundled software packages include Python 3.5, Oracle Instant Client 12.2, MySQL 5.7, Cython (C-Extensions for Python), the cx_Oracle Python module, the Go compiler, clang (C language family frontend for LLVM), and so on. cx_Oracle is a Python module that enables accessing Oracle Database 12c and 11g from Python applications. The Solaris packaged version 5.2 can be used with Python 2.7 and 3.4. Depending on the type of Solaris installation, not every software package may get installed by default, but the above mentioned packages can be installed from the package repository on demand. e.g.,

# pkg install pkg:/developer/golang-17
# go version
go version devel a30c3bd1a7fcc6a48acfb74936a19b4c Fri Dec 22 01:41:25 GMT 2017 solaris/sparc64

[Security] Isolating Applications with Sandboxes

Sandboxes are isolated environments where users can run applications protected from other processes on the system, while not giving those applications full access to the rest of the system. Put another way, application sandboxing is one way to protect users, applications, and systems by limiting the privileges of an application to its intended functionality, thereby reducing the risk of system compromise. Sandboxing joins Logical Domains (LDoms) and Zones in extending the isolation mechanisms available on Solaris.

Sandboxes are suitable for constraining both privileged and unprivileged applications. Temporary sandboxes can be created to execute untrusted processes. Only administrators with the Sandbox Management rights profile (privileged users) can create persistent, uniquely named sandboxes with specific security attributes. The unprivileged command sandbox can be used to create temporary or named sandboxes to execute applications in a restricted environment. The privileged command sandboxadm can be used to create and manage named sandboxes.

To install the security/sandboxing package, run:

# pkg install sandboxing
-OR-
# pkg install pkg:/security/sandboxing

Ref: Configuring Sandboxes for Project Isolation for details. Also see: Oracle Multitenant: Isolation in Oracle Database 12c Release 2 (12.2).

New Way to Find SRU Level

uname -v was enhanced to include the SRU level. Starting with the release of Solaris 11.4, uname -v reports the Solaris patch version in the format "11.<update>.<sru>.<build>.<patch>".

# uname -v
11.4.0.12.0

The above output translates to Solaris 11 Update 4 SRU 0 Build 12 Patch 0.

[Cloud] Service to Perform Initial Configuration of Guest Operating Systems

The cloudbase-init service on Solaris helps speed up guest VM deployment in a cloud infrastructure by performing initial configuration of the guest OS. Initial configuration tasks typically include user creation, password generation, networking configuration, SSH keys, and so on. The cloudbase-init package is not installed by default on Solaris 11.4. Install the package only into VM images that will be deployed in cloud environments, by running:

# pkg install cloudbase-init

Device Usage Information

The release of Solaris 11.4 makes it easy to identify the consumers of busy devices.
Busy devices are those that are opened or held by a process or kernel module. Having access to device usage information helps with certain hotplug or fault management tasks: for example, if a device is busy, it cannot be hotplugged, and knowing how a device is currently being used helps in resolving the related issues. On Solaris 11.4, prtconf -v shows the PIDs of the processes using different devices. e.g.,

# prtconf -v
...
    Device Minor Nodes:
        dev=(214,72)
            dev_path=/pci@300/pci@2/usb@0/hub@4/storage@2/disk@0,0:a
                spectype=blk type=minor nodetype=ddi_block:channel
                dev_link=/dev/dsk/c2t0d0s0
            dev_path=/pci@300/pci@2/usb@0/hub@4/storage@2/disk@0,0:a,raw
                spectype=chr type=minor nodetype=ddi_block:channel
                dev_link=/dev/rdsk/c2t0d0s0
        Device Minor Opened By:
            proc='fmd' pid=1516
                cmd='/usr/lib/fm/fmd/fmd'
                user='root[0]'
...

[Developers] Support for C11 (C Standard Revision)

Solaris 11.4 includes support for the C11 programming language standard: ISO/IEC 9899:2011 Information technology - Programming languages - C. Note that the C11 standard is not yet part of the Single UNIX Specification; Solaris 11.4 supports C11 in addition to C99 to provide customers with C11 support ahead of its inclusion in a future UNIX specification. That means developers can write C programs using the newest available C programming language standard on Solaris 11.4 (and later).

pfiles on a Core Dump

pfiles, a /proc debugging utility, has been enhanced in Solaris 11.4 to provide details about the file descriptors opened by a crashed process, in addition to the files opened by a live process. In other words, "pfiles core" now works.

Privileged Command Execution History

A new command, admhist, was included in Solaris 11.4 to show, in human readable form, successful system administration related commands which are likely to have modified the system state. This is similar to the shell builtin "history". e.g., the following command displays the system administration events that occurred on the system today:

# admhist -d "today" -v
...
2018-05-31 17:43:21.957-07:00 root@pitcher.dom.com cwd=/ /usr/bin/sparcv9/python2.7 /usr/bin/64/python2.7 /usr/bin/pkg -R /zonepool/p6128-z1/root/ --runid=12891 remote --ctlfd=8 --progfd=13
2018-05-31 17:43:21.959-07:00 root@pitcher.dom.com cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2018-05-31 17:43:22.413-07:00 root@pitcher.dom.com cwd=/ /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg install sandboxing
2018-05-31 17:43:22.415-07:00 root@pitcher.dom.com cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2018-05-31 18:59:52.821-07:00 root@pitcher.dom.com cwd=/root /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg search cloudbase-init
...

It is possible to narrow the results by date, time, zone, and audit tag. Ref: the admhist(8) man page.

[Developers] Process Control Library

Solaris 11.4 includes a new process control library, libproc, which provides a high-level interface to the features of the /proc interface. The library also provides access to information, such as symbol tables, that is useful when examining and controlling processes and threads.
A controlling process using libproc can typically:

- grab another process by suspending its execution
- examine the state of that process
- examine or modify the address space of the grabbed process
- make that process execute system calls on behalf of the controlling process, and
- release the grabbed process to continue execution

Ref: the libproc(3LIB) man page for an example and details.


Easily Migrate to Oracle Solaris 11 on New SPARC Hardware

We have been working very hard to make it easy for you to migrate your applications to newer, faster SPARC hardware and Oracle Solaris 11. This post provides an overview of the process and the tools that automate the migration.

Migration helps you modernize IT assets, lower infrastructure costs through consolidation, and improve performance. Oracle SPARC T8 servers, SPARC M8 servers, and Oracle SuperCluster M8 Engineered Systems serve as perfect consolidation platforms for migrating legacy workloads running on old systems. Applications migrated to faster hardware and Oracle Solaris 11 will automatically deliver better performance without requiring any architecture or code changes. You can migrate your operating environment and applications using both physical-to-virtual (P2V) and virtual-to-virtual (V2V) tools. The target environment can be configured either with Oracle VM Server for SPARC (LDoms) or with Oracle Solaris Zones on the new hardware. You can also migrate to the Dedicated Compute Classic - SPARC Model 300 in Oracle Compute Cloud and benefit from cloud capabilities.

Migration Options

In general there are two options for migration.

1) Lift and Shift of Applications to Oracle Solaris 11

The application on the source system is re-hosted on new SPARC hardware running Oracle Solaris 11. If your application is running on Oracle Solaris 10 on the source system, lift and shift of the application is preferred where possible, because a full Oracle Solaris 11 stack will perform better and is easier to manage. With the Oracle Solaris Binary Application Guarantee, you get the full benefits of OS modernization while still preserving your application investment.

2) Lift and Shift of the Whole System

The operating environment and application running on the system are lifted as-is and re-hosted in an LDom or Oracle Solaris Zone on target hardware running Oracle Solaris 11 in the control domain or global zone. If you are running Oracle Solaris 10 on the source system and your application has dependencies on Solaris 10 services, you can migrate either to an Oracle Solaris 10 Branded Zone or to an Oracle Solaris 10 guest domain on the target. Oracle Solaris 10 Branded Zones help you maintain an Oracle Solaris 10 environment for the application while taking advantage of Oracle Solaris 11 technologies in the global zone on the new SPARC hardware.

Migration Phases

There are three key phases in migration planning and execution.

1) Discovery

This includes discovery and assessment of existing physical and virtual machines, their current utilization levels, and the dependencies between systems hosting multi-tier applications or running highly available (HA) Oracle Solaris Cluster type configurations. This phase helps you identify the candidate systems for migration and the dependency order for performing the migrations.

2) Size the Target Environment

This requires capacity planning of the target environment to accommodate the incoming virtual machines. It takes into account the resource utilization levels on the source machine, the performance characteristics of the modern target hardware running Oracle Solaris 11, and the cost savings that result from higher performance.

3) Execute the Migration

Migration can be accomplished using P2V and V2V tools for LDoms and Oracle Solaris Zones. We are continually enhancing the migration tools and publishing supporting documentation.
As a first step in this exercise, we are releasing LDom V2V tools that help users migrate Oracle Solaris 10 or Oracle Solaris 11 guest domains running on old SPARC systems to modern hardware running Oracle Solaris 11 in the control domain. One of the migration scenarios is illustrated here.

Three commands are used to perform the LDom V2V migration (a sketch appears at the end of this post):

1) ovmtcreate runs on the source machine to create an Open Virtualization Appliance (OVA) file, called an OVM Template.
2) ovmtdeploy runs on the target machine to deploy the guest domain.
3) ovmtconfig runs on the target machine to configure the guest domain.

In the documented example use case, validation is performed using an Oracle Database workload. Database service health is monitored using Oracle Enterprise Manager (EM) Database Express.

Migration Resources

We have a Lift and Shift Guide that documents the end-to-end migration use case and a White Paper that provides an overview of the process. Both documents are available at the Lift and Shift Documentation Library. Stay tuned for more updates on the tools and documentation for LDom and Oracle Solaris Zone migrations, both for on-premises deployments and to the SPARC Model 300 in Oracle Compute Cloud. Oracle Advanced Customer Services (ACS) offers SPARC Solaris Migration services and can assist you with migration planning and execution using the tools developed by Solaris Engineering.
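To give a feel for the flow, an end-to-end run of the three tools might look roughly like the sketch below. The domain name, template path, and options are illustrative assumptions only; consult the ovmtcreate, ovmtdeploy, and ovmtconfig documentation for the exact syntax:

source# ovmtcreate -d ldg1 -o /export/templates/ldg1.ova    # package the source guest domain
target# ovmtdeploy -d ldg1 /export/templates/ldg1.ova       # deploy it on the target
target# ovmtconfig -d ldg1                                  # apply the guest configuration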


Scheduled Pool Scrubs in Oracle Solaris ZFS

Recommended best practices for protecting your data with ZFS include using ECC memory, configuring pool redundancy and hot spares, and always having current backups of critical data. Because storage devices can fail over time, pool scrubs are also recommended to identify and resolve data inconsistencies caused by failing devices or other issues. Data inconsistencies can occur over time, and the earlier they are identified and resolved, the higher your overall data availability. A routine pool scrub can identify disks with bad data blocks sooner, so problems can be resolved before there is a risk of multiple disk failures.

The Oracle Solaris 11.4 release includes a new pool property for scheduling a pool scrub, and also introduces a read-only property for monitoring when the last pool scrub occurred. Ongoing pool scrubs are recommended for routine pool maintenance; the general best practice is to scrub once per month or once per quarter for data center quality drives. This new feature enables you to more easily schedule routine pool scrubs.

If you install a new Solaris 11.4 system or upgrade your existing Solaris 11 system to Solaris 11.4, the new scrubinterval pool property is set to 30 days (1 month) by default. For example:

% zpool get scrubinterval export
NAME    PROPERTY       VALUE  SOURCE
export  scrubinterval  1m     default

If you have multiple pools on your system, the default scheduled scrubs are staggered so that not all scrubs begin at the same time. You can specify your own scrubinterval in days, weeks, or months. If scrubinterval is set to manual, this feature is disabled.

The read-only lastscrub property identifies the start time of the last scrub:

% zpool get lastscrub export
NAME    PROPERTY   VALUE   SOURCE
export  lastscrub  Apr_03  local

A pool scrub runs in the background and at a low priority. When a scrub is scheduled using this feature, a best effort is made not to impact an existing scrub or resilver operation, and the scheduled scrub might be cancelled if those operations are already running. Any running scrub (scheduled or manually started) can be cancelled by using the following command:

# zpool scrub -s tank
# zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub canceled on Mon Apr 16 13:23:00 2018
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0

errors: No known data errors

In summary, pool scrubs are an important part of routine pool maintenance to identify and repair data inconsistencies. ZFS scheduled scrubs provide a way to automate pool scrubs in your environment.
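Changing the schedule is just a property set away. A sketch, where the two-week value and the pool name are illustrative (the value syntax follows the "1m" shown above):

# zpool set scrubinterval=2w export
# zpool get scrubinterval export
# zpool set scrubinterval=manual export    # disables scheduled scrubs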


Solaris 11.4: Three Zones Related Changes in 3 Minutes or Less

[ 1 ] Automatic Live Migration of Kernel Zones Using the sysadm Utility

Live migrate (evacuate) all kernel zones from a host system onto other systems, temporarily or permanently, with the help of the new sysadm(8) utility. In addition, it is possible to evacuate all zones, including kernel zones that are not running and native Solaris zones in the installed state.

If the target host (that is, the host the zone will be migrated to) meets all evacuation requirements, set it as the destination host for one or more migrating kernel zones by setting the SMF service property evacuation/target:

# svccfg -s svc:/system/zones/zone:<migrating-zone> setprop evacuation/target=ssh://<dest-host>

Put the source host in maintenance mode using the sysadm utility to prevent non-running zones from attaching, booting, or migrating in from other hosts:

# sysadm maintain <options>

Migrate the zones to their destination host(s) by running sysadm's evacuate subcommand:

# sysadm evacuate <options>

Complete the system maintenance work and end the maintenance mode on the source host:

# sysadm maintain -e

Optionally, bring the evacuated zones back to the source host. Please refer to Evacuating Oracle Solaris Kernel Zones for detailed steps.

[ 2 ] Moving Solaris Zones across Different Storage URIs

Starting with the release of Solaris 11.4, zoneadm's move subcommand can be used to change the zonepath without moving the Solaris zone installation. In addition, the same command can be used to move a zone from a local file system to shared storage, from shared storage to a local file system, and from one shared storage location to another (a sketch appears at the end of this post).

[ 3 ] ZFS Dataset Live Zone Reconfiguration

Live Zone Reconfiguration (LZR) is the ability to make changes to a running Solaris native zone configuration, permanently or temporarily; in other words, LZR avoids rebooting the target zone. Solaris 11.3 already has support for reconfiguring resources such as dedicated CPUs, capped memory, and automatic networks (anets). Solaris 11.4 extends LZR support to ZFS datasets. With the release of Solaris 11.4, privileged users can add or remove ZFS datasets dynamically to and from a Solaris native zone without the need to reboot the zone.
e.g.,

# zoneadm list -cv
  ID NAME     STATUS   PATH                BRAND    IP
   0 global   running  /                   solaris  shared
   1 tstzone  running  /zonepool/tstzone   solaris  excl

Add a ZFS filesystem to the running zone, tstzone:

# zfs create zonepool/testfs
# zonecfg -z tstzone "info dataset"
# zonecfg -z tstzone "add dataset; set name=zonepool/testfs; end; verify; commit"
# zonecfg -z tstzone "info dataset"
dataset:
        name: zonepool/testfs
        alias: testfs
# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Adding dataset name=zonepool/testfs
zone 'tstzone': Applying the changes
# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist
# zlogin tstzone "zpool import testfs"
# zlogin tstzone "zfs list testfs"
NAME     USED  AVAIL  REFER  MOUNTPOINT
testfs    31K  1.63T    31K  /testfs

Remove a ZFS filesystem from the running zone, tstzone:

# zonecfg -z tstzone "remove dataset name=zonepool/testfs; verify; commit"
# zonecfg -z tstzone "info dataset"
# zlogin tstzone "zpool export testfs"
# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Removing dataset name=zonepool/testfs
zone 'tstzone': Applying the changes
# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist
# zfs destroy zonepool/testfs

A summary of LZR support for resources and properties in native and kernel zones can be found on this page.
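As promised in [ 2 ], a sketch of the move subcommand in its simplest form, changing only the zonepath. The zone name and path are hypothetical, and the full option set, including moves between shared storage URIs, is documented in zoneadm(8):

# zoneadm -z myzone move /pool2/zones/myzone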


Shared Zone State in Oracle Solaris 11.4

Overview

Since Oracle Solaris 11.4, the state of zones on the system is kept in a shared database in /var/share/zones/, meaning a single database is accessed from all boot environments (BEs). Up until Oracle Solaris 11.3, in contrast, each BE kept its own local copy in /etc/zones/index, and the individual copies were never synced across BEs. This article provides some history, explains why we moved to the shared zone state database, and describes what it means for administrators when updating from 11.3 to 11.4.

Keeping Zone State in 11.3

In Oracle Solaris 11.3, the state of zones is associated separately with every global zone BE, in a local text database /etc/zones/index. The zoneadm and zonecfg commands then operate on a specific copy based on which BE is booted. In a world of systems being migrated between hosts, having a local zone state database in every BE constitutes a problem if we, for example, update to a new BE and then migrate a zone away before booting into the newly updated BE. When we eventually boot the new BE, the zone will end up in the unavailable state (the system recognizes that the shared storage is already in use and puts the zone into that state), suggesting that it could possibly be attached. However, as the zone was already migrated, an admin expects the zone to be in the configured state instead.

The 11.3 implementation may also lead to a situation where all BEs on a system represent the same Solaris instance (see below for the definition of a Solaris instance), and yet every BE can be linked to a non-global zone BE (ZBE) for a zone of the same name, with the ZBE containing an unrelated Solaris instance. Such a situation happens on 11.3 if we reinstall a chosen non-global zone in each BE.

Solaris Instance

A Solaris instance represents a group of related IPS images. Such a group is created when a system is installed: one installs a system from the media or an install server, via "zoneadm install" or "zoneadm clone", or from a clone archive. Subsequent system updates add new IPS images to the same image group, which represents the same Solaris instance.

Uninstalling a Zone in 11.3

In 11.3, uninstalling a non-global zone (i.e., the native solaris(5) branded zone) means deleting the ZBEs linked to the presently booted BE and updating the state of the zone in /etc/zones/index. Often only one ZBE is deleted. ZBEs linked to other BEs are not affected, and destroying a BE only destroys the ZBE(s) linked to that BE. Presently, the only supported way to completely uninstall a non-global zone from a system is to boot each BE and uninstall the zone from there.

For kernel zones (KZs) installed on ZVOLs, each BE that has the zone in its index file is linked to the ZVOL via a <dataset>.gzbe:<gzbe_uuid> attribute. Uninstalling a kernel zone on a ZVOL removes that BE-specific attribute, and only if no other BE is linked to the ZVOL is the ZVOL deleted during the KZ uninstall or BE destroy. In 11.3, the only supported way to completely uninstall a KZ from a system was to boot every BE and uninstall the KZ from there.

What can happen when one reinstalls zones during the normal system life cycle is depicted in the following pictures. Each color represents a unique Solaris instance. The picture below shows the usual situation with solaris(5) branded zones: after the system is installed, two zones are installed into BE-1. Following that, on every update, the zones are updated as part of the normal pkg update process.
The next picture, though, shows what happens if zones are reinstalled during the normal life cycle of the system. In BE-3, Zone X was reinstalled, while Zone Y was reinstalled in BE-2, BE-3, and BE-4 (but not in BE-5). That leads to a situation where there are two different instances of Zone X on the system, and four different Zone Y instances. Which zone instance is used depends on which BE the system is booted into. Note that the system itself and any zone always represent different Solaris instances.

Undesirable Impact of the 11.3 Behavior

The described behavior could lead to undesired situations in 11.3:

With multiple ZBEs present, if a non-global zone is reinstalled, we end up with ZBEs under the zone's rpool/VARSHARE dataset representing Solaris instances that are unrelated and yet share the same zone name. That leads to the possible migration problems mentioned in the first section.

If a kernel zone is used in multiple BEs and the KZ is uninstalled and then re-installed, the installation fails with an error message that the ZVOL is in use in other BEs.

The only supported way to uninstall a zone is to boot into every BE on the system and uninstall the zone from there. With multiple BEs, that is definitely a suboptimal solution.

Sharing Zone State across BEs in 11.4

As already stated in the Overview section, in Oracle Solaris 11.4 the system shares zone state across all BEs. The shared zone state resolves the issues mentioned in the previous sections. In most situations, nothing changes for users and administrators, as there were no changes in the existing interfaces (some were extended, though). The major implementation change was to move the local, line-oriented textual database /etc/zones/index to the shared directory /var/share/zones/ and store it in JSON format. As before, however, the location and format of the database are just an implementation detail and are not part of any supported interface. To be precise, we did EOF the zoneadm -R <root> mark interface as part of the Shared Zone State project; that interface was of very little use already, and then only in some rare maintenance situations. Also, let us be clear that all 11.3 systems use and will continue to use the local zone state index /etc/zones/index; we have no plans to update 11.3 to use the shared zone state database.

Changes between 11.3 and 11.4 with Regard to Keeping the Zone State

With the introduction of the shared zone state, you can run into situations that before were either just not possible or could only have been created via some unsupported behavior.

Creating a new zone on top of an existing configuration

When deleting a zone, the zone record is removed from the shared index database and the zone configuration is deleted from the present BE. Mounting all other non-active BEs and removing the configuration files from there would be quite time consuming, so those files are left behind. That means that if one later boots into one of those previously non-active BEs and tries to create the zone there (a zone which does not exist, as we removed it from the shared database before), zonecfg may hit an already existing configuration file. We extended the zonecfg interface so that you have a choice to overwrite the configuration or reuse it:

root# zonecfg -z zone1 create
Zone zone1 does not exist but its configuration file does.
To reuse it, use -r; create anyway to overwrite it (y/[n])?
n
root# zonecfg -z zone1 create -r
Zone zone1 does not exist but its configuration file does; do you want to reuse it (y/[n])? y

Existing Zone without a Configuration

If an admin creates a zone, the zone record is put into the shared index database. If the system is later booted into a BE that existed before the zone was created, that BE will not have the zone configuration file (unless one was left behind before, obviously), but the zone will be known, as the zone state database is shared across all 11.4 BEs. In that case, the zone state in such a BE is reported as incomplete (note that not even the zone brand is known, as that is also part of a zone configuration).

root# zoneadm list -cv
  ID NAME    STATUS      PATH  BRAND    IP
   0 global  running     /     solaris  shared
   - zone1   incomplete  -     -        -

When listing the auxiliary state, you will see that the zone has no configuration:

root# zoneadm list -sc
NAME    STATUS      AUXILIARY STATE
global  running
zone1   incomplete  no-config

If you want to remove such a zone, the -F option is needed. As removing the zone makes it invisible from all BEs, possibly including those with a usable zone configuration, we introduced a new option to make sure an administrator will not accidentally remove such zones.

root# zonecfg -z newzone delete
Use -F to delete an existing zone without a configuration file.
root# zonecfg -z newzone delete -F
root# zoneadm -z newzone list
zoneadm: zone 'newzone': No such zone exists

Updating to 11.4 while a shared zone state database already exists

When the system is updated from 11.3 to 11.4 for the first time, the shared zone database is created from the local /etc/zones/index on the first boot into the 11.4 BE. A new svc:/system/zones-upgrade:default service instance takes care of that. If the system is then brought back to 11.3 and updated to 11.4 again, the system (i.e., the service instance mentioned above) will find an existing shared index when first booting into this new 11.4 BE, and if the two indexes differ, there is a conflict that must be taken care of.

In order not to add more complexity to the update process, the rule is that on every update from 11.3 to 11.4, any existing shared zone index database /var/share/zones/index.json is overwritten with data converted from the /etc/zones/index file of the 11.3 BE the system was updated from, if the data differs. If there were no changes to zones between the 11.4 updates, either in any older 11.4 BEs or in the 11.3 BE we are again updating to 11.4 from, there are no changes in the shared zone index database, and the existing shared database needs no overwrite. However, if there are changes to the zones, e.g., a zone was created, installed, uninstalled, or detached, the old shared index database is saved on boot, the new index is installed, and the service instance svc:/system/zones-upgrade:default is put into the degraded state.
As the output of svcs -xv in 11.4 now reports services in the degraded state as well, that serves as a hint to the system administrator to go and check the service log:

root# beadm list
BE           Flags Mountpoint Space  Policy Created
--           ----- ---------- -----  ------ -------
s11_4_01     -     -          2.93G  static 2018-02-28 08:59
s11u3_sru24  -     -          12.24M static 2017-10-06 13:37
s11u3_sru31  NR    /          23.65G static 2018-04-24 02:45
root# zonecfg -z newzone create
root# pkg update entire@latest
root# reboot -f

root# beadm list
BE Name      Flags Mountpoint Space  Policy Created
-----------  ----- ---------- ------ ------ ----------------
s11_4_01     -     -          2.25G  static 2018-02-28 08:59
s11u3_sru24  -     -          12.24M static 2017-10-06 13:37
s11u3_sru31  -     -          2.36G  static 2018-04-24 02:45
s11_4_03     NR    /          12.93G static 2018-04-24 04:24
root# svcs -xv
svc:/system/zones-upgrade:default (Zone config upgrade after first boot)
 State: degraded since April 24, 2018 at 04:32:58 AM PDT
Reason: Degraded by service method: "Unexpected situation during zone index conversion to JSON."
   See: http://support.oracle.com/msg/SMF-8000-VE
   See: /var/svc/log/system-zones-upgrade:default.log
Impact: Some functionality provided by the service may be unavailable.
root# tail /var/svc/log/system-zones-upgrade:default.log
[ 2018 Apr 24 04:32:50 Enabled. ]
[ 2018 Apr 24 04:32:56 Executing start method ("/lib/svc/method/svc-zones-upgrade"). ]
Converting /etc/zones/index to /var/share/zones/index.json.
Newly generated /var/share/zones/index.json differs from the previously existing one.
Forcing the degraded state.
Please compare current /var/share/zones/index.json with the original one saved as
/var/share/zones/index.json.backup.2018-04-24--04-32-57, then clear the service.
Moving /etc/zones/index to /etc/zones/index.old-format.
Creating old format skeleton /etc/zones/index.
[ 2018 Apr 24 04:32:58 Method "start" exited with status 103. ]
[ 2018 Apr 24 04:32:58 "start" method requested degraded state: "Unexpected situation during zone index conversion to JSON" ]
root# svcadm clear svc:/system/zones-upgrade:default
root# svcs svc:/system/zones-upgrade:default
STATE   STIME   FMRI
online  4:45:58 svc:/system/zones-upgrade:default

If you diff(1)'ed the backed-up JSON index and the present one, you would see that the zone newzone was added. The new index could also be missing some zones that were created before. Based on the index diff output, an administrator can create or remove zones on the system as necessary, using the standard zonecfg(8) command, possibly reusing existing configurations as shown above. Also note that the degraded state here did not mean any degradation of functionality; its sole purpose was to notify the admin about the situation.

Do Not Use BEs as a Backup Technology for Zones

The previous implementation in 11.3 allowed for using BEs with linked ZBEs as a backup solution for zones. That is, if a zone was uninstalled in the current BE, one could usually boot into an older BE or, in the case of non-global zones, try to attach and update another existing ZBE from the current BE using the attach -z <ZBE> -u/-U subcommand. With the current implementation, uninstalling a zone means uninstalling the zone from the system, and that means uninstalling all ZBEs (or a ZVOL in the case of kernel zones) as well. If you uninstall a zone in 11.4, it is gone. If an admin used the previous implementation as a convenient backup solution, we recommend using archiveadm instead, whose functionality also provides for backing up zones.
Future Enhancements

An obvious future enhancement would be shared zone configurations across BEs. However, it is not on our short term plan at this point, and we cannot guarantee this functionality will ever be implemented. One thing is clear - it would be a more challenging task than the shared zone state.


Solaris 10 Extended Support Patches & Patchsets Released!

On Tuesday, April 17, we released the first batch of Solaris 10 patches and patchsets under Solaris 10 Extended Support. There were a total of 24 Solaris 10 patches, including kernel updates, and 4 patchsets released on MOS! Solaris 10 Extended Support will run through January 2021. Scott Lynn put together a very informative blog on Solaris 10 Extended Support detailing the benefits customers get by purchasing Extended Support for Solaris 10 - see https://blogs.oracle.com/solaris/oracle-solaris-10-support-explained.

Those of you that have taken advantage of our previous Extended Support offerings for Solaris 8 and Solaris 9 will notice that we've changed things around a little with Solaris 10 Extended Support. Previously, we did not publish any updates to the Solaris 10 Recommended Patchsets during the Extended Support period. This meant that the Recommended Patchsets remained available to all customers with Premier Operating Systems support, as all the patches the patchsets contained had Operating Systems entitlement requirements.

Moving forward with Solaris 10 Extended Support, the decision has been made to continue to update the Recommended Patchsets through the Solaris 10 Extended Support period. This means customers that purchase Solaris 10 Extended Support get the benefit of continued Recommended Patchset updates, as patches that meet the criteria for inclusion in the patchsets are released. During the Solaris 10 Extended Support period, the updates to the Recommended Patchsets will contain patches that require a Solaris 10 Extended Support contract, so the Solaris 10 Recommended Patchsets will also require a Solaris 10 Extended Support contract during this period.

For customers that do not wish to avail of Extended Support and would like to access the last Recommended Patchsets created prior to the beginning of Extended Support for Solaris 10, the January 2018 Critical Patch Updates (CPUs) for Solaris 10 will remain available to those with Premier Operating System Support. The CPU Patchsets are rebranded versions of the Recommended Patchset on the CPU dates: the patches included in the CPUs are identical to the Recommended Patchset released on those CPU dates, but the CPU READMEs are updated to reflect their use as CPU resources. CPU patchsets are archived and always available via MOS at later dates, so that customers can easily align to their desired CPU baseline at any time. A further benefit that only Solaris 10 Extended Support customers will receive is access to newly created CPU Patchsets for Solaris 10 through the Extended Support period.
The following table provides a quick reference to the recent Solaris 10 patchsets that have been released, including the support contract required to access them. Each patchset entry on MOS links to its Patchset Details, README, and Download pages.

  Patchset Name                                  Support Contract Required
  Recommended OS Patchset for Solaris 10 SPARC   Extended Support
  Recommended OS Patchset for Solaris 10 x86     Extended Support
  CPU OS Patchset 2018/04 Solaris 10 SPARC       Extended Support
  CPU OS Patchset 2018/04 Solaris 10 x86         Extended Support
  CPU OS Patchset 2018/01 Solaris 10 SPARC       Operating Systems Support
  CPU OS Patchset 2018/01 Solaris 10 x86         Operating Systems Support

Please reach out to your local sales representative if you wish to get more information on the benefits of purchasing Extended Support for Solaris 10.


Oracle Solaris ZFS Device Removal

At long last, we provide the ability to remove a top-level VDEV from a ZFS storage pool in the upcoming Solaris 11.4 Beta refresh release. For many years, our recommendation was to create a pool based on current capacity requirements and then grow the pool to meet increasing capacity needs, by adding VDEVs or by replacing smaller LUNs with larger LUNs. It is trivial to add capacity or replace smaller LUNs with larger LUNs, sometimes with just one simple command. The simplicity of ZFS is one of its great strengths!

I still recommend the practice of creating a pool that meets current capacity requirements and then adding capacity when needed. But if you need to repurpose pool devices in an over-provisioned pool, or if you accidentally misconfigure a pool device, you now have the flexibility to resolve these scenarios.

Review the following practical considerations when using this new feature, which should be the exception rather than the rule for pool configuration on production systems:

A virtual (pseudo) device is created to hold the data moved off the removed pool devices, so the pool must have enough space to absorb the creation of the pseudo device.
Only top-level VDEVs can be removed from mirrored or RAID-Z pools.
Individual devices can be removed from striped pools.
Pool device misconfigurations can be corrected.

A few implementation details, in case you were wondering:

No additional steps are needed to remap the removed devices.
Data from the removed devices is reallocated to the remaining devices, but this is not a way to rebalance all data across pool devices.
Reads of the reallocated data are served from the pseudo device until those blocks are freed.
Some levels of indirection are needed to support this operation, but they should not impact performance or increase memory requirements.

See the examples below.

Repurpose Pool Devices

The following pool, tank, has low space consumption, so one VDEV is removed.

# zpool list tank
NAME  SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank  928G  28.1G  900G   3%  1.00x  ONLINE  -

# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0

errors: No known data errors

# zpool remove tank mirror-1
# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices are being removed.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Sun Apr 15 20:58:45 2018
        28.1G scanned
        3.07G resilvered at 40.9M/s, 21.83% done, 4m35s to go
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror-1  REMOVING     0     0     0
            c1t7d0  REMOVING     0     0     0
            c5t3d0  REMOVING     0     0     0

errors: No known data errors

Run the zpool iostat command to verify that data is being written to the remaining VDEV.
# zpool iostat -v tank 5
                 capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
tank           28.1G   900G      9    182   932K  21.3M
  mirror-0     14.1G   450G      1    182  7.90K  21.3M
    c3t2d0         -      -      0     28  4.79K  21.3M
    c4t2d0         -      -      0     28  3.92K  21.3M
  mirror-1         -      -      8    179   924K  21.2M
    c1t7d0         -      -      1     28   495K  21.2M
    c5t3d0         -      -      1     28   431K  21.2M
-------------  -----  -----  -----  -----  -----  -----

                 capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
tank           28.1G   900G      0    967      0  60.0M
  mirror-0     14.1G   450G      0    967      0  60.0M
    c3t2d0         -      -      0     67      0  60.0M
    c4t2d0         -      -      0     68      0  60.4M
  mirror-1         -      -      0      0      0      0
    c1t7d0         -      -      0      0      0      0
    c5t3d0         -      -      0      0      0      0
-------------  -----  -----  -----  -----  -----  -----

Misconfigured Pool Device

In this case, a device that was intended to be added as a cache device was instead added as a regular single device. The problem is identified and resolved.

# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0

errors: No known data errors

# zpool add rzpool c3t3d0
vdev verification failed: use -f to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
Unable to build pool from specified devices: invalid vdev configuration

# zpool add -f rzpool c3t3d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
          c3t3d0    ONLINE       0     0     0

errors: No known data errors

# zpool remove rzpool c3t3d0
# zpool add rzpool cache c3t3d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: resilvered 0 in 1s with 0 errors on Sun Apr 15 21:09:35 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
        cache
          c3t3d0    ONLINE       0     0     0

In summary, Solaris 11.4 includes a handy new option for repurposing pool devices and resolving pool misconfiguration errors.


Oracle Solaris 11.4 Open Beta Refreshed!

On January 30, 2018, we released the Oracle Solaris 11.4 Open Beta. It has been quite successful. Today, we are announcing that we've refreshed the 11.4 Open Beta. This refresh includes new capabilities and additional bug fixes (over 280 of them) as we drive to the General Availability release of Oracle Solaris 11.4. Some new features in this release are:

ZFS Device Removal
ZFS Scheduled Scrub
SMB 3.1.1
Oracle Solaris Cluster Compliance checking
ssh-ldap-getpubkey

Also, the Oracle Solaris 11.4 Beta refresh includes the changes to mitigate CVE-2017-5753, otherwise known as Spectre Variant 1, for Firefox, the NVIDIA graphics driver, and the Solaris kernel (see the MOS docs on SPARC and x86 for more information). Additionally, new bundled software includes gcc 7.3, libidn2, and qpdf 7.0.0, plus more than 45 new bundled software versions.

Before I go further, I have to say: The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

I want to take a few minutes to address some questions I've been getting about the upcoming release of Oracle Solaris 11.4. Oracle Solaris 11.4 runs on Oracle SPARC and x86 systems released since 2011, but not on certain older systems that were supported in Solaris 11.3 and earlier. Specifically, systems not supported in Oracle Solaris 11.4 include systems based on the SPARC T1, T2, and T3 processors, and the SPARC64 VII+ and earlier based "Sun4u" systems such as the SPARC Enterprise M4000. To allow customers time to migrate to newer hardware, we intend to provide critical security fixes as necessary on top of the last SRU delivered for 11.3 for the following year. These updates will not provide the same level of content as regular SRUs and are intended solely as a transition vehicle. Customers using newer hardware are encouraged to update to Oracle Solaris 11.4 and subsequent Oracle Solaris 11 SRUs as soon as practical.

Another question I've been getting quite a bit is about the release frequency and strategy for Oracle Solaris 11. After much discussion, internally and externally with you, our customers, about our current continuous delivery release strategy, we are going forward with our current strategy, with some minor changes:

Oracle Solaris 11 update releases will be released every year, in approximately the first quarter of our fiscal year (that's June, July, August for most people).

New features will be made available as they are ready to ship in whatever is the next available and appropriate delivery vehicle. This could be an SRU, a CPU, or a new release.

Oracle Solaris 11 update releases will contain the following content: all new features previously released in the SRUs between the releases, any new features that are ready at the time of release, Free and Open Source Software updates (i.e., new versions of FOSS), and End of Features and End of Life hardware notices.

This should make our releases more predictable, maintain the reliability you've come to depend on, and provide new features to you rapidly, allowing you to test and deploy them faster.
Oracle Solaris 11.4 is secure, simple, and cloud-ready, and it is compatible with all your existing Oracle Solaris 11.3 and earlier applications. Go give the latest beta a try. You can download it here.
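If you're already running the January beta, the refresh arrives through the normal packaging path. A minimal sketch (the boot environment name is just an illustrative choice, and this assumes your solaris publisher already points at the beta repository):

# pkg update --be-name=11.4-beta-refresh entire@latest

This updates into a fresh boot environment, so you can fall back to the previous beta if needed.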

Oracle Solaris 11.3 SRU 31

We've just released Oracle Solaris 11.3 SRU 31. This is the April Critical Patch Update and contains some important security fixes as well as enhancements to Oracle Solaris. SRU31 is now available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support.

The following components have been updated to address security issues:

The Solaris kernel has been updated to mitigate CVE-2017-5753, aka Spectre v1. See Oracle Solaris on SPARC - Spectre (CVE-2017-5753, CVE-2017-5715) and Meltdown (CVE-2017-5754) Vulnerabilities (Doc ID 2349278.1) and Oracle Solaris on x86 - Spectre (CVE-2017-5753, CVE-2017-5715) and Meltdown (CVE-2017-5754) Vulnerabilities (Doc ID 2383531.1) for more information.
Apache Tomcat has been updated to 8.5.28
Firefox has been updated to 52.7.3esr
Thunderbird has been updated to 52.7.0
unzip has been updated to 6.1 beta c23
NTP has been updated to 4.2.8p11
TigerVNC has been updated to 1.7.1
PHP has been updated to 5.6.34 and 7.1.15
MySQL has been updated to 5.5.59 and 5.6.39
irssi has been updated to 1.0.7

Security fixes are also included for quagga, gimp, GNOME remote desktop, vinagre, and NSS.

These enhancements have also been added:

Oracle VM Server for SPARC has been updated to version 3.5.0.2. For more information, including What's New, Bug Fixes, and Known Issues, see the Oracle VM Server for SPARC 3.5.0.2 Release Notes.
The TigerVNC update introduces the new fltk component in Oracle Solaris 11.3
libidn has been updated to 2.0.4
pam_list now supports wildcards and comment lines
The Java 8, Java 7, and Java 6 packages have been updated. See Note 5 for the location and details on how to update Java. For more information and bugs fixed, see the Java 8 Update 172 Release Notes, Java 7 Update 181 Release Notes, and Java 6 Update 191 Release Notes.

Full details of this SRU can be found in My Oracle Support Doc 2385753.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).
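As a quick sketch of applying the SRU from the support repository (the boot environment name is an illustrative choice): first check the currently installed SRU level, then update into a fresh boot environment.

# pkg info entire | grep Version
# pkg update --be-name=s11.3-sru31 entire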

One SMF Service to Monitor the Rest!

Contributed by: Thejaswini Kodavur

Have you ever wished for a single service that monitors all your other services and makes administration easier? "SMF goal services", a new feature of Oracle Solaris 11.4, provides a single, unambiguous, and well-defined point at which you can consider the system up and running. You choose your mission-critical services and link them together into a single SMF service in one step. This SMF service is called a goal service, and it can be used to monitor the health of your system at boot. This makes administration much easier, as monitoring each of the services individually is no longer required!

There are two ways in which you can make your services part of a goal service.

1. Using the supplied goal service

By default, an Oracle Solaris 11.4 system provides a goal service called "svc:/milestone/goals:default". This goal service has a dependency on the service "svc:/milestone/multi-user-server:default" by default. You can set your mission-critical service on the default goal service as below:

# svcadm goals system/my-critical-service-1:default

Note: This is a set/clear interface, so the above command clears the existing dependency on "svc:/milestone/multi-user-server:default". To set the dependency on both services, use:

# svcadm goals svc:/milestone/multi-user-server:default \
system/my-critical-service-1:default

2. Creating your own goal service

Oracle Solaris 11.4 also allows you to create your own goal service and set your mission-critical services as its dependencies. Follow the steps below to create and use a goal service.

Create a normal SMF service using svcbundle(8), svccfg(8), and svcadm(8):

# svcbundle -o new-gs.xml -s service-name=milestone/new-gs -s start-method=":true"
# cp new-gs.xml /lib/svc/manifest/site/new-gs.xml
# svccfg validate /lib/svc/manifest/site/new-gs.xml
# svcadm restart svc:/system/manifest-import
# svcs new-gs
STATE          STIME    FMRI
online          6:03:36 svc:/milestone/new-gs:default

To turn this SMF service into a goal service, set the property general/goal-service=true:

# svcadm disable svc:/milestone/new-gs:default
# svccfg -s svc:/milestone/new-gs:default setprop general/goal-service=true
# svcadm enable svc:/milestone/new-gs:default

Now you can set dependencies on the newly created goal service using the -g option:

# svcadm goals -g svc:/milestone/new-gs:default system/critical-service-1:default \
system/critical-service-2:default

Note: If you omit the -g option, the dependency is set on the system-provided default goal service, i.e. svc:/milestone/multi-user-server:default.

At system boot, if one of your critical services does not come online, the goal service goes into the maintenance state:

# svcs -d milestone/new-gs
STATE          STIME    FMRI
disabled        5:54:31 svc:/system/critical-service-2:default
online         Feb_19   svc:/system/critical-service-1:default
# svcs milestone/new-gs
STATE          STIME    FMRI
maintenance     5:54:30 svc:/milestone/new-gs:default

Note: You can use the -d option of svcs(1) to check the dependencies of your goal service.

Once all of the dependent services come online, your goal service also comes online. For a goal service to be online, all of its dependencies must be satisfied.

# svcs -d milestone/new-gs
STATE          STIME    FMRI
online         Feb_19   svc:/system/critical-service-1:default
online          5:56:39 svc:/system/critical-service-2:default
# svcs milestone/new-gs
STATE          STIME    FMRI
online          5:56:39 svc:/milestone/new-gs:default

Note: For more information, refer to "Goal Services" in smf(7) and the goals subcommand in svcadm(8).

The goal service "milestone/new-gs" is your new single SMF service with which you can monitor all of your other mission-critical services. The goal service acts as the headquarters that monitors the rest of your services.
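As a quick sanity check, you can also confirm that a service is marked as a goal service by reading the property back with svcprop (a small sketch, using the service created above):

# svcprop -p general/goal-service milestone/new-gs:default
true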

Oracle Solaris 11

Oracle Solaris 11.4 beta progress on LP64 conversion

Back in 2014, I posted Moving Oracle Solaris to LP64 bit by bit describing work we were doing then. In 2015, I provided an update covering Oracle Solaris 11.3 progress on LP64 conversion. Now that we've released the Oracle Solaris 11.4 Beta to the public, you can see that the ratio of ILP32 to LP64 programs in /usr/bin and /usr/sbin in the full Oracle Solaris package repositories has shifted dramatically in 11.4:

Release        32-bit        64-bit        Total
Solaris 11.0   1707 (92%)     144 (8%)     1851
Solaris 11.1   1723 (92%)     150 (8%)     1873
Solaris 11.2   1652 (86%)     271 (14%)    1923
Solaris 11.3   1603 (80%)     379 (19%)    1982
Solaris 11.4    169 (9%)     1769 (91%)    1938

That's over 70% more of the commands shipped in the OS that can use ADI to stop buffer overflows on SPARC, take advantage of more registers on x86, have more address space available for ASLR to choose from, are ready for timestamps and dates past 2038, and receive the other benefits of 64-bit software described in previous blogs.

And while we continue to provide more features for 64-bit programs, such as making ADI support available in the libc malloc, we aren't abandoning 32-bit programs either. A change that just missed our first beta release, but is coming in a later refresh of our public beta, will make it easier for 32-bit programs to use file descriptors greater than 255 with stdio calls, relaxing a long-held limitation of the 32-bit Solaris ABI.

This work was years in the making, and over 180 engineers in the Solaris organization contributed to it, plus even more who came before to make all the FOSS projects we ship and the libraries we provide 64-bit ready so we could make this happen. We thank all of them for making it possible to bring this to you now.
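If you're curious about the split on your own system, a rough sketch follows; it only counts what is installed locally, so the numbers won't match the repository-wide table above:

$ file /usr/bin/* /usr/sbin/* | grep -c 'ELF 64-bit'
$ file /usr/bin/* /usr/sbin/* | grep -c 'ELF 32-bit'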

Recent Blogs Round-Up

We're seeing blogs about the Solaris 11.4 Beta show up through different channels like Twitter and Facebook, which means you might have missed some of these, so we thought it would be good to do a round-up. This also means you might have already seen some of them, but hopefully there are some nice new ones among them.

Glenn Faden wrote about:
Authenticated Rights Profiles
Sharing Sensitive Data
Monitoring Access to Sensitive Data
OpenLDAP Support
Using the Oracle Solaris Account Manager BUI
Protecting Sensitive Data

Cindy Swearingen wrote about the Data Management Features.

Enrico Perla wrote about libc:malloc meets ADIHEAP.

Thorsten Mühlmann wrote a few nice ones recently:
Solaris is dead – long lives Solaris
Privileged Command Execution History Reporting
Per File Auditing
Improved Debugging with pfiles Enhancement
Monitoring File System Latency with fsstat

Eli Kleinman wrote a pretty complete set of blogs on Analytics/StatsStore:
Part 1 on how to configure analytics
Part 2 on how to configure the client capture stat process
Part 3 on how to publish the client captured stats
Part 4 on configuring and accessing the Web Dashboard/UI
Part 5 on capturing Solaris 11.4 Analytics by using the Remote Administration Daemon (RAD)

Andrew Watkins wrote about Setting up Sendmail/SASL to handle SMTP AUTH.

Marcel Hofstetter wrote about:
Fast (asynchronous) ZFS destroy
How JomaSoft VDCF can help with updating from 11.3 to 11.4

Rod Evans wrote about elfdiff.

And for those interested in linker and ELF related improvements, changes, and details, Ali Bahrami published a very complete set of blogs:
kldd: ldd Style Analysis For Solaris Kernel Modules - The new kldd ELF utility brings ldd style analysis to kernel modules.
Core File Enhancements for elfdump - Solaris 11.4 comes with a number of enhancements that allow the elfdump utility to display a wealth of information that was previously hidden in Solaris core files. Best of all, this comes without a significant increase in core file size.
ELF Program Header Names - Starting with Solaris 11.4, program headers in Solaris ELF objects have explicit names associated with them. These names are used by libproc, elfdump, elfedit, pmap, pmadvise, and mdb to eliminate some of the guesswork that goes into looking at process mappings.
ELF Section Compression - In cooperation with the GNU community, we are happy and proud to bring standard ELF section compression APIs to libelf. This builds on our earlier work in 2012 (Solaris 11 Update 2) to standardize ELF compression at the file format level. Now, others can easily access that functionality.
ld -ztype, and Kernel Modules That Know What They Are - Solaris Kernel Modules (kmods) are now explicitly tagged as such, and are treated as final objects.
Regular Expression and Glob Matching for Mapfiles - Pattern matching using regular expressions, globbing, or plain string comparisons brings new expressive power to Solaris mapfiles.
New CRT Objects. (Or: What Are CRT objects?) - Publicly documented and committed CRT objects for Solaris.
Goodbye (And Good Riddance) to -mt and -D_REENTRANT - A long-awaited simplification to the process of building multithreaded code, one of the final projects delivered to Solaris by Roger Faulkner, made possible by his earlier work on thread unification that landed in Solaris 10.
Weak Filters: Dealing With libc Refactoring Over The Years - Weak Filters allow the link-editor to discard unnecessary libc filters as dependencies, because you can't always fix the Makefile.
Where Did The 32-Bit Linkers Go? - In Solaris 11 Update 4 (and Solaris 11 Update 3), the 32-bit version of the link-editor and related linking utilities are gone.

We hope you enjoy.

Oracle Solaris 11

Oracle Solaris 11.3 SRU 29 Released

We've just released Oracle Solaris 11.3 SRU 29. It contains some important security fixes and enhancements. SRU29 is now available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support.

Features included in this SRU:

libdax support on x86. This feature enables the use of DAX query operations on x86 platforms, so the ISV and open source communities can now develop DAX programs on x86. An application developed on an x86 platform can be executed on a SPARC platform with no modifications, and the libdax API will choose the DAX operations supported by the platform. The libdax library on x86 uses software emulation and does not require any change to user-developed applications.
Oracle VM Server for SPARC has been updated to version 3.5.0.1. For more information, including What's New, Bug Fixes, and Known Issues, see the Oracle VM Server for SPARC 3.5.0.1 Release Notes.
The Java 8, Java 7, and Java 6 packages have been updated. For more information and bugs fixed, see the Java 8 Update 162 Release Notes, Java 7 Update 171 Release Notes, and Java 6 Update 181 Release Notes.

The SRU also updates the following components, which have security fixes:

p7zip has been updated to 16.02
Firefox has been updated to 52.6.0esr
ImageMagick has been updated to 6.9.9-30
Thunderbird has been updated to 52.6.0
libtiff has been updated to 4.0.9
Wireshark has been updated to 2.4.4
The NVIDIA driver has been updated
irssi has been updated to 1.0.6
BIND has been updated to 9.10.5-P3

Full details of this SRU can be found in My Oracle Support Doc 2361795.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).

posix_spawn() as an actual system call

History

As a developer, there are always those projects where it is hard to find a way forward. Drop the project for now and find another, if only to rest your eyes and find a new insight for the temporarily abandoned project. This is how I embarked on posix_spawn() as an actual system call, which you will find in Oracle Solaris 11.4.

The original library implementation of posix_spawn() uses vfork(), but why care about the old address space if you are not going to use it? Or, worse, stop all the other threads in the process and don't start them until exec succeeds or exit() is called? As I had already written kernel modules for nefarious reasons to run executables directly from the kernel, I decided to benchmark a simple "make process, execute /bin/true" against posix_spawn() from the library. Even with two threads, posix_spawn() scaled poorly: additional threads did not allow a large number of additional spawns per second.

Starting a new process

All ways to start a new process need to copy a number of process properties: file descriptors, credentials, priorities, resource controls, and so on.

The original way to start a new process is fork(); you need to mark all the pages as copy-on-write (O(n) in the number of pages in the process), so this gets more and more expensive as the process gets larger. In Solaris we also reserve all the needed swap; a large process calling fork() doubles its swap requirement.

In BSD, vfork() was introduced; it borrows the address space and was cheap when it was invented. In much larger processes with hundreds of threads, it became more and more of a bottleneck. Dynamic linking also throws a spanner in the works: what you can do between vfork() and the final exec() is extremely limited.

In the standards universe, posix_spawn() was invented; it was aimed mostly at small embedded systems, and only a small number of specific actions can be performed before the new executable is run. As it was part of the standard, Solaris grew its own copy, built on top of vfork(). It has, of course, the same problems as vfork(), but because it is implemented in the library we can be sure we steer clear of all the other vfork() pitfalls.

Native spawn(2) call

The native spawn(2) system call introduced in Oracle Solaris 11.4 shares a lot of code with forkx(2) and execve(2). It mostly avoids doing unneeded operations:

do not stop all threads
do not copy any data about the current executable
do not clear all watch points (vfork())
do not duplicate the address space (fork())
no need to handle shared memory segments
do not copy one or more of the threads (fork1/forkall); create a new one instead
do not copy all file pointers
no need to restart all threads held earlier

The exec() call copies from its own address space, but when spawn(2) needs the arguments, it is already in a new process. So early in the spawn(2) system call we copy the environment vector and the arguments and save them away. The data blob is given to the child, and the parent waits until the child is about to return from the system call in the new process, or until it decides that it can't actually exec and calls exit instead. A process can call spawn(2) from all its threads, and the concurrency is limited only by locks that need to be held briefly when processes are created.

The performance win depends on the application; you won't win anything unless you use posix_spawn(). I was very happy to see that our standard shell uses posix_spawn() to start new processes, as do popen(3C) and system(3C), so the call is well tested. The more threads you have, the bigger the win. Stopping a thread is expensive, especially if it is held up in a system call. The world used to stop; now it just continues.

Support in truss(1), mdb(1)

When developing a new system call, special attention needs to be given to proc(5) and truss(1) interaction. The spawn(2) system call is no exception, if only because it is much harder to get right; support is also needed in debuggers, or they won't see a new process starting. This includes mdb(1) but also truss(1). They also need to learn that when spawn(2) succeeds, they are stopped in a completely different executable; we may also have crossed a privilege boundary, e.g., when spawning su(8) or ping(8).
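If you want to watch the new call at work, something along these lines should do it. This is a sketch: it assumes truss on 11.4 lists the new system call under the name spawn in its syscall table, and relies on the shell using posix_spawn() as described above:

$ truss -f -t spawn,execve sh -c '/bin/true'

The -f option follows the child, so the trace continues in the newly spawned process.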

Installing Packages — Oracle Solaris 11.4 Beta

We've been getting random questions about how to install (Oracle Solaris) packages onto a newly installed Oracle Solaris 11.4 Beta system, and of course the key is pointing to the appropriate IPS repository.

One option is to download the full repository and install it locally on its own, or add it to an existing local repository, and then just point the publisher to this local repository. This is mostly used by folks who have a test system/LDom/Kernel Zone and probably have one or more local repositories already. However, experience shows that a large percentage of folks testing a beta version like this do so in a VirtualBox instance on their laptop or workstation, and because of this they want to use the GNOME desktop rather than logging in remotely through ssh. So one of the things we do is supply an Oracle VM Template for VirtualBox which already has the solaris-desktop group package installed (officially group/system/solaris-desktop), so it shows more than the console when started and gives you the ability to run desktop tasks like Firefox and a terminal. (By the way, as per the Release Notes on Runtime Issues, there's a glitch with gnome-terminal you might run into, and you'd need to run a workaround to get it working.)

For this group of VirtualBox-based testers, the chances are high that they're not going to have a local repository nearby, especially on a laptop that's moving around. This is where using our central repository at pkg.oracle.com is very useful, which is well described in the Oracle Solaris documentation. However, there may be some minor obstacles to clear when using this method that aren't directly part of the process but get in the way when using the VirtualBox-installed OVM Template.

First, when using the Firefox browser to request and download certificates and later point to the repository, you'll need to have DNS working, and depending on the install, the DNS client may not yet be enabled. Here's how you check it:

demo@solaris-vbox:~$ svcs dns/client
STATE          STIME    FMRI
disabled        5:45:26 svc:/network/dns/client:default

This is fairly simple to solve. First check that the Oracle Solaris instance has correctly picked up the DNS information from VirtualBox in the DHCP process by looking in /etc/resolv.conf. If that looks good, simply enable the dns/client service:

demo@solaris-vbox:~$ sudo svcadm enable dns/client

You'll be asked for your password and then it will be enabled. Note you can also use pfexec(1) instead of sudo(8); this will also check whether your user has the appropriate privileges. You can check if the service is running:

demo@solaris-vbox:~/Downloads$ svcs dns/client
STATE          STIME    FMRI
online         10:21:16 svc:/network/dns/client:default

Now that DNS is running, you should be able to ping pkg.oracle.com.

The second gotcha is that on the pkg-register.oracle.com page, the Oracle Solaris 11.4 Beta repository is at the very bottom of the list of available repositories and should not be confused with the Oracle Solaris 11 Support repository (to which you may already have requested access) listed at the top of the page. The same certificate/key pair is used for all of the Oracle Solaris repositories; however, in order to permit the use of an existing cert/key pair, the license for the Oracle Solaris 11.4 Beta repository must be accepted. This means selecting the 'Request Access' button next to the Solaris 11.4 Beta repository entry.

Once you have the cert/key, or you have accepted the license, you can configure the beta repository as:

pkg set-publisher -k <your-key> -c <your-cert> -g https://pkg.oracle.com/solaris/beta solaris

With the VirtualBox image, the default repository setup includes the 'release' repository. It is best to remove that:

pkg set-publisher -G http://pkg.oracle.com/solaris/release solaris

This can be performed in one command:

pkg set-publisher -k <your-key> -c <your-cert> -G http://pkg.oracle.com/solaris/release \
-g https://pkg.oracle.com/solaris/beta solaris

Note that here too you'll need to use pfexec(1) or sudo(8) again. This should kick off the pkg(1) command, and once it's done you can check its status with:

demo@solaris-vbox:~/Downloads$ pkg publisher solaris
Publisher: solaris
Alias:
Origin URI: https://pkg.oracle.com/solaris/beta/
Origin Status: Online
SSL Key: /var/pkg/ssl/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
SSL Cert: /var/pkg/ssl/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Cert. Effective Date: January 29, 2018 at 03:04:58 PM
Cert. Expiration Date: February 6, 2020 at 03:04:58 PM
Client UUID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Catalog Updated: January 24, 2018 at 02:09:16 PM
Enabled: Yes

And now you're up and running.

A final thought: if, for example, you've chosen to install the Text Install version of the Oracle Solaris 11.4 Beta because you want a nice minimal install without the overhead of GNOME and the like, you can also download the key and certificate to another system or the hosting OS (in case you're using VirtualBox), then rsync or rcp them across and follow all the same steps.
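For that last scenario, a minimal sketch of copying the pair across (the file names are illustrative; use whatever names the download produced):

$ scp pkg.oracle.com.key.pem pkg.oracle.com.certificate.pem demo@solaris-vbox:/var/tmp/

Then point the -k and -c options of pkg set-publisher at the copied files.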

System maintenance — evacuate all Zones!

The number one use case for live migration today is evacuation: when a Solaris Zones host needs some maintenance operation that involves a reboot, the zones are live migrated to some other willing host. This avoids scheduling simultaneous maintenance windows for all the services provided by those zones. Implementing this today on Solaris 11.3 involves manually migrating zones with individual zoneadm migrate commands and, especially, determining suitable destinations for each of the zones.

To make this common scenario simpler and less error prone, Solaris 11.4 Beta comes with a new command, sysadm(8), for system maintenance that also allows for zone evacuation. The basic idea of how it is supposed to be used is like this:

# pkg update
...
# sysadm maintain -s -m "updating to new build"
# sysadm evacuate -v
Evacuating 3 zones...
Migrating myzone1 to rads://destination1/ ...
Migrating myzone3 to rads://destination1/ ...
Migrating myzone4 to rads://destination2/ ...
Done in 3m30s.
# reboot
...
# sysadm maintain -e
# sysadm evacuate -r
...

When in maintenance mode, an attempt to attach or boot any zone is refused: if the admin is trying to move zones off the host, it's not helpful to allow incoming zones. Note that this maintenance mode is recorded system-wide, not just in the zones framework; even though the only current impact is on zones, it seems likely other subsystems may find it useful in the future.

To set up an evacuation target for a zone, the SMF property evacuation/target on the zone's service instance system/zones/zone:<zone-name> must be set to the target host. You can use either a rads:// or an ssh:// location identifier, e.g. ssh://janp@mymachine.mydomain.com. Do not forget to refresh the service instance for your change to take effect; see the sketch below.

You can evacuate running Kernel Zones as well as installed native and Kernel Zones. An evacuation always covers the running zones; with the option -a, installed zones are included as well. Only those zones with the evacuation/target property set in their service instance are scheduled for evacuation. However, if any running zone (and also any installed zone, if evacuate -a is used) does not have the property set, the overall result of the evacuation is reported as failed by sysadm, which is logical, as an evacuation by definition means evacuating everything. As live zone migration does not support native zones, those can only be evacuated in the installed state. Also note that you can only evacuate zones installed on shared storage, for example on iSCSI volumes. See the storage URI manual page, suri(7), for information on what other shared storage is supported. Note that you can install Kernel Zones on NFS files as well. To set up live Kernel Zone migration, please check out the Migrating an Oracle Solaris Kernel Zone section of the 11.4 online documentation.
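Here is a minimal sketch of setting and activating the property (the zone name and target host are illustrative):

# svccfg -s svc:/system/zones/zone:evac1 setprop evacuation/target = astring: ssh://root@bjork
# svcadm refresh svc:/system/zones/zone:evac1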
Now, let's see a real example. We have a few zones on host nacaozumbi. All running and installed zones are on shared storage, including the native zone tzone1 and Kernel Zone evac1:

root:nacaozumbi:~# zonecfg -z tzone1 info rootzpool
rootzpool:
    storage: iscsi://saison/luname.naa.600144f0dbf8af1900005582f1c90007

root:nacaozumbi:~# zonecfg -z evac1 info device
device:
    storage: iscsi://saison/luname.naa.600144f0dbf8af19000058ff48060017
    id: 1
    bootpri: 0

root:nacaozumbi:~# zoneadm list -cv
  ID NAME      STATUS      PATH                  BRAND       IP
   0 global    running     /                     solaris     shared
  82 evac3     running     -                     solaris-kz  excl
  83 evac1     running     -                     solaris-kz  excl
  84 evac2     running     -                     solaris-kz  excl
   - tzone1    installed   /system/zones/tzone1  solaris     excl
   - on-fixes  configured  -                     solaris-kz  excl
   - evac4     installed   -                     solaris-kz  excl
   - zts       configured  -                     solaris-kz  excl

Zones not set up for evacuation were detached, i.e. on-fixes and zts. All running and installed zones are set to be evacuated to bjork, for example:

root:nacaozumbi:~# svccfg -s system/zones/zone:evac1 listprop evacuation/target
evacuation/target astring     ssh://root@bjork

Now, let's start the maintenance window:

root:nacaozumbi:~# sysadm maintain -s -m "updating to new build"
root:nacaozumbi:~# sysadm maintain -l
TYPE   USER  DATE             MESSAGE
admin  root  2018-02-02 01:10 updating to new build

At this point we can no longer boot or attach zones on nacaozumbi:

root:nacaozumbi:~# zoneadm -z on-fixes attach
zoneadm: zone 'on-fixes': attach prevented due to system maintenance: see sysadm(8)

And that also includes migrating zones to nacaozumbi:

root:bjork:~# zoneadm -z on-fixes migrate ssh://root@nacaozumbi
zoneadm: zone 'on-fixes': Using existing zone configuration on destination.
zoneadm: zone 'on-fixes': Attaching zone.
zoneadm: zone 'on-fixes': attach failed:
zoneadm: zone 'on-fixes': attach prevented due to system maintenance: see sysadm(8)

Now we start evacuating all the zones. In this example, all running and installed zones have their service instance property evacuation/target set. The option -a means all zones, including installed ones. The -v option provides verbose output.

root:nacaozumbi:~# sysadm evacuate -va
sysadm: preparing 5 zone(s) for evacuation ...
sysadm: initializing migration of evac1 to bjork ...
sysadm: initializing migration of evac3 to bjork ...
sysadm: initializing migration of evac4 to bjork ...
sysadm: initializing migration of tzone1 to bjork ...
sysadm: initializing migration of evac2 to bjork ...
sysadm: evacuating 5 zone(s) ...
sysadm: migrating tzone1 to bjork ...
sysadm: migrating evac2 to bjork ...
sysadm: migrating evac4 to bjork ...
sysadm: migrating evac1 to bjork ...
sysadm: migrating evac3 to bjork ...
sysadm: evacuation completed successfully.
sysadm: evac1: evacuated to ssh://root@bjork
sysadm: evac2: evacuated to ssh://root@bjork
sysadm: evac3: evacuated to ssh://root@bjork
sysadm: evac4: evacuated to ssh://root@bjork
sysadm: tzone1: evacuated to ssh://root@bjork

While the evacuation is in progress, you can check its state like this:

root:nacaozumbi:~# sysadm evacuate -l
sysadm: evacuation in progress

After the evacuation is done, you can also see the details (for example, in case you did not run it in verbose mode):

root:nacaozumbi:~# sysadm evacuate -l -o ZONENAME,STATE,DEST
ZONENAME STATE     DEST
evac1    EVACUATED ssh://root@bjork
evac2    EVACUATED ssh://root@bjork
evac3    EVACUATED ssh://root@bjork
evac4    EVACUATED ssh://root@bjork
tzone1   EVACUATED ssh://root@bjork

And you can see all the evacuated zones are now in the configured state on the source host:

root:nacaozumbi:~# zoneadm list -cv
  ID NAME      STATUS      PATH                  BRAND       IP
   0 global    running     /                     solaris     shared
   - tzone1    configured  /system/zones/tzone1  solaris     excl
   - evac1     configured  -                     solaris-kz  excl
   - on-fixes  configured  -                     solaris-kz  excl
   - evac4     configured  -                     solaris-kz  excl
   - zts       configured  -                     solaris-kz  excl
   - evac3     configured  -                     solaris-kz  excl
   - evac2     configured  -                     solaris-kz  excl

And the migrated zones are happily running or in the installed state on host bjork:

jpechane:bjork:~$ zoneadm list -cv
  ID NAME      STATUS      PATH                  BRAND       IP
   0 global    running     /                     solaris     shared
  57 evac3     running     -                     solaris-kz  excl
  58 evac1     running     -                     solaris-kz  excl
  59 evac2     running     -                     solaris-kz  excl
   - on-fixes  installed   -                     solaris-kz  excl
   - tzone1    installed   /system/zones/tzone1  solaris     excl
   - zts       installed   -                     solaris-kz  excl
   - evac4     installed   -                     solaris-kz  excl

The maintenance state is still held at this point:

root:nacaozumbi:~# sysadm maintain -l
TYPE   USER  DATE             MESSAGE
admin  root  2018-02-02 01:10 updating to new build

Upgrade the system with a new boot environment, unless you did that before (which you should have, to keep the time your zones are running on the other host to a minimum):

root:nacaozumbi:~# pkg update --be-name=.... -C0 entire@...
root:nacaozumbi:~# reboot

Now, end the maintenance mode:

root:nacaozumbi:~# sysadm maintain -e

And as the final step, return all the evacuated zones. As explained before, you would not be able to do this if the host were still in maintenance mode.

root:nacaozumbi:~# sysadm evacuate -ra
sysadm: preparing zones for return ... 5/5
sysadm: returning zones ... 5/5
sysadm: return completed successfully.

Possible enhancements we are considering for the future include specifying multiple targets and a spread policy, with a resource-utilisation comparison algorithm that would consider CPU architecture, RAM, and CPU resources.

What is this BUI thing anyway?

This is part two in my series of posts about Solaris Analytics in the Solaris 11.4 release. You may find part one here.

The Solaris Analytics WebUI (or "BUI" for short) is what we use to tie together all our data gathering from the Stats Store. It comprises two web apps, titled "Solaris Dashboard" and "Solaris Analytics". Enable the WebUI service via:

# svcadm enable webui/server

Once the service is online, point your browser at https://127.0.0.1:6787 and log in. (Note that the self-signed certificate is generated by your system, and adding an exception for it in your browser is fine.) Rather than rolling our own toolkit, we make use of Oracle JET, which means we can keep a consistent look and feel across Oracle web applications.

After logging in, you'll find yourself at the Oracle Solaris Web Dashboard, which shows an overview of several aspects of your system, along with Faults (FMA) and Solaris Audit activity if your user has sufficient privileges to read them. Mousing over any of the visualizations on this page will give you a brief description of what the visualization provides, and clicking on it will take you to a more detailed page. If you click on the hostname in the top bar (next to Applications), you'll see what we call the Host Drawer, which pulls information from svc:/system/sysstat. Click the 'x' at the top right to close the drawer.

Selecting Applications / Solaris Analytics will take you to the main part of the BUI. In the screenshot, I've selected the NFS Client sheet, which pops up the dark shaded box on the right with a description of what the sheet will show you.

Building blocks: faults, utilization and audit events

In the previous installment I mentioned that we wanted to provide a way for you to tie together the many sources of information we provide, so that you can answer questions about your system. This is a small example of how you can do so. The host these screenshots were taken from is a single-processor, four-core Intel-based workstation. In a terminal window I ran

# psradm -f 3

followed a few minutes later by

# psradm -n 3

You can see those events marked on each of the visualizations with a blue triangle. Now if I mouse over the triangle marking the second offline/online pair in the Thread Migrations viz, I can see that the system generated a Solaris Audit event. This allows us to observe that the changes in system behaviour (primarily load average and thread migrations across cores) were correlated with the offlining of a CPU core.

Finally, let's have a look at the Audit sheet. To view the stats on this page, you need to log in to the BUI as a suitably privileged user: either root, or a user granted solaris.sstore.read.sensitive:

# usermod -A +solaris.sstore.read.sensitive $USER

For this screenshot I not only redid the psradm operations from earlier, I also tried making an ssh connection with an unknown user, and logged in on another of this system's virtual consoles. There are many other things you could observe with the audit subsystem; this is just a glimpse.

Tune in next time for a discussion of using the C and Python bindings to the Stats Store so you can add your own statistics.

Perspectives

Default Memory Allocator Security Protections using Silicon Secured Memory (SSM ADI)

In Solaris 11.3 we provided the ability to use the Silicon Secured Memory (SSM) feature of the Oracle SPARC processors in the M7 and M8 families. An API for applications to explicitly manage ADI (Application Data Integrity) versioning was provided (see the adi(2) man page), as well as a new memory allocator library, libadimalloc(3LIB). This required either code changes to the application or arranging to set LD_PRELOAD_64=/usr/lib/64/libadimalloc.so.1 in the environment before the application started. The libadimalloc(3LIB) allocator was derived from the libumem(3LIB) codebase but doesn't expose all of the features that libumem does.

With the Oracle Solaris 11.4 Beta, the use of ADI has been integrated into the default system memory allocator in libc(3LIB) and libumem(3LIB), while libadimalloc(3LIB) is retained for backwards compatibility with Oracle Solaris 11.3 systems. Control of which processes run with ADI protection is now via the Security Extensions Framework, using sxadm(8), so it is no longer necessary to set the $LD_PRELOAD_64 environment variable.

There are two distinct ADI-based protections exposed via the Security Extensions Framework: ADISTACK and ADIHEAP. These complement the existing extensions introduced in earlier Oracle Solaris 11 update releases: ASLR, NXHEAP, and NXSTACK (all three of which are available on SPARC and x86 CPU systems). ADIHEAP is how the ADI protection is exposed via the standard libc memory allocator and via libumem. The ADISTACK extension, as the name suggests, protects the register save area of the stack.

$ sxadm status
EXTENSION         STATUS                    CONFIGURATION
aslr              enabled (tagged-files)    default (default)
nxstack           enabled (all)             default (default)
nxheap            enabled (tagged-files)    default (default)
adiheap           enabled (tagged-files)    default (default)
adistack          enabled (tagged-files)    default (default)

The above output from sxadm shows the default configuration of an Oracle SPARC M7/M8 based system. What we can see here is that some of the security extensions, including adiheap/adistack, are enabled by default only for tagged files. Executable binaries can be tagged using ld(1) as documented in sxadm(8); for example, to tag an application at build time to use adiheap, we would add '-z sx=adiheap'. Note that it is not meaningful at this time to tag shared libraries, only leaf executable programs.

Most executables in Oracle Solaris were already tagged to run with the aslr, nxstack, and nxheap security extensions. Now many of them are also tagged for ADISTACK and ADIHEAP. For the Oracle Solaris 11.4 release we have also had to explicitly tag some executables to not run with ADIHEAP and/or ADISTACK; this is due either to outstanding issues when running with an ADI allocator or, in some cases, to more fundamental issues with how the program itself works (the ImageMagick graphics image processing tool is one such example where ADISTACK is explicitly disabled).

The sxadm command can be used to start processes with security extensions enabled regardless of the system-wide status and binary tagging. For example, to start a program that was not tagged at build time with both ADI-based protections, in addition to its binary-tagged extensions:

$ sxadm exec -s adistack=enable -s adiheap=enable /path/to/program

It is possible to edit binary executables to add the security extension tags, even if there were none present at link time. Explicit tagging of binaries already installed on a system and delivered by any package management software is not recommended.
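As a concrete sketch of build-time tagging (the program name is illustrative; the '-z sx=' option is the one documented in sxadm(8), passed through the compiler driver to the Solaris link-editor):

$ cc -o myserver myserver.c -z sx=adiheap

The same option can be given directly to ld(1) when linking by hand.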
If all of the untagged applications deployed on a system have been tested to work with the ADI protections, then it is possible to change the system-wide defaults rather than having to use sxadm to run the processes:

# sxadm enable adistack,adiheap

The Oracle Solaris 11.4 Beta also has support for using ADI to protect kernel memory; this is currently undocumented but is planned to be exposed via sxadm by the 11.4 release or soon after. The KADI support also includes a significant amount of ADI support in mdb, for both live and post-mortem kernel debugging. KADI is enabled by default, with precise traps, when running a debug build of the kernel. The debug builds are published in the public Oracle Solaris 11.4 Beta repository and can be enabled by running:

# pkg change-variant debug.osnet=true

The use of ADI via the standard libc and libumem memory allocators and by the kernel (in LDOMs and Zones, including with live migration/suspend) has enabled the Oracle Solaris engineering team to find and fix many otherwise difficult-to-diagnose bugs. However, we are not yet at a point where we believe all applications from all vendors are sufficiently well behaved that the ADISTACK and ADIHEAP protections can be enabled by default.

Getting Data Out of the StatsStore

After the release of the Oracle Solaris 11.4 Beta and the post on the new observability features by James McPherson, I've had a few folks ask me if it's possible to export the data from the StatsStore into a format like CSV (comma-separated values) so they can easily import it into something like Excel. The answer is: yes.

The main command to access the StatsStore through the CLI is sstore(1), which you can either use as a single command or as an interactive shell-like environment, for example to browse the statistics namespace. The other way to access the StatsStore is through the Oracle Solaris Dashboard in a browser, where you point to the system's IP address on port 6787. A third way to access the data is through the REST interface (which the Dashboard is actually also using to get its data), but that is something for a later post.

As James pointed out in his post, you can use sstore(1) to list the currently available resources, and you can use export to pull data from one or more of those resources. It's with this last option that you can specify the format the data should be exported in. The default is tab-separated:

demo@solaris-vbox:~$ sstore export -t 2018-02-01T06:47:00 -e 2018-02-01T06:52:00 -i 60 '//:class.cpu//:stat.usage'
TIME                VALUE           IDENTIFIER
2018-02-01T06:47:00 20286401.157722 //:class.cpu//:stat.usage
2018-02-01T06:48:00 20345863.706499 //:class.cpu//:stat.usage
2018-02-01T06:49:00 20405363.144286 //:class.cpu//:stat.usage
2018-02-01T06:50:00 20465694.085729 //:class.cpu//:stat.usage
2018-02-01T06:51:00 20525877.600447 //:class.cpu//:stat.usage
2018-02-01T06:52:00 20585941.862812 //:class.cpu//:stat.usage

But you can also get it in CSV:

demo@solaris-vbox:~$ sstore export -F csv -t 2018-02-01T06:47:00 -e 2018-02-01T06:52:00 -i 60 '//:class.cpu//:stat.usage'
time,//:class.cpu//:stat.usage
1517496420000000,20286401.157722
1517496480000000,20345863.706499
1517496540000000,20405363.144286
1517496600000000,20465694.085729
1517496660000000,20525877.600447
1517496720000000,20585941.862812

And in JSON:

demo@solaris-vbox:~$ sstore export -F json -t 2018-02-01T06:47:00 -e 2018-02-01T06:52:00 -i 60 '//:class.cpu//:stat.usage'
{
  "__version": 1,
  "data": [
    {
      "ssid": "//:class.cpu//:stat.usage",
      "records": [
        { "start-time": 1517496420000000, "value": 20286401.157722 },
        { "start-time": 1517496480000000, "value": 20345863.706498999 },
        { "start-time": 1517496540000000, "value": 20405363.144285999 },
        { "start-time": 1517496600000000, "value": 20465694.085728999 },
        { "start-time": 1517496660000000, "value": 20525877.600446999 },
        { "start-time": 1517496720000000, "value": 20585941.862812001 }
      ]
    }
  ]
}

Each of these formats has its own manual entry: sstore.csv(5) and sstore.json(5).

Now the question arises: how do you get something interesting/useful? Part of this is about learning what the StatsStore can gather for you and the types of tricks you can do with the data before you export it. This is where the Dashboard is a great learning guide. When you first log in you get a landing page. (Note: the default install of Oracle Solaris won't have a valid cert, and the browser will complain it's an untrusted connection. Because you know the system, you can add an exception and connect.) Because this post is not about exploring the Dashboard but about exporting data, I'll just focus on that, but by all means click around.

If you click on the "CPU Utilization by mode (%)" graph, you're essentially double-clicking on that data, and you'll go to a statistics sheet we've built showing all kinds of aspects of CPU utilization. (You can see my VirtualBox instance is pretty busy.)

These graphs look pretty interesting, but how do I get to this data? Well, if we're interested in the top processes, first click on "Top Processes by CPU Utilization"; this brings up an overlay window. (Note: it shows that this statistic is only temporarily collected — something you could make persistent here — and that the performance impact of collecting it is very low.) Now click on "proc cpu-percentage", and this will show what is being collected to create this graph.

This shows the SSID of the data in the graph. A quick look shows it's looking in the process data //:class.proc, then using a wildcard on the resources //:res.*, which grabs all the entries available, then selecting the statistic for CPU usage in percent //:stat.cpu-percentage, and finally doing a top operation on this list to select the top 5 processes //:op.top(5) (see ssid-op(7) for more info). And when I use this on the command line I get:

demo@solaris-vbox:~$ sstore export -F CSV -t 2018-02-01T06:47:00 -i 60 '//:class.proc//:res.*//:stat.cpu-percentage//:op.top(5)'
time,//:class.proc//:res.firefox/2035/demo//:stat.cpu-percentage//:op.top(5),//:class.proc//:res.rad/204/root//:stat.cpu-percentage//:op.top(5),//:class.proc//:res.gnome-shell/1316/demo//:stat.cpu-percentage//:op.top(5),//:class.proc//:res.Xorg/1039/root//:stat.cpu-percentage//:op.top(5),//:class.proc//:res.firefox/2030/demo//:stat.cpu-percentage//:op.top(5)
1517496480000000,31.378174,14.608765,1.272583,0.500488,0.778198
1517496540000000,33.743286,8.999634,3.271484,1.477051,2.059937
1517496600000000,41.018677,9.545898,5.603027,3.170776,3.070068
1517496660000000,37.011719,8.312988,1.940918,0.958252,1.275635
1517496720000000,29.541016,8.514404,9.561157,4.693604,0.869751

Here "-F CSV" tells it to output CSV (I could also have used lowercase csv), "-t 2018-02-01T06:47:00" is the begin time of what I want to look at (I'm not using an end time, which would be similar but with "-e"), "-i 60" says I want each sample to be 60 seconds long, and then I use the SSID from above.

Note: For the CSV export to work, you'll need to specify at least the begin time (-t) and the length of each sample (-i), otherwise the export will error. You also want to export data the StatsStore has actually gathered, or it will likewise not work.

In the response, the first line is the header saying what each column is (time, firefox, rad, gnome-shell, Xorg, firefox), followed by the values, where the first column is UNIX time.

Similarly, if I look at what data is driving the CPU Utilization graph, I get the following data with this SSID:

demo@solaris-vbox:~$ sstore export -F csv -t 2018-02-01T06:47:00 -i 60 '//:class.cpu//:stat.usage//:part.mode(user,kernel,stolen,intr)//:op.rate//:op.util'
time,//:class.cpu//:stat.usage//:part.mode(user,kernel,stolen,intr)//:op.rate//:op.util(intr),//:class.cpu//:stat.usage//:part.mode(user,kernel,stolen,intr)//:op.rate//:op.util(kernel),//:class.cpu//:stat.usage//:part.mode(user,kernel,stolen,intr)//:op.rate//:op.util(user)
1517496420000000,2.184663,28.283780,31.322588
1517496480000000,2.254090,16.524862,32.667445
1517496540000000,1.568696,19.479255,41.112911
1517496600000000,1.906700,18.194955,39.069998
1517496660000000,2.326821,18.103397,39.564789
1517496720000000,2.484758,17.909993,38.684371

Note: Even though we've asked for data on user, kernel, stolen, and intr (interrupts), it doesn't return data on stolen, as it doesn't have any. Also note it's using two other operations, rate and util, in combination to create this result (again, see ssid-op(7) for more info).

This should allow you to click around the Dashboard, learn what you can gather, and export it. We'll talk more on mining interesting data and, for example, using the JSON output in later posts.
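And since the original question was about Excel, the final step is just a shell redirect (the file name is arbitrary):

demo@solaris-vbox:~$ sstore export -F csv -t 2018-02-01T06:47:00 -i 60 '//:class.cpu//:stat.usage' > cpu-usage.csv

The resulting cpu-usage.csv opens directly in Excel or any other spreadsheet that understands CSV.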

Perspectives

Normalizing man page section numbers in Solaris 11.4

If you look closely at the listings for the Oracle Solaris 11.4 Reference Manuals and the previous Oracle Solaris 11.3 Reference Manuals, you might notice a change in some sections. One of our "modernization" projects for this release actually took us back to our roots, returning to the man page section numbers used in SunOS releases before the adoption of the System V scheme in Solaris 2.0. When I proposed this change, I dug into the history a bit to explain it in the PSARC case to review the switchover.

Unix man pages have been divided into numbered sections for the system's entire recorded history. The original sections, as seen in the Introduction to the Unix 1st Edition Manual from 1971 and the Unix 2nd Edition Manual from 1972, were:

I. Commands
II. System calls
III. Subroutines
IV. Special files
V. File formats
VI. User-maintained programs
VII. Miscellaneous

By Version 7, Bell Labs had switched from Roman numerals to Arabic and updated the definitions a bit:

1. Commands
2. System calls
3. Subroutines
4. Special files
5. File formats and conventions
6. Games
7. Macro packages and language conventions
8. Maintenance

Most Unix derivatives followed this section breakdown, and a very similar set is still used today on BSD, Linux, and Mac OS X:

1 General commands
2 System calls
3 Library functions, covering in particular the C standard library
4 Special files (usually devices, those found in /dev) and drivers
5 File formats and conventions
6 Games and screensavers
7 Miscellanea
8 System administration commands and daemons

The Linux Filesystem Hierarchy Standard defines these sections as:

man1: User programs. Manual pages that describe publicly accessible commands are contained in this chapter. Most program documentation that a user will need to use is located here.
man2: System calls. This section describes all of the system calls (requests for the kernel to perform operations).
man3: Library functions and subroutines. Section 3 describes program library routines that are not direct calls to kernel services. This and chapter 2 are only really of interest to programmers.
man4: Special files. Section 4 describes the special files, related driver functions, and networking support available in the system. Typically, this includes the device files found in /dev and the kernel interface to networking protocol support.
man5: File formats. The formats for many data files are documented in section 5. This includes various include files, program output files, and system files.
man6: Games. This chapter documents games, demos, and generally trivial programs. Different people have various notions about how essential this is.
man7: Miscellaneous. Manual pages that are difficult to classify are designated as being section 7. The troff and other text processing macro packages are found here.
man8: System administration. Programs used by system administrators for system operation and maintenance are documented here. Some of these programs are also occasionally useful for normal users.

The Linux man pages also include a non-FHS-specified section 9 for "kernel routine documentation."

But of course, one Unix system broke ranks and shuffled the numbering around. USL redefined the man page sections in System V to be:

1 General commands
1M System administration commands and daemons
2 System calls
3 C library functions
4 File formats and conventions
5 Miscellanea
7 Special files (usually devices, those found in /dev) and drivers

Most notably, this moved section 8 to 1M and swapped 4, 5, and 7 around.

Solaris still tried to follow the System V arrangement until now, with some extensions:

1 User Commands
1M System Administration Commands
2 System Calls
3 Library Interfaces and Headers
4 File Formats
5 Standards, Environments, and Macros
6 Games and screensavers
7 Device and Network Interfaces
9 DDI and DKI Interfaces

With Solaris 11.4, we've now given up the ghost of System V and declared Solaris to be back in sync with Bell Labs, BSD, and Linux numbering. Specifically, all existing Solaris man pages using these System V sections were renumbered to the listed standard section:

SysV    Standard
----    --------
1m  ->  8
4   ->  5
5   ->  7
7   ->  4

Sections 1, 2, 3, 6, and 9 remain as is, including the Solaris method of subdividing section 3 into per-library subdirectories. The subdivisions of section 7 introduced in PSARC/1994/335 have become subdivisions of section 4 instead; for instance, ioctls will now be documented in section 4I instead of 7I.

The man command was updated so that if someone specifies one of the remapped sections, it will look first in the section specified, then in any subsections of that section, then the mapped section, and then in any subsections of that section. This will assist users following references from older Solaris documentation to find the expected pages, as well as users of other platforms who don't know our subsections. For example:

If a user does "man -s 4 core", looking for the man page that was delivered as /usr/share/man/man4/core.4, and no such page is found in /usr/share/man/man4/, it will look for /usr/share/man/man5/core.5 instead.
If a user does "man -s 3 malloc", it will display /usr/share/man/man3/malloc.3c.
The man page previously delivered as ip(7P), and now as ip(4P), can be found by any of:
man ip
man ip.4p
man ip.4
man ip.7p
man ip.7
and the equivalent man -s formulations.

Additionally, as long as we were mucking with the sections, we defined two new sections which we plan to start using soon:

2D DTrace Providers
8S SMF Services

The resulting Solaris manual sections are thus now:

1 User Commands
2 System Calls
2D DTrace Providers
3 Library Interfaces and Headers
3* Interfaces split out by library (i.e. 3C for libc, 3M for libm, 3PAM for libpam)
4 Device and Network Interfaces
4D Device Drivers & /dev files
4FS FileSystems
4I ioctls for a class of drivers or subsystems
4M Streams Modules
4P Network Protocols
5 File Formats
6 Games and screensavers
7 Standards, Environments, Macros, Character Sets, and miscellany
8 System Administration Commands
8S SMF Services
9 DDI and DKI Interfaces
9E Driver Entry Points
9F Kernel Functions
9P Driver Properties
9S Kernel & Driver Data Structures

We hope this makes things easier for users and system administrators who have to use multiple OSes, by getting rid of one set of needless differences. It certainly helps us in delivering FOSS packages, by not having to change all the man pages in upstream sources to be different for Solaris just because USL wanted to be different 30 years ago.

Perspectives

Random Solaris Tips: 11.4 Beta, LDoms 3.5, Privileges, File Attributes & Disk Block Size

* { box-sizing: border-box; } .event { border-radius: 4px; width: 800px; height: 110px; margin: 10px auto 0; margin-left: 0cm; } .event-side { padding: 10px; border-radius: 8px; float: left; height: 100%; width: calc(15% - 1px); box-shadow: 1px 2px 2px 1px #888; background: white; position: relative; overflow: hidden; font-size: 0.8em; text-align: right; } .event-date, .event-time { position: absolute; width: calc(90% - 20px); } .event-date { top: 30px; font: bold 24px Garamond, Georgia, serif; } .dotted-line-separator { right: -2px; position: absolute; background: #fff; width: 5px; top: 8px; bottom: 8px; } .dotted-line-separator .line { /*border-right: 1px dashed #ccc;*/ transform: rotate(90deg); } .event-body { border-radius: 8px; float: left; height: 100%; width: 65%; line-height: 22px; box-shadow: 0 2px 2px -1px #888; background: white; padding-right: 9px; font: bold 16px Garamond, Georgia, serif; } .event-title, .event-location, .event-details { float: left; width: 60%; padding: 15px; height: 33%; } .event-title, .event-location { border-bottom: 1px solid #ccc; } .event-details2 { float: left; width: 60%; padding: 15px; height: 23%; font: bold 24px Garamond, Georgia, serif; } Solaris OS Beta 11.4 Download Location & Documentation Recently Solaris 11.4 hit the web as a public beta product meaning anyone can download and use it in non-production environments. This is a major Solaris milestone since the release of Solaris 11.3 GA back in 2015. Few interesting pages: Solaris 11.4 Beta Downloads page for SPARC and x86 What's New in Oracle Solaris 11.4 Solaris 11.4 Release Notes Solaris 11.4 Documentation Logical Domains Dynamic Reconfiguration Blacklisted Resources Command History Dynamic Reconfiguration of Named Resources Starting with the release of Oracle VM Server for SPARC 3.5 (aka LDoms) it is possible to dynamically reconfigure domains that have named resources assigned. Named resources are the resources that are assigned explicitly to domains. Assigning core ids 10 & 11 and a 32 GB block of memory at physical address 0x50000000 to some domain X is an example of named resource assignment. SuperCluster Engineered System is one example where named resources are explicitly assigned to guest domains. Be aware that depending on the state of the system, domains and resources, some of the dynamic reconfiguration operations may or may not succeed. Here are few examples that show DR functionality with named resources. ldm remove-core cid=66,67,72,73 primary ldm add-core cid=66,67 guest1 ldm add-mem mblock=17664M:16G,34048M:16G,50432M:16G guest2 Listing Blacklisted Resources When FMA detects faulty resource(s), Logical Domains Manager attempts to stop using those faulty core and memory resources (no I/O resources at the moment) in all running domains. Also those faulty resources will be preemptively blacklisted so they don't get assigned to any domain. However if the faulty resource is currently in use, Logical Domains Manager attempts to use core or memory DR to evacuate the resource. If the attempt fails, the faulty resource is marked as "evacuation pending". All such pending faulty resources are removed and moved to blacklist when the affected guest domain is stopped or rebooted. Starting with the release of LDoms software 3.5, blacklisted and evacuation pending resources (faulty resources) can be examined with the help of ldm's -B option. 
Disks

Determine the Block Size

The devprop command on recent versions of Solaris 11 can show the logical and physical block size of a device. The size is represented in bytes. For example, the following output shows a 512-byte size for both the logical and the physical block; it is likely a 512-byte native disk (512n).

    % devprop -v -n /dev/rdsk/c4t2d0 device-blksize device-pblksize
    device-blksize=512
    device-pblksize=512

Find some useful information about disk drives that exceed the common 512-byte block size here.
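For comparison, a 512-byte emulation (512e) drive, one with 4 KB physical sectors behind a 512-byte logical interface, would be expected to report differing values, along these lines (a sketch; the device path is hypothetical and the output is illustrative):

    % devprop -v -n /dev/rdsk/c4t3d0 device-blksize device-pblksize
    device-blksize=512
    device-pblksize=4096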
Security

Privileges

With the debugging option enabled, the ppriv command on recent versions of Solaris 11 can be used to check whether the current user has the privileges required to run a certain command. eg.,

    % ppriv -ef +D /usr/sbin/trapstat
    trapstat[18998]: missing privilege "file_dac_read" (euid = 100, syscall = "faccessat") for "/devices/pseudo/trapstat@0:trapstat" at devfs_access+0x74
    trapstat: permission denied opening /dev/trapstat: Permission denied

    % ppriv -ef +D /usr/sbin/prtdiag
    System Configuration:  Oracle Corporation  sun4v T5240
    Memory size: 65312 Megabytes
    ================================ Virtual CPUs ================================
    ..

The following example examines the privileges of a running process.

    # ppriv 23829
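Once +D has identified the missing privilege, one way to confirm the diagnosis is to re-run the command with that privilege added to its sets. A minimal sketch, assuming a root shell that already holds file_dac_read in its permitted set:

    # ppriv -s A+file_dac_read -e /usr/sbin/trapstat

Here -s A+file_dac_read adds the privilege to all privilege sets of the process that -e executes; if the command now runs, the earlier +D output pinpointed the right privilege.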

Immutable Zones: SMF changes & Trusted Path services

History of the Immutable (ROZR) Zones

In Solaris 11 11/11 we introduced Immutable non-global zones; these are built on top of MWAC (Mandatory Write Access Control), using a handful of choices for the file-mac-profile property in zone configurations. Management was only possible by booting the zone read/write or by modifying configuration files from within the global zone. In Solaris 11.2 we added support for the Immutable Global Zone, and with it the Immutable Kernel Zone. In order to make maintenance possible for the global zone we added the concept of a Trusted Path login. It is invoked through the abort sequence for an LDom or bare-metal system, and for native and kernel zones through the -T/-U options of zlogin(1).

Limitations

The Trusted Path introduced in Solaris 11.2 was not available to services, and changes to the SMF repository were always possible. Depending on the file-mac-profile, /etc/svc/repository.db was either writable (not MWAC protected, as in flexible-configuration), so that changes were permanent and the immutable zone's configuration was not protected, or the repository was not writable, in which case a writable copy was created in /system/volatile and changes would not persist across a reboot. To make any permanent changes, the system needed to be rebooted read/write. The administrator thus had two choices: either changes to the SMF repository were persistent (file-mac-profile=flexible-configuration), or any permanent change required a r/w boot. In both cases, the behavior of an immutable system could be modified considerably. Furthermore, when an SMF service was moved into the Trusted Path using ppriv(1), it could not be killed, and the service would go into maintenance on the first attempt to restart or stop it.
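To make the mechanics above concrete, configuring an immutable zone and then entering it over the Trusted Path looks roughly like this (a sketch; the zone name myzone is hypothetical, and fixed-configuration is one of the file-mac-profile choices mentioned above):

    # zonecfg -z myzone set file-mac-profile=fixed-configuration
    # zoneadm -z myzone reboot
    # zlogin -T myzone

The profile takes effect when the zone boots, and -T gives the Trusted Path login introduced in Solaris 11.2.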
In Solaris 11.4 we updated the Immutable Zone: SMF becomes immutable, and we introduce services on the Trusted Path. Persistent SMF changes can be made only when they are made from the Trusted Path.

SMF becomes Immutable

SMF has two different repositories: the persistent repository, which contains all of the system and service configuration, and the non-persistent repository, which contains the current state of the system, that is, which services are actually running. The latter also stores the non-persistent property groups such as general_ovr; this property group is used to store whether services are enabled or disabled. The svc.configd daemon now runs in the Trusted Path, so it can change the persistent repository regardless of the MWAC profile; changes made to the persistent repository will now always survive a reboot. svc.configd checks whether the caller is running in the Trusted Path: a process running in the Trusted Path is allowed to make changes to the persistent repository; if not, an error is returned.

Trusted Path services

In Solaris 11.4 we introduce a Boolean parameter in the SMF method_credential called "trusted_path"; if it is set to true, the method runs in the Trusted Path. This feature is joined at the hip with Immutable SMF: without the latter, it would be easy to escalate from a privileged process to a privileged process in the Trusted Path. As these processes still need to behave normally, we added a new privilege flag, PRIV_TPD_KILLABLE; a process carrying this flag can be sent a signal from outside the Trusted Path even while it runs in the Trusted Path. But clearly such a process cannot otherwise be manipulated from outside the Trusted Path, so you can't aim a debugger at it unless the debugger runs in the Trusted Path too.

As the Trusted Path property can only be given by, or inherited from, init(8), the SMF restarters need to run in the Trusted Path. This feature allows us to run self-assembly services that do not depend on the self-assembly-complete milestone; instead, we can now run them on the Trusted Path. These services can take as long as they want, and they can be run on each and every boot, and even when the service is restarted. When system administrators want the console login to always run on the Trusted Path, they can easily achieve that by running the following commands:

    # svccfg -s console-login:default setprop trusted_path = boolean: true
    # svccfg -s console-login:default refresh

It is possible in Oracle Solaris 11.4 to write a service which updates and reboots the system; such a service can be started by an administrator outside of the Trusted Path by temporarily enabling it. Combined with non-reboot immutable, which was introduced in Oracle Solaris 11.3 SRU 12, automatic and secure updates are now possible without additional downtime. Similarly, there may be use cases for deploying a configuration management service, such as Puppet or Ansible, on the SMF Trusted Path so that it can reconfigure the system while interactive administrators, even root, cannot.
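As a sketch of that update-service idea: assuming a hypothetical service site/auto-update whose start method has been placed on the Trusted Path in the same way as the console-login example above, an administrator outside the Trusted Path could kick off a single run with a temporary enable:

    # svcadm enable -t site/auto-update:default

The -t makes the enable non-persistent, so the service runs for this boot only and the persistent, immutable SMF configuration is left untouched.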


Migrating from IPF to Packet Filter in Solaris 11.4

Contributed by: Alexandr Nedvedicky

This blog entry covers the migration from IPF to Packet Filter (a.k.a. PF). If your Oracle Solaris 11.3 runs without IPF, then you can stop reading now (well of course, if you're interested in reading about the built-in firewalls you should continue on). IPF served as a network firewall on Oracle Solaris for more than a decade (since Oracle Solaris 10). PF has been available on Oracle Solaris since Oracle Solaris 11.3 as an alternative firewall; an administrator must install it explicitly using 'pkg install firewall'. Having both firewalls shipped during the Oracle Solaris 11.3 release cycle should provide some time to prepare for the leap to a world without IPF. If you as a sysadmin have completed your homework and your ipf.conf (et al.) is already converted to pf.conf, then skip ahead to 'What has changed since Oracle Solaris 11.3'.

IPF is gone, what now?

On upgrade from Oracle Solaris 11.3, PF is automatically installed without any action from the administrator. This is implemented by renaming pkg:/network/ipfilter to pkg:/network/ipf2pf and adding a dependency on pkg:/network/firewall. The ipf2pf package installs the ipf2pf(7) service (svc:/network/ipf2pf), which runs at the first boot of the newly updated BE. The service inspects the IPF configuration, which is still available in '/etc/svc/repository-boot'. The ipf2pf start method uses the repository to locate the IPF configuration, which is then moved to the '/var/firewall/legacy.ipf/conf' directory. The contents of the directory may vary depending on your ipfilter configuration. The next step is to attempt to convert the legacy IPF configuration to pf.conf. Unlike IPF, PF keeps its configuration in a single file named pf.conf (/etc/firewall/pf.conf). The service uses the 'ipf2pf' binary tool for the conversion. Please do not set your expectations for ipf2pf too high. It's a simple tool (like a hammer or screwdriver) to support your craftsman's skill while converting the implementation of your network policy from IPF to PF. The tool might work well for simple cases, but please always review the conversion result before deploying it. As soon as the ipf2pf service is done with the conversion, it updates /etc/firewall/pf.conf with a comment pointing you to the result: '/var/firewall/legacy.ipf/pf.conf'.

Let's see how the tool actually works when it converts your IPF configuration to PF. Let's assume your IPF configuration is kept in ipf.conf, ipnat.conf and ippool-1.conf:

ipf.conf

    #
    # net0 faces to public network. we want to allow web and mail
    # traffic as stateless to avoid explosion of IPF state tables.
    # mail and web is busy.
    #
    # allow stateful inbound ssh from trusted hosts/networks only
    #
    block in on net0 from any to any
    pass in on net0 from any to 192.168.1.1 port = 80
    pass in on net0 from any to 172.16.1.15 port = 2525
    pass in on net0 from pool/1 to any port = 22 keep state
    pass out on net0 from any to any keep state
    pass out on net0 from 192.168.1.1 port = 80 to any
    pass out on net0 from 192.168.1.1 port = 2525 to any

ipnat.conf

    # let our private lab network talk to network outside
    map net0 172.16.0.0/16 -> 0.0.0.0/32
    rdr net0 192.168.1.1/32 port 25 -> 172.16.1.15 port 2525

ippool-1.conf

    table role = ipf type = tree number = 1 { 8.8.8.8, 10.0.0.0/32 };

In order to convert the IPF configuration above, we run ipf2pf as follows:

    ipf2pf -4 ipf.conf -n ipnat.conf -p ippool-1.conf -o pf.conf

The resulting pf.conf looks like this:
    #
    # File was generated by ipf2pf(7) service during system upgrade. The
    # service attempted to convert your IPF rules to PF (the new firewall)
    # rules. You should check if firewall configuration here, suggested by
    # ipf2pf, still meets your network policy requirements.
    #
    #
    # Unlike IPF, PF intercepts packets on loopback by default.  IPF does not
    # intercept packets bound to loopback. To turn off the policy check for
    # loopback packets, we suggest to use command below:
    set skip on lo0
    #
    # PF does IP reassembly by default. It looks like your IPF does not have IP
    # reassembly enabled. Therefore the feature is turned off.
    #
    set reassemble no
    # In case you change your mind and decide to enable IP reassembly
    # delete the line above. Also to improve interoperability
    # with broken IP stacks, tell PF to ignore the 'DF' flag when
    # doing reassembly. Uncommenting line below will do it:
    #
    # set reassemble yes no-df
    #
    # PF tables are the equivalent of ippools in IPF. For every pool
    # in legacy IPF configuration, ipf2pf creates a table and
    # populates it with IP addresses from the legacy IPF pool. ipf2pf
    # creates persistent tables only.
    #
    table <pool_1> persist { 8.8.8.8, 10.0.0.0 }
    #
    # Unlike IPF, the PF firewall implements NAT as yet another
    # optional action of a regular policy rule. To keep PF
    # configuration close to the original IPF, consider using
    # the 'match' action in PF rules, which translate addresses.
    # There is one caveat with 'match'. You must always write a 'pass'
    # rule to match the translated packet. Packets are not translated
    # unless they hit a subsequent pass rule. Otherwise, the "match"
    # rule has no effect.
    #
    # It's also important to avoid applying nat rules to DHCP/BOOTP
    # requests. The following stateful rule, when added above the NAT
    # rules, will avoid that for us.
    pass out quick proto udp from 0.0.0.0/32 port 68 to 255.255.255.255/32 port 67
    # There are 2 such rules in your IPF ruleset
    #
    match out on net0 inet from 172.16.0.0/16 to any nat-to (net0)
    match in on net0 inet from any to 192.168.1.1 rdr-to 172.16.1.15 port 2525
    #
    # The pass rules below make sure rdr/nat -to actions
    # in the match rules above will take effect.
    pass out all
    pass in all
    block drop in on net0 inet all
    #
    # IPF rule specifies either a port match or return-rst action,
    # but does not specify a protocol (TCP or UDP). PF requires a port
    # rule to include a protocol match using the 'proto' keyword.
    # ipf2pf always assumes and enters a TCP port number
    #
    pass in on net0 inet proto tcp from any to 192.168.1.1 port = 80 no state
    #
    # IPF rule specifies either a port match or return-rst action,
    # but does not specify a protocol (TCP or UDP). PF requires a port
    # rule to include a protocol match using the 'proto' keyword.
    # ipf2pf always assumes and enters a TCP port number
    #
    pass in on net0 inet proto tcp from any to 172.16.1.15 port = 2525 no state
    #
    # IPF rule specifies either a port match or return-rst action,
    # but does not specify a protocol (TCP or UDP). PF requires a port
    # rule to include a protocol match using the 'proto' keyword.
    # ipf2pf always assumes and enters a TCP port number
    #
    pass in on net0 inet proto tcp from <pool_1> to any port = 22 flags any keep state (sloppy)
    pass out on net0 inet all flags any keep state (sloppy)
    #
    # IPF rule specifies either a port match or return-rst action,
    # but does not specify a protocol (TCP or UDP). PF requires a port
    # rule to include a protocol match using the 'proto' keyword.
    # ipf2pf always assumes and enters a TCP port number
    #
    pass out on net0 inet proto tcp from 192.168.1.1 port = 80 to any no state
    #
    # IPF rule specifies either a port match or return-rst action,
    # but does not specify a protocol (TCP or UDP). PF requires a port
    # rule to include a protocol match using the 'proto' keyword.
    # ipf2pf always assumes and enters a TCP port number
    #
    pass out on net0 inet proto tcp from 192.168.1.1 port = 2525 to any no state
As you can see, the resulting pf.conf file is annotated with comments that explain what happened to the original ipf.conf.

What has changed since Oracle Solaris 11.3?

If you already have experience with PF on Solaris, you will notice the following changes since Oracle Solaris 11.3:

Firewall in degraded state

The firewall service enters the degraded state whenever it is enabled with the default configuration shipped in the package. This notifies the administrator that the system enabled the firewall with an empty configuration. As soon as you alter /etc/firewall/pf.conf and refresh the firewall service, the service becomes online (see the sketch at the end of this post).

Firewall in maintenance state

The firewall enters the maintenance state as soon as the service tries to load a syntactically invalid configuration. If that happens, the SMF method inserts hardwired fallback rules, which drop all inbound sessions except ssh.

Support for IP interface groups

11.4 comes with support for 'firewall interface groups'. This feature comes from upstream; the idea is best described by its author, Henning Brauer [ goo.gl/eTjn54 ]. Solaris 11.4 brings the same feature with a Solaris flavor: interface groups are exposed as the interface property 'fwifgroup'. To assign interface net0 to group alpha, use ipadm(8):

    ipadm set-ifprop -p fwifgroup=alpha net0

To show the firewall interface groups net0 is a member of, use show-ifprop:

    ipadm show-ifprop -p fwifgroup net0

A firewall interface group is treated like any other interface a PF rule can be bound to. The rule below applies to all packets bound to firewall interface group alpha:

    pass on alpha all

The firewall interface group is just a kind of tag you assign to an interface, so PF rules can refer to such interfaces using tags instead of names.

Known Issues

Not everything goes as planned. A bunch of changes missed the beta-release build. These are known issues and will be addressed in the final release.

firewall:framework SMF instance

This will be removed in the final release. The feature presents yet another hurdle in the upgrade path, so it has been decided to postpone it. It is just unfortunate that we failed to remove it for beta.

Support for _auto/_static anchors

These are closely related to the firewall:framework instance. Support for these two anchors is postponed.

'set skip on lo0' in default pf.conf

The 'set skip on lo0' line is going to disappear from the default pf.conf shipped by the firewall package.

Lack of firewall in Solaris 10 BrandZ

With IPF gone, there is no firewall available in Solaris 10 BrandZ. The only solution for Solaris 10 deployments that require firewall protection is to move those Solaris 10 BrandZ behind a firewall that runs outside of them. The firewall can be either another network appliance, or a Solaris 11 zone with PF installed. The Solaris 11 zone will act as an L3 router forwarding all traffic for the Solaris 10 zone.
If such a solution does not work for your deployment or scenario, we hope to hear back from you with some details of your story. Although there are differences between IPF and PF, the migration should not be that hard, as both firewalls share a similar concept. We keep PF on Oracle Solaris as close to upstream as possible, so many guides and recipes found on the Internet should work for Oracle Solaris as well; just keep in mind that NAT-64, pfsync, and bandwidth management were not ported to Oracle Solaris. We hope you'll enjoy the PF ride and will stay tuned for more on the Oracle Solaris 11.4 release.

Links

Oracle Solaris Documentation on IPF, PF, and comparing IPF to PF.
[ goo.gl/YyDMVZ ] https://sourceforge.net/projects/ipfilter/
[ goo.gl/UgwCth ] https://www.openbsd.org/faq/pf/
[ goo.gl/eTjn54 ] https://marc.info/?l=openbsd-misc&m=111894940807554&w=2
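As promised under 'What has changed', here is a minimal sketch of the degraded-to-online transition after the firewall package is installed (the instance name network/firewall:default is assumed, and the STIME column is elided):

    # svcs firewall
    STATE          STIME    FMRI
    degraded       ...      svc:/network/firewall:default
    # vi /etc/firewall/pf.conf
    # svcadm refresh network/firewall:default
    # svcs firewall
    STATE          STIME    FMRI
    online         ...      svc:/network/firewall:default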


More adventures in Software FMA

Those of you that have ever read my own blog (Ghost Busting) will know I've a long-standing interest in trying to get the systems we all use and love to be easier to fix, and ideally to tell you themselves what's wrong with them. Back in Oracle Solaris 11 we added the concept of a Software Fault Management Architecture (SWFMA), with two types of event modelled as FMA defects: one was panic events, the other SMF service state transitions. This also allowed for notification of all FMA events via a new service facility (svccfg setnotify) over SNMP and email. With a brief diversion to make crash dumps smaller, faster, and easier to diagnose, we've come back to the SWFMA concept and extended it in two ways.

Corediag

It's pretty easy to see that the same concept for modelling a system panic as an FMA event could be applied to user-level process core dumps. So that's what we've done. coreadm(8) has been extended so that by default a diagnostic core file is created for every Solaris binary that crashes. This is smaller than a regular core dump. We then have a service (svc:/system/coremon:default) which runs a daemon (coremond) that monitors for these being created and summarizes them. By default the summary file is kept, though you can use coreadm to remove them. coremond then turns these into FMA alerts. These are considered more informational than full-on defects, but are still present in the FMA logs. You can run fmadm(8) as in the screen shot below to see any alerts. This was one I got when debugging a problem with a core file.

Stackdiag

Over the years we've learned that for known problems there is a very good correlation between the raw stack trace and an existing bug, and we've had tools internally to do this for years. We've mined our bug database and extracted stack information; this is bundled up into a file delivered by pkg:/system/diagnostic/stackdb. Any time FMA detects there is stack telemetry in an FMA event, and the FMA message indicates we could be looking at a software bug, it will trigger a lookup and try to add the bug id for significant threads to the FMA message. So if you look in the message above you'll see the description that says:

Description : A diagnostic core file was dumped in /var/diag/e83476f7-104d-4c85-9de4-bf7e45f261d1 for RESOURCE /usr/bin/pstack whose ASRU is . The ASRU is the Service FMRI for the resource and will be NULL if the resource is not part of a service. The following are potential bugs. stack[0] - 24522117

The stack[0] shows which thread within the process caused the core dump, and the bug id following it is one you can search for in MOS to find any solution records. Alternatively, you can see if you've already got the fix in your Oracle Solaris package repository by using pkg search to search for the bug id.
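For example, taking the bug id from the message above, such a repository search could look like this (a sketch; -r queries the configured package repositories rather than the installed image):

    # pkg search -r 24522117

If the search turns up a package at a version newer than the one you have installed, the fix is already available in your repository.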
Hang on. If I've got the fix in the package repository, why isn't it installed? The stackdb package is what we refer to as "unincorporated". It is just a data file with no code in it, and the latest version you have available will be installed (not just the one matching the SRU you're on). So you can update it regularly to get the latest diagnosis, without updating the SRU on your system or rebooting it. This means you may get information about bugs which are already fixed, and whose fix is available, while you are on older SRUs.

Software FMA

We believe that these are the first steps to a self-diagnosing system, and that they will reduce the need to log SRs for bugs we already know about. Hopefully this will mean you can get the fixes quicker, with minimal process or fuss.
