
News, tips, partners, and perspectives for the Oracle Solaris operating system


Perspectives

Programming in C: Few Tidbits #8

1) Function Pointers

Declaring Function Pointers

Similar to a variable declared as a pointer to some data type, a variable can also be declared to be a pointer to a function. Such a variable stores the address of a function that can later be called through that function pointer. In other words, function pointers point to executable code rather than to data like typical pointers. eg.,

void (*func_ptr)();

In the above declaration, func_ptr is a variable that can point to a function that takes no arguments and returns nothing (void). The parentheses around the function pointer cannot be removed; doing so makes the declaration a function that returns a void pointer. The declaration itself doesn't point to anything, so a value has to be assigned to the function pointer, typically the address of the target function to be executed.

Assigning Function Pointers

If a function named dummy is already defined, the following assignment makes the func_ptr variable point to the function dummy. eg.,

void dummy() { ; }
func_ptr = dummy;

In the above example, the function's name was used to assign that function's address to the function pointer. Using the address-of operator (&) is another way. eg.,

void dummy() { ; }
func_ptr = &dummy;

The two sample assignments above highlight the fact that, similar to arrays, a function's address can be obtained either by using the address-of operator (&) or by simply specifying the function name; hence the use of the address-of operator is optional. Here's an example proving that.
% cat funcaddr.c
#include <stdio.h>

void foo() { ; }

void main()
{
    printf("Address of function foo without using & operator = %p\n", foo);
    printf("Address of function foo using & operator = %p\n", &foo);
}
% cc -o funcaddr funcaddr.c
% ./funcaddr
Address of function foo without using & operator = 10b6c
Address of function foo using & operator = 10b6c

Using Function Pointers

Once we have a function pointer variable pointing to a function, we can call the function it points to using that variable as if it were the actual function name. Dereferencing the function pointer is optional, just like using the & operator during assignment; the dereferencing happens automatically if not done explicitly. eg., the following two function calls are equivalent and exhibit the same behavior.

func_ptr();
(*func_ptr)();

Complete Example

Here is one final example for the sake of completeness. It demonstrates the execution of a couple of arithmetic functions using function pointers, and also highlights the optional use of the & operator and of pointer dereferencing.

% cat funcptr.c
#include <stdio.h>

int add(int first, int second)
{
    return (first + second);
}

int multiply(int first, int second)
{
    return (first * second);
}

void main()
{
    int (*func_ptr)(int, int);                        /* declaration */

    func_ptr = add;                                   /* assignment (auto func address) */
    printf("100+200 = %d\n", (*func_ptr)(100, 200));  /* execution (dereferencing) */

    func_ptr = &multiply;                             /* assignment (func address using &) */
    printf("100*200 = %d\n", func_ptr(100, 200));     /* execution (auto dereferencing) */
}
% cc -o funcptr funcptr.c
% ./funcptr
100+200 = 300
100*200 = 20000

Few Practical Uses of Function Pointers

Function pointers are convenient and useful while writing functions that sort data. The standard C library includes the qsort() function to sort data of any type (integers, floats, strings). The last argument to qsort() is a function pointer pointing to the comparison function.
Function pointers are useful for writing callback functions, where a function (executable code) is passed as an argument to another function that is expected to execute it (call back the function sent as an argument) at some point. In both examples above, function pointers are used to pass functions as arguments to other functions. In some cases function pointers may make the code cleaner and more readable. For example, an array of function pointers may simplify a large switch statement.

2) Printing Unicode Characters

Here's one possible way: make use of wide characters. Wide character strings can represent Unicode character values (code points), and the standard C library provides wide-character functions.

Include the header file wchar.h. Set a proper locale to support wide characters. Print the wide character(s) using standard printf and the "%ls" format specifier, or using wprintf to output formatted wide characters.

The following rudimentary code sample prints random currency symbols and a name in Telugu script using both printf and wprintf function calls.

% cat unicode.c
#include <wchar.h>
#include <locale.h>
#include <stdio.h>

int main()
{
    setlocale(LC_ALL, "en_US.UTF-8");
    wprintf(L"\u20AC\t\u00A5\t\u00A3\t\u00A2\t\u20A3\t\u20A4");
    wchar_t wide[4] = { 0x0C38, 0x0C30, 0x0C33, 0 };
    printf("\n%ls", wide);
    wprintf(L"\n%ls", wide);
    return 0;
}
% cc -o unicode unicode.c
% ./unicode
€	¥	£	¢	₣	₤
సరళ
సరళ

Here is one website where numerical values for various Unicode characters can be found.


Oracle Solaris Cluster

Oracle Solaris Cluster Centralized Installation

Oracle Solaris Cluster Centralized Installation is available with Oracle Solaris Cluster 4.4. The tool, called the "centralized installer", provides complete cluster software package installation from one of the cluster nodes. Centralized installation offers a wizard-style installer, "clinstall", in both text-based and GUI modes to improve ease of use. The centralized installer can be used during new cluster installs, when adding new nodes to an existing cluster, or to install additional cluster software on an existing cluster.

How to install Oracle Solaris Cluster packages using clinstall

Refer to https://docs.oracle.com/cd/E69294_01/html/E69313/gsrid.html#scrolltoc for prerequisites before you begin the installation.

1) Restore external access to remote procedure call (RPC) communication on all cluster nodes.

phys-schost# svccfg -s svc:/network/rpc/bind setprop config/local_only = false
phys-schost# svcadm refresh network/rpc/bind:default

2) Set the location of the Oracle Solaris Cluster 4.4 package repository on all nodes.

phys-schost# pkg set-publisher -p file:///mnt/repo

3) Install the package "ha-cluster/system/preinstall" on all nodes.

phys-schost# pkg install --accept ha-cluster/system/preinstall

4) Determine a cluster node to be used as the control node, from which installation commands are issued. Verify that the control node can reach each of the other cluster nodes.

5) Authorize acceptance of cluster installation commands by the control node. Perform this step on all the nodes.

phys-schost# clauth enable -n phys-control

6) From the control node, launch the clinstall installer. To launch the graphical user interface (GUI), use the -g option.

phys-control# clinstall -g

To launch the interactive text-based utility, use the -t option.

phys-control# clinstall -t

Example 1: Cluster initial installation from an IPS repository using the interactive text-based utility (clinstall -t).
Select initial installation.

phys-control# clinstall -t

  *** Main Menu ***
    1) Initial Installation
    2) Additional Cluster Package Installation
    ?) Help with menu options
    q) Quit
  Option: 1

Specify the nodes on which the software packages need to be installed.

  >>> Enter nodes to install Oracle Solaris Cluster package <<<
    This Oracle Solaris Cluster release supports up to 16 nodes.
    Type the names of the nodes you want to install with Oracle Solaris
    Cluster software packages. Enter one node name per line. When
    finished, type Control-D:
    Node name:  phys-control
    Node name (Control-D to finish):  phys-schost
    Node name (Control-D to finish):  ^D
  This is the complete list of nodes:
    phys-control
    phys-schost
  Is it correct (y/n) ?  y
  Checking all nodes...
  Checking "phys-control"...
  NAME (PUBLISHER)                            VERSION      IFO
  ha-cluster/system/preinstall (ha-cluster)   4.4-0.21.0   i--
  Solaris 11.4.0.15.0 i386...ready
  Checking "phys-schost"...Solaris 11.4.0.15.0 i386...ready
  All nodes are ready for the cluster installation.
  Press Enter to continue...

Select the installation source: either an IPS repository or an ISO file, usually downloaded from the Oracle support website.

  Select the source of Oracle Solaris Cluster software
    1) Install from IPS Repository
    2) Install from ISO Image File
  Option:  1
  Enter the IPS repository URI [https://pkg.oracle.com/ha-cluster/release]:
  Accessing the repository for the cluster components ... [|]

Select the cluster component that you want to install.
  Select an Oracle Solaris Cluster component that you want to install:
     Package                        Description
  1) ha-cluster-framework-minimal   OSC Framework minimal group package
  2) ha-cluster-framework-full      OSC Framework full group package
  3) ha-cluster-full                OSC full installation group package
  4) ha-cluster/system/manager      OSC Manager
  q) Done
  Option: 1
  The package, "ha-cluster-framework-minimal" is selected to install on the following nodes.
  phys-control,phys-schost
  Do you want to continue with the installation (y/n) ?  y

Upon completion, the cluster packages in the ha-cluster-framework-minimal group package are installed on phys-control and phys-schost. The administrator can now proceed to do the initial cluster configuration.

Example 2: Additional cluster package installation from an IPS repository using the text-based utility.

phys-control# clinstall -t

  *** Main Menu ***
    1) Initial Installation
    2) Additional Cluster Package Installation
    ?) Help with menu options
    q) Quit
  Option:  2

  >>> Enter nodes to install Oracle Solaris Cluster package <<<
    This Oracle Solaris Cluster release supports up to 16 nodes.
    Type the names of the nodes you want to install with Oracle Solaris
    Cluster software packages. Enter one node name per line. When
    finished, type Control-D:
    Node name:  phys-control
    Node name (Control-D to finish):  phys-schost
    Node name (Control-D to finish):  ^D
  This is the complete list of nodes:
    phys-control
    phys-schost
  Is it correct (y/n) ?  y
  Checking all nodes...
  Checking "phys-control"...
  NAME (PUBLISHER)                            VERSION      IFO
  ha-cluster/system/preinstall (ha-cluster)   4.4-0.21.0   i--
  Solaris 11.4.0.15.0 i386...ready
  Checking "phys-schost"...Solaris 11.4.0.15.0 i386...ready
  All nodes are ready for the cluster installation.
  Press Enter to continue...
  Select the source of Oracle Solaris Cluster software
    1) Install from IPS Repository
    2) Install from ISO Image File
  Option:  1
  Enter the IPS repository URI [https://pkg.oracle.com/ha-cluster/release]:
  Accessing the repository for the cluster components ... [|]

  Select an Oracle Solaris Cluster component that you want to install
     Package                        Description
  1) *data-service                  Select an Oracle Solaris Cluster data-service to i
  2) ha-cluster-data-services-full  OSC Data Services full group package
  3) ha-cluster-framework-full      OSC Framework full group package
  4) ha-cluster/system/manager      OSC Manager
  5) ha-cluster-geo-full            OSC Disaster Recovery Framework full group package
  6) ha-cluster-full                OSC full installation group package
  q) Done
  Option:  5
  The package, "ha-cluster-geo-full" is selected to install on the following nodes.
  phys-control, phys-schost
  Do you want to continue with the installation (y/n) ?  y

Upon completion, the Oracle Solaris Cluster Disaster Recovery Framework software is installed on phys-control and phys-schost. The administrator can now proceed to configure the disaster recovery framework on the cluster.

Example 3: Cluster initial installation from an ISO image using the graphical user interface (GUI), clinstall with the -g option.

phys-control# clinstall -g

Upon completion, the cluster packages in the ha-cluster-full group package are installed on phys-control and phys-schost. The administrator can now proceed to do the initial cluster configuration.


Oracle Solaris 10

Oracle Solaris 10 in the Oracle Cloud Infrastructure

With the release of the October 2018 Solaris 10 Extended Support Recommended patch set, you can now run Solaris 10 in Oracle Cloud. I thought it would be good to document the main steps for getting an image you can run in OCI. The high-level steps are:

  Create a Solaris 10 image using VirtualBox and patch it with the October 2018 patch set
  Unconfigure it and shut it down
  Upload it to Oracle Cloud object storage
  Create a custom image from the object you've just uploaded
  Create a compute instance of the custom image
  Boot it up and perform configuration tasks

Creating a Solaris 10 image

1) Download Solaris 10. I chose to download the Oracle VM VirtualBox template, which comes preconfigured and installed with Solaris 10 1/13, the last update release of Solaris 10. You could equally install from the ISO; just make sure you pick vmdk as the disk image format.

2) Install VirtualBox on any suitable x86 host and operating system. I'm using Oracle Linux 7, which is configured to kickstart in our lab, but you could download it from Oracle at https://www.oracle.com/linux. One reason I picked Oracle Linux 7 is to make it easier to run the OCI tools for uploading images to Oracle Cloud Infrastructure. VirtualBox can be downloaded from http://virtualbox.org, or better, it's in the Oracle Linux 7 yum repositories; just make sure the addons and developer repos are enabled in the file /etc/yum.repos.d/public-yum-ol7.repo, then run

# yum install VirtualBox-5.2.x86_64

3) Import the VirtualBox template you downloaded above using the import appliance menu. On the Import Virtual Appliance dialog I increased the amount of memory and changed the location of the root disk. I also changed the imported appliance to run with USB 1.1 as I haven't got the extension pack installed, but you probably should install that anyway.
When it comes up it'll be using DHCP, so you should be able to just select DHCP during the usual sysconfig phase, select the timezone and root password, and it'll eventually come up with a desktop login. Now you can see we've got Solaris 10 up and running. For good measure I updated the guest additions; they're installed anyway, but at an older version, so it works better with the new versions.

4) The next step is to download the recommended patch set, specifically the October 2018 patch set, which contains some fixes needed to work in OCI. It is available from https://updates.oracle.com/download/28797770.html and is 2.1GB in size, so it'll take some time. Then we simply extract the zip archive, change directory into the directory we've just extracted, and run

./installpatchset --s10patchset

(If I was doing this on a real machine I'd probably create a new Live Upgrade boot environment and patch that, having scoured the README.) Currently 403 patches are analysed; that will change over time.

Shut it down and prepare it for use in OCI

When you reboot, be sure to read the messages at the end of the patch install phase, and the README. Particularly this section:

"If the "-B" Live Upgrade flag is used, then the luactivate command will need to be run, and either an init(1M) or a shutdown(1M) will be needed to complete activation of the boot environment. A reboot(1M) will not complete activation of a boot environment following an luactivate."

Two more things to do before we shut down though: first, remove the SUNWvboxguest package; second, sys-unconfig the VM, so we get to configure it properly on reboot.

# pkgrm SUNWvboxguest
# sys-unconfig

Upload it to Oracle Cloud object storage

So we now have a suitably patched Solaris 10 image, ready to upload to Oracle Cloud. To do this you need to have the OCI tools installed on your Linux machine. Doing that will be the subject of another blog.
But there's pretty good documentation here too (which is all I followed to create this blog). Assuming you now have the OCI tools working and in your path, you upload the disk image using this command:

$ oci os object put --bucket-name images --file Solaris10_1-13-disk1.vmdk

It'll use the keys you've configured and uploaded to the console, and is surprisingly quick to upload the image. Given this disk file is ~20GB in size, it only took about 10 minutes to upload.

[oci@ol7-1]$ oci os object put --bucket-name images --file Solaris10_1-13-disk1.vmdk
Upload ID: c94aaf0d-a0e2-d3d1-7fb6-5aed125c3921
Split file into 145 parts for upload.
Uploading object  [###############################-----]   87%  0d 00:01:26

Once it's there you can see it in the object storage pane of the OCI console, and critically you need to get the object storage URI to allow you to create a custom image from it.

Create a custom image

Then go to the compute section and create a custom image, selecting the "Emulated" mode and giving it the object storage URI. It takes a while for the image to be created, but once it is you can deploy it multiple times.

Create a compute instance

Now go to the Compute menu and create an instance. The key things at this stage are to select the custom image you just created and an appropriate VM shape. You will then be shown a page like this one. And finally you can use VNC to connect to the console by using your RSA public key and creating a console connection. If you select the "connect with VNC" option from the three dots on the right of the console connection, it gives you the command to set up an ssh tunnel from your system to the console.

Boot it up and perform configuration tasks

You connect with vncviewer :5900 and you'll see the VM has panicked. Solaris 10 uses an older version of grub, which can't easily find the root disk if the device configuration changes. So we need to trick it into finding the rpool. To do this you can boot the failsafe archive and mount the rpool.
Then you touch /a/reconfigure and reboot; next time through, the system should boot up correctly. It does take a while after loading the ORACLE SOLARIS image for the system to actually boot, so don't panic if you see a blue screen for a while before seeing the SunOS Release boot messages. Of course we remembered to sys-unconfig before shutting the VM down, so we will have to run through the sysconfig setup. Just remember to set it up with DHCP. You do get asked for nameservice info; you will probably want to use the local DNS resolver at 169.254.169.254. Oracle Cloud also has lots of more specific options for managing your own DNS records and zones. If you forget to remove the SUNWvboxguest package, the X server will fail to start. And there you have it: Oracle Solaris 10 running in OCI.


Oracle Solaris Cluster

Cluster File System with ZFS - Introduction and Configuration

Oracle Solaris Cluster, as of the 4.3 release, has support for Cluster File System with UFS, and for ZFS as a failover file system. For application deployments that require accessing the same file system across multiple nodes, ZFS as a failover file system is not a suitable solution. ZFS is the preferred data storage management on Oracle Solaris 11 and has many advantages over UFS. With Oracle Solaris Cluster 4.4, you can now have both: ZFS and global access to ZFS file systems.

Oracle Solaris Cluster 4.4 has added support for Cluster File System with ZFS. With this new feature, you can now make ZFS file systems accessible from multiple nodes and run applications from the same file systems simultaneously on those nodes. It must be noted that a zpool for globally mounted ZFS file systems does not actually mean a global ZFS pool; instead, there is a Cluster File System layer on top of ZFS that makes the file systems of the ZFS pool globally accessible. The following procedures explain and illustrate a couple of methods to bring up this configuration.

How to create a zpool for globally mounted ZFS file systems:

1) Identify the shared device to be used for ZFS pool creation.

To configure a zpool for globally mounted ZFS file systems, choose one or more multi-hosted devices from the output of the cldevice show command.

phys-schost-1# cldevice show | grep Device

In the following example, the entries for DID devices /dev/did/rdsk/d1 and /dev/did/rdsk/d4 show that those devices are connected only to phys-schost-1 and phys-schost-2 respectively, while /dev/did/rdsk/d2 and /dev/did/rdsk/d3 are accessible by both nodes of this two-node cluster, phys-schost-1 and phys-schost-2. In this example, DID device /dev/did/rdsk/d3 with device name c1t6d0 will be used for global access by both nodes.
# cldevice show | grep Device

=== DID Device Instances ===
DID Device Name:                                /dev/did/rdsk/d1
  Full Device Path:                               phys-schost-1:/dev/rdsk/c0t0d0
DID Device Name:                                /dev/did/rdsk/d2
  Full Device Path:                               phys-schost-1:/dev/rdsk/c0t6d0
  Full Device Path:                               phys-schost-2:/dev/rdsk/c0t6d0
DID Device Name:                                /dev/did/rdsk/d3
  Full Device Path:                               phys-schost-1:/dev/rdsk/c1t6d0
  Full Device Path:                               phys-schost-2:/dev/rdsk/c1t6d0
DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                               phys-schost-2:/dev/rdsk/c1t6d1

2) Create a ZFS pool on the DID device(s) that you chose.

phys-schost-1# zpool create HAzpool c1t6d0
phys-schost-1# zpool list
NAME      SIZE  ALLOC   FREE  CAP  DEDUP    HEALTH  ALTROOT
HAzpool  49.8G  2.22G  47.5G   4%  1.00x    ONLINE  /

3) Create ZFS file systems on the pool.

phys-schost-1# zfs create -o mountpoint=/global/fs1 HAzpool/fs1
phys-schost-1# zfs create -o mountpoint=/global/fs2 HAzpool/fs2

4) Create files to show global access of the file systems.

Copy some files to the newly created file systems. These files will be used in the procedures below to demonstrate that the file systems are globally accessible by all cluster nodes.

phys-schost-1# cp /usr/bin/ls /global/fs1/
phys-schost-1# cp /usr/bin/date /global/fs2/
phys-schost-1# ls -al /global/fs1/ /global/fs2/
/global/fs1/:
total 120
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       57576 Oct 8 23:22 ls
/global/fs2/:
total 7
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct 8 23:22 date

At this point the ZFS file systems of the zpool are accessible only on the node where the zpool is imported. There are two ways of configuring a zpool for globally mounted ZFS file systems.

Method 1: Using a device group

You would use this method when the requirement is only to provide global access to the ZFS file systems, and it is not yet known how HA services will be created using the file systems or which cluster resource groups will be created.

1) Create a device group of the same name as the zpool you created above, of type zpool, with poolaccess set to global.

phys-schost-1# cldevicegroup create -p poolaccess=global -n \
phys-schost-1,phys-schost-2 -t zpool HAzpool

Note: The device group must have the same name, HAzpool, as chosen for the pool. The poolaccess property is set to global to indicate that the file systems of this pool will be globally accessible across the nodes of the cluster.

2) Bring the device group online.

phys-schost-1# cldevicegroup online HAzpool

3) Verify the configuration.
phys-schost-1# cldevicegroup show

=== Device Groups ===
Device Group Name:                              HAzpool
  Type:                                            ZPOOL
  failback:                                        false
  Node List:                                       phys-schost-1, phys-schost-2
  preferenced:                                     false
  autogen:                                         false
  numsecondaries:                                  1
  ZFS pool name:                                   HAzpool
  poolaccess:                                      global
  readonly:                                        false
  import-at-boot:                                  false
  searchpaths:                                     /dev/dsk

phys-schost-1# cldevicegroup status

=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name     Primary         Secondary       Status
-----------------     -------         ---------       ------
HAzpool               phys-schost-1   phys-schost-2   Online

In this configuration, the zpool is imported on the node that is primary for the zpool device group, but the file systems in the zpool are mounted globally. Execute the files copied in step 4 of the previous section from a different node. It can be observed that the file systems are mounted globally and accessible across all nodes.

phys-schost-2# /global/fs1/ls -al /global/fs2
total 56
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct 8 23:22 date
phys-schost-2# /global/fs2/date
Fri Oct 9 04:08:59 PDT 2018

You can also verify that a newly created ZFS file system is immediately accessible from all nodes by executing the commands below.
From the cldevicegroup status above, it can be observed that phys-schost-1 is the primary node for the device group. Execute the command below on the primary node:

phys-schost-1# zfs create -o mountpoint=/global/fs3 HAzpool/fs3

Then, from a different node, verify that the file system is accessible.

phys-schost-2# df -h /global/fs3
Filesystem             Size   Used  Available Capacity  Mounted on
HAzpool/fs3             47G    40K        47G     1%    /global/fs3

Method 2: Using an HAStoragePlus resource

You would typically use this method when you have planned how HA services in resource groups will use the globally mounted file systems, and expect dependencies from the resources managing the application on a resource managing the file systems. A device group of type zpool with poolaccess set to global is created when an HAStoragePlus resource is created with the GlobalZpools property defined, if such a device group has not already been created.

1) Create an HAStoragePlus resource for a zpool for globally mounted file systems and bring it online.

Note: The resource group can be scalable or failover, as needed by the configuration.

phys-schost-1# clresourcegroup create hasp-rg
phys-schost-1# clresource create -t HAStoragePlus -p \
GlobalZpools=HAzpool -g hasp-rg hasp-rs
phys-schost-1# clresourcegroup online -eM hasp-rg

2) Verify the configuration.
phys-schost-1# clrs status hasp-rs

=== Cluster Resources ===
Resource Name       Node Name        State        Status Message
-------------       ---------        -----        --------------
hasp-rs             phys-schost-1    Online       Online
                    phys-schost-2    Offline      Offline

phys-schost-1# cldevicegroup show

=== Device Groups ===
Device Group Name:                              HAzpool
  Type:                                            ZPOOL
  failback:                                        false
  Node List:                                       phys-schost-1, phys-schost-2
  preferenced:                                     false
  autogen:                                         true
  numsecondaries:                                  1
  ZFS pool name:                                   HAzpool
  poolaccess:                                      global
  readonly:                                        false
  import-at-boot:                                  false
  searchpaths:                                     /dev/dsk

phys-schost-1# cldevicegroup status

=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name     Primary         Secondary       Status
-----------------     -------         ---------       ------
HAzpool               phys-schost-1   phys-schost-2   Online

Execute the files copied in step 4 of the previous section from a different node. It can be observed that the file systems are mounted globally and accessible across all nodes.

phys-schost-2# /global/fs1/ls -al /global/fs2
total 56
drwxr-xr-x   3 root     root           4 Oct 8 23:22 .
drwxr-xr-x   5 root     sys            5 Oct 8 23:21 ..
-r-xr-xr-x   1 root     root       24656 Oct 8 23:22 date
phys-schost-2# /global/fs2/date
Fri Oct 9 04:23:26 PDT 2018

You can also verify that a newly created ZFS file system is immediately accessible from all nodes by executing the commands below.
From the cldevicegroup status above, it can be observed that phys-schost-1 is the primary node for the device group. Execute the command below on the primary node:

phys-schost-1# zfs create -o mountpoint=/global/fs3 HAzpool/fs3

Then, from a different node, verify that the file system is accessible.

phys-schost-2# df -h /global/fs3
Filesystem             Size   Used  Available Capacity  Mounted on
HAzpool/fs3             47G    40K        47G     1%    /global/fs3

How to configure a zpool for globally mounted ZFS file systems for an already registered device group:

There might be situations where a system administrator used Method 1 to meet the global access requirement for application admins to install an application, but later finds a requirement for an HAStoragePlus resource for HA services deployment. In those situations there is no need to undo the steps done in Method 1 and redo them with Method 2. HAStoragePlus also supports zpools for globally mounted ZFS file systems with already manually registered zpool device groups. The following steps illustrate how this configuration can be achieved.

1) Create an HAStoragePlus resource with an existing zpool for globally mounted ZFS file systems and bring it online.

Note: The resource group can be scalable or failover, as needed by the configuration.

phys-schost-1# clresourcegroup create hasp-rg
phys-schost-1# clresource create -t HAStoragePlus -p \
GlobalZpools=HAzpool -g hasp-rg hasp-rs
phys-schost-1# clresourcegroup online -eM hasp-rg

2) Verify the configuration.
phys-schost-1# clresource show -p GlobalZpools hasp-rs
  GlobalZpools:                                 HAzpool

phys-schost-1# cldevicegroup status

=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name     Primary         Secondary       Status
-----------------     -------         ---------       ------
HAzpool               phys-schost-1   phys-schost-2   Online

Note: When the HAStoragePlus resource is deleted, the zpool device group is not automatically deleted. While the zpool device group exists, the ZFS file systems in the zpool will be mounted globally when the device group is brought online (# cldevicegroup online HAzpool).

For more information, see How to Configure a ZFS Storage Pool for Cluster-wide Global Access Without HAStoragePlus. For information on how to use HAStoragePlus to manage ZFS pools for global access by data services, see How to Set Up an HAStoragePlus Resource for a Global ZFS Storage Pool in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.


Oracle Announces 2018 Oracle Excellence Awards – Congratulations to our “Leadership in Infrastructure Transformation" Winners

We are pleased to announce the 2018 Oracle Excellence Awards “Leadership in Infrastructure Transformation” Winners. This elite group of recipients includes customers and partners who are using Oracle Infrastructure Technologies to accelerate innovation and drive business transformation by increasing agility, lowering costs, and reducing IT complexity. This year, our 10 award recipients were selected from among hundreds of nominations. The winners represent 5 different countries (Austria, Russia, Turkey, Sweden, United States) and 6 different industries (Communications, Financial, Government, Manufacturing, Technology, Transportation). Winners must use at least one, or a combination, of the following for category qualification: Oracle Linux, Oracle Solaris, Oracle Virtualization (VM, VirtualBox), Oracle Private Cloud Appliance, Oracle SuperCluster, Oracle SPARC, or Oracle Storage, Tape/Disk. Oracle is pleased to honor these leaders who have delivered value to their organizations through the use of multiple Oracle technologies, resulting in reduced cost of IT operations, improved time to deployment, and performance and end-user productivity gains. This year's winners are Michael Polepchuk, Deputy Chief Information Officer, BCS Global Markets; Brian Young, Vice President, Cerner; Brian Bream, CTO, Collier IT; Rudolf Rotheneder, CEO, cons4u GmbH; Heidi Ratini, Senior Director of Engineering, IT Convergence; Philip Adams, Chief Technology Officer, Lawrence Livermore National Labs; JK Pareek, Vice President, Global IT and CIO, Nidec Americas Holding Corporation; Baris Findik, CIO, Pegasus Airlines; Michael Myhrén, Senior DBA/Senior Systems Engineer, and Charles Mongeon, Vice President Data Center Solutions and Services, TELUS Corporation. More information on these winners can be found at https://www.oracle.com/corporate/awards/leadership-in-infrastructure-transformation/winners.html


Oracle Solaris/SPARC and Cloud Adjacent Solutions at Oracle Open World 2018

If you want to know which sessions at Oracle Open World 2018 are about Oracle Solaris and SPARC, you can find them in the Oracle Solaris and SPARC Focus On document. Monday at 11:30 a.m., in Moscone South, Bill Nesheim, Senior Vice President, Oracle Solaris Development; Masood Heydari, SVP, Hardware Development; and Brian Bream, Chief Technology Officer, Collier-IT, will be speaking about what's happening with Oracle Solaris and SPARC, and how they are being used in real-world, mission-critical applications to deploy Oracle Solaris/SPARC applications cloud adjacent, giving you all the consistency, simplicity, and security of Oracle Solaris/SPARC along with the monetary advantages of cloud.

Oracle Solaris and SPARC Update: Security, Simplicity, Performance [PRM3358]

Oracle Solaris and SPARC technologies are developed through Oracle's unique vision of coengineering operating systems and hardware together with Oracle Database, Java, and enterprise applications. Discover the advanced features that secure application data with less effort, implement end-to-end encryption, streamline compliance assessment, simplify management of virtual machines, and accelerate performance for Oracle Database and middleware. In this session learn how SPARC/Solaris engineered systems and servers deliver continuous innovation and investment protection to organizations in their journey to the cloud, see examples of cloud-ready infrastructure, and learn Oracle's strategy for future enhancements for Oracle Solaris and SPARC systems.

Masood Heydari, SVP, Hardware Development, Oracle
Brian Bream, Chief Technology Officer, Vaske Computer, Inc.
Bill Nesheim, Senior Vice President, Oracle Solaris Development, Oracle
Monday, Oct 22, 11:30 a.m. - 12:15 p.m. | Moscone South - Room 206

To find out more, join us at Oracle Open World 2018!


Oracle Solaris Security at OOW18

Oracle Open World 2018 is next week! We're busy creating session slides, making Hands-on-Labs, and constructing demos. But I wanted to take a few moments to highlight one of our sessions. Darren Moffat, our Oracle Solaris Architect, will be joined by Thorsten Muehlmann from VP Bank, AG. They will be speaking about how Oracle Solaris helps simplify securing your data center, and about the new security capabilities of Oracle Solaris 11.4:

Oracle Solaris: Simplifying Security and Compliance for On-Premises and the Cloud [PRO1787]

Oracle Solaris is engineered for security at every level. It allows you to mitigate risk and prove on-premises and cloud compliance easily, so you can spend time innovating. In this session learn how Oracle combines the power of industry-standard security features, unique security and antimalware capabilities, and multinode compliance management tools for low-risk application deployments and cloud infrastructure.

Thorsten Muehlmann, Senior Systems Engineer, VP Bank AG
Darren Moffat, Senior Software Architect, Oracle
Monday, Oct 22, 10:30 a.m. - 11:15 a.m. | Moscone South - Room 216

Want to know more about Oracle Solaris Application Sandboxing, multi-node compliance, or per-file auditing, or how a customer uses Oracle Solaris security capabilities in a real-world, mission-critical data center? Join them on Monday at 10:30 a.m. at Oracle Open World!


Oracle Solaris Cluster

Exclusive-IP Zone Cluster - Automatic Network Configuration

Prior to the 4.4 release of Oracle Solaris Cluster (OSC), it was not possible to perform automatic public network configuration for an Exclusive-IP Zone Cluster (ZC) by specifying a System Configuration (SC) profile to the clzonecluster 'install' command. To illustrate this, let us consider the installation of a typical ZC with a separate IP stack and two data-links to achieve the network redundancy needed to run HA services. The data-links, which are VNICs previously created in the global zone, are configured as part of an IPMP group that is needed to host the LogicalHostname or SharedAddress resource IP address. The zone cluster was configured as shown by the clzc 'export' command output below.

root@clusterhost1:~# clzc export zc1
create -b
set zonepath=/zones/zc1
set brand=solaris
set autoboot=false
set enable_priv_net=true
set enable_scalable_svc=false
set file-mac-profile=none
set ip-type=exclusive
add net
set address=192.168.10.10
set physical=auto
end
add attr
set name=cluster
set type=boolean
set value=true
end
add node
set physical-host=clusterhost1
set hostname=zc1-host-1
add net
set physical=vnic3
end
add net
set physical=vnic0
end
add privnet
set physical=vnic1
end
add privnet
set physical=vnic2
end
end
add node
set physical-host=clusterhost2
set hostname=zc1-host-2
add net
set physical=vnic3
end
add net
set physical=vnic0
end
add privnet
set physical=vnic1
end
add privnet
set physical=vnic2
end
end

In OSC 4.3, after installing the ZC with an SC profile and booting it up, the ZC will be in the Online Running state, but without the public network configuration. The following ipadm(1M) commands are needed to set up the static network configuration in each non-global zone of the ZC.
root@zc1-host-1:~# ipadm create-ip vnic0
root@zc1-host-1:~# ipadm create-ip vnic3
root@zc1-host-1:~# ipadm create-ipmp -i vnic0 -i vnic3 sc_ipmp0
root@zc1-host-1:~# ipadm create-addr -T static -a 192.168.10.11/24 sc_ipmp0/v4

root@zc1-host-2:~# ipadm create-ip vnic0
root@zc1-host-2:~# ipadm create-ip vnic3
root@zc1-host-2:~# ipadm create-ipmp -i vnic0 -i vnic3 sc_ipmp0
root@zc1-host-2:~# ipadm create-addr -T static -a 192.168.10.12/24 sc_ipmp0/v4

In OSC 4.4, it is now possible to build an SC profile such that no manual steps are required to complete the network configuration, and all the zones of the ZC can boot up to the "Online Running" state upon first boot of the ZC. How is this made possible in OSC 4.4 on Solaris 11.4? The clzonecluster(8CL) command can recognize sections of the SC profile XML that apply to individual zones of the ZC when those sections are placed within <instances_for_node node_name="ZCNodeName"></instances_for_node> XML tags. Sections of the SC profile that are not within these XML tags apply to all the zones of the ZC. Solaris 11.4 now supports arbitrarily complex network configurations in SC profiles. The following is a snippet of the SC profile that can be used for our typical ZC configuration, derived from the template /usr/share/auto_install/sc_profiles/ipmp_network.xml. The section of the SC profile which is common to all the zones of the ZC has not been included in this snippet.
<instances_for_node node_name="zc1-host-1">
  <service version="1" name="system/identity">
    <instance enabled="true" name="node">
      <property_group name="config">
        <propval name="nodename" value="zc1-host-1"/>
      </property_group>
    </instance>
  </service>
  <service name="network/ip-interface-management" version="1" type="service">
    <instance name="default" enabled="true">
      <property_group name="interfaces" type="application">
        <!-- vnic0 interface configuration -->
        <property_group name="vnic0" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- vnic3 interface configuration -->
        <property_group name="vnic3" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- IPMP interface configuration -->
        <property_group name="sc_ipmp0" type="interface-ipmp">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <property name="under-interfaces" type="astring">
            <astring_list>
              <value_node value="vnic0"/>
              <value_node value="vnic3"/>
            </astring_list>
          </property>
          <!-- IPv4 static address -->
          <property_group name="data1" type="address-static">
            <propval name="ipv4-address" type="astring" value="192.168.10.11"/>
            <propval name="prefixlen" type="count" value="24"/>
            <propval name="up" type="astring" value="yes"/>
          </property_group>
        </property_group>
      </property_group>
    </instance>
  </service>
</instances_for_node>
<instances_for_node node_name="zc1-host-2">
  <service version="1" name="system/identity">
    <instance enabled="true" name="node">
      <property_group name="config">
        <propval name="nodename" value="zc1-host-2"/>
      </property_group>
    </instance>
  </service>
  <service name="network/ip-interface-management" version="1" type="service">
    <instance name="default" enabled="true">
      <property_group name="interfaces" type="application">
        <!-- vnic0 interface configuration -->
        <property_group name="vnic0" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- vnic3 interface configuration -->
        <property_group name="vnic3" type="interface-ip">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <propval name="ipmp-interface" type="astring" value="sc_ipmp0"/>
        </property_group>
        <!-- IPMP interface configuration -->
        <property_group name="sc_ipmp0" type="interface-ipmp">
          <property name="address-family" type="astring">
            <astring_list>
              <value_node value="ipv4"/>
              <value_node value="ipv6"/>
            </astring_list>
          </property>
          <property name="under-interfaces" type="astring">
            <astring_list>
              <value_node value="vnic0"/>
              <value_node value="vnic3"/>
            </astring_list>
          </property>
          <!-- IPv4 static address -->
          <property_group name="data1" type="address-static">
            <propval name="ipv4-address" type="astring" value="192.168.10.12"/>
            <propval name="prefixlen" type="count" value="24"/>
            <propval name="up" type="astring" value="yes"/>
          </property_group>
        </property_group>
      </property_group>
    </instance>
  </service>
</instances_for_node>

You can find the complete SC profile here. Some fields, like the zone host names, IP addresses, and encrypted passwords, need to be substituted in this file. Other changes can be made to this profile for different configurations, e.g., configuring an Active-Standby IPMP group instead of the Active-Active IPMP configuration shown in this example. We can use this SC profile to install and boot our ZC as shown below.
root@clusterhost1:~# clzc install -c /var/tmp/zc1_config.xml zc1
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc1"...
root@clusterhost1:~# clzc boot zc1
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zc1"...

After a short duration, we will see the ZC in the Online Running state.

root@clusterhost1:~# clzc status zc1

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name      Zone Host Name   Status   Zone Status
----   -----     ---------      --------------   ------   -----------
zc1    solaris   clusterhost1   zc1-host-1       Online   Running
                 clusterhost2   zc1-host-2       Online   Running

root@zc1-host-1:~# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
clprivnet1        ip         ok           --         --
   clprivnet1/?   static     ok           --         172.16.3.65/26
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         127.0.0.1/8
   lo0/v6         static     ok           --         ::1/128
sc_ipmp0          ipmp       ok           --         --
   sc_ipmp0/data1 static     ok           --         192.168.10.11/24
vnic0             ip         ok           sc_ipmp0   --
vnic1             ip         ok           --         --
   vnic1/?        static     ok           --         172.16.3.129/26
vnic2             ip         ok           --         --
   vnic2/?        static     ok           --         172.16.3.193/26
vnic3             ip         ok           sc_ipmp0   --

root@zc1-host-2:~# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
clprivnet1        ip         ok           --         --
   clprivnet1/?
static     ok           --         172.16.3.66/26
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         127.0.0.1/8
   lo0/v6         static     ok           --         ::1/128
sc_ipmp0          ipmp       ok           --         --
   sc_ipmp0/data1 static     ok           --         192.168.10.12/24
vnic0             ip         ok           sc_ipmp0   --
vnic1             ip         ok           --         --
   vnic1/?        static     ok           --         172.16.3.130/26
vnic2             ip         ok           --         --
   vnic2/?        static     ok           --         172.16.3.194/26
vnic3             ip         ok           sc_ipmp0   --

With this feature it is easy to create SC profile templates that can be used for fast deployment of ZCs, without the need for administrator intervention to complete system configuration of the zones in the ZC. For more details on SC profile configuration in 11.4, refer to Customizing Automated Installations With Manifests and Profiles. For cluster documentation, refer to clzonecluster(8cl).
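The per-node field substitution mentioned above (zone host names, static IP addresses) lends itself to scripting when deploying many zone clusters from one template. The following is a minimal, hypothetical Python sketch of that idea using string templating; the abbreviated XML fragment and variable names are illustrative only, not the full SC profile.

```python
# Hypothetical sketch: filling per-node fields (host name, static IP) into
# an SC profile fragment before running `clzc install -c profile.xml`.
# The fragment is abbreviated; tag names follow the profile shown above.
from string import Template

fragment = Template(
    '<instances_for_node node_name="$hostname">\n'
    '  <propval name="ipv4-address" type="astring" value="$ipaddr"/>\n'
    '</instances_for_node>\n'
)

# Per-node values for the two zones of the example ZC.
nodes = [
    {"hostname": "zc1-host-1", "ipaddr": "192.168.10.11"},
    {"hostname": "zc1-host-2", "ipaddr": "192.168.10.12"},
]

profile_body = "".join(fragment.substitute(n) for n in nodes)
print(profile_body)
```

A real deployment script would substitute into the complete template file and write the result to something like /var/tmp/zc1_config.xml before invoking clzc install.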



Oracle SuperCluster: Brief Introduction to osc-interdom

Target Audience: Oracle SuperCluster customers

The primary objective of this blog post is to provide some information on this obscure tool to inquisitive users/customers who might have noticed the osc-interdom service and the namesake package and wondered what it is for. The SuperCluster InterDomain Communication Tool, osc-interdom, is an infrastructure framework and a service that runs on Oracle SuperCluster products to provide flexible monitoring and management capabilities across SuperCluster domains. It provides a means to inspect and enumerate the components of a SuperCluster so that other components can fulfill their roles in managing the SuperCluster. The framework also allows commands to be executed from a control domain to take effect across all domains on the server node (e.g., a PDom on M8) and, optionally, across all servers (e.g., other M8 PDoms in the cluster) on the system. SuperCluster Virtual Assistant (SVA), ssctuner, exachk, and Oracle Enterprise Manager (EM) are some of the consumers of the osc-interdom framework.

Installation and Configuration

The interdom framework requires that the osc-interdom package from the exa-family repository be installed and enabled on all types of domains in the SuperCluster. In order to enable communication between domains in the SuperCluster, interdom must be configured on all domains that need to be part of the inter-domain communication channel. In other words, it is not a requirement for all domains in the cluster to be part of the osc-interdom configuration. It is possible to exclude some domains from the comprehensive interdom directory, either during initial configuration or at a later time. Also, once the interdom directory configuration has been built, it can be refreshed or rebuilt at any time.
Since installing and configuring osc-interdom is automated and part of the SuperCluster installation and configuration processes, it is unlikely that anyone at a customer site needs to know about or perform those tasks manually.

# svcs osc-interdom
STATE          STIME    FMRI
online         22:24:13 svc:/site/application/sysadmin/osc-interdom:default

Domain Registry and Command Line Interface (CLI)

Configuring interdom results in a Domain Registry. The purpose of the registry is to provide an accurate and up-to-date database of all SuperCluster domains and their characteristics. oidcli is a simple command line interface for the domain registry. The oidcli command line utility is located in the /opt/oracle.supercluster/bin directory. The oidcli utility can be used to query the interdom domain registry for data associated with different components in the SuperCluster. Each component maps to a domain in the SuperCluster, and each component is uniquely identified by a UUID. The SuperCluster Domain Registry is stored on the Master Control Domain (MCD). The "master" is usually the first control domain in the SuperCluster. Since the domain registry is on the master control domain, oidcli is expected to be run on the MCD to query the data. When running from other domains, option -a must be specified along with the management IP address of the master control domain. Keep in mind that the data returned by oidcli is meant for other SuperCluster tools that have the ability to interpret the data correctly and coherently, so humans looking at the same data may need some extra effort to digest and understand it.

eg.,

# cd /opt/oracle.supercluster/bin
# ./oidcli -h
Usage: oidcli [options] dir | [options] [...] invalidate|get_data|get_value , other component ID or 'all' (e.g.
'hostname', 'control_uuid') or 'all' NOTE: get_value must request single

Options:
  -h, --help  show this help message and exit
  -p          Output in PrettyPrinter format
  -a ADDR     TCP address/hostname (and optional ',') for connection
  -d          Enable debugging output
  -w W        Re-try for up to W seconds for success. Use 0 for no wait. Default: 1801.0.

List all components (domains):

# ./oidcli -p dir
[ ['db8c979d-4149-452f-8737-c857e0dc9eb0', True],
  ['4651ac93-924e-4990-8cf9-83be556eb667', True],
  ..
  ['945696fb-97f1-48e3-aa20-8c8baf198ea8', True],
  ['4026d670-61db-425e-834a-dfc45ff9a533', True]]

List the hostname of all domains:

# ./oidcli -p get_data all hostname
db8c979d-4149-452f-8737-c857e0dc9eb0: { 'hostname': { 'mtime': 1538089861, 'name': 'hostname', 'value': 'alpha'}}
3cfc9039-2157-4b62-ac69-ea3d85f2a19f: { 'hostname': { 'mtime': 1538174309, 'name': 'hostname', 'value': 'beta'}}
...

List all available properties for all domains:

# ./oidcli -p get_data all all
db8c979d-4149-452f-8737-c857e0dc9eb0: { 'banner_name': { 'mtime': 1538195164, 'name': 'banner_name', 'value': 'SPARC M7-4'},
  'comptype': { 'mtime': 1538195164, 'name': 'comptype', 'value': 'LDom'},
  'control_uuid': { 'mtime': 1538195164, 'name': 'control_uuid', 'value': 'Unknown'},
  'guests': { 'mtime': 1538195164, 'name': 'guests', 'value': None},
  'host_domain_chassis': { 'mtime': 1538195164, 'name': 'host_domain_chassis', 'value': 'AK00251676'},
  'host_domain_name': { 'mtime': 1538284541, 'name': 'host_domain_name', 'value': 'ssccn1-io-alpha'},
  ...

Query a specific property from a specific domain:

# ./oidcli -p get_data 4651ac93-924e-4990-8cf9-83be556eb667 mgmt_ipaddr
mgmt_ipaddr: { 'mtime': 1538143865, 'name': 'mgmt_ipaddr', 'value': ['xx.xxx.xxx.xxx', 20, 'scm_ipmp0']}

The domain registry is persistent and updated hourly. When accurate and up-to-date data is needed, it is recommended to query the registry with the --no-cache option.
eg.,

# ./oidcli -p get_data --no-cache 4651ac93-924e-4990-8cf9-83be556eb667 load_average
load_average: { 'mtime': 1538285043, 'name': 'load_average', 'value': [0.01171875, 0.0078125, 0.0078125]}

The mtime attribute in all examples above represents a UNIX timestamp.

Debug Mode

By default, the osc-interdom service runs in non-debug mode. Running the service in debug mode logs more details to the osc-interdom service log. In general, if the osc-interdom service is transitioning to the maintenance state, switching to debug mode may provide a few additional clues.

To check whether debug mode is enabled, run:

svcprop -c -p config osc-interdom | grep debug

To enable debug mode, run:

svccfg -s sysadmin/osc-interdom:default setprop config/oidd_debug '=' true
svcadm restart osc-interdom

Finally, check the service log for debug messages. The output of the svcs -L osc-interdom command points to the location of the osc-interdom service log.

Documentation

Similar to the SuperCluster resource allocation engine, osc-resalloc, the interdom framework is mostly meant for automated tools with little or no human interaction. Consequently, there are no references to osc-interdom in the SuperCluster Documentation Set.

Related:
Oracle SuperCluster: A Brief Introduction to osc-resalloc
Oracle SuperCluster: A Brief Introduction to osc-setcoremem
Non-Interactive osc-setcoremem
Oracle Solaris Tools for Locality Observability

Acknowledgments/Credit: Tim Cook
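Following up on the oidcli examples above: since every mtime field is a UNIX timestamp, a tool consuming registry data can render it readably with a one-liner. This is a minimal sketch; the helper name is hypothetical and not part of osc-interdom.

```python
# The mtime values returned by oidcli are UNIX timestamps; a small,
# hypothetical helper to render them as human-readable UTC times.
from datetime import datetime, timezone

def mtime_to_iso(mtime):
    # Interpret the registry mtime as seconds since the epoch, in UTC.
    ts = datetime.fromtimestamp(mtime, tz=timezone.utc)
    return ts.strftime("%Y-%m-%d %H:%M:%S UTC")

# The mtime from the load_average example above.
print(mtime_to_iso(1538285043))
```

This assumes the registry records mtime in UTC; adjust the timezone handling if your tooling expects local time.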


Announcements

Oracle Solaris at Oracle Open World 2018

Oracle Open World is coming to San Francisco October 22-25! With it being less than a month away, I wanted to preview Oracle Solaris this year. With the release of Oracle Solaris 11.4 on August 28, 2018, we have much to talk about. For information about the future of Oracle Solaris and SPARC, Bill Nesheim, Senior Vice President of Oracle Solaris Development, is joined by Masood Heydari, Senior Vice President of Hardware Development, and Brian Bream, Chief Technology Officer of Vaske Computer, in our Oracle Solaris and SPARC Roadmap Session. Brian will be discussing his unique solution for delivering Oracle SPARC solutions adjacent to Oracle Cloud, giving you the best of both worlds: cloud infrastructure for your scale-out needs, and Oracle Solaris and SPARC hosted for your mission-critical needs. You can get hands-on with the new cloud-scale capabilities in Oracle Solaris 11.4 with two Hands-on-Labs. "Get the Auditors Off Your Back: One Stop Cloud Compliance Validation" will show you how to set up your Oracle Solaris systems to meet compliance, lock them down so they can't be tampered with, centrally manage their compliance from a single system, and even see compliance benchmark trends over time, meeting your auditors' requirements. In our second Hands-on-Lab, "The CAT Scan for Oracle Solaris: StatsStore and Web Dashboard," you will utilize the new Oracle Solaris System Web Interface and StatsStore to gain unique insight into not just the current state of a system but also its historic state. You'll see how the Oracle Solaris 11.4 System Web Interface automatically correlates audit and system events to allow you to diagnose and root-cause potential system issues quickly and easily. To find out more about Oracle Solaris and SPARC sessions and Hands-on-Labs at Oracle Open World 2018, see our Focus on Oracle Solaris and SPARC document. See you at Oracle Open World 2018!


Announcing Oracle Solaris 11.4 SRU1

Today we're releasing the first SRU for Oracle Solaris 11.4! This is the next installment in our ongoing support train for Oracle Solaris 11, and there will be no further Oracle Solaris 11.3 SRUs delivered to the support repository. Due to the timing of our releases, and some fixes being in Oracle Solaris 11.3 SRU35 but not in 11.4, not all customers on Oracle Solaris 11.3 SRU35 were able to update to Oracle Solaris 11.4 when it was released. SRU1 includes all these fixes, and customers can now update to Oracle Solaris 11.4 SRU1 via 'pkg update' from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1.

This SRU introduces 'Memory Reservation Pools for Kernel Zones'. On some more heavily loaded systems that experience memory contention or sufficiently fragmented memory, it can be difficult for an administrator to guarantee that Kernel Zones can allocate all the memory they need to boot or even reboot. By allowing memory to be reserved ahead of time, early during boot when it is more likely that enough memory of the desired pagesize is available, the impact of overall system memory usage on the ability of a Kernel Zone to boot or reboot is minimized. Further details on using this feature are in the SRU readme file.

Other fixes of note in this SRU include:
- vim has been updated to 8.1.0209
- mailman has been updated to 2.1.29
- Updated Intel tool in HMP
- Samba has been updated to 4.8.4
- Kerberos 5 has been updated to 1.16.1
- webkitgtk+ has been updated to 2.18.6
- curl has been updated to 7.61.0
- sqlite has been updated to 3.24.0
- Apache Tomcat has been updated to 8.5.32
- Thunderbird has been updated to 52.9.1
- Apache Web Server has been updated to 2.4.34
- Wireshark has been updated to 2.6.2
- MySQL has been updated to 5.7.23
- OpenSSL has been updated to 1.0.2p

Full details of this SRU can be found in My Oracle Support Doc 2449090.1.
For the list of Service Alerts affecting each Oracle Solaris 11.4 SRU, see Important Oracle Solaris 11.4 SRU Issues (Doc ID 2445150.1).


Oracle Solaris 11

How to Build an Input Method Engine for Oracle Solaris 11.4

Contributed by: Pavel Heimlich and Ales Cernosek

Oracle Solaris 11.4 delivers a modern and extensible enterprise desktop environment. The desktop environment supports and delivers an IBus (Intelligent Input Bus) runtime environment and libraries that enable customers to easily customize and build open source IBus input method engines to suit their preference. This article explores the step-by-step process of building one of the available IBus input methods - KKC, a Japanese input method engine. We'll start with the Oracle Solaris Userland build infrastructure and use it to build the KKC dependencies and KKC itself, and publish the input method IPS packages to a local publisher. A similar approach can be used for building and delivering other input method engines for other languages. Please note the components built may no longer be current; newer versions may have been released since these steps were made available. You should check for and use the latest version available.

We'll start with installing all the build system dependencies:

# sudo pkg exact-install developer/opensolaris/userland system/input-method/ibus group/system/solaris-desktop
# sudo reboot

To make this task as easy as possible, we'll reuse the Userland build infrastructure. This also has the advantage of using the same compiler, build flags, and other defaults as the rest of the GNOME desktop. First clone the Userland workspace:

# git clone https://github.com/oracle/solaris-userland gate

Use your Oracle Solaris 11.4 IPS publisher; set this variable to empty:

# export CANONICAL_REPO=http://pkg.oracle.com/...
Do not use the internal Userland archive mirror:

# export INTERNAL_ARCHIVE_MIRROR=''

Set a few other variables and prepare the build environment:

# export COMPILER=gcc
# export PUBLISHER=example
# export OS_VERSION=11.4
# cd gate
# git checkout 8b36ec131eb42a65b0f42fc0d0d71b49cfb3adf3
# gmake setup

We are going to need the intermediary packages installed on the build system, so let's add the build publisher:

# pkg set-publisher -g `uname -p`/repo example

KKC has a couple of dependencies. The first of them is marisa-trie. Create a folder for it:

# mkdir components/marisa
# cd components/marisa

And create a Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= marisa
COMPONENT_VERSION= 0.2.4
COMPONENT_PROJECT_URL= https://github.com/s-yata/marisa-trie
COMPONENT_ARCHIVE_HASH= \
    sha256:67a7a4f70d3cc7b0a85eb08f10bc3eaf6763419f0c031f278c1f919121729fb3
COMPONENT_ARCHIVE_URL= https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/marisa-trie/marisa-0.2.4.tar.gz
TPNO= 26209

REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime
REQUIRED_PACKAGES += system/library/gcc/gcc-c++-runtime
REQUIRED_PACKAGES += system/library/math

TEST_TARGET= $(NO_TESTS)

COMPONENT_POST_INSTALL_ACTION += \
    cd $(SOURCE_DIR)/bindings/python; \
    CC=$(CC) CXX=$(CXX) CFLAGS="$(CFLAGS) -I$(SOURCE_DIR)/lib" LDFLAGS=-L$(PROTO_DIR)$(USRLIB) $(PYTHON) setup.py install --install-lib $(PROTO_DIR)/$(PYTHON_LIB);

# to avoid libtool breaking build of libkkc
COMPONENT_POST_INSTALL_ACTION += rm -f $(PROTO_DIR)$(USRLIB)/libmarisa.la;

include $(WS_MAKE_RULES)/common.mk

Most of the build process is defined in shared macros from the Userland workspace, so building marisa is now as easy as running the following command:

# gmake install

Copy the marisa copyright file so that the package follows Oracle standards:

# cat marisa-0.2.4/COPYING > marisa.copyright
Create the marisa.p5m package manifest as follows:

#
# Copyright (c) 2015, 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability uncommitted>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/marisa@$(BUILD_VERSION)
set name=pkg.summary value="Marisa library"
set name=pkg.description \
    value="Marisa - Matching Algorithm with Recursively Implemented StorAge library"
set name=com.oracle.info.description \
    value="Marisa - Matching Algorithm with Recursively Implemented StorAge library"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/marisa-trie/marisa-0.2.4.tar.gz
set name=info.upstream-url value=https://github.com/s-yata/marisa-trie
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/bin/marisa-benchmark
file path=usr/bin/marisa-build
file path=usr/bin/marisa-common-prefix-search
file path=usr/bin/marisa-dump
file path=usr/bin/marisa-lookup
file path=usr/bin/marisa-predictive-search
file path=usr/bin/marisa-reverse-lookup
file path=usr/include/marisa.h
file path=usr/include/marisa/agent.h
file path=usr/include/marisa/base.h
file path=usr/include/marisa/exception.h
file path=usr/include/marisa/iostream.h
file path=usr/include/marisa/key.h
file path=usr/include/marisa/keyset.h
file path=usr/include/marisa/query.h
file path=usr/include/marisa/scoped-array.h
file path=usr/include/marisa/scoped-ptr.h
file path=usr/include/marisa/stdio.h
file path=usr/include/marisa/trie.h
link path=usr/lib/$(MACH64)/libmarisa.so target=libmarisa.so.0.0.0
link path=usr/lib/$(MACH64)/libmarisa.so.0 target=libmarisa.so.0.0.0
file path=usr/lib/$(MACH64)/libmarisa.so.0.0.0
file path=usr/lib/$(MACH64)/pkgconfig/marisa.pc
link path=usr/lib/libmarisa.so target=libmarisa.so.0.0.0
link path=usr/lib/libmarisa.so.0 target=libmarisa.so.0.0.0
file path=usr/lib/libmarisa.so.0.0.0
file path=usr/lib/pkgconfig/marisa.pc
file path=usr/lib/python2.7/vendor-packages/64/_marisa.so
file path=usr/lib/python2.7/vendor-packages/_marisa.so
file path=usr/lib/python2.7/vendor-packages/marisa-0.0.0-py2.7.egg-info
file path=usr/lib/python2.7/vendor-packages/marisa.py
license marisa.copyright license="marisa, LGPLv2.1"

Generate the IPS package:

# gmake publish

It's necessary to install the package, as it's a prerequisite for building other packages:

# sudo pkg install marisa
# cd ..

The next step is to build the KKC library:

# mkdir libkkc
# cd libkkc

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= libkkc
COMPONENT_VERSION= 0.3.5
COMPONENT_PROJECT_URL= https://github.com/ueno/libkkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.gz
COMPONENT_ARCHIVE_HASH= \
    sha256:89b07b042dae5726d306aaa1296d1695cb75c4516f4b4879bc3781fe52f62aef
COMPONENT_ARCHIVE_URL= $(COMPONENT_PROJECT_URL)/releases/download/v$(COMPONENT_VERSION)/$(COMPONENT_ARCHIVE)
TPNO= 26171

TEST_TARGET= $(NO_TESTS)

REQUIRED_PACKAGES += example/system/input-method/library/marisa
REQUIRED_PACKAGES += library/glib2
REQUIRED_PACKAGES += library/json-glib
REQUIRED_PACKAGES += library/desktop/libgee
REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime

export LD_LIBRARY_PATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)
export PYTHONPATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(PYTHON_LIB)
CPPFLAGS += -I$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/usr/include
LDFLAGS += -L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)

# for gsed - metadata
PATH=$(GNUBIN):$(USRBINDIR)

include $(WS_MAKE_RULES)/common.mk

# some of this is likely unnecessary
CONFIGURE_OPTIONS += --enable-introspection=no
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../marisa/build/$(MACH$(BITS))

# to avoid libtool breaking build of ibus-kkc
COMPONENT_POST_INSTALL_ACTION = rm -f $(PROTO_DIR)$(USRLIB)/libkkc.la

# to rebuild configure for libtool fix and fix building json files
COMPONENT_PREP_ACTION = \
    (cd $(@D) ; $(AUTORECONF) -m --force -v; gsed -i 's@test -f ./$$<@test -f $$<@' data/rules/rule.mk)

And build the component:

# gmake install

Prepare the copyright file:

# cat libkkc-0.3.5/COPYING > libkkc.copyright

Create the libkkc.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/libkkc@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="libkkc - Kana Kanji input library"
set name=pkg.description \
    value="libkkc - Japanese Kana Kanji conversion input method library"
set name=com.oracle.info.description value="libkkc - Kana Kanji input library"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://github.com/ueno/libkkc/releases/download/v0.3.5/libkkc-0.3.5.tar.gz
set name=info.upstream-url value=https://github.com/ueno/libkkc
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/bin/kkc
file path=usr/bin/kkc-package-data
dir path=usr/include/libkkc
file path=usr/include/libkkc/libkkc.h
dir path=usr/lib/$(MACH64)/libkkc
link path=usr/lib/$(MACH64)/libkkc.so target=libkkc.so.2.0.0
link path=usr/lib/$(MACH64)/libkkc.so.2 target=libkkc.so.2.0.0
file path=usr/lib/$(MACH64)/libkkc.so.2.0.0
file path=usr/lib/$(MACH64)/pkgconfig/kkc-1.0.pc
dir path=usr/lib/libkkc
link path=usr/lib/libkkc.so target=libkkc.so.2.0.0
link path=usr/lib/libkkc.so.2 target=libkkc.so.2.0.0
file path=usr/lib/libkkc.so.2.0.0
file path=usr/lib/pkgconfig/kkc-1.0.pc
dir path=usr/share/libkkc
dir path=usr/share/libkkc/rules
dir path=usr/share/libkkc/rules/act
dir path=usr/share/libkkc/rules/act/keymap
file path=usr/share/libkkc/rules/act/keymap/default.json
file path=usr/share/libkkc/rules/act/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/act/keymap/hiragana.json
file path=usr/share/libkkc/rules/act/keymap/katakana.json
file path=usr/share/libkkc/rules/act/keymap/latin.json
file path=usr/share/libkkc/rules/act/keymap/wide-latin.json
file path=usr/share/libkkc/rules/act/metadata.json
dir path=usr/share/libkkc/rules/act/rom-kana
file path=usr/share/libkkc/rules/act/rom-kana/default.json
dir path=usr/share/libkkc/rules/azik
dir path=usr/share/libkkc/rules/azik-jp106
dir path=usr/share/libkkc/rules/azik-jp106/keymap
file path=usr/share/libkkc/rules/azik-jp106/keymap/default.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/hiragana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/katakana.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/latin.json
file path=usr/share/libkkc/rules/azik-jp106/keymap/wide-latin.json
file path=usr/share/libkkc/rules/azik-jp106/metadata.json
dir path=usr/share/libkkc/rules/azik-jp106/rom-kana
file path=usr/share/libkkc/rules/azik-jp106/rom-kana/default.json
dir path=usr/share/libkkc/rules/azik/keymap
file path=usr/share/libkkc/rules/azik/keymap/default.json
file path=usr/share/libkkc/rules/azik/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/azik/keymap/hiragana.json
file path=usr/share/libkkc/rules/azik/keymap/katakana.json
file path=usr/share/libkkc/rules/azik/keymap/latin.json
file path=usr/share/libkkc/rules/azik/keymap/wide-latin.json
file path=usr/share/libkkc/rules/azik/metadata.json
dir path=usr/share/libkkc/rules/azik/rom-kana
file path=usr/share/libkkc/rules/azik/rom-kana/default.json
dir path=usr/share/libkkc/rules/default
dir path=usr/share/libkkc/rules/default/keymap
file path=usr/share/libkkc/rules/default/keymap/default.json
file path=usr/share/libkkc/rules/default/keymap/direct.json
file path=usr/share/libkkc/rules/default/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/default/keymap/hiragana.json
file path=usr/share/libkkc/rules/default/keymap/katakana.json
file path=usr/share/libkkc/rules/default/keymap/latin.json
file path=usr/share/libkkc/rules/default/keymap/wide-latin.json
file path=usr/share/libkkc/rules/default/metadata.json
dir path=usr/share/libkkc/rules/default/rom-kana
file path=usr/share/libkkc/rules/default/rom-kana/default.json
dir path=usr/share/libkkc/rules/kana
dir path=usr/share/libkkc/rules/kana/keymap
file path=usr/share/libkkc/rules/kana/keymap/default.json
file path=usr/share/libkkc/rules/kana/keymap/direct.json
file path=usr/share/libkkc/rules/kana/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/kana/keymap/hiragana.json
file path=usr/share/libkkc/rules/kana/keymap/katakana.json
file path=usr/share/libkkc/rules/kana/keymap/latin.json
file path=usr/share/libkkc/rules/kana/keymap/wide-latin.json
file path=usr/share/libkkc/rules/kana/metadata.json
dir path=usr/share/libkkc/rules/kana/rom-kana
file path=usr/share/libkkc/rules/kana/rom-kana/default.json
dir path=usr/share/libkkc/rules/kzik
dir path=usr/share/libkkc/rules/kzik/keymap
file path=usr/share/libkkc/rules/kzik/keymap/default.json
file path=usr/share/libkkc/rules/kzik/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/kzik/keymap/hiragana.json
file path=usr/share/libkkc/rules/kzik/keymap/katakana.json
file path=usr/share/libkkc/rules/kzik/keymap/latin.json
file path=usr/share/libkkc/rules/kzik/keymap/wide-latin.json
file path=usr/share/libkkc/rules/kzik/metadata.json
dir path=usr/share/libkkc/rules/kzik/rom-kana
file path=usr/share/libkkc/rules/kzik/rom-kana/default.json
dir path=usr/share/libkkc/rules/nicola
dir path=usr/share/libkkc/rules/nicola/keymap
file path=usr/share/libkkc/rules/nicola/keymap/default.json
file path=usr/share/libkkc/rules/nicola/keymap/direct.json
file path=usr/share/libkkc/rules/nicola/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/nicola/keymap/hiragana.json
file path=usr/share/libkkc/rules/nicola/keymap/katakana.json
file path=usr/share/libkkc/rules/nicola/keymap/latin.json
file path=usr/share/libkkc/rules/nicola/keymap/wide-latin.json
file path=usr/share/libkkc/rules/nicola/metadata.json
dir path=usr/share/libkkc/rules/nicola/rom-kana
file path=usr/share/libkkc/rules/nicola/rom-kana/default.json
dir path=usr/share/libkkc/rules/tcode
dir path=usr/share/libkkc/rules/tcode/keymap
file path=usr/share/libkkc/rules/tcode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tcode/keymap/hiragana.json
file path=usr/share/libkkc/rules/tcode/keymap/katakana.json
file path=usr/share/libkkc/rules/tcode/keymap/latin.json
file path=usr/share/libkkc/rules/tcode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tcode/metadata.json
dir path=usr/share/libkkc/rules/tcode/rom-kana
file path=usr/share/libkkc/rules/tcode/rom-kana/default.json
dir path=usr/share/libkkc/rules/trycode
dir path=usr/share/libkkc/rules/trycode/keymap
file path=usr/share/libkkc/rules/trycode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/trycode/keymap/hiragana.json
file path=usr/share/libkkc/rules/trycode/keymap/katakana.json
file path=usr/share/libkkc/rules/trycode/keymap/latin.json
file path=usr/share/libkkc/rules/trycode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/trycode/metadata.json
dir path=usr/share/libkkc/rules/trycode/rom-kana
file path=usr/share/libkkc/rules/trycode/rom-kana/default.json
dir path=usr/share/libkkc/rules/tutcode
dir path=usr/share/libkkc/rules/tutcode-touch16x
dir path=usr/share/libkkc/rules/tutcode-touch16x/keymap
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/hiragana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/katakana.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/latin.json
file path=usr/share/libkkc/rules/tutcode-touch16x/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tutcode-touch16x/metadata.json
dir path=usr/share/libkkc/rules/tutcode-touch16x/rom-kana
file path=usr/share/libkkc/rules/tutcode-touch16x/rom-kana/default.json
dir path=usr/share/libkkc/rules/tutcode/keymap
file path=usr/share/libkkc/rules/tutcode/keymap/hankaku-katakana.json
file path=usr/share/libkkc/rules/tutcode/keymap/hiragana.json
file path=usr/share/libkkc/rules/tutcode/keymap/katakana.json
file path=usr/share/libkkc/rules/tutcode/keymap/latin.json
file path=usr/share/libkkc/rules/tutcode/keymap/wide-latin.json
file path=usr/share/libkkc/rules/tutcode/metadata.json
dir path=usr/share/libkkc/rules/tutcode/rom-kana
file path=usr/share/libkkc/rules/tutcode/rom-kana/default.json
dir path=usr/share/libkkc/templates
dir path=usr/share/libkkc/templates/libkkc-data
file path=usr/share/libkkc/templates/libkkc-data/Makefile.am
file path=usr/share/libkkc/templates/libkkc-data/configure.ac.in
dir path=usr/share/libkkc/templates/libkkc-data/data
file path=usr/share/libkkc/templates/libkkc-data/data/Makefile.am
dir path=usr/share/libkkc/templates/libkkc-data/data/models
file path=usr/share/libkkc/templates/libkkc-data/data/models/Makefile.sorted2
file path=usr/share/libkkc/templates/libkkc-data/data/models/Makefile.sorted3
dir path=usr/share/libkkc/templates/libkkc-data/data/models/sorted2
file path=usr/share/libkkc/templates/libkkc-data/data/models/sorted2/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/sorted3
file path=usr/share/libkkc/templates/libkkc-data/data/models/sorted3/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/text2
file path=usr/share/libkkc/templates/libkkc-data/data/models/text2/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/data/models/text3
file path=usr/share/libkkc/templates/libkkc-data/data/models/text3/metadata.json
dir path=usr/share/libkkc/templates/libkkc-data/tools
file path=usr/share/libkkc/templates/libkkc-data/tools/Makefile.am
file path=usr/share/libkkc/templates/libkkc-data/tools/genfilter.py
file path=usr/share/libkkc/templates/libkkc-data/tools/sortlm.py
file path=usr/share/locale/ja/LC_MESSAGES/libkkc.mo
file path=usr/share/vala/vapi/kkc-1.0.deps
file path=usr/share/vala/vapi/kkc-1.0.vapi
license libkkc.copyright license="libkkc, GPLmix" \
    com.oracle.info.description="libkkc - Kana Kanji input library" \
    com.oracle.info.name=pci.ids com.oracle.info.tpno=26171 \
    com.oracle.info.version=0.3.5

Use the following command to create the IPS package:

# gmake publish
# cd ..

Then install the package so that it's available for dependency resolution:

# sudo pkg install libkkc

Next, build libkkc-data:

# mkdir libkkc-data
# cd libkkc-data

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= libkkc-data
COMPONENT_VERSION= 0.2.7
COMPONENT_PROJECT_URL= https://github.com/ueno/libkkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.xz
COMPONENT_ARCHIVE_HASH= \
    sha256:9e678755a030043da68e37a4049aa296c296869ff1fb9e6c70026b2541595b99
COMPONENT_ARCHIVE_URL= https://github.com/ueno/libkkc/releases/download/v0.3.5/$(COMPONENT_ARCHIVE)
TPNO= 26171

TEST_TARGET= $(NO_TESTS)

export LD_LIBRARY_PATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/$(USRLIB)
export PYTHONPATH=$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(PYTHON_LIB)

include $(WS_MAKE_RULES)/common.mk

CONFIGURE_ENV += MARISA_CFLAGS="-I$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)/usr/include"
CONFIGURE_ENV += MARISA_LIBS="-L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB) -lmarisa"

Build libkkc-data:

# gmake install

Prepare the copyright file:

# cat libkkc-data-0.2.7/COPYING > libkkc-data.copyright

Create the libkkc-data.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/library/libkkc-data@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="libkkc-data - Kana Kanji input library data"
set name=pkg.description \
    value="libkkc-data - data for Japanese Kana Kanji conversion input method library"
set name=com.oracle.info.description \
    value="libkkc-data - Kana Kanji input library data"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url \
    value=https://bitbucket.org/libkkc/libkkc-data/downloads/libkkc-data-0.2.7.tar.xz
set name=info.upstream-url value=https://bitbucket.org/libkkc/libkkc-data
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
dir path=usr/lib/$(MACH64)/libkkc/models
dir path=usr/lib/$(MACH64)/libkkc/models/sorted3
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.1gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.1gram.index
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.2gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.2gram.filter
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.3gram
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.3gram.filter
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/data.input
file path=usr/lib/$(MACH64)/libkkc/models/sorted3/metadata.json
dir path=usr/lib/libkkc/models
dir path=usr/lib/libkkc/models/sorted3
file path=usr/lib/libkkc/models/sorted3/data.1gram
file path=usr/lib/libkkc/models/sorted3/data.1gram.index
file path=usr/lib/libkkc/models/sorted3/data.2gram
file path=usr/lib/libkkc/models/sorted3/data.2gram.filter
file path=usr/lib/libkkc/models/sorted3/data.3gram
file path=usr/lib/libkkc/models/sorted3/data.3gram.filter
file path=usr/lib/libkkc/models/sorted3/data.input
file path=usr/lib/libkkc/models/sorted3/metadata.json
license libkkc-data.copyright license="libkkc-data, GPLv3" \
    com.oracle.info.description="libkkc - Kana Kanji input library language data" \
    com.oracle.info.name=usb.ids com.oracle.info.tpno=26171 \
    com.oracle.info.version=0.2.7

Use the following command to create the IPS package:

# gmake publish
# cd ..

Then install the package so that it's available for dependency resolution:

# sudo pkg install libkkc-data

Finally, create the KKC package:

# mkdir ibus-kkc
# cd ibus-kkc

Create the Makefile as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= ibus-kkc
COMPONENT_VERSION= 1.5.22
COMPONENT_PROJECT_URL= https://github.com/ueno/ibus-kkc
IBUS-KKC_PROJECT_URL= https://github.com/ueno/ibus-kkc
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.gz
COMPONENT_ARCHIVE_HASH= \
    sha256:22fe2552f08a34a751cef7d1ea3c088e8dc0f0af26fd7bba9cdd27ff132347ce
COMPONENT_ARCHIVE_URL= $(COMPONENT_PROJECT_URL)/releases/download/v$(COMPONENT_VERSION)/$(COMPONENT_ARCHIVE)
TPNO= 31503

TEST_TARGET= $(NO_TESTS)

REQUIRED_PACKAGES += system/input-method/ibus
REQUIRED_PACKAGES += example/system/input-method/library/libkkc
REQUIRED_PACKAGES += example/system/input-method/library/libkkc-data
REQUIRED_PACKAGES += library/desktop/gtk3
REQUIRED_PACKAGES += library/desktop/libgee
REQUIRED_PACKAGES += library/json-glib
REQUIRED_PACKAGES += library/glib2
# for marisa
REQUIRED_PACKAGES += system/library/gcc/gcc-c-runtime
REQUIRED_PACKAGES += system/library/gcc/gcc-c++-runtime
REQUIRED_PACKAGES += system/library/math

CPPFLAGS += -I$(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)/usr/include
LDFLAGS += "-L$(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)$(USRLIB)"
LDFLAGS += "-L$(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)$(USRLIB)"

include $(WS_MAKE_RULES)/common.mk

CONFIGURE_ENV += PATH=$(GNUBIN):$(USRBINDIR)
CONFIGURE_OPTIONS += --libexecdir=$(USRLIBDIR)/ibus
#CONFIGURE_OPTIONS += --enable-static=no
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../libkkc/build/$(MACH$(BITS))/libkkc
PKG_CONFIG_PATHS += $(COMPONENT_DIR)/../marisa/build/$(MACH$(BITS))

# to rebuild configure for libtool fix
COMPONENT_PREP_ACTION = \
    (cd $(@D) ; $(AUTORECONF) -m --force -v)

PKG_PROTO_DIRS += $(COMPONENT_DIR)/../marisa/build/prototype/$(MACH)
PKG_PROTO_DIRS += $(COMPONENT_DIR)/../libkkc/build/prototype/$(MACH)
PKG_PROTO_DIRS += $(COMPONENT_DIR)/../libkkc-data/build/prototype/$(MACH)

And build the component:

# gmake install

Prepare the copyright file:

# cat ibus-kkc-1.5.22/COPYING > ibus-kkc.copyright

Create the ibus-kkc.p5m package manifest as follows:

#
# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
#
<transform file path=usr.*/man/.+ -> default mangler.man.stability volatile>
set name=pkg.fmri \
    value=pkg:/example/system/input-method/ibus/kkc@$(IPS_COMPONENT_VERSION),$(BUILD_VERSION)
set name=pkg.summary value="IBus Japanese IME - kkc"
set name=pkg.description value="Japanese Kana Kanji input engine for IBus"
set name=com.oracle.info.description value="ibus kkc - Kana kanji input engine"
set name=info.classification \
    value=org.opensolaris.category.2008:System/Internationalization
set name=info.source-url value=$(COMPONENT_ARCHIVE_URL)
set name=info.upstream-url value=$(COMPONENT_PROJECT_URL)
set name=org.opensolaris.arc-caseid value=PSARC/2009/499
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
file path=usr/lib/ibus/ibus-engine-kkc mode=0555
file path=usr/lib/ibus/ibus-setup-kkc mode=0555
file path=usr/share/applications/ibus-setup-kkc.desktop
dir path=usr/share/ibus-kkc
dir path=usr/share/ibus-kkc/icons
file path=usr/share/ibus-kkc/icons/ibus-kkc.svg
file path=usr/share/ibus/component/kkc.xml
file path=usr/share/locale/ja/LC_MESSAGES/ibus-kkc.mo
license ibus-kkc.copyright license="ibus-kkc, GPLmix"
depend type=require fmri=example/system/input-method/library/libkkc-data

Perform the final build:

# gmake publish

Use the following command to install the package from the build's IPS repository:

# sudo pkg install ibus/kkc

Other input method engines can be built in a similar way: for example, ibus-hangul for Korean, ibus-chewing or ibus-table for Chinese, and others.
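The "similar way" mentioned above can be sketched as follows. This is only a scaffold illustrating the pattern shared by the marisa and libkkc components; the component name ibus-hangul, the version number, and the minimal Makefile are placeholders, not a tested recipe:

```shell
# Hypothetical scaffold for a further Userland component (placeholders:
# the component name, version, and the stripped-down Makefile below).
cd "$(mktemp -d)"           # work in a scratch directory for this sketch
comp=ibus-hangul
ver=1.5.1
mkdir -p "components/$comp"
cat > "components/$comp/Makefile" <<EOF
BUILD_BITS= 64_and_32
COMPILER= gcc
include ../../make-rules/shared-macros.mk

COMPONENT_NAME= $comp
COMPONENT_VERSION= $ver
TEST_TARGET= \$(NO_TESTS)

include \$(WS_MAKE_RULES)/common.mk
EOF
grep COMPONENT_NAME "components/$comp/Makefile"
# → COMPONENT_NAME= ibus-hangul
```

A real component would additionally need COMPONENT_ARCHIVE_URL, COMPONENT_ARCHIVE_HASH, REQUIRED_PACKAGES entries, a copyright file, and a .p5m manifest, exactly as shown for marisa and libkkc above.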

Contributed by: Pavel Heimlich and Ales Cernosek


Random Solaris & Shell Command Tips — kstat, tput, sed, digest

The examples shown in this blog post were created and executed on a Solaris system. Some of these tips and examples are applicable to all *nix systems.

Digest of a File

One of the typical uses of a computed digest is to check whether a file has been compromised or tampered with. The digest utility can be used to calculate the digest of files. On Solaris, the -l option lists all of the cryptographic hash algorithms available on the system. eg.,

% digest -l
sha1
md5
sha224
sha256
..
sha3_224
sha3_256
..

The -a option specifies the hash algorithm to use when computing the digest. eg.,

% digest -v -a sha1 /usr/lib/libc.so.1
sha1 (/usr/lib/libc.so.1) = 89a588f447ade9b1be55ba8b0cd2edad25513619

Multiline Shell Script Comments

The shell treats any line that starts with the '#' symbol as a comment and ignores such lines completely. (The line on top that starts with #! is an exception.) From what I understand, there is no dedicated multiline comment mechanism in the shell. While the # symbol is useful for marking single-line comments, it becomes laborious and is not well suited for commenting out a large number of contiguous lines.

One possible way to achieve multiline comments in a shell script is to rely on a combination of the shell built-in ':' and a here-document code block. It may not be the most attractive solution, but it gets the work done: ':' is a no-op command that ignores its input and returns true, so the shell reads the here-document block and simply discards it. eg.,

% cat -n multiblock_comment.sh
1 #!/bin/bash
2
3 echo 'not a commented line'
4 #echo 'a commented line'
5 echo 'not a commented line either'
6 : <<'MULTILINE-COMMENT'
7 echo 'beginning of a multiline comment'
8 echo 'second line of a multiline comment'
9 echo 'last line of a multiline comment'
10 MULTILINE-COMMENT
11 echo 'yet another "not a commented line"'

% ./multiblock_comment.sh
not a commented line
not a commented line either
yet another "not a commented line"

tput Utility to Jazz Up Command Line User Experience

The tput command can help make command line terminals look interesting. tput can be used to change the color of text, apply effects (bold, underline, blink, ..), move the cursor around the screen, get information about the status of the terminal, and so on. In addition to improving the command line experience, tput can also improve the interactive experience of scripts by showing different colors and/or text effects to users. eg.,

% tput bold     <= bold text
% date
Thu Aug 30 17:02:57 PDT 2018
% tput smul     <= underline text
% date
Thu Aug 30 17:03:51 PDT 2018
% tput sgr0     <= turn off all attributes (back to normal)
% date
Thu Aug 30 17:04:47 PDT 2018

Check the man page of terminfo for a complete list of capabilities that can be used with tput.

Processor Marketing Name

On systems running Solaris, the processor's marketing or brand name can be extracted with the help of the kstat utility. The cpu_info module provides information related to the processor(s) on the system. eg.,

On SPARC:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      SPARC-M8

On x86/x64:

% kstat -p cpu_info:1:cpu_info1:brand
cpu_info:1:cpu_info1:brand      Intel(r) Xeon(r) CPU L5640 @ 2.27GHz

In the above example, cpu_info is the module, 1 is the instance number, cpu_info1 is the name of the section, and brand is the statistic in focus. Note that the cpu_info module has only one section, cpu_info1, so it is fine to skip the section name portion (eg., cpu_info:1::brand). To see the complete list of statistics offered by the cpu_info module, simply run kstat cpu_info:1.

Consolidating Multiple sed Commands

The sed utility allows specifying multiple editing commands on the same command line; in other words, it is not necessary to pipe multiple sed invocations together. The editing commands need to be separated with a semicolon (;). eg., the following two commands are equivalent and yield the same output.

% prtconf | grep Memory | sed 's/Megabytes/MB/g' | sed 's/ size//g'
Memory: 65312 MB

% prtconf | grep Memory | sed 's/Megabytes/MB/g;s/ size//g'
Memory: 65312 MB
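Besides the semicolon form, standard sed also accepts one editing command per -e option, which reads more clearly when the expressions themselves contain semicolons. The sketch below uses printf to stand in for the prtconf pipeline, so it can be tried anywhere:

```shell
# Same consolidation as the semicolon form, but with one -e per command.
# printf stands in for "prtconf | grep Memory" so the example is portable.
printf 'Memory size: 65312 Megabytes\n' | sed -e 's/Megabytes/MB/g' -e 's/ size//g'
# → Memory: 65312 MB
```

Both styles are POSIX; pick whichever keeps the individual expressions easiest to read.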



Oracle Solaris 11.4 Released for General Availability

Oracle Solaris 11.4: The trusted business platform. I'm pleased to announce the release of Oracle Solaris 11.4. Of the four releases of Oracle Solaris that I've been involved in, this is the best one yet! Oracle Solaris is the trusted business platform that you depend on. Oracle Solaris 11 gives you consistent compatibility, is simple to use, and is designed to always be secure.

Some fun facts about Oracle Solaris 11.4:

There have been 175 development builds to get us to Oracle Solaris 11.4. We've tested Oracle Solaris 11.4 for more than 30 million machine hours. Over 50 customers have already put Oracle Solaris 11.4 into production, and it already has more than 3000 applications certified to run on it. Oracle Solaris 11.4 is the first and, currently, the only operating system that has completed UNIX® V7 certification.

What's New

Consistently Compatible

That last number in the fun facts is interesting because it is a small subset of the applications that will run on Oracle Solaris 11.4. It doesn't include applications that will run on Oracle Solaris 11 that were designed and built for Oracle Solaris 10 (nor 8 and 9, for that matter). One of the reasons why Oracle Solaris is trusted by so many large companies and governments around the world to run their most mission-critical applications is our consistency. One of the key capabilities of Oracle Solaris is the Oracle Solaris Application Compatibility Guarantee. For close to 20 years now, we have guaranteed that Oracle Solaris will run applications built on previous releases of Oracle Solaris, and we continue to keep that promise today. Additionally, we've made it easier than ever to migrate your Oracle Solaris 10 workloads to Oracle Solaris 11. We've enhanced our migration tools and documentation to make moving from Oracle Solaris 10 to Oracle Solaris 11 on modern hardware simple. All in an effort to save you money.
Simple to Use

Of course, with every release of Oracle Solaris, we work hard to make life simpler for our users. This release is no different. We've included several new features in Oracle Solaris 11.4 that make it easier than ever to manage. The coolest of those new features is our new Observability Tools System Web Interface. The System Web Interface brings together several key observability technologies, including the new StatsStore data, audit events, and FMA events, into a centralized, customizable, browser-based interface that allows you to see current and past system behavior at a glance. James McPherson did an excellent job of writing all about the Web Interface here. He also wrote about what we collect by default here. Of course, you can also add your own data to be collected and customize the interface as you like. And if you want to export the data to some other application like a spreadsheet or database, my colleague Joost Pronk wrote a blog on how to get the data into a CSV-format file. For more information, see our Observability Tools documentation.

The Service Management Framework has been enhanced to allow you to automatically monitor and restart critical applications and services. Thejaswini Kodavur shows you how to use our new SMF goal services.

We've made managing and updating Oracle Solaris Zones, and the applications you run inside them, simpler than ever. We started by supplying you with the ability to evacuate a system of all of its Zones with just one command. Oh, and you can bring them all back with just one command too. Starting with Oracle Solaris 11.4, you can now build intra-Zone dependencies and have the dependent Zones boot in the correct order, allowing you to automatically boot and restart complex application stacks in the correct order. Jan Pechanec wrote a nice how-to blog for you to get started. Joost Pronk wrote a community article going into the details more deeply.
In Oracle Solaris 11.4, we give you one of the most requested features to make ZFS management even simpler than it already is. Cindy Swearingen talks about how Oracle Solaris now gives you ZFS device removal. Oracle Solaris is designed from the ground up to be simple to manage, saving you time.

Always Secure

Oracle Solaris is consistently compatible and simple to use, but above all else it is focused on security, and in Oracle Solaris 11.4 we give you even more security capabilities to make getting and staying secure and compliant easy. We start with multi-node compliance. In Oracle Solaris 11.4, you can now either push a compliance assessment to all systems with a single command and review the results in a single report, or set up your systems to regularly generate their compliance reports and push them to a central server, where they can also be viewed as a single report. This makes maintaining compliance across your data center even easier. You can find out more about multi-node compliance here.

But how do you keep your systems compliant once they are made compliant? One of the most straightforward ways is to take advantage of Immutable Zones (this includes the Global Zone). Immutable Zones prevent even system administrators from writing to the system, yet still allow patches and updates via IPS. This is done via a trusted path. However, this also means that your configuration management tools, like Puppet and Chef, aren't able to write to the Zone to apply required configuration changes. In Oracle Solaris 11.4, we added trusted path services. Now you can create your own services, like Puppet and Chef, that can be placed on the trusted path, allowing them to make the requisite changes while keeping the system/zone immutable and protected.
Oracle Solaris Zones, especially Immutable Zones, are an incredibly useful tool for building isolation into your environment to protect applications and your data center from cyber attack, or even just administrative error. However, sometimes a Zone is too much: you really just want to isolate applications on a system, or within a Zone or VM. For this, we give you Application Sandboxing. It allows you to isolate an application, or isolate applications from each other. Sandboxes provide additional separation of applications and reduce the risk of unauthorized data access. You can read more about it in Darren Moffat's blog, here. Oracle Solaris 11 is engineered to help you get and stay secure and compliant, reducing your risk.

Updating

In order to make your transition from Oracle Solaris 11.3 to Oracle Solaris 11.4 as smooth as possible, we've included a new compliance benchmark that will tell you if there are any issues, such as old, unsupported device drivers, unsupported software, or hardware that isn't supported by Oracle Solaris 11.4. To install this new benchmark, update to Oracle Solaris 11.3 SRU 35 and run:

# pkg install update-check

Then to run the check, you simply run:

# compliance assess -b ehc-update -a 114update
# compliance report -a 114update -o ./114update.html

You can then use Firefox to view the report. My personal system started out as an Oracle Solaris 11.1 system and has been upgraded over the years to an Oracle Solaris 11.3 system. As you can see, I had some failures. These were some old device drivers and old versions of software like gcc-3, the iperf benchmarking tool, etc. The compliance report tells you exactly what needs to happen to resolve the failures. The device drivers weren't needed any longer, and I uninstalled them per the report's instructions. The report said the software will be removed automatically during upgrade.
Oracle Solaris Cluster 4.4

Of course, with each update of Oracle Solaris 11, we release a new version of Oracle Solaris Cluster so you can upgrade in lock-step, providing a smooth transition for your HA environments. You can read about what's new in Oracle Solaris Cluster 4.4 in the What's New and find out more from the Data Sheet, the Oracle Technology Network, and in our documentation.

Try it out

You can download Oracle Solaris 11.4 now from the Oracle Technology Network (OTN) for bare metal or VirtualBox, from My Oracle Support (MOS), and from our repository at pkg.oracle.com. Take a moment to check out our new OTN page, and you can engage with other Oracle Solaris users and engineers on the Oracle Solaris Community page. UNIX® and UNIX® V7 are registered trademarks of The Open Group.

Oracle Solaris 11.4: The trusted business platform. I'm pleased to announce the release of Oracle Solaris 11.4. Of the four releases of Oracle Solaris that I've been involved in, this is the best one...

Zones Delegated Restarter and SMF Goals

Managing Zones in Oracle Solaris 11.3

In Oracle Solaris 11.3, Zones are managed by the Zones service svc:/system/zones:default. The service performs the autobooting and shutdown of Zones on system boot and shutdown according to each zone's configuration. The service boots, in parallel, all Zones configured with autoboot=true, and shuts down, also in parallel, all running Zones with their corresponding autoshutdown option: halt, shutdown, or suspend. See the zonecfg(1M) manual page in 11.3 for more information. The management mechanism existing in 11.3 is sufficient for systems running a small number of Zones. However, the growing size of systems and the increasing number of Zones on them require a more sophisticated mechanism. The issue is that the following features are missing or limited:

- Integration with SMF is very limited; zones may not depend on other services and vice versa.
- No threshold tunable for the number of Zones booted in parallel.
- No integration with FMA.
- No mechanism to prioritize the order in which Zones boot.
- No mechanism for providing information on when a zone is considered up and running.

This blog post describes enhancements brought in by 11.4 that address these shortcomings of the Zones service in 11.3.

Zones Delegated Restarter and Goals in Oracle Solaris 11.4

To solve the shortcomings outlined in the previous section, Oracle Solaris 11.4 brings the Zones Delegated Restarter (ZDR), which manages the Zones infrastructure and autobooting, and SMF Goals. Each zone aside from the Global Zone is modeled as an SMF instance of the service svc:/system/zones/zone:<zonename>, where the name of the instance is the name of the zone. Note that the Zones configuration was not moved into the SMF repository. For an explanation of what an SMF restarter is, see the section "Restarters" in the smf(7) manual page.
Zone SMF Instances

The ZDR replaces the existing shell script, /lib/svc/method/svc-zones, in the Zones service method with the restarter daemon, /usr/lib/zones/svc.zones. See the svc.zones(8) manual page for more information. The restarter runs under the existing Zones service svc:/system/zones:default. A zone SMF instance is created at zone creation time. The instance is marked as incomplete for zones in the configured state. On zone install, attach, and clone, the zone instance is marked as complete. Conversely, on zone detach or uninstall, the zone instance is marked incomplete. The zone instance is deleted when the zone is deleted via zonecfg(8). An example of listing the Zones instances:

$ svcs svc:/system/zones/zone
STATE          STIME    FMRI
disabled       12:42:55 svc:/system/zones/zone:tzone1
online         16:29:47 svc:/system/zones/zone:s10

$ zoneadm list -vi
  ID NAME      STATUS      PATH                    BRAND      IP
   0 global    running     /                       solaris    shared
   1 s10       running     /system/zones/s10       solaris10  excl
   - tzone1    installed   /system/zones/tzone1    solaris    excl

On start-up, the ZDR creates a zone SMF instance for any zone (save for the Global Zone) that does not have one but is supposed to. Likewise, if there is a zone SMF instance that does not have a corresponding zone, the restarter will remove the instance. The ZDR is responsible for setting up the infrastructure necessary for each zone, spawning a zoneadmd(8) daemon for each zone, and restarting the daemon when necessary. There is a running zoneadmd for each zone in a state greater than configured on the system. ZDR-generated messages related to a particular zone are logged to /var/log/zones/<zonename>.messages, which is where zoneadmd logs as well. Failures during the infrastructure setup for a particular zone will place the zone into the maintenance state. A svcadm clear on the zone instance triggers the ZDR to retry.
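The svcadm clear retry just described is easy to script per zone. A minimal sketch (the zone name tzone1 is illustrative; on a non-Solaris system the function only prints what it would run):

```shell
#!/bin/sh
# Clear a zone's SMF instance so the ZDR retries its infrastructure setup.
zone_fmri() {
    # Map a zone name to its ZDR-managed SMF instance FMRI.
    printf 'svc:/system/zones/zone:%s\n' "$1"
}

clear_zone() {
    fmri=$(zone_fmri "$1")
    if ! command -v svcadm >/dev/null 2>&1; then
        echo "would run: svcadm clear $fmri"    # dry run off-Solaris
        return 0
    fi
    svcadm clear "$fmri"
}

clear_zone tzone1
```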
SMF Goal Services

SMF goals provide a mechanism by which a notification can be generated if a zone is incapable of autobooting to a fully online state (i.e. unless an admin intervenes). With their integration into the ZDR, they can be used to address one of the shortcomings mentioned above: that we had no mechanism for providing information on when a zone is considered up and running. A goal service is a service with the general/goal-service=true property setting. Such a service enters the maintenance state if its dependencies can never be satisfied. Goal services in maintenance automatically leave that state once their dependencies are satisfiable. The goal services failure mechanism is entirely encompassed in the SMF dependency graph engine. Any service can be marked as a goal service. We also introduced a new synthetic milestone modeled as a goal service, svc:/milestone/goals:default. The purpose of this new milestone is to provide a clear, unambiguous, and well-defined point where we consider the system up and running. The dependencies of milestone/goals should be configured to represent the mission-critical services for the system. There is one dependency by default:

root@vzl-143:~# svcs -d milestone/goals
STATE          STIME    FMRI
online         18:35:49 svc:/milestone/multi-user-server:default

While goals are ZDR agnostic, they are a fundamental requirement of the ZDR, which uses the states of the milestone/goals services to determine the state of each non-global zone. To change the dependency, use svcadm goals:

# svcadm goals milestone/multi-user-server:default network/http:apache24
# svcs -d milestone/goals
STATE          STIME    FMRI
online         18:35:49 svc:/milestone/multi-user-server:default
online         19:53:32 svc:/network/http:apache24

To reset (clear) the dependency service set to the default, use svcadm goals -c.

Zone SMF Instance State

The ZDR is notified of the state of the milestone/goals service of each non-global zone that supports it.
The zone instance state of each non-global zone will match the state of its milestone/goals. Kernel Zones that support the milestone/goals service (i.e. those with 11.4+ installed) use internal auxiliary states to report back to the host. Kernel Zones that do not support milestone/goals are considered online when their state is running and their auxiliary state is hotplug-cpu. Zone SMF instances mapping to solaris10(7) branded Zones will have their state driven by the exit code of the zoneadm command. If the zone's milestone/goals is absent or disabled, the ZDR will treat the zone as not having support for milestone/goals. The ZDR can be instructed to ignore milestone/goals for the purpose of moving the zone SMF instance to the online state based only on the success of zoneadm boot; if zoneadm boot fails, the zone SMF instance is placed into maintenance. The switch is controlled by the following SMF property under the ZDR service instance, svc:/system/zones:default:

config/track-zone-goals = true | false

For example, this feature might be useful to providers of VMs to different tenants (IaaS) who do not care about what is running on the VMs but only care whether those VMs are accessible to their tenants.

Zone Dependencies

The set of SMF FMRIs that make up the zone dependencies is defined by a new zonecfg resource, smf-dependency. It consists of fmri and grouping properties. All SMF dependencies for a zone will be of type service and have restart_on none -- we do not want zones being shut down or restarted because of a faulty, flip-flopping dependency. An example of setting dependencies for a zone:

add smf-dependency
set fmri svc:/application/frobnicate:default
end
add smf-dependency
set fmri svc:/system/zones/zone:appfirewall
end
add smf-dependency
set fmri svc:/system/zones/zone:dataload
set grouping=exclude_all
end

The default for grouping is require_all. See the smf(7) manual page for other dependency grouping types.
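Flipping the track-zone-goals switch described above could look like the sketch below. The property name comes from the text; the svcadm refresh step is my assumption about how the change is picked up, and the function only prints the commands so they can be reviewed before piping them to sh on the target host:

```shell
#!/bin/sh
# Emit the commands that make the ZDR ignore milestone/goals, so a zone
# instance goes online on zoneadm boot success alone (the IaaS case above).
track_zone_goals_cmds() {
    val=$1   # true or false
    printf 'svccfg -s system/zones:default setprop config/track-zone-goals=%s\n' "$val"
    printf 'svcadm refresh system/zones:default\n'   # assumption: refresh applies it
}

track_zone_goals_cmds false
```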
Zone Config autoboot Configuration

The zone instance SMF property general/enabled corresponds to the zone configuration property autoboot and is stored in the SMF repository. The existing Zones interfaces, zonecfg(1M) export and zoneadm(1M) detach -n, stay unmodified and contain autoboot in their output. Also, the RAD interfaces accessing the property do not change from 11.3.

Zones Boot Order

There are two ways to establish the zones boot order. One is determined by the SMF dependencies of a zone SMF instance (see smf-dependency above). The other is an assigned boot priority for a zone. Once the SMF dependencies are satisfied for a zone, the zone is placed in a queue according to its priority. The ZDR then boots zones from the highest to the lowest boot priority in the queue. The new zonecfg property is boot-priority:

set boot-priority={ high | default | low }

Note that the boot ordering based on assigned boot priority is best-effort and thus non-deterministic. It is not guaranteed that all zones with higher boot priority will be booted before all zones with lower boot priority. If your configuration requires deterministic behavior, use SMF dependencies.

Zones Concurrent Boot and Suspend/Resume Limits

The ZDR can limit the number of zones concurrently booting up or shutting down, and suspending or resuming. The maximum number of concurrent boot and suspend/resume operations is determined by the following properties on the ZDR service instance:

$ svccfg -s system/zones:default listprop config/concurrent*
config/concurrent-boot-shutdown   count 0
config/concurrent-suspend-resume  count 0

A value of 0, or the absence of a value, means there is no limit imposed by the restarter. If the value is N, the restarter will attempt to boot in parallel at most N zones. The booting process of a non-global zone is considered complete when the milestone/goals of the zone is reached.
If the milestone/goals cannot be reached, the zone SMF instance will be placed into maintenance and the booting process for that zone will be deemed complete from the ZDR perspective. Kernel Zones that do not support milestone/goals are considered up when the zone auxiliary state hotplug-cpu is set. Kernel Zones with goal support use internal auxiliary states to report back to the host. solaris10 branded zones will be considered up when the zoneadm boot command returns.

Integration with FMA

This requirement was automatically achieved by fully integrating the Zones framework with SMF.

Example

Let's take a simple example with zones jack, joe, lady, master, and yesman<0-9>. The master zone depends on lady, lady depends on both jack and joe, and we do not care much about when the yesman<0-9> zones boot up.

+------+
| jack |<---------+     +------+                +--------+
+------+           +----| lady |<---------------| master |
+------+          /     +------+                +--------+
| joe  |<--------+
+------+

+---------+        +---------+
| yesman0 |  ....  | yesman9 |
+---------+        +---------+

Let's not tax the system excessively when booting, so we set the boot concurrency to 2 for this example. Also, let's assume we need a running web server in jack, so we add that one to the goals milestone. Based on the environment we have, we choose to assign the high boot priority to jack and joe, keep lady and master at the default priority, and put all the yesman zones at the low boot priority.
To achieve all of the above, this is what we need to do:

# svccfg -s system/zones:default setprop config/concurrent-boot-shutdown=2
# zlogin jack svcadm goals svc:/milestone/multi-user-server:default apache24
# zonecfg -z jack "set boot-priority=high"
# zonecfg -z joe "set boot-priority=high"
# zonecfg -z lady "add smf-dependency; set fmri=svc:/system/zones/zone:jack; end"
# zonecfg -z lady "add smf-dependency; set fmri=svc:/system/zones/zone:joe; end"
# zonecfg -z master "add smf-dependency; set fmri=svc:/system/zones/zone:lady; end"
# for i in $(seq 0 9); do zonecfg -z yesman$i "set boot-priority=low"; done

During the boot, you may see something like the following. As mentioned above, the boot priority is best effort; also, given we have dependencies, some yesman zones will boot up before some higher-priority zones. You will see that at any given moment during the boot, only two zones are being booted up in parallel (the '*' denotes a service in a state transition, see svcs(1)), as we set the boot concurrency to 2 above.

$ svcs -o STATE,FMRI -s FMRI system/zones/*
STATE          FMRI
offline*       svc:/system/zones/zone:jack
online         svc:/system/zones/zone:joe
offline        svc:/system/zones/zone:lady
offline        svc:/system/zones/zone:master
offline        svc:/system/zones/zone:yesman0
offline        svc:/system/zones/zone:yesman1
offline        svc:/system/zones/zone:yesman2
offline*       svc:/system/zones/zone:yesman3
online         svc:/system/zones/zone:yesman4
online         svc:/system/zones/zone:yesman5
offline        svc:/system/zones/zone:yesman6
offline        svc:/system/zones/zone:yesman7
offline        svc:/system/zones/zone:yesman8
online         svc:/system/zones/zone:yesman9

Conclusion

With the Zones Delegated Restarter introduced in 11.4, we resolved several shortcomings of the Zones framework in 11.3. There is always room for additional enhancements, making the boot ordering based on boot priorities more deterministic, for example. We are open to any feedback you might have on this new Zones Delegated Restarter feature.


Oracle Solaris 11.3 SRU 35 released

Earlier today we released Oracle Solaris 11.3 SRU 35. It's available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support . This SRU introduces the following enhancements:

- Compliance Update Check: allows users to verify the system is not using features which are no longer supported in newer releases
- Oracle VM Server for SPARC has been updated to version 3.5.0.3. More details can be found in the Oracle VM Server for SPARC 3.5.0.3 Release Notes.
- The Java 8, Java 7, and Java 6 packages have been updated
- Explorer 18.3 is now available
- libepoxy has been added to Oracle Solaris
- gnu-gettext has been updated to 0.19.8
- bison has been updated to 3.0.4
- at-spi2-atk has been updated to 2.24.0
- at-spi2-core has been updated to 2.24.0
- gtk+3 has been updated to 3.18.0
- Fixed the missing magick/static.h header in the ImageMagick manifest

The following components have also been updated to address security issues:

- python has been updated to 3.4.8
- BIND has been updated to 9.10.6-P1
- Apache Tomcat has been updated to 8.5.3
- kerberos 5 has been updated to 1.16.1
- Wireshark has been updated to 2.6.2
- Thunderbird has been updated to 52.9.1
- libvorbis has been updated to 1.3.6
- MySQL has been updated to 5.7.23
- gdk-pixbuf, libtiff, jansson, procmail, libgcrypt, libexif

Full details of this SRU can be found in My Oracle Support Doc 2437228.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).


Recovering a Missing Zone After a Repeated Upgrade From Oracle Solaris 11.3 to 11.4

Recap of the Problem

As I explained in my past Shared Zone State in Oracle Solaris 11.4 blog post, if you update to Oracle Solaris 11.4 from 11.3, boot the new 11.4 BE, create/install new zones there, then boot back to 11.3, update again to 11.4, and boot that second 11.4 BE, the zones you created and installed in the first 11.4 BE will no longer be shown in the zoneadm list -c output when you have booted up from the second 11.4 BE. Those zones are missing because the second update from 11.3 to 11.4 replaced the shared zones index file, storing the original one containing the zone entries to a /var/share/zones/index.json.backup.<date>--<time> file. However, I did not explain how we can get such zones back, and that is what I will show in this blog post. There are two pieces missing. One is that the zones are not in the shared index file; the other is that the zones do not have their zone configurations in the current BE.

The Recovery Solution

The fix is quite easy, so let's show how to recover one zone. Either export the zone config from the first 11.4 BE via zonecfg -z <zonename> export > <exported-config> and import it in the other 11.4 BE via zonecfg -z <zonename> -f <exported-config>, or just manually create the zone in the second 11.4 BE with the same configuration. Example:

BE-AAA# zonecfg -z xxx export > xxx.conf
BE-BBB# zonecfg -z xxx -f xxx.conf

In this specific case of multiple updates to 11.4, you could also manually copy <mounted-1st-11.4-be>/etc/zones/<zonename>.xml from the first 11.4 BE (use beadm mount <1st-11.4-be-name> /a to mount it), but note that that's not a supported way to do things, as in general configurations from different system versions may not be compatible. If that is the case, the configuration update is done during the import or on the first boot. However, in this blog entry, I will cheat and use a simple cp(1) since I know that the configuration file is compatible with the BE I'm copying it into. The described recovery solution is brands(7) agnostic.
Example

The example that follows recovers a missing zone, uar. Each BE is indicated by a different shell prompt.

root@s11u4_3:~# zonecfg -z uar create
root@s11u4_3:~# zoneadm -z uar install
root@s11u4_3:~# zoneadm list -cv
  ID NAME      STATUS      PATH                    BRAND      IP
   0 global    running     /                       solaris    shared
   - tzone1    installed   /system/zones/tzone1    solaris    excl
   - uar       installed   /system/zones/uar       solaris    excl
root@s11u4_3:~# beadm activate sru35.0.3
root@s11u4_3:~# reboot -f

root@S11-3-SRU:~# pkg update --be-name=s11u4_3-b -C0 --accept entire@11.4-11.4.0.0.1.3.0
...
root@S11-3-SRU:~# reboot -f

root@s11u4_3-b:~# svcs -xv
svc:/system/zones-upgrade:default (Zone config upgrade after first boot)
 State: degraded since Fri Aug 17 13:39:53 2018
Reason: Degraded by service method: "Unexpected situation during zone index conversion to JSON."
   See: http://support.oracle.com/msg/SMF-8000-VE
   See: /var/svc/log/system-zones-upgrade:default.log
Impact: Some functionality provided by the service may be unavailable.
root@s11u4_3-b:~# zoneadm list -cv
  ID NAME      STATUS      PATH                    BRAND      IP
   0 global    running     /                       solaris    shared
   - tzone1    installed   /system/zones/tzone1    solaris    excl
root@s11u4_3-b:~# beadm mount s11u4_3 /a
root@s11u4_3-b:~# cp /a/etc/zones/uar.xml /etc/zones/
root@s11u4_3-b:~# zonecfg -z uar create
Zone uar does not exist but its configuration file does. To reuse it, use -r; create anyway to overwrite it (y/[n])? n
root@s11u4_3-b:~# zonecfg -z uar create -r
Zone uar does not exist but its configuration file does; do you want to reuse it (y/[n])? y
root@s11u4_3-b:~# zoneadm list -cv
  ID NAME      STATUS      PATH                    BRAND      IP
   0 global    running     /                       solaris    shared
   - tzone1    installed   /system/zones/tzone1    solaris    excl
   - uar       configured  /system/zones/uar       solaris    excl
root@s11u4_3-b:~# zoneadm -z uar attach -u
Progress being logged to /var/log/zones/zoneadm.20180817T134924Z.uar.attach
Zone BE root dataset: rpool/VARSHARE/zones/uar/rpool/ROOT/solaris-0
Updating image format
Image format already current.
Updating non-global zone: Linking to image /.
Updating non-global zone: Syncing packages.
Packages to update: 527
Services to change:   2
...
...
Result: Attach Succeeded.
Log saved in non-global zone as /system/zones/uar/root/var/log/zones/zoneadm.20180817T134924Z.uar.attach

root@s11u4_3-b:~# zoneadm list -cv
  ID NAME      STATUS      PATH                    BRAND      IP
   0 global    running     /                       solaris    shared
   - tzone1    installed   /system/zones/tzone1    solaris    excl
   - uar       installed   /system/zones/uar       solaris    excl

Conclusion

This situation of missing zones on multiple updates from 11.3 to 11.4 is inherently part of the change from BE-specific zone indexes in 11.3 to a shared index in 11.4. You should only encounter it if you go back from 11.4 to 11.3 and update again to 11.4. We assume such situations will not happen often. The final engineering consensus during the design was that users mostly keep going forward, i.e. they update to newer system versions and do not go back; if they do happen to go back to 11.3 and update again to 11.4, they would expect the same list of zones as they had on the 11.3 BE they last used for the 11.4 update.
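The whole recovery for one zone can be condensed into a command generator. A sketch following the session above; the BE and zone names are illustrative, and the cp step is the post's "cheat", which is not generally supported across system versions. It prints the commands for review rather than running them:

```shell
#!/bin/sh
# Emit the recovery commands for one missing zone, per the session above.
recover_zone_cmds() {
    zone=$1 oldbe=$2
    cat <<EOF
beadm mount $oldbe /a
cp /a/etc/zones/$zone.xml /etc/zones/
zonecfg -z $zone create -r
zoneadm -z $zone attach -u
EOF
}

recover_zone_cmds uar s11u4_3
```

Note that zonecfg -z <zone> create -r prompts for confirmation, so run the commands interactively rather than blindly piping them to sh.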



Trapped by Older Software

Help! I am trapped by my Old Application Which Cannot Change. I need to update my Oracle Solaris 11 system to a later Update and/or Support Repository Update (SRU), but I find that upon the update my favourite FOSS component has been removed. My Old Application Which Cannot Change has a dependency upon it and so no longer starts. Help!

Oracle Solaris, by default, will ensure that the software installed on the system is up to date. This includes the removal (uninstall) of software that is obsolete. Packages can be marked as obsolete because the owning community no longer supports them, because they have been replaced by another package or a later major version, or for some other valid reason (we are very careful about the removal of software components). However, by design, the contention between wanting to keep the operating system up to date and allowing for exceptions is addressed by Oracle Solaris's packaging system. This is performed via the 'version locks' within it. Taking an example: Oracle Solaris 11.3 SRU 20 obsoleted Python 2.6, because that version of Python has been end-of-lifed by the Python community. And so updating a system beyond that SRU will result in Python 2.6 being removed. But what happens if you actually need to use Python 2.6 because of some application dependency? Well, the first thing is to check with the application vendor to see if there is a later version that supports the newer version of Python; if so, consider updating to that later version. But maybe there is no later release of the application, so in this instance how do you get python-26 onto your system?
Follow the steps below:

1. Identify the version of the required package, using pkg list -af <name of package>. For example: pkg list -af runtime/python-26
2. Identify whether the package has dependencies that need to be installed: pkg contents -r -t depend runtime/python-26@2.6.8-0.175.3.15.0.4.0 -- The python-26 package is interesting, as it has a conditional dependency upon the package runtime/tk-8, such that it depends upon library/python/tkinter-26. So if tk-8 is installed, then tkinter-26 will need to be installed.
3. Identify the incorporation that locks the package(s): pkg search depend:incorporate:runtime/python-26
4. Using the information from the previous step, find the relevant lock(s): pkg contents -m userland-incorporation | egrep 'runtime/python-26|python/tkinter-26'
5. Unlock the package(s): pkg change-facet version-lock.runtime/python-26=false version-lock.library/python/tkinter-26=false
6. Update the package(s) to the version identified in the first step: pkg update runtime/python-26@2.6.8-0.175.3.15.0.4.0 -- No need to worry about tkinter-26 here, because the dependency within the python-26 package will cause it to be installed.
7. Freeze the package(s) so that further updates will not remove them, adding a comment to the freeze to indicate why the package is installed: pkg freeze -c 'Needed for Old Application' runtime/python-26
8. If required, update the system to the later SRU or Oracle Solaris Update: pkg update

Another complete example, using the current Oracle Solaris 11.4 Beta and Java 7:

# pkg list -af jre-7
NAME (PUBLISHER)       VERSION          IFO
runtime/java/jre-7     1.7.0.999.99     --o
runtime/java/jre-7     1.7.0.191.8      ---
runtime/java/jre-7     1.7.0.181.9      ---
runtime/java/jre-7     1.7.0.171.11     ---
runtime/java/jre-7     1.7.0.161.13     ---
...
# pkg search depend:incorporate:runtime/java/jre-7
INDEX       ACTION VALUE                                PACKAGE
incorporate depend runtime/java/jre-7@1.7.0.999.99,5.11 pkg:/consolidation/java-7/java-7-incorporation@1.7.0.999.99-0

# pkg contents -m java-7-incorporation | grep jre-7
depend fmri=runtime/java/jre-7@1.7.0.999.99,5.11 type=incorporate

Oh. There is no lock. What should we do now? Is there a lock on the Java 7 incorporation that can be used? Yes! See the results of the searching in the first command below. So we can unlock that one and install Java 7.

# pkg search depend:incorporate:consolidation/java-7/java-7-incorporation
INDEX       ACTION VALUE                                                  PACKAGE
incorporate depend consolidation/java-7/java-7-incorporation@1.7.0.999.99 pkg:/entire@11.4-11.4.0.0.1.12.0

# pkg contents -m entire | grep java-7-incorporation
depend fmri=consolidation/java-7/java-7-incorporation type=require
depend facet.version-lock.consolidation/java-7/java-7-incorporation=true fmri=consolidation/java-7/java-7-incorporation@1.7.0.999.99 type=incorporate

# pkg change-facet version-lock.consolidation/java-7/java-7-incorporation=false
            Packages to change:   1
     Variants/Facets to change:   1
       Create boot environment:  No
Create backup boot environment: Yes

PHASE                           ITEMS
Removing old actions              1/1
Updating package state database  Done
Updating package cache            0/0
Updating image state             Done
Creating fast lookup database    Done
Updating package cache            1/1

# pkg list -af java-7-incorporation
NAME (PUBLISHER)                           VERSION          IFO
consolidation/java-7/java-7-incorporation  1.7.0.999.99-0   i--
consolidation/java-7/java-7-incorporation  1.7.0.191.8-0    ---
consolidation/java-7/java-7-incorporation  1.7.0.181.9-0    ---
....
# pkg install --accept jre-7@1.7.0.191.8 java-7-incorporation@1.7.0.191.8-0
           Packages to install:   2
            Packages to update:   1
       Create boot environment:  No
Create backup boot environment: Yes

DOWNLOAD        PKGS       FILES   XFER (MB)  SPEED
Completed        3/3     881/881   71.8/71.8  5.2M/s

PHASE                           ITEMS
Removing old actions              4/4
Installing new actions        1107/1107
Updating modified actions         2/2
Updating package state database  Done
Updating package cache            1/1
Updating image state             Done
Creating fast lookup database    Done
Updating package cache            1/1

# pkg freeze -c 'Needed for Old Application' java-7-incorporation
consolidation/java-7/java-7-incorporation was frozen at 1.7.0.191.8-0:20180711T215211Z

# pkg freeze
NAME                                       VERSION                         DATE                      COMMENT
consolidation/java-7/java-7-incorporation  1.7.0.191.8-0:20180711T215211Z  07 Aug 2018 14:50:34 UTC  Needed for Old Application

A couple of points about the above example. When installing the required version of Java, the corresponding incorporation at the correct version needed to be installed as well. The freeze has been applied to the Java 7 incorporation because that is the package that controls the Java 7 package version. The default version of Java remains Java 8, but that can be changed, as per the next steps below, via the use of mediators (see pkg(1) and look for mediator).
# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b12, mixed mode)

# /usr/jdk/instances/jdk1.7.0/bin/java -version
java version "1.7.0_191"
Java(TM) SE Runtime Environment (build 1.7.0_191-b08)
Java HotSpot(TM) Server VM (build 24.191-b08, mixed mode)

# pkg set-mediator -V 1.7 java
            Packages to change:   3
           Mediators to change:   1
       Create boot environment:  No
Create backup boot environment: Yes

PHASE                           ITEMS
Removing old actions              2/2
Updating modified actions         3/3
Updating package state database  Done
Updating package cache            0/0
Updating image state             Done
Creating fast lookup database    Done
Updating package cache            1/1

# java -version
java version "1.7.0_191"
Java(TM) SE Runtime Environment (build 1.7.0_191-b08)
Java HotSpot(TM) Server VM (build 24.191-b08, mixed mode)

Another example of unlocking packages is in the article More Tips for Updating Your Oracle Solaris 11 System from the Oracle Support Repository. In summary, Oracle Solaris 11 provides a single method to update all the operating system software via a pkg update, but additionally allows for exceptions to be used to permit legacy applications to run.
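The unlock, pin, and freeze sequence walked through above lends itself to a tiny helper that composes the commands for a given package. A sketch using the post's python-26 example; it prints the commands for review rather than executing them, and any additional locks (such as tkinter-26) would need their own change-facet line:

```shell
#!/bin/sh
# Compose the unlock -> pin -> freeze command sequence from the steps above.
unlock_update_freeze() {
    p=$1        # package name, e.g. runtime/python-26
    v=$2        # exact version from 'pkg list -af'
    why=$3      # freeze comment
    printf 'pkg change-facet version-lock.%s=false\n' "$p"
    printf 'pkg update %s@%s\n' "$p" "$v"
    printf "pkg freeze -c '%s' %s\n" "$why" "$p"
}

unlock_update_freeze runtime/python-26 2.6.8-0.175.3.15.0.4.0 \
    'Needed for Old Application'
```

Pipe the output to sh on the target system once you are happy with it.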



Solaris 11: High-Level Steps to Create an IPS Package

Keywords: Solaris package IPS Repository pkg

1 Work on Directory Structure

Start by organizing the package contents (files) into the same directory structure that you want on the installed system. In the following example, the directory was organized in such a manner that installing the package results in the software being copied to the /opt/myutils directory. eg.,

# tree opt
opt
`-- myutils
    |-- docs
    |   |-- README.txt
    |   `-- util_description.html
    |-- mylib.py
    |-- util1.sh
    |-- util2.sh
    `-- util3.sh

Create a directory to hold the software in the desired layout. Let us call this "workingdir"; this directory will be specified in subsequent steps to generate the package manifest and, finally, the package itself. Move the top-level software directory to the "workingdir".

# mkdir workingdir
# mv opt workingdir
# tree -fai workingdir/
workingdir
workingdir/opt
workingdir/opt/myutils
workingdir/opt/myutils/docs
workingdir/opt/myutils/docs/README.txt
workingdir/opt/myutils/docs/util_description.html
workingdir/opt/myutils/mylib.py
workingdir/opt/myutils/util1.sh
workingdir/opt/myutils/util2.sh
workingdir/opt/myutils/util3.sh

2 Generate Package Manifest

The package manifest provides metadata such as the package name, description, version, classification & category, along with the files and directories included and the dependencies, if any, that need to be installed for the target package. The manifest of an existing package can be examined with the help of the pkg contents subcommand. The pkgsend generate command generates the manifest; it takes "workingdir" as input. Piping the output through pkgfmt makes the manifest readable.
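The step 1 layout can be scripted end to end; the sketch below recreates the example payload tree with empty placeholder files so that the remaining steps have something to operate on:

```shell
#!/bin/sh
# Recreate the example layout under workingdir/ so the package installs
# into /opt/myutils on the target system. File contents are placeholders.
set -e
mkdir -p workingdir/opt/myutils/docs
touch workingdir/opt/myutils/docs/README.txt \
      workingdir/opt/myutils/docs/util_description.html \
      workingdir/opt/myutils/mylib.py \
      workingdir/opt/myutils/util1.sh \
      workingdir/opt/myutils/util2.sh \
      workingdir/opt/myutils/util3.sh
chmod 0755 workingdir/opt/myutils/mylib.py   # matches the generated manifest
```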
# pkgsend generate workingdir | pkgfmt > myutilspkg.p5m.1
# cat myutilspkg.p5m.1
dir path=opt owner=root group=bin mode=0755
dir path=opt/myutils owner=root group=bin mode=0755
dir path=opt/myutils/docs owner=root group=bin mode=0755
file opt/myutils/docs/README.txt path=opt/myutils/docs/README.txt owner=root group=bin mode=0644
file opt/myutils/docs/util_description.html path=opt/myutils/docs/util_description.html owner=root group=bin mode=0644
file opt/myutils/mylib.py path=opt/myutils/mylib.py owner=root group=bin mode=0755
file opt/myutils/util1.sh path=opt/myutils/util1.sh owner=root group=bin mode=0644
file opt/myutils/util2.sh path=opt/myutils/util2.sh owner=root group=bin mode=0644
file opt/myutils/util3.sh path=opt/myutils/util3.sh owner=root group=bin mode=0644

3 Add Metadata to Package Manifest

Note that the package manifest is currently missing attributes such as name and description (metadata). Those attributes can be added directly to the generated manifest. However, the recommended approach is to rely on the pkgmogrify utility to make changes to an existing manifest. Create a text file with the missing package attributes. eg.,

# cat mypkg_attr
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global

The set name=variant.opensolaris.zone value=global action restricts the package installation to the global zone. To make the package installable in both global and non-global zones, either specify the set name=variant.opensolaris.zone value=global value=nonglobal action in the package manifest, or do not reference the variant.opensolaris.zone variant at all in the manifest. Now merge the metadata with the manifest generated in the previous step.
# pkgmogrify myutilspkg.p5m.1 mypkg_attr | pkgfmt > myutilspkg.p5m.2
# cat myutilspkg.p5m.2
set name=pkg.fmri value=myutils@3.0,5.11-0
set name=pkg.summary value="Utilities package"
set name=pkg.description value="Utilities package"
set name=variant.arch value=sparc
set name=variant.opensolaris.zone value=global
dir path=opt owner=root group=bin mode=0755
dir path=opt/myutils owner=root group=bin mode=0755
dir path=opt/myutils/docs owner=root group=bin mode=0755
file opt/myutils/docs/README.txt path=opt/myutils/docs/README.txt owner=root group=bin mode=0644
file opt/myutils/docs/util_description.html \
    path=opt/myutils/docs/util_description.html owner=root group=bin mode=0644
file opt/myutils/mylib.py path=opt/myutils/mylib.py owner=root group=bin mode=0755
file opt/myutils/util1.sh path=opt/myutils/util1.sh owner=root group=bin mode=0644
file opt/myutils/util2.sh path=opt/myutils/util2.sh owner=root group=bin mode=0644
file opt/myutils/util3.sh path=opt/myutils/util3.sh owner=root group=bin mode=0644

4 Evaluate & Generate Dependencies

Generate the dependencies so they become part of the manifest. It is recommended to rely on the pkgdepend utility for this task rather than declaring depend actions manually, to minimize inaccuracies. eg.,

# pkgdepend generate -md workingdir myutilspkg.p5m.2 | pkgfmt > myutilspkg.p5m.3

At this point, ensure that the manifest has all the dependencies listed. If not, declare the missing dependencies manually.

5 Resolve Package Dependencies

This step might take a while to complete. eg.,

# pkgdepend resolve -m myutilspkg.p5m.3

6 Verify the Package

By this time the package manifest should be mostly complete. Check and validate it manually, or with the pkglint utility (recommended), for consistency and possible errors.

# pkglint myutilspkg.p5m.3.res

7 Publish the Package

For the purpose of demonstration, let's go with the simplest option to publish the package: a local file-based repository.
Create the local file-based repository using the pkgrepo command, and set the default publisher for the newly created repository.

# pkgrepo create my-repository
# pkgrepo -s my-repository set publisher/prefix=mypublisher

Finally, publish the target package with the help of the pkgsend command.

# pkgsend -s my-repository publish -d workingdir myutilspkg.p5m.3.res
pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z
PUBLISHED
# pkgrepo info -s my-repository
PUBLISHER   PACKAGES STATUS UPDATED
mypublisher 1        online 2018-07-04T01:41:57.414014Z

8 Validate the Package

Finally, validate that the published package has been packaged properly by test-installing it.

# pkg set-publisher -p my-repository
# pkg publisher
# pkg install myutils
# pkg info myutils
             Name: myutils
          Summary: Utilities package
      Description: Utilities package
            State: Installed
        Publisher: mypublisher
          Version: 3.0
    Build Release: 5.11
           Branch: 0
   Packaging Date: Wed Jul 04 01:41:57 2018
Last Install Time: Wed Jul 04 01:45:05 2018
             Size: 49.00 B
             FMRI: pkg://mypublisher/myutils@3.0,5.11-0:20180704T014157Z
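To make the manifest-generation step (step 2) more concrete, here is a minimal, hypothetical Python sketch of what pkgsend generate computes from "workingdir": a set of dir and file actions derived from the payload tree. The function name and the fixed modes are my own simplifications; the real utility also handles links, file modes, and many other attributes.

```python
import os


def generate_actions(workingdir, owner="root", group="bin"):
    """Walk a payload tree and emit simplified IPS-style actions.

    Illustrative sketch only: directories get mode 0755 and files 0644,
    whereas `pkgsend generate` preserves the real modes and handles
    links, hardlinks, and other action types as well.
    """
    actions = []
    for root, dirs, files in os.walk(workingdir):
        for name in sorted(dirs):
            rel = os.path.relpath(os.path.join(root, name), workingdir)
            actions.append(
                "dir path=%s owner=%s group=%s mode=0755" % (rel, owner, group))
        for name in sorted(files):
            rel = os.path.relpath(os.path.join(root, name), workingdir)
            actions.append(
                "file %s path=%s owner=%s group=%s mode=0644" % (rel, rel, owner, group))
    return sorted(actions)
```

Running it over a tree like workingdir/opt/myutils produces action lines in the same shape as the myutilspkg.p5m.1 manifest shown in step 2.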



Oracle Solaris 11.4 Open Beta Refresh 2

As we continue to work toward the release of Oracle Solaris 11.4, we present to you our third release of Oracle Solaris 11.4 open beta. You can download it here, or if you're already running a previous version of Oracle Solaris 11.4 beta, make sure your system is pointing to the beta repo (https://pkg.oracle.com/solaris/beta/) as its publisher and type 'pkg update'. This will be the last Oracle Solaris 11.4 open beta, as we are nearing release and are now going to focus our energies entirely on preparing Oracle Solaris 11.4 for general availability. The key focus of Oracle Solaris 11.4 is to bring new capabilities while maintaining application compatibility, to help you modernize and secure your infrastructure while protecting your application investment. This release is specifically focused on quality and application compatibility, making your transition to Oracle Solaris 11.4 seamless. The refresh includes updates to 56 popular open source libraries and utilities, a new compliance(8) "explain" subcommand which provides details on the compliance checks performed against the system for a given benchmark, and a variety of other performance and security enhancements. In addition, this refresh delivers Kernel Page Table Isolation for x86 systems, which is important in addressing the Meltdown security vulnerability affecting some x86 CPUs. This update also includes an updated version of Oracle VM Server for SPARC, with improvements in console security and live migration, and introduces a LUN masking capability to simplify storage provisioning to guests. We're excited about the content and capability of this update, and you'll be seeing more about specific features and capabilities in the Oracle Solaris blog in the coming days.
As you try out the software in your own environment and with your own applications please continue to give us feedback through the Oracle Solaris Beta Community Forum at https://community.oracle.com/community/server_&_storage_systems/solaris/solaris-beta



Python: Exclusive File Locking on Solaris

Solaris doesn't lock open files automatically (not just Solaris - most *nix operating systems behave this way). In general, when a process is about to update a file, the process is responsible for checking existing locks on the target file, acquiring a lock, and releasing it after updating the file. However, because not all processes cooperate with this mechanism (advisory locking), the non-conforming ones can cause problems such as inconsistent or invalid data, typically triggered by race conditions. Serialization is one possible way to prevent this, where only one process is allowed to update the target file at any time. It can be achieved with the help of the file locking mechanism on Solaris, as on the majority of other operating systems.

On Solaris, a file can be locked for exclusive access by any process with the help of the fcntl() system call, which provides control over open files. It allows finer-grained control over locking -- for instance, we can specify whether or not the call should block while requesting an exclusive or shared lock. The following rudimentary Python code demonstrates how to acquire an exclusive lock on a file that makes all other processes wait to get access to the file in focus. eg.,

% cat -n xflock.py
     1	#!/bin/python
     2	import fcntl, time
     3	f = open('somefile', 'a')
     4	print 'waiting for exclusive lock'
     5	fcntl.flock(f, fcntl.LOCK_EX)
     6	print 'acquired lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')
     7	time.sleep(10)
     8	f.close()
     9	print 'released lock at %s' % time.strftime('%Y-%m-%d %H:%M:%S')

Running the above code in two terminal windows at the same time shows the following.
Terminal 1:
% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:36
released lock at 2018-06-30 22:25:46

Terminal 2:
% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:25:46
released lock at 2018-06-30 22:25:56

Notice that the process running in the second terminal was blocked waiting to acquire the lock until the process running in the first terminal released the exclusive lock.

Non-Blocking Attempt

If the requirement is not to block on exclusive lock acquisition, combine the LOCK_EX (acquire exclusive lock) and LOCK_NB (do not block when locking) operations with a bitwise OR. In other words, the statement fcntl.flock(f, fcntl.LOCK_EX) becomes fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB), so the process will either get the lock or move on without blocking. Be aware that an IOError will be raised when a lock cannot be acquired in non-blocking mode. Therefore, it is the responsibility of the application developer to catch the exception and deal with the situation properly. The behavior changes as shown below after the inclusion of fcntl.LOCK_NB in the sample code above.

Terminal 1:
% ./xflock.py
waiting for exclusive lock
acquired lock at 2018-06-30 22:42:34
released lock at 2018-06-30 22:42:44

Terminal 2:
% ./xflock.py
waiting for exclusive lock
Traceback (most recent call last):
  File "./xflock.py", line 5, in <module>
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
IOError: [Errno 11] Resource temporarily unavailable
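The non-blocking pattern can be wrapped in a small helper that returns a boolean instead of raising. This is a sketch in Python 3 syntax (the post's examples are Python 2); the helper name try_exclusive_lock is my own:

```python
import errno
import fcntl


def try_exclusive_lock(f):
    """Attempt a non-blocking exclusive flock on an open file object.

    Returns True if the lock was acquired, False if another open file
    description already holds it; re-raises any unexpected error.
    """
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except OSError as e:  # IOError is an alias of OSError in Python 3
        if e.errno in (errno.EACCES, errno.EAGAIN):
            return False
        raise
```

Note that flock locks belong to the open file description, so two separate open() calls on the same path conflict even within a single process: a second try_exclusive_lock() on the same path returns False until the first descriptor releases the lock with fcntl.flock(f, fcntl.LOCK_UN).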



Automated management of the Solaris Audit trail

The Solaris audit_binfile(7) module for auditd provides the ability to specify by age (in hours, days, months, etc.) or by file size when to close the currently active audit trail file and start a new one. This is intended to ensure that no single audit file grows too large. What it doesn't do is provide a mechanism to automatically age out old audit records from closed audit files after a period of time. Using the SMF periodic service feature (svc.periodicd) and the auditreduce(8) record selection and merging facilities, we can very easily build some automation. For this example I'm going to assume that the period can be expressed in terms of days alone; that makes implementing this as an SMF periodic service, and converting that policy into arguments for auditreduce(8), nice and easy.

First create the method script in /lib/svc/method/site-audit-manage (making sure it is executable):

#!/bin/sh
/usr/sbin/auditreduce -D $(hostname) -a $(gdate -d "$1 days ago" +%Y%m%d)

This tells auditreduce to merge all of the closed audit files from N days ago into one new file, where N is specified as the first argument. Then we can use svcbundle(8) to turn that into a periodic service.

# svcbundle -i -s service-property=config:days:count:90 -s interval=month -s day_of_month=1 -s start-method="/lib/svc/method/site-audit-manage %{config/days}" -s service-name=site/audit-manage

That creates and installs a new periodic SMF service that will run on the first day of the month and invoke the above method script with 90 as the number of days.
If we later want to change the policy to 180 days, we can do that with the svccfg command thus:

# svccfg -s site/audit-manage setprop config/days = 180
# svccfg -s site/audit-manage refresh

Note that the method script uses the GNU coreutils gdate command to do the easy conversion of "N days ago". It is delivered in pkg:/file/gnu-coreutils; this package is installed by default for the solaris-large-server and solaris-desktop group packages, but not for solaris-small-server or solaris-minimal, so you may need to add it manually.
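The only non-trivial part of the method script is the date arithmetic that gdate performs. As a sketch, the same "N days ago" cutoff string can be computed in Python (the function name is mine):

```python
from datetime import date, timedelta


def audit_cutoff(days_ago):
    """Return the YYYYMMDD date string for N days in the past, matching
    what `gdate -d "N days ago" +%Y%m%d` produces in the method script."""
    return (date.today() - timedelta(days=days_ago)).strftime("%Y%m%d")
```

With the default policy, audit_cutoff(90) yields the date that auditreduce -a receives, so only audit records from files older than 90 days are merged (and, with -D, deleted).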



Python on Solaris

Our colleagues in the Oracle Linux organisation have a nice writeup of their support for Python, and how to get cx_Oracle installed so you can access an Oracle Database. I thought it would be useful to provide an equivalent guide for Oracle Solaris, so here it is. Oracle Solaris has a long history of involvement with Python, starting at least 15 years ago (if not more!). Our Image Packaging System is about 94-95% Python, and we've got about 440k LoC (lines of code) written in Python directly in the ON consolidation. When you look at the Userland consolidation, however, that list grows considerably. From a practical point of view, you cannot install Oracle Solaris without using Python, nor can you have a supportable installation unless you have this system-delivered Python and a whole lot of packages in /usr/lib/python2.7/vendor-packages. We are well aware of the imminent end of support for Python 2.7, so work is underway on migrating not just our modules and commands, but also our tooling -- so that we're not stuck when 2020 arrives.

So how does one find which libraries and modules we ship, without trawling through P5M files in the Userland gate? Simply search the Oracle Solaris IPS publisher, either using the web interface (at https://pkg.oracle.com) or using the command line:

$ pkg search -r \<python\>

which gives you a lot of package names. You'll notice that we version them via a suffix, so while you do get a few screenfuls of output, the list is about 423 packages long. Then installing is very simple:

# pkg install <name-of-package>

just like you would for any other package.
I've made mention of this before, but I think it bears repeating: we make it very, very easy for you to install cx_Oracle and Instant Client so you can connect to the Oracle Database:

# pkg install -v cx_oracle
           Packages to install:  7
           Mediators to change:  1
     Estimated space available: 22.67 GB
Estimated space to be consumed:  1.01 GB
       Create boot environment: No
Create backup boot environment: No
          Rebuild boot archive: No

Changed mediators:
  mediator instantclient:
           version: None -> 12.2 (vendor default)

Changed packages:
solaris
  consolidation/instantclient/instantclient-incorporation
    None -> 12.2.0.1.0-4
  database/oracle/instantclient-122
    None -> 12.2.0.1.0-4
  developer/oracle/odpi
    None -> 2.1.0-11.5.0.0.0.21.0
  library/python/cx_oracle
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-27
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-34
    None -> 6.1-11.5.0.0.0.21.0
  library/python/cx_oracle-35
    None -> 6.1-11.5.0.0.0.21.0

Then it's a simple matter of firing up your preferred Python version, uttering import cx_Oracle, and away you go. Much like this:

>>> import cx_Oracle
>>> tns = """ORCLDNFS=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbkz)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcldnfs)))"""
>>> user = "admin"
>>> passwd = "welcome1"
>>> cnx = cx_Oracle.connect(user, passwd, tns)
>>> stmt = "select wait_class from v$system_event group by wait_class"
>>> curs = cnx.cursor()
>>> curs.execute(stmt).fetchall()
[('Concurrency',), ('User I/O',), ('System I/O',), ('Scheduler',), ('Configuration',), ('Other',), ('Application',), ('Queueing',), ('Idle',), ('Commit',), ('Network',)]

Simple!

Some notes on best practices for Python on Oracle Solaris

While we do aim to package and deliver useful packages, it does happen that perhaps there's a package you need which we don't ship, or which we ship an older version of. How do you get past that problem in a fashion which doesn't affect your system installation?
Unsurprisingly, the answer is not specific to Oracle Solaris: use Python virtual environments. While you could certainly use

$ pip install --user

you can still run afoul of incorrect versions of modules being loaded. Using a virtual environment is cheap, fits in very well with the concept of containerization, and makes the task of producing reproducible builds (aka deterministic compilation) much simpler. We use a similar concept when we're building ON, Solaris Userland, and Solaris IPS. For further information about Python packaging, please visit this tutorial, and review this article on best practices for Python dependency management, which I've found to be one of the best written explanations of what to do, and why. If you have other questions about using Python in Oracle Solaris, please pop in to the Solaris Beta forum and let us know.
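As a concrete sketch of the virtual-environment advice (the paths here are arbitrary, and --without-pip merely keeps the example self-contained; omit it to get pip so you can install modules such as cx_Oracle into the environment):

```shell
# create an isolated Python environment in a scratch directory
VENV_DIR="$(mktemp -d)/demo-venv"
python3 -m venv --without-pip "$VENV_DIR"

# the environment's interpreter reports its own prefix, showing that
# module resolution happens inside the venv, not the system Python
"$VENV_DIR/bin/python" -c 'import sys; print(sys.prefix)'
```

Activating the environment (`. "$VENV_DIR/bin/activate"`) then puts its interpreter and any modules you install there first on PATH, leaving /usr/lib/python2.7/vendor-packages untouched.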



Solaris 11.4: 10 Good-to-Know Features, Enhancements or Changes

[Admins] Device Removal From a ZFS Storage Pool

In addition to removing hot spares, cache, and log devices, Solaris 11.4 supports removal of top-level virtual data devices (vdevs) from a zpool, with the exception of RAID-Z pools. It is also possible to cancel a remove operation that's in progress. This enhancement will come in handy especially when dealing with overprovisioned and/or misconfigured pools. Ref: ZFS: Removing Devices From a Storage Pool for examples.

[Developers & Admins] Bundled Software

Bundled software packages include Python 3.5, Oracle Instant Client 12.2, MySQL 5.7, Cython (C-Extensions for Python), the cx_Oracle Python module, the Go compiler, clang (C language family frontend for LLVM), and so on. cx_Oracle is a Python module that enables accessing Oracle Database 12c and 11g from Python applications. The Solaris packaged version 5.2 can be used with Python 2.7 and 3.4. Depending on the type of Solaris installation, not every software package may get installed by default, but the above-mentioned packages can be installed from the package repository on demand. eg.,

# pkg install pkg:/developer/golang-17
# go version
go version devel a30c3bd1a7fcc6a48acfb74936a19b4c Fri Dec 22 01:41:25 GMT 2017 solaris/sparc64

[Security] Isolating Applications with Sandboxes

Sandboxes are isolated environments where users can run applications protected from other processes on the system, without giving those applications full access to the rest of the system. Put another way, application sandboxing is one way to protect users, applications, and systems by limiting the privileges of an application to its intended functionality, thereby reducing the risk of system compromise. Sandboxing joins Logical Domains (LDoms) and Zones in extending the isolation mechanisms available on Solaris. Sandboxes are suitable for constraining both privileged and unprivileged applications. Temporary sandboxes can be created to execute untrusted processes.
Only administrators with the Sandbox Management rights profile (privileged users) can create persistent, uniquely named sandboxes with specific security attributes. The unprivileged command sandbox can be used to create temporary or named sandboxes to execute applications in a restricted environment. The privileged command sandboxadm can be used to create and manage named sandboxes. To install the security/sandboxing package, run:

# pkg install sandboxing
-OR-
# pkg install pkg:/security/sandboxing

Ref: Configuring Sandboxes for Project Isolation for details. Also see: Oracle Multitenant: Isolation in Oracle Database 12c Release 2 (12.2)

New Way to Find the SRU Level

uname -v was enhanced to include the SRU level. Starting with the release of Solaris 11.4, uname -v reports the Solaris patch version in the format "11.<update>.<sru>.<build>.<patch>".

# uname -v
11.4.0.12.0

The above output translates to Solaris 11 Update 4 SRU 0 Build 12 Patch 0.

[Cloud] Service to Perform Initial Configuration of Guest Operating Systems

The cloudbase-init service on Solaris helps speed up guest VM deployment in a cloud infrastructure by performing initial configuration of the guest OS. Initial configuration tasks typically include user creation, password generation, networking configuration, SSH keys, and so on. The cloudbase-init package is not installed by default on Solaris 11.4. Install the package only into VM images that will be deployed in cloud environments by running:

# pkg install cloudbase-init

Device Usage Information

The release of Solaris 11.4 makes it easy to identify the consumers of busy devices. Busy devices are those devices that are opened or held by a process or kernel module. Having access to device usage information helps with certain hotplug or fault management tasks. For example, a busy device cannot be hotplugged; knowing how a device is currently being used helps users resolve the related issue(s).
On Solaris 11.4, prtconf -v shows the pids of processes using different devices. eg.,

# prtconf -v
...
    Device Minor Nodes:
        dev=(214,72)
            dev_path=/pci@300/pci@2/usb@0/hub@4/storage@2/disk@0,0:a
                spectype=blk type=minor nodetype=ddi_block:channel
                dev_link=/dev/dsk/c2t0d0s0
            dev_path=/pci@300/pci@2/usb@0/hub@4/storage@2/disk@0,0:a,raw
                spectype=chr type=minor nodetype=ddi_block:channel
                dev_link=/dev/rdsk/c2t0d0s0
        Device Minor Opened By:
            proc='fmd' pid=1516
                cmd='/usr/lib/fm/fmd/fmd'
                user='root[0]'
...

[Developers] Support for C11 (C Standard Revision)

Solaris 11.4 includes support for the C11 programming language standard: ISO/IEC 9899:2011 Information technology - Programming languages - C. Note that the C11 standard is not yet part of the Single UNIX Specification. Solaris 11.4 supports C11 in addition to C99 to provide customers with C11 support ahead of its inclusion in a future UNIX specification. That means developers can write C programs using the newest available C programming language standard on Solaris 11.4 (and later).

pfiles on a Core Dump

pfiles, a /proc debugging utility, has been enhanced in Solaris 11.4 to report the file descriptors opened by a crashed process in addition to the files opened by a live process. In other words, "pfiles core" now works.

Privileged Command Execution History

A new command, admhist, was included in Solaris 11.4 to show, in human-readable form, successful system administration commands that are likely to have modified the system state. It is similar to the shell builtin "history". eg., the following command displays the system administration events that occurred on the system today.

# admhist -d "today" -v
...
2018-05-31 17:43:21.957-07:00 root@pitcher.dom.com cwd=/ /usr/bin/sparcv9/python2.7 /usr/bin/64/python2.7 /usr/bin/pkg -R /zonepool/p6128-z1/root/ --runid=12891 remote --ctlfd=8 --progfd=13
2018-05-31 17:43:21.959-07:00 root@pitcher.dom.com cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2018-05-31 17:43:22.413-07:00 root@pitcher.dom.com cwd=/ /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg install sandboxing
2018-05-31 17:43:22.415-07:00 root@pitcher.dom.com cwd=/ /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2018-05-31 18:59:52.821-07:00 root@pitcher.dom.com cwd=/root /usr/bin/sparcv9/pkg /usr/bin/64/python2.7 /usr/bin/pkg search cloudbase-init
..

It is possible to narrow the results by date, time, zone, and audit tag. Ref: the admhist(8) man page.

[Developers] Process Control Library

Solaris 11.4 includes a new process control library, libproc, which provides a high-level interface to the features of the /proc interface. The library also provides access to information, such as symbol tables, that is useful when examining and controlling processes and threads. A controlling process using libproc can typically:

    Grab another process by suspending its execution
    Examine the state of that process
    Examine or modify the address space of the grabbed process
    Make that process execute system calls on behalf of the controlling process, and
    Release the grabbed process to continue execution

Ref: the libproc(3LIB) man page for an example and details.
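The "New Way to Find the SRU Level" format above is easy to split programmatically, which is handy in inventory or compliance scripts. A hypothetical sketch (function and field names are mine):

```python
def parse_solaris_version(version):
    """Split a Solaris 11.4+ `uname -v` string of the form
    "11.<update>.<sru>.<build>.<patch>" into named integer fields."""
    fields = ("release", "update", "sru", "build", "patch")
    parts = [int(part) for part in version.split(".")]
    if len(parts) != len(fields):
        raise ValueError("unexpected version format: %r" % version)
    return dict(zip(fields, parts))
```

For example, parse_solaris_version("11.4.0.12.0") identifies Solaris 11 Update 4 SRU 0 Build 12 Patch 0, matching the reading given in the post.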


Easily Migrate to Oracle Solaris 11 on New SPARC Hardware

We have been working very hard to make it easy for you to migrate your applications to newer, faster SPARC hardware and Oracle Solaris 11. This post provides an overview of the process and the tools that automate the migration. Migration helps you modernize IT assets, lower infrastructure costs through consolidation, and improve performance. Oracle SPARC T8 servers, SPARC M8 servers, and Oracle SuperCluster M8 Engineered Systems serve as perfect consolidation platforms for migrating legacy workloads running on old systems. Applications migrated to faster hardware and Oracle Solaris 11 will automatically deliver better performance without requiring any architecture or code changes. You can migrate your operating environment and applications using both physical-to-virtual (P2V) and virtual-to-virtual (V2V) tools. The target environment can either be configured with Oracle VM for SPARC (LDoms) or Oracle Solaris Zones on the new hardware. You can also migrate to the Dedicated Compute Classic - SPARC Model 300 in Oracle Compute Cloud and benefit from Cloud capabilities. Migration Options In general there are two options for migration. 1) Lift and Shift of Applications to Oracle Solaris 11 The application on the source system is re-hosted on new SPARC hardware running Oracle Solaris 11. If your application is running on Oracle Solaris 10 on the source system, lift and shift of the application is preferred where possible because a full Oracle Solaris 11 stack will perform better and is easier to manage. With the Oracle Solaris Binary Application Guarantee, you will get the full benefits of OS modernization while still preserving your application investment. 2) Lift and Shift of the Whole System The operating environment and application running on the system are lifted as-is and re-hosted in an LDom or Oracle Solaris Zone on target hardware running Oracle Solaris 11 in the control domain or global zone. 
If you are running Oracle Solaris 10 on the source system and your application has dependencies on Solaris 10 services, you can either migrate to an Oracle Solaris 10 Branded Zone or an Oracle Solaris 10 guest domain on the target. Oracle Solaris 10 Branded Zones help you maintain an Oracle Solaris 10 environment for the application while taking advantage of Oracle Solaris 11 technologies in the global zone on the new SPARC hardware.

Migration Phases

There are three key phases in migration planning and execution.

1) Discovery

This includes discovery and assessment of existing physical and virtual machines, their current utilization levels, and dependencies between systems hosting multi-tier applications or running highly available (HA) Oracle Solaris Cluster type configurations. This phase helps you identify the candidate systems for migration and the dependency order for performing the migrations.

2) Size the Target Environment

This requires capacity planning of the target environment to accommodate the incoming virtual machines. It takes into account the resource utilization levels on the source machine, the performance characteristics of the modern target hardware running Oracle Solaris 11, and the cost savings that result from higher performance.

3) Execute the Migration

Migration can be accomplished using P2V and V2V tools for LDoms and Oracle Solaris Zones. We are continually enhancing migration tools and publishing supporting documentation. As a first step in this exercise, we are releasing LDom V2V tools that help users migrate Oracle Solaris 10 or Oracle Solaris 11 guest domains running on old SPARC systems to modern hardware running Oracle Solaris 11 in the control domain. One of the migration scenarios is illustrated here.

Three commands are used to perform the LDom V2V migration.

1) ovmtcreate runs on the source machine to create an Open Virtualization Appliance (OVA) file, called an OVM Template.
2) ovmtdeploy runs on the target machine to deploy the guest domain. 3) ovmtconfig runs on the target machine to configure the guest domain.     In the documented example use case, validation is performed using an Oracle Database workload. Database service health is monitored using Oracle Enterprise Manager (EM) Database Express. Migration Resources We have a Lift and Shift Guide that documents the end-to-end migration use case and a White Paper that provides an overview of the process. Both documents are available at: Lift and Shift Documentation Library Stay tuned for more updates on the tools and documentation for LDom and Oracle Solaris Zone migrations for both on-premise deployments and to SPARC Model 300 in Oracle Compute Cloud. Oracle Advanced Customer Services (ACS) offers SPARC Solaris Migration services, and they can assist you with migration planning and execution using the tools developed by Solaris Engineering.


Scheduled Pool Scrubs in Oracle Solaris ZFS

Recommended best practices for protecting your data with ZFS include using ECC memory, configuring pool redundancy and hot spares, and always having current backups of critical data. Because storage devices can fail over time, pool scrubs are also recommended to identify and resolve data inconsistencies caused by failing devices or other issues. Additionally:

Data inconsistencies can occur over time. The earlier these issues are identified and resolved, the higher overall data availability will be.

Disks with bad data blocks can be identified during a routine pool scrub, and the problem resolved before multiple disk failures become a risk.

The Oracle Solaris 11.4 release includes a new pool property for scheduling a pool scrub, and also introduces a read-only property for monitoring when the last pool scrub occurred. Ongoing pool scrubs are recommended for routine pool maintenance; the general best practice is to scrub once per month or once per quarter for data-center-quality drives. This new feature enables you to more easily schedule routine pool scrubs. If you install a new Solaris 11.4 system or upgrade your existing Solaris 11 system to Solaris 11.4, the new scrubinterval pool property is set to 30 days (1 month) by default. For example:

% zpool get scrubinterval export
NAME    PROPERTY       VALUE  SOURCE
export  scrubinterval  1m     default

If you have multiple pools on your system, the default scheduled scrubs are staggered so that they do not all begin at the same time. You can specify your own scrubinterval in days, weeks, or months. If scrubinterval is set to manual, this feature is disabled. The read-only lastscrub property identifies the start time of the last scrub as follows:

% zpool get lastscrub export
NAME    PROPERTY   VALUE   SOURCE
export  lastscrub  Apr_03  local

A pool scrub runs in the background and at a low priority.
When a scrub is scheduled using this feature, a best effort is made not to impact an existing scrub or resilver operation; the scheduled scrub might be cancelled if those operations are already running. Any running scrub (scheduled or manually started) can be cancelled by using the following command:

# zpool scrub -s tank
# zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub canceled on Mon Apr 16 13:23:00 2018
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0

errors: No known data errors

In summary, pool scrubs are an important part of routine pool maintenance, identifying and repairing data inconsistencies. ZFS scheduled scrubs provide a way to automate pool scrubs in your environment.
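Monitoring scripts that read the scrubinterval property may want to normalize it to a day count. Here is a hypothetical sketch, assuming the single-letter unit suffixes d/w/m shown above (days, weeks, months) and approximating a month as 30 days; the function name is mine:

```python
def scrubinterval_days(value):
    """Convert a scrubinterval-style setting such as '30d', '4w', or '1m'
    into a day count; return None when scrubbing is set to 'manual'."""
    if value == "manual":
        return None  # scheduled scrubbing disabled
    unit_days = {"d": 1, "w": 7, "m": 30}
    return int(value[:-1]) * unit_days[value[-1]]
```

For example, the default '1m' shown by `zpool get scrubinterval export` normalizes to 30 days under this assumption.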


Solaris 11.4: Three Zones Related Changes in 3 Minutes or Less

[ 1 ] Automatic Live Migration of Kernel Zones Using the sysadm Utility

Live migrate (evacuate) all kernel zones from a host system onto other systems, temporarily or permanently, with the help of the new sysadm(8) utility. In addition, it is possible to evacuate all zones, including kernel zones that are not running and native Solaris zones in the installed state.

If the target host (that is, the host the zone will be migrated to) meets all evacuation requirements, set it as the destination host for one or more migrating kernel zones by setting the SMF service property evacuation/target.

svccfg -s svc:/system/zones/zone:<migrating-zone> setprop evacuation/target=ssh://<dest-host>

Put the source host in maintenance mode using the sysadm utility to prevent non-running zones from attaching, booting, or migrating in zones from other hosts.

sysadm maintain <options>

Migrate the zones to their destination host(s) by running sysadm's evacuate subcommand.

sysadm evacuate <options>

Complete the system maintenance work, then end maintenance mode on the source host.

sysadm maintain -e

Optionally, bring the evacuated zones back to the source host. Please refer to Evacuating Oracle Solaris Kernel Zones for detailed steps.

[ 2 ] Moving Solaris Zones Across Different Storage URIs

Starting with the release of Solaris 11.4, zoneadm's move subcommand can be used to change the zonepath without moving the Solaris zone installation. In addition, the same command can be used to move a zone from a local file system to shared storage, from shared storage to a local file system, and from one shared storage location to another.

[ 3 ] ZFS Dataset Live Zone Reconfiguration

Live Zone Reconfiguration (LZR) is the ability to make changes to a running Solaris native zone configuration permanently or temporarily. In other words, LZR avoids rebooting the target zone. Solaris 11.3 already supports reconfiguring resources such as dedicated cpus, capped memory, and automatic networks (anets).
Solaris 11.4 extends the LZR support to ZFS datasets. With the release of Solaris 11.4, privileged users can add or remove ZFS datasets dynamically to and from a Solaris native zone without the need to reboot the zone. eg.,

# zoneadm list -cv
  ID NAME     STATUS   PATH                BRAND    IP
   0 global   running  /                   solaris  shared
   1 tstzone  running  /zonepool/tstzone   solaris  excl

Add a ZFS filesystem to the running zone, tstzone:

# zfs create zonepool/testfs
# zonecfg -z tstzone "info dataset"
# zonecfg -z tstzone "add dataset; set name=zonepool/testfs; end; verify; commit"
# zonecfg -z tstzone "info dataset"
dataset:
        name: zonepool/testfs
        alias: testfs
# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Adding dataset name=zonepool/testfs
zone 'tstzone': Applying the changes
# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist
# zlogin tstzone "zpool import testfs"
# zlogin tstzone "zfs list testfs"
NAME     USED  AVAIL  REFER  MOUNTPOINT
testfs    31K  1.63T    31K  /testfs

Remove a ZFS filesystem from the running zone, tstzone:

# zonecfg -z tstzone "remove dataset name=zonepool/testfs; verify; commit"
# zonecfg -z tstzone "info dataset"
# zlogin tstzone "zpool export testfs"
# zoneadm -z tstzone apply
zone 'tstzone': Checking: Modifying anet linkname=net0
zone 'tstzone': Checking: Removing dataset name=zonepool/testfs
zone 'tstzone': Applying the changes
# zlogin tstzone "zfs list testfs"
cannot open 'testfs': filesystem does not exist
# zfs destroy zonepool/testfs

A summary of LZR support for resources and properties in native and kernel zones can be found in this page.
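The add sequence above follows a fixed pattern, so it can be generated for any dataset. Here is a small illustrative Python sketch that only builds the command strings shown in the transcript; the helper name is my own invention, and the script does not execute anything.

```python
# Generate the zonecfg/zoneadm/zlogin commands from the LZR walkthrough
# above for adding a ZFS dataset to a running zone. The helper name is
# hypothetical; the command strings mirror the transcript in the post.

def lzr_add_dataset(zone: str, dataset: str) -> list[str]:
    """Return the ordered commands to add `dataset` to running `zone`."""
    alias = dataset.split("/")[-1]  # e.g. zonepool/testfs -> alias testfs
    return [
        f"zfs create {dataset}",
        f'zonecfg -z {zone} "add dataset; set name={dataset}; end; verify; commit"',
        f"zoneadm -z {zone} apply",
        f'zlogin {zone} "zpool import {alias}"',
    ]

if __name__ == "__main__":
    for cmd in lzr_add_dataset("tstzone", "zonepool/testfs"):
        print(cmd)
```

Printing the list for zone tstzone and dataset zonepool/testfs reproduces the four commands used in the walkthrough above.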


Shared Zone State in Oracle Solaris 11.4

Overview

Since Oracle Solaris 11.4, the state of zones on the system is kept in a shared database in /var/share/zones/, meaning a single database is accessed from all boot environments (BEs). Up until Oracle Solaris 11.3, however, each BE kept its own local copy in /etc/zones/index, and the individual copies were never synced across BEs. This article provides some history, explains why we moved to the shared zone state database, and describes what it means for administrators when updating from 11.3 to 11.4.

Keeping Zone State in 11.3

In Oracle Solaris 11.3, the state of zones is associated separately with every global zone BE in a local text database, /etc/zones/index. The zoneadm and zonecfg commands then operate on a specific copy based on which BE is booted.

In a world of systems being migrated between hosts, having a local zone state database in every BE constitutes a problem if we, for example, update to a new BE and then migrate a zone before booting into the newly updated BE. When we eventually boot the new BE, the zone will end up in the unavailable state (the system recognizes the shared storage is already in use and puts the zone into that state), suggesting that it could possibly be attached. However, as the zone was already migrated, an admin expects the zone to be in the configured state instead.

The 11.3 implementation may also lead to a situation where all BEs on a system represent the same Solaris instance (see below for the definition of a Solaris instance), and yet every BE can be linked to a non-global zone BE (ZBE) for a zone of the same name with the ZBE containing an unrelated Solaris instance. Such a situation happens on 11.3 if we reinstall a chosen non-global zone in each BE.

Solaris Instance

A Solaris instance represents a group of related IPS images. Such a group is created when a system is installed. One installs a system from the media or an install server, via "zoneadm install" or "zoneadm clone", or from a clone archive.
Subsequent system updates add new IPS images to the same image group that represents the same Solaris instance.

Uninstalling a Zone in 11.3

In 11.3, uninstalling a non-global zone (i.e., the native solaris(5) branded zone) means deleting the ZBEs linked to the presently booted BE and updating the state of the zone in /etc/zones/index. Often it is only one ZBE that is deleted. ZBEs linked to other BEs are not affected. Destroying a BE only destroys the ZBE(s) linked to that BE. The only supported way to completely uninstall a non-global zone from a system is to boot each BE and uninstall the zone from there.

For Kernel Zones (KZs) installed on ZVOLs, each BE that has the zone in its index file is linked to the ZVOL via a <dataset>.gzbe:<gzbe_uuid> attribute. Uninstalling a Kernel Zone on a ZVOL removes that BE-specific attribute, and the ZVOL is deleted during the KZ uninstall or BE destroy only if no other BE is linked to the ZVOL. In 11.3, the only supported way to completely uninstall a KZ from a system was to boot every BE and uninstall the KZ from there.

What can happen when one reinstalls zones during the normal system life cycle is depicted in the following pictures. Each color represents a unique Solaris instance.

The first picture shows a usual situation with solaris(5) branded zones. After the system is installed, two zones are installed there into BE-1. Following that, on every update, the zones are updated as part of the normal pkg update process.

The next picture, though, shows what happens if zones are reinstalled during the normal life cycle of the system. In BE-3, Zone X was reinstalled, while Zone Y was reinstalled in BE-2, BE-3, and BE-4 (but not in BE-5). That leads to a situation where there are two different instances of Zone X on the system, and four different Zone Y instances. Which zone instance is used depends on what BE the system is booted into.
Note that the system itself and any zone always represent different Solaris instances.

Undesirable Impact of the 11.3 Behavior

The described behavior could lead to undesired situations in 11.3:

  - With multiple ZBEs present, if a non-global zone is reinstalled, we end up with ZBEs under the zone's rpool/VARSHARE dataset representing Solaris instances that are unrelated and yet share the same zone name. That leads to the possible migration problems mentioned in the first section.
  - If a Kernel Zone is used in multiple BEs and the KZ is uninstalled and then re-installed, the installation fails with an error message that the ZVOL is in use in other BEs.
  - The only supported way to uninstall a zone is to boot into every BE on a system and uninstall the zone from there. With multiple BEs, that is definitely a suboptimal solution.

Sharing a Zone State across BEs in 11.4

As already stated in the Overview section, in Oracle Solaris 11.4 the system shares zone state across all BEs. The shared zone state resolves the issues mentioned in the previous sections. In most situations, nothing changes for users and administrators, as there were no changes in the existing interfaces (some were extended, though). The major implementation change was to move the local, line-oriented textual database /etc/zones/index to the shared directory /var/share/zones/ and store it in JSON format. However, as before, the location and format of the database are just implementation details and are not part of any supported interface.

To be precise, we did EOF (end-of-feature) the "zoneadm -R <root> mark" interface as part of the Shared Zone State project. That interface was of very little use already, and then only in some rare maintenance situations.

Also, let us be clear that all 11.3 systems use and will continue to use the local zone state index /etc/zones/index. We have no plans to update 11.3 to use the shared zone state database.
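As a rough illustration of what moving from the line-oriented index to JSON involves, here is a minimal Python sketch. It assumes, purely for illustration, that the 11.3 index uses colon-separated `zonename:state:zonepath:uuid` lines with `#` comments; this is an assumption about the format, not a statement of the actual file layout or of the conversion code Solaris ships.

```python
import json

# Hypothetical sample in the assumed 11.3 /etc/zones/index format:
# colon-separated fields, comment lines starting with '#'.
SAMPLE_INDEX = """\
# comments describing the file format would appear here
global:installed:/
tstzone:installed:/zonepool/tstzone:0f2d8e9c-1111-2222-3333-444455556666
"""

def index_to_json(text: str) -> str:
    """Convert the assumed line-oriented index into a JSON document."""
    zones = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        fields = line.split(":")
        zone = {"name": fields[0], "state": fields[1], "zonepath": fields[2]}
        if len(fields) > 3 and fields[3]:
            zone["uuid"] = fields[3]
        zones.append(zone)
    return json.dumps({"zones": zones}, indent=2)

if __name__ == "__main__":
    print(index_to_json(SAMPLE_INDEX))
```

The point of the sketch is simply that a shared, structured format like JSON can carry the same records as the per-BE text file while living in one place under /var/share/zones/.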
Changes between 11.3 and 11.4 with regard to keeping the Zones State

With the introduction of the shared zone state, you can run into situations that before were either just not possible or could only have been created via some unsupported behavior.

Creating a new zone on top of an existing configuration

When deleting a zone, the zone record is removed from the shared index database and the zone configuration is deleted from the present BE. Mounting all other non-active BEs and removing the configuration files from there would be quite time consuming, so those files are left behind. That means that if one later boots into one of those previously non-active BEs and tries to create the zone there (which does not exist, as we removed it from the shared database before), zonecfg may hit an already existing configuration file. We extended the zonecfg interface so that you have a choice to overwrite the configuration or reuse it:

root# zonecfg -z zone1 create
Zone zone1 does not exist but its configuration file does. To reuse it, use -r;
create anyway to overwrite it (y/[n])? n
root# zonecfg -z zone1 create -r
Zone zone1 does not exist but its configuration file does; do you want to reuse it (y/[n])? y

Existing Zone without a configuration

If an admin creates a zone, the zone record is put into the shared index database. If the system is later booted into a BE that existed before the zone was created, that BE will not have the zone configuration file (unless one was left behind before, obviously), but the zone will be known, as the zone state database is shared across all 11.4 BEs. In that case, the zone state in such a BE will be reported as incomplete (note that not even the zone brand is known, as that is also part of the zone configuration).
root# zoneadm list -cv
  ID NAME     STATUS      PATH   BRAND    IP
   0 global   running     /      solaris  shared
   - zone1    incomplete  -      -        -

When listing the auxiliary state, you will see that the zone has no configuration:

root# zoneadm list -sc
NAME    STATUS      AUXILIARY STATE
global  running
zone1   incomplete  no-config

If you want to remove that zone, the -F option is needed. As removing the zone will make it invisible from all BEs, possibly including those with a usable zone configuration, we introduced a new option to make sure an administrator does not accidentally remove such zones.

root# zonecfg -z newzone delete
Use -F to delete an existing zone without a configuration file.
root# zonecfg -z newzone delete -F
root# zoneadm -z newzone list
zoneadm: zone 'newzone': No such zone exists

Updating to 11.4 while a shared zone state database already exists

When the system is updated from 11.3 to 11.4 for the first time, the shared zone database is created from the local /etc/zones/index on the first boot into the 11.4 BE. A new svc:/system/zones-upgrade:default service instance takes care of that. If the system is then brought back to 11.3 and updated to 11.4 again, the system (i.e., the service instance mentioned above) will find an existing shared index when first booting into this new 11.4 BE, and if the two indexes differ, there is a conflict that must be taken care of.

In order not to add more complexity to the update process, the rule is that on every update from 11.3 to 11.4, any existing shared zone index database /var/share/zones/index.json is overwritten with data converted from the /etc/zones/index file of the 11.3 BE the system was updated from, if the data differs. If there were no changes to zones in between the 11.4 updates, either in any older 11.4 BEs or in the 11.3 BE we are again updating to 11.4 from, there are no changes in the shared zone index database, and the existing shared database needs no overwrite. However, if there are changes to the zones, e.g.
a zone was created, installed, uninstalled, or detached, then the old shared index database is saved on boot, the new index is installed, and the service instance svc:/system/zones-upgrade:default is put into the degraded state. As the output of svcs -xv in 11.4 now newly reports services in the degraded state as well, that serves as a hint to a system administrator to go and check the service log:

root# beadm list
BE          Flags Mountpoint Space  Policy Created
--          ----- ---------- -----  ------ -------
s11_4_01    -     -          2.93G  static 2018-02-28 08:59
s11u3_sru24 -     -          12.24M static 2017-10-06 13:37
s11u3_sru31 NR    /          23.65G static 2018-04-24 02:45
root# zonecfg -z newzone create
root# pkg update entire@latest
root# reboot -f
root# beadm list
BE Name     Flags Mountpoint Space  Policy Created
----------- ----- ---------- ------ ------ ----------------
s11_4_01    -     -          2.25G  static 2018-02-28 08:59
s11u3_sru24 -     -          12.24M static 2017-10-06 13:37
s11u3_sru31 -     -          2.36G  static 2018-04-24 02:45
s11_4_03    NR    /          12.93G static 2018-04-24 04:24
root# svcs -xv
svc:/system/zones-upgrade:default (Zone config upgrade after first boot)
 State: degraded since April 24, 2018 at 04:32:58 AM PDT
Reason: Degraded by service method: "Unexpected situation during zone index conversion to JSON."
   See: http://support.oracle.com/msg/SMF-8000-VE
   See: /var/svc/log/system-zones-upgrade:default.log
Impact: Some functionality provided by the service may be unavailable.
root# tail /var/svc/log/system-zones-upgrade:default.log
[ 2018 Apr 24 04:32:50 Enabled. ]
[ 2018 Apr 24 04:32:56 Executing start method ("/lib/svc/method/svc-zones-upgrade"). ]
Converting /etc/zones/index to /var/share/zones/index.json.
Newly generated /var/share/zones/index.json differs from the previously existing one.
Forcing the degraded state.
Please compare current /var/share/zones/index.json with the original one saved as
/var/share/zones/index.json.backup.2018-04-24--04-32-57, then clear the service.
Moving /etc/zones/index to /etc/zones/index.old-format.
Creating old format skeleton /etc/zones/index.
[ 2018 Apr 24 04:32:58 Method "start" exited with status 103. ]
[ 2018 Apr 24 04:32:58 "start" method requested degraded state: "Unexpected situation during zone index conversion to JSON" ]
root# svcadm clear svc:/system/zones-upgrade:default
root# svcs svc:/system/zones-upgrade:default
STATE   STIME   FMRI
online  4:45:58 svc:/system/zones-upgrade:default

If you diff(1)'ed the backed-up JSON index and the present one, you would see that the zone newzone was added. The new index could also be missing some zones that were created before. Based on the index diff output, the administrator can create or remove zones on the system as necessary, using the standard zonecfg(8) command, possibly reusing existing configurations as shown above. Also note that the degraded state here did not mean any degradation of functionality; its sole purpose was to notify the admin about the situation.

Do not use BEs as a Backup Technology for Zones

The previous implementation in 11.3 allowed for using BEs with linked ZBEs as a backup solution for zones. That means that if a zone was uninstalled in the current BE, one could usually boot into an older BE or, in the case of non-global zones, try to attach and update another existing ZBE from the current BE using the "attach -z <ZBE> -u/-U" subcommand. With the current implementation, uninstalling a zone means uninstalling the zone from the system, and that means uninstalling all ZBEs (or the ZVOL in the case of Kernel Zones) as well. If you uninstall a zone in 11.4, it is gone. If an admin used the previous implementation also as a convenient backup solution, we recommend using archiveadm instead, whose functionality also provides for backing up zones.

Future Enhancements

An obvious future enhancement would be shared zone configurations across BEs. However, it is not on our short-term plan at this point, and neither can we guarantee this functionality will ever be implemented.
One thing is clear: it would be a more challenging task than the shared zone state.
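Returning to the update scenario above, the comparison an administrator makes between the backed-up index and the current /var/share/zones/index.json can be sketched as a small script. This is an illustrative Python sketch, not tooling Solaris provides, and it assumes (for illustration only) that the JSON index holds a list of zone records with a `name` field.

```python
import json

def diff_zone_indexes(backup_json: str, current_json: str):
    """Report zone names added to or removed from the shared index.

    Assumes, hypothetically, each index looks like
    {"zones": [{"name": ..., "state": ...}, ...]}.
    """
    backup = {z["name"] for z in json.loads(backup_json)["zones"]}
    current = {z["name"] for z in json.loads(current_json)["zones"]}
    return sorted(current - backup), sorted(backup - current)

if __name__ == "__main__":
    old = '{"zones": [{"name": "global", "state": "running"}]}'
    new = ('{"zones": [{"name": "global", "state": "running"},'
           ' {"name": "newzone", "state": "configured"}]}')
    added, removed = diff_zone_indexes(old, new)
    print("added:", added, "removed:", removed)
```

In the degraded-service scenario shown earlier, such a comparison would report newzone as added, which is exactly what the administrator then reconciles with zonecfg(8).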


Solaris 10 Extended Support Patches & Patchsets Released!

On Tuesday, April 17, we released the first batch of Solaris 10 patches and patchsets under Solaris 10 Extended Support. There were a total of 24 Solaris 10 patches, including kernel updates, and 4 patchsets released on MOS! Solaris 10 Extended Support will run thru January 2021. Scott Lynn put together a very informative blog on Solaris 10 Extended Support detailing the benefits that customers can get by purchasing Extended Support for Solaris 10 - see https://blogs.oracle.com/solaris/oracle-solaris-10-support-explained.

Those of you that have taken advantage of our previous Extended Support offerings for Solaris 8 and Solaris 9 will notice that we've changed things around a little with Solaris 10 Extended Support; previously, we did not publish any updates to the Solaris 10 Recommended Patchsets during the Extended Support period. This meant that the Recommended Patchsets remained available to all customers with Premier Operating Systems support, as all the patches the patchsets contained had Operating Systems entitlement requirements.

Moving forward with Solaris 10 Extended Support, the decision has been made to continue to update the Recommended Patchsets thru the Solaris 10 Extended Support period. This means customers that purchase Solaris 10 Extended Support get the benefit of continued Recommended Patchset updates, as patches that meet the criteria for inclusion in the patchsets are released. During the Solaris 10 Extended Support period, the updates to the Recommended Patchsets will contain patches that require a Solaris 10 Extended Support contract, so the Solaris 10 Recommended Patchsets will also require a Solaris 10 Extended Support contract during this period.
For customers that do not wish to avail of Extended Support and would like to access the last Recommended Patchsets created prior to the beginning of Extended Support for Solaris 10, the January 2018 Critical Patch Updates (CPUs) for Solaris 10 will remain available to those with Premier Operating System Support. The CPU Patchsets are rebranded versions of the Recommended Patchset on the CPU dates; the patches included in the CPUs are identical to the Recommended Patchset released on those CPU dates, but the CPU READMEs are updated to reflect their use as CPU resources. CPU patchsets are archived and are always available via MOS at later dates, so that customers can easily align to their desired CPU baseline at any time. A further benefit that only Solaris 10 Extended Support customers will receive is access to newly created CPU Patchsets for Solaris 10 thru the Extended Support period.

The following table provides a quick reference to the recent Solaris 10 patchsets that have been released, including details of the support contract required to access them (each patchset links on MOS to its Patchset Details, README, and Download):

  Patchset Name                                  Support Contract Required
  Recommended OS Patchset for Solaris 10 SPARC   Extended Support
  Recommended OS Patchset for Solaris 10 x86     Extended Support
  CPU OS Patchset 2018/04 Solaris 10 SPARC       Extended Support
  CPU OS Patchset 2018/04 Solaris 10 x86         Extended Support
  CPU OS Patchset 2018/01 Solaris 10 SPARC       Operating Systems Support
  CPU OS Patchset 2018/01 Solaris 10 x86         Operating Systems Support

Please reach out to your local sales representative if you wish to get more information on the benefits of purchasing Extended Support for Solaris 10.


Oracle Solaris ZFS Device Removal

At long last, we provide the ability to remove a top-level VDEV from a ZFS storage pool in the upcoming Solaris 11.4 Beta refresh release. For many years, our recommendation was to create a pool based on current capacity requirements and then grow the pool to meet increasing capacity needs by adding VDEVs or by replacing smaller LUNs with larger LUNs. It is trivial to add capacity or replace smaller LUNs with larger LUNs, sometimes with just one simple command. The simplicity of ZFS is one of its great strengths!

I still recommend the practice of creating a pool that meets current capacity requirements and then adding capacity when needed. But if you need to repurpose pool devices in an over-provisioned pool, or if you accidentally misconfigure a pool device, you now have the flexibility to resolve these scenarios.

Review the following practical considerations when using this new feature, which should be the exception rather than the rule for pool configuration on production systems:

  - A virtual (pseudo) device is created to move the data off the removed pool devices, so the pool must have enough space to absorb the creation of the pseudo device
  - Only top-level VDEVs can be removed from mirrored or RAIDZ pools
  - Individual devices can be removed from striped pools
  - Pool device misconfigurations can be corrected

A few implementation details in case you were wondering:

  - No additional steps are needed to remap the removed devices
  - Data from the removed devices is allocated to the remaining devices, but this is not a way to rebalance all data on pool devices
  - Reads of the reallocated data are done from the pseudo device until those blocks are freed
  - Some levels of indirection are needed to support this operation, but they should not impact performance nor increase memory requirements

See the examples below.

Repurpose Pool Devices

The following pool, tank, has low space consumption, so one VDEV is removed.
# zpool list tank
NAME  SIZE   ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank  928G   28.1G  900G   3%  1.00x  ONLINE  -
# zpool status tank
  pool: tank
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
errors: No known data errors
# zpool remove tank mirror-1
# zpool status tank
  pool: tank
 state: ONLINE
status: One or more devices are being removed.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Sun Apr 15 20:58:45 2018
        28.1G scanned
        3.07G resilvered at 40.9M/s, 21.83% done, 4m35s to go
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror-1  REMOVING     0     0     0
            c1t7d0  REMOVING     0     0     0
            c5t3d0  REMOVING     0     0     0
errors: No known data errors

Run the zpool iostat command to verify that data is being written to the remaining VDEV.

# zpool iostat -v tank 5
                 capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
tank           28.1G   900G      9    182   932K  21.3M
  mirror-0     14.1G   450G      1    182  7.90K  21.3M
    c3t2d0         -      -      0     28  4.79K  21.3M
    c4t2d0         -      -      0     28  3.92K  21.3M
  mirror-1         -      -      8    179   924K  21.2M
    c1t7d0         -      -      1     28   495K  21.2M
    c5t3d0         -      -      1     28   431K  21.2M
-------------  -----  -----  -----  -----  -----  -----
                 capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
tank           28.1G   900G      0    967      0  60.0M
  mirror-0     14.1G   450G      0    967      0  60.0M
    c3t2d0         -      -      0     67      0  60.0M
    c4t2d0         -      -      0     68      0  60.4M
  mirror-1         -      -      0      0      0      0
    c1t7d0         -      -      0      0      0      0
    c5t3d0         -      -      0      0      0      0
-------------  -----  -----  -----  -----  -----  -----

Misconfigured Pool Device

In this case, a device was intended to be added as a cache device but was instead added as a regular single device. The problem is identified and resolved.
# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
errors: No known data errors
# zpool add rzpool c3t3d0
vdev verification failed: use -f to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
Unable to build pool from specified devices: invalid vdev configuration
# zpool add -f rzpool c3t3d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
          c3t3d0    ONLINE       0     0     0
errors: No known data errors
# zpool remove rzpool c3t3d0
# zpool add rzpool cache c3t3d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
  scan: resilvered 0 in 1s with 0 errors on Sun Apr 15 21:09:35 2018
config:
        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c5t7d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
        cache
          c3t3d0    ONLINE       0     0     0

In summary, Solaris 11.4 includes a handy new option for repurposing pool devices and resolving pool misconfiguration errors.
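The first practical consideration above (the pool must have enough free space to absorb the data moved off the removed VDEV) can be illustrated with a small sketch. This is not ZFS's actual space accounting; the `Vdev` type, the helper name, and the numbers are hypothetical, chosen only to mirror the tank example.

```python
# Illustrative free-space check implied by the "practical considerations"
# above: before removing a vdev, the data allocated on it must fit in the
# free space of the remaining vdevs. Not ZFS's real accounting.
from dataclasses import dataclass

@dataclass
class Vdev:
    name: str
    alloc_gb: float  # space currently allocated on this vdev
    free_gb: float   # space still free on this vdev

def can_remove(pool: list[Vdev], victim: str) -> bool:
    """Return True if the remaining vdevs can absorb the victim's data."""
    moved = sum(v.alloc_gb for v in pool if v.name == victim)
    remaining_free = sum(v.free_gb for v in pool if v.name != victim)
    return moved <= remaining_free

if __name__ == "__main__":
    # Roughly the tank example: two mirrors, low space consumption.
    tank = [Vdev("mirror-0", 14.1, 450.0), Vdev("mirror-1", 14.0, 450.0)]
    print(can_remove(tank, "mirror-1"))  # mirror-0 easily absorbs 14 GB
```

With low space consumption, as in the tank example, the check passes; on a nearly full pool the removal could not complete, which is why the article lists this as the first consideration.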


Oracle Solaris 11.4 Open Beta Refreshed!

On January 30, 2018, we released the Oracle Solaris 11.4 Open Beta. It has been quite successful. Today, we are announcing that we've refreshed the 11.4 Open Beta. This refresh includes new capabilities and additional bug fixes (over 280 of them) as we drive to the General Availability release of Oracle Solaris 11.4.

Some new features in this release are:

  - ZFS Device Removal
  - ZFS Scheduled Scrub
  - SMB 3.1.1
  - Oracle Solaris Cluster Compliance checking
  - ssh-ldap-getpubkey

Also, the Oracle Solaris 11.4 Beta refresh includes the changes to mitigate CVE-2017-5753, otherwise known as Spectre Variant 1, for Firefox, the NVIDIA Graphics driver, and the Solaris kernel (see the MOS docs on SPARC and x86 for more information). Additionally, new bundled software includes gcc 7.3, libidn2, and qpdf 7.0.0, and more than 45 new bundled software versions.

Before I go further, I have to say: The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

I want to take a few minutes to address some questions I've been getting that the upcoming release of Oracle Solaris 11.4 has sparked. Oracle Solaris 11.4 runs on Oracle SPARC and x86 systems released since 2011, but not on certain older systems that had been supported in Solaris 11.3 and earlier. Specifically, systems not supported in Oracle Solaris 11.4 include systems based on the SPARC T1, T2, and T3 processors, or the SPARC64 VII+ and earlier based "Sun4u" systems such as the SPARC Enterprise M4000.
To allow customers time to migrate to newer hardware, we intend to provide critical security fixes as necessary on top of the last SRU delivered for 11.3 for the following year. These updates will not provide the same level of content as regular SRUs and are intended solely as a transition vehicle. Customers using newer hardware are encouraged to update to Oracle Solaris 11.4 and subsequent Oracle Solaris 11 SRUs as soon as practical.

Another question I've been getting quite a bit is about the release frequency and strategy for Oracle Solaris 11. After much discussion, internally and externally with you, our customers, about our current continuous delivery release strategy, we are going forward with our current strategy with some minor changes:

  - Oracle Solaris 11 update releases will be released every year, in approximately the first quarter of our fiscal year (that's June, July, August for most people).
  - New features will be made available as they are ready to ship in whatever is the next available and appropriate delivery vehicle. This could be an SRU, a CPU, or a new release.
  - Oracle Solaris 11 update releases will contain the following content:
      - All new features previously released in the SRUs between the releases
      - Any new features that are ready at the time of release
      - Free and Open Source Software updates (i.e., new versions of FOSS)
      - End of Features and End of Life hardware

This should make our releases more predictable, maintain the reliability you've come to depend on, and provide new features to you rapidly, allowing you to test them and deploy them faster.

Oracle Solaris 11.4 is secure, simple, cloud-ready, and compatible with all your existing Oracle Solaris 11.3 and earlier applications. Go give the latest beta a try. You can download it here.


Oracle Solaris 11.3 SRU 31

We've just released Oracle Solaris 11.3 SRU 31. This is the April Critical Patch Update and contains some important security fixes as well as enhancements to Oracle Solaris. SRU31 is now available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support.

The following components have been updated to address security issues:

  - The Solaris kernel has been updated to mitigate against CVE-2017-5753, aka Spectre v1. See Oracle Solaris on SPARC - Spectre (CVE-2017-5753, CVE-2017-5715) and Meltdown (CVE-2017-5754) Vulnerabilities (Doc ID 2349278.1) and Oracle Solaris on x86 - Spectre (CVE-2017-5753, CVE-2017-5715) and Meltdown (CVE-2017-5754) Vulnerabilities (Doc ID 2383531.1) for more information.
  - Apache Tomcat has been updated to 8.5.28
  - Firefox has been updated to 52.7.3esr
  - Thunderbird has been updated to 52.7.0
  - unzip has been updated to 6.1 beta c23
  - NTP has been updated to 4.2.8p11
  - TigerVNC has been updated to 1.7.1
  - PHP has been updated to 5.6.34 and 7.1.15
  - MySQL has been updated to 5.5.59 and 5.6.39
  - irssi has been updated to 1.0.7
  - Security fixes are also included for quagga, gimp, GNOME remote desktop, vinagre, and NSS

These enhancements have also been added:

  - Oracle VM Server for SPARC has been updated to version 3.5.0.2. For more information, including What's New, Bug Fixes, and Known Issues, see the Oracle VM Server for SPARC 3.5.0.2 Release Notes.
  - The TigerVNC update introduces the new fltk component in Oracle Solaris 11.3
  - libidn has been updated to 2.0.4
  - pam_list support for wildcards and comment lines
  - The Java 8, Java 7, and Java 6 packages have been updated. See Note 5 for the location and details on how to update Java. For more information and bugs fixed, see the Java 8 Update 172, Java 7 Update 181, and Java 6 Update 191 Release Notes.
Full details of this SRU can be found in My Oracle Support Doc 2385753.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).


One SMF Service to Monitor the Rest!

Contributed by: Thejaswini Kodavur

Have you ever wondered if there was a single service that monitors all your other services and makes administration easier? If yes, then "SMF goal services", a new feature of Oracle Solaris 11.4, is here to provide a single, unambiguous, and well-defined point at which one can consider the system up and running. You can choose your customized, mission-critical services and link them together into a single SMF service in one step. This SMF service is called a goal service. It can be used to monitor the health of your system upon boot. This makes administration much easier, as monitoring each of the services individually is no longer required!

There are two ways in which you can make your services part of a goal service.

1. Using the supplied Goal Service

By default, an Oracle Solaris 11.4 system provides you a goal service called "svc:/milestone/goals:default". This goal service has a dependency on the service "svc:/milestone/multi-user-server:default" by default. You can set your mission-critical service on the default goal service as below:

# svcadm goals system/my-critical-service-1:default

Note: This is a set/clear interface, so the above command clears the dependency on "svc:/milestone/multi-user-server:default". In order to set the dependency on both services, use:

# svcadm goals svc:/milestone/multi-user-server:default \
    system/my-critical-service-1:default

2. Creating your own Goal Service

Oracle Solaris 11.4 also allows you to create your own goal service and set your mission-critical services as its dependencies. Follow the steps below to create and use a goal service.
Create a usual SMF service using svcbundle(8), svccfg(8), and svcadm(8): # svcbundle -o new-gs.xml -s service-name=milestone/new-gs -s start-method=":true" # cp new-gs.xml /lib/svc/manifest/site/new-gs.xml # svccfg validate /lib/svc/manifest/site/new-gs.xml # svcadm restart svc:/system/manifest-import # svcs new-gs STATE STIME FMRI online 6:03:36 svc:/milestone/new-gs:default To make this SMF service a goal service, set the property general/goal-service=true: # svcadm disable svc:/milestone/new-gs:default # svccfg -s svc:/milestone/new-gs:default setprop general/goal-service=true # svcadm enable svc:/milestone/new-gs:default Now you can set dependencies on the newly created goal service using the -g option, as below: # svcadm goals -g svc:/milestone/new-gs:default system/critical-service-1:default \ system/critical-service-2:default Note: If you omit the -g option, the dependency is set on the system-provided default goal service, i.e. svc:/milestone/multi-user-server:default. If one of your critical services does not come online at system boot, the goal service will go into the maintenance state. # svcs -d milestone/new-gs STATE STIME FMRI disabled 5:54:31 svc:/system/critical-service-2:default online Feb_19 svc:/system/critical-service-1:default # svcs milestone/new-gs STATE STIME FMRI maintenance 5:54:30 svc:/milestone/new-gs:default Note: You can use the -d option of svcs(1) to check the dependencies of your goal service. Once all of the dependent services come online, your goal service will also come online. For a goal service to be online, all of its dependencies must be satisfied.
# svcs -d milestone/new-gs STATE STIME FMRI online Feb_19 svc:/system/critical-service-1:default online 5:56:39 svc:/system/critical-service-2:default # svcs milestone/new-gs STATE STIME FMRI online 5:56:39 svc:/milestone/new-gs:default Note: For more information, refer to "Goal Services" in smf(7) and the goals subcommand in svcadm(8). The goal service “milestone/new-gs” is your new single SMF service with which you can monitor all of your other mission-critical services! Thus, a goal service acts as the headquarters that monitors the rest of your services.



Oracle Solaris 11.4 beta progress on LP64 conversion

Back in 2014, I posted Moving Oracle Solaris to LP64 bit by bit describing work we were doing then. In 2015, I provided an update covering Oracle Solaris 11.3 progress on LP64 conversion. Now that we've released the Oracle Solaris 11.4 Beta to the public, you can see that the ratio of ILP32 to LP64 programs in /usr/bin and /usr/sbin in the full Oracle Solaris package repositories has shifted dramatically in 11.4:

Release        32-bit       64-bit       Total
Solaris 11.0   1707 (92%)    144 (8%)     1851
Solaris 11.1   1723 (92%)    150 (8%)     1873
Solaris 11.2   1652 (86%)    271 (14%)    1923
Solaris 11.3   1603 (80%)    379 (19%)    1982
Solaris 11.4    169 (9%)    1769 (91%)    1938

That's over 70% more of the commands shipped in the OS which can use ADI to stop buffer overflows on SPARC, take advantage of more registers on x86, have more address space available for ASLR to choose from, are ready for timestamps and dates past 2038, and receive the other benefits of 64-bit software as described in previous blogs. And while we continue to provide more features for 64-bit programs, such as making ADI support available in the libc malloc, we aren't abandoning 32-bit programs either. A change that just missed our first beta release, but is coming in a later refresh of our public beta, will make it easier for 32-bit programs to use file descriptors > 255 with stdio calls, relaxing a long-held limitation of the 32-bit Solaris ABI. This work was years in the making, and over 180 engineers contributed to it in the Solaris organization, plus even more who came before to make all the FOSS projects we ship and the libraries we provide 64-bit ready so we could make this happen. We thank all of them for making it possible to bring this to you now.


Recent Blogs Round-Up

We're seeing blogs about the Solaris 11.4 Beta show up through different channels like Twitter and Facebook, which means you might have missed some of these, so we thought it would be good to do a round-up. This also means you might have already seen some of them, but hopefully there are some nice new ones among them.

Glenn Faden wrote about:
- Authenticated Rights Profiles
- Sharing Sensitive Data
- Monitoring Access to Sensitive Data
- OpenLDAP Support
- Using the Oracle Solaris Account Manager BUI
- Protecting Sensitive Data

Cindy Swearingen wrote about the Data Management Features.

Enrico Perla wrote about libc:malloc meets ADIHEAP.

Thorsten Mühlmann wrote a few nice ones recently:
- Solaris is dead – long lives Solaris
- Privileged Command Execution History Reporting
- Per File Auditing
- Improved Debugging with pfiles Enhancement
- Monitoring File System Latency with fsstat

Eli Kleinman wrote a pretty complete set of blogs on Analytics/StatsStore:
- Part 1 on how to configure analytics
- Part 2 on how to configure the client capture stat process
- Part 3 on how to publish the client captured stats
- Part 4 on configuring / accessing the Web Dashboard / UI
- Part 5 on capturing Solaris 11.4 Analytics by using the Remote Administration Daemon (RAD)

Andrew Watkins wrote about Setting up Sendmail / SASL to handle SMTP AUTH.

Marcel Hofstetter wrote about:
- Fast (asynchronous) ZFS destroy
- How JomaSoft VDCF can help with updating from 11.3 to 11.4

Rod Evans wrote about elfdiff.
And for those interested in linker and ELF related improvements, changes, and details, Ali Bahrami published a very complete set of blogs:
- kldd: ldd Style Analysis For Solaris Kernel Modules - the new kldd ELF utility brings ldd-style analysis to kernel modules.
- Core File Enhancements for elfdump - Solaris 11.4 comes with a number of enhancements that allow the elfdump utility to display a wealth of information that was previously hidden in Solaris core files. Best of all, this comes without a significant increase in core file size.
- ELF Program Header Names - starting with Solaris 11.4, program headers in Solaris ELF objects have explicit names associated with them. These names are used by libproc, elfdump, elfedit, pmap, pmadvise, and mdb to eliminate some of the guesswork that goes into looking at process mappings.
- ELF Section Compression - in cooperation with the GNU community, we are happy and proud to bring standard ELF section compression APIs to libelf. This builds on our earlier work in 2012 (Solaris 11 Update 2) to standardize ELF compression at the file format level. Now, others can easily access that functionality.
- ld -ztype, and Kernel Modules That Know What They Are - Solaris Kernel Modules (kmods) are now explicitly tagged as such, and are treated as final objects.
- Regular Expression and Glob Matching for Mapfiles - pattern matching using regular expressions, globbing, or plain string comparisons brings new expressive power to Solaris mapfiles.
- New CRT Objects. (Or: What Are CRT objects?) - publicly documented and committed CRT objects for Solaris.
- Goodbye (And Good Riddance) to -mt and -D_REENTRANT - a long-awaited simplification of the process of building multithreaded code, one of the final projects delivered to Solaris by Roger Faulkner, made possible by his earlier work on thread unification that landed in Solaris 10.
- Weak Filters: Dealing With libc Refactoring Over The Years - weak filters allow the link-editor to discard unnecessary libc filters as dependencies, because you can't always fix the Makefile.
- Where Did The 32-Bit Linkers Go? - In Solaris 11 Update 4 (and Solaris 11 Update 3), the 32-bit version of the link-editor, and related linking utilities, are gone.

We hope you enjoy them.



Oracle Solaris 11.3 SRU 29 Released

We've just released Oracle Solaris 11.3 SRU 29. It contains some important security fixes and enhancements. SRU29 is now available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support .

Features included in this SRU:
- libdax support on x86. This feature enables the use of DAX query operations on x86 platforms, so the ISV and open source communities can now develop DAX programs on x86. An application developed on x86 can be executed on a SPARC platform with no modifications, and the libdax API will choose the DAX operations supported by the platform. The libdax library on x86 uses software emulation and does not require any change in user-developed applications.
- Oracle VM Server for SPARC has been updated to version 3.5.0.1. For more information, including What's New, Bug Fixes, and Known Issues, see the Oracle VM Server for SPARC 3.5.0.1 Release Notes.
- The Java 8, Java 7, and Java 6 packages have been updated. For more information and bugs fixed, see the Java 8 Update 162 Release Notes, Java 7 Update 171 Release Notes, and Java 6 Update 181 Release Notes.

The SRU also updates the following components, which have security fixes:
- p7zip has been updated to 16.02
- Firefox has been updated to 52.6.0esr
- ImageMagick has been updated to 6.9.9-30
- Thunderbird has been updated to 52.6.0
- libtiff has been updated to 4.0.9
- Wireshark has been updated to 2.4.4
- The NVIDIA driver has been updated
- irssi has been updated to 1.0.6
- BIND has been updated to 9.10.5-P3

Full details of this SRU can be found in My Oracle Support Doc 2361795.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).


posix_spawn() as an actual system call

History

As a developer, there are always those projects where it is hard to find a way forward. Drop the project for now and find another one, if only to rest your eyes and find a new insight into the temporarily abandoned project. This is how I embarked on posix_spawn() as an actual system call, which you will find in Oracle Solaris 11.4. The original library implementation of posix_spawn() uses vfork(), but why care about the old address space if you are not going to use it? Or, worse, stop all the other threads in the process and not start them until exec succeeds or until exit() is called? As I had already written kernel modules for nefarious reasons to run executables directly from the kernel, I decided to benchmark the simple "make process, execute /bin/true" against posix_spawn() from the library. Even with two threads, posix_spawn() scaled poorly: additional threads did not allow a large number of additional spawns per second.

Starting a new process

All ways to start a new process need to copy a number of process properties: file descriptors, credentials, priorities, resource controls, etc. The original way to start a new process is fork(); you need to mark all the pages as copy-on-write (O(n) in the number of pages in the process), so this gets more and more expensive as the process gets larger. In Solaris we also reserve all the needed swap; a large process calling fork() doubles its swap requirement. In BSD, vfork() was introduced; it borrows the address space and was cheap when it was invented. In much larger processes with hundreds of threads, it became more and more of a bottleneck. Dynamic linking also throws a spanner in the works: what you can do between vfork() and the final exec() is extremely limited. In the standards universe, posix_spawn() was invented; it was aimed mostly at small embedded systems, and only a very limited number of specific actions can be performed before the new executable is run.
As it was part of the standard, Solaris grew its own copy, built on top of vfork(). It has, of course, the same problems as vfork(); but because it is implemented in the library, we can be sure we steer clear of all the other vfork() pitfalls.

Native spawn(2) call

The native spawn(2) system call introduced in Oracle Solaris 11.4 shares a lot of code with forkx(2) and execve(2). It mostly avoids the unneeded operations:
- do not stop all threads
- do not copy any data about the current executable
- do not clear all watch points (vfork())
- do not duplicate the address space (fork())
- no need to handle shared memory segments
- do not copy one or more of the threads (fork1/forkall); create a new one instead
- do not copy all file pointers
- no need to restart all threads held earlier

The exec() call copies from its own address space, but when spawn(2) needs the arguments, it is already in a new process. So early in the spawn(2) system call we copy the environment vector and the arguments and save them away. The data blob is given to the child, and the parent waits until the child is about to return from the system call in the new process, or until it decides that it can't actually exec and calls exit instead. A process can call spawn(2) in all its threads, and the concurrency is only limited by locks that need to be held briefly when processes are created. The performance win depends on the application; you won't win anything unless you use posix_spawn(). I was very happy to see that our standard shell uses posix_spawn() to start new processes, as do popen(3C) and system(3C), so the call is well tested. The more threads you have, the bigger the win. Stopping a thread is expensive, especially if it is held up in a system call. The world used to stop; now it just continues.

Support in truss(1), mdb(1)

When developing a new system call, special attention needs to be given to proc(5) and truss(1) interaction.
The spawn(2) system call is no exception, only it is much harder to get right; support is also needed in debuggers, or they won't see a new process starting. This includes mdb(1) but also truss(1). They also need to learn that when spawn(2) succeeds, they are stopped in a completely different executable; we may also have crossed a privilege boundary, e.g., when spawning su(8) or ping(8).


Installing Packages — Oracle Solaris 11.4 Beta

We've been getting random questions about how to install (Oracle Solaris) packages onto a newly installed Oracle Solaris 11.4 Beta system, and of course the key is pointing to the appropriate IPS repository. One option is to download the full repository and install it locally on its own, or add it to an existing local repository, and then just point the publisher at this local repository. This is mostly used by folks who have a test system/LDom/Kernel Zone where they probably have one or more local repositories already. However, experience shows that a large percentage of folks testing a beta version like this do so in a VirtualBox instance on their laptop or workstation, and because of this they want to use the Gnome Desktop rather than logging in remotely through ssh. So one of the things we do is supply an Oracle VM Template for VirtualBox which already has the solaris-desktop group package installed (officially group/system/solaris-desktop), so it shows more than the console when started and gives you the ability to run desktop tasks like Firefox and a terminal. (By the way, as per the Release Notes on Runtime Issues, there's a glitch with gnome-terminal you might run into, and you'd need to apply a workaround to get it working.) For this group of VirtualBox-based testers, the chances are high that they're not going to have a local repository nearby, especially on a laptop that's moving around. This is where using our central repository at pkg.oracle.com is very useful; it is well described in the Oracle Solaris documentation. However, there may be some minor obstacles to clear when using this method that aren't directly part of the process but get in the way when using the VirtualBox-installed OVM Template. First, when using the Firefox browser to request and download certificates, and later to point to the repository, you'll need to have DNS working, and depending on the install, the DNS client may not yet be enabled.
Here's how you check it: demo@solaris-vbox:~$ svcs dns/client STATE STIME FMRI disabled 5:45:26 svc:/network/dns/client:default This is fairly simple to solve. First check that the Oracle Solaris instance has correctly picked up the DNS information from VirtualBox during the DHCP process by looking in /etc/resolv.conf. If that looks good, simply enable the dns/client service: demo@solaris-vbox:~$ sudo svcadm enable dns/client You'll be asked for your password and then it will be enabled. Note you can also use pfexec(1) instead of sudo(8); this will also check whether your user has the appropriate privileges. You can check if the service is running: demo@solaris-vbox:~/Downloads$ svcs dns/client STATE STIME FMRI online 10:21:16 svc:/network/dns/client:default Now that DNS is running, you should be able to ping pkg.oracle.com. The second gotcha is that on the pkg-register.oracle.com page, the Oracle Solaris 11.4 Beta repository is at the very bottom of the list of available repositories and should not be confused with the Oracle Solaris 11 Support repository (to which you may already have requested access) listed at the top of the page. The same certificate/key pair is used for any of the Oracle Solaris repositories; however, in order to permit the use of any existing cert/key pair, the license for the Oracle Solaris 11.4 Beta repository must be accepted. This means selecting the 'Request Access' button next to the Solaris 11.4 Beta repository entry. Once you have the cert/key, or you have accepted the license, you can configure the beta repository as: pkg set-publisher -k <your-key> -c <your-cert> -g https://pkg.oracle.com/solaris/beta solaris With the VirtualBox image, the default repository setup includes the 'release' repository.
It is best to remove that: pkg set-publisher -G http://pkg.oracle.com/solaris/release solaris This can be performed in one command: pkg set-publisher -k <your-key> -c <your-cert> -G http://pkg.oracle.com/solaris/release \ -g https://pkg.oracle.com/solaris/beta solaris Note that here too you'll need to use either pfexec(1) or sudo(8) again. This should kick off the pkg(1) command, and once it's done you can check its status with: demo@solaris-vbox:~/Downloads$ pkg publisher solaris Publisher: solaris Alias: Origin URI: https://pkg.oracle.com/solaris/beta/ Origin Status: Online SSL Key: /var/pkg/ssl/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx SSL Cert: /var/pkg/ssl/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Cert. Effective Date: January 29, 2018 at 03:04:58 PM Cert. Expiration Date: February 6, 2020 at 03:04:58 PM Client UUID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Catalog Updated: January 24, 2018 at 02:09:16 PM Enabled: Yes And now you're up and running. A final thought: if, for example, you've chosen to install the Text Install version of the Oracle Solaris 11.4 Beta because you want a nice minimal install without the overhead of Gnome and the like, you can also download the key and certificate to another system or the hosting OS (in case you're using VirtualBox), then rsync or rcp them across and follow all the same steps.


System maintenance — evacuate all Zones!

The number one use case for live migration today is evacuation: when a Solaris Zones host needs some maintenance operation that involves a reboot, the zones are live migrated to some other willing host. This avoids scheduling simultaneous maintenance windows for all the services provided by those zones. Implementing this today on Solaris 11.3 involves manually migrating zones with individual zoneadm migrate commands and, especially, determining suitable destinations for each of the zones. To make this common scenario simpler and less error prone, the Solaris 11.4 Beta comes with a new command for system maintenance, sysadm(8), that also allows for zone evacuation. The basic idea of how it is supposed to be used is like this: # pkg update ... # sysadm maintain -s -m "updating to new build" # sysadm evacuate -v Evacuating 3 zones... Migrating myzone1 to rads://destination1/ ... Migrating myzone3 to rads://destination1/ ... Migrating myzone4 to rads://destination2/ ... Done in 3m30s. # reboot ... # sysadm maintain -e # sysadm evacuate -r ... When in maintenance mode, an attempt to attach or boot any zone is refused: if the admin is trying to move zones off the host, it's not helpful to allow incoming zones. Note that this maintenance mode is recorded system-wide, not just in the zones framework; even though the only current impact is on zones, it seems likely other subsystems may find it useful in the future. To set up an evacuation target for a zone, the SMF property evacuation/target on the given zone's service instance system/zones/zone:<zone-name> must be set to the target host. You can use either a rads:// or an ssh:// location identifier, e.g. ssh://janp@mymachine.mydomain.com. Do not forget to refresh the service instance for your change to take effect. You can evacuate running Kernel Zones, as well as both native zones and Kernel Zones in the installed state. Evacuation always means evacuating running zones; with the option -a, installed zones are included as well.
Only those zones with the evacuation/target property set in their service instance are scheduled for evacuation. However, if any running zone (or installed zone, when evacuate -a is used) does not have the property set, the overall result of the evacuation will be reported as failed by sysadm, which is logical, as an evacuation by definition means evacuating everything. As live zone migration does not support native zones, those can only be evacuated in the installed state. Also note that you can only evacuate zones installed on shared storage, for example on iSCSI volumes. See the storage URI manual page, suri(7), for information on what other shared storage is supported. Note that you can install Kernel Zones on NFS files as well. To set up live Kernel Zone migration, please check out the Migrating an Oracle Solaris Kernel Zone section of the 11.4 online documentation. Now, let's see a real example. We have a few zones on host nacaozumbi. All running and installed zones are on shared storage, including the native zone tzone1 and Kernel Zone evac1: root:nacaozumbi:~# zonecfg -z tzone1 info rootzpool rootzpool: storage: iscsi://saison/luname.naa.600144f0dbf8af1900005582f1c90007 root:nacaozumbi::~$ zonecfg -z evac1 info device device: storage: iscsi://saison/luname.naa.600144f0dbf8af19000058ff48060017 id: 1 bootpri: 0 root:nacaozumbi:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared 82 evac3 running - solaris-kz excl 83 evac1 running - solaris-kz excl 84 evac2 running - solaris-kz excl - tzone1 installed /system/zones/tzone1 solaris excl - on-fixes configured - solaris-kz excl - evac4 installed - solaris-kz excl - zts configured - solaris-kz excl Zones not set up for evacuation were detached, i.e. on-fixes and zts.
All running and installed zones are set to be evacuated to bjork, for example: root:nacaozumbi:~# svccfg -s system/zones/zone:evac1 listprop evacuation/target evacuation/target astring ssh://root@bjork Now, let's start the maintenance window: root:nacaozumbi:~# sysadm maintain -s -m "updating to new build" root:nacaozumbi:~# sysadm maintain -l TYPE USER DATE MESSAGE admin root 2018-02-02 01:10 updating to new build At this point we can no longer boot or attach zones on nacaozumbi: root:nacaozumbi:~# zoneadm -z on-fixes attach zoneadm: zone 'on-fixes': attach prevented due to system maintenance: see sysadm(8) And that also includes migrating zones to nacaozumbi: root:bjork:~# zoneadm -z on-fixes migrate ssh://root@nacaozumbi zoneadm: zone 'on-fixes': Using existing zone configuration on destination. zoneadm: zone 'on-fixes': Attaching zone. zoneadm: zone 'on-fixes': attach failed: zoneadm: zone 'on-fixes': attach prevented due to system maintenance: see sysadm(8) Now we start evacuating all the zones. In this example, all running and installed zones have their service instance property evacuation/target set. The option -a means all the zones, that is including those installed. The -v option provides verbose output. root:nacaozumbi:~# sysadm evacuate -va sysadm: preparing 5 zone(s) for evacuation ... sysadm: initializing migration of evac1 to bjork ... sysadm: initializing migration of evac3 to bjork ... sysadm: initializing migration of evac4 to bjork ... sysadm: initializing migration of tzone1 to bjork ... sysadm: initializing migration of evac2 to bjork ... sysadm: evacuating 5 zone(s) ... sysadm: migrating tzone1 to bjork ... sysadm: migrating evac2 to bjork ... sysadm: migrating evac4 to bjork ... sysadm: migrating evac1 to bjork ... sysadm: migrating evac3 to bjork ... sysadm: evacuation completed successfully. 
sysadm: evac1: evacuated to ssh://root@bjork sysadm: evac2: evacuated to ssh://root@bjork sysadm: evac3: evacuated to ssh://root@bjork sysadm: evac4: evacuated to ssh://root@bjork sysadm: tzone1: evacuated to ssh://root@bjork While being evacuated, you can check the state of evacuation like this: root:nacaozumbi:~# sysadm evacuate -l sysadm: evacuation in progress After the evacuation is done, you can also see the details like this (for example, in case you did not run it in verbose mode): root:nacaozumbi:~# sysadm evacuate -l -o ZONENAME,STATE,DEST ZONENAME STATE DEST evac1 EVACUATED ssh://root@bjork evac2 EVACUATED ssh://root@bjork evac3 EVACUATED ssh://root@bjork evac4 EVACUATED ssh://root@bjork tzone1 EVACUATED ssh://root@bjork And you can see all the evacuated zones are now in the configured state on the source host: root:nacaozumbi:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - tzone1 configured /system/zones/tzone1 solaris excl - evac1 configured - solaris-kz excl - on-fixes configured - solaris-kz excl - evac4 configured - solaris-kz excl - zts configured - solaris-kz excl - evac3 configured - solaris-kz excl - evac2 configured - solaris-kz excl And the migrated zones are happily running or in the installed state on host bjork: jpechane:bjork::~$ zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared 57 evac3 running - solaris-kz excl 58 evac1 running - solaris-kz excl 59 evac2 running - solaris-kz excl - on-fixes installed - solaris-kz excl - tzone1 installed /system/zones/tzone1 solaris excl - zts installed - solaris-kz excl - evac4 installed - solaris-kz excl The maintenance state is still held at this point: root:nacaozumbi:~# sysadm maintain -l TYPE USER DATE MESSAGE admin root 2018-02-02 01:10 updating to new build Upgrade the system with a new boot environment unless you did that before (which you should have to keep the time your zones are running on the other host to minimum): 
root:nacaozumbi:~# pkg update --be-name=.... -C0 entire@... root:nacaozumbi:~# reboot Now, finish the maintenance mode. root:nacaozumbi:~# sysadm maintain -e And as the final step, return all the evacuated zones. As explained before, you would not be able to do this if still in maintenance mode. root:nacaozumbi:~# sysadm evacuate -ra sysadm: preparing zones for return ... 5/5 sysadm: returning zones ... 5/5 sysadm: return completed successfully. Possible enhancements we are considering for the future include specifying multiple targets and a spread policy, with a resource utilisation comparison algorithm that would consider CPU architecture, RAM, and CPU resources.


What is this BUI thing anyway?

This is part two in my series of posts about Solaris Analytics in the Solaris 11.4 release. You may find part one here. The Solaris Analytics WebUI (or "bui" for short) is what we use to tie together all our data gathering from the Stats Store. It comprises two web apps, titled "Solaris Dashboard" and "Solaris Analytics". To use them, enable the webui service via # svcadm enable webui/server Once the service is online, point your browser at https://127.0.0.1:6787 and log in. [Note that the self-signed certificate is generated by your system, and adding an exception for it in your browser is fine.] Rather than roll our own toolkit, we make use of Oracle Jet, which means we can keep a consistent look and feel across Oracle web applications. After logging in, you'll find yourself at the Oracle Solaris Web Dashboard, which shows an overview of several aspects of your system, along with Faults (FMA) and Solaris Audit activity if your user has sufficient privileges to read them. Mousing over any of the visualizations on this page will give you a brief description of what the visualization provides, and clicking on it will take you to a more detailed page. If you click on the hostname in the top bar (next to Applications), you'll see what we call the Host Drawer. This pulls information from svc:/system/sysstat. Click the 'x' at the top right to close the drawer. Selecting Applications / Solaris Analytics will take you to the main part of the bui. I've selected the NFS Client sheet, resulting in the dark shaded box on the right popping up with a description of what the sheet will show you. Building blocks: faults, utilization and audit events In the previous installment I mentioned that we wanted to provide a way for you to tie together the many sources of information we provide, so that you could answer questions about your system. This is a small example of how you can do so. The host these screenshots were taken from is a single-processor, four-core Intel-based workstation.
In a terminal window I ran # psradm -f 3 followed a few minutes later by # psradm -n 3 You can see those events marked on each of the visualizations with a blue triangle here: Now if I mouse over the triangle marking the second offline/online pair, in the Thread Migrations viz, I can see that the system generated a Solaris Audit event: This allows us to observe that the changes in system behaviour (primarily load average and thread migrations across cores) were correlated with the offlining of a CPU core. Finally, let's have a look at the Audit sheet. To view the stats on this page, you need to log in to the bui as a suitably privileged user: either root, or a user with the solaris.sstore.read.sensitive authorization. # usermod -A +solaris.sstore.read.sensitive $USER For this screenshot I not only redid the psradm operations from earlier, I also tried making an ssh connection with an unknown user, and logged in on another of this system's virtual consoles. There are many other things you could observe with the audit subsystem; this is just a glimpse. Tune in next time for a discussion of using the C and Python bindings to the Stats Store so you can add your own statistics.



Default Memory Allocator Security Protections using Silicon Secured Memory (SSM ADI)

In Solaris 11.3 we provided the ability to use the Silicon Secured Memory feature of the Oracle SPARC processors in the M7 and M8 families. An API for applications to explicitly manage ADI (Application Data Integrity) versioning was provided (see the adi(2) man page), as well as a new memory allocator library, libadimalloc(3LIB). This required either code changes to the application or arranging to set LD_PRELOAD_64=/usr/lib/64/libadimalloc.so.1 in the environment before the application started. The libadimalloc(3LIB) allocator was derived from the libumem(3LIB) codebase but doesn't expose all of the features that libumem does.

With Oracle Solaris 11.4 Beta the use of ADI has been integrated into the default system memory allocator in libc(3LIB) and libumem(3LIB), while retaining libadimalloc(3LIB) for backwards compatibility with Oracle Solaris 11.3 systems. Control of which processes run with ADI protection is now via the Security Extensions Framework, using sxadm(8), so it is no longer necessary to set the $LD_PRELOAD_64 environment variable.

There are two distinct ADI-based protections exposed via the Security Extensions Framework: ADISTACK and ADIHEAP. These complement the existing extensions introduced in earlier Oracle Solaris 11 update releases: ASLR, NXHEAP, and NXSTACK (all three of which are available on SPARC and x86 CPU systems). ADIHEAP is how the ADI protection is exposed via the standard libc memory allocator and via libumem. The ADISTACK extension, as the name suggests, protects the register save area of the stack.

$ sxadm status
EXTENSION         STATUS                        CONFIGURATION
aslr              enabled (tagged-files)        default (default)
nxstack           enabled (all)                 default (default)
nxheap            enabled (tagged-files)        default (default)
adiheap           enabled (tagged-files)        default (default)
adistack          enabled (tagged-files)        default (default)

The above output from sxadm shows the default configuration of an Oracle SPARC M7/M8 based system.
What we can see here is that some of the security extensions, including adiheap/adistack, are enabled by default only for tagged files. Executable binaries can be tagged using ld(1) as documented in sxadm(8); for example, if we want to tag an application at build time to use adiheap, we would add '-z sx=adiheap'. Note that it is not meaningful at this time to tag shared libraries, only leaf executable programs. Most executables in Oracle Solaris were already tagged to run with the aslr, nxstack, and nxheap security extensions. Now many of them are tagged for ADISTACK and ADIHEAP as well. For the Oracle Solaris 11.4 release we have also had to explicitly tag some executables to not run with ADIHEAP and/or ADISTACK; this is either due to outstanding issues when running with an ADI allocator or, in some cases, to more fundamental issues with how the program itself works (the ImageMagick graphics image processing tool is one such example where ADISTACK is explicitly disabled). The sxadm command can be used to start processes with security extensions enabled regardless of the system-wide status and binary tagging. For example, to start a program that was not tagged at build time with both ADI-based protections, in addition to its binary-tagged extensions:

$ sxadm exec -s adistack=enable -s adiheap=enable /path/to/program

It is possible to edit binary executables to add the security extension tags, even if there were none present at link time. Explicit tagging of binaries already installed on a system and delivered by any package management software is not recommended.
If all of the untagged applications deployed on a system have been tested to work with the ADI protections, then it is possible to change the system-wide defaults rather than having to use sxadm to run the processes:

# sxadm enable adistack,adiheap

The Oracle Solaris 11.4 Beta also has support for using ADI to protect kernel memory; that is currently undocumented but is planned to be exposed via sxadm by the 11.4 release or soon after. The KADI support also includes a significant amount of ADI support in mdb, for both live and post-mortem kernel debugging. KADI is enabled by default with precise traps when running a debug build of the kernel. The debug builds are published in the public Oracle Solaris 11.4 Beta repository and can be enabled by running:

# pkg change-variant debug.osnet=true

The use of ADI via the standard libc and libumem memory allocators and by the kernel (in LDOMs and Zones, including with live migration/suspend) has enabled the Oracle Solaris engineering team to find and fix many otherwise difficult to find or diagnose bugs. However, we are not yet at a point where we believe all applications from all vendors are sufficiently well behaved that the ADISTACK and ADIHEAP protections can be enabled by default.


Getting Data Out of the StatsStore

After the release of the Oracle Solaris 11.4 Beta and the post on the new observability features by James McPherson, I've had a few folks ask me if it's possible to export the data from the StatsStore into a format like CSV (comma-separated values) so they can easily import it into something like Excel. The answer is: Yes.

The main command to access the StatsStore through the CLI is sstore(1), which you can either use as a single command or as an interactive shell-like environment, for example to browse the statistics namespace. The other way to access the StatsStore is through the Oracle Solaris Dashboard in a browser, where you point to the system's IP address on port 6787. A third way to access the data is through the REST interface (which the Dashboard is actually also using to get its data), but this is something for a later post. As James pointed out in his post, you can use sstore(1) to list the currently available resources, and you can use export to pull data from one or more of those resources. And it's with this last option that you can specify the format you want this data to be exported in.
The default is tab-separated:

demo@solaris-vbox:~$ sstore export -t 2018-02-01T06:47:00 -e 2018-02-01T06:52:00 -i 60 '//:class.cpu//:stat.usage'
TIME                 VALUE            IDENTIFIER
2018-02-01T06:47:00  20286401.157722  //:class.cpu//:stat.usage
2018-02-01T06:48:00  20345863.706499  //:class.cpu//:stat.usage
2018-02-01T06:49:00  20405363.144286  //:class.cpu//:stat.usage
2018-02-01T06:50:00  20465694.085729  //:class.cpu//:stat.usage
2018-02-01T06:51:00  20525877.600447  //:class.cpu//:stat.usage
2018-02-01T06:52:00  20585941.862812  //:class.cpu//:stat.usage

But you can also get it in CSV:

demo@solaris-vbox:~$ sstore export -F csv -t 2018-02-01T06:47:00 -e 2018-02-01T06:52:00 -i 60 '//:class.cpu//:stat.usage'
time,//:class.cpu//:stat.usage
1517496420000000,20286401.157722
1517496480000000,20345863.706499
1517496540000000,20405363.144286
1517496600000000,20465694.085729
1517496660000000,20525877.600447
1517496720000000,20585941.862812

And in JSON:

demo@solaris-vbox:~$ sstore export -F json -t 2018-02-01T06:47:00 -e 2018-02-01T06:52:00 -i 60 '//:class.cpu//:stat.usage'
{
    "__version": 1,
    "data": [
        {
            "ssid": "//:class.cpu//:stat.usage",
            "records": [
                { "start-time": 1517496420000000, "value": 20286401.157722 },
                { "start-time": 1517496480000000, "value": 20345863.706498999 },
                { "start-time": 1517496540000000, "value": 20405363.144285999 },
                { "start-time": 1517496600000000, "value": 20465694.085728999 },
                { "start-time": 1517496660000000, "value": 20525877.600446999 },
                { "start-time": 1517496720000000, "value": 20585941.862812001 }
            ]
        }
    ]
}

Each of these has its own manual page: sstore.csv(5) and sstore.json(5). Now the question arises: how do you get something interesting/useful? Well, part of this is about learning what the StatsStore can gather for you and the types of tricks you can do with the data before you export it. This is where the Dashboard is a great learning guide.
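If you want to post-process the JSON export programmatically, the structure above is straightforward to walk. Here is a minimal sketch, using an abbreviated copy of the export shown above; the `rates` helper is hypothetical (not part of sstore), and it assumes the usage statistic is a cumulative counter, which the steadily increasing values suggest:

```python
import json

# Abbreviated sstore JSON export, copied from the output above.
EXPORT = """
{
  "__version": 1,
  "data": [
    {
      "ssid": "//:class.cpu//:stat.usage",
      "records": [
        {"start-time": 1517496420000000, "value": 20286401.157722},
        {"start-time": 1517496480000000, "value": 20345863.706499},
        {"start-time": 1517496540000000, "value": 20405363.144286}
      ]
    }
  ]
}
"""

def rates(doc):
    """Turn cumulative counter records into (ssid, per-second rate) pairs."""
    out = []
    for series in doc["data"]:
        recs = series["records"]
        for prev, cur in zip(recs, recs[1:]):
            # start-time stamps are microseconds since the UNIX epoch
            secs = (cur["start-time"] - prev["start-time"]) / 1e6
            out.append((series["ssid"], (cur["value"] - prev["value"]) / secs))
    return out

for ssid, rate in rates(json.loads(EXPORT)):
    print(f"{ssid}: {rate:.2f} per second")
```

The same walk works for any SSID the export returns, since every series arrives as a "records" list under "data".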
When you first log in you get a landing page very similar to this: Note: The default install of Oracle Solaris won't have a valid cert and the browser will complain it's an untrusted connection. Because you know the system, you can add an exception and connect. Because this post is not about exploring the Dashboard but about exporting data, I'll just focus on that. But by all means click around. So if you click on the "CPU Utilization by mode (%)" graph, you're essentially double-clicking on that data and you'll go to a statistics sheet we've built showing all kinds of aspects of CPU utilization, which should look something like this: Note: You can see my VirtualBox instance is pretty busy. So these graphs look pretty interesting, but how do I get to this data? Well, if we're interested in the top processes, first click on Top Processes by CPU Utilization; this should bring up this overlay window: Note: This shows that this statistic is only temporarily collected (something you could make persistent here) and that the performance impact of collecting this statistic is very low. Now click on "proc cpu-percentage" and this will show what is being collected to create this graph: This shows the SSID of the data in this graph. A quick look at it shows it's looking in the process data //:class.proc, then it's using a wildcard on the resources //:res.* which grabs all the entries available, then it selects the statistic for CPU usage in percent //:stat.cpu-percentage, and finally it does a top operation on this list and selects the top 5 processes //:op.top(5) (see ssid-op(7) for more info).
And when I use this on the command line I get:

demo@solaris-vbox:~$ sstore export -F CSV -t 2018-02-01T06:47:00 -i 60 '//:class.proc//:res.*//:stat.cpu-percentage//:op.top(5)'
time,//:class.proc//:res.firefox/2035/demo//:stat.cpu-percentage//:op.top(5),//:class.proc//:res.rad/204/root//:stat.cpu-percentage//:op.top(5),//:class.proc//:res.gnome-shell/1316/demo//:stat.cpu-percentage//:op.top(5),//:class.proc//:res.Xorg/1039/root//:stat.cpu-percentage//:op.top(5),//:class.proc//:res.firefox/2030/demo//:stat.cpu-percentage//:op.top(5)
1517496480000000,31.378174,14.608765,1.272583,0.500488,0.778198
1517496540000000,33.743286,8.999634,3.271484,1.477051,2.059937
1517496600000000,41.018677,9.545898,5.603027,3.170776,3.070068
1517496660000000,37.011719,8.312988,1.940918,0.958252,1.275635
1517496720000000,29.541016,8.514404,9.561157,4.693604,0.869751

Here "-F CSV" tells it to output CSV (I could also have used lowercase csv), "-t 2018-02-01T06:47:00" is the start time of what I want to look at (I'm not using an end time, which would be similar but with "-e"), the "-i 60" tells it I want the length of each sample to be 60 seconds, and then I use the SSID from above. Note: For the CSV export to work you'll need to specify at least the start time (-t) and the length of each sample (-i), otherwise the export will error. You also want to export data the StatsStore has actually gathered, or it will also not work. In the response, the first line is the header with what each column is (time, firefox, rad, gnome-shell, Xorg, firefox) and then the values, where the first column is UNIX time.
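Those UNIX timestamps in the first column are in microseconds and are easy to turn back into something readable. A small sketch follows; the column names in the sample are abbreviations of the full SSIDs shown above, and note that sstore rendered the human-readable times in the tab-separated export in the system's local timezone, so the UTC values printed here differ from the 06:47-style local times shown earlier:

```python
import csv
import io
from datetime import datetime, timezone

# First two data rows of the CSV export above; header abbreviated
# (the real header carries the full //:class.proc SSIDs).
CSV_DATA = """time,firefox,rad,gnome-shell,Xorg,firefox2
1517496480000000,31.378174,14.608765,1.272583,0.500488,0.778198
1517496540000000,33.743286,8.999634,3.271484,1.477051,2.059937
"""

def to_utc(micros):
    """Convert sstore's microseconds-since-epoch to an ISO 8601 UTC string."""
    ts = int(micros) / 1e6
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")

for row in csv.DictReader(io.StringIO(CSV_DATA)):
    print(to_utc(row["time"]), row["firefox"])
```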
Similarly, if I look at what data is driving the CPU Utilization graph, I get the following data with this SSID:

demo@solaris-vbox:~$ sstore export -F csv -t 2018-02-01T06:47:00 -i 60 '//:class.cpu//:stat.usage//:part.mode(user,kernel,stolen,intr)//:op.rate//:op.util'
time,//:class.cpu//:stat.usage//:part.mode(user,kernel,stolen,intr)//:op.rate//:op.util(intr),//:class.cpu//:stat.usage//:part.mode(user,kernel,stolen,intr)//:op.rate//:op.util(kernel),//:class.cpu//:stat.usage//:part.mode(user,kernel,stolen,intr)//:op.rate//:op.util(user)
1517496420000000,2.184663,28.283780,31.322588
1517496480000000,2.254090,16.524862,32.667445
1517496540000000,1.568696,19.479255,41.112911
1517496600000000,1.906700,18.194955,39.069998
1517496660000000,2.326821,18.103397,39.564789
1517496720000000,2.484758,17.909993,38.684371

Note: Even though we've asked for data on user, kernel, stolen, and intr (interrupts), it doesn't return data on stolen because it has none. Also note: It's using two other operations, rate and util, in combination to create this result (again, see ssid-op(7) for more info). This should allow you to click around the Dashboard and learn what you can gather and how to export it. We'll talk more on mining interesting data and, for example, using the JSON output in later posts.
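Since the util operation already yields percentages per mode, total CPU utilization for an interval is just the sum of the mode columns. A tiny sketch using the first row of the export above (the assumption being that the remainder up to 100% is idle time, since these modes partition busy time):

```python
# Mode percentages from the first row of the export above
# (intr, kernel, user; 'stolen' returned no data).
modes = {"intr": 2.184663, "kernel": 28.283780, "user": 31.322588}

# Total CPU utilization is the sum over modes; the rest is idle.
total = sum(modes.values())
idle = 100.0 - total
print(f"busy {total:.2f}%, idle {idle:.2f}%")
```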



Normalizing man page section numbers in Solaris 11.4

If you look closely at the listings for the Oracle Solaris 11.4 Reference Manuals and the previous Oracle Solaris 11.3 Reference Manuals, you might notice a change in some sections.  One of our “modernization” projects for this release actually took us back to our roots, in returning to the man page section numbers used in SunOS releases before the adoption of the System V scheme in Solaris 2.0.  When I proposed this change, I dug into the history a bit to explain in the PSARC case to review the switchover.

Unix man pages have been divided into numbered sections for the system's entire recorded history. The original sections, as seen in the Introduction to the Unix 1st Edition Manual from 1971 & the Unix 2nd Edition Manual from 1972, were:

I. Commands
II. System calls
III. Subroutines
IV. Special files
V. File formats
VI. User-maintained programs
VII. Miscellaneous

By Version 7, Bell Labs had switched from Roman numerals to Arabic and updated the definitions a bit:

1. Commands
2. System calls
3. Subroutines
4. Special files
5. File formats and conventions
6. Games
7. Macro packages and language conventions
8. Maintenance

Most Unix derivatives followed this section breakdown, and a very similar set is still used today on BSD, Linux, and Mac OS X:

1 General commands
2 System calls
3 Library functions, covering in particular the C standard library
4 Special files (usually devices, those found in /dev) and drivers
5 File formats and conventions
6 Games and screensavers
7 Miscellanea
8 System administration commands and daemons

The Linux Filesystem Hierarchy Standard defines these sections as:

man1: User programs. Manual pages that describe publicly accessible commands are contained in this chapter. Most program documentation that a user will need to use is located here.

man2: System calls. This section describes all of the system calls (requests for the kernel to perform operations).
man3: Library functions and subroutines. Section 3 describes program library routines that are not direct calls to kernel services. This and chapter 2 are only really of interest to programmers.

man4: Special files. Section 4 describes the special files, related driver functions, and networking support available in the system. Typically, this includes the device files found in /dev and the kernel interface to networking protocol support.

man5: File formats. The formats for many data files are documented in section 5. This includes various include files, program output files, and system files.

man6: Games. This chapter documents games, demos, and generally trivial programs. Different people have various notions about how essential this is.

man7: Miscellaneous. Manual pages that are difficult to classify are designated as being section 7. The troff and other text processing macro packages are found here.

man8: System administration. Programs used by system administrators for system operation and maintenance are documented here. Some of these programs are also occasionally useful for normal users.

The Linux man pages also include a non-FHS specified section 9 for "kernel routine documentation." But of course, one Unix system broke ranks and shuffled the numbering around. USL redefined the man page sections in System V to instead be:

1 General commands
1M System administration commands and daemons
2 System calls
3 C library functions
4 File formats and conventions
5 Miscellanea
7 Special files (usually devices, those found in /dev) and drivers

Most notably, this moved section 8 to 1M and swapped 4, 5, & 7 around.
Solaris still tried to follow the System V arrangement until now, with some extensions:

1 User Commands
1M System Administration Commands
2 System Calls
3 Library Interfaces and Headers
4 File Formats
5 Standards, Environments, and Macros
6 Games and screensavers
7 Device and Network Interfaces
9 DDI and DKI Interfaces

With Solaris 11.4, we've now given up the ghost of System V and declared Solaris to be back in sync with Bell Labs, BSD, and Linux numbering. Specifically, all existing Solaris man pages using these System V sections were renumbered to the listed standard section:

SysV    Standard
----    --------
1m  ->  8
4   ->  5
5   ->  7
7   ->  4

Sections 1, 2, 3, 6, and 9 remain as is, including the Solaris method of subdividing section 3 into per-library subdirectories. The subdivisions of section 7 introduced in PSARC/1994/335 have become subdivisions of section 4 instead; for instance, ioctls will now be documented in section 4I instead of 7I.

The man command was updated so that if someone specifies one of the remapped sections, it will look first in the section specified, then in any subsections of that section, then the mapped section, and then in any subsections of that section. This will assist users following references from older Solaris documentation to find the expected pages, as well as users of other platforms who don't know our subsections. For example:

If a user did "man -s 4 core", looking for the man page that was delivered as /usr/share/man/man4/core.4, and no such page was found in /usr/share/man/man4/, it would look for /usr/share/man/man5/core.5 instead.

If a user did "man -s 3 malloc", it would display /usr/share/man/man3/malloc.3c.

The man page previously delivered as ip(7P), and now as ip(4P), could be found by any of:

man ip
man ip.4p
man ip.4
man ip.7p
man ip.7

and the equivalent man -s formulations.
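The renumbering and the fallback search order described above can be sketched in a few lines. This is a toy model, not the actual man(1) implementation; it uses the mapping table listed above and assumes subsections (such as 7p) simply follow their base section:

```python
# SysV -> standard section renumbering, as listed above.
REMAP = {"1m": "8", "4": "5", "5": "7", "7": "4"}

def remap_section(section):
    """Map an old SysV man section (including subsections like '7p')
    to the standard section it moved to; 1, 2, 3, 6, 9 are unchanged."""
    s = section.lower()
    if s in REMAP:                      # exact match: 1m, 4, 5, 7
        return REMAP[s]
    base, sub = s[:1], s[1:]
    if base in REMAP:                   # subsection follows its base: 7i -> 4i
        return REMAP[base] + sub
    return s

def search_order(section):
    """man tries the section you asked for first, then the remapped one
    (subsection lookup within each is elided in this sketch)."""
    order = [section]
    mapped = remap_section(section)
    if mapped != section:
        order.append(mapped)
    return order

print(search_order("4"))    # man -s 4 core: try man4, then man5
print(search_order("7p"))   # ip.7p is found as ip(4P)
```

This mirrors the core.4 / core.5 and ip(7P) / ip(4P) examples above: a request for an old-style section falls through to the section the page moved to.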
Additionally, as long as we were mucking with the sections, we defined two new sections which we plan to start using soon:

2D DTrace Providers
8S SMF Services

The resulting Solaris manual sections are thus now:

1 User Commands
2 System Calls
2D DTrace Providers
3 Library Interfaces and Headers
3* Interfaces split out by library (i.e. 3C for libc, 3M for libm, 3PAM for libpam)
4 Device and Network Interfaces
4D Device Drivers & /dev files
4FS FileSystems
4I ioctls for a class of drivers or subsystems
4M Streams Modules
4P Network Protocols
5 File Formats
6 Games and screensavers
7 Standards, Environments, Macros, Character Sets, and miscellany
8 System Administration Commands
8S SMF Services
9 DDI and DKI Interfaces
9E Driver Entry Points
9F Kernel Functions
9P Driver Properties
9S Kernel & Driver Data Structures

We hope this makes it easier for users and system administrators who have to use multiple OSes by getting rid of one set of needless differences. It certainly helps us in delivering FOSS packages by not having to change all the man pages in the upstream sources to be different for Solaris just because USL wanted to be different 30 years ago.



Random Solaris Tips: 11.4 Beta, LDoms 3.5, Privileges, File Attributes & Disk Block Size

Solaris OS Beta 11.4 Download Location & Documentation

Recently Solaris 11.4 hit the web as a public beta product, meaning anyone can download and use it in non-production environments. This is a major Solaris milestone since the release of Solaris 11.3 GA back in 2015. A few interesting pages:

Solaris 11.4 Beta Downloads page for SPARC and x86
What's New in Oracle Solaris 11.4
Solaris 11.4 Release Notes
Solaris 11.4 Documentation

Logical Domains: Dynamic Reconfiguration, Blacklisted Resources, Command History

Dynamic Reconfiguration of Named Resources

Starting with the release of Oracle VM Server for SPARC 3.5 (aka LDoms), it is possible to dynamically reconfigure domains that have named resources assigned. Named resources are the resources that are assigned explicitly to domains.
Assigning core ids 10 & 11 and a 32 GB block of memory at physical address 0x50000000 to some domain X is an example of named resource assignment. The SuperCluster Engineered System is one example where named resources are explicitly assigned to guest domains. Be aware that, depending on the state of the system, domains, and resources, some of the dynamic reconfiguration operations may or may not succeed. Here are a few examples that show DR functionality with named resources:

ldm remove-core cid=66,67,72,73 primary
ldm add-core cid=66,67 guest1
ldm add-mem mblock=17664M:16G,34048M:16G,50432M:16G guest2

Listing Blacklisted Resources

When FMA detects faulty resource(s), Logical Domains Manager attempts to stop using those faulty core and memory resources (no I/O resources at the moment) in all running domains. Those faulty resources will also be preemptively blacklisted so they don't get assigned to any domain. However, if the faulty resource is currently in use, Logical Domains Manager attempts to use core or memory DR to evacuate the resource. If the attempt fails, the faulty resource is marked as "evacuation pending". All such pending faulty resources are removed and moved to the blacklist when the affected guest domain is stopped or rebooted. Starting with the release of LDoms software 3.5, blacklisted and evacuation-pending resources (faulty resources) can be examined with the help of ldm's -B option. eg.,

# ldm list-devices -B
CORE
    ID       STATUS          DOMAIN
    1        Blacklisted
    2        Evac_pending    ldg1
MEMORY
    PA               SIZE    STATUS          DOMAIN
    0xa30000000      87G     Blacklisted
    0x80000000000    128G    Evac_pending    ldg1

Check this page for some more information.

LDoms Command History

Recent releases of LDoms Manager can show the history of recently executed ldm commands with the list-history subcommand:

# ldm history
Jan 31 19:01:18  ldm ls -o domain -p
Jan 31 19:01:48  ldm list -p
Jan 31 19:01:49  ldm list -e primary
Jan 31 19:01:54  ldm history
..

The last 10 ldm commands are shown by default.
The ldm set-logctl history=<value> command can be used to configure the number of commands in the command history. Setting the value to 0 disables the command history log.

Disks

Determine the Block Size

The devprop command on recent versions of Solaris 11 can show the logical and physical block size of a device. The size is represented in bytes. eg., the following output shows a 512-byte size for both the logical and the physical block. It is likely a 512-byte native disk (512n).

% devprop -v -n /dev/rdsk/c4t2d0 device-blksize device-pblksize
device-blksize=512
device-pblksize=512

Find some useful information about disk drives that exceed the common 512-byte block size here.

Security Services

Privileges

With the debugging option enabled (+D), the ppriv command on recent versions of Solaris 11 can be used to check whether the current user has the required privileges to run a certain command. eg.,

% ppriv -ef +D /usr/sbin/trapstat
trapstat[18998]: missing privilege "file_dac_read" (euid = 100, syscall = "faccessat") for "/devices/pseudo/trapstat@0:trapstat" at devfs_access+0x74
trapstat: permission denied opening /dev/trapstat: Permission denied

% ppriv -ef +D /usr/sbin/prtdiag
System Configuration:  Oracle Corporation  sun4v T5240
Memory size: 65312 Megabytes
================================ Virtual CPUs ================================
..

The following example examines the privileges of a running process:

# ppriv 23829


Immutable Zones: SMF changes & Trusted Path services

History of the Immutable (ROZR) Zones

In Solaris 11 11/11 we introduced immutable non-global zones; these were built on top of MWAC (Mandatory Write Access Control) using a handful of choices for the file-mac-profile property in zone configurations. Management was only possible by booting the zone read/write or by modifying configuration files from within the global zone. In Solaris 11.2 we added support for the Immutable Global Zone, and so we also added the Immutable Kernel Zone. In order to make maintenance possible for the global zone we added the concept of a Trusted Path login. It is invoked through the abort sequence for an LDOM or bare-metal system, and for native and kernel zones using the -T/-U options of zlogin(1).

Limitations

The Trusted Path introduced in Solaris 11.2 was not available in services, and changes to the SMF repository were always possible. Depending on the file-mac-profile, /etc/svc/repository.db was either writable (not MWAC protected, such as in flexible-configuration), in which case the changes were permanent and the immutable zone's configuration was not protected, or the repository was not writable, in which case a writable copy was created in /system/volatile and the changes would not persist across a reboot. In order to make any permanent changes, the system needed to be rebooted read/write. The administrator had two choices: either the changes to the SMF repository were persistent (file-mac-profile=flexible-configuration) or any permanent changes required a r/w boot. In all cases, the behavior of an immutable system could be modified considerably. When an SMF service was moved into the Trusted Path using ppriv(1), it could not be killed, and the service would go to maintenance on the first attempt to restart or stop it.

In Solaris 11.4 we updated the Immutable Zone: SMF becomes immutable and we introduce services on the Trusted Path. Persistent SMF changes can be made only when they are made from the Trusted Path.
SMF becomes Immutable

SMF has two different repositories: the persistent repository, which contains all of the system and service configuration, and the non-persistent repository; the latter contains the current state of the system, that is, which services are actually running. It also stores the non-persistent property groups such as general_ovr; this property group is used to store whether services are enabled or disabled. The svc.configd service now runs in the Trusted Path, so it can change the persistent repository regardless of the MWAC profile. Changes made to the persistent repository will now always survive a reboot. svc.configd checks whether the caller is running in the Trusted Path; if a process runs in the Trusted Path it is allowed to make changes to the persistent repository. If not, an error is returned.

Trusted Path services

In Solaris 11.4 we introduce a Boolean parameter in the SMF method_credential called "trusted_path"; if it is set to true, the method runs in the Trusted Path. This feature is joined at the hip with Immutable SMF: without the latter, it would be easy to escalate from a privileged process to a privileged process in the Trusted Path. As all these processes need to behave normally, we added a new privilege flag, PRIV_TPD_KILLABLE; such a process, even when run in the Trusted Path, can be sent a signal from outside the Trusted Path. But clearly such a process cannot be manipulated from outside the Trusted Path, so you can't aim a debugger at it unless the debugger runs in the Trusted Path too. As the Trusted Path property can only be given by, or inherited from, init(8), the SMF restarters need to run in the Trusted Path. This feature allows us to run self-assembly services that do not depend on the self-assembly-complete milestone; instead we can now run them on the Trusted Path. These services can now take as long as they want, and they can be run on each and every boot and even when the service is restarted.
When a system administrator wants to always run the console login on the Trusted Path, he can easily achieve that by running the following commands:

# svccfg -s console-login:default setprop trusted_path = boolean: true
# svccfg -s console-login:default refresh

It is possible in Oracle Solaris 11.4 to write a service which updates and reboots the system; such a service can be started by an administrator outside of the Trusted Path by temporarily enabling it. Combined with non-reboot immutable, which was introduced in Oracle Solaris 11.3 SRU 12, automatic and secure updates are now possible without additional downtime. Similarly, there may be use cases for deploying a configuration management service, such as Puppet or Ansible, on the SMF Trusted Path so that it can reconfigure the system but interactive administrators, even root, cannot.


Migrating from IPF to Packet Filter in Solaris 11.4

Contributed by: Alexandr Nedvedicky

This blog entry covers the migration from IPF to Packet Filter (a.k.a. PF). If your Oracle Solaris 11.3 runs without IPF, then you can stop reading now (well, of course, if you're interested in reading about the built-in firewalls you should continue on). IPF served as a network firewall on Oracle Solaris for more than a decade (since Oracle Solaris 10). PF has been available on Oracle Solaris since Oracle Solaris 11.3 as an alternative firewall; an administrator must install it explicitly using 'pkg install firewall'. Having both firewalls shipped during the Oracle Solaris 11.3 release cycle should provide some time to prepare for a leap to a world without IPF. If you as a sysadmin have completed your homework and your ipf.conf (et al.) are converted to pf.conf already, then skip ahead to 'What has changed since Oracle Solaris 11.3'.

IPF is gone, what now?

On upgrade from Oracle Solaris 11.3, PF will be automatically installed without any action from the administrator. This is implemented by renaming pkg:/network/ipfilter to pkg:/network/ipf2pf and adding a dependency on pkg:/network/firewall. The ipf2pf package installs the ipf2pf(7) (svc:/network/ipf2pf) service, which runs at the first boot of the newly updated BE. The service inspects the IPF configuration, which is still available in '/etc/svc/repository-boot'. The ipf2pf start method uses the repository to locate the IPF configuration, which is then moved to the '/var/firewall/legacy.ipf/conf' directory. The content of the directory may vary depending on your ipfilter configuration. The next step is to attempt to convert the legacy IPF configuration to pf.conf. Unlike IPF, PF keeps its configuration in a single file named pf.conf (/etc/firewall/pf.conf). The service uses the 'ipf2pf' binary tool for the conversion. Please do not set your expectations for ipf2pf too high.
It's a simple tool (like a hammer or screwdriver) to support your craftsman's skill while converting the implementation of your network policy from IPF to PF. The tool might work well for simple cases, but please always review the conversion result before deploying it. As soon as the ipf2pf service is done with the conversion, it updates /etc/firewall/pf.conf with a comment pointing you to the result: '/var/firewall/legacy.ipf/pf.conf'. Let's see how the tool actually works when it converts your IPF configuration to PF. Let's assume your IPF configuration is kept in ipf.conf, ipnat.conf and ippool-1.conf:

ipf.conf:

#
# net0 faces to public network. we want to allow web and mail
# traffic as stateless to avoid explosion of IPF state tables.
# mail and web is busy.
#
# allow stateful inbound ssh from trusted hosts/networks only
#
block in on net0 from any to any
pass in on net0 from any to 192.168.1.1 port = 80
pass in on net0 from any to 172.16.1.15 port = 2525
pass in on net0 from pool/1 to any port = 22 keep state
pass out on net0 from any to any keep state
pass out on net0 from 192.168.1.1 port = 80 to any
pass out on net0 from 192.168.1.1 port = 2525 to any

ipnat.conf:

# let our private lab network talk to network outside
map net0 172.16.0.0/16 -> 0.0.0.0/32
rdr net0 192.168.1.1/32 port 25 -> 172.16.1.15 port 2525

ippool-1.conf:

table role = ipf type = tree number = 1 { 8.8.8.8, 10.0.0.0/32 };

In order to convert the IPF configuration above, we run ipf2pf as follows:

ipf2pf -4 ipf.conf -n ipnat.conf -p ippool-1.conf -o pf.conf

The resulting pf.conf looks like this:

#
# File was generated by ipf2pf(7) service during system upgrade. The
# service attempted to convert your IPF rules to PF (the new firewall)
# rules. You should check if firewall configuration here, suggested by
# ipf2pf, still meets your network policy requirements.
#

#
# Unlike IPF, PF intercepts packets on loopback by default. IPF does not
# intercept packets bound to loopback. To turn off the policy check for
# loopback packets, we suggest to use command below:
set skip on lo0

#
# PF does IP reassembly by default. It looks like your IPF does not have IP
# reassembly enabled. Therefore the feature is turned off.
#
set reassemble no
# In case you change your mind and decide to enable IP reassembly
# delete the line above. Also to improve interoperability
# with broken IP stacks, tell PF to ignore the 'DF' flag when
# doing reassembly. Uncommenting line below will do it:
#
# set reassemble yes no-df

#
# PF tables are the equivalent of ippools in IPF. For every pool
# in legacy IPF configuration, ipf2pf creates a table and
# populates it with IP addresses from the legacy IPF pool. ipf2pf
# creates persistent tables only.
#
table <pool_1> persist { 8.8.8.8, 10.0.0.0 }

#
# Unlike IPF, the PF firewall implements NAT as yet another
# optional action of a regular policy rule. To keep PF
# configuration close to the original IPF, consider using
# the 'match' action in PF rules, which translate addresses.
# There is one caveat with 'match'. You must always write a 'pass'
# rule to match the translated packet. Packets are not translated
# unless they hit a subsequent pass rule. Otherwise, the "match"
# rule has no effect.
#
# It's also important to avoid applying nat rules to DHCP/BOOTP
# requests. The following stateful rule, when added above the NAT
# rules, will avoid that for us.
pass out quick proto udp from 0.0.0.0/32 port 68 to 255.255.255.255/32 port 67

# There are 2 such rules in your IPF ruleset
#
match out on net0 inet from 172.16.0.0/16 to any nat-to (net0)
match in on net0 inet from any to 192.168.1.1 rdr-to 172.16.1.15 port 2525

#
# The pass rules below make sure rdr/nat-to actions
# in the match rules above will take effect.
pass out all
pass in all

block drop in on net0 inet all

#
# IPF rule specifies either a port match or return-rst action,
# but does not specify a protocol (TCP or UDP). PF requires a port
# rule to include a protocol match using the 'proto' keyword.
# ipf2pf always assumes and enters a TCP port number
#
pass in on net0 inet proto tcp from any to 192.168.1.1 port = 80 no state

#
# IPF rule specifies either a port match or return-rst action,
# but does not specify a protocol (TCP or UDP). PF requires a port
# rule to include a protocol match using the 'proto' keyword.
# ipf2pf always assumes and enters a TCP port number
#
pass in on net0 inet proto tcp from any to 172.16.1.15 port = 2525 no state

#
# IPF rule specifies either a port match or return-rst action,
# but does not specify a protocol (TCP or UDP). PF requires a port
# rule to include a protocol match using the 'proto' keyword.
# ipf2pf always assumes and enters a TCP port number
#
pass in on net0 inet proto tcp from <pool_1> to any port = 22 flags any keep state (sloppy)

pass out on net0 inet all flags any keep state (sloppy)

#
# IPF rule specifies either a port match or return-rst action,
# but does not specify a protocol (TCP or UDP). PF requires a port
# rule to include a protocol match using the 'proto' keyword.
# ipf2pf always assumes and enters a TCP port number
#
pass out on net0 inet proto tcp from 192.168.1.1 port = 80 to any no state

#
# IPF rule specifies either a port match or return-rst action,
# but does not specify a protocol (TCP or UDP). PF requires a port
# rule to include a protocol match using the 'proto' keyword.
# ipf2pf always assumes and enters a TCP port number
#
pass out on net0 inet proto tcp from 192.168.1.1 port = 2525 to any no state

As you can see, the resulting pf.conf file is annotated with comments, which explain what happened to the original ipf.conf.
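One way to be systematic when reviewing the conversion result is to compare rule counts between the old and new configurations, so a large mismatch stands out before you read the file line by line. The portable shell sketch below is our own illustration, not part of ipf2pf; the miniature sample files stand in for your real ipf.conf and the generated pf.conf.

```shell
# Count lines that begin with a policy action keyword; comments and
# 'set' option lines are ignored.
count_rules() {
    grep -c -E '^[[:space:]]*(pass|block|match)' "$1"
}

# Self-contained demo with trimmed sample configurations:
cat > /tmp/ipf.conf.sample <<'EOF'
block in on net0 from any to any
pass in on net0 from any to 192.168.1.1 port = 80
pass out on net0 from any to any keep state
EOF

cat > /tmp/pf.conf.sample <<'EOF'
set skip on lo0
block drop in on net0 inet all
pass in on net0 inet proto tcp from any to 192.168.1.1 port = 80 no state
pass out on net0 inet all flags any keep state (sloppy)
EOF

echo "ipf rules: $(count_rules /tmp/ipf.conf.sample)"
echo "pf rules:  $(count_rules /tmp/pf.conf.sample)"
# prints:
# ipf rules: 3
# pf rules:  3
```

A matching count is no guarantee of an equivalent policy (ipf2pf may legitimately add or merge rules), but a large difference is a prompt to read the generated file carefully.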
What has changed since Oracle Solaris 11.3?

If you already have experience with PF on Solaris, then you are going to notice these changes since Oracle Solaris 11.3:

firewall in degraded state

The firewall service enters the degraded state whenever it gets enabled with the default configuration shipped in the package. This notifies the administrator that the system has enabled the firewall with an empty configuration. As soon as you alter /etc/firewall/pf.conf and refresh the firewall service, the service will become online.

firewall in maintenance state

The firewall enters the maintenance state as soon as the service tries to load a syntactically invalid configuration. If that happens, the SMF method inserts hardwired fallback rules, which drop all inbound sessions except ssh.

support for IP interface groups

11.4 comes with support for 'firewall interface groups'. It's a feature which comes from upstream. The idea is best described by its author, Henning Brauer [ goo.gl/eTjn54 ]. Oracle Solaris 11.4 brings the same feature with a Solaris flavor. Interface groups are treated as the interface property 'fwifgroup'. To assign interface net0 to group alpha, use ipadm(8):

ipadm set-ifprop -p fwifgroup=alpha net0

To show the firewall interface groups net0 is a member of, use show-ifprop:

ipadm show-ifprop -p fwifgroup net0

A firewall interface group is treated like any other interface a PF rule can be bound to. The rule below applies to all packets bound to firewall interface group alpha:

pass on alpha all

The firewall interface group is just a kind of tag you assign to an interface, so PF rules can refer to such interfaces using tags instead of names.

Known Issues

Not everything goes as planned. There are a bunch of changes which missed the beta-release build. Those are known issues and are going to be addressed in the final release.

firewall:framework SMF instance

This is going to be removed in the final release. The feature presents yet another hurdle in the upgrade path, so it's been decided to postpone it; it's just unfortunate that we failed to remove it for beta.

support for _auto/_static anchors

These are closely related to the firewall:framework instance. Support for those two anchors is postponed.

'set skip on lo0' in default pf.conf

The 'set skip on lo0' line is going to disappear from the default pf.conf shipped by the firewall package.

Lack of firewall in Solaris 10 BrandZ

With IPF gone, there is no firewall available in Solaris 10 BrandZ. The only solution for Solaris 10 deployments which require firewall protection is to move those Solaris 10 branded zones behind a firewall which runs outside the zone. The firewall can be either another network appliance, or a Solaris 11 zone with PF installed. The Solaris 11 zone will act as an L3 router forwarding all traffic for the Solaris 10 zone. If such a solution does not work for your deployment/scenario, we hope to hear back from you with some details of your story.

Although there are differences between IPF and PF, the migration should not be that hard, as both firewalls have a similar concept. We keep PF on Oracle Solaris as close to upstream as possible, so many guides and recipes found on the Internet should work for Oracle Solaris as well; just keep in mind that NAT-64, pfsync and bandwidth management were not ported to Oracle Solaris. We hope you'll enjoy the PF ride and will stay tuned for more on the Oracle Solaris 11.4 release.

Links

Oracle Solaris Documentation on IPF, PF, and comparing IPF to PF.
[ goo.gl/YyDMVZ ] https://sourceforge.net/projects/ipfilter/
[ goo.gl/UgwCth ] https://www.openbsd.org/faq/pf/
[ goo.gl/eTjn54 ] https://marc.info/?l=openbsd-misc&m=111894940807554&w=2


More adventures in Software FMA

Those of you that have ever read my own blog (Ghost Busting) will know I've a long-standing interest in trying to get the systems we all use and love to be easier to fix, and ideally tell you themselves what's wrong with them. Back in Oracle Solaris 11 we added the concept of the Software Fault Management Architecture (SWFMA), with two types of event modelled as FMA defects: one was panic events, the other SMF service state transitions. This also allowed for notification of all FMA events via a new service facility (svccfg setnotify) over SNMP and email. With a brief diversion to make crash dumps smaller, faster, and easier to diagnose, we've come back to the SWFMA concept and extended it in two ways.

Corediag

It's pretty easy to see that the same concept used for modelling a system panic as an FMA event could be applied to user-level process core dumps. So that's what we've done. coreadm(8) has been extended so that, by default, a diagnostic core file is created for every Solaris binary that crashes. This is smaller than a regular core dump. We then have a service (svc:/system/coremon:default) which runs a daemon (coremond) that monitors for these being created and summarizes them. By default the summary file is kept, though you can use coreadm to remove them. coremond then turns these into FMA alerts. These are considered more informational than full-on defects, but are still present in the FMA logs. You can run fmadm(8) as in the screenshot below to see any alerts. This was one I got when debugging a problem with a core file.

Stackdiag

Over the years we've learned that, for known problems, there is a very good correlation between the raw stack trace and an existing bug. We've had tools internally to do this for years. We've mined our bug database and extracted stack information; this is bundled up into a file delivered by pkg:/system/diagnostic/stackdb.
Any time FMA detects there is stack telemetry in an FMA event, and the FMA message indicates we could be looking at a software bug, it'll trigger a lookup and try to add the bug id for significant threads to the FMA message. So if you look in the message above you'll see the description that says:

Description : A diagnostic core file was dumped in /var/diag/e83476f7-104d-4c85-9de4-bf7e45f261d1 for RESOURCE /usr/bin/pstack whose ASRU is . The ASRU is the Service FMRI for the resource and will be NULL if the resource is not part of a service. The following are potential bugs. stack[0] - 24522117

The stack[0] shows which thread within the process caused the core dump, and the bug id following it is one you can search for in MOS to find any solution records. Alternatively, you can see if you've already got the fix in your Oracle Solaris package repository, by using the pkg search option to search for the bug id.

Hang on. If I've got the fix in the package repository, why isn't it installed? The stackdb package is what we refer to as "unincorporated". It is just a data file, has no code within it, and the latest version you have available will be installed (not just the one matching the SRU you're on). So you can update it regularly to get the latest diagnosis, without updating the SRU on your system or rebooting it. This means you may get information about bugs which are already fixed, and whose fix is available, when you are on older SRUs.

Software FMA

We believe that these are the first steps to a self-diagnosing system, and will reduce the need to log SRs for bugs we already know about. Hopefully this will mean you can get the fixes quicker, with minimal process or fuss.
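Following the repository check suggested above, a hedged sketch of the lookup (the bug id is the one from the sample message; whether a search matches depends on what your configured repositories index):

```
# Search the configured package repositories for the bug id reported
# in the FMA message above:
pkg search -r 24522117

# Dry-run check for a newer stackdb data package:
pkg update -n pkg:/system/diagnostic/stackdb
```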



What's in a uname?

One of the more subtle changes with Oracle Solaris 11.4 is the identity of the operating system - namely the output of uname(1). Obviously we are not changing the release - this is still SunOS 5.11, which brings along the interface stability levels that Oracle Solaris has delivered for decades. However, based in part on customer feedback, and in part on internal requirements, the version level now displays finer-grained information:

$ uname -v
11.4.0.12.0

If we compare this to the output from an Oracle Solaris 11.3 machine running Support Repository Update 28:

$ uname -v
11.3

So we now have 3 extra digits to convey more information, whose meaning is:

11.<update>.<sru>.<build>.<reserved>

update: the Oracle Solaris update. From the above we see this is Oracle Solaris 11 Update 4.
sru: the SRU number. Again from the above, as this is a beta release it is not an SRU, so the value is 0.
build: the build of the update, or, if the SRU is non-zero, of the SRU.
reserved: a number that will be used to reflect some internal mechanisms - for example, if we discover a problem with the build but the next build has been started, then we will use this number as a 'respin'.

Taking this forward, when the 9th SRU is produced the output of uname -v will be:

$ uname -v
11.4.9.4.0

As an SRU typically has 4 builds, we see the build number is 4 and, as the SRU is perfect, the reserved field is 0. One other important change, which is not immediately obvious, is that the version is no longer encoded in the kernel at build time. Rather, it is read from a file at boot: /etc/versions/uts_version. This brings about an important point - the system identity can be changed without the need to deliver a new kernel. This means that, potentially, SRUs can be delivered that modify userland binaries AND update the system identity (i.e. no need to reboot to update the system identity).
This file is protected from modification via extended attributes used within the packaging system, and because it is a delivered file it is easy to detect if it has been modified:

# pkg verify -p /etc/versions/uts_version
PACKAGE                                                                 STATUS
pkg://solaris/entire                                                     ERROR
        file: etc/versions/uts_version
                ERROR: Hash: 7d5ef997d22686ef7e46cc9e06ff793d7b98fc14 should be 0fcd60579e8d0205c345cc224dfb27de72165d54

What about the previous advice to use 'pkg info entire' to identify the system? Using the packaging system to identify what is running on the system continues to be a surefire way, simply because trying to convey detailed information in 5 digits is really impossible for a variety of reasons. The versions (FMRIs) of packages have been changed. We have taken the leap to make them more reflective of the release. For example:

$ pkg list -H entire
entire              11.4-11.4.0.0.0.12.1       i--

whereas in the 11.3 SRU case:

$ pkg list -H entire
entire              0.5.11-0.175.3.28.0.4.0    i--

The FMRI now indicates this is 11.4, and the branch scheme (the numbers after the '-') has replaced 0.175 with a much more useful release version. The above example indicates Oracle Solaris 11.4, build 12 and a respin of 1. The exact details of the various digits are documented in pkg(7).
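The five fields described above are easy to pull apart in a script. A small POSIX-shell sketch (the version string is hard-coded here for illustration; on a real 11.4 system you would capture it with v=$(uname -v)):

```shell
# Split the 11.4-style uname -v string into its documented fields:
# 11.<update>.<sru>.<build>.<reserved>  (the leading 11 is the release)
v="11.4.0.12.0"
IFS=. read -r release update sru build reserved <<EOF
$v
EOF
echo "update=$update sru=$sru build=$build reserved=$reserved"
# prints: update=4 sru=0 build=12 reserved=0
```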


Live Zone Reconfiguration for Dataset Resources

Contributed by: Jan Pechanec

Live Zone Reconfiguration, LZR for short, was introduced in Oracle Solaris 11.3. It allows the configuration of a running zone to be changed without a need to reboot it. More precisely, it provides a way to:

- reconfigure a running zone to match the persistent configuration of a zone maintained through zonecfg(8)
- edit the live configuration of a running zone instead of the persistent configuration on stable storage

Up to Solaris 11.3, the LZR supported resources and properties like anet, capped-memory, pool, and others. For the complete list, see the solaris(5) and solaris-kz(5) manual pages on 11.3, as there are differences in what is supported by each brand. Please note that those manual pages were renamed in 11.4 Beta to zones_solaris(7) and zones_solaris-kz(7). With the release of Oracle Solaris 11.4 Beta, the LZR newly supports reconfiguration of dataset resources for "solaris" branded zones -- aka non-global zones (NGZs), or native zones. This new feature in 11.4 allows you to add datasets to, and remove them from, solaris branded zones on the fly; such datasets show up within the zone as ZFS pools that may be imported and exported using the established zpool(8) command. In order to provide a coherent user interface and a good administrative experience, the decision was to use the already existing NGZ command-line interface: a new virtual ZFS pool, made available to the running NGZ by the GZ administrator via adding a dataset resource using the LZR, is imported from within the zone, and a dataset resource is removed out of a running NGZ when no longer needed. A short example will illustrate how to use this feature by applying a modified persistent configuration.
First, create the dataset in the global zone, modify the native zone's configuration, and apply the modifications:

global-zone# zfs create rpool/lzr1
global-zone# zonecfg -z tzone1
zonecfg:tzone1> add dataset
zonecfg:tzone1:dataset> set name=rpool/lzr1
zonecfg:tzone1:dataset> end
zonecfg:tzone1> commit
zonecfg:tzone1> exit
global-zone# zoneadm -z tzone1 apply
zone 'tzone1': Checking: Adding dataset name=rpool/lzr1
zone 'tzone1': Applying the changes

Now log in to the tzone1 zone and import the newly added ZFS pool:

global-zone# zlogin tzone1
tzone1:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool   127G  32.0G  95.0G  25%  1.00x  ONLINE  -
tzone1:~# zpool import    # will show pools available for import
  pool: lzr1
    id: 11612286362086021519
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
        lzr1      ONLINE
          c1d0    ONLINE
tzone1:~# zpool import lzr1
tzone1:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
lzr1    127G  32.0G  95.0G  25%  1.00x  ONLINE  -
rpool   127G  32.0G  95.0G  25%  1.00x  ONLINE  -
tzone1:~# touch /lzr1/hello
tzone1:~# ls -l /lzr1/hello
-rw-r--r--   1 root     root           0 Jan 29 09:51 /lzr1/hello

The size of the imported lzr1 pool is the same as the space available in the ZFS dataset rpool/lzr1 within the GZ. Now try to remove the dataset while it is still imported in the zone:

global-zone# zonecfg -z tzone1
zonecfg:tzone1> remove dataset 0
zonecfg:tzone1> commit
zonecfg:tzone1> exit
global-zone# zoneadm -z tzone1 apply
zone 'tzone1': Checking: Removing dataset name=rpool/lzr1
zone 'tzone1': error: cannot remove dataset 'rpool/lzr1': not in exported state
zoneadm: zone tzone1: failed to apply the configuration: zoneadmd(8) returned an error 1 (unspecified error)
global-zone# echo $?
1

As you would probably expect, one cannot remove a dataset resource while the ZFS pool is in use within the zone. So, just redo the operation after exporting the pool first:

tzone1:~# zpool export lzr1
tzone1:~# echo $?
0
global-zone# zoneadm -z tzone1 apply
zone 'tzone1': Checking: Removing dataset name=rpool/lzr1
zone 'tzone1': Applying the changes

Alternatively, you could update only the in-memory configuration via "zonecfg -z tzone1 -r". The configuration is applied on commit:

global-zone# zonecfg -z tzone1 remove dataset 0
global-zone# zonecfg -z tzone1 apply
...
global-zone# zonecfg -z tzone1 -r
zonecfg:tzone1> add dataset
zonecfg:tzone1:dataset> set name=rpool/lzr1
zonecfg:tzone1:dataset> end
zonecfg:tzone1> commit
zone 'tzone1': Checking: Adding dataset name=rpool/lzr1
zone 'tzone1': Applying the changes

Within the tzone1 zone, you would now import the pool the same way as before.



Application Sandboxing in Oracle Solaris 11.4

Portions Contributed by: Glenn Faden

Oracle Solaris Zones provide a robust security boundary around all processes running inside of them. Non-global (aka native) zones were designed to provide operating-system-level virtualisation, so they present complete user-space isolation, with separate file system namespaces, IP stacks, etc., while running on a shared kernel. Sometimes it is desirable to wrap an additional security boundary around an application to reduce the risk of it leaking data or executing external tools if it were to be misused. Application Sandboxing provides that lighter-weight capability, and it can be used to provide containment even for unprivileged applications. In March of 2013, Glenn Faden wrote a blog entitled Application Containment via Sandboxing. That prototype led to the development of the new sandboxing(7) infrastructure in Oracle Solaris 11.4. Application sandboxes are a new security mechanism used to provide additional separation between running programs, with the goal of reducing the risk of unauthorised access, read and write, to user/application data. Sandboxes isolate applications by restricting the applications' process attributes, such as their clearances and privileges. Sandboxes can be used within global and non-global zones. A new unprivileged command, sandbox(1), provides various options for entering restricted environments to run applications in an isolated environment. It supports two use cases:

sandbox [-n] -l [clearance] [command]
sandbox -s sandboxname [command]

The first use case is similar to the original sandboxing prototype in that the user applies specific attributes to customize the sandbox. The -n option is still used to prevent all network access by removing the basic privilege net_access. To restrict execution to programs under /usr, an extended policy is applied to the privilege proc_exec.
The prototype applied extended policies to the basic privileges file_read and file_write to achieve an isolated environment. In the new implementation, file access is restricted by reducing the process clearance. By default, the clearance is set to the system minimum label, Admin_Low, but the -l option can be used to specify any clearance that is dominated by the current process clearance. The current user and group IDs are preserved, but processes inside the sandbox run with a lower clearance, and cannot access the user's processes or files that are outside of the sandbox. A subdirectory should be created and assigned a label matching the sandbox clearance prior to entering a sandbox, and the sandbox command should be executed from that directory. In the second use case, the attributes of the sandbox are specified administratively. Administrators with the Sandbox Management rights profile can create persistent, uniquely named sandboxes with specific security attributes using the create subcommand of sandboxadm(8). Named sandboxes have parent and child relationships that are implemented using hierarchical and disjoint clearances. Sandbox clearances are assigned automatically so that the clearances of all parent sandboxes are disjoint from each other, but dominate their respective child sandboxes. The clearance of each child sandbox is disjoint with respect to every other child sandbox. The default sandboxing configuration supports the creation of up to four disjoint parent sandboxes, each of which may have up to 100 child sandboxes. These limits can be raised using the init option of sandboxadm(8). In addition, a top-level parent sandbox can be created which dominates every other sandbox. The other process attributes of named sandboxes include a unique user ID, optional primary and secondary groups, and a unique project for resource management. The unique sandbox name is also used to identify its corresponding project.
The create subcommand also creates a properly labeled home directory for each sandbox. When entering a named sandbox, its associated directory is automatically set as the current working directory. Access to named sandboxes is restricted by user ID and clearance. Each sandbox has a unique label which specifies how it is hierarchically related to all other sandboxes. For more information, see the labels(7) man page. Processes executed in a sandbox must have a clearance that dominates the sandbox's label. For more information about process labeling, see the clearance(7) man page. To enter a parent sandbox, the user ID of the calling process must equal the user ID that has been assigned to the sandbox, or the process must assert the proc_setid privilege. In addition, the clearance of the calling process must dominate the clearance assigned to the sandbox. Entry into a child sandbox is restricted to processes that are already running within its parent sandbox. Using sandbox(1) or the corresponding sandbox_enter(3SANDBOX) API does not require a password. However, sandbox accounts may be assigned passwords and/or ssh keys to support local logins or remote access. Upon successful authentication, the sandbox is entered automatically.
Now let's look at an example, running an editor in an application sandbox:

~alice:$ cd playground
~alice/playground:$ sandbox vim

The credentials of the resulting process look like this:

~alice/playground:$ ppriv 103889
103889: vim
flags = PRIV_XPOLICY
    Extended policies:
        {proc_exec}:/export/home/alice/playground
        {proc_exec}:/usr/*
    E: basic,!net_access,!proc_exec,!proc_info,!proc_session
    I: basic,!net_access,!proc_exec,!proc_info,!proc_session
    P: basic,!net_access,!proc_exec,!proc_info,!proc_session
    L: all
~alice/playground:$ plabel $$
103889: ADMIN_LOW

From this we can see that the application (vim) has no network access, and we have removed some of the basic privileges that would normally allow it to see and signal other processes on the system. It can also only execute new programs from /usr or the directory we entered. The following diagram provides an architectural overview of Oracle Solaris sandboxing; very few people need to understand this, but it shows how it fits into the existing Oracle Solaris security architecture:

As you can see from the brief description above, the Application Sandboxing functionality is built by combining multiple pre-existing Solaris security and resource management features to provide a simple interface for containing applications.


reflink(3c) What is it? Why do I care? And how can I use it?

Oracle Solaris 11.4 Beta is now available. One of the most requested features may seem like a tiny thing, but could have a huge impact on your VM deployments, or if you happen to deal with very large files regularly (e.g. video files). Copying a file is usually a case of telling the file system to read the blocks from disk, then telling it to write those same blocks to a different bit of the disk. reflink(3C) allows you to tell the file system to simply create a new mapping to those blocks from a file system object (i.e. a file). ZFS will then keep track of any changes you make to the new file and write a copy of the modified blocks as part of the new file only. This is essentially like a ZFS file system snapshot and clone, but at the file level. Think of a common VM deployment scenario. Quite often you'll take a standard VM image and create multiple VMs from that same image. You can do this in Oracle Solaris 11.3 by putting each VM in its own ZFS file system or ZVOL, and using ZFS snapshot and ZFS clone to create new ones. In Oracle Solaris 11.4 Beta you can achieve the same thing using reflink(3C). The overall effect of this is not only space savings - you're only adding the changes made to each file to the overall file system usage - but reduced CPU usage and time as well. This is because all you're doing is getting the file system to create more metadata to track the new file and the changes. The reflink copy is near-instantaneous and uses virtually no CPU resources, as it's not doing the expensive read/write cycle. Of course, we're not saying you need to write a program to do the reflink for you. We have also modified the cp(1) command to add the '-z' flag, which tells cp(1) to use reflink instead of copying by hand. In addition, this is designed to be compatible with the Oracle VAAI/NFS plugin.
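As a hedged illustration of the cp(1) change described above (the image paths are made up for the example; both source and destination live on the same ZFS pool):

```
# Clone a golden VM image for two new guests; each copy is a reflink,
# so it completes near-instantly and initially consumes only metadata.
cp -z /export/images/base.img /export/vm/vm01/disk.img
cp -z /export/images/base.img /export/vm/vm02/disk.img
```

Each clone then accumulates space only as its blocks diverge from the original image.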


Oracle Solaris 11.4 Open Beta Released!

Today we released the Oracle Solaris 11.4 public beta (#solaris114beta). This latest update in our continuous innovation stream delivers many new features and enhancements to help secure your data, simplify the system and application lifecycle, and streamline your cloud journey, while preserving your current on-premises investment. Oracle Solaris is engineered for security at every level, allowing you to spend time innovating while reducing risk. New security features include:

- New application sandbox management tool for constraining both privileged and unprivileged apps, even within a single virtualization environment
- Automatically protect key applications and the system kernel with enhanced exploit management controls that leverage Oracle SPARC Silicon Secured Memory
- Stay secure and compliant with an integrated deployment workflow, now with the ability to gather compliance assessments for multiple instances
- Trust what is running with secure WANboot and secure UEFI boot on x86
- Easily build and control immutable environments with even better change control and trusted services

The new sandboxing capability allows you to control exactly what an application can see, and even what different aspects of an application can see. So, for example, you can prevent the DB admins of two different databases from seeing the data files of the other database, even in a container database environment. We've enhanced compliance to be able to report on all your Oracle Solaris 11.4 systems, and also to report on configuration changes on those systems, in a single step. SPARC Silicon Secured Memory is now easier to use on applications and can be controlled system-wide or on a per-file basis with our sxadm(1) tool, and is enabled in the kernel by default. We've continued to simplify the system and application lifecycle, making common tasks easier and less time-consuming, enabling you to adapt to business needs quickly and run your data center with the utmost confidence and efficiency.
- Improved zone migration between systems using shared storage, including support for NFS shared storage, and the ability to predefine destination systems for each zone
- New live zone reconfiguration that allows datasets to be added without a reboot, reducing downtime
- Easy monitoring, restart, and notification of zone states, with zone dependencies that allow zones to describe the required boot order
- Improved ZFS replication using send streams for backup and archive, with the introduction of raw streams and improved stream deduplication
- Improved administrative and automation experience when deleting large datasets with ZFS asynchronous dataset destroy
- New planned graceless recovery support that reduces downtime for NFS clients on server reboots
- New StatsStore that delivers powerful data and deep insights into what is happening on your systems
- Updated compliance reporting that gives deep insight into system compliance
- New SMF Goal Services capability that delivers a single point of monitoring for business functionality

The new StatsStore also has a technology-preview web GUI that shows the statistical data in graphs. Raw send streams for ZFS mean that you can now forward compressed and encrypted data between datacenters as efficiently as possible. No matter the enterprise workload, from ERP to DevOps, Oracle Solaris delivers secure, reliable, and agile compute that will set up your business for the new wave of enterprise computing:

- Enhanced secure remote administration that greatly reduces risks and errors
- New dehydrated unified archives that can omit the OS portion of the archive, reducing archive size and making it easier to move into cloud environments
- Encrypt everything, everywhere, all the time, now with the ability to retrieve crypto keys from OASIS KMIP (Key Management Interoperability Protocol) key servers such as Oracle Key Vault or third-party KMIP-compliant servers
- Updated FOSS evaluation packages, including golang
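The raw ZFS send streams mentioned above are what make it possible to forward compressed and encrypted data without inflating or decrypting it in transit. Here is a minimal sketch of such a replication; the dataset names (tank/db, drpool/db) and destination host (dr-site) are hypothetical, the raw-stream flag shown (-w) follows OpenZFS naming, and the commands are only printed, not executed, so verify the exact options against the zfs(1M) man page on your release:

```shell
# Sketch: replicate a dataset between datacenters as a raw stream.
# Hypothetical names throughout; commands are printed, not executed.
SNAP="tank/db@replica-20180101"

# A raw stream carries blocks as stored on disk, so compressed and
# encrypted data cross the wire without being inflated or decrypted.
SEND_CMD="zfs send -w ${SNAP}"
RECV_CMD="ssh dr-site zfs receive -F drpool/db"

echo "${SEND_CMD} | ${RECV_CMD}"
# prints: zfs send -w tank/db@replica-20180101 | ssh dr-site zfs receive -F drpool/db
```

Receiving with -F rolls the destination dataset back to the most recent matching snapshot before applying the stream, which is a common choice for a dedicated replication target.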
That is only a brief look at what's available in the Oracle Solaris 11.4 beta. There is already a wealth of information available on Oracle Solaris 11.4. For more information, you can read the data sheet. The What's New is invaluable for finding out the details. For your frequently asked questions, we have the FAQ. Please do read the Release Notes for important information about this release. There are also many blogs coming to this very site as well as others, such as those that Brian (Oracle ACE Director, Collier IT) and Erik (Oracle ACE Director, Mythics) have already posted. Go download it now and try it yourself!


Oracle Solaris 11.3 SRU 28

Happy New Year from the Oracle Solaris SRU team! This SRU is the first Critical Patch Update of 2018 and contains some important security fixes and enhancements. SRU28 is now available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support .

Features included in this SRU:

- The Oracle Solaris LDAP client now recognizes more LDAP servers, including: Oracle Directory Server Enterprise Edition, Oracle Unified Directory, Oracle Directory Proxy Server, Active Directory, OpenLDAP server, and Novell eDirectory
- Enhancements to the Oracle Dual Port 10/25Gb Ethernet Adapter, including support for SR-IOV and the ability to update firmware from Oracle Solaris
- Explorer 8.18 is now available
- cmake has been updated to 3.9.1
- HMP has been updated to 2.4.2.2

The SRU also updates the following components, which have security fixes:

- OpenSSL has been updated to 1.0.2n
- Firefox has been updated to 52.5.2esr
- git has been updated to 2.15.0
- Wireshark has been updated to 2.4.3
- Apache Web Server has been updated to 2.4.29

Security fixes for the following components: emacs, Apache Web Server 2.2, and django.

Full details of this SRU can be found in My Oracle Support Doc 2347232.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).
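Applying an SRU via 'pkg update' from the support repository generally looks like the sketch below. The certificate and key paths and the boot-environment name are hypothetical (support-repository access requires credentials obtained through My Oracle Support), and the commands are printed rather than executed so the sketch can be reviewed safely:

```shell
# Sketch: point pkg(1) at the support repository and apply the SRU.
# Hypothetical cert/key paths and BE name; commands printed, not run.
KEY=/var/pkg/ssl/Oracle_Solaris_11_Support.key.pem
CERT=/var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem

# Attach the support-repository origin to the solaris publisher
# (requires the SSL credentials above).
SET_PUB="pkg set-publisher -k ${KEY} -c ${CERT} -g https://pkg.oracle.com/solaris/support solaris"

# Updating into a named boot environment keeps the current BE as a
# fallback in case the SRU needs to be backed out.
UPDATE="pkg update --be-name s11_3_sru28"

echo "${SET_PUB}"
echo "${UPDATE}"
```

Booting back into the previous BE (via beadm activate) is the standard back-out path if anything goes wrong after the update.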



2017 in Review and Looking ahead to 2018

As 2017 comes to a close and we begin looking forward to 2018, I want to take a few moments to reflect on 2017 and look forward to the future. It's been quite a year for Oracle Solaris. In January, we announced a new development and delivery model: a continuous delivery model took the place of our "Big Bang" releases. Instead of a big release with far too many changes, which caused our customers so many headaches with requirements to test and verify the new OS (and our partner ISVs needing to do the same thing), we are now delivering changes, new functionality, and FOSS updates in the Support Repository Updates. This means you're getting all the fixes and new functionality, but you aren't getting it all at once. Joost Pronk and I wrote a blog about what we've delivered in the SRU stream here. At the same time, to accommodate the new release model, we extended Premier Support for Oracle Solaris to at least 2031 and Extended Support to at least 2034. This is the longest Premier Support that Oracle has ever offered. I want to address something I've been hearing from some of you about this: it doesn't mean Oracle Solaris is only doing bug fixing. We are actively developing and innovating in Oracle Solaris (that is what Premier Support is) until at least 2034. Since then, we've been hard at work getting the next version of Oracle Solaris (11.4) ready for release. Speaking of support, there have been many questions about Oracle Solaris 10 support, since the end of Premier Support is coming on January 31, 2018. I posted a blog explaining Oracle Solaris 10 support at the beginning of December. In September, Fujitsu launched the SPARC M12 systems, and we launched the Oracle SPARC M8 processor and the T8/M8 systems. You can watch our Chief Architect, Edward Screven, talk about them here. We then began a worldwide Oracle Solaris roadshow with our partner Fujitsu. That roadshow has been ongoing around the world since.
Bill Nesheim just wrapped up a Northern European leg of the tour. We might be doing more dates and cities in the first half of 2018; if you are interested in us coming to your city or country, please let us know. In October, at OOW17, we showed the public what we had been up to and where we are going:

- Edward Screven presented our Oracle Systems Strategy to the world.
- Bill Nesheim, Vice President, Oracle Solaris, spoke as a guest speaker in Fujitsu's session on Artificial Intelligence with Goro Watanabe. Then, Bill followed up with a general session on Oracle Solaris directions. You can download the presentation here.
- Joost Pronk and I spoke on migrating your workloads to the cloud. You can get that presentation here.
- On November 16th, I did a webcast of some of the road show: 3rd Thursday Tech Talk: Solaris Continuous Innovation: Modernize with the Risk (requires a free account).
- Oracle Magazine interviewed Marshall Choy for its November/December issue.

Looking Forward to 2018

2018 looks to be a busy year for Oracle Solaris. In the first half of 2018, we plan on releasing an open beta of Oracle Solaris 11.4. It's a great opportunity to "kick the tires" of the upcoming version of Oracle Solaris. Here is a short list of just a few of the new capabilities that might be coming in Oracle Solaris 11.4:

- Enhanced exploit mitigation with SPARC SSM
- Trusted Services via SMF
- Unified statistics gathering with a RESTful API and a graphical interface
- Compressed and resumable replication with ZFS
- Asynchronous ZFS dataset destroy
- Agent-less configuration and management via REST
- Configuration at scale via Puppet or Chef
- Scalable Zone Boot Environment Management
- Network configuration via SMF and AI

Then, later in 2018, we are planning on releasing Oracle Solaris 11.4 for General Availability. 2018 is going to be a great year for Oracle Solaris.

Disclaimer: The preceding is intended to outline our general product direction.
It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.  The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.


Continuous Delivery, Really?

Yes! You probably haven't noticed, but we've been doing continuous delivery in Oracle Solaris for quite some time now. Last January, Oracle Solaris moved from a "Big Bang" release model to a continuous delivery model. We also extended our promise of Oracle Solaris active development and innovation through at least 2034. Many of our customers don't seem to believe it because nothing seems to have changed from an update perspective. Well, that's because we already had a built-in model for delivering new capabilities and functionality, as well as new FOSS and updates, in our Support Repository Update (SRU) model. However, if you go back and look at what we've delivered in those SRUs, I think you'll be pleasantly surprised by the amount of new software we've delivered to you without any hiccups. In fact, we have had some customers complain that we're now delivering too much in the SRUs, and they just want them to contain bug fixes! We thought it might be helpful to highlight some of the new functionality and FOSS that we've delivered in SRUs so that you can see just how our new continuous delivery model is working. Below is a list of just some of what's been released or updated:

- It is now possible to access Oracle Databases from Python using the cx_Oracle python module. This is now included as the package library/python/cx_oracle . For more information, see the cx_Oracle documentation.
- The Java 8, Java 7, and Java 6 packages have been updated 9 times.
- Timezone has been updated 7 times.
- Wireshark has been updated 15 times.
- Explorer has been updated 4 times.
- BIND has been updated 7 times.
- OpenSSL has been updated 9 times.
- curl has been updated.
- DHCP has been updated twice.
- OpenSSH has been updated 5 times.
- Apache Tomcat has been updated 12 times.
- NTP has been updated 3 times.
- Enhancements and fixes to VLAN: Kernel Zones are now VLAN-aware; VLANs now support Dynamic MAC Addresses and VLAN IDs; a new VLAN resource type has been added to make anet a trunk port.
- Added MPxIO support for HBA cards being used by vHBA.
- MySQL 5.5 has been updated 4 times. MySQL 5.6 has been updated 5 times. MySQL 5.7 was added and updated twice.
- Oracle VM Server for SPARC has been updated 5 times.
- Firefox has been updated 9 times.
- Python bindings for the Oracle Database (cx_Oracle) have been updated to 5.2.1.
- tcsh has been updated.
- Added support for Large Pages for Kernel Zones.
- Added support for 32G Emulex FC adapters.
- ksh93 has been updated to the latest community version.
- ImageMagick has been updated 7 times.

(Wow! You've read this far!)

- The following components of the InfiniBand (IB) OS-bypass framework from the OpenFabrics Enterprise Distribution (OFED) have been updated to version 3.18: the library for userspace use of RDMA, the userspace driver for Mellanox ConnectX InfiniBand HCAs, the InfiniBand management datagram (MAD) library, the InfiniBand userspace management datagram (uMAD) library, the InfiniBand subnet manager libraries, the InfiniBand diagnostic programs, the InfiniBand fabric management and diagnostic tool set, the qperf command, and the general RDMA communication manager.
- Two-Factor Authentication CAC/PIV Smart Card support has been added.
- Virtual Ethernet datalink (veth) support has been added.
- ADI support for named pages has been added.
- PHP has been updated to 5.6.22.
- Apache Web Server has been updated 5 times.
- Thunderbird has been updated 6 times.
- Samba has been updated 5 times.
- Performance improvements on the x86 platform, with a reduction in cross calls while unmapping large contiguous virtual address ranges in the kernel.
- Exclusive-IP zones under the same Global Zone can now communicate with each other.
- Performance enhancements in the KOM (Kernel Object Manager) layer of the VM (Virtual Memory) subsystem.
- The Time Slider can now take ZFS snapshots for all file systems.
- bash has been updated once.
- Includes the pflog daemon, which allows for the capture and examination of Packet Filter (PF) network packets using wireshark .
- getent(1M) has been updated to support the databases rpc(4) , shadow(4) , netgroups, and initgroups.
- zsh has been updated once.
- Pixman has been updated once.
- LDAP support has been added to the Standalone LDAP Daemon, slapd.
- tcpdump has been updated 4 times.

(It just keeps going...)

- Perl 5.22 has been added. With the addition of Perl 5.22, the following modules are updated: CGI.pm, DBI, Net-SSLeay, and xml-parser.
- A new command line interface (CLI) to view DAX utilization.
- Mercurial has been updated.
- glib has been updated.
- zlib has been updated.
- libsndfile has been updated.
- screen has been updated.
- libarchive has been updated.
- HMP has been updated 3 times.
- Kerberos has been updated.
- libmicrohttpd has been updated.
- dnsmasq has been updated to 2.76.
- S.M.A.R.T. Monitoring Tools (smartmontools) support has been added to Oracle Solaris.
- Improvements to the SCSI DTrace script.

(... And going...)

- RabbitMQ has been updated.
- cryptography has been updated to 1.7.2.
- vim has been updated.
- gtar has been updated.
- NVIDIA driver has been updated 3 times.
- gcc 5.4.0 has been added.
- xz has been updated.
- libpng has been updated.
- php has been updated.
- unrar has been updated.
- pixz has been updated.
- irssi has been updated.
- SunVTS 8.2.1 is now available.
- Puppet has been updated.
- Facter has been updated.
- SQLite has been updated.
- Django has been updated.
- ruby 2.3 has been added.
- PHP 7.1 has been added.
- The Cairo graphics library has been updated.
- GnuTLS has been updated.
- Mailman has been updated.
- postfix has been updated.
- erlang has been updated.
- pcre has been updated.
- nmap has been updated.
- autoconf has been updated.

(OK. I bet you're just skimming.)

- DAX and VA Masking are now supported on Oracle Solaris Kernel Zones.
- The eXtended Reliable Connected (XRC) Transport Service is now supported on Oracle Solaris.
- snort has been updated twice.
- FreeType has been updated.
- Expat has been updated.
- libgcrypt has been updated.
- libgpg-error has been updated.
- The harfbuzz library is now available.
- gnupg has been updated.
- gpgme has been updated.
- libksba has been updated.
- libassuan has been updated.
- Oracle Instant Client 12.2 is now available.
- ICU has been updated.

(Whew! Reached the end!)

That's a massive amount of change over the life of Oracle Solaris 11.3. We're not stopping either. Oracle Solaris 11.3 will have more SRU updates with new software and enhancements, and coming in 2018, we are bringing you Oracle Solaris 11.4! So, keep an eye out, there's more coming your way.


Oracle Solaris Support in Emulation

I've been getting many questions about running Oracle Solaris in emulated environments. I want to answer those questions and give you unambiguous answers here so that you can make the best purchasing decisions possible.

Let's start with the Oracle Solaris license. If you purchase an Oracle system, Oracle grants you a license to use the version of Oracle Solaris that came pre-installed on that system. If the system didn't come with Oracle Solaris pre-installed, Oracle grants you the right to download the version that is currently available from Oracle E-Delivery at the time you receive the system. That license, however, is non-transferable. You'll notice that the license document says, "...limited license to use the Programs only as pre-installed..." This includes the case where you had to download it from Oracle E-Delivery. It then says, "All rights not expressly granted above are hereby reserved." This ties the license to the system, as the right to copy it to another system isn't specifically granted. So, if you have a copy of Oracle Solaris running on a system today, you cannot shut down that system, install Oracle Solaris on another system, and carry that license over to the new system. The license doesn't transfer. You should also have noticed that the license applies only to the original user (that is, the original purchaser) of the computer equipment. This means that if you purchase used equipment from some entity that isn't Oracle, you have no license to use Oracle Solaris on that system because you aren't the "original user."

So, how do you get a license to run Oracle Solaris on a new non-Oracle system or a used Oracle (Sun) system? You purchase Oracle Solaris support. For used Oracle SPARC and Oracle x86 systems, you can purchase Oracle Premier Support for Systems (after hardware re-certification), or Oracle Solaris Premier Support (or, for Oracle Solaris 10, Extended Support beginning February 1, 2018) for Operating Systems.
For non-Oracle hardware systems, you can purchase Oracle Solaris Premier Support for Non-Oracle Hardware. All of our support offerings include a license to use Oracle Solaris as well as access to the latest Oracle Solaris patches and updates for that version of Oracle Solaris. That will give you the right to use Oracle Solaris on the new/used system.

Now, let's talk about emulation. Oracle doesn't certify Oracle Solaris in emulated environments. You can read more about that in MOS Document ID 2341146.1. Here are the important details from that document on Oracle Solaris support in emulated environments. Oracle has not certified Oracle Solaris on emulated environments (neither SPARC nor x86 emulation). Oracle Support will assist customers running Oracle Solaris on these environments in the following manner:

- Oracle will only provide support for issues that either are known to occur on bare metal, or can be demonstrated not to be a result of running on an emulated environment.
- If a problem is a known Oracle issue, Oracle Support will recommend the appropriate solution on the native OS. If that solution does not work in the emulated environment, the customer will be referred to the emulation environment vendor for support. When the customer can demonstrate that the Oracle solution does not work when running on bare metal, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.
- If the problem is determined not to be a known Oracle issue, we will refer the customer to the emulation environment vendor for support. When the customer can demonstrate that the issue occurs when running on bare metal, Oracle will resume support, including logging a bug with Oracle Development for investigation if required.
So, if you encounter a problem running Oracle Solaris in an emulated environment, the extent of support will be to recommend any known fixes for what the problem appears to be, or, if the problem isn't a known one, to request that you recreate it on a bare-metal system. Finally, if you choose to run Oracle Solaris in an emulated environment on a non-Oracle server, Oracle Solaris for Non-Oracle Hardware is licensed per server, not by any configuration of the emulation software. The number of sockets in the server determines the pricing; all sockets (even unpopulated sockets) must be accounted for in determining the per-socket pricing for a server. To summarize: Oracle Solaris licenses aren't transferable; you can get a license to run Oracle Solaris on non-Oracle hardware or on used Sun/Oracle hardware by purchasing support; and if you attempt to run Oracle Solaris in an emulated environment, any problems not currently known will need to be recreated on a bare-metal system before we can diagnose them. Given all that, it's likely easier and cheaper (yes, I said cheaper) to continue running any workloads you are considering moving to an emulated environment by upgrading to a new Oracle SPARC S7, T8, M8 or M12 system. I hope this helps clear up your questions about Oracle Solaris licensing and support in an emulated environment.

Disclaimer: This document is for educational purposes only and provides guidelines regarding Oracle's policies in effect as of December 1st, 2017. It may not be incorporated into any contract and does not constitute a contract or a commitment to any specific terms. Policies and this document are subject to change without notice. This document may not be reproduced in any manner without the express written permission of Oracle Corporation.


Oracle Solaris 11.3 SRU 27 Released

Solaris 11.3 SRU27 is now available from My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support .

Included in this SRU are some features that users may find useful:

- A new daxinfo command, which allows users to determine the static configuration of the Data Analytics Accelerator (DAX) hardware available on a system.
- Oracle Database Programming Interface-C (ODPI-C) has been added to Oracle Solaris. ODPI-C is a wrapper around the Oracle Call Interface (OCI) that works transparently with different versions of the Oracle Instant Client libraries. ODPI-C eliminates the requirement for applications to set LD_LIBRARY_PATH prior to execution, and software that requires the Oracle Instant Client libraries will not require setting ORACLE_HOME prior to execution. For more information about ODPI-C, see the Oracle Database Programming Interface for Drivers and Applications project on GitHub.
- The GNOME accessibility toolkit libraries (library/desktop/atk) have been updated to version 2.24.0.
- The GNOME core text and font handling libraries (library/desktop/pango) have been updated to 1.40.4.
- OpenSSH has been updated to fix bugs relating to sftp.

The SRU also updates the following components, which have security fixes:

- Firefox has been updated to 52.5.0esr
- Thunderbird has been updated to 52.5.0
- MySQL has been updated to 5.5.58 and 5.6.38
- wget has been updated to 1.19.2

Full details of this SRU can be found in My Oracle Support Doc 2338177.1. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).


FOSS Support In Oracle Solaris

Introduction

Support of Free and Open Source Software (FOSS) in Oracle Solaris is described in knowledge article [ID 1400676.1], which can be found at My Oracle Support (MOS). This knowledge article is the most definitive source of information concerning FOSS support in Oracle Solaris and should be consulted by Oracle Solaris customers.

Details

FOSS packages selected for delivery with Oracle Solaris are ported and tested on each Solaris release, so users can install the packages they require without the need to compile the binaries from source code. The most important packages are integrated with Oracle Solaris technologies, for example SMF, ZFS, SAM-QFS, and DTrace. For such packages, Oracle Solaris engineers work with the respective FOSS community to bring in the Solaris-related changes. The intention is to ensure that the upstream versions of the FOSS components include such changes, so that Oracle Solaris FOSS components do not have a separate source base. The FOSS components delivered as part of Oracle Solaris fall into several areas:

- Core Operating System
- Core Operating System Services
- Core System Administration and Diagnostic Tools
- Development Tools
- Desktop Environment (including Gnome components)
- Backward Compatibility
- Dependency Components

General FOSS Support Principles

All FOSS components delivered as part of Oracle Solaris are supported as described below. A subset (see knowledge article [ID 1400676.1]) of the FOSS components delivered in Oracle Solaris is supported under Oracle Premier Support for Hardware and Operating Systems. For these FOSS components, Oracle will provide patches, updates, and fixes for security vulnerability issues in conjunction with the open source communities, using commercially reasonable efforts. The remaining FOSS components within Oracle Solaris are supported on a best-effort basis.
For these remaining FOSS components, Oracle will accept Service Requests from contracted customers and resolve any packaging and/or install issues, resolve any dependency issues between packages, investigate security vulnerability issues and issues raised by customers for available fixes from the respective communities, and provide updated stable versions when available. Oracle regularly provides feasible fixes for all known security vulnerability issues in FOSS components delivered as part of Oracle Solaris. The fixes are integrated either as a patch to an existing FOSS component version or as part of a new version of a FOSS component. To avoid diverging from any community, Oracle always follows the fixes provided by the communities. Oracle will not provide workarounds for FOSS components, fork FOSS components to suit customer requirements, nor commit to implementing feature enhancements. A new version of a FOSS component will be delivered in Oracle Solaris 10 in the form of a patch; see also Oracle Solaris 10 Support Explained. A new version of a FOSS component will be delivered in Oracle Solaris 11 either as part of a Support Repository Update (SRU) or in an update release. Once a FOSS component has been discontinued by its community, it may be removed from support. Such FOSS component(s) may also be removed (End of Feature) from Oracle Solaris or replaced by another FOSS component that provides similar functionality. Knowledge article [ID 1400676.1] contains a list of FOSS components that are no longer supported in Oracle Solaris.



osc-setcoremem: Simulation on SuperCluster Nodes

Running the osc-setcoremem simulator on live SuperCluster nodes is very similar to running it on non-SuperCluster nodes, with the exception of setting a shell variable, SSC_SCM_SIMULATE, to differentiate simulated actions from normal processing. Any SuperCluster configuration can be simulated on a live SuperCluster node, including its own configuration. Please check the first two blog posts in this series for related information on the osc-setcoremem simulator:

- Oracle SuperCluster: osc-setcoremem simulator
- osc-setcoremem: Simulation on Non-SuperCluster Nodes

All high-level steps are reproduced below for the sake of completeness.

1. Copy the osc-setcoremem v2.4 or later executable binary from any live SuperCluster environment onto the target SuperCluster SPARC system running Solaris 11.3 or later.

2. Generate the base configuration file in the original live SuperCluster environment that you wish to simulate elsewhere.

eg.,

# /opt/oracle.supercluster/bin/osc-setcoremem -g
[OK] simulator config file generated
location: /var/tmp/oscsetcorememcfg.txt

For the argument list, check the "SIMULATOR ARGUMENTS" section in the output of "osc-setcoremem -h|-help".

3. If you do not have access to the live SuperCluster environment that you wish to simulate, generate a base configuration file template and edit it manually to populate the base configuration of the SuperCluster environment to be simulated. The base configuration file template can be generated on any SPARC node running Solaris 11.3, and this step does not require root privileges.
eg., To generate a base configuration containing 4 domains, run:

% ./osc-setcoremem -g -dc 4
[OK] simulator config file generated
location: /var/tmp/oscsetcorememcfg.txt

% cat /var/tmp/oscsetcorememcfg.txt
#DOMAIN      ROOT    SERVICE  SOCKET   CORE     MEMORY      HCA
# NAME       DOMAIN  DOMAIN   COUNT    COUNT    GB          COUNT
#----------------------------------------------------------------------
primary      YES|NO  YES|NO   <COUNT>  <COUNT>  <CAPACITY>  1|2
ssccn-dom1   YES|NO  YES|NO   <COUNT>  <COUNT>  <CAPACITY>  1|2
ssccn-dom2   YES|NO  YES|NO   <COUNT>  <COUNT>  <CAPACITY>  1|2
ssccn-dom3   YES|NO  YES|NO   <COUNT>  <COUNT>  <CAPACITY>  1|2

Check the Guidelines page for the manual editing of the base configuration file.

4. Kick off the simulation with the help of the base configuration file populated in either of the last two steps. osc-setcoremem's non-interactive mode can also be activated by supplying non-interactive arguments.

Syntax: osc-setcoremem -p <platform> -c <config_file_path> [<non-interactive_arguments>]

It is not necessary to set the shell variable SSC_SCM_SIMULATE when starting a new simulation from scratch with the help of a base configuration file. The presence of simulator-specific options such as "-p" and "-c" eliminates the need for any hints about the simulation on a live SuperCluster system.
eg.,

% ./osc-setcoremem -p m8 -c ./oscsetcorememcfg.txt -type core -res 16/480:16/480:16/480

osc-setcoremem simulator (non-interactive)
v2.5 built on Oct 13 2017 11:33:52

Current Configuration: SuperCluster M8

+----------------------------------+-------+--------+-----------+--- MINIMUM ----+
|              DOMAIN              | CORES | MEM GB |   TYPE    | CORES | MEM GB |
+----------------------------------+-------+--------+-----------+-------+--------+
| primary                          |   32  |   960  | Dedicated |   2   |   32   |
| ssccn1-dom1                      |   32  |   960  | Dedicated |   2   |   32   |
| ssccn1-dom2                      |   32  |   960  | Dedicated |   2   |   32   |
| ssccn1-dom3                      |    2  |    32  |   Root    |   2   |   32   |
+----------------------------------+-------+--------+-----------+-------+--------+
| Parked Resources (Approx)        |   30  |   928  |    --     |  --   |   --   |
+----------------------------------+-------+--------+-----------+-------+--------+

[ INFO ] following domains will be ignored in this session.
Root Domains
------------
ssccn1-dom3

CPU Granularity Preference:
  1. Socket
  2. Core
In case of Socket granularity, proportional memory capacity is automatically selected for you.
Choose Socket or Core [S or C] C
...
...
DOMAIN REBOOT SUMMARY
The following domains would have rebooted on a live system:
  ssccn1-dom2
  ssccn1-dom1
  primary

POSSIBLE NEXT STEP
Continue the simulation with updated configuration
eg., SSC_SCM_SIMULATE=1 /osc-setcoremem [<option(s)>]
- OR -
Start with an existing or brand new base configuration
eg., /osc-setcoremem -p [T4|T5|M6|M7|M8] -c <path_to_config_file>

5. By this time, the osc-setcoremem simulator would have saved the changes made to the base configuration in the previous step. You can verify this by running the osc-setcoremem executable with no options or with the "-list" option. Ensure that the shell variable SSC_SCM_SIMULATE is set to any value before running the osc-setcoremem executable; without this variable, osc-setcoremem shows the configuration of the underlying SuperCluster that it is running on.

eg., Changes shown below.
% SSC_SCM_SIMULATE=1 ./osc-setcoremem

osc-setcoremem simulator
v2.5 built on Oct 13 2017 11:33:52

Current Configuration: SuperCluster M8

+----------------------------------+-------+--------+-----------+--- MINIMUM ----+
| DOMAIN                           | CORES | MEM GB |   TYPE    | CORES | MEM GB |
+----------------------------------+-------+--------+-----------+-------+--------+
| primary                          |  16   |  480   | Dedicated |   2   |   32   |
| ssccn1-dom1                      |  16   |  480   | Dedicated |   2   |   32   |
| ssccn1-dom2                      |  16   |  480   | Dedicated |   2   |   32   |
| ssccn1-dom3                      |   2   |   32   |   Root    |   2   |   32   |
+----------------------------------+-------+--------+-----------+-------+--------+
| Parked Resources (Approx)        |  78   |  2368  |    --     |  --   |   --   |
+----------------------------------+-------+--------+-----------+-------+--------+

[ INFO ] following domains will be ignored in this session.
        Root Domains
        ------------
        ssccn1-dom3
...
...

Two options to choose from at this point:

Continue the simulation using the updated configuration. Simply running the osc-setcoremem executable in the presence of the SSC_SCM_SIMULATE shell variable, without any arguments or with optional non-interactive arguments, continues the simulation. This lets you simulate the move from one arbitrary configuration state (cores & memory assigned to different domains) to another.

Syntax: SSC_SCM_SIMULATE=1 osc-setcoremem [<non-interactive_arguments>]

        -OR-

Start a brand new simulation using any base configuration file. This is nothing but step #4 above. Here we assume that the required base configuration file was populated and ready. Be aware that this step wipes the current modified core and memory [virtual] configuration clean and starts again with the base configuration that was specified in the configuration file supplied to the "-c" option.

Repeat steps 2-6 to simulate different SuperCluster configurations.

A complete example can be found in the Oracle SuperCluster M7/M8 Administration Guide at Run a Simulation on a SuperCluster Node.
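The live-system/simulator switch described above is just a check for the presence of the SSC_SCM_SIMULATE shell variable. A minimal, portable sketch of that gating pattern; show_config here is a hypothetical stand-in for the real osc-setcoremem logic, which is not reproduced:

```shell
#!/bin/sh
# Hypothetical stand-in for osc-setcoremem's mode selection:
# the tool only cares that SSC_SCM_SIMULATE is set, not what its value is.
show_config() {
    if [ -n "${SSC_SCM_SIMULATE}" ]; then
        echo "simulator mode: reading saved virtual configuration"
    else
        echo "live mode: reading configuration of this SuperCluster"
    fi
}

show_config                  # live mode: the variable is unset
SSC_SCM_SIMULATE=1
show_config                  # simulator mode: the variable is now set
```

Any non-empty value works, which matches the guidance above to "set the shell variable SSC_SCM_SIMULATE to any value".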


Oracle Solaris Cluster

Standalone Oracle Solaris Cluster manager user interface

What is it? The Oracle Solaris Cluster manager user interface (OSCM UI) can be installed as a standalone package without installing the Oracle Solaris Cluster (OSC) software. This feature has been available since OSC 4.3.

How to install the OSCM UI? The following two packages are required: ha-cluster/system/manager and ha-cluster/system/manager-glassfish3. The former package has a dependency on the latter, so installing just ha-cluster/system/manager installs both packages:

# /usr/bin/pkg install ha-cluster/system/manager

Once the packages are installed, the following two services should be online by default:

# /usr/bin/svcs -a | grep manager
online Oct_10 svc:/system/cluster/manager-glassfish3:default
online Oct_10 svc:/system/cluster/manager:default

If the services are not online, please see the troubleshooting steps at the end of this blog.

How to use the OSCM UI to log in to different clusters? The OSCM UI is backward compatible, i.e. an OSCM UI of version 4.3 can connect to OSC 4.2. However, some features might not be available for the earlier version; in such cases, the OSC version would need to be updated to use the feature.

How to launch the OSCM UI? Once the services mentioned in the install section above are online, the OSCM UI can be launched in any browser using the URL https://<ip_or_name_of_installed_machine>:8998/

How to log in to any OSC node? In the login page, after launching the OSCM UI, do the following steps:
1. Enter the cluster node to log in to. OSC 4.3.4 and later releases also allow login to a zone cluster node; however, in that case the OSCM UI should also be 4.3.4 or higher.
2. Enter the user name to log in with. OSC 4.3.6 and later releases also support RBAC login; however, in that case the OSCM UI should also be 4.3.6 or higher.

NOTE: In the case of a global cluster, it is also possible to configure the cluster using the OSCM UI.
Troubleshooting

The services are not online: If either or both of the services /system/cluster/manager-glassfish3 and /system/cluster/manager are not online, restart the services:

# /usr/sbin/svcadm restart <service_name>

OR

# /usr/sbin/svcadm disable <service_name>
# /usr/sbin/svcadm enable <service_name>

NOTE: The service /system/cluster/manager-glassfish3 should be online before starting the service /system/cluster/manager.

The OSCM UI browser URL is not loading:
1. Ensure that the services are running as mentioned above.
2. Ensure that cacao is running:

# /usr/sbin/cacaoadm status
default instance is ENABLED at system startup.
Smf monitoring process: 1073 1074
Uptime: 2 day(s), 14:36


DevOps on Oracle Solaris Like A Pro

As you may know we also release technical blogs about Oracle Solaris on the Observatory blog, and on this blog we recently wrote a series of posts on the DevOps Hands on Lab we ran at Oracle OpenWorld this year. One of the requests I got after these posts was for a higher-level overview of this DevOps Hands on Lab. So here it is.

Introduction

In general this Hands on Lab was created to show how you can set up a typical DevOps toolchain on (a set of) Oracle Solaris systems. In practice the toolchain will be more complex and probably a combination of different types of systems and operating systems. In this example we've chosen to do everything on Oracle Solaris, in part to show that this is possible (as this is sometimes not realized), and to show how you can leverage Oracle Solaris features like Solaris Zones and SMF to your advantage when doing so. In this particular case we also used Oracle WebLogic Server to show how it is installed on Oracle Solaris and how to install and use the Oracle Maven plugin for WebLogic Server, as this is also a very handy tool and works pretty much the same on all platforms.

The Workflow

The DevOps toolchain we want to create essentially looks like this:

Note: The whole setup here is inside a single VirtualBox instance with Solaris Zones inside it, because in the Hands on Lab every participant gets a separate laptop and this way we can simulate the full setup within a single system.

The general flow goes from left to right with the following main steps: A developer writes their code, probably on their laptop or workstation in their favorite development tool; in this case we are using NetBeans, but they can essentially use any tool they like. The language in this case is Java, because we're going to be using WebLogic as the application server, but the same flow holds for other languages too. Once the developer is ready to test their code, they commit their changes to their source code management system, which in our case is Git.
In Git you first commit to the local copy of your code repository and then you push your changes to the main repo (which is often called "origin"). This main repository is where other developers will also be pushing their changes, and it is often located on a central server (or even in a public location like GitHub). Once the new version is in the central repository, it will signal this to a build manager; in our case we've chosen to use Jenkins as the build manager, as it's extremely popular and has many extra modules for all kinds of use cases. In this case Jenkins is installed on the same "server" (personified by a Solaris Zone) as the main Git repository, but this is by no means necessary; we're running in the same environment to save space. Jenkins has jobs defined by the developer where they can configure what needs to happen once the code is changed. According to the job configuration Jenkins will then build and run the first tests on the application, but first it creates a clone of the Git repository into a work environment so it can run a completely fresh build. Now the code is built and tested; in our case we're using Maven as the Java build tool, and it will put the resulting code in the same work environment. For the initial testing of the resulting code we can use a tool like JUnit; this is done with test code that has come from the developer inside the same repository. In this simple example we don't use any test code. Once the build is successful, Jenkins tells Maven to connect to the application server on the deployment server and push the new version to it so you can do further testing. The term "deployment server" may be confusing; at this stage it's more a test server simulating being a deployment server. At this point the new version of the application is up and running, which means it can for example be connected to other systems in a larger test of all the different application pieces, or just used for individual function testing.
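The "signal the build manager" step above can be wired in several ways: Jenkins can poll the repository on a schedule, or the repository can actively notify Jenkins on each push. As a hedged illustration (not necessarily how the lab wired it), a Git post-receive hook on the origin server could ping Jenkins' remote build trigger. JENKINS_URL and the job name "mywebapp" are assumptions here; to keep the sketch self-contained, it prints what it would do when no Jenkins URL is configured:

```shell
#!/bin/sh
# Hypothetical hooks/post-receive script in the bare origin repository.
# Jenkins exposes a "build now" endpoint per job at POST /job/<name>/build;
# the job name "mywebapp" and JENKINS_URL are illustrative only.
notify_jenkins() {
    if [ -z "${JENKINS_URL}" ]; then
        # No Jenkins configured: dry run, report instead of sending.
        echo "would POST to <jenkins>/job/mywebapp/build"
    else
        curl -s -X POST "${JENKINS_URL}/job/mywebapp/build"
    fi
}

notify_jenkins
```

Polling is simpler to set up (no hook to install), while a push hook reacts faster and avoids needless polling traffic; either gets a new commit into the build pipeline automatically.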
The intent of this whole chain is to allow a developer to push their code into the repository and have everything after that happen automatically, without having to do anything by hand or submit tickets to have someone else clear space on a system, set it up, and install all the bits necessary. This is all predefined and rolled out automatically, saving a lot of time and engineering cycles both on the developer side as well as on the operations side. The other benefit is that the developers can very easily try out different versions quickly after each other, this way quickly iterating to a better solution than if things need to be set up by hand each time. The reality of course is slightly more complex than this simple case. This workflow would be used maybe for initial testing, and once it has successfully run, this new version can be promoted to a different repository where it will be built (possibly by a different Jenkins server) and then deployed to, for example, systems for stress testing, or pre-production systems for testing in a production-like setting. And then finally, once it's deemed ready, it can be promoted and deployed to the production systems. And all of this can be made easier with this type of automation.
We ship JUnit, all the GNU tools, Puppet, and many other tools and packages. Maven and Jenkins, mentioned in the example above, we don't ship because these constantly update themselves anyway and folks tend to be picky about what version they want to use. We actually use many of these tools in our own development of new Solaris versions. For example we use Mercurial and Git, as you may expect, for all the different types of code we either write ourselves or bring in from the FOSS communities. But we also heavily use a tool like Jenkins to automate our build and test system. We rely in the same way on these tools to make development simpler, with fewer mistakes, and quicker. Finally a short word on using the tools in Oracle Solaris for a DevOps environment. In the Lab we only use Solaris Zones and SMF: the first to create isolated OS containers that can function as secure isolated environments with their own resources and namespaces, and the second to turn Jenkins into a Solaris service that you can enable and disable, and that will automatically restart if it for some reason fails or is accidentally stopped. But there are many others you can use too. For example you can use Boot Environments to easily move between different configurations or patch versions. You can use the built-in Network Virtualization to create separate networks, each isolated from the others, that can even span different systems. You can use Unified Archives to create an archive of a system, domain, or zone in a certain state, which you can easily roll out multiple times across one or more systems/domains/zones to fit growing demand. And there are many others. As stated above, the in-depth technical blogs can be found on the Observatory blog, and they are spread over Part 1, Part 2, Part 3, and Part 4. Note that even though the workflow goes from left to right, the blogs build the toolchain from right to left, as it's easier to verify each step is working that way.
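To make the SMF point above concrete, a minimal service manifest that would let you enable, disable, and auto-restart Jenkins with svcadm might look roughly like this. This is a hedged sketch, not the lab's actual manifest: the FMRI, paths, and war location are assumptions.

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- Hypothetical SMF manifest: FMRI and paths are illustrative only. -->
<service_bundle type="manifest" name="jenkins">
  <service name="application/jenkins" type="service" version="1">
    <create_default_instance enabled="false"/>
    <single_instance/>
    <!-- "child" duration: SMF treats the foreground java process itself
         as the service and restarts it if it dies. -->
    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="child"/>
    </property_group>
    <exec_method type="method" name="start"
        exec="/usr/bin/java -jar /opt/jenkins/jenkins.war"
        timeout_seconds="60"/>
    <exec_method type="method" name="stop"
        exec=":kill" timeout_seconds="60"/>
  </service>
</service_bundle>
```

After importing such a manifest with svccfg, `svcadm enable jenkins` would start it and SMF would restart it automatically on failure, which is the behavior described above.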


Oracle Solaris 10 Support Explained

With Oracle Solaris 10 Premier Support ending on January 31, 2018, we've been getting questions about the different support options for Oracle Solaris 10 going forward. I'm going to attempt to answer all your questions here.

Types of Oracle Solaris 10 Support

After January 31, there will be two types of support available for Oracle Solaris 10: Extended Support and Sustaining Support. Here is what each of those gives you, or doesn't give you, according to our Oracle Lifetime Support Policy: Oracle and Sun System Software document (edited to remove references to Oracle Linux for clarity):

Extended Support

For selected Oracle Solaris operating system software releases, Oracle may extend the technical support period by offering Extended Support for a three year period. With Extended Support, you receive access to technical experts, backed by industry leading online tools and knowledgebase resources. You benefit from:
- Major product and technology releases for Oracle Solaris operating system software
- Program updates, fixes, security patches, security alerts, and critical patch updates for Oracle Solaris operating system software
- Upgrade tools/scripts (when offered)
- General maintenance releases, selected functionality releases and documentation updates (when offered)
- Access to My Oracle Support (24x7 web-based customer support systems), including the ability to log service requests online
- Assistance with service requests 24 hours per day, 7 days a week
- Right to use Oracle Enterprise Manager Ops Center
- Non-technical customer service during normal business hours

Extended Support does not include:
- Certification with most new third-party products
- Hardware certification

Sustaining Support

For selected products, Oracle may offer Sustaining Support for an indefinite period. Sustaining Support applies after Extended Support expires or, should you not purchase Extended Support, immediately after Premier expires.
With Sustaining Support, you receive continued access to technical experts, backed by industry leading online tools and knowledgebase resources. You benefit from:
- Program updates, patches, fixes, security patches and security alerts created during the Oracle Premier Support period
- General maintenance releases, selected functionality releases and documentation updates created during the Oracle Premier Support period
- Upgrade tools/scripts created during the Oracle Premier Support period
- Assistance with service requests, on a commercially reasonable basis, 24 hours per day, 7 days a week
- Access to My Oracle Support (24x7 web-based customer support systems), including the ability to log service requests online
- Non-technical customer service during normal business hours

Sustaining Support does not include:
- New program updates, patches, fixes, security patches, security alerts, critical patch updates, general maintenance releases, selected functionality releases, documentation updates or upgrade tools
- New critical patch updates for Oracle Solaris operating system software
- New upgrade tools
- Certification with most new third-party products
- 24 hour commitment and response guidelines for Severity 1 service requests
- Previously released fixes or updates that Oracle no longer supports
- Hardware certification

What does that all mean? Well, let's start with Extended Support. Oracle Solaris 10 Extended Support will continue to supply you with fixes, security patches, and security alerts for Oracle Solaris 10, similar to Oracle Solaris 10 Premier Support, for 3 years. It doesn't, however, include new hardware support. After 3 years, Extended Support ends, and your only option is Sustaining Support. Sustaining Support includes all fixes up to the point you entered Sustaining Support (more on this in a minute). It doesn't include any new fixes beyond the date you entered Sustaining Support, nor does it include new hardware support.
Oracle Solaris Licensing

Now, an important part of support is how it interacts with Oracle Solaris licensing. When you purchase Oracle x86 or Oracle SPARC servers, you are granted a non-exclusive, royalty free, non-assignable limited license for Oracle Solaris for the exact version of Oracle Solaris that shipped on that machine or, in the case of Oracle SPARC M7 and Oracle SPARC M8 systems, that was available at the time of delivery. You can read the license here, if you'd like. It is Oracle Solaris Support that gives you access to additional patches and updates. Your Oracle Solaris Support contract also has a license associated with it. That license is only valid as long as the support contract is maintained. When support lapses, the license becomes invalid, and you are no longer entitled to any new fixes, patches, updates, etc. If you purchase Oracle Solaris Premier Support for Non-Oracle x86 Hardware, that support contract grants you a license for Oracle Solaris. That license is valid while Oracle Solaris Premier Support for Non-Oracle x86 Hardware is maintained. If you stop support, you are no longer entitled to any new fixes, patches, updates, etc.

What does this mean on February 1?

Oracle Solaris 10 Premier Support ends on January 31st, 2018. You will have to choose 1 of 4 options before then:
1. Continue your Oracle Premier Support for Systems or Oracle Premier Support for Software subscription and migrate to Oracle Solaris 11 free of charge.
2. Transition to Oracle Solaris 10 Extended Support and be entitled to continue to receive updates as you already have been.
3. Transition to Oracle Solaris 10 Sustaining Support. You will be entitled to patches that were available as of January 31, 2018. So, even if your systems aren't quite up to date, you will be able to update them to those fixes in the future.
4. Discontinue Oracle Solaris 10 Support.

I hope this helped explain what your options are beginning on February 1, 2018.
Please ask any other questions you may have, and reach out to your Oracle Sales Team so they can work with you on the next steps.


Technologies

DevOps Hands on Lab - Installing Netbeans on Oracle Solaris

Here's Part 4, the final part of our series on this year's Hands on Lab on setting up a simple DevOps toolchain on Oracle Solaris. In the previous Part 1, Part 2, and Part 3 we respectively showed how to install and set up Oracle WebLogic Server, Maven, the Oracle Maven WebLogic plugin, Git, and Jenkins in Solaris Zones to create the following toolchain: The only thing missing from our install is NetBeans, or some Integrated Development Environment (IDE) like it. Generally this would actually be on a developer's laptop or workstation so they can work when and where they want, but for this example we'll show how to install and connect NetBeans in the Global Zone of the VirtualBox guest we've been using up to now. This way things are nice and contained for our example, even though it's not what you'd normally do. This blog assumes you're working in the environment we've already set up in the previous parts; if not, it should be fairly easy to understand how to use this in a generic case. Of course you could use another IDE if you favor that more. The steps up to now have been focused on creating the automated toolchain on the build server and the deployment server, and we've used an archetype as a sample application to test if things are working correctly. Now we only need to install the IDE, install Git, and then pull in the Git repository with a clone operation. This will allow us to make some modifications to the code in the application and then test the full toolchain by committing the changes and pushing them up to the origin repository.

1. Installing NetBeans

NetBeans is a very popular open source IDE that supports development in a whole range of different languages, but its origins lie in development for Java applications, and it is itself written in Java. So like Jenkins and Maven this makes NetBeans very portable and easy to run on practically any operating system.
The only thing we need to do is first install Java 8, and then we can download and run it. Because we will need Git too, we can install both Java 8 and Git in one action; as user root in the Global Zone run:

root@solaris:~# pkg install jdk-8 git

As a side note, if you've installed Oracle Solaris but are not running the GNOME desktop environment yet, it's very easy to install: install the meta package solaris-desktop and reboot the OS.

root@solaris:~# pkg install solaris-desktop

At this point we can choose to either run NetBeans as root or as the user you created when you first installed Oracle Solaris in the first part of this series. Creating that user was necessary because by default Oracle Solaris doesn't allow you to log in as root; you can only assume the root role (with su -) once you're logged in. In my case in VirtualBox I created a user demo and I'll use this user to install and run NetBeans. Once you're the user you want, navigate to the NetBeans download page and select the Java EE version, making sure that you choose the OS independent version (on the top right of the page). You can use the Firefox browser to download it, but in my case I'm getting the zip file with wget:

demo@solaris:~$ cd Downloads/
demo@solaris:~/Downloads$ wget http://download.netbeans.org/netbeans/8.2/final/zip/netbeans-8.2-201609300101-javaee.zip
demo@solaris:~/Downloads$ unzip -q netbeans-8.2-201609300101-javaee.zip

At this point you can run NetBeans:

demo@solaris:~/Downloads$ ./netbeans/bin/netbeans

2. Cloning the Repository

Once NetBeans has started we need to create a clone of the Git repository. We do this by going to Team -> Git -> Clone: Which will ask the location of the repository. The location of the Git repository is in the j-m_zone, so use the IP address of the j-m_zone, which in my case is 10.0.2.17. I fill in "ssh://10.0.2.17/export/home/git-user/mywebapp.git" as the URL, "git-user" as the username, and the corresponding password.
Note you can choose to have NetBeans save the password if you want. Then click "Next>": Then click "Next>" again: And click "Finish" to finalize the clone: It will pop up a window to ask you if you want to open the project to work with it, so click "Open Project": Because we don't have all the specific WebLogic Maven plugins installed here yet, NetBeans will find this issue and flag it with a pop-up window. We're not going to be building anything here, only editing the code, so click "Close" to ignore it:

3. Making a Change

Now the left panel should show the "basicWebapp" project. Expand the project, locate the "index.xhtml" file, and double click on it. This should open it in the right hand panel: Find the heading again and make another change to it, or change some other part of the text. Once you've done this you're ready to commit the change and push it to the origin. First click Team -> Commit: This will bring up a commit screen. Fill in the commit message, something like "Testing Netbeans", and click "Commit": This will complain that you don't have a user set up yet. For now we can just use the default and click "Yes": Now that the change is committed to the local copy of the Git repository, we can push the change to the origin repository. We do this by going to Team -> Remote -> Push… : Click on "Next >": Click on "Next >" again: And finally on "Finish": Now that the code changes have been pushed, Jenkins will wake up like before and within a few seconds should start building the application again and then push it to WebLogic. Go back to the browser to see how Jenkins runs, and then verify that the newly made changes automatically appear on your webpage. At this point the full DevOps toolchain is complete and verified to work.
Of course this is a fairly simple setup; we didn't add any testing framework like JUnit, nor did we do any work on the Maven POM file to differentiate between types of deployment phases, but now that the base framework is in place you can expand on this like on any other framework. Finally, the beauty of this way of working is that where the application is finally pushed and run can be fully abstracted from the developer. This means they can focus on creating new code and don't have to worry about platform specific things, and the operators can focus on choosing the best platform for the application, be it x86 or SPARC, be it Windows, Linux, or Solaris. This ends this series; check back in the future for more content like this.


Oracle Solaris 11

Security Compliance Reporting for Oracle Solaris 11

During the Oracle Solaris 11 launch (November 2011) one of the questions I was asked from the audience was from a retail customer asking for documentation on how to configure Oracle Solaris to pass a PCI-DSS audit. At that time we didn't have anything beyond saying that Oracle Solaris was secure by default and it was no longer necessary to run the Solaris Security Toolkit to get there. Since then we have produced a PCI-DSS white paper with Coalfire (a PCI-DSS QSA) and we have invested a significant amount of work in building a new Compliance Framework and making compliance a "lifestyle" feature in Solaris core development. We delivered OpenSCAP in Oracle Solaris 11.1, since SCAP is the foundation language of how we provide compliance reporting. In Oracle Solaris 11.2 we added a new command, compliance(1M), for running system assessments against security/compliance benchmarks and for generating HTML reports from those. A lot of work went into generating the content as well as the framework, and we have been continuously updating and adding the check rules. When we introduced the compliance framework in Oracle Solaris 11.2 there was no easy way to customise (tailor) the policies to suit individual machine or site deployment needs. While it was certainly possible for users familiar with the XCCDF/OVAL policy language, it wasn't easy to do in a way that preserved your customisations while still allowing access to new and updated policy rules when the system was updated. To address this, a new subcommand for compliance(1M) was added in Oracle Solaris 11.3 that provides for this customisation. The initial release of tailoring in Oracle Solaris 11.3 allows the enabling and disabling of individual checks; later we added variable level customisation.
There is more functionality for compliance planned for future update releases, but for now here is a reminder of how easy it is to get started with compliance reporting:

# pkg install security/compliance
# compliance assess
# compliance report

That will give us an HTML report that we can then view. Since we didn't give any compliance benchmark name it defaults to 'Solaris Baseline', so now let's install and run the PCI-DSS benchmark. The 'security/compliance' package has a group dependency on 'security/compliance/benchmark/pci-dss', so it will be installed already, but if you don't want it you can remove that benchmark and keep the others and the infrastructure.

# compliance assess -b pci-dss
Assessment will be named 'pci-dss.Solaris_PCI-DSS.2014-04-14,16:39'
# compliance report -a pci-dss.Solaris_PCI-DSS.2014-04-14,16:39

That will give you the path to an HTML report file that you can view in your browser. The full report has a summary section, with the overall score at the top, for example: This is followed by the details of each of the checks and their pass/fail status: The report is interactive and we can filter it to show only the failing results, but we can also do that when we generate the report from the CLI like this:

# compliance report -s fail -a pci-dss.Solaris_PCI-DSS.2014-04-14,16:39

Now let's say we want to turn off some of those checks that are failing because they are not relevant to our deployment. The simplest way of using 'compliance tailor' is to use the interactive pick tool:

# compliance tailor -t mysite
*** compliance tailor: No existing tailoring 'mysite', initializing
tailoring:mysite> pick

The above shows the interactive mode, where pressing 'x' or 'space' allows us to enable or disable an individual test. Note also that since the Oracle Solaris 11.2 release all the tests have been renumbered and now have unique rule identifiers that are stable across releases.
The same rule number always refers to the same test in all of the security benchmark policy files delivered with the operating system. When exiting from the interactive pick mode, just type 'commit' to write this out as a locally installed tailoring; that will create an XCCDF tailoring file under /var/share/compliance/tailorings. The XML tailoring files should not be copied across update releases; instead use the export functionality. The 'export' action of the tailoring subcommand allows you to save off your customisations for importing into a different system; this works similarly to zonecfg(1M) export.

$ compliance tailor -t mysite export | tee /tmp/mysite.out
set tailoring=mysite
# version=2015-06-29T14:16:34.000+00:00
set benchmark=solaris
set profile=Baseline
# OSC-16005: All local filesystems are ZFS
exclude OSC-16005
# OSC-15000: Find and list files with extended attributes
include OSC-15000
# OSC-35000: /etc/motd and /etc/issue contain appropriate policy text
include OSC-35000

The saved command file can then be used for input redirection to create the same tailoring on another system. You should assume that over time new checks will be added, so you will want to periodically, and certainly at update release boundaries, review the list of included and excluded checks. To run an assessment of the system using a tailoring we simply need to do this:

# compliance assess -t mysite
Assessment will be named 'mysite.2015-06-29,15:22'
Title   Package integrity is verified
Rule    OSC-54005
Result  PASS
...

We have more exciting functionality planned for the future. When the Oracle Solaris 11.4 beta is available I'll be describing some of the new compliance features we have been working on, including remote assessment, and graphing historical assessments in the new Web Dashboard.
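For reference, the tailoring file that 'commit' writes under /var/share/compliance/tailorings follows the XCCDF tailoring format from the SCAP standard. A rough, hypothetical sketch of its shape — the ids, namespaces, and profile names Solaris actually generates may differ; only the select elements with selected true/false correspond directly to the include/exclude lines shown above:

```xml
<!-- Illustrative XCCDF tailoring (SCAP 1.2 style); ids are assumptions. -->
<xccdf:Tailoring xmlns:xccdf="http://checklists.nist.gov/xccdf/1.2"
                 id="xccdf_com.oracle_tailoring_mysite">
  <xccdf:version time="2015-06-29T14:16:34">1</xccdf:version>
  <xccdf:Profile id="xccdf_com.oracle_profile_mysite" extends="Baseline">
    <xccdf:title>mysite</xccdf:title>
    <!-- OSC-16005: All local filesystems are ZFS (excluded) -->
    <xccdf:select idref="OSC-16005" selected="false"/>
    <!-- OSC-15000: Find and list files with extended attributes -->
    <xccdf:select idref="OSC-15000" selected="true"/>
  </xccdf:Profile>
</xccdf:Tailoring>
```

This is why the exported command file, rather than the XML, is the recommended transport between systems: the command form is stable across releases while the generated XML may change.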


Understanding Oracle Solaris Constraint Packages

Constraint packages are a mechanism that limits the versions of the software that can be installed on a system. An important example of this is the Oracle Solaris 11.3 constraint package, simply called 'pkg:/release/constraint/solaris-11.3'. When this constraint package is installed it prevents 'pkg update' from moving the system onto the next update release, i.e. Solaris 11.4, while still allowing installation of any SRUs for the current update release. This is important for when the beta or GA of Solaris 11.4 is present in the repository but you need a machine to stay on the 11.3 SRU release. There will be a constraint package for each update release going forward; this allows you to control when you move to the update release while still allowing 'pkg update' for either SRUs or your own or 3rd party packages in your repositories. Additionally, such constraints can be used to ensure compatible versions of software packages are installed. A simple example: you need to ensure that the installed libc is compatible with the kernel; if it is not, lots of horrible things will occur, and most of these errors will show up front. More insidious is the installation of a library to support some application; you hope that the version of the library is the one that the application has been verified with, and if not then strange and hard to diagnose things may occur. Therefore we need to be able to ensure that when software is installed the correct version is installed, and additionally that any dependencies are correctly selected, now and in the future. The packaging system allows us to define exactly the versions of the various packages that can be installed by using a variety of dependencies within the packages. The packaging system has a rich set of dependency types; these are all documented in pkg(5). We will have a look at a couple of them.
'require' — this says that a package is needed and, if a version is specified, the version to be installed will be equal to or greater than that version. Example: pkg:/constA has

depend fmri=pkg:/appB type=require
depend fmri=pkg:/appC@1.1 type=require

This means that when 'constA' is installed, 'appB' will be installed. The version of 'appB' will be the highest that will install on the system. 'appC' will also be installed, with a minimum version of 1.1. If 'appC' was available only at version 1.0 then 'constA' would fail to install; if 'appC' versions 1.1 and 1.2 were available then version 1.2 would be installed.

'incorporate' — this says that if a package is to be installed, the version selected will be the version that satisfies this dependency. Note that by itself this dependency DOES NOT cause a package to be installed; it only specifies the version to use if the package is installed. Example: pkg:/constA has

depend fmri=pkg:/appB type=require
depend fmri=pkg:/appB@1.1 type=incorporate
depend fmri=pkg:/appC@1.1 type=incorporate

When 'constA' is installed, 'appB' will be installed (due to the 'require' dependency), and it will be installed at version 1.1 due to the 'incorporate' dependency. The package 'appC' will not be installed because there is no 'require' dependency; however, if it is installed later, the version installed will be 1.1.

One interesting, and important, aspect of the versions in the above dependencies is that it is not required to specify the full version of the package. Using the last example, suppose the following 'appC' versions were available:

appC@1.1
appC@1.1.1
appC@1.2

The first two satisfy the 'incorporate' dependency, and ultimately appC@1.1.1 will be installed, as the packaging system assumes that missing version components are 0. So what? This allows us to build what we call constraint packages.
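The version-prefix behaviour described above can be sketched in a few lines of Python. This is only an illustration of the matching rule (missing components treated as 0, constraint acting as an exact prefix), not the real IPS dependency solver:

```python
# Sketch of how an 'incorporate' dependency constrains a version when the
# constraint omits trailing components: the candidate must match the
# constraint exactly on every component the constraint specifies.

def parse(v):
    return [int(x) for x in v.split(".")]

def satisfies_incorporate(candidate, constraint):
    """True if candidate matches the constraint on every component
    the constraint specifies (illustrative pkg(5) 'incorporate' rule)."""
    cand, cons = parse(candidate), parse(constraint)
    # Pad with zeros, mirroring the packaging system's assumption that
    # missing version components are 0.
    cand += [0] * (len(cons) - len(cand))
    return cand[:len(cons)] == cons

available = ["1.1", "1.1.1", "1.2"]
ok = [v for v in available if satisfies_incorporate(v, "1.1")]
chosen = max(ok, key=parse)   # the solver prefers the highest allowed version
print(ok)      # ['1.1', '1.1.1']
print(chosen)  # 1.1.1
```

Run against the appC example above, this reproduces the behaviour described in the text: 1.1 and 1.1.1 satisfy the constraint, 1.2 does not, and 1.1.1 wins as the highest allowed version.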
A constraint package is one that delivers, as part of its content, a set of dependencies using the 'incorporate' dependency and optionally the 'require' one. Such a package ensures that only particular versions of packages can be installed on a system. Again, so what? One simple use case is to define a package that prevents updates from making major version jumps while still maintaining the capability to apply simple package updates. Example: initially, we have available:

appB@1.1.0
appB@2.0
appC@1.1.0
appC@2.0

On a system installed with appB@1.1.0 and appC@1.1.0, running 'pkg update' will update 'appB' and 'appC' to the 2.0 versions. This is potentially problematic because we have moved to new major versions, which could deliver unwanted or unexpected changes. Now suppose the vendor then released, as patch builds:

appB@1.1.1
appC@1.1.1

It would be nice to be able to update to those simply. We could specify the exact required versions on the command line:

pkg update appB@1.1.1 appC@1.1.1

This is awkward because it means we have to know the versions and type them in correctly, and from a maintenance perspective it simply makes life harder. This is where a constraint package can be used. Say we have 'constA' (version 1.0):

pkg:/constA@1.0 has
depend fmri=pkg:/appB@1.1 type=incorporate
depend fmri=pkg:/appC@1.1 type=incorporate

Installing this package onto the system means we can do a 'pkg update' and know that only the patch versions of 'appB' and 'appC' will be installed (assuming, of course, that the 3rd digit represents a patch release). As 'constA' has its own version space, we could decide at some point that constA@2.0 allows the update of 'appB' and 'appC' to the 2.0 versions.
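In manifest form such a constraint package is tiny: it delivers no files, only depend actions. A minimal sketch of what a constA@1.0 manifest might look like (the package names are the illustrative ones from the example above, and the summary attribute is added here for completeness):

```
set name=pkg.fmri value=pkg:/constA@1.0
set name=pkg.summary value="Constrain appB and appC to the 1.1 patch train"
depend fmri=pkg:/appB@1.1 type=incorporate
depend fmri=pkg:/appC@1.1 type=incorporate
```

Because the package carries only metadata, publishing a new version of it (constA@2.0 with 2.0 constraints, say) is all that is needed to "unlock" the next major versions for 'pkg update'.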
This means constA@2.0 can be released containing:

pkg:/constA@2.0 has
depend fmri=pkg:/appB@2.0 type=incorporate
depend fmri=pkg:/appC@2.0 type=incorporate

allowing the system to update 'constA', 'appB' and 'appC' using the simple:

pkg update

So how does Oracle Solaris use this capability? In the first instance, the 'entire' package can be considered a high-level constraint package, as it defines, ultimately, what versions of software can be installed: it is simply a list of 'incorporate' dependencies. As mentioned previously, Oracle Solaris, as of 11.3, provides a 'constraint' package that prevents the system updating to a later version of the operating system if one is available in the package repositories. Such a package will be delivered for each and every Solaris update (called solaris-11.4, solaris-11.5 and so on). A more advanced use of this capability is with 'Interim Diagnostic Relief' (IDR) packages. These packages deliver interim bug fixes. An IDR delivers:

A constraint package: idrXYZ, where XYZ is the number of the IDR.
A set of packages being patched, with a special version number.

Example: IDR2903 has, within its manifest:

set name=pkg.fmri value=pkg://solaris/idr2903@1
depend fmri=pkg:/system/header@0.5.11,5.11-0.175.3.15.0.3.0.2903.1 type=incorporate
depend fmri=pkg:/system/kernel@0.5.11,5.11-0.175.3.15.0.3.0.2903.1 type=incorporate

This means that when IDR2903 is installed on a system, if either of the named packages is installed then it must be at the version specified; and if pkg:/system/header is installed later, it will be installed at the version specified. In summary: constraint packages offer the ability to control the versions of the software on systems while preserving the ability to update the system simply.


Perspectives

Solaris 11.3: Changes to bundled software packages since GA

Looking over my blog recently, I realized I never did a post for the Solaris 11.3 GA release to list the bundled software updates, as I’d previously done for the Solaris 11.1, Solaris 11.2 beta, Solaris 11.2 GA, and Solaris 11.3 beta releases. But that was two years ago, so telling you now what we shipped then would be boring. Instead, I've put together a list of what's changed in the Solaris support repository since then.

When we shipped the 11.3 beta, James announced that Oracle Instant Client 12.1 packages were now in the IPS repo for building software that connects to Oracle databases. He's now added Oracle Instant Client 12.2 packages to the IPS repo as well, with the old packages renamed to allow installing both versions.

While there are plenty of updates and additions in this package list, there's also a good number of older packages removed, especially those which were no longer being supported by the upstream community. While the End of Features Notices page gives a heads up on what's coming out in some future release, the SRU readmes also have a section listing things scheduled to be obsoleted soon in the support repository. For instance, the SRU 26 Read Me announces that the removal of Tomcat 6.0 has been completed and warns that these packages will be removed soon:

MySQL 5.1
sox
beanshell
Apache httpd 2.2
GnuTLS 2.8.6
libgcrypt 1.5.3
libtiff 3.9.5
grails

Detailed list of changes

This table shows most of the changes to the bundled packages between the original Solaris 11.3 release and the latest Solaris 11.3 support repository update (SRU 26, aka 11.3.26, released November 15, 2017). These show the versions they were released with, and not later versions that may also be available via the FOSS Evaluation Packages for existing Solaris releases. As with previous posts, some entries were excluded for clarity, or to reduce noise and duplication.
All of the bundled packages which didn’t change the version number in their packaging info are not included, even if they had updates to fix bugs, security holes, or add support for new hardware or new features of Solaris.

| Package | Upstream | 11.3 | 11.3 SRU 26 |
|---|---|---|---|
| archiver/gnu-tar | GNU tar | 1.27.1 | 1.28 |
| archiver/unrar | RARLAB | 4.2.4 | 5.5.5 |
| benchmark/gtkperf | GtkPerf | 0.40 | not included |
| cloud/openstack/cinder | OpenStack | 0.2014.2.2 | 0.2015.1.2 |
| cloud/openstack/glance | OpenStack | 0.2014.2.2 | 0.2015.1.2 |
| cloud/openstack/heat | OpenStack | 0.2014.2.2 | 0.2015.1.2 |
| cloud/openstack/horizon | OpenStack | 0.2014.2.2 | 0.2015.1.2 |
| cloud/openstack/ironic | OpenStack | 0.2014.2.1 | 0.2015.1.2 |
| cloud/openstack/keystone | OpenStack | 0.2014.2.2 | 0.2015.1.2 |
| cloud/openstack/neutron | OpenStack | 0.2014.2.2 | 0.2015.1.2 |
| cloud/openstack/nova | OpenStack | 0.2014.2.2 | 0.2015.1.2 |
| cloud/openstack/swift | OpenStack | 2.2.2 | 2.3.0 |
| compress/p7zip | p7zip | 9.20.1 | 15.14.1 |
| compress/pixz | Dave Vasilevsky | 1.0 | 1.0.6 |
| compress/xz | xz | 5.0.1 | 5.2.3 |
| crypto/gnupg | GnuPG | 2.0.27 | 2.0.30 |
| database/mysql-55 | MySQL | 5.5.43 | 5.5.57 |
| database/mysql-56 | MySQL | 5.6.21 | 5.6.37 |
| database/mysql-57 | MySQL | not included | 5.7.17 |
| database/oracle/instantclient-122 | Oracle Instant Client | not included | 12.1.0.2.0 |
| database/sqlite-3 |  | 3.8.8.1 | 3.17.0 |
| desktop/project-management/openproj | OpenProj | 1.4 | not included |
| desktop/project-management/planner | GNOME Planner | 0.14.4 | not included |
| desktop/studio/jokosher | Jokosher | 0.11.5 | not included |
| desktop/system-monitor/gkrellm | GKrellM | 2.3.4 | not included |
| developer/astdev93 | AT&T Software Technology (AST) | 93.21.0.20110208 (93u 2011-02-08) | 93.21.1.20120801 (93u+ 2012-08-01) |
| developer/build/ant | Apache Ant | 1.9.3 | 1.9.6 |
| developer/build/autoconf-213 |  | not included | 2.13 |
| developer/documentation-tool/gtk-doc |  | 1.15 | 1.24 |
| developer/gcc-5 | GNU Compiler Collection | not included | 5.4.0 |
| developer/java/jdepend | jdepend | 2.9 | not included |
| developer/meld | meldmerge.org | 1.4.0 | 1.8.6 |
| developer/parser/byaccj | byaccj | 1.14 | not included |
| developer/versioning/git | Git | 1.7.9.2 | 2.7.4 |
| developer/versioning/mercurial | Mercurial | 3.4 | 4.1.3 |
| developer/yasm | Yasm | not included | 1.3.0 |
| diagnostic/mrtg | MRTG | 2.16.2 | 2.17.4 |
| diagnostic/nmap | Nmap | 6.25 | 7.50 |
| diagnostic/snort | Snort | 2.8.4.1 | 2.8.6.1 |
| diagnostic/tcpdump | tcpdump | 4.7.4 | 4.9.2 |
| diagnostic/wireshark | Wireshark | 1.12.7 | 2.4.2 |
| documentation/diveintopython | Dive Into Python | 5.4 | not included |
| editor/vim | vim.org | 7.3.600 | 8.0.95 |
| gnome/applet/contact-lookup-applet | Ross Burton | 0.17 | not included |
| image/imagemagick | ImageMagick | 6.8.3.5 | 6.9.9.19 |
| image/library/libpng | libpng | 1.4.11 | 1.4.20 |
| image/nvidia/cg-toolkit | Cg Toolkit | 3.0.15 | not included |
| image/scanner/xsane | Xsane | 0.997 | not included |
| image/viewer/gqview | GQview | 2.1.5 | not included |
| library/cacao | Common Agent Container | 2.4.3.0 | 2.4.7.0 |
| library/desktop/cairo | Cairo | 1.8.10 | 1.14.8 |
| library/desktop/gobject/gobject-introspection | GObject Introspection | 0.9.12 | 1.46.0 |
| library/desktop/harfbuzz | HarfBuzz | not included | 1.0.6 |
| library/expat | Expat | 2.1.0 | 2.2.1 |
| library/glib2 | GLib | 2.28.6 | 2.46.0 |
| library/gnutls-3 | GnuTLS | not included | 3.5.13 |
| library/graphics/pixman | X.Org Foundation | 0.29.2 | 0.34.0 |
| library/id3lib | id3lib | 3.8.3 | not included |
| library/java/java-gnome/java-libvte | java-gnome.sourceforge.net | 0.12.3 | not included |
| library/libarchive | libarchive | 3.0.4 | 3.3.1 |
| library/libffi | libffi | 3.0.9 | 3.2.1 |
| library/libidn | GNU IDN Library | 1.19 | 1.33 |
| library/libmicrohttpd | GNU libmicrohttpd | 0.9.37 | 0.9.52 |
| library/libsndfile | libsndfile | 1.0.23 | 1.0.28 |
| library/libssh2 | libssh2 | not included | 1.7.0 |
| library/libusb-1 | libusb | not included | 1.0.20 |
| library/libxml2 | xmlsoft.org | 2.9.2 | 2.9.4 |
| library/nspr | Mozilla NSPR | 4.10.7 | 4.14 |
| library/pcre | pcre.org | 8.37 | 8.41 |
| library/perl-5/CGI | CGI.pm | not included | 4.28 |
| library/perl-5/database | DBI | 1.58 | 1.636 |
| library/perl-5/net-ssleay | Net-SSLeay | 1.36 | 1.78 |
| library/perl-5/openscap | OpenSCAP | 1.2.3 | 1.2.6 |
| library/perl-5/xml-parser | CPAN | 2.36 | 2.44 |
| library/perl5/perl-tk | Perl-Tk | 804.31 | 804.33 |
| library/python/aioeventlet | aioeventlet | not included | 0.4 |
| library/python/argparse | Steven Bethard | 1.2.1 | not included |
| library/python/barbicanclient-27 | OpenStack | 3.0.1 | 3.0.3 |
| library/python/ceilometerclient-27 | OpenStack | 1.0.12 | 1.1.1 |
| library/python/cffi | python-cffi | 0.8.2 | 1.5.2 |
| library/python/cherrypy | CherryPy Team | 3.1.2 | 5.1.0 |
| library/python/cinderclient | OpenStack | 1.1.1 | 1.3.1 |
| library/python/cliff | Paul McGuire | 1.9.0 | 1.10.1 |
| library/python/cryptography | cryptography.io | 0.8.2 | 1.7.2 |
| library/python/cx_oracle |  | not included | 5.2.1 |
| library/python/django | Django | 1.4.22 | 1.11.1 |
| library/python/django_openstack_auth | OpenStack | 1.1.9 | 1.2.0 |
| library/python/dnspython | dnspython-dev | 1.11.1 | 1.12.0 |
| library/python/eventlet | Linden Lab | 0.15.2 | 0.17.4 |
| library/python/fixtures |  | not included | 1.0.0 |
| library/python/glance_store | OpenStack | 0.1.10 | 0.4.0 |
| library/python/glanceclient | OpenStack | 0.15.0 | 0.17.2 |
| library/python/greenlet | Ralf Schmitt | 0.4.5 | 0.4.9 |
| library/python/heatclient | OpenStack | 0.2.12 | 0.4.0 |
| library/python/idna |  | not included | 2.0 |
| library/python/importlib | Brett Cannon | 1.0.2 | not included |
| library/python/ipaddress | ipaddress | not included | 1.0.16 |
| library/python/ironicclient | OpenStack | 0.3.3 | 0.5.1 |
| library/python/keystoneclient | OpenStack | 1.0.0 | 1.3.3 |
| library/python/keystonemiddleware | OpenStack | 1.3.1 | 1.5.2 |
| library/python/kombu | Ask Solem | 3.0.7 | 3.0.32 |
| library/python/libxml2 | xmlsoft.org | 2.9.2 | 2.9.4 |
| library/python/linecache2 |  | not included | 1.0.0 |
| library/python/msgpack |  | not included | 0.4.6 |
| library/python/neutronclient | OpenStack | 2.3.10 | 2.4.0 |
| library/python/novaclient | OpenStack | 2.20.0 | 2.23.2 |
| library/python/openscap | OpenSCAP | 1.2.3 | 1.2.6 |
| library/python/openstackclient | OpenStack | not included | 1.0.4 |
| library/python/ordereddict | ordereddict | 1.1 | not included |
| library/python/oslo.concurrency |  | not included | 1.8.2 |
| library/python/oslo.config | OpenStack | 1.6.0 | 1.9.3 |
| library/python/oslo.context | OpenStack | 0.1.0 | 0.2.0 |
| library/python/oslo.db | OpenStack | 1.0.3 | 1.7.2 |
| library/python/oslo.i18n | OpenStack | 1.3.1 | 1.5.0 |
| library/python/oslo.log | OpenStack | not included | 1.0.0 |
| library/python/oslo.messaging | OpenStack | 1.4.1 | 1.8.3 |
| library/python/oslo.middleware | OpenStack | 0.4.0 | 1.0.0 |
| library/python/oslo.policy | OpenStack | not included | 0.3.2 |
| library/python/oslo.serialization | OpenStack | 1.2.0 | 1.4.0 |
| library/python/oslo.utils | OpenStack | 1.2.1 | 1.4.0 |
| library/python/oslo.versionedobjects | OpenStack | not included | 0.1.1 |
| library/python/oslo.vmware | OpenStack | 0.8.0 | 0.11.2 |
| library/python/pbr | OpenStack | 0.8.1 | 0.11.0 |
| library/python/pint | pint | not included | 0.6 |
| library/python/psutil |  | not included | 1.2.1 |
| library/python/pycadf | OpenStack | 0.6.0 | 0.8.0 |
| library/python/python-dbus |  | 1.1.1 | 1.2.0 |
| library/python/python-gtk-vnc |  | not included | 0.3.10 |
| library/python/python-mimeparse |  | not included | 0.1.4 |
| library/python/saharaclient | OpenStack | 0.7.6 | 0.8.0 |
| library/python/semantic-version | semantic_version | not included | 2.4.2 |
| library/python/sqlalchemy-migrate | Jan Dittberner | 0.9.1 | 0.9.6 |
| library/python/sqlparse | sqlparse | not included | 0.1.14 |
| library/python/stevedore | Doug Hellmann | 1.2.0 | 1.3.0 |
| library/python/swiftclient | OpenStack | 2.3.1 | 2.4.0 |
| library/python/taskflow | OpenStack | 0.6.1 | 0.7.1 |
| library/python/testresources | Testing Cabal | not included | 0.2.7 |
| library/python/testscenarios | Testing Cabal | not included | 0.5.0 |
| library/python/testtools | Testing Cabal | not included | 1.8.0 |
| library/python/traceback2 | traceback2 | not included | 1.4.0 |
| library/python/trollius | trollius | not included | 2.0 |
| library/python/troveclient | OpenStack | 1.0.8 | 1.0.9 |
| library/python/unittest2 | Michael Foord | 0.5.1 | not included |
| library/python/urllib3 |  | not included | 1.10.4 |
| library/security/gpgme | gpgme | 1.5.3 | 1.6.0 |
| library/security/libassuan | libassuan | 2.2.0 | 2.4.3 |
| library/security/libgpg-error | libgpg-error | 1.12 | 1.27 |
| library/security/libksba | libksba | 1.3.2 | 1.3.5 |
| library/security/nettle | Nettle | not included | 3.1 |
| library/security/nss | Mozilla NSS | 4.17.2 | 4.30.2 |
| library/security/ocsp/libpki | LibPKI | not included | 0.8.9 |
| library/security/ocsp/openca-ocspd | OpenCA OCSP Responder | not included | 3.1.2 |
| library/security/openssl | OpenSSL | 1.0.1.16 | 1.0.2.12 |
| library/security/openssl/openssl-fips-140 | OpenSSL | 2.0.6 | 2.0.12 |
| library/security/pam/module/pam-pkcs11 | OpenSC | 0.6.0 | 0.6.8 |
| library/security/pcsc-lite/ccid | PCSClite | not included | 1.4.20 |
| library/security/pcsc/pcsclite | PCSClite | not included | 1.8.14 |
| library/zlib | Zlib | 1.2.8 | 1.2.11 |
| mail/mailman | GNU mailman | 2.1.18.1 | 2.1.24.1 |
| mail/thunderbird | Mozilla Thunderbird | 31.6.0 | 52.4.0 |
| media/dvd+rw-tools | DVD+RW | 7.1 | not included |
| media/mtx | mtx | 1.3.12 | not included |
| network/amqp/rabbitmq | rabbitmq.com | 3.1.3 | 3.6.9 |
| network/chat/irssi | irssi.org | 0.8.15 | 1.0.2 |
| network/dns/bind, service/network/dns/bind | ISC BIND | 9.6.3.11.3 (9.6-ESV-R11-S3) | 9.6.3.11.10 (9.6-ESV-R11-S10) |
| network/firewall/firewall-pflog |  | not included | 5.5 |
| network/ftp/lftp | lftp | 4.3.1 | 4.7.6 |
| network/open-fabrics | OpenFabrics | 1.5.3 | 3.18 |
| network/openssh | OpenSSH | 6.5.0.1 | 7.5.0.1 |
| network/rsync | rsync.samba.org | 3.1.1 | 3.1.2 |
| print/filter/hplip | hplip | 3.14.6 | 3.15.7 |
| runtime/erlang | Erlang | 17.5 | 19.3 |
| runtime/perl-522 |  | not included | 5.22.1 |
| runtime/perl-584 | Perl community | 5.8.4 | not included |
| runtime/python-26 | Python community | 2.6.8 | not included |
| runtime/ruby-23 |  | not included | 2.3.1 |
| security/compliance/openscap | OpenSCAP | 1.2.3 | 1.2.6 |
| security/kerberos-5 | Kerberos | 1.10 | 1.14.4.0 |
| security/nss-utilities | Mozilla NSS | 4.17.2 | 4.30.2 |
| security/sudo | sudo.ws | 1.8.9.5 | 1.8.18.1 |
| service/memcached | Memcached | 1.4.17 | 1.4.33 |
| service/network/dhcp/isc-dhcp | ISC DHCP | 4.1.0.7 | 4.1.0.7.2 |
| service/network/dnsmasq | Dnsmasq | 2.68 | 2.76 |
| service/network/ftp | ProFTPD | 1.3.5 | 1.3.5.2 |
| service/network/ntp | NTP.org | 4.2.8.2 (4.2.8p2) | 4.2.8.10 (4.2.8p10) |
| service/network/samba | Samba | 3.6.25 | 4.4.16 |
| service/network/smtp/postfix | Postfix | 2.11.3 | 3.2.2 |
| service/security/stunnel | stunnel | 4.56 | 5.35 |
| shell/bash | GNU bash | 4.1.17 | 4.4.11 |
| shell/ksh93 | Korn Shell | 93.21.0.20110208 (93u 2011-02-08) | 93.21.1.20120801 (93u+ 2012-08-01) |
| shell/tcsh | tcsh | 6.18.1 | 6.19.0 |
| shell/zsh | Zsh Development Group | 5.0.7 | 5.3.1 |
| system/data/timezone | IANA Time Zone database | 2015.5 (2015e) | 2017.1 (2017a) |
| system/desktop/stardict | StarDict | 3.0.1 | not included |
| system/library/dbus | D-Bus | 1.7.1 | 1.8.20 |
| system/library/fontconfig | FontConfig | 2.8.0 | 2.12.1 |
| system/library/freetype-2 | The FreeType Project | 2.5.5 | 2.8 |
| system/library/libdbus-glib |  | 0.100 | 0.102 |
| system/library/security/libgcrypt | libgcrypt | 1.5.3 | 1.7.8 |
| system/library/security/pkcs11_cackey | CACKey | not included | 0.7.4 |
| system/library/security/pkcs11_coolkey | CoolKey | not included | 1.1.0 |
| system/management/cloudbase-init | cloudbase-init | not included | 0.9.9 |
| system/management/facter | facter | 2.1.0 | 2.4.6 |
| system/management/ipmitool | ipmitool | 1.8.12 | 1.8.15.0 |
| system/management/puppet | Puppet | 3.6.2 | 3.8.6 |
| system/management/puppet/nanliu-staging |  | not included | 1.0.3 |
| system/management/puppet/openstack-cinder |  | not included | 6.1.0 |
| system/management/puppet/openstack-glance |  | not included | 6.1.0 |
| system/management/puppet/openstack-heat |  | not included | 6.1.0 |
| system/management/puppet/openstack-horizon |  | not included | 6.1.0 |
| system/management/puppet/openstack-ironic |  | not included | 6.1.0 |
| system/management/puppet/openstack-keystone |  | not included | 6.1.0 |
| system/management/puppet/openstack-neutron |  | not included | 6.1.0 |
| system/management/puppet/openstack-nova |  | not included | 6.1.0 |
| system/management/puppet/openstack-openstacklib |  | not included | 6.1.0 |
| system/management/puppet/openstack-swift |  | not included | 6.1.0 |
| system/management/puppet/puppetlabs-apache |  | not included | 1.8.1 |
| system/management/puppet/puppetlabs-concat |  | not included | 1.2.1 |
| system/management/puppet/puppetlabs-inifile |  | not included | 1.4.3 |
| system/management/puppet/puppetlabs-mysql |  | not included | 3.6.2 |
| system/management/puppet/puppetlabs-ntp |  | not included | 4.1.2 |
| system/management/puppet/puppetlabs-rabbitmq |  | not included | 5.3.1 |
| system/management/puppet/puppetlabs-rsync |  | not included | 0.4.0 |
| system/management/puppet/puppetlabs-stdlib |  | not included | 4.11.0 |
| system/management/puppet/saz-memcached |  | not included | 2.8.1 |
| system/storage/smartmontools | smartmontools | not included | 6.5 |
| system/storage/smp_utils | smp_utils | not included | 0.97 |
| terminal/conman | conman | 0.2.5 | not included |
| terminal/resize | Thomas Dickey | 0.271 | 0.320 |
| terminal/screen | GNU Screen | 4.0.3 | 4.5.1 |
| terminal/tack |  | 1.0.6 | not included |
| terminal/xterm | Thomas Dickey | 271 | 320 |
| text/o3read |  | 0.0.4 | not included |
| web/browser/firefox | Mozilla Firefox | 31.8.0 | 52.4.0 |
| web/browser/w3m | w3m | 0.5.2 | 0.5.3 |
| web/curl | curl | 7.40.0 | 7.45.0 |
| web/java-servlet/tomcat | Apache Tomcat | 6.0.44 | not included |
| web/java-servlet/tomcat-8 | Apache Tomcat | 8.0.21 | 8.5.23 |
| web/php-53 | PHP | 5.3.29 | not included |
| web/php-56 | PHP | 5.6.8 | 5.6.30 |
| web/php-71 | PHP | not included | 7.1.4 |
| web/php/extension/php-suhosin-extension | suhosin.org | 0.9.37.1 | 0.9.38 |
| web/php/extension/php-xdebug | xdebug.org | 2.3.2 | 2.5.1 |
| web/proxy/squid | squid-cache.org | 3.5.5 | 3.5.23 |
| web/server/apache-22 | Apache HTTPd | 2.2.31 | 2.2.34 |
| web/server/apache-22/module/apache-sed | Apache mod_sed | 2.2.29 | 2.2.34 |
| web/server/apache-24 | Apache HTTPd | 2.4.12 | 2.4.27 |
| web/server/lighttpd-14 | lighttpd | 1.4.35 | 1.4.41 |
| web/wget | GNU wget | 1.16 | 1.18 |
| x11/session/sessreg | X.Org Foundation | 1.0.8 | 1.1.0 |
| x11/xfs | X.Org Foundation | 1.1.3 | 1.1.4 |


Some Light Reading For The Weekend

Over the last few weeks there have been some nice blogs related to Oracle Solaris that I thought it would be useful to sum up. First, it's probably good to know that earlier this week we released Oracle Solaris 11.3 SRU 26, the November SRU. It has new updates to features like UEFI, Tomcat, Wireshark, and Java. For more details please read this blog entry. Next I'd like to point out that Chris Beal wrote a nice blog on how to start using PF to create NAT rules for Zones. This is useful, for example, if you want to create your own private network in an environment where you can't get extra IP addresses but still want to run Zones as containers for your applications, such as development and test cases in a DevOps setup. This links nicely to my third tip. Over the last few weeks I've been bringing out a series of blogs on how to set up a DevOps toolchain on Oracle Solaris. The tools used include Git, Jenkins, Maven, and Oracle WebLogic Server. This is a longer, more complete writeup of the DevOps Hands on Lab we did this year at Oracle OpenWorld. One of the things these posts show, for example, is how you can use the Oracle Maven WebLogic plugin to connect Maven to a WebLogic instance and deploy applications to it automatically. Check out Part 1, Part 2, and Part 3 (Part 4 coming soon). Happy reading, have a good weekend.


Oracle Solaris 11

Oracle Solaris 11.3 SRU26 Released

Earlier today Oracle Solaris 11.3 Support Repository Update (SRU) 26 was released. It's now available for download on My Oracle Support Doc ID 2045311.1, or via 'pkg update' from the support repository at https://pkg.oracle.com/solaris/support . This SRU contains some features and updates that will be of interest to users:

Enhanced scalability of the callout system. This addresses issues with Real Time threads where large numbers of scheduled concurrent callouts were getting pinned under higher interrupt levels. This fix should benefit large systems and similar workloads.

Support for booting from 4K-sector disks on UEFI systems. This allows the system to boot from and use disks with a 4K sector size, and allows Solaris to be supported on a larger number of platforms.

Oracle Instant Client 12.2 is now available in addition to Oracle Instant Client 12.1. The package names have changed to include a "-122" or "-121" suffix, and tools such as sqlplus are now delivered as mediated links. Readme note 29 includes further details.

International Components for Unicode has been updated to 59.1. Note that the library is now built with the GNU project C++ compiler, so make sure icu-config(1) or pkg-config(1) is used to determine the correct build flags. Further details are available from the upstream website at http://site.icu-project.org/

OpenSSL has been updated to 1.0.2l.

The Java 8, Java 7, and Java 6 packages have been updated. For more information and bugs fixed, see the Java 8 Update 152, Java 7 Update 161, and Java 6 Update 171 Release Notes.

The following components also have updates to address security issues:

NVIDIA driver packages
Wireshark has been updated to 2.4.2
gnupg has been updated to 2.0.30, and the gnupg-related components are also updated: gpgme to 1.6.0, libksba to 1.3.5, and libassuan to 2.4.3
tcpdump has been updated to 4.9.2
Apache Tomcat has been updated to 8.5.23
Thunderbird has been updated to 52.4.0
ImageMagick has been updated to 6.9.9

For a complete list of bugs fixed in this SRU, see the Bugs Fixed section of the Readme. For the list of Service Alerts affecting each Oracle Solaris 11.3 SRU, see Important Oracle Solaris 11.3 SRU Issues (Doc ID 2076753.1).


Technologies

DevOps Hands on Lab - Installing Jenkins on Oracle Solaris

Here's Part 3 of our series on this year's Hands on Lab on setting up a simple DevOps toolchain on Oracle Solaris. In the previous posts we showed in Part 1 how to install Oracle WebLogic Server on Oracle Solaris and in Part 2 how to connect Maven to the WebLogic instance with a plugin. In this post we'll show how to install and configure Jenkins as a build manager to allow for a completely automatic build and deploy of your Java application to the WebLogic instance. This will help complete the build server part of the toolchain and get closer to making our install look like this:

One of the other smaller things we'll need to add to the build server is the Git repository origin of our Java application, to ensure that we can signal Jenkins when any updates or changes to the Java application have been committed and pushed. We want to use this repository because we can then later also easily create a clone somewhere else and use that as a source of updates. This blog essentially has 3 subsections. The first is on how to install Jenkins and use Oracle Solaris features like the Service Management Facility (SMF) to keep it always on and secure. The second is on getting the application into a Git repository and then pushing it to a central master repository called origin. The third section will show how to create a project in Jenkins that will take the code from this repository, build it, and then deploy it to the WebLogic instance using Maven. This last section will heavily leverage the work we've done in the previous two posts to get that backend ready.

1. Installing and Configuring Jenkins

Jenkins is a very popular open source build automation server. Because it has a huge following, many plugins have been written for it, and pretty much anything can be built, tested, and deployed with it. What's also really nice about it is that it's written in Java, so it and its plugins will work on many different platforms.
This includes the many different flavors of Linux as well as Oracle Solaris. It's actually one of the tools we use internally to automate building new versions of Solaris, and it will work nicely in our DevOps example. The way Jenkins is normally installed is by starting the jenkins.war file with Java and then going through the procedure where it essentially unpacks itself in your directory. But in our case we want to leverage Solaris SMF to create a Jenkins service that we can turn on and off and that will keep Jenkins running across reboots and failures. To do this we'll have to create an SMF manifest (an XML file that describes the service) and then load it into SMF. The nice thing about this is that you can, for example, define which user the service runs as and what it's allowed to do. This greatly helps you run services securely, as you can create a user that nobody would ever log in as, give it very limited privileges, and start the service with even more restrictive privileges. In our case we're going to use some of these features.

So first, as root in the j-m_zone, download the latest version of Jenkins from their download site and copy it into the /opt directory:

root@j-m_zone:~# wget http://mirrors.jenkins.io/war-stable/latest/jenkins.war
--2017-09-12 06:35:22-- http://mirrors.jenkins.io/war-stable/latest/jenkins.war
...
jenkins.war 100%[==========================================================>] 67.35M 2.18MB/s in 31s
2017-09-12 06:35:53 (2.18 MB/s) - ‘jenkins.war’ saved [70620203/70620203]
root@j-m_zone:~# cp jenkins.war /opt/.

We copy it to /opt because that's a standard place to put optional software, but in reality, because Jenkins will download and unpack itself inside JENKINS_HOME, this step is optional. As long as you know where it is, you can point the SMF manifest at the right location (and make sure user jenkins can read the file). Now we'll create the SMF manifest.
To make this easy there's a command called svcbundle that will create the manifest from a template, after which we only have to make a few tweaks to get the manifest into the form we want. So first run svcbundle:

root@j-m_zone:~# svcbundle -o jenkins.xml -s service-name=site/jenkins -s model=child -s instance-name=default -s start-method="java -jar /opt/jenkins.war"

This will create a file called jenkins.xml in your current working directory. We can finish the configuration by editing this XML file and replacing this line:

<instance enabled="true" name="default"/>

with these lines:

<instance enabled="true" name="default">
  <method_context>
    <method_credential group='nobody' privileges='basic,net_privaddr' user='jenkins' />
    <method_environment>
      <envvar name='JENKINS_HOME' value='/export/home/jenkins' />
    </method_environment>
  </method_context>
</instance>

These extra lines tell SMF to start Jenkins as user jenkins, group nobody, with basic privileges plus the privilege to bind to a privileged port (below 1024), and to set JENKINS_HOME to /export/home/jenkins. This way, even though it gets started by root, the Jenkins process is limited. Finally we can validate the manifest, load it into SMF, and check that it's loaded and running:

root@j-m_zone:~# svccfg validate jenkins.xml
root@j-m_zone:~# cp /root/jenkins.xml /lib/svc/manifest/site/.
root@j-m_zone:~# svcadm restart manifest-import
root@j-m_zone:~# svcs -p jenkins
STATE STIME FMRI
online 12:06:21 svc:/site/jenkins:default
 12:06:21 16466 java

This shows that the service is now up and running, with a Java process with PID 16466. Now we can go to the Jenkins BUI to finalize the install. It will be on port 8080 of the j-m_zone. First you'll see a screen asking for a code to verify yourself as the administrator. Next we'll configure Jenkins not to install any additional plugins; this will save a lot of time, and we'll install the plugins we need later.
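For orientation, the edited manifest ends up shaped roughly like this. This is a trimmed sketch following the standard SMF service bundle DTD: svcbundle's actual output carries additional dependency and template elements and may differ in detail, so treat it as illustrative rather than exact:

```xml
<?xml version="1.0" ?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="site/jenkins">
  <service version="1" type="service" name="site/jenkins">
    <!-- model=child: SMF tracks the start method's process as the service -->
    <property_group name="startd" type="framework">
      <propval name="duration" type="astring" value="child"/>
    </property_group>
    <exec_method timeout_seconds="60" type="method" name="start"
                 exec="java -jar /opt/jenkins.war"/>
    <exec_method timeout_seconds="60" type="method" name="stop" exec=":kill"/>
    <!-- the hand-edited instance element from the steps above -->
    <instance enabled="true" name="default">
      <method_context>
        <method_credential group="nobody" privileges="basic,net_privaddr"
                           user="jenkins"/>
        <method_environment>
          <envvar name="JENKINS_HOME" value="/export/home/jenkins"/>
        </method_environment>
      </method_context>
    </instance>
  </service>
</service_bundle>
```

The key point is that the credential and environment settings live inside the instance, so they apply however the service is started.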
Select the option "Select plugins to install", then deselect all plugins and continue. It will then ask for an admin user; please choose a username and password. We use "jenkins" and "solaris11", but you can of course use what you want. Then click "Save and Finish"; now it's ready, so click "Start using Jenkins". Jenkins will restart, ask you to log in, and show its main screen. At this point Jenkins is up and running and will remain so until you choose to stop it with svcadm disable jenkins, which tells SMF to stop the service; svcadm enable jenkins will tell SMF to start it again. We'll not change its state, as we want to keep on using it.

1.1. Jenkins Configuration

Now we'll add the plugins that we need. Click on "Manage Jenkins" and then on "Manage Plugins". Then click on the "Available" tab, and in the "Filter" search box on the right fill in "Maven". This should give you a whole list of options; scroll down and select the "Maven integration plugin". Then go back to the filter and fill in "Git". This will also give quite a long list; select the "Git plugin" from it. To start downloading the plugins, click on the button "Download now and install after restart" at the bottom of the screen. This should kick off the install of the selected plugins and the plugins they depend upon. At the very bottom of the list there's an option to restart Jenkins once all the plugins are downloaded; check the box to do this. Now wait, either by refreshing the screen once in a while or by enabling the auto refresh button in the top right-hand corner. Once the main Jenkins screen is back, you're ready to do some final configuration steps. Now we go back to the Jenkins BUI and configure Jenkins to use this copy of Maven as well as the local JDK.
Go back to the "Manage Jenkins" screen, and this time click on the "Global Tool Configuration" link: Now go to the JDK section of this page and click on "Add JDK", fill in a name like "System Java", uncheck "Install automatically" and for "JAVA_HOME" fill in "/usr/jdk/latest": Then scroll down to the Maven section, click on "Add Maven" and add the local Maven installation by unchecking "Install automatically" again and filling in "System Maven" and "/opt/apache-maven-3.2.5/" in the fields for "Name" and "MAVEN_HOME": Now click on the "Save" button at the bottom to save these configurations. 2. Configuring the Git Repositories Now that we have Jenkins up and running, we first need to put our application (in section 6 of Part 2) into a Git repository and then push it to a central origin repository Jenkins is going to pull from. We could also have Jenkins pull from our local repository, but since we later want a remote development environment to pull the code and push changes, it's better to use a central repository from the start. First initialize the repo and add the Java app to it: jenkins@j-m_zone:~/mywebapp$ git init Initialized empty Git repository in /export/home/jenkins/mywebapp/.git/ jenkins@j-m_zone:~/mywebapp$ git status On branch master Initial commit Untracked files: (use "git add <file>..." to include in what will be committed) pom.xml src/ target/ nothing added to commit but untracked files present (use "git add" to track) jenkins@j-m_zone:~/mywebapp$ git add * jenkins@j-m_zone:~/mywebapp$ git status On branch master Initial commit Changes to be committed: (use "git rm --cached <file>..." to unstage) new file: pom.xml ... new file: target/maven-archiver/pom.properties jenkins@j-m_zone:~/mywebapp$ git config --global user.email "you@example.com" jenkins@j-m_zone:~/mywebapp$ git config --global user.name "Your Name" jenkins@j-m_zone:~/mywebapp$ git commit -a -m "My first commit." [master (root-commit) 92595fd] My first commit. 
16 files changed, 10414 insertions(+) create mode 100644 pom.xml ... create mode 100644 target/maven-archiver/pom.properties At this point the Java app has been added to the local repository, but as said above, in order for Jenkins to easily use it we're going to create the repository origin, in this case on the same system. We're choosing to put it on the same system, but in most cases it will be remote, possibly even on a site like GitHub. To create the origin we create a bare repository in the home directory of a new git-user account, for which we can set a password so we can easily access it from another system. First exit from being user jenkins and then create the new user, set the password, and create the bare repository: jenkins@j-m_zone:~$ exit logout root@j-m_zone:~# useradd -m git-user 80 blocks root@j-m_zone:~# passwd git-user New Password: Re-enter new Password: passwd: password successfully changed for git-user root@j-m_zone:~# su - git-user Oracle Corporation SunOS 5.11 11.3 July 2017 git-user@j-m_zone:~$ git init --bare mywebapp.git Initialized empty Git repository in /export/home/git-user/mywebapp.git/ Now we can go back to user jenkins and set this as the origin of our Java app repository: git-user@j-m_zone:~$ exit logout root@j-m_zone:~# su - jenkins Oracle Corporation SunOS 5.11 11.3 July 2017 jenkins@j-m_zone:~$ cd mywebapp jenkins@j-m_zone:~/mywebapp$ git remote add origin ssh://git-user@localhost:/export/home/git-user/mywebapp.git jenkins@j-m_zone:~/mywebapp$ git push origin master Password: Counting objects: 35, done. Compressing objects: 100% (25/25), done. Writing objects: 100% (35/35), 43.04 KiB | 0 bytes/s, done. Total 35 (delta 4), reused 0 (delta 0) To ssh://git-user@localhost:/export/home/git-user/mywebapp.git * [new branch] master -> master At this point the Java app has been pushed to origin and we can start configuring a Jenkins project. 3. 
Configuring a Jenkins Project Now that Jenkins is up and running and we also have Maven and WebLogic configured and working, we can move toward creating the first Jenkins project, which will build our Java app and deploy it to our WebLogic Server instance automatically every time we commit a new change to the Java app. To do this we're going to create a new project by going to the main screen in Jenkins and clicking on "create new jobs" to start a new project: Then fill in an item name like "MyWLSWebApp", click on "Maven project", and click "OK": Under "Source Code Management" select "Git" and fill in "/export/home/git-user/mywebapp.git": Note: We're using the direct file path because it's easy and we can. Normally you'd use something like ssh and set up credentials. For the direct file path to work, of course, the repository needs to at least be readable by user jenkins, which by default it should be. And under "Build Triggers" make sure "Poll SCM" is checked: Then under "Build" fill in "pre-integration-test" in the "Goals" field: Finally click "Save" and we've configured the first project. 4. Create SCM Hook At this point Jenkins is ready, but we need to make sure that it gets notified when there's a new update to the Git repository. We do this with an "SCM hook", which is part of Git. The hook we're going to use is called the post-receive hook. The way this works is we create a new file in the hooks directory of the origin Git repository; this file contains a script that is invoked after Git receives new code. 
The script contains a curl command that in turn pokes Jenkins that there's new code: curl http://localhost:8080/git/notifyCommit?url=file:///export/home/git-user/mywebapp.git So, as user git-user go to the hooks directory and create the post-receive file, not forgetting to make sure the new script is executable: git-user@j-m_zone:~$ cd mywebapp.git/hooks/ git-user@j-m_zone:~/mywebapp.git/hooks$ vi post-receive git-user@j-m_zone:~/mywebapp.git/hooks$ cat post-receive #!/usr/bin/bash curl http://localhost:8080/git/notifyCommit?url=file:///export/home/git-user/mywebapp.git git-user@j-m_zone:~/mywebapp.git/hooks$ chmod +x post-receive 5. Test Project Setup Go back to the directory that holds the Java app's code and make a small edit to the source code of the src/main/webapp/index.xhtml file. For example change the heading from: <h2>Basic Webapp</h2> To: <h2>My Basic Webapp</h2> and then commit the change to Git and push the new code to origin: git-user@j-m_zone:~/mywebapp.git/hooks$ exit logout root@j-m_zone:~# su - jenkins Oracle Corporation SunOS 5.11 11.3 July 2017 jenkins@j-m_zone:~$ cd mywebapp/ jenkins@j-m_zone:~/mywebapp$ vi src/main/webapp/index.xhtml jenkins@j-m_zone:~/mywebapp$ git commit -a -m "My first change." [master 8bbe4cc] My first change. 1 file changed, 1 insertion(+), 1 deletion(-) jenkins@j-m_zone:~/mywebapp$ git push origin master Password: Counting objects: 6, done. Compressing objects: 100% (5/5), done. Writing objects: 100% (6/6), 512 bytes | 0 bytes/s, done. 
Total 6 (delta 2), reused 0 (delta 0) remote: % Total % Received % Xferd Average Speed Time Time Time Current remote: Dload Upload Total Spent Left Speed remote: 100 117 100 117 0 0 1449 0 --:--:-- --:--:-- --:--:-- 1500 remote: Scheduled polling of MyWLSWebApp remote: No Git consumers using SCM API plugin for: file:///export/home/git-user/mywebapp.git To ssh://git-user@localhost:/export/home/git-user/mywebapp.git 4efe4bd..0660e8b master -> master And if you quickly go back to the Jenkins screen you'll see that it's picked up the new build: And once it's done you can go back to the Java App page and refresh it; it should now show the new heading: This verifies the build chain is working. At this point the build part of your toolchain is up and running, and any change you make to your code that you push to the origin repository will automatically be built and deployed to the WebLogic instance. This ends the blog on installing and configuring Jenkins on Oracle Solaris. Check in later for the last part of this series, where we install NetBeans as the IDE front-end to this toolchain, thereby completing the picture.
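The bare-origin and post-receive mechanism used in sections 2 through 5 can be rehearsed on any machine with Git installed. The sketch below is a stand-in, not the real setup: it uses scratch directories instead of the home-directory paths above, and the hook appends to a log file instead of curl-ing Jenkins' notifyCommit URL, so you can watch the hook fire on a plain push:

```shell
#!/usr/bin/bash
# Self-contained rehearsal of the bare origin + post-receive hook pattern.
set -e
WORK=$(mktemp -d)
cd "$WORK"

# 1. A bare repository plays the role of mywebapp.git.
git init --bare origin.git

# 2. The post-receive hook; the real one would run
#    curl http://localhost:8080/git/notifyCommit?url=file:///export/home/git-user/mywebapp.git
cat > origin.git/hooks/post-receive <<'EOF'
#!/bin/sh
echo "push received" >> ../push.log   # hooks run from the bare repo's top directory
EOF
chmod +x origin.git/hooks/post-receive

# 3. A working repository plays the role of ~/mywebapp.
git init app
cd app
git config user.email "you@example.com"
git config user.name "Your Name"
echo '<h2>My Basic Webapp</h2>' > index.xhtml
git add index.xhtml
git commit -m "My first change."

# 4. Point at origin and push; Git runs the hook after receiving the objects.
git remote add origin "$WORK/origin.git"
git push origin HEAD
cat "$WORK/push.log"
```

On the real system the only differences are the ssh:// remote URL and the curl line in the hook body.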

Here's Part 3 of our series on this year's Hands on Lab on setting up a simple DevOps toolchain on Oracle Solaris. In the previous posts Part 1 how to install Oracle WebLogic Server on Oracle...


DevOps Hands on Lab - Connecting Maven to Oracle WebLogic Server

Continuing on from last week's DevOps toolchain Part 1 blog. This is Part 2 on how to set up a simple DevOps toolchain on Oracle Solaris with tools like Jenkins, Maven, Git, and WebLogic Server. This is based on a Hands on Lab we did at Oracle OpenWorld this year, and the toolchain at the end of the Hands on Lab basically looks like this: In this second part we'll be installing Maven and creating and loading the Oracle Maven Plugins for Oracle WebLogic Server. To achieve this we'll first install the second Solaris Zone, called j-m_zone, and then install Java, Git, and Maven. At that point we can create/build the Oracle Maven Plugins for Oracle WebLogic Server with the WebLogic install in the wls_zone, pull them across, and then use them to create, upload, and test a WebLogic demo application. The basic assumption is that you're continuing to work in the same environment as in Part 1, because we'll need the WebLogic install to be there to create the plugins and later connect to. 1. Installing the Jenkins-Maven Zone This procedure is very similar to the earlier install of the wls_zone, so we'll only do the abbreviated version here. First (as user root in the Global Zone) configure the zone: root@solaris:~# zonecfg -z j-m_zone Use 'create' to begin configuring a new zone. zonecfg:j-m_zone> create create: Using system default template 'SYSdefault' zonecfg:j-m_zone> set autoboot=true zonecfg:j-m_zone> set zonepath=/zones/j-m_zone zonecfg:j-m_zone> verify zonecfg:j-m_zone> commit zonecfg:j-m_zone> exit Then install it: root@solaris:~# zoneadm -z j-m_zone install And once that's run through, boot and connect to the zone console to go through the initial system configuration steps: root@solaris:~# zoneadm -z j-m_zone boot; zlogin -C j-m_zone This will again take you through the text-based configuration screens, and once you've finished these and the zone has rebooted you can break out of the console (using ~.) and your zone is up and running. 
Note that before you break out of the console you might want to log in and run the ipadm command to get the IP address the zone has acquired from DHCP (if you've chosen to use DHCP instead of a fixed IP address). You can of course at a later point in time, from the Global Zone, always just run it in the context of the zone by running: root@solaris:~# zlogin j-m_zone ipadm This will run that single command in the context of the zone and on exit return you to the Global Zone. 2. Create/Build Oracle Maven WebLogic Server Plugins Before we go any further with the install of Maven, we go back to the WebLogic installation and use it to create the plugins we're looking for. The WebLogic install includes the files necessary to build the plugins as well as the version of Maven needed to do this. So once logged into the wls_zone you need to find the Maven binaries and then use them to build the plugins. First log back in to the wls_zone and become user oracle. You can do this in one step by using the zlogin -l option. Then we add the Maven that ships with WebLogic Server to the PATH: root@solaris:~# zlogin -l oracle wls_zone [Connected to zone 'wls_zone' pts/4] Last login: Fri Sep 15 07:31:15 2017 from 10.0.2.17 Oracle Corporation SunOS 5.11 11.3 July 2017 oracle@wls_zone:~$ export PATH=$PATH:/u01/oracle/oracle_common/modules/org.apache.maven_3.2.5/bin oracle@wls_zone:~$ mvn -v Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-14T09:29:23-08:00) Maven home: /u01/oracle/oracle_common/modules/org.apache.maven_3.2.5 Java version: 1.8.0_141, vendor: Oracle Corporation Java home: /usr/jdk/instances/jdk1.8.0/jre Default locale: en_US, platform encoding: UTF-8 OS name: "sunos", version: "5.11", arch: "amd64", family: "unix" The next step is optional. Maven will need to connect with its repository (the default is https://repo.maven.apache.org), and if you need to use a proxy to get to this repository because it's not directly visible, this needs to be defined. 
In Maven this is done through the ~/.m2/settings.xml file and it should look something like this: oracle@wls_zone:~$ vi ~/.m2/settings.xml oracle@wls_zone:~$ cat ~/.m2/settings.xml <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd"> <localRepository/> <interactiveMode/> <usePluginRegistry/> <offline/> <pluginGroups/> <servers/> <mirrors/> <proxies> <proxy> <id>my_proxy</id> <active>true</active> <protocol>http</protocol> <host>proxy.my-company.com</host> <port>80</port> <nonProxyHosts>*.some-domain.my-company.com|*.other-domain.my-company.com</nonProxyHosts> </proxy> </proxies> <profiles/> <activeProfiles/> </settings> Now we can run the first step, installing the Oracle Maven Synchronization Plug-In: oracle@wls_zone:~$ cd oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync/12.2.1/ oracle@wls_zone:~/oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync/12.2.1$ ls oracle-maven-sync-12.2.1.jar oracle-maven-sync-12.2.1.pom oracle@wls_zone:~/oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync/12.2.1$ mvn install:install-file -DpomFile=oracle-maven-sync-12.2.1.pom -Dfile=oracle-maven-sync-12.2.1.jar [INFO] Scanning for projects... Downloading: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.pom Downloaded: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.pom (4 KB at 5.5 KB/sec) ... 
[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 4.530 s [INFO] Finished at: 2017-09-11T19:38:00-07:00 [INFO] Final Memory: 11M/123M [INFO] ------------------------------------------------------------------------ Note: Details on this step can be found on the Installing and Configuring Maven for Build Automation and Dependency Management page. Now we push the plugin into the directory we want (in this case /u01/oracle): oracle@wls_zone:~/oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync/12.2.1$ mvn com.oracle.maven:oracle-maven-sync:push -DoracleHome=/u01/oracle -DtestingOnly=false [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Stub Project (No POM) 1 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- oracle-maven-sync:12.2.1-2-0:push (default-cli) @ standalone-pom --- [INFO] ------------------------------------------------------------------------ [INFO] ORACLE MAVEN SYNCHRONIZATION PLUGIN - PUSH [INFO] ------------------------------------------------------------------------ [INFO] [INFO] Found 491 location files in /u01/oracle/wlserver/plugins/maven [INFO] Installing /u01/oracle/wlserver/modules/com.bea.core.xml.weblogic.xpath.jar to /u01/oracle/.m2/repository/com/oracle/weblogic/com.bea.core.xml.weblogic.xpath/12.2.1-2-0/com.bea.core.xml.weblogic.xpath-12.2.1-2-0.jar ... 
[INFO] Installing /u01/oracle/oracle_common/plugins/maven/com/oracle/weblogic/oracle.apache.commons.collections.mod/3.2.0/oracle.apache.commons.collections.mod-3.2.0.pom to /u01/oracle/.m2/repository/com/oracle/weblogic/oracle.apache.commons.collections.mod/3.2.0-0-2/oracle.apache.commons.collections.mod-3.2.0-0-2.pom [INFO] SUMMARY [INFO] ------------------------------------------------------------------------ [INFO] PUSH SUMMARY - ARTIFACTS PROCESSED SUCCESSFULLY [INFO] ------------------------------------------------------------------------ [INFO] Number of artifacts pushed: 983 [INFO] [INFO] ------------------------------------------------------------------------ [INFO] PUSH SUMMARY - ERRORS ENCOUNTERED [INFO] ------------------------------------------------------------------------ [INFO] No issues encountered. [INFO] [INFO] IMPORTANT NOTE [INFO] This operation may have added/updated archetypes in your repository. [INFO] To update your archetype catalog, you should run: [INFO] 'mvn archetype:crawl -Dcatalog=$HOME/.m2/archetype-catalog.xml' [INFO] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 23.209 s [INFO] Finished at: 2017-09-11T19:51:36-07:00 [INFO] Final Memory: 12M/123M [INFO] ------------------------------------------------------------------------ Now we can inspect the new plugin: oracle@wls_zone:~/oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync/12.2.1$ mvn help:describe -DgroupId=com.oracle.weblogic -DartifactId=weblogic-maven-plugin -Dversion=12.2.1-2-0 [INFO] Scanning for projects... 
[INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Stub Project (No POM) 1 [INFO] ------------------------------------------------------------------------ [INFO] [INFO] --- maven-help-plugin:2.2:describe (default-cli) @ standalone-pom --- [INFO] com.oracle.weblogic:weblogic-maven-plugin:12.2.1-2-0 Name: weblogic-maven-plugin Description: The Oracle WebLogic Server 12.2.1 Maven plugin Group Id: com.oracle.weblogic Artifact Id: weblogic-maven-plugin Version: 12.2.1-2-0 Goal Prefix: weblogic This plugin has 23 goals: weblogic:appc Description: This goal is a wrapper for the weblogic.appc compiler. weblogic:create-domain Description: Create a domain for WebLogic Server using the default domain template. For more complex domain creation use the WLST goal. Note: Starting in WLS 12.2.1, there is a single unified version of WLST that automatically includes the WLST environment from all products in the ORACLE_HOME. ... weblogic:wsgen Description: Reads a JAX-WS service endpoint implementation class and generates all of the portable artifacts for a JAX-WS web service. weblogic:wsimport Description: Parses wsdl and binding files and generates Java code needed to access it. For more information, run 'mvn help:describe [...] -Ddetail' [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3.455 s [INFO] Finished at: 2017-09-11T19:51:53-07:00 [INFO] Final Memory: 12M/123M [INFO] ------------------------------------------------------------------------ This has the list of all the goals and a basic description on how to use them. For a more in-depth description use the -Ddetail flag. Note: More info on how to use the plugin can also be found at Using the WebLogic Maven Plug-In. 
At this point all the necessary plugin files are now copied into the /u01/oracle/.m2/repository directory, and by copying its contents to a .m2/repository directory on any other system, that system will be able to create Maven archetypes for Oracle WebLogic as well as connect with WebLogic and control it. 3. Installing Maven Now we go back to the j-m_zone to install Maven. As the first step we need to install the Java packages, and because we'll need it later we'll also install Git: oracle@wls_zone:~$ exit logout [Connection to zone 'wls_zone' pts/4 closed] root@solaris:~# zlogin j-m_zone [Connected to zone 'j-m_zone' pts/4] Last login: Sun Oct 1 11:40:35 2017 on pts/2 Oracle Corporation SunOS 5.11 11.3 July 2017 root@j-m_zone:~# pkg install jdk-8 git Now we can pull down the Maven binaries from their website with wget. Note that I'm using version 3.2.5 in this case to match the version that shipped with WebLogic: root@j-m_zone:~# wget https://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz Note: You might need to set the https_proxy environment variable to allow wget to connect outside your network. Then unpack the files into /opt: root@j-m_zone:~# cd /opt root@j-m_zone:/opt# tar xvfz /root/apache-maven-3.2.5-bin.tar.gz And test if it runs correctly: root@j-m_zone:/opt# /opt/apache-maven-3.2.5/bin/mvn -v Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-14T09:29:23-08:00) Maven home: /opt/apache-maven-3.2.5 Java version: 1.8.0_141, vendor: Oracle Corporation Java home: /usr/jdk/instances/jdk1.8.0/jre Default locale: en, platform encoding: ISO646-US OS name: "sunos", version: "5.11", arch: "amd64", family: "unix" 3.1 Create User Jenkins The next step is to create a new user that will run the Jenkins processes. 
We're doing this because it's actually Jenkins that will be using Maven to build the Java code and push it to WebLogic, and even though we're not going to install Jenkins in this blog, we need to put the plugins in place so we can use them later. So first create user jenkins: root@j-m_zone:/opt# useradd -m jenkins 80 blocks root@j-m_zone:/opt# su - jenkins Oracle Corporation SunOS 5.11 11.3 July 2017 jenkins@j-m_zone:~$ Now edit .profile to include the Maven binaries in the PATH by changing the PATH line to: export PATH=/usr/bin:/usr/sbin:/opt/apache-maven-3.2.5/bin And test again if Maven runs: jenkins@j-m_zone:~$ source .profile jenkins@j-m_zone:~$ mvn -v Apache Maven 3.2.5 (12a6b3acb947671f09b81f49094c53f426d8cea1; 2014-12-14T09:29:23-08:00) Maven home: /opt/apache-maven-3.2.5 Java version: 1.8.0_141, vendor: Oracle Corporation Java home: /usr/jdk/instances/jdk1.8.0/jre Default locale: en, platform encoding: ISO646-US OS name: "sunos", version: "5.11", arch: "amd64", family: "unix" 4. Copy in the Oracle Maven WebLogic Plugins Now we get to copy the plugins we've built in the wls_zone across to this one and put them into place so we can use them. To do this we need to copy the ~/.m2/repository directory across from user oracle on one side into that same directory under user jenkins on the other. Note that if you needed a proxy set earlier, you probably need to copy the ~/.m2/settings.xml over too. So to get both we only need to pull ~/.m2 across with rsync: jenkins@j-m_zone:~$ rsync -avu --progress oracle@10.0.2.16:~/.m2 . The authenticity of host '10.0.2.16 (10.0.2.16)' can't be established. RSA key fingerprint is b7:50:99:9f:dc:71:19:f0:d3:a4:93:b0:a5:e5:f4:71. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '10.0.2.16' (RSA) to the list of known hosts. 
Password: receiving incremental file list .m2/ .m2/archetype-catalog.xml 2,132 100% 2.03MB/s 0:00:00 (xfr#1, ir-chk=1004/1006) .m2/settings.xml 686 100% 669.92kB/s 0:00:00 (xfr#2, ir-chk=1003/1006) .m2/repository/ ... .m2/repository/xpp3/xpp3_min/1.1.4c/xpp3_min-1.1.4c.pom 1,610 100% 1.58kB/s 0:00:00 (xfr#5104, to-chk=1/7729) .m2/repository/xpp3/xpp3_min/1.1.4c/xpp3_min-1.1.4c.pom.sha1 62 100% 0.06kB/s 0:00:00 (xfr#5105, to-chk=0/7729) sent 114,493 bytes received 454,986,747 bytes 17,173,631.70 bytes/sec total size is 454,356,946 speedup is 1.00 Note: The rsync also pulled in the .m2/archetype-catalog.xml file. More about that next. 5. Building a New Maven Archetype The plugin we've copied across not only holds the bits needed to connect to WebLogic, but it also includes sample archetypes for WebLogic that you can build and use as the basis of your new app or just to test if everything is working. The .m2/archetype-catalog.xml holds the list of archetypes we can use. We're going to use the basic-webapp archetype and create an app called my-webapp using Maven: jenkins@j-m_zone:~$ mvn archetype:generate -DarchetypeGroupId=com.oracle.weblogic.archetype -DarchetypeArtifactId=basic-webapp -DarchetypeVersion=12.2.1-2-0 -DgroupId=com.solarisdemo -DartifactId=mywebapp -Dversion=1.0-SNAPSHOT [INFO] Scanning for projects... [INFO] ... Confirm properties configuration: groupId: com.solarisdemo artifactId: mywebapp version: 1.0-SNAPSHOT package: com.solarisdemo Y: : Press [enter] [INFO] ---------------------------------------------------------------------------- [INFO] Using following parameters for creating project from Archetype: basic-webapp:12.2.1-2-0 [INFO] ---------------------------------------------------------------------------- [INFO] Parameter: groupId, Value: com.solarisdemo ... 
[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 09:16 min [INFO] Finished at: 2017-09-15T08:14:31-07:00 [INFO] Final Memory: 18M/123M [INFO] ------------------------------------------------------------------------ jenkins@j-m_zone:~$ ls local.cshrc local.login local.profile mywebapp jenkins@j-m_zone:~$ ls mywebapp/ pom.xml src Look around the mywebapp directory and the contents of the pom.xml file and find the Java source files. Then edit the pom.xml file and change these lines: <!--adminurl>${oracleServerUrl}</adminurl--> <user>${oracleUsername}</user> <password>${oraclePassword}</password> To now read: <adminurl>t3://10.0.2.16:7001</adminurl> <remote>true</remote> <upload>true</upload> <user>weblogic</user> <password>welcome1</password> Note: You'll need to point to the IP address of the wls_zone, as it hosts the WebLogic instance we want to connect to. Note 2: The additions of remote and upload are needed if you want to use Maven to connect to anything other than localhost. See the explanation in the Oracle documentation for the Maven plugin. Note 3: The password is not encrypted; for details on how to encrypt your password, go to the Maven page on encrypting server passwords. 6. Build and Deploy the Java App At this point we're ready to build the Java app and then deploy it to WebLogic. We do this by running Maven with the pre-integration-test goal: jenkins@j-m_zone:~$ cd mywebapp/ jenkins@j-m_zone:~/mywebapp$ mvn pre-integration-test [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building basicWebapp 1.0-SNAPSHOT [INFO] ------------------------------------------------------------------------ ... 
[INFO] Command flags are: -noexit -deploy -username weblogic -password ******* -name basicWebapp -source /export/home/jenkins/mywebapp/target/basicWebapp.war -upload -verbose -remote -adminurl t3://10.0.2.16:7001 weblogic.Deployer invoked with options: -noexit -deploy -username weblogic -name basicWebapp -source /export/home/jenkins/mywebapp/target/basicWebapp.war -upload -verbose -remote -adminurl t3://10.0.2.16:7001 <Sep 15, 2017 8:52:21 AM PDT> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for application, basicWebapp [archive: /export/home/jenkins/mywebapp/target/basicWebapp.war], to configured targets.> Task 0 initiated: [Deployer:149026]deploy application basicWebapp on AdminServer. Task 0 completed: [Deployer:149026]deploy application basicWebapp on AdminServer. Target state: deploy completed on Server AdminServer Target Assignments: + basicWebapp AdminServer [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 01:22 min [INFO] Finished at: 2017-09-15T08:52:27-07:00 [INFO] Final Memory: 33M/123M [INFO] ------------------------------------------------------------------------ Now to test if the web app is up and running, go to http://<wls_zone>:7001/basicWebapp in Firefox and you should see something like this: At this point Maven has been set up to be able to connect to the WebLogic instance and control it. In this case we've done a simple build and deploy by using the pre-integration-test goal, but there are many other goals you can use, even to create new WebLogic domains and to start and stop them. For more information check out the Oracle documentation on the different types of goals. This ends this blog on installing Maven on Oracle Solaris. Check in later for the next part in this Hands on Lab series.


Oracle Solaris 11.3 Documentation Updates, November 2017

The Oracle Solaris 11.3 Information Library has been updated with new information about memory page size, security, and SPARC Data Analytics Accelerator (DAX) support. If you do not see “November 2017” in the upper right corner of the library page, reload the browser page. New Security Article Security: An Oracle Solaris Differentiator Memory Page Size Policy The default value of the pagesize-policy resource can prevent migration between platforms whose page sizes differ. For how to migrate Oracle Solaris Kernel Zones between platforms whose page sizes differ, see “About Memory Page Size Policy and Physical Memory” in Creating and Using Oracle Solaris Kernel Zones. GSS-API Credentials You can use per-session GSS-API credentials for authenticating Secure Shell connections in Kerberos. See “GSS-API Authentication in Secure Shell” in Managing Secure Shell Access in Oracle Solaris 11.3 for details. Passwordless Public Key Login Passwordless Public Key login is now supported for Microsoft Active Directory (AD) users. When running with Oracle Directory Service Enterprise Edition (ODSEE) or AD, users can log in with commands such as ssh and sftp without giving a password. See “LDAP Service Module” in Working With Oracle Solaris 11.3 Directory and Naming Services: LDAP. DAX Support You can use the daxstat command to report utilization and performance statistics for Data Analytics Accelerator (DAX) on SPARC M7, SPARC M8, SPARC T7, and SPARC T8 systems. See “Displaying Data Analytics Accelerator Statistics” in Managing System Information, Processes, and Performance in Oracle Solaris 11.3 for examples. For more information about DAX, see “Breaking New Ground with Software in Silicon” and the Oracle Developer Community Software in Silicon site.

