Oracle Solaris 11.4 SRU 81 is now available via “pkg update” from the support repository or by downloading the SRU from My Oracle Support Doc ID 2433412.1. Highlights of the changes in this release are given in the release announcement, and important information to read before installing it is provided in the Readme linked from the above support document. This blog post provides more details about selected new features and interface changes in this SRU, as well as some preparation work for changes coming in future SRUs.

Security and Compliance Features

Login SLEEPTIME moved to pam_unix_auth

The pam_unix_auth(7) module now implements the sleep time after a failed login attempt in the centralized authentication code, replacing the per-login-service code that was previously included in programs such as login(1) and su(8). A new nosleeponfail option may be specified for the pam_unix_auth(7) module in the PAM configuration files to disable this for particular services. The amount of time to sleep is still set via the login_policy/sleeptime property in the system/account-policy:default SMF service. This does not affect authentication via other PAM modules, such as those for LDAP, Kerberos, or third-party methods.

The login(1), su(8), account-policy(8S), and pam_unix_auth(7) man pages were updated for this change.
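As a sketch, disabling the failed-login sleep for a single service might look like the following PAM configuration entry. The service name is a placeholder and the exact module stacking varies by system; see pam.conf(5) and pam_unix_auth(7) for the stacking in use on your machine:

```
# Hypothetical /etc/pam.d/myservice entry: keep pam_unix_auth in the
# auth stack, but skip the post-failure sleep for this service only.
auth required           pam_unix_auth.so.1      nosleeponfail
```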

sulogin allows users with the root role in single-user mode

Since Solaris 11.0, the sulogin(8) command has allowed users with the solaris.system.maintenance authorization to log in in single-user mode. Since Solaris 11.2, pam_unix_account(7) has allowed users with either that authorization or the root role to log in when /etc/nologin is present on the system. To make these consistent, sulogin(8) was modified in SRU 81 to also allow users with the root role to log in in single-user mode.

Auto-Unlock after timer expiry for non-password authentication

The Solaris 11.4.0 release introduced the unlock_after setting to automatically unlock an account on a successful authentication after a timeout period. Prior to SRU 81, only password-based authentication served to unlock the account. Starting in SRU 81 this has been expanded to include other login methods, such as SSH public key authentication. See the user_attr(5), policy.conf(5), and account-policy(8S) man pages for details.
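For example, a per-user override might be set with usermod(8). This transcript is illustrative; the unlock_after keyword and its accepted duration syntax are documented in user_attr(5):

```
# Hypothetical example: automatically unlock jdoe's account 30 minutes
# after it is locked, on the next successful authentication of any
# type (including SSH public key authentication, as of SRU 81).
root# usermod -K unlock_after=30m jdoe
```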

KMIP “failover_limit” property obsoleted

Prior to SRU 81, a KMIP server group configuration included the ability to set a failover_limit, which was documented in kmipcfg(8). That setting is obsolete as of SRU 81 and is no longer honored. Instead, kmip_connect() tries each of the configured servers once, trying each of a server's host entries before moving on to the next configured server. If a reconnect is required by one of the other libkmip functions, it will first attempt to connect to the last known online server. This property may be fully removed in a future SRU.

Networking Features

Per-interface tunable to prevent sending of ICMP unreachable messages

RFC 4443 says “For security reasons, it is recommended that implementations SHOULD allow sending of ICMP destination unreachable messages to be disabled, preferably on a per-interface basis.”

In SRU 81, a new per-interface send-unreachables property has been added to allow admins to control this. It can be set via ipadm set-ifprop -p send-unreachables=off and queried with ipadm show-ifprop -p send-unreachables. The default setting is on, preserving the behavior in previous releases. The ipadm(8) man page was updated to document this.

dladm(8) subcommand to show transceiver information

A new show-transceiver subcommand has been added to the dladm(8) command in SRU 81 to display information about pluggable transceivers in Ethernet NICs. For example:

root# dladm show-transceiver -x net4
LINK  NAME                VALUE                   UNIT
net4  Vendor Name         AVAGO                   --
net4  Vendor OUI          --                      --
net4  Part Number         7326504                 --
net4  Revision            --                      --
net4  Serial Number       AS1631Z0658             --
net4  Date Code           160803                  --
net4  Device              QSFP+                   --
net4  Connector           MPO                     --
net4  Compliance codes    14 00 00 00 00 00 00 00 --
net4  Encoding            SONET Scrambled         --
net4  Bit Rate            10.300                  GBd/s
net4  Link Length Codes   00 32 00 00 00          --
net4  Temperature         45.445                  degrees C
net4  Supply Voltage      3.267                   V
net4  Rx1 Power           0.694400                mW
net4  Rx2 Power           0.710000                mW
net4  Rx3 Power           0.658000                mW
net4  Rx4 Power           0.683400                mW
net4  Tx1 Bias            6.470000                mA
net4  Tx2 Bias            6.564000                mA
net4  Tx3 Bias            6.518000                mA
net4  Tx4 Bias            6.576000                mA

Virtualization Features

Support for more compressed archive formats for Solaris 10 branded zone installation

The solaris10 branded zone installer supports the Unified Archive (UAR), cpio, ustar, xustar, pax, level 0 ufsdump, and ZFS archive formats. Prior to SRU 81, compressed archives were only supported for the cpio and ZFS archive formats. Starting in SRU 81, compression is also supported for ustar, xustar, and pax file archives. A new section, Creating and Using Installation Archives, was added to the solaris10(7) man page, and updates were made to the zoneadm(8) man page.

Non-global zone init(8) and zsched given virtual PIDs 1 and 0

Inside a non-global zone, the process id for the zone’s zsched process is now reported as 0, and the process id for the zone’s init(8) process is now reported as 1. Their process ids remain unchanged when seen from the global zone. This helps software that checks for a parent process id of 1 to determine whether its parent has exited: since SRU 66, orphaned processes in a non-global zone are reparented to that zone’s instance of init rather than to the global zone’s instance.

Synchronized system time enforcement for zone migration

In general, Oracle Solaris zones derive their system time from the system time of the global zone. Thus if a zone is migrated to a host whose system time is not synchronized with the source host’s system time, the zone’s system time can unexpectedly move forward or backward. To avoid this, we previously documented that one of the requirements for zone migration was a running NTP client on both hosts. Starting with SRU 81, the time difference between the source and destination hosts is checked, and the migration fails if the difference is too large. See the zoneadm migrate section of the zoneadm(8) man page for details.

New options for configuring ldmd logging via ‘ldm set-logctl’

Most of ldmd's configurable logging severity levels (i.e., all but 'fatal' and 'cmd') can now be specified to include function name annotations.

ldmd's 'debug' logging severity now has three verbosity levels: 'on' (standard), 'v' (verbose), and 'vv' (very verbose).

Both of these are available via the 'ldm set-logctl' CLI. See the ldm(8) man page for details.

Improved columnar output of 'ldm list' for 'disk', 'hba', and 'san'

This SRU improves the output of 'ldm list' and related CLIs when the names assigned to vdisk or vHBA devices are long. For example, this:

NAME
dc1nvsinf001

DISK
    NAME         VOLUME                 TOUT ID   DEVICE  SERVER         MPGROUP
    dc1nvsinf001_os dc1nvsinf001_xtermio0_0@primary-vds0 10   0    disk@0  primary        dc1nvsinf001_xtermio0_0-mpgroup
    dc1nvsinf001_app1 dc1nvsinf001_xtermio0_1@primary-vds0 10   1    disk@1  primary        dc1nvsinf001_xtermio0_1-mpgroup
    dc1nvsinf001_app2 dc1nvsinf001_xtermio0_2@primary-vds0 10   2    disk@2  primary        dc1nvsinf001_xtermio0_2-mpgroup
    dc1nvsinf001_app3 dc1nvsinf001_xtermio0_3@primary-vds0 10   3    disk@3  primary        dc1nvsinf001_xtermio0_3-mpgroup
    dc1nvsinf001_app4 dc1nvsinf001_xtermio0_4@primary-vds0 10   4    disk@4  primary        dc1nvsinf001_xtermio0_4-mpgroup
    OS_MIRR_50_dc1nvsinf001 OS_MIRR_50_dc1nvsinf001@primary-vds0 10   5    disk@5  primary        OS_MIRR_50_dc1nvsinf001-mpgroup
    xio0_2FF_dc1nvsinf001 xio0_2FF_dc1nvsinf001@primary-vds0 10   6    disk@6  primary        xio0_2FF_dc1nvsinf001-mpgroup
    xio0_300_dc1nvsinf001 xio0_300_dc1nvsinf001@primary-vds0 10   7    disk@7  primary        xio0_300_dc1nvsinf001-mpgroup

------------------------------------------------------------------------------
NAME
dc1pvsdbaud01

DISK
    NAME         VOLUME                 TOUT ID   DEVICE  SERVER         MPGROUP
    xio0_223_dc1pvsdbaud01_pri xio0_223_dc1pvsdbaud01@primary-vds0 30   0    disk@0  primary
    xio0_223_dc1pvsdbaud01_alt xio0_223_dc1pvsdbaud01@alternate-vds0 30   1    disk@1  alternate
    xio0_23E_dc1pvsdbaud01_pri xio0_23E_dc1pvsdbaud01@primary-vds0 30   2    disk@2  primary

becomes more like this:

cdom.1> ldm list -o disk primary
NAME
primary

VDS
    NAME                           VOLUME                                 ID   OPTIONS          MPGROUP                          DEVICE
    primary-vds0                   HDD5-s2-2                              4    ro               pushpage                         /dev/dsk/c15t5000CCA0564F71C9d0s2
                                   HDD5-s2-3                              5    ro                                                /dev/dsk/c15t5000CCA0564F71C9d0s2
                                   vjunk                                  1                                                      /vds-backend
                                   test                                   3                                                      /vds-backend3
                                   vdisk-vh1                              0                     aar                              /dev/dsk/c3t5000CCA0564F6B89d0s2
                                   ldg1                                   2                                                      /var/share/disk_img/root-ldg1.img
                                   vdc-backendddddddddddddddddddddddddddd 6                     mpg11111111111111111111111111111 /vdc-backend
                                   ldg2                                                                                          /var/share/disk_img/root.img
                                   HDD5                                        ro                                                /dev/dsk/c15t5000CCA0564F71C9d0s2
    NAME                           VOLUME                                 ID   OPTIONS          MPGROUP                          DEVICE
    vds456789012345678901234567890

DISK
    NAME                       VOLUME                                              TOUT ID   DEVICE  SERVER                           MPGROUP
    vd-HDD5-s2-2               HDD5-s2-2@primary-vds0                                   2    disk@2  primary                          pushpage
    vd-HDD5-s2-3               HDD5-s2-3@primary-vds0                                   3    disk@3  primary
    jj                         vjunk@primary-vds0                                       1    disk@1  primary
    test                       test@primary-vds0                                        4    disk@4  primary
    aar-disk                   vdisk-vh1@primary-vds0                                   6    disk@6  primary                          aar
    primary-disk7              ldg1@primary-vds0                                        7    disk@7  primary
    vdsdev3                    vdsdev3@ldg2-vds                                         8    disk@8  ldg2
    vjunk                      vjunk@ldg1-vds                                           5    disk@5  ldg1
    foo                        foobar@foo-vds                                           9    disk@9  ldom4444444444444444444444444444
    ldom1111111111111111111111 vdc-backendddddddddddddddddddddddddddd@primary-vds0      10   disk@10 primary                          mpg11111111111111111111111111111

Generate warnings when tunables are set that could impact vnet performance

This specifically relates to the 'vnet_enable_lso' tunable, which can negatively impact vnet performance. If the tunable is found in /etc/system, the following warning is produced:

WARNING: LDom virtual network offloading features are partially disabled
using deprecated /etc/system tunable (vnet_enable_lso = 0), performance
degradation is possible.

vHBA robustness improvements

This SRU includes several bug fixes for vHBA, to better handle LDC channel resets, service domains taking a long time to reboot, and deadlocks in low-memory situations.

System Management Features

x86 boot loader upgraded to GRUB 2.12 & Shim 15.8

The boot loader used on x86 systems has been upgraded to GRUB 2.12, and the secure boot shim has been upgraded to version 15.8. With this update, UEFI platforms will always have the secure boot loader image installed, so it will no longer be necessary to run “bootadm install-bootloader -s” to enable secure boot, nor to disable secure boot when doing an OS install. Systems already running secure boot may require the admin to accept the hash for the new grub2securex64.efi image after the update.

The boot loader image will be updated after the first boot into an SRU containing GRUB 2.12, by the boot-loader-update SMF service. After the update, the version of GRUB displayed on the menu will be “GNU GRUB 2.12,<SRU version>”.

From Oracle Solaris 11.4.81 onwards, the installadm command places the boot files for x86 Automated Install clients configured with “installadm create-client” under the directory /etc/netboot/client/<client id>. If you are managing the DHCP server configuration outside of installadm, you will need to modify the configuration for newly-created clients to reflect the changed path. The specific path for each client will be printed by installadm when the client is created.

If your Automated Install server is not updated to 11.4.81, x86 installation clients that are not assigned to a specific service using installadm create-client may fail to boot as the shim will be unable to locate GRUB. This can be worked around by placing a symbolic link to the updated GRUB in /etc/netboot. We suggest the following:

ln -s service/default-i386/boot/grub/grub2netx64.efi /etc/netboot/grubx64.efi

Installation and Software Management Features

Automatic access to support repo from OCI compute instances

Beginning with 11.4.79, OCI compute instances have automatic access to the Oracle Solaris Support Repository and can install additional software or updates from it immediately; on older versions you will need to register for a user key and certificate.

Enhancements for Developers

getpeereid(3c) added to libc

A getpeereid() function was added to libc, built on top of the existing getpeerucred() system call. The API is compatible with the interface found on other systems and proposed for inclusion in a future POSIX version.

linker support for CTF generation

SRU 81 adds a new option to the ld(1) link-editor, “-z ctf”, that provides the capabilities of the ctfmerge(1) utility as part of the link-edit, thereby eliminating the need to run ctfmerge afterwards as a post-processing step. The ctfmerge(1) utility remains supported for existing build systems, but the new option simplifies adding CTF support to builds that don’t already run ctfmerge.
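An illustrative build fragment showing the before and after (the exact ctfmerge invocation varies by build system; see ctfmerge(1) for its real options):

```
# Previously: CTF data merged in a separate post-link step
ld -o prog main.o util.o
ctfmerge -o prog main.o util.o     # illustrative invocation

# With SRU 81: the link-editor merges CTF itself
ld -z ctf -o prog main.o util.o
```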

Solaris CTF version 3

Version 2 of CTF (Compact C Type Format) has been delivered with Solaris objects for 23 years, since Solaris 9. Solaris CTF version 3 lifts the severe limits on the number and size of types found in version 2: the 16-bit integer values used in version 2 are widened to 32 bits.

ctfconvert, ctfmerge, and ld -z ctf now generate CTF version 3 data by default. ctfconvert and ctfmerge offer a new -V option that allows the user to explicitly choose the version to generate. The ld -z ctf option was similarly extended with a 'version' keyword to allow the CTF version to be specified.

libctf internally supports all known CTF versions (1-3), through the existing libctf interfaces. As such, mdb, kmdb, DTrace, and any other CTF consumers are immediately able to use the new format. Users of these tools should not experience any change in operation.

ctfconvert(1), ctfmerge(1), and ld(1) manual pages were updated. The ctf(5) manual page was added.

Other Changes

Significant FOSS Updates

PHP 8.4 has been added alongside the existing PHP versions 8.1, 8.2, and 8.3. The PHP 8.1 package has been marked legacy in preparation for removal in a future SRU, since the PHP community is ending support for 8.1 at the end of December 2025.

ImageMagick has been upgraded from the legacy version 6 branch to the current version 7 branch. This delivers a single /usr/bin/magick command that replaces all the individual commands in the previous version, such as /usr/bin/convert and /usr/bin/identify. For compatibility, the package delivers symlinks from the old command paths to /usr/bin/magick, but using those will display a deprecation warning suggesting you convert to using the magick command instead.

LLVM/clang version 19 has been added alongside the existing version 13 package in SRU 81. A new pkg(7) mediator named llvm allows choosing which version is used for the symlinks in /usr/bin. Specific command versions can be run by using the paths under /usr/llvm-13 or /usr/llvm-19.
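Selecting the default version might look like the transcript below, which is illustrative; see pkg(1) for the set-mediator subcommand details:

```
# Point the /usr/bin symlinks at LLVM 19 via the llvm mediator:
root# pkg set-mediator -V 19 llvm
root# clang --version        # now resolves to /usr/llvm-19
```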

Before Upgrading to SRU 81

Migration from OpenSSL 1.0.2 to 3.0

SRUs 42 through 80 provided packages for both versions 1.0.2 and 3.0 of the OpenSSL libraries. OpenSSL 1.0.2 has been removed in SRU 81. All locally built applications and ISV applications that use the system-provided OpenSSL 1.0.2 need to migrate to OpenSSL 3.0 before updating to SRU 81. The OpenSSL Foundation has supplied an OpenSSL 3.0 migration guide to help with this. Migration of Solaris-delivered packages to OpenSSL 3 was delivered incrementally over a number of SRUs and was completed in SRU 78.

Preparation for Upcoming SRUs

The following are a subset of the removals planned for future SRUs. See End of Feature Notices for Oracle Solaris 11 for the complete list of removals announced so far.

Migration from MySQL 8.0 to 8.4

SRU 78 added packages for version 8.4 of the MySQL database alongside the existing packages for version 8.0. Upstream support for MySQL 8.0 is scheduled to end in April 2026 and it is planned for removal in a future Solaris 11.4 SRU. Administrators of MySQL 8.0 databases should follow the instructions in MySQL 8.4 Reference Manual: Upgrading MySQL to migrate their databases to version 8.4 before upgrading to an SRU in which 8.0 has been removed.

Migration from PCRE to PCRE2

SRU 78 provides packages for both ABI versions 1 and 2 of the Perl Compatible Regular Expressions (PCRE) library, as provided by library/pcre (version 8.45) and library/pcre2 (version 10.42). Upstream ended support for the version 1 API/ABI after June 2021 and recommends all users port to version 2. Migration of the Solaris-delivered packages to the new version is ongoing and continues to be delivered incrementally over a number of SRUs. Once this is complete, the package for version 1 will be obsoleted and removed on upgrade. All locally built applications and ISV applications that use the system-provided libpcre need to migrate to libpcre2 as soon as possible.

Migration from Python 3.9 to 3.11 or 3.13

SRU 78 provides packages for Python versions 3.9, 3.11, and 3.13. Upstream support for Python 3.9 will end in October 2025. Python 3.9 will be removed in a future SRU. All locally built applications and ISV applications that use the system-provided Python 3.9 need to migrate to a later version as soon as possible. See Porting to Python 3.10, Porting to Python 3.11, Porting to Python 3.12, and Porting to Python 3.13 to help with this. Migration of Solaris-delivered core functionality is ongoing and is being delivered incrementally over a number of SRUs.

Migration from DSA to newer SSH key types

SRU 81 provides OpenSSH 9.6. We plan to migrate to OpenSSH 10 in a future SRU. OpenSSH 10 has completely removed support for the DSA signature algorithm, which is available but disabled by default in OpenSSH releases 7 through 9.

Users relying on ssh-dss keys for passwordless logins should set up keys using newer algorithms now, before updating to an SRU that removes support for DSA keys. For connections to machines running SunSSH on Solaris 11.3 or older, an ssh-rsa key will be required.
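Generating and installing a replacement key might look like the transcript below; the user and host names are placeholders:

```
# Create an Ed25519 key (or an RSA key, for hosts still running SunSSH):
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
$ ssh-keygen -t rsa -b 3072 -f ~/.ssh/id_rsa

# Install the new public key on each target host:
$ ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host
```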