Wednesday Feb 13, 2013

Solaris Stories

Demand for Solaris continues to increase, as shown by these recent customer references:

Tuesday Feb 12, 2013

Solaris 10 1/13 (aka "Update 11") Released

Larry Wake lets us know that Solaris 10 1/13 has been released and is available for download.

Tuesday Jan 22, 2013

Analyst commentary on Solaris and SPARC

Forrester's Richard Fichera updates and confirms his earlier views on the present and future of Solaris and SPARC.

Wednesday Nov 14, 2012

Webcast: New Features of Solaris 11.1 and Solaris Cluster 4.1

If you missed last week's webcast of the new features in Oracle Solaris 11.1 you can view the recording. The speakers discuss changes that improve performance and scalability, particularly for Oracle DB, and many other enhancements.

New features include Optimized Shared Memory (improves DB startup time), accelerated kernel locks (improves Oracle RAC performance and scalability), virtual memory improvements, a DTrace data collector in the DB, Zones installed on Shared Storage (simplifies migration), Data Center Bridging, and Edge Virtual Bridging.

To view the archived webcast, you must register and use the URL that you receive in e-mail.

Tuesday Nov 13, 2012

Oracle Solaris: Zones on Shared Storage

Oracle Solaris 11.1 has several new features; a detailed list is available.

One of the significant new features, and the most significant new feature related to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS.

ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to Fibre Channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past.

With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! :-) What could I use, instead? [There he goes, foreshadowing again... -Ed.]

Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. ;-) Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It makes it easy to install multiple copies of Solaris as guests on top of any popular host OS (Microsoft Windows, Mac OS X, Solaris, Oracle Linux, and other Linux distributions). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system.

Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. [Update, 2013.01.28: Apparently the previous sentence caused some confusion. I do recommend the use of Zones on Shared Storage in production environments, when appropriate storage is used. "Appropriate storage" would include SAN or iSCSI at this point. I do not recommend using ZOSS with local disks in production because doing so would require moving the disks between computers.]

In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices.
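As a quick sketch of that syntax (every device name and NAA ID below is hypothetical; the zonecfg(1M) and suri(5) man pages are authoritative), the three storage URI forms look roughly like this:

```shell
# Hypothetical storage URIs for a rootzpool resource; one "add storage"
# line per backing device.  The device name and NAA IDs are made up.
zonecfg -z zone1 <<'EOF'
add rootzpool
add storage dev:dsk/c7t2d0    # a local (or virtual) disk, as used below
end
EOF
# An FC logical unit would instead use something like:
#   add storage lu:luname.naa.600144f035084e500000508ccc3a0001
# and an iSCSI LUN something like:
#   add storage iscsi://storhost/luname.naa.600144f035084e500000508ccc3a0001
```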

Why Migrate?

Why is the migration of virtual servers important? Some of the most common reasons are:
  • Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance.
  • Moving a workload to a larger system because the workload has outgrown its original system.
  • If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot.
  • You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system.


For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool:

  • contains the zone's Solaris content, i.e. the root file system
  • does not contain any content not related to the zone
  • can only be mounted by one Solaris instance at a time

Method Overview

Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output.
  1. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!")
  2. Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.)
  3. Install the zone on one system, onto shared storage.
  4. Boot the zone.
  5. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.)
  6. Shutdown the zone.
  7. Detach the zone from the original system.
  8. Attach the zone to its new "home" system.
  9. Boot the zone.
The zone can be used normally, and even migrated back, or to a different system.
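The detach/attach core of the procedure above can be sketched as a short command sequence (hostnames and the zone name follow the example in this entry; this is an outline, not verbatim output):

```shell
# On sysA: capture the configuration, stop the zone, and release its zpool.
zonecfg -z zone1 export > /tmp/zone1.cfg   # re-usable zonecfg input
zoneadm -z zone1 shutdown
zoneadm -z zone1 detach                    # exports the zone's zpool

# Copy /tmp/zone1.cfg to sysB (scp, shared media, ...), then on sysB,
# after adjusting the storage device name for that host:
zonecfg -z zone1 -f /tmp/zone1.cfg
zoneadm -z zone1 attach                    # imports the zone's zpool
zoneadm -z zone1 boot
```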


The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB".

Note that the Solaris guests might each use a different device name for the VDI they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone.
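A minimal sketch of that discovery step (output omitted; the pool and device names you see will differ):

```shell
# List pools that are visible but not yet imported; the output names
# the backing device(s) without actually importing anything.
zpool import

# Or enumerate all disks the host can see; exit the prompt with ^D.
format
```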

The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them.

  1. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions.
  2. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk.
  3. Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, you will not see a windowing system after booting. To install the GUI and other important things, log in and run "pkg install solaris-desktop" and take a break while it installs those important things.
  4. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions.
  5. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
  6. Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time.
  7. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1."


In the example shown below, I make these assumptions.
  • The guest that will own the zone at the beginning is named sysA.
  • The guest that will own the zone after the first migration is named sysB.
  • On sysA, the shared disk is named /dev/dsk/c7t2d0
  • On sysB, the shared disk is named /dev/dsk/c7t3d0

(Finally!) The Steps

Step 1) Determine the name of the disk that will move back and forth between the systems.
root@sysA:~# format
Searching for disks...done

       0. c7t0d0 
       1. c7t2d0 
Specify disk (enter its number): ^D
Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated.
root@sysA:~# format -e c7t2d0
selecting c7t2d0
[disk formatted]

format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition,  otherwise type "n" to edit the
 partition table. n
Enter Selection: 1
  G=EFI_SYS    0=Exit? f

format> label
Specify Label type[1]: 1
Ready to label disk, continue? y

format> quit

root@sysA:~# ls /dev/dsk/c7t2d0

Step 3) Configure zone1 on sysA.
root@sysA:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set zonename=zone1
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add rootzpool
zonecfg:zone1:rootzpool> add storage dev:dsk/c7t2d0
zonecfg:zone1:rootzpool> end
zonecfg:zone1> exit
root@sysA:~# zonecfg -z zone1 info
zonename: zone1
zonepath: /zones/zone1
brand: solaris
autoboot: false
ip-type: exclusive
        storage: dev:dsk/c7t2d0
Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...)
root@sysA:~# zoneadm -z zone1 install
Created zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install
       Image: Preparing at /zones/zone1/root.

 AI Manifest: /tmp/manifest.xml.RXaycg
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: zone1
Installation: Starting ...

              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            183/183   33556/33556  222.2/222.2  2.8M/s

PHASE                                          ITEMS
Installing new actions                   46825/46825
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual


        Done: Installation completed in 1696.847 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install
Step 5) Boot the Zone.
root@sysA:~# zoneadm -z zone1 boot
Step 6) Login to zone's console to complete the specification of system information.
root@sysA:~# zlogin -C zone1
Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation.

Step 7) Shutdown the zone so it can be "moved."

root@sysA:~# zoneadm -z zone1 shutdown
Step 8) Detach the zone so that the original global zone can't use it.
root@sysA:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone1            installed  /zones/zone1                   solaris  excl
root@sysA:~# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
zone1_rpool  1.98G   484M  1.51G  23%  1.00x  ONLINE  -
root@sysA:~# zoneadm -z zone1 detach
Exported zone zpool: zone1_rpool
Step 9) Review the result and shutdown sysA so that sysB can use the shared disk.
root@sysA:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
root@sysA:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone1            configured /zones/zone1                   solaris  excl
root@sysA:~# init 0
Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command.

When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.)

root@sysB:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone1            configured /zones/zone1                   solaris  excl
root@sysB:~# zonecfg -z zone1 info
zonename: zone1
zonepath: /zones/zone1
brand: solaris
autoboot: false
ip-type: exclusive
        linkname: net0
        storage: dev:dsk/c7t3d0
Step 11) Attaching the zone automatically imports the zpool.
root@sysB:~# zoneadm -z zone1 attach
Imported zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image.

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach

root@sysB:~# zoneadm -z zone1 boot
root@sysB:~# zlogin zone1
[Connected to zone 'zone1' pts/2]
Oracle Corporation      SunOS 5.11      11.1    September 2012
Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back.
root@zone1:~# ls /opt
root@zone1:~# touch /opt/fileA
root@zone1:~# ls -l /opt/fileA
-rw-r--r--   1 root     root           0 Oct 22 14:47 /opt/fileA
root@zone1:~# exit

[Connection to zone 'zone1' pts/2 closed]
root@sysB:~# zoneadm -z zone1 shutdown
root@sysB:~# zoneadm -z zone1 detach
Exported zone zpool: zone1_rpool
root@sysB:~# init 0
Step 13) Back on sysA, check the status.
Oracle Corporation      SunOS 5.11      11.1    September 2012
root@sysA:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone1            configured /zones/zone1                   solaris  excl
root@sysA:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
Step 14) Re-attach the zone back to sysA.
root@sysA:~# zoneadm -z zone1 attach
Imported zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image.

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach

root@sysA:~# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
zone1_rpool  1.98G   491M  1.51G  24%  1.00x  ONLINE  -
root@sysA:~# zoneadm -z zone1 boot
root@sysA:~# zlogin zone1
[Connected to zone 'zone1' pts/2]
Oracle Corporation      SunOS 5.11      11.1    September 2012
root@zone1:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  1.98G   538M  1.46G  26%  1.00x  ONLINE  -
Step 15) Check for the file created on sysB, earlier.
root@zone1:~# ls -l /opt
total 1
-rw-r--r--   1 root     root           0 Oct 22 14:47 fileA

Next Steps

Here is a brief list of some of the fun things you can try next.
  • Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones!
  • Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.)
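As a hedged sketch of that second item (zone and device names are hypothetical), configuring two devices in the rootzpool before installation yields a mirrored pool:

```shell
# Two "add storage" lines => zoneadm builds a two-way mirror at install.
zonecfg -z zone2 <<'EOF'
create
set zonepath=/zones/zone2
add rootzpool
add storage dev:dsk/c7t4d0
add storage dev:dsk/c7t5d0
end
EOF
zoneadm -z zone2 install
```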


Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.

Monday Nov 12, 2012

New Solaris Cluster!

We released Oracle Solaris Cluster 4.1 recently. OSC offers both High Availability (HA) and Scalable Services capabilities. HA delivers automatic restart of software on the same cluster node and/or automatic failover from a failed node to a working cluster node. Software and support are available for both x86 and SPARC systems.

The Scalable Services features manage multiple cluster nodes all providing a load-balanced service such as web servers or application servers.

OSC 4.1 includes the ability to recover services from software failures and from failures of hardware components such as DIMMs, CPUs, and I/O cards. It also provides a global file system, rolling upgrades, and much more.

Oracle Availability Engineering posted a brief description and links to details. Or, you can just download it now!

Thursday Nov 08, 2012

Happy Birthday! (to Solaris and SPARC)

Oracle is celebrating the 20th and 25th anniversaries (birthdays?) of Solaris and SPARC.

You can find video highlights of the histories of SPARC and Solaris and brief (static) infographic histories of SPARC and Solaris.

Wednesday Nov 07, 2012

Today: Oracle Solaris [&Cluster] Live Webcast!

Today, Oracle is hosting a live webcast, with Q&A, discussing Solaris 11.1 and Solaris Cluster 4.1. The webcast begins at 11AM EST, but you should register before the event.

Thursday Oct 25, 2012

Oracle Solaris 11.1

Oracle Solaris 11.1 was announced at Oracle OpenWorld recently. This release added 300 new performance and feature enhancements.

My favorite new features:

  • Solaris Zones on Shared Storage
  • Support for 32 TB (!) of RAM
  • Improved Oracle RAC lock latency
  • Dynamically resize the Oracle DB SGA
  • Industry-first support for FedFS
You can learn more from the press release or by attending the Solaris 11.1 webcast on November 7.

Wednesday Oct 24, 2012

SPARC SuperCluster Papers

Oracle has been publishing white papers that describe uses and characteristics of the SPARC SuperCluster product. I hope you find some of those papers useful.

Tuesday Oct 23, 2012

Virtual Networks in Oracle Solaris - Part 5

A long time ago, in a galaxy
  far, far away...

I wrote four blog entries to describe the new network virtualization features that were in Solaris 11 Express:

  • Part 1 introduced the concept of network virtualization and listed the basic virtual network elements.
  • Part 2 expanded on the concepts and discussed the resource management features.
  • Part 3 demonstrated the creation of some of these virtual network elements.
  • Part 4 demonstrated the network resource controls.
I had planned a final entry that added virtual routers to the list of virtual network elements, but Jeff McMeekin wrote a paper that discusses the same features. That paper is available at OTN. And this Jeff can't write any better than that Jeff...

All of the features described in those blog entries and that paper are also available in Solaris 11. It is possible that some details have changed, but the vast majority of the content is unchanged.

Tuesday Jan 31, 2012

(Solaris) Destination: Detroit

We will have another Solaris 11 Technology Forum soon. The two I hosted in New York City and Boston included over 150 attendees. Dozens more attended the session in Chicago hosted by my associate Scott Dickson. These sessions pack hundreds of details regarding the practical uses of Solaris 11 into 4 hours - plus lunch!

Next week - February 8, to be exact - I will host a session near Detroit, Michigan. You can attend by registering online. Two other Solaris experts - Dave Miner and Alex Barclay - will join me as we explain new Solaris 11 features such as the Image Packaging System and Automated Installer, network virtualization and resource controls, Immutable Zones, and ZFS Encryption and other new security features.

Registration is not required, but available seating is filling up quickly, so don't wait!

Thursday Nov 10, 2011

Solaris 11 Released!!

Oracle released Solaris 11 on November 9, 2011.

You can download Solaris 11.

You can also watch videos, participate in forums, and read white papers, data sheets and documentation.

Friday Oct 28, 2011

Oracle Solaris 11 Launch

Join Oracle executives Mark Hurd and John Fowler and key Oracle Solaris Engineers and Execs at the Oracle Solaris 11 launch event in New York City, at Gotham Hall on Broadway, November 9th and learn how you can build your infrastructure with Oracle Solaris 11 to:

  • Accelerate internal, public, and hybrid cloud applications
  • Optimize application deployment with built-in virtualization
  • Achieve top performance and cost advantages with Oracle Solaris 11–based engineered systems
The launch event will also feature exclusive content for our in-person audience including a session led by the VP of core Solaris development and his leads on Solaris 11 and a customer insights panel during lunch. We will also have a technology showcase featuring our latest systems and Solaris technologies. The Solaris executive team will also be there throughout the day to answer questions and give insights into future developments in Solaris.

Don't miss the Oracle Solaris 11 launch in New York on November 9. Register Today!

Tuesday Oct 18, 2011

What's New in Oracle Solaris 11

Oracle Solaris 11 adds new features to the #1 Enterprise OS, Solaris 10. Some of these features were in "preview form" in Solaris 11 Express. The feature sets introduced there have been greatly expanded in order to make Solaris 11 ready for your data center. Also, new features have been added that were not in Solaris 11 Express in any form.

The list of features below is not exhaustive. Complete documentation about changes to Solaris will be made available. To learn more, register for the Solaris 11 launch. You can attend in person, in New York City, or via webcast.

Software management features designed for cloud computing

The new package management system is far easier to use than previous versions of Solaris.
  • A completely new Solaris packaging system uses network-based repositories (located at our data centers or at yours) to modernize Solaris packaging.
  • A new version of Live Upgrade minimizes service downtime during package updates. It also provides the ability to simply reboot to a previous version of the software if necessary - without resorting to backup tapes.
  • The new Automated Installer replaces Solaris JumpStart and simplifies hands-off installation. AI also supports automatic creation of Solaris Zones.
  • Distro Constructor creates Solaris binary images that can be installed over the network, or copied to physical media.
  • The previous SVR4 (System V Release 4) packaging tools are included in Solaris 11 for installation of non-Solaris software packages.
  • All of this is integrated with ZFS. For example, the alternate boot environments (ABEs) created by the Live Upgrade tools are ZFS clones, minimizing the time to create them and the space they occupy.
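The "simply reboot to a previous version" capability mentioned above is driven by the beadm(1M) utility in Solaris 11; a minimal sketch, with a hypothetical boot environment name:

```shell
beadm list                         # show existing boot environments (BEs)
beadm activate solaris-before-upd  # mark an earlier BE as the default
init 6                             # reboot into the activated BE
```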

Network virtualization and resource control features enable networks-in-a-box

Previewed in Solaris 11 Express, the network virtualization and resource control features in Oracle Solaris 11 enable you to create an entire network in a Solaris instance. This can include virtual switches, virtual routers, integrated firewall and load-balancing software, IP tunnels, and more. I described the relevant concepts in an earlier blog entry.

In addition to the significant improvements in flexibility compared to a physical network, network performance typically improves. Instead of traversing multiple physical network components (NICs, cables, switches and routers), packet transfers are accomplished by in-memory loads and stores. Packet latency shrinks dramatically, and aggregate bandwidth is no longer limited by NICs, but by memory link bandwidth.

But mimicking a network wasn't enough. The Solaris 11 network resource controls provide the ability to dynamically control the amount of network bandwidth that a particular workload can use. Another blog entry described these controls. (Note that some of the details may have changed between the Solaris 11 Express details described in that entry, and the details of Solaris 11.)
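For example, a bandwidth cap on a virtual NIC is a one-line link property change (link names here are hypothetical; see dladm(1M) for details):

```shell
dladm create-vnic -l net0 vnic1          # create a VNIC on physical link net0
dladm set-linkprop -p maxbw=100M vnic1   # cap vnic1 at roughly 100 Mbps
dladm show-linkprop -p maxbw vnic1       # verify the setting
```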

Easy, efficient data management

Solaris 11 expands on the award-winning ZFS file system, adding encryption and deduplication. Multiple encryption algorithms are available and can make use of encryption features included in the CPU, such as the SPARC T3 and T4 CPUs. An in-kernel CIFS server was also added, and the data is stored in a ZFS dataset. Ease-of-use is still a high-priority goal. Enabling CIFS service is as simple as enabling a dataset property.
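A hedged sketch of those dataset-level features (pool and dataset names are hypothetical, and the property names are from my reading of zfs(1M) for this release - check the man page before relying on them):

```shell
zfs create -o encryption=on rpool/export/secret  # prompts for a passphrase
zfs set dedup=on rpool/export/secret             # enable deduplication
svcadm enable -r smb/server                      # start the in-kernel CIFS server
zfs set sharesmb=on rpool/export/secret          # share the dataset over CIFS
```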

Improved built-in computer virtualization

Along with ZFS, Oracle Solaris Zones continues to be a core feature set in use at many data centers. (The use of the word "Zones" will be preferred over the use of "Containers" to reduce confusion.) These features are enhanced in Solaris 11. I will detail these enhancements in a future blog entry, but here is a quick summary:
  • Greater flexibility for Immutable Zones (read-only zones). Multiple options are available in Solaris 11.
  • A zone can be an NFS server!
  • Administration of existing zones can be delegated to users in the global zone.
  • zonestat(1) reports on resource consumption of zones. I blogged about the Solaris 11 Express version of this tool.
  • A P2V "pre-flight" checker verifies that a Solaris 10 or Solaris 11 system is configured correctly for migration (P2V) into a zone on Solaris 11.
  • To simplify the process of creating a zone, by default a zone gets a VNIC that is automatically configured on the most obvious physical NIC. Of course, you can manually configure a plethora of non-default network options.

Advanced protection

Long known as one of the most secure operating systems on the planet, Oracle Solaris 11 continues making advances, including:
  • CPU-speed network encryption means no compromises
  • Secure startup: by default, only the ssh service is enabled - a minimal attack surface reduces risk
  • Restricted root: by default, 'root' is a role, not a user - all actions are logged or audited by username
  • Anti-spoofing properties for data links
  • ...and more.
As you can guess, we're looking forward to releasing Oracle Solaris 11! Its new features provide you with the tools to simplify enterprise computing. To learn more about Solaris 11, register for the Solaris 11 launch.

Jeff Victor writes this blog to help you understand Oracle's Solaris and virtualization technologies.

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

