Tuesday Jan 31, 2012

(Solaris) Destination: Detroit

We will have another Solaris 11 Technology Forum soon. The two I hosted in New York City and Boston included over 150 attendees. Dozens more attended the session in Chicago hosted by my associate Scott Dickson. These sessions pack hundreds of details regarding the practical uses of Solaris 11 into 4 hours - plus lunch!

Next week - February 8, to be exact - I will host a session near Detroit, Michigan. You can attend by registering online. Two other Solaris experts - Dave Miner and Alex Barclay - will join me as we explain new Solaris 11 features such as the Image Packaging System and Automated Installer, network virtualization and resource controls, Immutable Zones, and ZFS Encryption and other new security features.

Registration is not required, but available seating is filling up quickly, so don't wait!

Thursday Nov 10, 2011

Solaris 11 Released!!

Oracle released Solaris 11 on November 9, 2011.

You can download Solaris 11.

You can also watch videos, participate in forums, and read white papers, data sheets and documentation.

Friday Oct 28, 2011

Oracle Solaris 11 Launch

Join Oracle executives Mark Hurd and John Fowler, along with key Oracle Solaris engineers and executives, at the Oracle Solaris 11 launch event at Gotham Hall on Broadway in New York City on November 9th, and learn how you can build your infrastructure with Oracle Solaris 11 to:

  • Accelerate internal, public, and hybrid cloud applications
  • Optimize application deployment with built-in virtualization
  • Achieve top performance and cost advantages with Oracle Solaris 11–based engineered systems
The launch event will also feature exclusive content for our in-person audience, including a session on Solaris 11 led by the VP of core Solaris development and his leads, and a customer insights panel during lunch. We will also have a technology showcase featuring our latest systems and Solaris technologies. The Solaris executive team will be there throughout the day to answer questions and give insights into future developments in Solaris.

Don't miss the Oracle Solaris 11 launch in New York on November 9. Register Today!

Tuesday Oct 18, 2011

What's New in Oracle Solaris 11

Oracle Solaris 11 adds new features to the #1 Enterprise OS, Solaris 10. Some of these features were in "preview form" in Solaris 11 Express. The feature sets introduced there have been greatly expanded in order to make Solaris 11 ready for your data center. Also, new features have been added that were not in Solaris 11 Express in any form.

The list of features below is not exhaustive. Complete documentation about changes to Solaris will be made available. To learn more, register for the Solaris 11 launch. You can attend in person, in New York City, or via webcast.

Software management features designed for cloud computing

The new package management system is far easier to use than the packaging tools in previous versions of Solaris.
  • A completely new packaging system, the Image Packaging System (IPS), modernizes Solaris packaging with network-based repositories (located at our data centers or at yours).
  • A new version of Live Upgrade minimizes service downtime during package updates. It also provides the ability to simply reboot to a previous version of the software if necessary - without resorting to backup tapes.
  • The new Automated Installer replaces Solaris JumpStart and simplifies hands-off installation. AI also supports automatic creation of Solaris Zones.
  • Distro Constructor creates Solaris binary images that can be installed over the network, or copied to physical media.
  • The previous SVR4 (System V Release 4) packaging tools are included in Solaris 11 for installation of non-Solaris software packages.
  • All of this is integrated with ZFS. For example, the alternate boot environments (ABEs) created by the Live Upgrade tools are ZFS clones, minimizing the time to create them and the space they occupy. (See the sketch after this list.)
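
As a rough sketch of how these pieces fit together, here is one possible update cycle using the new tools; the boot environment names are illustrative assumptions rather than output from a real system:

GZ# pkg publisher                       # list the configured package repositories
GZ# pkg update --be-name s11-update     # update packages into a new boot environment (a ZFS clone)
GZ# beadm list                          # show all boot environments and which one boots next
GZ# beadm activate solaris              # fall back to the previous BE if needed, then reboot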

Network virtualization and resource control features enable networks-in-a-box

Previewed in Solaris 11 Express, the network virtualization and resource control features in Oracle Solaris 11 enable you to create an entire network in a Solaris instance. This can include virtual switches, virtual routers, integrated firewall and load-balancing software, IP tunnels, and more. I described the relevant concepts in an earlier blog entry.

In addition to the significant improvements in flexibility compared to a physical network, network performance typically improves. Instead of traversing multiple physical network components (NICs, cables, switches and routers), packet transfers are accomplished by in-memory loads and stores. Packet latency shrinks dramatically, and aggregate bandwidth is no longer limited by NICs, but by memory link bandwidth.

But mimicking a network wasn't enough. The Solaris 11 network resource controls provide the ability to dynamically control the amount of network bandwidth that a particular workload can use. Another blog entry described these controls. (Note that some details may have changed between Solaris 11 Express, which that entry describes, and Solaris 11.)
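
For a flavor of these controls, here is a minimal sketch that caps the bandwidth of a single service using a flow; the link name, port, bandwidth value, and flow name are all illustrative assumptions, and flowadm(1M) documents the exact syntax:

GZ# flowadm add-flow -l emp_app1 -a transport=tcp,local_port=8080 -p maxbw=50M app-http-flow
GZ# flowadm show-flow -l emp_app1
GZ# flowadm show-flowprop -p maxbw app-http-flow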

Easy, efficient data management

Solaris 11 expands on the award-winning ZFS file system, adding encryption and deduplication. Multiple encryption algorithms are available, and they can take advantage of the cryptographic acceleration built into CPUs such as the SPARC T3 and T4. An in-kernel CIFS server has also been added, with its data stored in ZFS datasets. Ease of use remains a high-priority goal: enabling CIFS service is as simple as setting a dataset property.
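
For example, a minimal sketch along these lines might look like the following, assuming a pool named 'tank' and a dataset name invented for illustration; the property names follow Solaris 11 Express, so check zfs(1M) on your release for the exact syntax:

GZ# zfs create -o encryption=on tank/docs    # prompts for a passphrase by default
GZ# zfs set dedup=on tank/docs               # enable deduplication for this dataset
GZ# svcadm enable -r smb/server              # enable the in-kernel SMB/CIFS service
GZ# zfs set sharesmb=on tank/docs            # share the dataset over CIFS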

Improved built-in computer virtualization

Along with ZFS, Oracle Solaris Zones continues to be a core feature set in use at many data centers. (The word "Zones" is now preferred over "Containers" to reduce confusion.) These features are enhanced in Solaris 11. I will detail these enhancements in a future blog entry, but here is a quick summary:
  • Greater flexibility for immutable zones - called "sparse-root zones" in Solaris 10. Multiple options are available in Solaris 11.
  • A zone can be an NFS server!
  • Administration of existing zones can be delegated to users in the global zone.
  • zonestat(1) reports on the resource consumption of zones. I blogged about the Solaris 11 Express version of this tool. (A brief sketch of delegated administration and zonestat appears after this list.)
  • A P2V "pre-flight" checker verifies that a Solaris 10 or Solaris 11 system is configured correctly for migration (P2V) into a zone on Solaris 11.
  • To simplify the process of creating a zone, by default a zone gets a VNIC that is automatically configured on the most obvious physical NIC. Of course, you can manually configure a plethora of non-default network options.
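
Here is a brief sketch of two of those items, delegated administration and zonestat; the zone name comes from the entries below, while the user name is a hypothetical example, so consult zonecfg(1M) and zonestat(1) for the authoritative syntax:

GZ# zonecfg -z emp-web1
zonecfg:emp-web1> add admin
zonecfg:emp-web1:admin> set user=webadmin
zonecfg:emp-web1:admin> set auths=login,manage
zonecfg:emp-web1:admin> end
zonecfg:emp-web1> exit
GZ# zonestat 10

The last command reports CPU, memory, and network consumption per zone every 10 seconds.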

Advanced protection

Long known as one of the most secure operating systems on the planet, Oracle Solaris 11 continues making advances, including:
  • CPU-speed network encryption means no compromises
  • Secure startup: by default, only the ssh service is enabled - a minimal attack surface reduces risk
  • Restricted root: by default, 'root' is a role, not a user - all actions are logged or audited by username
  • Anti-spoofing properties for data links (see the example after this list)
  • ...and more.
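
As a small example of the anti-spoofing properties, the link protections can be applied to a VNIC like this; the VNIC name is borrowed from the entries below, and the IP address is an illustrative assumption:

GZ# dladm set-linkprop -p protection=mac-nospoof,ip-nospoof emp_app1
GZ# dladm set-linkprop -p allowed-ips=10.140.205.82 emp_app1
GZ# dladm show-linkprop -p protection,allowed-ips emp_app1
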
As you can guess, we're looking forward to releasing Oracle Solaris 11! Its new features provide you with the tools to simplify enterprise computing. To learn more about Solaris 11, register for the Solaris 11 launch.

Tuesday Oct 11, 2011

Solaris 11 is Coming!

Yes, the noise you hear is the pre-launch sequence for Solaris 11, which is getting closer every day!

The Launch Event will be held in New York City, at Gotham Hall on Broadway. Space is limited, so if you want to attend in person you should register online. A webcast of the event will also be available. Registration is available for both.

I have registered, so if you attend perhaps I will see you there!

Monday Sep 12, 2011

Oracle Solaris 11 Early Adopter Program

The next step in the release of Oracle Solaris 11 is here! Gold members of the Oracle Partner Network (OPN) may download the Early Adopter release to begin qualification of their applications on Oracle Solaris 11.

Oracle Solaris 11 includes the major new feature sets that are available in Solaris 11 Express, released in November 2010, and much more. You can find a complete description of this Early Adopter release, and a download link, at oracle.com.

If you are not currently a member of the Oracle PartnerNetwork (OPN), you still have two choices:

  1. Learn more about OPN and register at http://www.oracle.com/partners/en/opn-program/index.html
  2. Begin to experience key new features by downloading Solaris 11 Express

We are looking forward to helping you learn all about these exciting new features and the benefits you will derive from them!

Thursday Aug 18, 2011

Oracle Solaris available for Exadata Database Machines

Oracle Solaris 11 Express is now available on Oracle Exadata Database Machines, enabling you to benefit from all of the OLTP and DW performance of Exadata hardware, and the industry-leading scalability and availability characteristics of Oracle Solaris.

For details, see the press release.

Tuesday Aug 02, 2011

Solaris Zones Optimize Real Workloads

Oracle published two Optimized Solutions last week that utilize Oracle Solaris Containers.

The first is for Oracle WebCenter Suite. The Optimized Solution shows how one server can support more than 1,000 users for WebCenter Spaces. The announcement includes links to business-focused and technical white papers.

The second is for Agile Product Lifecycle Management. The announcement includes links to business-focused and technical white papers.

Each optimized solution showcases the ability to use Oracle Solaris Containers to optimize performance of multiple workloads within one consolidated server.

Thursday Jul 14, 2011

Extreme Oracle Solaris Virtualization

There will be a live webcast today explaining how to leverage Oracle Solaris' unmatched virtualization features. The webcast begins at 9 AM PT. Registration is required at Oracle.com.

Wednesday Jul 13, 2011

Solaris Zones help achieve World Record Benchmark Result

Maximizing performance of multi-node workloads can be challenging. Should I maximize CPU clock rate, or RAM size per node, or network bandwidth? And how do I analyze performance of each component while also measuring aggregate throughput? Solaris Zones provide characteristics that are useful for multi-node architectures:
  • Architectural flexibility: easily remove a network bottleneck between two components by running both in zones on one Solaris server - and move any of them to different servers later as processing needs change
  • Convenient, dynamic resource management: assign a workload to a set of CPUs for predictable, consistent performance, ensure that each workload component has access to sufficient hardware resources, and so on.
These characteristics are demonstrated by the world record benchmark result for Oracle JD Edwards EnterpriseOne, which was achieved using Solaris Containers (Zones) to isolate individual software components, including the WebLogic-based applications and Web Tier Utilities.

Solaris Zones features enabled software isolation and resource management, making the process of fine-tuning resource assignment very easy. For more details, see:

Monday Jul 11, 2011

Bob's Live Upgrade Advice

Do you wish you could shorten the service outage during an operating system upgrade? Do you want the ability to easily back out an OS upgrade?

Do you already use Solaris Live Upgrade? No? Why not!?!? :-)

Bob Netherton has compiled a comprehensive list of Live Upgrade Survival Tips - important pieces of advice that help you minimize service downtime and shorten service recovery if something goes wrong.
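
At its simplest, a Live Upgrade cycle looks roughly like the sketch below; the boot environment name and media path are placeholders, and Bob's tips cover the many details this glosses over:

# lucreate -n s10u10-be                       # create an alternate boot environment (ABE)
# luupgrade -u -n s10u10-be -s /cdrom/sol_10  # upgrade the ABE from installation media
# luactivate s10u10-be                        # mark the upgraded ABE as the one to boot
# init 6                                      # reboot into it; luactivate the old BE to back out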

Friday Jul 08, 2011

Downloadable Database in a Solaris Zone

To simplify the process of creating Oracle databases, Oracle has released two Solaris Zone templates with Oracle 11gR2 pre-installed. You can simply download the appropriate template and "attach" it to your x86 or SPARC system running Solaris 10 10/09 or newer.

Links to the downloads are at oracle.com.

Of course, you must have a valid license to run 11gR2 on that computer.

Monday May 23, 2011

Oracle DB 11gR2 Certified for Solaris Containers

Just a short entry today: last week Oracle completed certification of Oracle RAC 11gR2 (with Clusterware) on Oracle Solaris 10 Containers ("Zones").

For details, see http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html .

A paper "Best Practices for Deploying Oracle RAC Inside Oracle Solaris Containers" is also available, but is not specific to 11gR2.

This extends the previous certifications of Oracle RAC (9iR2, 10gR2, 11gR1) on Solaris Containers.

Tuesday Mar 01, 2011

Virtual Network - Part 4

Resource Controls

This is the fourth part of a series of blog entries about Solaris network virtualization. Part 1 introduced network virtualization, Part 2 discussed network resource management capabilities available in Solaris 11 Express, and Part 3 demonstrated the use of virtual NICs and virtual switches.

This entry shows the use of a bandwidth cap on Virtual Network Elements (VNEs). This form of network resource control can effectively limit the amount of bandwidth consumed by a particular stream of packets. In our context, we will restrict the amount of bandwidth that a zone can use.

As a reminder, we have the following network topology, with three zones and three VNICs, one VNIC per zone.

All three VNICs were created on one ethernet interface in Part 3 of this series.

Capping VNIC Bandwidth

Using a T2000 server in a lab environment, we can measure network throughput with the new dlstat(1) command. This command reports various statistics about data links, including the quantity of packets, bytes, interrupts, polls, drops, blocks, and other data. Because I am trying to illustrate the use of commands, not optimize performance, the network workload will be a simple file transfer using ftp(1). This method of measuring network bandwidth is reasonable for this purpose, but says nothing about the performance of this platform. For example, this method reads data from a disk. Some of that data may be cached, but disk performance may impact the network bandwidth measured here. However, we can still achieve the basic goal: demonstrating the effectiveness of a bandwidth cap.

With that background out of the way, first let's check the current status of our links.

GZ# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
e1000g2     phys      1500   unknown  --         --
e1000g1     phys      1500   down     --         --
e1000g3     phys      1500   unknown  --         --
emp_web1    vnic      1500   up       --         e1000g0
emp_app1    vnic      1500   up       --         e1000g0
emp_db1     vnic      1500   up       --         e1000g0
GZ# dladm show-linkprop emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     autopush        rw   --             --             --
emp_app1     zone            rw   emp-app        --             --
emp_app1     state           r-   unknown        up             up,down
emp_app1     mtu             rw   1500           1500           1500
emp_app1     maxbw           rw   --             --             --
emp_app1     cpus            rw   --             --             --
emp_app1     cpus-effective  r-   1-9            --             --
emp_app1     pool            rw   SUNWtmp_emp-app --             --
emp_app1     pool-effective  r-   SUNWtmp_emp-app --             --
emp_app1     priority        rw   high           high           low,medium,high
emp_app1     tagmode         rw   vlanonly       vlanonly       normal,vlanonly
emp_app1     protection      rw   --             --             mac-nospoof,
                                                                restricted,
                                                                ip-nospoof,
                                                                dhcp-nospoof
<some lines deleted>
Before setting any bandwidth caps, let's determine the transfer rates between a zone on this system and a remote system.

It's easy to use dlstat to determine the data rate to my home system while transferring a file from a zone:

GZ# dlstat -i 10 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   27.99M    2.11G   54.18M   77.34G
       emp_app1       83    6.72K        0        0
       emp_app1      339   23.73K    1.36K    1.68M
       emp_app1    1.79K  120.09K    6.78K    8.38M
       emp_app1    2.27K  153.60K    8.49K   10.50M
       emp_app1    2.35K  156.27K    8.88K   10.98M
       emp_app1    2.65K  182.81K    5.09K    6.30M
       emp_app1      600   44.10K      935    1.15M
       emp_app1      112    8.43K        0        0
The OBYTES column is simply the number of bytes transferred during that data sample. I'll ignore the 1.68MB and 1.15MB data points because the file transfer began and ended during those samples. The average of the other values leads to a bandwidth of 7.6 Mbps (megabits per second), which is typical for my broadband connection.

Let's pretend that we want to constrain the bandwidth consumed by that workload to 2 Mbps. Perhaps we want to leave all of the rest for a higher-priority workload. Perhaps we're an ISP and charge for different levels of available bandwidth. Regardless of the situation, capping bandwidth is easy:

GZ# dladm set-linkprop -p maxbw=2000k emp_app1
GZ# dladm show-linkprop -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw       2          --             --
GZ# dlstat -i 20 emp_app1 
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   18.21M    1.43G   10.22M   14.56G
       emp_app1      186   13.98K        0        0
       emp_app1      613   51.98K    1.09K    1.34M
       emp_app1    1.51K  107.85K    3.94K    4.87M
       emp_app1    1.88K  131.19K    3.12K    3.86M
       emp_app1    2.07K  143.17K    3.65K    4.51M
       emp_app1    1.84K  136.03K    3.03K    3.75M
       emp_app1    2.10K  145.69K    3.70K    4.57M
       emp_app1    2.24K  154.95K    3.89K    4.81M
       emp_app1    2.43K  166.01K    4.33K    5.35M
       emp_app1    2.48K  168.63K    4.29K    5.30M
       emp_app1    2.36K  164.55K    4.32K    5.34M
       emp_app1      519   42.91K      643  793.01K
       emp_app1      200   18.59K        0        0
Note that for dladm, the default unit for maxbw is Mbps. The average of the full samples is 1.97 Mbps.

Between zones, the uncapped data rate is higher:

GZ# dladm reset-linkprop -p maxbw emp_app1
GZ# dladm show-linkprop  -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw   --             --             --
GZ# dlstat -i 20 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   20.80M    1.62G   23.36M   33.25G
       emp_app1      208   16.59K        0        0
       emp_app1   24.48K    1.63M  193.94K  277.50M
       emp_app1  265.68K   17.54M    2.05M    2.93G
       emp_app1  266.87K   17.62M    2.06M    2.94G
       emp_app1  255.78K   16.88M    1.98M    2.83G
       emp_app1  206.20K   13.62M    1.34M    1.92G
       emp_app1   18.87K    1.25M   79.81K  114.23M
       emp_app1      246   17.08K        0        0
This five-year-old T2000 can move at least 1.2 Gbps of data internally, but that took five simultaneous ftp sessions. (A better measurement method - one that doesn't include the limits of disk drives - would yield better results, and newer systems, either x86 or SPARC, have higher internal bandwidth characteristics.) In any case, the maximum data rate is not interesting for our purpose, which is demonstrating the ability to cap that rate.

You can often resolve a network bottleneck while maintaining workload isolation by moving two separate workloads onto the same system, in separate zones. However, you might choose to limit their bandwidth consumption. Fortunately, the NV tools in Solaris 11 Express enable you to accomplish that:

GZ# dladm set-linkprop -t -p maxbw=25m emp_app1
GZ# dladm show-linkprop -p maxbw emp_app1
LINK         PROPERTY        PERM VALUE          DEFAULT        POSSIBLE
emp_app1     maxbw           rw      25          --             --
Note that the change to the bandwidth cap was made while the zone was running, potentially while network traffic was flowing. Also, changes made by dladm are persistent across reboots of Solaris unless you specify "-t" on the command line, as was done here.

Data moves much more slowly now:

GZ# dlstat -i 20 emp_app1
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
       emp_app1   23.84M    1.82G   46.44M   66.28G
       emp_app1      192   16.10K        0        0
       emp_app1    1.15K   79.21K    5.77K    8.24M
       emp_app1   18.16K    1.20M   40.24K   57.60M
       emp_app1   17.99K    1.20M   39.46K   56.48M
       emp_app1   17.85K    1.19M   39.11K   55.97M
       emp_app1   17.39K    1.15M   38.16K   54.62M
       emp_app1   18.02K    1.19M   39.53K   56.58M
       emp_app1   18.66K    1.24M   39.60K   56.68M
       emp_app1   18.56K    1.23M   39.24K   56.17M
<many lines deleted>
The data show an aggregate bandwidth of 24 Mbps.

Conclusion

The network virtualization tools in Solaris 11 Express include various resource controls. The simplest of these is the bandwidth cap, which you can use to effectively limit the amount of bandwidth that a workload can consume. Both physical NICs and virtual NICs can be capped with this simple method, and it also applies to workloads running in Solaris Zones - both default zones and Solaris 10 Zones, which mimic Solaris 10 systems.

Next time we'll explore some other virtual network architectures.

Tuesday Feb 08, 2011

Virtual Network - Part 3

This is the third in a series of blog entries that discuss the network virtualization features in Oracle Solaris 11 Express. Part 1 introduced the concept of network virtualization and listed the basic virtual network elements that Solaris 11 Express (S11E) provides. Part 2 expanded on the concepts and discussed the resource management features which can be applied to those virtual network elements (VNEs).

This blog entry assumes that you have some experience with Solaris Zones. If you don't, you can read my earlier blog entries, or buy the book "Oracle Solaris 10 System Virtualization Essentials" or read the documentation.

This entry will demonstrate the creation of some of these VNEs.

For today's examples, I will use an old Sun Fire T2000 that has one SPARC CMT (T1) chip and 32GB RAM. I will pretend that I am implementing a 3-tier architecture in this one system, where each tier is represented by one Solaris zone. The mythical example provides access to an employee database. The 3-tier service is named 'emp' and VNEs will use 'emp' in their names to reduce confusion regarding the dozens of VNEs we expect to create for the services this system will deliver.

The commands shown below use the prompt "GZ#" to indicate that the command is entered in the global zone by someone with sufficient privileges. Similarly, the prompt "emp-web1#" indicates a command which is entered in the zone "emp-web1" as a sufficiently privileged user.

Fortunately, Solaris network engineers gathered all of the actions regarding the management of network elements (virtual or physical) into one command: dladm(1M). You use dladm to create, destroy, and configure datalinks such as VNICs. You can also use it to list physical NICs:

GZ# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
e1000g2     phys      1500   unknown  --         --
e1000g1     phys      1500   down     --         --
e1000g3     phys      1500   unknown  --         --
We need three VNICs for our three zones, one VNIC per zone. They will also have useful names - one for each of the tiers - and will share e1000g0:
GZ# dladm create-vnic -l e1000g0 emp_web1
GZ# dladm create-vnic -l e1000g0 emp_app1
GZ# dladm create-vnic -l e1000g0 emp_db1
GZ# dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
e1000g0     phys      1500   up       --         --
e1000g2     phys      1500   unknown  --         --
e1000g1     phys      1500   down     --         --
e1000g3     phys      1500   unknown  --         --
emp_web1    vnic      1500   up       --         e1000g0
emp_app1    vnic      1500   up       --         e1000g0
emp_db1     vnic      1500   up       --         e1000g0
GZ# dladm show-vnic
LINK         OVER         SPEED  MACADDRESS        MACADDRTYPE         VID
emp_web1     e1000g0      0      2:8:20:3a:43:c8   random              0
emp_app1     e1000g0      0      2:8:20:36:a1:17   random              0
emp_db1      e1000g0      0      2:8:20:b4:5b:d3   random              0

The system has four NICs and three VNICs. Note that the name of a VNIC may not include a hyphen (-) but may include an underscore (_).

VNICs that share a NIC appear to be attached together via a virtual switch. That vSwitch is created automatically by Solaris. This diagram represents the NIC and VNEs we have created.

Now that these datalinks - the VNICs - exist, we can assign them to our zones. I'll assume that the zones already exist, and just need network assignment.

GZ# zonecfg -z emp-web1 info
zonename: emp-web1
zonepath: /zones/emp-web1
brand: ipkg
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
hostid:
fs-allowed:
GZ# zonecfg -z emp-web1
zonecfg:emp-web1> add net
zonecfg:emp-web1:net> set physical=emp_web1
zonecfg:emp-web1:net> end
zonecfg:emp-web1> exit

Those steps can be followed for the other two zones and matching VNICs. After those steps are completed, our earlier diagram would look like this:

Packets passing from one zone to another within a Solaris instance do not leave the computer, if they are in the same subnet and use the same datalink. This greatly improves network bandwidth and latency. Otherwise, the packets will head for the zone's default router.

Therefore, in the above diagram packets sent from emp-web1 destined for emp-app1 would traverse the virtual switch, but not pass through e1000g0.

This zone is an "exclusive-IP" zone, meaning that it "owns" its own networking. What is its view of networking? That's easy to determine: the zlogin(1M) command runs a complete command line inside the zone. By default, the command runs as the root user.

GZ# zoneadm -z emp-web1 boot
GZ# zlogin emp-web1 dladm show-link
LINK        CLASS     MTU    STATE    BRIDGE     OVER
emp_web1    vnic      1500   up       --         ?
GZ# zlogin emp-web1 dladm show-vnic
LINK         OVER         SPEED  MACADDRESS        MACADDRTYPE         VID
emp_web1     ?            0      2:8:20:3a:43:c8   random              0

Notice that the zone sees its own VNEs, but cannot see NEs or VNEs in the global zone, or in any other zone.

The other important new networking command in Solaris 11 Express is ipadm(1M). That command creates IP address assignments, enables and disables them, displays IP address configuration information, and performs other actions.

The following example shows the global zone's view before configuring IP in the zone:

GZ# ipadm show-if
IFNAME     STATE    CURRENT      PERSISTENT
lo0        ok       -m-v------46 ---
e1000g0    ok       bm--------4- ---
GZ# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
lo0/?             static   ok           127.0.0.1/8
lo0/?             static   ok           127.0.0.1/8
lo0/?             static   ok           127.0.0.1/8
e1000g0/_a        static   ok           10.140.204.69/24
lo0/v6            static   ok           ::1/128
lo0/?             static   ok           ::1/128
lo0/?             static   ok           ::1/128
lo0/?             static   ok           ::1/128

At this point, not only does the zone know it has a datalink (which we saw above) but the IP tools show that it is there, ready for use. The next example shows this:

GZ# zlogin emp-web1 ipadm show-if
IFNAME     STATE    CURRENT      PERSISTENT
lo0        ok       -m-v------46 ---
GZ# zlogin emp-web1 ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
lo0/v6            static   ok           ::1/128
An ethernet datalink without an IP address isn't very useful, so let's configure an IP interface and apply an IP address to it:
GZ# zlogin emp-web1 ipadm create-if emp_web1
GZ# zlogin emp-web1 ipadm show-if
IFNAME     STATE    CURRENT      PERSISTENT
lo0        ok       -m-v------46 ---
emp_web1   down     bm--------46 -46

GZ# zlogin emp-web1 ipadm create-addr -T static -a local=10.140.205.82/24 emp_web1/v4static
GZ# zlogin emp-web1 ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
emp_web1/v4static static   ok           10.140.205.82/24
lo0/v6            static   ok           ::1/128

GZ# zlogin emp-web1 ifconfig emp_web1
emp_web1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.140.205.82 netmask ffffff00 broadcast 10.140.205.255
        ether 2:8:20:3a:43:c8

The last command above shows the "old" way of displaying IP address configuration. The command ifconfig(1M) is still there, but the new tools dladm and ipadm provide a more consistent interface, with a well-defined separation between datalink management and IP management.

Of course, if you want the zone's outbound packets to be routed to other networks, you must use the route(1M) command, the /etc/defaultrouter file, or both.
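
For example, a persistent default route could be added from the global zone like this; the gateway address is a placeholder for your own router:

GZ# zlogin emp-web1 route -p add default 10.140.205.1
GZ# zlogin emp-web1 netstat -rn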

Next time, I'll show a new network measurement tool and the ability to control the amount of network bandwidth consumed.

About

Jeff Victor writes this blog to help you understand Oracle's Solaris and virtualization technologies.

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
