Saturday Aug 08, 2015

OpenLDAP + TLS in Solaris 11

This blog post serves as a follow-up to Configuring a Basic LDAP Server + Client in Solaris 11. It covers creating a self-signed certificate and enabling TLS for secure communication.

1) Create certificates
# mkdir /etc/openldap/certs
# cd /etc/openldap/certs
# openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout server.key -out server.crt
# chmod 400 server.*
# chown openldap:openldap server.*
2) Update slapd.conf
Add the following lines to the end of /etc/openldap/slapd.conf

TLSCACertificateFile /etc/certs/ca-certificates.crt
TLSCertificateFile /etc/openldap/certs/server.crt
TLSCertificateKeyFile /etc/openldap/certs/server.key
3) Restart LDAP server
# svcadm disable ldap/server
# svcadm enable ldap/server
That's it! Clients can now connect to your LDAP server on port 389 and negotiate TLS via StartTLS.
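
To verify that TLS is actually negotiated, a quick test from a machine with the OpenLDAP client tools might look like this; the -ZZ flag forces StartTLS and fails if it cannot be established (substitute your own server name and search base):

# ldapsearch -x -ZZ -H ldap://ldap-server -b "dc=example,dc=com" "(objectclass=*)"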

Sunday Mar 01, 2015

Backing up MySQL using ZFS Snapshots and Clones

This blog post serves as a follow-up to Deploying MySQL over NFS using the Oracle ZFS Storage Appliance and Deploying MySQL over Fibre Channel / iSCSI using the Oracle ZFS Storage Appliance.

The snapshot and clone features of the Oracle ZFS Storage Appliance provide a convenient mechanism to back up, restore, and fork a MySQL database. A ZFS snapshot is a read-only copy of a filesystem that is created instantaneously while initially occupying zero additional space. As the data of a filesystem changes, the differences are tracked inside the snapshot. A ZFS clone is a writeable snapshot that can be used to branch off an existing filesystem without modifying the original contents.

Creating a MySQL Snapshot

1. From the MySQL console, disable autocommit:

mysql> set autocommit=0;

2. From the MySQL console, close and lock all database tables with a read lock. This will help to create a consistent snapshot.

mysql> flush tables with read lock;

3. From the ZFS console, create a project-level snapshot of the MySQL project:

zfs:> shares select mysql snapshots snapshot test_snapshot

4. From the MySQL console, unlock all tables to restore the databases to normal operation:

mysql> unlock tables;

Restoring a MySQL Snapshot

1. From the MySQL console, close and lock all database tables with a read lock. This prevents writes while the snapshot is rolled back.

mysql> flush tables with read lock;

2. From the ZFS console, roll back the previously created snapshot for each InnoDB share:

zfs:> confirm shares select mysql select innodb-data snapshots select test_snapshot
 rollback
zfs:> confirm shares select mysql select innodb-log snapshots select test_snapshot
 rollback

3. From the MySQL console, unlock all tables to restore the databases to normal operation:

mysql> unlock tables;

Cloning a MySQL Snapshot

1. From the ZFS console, create a clone of both InnoDB snapshots:

zfs:> shares select mysql select innodb-data snapshots select test_snapshot
 clone innodb-data-clone
zfs:shares mysql/innodb-data-clone (uncommitted clone)> commit
zfs:> shares select mysql select innodb-log snapshots select test_snapshot
 clone innodb-log-clone
zfs:shares mysql/innodb-log-clone (uncommitted clone)> commit

2. Mount the newly created shares depending on which protocol you intend to deploy. Refer to the two previous blog posts on this subject.

3. Update /etc/my.cnf with the new InnoDB locations:

innodb_data_home_dir = /path/to/innodb-data-clone
innodb_log_group_home_dir = /path/to/innodb-log-clone

4. Start a new instance of MySQL with the updated my.cnf entries. Any changes made to the clone will have no effect on the original database.
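
For example, a second instance could be started with a copy of the option file that points at the clone; the file name, port, and socket below are placeholders for illustration only:

# mysqld_safe --defaults-file=/etc/my-clone.cnf --port=3307 --socket=/tmp/mysql-clone.sock &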

Sample REST script for creating a MySQL snapshot

The following Python script shows how this process can be automated. It leverages the REST API of the Oracle ZFS Storage Appliance.
#!/usr/bin/python

import MySQLdb
import datetime
import json
import sys
import urllib2

def zfs_snapshot():
    user = "zfssa_username"
    password = "zfssa_password"
    zfssa = "zfssa_ip_address"
    path = "/api/storage/v1/pools/poolname/projects/mysql/snapshots"
    url = "https://" + zfssa + ":215" + path

    # Request body: name of the project-level snapshot to create
    properties = {"name": "MySQL-snapshot"}
    post_data = json.dumps(properties)

    request = urllib2.Request(url, post_data)
    request.add_header("Content-Type", "application/json")
    request.add_header("X-Auth-User", user)
    request.add_header("X-Auth-Key", password)
    response = urllib2.urlopen(request)

def main():
    mysql_server = "localhost"
    mysql_user = "root"
    mysql_pass = ""

    try:
        connection = MySQLdb.connect(host=mysql_server,
                                     user=mysql_user, passwd=mysql_pass)
    except MySQLdb.OperationalError:
        print "Could not connect to the MySQL server"
        sys.exit(-5)

    print "Connected to the MySQL server"

    start_time = datetime.datetime.now()

    # Quiesce the database so the project snapshot is consistent
    backup = connection.cursor()
    backup.execute("set autocommit=0;")
    backup.execute("flush tables with read lock;")

    print "Creating project snapshot \"MySQL-snapshot\""

    zfs_snapshot()
    backup.execute("unlock tables;")

    finish_time = datetime.datetime.now()
    total_time = finish_time - start_time
    print "Completed in ", total_time

if __name__ == "__main__":
    main()

Wednesday Feb 25, 2015

Deploying MySQL over Fibre Channel / iSCSI using the Oracle ZFS Storage Appliance

This blog post serves as a follow-up to Deploying MySQL over NFS using the Oracle ZFS Storage Appliance. The benefits remain the same, so this discussion focuses entirely on configuring your Oracle ZFS Storage Appliance and database server.

Configuring the Oracle ZFS Storage Appliance

Depending on which block protocol you plan to use, set up target and initiator groups as described in the Oracle ZFS Storage Appliance documentation.

These instructions assume a target group called 'mysql-tgt' and an initiator group called 'mysql-init'.

1. From the ZFS controller’s CLI, create a project called ‘mysql’:

        zfs:> shares project mysql

2. Set logbias to latency to leverage write flash capabilities:

        zfs:shares mysql (uncommitted)> set logbias=latency
                               logbias = latency (uncommitted)

3. Commit the settings:

        zfs:shares mysql (uncommitted)> commit

4. Select the 'mysql' project:

        zfs:> shares select mysql

5. Create a LUN called 'innodb-data' to hold data files:

        zfs:shares mysql> lun innodb-data

6. Set the volume block size to 16K to match InnoDB's standard page size:

        zfs:shares mysql/innodb-data (uncommitted)> set volblocksize=16K
                               volblocksize = 16K (uncommitted)

7. Set the volume size to a value large enough to accommodate your database:

        zfs:shares mysql/innodb-data (uncommitted)> set volsize=1T
                               volsize = 1T (uncommitted)

8. Set the initiator and target groups:

        zfs:shares mysql/innodb-data (uncommitted)> set initiatorgroup=mysql-init
                               initiatorgroup = mysql-init (uncommitted)
        zfs:shares mysql/innodb-data (uncommitted)> set targetgroup=mysql-tgt
                                  targetgroup = mysql-tgt (uncommitted)
        zfs:shares mysql/innodb-data (uncommitted)> commit

9. Create a LUN called ‘innodb-log’ to hold the redo logs:

        zfs:shares mysql> lun innodb-log

10. Set the volume block size to 128K:

        zfs:shares mysql/innodb-log (uncommitted)> set volblocksize=128K
                               volblocksize = 128K (uncommitted)

11. Set the volume size to a value large enough to accommodate your database:

        zfs:shares mysql/innodb-log (uncommitted)> set volsize=256G
                               volsize = 256G (uncommitted)
        zfs:shares mysql/innodb-log (uncommitted)> commit

Configuring your database server

1. A directory structure should be created to contain the MySQL database:

# mkdir -p /mysql/san/innodb-data
# mkdir -p /mysql/san/innodb-log
# chown -R mysql:mysql /mysql/san

2. If using iSCSI, discover and log in to the ZFS Storage Appliance targets:

# iscsiadm -m discovery -t sendtargets -p zfssa-ip-address
# iscsiadm -m node -p zfssa-ip-address --login

3. The multipath configuration held in /etc/multipath.conf should contain the following:

defaults {
    find_multipaths yes
    user_friendly_names yes
}
devices {
    device {
        vendor "SUN"
        product "ZFS Storage 7430"
        getuid_callout "/lib/udev/scsi_id --page=0x83 --whitelisted --device=/dev/%n"
        prio alua
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        failback immediate
        no_path_retry 600
        rr_min_io 100
        path_checker tur
        rr_weight uniform
        features "0"
    }
}
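
If multipathd was already running when this file was edited, restart the service so it picks up the new device stanza; on Oracle Linux 6 that is typically:

# service multipathd restart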

4. Discover the LUNs with multipath:

# multipath -ll
mpatha (3600144f0fa2f948b0000537cdb970008) dm-2 SUN,ZFS Storage 7430
size=1.0T features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `-0:0:0:0 sda 8:0   active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `-0:0:1:0 sdc 8:32  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `-1:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `-1:0:1:0 sdg 8:96  active  ready running
mpathb (3600144f0fa2f948b0000537cdbc10009) dm-3 SUN,ZFS Storage 7430
size=256G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `-0:0:0:1 sdb 8:16  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `-0:0:1:1 sdd 8:48  active ready running
|-+- policy='round-robin 0' prio=1 status=enabled
| `-1:0:0:1 sdf 8:80  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `-1:0:1:1 sdh 8:112 active ready running

5. Align the partitions on each LUN by running fdisk in sector units with 32 sectors per track, starting each partition at a multiple of 256 sectors. This is documented more extensively in Aligning Partitions to Maximize Performance, page 31.

# fdisk -u -S 32 /dev/dm-2
# fdisk -u -S 32 /dev/dm-3

6. Build an XFS filesystem* on the first partition of each LUN, using the full device-mapper path:

# mkfs.xfs /dev/mapper/mpathap1
# mkfs.xfs /dev/mapper/mpathbp1
* XFS is preferred on Linux. ZFS is preferred on Solaris.

7. Mount each LUN either from a shell or automatically from /etc/fstab:

# mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k
 /dev/mapper/mpathap1 /mysql/san/innodb-data
# mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k
 /dev/mapper/mpathbp1 /mysql/san/innodb-log
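
If you would rather mount these automatically at boot, roughly equivalent /etc/fstab entries might look like the following sketch (the device-mapper names are examples; adjust them to your own multipath aliases):

# /etc/fstab (sketch - mpath device names are examples)
/dev/mapper/mpathap1  /mysql/san/innodb-data  xfs  noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k  0 0
/dev/mapper/mpathbp1  /mysql/san/innodb-log   xfs  noatime,nodiratime,nobarrier,logbufs=8,logbsize=256k  0 0
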
That's it. Refer to the previous blog post to understand how to set up the MySQL database once these filesystems are mounted. In a future blog post, I'll cover how to back up your MySQL database using snapshots and cloning.

Saturday Feb 14, 2015

Deploying MySQL over NFS using the Oracle ZFS Storage Appliance

The Oracle ZFS Storage Appliance supports NFS, iSCSI, and Fibre Channel access to a MySQL database. By consolidating the storage of MySQL databases onto the Oracle ZFS Storage Appliance, the following goals can be achieved:

1. Expand and contract storage space easily in a pooled environment.
2. Focus high-end caching on the Oracle ZFS Storage Appliance to simplify provisioning.
3. Eliminate network overhead by leveraging Infiniband and Fibre Channel connectivity.

This blog post will focus specifically on NFS. A follow-up post will discuss iSCSI and Fibre Channel.

Configuring the Oracle ZFS Storage Appliance

Each database should be contained in its own project.

1. From the ZFS controller’s CLI, create a project called ‘mysql’.

        zfs:> shares project mysql

2. Set logbias to latency to leverage write flash capabilities:

        zfs:shares mysql (uncommitted)> set logbias=latency
                               logbias = latency (uncommitted)

3. Set the default user to mysql and default group to mysql:

        zfs:shares mysql (uncommitted)> set default_user=mysql
                          default_user = mysql (uncommitted)
        zfs:shares mysql (uncommitted)> set default_group=mysql
                            default_group = mysql (uncommitted)

Note: If a name service such as LDAP or NIS is not being used, change these to the actual UID and GID found in /etc/passwd and /etc/group on the host.

4. Disable ‘Update access time on read’:

        zfs:shares mysql> set atime=false
                                 atime = false (uncommitted)

5. Commit the changes:

        zfs:shares mysql> commit

6. Create a filesystem called innodb-data to hold data files:

        zfs:shares mysql> filesystem innodb-data

7. Set the database record size to 16K to match InnoDB's standard page size:

        zfs:shares mysql/innodb-data (uncommitted)> set recordsize=16K
                             recordsize = 16K (uncommitted)
        zfs:shares mysql/innodb-data (uncommitted)> commit

8. Create a filesystem called ‘innodb-log’ to hold redo logs:

        zfs:shares mysql> filesystem innodb-log

9. Set the database record size to 128K:

        zfs:shares mysql/innodb-log (uncommitted)> set recordsize=128K
                            recordsize = 128K (uncommitted)
        zfs:shares mysql/innodb-log (uncommitted)> commit

Configuring the server

This example assumes a Linux server will be running the MySQL database. The following commands are roughly the same for a Solaris machine:

1. A directory structure should be created to contain the MySQL database:

        # mkdir -p /mysql/nas/innodb-data
        # mkdir -p /mysql/nas/innodb-log
        # chown -R mysql:mysql /mysql/nas

2. Each filesystem provisioned on the Oracle ZFS Storage Appliance should be mounted with the following options:

        rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,
        actimeo=0,nolock

3. These options should be supplied in /etc/fstab so the filesystems mount automatically at boot (an example fstab sketch follows the mount commands below), or the mounts can be run manually from a shell like so:

        # mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,
          timeo=600,tcp,actimeo=0,nolock zfs:/export/innodb-data /mysql/nas/innodb-data
        # mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,
          timeo=600,tcp,actimeo=0,nolock zfs:/export/innodb-log /mysql/nas/innodb-log
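
A matching /etc/fstab sketch, assuming the appliance's hostname is 'zfs' and the filesystems are exported under /export, might look like this:

        # /etc/fstab (sketch - hostname and export paths are examples)
        zfs:/export/innodb-data /mysql/nas/innodb-data nfs rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock 0 0
        zfs:/export/innodb-log /mysql/nas/innodb-log nfs rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock 0 0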

Configuring the MySQL database

The option file my.cnf should be modified to offload the database onto the Oracle ZFS Storage Appliance and to make additional tunings for optimal performance. Stop MySQL before changing this file and restart it once the changes are complete:
    # service mysql stop
    # service mysql start

Important my.cnf changes

1. innodb_doublewrite = 0

A doublewrite buffer is normally needed to protect against partial page writes. However, the transactional nature of ZFS guarantees that partial writes will never occur, so this can be safely disabled.

2. innodb_flush_method = O_DIRECT

Ensures that InnoDB calls directio() instead of fcntl() for the data files. This allows the data to be accessed without OS level buffering and read-ahead.

3. innodb_data_home_dir = /path/to/innodb-data

The data filesystem for InnoDB should be located on its own share or LUN on the Oracle ZFS Storage Appliance.

4. innodb_log_group_home_dir = /path/to/innodb-log

The log filesystem for InnoDB should be located on its own share or LUN on the Oracle ZFS Storage Appliance.

5. innodb_data_file_path = ibdatafile:1G:autoextend

This configures a single large tablespace for InnoDB. The ZFS controller is then responsible for managing the growth of new data. This eliminates the complexity needed for controlling multiple tablespaces.

You can use the following example settings to get started.
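
A minimal sketch of the relevant [mysqld] section, assuming the NFS mount points used above (the rest of your my.cnf, such as buffer pool and log file sizes, still needs tuning for your workload):

    [mysqld]
    # Paths below match the example NFS mounts created earlier in this post
    innodb_doublewrite = 0
    innodb_flush_method = O_DIRECT
    innodb_data_home_dir = /mysql/nas/innodb-data
    innodb_data_file_path = ibdatafile:1G:autoextend
    innodb_log_group_home_dir = /mysql/nas/innodb-log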

Testing with Sysbench

Sysbench is a handy benchmark tool for generating a database workload. To fill a test database, run the following command:
    # sysbench \
    --test=oltp \
    --oltp-table-size=1000000 \
    --mysql-db=test \
    --mysql-user=root \
    --mysql-password= \
    prepare
Once filled, create an OLTP workload with the following command and parameters:
    # sysbench \
    --test=oltp \
    --oltp-table-size=1000000 \
    --oltp-test-mode=complex \
    --oltp-read-only=off \
    --num-threads=128 \
    --max-time=3600 \
    --max-requests=0 \
    --mysql-db=test \
    --mysql-user=root \
    --mysql-password= \
    --mysql-table-engine=innodb \
    run

Analytics

The Analytics feature of the Oracle ZFS Storage Appliance offers an unprecedented level of observability into your database workload. This can assist in identifying performance bottlenecks based on the utilization of your network, hardware, and storage protocols. Its drill-down functionality can also narrow the focus of a MySQL instance into a workload’s operation type (read/write), I/O pattern (sequential / random), response latency, and I/O size for both the data and log files. At any point in time, a DBA can track a database instance at an incredibly granular level.

Once you have a single database installed, you can try creating more instances to analyze your I/O patterns. Run separate sysbench processes for each database and then use Analytics to monitor the differences between workloads.

Monday Sep 22, 2014

ZFS Storage at Oracle OpenWorld 2014

Join my colleagues and me at this year's Oracle OpenWorld. We have five hands-on lab sessions available to attend. These are all heavily focused on 12c, MySQL, and the new RESTful API for the Oracle ZFS Storage Appliance.

HOL9715 - Deploying Oracle Database 12c with Oracle ZFS Storage Appliance 

September 29, (Monday) 2:45 PM - Hotel Nikko - Mendocino I/II
September 30, (Tuesday) 5:15 PM - Hotel Nikko - Mendocino I/II
HOL9718 - Managing and Monitoring Oracle ZFS Storage Appliance via the RESTful API 

September 29, (Monday) 2:45 PM - Hotel Nikko - Mendocino I/II
October 1, (Wednesday) 10:15 AM - Hotel Nikko - Mendocino I/II
HOL9760 - Deploying MySQL with Oracle ZFS Storage Appliance 

September 30, (Tuesday) 6:45 PM - Hotel Nikko - Mendocino I/II

Tuesday Jan 14, 2014

Configuring Applications / OSes for 1MB ZFS Block Sizes

The latest release of the ZFS Storage Appliance, 2013.1.1.1, introduces 1MB block sizes for shares. This is a deferred update that can only be enabled under Maintenance → System. Once applied, you can edit individual filesystems or LUNs within 'Shares' to enable 1MB support (database record size).

[Screenshot: applying the 1MB block size deferred update]

This new feature may need additional tweaking on all connected servers to fully realize significant performance gains. Most operating systems currently do not support a 1MB transfer size by default. This can be spotted easily within Analytics by breaking down the protocol of interest by I/O size. As an example, let's look at a Fibre Channel workload being generated by an Oracle Linux 6.5 server:

Example


[Analytics screenshot: Fibre Channel I/O broken down by size, sitting at 501K]

The IO size is sitting at 501K, a very strange number that's eerily close to 512K. Why is this a problem? Well, take a look at our backend disks:

[Analytics screenshot: backend disk I/O sizes, heavily fragmented]

Our disk IO size (block size) is heavily fragmented! This causes our overall throughput to nosedive.

[Analytics screenshot: overall throughput at roughly 2GB/s]

2GB/s is okay, but we could do better if our buffer size were 1MB on the host side.

Fixing the problem


Fibre Channel

Solaris
# echo 'set maxphys=1048576' >> /etc/system

Oracle Linux 6.5 uek3 kernel (previous releases do not support 1MB sizes for multipath)
# for f in /sys/block/dm-*/queue/max_sectors_kb; do echo 1024 > $f; done

or create a permanent udev rule (note that the rule must be on a single line):

# vi /etc/udev/rules.d/99-zfssa.rules

ACTION=="add", SYSFS{vendor}=="SUN", SYSFS{model}=="*ZFS*", 
ENV{ID_FS_USAGE}!="filesystem", ENV{ID_PATH}=="*-fc-*", 
RUN+="sh -c 'echo 1024 > /sys$DEVPATH/queue/max_sectors_kb'"

Windows

QLogic [qlfc]
C:\> qlfcx64.exe -tsize /fc /set 1024

Emulex [HBAnyware]
set ExtTransferSize = 1

Please see MOS Note 1640013.1 for configuration for iSCSI and NFS.

Results


After re-running the same FC workload with the correctly set 1MB transfer size, I can see the IO size is now where it should be.

[Analytics screenshot: Fibre Channel I/O now arriving at 1MB]

This has a drastic impact on the block sizes being allocated on the backend disks:

[Analytics screenshot: backend disk I/O sizes, now uniform]

And an even more drastic impact on the overall throughput:

[Analytics screenshot: overall throughput at roughly 10.9GB/s]

A very small tweak resulted in a 5X performance gain (2.1GB/s to 10.9GB/s)! Until 1MB is the default for all physical I/O requests, expect to make some configuration changes on your underlying operating systems.

System Configuration


Storage

  • 1 x Oracle ZS3-4 Controller
  • 2013.1.1.1 firmware
  • 1TB DRAM
  • 4 x 16G Fibre Channel HBAs
  • 4 x SAS2 HBAs
  • 4 x Disk Trays (24 4TB 7200RPM disks each)
Servers
  • 4 x Oracle x4170 M2 servers
  • Oracle Linux 6.5 (3.8.x kernel)
  • 16G DRAM
  • 1 x 16G Fibre Channel HBA

Workload


Each Oracle Linux server ran the following vdbench profile running against 4 LUNs:

sd=sd1,lun=/dev/mapper/mpatha,size=1g,openflags=o_direct,threads=128
sd=sd2,lun=/dev/mapper/mpathb,size=1g,openflags=o_direct,threads=128
sd=sd3,lun=/dev/mapper/mpathc,size=1g,openflags=o_direct,threads=128
sd=sd4,lun=/dev/mapper/mpathd,size=1g,openflags=o_direct,threads=128

wd=wd1,sd=sd*,xfersize=1m,readpct=70,seekpct=0
rd=run1,wd=wd1,iorate=max,elapsed=999h,interval=1
This is a 70% read / 30% write sequential workload.

Tuesday Sep 17, 2013

ZFS Storage at Oracle OpenWorld 2013

Join my colleagues and me at this year's Oracle OpenWorld. I have a session, a hands-on lab, and a demo being held in and around Moscone. These are all heavily focused on 12c and ZFS analytics.

HOL10103 - Managing ZFS Storage Inside Oracle Database 12c Environments 

September 23, (Monday) 10:45 AM - Marriott Marquis - Salon 10A
CON2846 - Oracle Use and Best Practices for High-Performance Cloud Storage 

September 23, (Monday) 12:15 PM - Westin San Francisco - Franciscan II
DEMO3619 - Maintaining the Performance of Your Cloud Infrastructure

Moscone South Lower Level, SC-152

Wednesday Jul 10, 2013

Solaris 11 IPoIB + IPMP

I recently needed to create a two-port active/standby IPMP group served over InfiniBand on Solaris 11. Wow, that's a mouthful of terminology! Here's how I did it:

List available IB links

[root@adrenaline ~]# dladm show-ib
LINK         HCAGUID         PORTGUID        PORT STATE  PKEYS
net5         21280001CF4C96  21280001CF4C97  1    up     FFFF
net6         21280001CF4C96  21280001CF4C98  2    up     FFFF
Partition the IB links. My pkey will be 8001.
[root@adrenaline ~]# dladm create-part -l net5 -P 0x8001 p8001.net5
[root@adrenaline ~]# dladm create-part -l net6 -P 0x8001 p8001.net6
[root@adrenaline ~]# dladm show-part
LINK         PKEY  OVER         STATE    FLAGS
p8001.net5   8001  net5         unknown  ----
p8001.net6   8001  net6         unknown  ----
Create test addresses for the newly created datalinks
[root@adrenaline ~]# ipadm create-ip p8001.net5
[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.101 p8001.net5/ipv4
[root@adrenaline ~]# ipadm create-ip p8001.net6
[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.102 p8001.net6/ipv4
[root@adrenaline ~]# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
p8001.net5/ipv4   static   ok           192.168.1.101/24
p8001.net6/ipv4   static   ok           192.168.1.102/24
Create an IPMP group and add the IB datalinks
[root@adrenaline ~]# ipadm create-ipmp ipmp0
[root@adrenaline ~]# ipadm add-ipmp -i p8001.net5 -i p8001.net6 ipmp0
Set one IB datalink to standby
[root@adrenaline ~]# ipadm set-ifprop -p standby=on -m ip p8001.net6
Assign an IP address to the IPMP group
[root@adrenaline ~]# ipadm create-addr -T static -a 192.168.1.100/24 ipmp0/v4
That's it! Final checks:
[root@adrenaline ~]# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
ipmp0             ipmp       ok           --         --
   ipmp0/v4       static     ok           --         192.168.1.100/24
p8001.net5        ip         ok           ipmp0      --
   p8001.net5/ipv4 static    ok           --         192.168.1.101/24
p8001.net6        ip         ok           ipmp0      --
   p8001.net6/ipv4 static    ok           --         192.168.1.102/24

[root@adrenaline ~]# ping 192.168.1.100
192.168.1.100 is alive
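
To keep an eye on the group afterwards, ipmpstat shows the group state and which interface is active versus standby (output will vary with your configuration):

[root@adrenaline ~]# ipmpstat -g
[root@adrenaline ~]# ipmpstat -i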

Monday Mar 04, 2013

Configuring a Basic DNS Server + Client in Solaris 11

Configuring the Server
The default install of Solaris 11 does not come with a DNS server, but this can be added easily through IPS like so:
[paulie@griff ~]$ sudo pkg install service/network/dns/bind
Before enabling this service, the named.conf file needs to be modified to support the DNS structure. Here's what mine looks like:
[paulie@griff ~]$ cat /etc/named.conf
options {
        directory       "/etc/namedb/working";
        pid-file        "/var/run/named/pid";
        dump-file       "/var/dump/named_dump.db";
        statistics-file "/var/stats/named.stats";
        forwarders { 208.67.222.222; 208.67.220.220; };
};

zone "hillvalley" {
        type master;
        file "/etc/namedb/master/hillvalley.db";
};

zone "1.168.192.in-addr.arpa" {
        type master;
        file "/etc/namedb/master/1.168.192.db";
};
My forwarders use the OpenDNS servers, so any request that the local DNS server can't process goes through there. I've also set up two zones: hillvalley.db for my forward zone and 1.168.192.db for my reverse zone. We need both for a proper configuration. We also need to create some directories to support this file:
[paulie@griff ~]$ sudo mkdir /var/dump
[paulie@griff ~]$ sudo mkdir /var/stats
[paulie@griff ~]$ sudo mkdir -p /var/run/named
[paulie@griff ~]$ sudo mkdir -p /etc/namedb/master
[paulie@griff ~]$ sudo mkdir -p /etc/namedb/working
Now, let's populate the DNS server with a forward and reverse file.

Forward file
[paulie@griff ~]$ cat /etc/namedb/master/hillvalley.db 
$TTL 3h
@       IN      SOA     griff.hillvalley. paulie.griff.hillvalley. (
        2013022744 ;serial (change after every update)
        3600 ;refresh (1 hour)
        3600 ;retry (1 hour)
        604800 ;expire (1 week)
        38400 ;minimum (1 day)
)

hillvalley.     IN      NS      griff.hillvalley.

delorean        IN      A       192.168.1.1   ; Router
biff            IN      A       192.168.1.101 ; NFS Server
griff           IN      A       192.168.1.102 ; DNS Server
buford          IN      A       192.168.1.103 ; LDAP Server
marty           IN      A       192.168.1.104 ; Workstation
doc             IN      A       192.168.1.105 ; Laptop
jennifer        IN      A       192.168.1.106 ; Boxee
lorraine        IN      A       192.168.1.107 ; Boxee
Reverse File
[paulie@griff ~]$ cat /etc/namedb/master/1.168.192.db 
$TTL 3h
@       IN      SOA     griff.hillvalley. paulie.griff.hillvalley. (
        2013022744 ;serial (change after every update)
        3600 ;refresh (1 hour)
        3600 ;retry (1 hour)
        604800 ;expire (1 week)
        38400 ;minimum (1 day)
)

        IN      NS      griff.hillvalley.

1       IN      PTR     delorean.hillvalley.    ; Router
101     IN      PTR     biff.hillvalley.        ; NFS Server
102     IN      PTR     griff.hillvalley.       ; DNS Server
103     IN      PTR     buford.hillvalley.      ; LDAP Server
104     IN      PTR     marty.hillvalley.       ; Workstation
105     IN      PTR     doc.hillvalley.         ; Laptop
106     IN      PTR     jennifer.hillvalley.    ; Boxee
107     IN      PTR     lorraine.hillvalley.    ; Boxee
For reference, in these files:
  • paulie is the admin user account name
  • griff is the hostname of the DNS server
  • hillvalley is the domain name of the network
  • I love BTTF
Feel free to tweak this example to match your own network. Finally, enable the DNS service and check that it's online:
[paulie@griff ~]$ sudo svcadm enable dns/server
[paulie@griff ~]$ sudo svcs | grep dns/server
online         22:32:20 svc:/network/dns/server:default
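Before configuring any clients, you can sanity-check both zones directly against the new server with forward and reverse lookups (hostnames taken from the zone files above):
[paulie@griff ~]$ nslookup marty 192.168.1.102
[paulie@griff ~]$ nslookup 192.168.1.104 192.168.1.102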
Configuring the Client
We will need the IP address (192.168.1.102), hostname (griff), and domain name (hillvalley) to configure DNS with these commands:
[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/nameserver = net_address: 192.168.1.102
[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/domain = astring: hillvalley
[paulie@buford ~]$ sudo svccfg -s network/dns/client setprop config/search = astring: hillvalley
[paulie@buford ~]$ sudo svccfg -s name-service/switch setprop config/ipnodes = astring: '"files dns"'
[paulie@buford ~]$ sudo svccfg -s name-service/switch setprop config/host = astring: '"files dns"'
Verify the configuration is correct:
[paulie@buford ~]$ svccfg -s network/dns/client listprop config
config                      application        
config/value_authorization astring     solaris.smf.value.name-service.dns.client
config/nameserver          net_address 192.168.1.102
config/domain              astring     hillvalley
config/search              astring     hillvalley
And enable:
[paulie@buford ~]$ sudo svcadm enable dns/client
Now we need to test that the DNS server is working using both forward and reverse DNS lookups:
[paulie@buford ~]$ nslookup lorraine
Server:         192.168.1.102
Address:        192.168.1.102#53

Name:   lorraine.hillvalley
Address: 192.168.1.107

[paulie@buford ~]$ nslookup 192.168.1.1
Server:         192.168.1.102
Address:        192.168.1.102#53

1.1.168.192.in-addr.arpa        name = delorean.hillvalley.

Thursday Feb 21, 2013

Configuring a Basic LDAP Server + Client in Solaris 11

Configuring the Server
Solaris 11 ships with OpenLDAP to use as an LDAP server. To configure it, you're going to need a simple slapd.conf file and an LDIF file to populate the database. First, let's look at the slapd.conf configuration:
# cat /etc/openldap/slapd.conf
include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
include         /etc/openldap/schema/inetorgperson.schema
include         /etc/openldap/schema/nis.schema

pidfile         /var/openldap/run/slapd.pid
argsfile        /var/openldap/run/slapd.args

database        bdb
suffix          "dc=buford,dc=hillvalley"
rootdn          "cn=admin,dc=buford,dc=hillvalley"
rootpw          secret
directory       /var/openldap/openldap-data
index           objectClass     eq
You may want to change the suffix and rootdn lines to better represent your network naming schema. My LDAP server's hostname is buford and its domain name is hillvalley. You will need to add additional domain components (dc=) if the name is longer. This configuration assumes the LDAP manager will be called admin, with a password of 'secret'. The password is shown in clear text only as an example; you can generate a hash using slappasswd:
[paulie@buford ~]$ slappasswd
New password: 
Re-enter new password: 
{SSHA}MlyFaZxG6YIQ0d/Vw6fIGhAXZiaogk0G
Replace 'secret' with the entire hash, {SSHA}MlyFaZxG6YIQ0d/Vw6fIGhAXZiaogk0G, for the rootpw line. Now, let's create a basic schema for my network.
# cat /etc/openldap/schema/hillvalley.ldif
dn: dc=buford,dc=hillvalley
objectClass: dcObject
objectClass: organization
o: buford.hillvalley
dc: buford

dn: ou=groups,dc=buford,dc=hillvalley
objectClass: top
objectClass: organizationalunit
ou: groups

dn: ou=users,dc=buford,dc=hillvalley
objectClass: top
objectClass: organizationalunit
ou: users

dn: cn=world,ou=groups,dc=buford,dc=hillvalley
objectClass: top
objectClass: posixGroup
cn: world
gidNumber: 1001

dn: uid=paulie,ou=users,dc=buford,dc=hillvalley
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: Paul Johnson
uid: paulie
uidNumber: 1001
gidNumber: 1001
homeDirectory: /paulie/
loginShell: /usr/bin/bash
userPassword: secret
I've created a single group, world, and a single user, paulie. Both share the uid and gid of 1001. LDAP supports lots of additional attributes for configuring user and group accounts, but I've kept it basic in this example. Once again, be sure to change the domain components to match your network. Feel free to also change the user and group details. I've left the userPassword field in clear text as 'secret'; the same slappasswd method above applies here as well. It's time to turn on the server, but first, let's change some ownership permissions:
[paulie@buford ~]$ sudo chown -R openldap:openldap /var/openldap/
... and now ...
[paulie@buford ~]$ sudo svcadm enable ldap/server
Check that it worked:
[paulie@buford ~]$ svcs | grep ldap
online         12:13:49 svc:/network/ldap/server:openldap_24
Neat, now let's add our schema file to the database:
[paulie@buford ~]$ ldapadd -D "cn=admin,dc=buford,dc=hillvalley" -f /etc/openldap/schema/hillvalley.ldif
Enter bind password: 
adding new entry dc=buford,dc=hillvalley
adding new entry ou=groups,dc=buford,dc=hillvalley
adding new entry ou=users,dc=buford,dc=hillvalley
adding new entry cn=world,ou=groups,dc=buford,dc=hillvalley
adding new entry uid=paulie,ou=users,dc=buford,dc=hillvalley
That's it! Our LDAP server is up, populated, and ready to authenticate against.
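To double-check the entries, you can search the new directory from the server itself; this uses the same bind DN as the ldapadd above and will prompt for the bind password (exact ldapsearch syntax may differ between the Solaris and OpenLDAP clients):
[paulie@buford ~]$ ldapsearch -b "dc=buford,dc=hillvalley" -D "cn=admin,dc=buford,dc=hillvalley" "(uid=paulie)"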

Configuring the Client
I'm going to turn my example server, buford.hillvalley, into an LDAP client as well. To do this, we need to run the `ldapclient` command to map our new user and group data:
[paulie@buford ~]$ ldapclient manual \
-a credentialLevel=proxy \
-a authenticationMethod=simple \
-a defaultSearchBase=dc=buford,dc=hillvalley \
-a domainName=buford.hillvalley \
-a defaultServerList=192.168.1.103 \
-a proxyDN=cn=admin,dc=buford,dc=hillvalley \
-a proxyPassword=secret \
-a attributeMap=group:gidnumber=gidNumber \
-a attributeMap=passwd:gidnumber=gidNumber \
-a attributeMap=passwd:uidnumber=uidNumber \
-a attributeMap=passwd:homedirectory=homeDirectory \
-a attributeMap=passwd:loginshell=loginShell \
-a attributeMap=shadow:userpassword=userPassword \
-a objectClassMap=group:posixGroup=posixgroup \
-a objectClassMap=passwd:posixAccount=posixaccount \
-a objectClassMap=shadow:shadowAccount=posixaccount \
-a serviceSearchDescriptor=passwd:ou=users,dc=buford,dc=hillvalley \
-a serviceSearchDescriptor=group:ou=groups,dc=buford,dc=hillvalley \
-a serviceSearchDescriptor=shadow:ou=users,dc=buford,dc=hillvalley
As usual, change the host and domain names as well as the IP address held in defaultServerList and the proxyPassword. The command should report that the system was configured properly. However, additional changes will need to be made if you use DNS for hostname lookups (most people use DNS, so run these commands):
svccfg -s name-service/switch setprop config/host = astring: \"files dns ldap\"
svccfg -s name-service/switch:default refresh
svcadm restart name-service/cache
Now, we need to change how users log in so that the client knows there is an additional LDAP server to authenticate against. This should not lock out local logins. Examine the two files /etc/pam.d/login and /etc/pam.d/other. Change any instance of
auth required            pam_unix_auth.so.1
to
auth binding            pam_unix_auth.so.1 server_policy
After this line, add the following new line:
auth required           pam_ldap.so.1
That's it! Finally, reboot your system and see if you can log in with your newly created user.
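After the reboot, a quick way to confirm the client is really pulling the account from LDAP is to query the name service directly (using the example user created earlier):
[paulie@buford ~]$ getent passwd paulie
[paulie@buford ~]$ ldaplist -l passwd paulie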

Update: Glenn Faden wrote an excellent guide to configuring OpenLDAP using the native Solaris user/group/role management system.

Monday Feb 11, 2013

Recovering Passwords in Solaris 11

About once a year, I'll find a way to lock myself out of a Solaris system. Here's how to get out of this scenario. You'll need a Solaris 11 Live CD or Live USB stick.
  • Boot up from the Live CD/USB
  • Select the 'Text Console' option from the GRUB menu
  • Login to the solaris console using the username/password of jack/jack
  • Switch to root
  • $ sudo su
    password jack
    
  • Mount the solaris boot environment in a temporary directory
  • # beadm mount solaris /a
    
  • Edit the shadow file
  • # vi /a/etc/shadow
    
  • Find your username and remove the password hash
  • Convert
    username:iEwei23SamPleHashonf0981:15746::::::17216
    to
    username::15746::::::17216
    
  • Allow empty passwords at login
  • $ vi /a/etc/default/login
    Switch this line
    PASSREQ=YES
    to
    PASSREQ=NO
    
  • Update the boot archive
  • # bootadm update-archive -R /a
    
  • Reboot and remove the Live CD/USB from system
  • # reboot
    
    If prompted for a password, hit return since this has now been blanked.

Monday Sep 24, 2012

ZFS Storage at Oracle OpenWorld 2012

Join my colleagues and me at this year's Oracle OpenWorld. We'll be hosting a hands-on lab demonstrating the ZFS Storage Appliance and its analytics features.

HOL10034 - Managing Storage in the Cloud 

October 1st (Monday) 3:15 PM - Marriott Marquis - Salon 14/15
October 2nd (Tuesday) 5:00 PM - Marriott Marquis - Salon 14/15

Monday Feb 20, 2012

CIFS Sharing on Solaris 11

Things have changed since Solaris 10 (and Solaris 11 Express too!) on how to properly set up a CIFS server on your Solaris 11 machine so that Windows clients can access files. There's some documentation on the changes here, but let me share the full instructions from beginning to end.
hostname: adrenaline
username: paulie
poolname: pool
mountpnt: /pool
share: mysharename
  • Install SMB server package
  • [paulie@adrenaline ~]$ sudo pkg install service/file-system/smb
    
  • Create the name of the share
  • [paulie@adrenaline ~]$ sudo zfs set share=name=mysharename,path=/pool,prot=smb pool
    
  • Turn on sharing using zfs
  • [paulie@adrenaline ~]$ sudo zfs set sharesmb=on pool
    
  • Turn on your smb server
  • [paulie@adrenaline ~]$ sudo svcadm enable -r smb/server
    
  • Check that the share is active
  • [paulie@adrenaline ~]$ sudo smbadm show-shares adrenaline
    Enter password: 
    c$                  Default Share
    IPC$                Remote IPC
    mysharename           
    3 shares (total=3, read=3)
    
  • Enable an existing UNIX user for CIFS sharing (you may have to reset the password again, e.g. `passwd paulie`)
  • [paulie@adrenaline ~]$ sudo smbadm enable-user paulie
    
  • Edit pam to allow for smb authentication (add line to end of file)
  • Solaris 11 GA only:
    [paulie@adrenaline ~]$ vi /etc/pam.conf
    
    other   password required       pam_smb_passwd.so.1 nowarn
    
    Solaris 11 U1 or later:
    [paulie@adrenaline ~]$ vi /etc/pam.d/other
    
    password required       pam_smb_passwd.so.1 nowarn
    
  • Try to mount the share on your Windows machine
  • \\adrenaline\mysharename
    

Tuesday Jan 03, 2012

NFS mounts with ZFS

I ran into a strange automount issue where my NFS shares were not being mounted at boot time. nfs/client was enabled, my entry in /etc/vfstab was correct, and issuing a `mount -a` worked flawlessly. So what was the problem? Well, this was the entry in my vfstab file:
biff:/paulie    -       /export/home/paulie/biff   nfs     -       yes     proto=tcp,vers=3
I wanted to place my NFS share inside a zfs filesystem so that it was easily accessible in my home directory.
[paulie@doc ~]$ zfs list | grep export/home/paulie
rpool/export/home/paulie  78.3M  2.82G  78.3M  /export/home/paulie
Turns out this is not such a good idea, since the /etc/vfstab file is read *before* zpools are imported and mounted. This means that NFS shares need to be mounted outside any ZFS filesystem if they are to mount at boot time, and then symlinked in.
[root@doc ~]# mkdir /biff
[paulie@doc ~]$ ln -s /biff /export/home/paulie/biff
... and then changing around vfstab ...
biff:/paulie    -       /biff   nfs     -       yes     proto=tcp,vers=3
And that's it, NFS should automount now:
[paulie@doc ~]$ df -kh | grep biff
biff:/paulie           2.7T   1.2T       1.4T    47%    /biff
Lesson learned.

Wednesday Oct 12, 2011

ZFS Storage Appliance at Storage Networking World Fall 2011

I was an instructor at SNW this year at the JW Marriott hotel in Orlando, Florida. Along with fellow Oracle co-worker Ray Clarke, we represented the ZFS Storage Appliance in a hands-on environment that allowed storage administrators and industry experts to demo our product in a simulated environment.


Rather than haul physical equipment to the convention, we set up an array of Windows 7 virtual machine sessions paired with a ZFS simulator running on VirtualBox across two remote X4270 machines. This let us create a classroom environment of 24 stations (48 VM sessions), giving each attendee a superb replica of the 7000 product to toy around with as they completed the storage exercises we devised.

If you missed the opportunity to demo our product, or would like to play with the simulator in your own environment, the ZFS Storage Appliance simulator is available to download so you can get started.

About

Hiya, my name is Paul Johnson and I'm a software engineer working on the Oracle ZFS Storage Appliance.
