Friday Dec 18, 2009

More ZFS Goodness: The OpenSolaris Build Machine

<script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> try { var pageTracker = _gat._getTracker("UA-12162483-1"); pageTracker._trackPageview(); } catch(err) {}</script>

Apart from my usual LDAP'ing, I also (try to) help the OpenSolaris team with anything I can.

Lately, I've helped build their new x64 build rig, for which I carefully selected the best components out there while trying to keep the overall box budget on a leash. It came out at about $5k. Not on the cheap side, but cheaper than most machines in most data centers.

The components:

  • 2 Intel Xeon E5520 hyper-threaded quad-cores @ 2.27GHz (16 CPUs in Solaris)
  • 2 32GB Intel X25 SSDs
  • 2 2TB WD drives
  • 24GB ECC DDR2

I felt compelled to follow up my previous post about making the most out of your SSD because some people commented that non-mirrored pools were evil. Well, here's how this is set up this time: in order to avoid using either of the relatively small SSDs for the system, I have partitioned the big 2TB drives with exactly the same layout, one 100GB partition for the system and the rest of the disk holding our data. This leaves the SSDs available for the ZIL and the L2ARC. But thinking about it, the ZIL is never going to take up an entire 32GB SSD, so I partitioned one of the SSDs with a 3GB slice for the ZIL and the rest for L2ARC.
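
Hooking those slices up to the data pool boils down to a couple of zpool add commands. This is only a sketch using the device names from the zpool status output below, not a transcript of what was actually typed on the box:

admin@factory:~$ pfexec zpool add data log c6d0p1
admin@factory:~$ pfexec zpool add data cache c5d0p1 c6d0p2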

The result is a system with 24GB of RAM for the Level 1 ZFS cache (ARC) and 57GB for L2ARC, in combination with a 3GB ZIL. So we know it will be fast. But the icing on the cache... the cake, sorry, is that the rpool is mirrored. And so is the data pool.

Here's how it looks: 

admin@factory:~$ zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    data        ONLINE       0     0     0
      c5d1p2    ONLINE       0     0     0
      c6d1p2    ONLINE       0     0     0
    logs
      c6d0p1    ONLINE       0     0     0
    cache
      c5d0p1    ONLINE       0     0     0
      c6d0p2    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        c5d1s0  ONLINE       0     0     0
        c6d1p1  ONLINE       0     0     0

errors: No known data errors
admin@factory:~$
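
If you are curious how the separate log and cache devices pull their weight, zpool iostat breaks I/O down per device, and the ARC statistics are exposed through kstat. These are purely illustrative invocations:

admin@factory:~$ zpool iostat -v data 5
admin@factory:~$ kstat -n arcstats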

This is a really good example of how to set up a real-life machine designed to be robust and fast without compromise. This rig achieves performance on par with $40k+ servers. And THAT is why ZFS is so compelling.

Wednesday Aug 06, 2008

From An Old Suffix To A New One Live With No Down Time

You currently have your entries in an "old" suffix, which we will from here on call dc=old, and you would like to move them to a new suffix, which we will refer to as dc=new for the purpose of this article. The catch is that you cannot stop your server and migrate your data offline. On top of that, during the transition period when your client applications get reconfigured to use dc=new, entries need to appear to be in both dc=new and dc=old.

To make this a little simpler to picture in our minds, let's look at the data life cycle:

  1. Before the migration starts, all data resides under dc=old; requests on dc=new would fail, but no application is "aware" of dc=new yet
    • e.g. cn=user.1,dc=old works but cn=user.1,dc=new fails. At this point the DIT looks like this:
      • dc=old
        • cn=user.1
        • cn=user.3
      • dc=new
  2. When the migration starts, all data still resides under dc=old, but requests on both dc=old and dc=new are honored
    • e.g. cn=user.1,dc=old and cn=user.1,dc=new will both work, but the entry is actually only stored physically under dc=old. The DIT is unchanged compared to step 1, but we have shoehorned DPS into the topology and added a data view to take care of the transformation.
  3. While the migration is ongoing, data yet to be migrated resides under dc=old while already migrated data resides under dc=new
    • this is when it will start to get a little complicated. Here is what the DIT might look like:
      • dc=old
        • cn=user.1
      • dc=new
        • cn=user.3
    • At this point, a request on cn=user.1,dc=old will work along with a request for cn=user.1,dc=new. But both requests for cn=user.3,dc=new and cn=user.3,dc=old will work as well. Basically, to the client application there is a true virtualization of where the data actually resides. This is crucial when you have a heterogeneous environment of applications being reconfigured to use the new suffix while some older applications might take too much work to reconfigure and are simply waiting to be discontinued.
  4. When data migration is complete, all data resides under dc=new, but requests for entries under both suffixes will still be served. At this point our DIT would look like this:
    • dc=old
    • dc=new
      • cn=user.1
      • cn=user.3
  5. Once we are confident no application requests entries under dc=old, we can take the virtualization data views down (a teardown sketch is shown at the end of this article). From then on, only requests to dc=new will be served.

If you want to try it out for yourself, here are the steps to follow to get such a setup with DPS 6. But first, let us agree on the environment:

  • Directory Server bits are installed in /path/to/sun/dsee/6.3/bits/ds6
  • Directory Proxy Server bits installed in /path/to/sun/dsee/6.3/bits/dps6
  • I won't use "cn=Directory Manager" but uid=admin instead, with a password file containing the string "password" in /path/to/pwd (see below for how to create it)
  • the data for dc=new and dc=old in our example is as follows

    dn: dc=old
    dc: old
    objectClass: top
    objectClass: domain
    aci: (targetattr=*) ( version 3.0; acl "allow all anonymous"; allow (all) userdn="ldap:///anyone";)

    dn: cn=user.1,dc=old
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==

    dn: dc=new
    dc: new
    objectClass: top
    objectClass: domain
    aci: (targetattr=*) ( version 3.0; acl "allow all anonymous"; allow (all) userdn="ldap:///anyone";)

    dn: cn=user.3,dc=new
    objectClass: person
    objectClass: top
    cn: user.3
    sn: 3
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==

Before we begin, it is very convenient to set the following environment variables to make subsequent calls to the CLIs much easier to read:

export PATH=${PATH}:/path/to/sun/dsee/6.3/bits/ds6/bin:/path/to/sun/dsee/6.3/bits/dps6/bin:/path/to/sun/dsee/6.3/bits/dsrk6/bin
export LDAP_ADMIN_PWF=/path/to/pwd
export LDAP_ADMIN_USER=uid=admin
export DIRSERV_PORT=1389
export DIRSERV_HOST=localhost
export DIRSERV_UNSECURED=TRUE
export DIR_PROXY_HOST=localhost
export DIR_PROXY_PORT=7777
export DIR_PROXY_UNSECURED=TRUE
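
Since LDAP_ADMIN_PWF points at the uid=admin password file mentioned above, make sure that file exists before going any further. Creating it is as simple as this (adjust the path and permissions to taste):

>echo password > /path/to/pwd
>chmod 400 /path/to/pwd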

First off we need to create an instance of Directory Server to store our entries. Here is the process:

  1. Create and start the instance that we will name master
    >dsadm create -D uid=admin -w /path/to/pwd -p 1389 -P 1636 master
    Use 'dsadm start 'master'' to start the instance
    >dsadm start master
    Directory Server instance '/path/to/sun/dsee/6.3/live-migration/master' started: pid=2968
  2. On to creating a suffix and populating it with data
    >dsconf create-suffix dc=old
    >dsconf import /path/to/sun/dsee/6.3/instances/dc\=old.ldif dc=old
    New data will override existing data of the suffix "dc=old".
    Initialization will have to be performed on replicated suffixes.
    Do you want to continue [y/n] ?  y
    ## Index buffering enabled with bucket size 40
    ## Beginning import job...
    ## Processing file "/path/to/sun/dsee/6.3/instances/dc=old.ldif"
    ## Finished scanning file "/path/to/sun/dsee/6.3/instances/dc=old.ldif" (3 entries)
    ## Workers finished; cleaning up...
    ## Workers cleaned up.
    ## Cleaning up producer thread...
    ## Indexing complete.
    ## Starting numsubordinates attribute generation. This may take a while, please wait for further activity reports.
    ## Numsubordinates attribute generation complete. Flushing caches...
    ## Closing files...
    ## Import complete.  Processed 3 entries in 4 seconds. (0.75 entries/sec)
    Task completed (slapd exit code: 0).
  3. We can now check the data was successfully loaded with a quick broad sweep search:
    >ldapsearch -p 1389 -b "dc=old" "(objectClass=*)"
    version: 1
    dn: dc=old
    dc: old
    objectClass: top
    objectClass: domain

    dn: cn=user.1,dc=old
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==
  4. Repeat these last 3 steps for dc=new
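
For completeness, that second pass is the same commands with dc=new substituted in, assuming the dc=new entries shown earlier were saved in their own LDIF file:

    >dsconf create-suffix dc=new
    >dsconf import /path/to/sun/dsee/6.3/instances/dc\=new.ldif dc=new
    >ldapsearch -p 1389 -b "dc=new" "(objectClass=*)"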

Directory Proxy Server configuration

  1. Create and start an instance of the proxy

    >dpadm create -p 7777 -P 7778 -D uid=admin -w /path/to/pwd proxy
    Use 'dpadm start /path/to/sun/dsee/6.3/live-migration/proxy' to start the instance

    >dpadm start proxy
    Directory Proxy Server instance '/path/to/sun/dsee/6.3/live-migration/proxy' started: pid=3061

  2. Connect the proxy to the Directory Server instance

    >dpconf create-ldap-data-source master localhost:1389

    >dpconf create-ldap-data-source-pool master-pool

    >dpconf attach-ldap-data-source master-pool master

    >dpconf set-attached-ldap-data-source-prop master-pool master add-weight:1 bind-weight:1 compare-weight:1 delete-weight:1 modify-dn-weight:1 modify-weight:1 search-weight:1

  3. Create a straight data view to dc=old and verify we can get through to the source

    >dpconf create-ldap-data-view actual-old master-pool dc=old

    >ldapsearch -p 7777 -b dc=old "(objectClass=*)"
    version: 1
    dn: dc=old
    dc: old
    objectClass: top
    objectClass: domain

    dn: cn=user.1,dc=old
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==

    >ldapsearch -p 7777 -b dc=old "(cn=user.1)"
    version: 1
    dn: cn=user.1,dc=old
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==

  4. Create a virtual data view presenting the physical entries under dc=old as if they were under dc=new

    >dpconf create-ldap-data-view virtual-new master-pool dc=new

    >dpconf set-ldap-data-view-prop virtual-new dn-mapping-source-base-dn:dc=old

    >ldapsearch -p 7777 -b dc=new "(cn=user.1)"
    version: 1
    dn: cn=user.1,dc=new
    objectClass: person
    objectClass: top
    cn: user.1
    sn: 1
    userPassword: {SSHA}PzAK73RDZIikdI8qRqD7MYubasZ5JyJa/BToMw==


  5. Repeat steps 3 and 4 for the physical dc=new suffix and we now have a totally virtualized back end.

    >dpconf create-ldap-data-view actual-new master-pool dc=new

    >dpconf create-ldap-data-view virtual-old master-pool dc=old

    >dpconf set-ldap-data-view-prop virtual-old dn-mapping-source-base-dn:dc=new

    >ldapsearch -p 7777 -b dc=old "(cn=user.3)"
    version: 1
    dn: cn=user.3,dc=old
    objectClass: person
    objectClass: top
    cn: user.3
    sn: 3
    userPassword: {SSHA}5LFqYHLashsY7PFAvFV9pM+C2oedPTdV/AIADQ==

    >ldapsearch -p 7777 -b dc=new "(cn=user.3)"
    version: 1
    dn: cn=user.3,dc=new
    objectClass: person
    objectClass: top
    cn: user.3
    sn: 3
    userPassword: {SSHA}5LFqYHLashsY7PFAvFV9pM+C2oedPTdV/AIADQ==
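
Finally, once you reach stage 5 of the life cycle described at the beginning, tearing the virtualization down is just a matter of deleting the two DN-mapping data views. Something along these lines should do it (a sketch; double-check the exact dpconf subcommand names on your version):

    >dpconf delete-ldap-data-view virtual-old
    >dpconf delete-ldap-data-view virtual-new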

<script type="text/javascript"> var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> try { var pageTracker = _gat._getTracker("UA-12162483-1"); pageTracker._trackPageview(); } catch(err) {}</script>
About

Directory Services Tutorials, Utilities, Tips and Tricks

Search

Archives
« April 2014
SunMonTueWedThuFriSat
  
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
   
       
Today