Tuesday Sep 06, 2011

Moving on

After 15-plus years at Sun and Oracle, I'm going to be leaving the company to do something a little different.  It was not an easy decision to make: Sun and Oracle have both treated me well, and I've met a lot of incredible people at both companies over the years.  That's tough to leave, but new opportunities come from everywhere, and I'm taking one outside the company.

I'll keep blogging, writing about software, our solar panels, perhaps the occasional travel entry (the new gig's got me on the road a fair bit), and whatever else catches my interest.  Let's hope it catches yours as well.

It's been a privilege, these last 15 or so years of my career.  But I've got a ton more to do, so I'll be around for a while.

See ya!

Wednesday Oct 14, 2009

Observations from the Oracle Open World 2009 Applications Keynote

It's been a busy conference, that's for sure.  On Monday I spent a fair amount of time both in keynotes and in the exhibition halls.  By the way, there are two exhibition halls; if, like me, you think of JavaOne as a large conference where "only" one enormous exhibition hall is enough to satisfy the vendors and attendees, well, you haven't been to Oracle Open World.  Two exhibition halls in two different Moscone Center buildings is almost overwhelming.


The morning keynote session with Safra Catz and Charles Phillips was pretty straightforward; I wouldn't say the keynotes were inspiring, but they moved along smoothly and delivered their business messages crisply.  The demos all went perfectly, which is something I can't say about JavaOne in the past couple of years (man, it hurts to write that).

I'd never seen Andy Mendelsohn speak in person until Monday, when I watched his keynote about new features in Oracle Database 11g.  He did fine, the features seem interesting if you're a DB guy (which I am not, but I got the point of the features being discussed), and Andy seems like an uber-nerd.  I mean that as a compliment; I think Sun people would have an easy time adjusting to him.

The last keynote of the day was Steve Miranda's, about applications; to be clear, the focus was on the Oracle E-Business Suite ("E-Biz"), PeopleSoft, and Siebel CRM.  Here are the two main things I observed about its content:

  1. Miranda had a slide that mentioned PeopleSoft's latest release, and he made it extremely clear that this is the third release since PeopleSoft was acquired, and that thousands of features have been added.  The message: we are not killing off PeopleSoft, customers.  You like it?  We have continued to invest heavily in it.  I find it interesting that almost five years after the acquisition, Oracle is still emphasizing that PeopleSoft is a viable entity; I wonder if this is still a customer concern.  In any case, it is a positive message for customers, and Miranda made sure his customers got the point.
  2. There is a component that I should learn more about, called "AIA" (Oracle Application Integration Architecture).  There was a slide that showed how Oracle's Analytics product (the former Siebel Analytics product) can be used to run decision-making analysis on data from the following applications: Siebel, PeopleSoft, E-Biz, SAP (yes, that's right: SAP), JD Edwards.  There was a component in the middle that took the data from these apps and did something to it to make it all look the same to Analytics, essentially.  That component in the middle is AIA.

Here is why I think this AIA thing is important to learn about.  Several years ago, when I managed Sun's engineering relationship with Siebel, I remember a series of conversations with the Siebel Analytics business unit in which they told us their focus was to "democratize" analytics: up to that point, with everybody's analytics products, nearly everyone in an enterprise used the CRM tool (think of a customer service rep at an AT&T Wireless store), but only a minority used the decision-making analytics packages.  Siebel wanted to push decision-making power down the chain.  You can also think of analytics software as sitting higher in the stack than the standard CRM / ERP software: you take the CRM and ERP data, look at it, and make decisions about your enterprise based on it.

If AIA really can ingest data from SAP, then this is a smart strategy by Oracle to gain leverage over SAP in SAP accounts.  Suddenly, to the customer it doesn't matter so much which application is running under the analytics: the high-value activity is decision-making (i.e., analytics), so who cares whether it's SAP or E-Biz or JD Edwards feeding the ERP data into the analytics package?  SAP just became a lot less important.

So, I think it's important to learn more about AIA.  If it really can talk to all of these applications, it's a key component to the future integration of Oracle's application properties, and it's also an SAP take-out tool.

Tuesday Dec 23, 2008

My Home Media Server on OpenSolaris + ZFS: Part 2

In my previous blog entry, I decided how ZFS would protect the data on the home media server I'm building.  Next: partition my two larger drives and install OpenSolaris on them.

This would be stupendously easy if my four disk drives were all the same size: I would type "zpool create mediapool raidz <disk1> <disk2> <disk3> <disk4>" and ZFS would give me a ton of storage all nice and protected for me.  But I have two 1TB drives and two 1.5TB drives.   My problem: ZFS wants all the pieces of a "vdev" (a virtual device; in this case, I'm creating a virtual RAID-Z device with four disks in it) to be the same size.  So I have some partitioning work to do.  I'm documenting what I did in case any of you want to use ZFS with different sized drives.

Here is my plan:
  • make sure the 1.5TB drives are the 1st and 2nd drives seen by the computer's BIOS, so that I can install OpenSolaris on one of these bigger drives
  • partition each 1.5TB drive into a 1TB partition and a .5TB partition (I recommend doing the partitioning from the Live CD instead of after installing the OS; it went more smoothly for me that way)
  • install OpenSolaris onto the first 1.5TB drive's .5TB partition; installation will create a ZFS pool called "rpool"
  • put the four 1TB partitions into a ZFS raidz pool I will call "mediapool", my primary storage for our home's stuff
  • attach the remaining .5TB partition (from the second 1.5TB drive) to "rpool", making it a ZFS mirror pool so that the OS is protected against a single disk failure
I suppose I could've just made a single pool for storage, but I still like the idea of being able to separate my media storage from my OS.  Anyway, this is my plan for now.

Recalling that ZFS wants all the devices in a vdev to be the same size, I need to do some disk math to make sure the partition sizes are the same number of bytes.  Here's why (and don't laugh if this is all trivial to you; I'm a manager, okay?  If I don't see headcount or budget somewhere in this, I just get confused):

First of all, fdisk lets me specify partition sizes by either a percentage of the disk or a number of cylinders.  Specifying a percentage doesn't let me get precise enough to match the partition sizes on the 1.5TB disks and the 1TB disks, so I need to specify partition size in terms of cylinders.  But cylinders aren't the same size on the two different disks.

The fdisk utility reports the following information about the 1.5TB and 1TB disks:

1.5TB disk geometry:
  Total disk size is 60800 cyls
  Cylinder size is 48195 512-byte blocks

1TB disk geometry:
  Total disk size is 60800 cyls
  Cylinder size is 32130 512-byte blocks

Notice that one cylinder on the 1.5TB drive is 1.5 times the size of a cylinder on the 1TB drive (48195 = 3/2 * 32130); thinking of the ratio as 3/2 comes in handy later.

I want to use as much of the 1TB drive as possible (60800 cylinders) but I can't: 60800 cylinders on the 1TB drive corresponds to 40533.33333 cylinders on the 1.5TB drive; I can't enter a non-integer number into fdisk.  I must find a size that works for both disks.  It needs to be a multiple of 3 cylinders on the 1TB drive (which would be a multiple of 2 cylinders on the 1.5TB drive).  I'll waste a little space (2 cylinders' worth on the 1TB drive or about 32MB), but that's okay given that I'll get RAID-Z error correction in return.

I'll create one partition on each 1TB disk: 60798 cylinders (the next-closest multiple of 3) == 1,953,439,740 blocks.
I'll create two partitions on each 1.5TB disk:
  1. 40532 cyls == 1,953,439,740 blocks
  2. 20268 cyls (use this for the OS "rpool")
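As a sanity check, the arithmetic above can be verified with a couple of lines of shell; the cylinder sizes are the values fdisk reported, and the two partition sizes should come out to exactly the same number of 512-byte blocks:

```shell
# Blocks per cylinder, as reported by fdisk (512-byte blocks)
blocks_per_cyl_1tb=32130
blocks_per_cyl_15tb=48195

# Planned partition sizes, in blocks
p_1tb=$((60798 * blocks_per_cyl_1tb))    # one big partition on each 1TB disk
p_15tb=$((40532 * blocks_per_cyl_15tb))  # big partition on each 1.5TB disk

echo "1TB disk partition:   $p_1tb blocks"
echo "1.5TB disk partition: $p_15tb blocks"
# Both come out to 1953439740 blocks, so ZFS sees four equal-size vdev members
```

The leftover 20268 cylinders on each 1.5TB disk (60800 minus 40532) are what goes to the OS partition.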
Now that I know exactly how big each partition needs to be on the four disks, I can use this easy-to-follow example to create the Solaris fdisk partitions.  It takes less than five minutes once the math is worked out.

Next, it's time to install the OS.  I'm already running OpenSolaris from the Live CD, so I just click on the icon to install, and less than 20 minutes later, it's there.

Next: create the media storage pool, using all four disks in a RAID-Z configuration:
drapeau@blackfoot:$ pfexec format
Searching for disks...done

 0. c1t0d0  /pci@0,0/pci108e,534a@7/disk@0,0
 1. c1t1d0  /pci@0,0/pci108e,534a@7/disk@1,0 
 2. c2t0d0  /pci@0,0/pci108e,534a@8/disk@0,0
 3. c2t1d0  /pci@0,0/pci108e,534a@8/disk@1,0
Specify disk (enter its number): ^C

drapeau@blackfoot:$ zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   444G  13.7G   430G   3%  ONLINE  -

drapeau@blackfoot:$ pfexec zpool create mediapool raidz c2t1d0p1 c2t0d0p1 c1t1d0p2 c1t0d0p2

Note that I used partition names for these disks, which is important: according to this helpful document, in Solaris disk device names you'll see four primary partitions (p1-p4) plus a "p0", which means "the whole disk".  I had to tell ZFS explicitly that I didn't want to use the whole 1TB disks, only the 1st partition on each (c2t1d0p1, c2t0d0p1).  And I told ZFS to use the 2nd partitions on the 1.5TB disks (c1t1d0p2, c1t0d0p2), which are the roughly-1TB partitions.

So, did it work?  Let's see:
drapeau@blackfoot:$ zpool list
NAME        SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
mediapool  3.62T   132K  3.62T   0%  ONLINE  -
rpool       444G  13.7G   430G   3%  ONLINE  -

So far, so good: two ZFS pools. Let's check status:
drapeau@blackfoot:$ zpool status
  pool: mediapool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        mediapool     ONLINE       0     0     0
          raidz1      ONLINE       0     0     0
            c2t1d0p1  ONLINE       0     0     0
            c2t0d0p1  ONLINE       0     0     0
            c1t1d0p2  ONLINE       0     0     0
            c1t0d0p2  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors

drapeau@blackfoot:$ zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mediapool                  92.0K  2.67T  26.9K  /mediapool
rpool                      21.7G   415G    72K  /rpool
rpool/ROOT                 5.74G   415G    18K  legacy
rpool/ROOT/opensolaris     5.74G   415G  5.61G  /
rpool/dump                 8.00G   415G  8.00G  -
rpool/export                634K   415G    19K  /export
rpool/export/home           615K   415G    19K  /export/home
rpool/export/home/drapeau   596K   415G   596K  /export/home/drapeau
rpool/swap                 8.00G   423G    16K  -

Sweet.  Now I've got mediapool configured as a four-disk RAID-Z, and I have rpool, but right now it's only using one disk.  I want to mirror it using the 2nd 1.5TB disk's extra space.  I'll do that now, then ask ZFS for status (I'll omit the status report on mediapool, since we just saw it).  Oh, and I'll make sure the mirrored rpool is bootable; ZFS will remind me to do it, so I'll include my steps here:

drapeau@blackfoot:$ pfexec zpool attach rpool c1t0d0s0 c1t1d0p1
Please be sure to invoke installgrub(1M) to make 'c1t1d0p1' bootable.

drapeau@blackfoot:$ pfexec installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

Updating master boot sector destroys existing boot managers (if any).
continue (y/n)?y
stage1 written to partition 0 sector 0 (abs 48195)
stage2 written to partition 0, 267 sectors starting at 50 (abs 48245)
stage1 written to master boot sector

drapeau@blackfoot:$ zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h5m, 39.40% done, 0h9m to go
        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0  84.1M resilvered
            c1t1d0p1  ONLINE       0     0     0  5.41G resilvered

errors: No known data errors

About 8 minutes later, zpool status reported that the 2nd drive in rpool was resilvered and I had a fully mirrored rpool.  Now if one of the two drives fails, I can still boot the OS and replace the bad disk.  And with mediapool, I'm protected against any one of the four disks failing.  I'm feeling nice and secure; it's unlikely that two disks will fail at once unless the whole computer goes up in flames.  I'll deal with backup later, maybe by looking into Zmanda or something.

This is great: to this point, I've decided how to set up my storage and protect it, I've installed the OS, and I've created my storage pools.

My next blog entry will describe how I set up the computer to share all that storage with the rest of the house.

Powered by ScribeFire.

Thursday Dec 18, 2008

My Home Media Server on OpenSolaris + ZFS: Part 1

A couple of people gave me some good pointers after my last blog entry, in which I mentioned that doing an ssh or vncviewer into my new OpenSolaris installation was taking a while.  They pointed out that it might be reverse DNS lookups rather than anything about the OpenSolaris box itself, and that reminded me that I had recently changed something else about my home network setup: I had two hubs/routers active.  So that mystery's solved; things are looking much better now, and an ssh into the OpenSolaris box doesn't take so long.  Moving on...

My mission: use the current release of OpenSolaris (2008.11) as the basis for the main fileserver for home.  Right now, we've got several computers that have external disks attached to them for extra storage (music, photos, movies, etc.) and I want to centralize that for a couple of reasons:

  1. Friends of mine lost their house to a wildfire; fortunately, they had stored all of their critical data on a single computer with lots of disk so when they had to evacuate the house they grabbed one box and didn't lose any critical data.  Laugh all you want; when The Big One comes, I want to be ready.
  2. I'd like to simplify the administration of our home machines.  This is home, for goodness' sake; I don't want to be hiring a system administrator to keep our stuff in order.
I'll document here what I'm doing to build the home server.  My intent is to use OpenSolaris, use ZFS to manage the disks and files, and do it with a cheap computer that I build myself from off-the-shelf parts (as opposed to, say, buying a Dell computer).  Besides, I want to put a bunch of disks in this computer, and it's hard to find a cheap computer from Dell or HP with a lot of internal drive bays, whereas you can pretty easily buy a reasonable enclosure with plenty of them.  Building it yourself can save money, and I'm all about saving some money on this.

But first, I'm going to try this out on a computer that is known to work with OpenSolaris.  I'll get the setup running there to make sure that ZFS + OpenSolaris really is as easy and reliable as I think it is.  Once I'm convinced that works, I'll switch to my cheapo computer and see if OpenSolaris runs on that.

So here goes...

Step 1: Data Protection

I have four disks: two 1TB drives and two 1.5TB drives.  I'll split the larger drives into two partitions: 500GB for the operating system and the remaining 1TB for the big bucket-o-storage.  (Let's call it my media pool: a ZFS pool used primarily for storing audio, video, and photos.)

So my first decision to make is: how should I have ZFS protect my data against disk failure?  After all, I'm buying consumer-grade disk drives but the server will be on 24/7.  The disks will fail.  I don't want to lose my data just because I don't want to pay extra for more reliable disks.  I want the software (ZFS) to take care of the problem for me.  I start by looking at the ZFS Best Practices Guide to see what my options are.

I'm considering three options for ZFS protecting my data:
  1. mirror
  2. raidz (shorthand for "raidz1", meaning 1 error can happen and I don't lose data)
  3. raidz2 (meaning 2 errors can happen and I don't lose data)
The guide points me to this blog entry by Roch Bourbonnais, which tells me that the tradeoff I need to make is space versus performance: mirroring gives me maximum performance but cuts my disk storage in half.  Protecting 4TB over 4 disks by mirroring takes 50% overhead, leaving me a 2TB storage pool for storing movies, photos, music, and the like.  I don't need super-high performance, but I do want as much usable space as I can get out of my disks, so I choose raidz, which should give me about 3TB of usable space; I lose 1TB to data protection, which sounds fine to me.  Later, I may buy a fifth disk and switch to raidz2 for even more robust data protection, but I'm not going to do that right now.
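To make that space tradeoff concrete, here's a quick back-of-the-envelope sketch in shell.  It treats each of the four vdev members as roughly 1TB (which is what my partitioning plan produces), and these are nominal figures, before any ZFS metadata overhead:

```shell
# Raw pool: four vdev members of ~1TB each
disks=4
disk_tb=1

mirror_tb=$((disks * disk_tb / 2))    # mirroring: half the raw space goes to copies
raidz1_tb=$(((disks - 1) * disk_tb))  # raidz1: one disk's worth of parity
raidz2_tb=$(((disks - 2) * disk_tb))  # raidz2: two disks' worth of parity

echo "mirror usable: ${mirror_tb}TB"  # 2TB
echo "raidz1 usable: ${raidz1_tb}TB"  # 3TB
echo "raidz2 usable: ${raidz2_tb}TB"  # 2TB
```

With only four disks, raidz2 costs as much space as mirroring, which is another reason raidz1 is the sweet spot for me right now.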

Now that I've decided how to protect my data, I just need to create the appropriate partitions on the larger disks, and I'll be ready to install the OS.  I'll document that in my next blog entry.


Wednesday Oct 08, 2008

Some questions about Sun's M-series servers

I had the pleasure of meeting with an ISV yesterday who is interested in simplifying how they deploy their solution to customers.  To make a long story short, simplification for them will mean more stable deployments for their customers, who really stress the combined software + hardware system.  So they came to Sun to talk about our systems.

A couple of questions came out of the meeting and I agreed to follow up; I figured I'd post the answers here, because the questions are not proprietary and the answers aren't, either.

Question #1: how many power supplies on an M4000 server?
Answer: 2.  The Sun SPARC Enterprise M4000 Server has two power cords coming into the box, in what we call a 1 + 1 redundant power supply configuration.  The M4000 needs only one power cord coming into the system to power it, but we have a second power supply just in case the 1st one fails.

(by the way, the M5000 server has four power cords.  Why?  It uses two power supplies active to power the server, and each of those has a redundant power supply.)

Here are the specs for the M4000 server, which come from the Sun System Handbook.

Question #2: what kind of monitoring interface does the M-series server have?

Answer: I hope I'm answering the question that was asked here.  I recall that one question had to do with what the console looks like when you log into it: an ALOM- or ILOM-style interface?  (These are two different styles of giving commands to the server from the console.)  According to the technical lead in our ISV Engineering Labs datacenter, the M4000's console interface looks kind of like the SunFire 6800's, which is (and I quote) "ALOM-ish".

And here is the documentation for the M-series's console, called the XSCF (eXtended System Control Facility).

If this isn't answering the question you asked, let me know what you need to know and I'll happily track it down.


Sunday Jan 29, 2006

VMware and the Treo 650

I do what I can to avoid using Microsoft Windows, but right now there are a few things I don't want to do without: Quicken, and the Palm Desktop. (this is one reason that I want Jonathan to be right about volume being everything...when Solaris volume is great enough, both of these apps will have to be ported to the OS. Either that, or I'm buying a Mac. Been a while since I owned one --- can you say Macintosh SE?)

Anyway, my notebook computer runs the Java Desktop System (JDS), the version that is SuSE Linux-based. I run VMware Workstation and have Windows as a guest operating system. (Dual-booting just makes no sense to me; you tend to pick one of the operating systems and rarely use the other, so if you can run both OS's at the same time, why not?) And I just got a new Treo 650. So the first thing I want to do is sync it with Palm Desktop.

That didn't work. At least, not at first. This blog entry is a note to myself about what I did to make it work.

I tried connecting with the USB sync cable that came with my Treo 650. When HotSync started, the Windows XP guest OS didn't seem to do anything. It just sat there, and eventually the Treo HotSync app timed out. VMware's only solution that I could see was to add a line to the .vmx file like this:

usb.generic.skipSetConfig = "TRUE"
...which also didn't work.

I gave up on the USB cable, and bought myself a tiny little USB-Bluetooth adapter, which somebody on some web site said should work fine. It was easy to configure, it's smaller than my thumb, and whaddya know, it worked the first time.

I suppose it's pushing my luck to see if I can try the USB-Bluetooth adapter with the SunRay at work. But for now, I have a solution that lets me marginalize Windows as an occasional operating system, and keeps JDS as my preferred desktop.

A Note to VMware, And A Comment On Virtualization

This is the second USB device that I've hooked up to Windows as a VMware guest OS that doesn't work, the other being a USB 2.0 Webcam. I wonder when VMware will be good enough for me to be able to plug in common consumer-electronics devices like this without having to hack? I hear a lot about how good VMware's virtualization is, but it looks like they've got some way to go.


The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. What more do you need to know, really?

