Monday Mar 23, 2009

Yubico on Solaris 10

I'm back configuring Yubikeys, but this time on Solaris 10, as that is what the majority of our servers run.

Here are the steps required to get it working on Solaris 10 update 6:

  1. Install curl
    pkgadd SFWcurl
  2. Configure libyubico-client
    configure CPPFLAGS=-I/opt/sfw/include CFLAGS=-std=c99 --prefix=/usr
  3. Compile and install
    gmake install
  4. Configure pam_yubico
    configure --prefix=/usr --without-ldap
  5. Compile and install
    gmake install
  6. Set up a user-to-key mapping file (e.g. /etc/yubikeys)
  7. Configure /etc/pam.conf
    other   auth requisite
    other   auth required 
    other   auth required 
    other   auth required   pam_yubico.so id=16 authfile=/etc/yubikeys ignorepass
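Step 6's mapping file is just plain text: one line per user, mapping the user name to the Yubikey's public ID. A minimal sketch, using a made-up 12-character modhex key ID and a local file name instead of /etc/yubikeys:

```shell
# Hypothetical user-to-key mapping: username, colon, then the key's
# 12-character modhex public ID (the ID below is invented).
cat > yubikeys.example <<'EOF'
martin:ccccccbcgujh
EOF
cat yubikeys.example
```

The public ID is the constant first 12 characters of every OTP the key emits, so you can read it straight off any output from the key.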

Then an ssh login will look like this:

martin@workstation$ ssh server
Yubikey for `martin': 

You might have noticed the ignorepass option, which I have added to prevent pam_yubico from trying to (re)use the password I typed, and instead force pam_yubico to prompt me for it. I have sent Simon the diff so he can add it to the next release.

Friday Jan 16, 2009

Testing the Yubico Yubikey

I've been looking at different solutions for two-factor authentication (as in something you have) to use as a backup to what Sun IT provides us. Since we run two data centers outside of Sun, and require two-factor authentication to log on to all our external servers, we are often prevented from logging on when the network path back to the Sun IT verification servers is down. So we need a backup solution that allows us to do the verification in our data center when the network is down.

The top contender for this is Yubico's Yubikey, which I think is a very cool device. And the best part is that all the software needed to do the verification is open source!

I've compiled the software on OpenSolaris with the help of Simon, as we had to make some minor adjustments to get it to build on Solaris.

I've made some additional minor modifications to let me use it for two-factor authentication (I'll post the diffs later).

This is how the authentication looks now:

martin@mbp$ ssh puppet-tst2
Password: my normal UNIX passphrase
Yubikey: the output from the yubikey
martin@puppet-tst2 $ 

I'm very pleased with the results of my tests so far, and if you are looking at two-factor authentication, buy a few of them and give it a try...

Thursday Jan 15, 2009

Audit chapter

As I wrote before, I've written the audit chapter for an upcoming Solaris Security book. The chapter is now available on Safari Rough Cuts and feedback is very welcome...

Friday Dec 12, 2008

A new Solaris security book on the way

For the last few months I've been spending my evenings tapping away on the keyboard - but not producing code or managing Solaris servers like I usually do. I've been writing two chapters for an upcoming Solaris security book! It has been fun, but it has also been hard - not hard because I didn't know what to write, but hard to constrain myself from wanting to include too much.

The book is not intended to cover every nitty gritty detail of every security feature in Solaris - that would make it a real brick of a book! So I've had to think hard about what to include, and the level of detail of the included parts.

Parts of the book are already available on Safari Rough Cuts for review before we publish. Please leave comments on the Safari site so that nothing gets lost.

The chapter about File System Security is mine, and I've also authored the chapter about auditing (not very surprising). It hasn't been processed for publication yet, but when it is I'll post a blog entry with a link to it.

Tuesday Jun 24, 2008

EU wants to control bloggers

I just read an article which scared the hell out of me! There is an EU proposal to require a government-controlled registry of blogs. No more anonymous blogging!

"there is a need to clarify their status, and to create legal safeguards for use in the event of lawsuits as well as to establish a right to reply" - in other words, each blog needs to have a publisher, just like a newspaper, and this will require you to register with the local authorities. No more whistle-blowing through blogs...

I wonder how they plan to enforce this? Especially in a case like mine where the server resides outside the EU.

Sunday Apr 06, 2008

Importing audit records into a database

I've checked up on how my friends are progressing with the AuditAnalyzer and they have gotten quite far!

I've played with some pre-alpha stuff off and on, and the main problem has been importing audit data into the database - it has been too slow. It managed to import about 150 records/second, which may sound like a lot, but if you are like me and get audit trails from 300+ systems, it is not enough to keep up with the stream of inbound records.

Luckily they have been working on the import speed, and now have two possible solutions. One yields around 1500 records/second and the other a whopping 4500 records/second!

I can't wait until they have a new version available for me to try out :)


Monday Nov 26, 2007

root as a role and zlogin

If you have turned root into a role in a zone and try to use zlogin from the global zone to log in as root, you will see something like this:

root@global# zlogin zn1
[Connected to zone 'zn1' pts/2]
Login incorrect

[Connection to zone 'zn1' pts/2 closed]

This is because pam.conf is by default configured to prevent this, as roles may only be assumed by authorized users.

If you trust the ones who can become root in the global zone, you can remove this restriction by adding the following line to pam.conf:

zlogin  account required        pam_unix_account.so.1

Now you can zlogin directly to a role without having to log in as a normal user first:

root@global# zlogin zn1
[Connected to zone 'zn1' pts/2]
Sun Microsystems Inc.   SunOS 5.11      snv_75  October 2007

Wednesday Sep 19, 2007

Solaris audit settings for PCI DSS version 1.1

I've reviewed the Payment Card Industry Data Security Standard version 1.1 to work out which audit settings are needed to be compliant, and thought I should share the result. The reasons for sharing are twofold: I want to verify that I got it right, as many of the requirements are vague or ambiguous, and I think it is useful for those of you who also have to be compliant with the PCI DSS.

Requirement 10: Track and monitor all access to network resources and cardholder data
Logging mechanisms and the ability to track user activities are critical. The presence of
logs in all environments allows thorough tracking and analysis if something does go wrong.
Determining the cause of a compromise is very difficult without system activity logs.

10.1 Establish a process for linking all access to system components (especially access done with 
administrative privileges such as root) to each individual user. 

This is handled by default through Solaris auditing, but you need to configure what to audit.

10.2 Implement automated audit trails for all system components to reconstruct the following events:
10.2.1 All individual user accesses to cardholder data 

This requirement is met by the fr and fw audit classes, but unfortunately you cannot enable them for just the cardholder data; you will have to audit all files, which will generate a considerable amount of audit data.

10.2.2 All actions taken by any individual with root or administrative privileges 

This requirement is a bit vague. What is an action? Assuming that they mean executed commands, you can meet this requirement by using the ex audit class.

10.2.3 Access to all audit trails 

Access to the audit trails is audited using the fr and fw classes. As with 10.2.1 this will generate loads of unwanted audit records.

10.2.4 Invalid logical access attempts 

Again this requirement is vague. Access to what? Assuming that they refer to files, this requirement is met by the -fr and -fw audit classes.

10.2.5 Use of identification and authentication mechanisms 

This requirement depends on what authentication mechanisms you are using. Assuming that you just use plain Solaris it is covered by the lo class, and if you use Kerberos you need the ap class too.

10.2.6 Initialization of the audit logs 

This requirement is met by the +aa class.

10.2.7 Creation and deletion of system-level objects. 

This requirement is met by the fc and fd classes (file creation and file deletion). I assume that they only mean successful events, so we can use +fc,+fd to reduce the size of the audit trail.

10.3 Record at least the following audit trail entries for all system components for each event:
10.3.1 User identification
10.3.2 Type of event
10.3.3 Date and time
10.3.4 Success or failure indication
10.3.5 Origination of event
10.3.6 Identity or name of affected data, system component, or resource. 

All these requirements are met by the audit records generated by Solaris.

10.4 Synchronize all critical system clocks and times. 

This requirement is met by synchronizing time using NTP.

10.5 Secure audit trails so they cannot be altered. 
10.5.1 Limit viewing of audit trails to those with a job-related need 

This requirement is met by limiting who can become root, which is best handled by turning root into a role, as a role requires explicit authorization (knowing the password isn't enough).

10.5.2 Protect audit trail files from unauthorized modifications 

The default owner, group and mode of the audit trails are root, root and 640, and the only user who is a member of the root group is root. Unless you have changed that, this requirement is met by default.

10.5.3 Promptly back-up audit trail files to a centralized log server or media that is difficult to alter 

Upon audit file rotation (audit -n) it should immediately be sent through a drop box to a remote system.

10.5.4 Copy logs for wireless networks onto a log server on the internal LAN. 

This requirement has nothing to do with Solaris auditing, so I don't cover it here.

10.5.5 Use file integrity monitoring and change detection software on logs to ensure that existing
log data cannot be changed without generating alerts (although new data being added 
should not cause an alert). 

This requirement can be met by computing a MAC (mac -a sha256_hmac) of the audit file once it is terminated (audit -n) and recomputing it before you use it again.
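The mac(1) command above is Solaris-specific; the same seal-and-verify cycle can be sketched with OpenSSL's HMAC support (the key, file name and contents below are all made up):

```shell
# Seal a terminated audit file with a keyed digest and store it.
printf 'pretend audit records\n' > trail.20070919
openssl dgst -sha256 -hmac 's3cret' trail.20070919 > trail.20070919.hmac

# Before trusting the file again: recompute and compare.
if openssl dgst -sha256 -hmac 's3cret' trail.20070919 \
    | cmp -s - trail.20070919.hmac; then
    echo unchanged
else
    echo TAMPERED
fi
```

Appending even a single byte to the trail changes the digest, which is exactly the alert condition 10.5.5 asks for.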

10.6 Review logs for all system components at least daily. Log reviews must include those servers that 
perform security functions like intrusion detection system (IDS) and authentication, authorization, 
and accounting protocol (AAA) servers (for example, RADIUS). 
Note: Log harvesting, parsing, and alerting tools may be used to achieve compliance with Requirement 10.6. 

This is the hard part. The above settings will generate several gigabytes of data on one busy system.

10.7 Retain audit trail history for at least one year, with a minimum of three months online availability.

Two tips for storing audit trails: either compress them using gzip or save them on a ZFS file system with compression enabled. See this post for more information.

A trick

Since we are required to audit all file access, it will generate a gargantuan amount of data, probably several gigabytes per day per system. This got me thinking about how to minimize it without post-processing the audit trail, and I came up with a solution.

There are two sets of files for which we must audit access to: cardholder data and audit files.

If you make the cardholder data owned by a role (e.g. pci), and set the file mode so that only the role may access the file (chown pci and chmod 0600), you don't have to audit fr for everyone. It is enough to audit fr for the pci role. When the users who are authorized to access the data assume the pci role, they get audited for fr even though their normal accounts aren't.
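A sketch of that lockdown, using a made-up file name (the chown to the pci role is left as a comment, since that role only exists on such a system):

```shell
# Lock a hypothetical cardholder data file down to its owner only, so
# fr auditing can be limited to the owning role instead of everyone.
touch cardholder.dat
# chown pci cardholder.dat    # the pci role owns the data
chmod 0600 cardholder.dat     # only the owner may read or write it
ls -l cardholder.dat | cut -c1-10
# prints -rw-------
```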

However, since root can read all files, that account also needs to be audited for fr. This also takes care of auditing access to the audit files, which are only accessible by root.

To catch someone changing the mode of cardholder data, e.g. making it world readable, the pci role should be audited for +fm (successful file attribute modification).

Audit configuration files

Below is the audit configuration for PCI DSS version 1.1:





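Assembled from the classes discussed above, the selection could look something like the fragment below. This is my reading of the requirements, not an official mapping, and the per-user entries implement the trick described above:

```
# /etc/security/audit_control - system-wide pre-selection (sketch)
dir:/var/audit
flags:lo,ap,ex,-fr,-fw,+aa,+fc,+fd
naflags:lo
minfree:20

# /etc/security/audit_user - per-user additions (user:always:never)
pci:fr,+fm:no
root:fr:no
```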
Note! I have yet to try this out on a live system, but as soon as I have, I'll post the results here.


Friday Aug 31, 2007

Using ZFS ACLs to restrict what a user can do

On some systems where we are really paranoid, we restrict who can execute most setuid and setgid binaries, just in case of a zero-day exploit. I.e. we only let the administrators run those commands.

Previously if you wanted to create this kind of restriction, you had to first chmod 4750 the files, then add everyone but the user(s) you wanted to restrict to a group, and chgrp the binaries to that group. This was very messy, and you could not have two different sets of restrictions.

Note: if you change file owner, group or permissions for files in a package, you must use installf to update the software installation database.

# chmod 4750 /usr/bin/su
# chgrp sysadmin /usr/bin/su
# installf SUNWcsu /usr/bin/su f 4750 root sysadmin

Then came UFS ACLs. It allowed you to add multiple ACLs to the same file, so now you could have multiple sets of files with execute restrictions, but you still had to chmod 4750 all involved files.

Now that we have ZFS it is possible to use ZFS ACLs to revoke permissions, so if the user danny should not be able to execute /usr/bin/su, you can just add an ACL to remove the execute permission for that user.

# chmod A+user:danny:execute:deny /usr/bin/su

Now when danny tries to execute su it'll look like this:

$ ls -l /usr/bin/su
-r-sr-xr-x+  1 root     sys        34624 Feb 26  2007 /usr/bin/su
$ su -
bash: su: Permission denied

As with UFS ACLs, the way to spot that a file has an ACL is the + sign at the end of the permissions when you execute ls -l. If you want to see the full ACL, use this:

$ ls -v /usr/bin/su
-r-sr-xr-x+  1 root     sys        34624 Feb 26  2007 /usr/bin/su
     0:user:danny:execute:deny


Friday Aug 24, 2007

How system calls are audited

While we were talking about measuring the impact of auditing, Tomas gave me a nice call flow tree which I thought I'd share.

This is how syscall auditing looks (on Intel):

  + dosyscall()
    + syscall_entry()
    | |
    | + pre_syscall() (if t_pre_sys set)
    |   |
    |   + audit_start() (if audit_active set)
    |     |
    |     + au_init
    |     | |
    |     | + aui_*()
    |     + auditme() (to audit or not to audit)
    |     + au_start
    |       |
    |       + aus_*()
    + syscall_exit()
    | |
    | + post_syscall()
    |   |
    |   + audit_finish() (if audit_active set)
    |     |
    |     + au_finish
    |       |
    |       + auf_*()

Update: the ASCII graph was hand-crafted


Friday Aug 17, 2007

Measuring the impact of auditing

This has bothered me since I first ran bsmconv on Solaris 2.3. People use performance as a reason not to enable auditing on their systems, and it has been hard to convince them otherwise, as there are no hard metrics to look at.

With Solaris 10 I was given the solution to the problem - DTrace. Since the day I first read about it I've thought I'd write this script, but since it has never been something I truly needed in my job, until now, I've put it off.

Yesterday I started to write the script, and it was easier than I thought it would be. On the other hand, I know much more about DTrace and the gory details of the auditing magic in the kernel than I did when I first looked at this about two years ago.

Once I've fine-tuned the script I'll publish it, but until I've worked out all the quirks I'll just post some results for the execve system call:
the average execution time for the 1066 execve syscalls I measured was 839 µs. Of those calls, 419 were audited; the average time spent in the kernel's audit_start() function was 1.8 µs, and the average time spent in audit_finish() was 13.5 µs.

The numbers above come from using vtimestamp to measure the time spent in the audit_start() and audit_finish() functions in the c2audit kernel module. As you can see, only about 1.8% of the time is spent in the auditing code - not very much!
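As a quick sanity check of that percentage (1.8 µs + 13.5 µs of audit work out of an 839 µs average call):

```shell
# Audit overhead as a share of the average execve time quoted above.
awk 'BEGIN { printf "%.1f%%\n", (1.8 + 13.5) / 839 * 100 }'
# prints 1.8%
```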

Two things to note:
First, the execve syscall is one of the slower ones, so things will probably look different when I look at some of the faster ones.
Second, this is not the whole truth. When the c2audit kernel module is loaded, it sets t_pre_sys and t_post_sys on klwp_t, and therefore pre_syscall() and post_syscall() are executed in addition to audit_start() and audit_finish().

The numbers above do not include the other code in pre_syscall() and post_syscall(), which I need to include if I am going to measure the full impact of enabling auditing.

Thursday Aug 16, 2007

CIA and the Vatican modify Wikipedia

I just read a news article in a Swedish newspaper about the CIA and the Vatican modifying information in Wikipedia. It was discovered when the folks at Wikipedia analyzed who had modified what, and found some interesting things.

Someone from the Vatican has modified the entry on Gerry Adams (the leader of Sinn Fein) and removed a link relating to his connection to a double homicide in 1971.

The person from the CIA had only made minor adjustments, like adding the comment "Wahhhhhhh!" to the page about Mahmoud Ahmadinejad (the president of Iran).

So why am I amused by this? Because I'm an audit freak, and this is an excellent example of why you want auditing. You catch people with their hand in the cookie jar :)

Update: I've found a BBC news article about this too, so it will be easier for you to read about it.


Thursday Aug 09, 2007

ZFS the perfect file system for audit trails

Those of you who have to deal with audit trails from busy systems know that they can get really big, and when you need to store them for a couple of years they consume a considerable amount of disk.

To minimize the disk usage you can compress the audit trails, and it works really well (I've adjusted the output to right-align the size):

root@warlord# ls -l
total 19447638
-rw-r--r--   1 root     other    1353214369 Aug  9 09:27 20070728065900.20070730163539.warlord
-rw-------   1 root     other      62209268 Aug  7 08:25 20070728065900.20070730163539.warlord.gz
-rw-r--r--   1 root     other    1965073391 Aug  7 09:35 20070728065900.20070730163539.warlord.txt
-rw-r--r--   1 root     other      71460194 Aug  7 09:35 20070728065900.20070730163539.warlord.txt.gz

As you can see the compression ratio is really good (over 90%), but one problem still remains: if you need to work with the files you have to uncompress them before you can run your scripts to go and find who edited /etc/passwd. Uncompressing one file doesn't take that long, but when you don't know exactly which file the audit records you are looking for are in, things start to take time.
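The ratio is easy to reproduce anywhere, since audit trails are highly repetitive. A synthetic example (the file name and record format are made up):

```shell
# Build a repetitive sample file, compress it, and compare the sizes.
yes 'header,53,2,execve(2),,warlord,ok' | head -100000 > trail.sample
gzip -c trail.sample > trail.sample.gz
orig=$(wc -c < trail.sample)
comp=$(wc -c < trail.sample.gz)
awk -v o="$orig" -v c="$comp" 'BEGIN { printf "%d%% smaller\n", (1 - c/o) * 100 }'
```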

Enter ZFS: with on-the-fly disk compression it is the perfect file system to store audit trails. First of all you have to enable compression:

root@warlord# zfs set compression=on pool/audit

That only makes future writes compressed, so the files you already have need to be rewritten to be compressed. After having done that, it is time to look at the compression ratio:

root@warlord# ls -l
total 19447638
-rw-r--r--   1 root     other    1353214369 Aug  9 09:27 20070728065900.20070730163539.warlord
-rw-r--r--   1 root     other    1965073391 Aug  7 09:35 20070728065900.20070730163539.warlord.txt

Wait a minute! That didn't compress anything - or did it? ls -l shows the uncompressed size of the file, so you have to use du to see the compressed size:

root@warlord# du -k
554067  20070728065900.20070730163539.warlord
732399  20070728065900.20070730163539.warlord.txt

Much better! (Note that du displays the size in kilobytes while ls -l displays it in bytes.) But it still is not as good as when I ran gzip on them. Why? ZFS uses the LZJB compression algorithm, which isn't as space-efficient as the gzip algorithm, but it is much faster. If I had been running Nevada I could have used:

root@warlord# zfs set compression=gzip pool/audit

And gotten the same compression ratio as when I "hand compress" my audit trails. This is thanks to Adam, who integrated gzip support into ZFS in build 62 of Nevada.


Tuesday Aug 07, 2007

ssh job queue

Today I started to ponder a problem I can't be alone in having encountered. When you administer over 300 systems and want to perform bulk operations over ssh, there are always one or two systems which are down or unreachable, so your nifty little scripts which log on to each system to install a package, apply a patch, change a configuration setting, tweak a variable or just pull statistics will fail.

So I started toying with the idea of an ssh job queue which helps you keep track of bulk operations, so you can see on which systems the operation has completed successfully. Once I started to try this out I figured that I can't be the first one to face this problem, so I thought I'd ask you for input.
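The sketch below is roughly what I have in mind: hosts that succeed move to a done list, failures stay queued for the next run. Here run_on stands in for running the command over ssh, and the host names are made up, so the retry logic is visible without a network:

```shell
# run_on stands in for: ssh "$1" "$remote_command"
# Here it simply fails for "down1" to simulate an unreachable host.
run_on() { [ "$1" != "down1" ]; }

printf '%s\n' web1 down1 db1 > hosts.pending
: > hosts.done
: > hosts.retry
while read -r host; do
    if run_on "$host"; then
        echo "$host" >> hosts.done      # completed, never retried
    else
        echo "$host" >> hosts.retry     # still queued for the next run
    fi
done < hosts.pending
mv hosts.retry hosts.pending            # re-run later for the leftovers
cat hosts.done
```

Re-running the loop only touches what is left in hosts.pending, so the done list keeps track of the bulk operation's progress.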

How do you deal with this problem? And "pen and paper" isn't the answer I'm looking for :)

Monday Aug 06, 2007

Ever wondered what the files /var/spool/cron/crontabs/*.au are

You might have noticed some strange files in /var/spool/cron/crontabs ending in .au. These are not µ-law audio files, but auxiliary audit files for crontab, which are created when auditing has been enabled and you edit your crontab entry.

# cd /var/spool/cron/crontabs
# ls -l
total 19
-rw-------   1 root     sys         1010 Feb 25 18:04 adm
-r--------   1 root     root        1371 Feb 25 18:06 lp
-rw-------   1 root     martin        38 Jun 21 00:20 martin
-r--------   1 root     martin        45 Jun 21 00:20 martin.au
-rw-------   1 root     sys         1401 Mar 13 04:28 root
-rw-------   1 root     sys         1128 Feb 25 18:09 sys

Looking closer at what is in my .au file we find the following:

# cat martin.au
1dad35c9 0 0 0

This is quite cryptic, especially as it isn't documented anywhere but in the source! Using the source you can discern what the above settings are.

The first number (300) is the audit id, i.e. my user id. The second and third rows are the pre-selection mask, split up in two parts: first the audit-on-success flags and then the audit-on-failure flags. The next three rows are the terminal id, starting with the port, then the address type, and last the address. The port number (5f81600) is made up of two parts (major and minor) which are joined together. After that follows the address type (4), which represents IPv4, as defined in audit.h. Note that the address is made up of 4 numbers to fit IPv6 addresses, but since I logged in from a system using IPv4 only the first part is filled in. There is a gotcha here: the byte order in which the number is written depends on the architecture. The example is from my X2200 M2, so the 1dad35c9 needs to be converted to network byte order to map correctly to an IP address. The last row is the session id (2441309132).
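As an illustration, the address field can be unpacked into dotted-quad form like this (assuming the bytes are already in network order; on a little-endian x86 host like the X2200 M2 the stored value would need swapping first, as noted above):

```shell
# Split the hex address into four bytes and print them as a dotted quad.
hex=1dad35c9
printf '%d.%d.%d.%d\n' \
    "0x$(echo "$hex" | cut -c1-2)" "0x$(echo "$hex" | cut -c3-4)" \
    "0x$(echo "$hex" | cut -c5-6)" "0x$(echo "$hex" | cut -c7-8)"
# prints 29.173.53.201
```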

This file is created (and updated) when you edit crontab, which can cause a lot of confusion. The pre-selection mask used by cron is calculated by logically ORing the entry in the .au file with the user entry from audit_user and the global flags in audit_control. So if you reduce the auditing for a particular user in audit_user, you would expect the audit trail from the user's cron jobs to change too, but if the .au file has already been created the pre-selection masks are frozen.

To fix this you need to update the .au file too when you change the audit flags, or edit the crontab so that the .au file gets rewritten.




