Wednesday Apr 28, 2010

Last call for the paranoid git

I joined Sun on July 11th, 1995, so I'm very close to the 15-year mark now that I'm being assimilated into Oracle on May 1st - it's a pity I didn't make it all the way.

I've been posting entries in this blog since April 4th, 2004, but now that I'm being sucked into the huge beast that is Oracle, I'm going to end this blog and start posting at http://blog.codenursery.com/ instead, which lets me speak freely without fear of violating some corporate policy on what I can and can't say.

Code Nursery is where I spend my spare time - working on open source projects I find interesting, like:

Or helping friends and family build websites for their interest groups or small businesses - just for the mental exercise, and because I like keeping myself up to date with new technology!

According to the "Proprietary Information Agreement" I signed today, Oracle wants to claim everything I do that falls under "any current or reasonably anticipated business of Oracle" ... "whether or not conceived during regular business hours". Luckily that doesn't hold up in court in Sweden; they can only claim work I do on the job that is related to what I'm hired to do.

However, to continue contributing to open source projects I have to be really careful not to mix business with pleasure, so that is yet another reason to move away from the corporate site and use my own hardware in my spare time from now on.

For those of you who come here for Solaris auditing information: I'm gradually going to review all the posts I've made over the years and do a "clean-room rewrite" of them at the new blog, so keep your eyes peeled if you are an audit junkie like me :)

That's all I had to say - now mosey on over to the new site and read about how I access it securely...

:wq!

Sunday Apr 06, 2008

Importing audit records into a database

I've checked up on how my friends are progressing with the AuditAnalyzer and they have gotten quite far!

I've played with some pre-alpha stuff off and on, and the main problem has been importing audit data into the database - it has been too slow. It has managed to import about 150 records/second, which may sound like a lot, but if you are like me and get audit trails from 300+ systems, it is not enough to keep up with the stream of inbound records.

Luckily they have been working on the import speed, and now have two possible solutions. One yields around 1500 records/second and the other a whopping 4500 records/second!

I can't wait until they have a new version available for me to try out :)


Wednesday Sep 19, 2007

Solaris audit settings for PCI DSS version 1.1

I've reviewed the Payment Card Industry Data Security Standard (PCI DSS) version 1.1 to determine which audit settings are needed to be compliant, and thought I should share the result. The reason for sharing is twofold: I want to verify that I got it right, as many of the requirements are vague or ambiguous, and I think it is useful for those of you who also have to comply with the PCI DSS.

Requirement 10: Track and monitor all access to network resources and cardholder data
Logging mechanisms and the ability to track user activities are critical. The presence of
logs in all environments allows thorough tracking and analysis if something does go wrong.
Determining the cause of a compromise is very difficult without system activity logs.

10.1 Establish a process for linking all access to system components (especially access done with 
administrative privileges such as root) to each individual user. 

This is handled by default through Solaris auditing, but you need to configure what to audit.

10.2 Implement automated audit trails for all system components to reconstruct the following events:
10.2.1 All individual user accesses to cardholder data 

This requirement is met by the fr and fw audit classes, but unfortunately you cannot enable them for just the cardholder data; you will have to audit all files, which will generate a considerable amount of audit data.

10.2.2 All actions taken by any individual with root or administrative privileges 

This requirement is a bit vague. What is an action? Assuming that they mean executed commands, you can meet this requirement by using the ex audit class.

10.2.3 Access to all audit trails 

Access to the audit trails is audited using the fr and fw classes. As with 10.2.1 this will generate loads of unwanted audit records.

10.2.4 Invalid logical access attempts 

Again this requirement is vague. Access to what? Assuming that they refer to files, this requirement is met by the -fr and -fw audit classes.

10.2.5 Use of identification and authentication mechanisms 

This requirement depends on what authentication mechanisms you are using. Assuming that you just use plain Solaris it is covered by the lo class, and if you use Kerberos you need the ap class too.

10.2.6 Initialization of the audit logs 

This requirement is met by the +aa class.

10.2.7 Creation and deletion of system-level objects. 

This requirement is met by the fc and fd classes (file creation and file deletion). I assume that they only mean successful events, so we can use +fc,+fd to reduce the size of the audit trail.

10.3 Record at least the following audit trail entries for all system components for each event:
10.3.1 User identification
10.3.2 Type of event
10.3.3 Date and time
10.3.4 Success or failure indication
10.3.5 Origination of event
10.3.6 Identity or name of affected data, system component, or resource. 

All these requirements are met by the audit records generated by Solaris.

10.4 Synchronize all critical system clocks and times. 

This requirement is met by synchronizing time using NTP.
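On Solaris 10 this can be as simple as pointing the NTP client at your time servers and enabling the service. A minimal sketch, where the server names are placeholders for your own:

root@warlord# cat > /etc/inet/ntp.conf <<EOF
server ntp1.example.com
server ntp2.example.com
EOF
root@warlord# svcadm enable svc:/network/ntp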

10.5 Secure audit trails so they cannot be altered. 
10.5.1 Limit viewing of audit trails to those with a job-related need 

This requirement is met by limiting who can become root, which is best handled by turning root into a role, as that requires explicit authorization (knowing the password isn't enough).
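A minimal sketch of how that looks on Solaris 10, assuming martin is the user who should be allowed to assume the role:

# turn the root account into a role
root@warlord# usermod -K type=role root
# allow martin to assume the root role
root@warlord# usermod -R root martin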

10.5.2 Protect audit trail files from unauthorized modifications 

The default owner, group, and mode of the audit trails are root, root, and 640, and the only user who is a member of the root group is root. Unless you have changed that, this requirement is met by default.

10.5.3 Promptly back-up audit trail files to a centralized log server or media that is difficult to alter 

Upon audit file rotation (audit -n) the terminated file should immediately be sent through a drop box to a remote system.
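A sketch of what that could look like, assuming a central loghost and an unprivileged drop-box account (all the names are placeholders, and moving the shipped files out of the way afterwards is left out):

# close the current audit file and open a new one
root@warlord# audit -n
# ship every terminated file (i.e. everything but the not_terminated one)
root@warlord# cd /var/audit
root@warlord# for f in `ls | grep -v not_terminated`; do
> scp $f dropbox@loghost.example.com:/incoming/
> done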

10.5.4 Copy logs for wireless networks onto a log server on the internal LAN. 

This requirement has nothing to do with Solaris auditing, so I don't cover it here.

10.5.5 Use file integrity monitoring and change detection software on logs to ensure that existing
log data cannot be changed without generating alerts (although new data being added 
should not cause an alert). 

This requirement can be met by computing a MAC (mac -a sha256_hmac) of each audit file once it is terminated (audit -n), and recomputing and comparing it before you use the file again.
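A sketch of how that could be done with mac(1), assuming the key is kept in a root-only file (the paths are placeholders):

# create a secret key once, and keep a copy off the audited system
root@warlord# dd if=/dev/random of=/etc/security/audit_hmac.key bs=32 count=1
root@warlord# chmod 0400 /etc/security/audit_hmac.key
# record the MAC of a terminated audit file
root@warlord# mac -a sha256_hmac -k /etc/security/audit_hmac.key 20070728065900.20070730163539.warlord > 20070728065900.20070730163539.warlord.mac
# before using the file again, recompute and compare
root@warlord# mac -a sha256_hmac -k /etc/security/audit_hmac.key 20070728065900.20070730163539.warlord | diff - 20070728065900.20070730163539.warlord.mac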

10.6 Review logs for all system components at least daily. Log reviews must include those servers that 
perform security functions like intrusion detection system (IDS) and authentication, authorization, 
and accounting protocol (AAA) servers (for example, RADIUS). 
Note: Log harvesting, parsing, and alerting tools may be used to achieve compliance with Requirement 10.6. 

This is the hard part. The above settings will generate several gigabytes of data on one busy system.

10.7 Retain audit trail history for at least one year, with a minimum of three months online availability.

Two tips for storing audit trails: either compress them using gzip or save them on a ZFS file system with compression enabled. See this post for more information.

A trick

Since we are required to audit all file access, it will generate a gargantuan amount of data, probably several gigabytes per day per system. This got me thinking about how to minimize the volume without post-processing the audit trail, and I came up with a solution.

There are two sets of files we must audit access to: cardholder data and the audit files.

If you make the cardholder data owned by a role (e.g. pci), and set the file mode so that only the role may access the files (chown pci and chmod 0600), you don't have to audit fr for everyone. It is enough to audit fr for the pci role. When the users who are authorized to access the data assume the pci role, they get audited for fr even though their normal accounts aren't.

However, since root can read all files, that account also needs to be audited for fr. This also takes care of auditing access to the audit files, which are only accessible by root.

To catch someone changing the mode of cardholder data, e.g. making it world readable, the pci role should be audited for +fm (successful file attribute modification).
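A sketch of how setting this up might look, assuming the cardholder data lives under /data/cardholder (the path, the role name and the authorized user are placeholders):

# create the pci role and let martin assume it
root@warlord# roleadd -m -d /export/home/pci pci
root@warlord# passwd pci
root@warlord# usermod -R pci martin
# make the cardholder data accessible to the role only
# (directories need 0700 so the role can traverse them)
root@warlord# chown -R pci /data/cardholder
root@warlord# find /data/cardholder -type d -exec chmod 0700 {} \;
root@warlord# find /data/cardholder -type f -exec chmod 0600 {} \;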

Audit configuration files

Below is the audit configuration for PCI DSS version 1.1:

/etc/security/audit_control

flags:lo,-fr,+fc,+fd

/etc/security/audit_user

root:ex,fr,fw,+aa:no
pci:fr,fw,+fm:no

Note! I have yet to try this out on a live system, but as soon as I have, I'll post the results here.

[Technorati Tags: ]

Friday Aug 24, 2007

How system calls are audited

While talking to Tomas about measuring the impact of auditing, he gave me a nice call flow tree which I thought I'd share.

This is how syscall auditing looks (on Intel):

  + dosyscall()
    |
    + syscall_entry()
    | |
    | + pre_syscall() (if t_pre_sys set)
    |   |
    |   + audit_start() (if audit_active set)
    |     |
    |     + au_init
    |     | |
    |     | + aui_*()
    |     + auditme() (to audit or not to audit)
    |     + au_start
    |       |
    |       + aus_*()
    |
   ...
    |
    |
    + syscall_exit()
    | |
    | + post_syscall()
    |   |
    |   + audit_finish() (if audit_active set)
    |     |
    |     + au_finish
    |       |
    |       + auf_*()
   ...

Update: the ASCII graph was hand-crafted.


Friday Aug 17, 2007

Measuring the impact of auditing

This has bothered me since I first ran bsmconv on Solaris 2.3. People use performance as a reason not to enable auditing on their systems, and it has been hard to convince them otherwise, as there have been no hard metrics to look at.

With Solaris 10 I was given the solution to the problem - DTrace. Since the day I first read about it I have wanted to write this script, but as it has never been something I've truly needed in my job until now, I've put it off.

Yesterday I started writing the script, and it was easier than I thought it would be; then again, I know much more about DTrace and the gory details of the auditing magic in the kernel than I did when I first looked at this about two years ago.

Once I've fine-tuned the script I'll publish it, but until I've worked out all the quirks I'll just post some results for the execve system call: the average execution time for the 1066 execve syscalls I measured was 839 µs. Of those calls, 419 were audited; the average time spent in the kernel's audit_start() function was 1.8 µs, and the average time spent in the kernel's audit_finish() function was 13.5 µs.

The numbers above come from using vtimestamp to measure the time spent in the audit_start() and audit_finish() functions in the c2audit kernel module. As you can see, only about 1.8% of the time is spent in the auditing code - not very much!

Two things to note. First, the execve syscall is one of the slower ones, so things will probably look different when I look at some of the faster ones. Second, this is not the whole truth: when the c2audit kernel module is loaded, it sets t_pre_sys and t_post_sys on klwp_t, so pre_syscall() and post_syscall() are executed in addition to audit_start() and audit_finish().

The numbers above do not include the other code in pre_syscall() and post_syscall(), which I need to include if I am going to measure the full impact of enabling auditing.
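The script isn't ready to be published, but a minimal sketch of this kind of measurement could look like the one-liner below, assuming the fbt provider can see the entry and return probes of the c2audit module:

root@warlord# dtrace -n '
fbt:c2audit:audit_start:entry { self->ts = vtimestamp; }
fbt:c2audit:audit_start:return /self->ts/
{
        /* average CPU time (ns) per audit_start() call */
        @avgs["audit_start"] = avg(vtimestamp - self->ts);
        self->ts = 0;
}
fbt:c2audit:audit_finish:entry { self->tf = vtimestamp; }
fbt:c2audit:audit_finish:return /self->tf/
{
        /* average CPU time (ns) per audit_finish() call */
        @avgs["audit_finish"] = avg(vtimestamp - self->tf);
        self->tf = 0;
}'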

Thursday Aug 16, 2007

CIA and the Vatican modify Wikipedia

I just read a news article in a Swedish newspaper about the CIA and the Vatican modifying information in Wikipedia. It was discovered when the folks at Wikipedia analyzed who had modified what, and found some interesting things.

Someone from the Vatican had modified the entry on Gerry Adams (the leader of Sinn Fein) and removed a link relating to his connection to a double homicide in 1971.

The person from the CIA had only made minor adjustments, like adding the comment "Wahhhhhhh!" to the page about Mahmoud Ahmadinejad (the president of Iran).

So why am I amused by this? Because I'm an audit freak, and this is an excellent example of why you want auditing. You catch people with their hand in the cookie jar :)

Update: I've found a BBC news article about this too, so it will be easier for you to read about it.


Thursday Aug 09, 2007

ZFS - the perfect file system for audit trails

Those of you who have to deal with audit trails from busy systems know that they can get really big, and when you need to store them for a couple of years they consume a considerable amount of disk.

To minimize the disk usage you can compress the audit trails, and it works really well (I've adjusted the output to right-align the sizes):

root@warlord# ls -l
total 19447638
-rw-r--r--   1 root     other    1353214369 Aug  9 09:27 20070728065900.20070730163539.warlord
-rw-------   1 root     other      62209268 Aug  7 08:25 20070728065900.20070730163539.warlord.gz
-rw-r--r--   1 root     other    1965073391 Aug  7 09:35 20070728065900.20070730163539.warlord.txt
-rw-r--r--   1 root     other      71460194 Aug  7 09:35 20070728065900.20070730163539.warlord.txt.gz

As you can see the compression ratio is really good (over 90%), but one problem remains: if you need to work with the files you have to uncompress them before you can run your scripts to go and find who edited /etc/passwd. Uncompressing one file doesn't take that long, but when you don't know exactly which file contains the audit records you are looking for, things start to take time.

Enter ZFS: with on-the-fly disk compression it is the perfect file system for storing audit trails. First you have to enable compression:

root@warlord# zfs set compression=on pool/audit

That only makes future writes compressed, so the files you already have need to be rewritten to be compressed.
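A crude way to rewrite them, assuming nothing is writing to the files at the time: copy each file and move the copy back, so the data lands in freshly written (and thus compressed) blocks.

root@warlord# cd /pool/audit
root@warlord# for f in *; do
> cp -p $f $f.tmp && mv $f.tmp $f
> done

After having done that it is time to look at the compression ratio: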

root@warlord# ls -l
total 19447638
-rw-r--r--   1 root     other    1353214369 Aug  9 09:27 20070728065900.20070730163539.warlord
-rw-r--r--   1 root     other    1965073391 Aug  7 09:35 20070728065900.20070730163539.warlord.txt

Wait a minute! That didn't compress anything, or did it? ls -l shows the uncompressed size of the file, so you have to use du to see the compressed size:

root@warlord# du -k
554067  20070728065900.20070730163539.warlord
732399  20070728065900.20070730163539.warlord.txt

Much better! (Note that du displays the size in kilobytes while ls -l displays it in bytes.) But it still is not as good as when I ran gzip on them. Why? ZFS uses the LZJB compression algorithm, which isn't as space efficient as the gzip algorithm, but it is much faster. If I had been running Nevada I could have used:

root@warlord# zfs set compression=gzip pool/audit

And gotten the same compression ratio as when I "hand compress" my audit trails. This is thanks to Adam, who integrated gzip support into ZFS in build 62 of Nevada.


Monday Aug 06, 2007

Ever wondered what the files /var/spool/cron/crontabs/*.au are

You might have noticed some strange files in /var/spool/cron/crontabs ending in .au. These are not µ-law audio files, but auxiliary audit files for crontab, which are created when auditing has been enabled and you edit your crontab entry.

# cd /var/spool/cron/crontabs
# ls -l
total 19
-rw-------   1 root     sys         1010 Feb 25 18:04 adm
-r--------   1 root     root        1371 Feb 25 18:06 lp
-rw-------   1 root     martin        38 Jun 21 00:20 martin
-r--------   1 root     martin        45 Jun 21 00:20 martin.au
-rw-------   1 root     sys         1401 Mar 13 04:28 root
-rw-------   1 root     sys         1128 Feb 25 18:09 sys

Looking closer at what is in my .au file we find the following:

# cat martin.au
300
0
0
7ff81600
4
1dad35c9 0 0 0
2441309132

This is quite cryptic, especially as it isn't documented anywhere but in the source! Using the source you can discern what the above settings are.

The first number (300) is the audit ID, i.e. my user ID. The second and third rows are the pre-selection mask, split into two parts: first the audit-on-success flags and then the audit-on-failure flags. The next three rows are the terminal ID, starting with the port, then the address type, and last the address. The port number (7ff81600) is made up of two parts (the major and minor device numbers) joined together. After that follows the address type (4), which represents IPv4, as defined in audit.h. Note that the address is made up of 4 numbers to fit IPv6 addresses, but since I logged in from a system using IPv4 only the first part is filled in. There is a gotcha here: the byte order in which the number is written depends on the architecture. This example is from my X2200 M2, so the 1dad35c9 needs to be converted to network byte order to map correctly to an IP address. The last row is the session ID (2441309132).
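For example, a quick way to byte-swap the printed word into a dotted quad, assuming the file was written on a little-endian x86 box like mine (pack the word as a little-endian 32-bit integer, then print its bytes):

# perl -e 'printf "%vd\n", pack("V", 0x1dad35c9)'
201.53.173.29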

This file is created (and updated) when you edit your crontab, which can cause a lot of confusion. The pre-selection mask used by cron is calculated by logically ORing the entry in the .au file with the user's entry from audit_user and the global flags from audit_control. So if you reduce the auditing for a particular user in audit_user, you would expect the audit trail from the user's cron jobs to change too, but if the .au file has already been created, the pre-selection mask in it is frozen.

To fix this you need to update the .au file too when you change the audit flags, or edit the crontab so that the .au file gets rewritten.

