Tuesday Oct 30, 2007

Visa payWave

Much of the London Underground is plastered with advertisements for this new card, right now; it has various taglines, the one which particularly springs to my mind being "in the future, nobody queues".

Am I the only one thinking it needs a new one, along the lines of "in the future, all transactions are 'cardholder not present', as nobody authenticates"?

Tuesday Sep 18, 2007

Bedtime Reading

CSI's 12th annual Computer Crime and Security Survey came out, yesterday.

Update:

I decided to read it over lunch, instead :-).

Salient points which jumped off the page, at me:

  • Financial losses are up
  • Directed attacks are up
  • "Fraud" has displaced "viruses" as the primary cause of financial loss
  • More than half of all security spend is on antivirus (hey, just call it "Solaris" ;-) )
  • There's still an uncomfortably-large number of "Don't know"s when it comes to compromise, SB1386 and all its clones notwithstanding
  • There's a lot of folk who appreciate that many, many things which have been done in the name of Sarbanes-Oxley are Just Plain Silly
Make of it, what you will.

Sunday Sep 02, 2007

TX-Ranger, config script v1.0

Jeff's come up with v1.0 of the goods, bless him :-)

This hasn't been posted to opensolaris.org yet, on the grounds that it doesn't quite work correctly with the SXDE / Nevada releases; however, it works just fine with Solaris 10 11/06 (aka Update 3), which is the current version of the production code.

So, if you want to do a simple automated build of Trusted Extensions specifically on Solaris 10 11/06 and with the default label_encodings file, do the following (a rough transcript of these steps appears just after the list):

  • assume root in the global zone
  • delete any non-global zones
  • cut and paste the script into your preferred editor
  • save it, chmod it to 500
  • run it, read the README
  • copy the TX packages to a suitable scratch area
  • let it rip :-)
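By way of illustration only - the zone name "demo", the script's save path, and the DVD path (which may differ on your media) are all hypothetical here - that preparation might look like this, as root in the global zone:

zoneadm list -cv   # check for existing non-global zones
zoneadm -z demo halt ; zoneadm -z demo uninstall -F ; zonecfg -z demo delete -F
mkdir -p /tx/Trusted_Extensions
cp -r /cdrom/cdrom0/s0/Solaris_10/ExtraValue/CoBundled/Trusted-Extensions/Packages /tx/Trusted_Extensions
chmod 500 /var/tmp/txranger   # the pasted-and-saved script
/var/tmp/txranger             # choose 'r' first, to read the README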
Here's the code.

Caveat emptor: I've done my best to format the code as HTML, as I have yet to figure out how to post downloadable files to blogs.sun.com, but if you find a formatting error, please let me know ASAP.

Jeff has Done The Right Thing regarding formatting and indentation, and I can only apologise to him if this doesn't come across properly as a result of my lack of skill in HTML formatting.

NB. If you find a problem with the script as reproduced below, please report it to me, initially.

#!/bin/ksh
# Korn shell script to automate the creation of a demonstration environment
# that can be used to demonstrate Solaris 10 Trusted Extensions
# Script written by: Jeff Turner, Context-Switch Limited
# ( Jeff [dot] Turner [at] Context-Switch [dot] com )
# Version: A.0
# Creation Date: 08/13/07
#

##################################################################
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
# Copyright 2007 Context-Switch Limited. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)txranger 1.0 07/08/13"
#
##################################################################

# This script provides a simple TUI for managing labeled zones.
# It takes no arguments, but provides contextual menus which
# provide appropriate choices. It must be run in the global
# zone as root.
#
# To read the README text, invoke the program and select option 'r'
#

###########START OF PROGRAM######################################

# GLOBAL VARIABLES and SETTINGS
BON="$(tput smso)" export BON # char-sequence to turn reverse text display on
BOF="$(tput rmso)" export BOF # char-sequence to turn reverse text display off
CLR="$(tput clear)" export CLR # char-sequence to clear the screen display
CURR_DIR=${PWD} export CURR_DIR
PACKAGE_DIR=/tx/Trusted_Extensions/Packages export PACKAGE_DIR

LIST1=" SUNWtsg SUNWtsu SUNWtsr SUNWtsmc SUNWxwts SUNWdttsr SUNWtgnome-docs"
LIST2=" SUNWtgnome-tsol-libs SUNWtgnome-tsol-libs-devel SUNWtgnome-tsoljdsdevmgr SUNWtgnome-tsoljdslabel"
LIST3=" SUNWtgnome-tsoljdsselmgr SUNWtgnome-tstripe SUNWtgnome-xagent SUNWmgts SUNWjdtts SUNWjmgts SUNWjtsu"
LIST4=" SUNWkdtts SUNWkmgts SUNWktsu SUNWodtts SUNWdttshelp SUNWdttsu SUNWtsman SUNWjtsman"
LIST5=" SUNWtgnome-l10n-doc-ja SUNWtgnome-l10n-doc-ko SUNWtgnome-l10n-ui-de SUNWtgnome-l10n-ui-es"
LIST6=" SUNWtgnome-l10n-ui-fr SUNWtgnome-l10n-ui-it SUNWtgnome-l10n-ui-ja SUNWtgnome-l10n-ui-ko"
LIST7=" SUNWtgnome-l10n-ui-ptBR SUNWtgnome-l10n-ui-ru SUNWtgnome-l10n-ui-sv SUNWtgnome-l10n-ui-zhCN"
LIST8=" SUNWtgnome-l10n-ui-zhHK SUNWtgnome-l10n-ui-zhTW"
PACKAGE_LIST="${LIST1} ${LIST2} ${LIST3} ${LIST4} ${LIST5} ${LIST6} ${LIST7} ${LIST8}"

RLST1=" SUNWtgnome-l10n-ui-zhTW SUNWtgnome-l10n-ui-zhHK SUNWtgnome-l10n-ui-zhCN SUNWtgnome-l10n-ui-sv"
RLST2=" SUNWtgnome-l10n-ui-ru SUNWtgnome-l10n-ui-ptBR SUNWtgnome-l10n-ui-ko SUNWtgnome-l10n-ui-ja"
RLST3=" SUNWtgnome-l10n-ui-it SUNWtgnome-l10n-ui-fr SUNWtgnome-l10n-ui-es SUNWtgnome-l10n-ui-de"
RLST4=" SUNWtgnome-l10n-doc-ko SUNWtgnome-l10n-doc-ja SUNWjtsman SUNWtsman SUNWdttsu SUNWdttshelp"
RLST5=" SUNWodtts SUNWktsu SUNWkmgts SUNWkdtts SUNWjtsu SUNWjmgts SUNWjdtts SUNWmgts SUNWtgnome-xagent"
RLST6=" SUNWtgnome-tstripe SUNWtgnome-tsoljdsselmgr SUNWtgnome-tsoljdslabel SUNWtgnome-tsoljdsdevmgr"
RLST7=" SUNWtgnome-tsol-libs-devel SUNWtgnome-tsol-libs SUNWtgnome-docs SUNWdttsr SUNWxwts SUNWtsmc"
RLST8=" SUNWtsr SUNWtsu SUNWtsg"
PACKAGE_RM="${RLST1} ${RLST2} ${RLST3} ${RLST4} ${RLST5} ${RLST6} ${RLST7} ${RLST8}"

stty intr ^C # Interrupt set to Control-C

# GLOBAL FUNCTIONS
function screenhead
{
echo "${CLR}\\n\\t${BON} Trusted Extensions - txranger ${BOF}\\n"
echo " This program allows you to install or remove"
echo " the Trusted Extensions demo environment.\\n"
echo " Primary Network Port: ${NETPORT}"
echo " Nodename is: ${NODENAME}"
echo " Original IP address: ${IPADDR}"
echo " IPAddr will be set to: 10.1.70.${OCTE
T}" echo " Interrupt is set to: \^C"
echo " TX Source directory: ${PACKAGE_DIR}"
echo "\\n\\n"
}

function holder
{
echo "\\nPress ${BON}RETURN${BOF} to continue:\\c"
read PRESSRETN
}

####################### BODY OF SCRIPT ######

# Validate that this is running on a Solaris 10 (or Nevada) system
which zonename > /dev/null 2>&1
if [[ $? != 0 ]]
then
echo "${CLR}ERROR\\a"
echo "This is not a Solaris 10 compatible system."
echo "Exiting now."
exit 1
else
THISZONE=$(zonename)
if [[ "${THISZONE}" != "global" || "${LOGNAME}" != "root" ]]
then
echo "${CLR}ERROR\\a"
echo "This script must be executed in the global zone by the root user"
echo "Exiting now."
exit 1
fi
fi

# If we get to here, we are OK to execute

# Determine the nodename of this host and the primary network port in use.
NODENAME=$(cat /etc/nodename) export NODENAME
NODENAME="${NODENAME%%.\*}" export NODENAME
STATUS=1 export STATUS

for CHECKFILE in /etc/hostname.*[0-9]
do
grep "${NODENAME}" "${CHECKFILE}" > /dev/null && STATUS=0

if (( STATUS == 0 ))
then
NETPORT="${CHECKFILE##\*.}" export NETPORT IPADDR=$(nawk "/${NODENAME}/ {print \\$1}" /etc/inet/ipnodes) export IPADDR
OCTET="${IPADDR##\*.}" export OCTET
break
fi
done

while true
do
screenhead # call function

echo "Before you install, you are recommended to ${BON}read the Readme${BOF} document\\n"
echo "Do you want to (I)nstall or (U)ninstall the Trusted Extensions(TX) Demo"
echo "Read the (R)eadme doc or (Q)uit from this program?\\n"
echo "Please enter your choice (i/q/r/u): [ ]\\b\\b\\c"
read CHOICE other
case "${CHOICE}" in

[iI]*) # Install was selected
STEP=I export STEP
break ;;

[Qq]*) # Quit was selected
echo "Quitting the program now..." ; sleep 2
exit 0 ;;

[rR]*) # Read the README doc
STEP=R
break ;;

[uU]*) # Uninstall was selected
STEP=U export STEP
break ;;

*) # Not a valid choice
echo "\\aPlease enter the letter (as shown in the menu) associated with your choice!\\n"
holder # call the function
;;

esac
done

###################INSTALL PHASE#################
if [[ "${STEP}" == "I" ]]
then

################################
# Set the directory pathname for
# the source of the packages
################################
while true
do

screenhead # call the function

if [[ -d "${PACKAGE_DIR}" ]]
then
if [[ -d ${PACKAGE_DIR}/SUNWtsg ]]
then
break
else
PACKAGE_DIR=""
continue
fi
else
echo "The source directory for the TX packages either does not exist"
echo "or is in a different location."
echo "\\n\\nPlease enter the absolute pathname of the directory containing"
echo "the Trusted Extension packages. For example: ${PACKAGE_DIR}\\n"
echo "Pathname is: \\c"
read PATH2PACKAGES other
if [[ -d ${PATH2PACKAGES} ]]
then
PACKAGE_DIR="${PATH2PACKAGES}"
continue
else
echo "\\n\\aSorry. That pathname does not seem to be valid."
holder # call the function
read dummy
fi
fi
done

#################################
# Install the packages if they
# are not already installed.
#
# If they are installed, then offer
# to remove the packages.
#################################

while true
do

screenhead # call the function

pkginfo SUNWtsg > /dev/null 2>&1

if (( $? == 0 ))
then
# Packages must already be installed
echo "The Trusted Extensions packages appear to already be installed."
echo "Perhaps you should uninstall them first then run this install program again?"
exit 1
else
echo "Installing the packages on the system. Please be patient..."
sleep 2
for PKGLIST in ${PACKAGE_LIST}
do
echo "Installing package: ${PKGLIST}"
yes | pkgadd -d "${PACKAGE_DIR}" ${PKGLIST} > /dev/null
done
break
fi
done

screenhead # call the function

echo "Creating the network files now..."

###### Now create the network control files
CHECK=0
if [[ ! -f /var/tmp/txnetfiles.backup.tar ]]
then
# Make a backup of the primary network control files
echo "Backing up network port and file information"
BACKUPFILES="/etc/hostname.${NETPORT} /etc/inet/ipnodes /etc/inet/hosts /etc/inet/netmasks"
BACKUPFILES="${BACKUPFILES} /etc/inet/networks /etc/hosts /etc/netmasks /etc/networks"
BACKUPFILES="${BACKUPFILES} /etc/nodename /etc/user_attr /etc/passwd /etc/shadow"
BACKUPFILES="${BACKUPFILES} /etc/dfs/dfstab /etc/security/tsol/tn\*"
tar cf /var/tmp/txnetfiles.backup.tar ${BACKUPFILES} 2>/dev/null || CHECK=1
if [[ "${CHECK}" == 1 ]]
then
echo "Error encountered during the backup of original files."
echo "Exiting now."
exit 1
fi
fi

# Now, replace with the TX-demo-compliant file
# Create network port file
echo "${NODENAME} netmask + broadcast + up \\\\" > /etc/hostname.${NETPORT}
echo "addif ${NODENAME}-zones all-zones up" >> /etc/hostname.${NETPORT} && STATUS=0

# Create network ipnodes file
echo "#" > /etc/inet/ipnodes
echo "# Internet hosts" >> /etc/inet/ipnodes
echo "#" >> /etc/inet/ipnodes
echo "127.0.0.1 loopback localhost loghost" >> /etc/inet/ipnodes
echo "10.1.70.${OCTET} ${NODENAME} ${NODENAME}.global.zone" >> /etc/inet/ipnodes

# Create network hosts file
echo "#" > /etc/inet/hosts
echo "# Internet hosts" >> /etc/inet/hosts
echo "#" >> /etc/inet/hosts
echo "127.0.0.1 loopback localhost loghost" >> /etc/inet/hosts
echo "10.1.70.${OCTET} ${NODENAME} ${NODENAME}.global.zone" >> /etc/inet/hosts
echo "10.1.71.${OCTET} public" >> /etc/inet/hosts
echo "10.1.72.${OCTET} confidential>> /etc/inet/hosts
echo "10.1.74.${OCTET} internal" >> /etc/inet/hosts

# Update the /etc/netmasks file
echo "10.1.0.0 255.255.0.0" >> /etc/netmasks

# Create the required entries in the /etc/tsol directory
# trusted network remote-host data-base file
echo "10.1.0.0:cipso" >> /etc/security/tsol/tnrhdb

# trusted network remote-host template file
echo "public:min_sl=0x0002-08-08;max_sl=0x0002-08-08;def_label=0x0002-08-08;doi=1;host_type=unlabeled" >> /etc/security/tsol/tnrhtp
echo "confidential:min_sl=0x0004-08-08;max_sl=0x0004-08-78;def_label=0x0002-08-08;doi=1;host_type=unlabeled" >> /etc/security/tsol/tnrhtp
echo "ntk:min_sl=0x0004-08-08;max_sl=0x0004-08-68;def_label=0x0002-08-08;doi=1;host_type=unlabeled" >> /etc/security/tsol/tnrhtp
echo "internal:min_sl=0x0004-08-08;max_sl=0x0004-08-48;def_label=0x0002-08-08;doi=1;host_type=unlabeled" >> /etc/security/tsol/tnrhtp

# trusted network zone configuration file

echo "public:0x0002-08-08:0::" >> /etc/security/tsol/tnzonecfg
echo "confidential:0x0004-08-78:0::" >> /etc/security/tsol/tnzonecfg
echo "ntk:0x0004-08-68:0::" >> /etc/security/tsol/tnzonecfg
echo "internal:0x0004-08-48:0::" >> /etc/security/tsol/tnzonecfg

#### Now, create the zones

screenhead # call the function

# Make the /zone parent directory (if it does not yet exist)
CHECKIT=0
if [[ ! -d /zone ]]
then
echo "There is ${BON}not${BOF} a /zone directory"
echo "Creating /zone now...\\n"
mkdir /zone || CHECKIT=1
if [[ "${CHECKIT}" == "1" ]]
then
echo "Error encountered making /zone. Exiting Now."
echo "Run this program, again, and use the UnInstall option to clean up the system"
exit 1
fi
fi

# Create the zone configuration files
for ZONESETUP in public confidential ntk internal
do

case ${ZONESETUP} in
public) ZIPADDR=10.1.71 ;;
confidential) ZIPADDR=10.1.72 ;;
ntk) ZIPADDR=10.1.73 ;;
internal) ZIPADDR=10.1.74 ;;
esac

echo "${CLR}Creating zone config file for ${ZONESETUP}... Please wait"
ZCONFIG="create -b
set zonepath=/zone/${ZONESETUP}
set autoboot=true
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add inherit-pkg-dir
set dir=/opt
end
add inherit-pkg-dir
set dir=/kernel
end
add fs
set dir=/var/tsol/doors
set special=/var/tsol/doors
set type=lofs
add options ro
end
add net
set physical=${NETPORT}
set address=${ZIPADDR}.${OCTET}
end"
echo "${ZCONFIG}" > /tmp/${ZONESETUP}.config
done

# Now configure the zones
for ZONESETUP in public confidential ntk internal
do
zonecfg -z ${ZONESETUP} -f /tmp/${ZONESETUP}.config
done

set +xv

# Now install the zones
for ZONESETUP in public confidential ntk internal
do
zoneadm -z ${ZONESETUP} install
done

# Now create required files in each of the zones

for ZONESETUP in public confidential ntk internal
do

case ${ZONESETUP} in
public) ZIPADDR=10.1.71 ;;
confidential) ZIPADDR=10.1.72 ;;
ntk) ZIPADDR=10.1.73 ;;
internal) ZIPADDR=10.1.74 ;;
esac

cat > /zone/${ZONESETUP}/root/etc/default/init << EOF
#
# Copyright 1992, 1999-2002 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)init.dfl 1.7 02/12/03 SMI"
#
# This file is /etc/default/init. /etc/TIMEZONE is a symlink to this file.
# This file looks like a shell script, but it is not. To maintain
# compatibility with old versions of /etc/TIMEZONE, some shell constructs
# (i.e., export commands) are allowed in this file, but are ignored.
#
# Lines of this file should be of the form VAR=value, where VAR is one of
# TZ, LANG, CMASK, or any of the LC_* environment variables. value may
# be enclosed in double quotes (") or single quotes (').
#
TZ=GB
CMASK=022
LC_COLLATE=en_GB.ISO8859-15
LC_CTYPE=en_GB.ISO8859-15
LC_MESSAGES=C
LC_MONETARY=en_GB.ISO8859-15
LC_NUMERIC=en_GB.ISO8859-15
LC_TIME=en_GB.ISO8859-15
EOF

# set up the ipnodes file
echo "#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
${ZIPADDR}.${OCTET} ${ZONESETUP} ${ZONESETUP}.${NODENAME}.org loghost
#" > /zone/${ZONESETUP}/root/etc/inet/ipnodes

# set up the hosts file
echo "#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
${ZIPADDR}.${OCTET} ${ZONESETUP} ${ZONESETUP}.${NODENAME}.org loghost
10.1.70.${OCTET} ${NODENAME} ${NODENAME}.${NODENAME}.org loghost
10.1.71.${OCTET} public
10.1.72.${OCTET} confidential
10.1.73.${OCTET} ntk
10.1.74.${OCTET} internal" > /zone/${ZONESETUP}/root/etc/inet/hosts

# set up the nsswitch.conf file
cp /zone/${ZONESETUP}/root/etc/nsswitch.files /zone/${ZONESETUP}/root/etc/nsswitch.conf

# set up the .sysidconfig.apps file
echo "/lib/svc/method/sshd
/usr/lib/cc-ccr/bin/eraseCCRRepository" > /zone/${ZONESETUP}/root/etc/.sysidconfig.apps

# set up the shadow file
# where the root user has a password of: "root"
echo "root:lySCmJ.1txm4M:6445::::::" > /zone/${ZONESETUP}/root/etc/newshadow
sed '1d' /zone/${ZONESETUP}/root/etc/shadow >> /zone/${ZONESETUP}/root/etc/newshadow
chmod u+w /zone/${ZONESETUP}/root/etc/shadow
cat /zone/${ZONESETUP}/root/etc/newshadow > /zone/${ZONESETUP}/root/etc/shadow && rm /zone/${ZONESETUP}/root/etc/newshadow
chmod u-w /zone/${ZONESETUP}/root/etc/shadow

# set up the nodename file
echo "${ZONESETUP}" > /zone/${ZONESETUP}/root/etc/nodename

# set up the netmasks file
echo "10.1.0.0 255.255.0.0" >> /zone/${ZONESETUP}/root/etc/netmasks

# set up a sysidcfg file for the zone
echo "# The root password = root
root_password=El2UPcUnIueS6
name_service=NONE
security_policy=NONE
timeserver=localhost
system_locale=C
timezone=GB-Eire
terminal=dtterm
network_interface=vnic0 { hostname=${ZONESETUP}
ip_address=${ZIPADDR}.${OCTET}
protocol_ipv6=no
netmask=255.255.0.0
default_route=10.1.70.${OCTET} }" > /zone/${ZONESETUP}/root/etc/sysidcfg

# Configure the dfstab file for the zone
# Create a shared directory with content

mkdir -p /zone/${ZONESETUP}/root/export/sharedir
mkdir -p /zone/${ZONESETUP}/root/etc/dfs
echo 'share -F nfs -o rw -d "security shared directory" /export/sharedir' > /zone/${ZONESETUP}/root/etc/dfs/dfstab
banner "${ZONESETUP}" > /zone/${ZONESETUP}/root/export/sharedir/${ZONESETUP}_file

done

###### Now, boot up the zones to allow SMF to configure the manifest
echo "Booting zones... This could take some time... Please be patient."

for ZONESETUP in public confidential ntk internal
do
echo "\\n\\nBooting the zone: ${BON}${ZONESETUP}${BOF}"
echo "As this is the first boot, the SMF manifest needs to be built"
echo "Please wait. \\c"
zoneadm -z ${ZONESETUP} boot
sleep 15
while true
do
echo ".\\c"
ps -efZ | grep "${ZONESETUP}.*manifest.*import" >/dev/null
if (( $? != 0 ))
then
break
fi
sleep 10
done
done

###### That concludes the zone setup

##### Now for the user creation

##### Enable the NFS server service
svcadm enable nfs/server

#### Now, reboot the system

echo "\\n\\nThe system needs to be rebooted for the demo to take effect"
echo "Rebooting now... Please wait..."
init 6

fi
############END OF INSTALL PHASE#################

######################UNINSTALL PHASE################
if [[ "${STEP}" = "U" ]]
then
screenhead # call the function

# First, remove the Zones
zoneadm list -cv | grep 'public' > /dev/null
if (( $? == 0 ))
then
for ZNAME in public confidential ntk internal
do
echo "Halting zone: ${ZNAME}"
zoneadm -z ${ZNAME} halt
sleep 10
echo "Uninstalling zone: ${ZNAME}"
zoneadm -z ${ZNAME} uninstall -F
sleep 10
echo "Deleting zone: ${ZNAME}"
# Force the zone to be deleted to avoid user-interaction
zonecfg -z ${ZNAME} delete -F
done
else
echo "Zones are not installed. No removal required"
fi

# Now, remove the packages (should be fast if the zones do not exist)
pkginfo SUNWtsg > /dev/null 2>&1
if (( $? == 0 ))
then
echo "Removing the TX packages from the system."
# Attempt to Unregister from the Product registry first
prodreg unregister -fr -u "Solaris Trusted Extensions" -i 1 > /dev/null 2>&1
# Then remove the packages
for PKGLIST in ${PACKAGE_RM}
do
echo "Removing package: ${PKGLIST}"
yes | pkgrm ${PKGLIST}
done && echo "All packages removed."
else
echo "TX packages are not installed. No removal required"
fi

# Now, re-instate the backup files
if [[ -f /var/tmp/txnetfiles.backup.tar ]]
then
echo "Recovering backed-up files"
tar xvf /var/tmp/txnetfiles.backup.tar && \\
mv /var/tmp/txnetfiles.backup.tar /var/tmp/txnetfiles.backup.tar.old
else
echo "No files to recover."
fi

echo "Rebooting now..."
init 6
fi
###############END OF UNINSTALL PHASE################

###############README PHASE##########################
if [[ "${STEP}" = "R" ]]
then
more << EOF
${CLR}
The ${BON}txranger${BOF} Program
====================
${BON} *** WARNING *** WARNING *** WARNING *** WARNING *** WARNING *** ${BOF}

The txranger program is intended for use on a non-production
system. You are ${BON}strongly advised${BOF} not to use this program
on a production system.

You are also advised to make a flash-archive backup of the system upon
which you wish to install this TX demo-environment. This will, therefore,
allow you to re-install the system back to its original state once
the demo-environment has been tested.

You are recommended to run this program on a system that can
easily be re-built using JumpStart, as the uninstall processes
do not, necessarily, re-set every file that may have been amended
or created during the demo-install process.

${BON} *** INFO *** INFO *** INFO *** INFO *** INFO *** INFO *** ${BOF}

The purpose of the txranger program is to install (or uninstall)
a demonstration environment in which the features of the Solaris 10
${BON}Trusted Extensions (TX)${BOF} can be tested.

The program will install the Trusted Extensions packages from a named
directory. By default, the program expects to find the individual
software packages in the ${BON}/tx/Trusted_Extensions/Packages${BOF} directory.

The program will also install four Solaris 10 Zones on the system, where
each zone will relate to a Security Classification (public, confidential,
ntk [need-to-know] and internal). The files for the Zones will be
installed in a ${BON}/zone${BOF} directory.

The program will also create or update a series of control files
related to networking and security-management. These changes
will allow the four Solaris 10 Zones and their respective classifications
to be used for testing purposes.

Finally, three user identities will be added to the system so that these
user IDs can be used to test the settings of the Trusted Extensions
environment.

In addition to installing this demo-environment, the txranger program
can be used to uninstall the demo environment.

Options are provided on the main menu screen, as shown below:

----------MAIN MENU DISPLAY-----------------------------

The ${BON}txranger${BOF} program

This program allows you to install or remove
the ${BON}Trusted Extensions${BOF} demo environment.

Primary Network Port: eri0
Nodename is: moon
Original IP address: 192.168.2.34
IPAddr will be set to: 10.1.70.34
Interrupt is set to: ^C
TX Source directory: /tx/Trusted_Extensions/Packages

Before you install, you are recommended to ${BON}read the Readme${BOF} document

Do you want to (I)nstall or (U)ninstall the Trusted Extensions(TX) Demo, Read the (R)eadme doc or (Q)uit from this program?

Please enter your choice (i/q/r/u): [ ]

--------------------------------------------------------

The information at the top of the screen shows you what the official
nodename of the system has been found to be, which primary network
port will be altered, what the current and demo-environment IP addresses
are (or will become), which key-sequence can be used to act as the
Interrupt key [which should be used with caution!] and the pathname
of the source of the Trusted Extensions software packages.

[NOTE: The package names match those as distributed with the Solaris
10 Release 3 (11/06) version of the Operating System.]

${BON}User Identities:${BOF}
----------------

Once the demo-environment has been installed and the system rebooted,
you will still be able to log in as the ${BON}root${BOF} user using
the appropriate password.

In addition, three more user identities will be available. These are:

userp1 (Classification - PUBLIC)
userc1 (Classifications - PUBLIC and CONFIDENTIAL)
useri1 (Classifications - PUBLIC, CONFIDENTIAL, Need-To-Know and INTERNAL)

In each case, the password for the user is the same as their login name.
[Yes, I know that it is not secure, but ${BON}this _IS_ a demo!${BOF}]

${BON}Testing NFS Shares:${BOF}
-------------------

The demo-environment also creates an NFS shared directory in each of
the classified Zones. The pathname that is shared is
${BON}/export/sharedir${BOF}

The IP address of the classified zone will be as follows:

10.1.71.## public zone
10.1.72.## confidential zone
10.1.73.## ntk zone
10.1.74.## internal zone

where ## is the same value as the last octet value of the
demo-system's original IP address.

You may wish to try ${BON}dfshares IPaddr${BOF} when logged in to
one of the classified zone environments to see if you can see what
is being shared in a zone that is in a different classification to
your current classification.

${BON}Finally:${BOF}
--------

Context-Switch Limited are currently working on some demonstration
exercises to be used within the demo-environment.

These will be made available at this URL at some point:

http://www.context-switch.com/docs/training.htm

We hope that this demo-environment helps you understand the features
of the Solaris 10 Trusted Extensions security environment a little better.

Jeff Turner, Context-Switch Limited
http://www.Context-Switch.com
August 2007
EOF

fi
###############END OF README PHASE###################

Friday Aug 31, 2007

A Holy Grail: Non-Web Single Sign-on across Multiple Labels, with Trusted Extensions and Secure Global Desktop

As a result of lots of configuring and a workaround from Stephen Browne, I've finally got a significant element of my Trusted Extensions (TX) lab environment to where I wanted it to be!

Background, and Problem Statement

One of the things I've been working on, is eliminating the need for users to log on to systems they access from a TX environment. After all, they've already authenticated to the TX environment to its satisfaction, in order to get their session running in the first place; so, why should they need to enter more passwords in order to use remote, per-label systems?

Password-entry profusion is something most folk working in a single-label world of distributed systems are reasonably pragmatic about living with, but it really starts to become a bind in a multi-label environment: to start productively doing their job once they come on shift or watch, a user would most likely need to enter passwords for systems which have an isolated instance at every label in their clearance range. This makes staff changeovers take much longer than the single smartcard-swap and password entry you'd expect in a streamlined Sun Ray environment.

So, as the Krikkiters said about the rest of the Universe at large, "it has to go".

Technology Choices

The initial idea was to use Kerberos, on the grounds that it's elegant and I'm reasonably familiar with it. If each zone in the TX environment was a Kerberos client, and there was a KDC on each stovepipe network, then a bit of scriptage (or, more likely, PAMmery) in a user's Trusted Path home directory could potentially call runinzone (see page 15) or something similar, to do a kinit in the zone and thus get the user a per-zone ticket. The "runinzone or similar" hack would be necessary as the zone_enter() call with which TX engages a user with processes running at a label (ie, in a non-global zone) doesn't traverse the non-global zone's PAM stack, so tweaking around with /zone/<zone>/root/etc/pam.conf wouldn't be productive.
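For the record, here's the rough shape of that idea, as a purely hypothetical ksh sketch - credential handling is waved away entirely, and I'm guessing at how runinzone would actually be invoked:

# Entirely illustrative: for each running labelled zone, obtain the user
# a per-zone Kerberos TGT. runinzone is from the TX Developer's Guide.
for z in $(zoneadm list | grep -v '^global$')
do
runinzone "$z" kinit "${LOGNAME}"
done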

There are "various things being done" to Kerberos to make it play more nicely in a TX environment, so while this is still ongoing, I was left scratching my head.

By considerable good fortune, I caught up with John Pither, who told me about the "JDS Integrated Mode" in SGD. This does some cunning single sign-on to the SGD server when you log into your regular account, and nails an extra menu into the JDS Launch tool, populated with the same pick-list apps you get in your SGD Webtop app menu, so that you can use the apps and render their windows in your main desktop session without having to launch a browser and manually log into SGD.

As users are likely to migrate eventually from Trusted CDE to Trusted JDS on their TX / SNAP environments, "game on"!

SGD Integrated Mode and the Trusted JDS Launch Tool

Integrated Mode works by adding an extra action to those performed at user login (or, in our case, non-global zone entry). This looks up the .tarantella/tcc/profile.xml file which gets installed in the user's home directory when Integrated Mode is first set up, looks for the SGD server defined in the file's <url> tag, and authenticates to it with the token in the <AT> tag. As each user has a home directory at each label in their clearance range, plus one on Trusted Path, setting this up means that a given user has multiple .tarantella/tcc/profile.xml files, one per label, and they are different from each other.

Extra menu items, to integrate with the Launch tool, are also copied into the user's .gnome2/vfolders hierarchy.

This is all fine, so far; and the lookup / authenticate action is independent of the PAM stack, so it works just as well on a zone_enter() as a regular login.

(An aside: we're not running the SGD server in a non-global zone on the TX box right now, as it doesn't work; SGD wants to bind its own X server, and the X11 ports are already in use as multi-level ports across all zones by TX's own label-aware X server. The SGD team assure me that this will be addressed in the next release, but right now, you just need to have an SGD server running on regular Solaris on each of your stovepiped, labelled networks.)

However, there's a snag with the Trusted JDS Launch tool; when a user starts a session, it reads its configuration from the user's home directory on Trusted Path (since Trusted Path is the label which paints the Launch tool on the display, anyway) and leaves it at that. It doesn't read any further configuration from user home directories at other labels in other zones. It would be Really Cool if the behaviour associated with the Trusted JDS Launch tool was commensurate with the configuration of the user's home directory at the label at which the currently-shown workspace is running; the RFE is in, and I'm assured that the functionality will be implemented in the next-but-one incremental release of Solaris 10 (ie Update 5, for folk keeping count).

Workaround

The current workaround - which most organisations are likely to find acceptable anyway - is to:

cp ~/.gnome2/vfolders/applications/*.desktop ~/Desktop

...for each user at each label within their clearance range where SGD-style SSO is required. This puts the application launch actions which would be presented in the Launch tool, on the backdrop.
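As a sketch of what that means in practice, run as root from the global zone - the zone names and home-directory layout here are purely illustrative, and "fred" is a hypothetical user:

# Copy the SGD launch actions onto fred's backdrop at each label
for z in public confidential ntk internal
do
FREDHOME=/zone/${z}/root/export/home/fred
cp ${FREDHOME}/.gnome2/vfolders/applications/*.desktop ${FREDHOME}/Desktop
done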

So once this is all set up, a user can log into their TX environment, switch to a workspace at the appropriate label, click the appropriate icon on their backdrop and be presented with an authenticated session to whichever remote system or application they need to access, at the appropriate label.

Job done :-).

btw, a small teaser; I put "non-web" in the title to distinguish this type of single sign-on from other types of single sign-on. I'm thinking of writing a posting summarising, and perhaps comparing, the types of single sign-on I'm aware of. Maybe more, later...

Thursday Aug 30, 2007

Mobile / Home-based Computing and Duress

With the continued rise in home-based and mobile working, the possibility of staff being forced to access and potentially modify data by suitably-armed ne'er-do-wells becomes a genuine - if niche - security issue.

I was chatting to a pal on Friday evening who has an Armed Forces background, about duress situations and passwords which might be required.

It turns out that there are actually three categories of duress, these being:

  • local: a threat to your person, which will be exercised unless you do what you are told (eg: a gun to your head)
  • divorced: a threat to your family or other people you personally care about (and who are in a different location), which will be exercised unless you do what you are told (eg: a gun to your wife's head)
  • remote: a threat to individuals unknown to you, which will be carried out unless you do what you are told (eg: a bomb in a populated area).
Taking this into account, it's possible that a well-designed system which authenticates users based on a username and password would require up to 4 passwords per user - one for legitimate login in a non-duress situation, and three more, one for each type of duress!

It's entirely possible that all these different categories would be required, as different actions would be desirable based on the nature of the duress. For instance:

Local duress:

  • log me in, increase level of user activity logging on my account, start signing logs if not done already
  • start backups / snapshots of databases to which I have access, my home directory, etc
  • alert security personnel as to my location and the fact I'm in peril, request their intervention
Divorced duress:
  • log me in, increase level of user activity logging on my account, start signing logs if not done already
  • start backups / snapshots of databases to which I have access, my home directory, etc
  • alert security personnel to the fact that folk I care about are in peril, contact appropriate authorities but remain on standby
Remote duress:
  • log me in, increase level of user activity logging on my account, start signing logs if not done already
  • start backups / snapshots of databases to which I have access, my home directory, etc
  • alert security personnel to the fact that there is a threat to some remote location which can't be disclosed right now, contact appropriate authorities and remain on standby
...or whatever is considered appropriate for the situation, by organisational policy. If it is not considered useful to make a fine-grained characterisation of the duress in order to be able to instruct authorities, the different situations above can be collapsed somewhat.

To re-iterate, in these days of remote working and given the nature of data which many folk have access to, the need for a duress password (or other duress-alerting) system is becoming increasingly important. In an infrastructure designed around "Defence in Depth" principles, a duress password is not only the "last line of defence" for an imperilled legitimate user, but it does for them what smartcards, shared-secret tokens, etc cannot, by enabling them to surreptitiously raise a useful alert.

In fact, for some kinds of protectively-marked data, it's fair to say "if a user's physical location isn't inside a suitable building with appropriate authenticating physical access controls and on-site security personnel, then it's in battlespace".

I can see various points in Sun products at which a duress capability could be inserted; an LDAP server would be the most obvious place (as both normal and duress passwords would be stored there, and the LDAP server would be the natural point at which to record the use of a duress password, approve access as though the password was correct, and raise an alert to some workflow system which would do all the audit and snapshot-wrangling). Changes in account maintenance software would be required, in order to be able to change both normal and duress passwords, but otherwise the surrounding impact would be small provided LDAP was used for pretty much all user authentication (which, frequently, is the case)...
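To make the shape of that concrete, here's a toy ksh sketch of just the branching logic; every helper name in it (check_password, grant_session, raise_alert, deny_session) is invented for illustration, and a real implementation would live in the LDAP server or a PAM module, not a shell script:

# check_password is imagined to compare the entered password against the
# user's normal hash and their three duress hashes, reporting which matched.
# The logging/backup actions listed above would hang off raise_alert.
case "$(check_password "${LOGNAME}" "${ENTERED_PW}")" in
normal) grant_session ;;
local) grant_session ; raise_alert "local duress: user in peril, intervene at their location" ;;
divorced) grant_session ; raise_alert "divorced duress: user's family in peril, stand by" ;;
remote) grant_session ; raise_alert "remote duress: threat to a third-party location, stand by" ;;
*) deny_session ;;
esac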

Why a Passport can't be treated as a definitive proof of identity...

It's "that time of the decade, again" when I need to renew my British passport.

I'm relieved to see (from here) that I don't need to have a mad dash round to find someone who can put a declaration and signature on the back of my photograph (in suitably microscopic writing) to the effect that they assert that I'm me - while my ponytail went the way of the scissors the better part of a decade ago and I'm a little plumper around the cheekbones than I was, I'm still recognisably "me" in my old passport photo.

This led me to think about the bootstrapping mechanism for new passports, and in particular, how folk are deemed worthy of being able to assert to the Passport Office that someone is genuinely who they claim to be, to the Passport Office's satisfaction.

The list of approved counter-signatories can be found at http://www.passport.gov.uk/passport_countersign.asp

From the large list presented - and notwithstanding the extending clause of "someone of similar standing in the community" - I suspect that the average person wouldn't have too much trouble finding someone who could be duped or bribed into providing a false assertion of identity for the Passport Office...

Sunday Aug 26, 2007

WGA server outage: "I told you so"

Every now and again, something I predict, comes to pass.

While the consequences aren't as dire as I thought, way back when (in that machines aren't shutting down), I wouldn't expect any third-party software house to write anything which uses DirectX, ever again, in case the situation is repeated.

(Outage news via Geoff.)

Update:

Actually, it is as dire as I thought. See here. Microsoft just "upgraded" Vista to introduce "the Black Screen of Death", so successful attacks against WGA servers now will perpetrate a worldwide Vista outage.

Silly boys.

Saturday Aug 25, 2007

Airport security, Dutch-style

Last Tuesday, I had to go to Amersfoort for the day, for a customer meeting. Now that I'm officially signed-off to fly (regular readers will know I have Deep Vein Thrombosis right now), the plan was to drive to Heathrow and fly to Schiphol, from where the customer's account manager had kindly offered to drive me to the meeting and back.

Everything went according to plan, and a useful and productive meeting was had; we came up with 5 possible ways to solve the customer's problem (of varying costs, complexities and likely accreditabilities), and the customer now has a small write-up of them all, for consideration.

When I got back to the airport, I was very pleasantly surprised to see how the Dutch airport security system worked. It's worth a "compare and contrast".

At Heathrow (and every other major international airport in the UK), you go through the front door, and are faced with a "ground side" area containing check-in desks, shops, cafes etc. At any point before your flight, you can go "air side" by presenting your passport and boarding pass for inspection, and then joining the (usually long) queue for the security arches and hand luggage scanners. While queuing, I usually take the opportunity to transfer all the metal I have about my person (other than belt buckle and glasses) into my jacket, so that it can all go through the scanner that way. Laptops have to come out of bags and be scanned separately, shoes have to be scanned, and containers of liquids have to be scanned (and there's the usual thing about bottles being <100ml, etc).

Once you're through to "air side" proper, there are more shops (usually more upmarket than on ground side, expecting folk to take advantage of duty free offers), cafes, plenty of seating, etc. Any purchases you make here, get sealed in transparent plastic bags and franked with an authenticity stamp. When it's time to go to your gate, you take your hand luggage and go; the only further check performed is when you present your boarding pass to board the aircraft.

(While it's not particularly on-topic, I was unsurprised to see that the IRIS biometric enrolment station was out of service owing to technical faults.)

At Schiphol, things are a little different - and in several ways, much better.

As you'd expect, once you're through the front door, you have "ground side" facilities. However, on moving to notionally "air side", all that happened was that I had to present my passport and boarding pass - and there I was, in an environment which looked like a very Dutch version of a British "air side" (hint to fellow travellers: the mini-Rijksmuseum in Terminal 2 is a lovely place to kill a little time, if you have some to spare). When the overhead signs told me to go to my gate, I did - and that's where I found the security arches and bag scanners.

I think that this approach is so much more elegant than the British approach, for the following reasons:

  • true "air side" - the area between the arches and the aircraft door - is much, much smaller, and folk are in it for a much shorter length of time; it would be very much harder at Schiphol for ground staff with air-side clearance to smuggle something to a passenger, or vice versa
  • security arches scale with the number of gates - add more gates, you get more arches with them (thus reducing the infamous 2-hour security queues you get at Heathrow rush hour, owing to the inability to expand the centralised arch numbers)
  • different security procedures can be employed as required by destination countries, on a per-flight basis (eg shoe and liquid checks, or not, as required)
Liquids purchased at airport shops would still need to be bagged, sealed and stamped, though.

So, BAA, how about following this example?

Also, again slightly off-topic, while the Dutch equivalent of IRIS (called Privium, and apparently in use since 2001) located at Passport Control appeared to be working, I didn't see anybody use it.

Friday Aug 24, 2007

TX-Ranger script v1.0 gets first Happy Customer

...and he's a Sun Fellow :-).

v1.0 has now been submitted for upload to the TX area of opensolaris.org; I'll post an update pointing to it, once it's there.

v1.0 is fairly basic; it will install the TX packages and build zones suitable for use with the standard label_encodings, and configure the all-zones interface to give a vanilla, working TX environment. Still, for initial TX investigation and evaluation, it's useful - it was exactly what Jim was looking for. More flexible and customisable capabilities will be coming in a future version...

Update:

The initial script doesn't work properly on SXDE / Nevada, so - as you might expect - it has not been accepted for upload to opensolaris.org in its current state.

However, for folk using Solaris 10 11/06 (aka "Update 3"), it works just fine.

Therefore, pending tweaks to make it work in current cutting-edge environments, please find a copy of the v1.0 script here.

Sunday Aug 12, 2007

TX Ranger: Update on Developments

"The day job" has kept me very seriously occupied for the last few weeks, so I've not had a great deal of opportunity to work on TX Ranger stuff.

However, last week, I had a bit of luck.

I've specced-up a couple of commands which ought to make the "heavy lifting" of enabling a label far more straightforward than it currently is, and Jeff Turner, Managing Director of Context-Switch (and one of the Sun Ed trainers I spent a couple of days training-up on Trusted Extensions, last week) has very kindly offered to write them. While I'm still happy going down to state machine levels, it's been rather too long since I hung up my coding gloves for me to realistically do it myself.

Anyway, while I gather that there's a bunch of Jumpstart scripts in development for TX which will do all this, I've been careful not to look at them yet, so that there will be no legal issues in having what Jeff writes, being posted to opensolaris.org. Jeff's also happy for his scripts to be open, bless him.

So, to whet the appetite while Jeff codes, here's what's coming:

Assumptions:

  • All IPv4, no IPv6
  • No LDAP integration (ie, local files only); folk will be able to do DNS integration, if they feel they need it, manually
  • VLANs are untagged
  • ZFS for zones (so we can have a scratch zone to hand in the default build, which we can then go a-cloning) - but we won't assume that our user has set up /zone as a zpool, just that /zone exists as a separate filesystem...
  • A richer "standard" label_encodings for users to select usable labels from - and potentially a script to clean out labels which aren't chosen
  • Ability to handle change requirements to SRSS config (ie, primary interface on the box is vni0)
Procedures to script in a nice meta-package:
  • Build and do basic config on a zone, give it a label and an IP address, and either a dedicated physical interface or an [interface]:[instance]
  • Delete a zone (ie, tear a zone down and destroy its config)
  • Change the global zone's IP address (and, if we include SRSS support, the IP address range for the Sun Rays, firmware server, etc etc)
Note that all our scripted procedures are to be run as root from within the global zone.

How we go about setting / changing zone parameters involves the runinzone script from the TX Developer's Guide...

So, what the TX-Ranger initial install procedure needs to do, is:

  • Install the TX packages, a la the Java wizard, from /cdrom/cdrom0/s0/Solaris_10/ExtraValue/CoBundled/Trusted-Extensions (or just pkgadd from the Packages subdir - I'll check with Darren to see whether there are any installation ordering requirements now)
  • Copy our rich label_encodings (which I'll craft) to /etc/security/tsol/label_encodings
  • Copy a similarly-rich tnrhtp (which I'll craft) to /etc/security/tsol/tnrhtp
  • Ditto for a rich tnzonecfg
  • Search /etc/vfstab for the slice configured as /zone; comment it out, create the /zone zpool (a sketch of this step follows the list)
  • Build our first zone (PUBLIC), for cloning - we want to halt the zone at the point where the packages are installed and SMF has imported its manifest, but before any sysidtool-related config has been entered
  • Reboot (or tell the user that they need to)
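For that vfstab / zpool step, a minimal sketch - assuming /zone is currently a UFS filesystem on a hypothetical slice c0t0d0s7:

umount /zone
cp /etc/vfstab /etc/vfstab.pre-tx
sed '/\/zone/s/^/#/' /etc/vfstab.pre-tx > /etc/vfstab   # crude comment-out of the old entry
zpool create -f zone c0t0d0s7   # a pool named "zone" mounts at /zone by default; -f as the slice held UFS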
Note that I think we probably shouldn't look at automating the install of SRSS as part of TX Ranger - a JET module for it is being worked on :-).

Now, on to the things that the scripts need to do:

(Notation: *** = heading of procedure, ** = note on which zone the changes need to be made in, * = procedural element)

*** Build and do basic config on a zone, give it a label and an IP address, and either a dedicated physical interface or an [interface]:[instance]

I think the command should look like:

# activate-label <label> <interface> <IP addr>

** In the global zone:

* Verify that there is no clashing IP addr in /etc/hosts, add an entry mapping the new address to the short version of the label name (which will also be the hostname of the new zone)
* Verify that /etc/hostname.[interface] exists and comprises "0.0.0.0"; create it if it doesn't
* Verify that the interface is plumbed; plumb it if it isn't
* Add two entries to /etc/security/tsol/tnrhdb:
Entry 1 is of the form "[IP addr]:cipso"
Entry 2 is of the form "[subnet base address associated with IP addr]:[label]"
* Restart tnrhdb
* Use either Expect and zonecfg, or scriptably hack the XML in /etc/zones (naughty, as it's a private interface), to do the functional equivalent of the following (a here-document alternative is sketched after this procedure):
# zonecfg -z [zone name]
> add net
> set physical=[interface] (and note that, in an [interface]:[instance] scenario, you always just specify [interface] and let the OS sort it out)
> set address=[IP addr]
> end
> commit
> exit
* Clone the zone from PUBLIC with zoneadm -z [label] clone PUBLIC
* Make the tweaks necessary to avoid having to use sysidtool to set the zone up - I managed to find my old internal blog entry for doing this :-)
* Populate /zone/[label]/root/etc/hosts with:
127.0.0.1 loopback loghost
[IP addr] [label]
[IP addr of vni0] [nodename of the global zone]
* Populate /zone/[label]/root/etc/nsswitch.conf; set everything to point to files
* Insert the global zone root user's crypt+salted / MD5ed / sunmd5ed root passwd in /zone/[label]/root/etc/shadow - and, obviously, check /etc/security/policy.conf to see what algorithm is in use and reflect it in /zone/[label]/root/etc/security/policy.conf ...
* Insert "TZ=GB" into /zone/[label]/root/etc/default/init
* rm /zone/[label]/root/etc/.UNCONFIGURED
* touch /zone/[label]/root/etc/.NFS4inst_state.domain
* Ensure that /zone/[label]/root/etc/nodename is [label]
* Ensure that /zone/[label]/root/etc/hostname.[interface] is [label]
* Ensure that /zone/[label]/root/etc/hostname.vni0 is set to the global zone's nodename
Borrow liberally from the runinzone script to do the following:

* zoneadm -z [label] boot

** in the new labelled zone:

* ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N "" -C root@`hostname` ; ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N "" -C root@`hostname`
* netservices limited
* svcadm disable auditd
* svcadm disable cde-login

...and we're done. Phew!
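One note on the Expect-or-XML-hackery step above: zonecfg will happily read its commands from standard input, so a here-document may be all that's needed - a sketch, with hypothetical variable names:

zonecfg -z "${ZONENAME}" <<EOF
add net
set physical=${INTERFACE}
set address=${IPADDR}
end
commit
exit
EOF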

*** Delete a zone (ie, tear a zone down and destroy its config)

deactivate-label <label> feels like the right command syntax.

** In the global zone:

* zoneadm -z [label] halt (and watch that SMF doesn't try to start it up again - it usually does, requiring a second halt to actually halt the zone)
* zoneadm -z [label] uninstall -F
* Restart tnrhdb

...and we're done. User clearance management is Somebody Else's Problem, IMHO.

*** Change the global zone's IP address (and, if we include SRSS support, the IP address range for the Sun Rays, firmware server, etc etc)

I think I'll leave this for another day :-)

Wednesday Aug 01, 2007

More National ID Card food for thought...

While an immovable appointment for yet another round of blood tests (I have Deep Vein Thrombosis in my legs - Don't Ask) annoyingly prevented me attending the DTI conference that Robin went to, I nonetheless had my mind expanded at the initial kick-off meeting of Intellect's Biometrics Working Group a couple of weeks ago last Thursday.

While I've been somewhat sceptical about the usability of biometrics for some time now, the session was well worth attending. As well as having representation and presentation from staff-who-must-remain-nameless at the Home Office, we were fortunate enough to have Professor John Daugman (whose principal claim to fame is the characterisation of the analysis and transforms needed to authenticate people by iris recognition) presenting on issues he has regarding the N-to-N biometric comparison which is required at biometric registration time. An N-to-N comparison is needed to ensure that a person can't turn up on one day with one set of papers and get an ID card, and turn up the following day with a different set of papers, and get a second and different ID card.

Daugman has his head screwed on properly, and then some. While the paper he presented doesn't appear to have made it to the web yet, he calculates the number of biometric comparisons which need to be made at biometric enrolment time for the proposed UK National ID card to be - for a database of 45 million principals, ie the UK adult population - around 10^15 to ensure biometric non-duplication. 10^15. Ouch.
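(The arithmetic is straightforward, if sobering: guaranteeing non-duplication across N enrolled principals means comparing every pair - N(N-1)/2 of them - and with N = 45 million, that's roughly (4.5 x 10^7)^2 / 2, which is indeed right around 10^15.)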

He cited the example of the UAE biometric database, which makes 14 billion comparisons daily - this is 1/5000th the size of what would be needed for the UK National ID Card system.

Daugman is currently undecided-but-tending-to-sceptical about combining multiple biometrics; he is concerned that the accuracy will average rather than be additive. Naturally, he believes scaling this out will require new approaches to search: fuzzy rather than exhaustive searches, and the use of adaptive decision thresholds to reduce the risk of probability summation of False Match likelihoods. Using fuzzy search also potentially causes issues to arise when isolating weakly-differentiated but nonetheless different samples.

Of course, any check other than enrolment is a straightforward 1-to-1; a person presents a credential to an appropriate officer, the biometric on the credential (or stored in some database) is checked against the individual's stored biometric as mapped to their credential, and the match between the ID and the biometric is either accepted (at which point, the credential's presenter is validated) or rejected (at which point, the presenter of the credential is subject to whatever due process of law). Still, the inability to eliminate the single N-to-N comparison required, makes enrolment a very big hill to climb.

While I haven't yet listened to the episode of "File on Four" which Robin has posted about here, I'd expect it to be worthwhile...

Thursday Jul 26, 2007

Automated Lip Reading

I watched a fascinating documentary on television the other night, which not only revealed some interesting information on an infamous historical figure, but also raised the curtain on a technology I'd expect to see much greater deployment of in the near future.

Please bear with me, and I'll describe the whole thing in context.

Between 1941 and 1944, Eva Braun - who had received some training in cine camera work and cinematography - shot a whole bunch of silent colour footage of the life that she and Adolf Hitler shared at the Berghof, the near-equally infamous guests they entertained, etc. This footage was discovered in 1945 by the OSS, who were looking for any evidence that would be useful to the prosecution in the Nuremburg trials. As the films were silent, they were considered to be of no evidential value and have remained, for decades, a simple historical curiosity.

A profoundly deaf computer scientist - who is, coincidentally, German - determined some years ago that computers could potentially use image analysis to lip-read, using the techniques that he employs day to day. He developed image analysis software which can, with a high degree of accuracy, map mouth, jaw and throat movements from captured video onto a computer-modelled head, and thus to phonemes. The software - which is currently optimised for German - is able to not only make such a mapping when the captured footage presents a subject face-on to the camera, but also when the subject has their face at anything up to 120 degrees from face-on.

Thus, with a little training (the details of which were unfortunately glossed-over, but which appeared to mostly involve identifying which area of the captured video represented a speaker's mouth), the software was able to translate the lip movements of the filmed subjects into speech components. Using the vocal talent of a number of German actors - one of whom was judged to impersonate Hitler particularly closely, based on the one covert audio recording of Hitler ever made, and the only recording of him in conversation, rather than giving a public speech - a vocal track was made to accompany the Berghof footage, and the two were dubbed together.

This was shown in the documentary, with English subtitles.

Now, consider the ramifications of automated lip reading (ALR) as applied in other contexts. If facial recognition software is able to not only identify a face but its components, such that it could pass details of mouth location to ALR software, it could become possible to reconstruct speech (and potentially and eventually, text) from high-resolution CCTV footage.

Given enough computing power, this could potentially be done for every face in a crowd.

Of course, caveats apply. German is a particularly clearly-enunciated language where every syllable is sounded, has a relatively small number of consistent pronunciation rules, and does not have variations in meaning associated with tonality. Tonal languages such as Thai, where a given small phonemal sequence enunciated in a tenor voice can mean something entirely different to the same phonemal sequence enunciated baritone, would most likely still require a skilled human interpreter to give meaning to the sound that ALR output would have, unless ALR is expected to eventually be able to interpret such things based on analysis of apparent constrictions in the footage of the speaker's throat.

Nonetheless, it's interesting...

Update: The technology on show was developed by Frank Hubner; I've also found a paper on automated head-recognition algorithms, designed specifically to facilitate mouth area identification for automated lip reading, here. Seems there's more folk working on this than initially meets the eye - it's a space to watch.

"Integrity Checking in Depth"

Over the last few weeks, a number of folk have asked me about various elements of integrity checking and intrusion detection - so I figured it would be useful to aggregate my thoughts here.

First, I distinguish between integrity checking and intrusion detection, on the grounds that intrusion detection needs to be realtime whereas integrity checking does not; you can run BART or Tripwire a couple of times a day to gather reports of changes (this being the maximum recommended frequency, incidentally, as doing a full sweep of a system consumes a reasonable amount of system resource), whereas you'd want Prelude (or whatever IDS you're running) to scream at you in the form of an SNMP trap the moment it sees some activity it's not happy with.

With Solaris 10, you actually get the luxury of multiple methods of integrity checking; the title of this article is an allusion to the popular and valuable concept of "Defence in Depth", which is reflected in the ways in which you can make integrity checking work. The ways in which these integrity-checking mechanisms relate to and complement each other are subtle, and thus worth documenting. Out of the box, you get elfsign and BART, and with a little Internet connectivity, you can also call upon the services of the Solaris Fingerprint Database (SFpDB).

Today, elfsign verify <pathname> can tell you whether a given binary was genuinely released by Solaris Release Engineering as part of a production build of Solaris 10, or of a subsequent Sun-issued patch to it. What it won't tell you is whether the binary is the current version, patch-wise; it also won't tell you anything about the config or change status of files which aren't ELF binaries, nor will it tell you about binaries bundled with Solaris which are part of specific third-party contributions (these don't get elfsigned by us since, from a legal perspective, signing a binary provided by a third party involves modifying that binary by changing its ELF header, thus creating a derivative work).
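
For example - this is a sketch from memory, so treat the exact output text as indicative rather than gospel:

# A Sun-shipped ELF binary should verify cleanly:
$ elfsign verify -e /usr/bin/su
elfsign: verification of /usr/bin/su passed.
# Whereas a home-grown or tampered-with binary won't:
$ elfsign verify -e /tmp/suspect
elfsign: no signature found in /tmp/suspect.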

It is also worth noting that elfsign is a point of circular dependency. While anyone trying to trojan a Solaris 10 environment cannot get an elfsign-verifiable signature for their trojan binaries, if they were able - as a result of their attack mechanism - to trojan elfsign too, then it's Game Over. This is yet another good reason to deploy network-listening applications in sparse-root zones wherever possible, so that the elfsign binary remains usable but immutable from the application environment's perspective.
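
For reference, sparse-root is what zonecfg gives you by default on Solaris 10; a minimal sketch, with the zone name, path and NIC details being hypothetical:

# The default "create" template inherits /lib, /platform, /sbin and /usr
# read-only from the global zone, so binaries such as /usr/bin/elfsign
# are usable, but immutable, from inside the zone:
zonecfg -z webzone
zonecfg:webzone> create
zonecfg:webzone> set zonepath=/zones/webzone
zonecfg:webzone> add net
zonecfg:webzone:net> set physical=bge0
zonecfg:webzone:net> set address=192.168.1.10/24
zonecfg:webzone:net> end
zonecfg:webzone> commit
zonecfg:webzone> exit
zoneadm -z webzone install
zoneadm -z webzone boot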

BART is a tool I typically describe - with tongue half in cheek - as "a poor man's Tripwire". While Glenn has produced a useful Blueprint on how to scale BART management, I still honestly think that, if you want to integrity-check a datacentre's worth of installed systems, Tripwire is the superior product, owing to its cross-platform nature and the elegance of its key management. What both BART and Tripwire will tell you is whether the file you had yesterday is the same as the file you have today. Both are agnostic regarding file type, and are therefore applicable to scripts and config files as well as ELF binaries. They won't tell you where a file came from, nor what version it is; however, if someone manages to install a Trojan - or downgrade a Sun-issued and signed binary to a version which has an exploitable vulnerability which has subsequently been patched - either BART or Tripwire can potentially catch things which elfsign won't. The database of file digests which Tripwire holds on a system is also more resistant to attack than the equivalent BART database, the Tripwire db being signed with a key which is held on the Tripwire management infrastructure rather than on the host itself. Again, deploying network-accessible applications in sparse-root zones mitigates risk - most system binaries are loopback-mounted readonly in such environments, making them immutable even in the event of application compromise.
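
To make the BART half concrete, here's a sketch of baseline-then-compare usage; the rules file shown is hypothetical, and simply scopes the sweep while ignoring mtime churn under /var/adm:

# Create a rules file, generate a control manifest, and compare later:
cat > /etc/bart.rules <<EOF
/etc
CHECK all
/var/adm
IGNORE mtime
EOF
bart create -r /etc/bart.rules > /var/adm/bart/control.manifest
# ...time passes, patches are applied, attackers attack...
bart create -r /etc/bart.rules > /var/adm/bart/today.manifest
bart compare -r /etc/bart.rules /var/adm/bart/control.manifest /var/adm/bart/today.manifest
# Each line of compare output names a differing file and the attribute(s)
# which changed - contents, size, mode, uid, mtime and so on.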

Last but not least, we have the Solaris Fingerprint Database. This contains comprehensive mappings between pathnames, MD5 digests and release / patch version numbers for scripts and binaries going back to very early versions of Solaris; if you present a pathname and digest to it, it can tell you not only whether the associated file matches something produced by Solaris Release Engineering, but also which release or patch version it is associated with. This maps to the functionality of elfsign, plus the ability (when correlated with current patch levels) to spot binary version downgrade attacks. However, Internet connectivity (direct or indirect) is required; and while it would be difficult to trojan the MD5 digest generator such that it produces output matching what the fingerprint database considers acceptable, it is not impossible (although sparse-root zones can help here, again).
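
The mechanics, for those who haven't used it: generate digests locally with digest(1), then submit them to the SFpDB lookup form. The output below is illustrative, and I've deliberately elided the digest value rather than invent one:

# Generate an MD5 digest in a form the database's web form will accept;
# -v prints the pathname alongside the digest value:
$ digest -v -a md5 /usr/bin/ls
md5 (/usr/bin/ls) = [32 hex characters]
# Paste the output into the SFpDB lookup page; the response identifies
# which Solaris release(s) and/or patch(es) shipped a file with that
# exact digest - or flags it as unknown, which should ring alarm bells.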

So, there you have it - while any of the integrity verification tools above could potentially be compromised when used in isolation, owing to points of circular dependency (although doing so would be hard in all cases), a reinforcing combination of them, plus the use of sparse-root zones, yields an integrity-enforcement and verification capability which would cause even a very capable attacker to have nightmares.

Glenn also has a Blueprint on integrating SFpDB and BART, which makes for good reading.

Wednesday Jul 11, 2007

"An Enterprise Needs Only Four Computers"

With a tip of the hat to Greg P's posting here (and I thought the quote was attributed to Ken Olsen rather than Thomas J Watson...), I'd like to consider the individual enterprise, and propose that an enterprise only needs four computers.

The four computers in question would comprise two clusterable-or-otherwise-resilient systems in a primary datacentre, and two more in a business continuity or disaster recovery location. Each of these systems would most likely be populated with some x86 / x64-based boards and some SPARC-based boards. The boards would be grouped into physical domains where provable, rather than merely certifiable, data segregation is required and covert channels are perceived to be a potential issue if other technologies are used. Physical domains would, in turn, be sliced up into LDOMs on the SPARC side and (most likely) Xen domains on the x86 / x64 side, wherever certifiable data segregation is required or OS-level admins need full independent control of their OS instance (eg, for running different OS versions or different patch levels, etc). Applications would live in zones.
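
On the SPARC side, the LDOM slicing might look something like the following rough sketch - the domain name and sizings are mine, and a bootable guest would also need virtual disk and network devices which I've omitted:

# Carve out a logical domain for department A and assign it resources:
ldm add-domain dept-a
ldm add-vcpu 8 dept-a
ldm add-memory 4G dept-a
# Bind the assigned resources and start the domain:
ldm bind-domain dept-a
ldm start-domain dept-a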

User home directories would need their own OS instance, up until the point where a non-global zone is able to function as an NFS server. Doubling the home directory servers up as Sun Ray servers may also be sensible.

There's enough flexibility and granularity here, given sufficient attached storage, to run well-nigh any enterprise, if the computers are big enough. In fact, I suspect that a great many medium-sized enterprises could probably get by on four fully-loaded Sun Blade 6000 systems, at a pinch.

Now, I love reductio ad absurdum as a device of reasoning - I find that, sometimes, what pops out at the end of such a chain of reasoning may not be as absurd as first expected :-).

The reasons why you don't see enterprises running this way yet are, for the most part, human rather than technological. In particular, admins often feel uncomfortable about the fact that their environment can potentially be affected by a "higher authority".

To give some concrete examples: if systems belonging to departments A and B in an organisation were consolidated into a pair of zones on a single host, each zone admin would have initial concerns about who owned root in the global zone. Even if the systems were consolidated into logical or physical domains on a sizeable system, each domain admin would have initial concerns about who had root on the domain controller. Granted, such issues would very likely be alleviated by policy, but at the end of the day you'll still see some circumstances where admins, accreditors and auditors have issues about not being the ultimate authority with final control over a system.

This is why you're not likely to see an enterprise running on just four computers, just yet.

Monday Jun 25, 2007

Extreme Zone Minimisation, SE Linux and More...

A couple of days ago, an internal email pointed me here, the document being a thesis by Eriksson and Palmroos, two students supervised by our very own Christoph Schuba.

The thesis comprises three main sections, all of which are interesting...

  • There is a useful discussion on the nature of Solaris 10 Zones, for those who don't understand them already, and a very useful introduction to SE Linux's type enforcement mechanism and its relative strengths and weaknesses.
  • Our intrepid researchers then investigate the possibilities of minimisation within non-global zones; they find that a zone is initially created using the LiveUpgrade mechanism (which I didn't know), and that a standard zone install encompasses the packages installed in the global zone, rather than being able to minimise on a per-zone basis (which I admit, I hadn't spotted; see the whole-root sketch after this list). They investigate both bespoke approaches and, most interestingly, the BrandZ mechanism as potential means of installing per-zone minimised environments. This is excellent forward thinking, and I wouldn't be surprised to see this approach followed for many more single-app zones once BrandZ integrates into the "production release" codebase. I admit I consider their more extreme hypotheses and tests - abandoning SMF, and reverse-engineering dependencies to enable minimisation on a per-file, rather than per-package, basis - to be perhaps "going a bit far"; while the "Reactive Minimisation" mechanism that Bart, Glenn and I came up with and presented at RSA 2005 could also potentially minimise on a per-file basis, it at least left the underlying package structure in place so that patches could be readily applied.
  • Finally, they set up a new daemon in SE Linux, give it privileges, and then use it to try to break the system.
Their conclusions are interesting, especially regarding the unfortunate issue that SE Linux's absence of namespace segregation presents to folk who have an app they need to polyinstantiate.
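
As an aside on the per-zone minimisation point above: the stock alternative today is the whole-root zone, which gives each zone its own writable copy of the global zone's packages but still can't subset them - which is exactly the limitation the thesis probes. A sketch, with hypothetical naming:

# "create -b" produces a blank configuration with no inherit-pkg-dir
# entries; the installed zone gets its own copy of every package in the
# global zone, writable independently of the global zone:
zonecfg -z minzone
zonecfg:minzone> create -b
zonecfg:minzone> set zonepath=/zones/minzone
zonecfg:minzone> commit
zonecfg:minzone> exit
zoneadm -z minzone install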

I look forward to seeing more from these two guys in the future. Meantime, go grab the thesis and read...
