Thursday Nov 19, 2009
By davew on Nov 19, 2009
...or, if you want to put it another way, "Limited Confessions of a Security Geek".
It's been almost exactly a decade now since I found myself working in computer security.
I've been asked, on occasion, how I ended up doing what I do - after all, for the most part, security folk can be divided into two categories: those who started out trying to prove a point by exposing inadequate practices (such as my pal Alec, who wrote Crack to expose standard 8-char Unix passwords for the insecure things they are), and those who, like my pal Darren, came from a background of formal security training in terms of protective markings and the like, and now give us civilians the benefit of their knowledge.
(There's also the crypto illuminati and general anomalies such as Whit, but it's to our misfortune that such folk surface so infrequently.)
I wonder whether security folk are made, or born. I note with considerable interest that a course on "thinking like a security geek" is now being taught at Washington Uni, and I was also pointed at an interesting article comparing the mindset of a security geek, to that of a mathematician skilled in formal analysis. It's also interesting to note that, of the many security folk I know, few took degrees in CompSci.
In my case, interest in security followed from interest in networking. I may be showing my age by admitting to being a Micronet 800 subscriber back in the early-to-mid '80s. My interest in communications was such that I got my first modem before I even got a floppy disk drive for my BBC Micro - CommStar had to be modified on my behalf, to fix bugs associated with saving things to cassette tape - but once I realised that one computer could talk to another, it wasn't long before I figured out that, where a computer had multiple levels of privilege on various accounts (bear in mind that I was in my early teens, and back then there wasn't such a thing as a remotely-affordable home computer with a time-sharing environment), nefarious things could be done to elevate one's privilege on the system at the other end of the 'phone. Of course, while I figured this out in theory, I was a good boy, really :-).
I didn't go hacking while at University - indeed, I was a sysadmin for the Computer Society's shared Unix box, for a while. Granted, I expect it was completely riddled with security holes and hope it has long since been removed from JANET, but it worked.
I first really got interested in security during my last couple of years in my first job at Acorn; if you remember Larry Ellison jumping up and down and getting hugely enthused about thin-client computing in the late '90s, it's a little-known fact that the Oracle Network Computer (abbreviated NC; the original post-dumb-terminal thin client) was a reference design from Acorn, based on an ARM 250 and running a cut-down and embedded RISC OS.
Guess who used to do a bunch of work on the demo-environment server end, where the servers were frequently Risc PCs running RiscBSD :-). In fact, pretty much the last thing I was doing before I was RIFfed, was working on securing NC-to-server communications.
However, I'll hold my hands up and admit, sadly, that that's actually the last occasion on which I did any Real Programming. Since my days (5 and a half years, come to think of it) at Acorn, I haven't written any actual code. I can still readily go down to the level of designing and debugging state machines, but it's not the same.
After Acorn effectively imploded in the Autumn of 1998, I went through some doldrums and small-time contract work before being picked up by Sun, at the start of '99 - and Sun is where I've been, ever since.
I joined Sun as a general jobbing Project Engineer; after a bit of a learning curve, I was sent off doing Solaris installs, various builds and general delivery of standard services, until I ended up on a gig at one of the UK national newspaper publishers, initially doing enterprise-wide Y2K patching. Their security was, frankly, appalling - some systems didn't even have root passwords, and others had root passwords known to almost the entire IT element of the organisation, passwords which were never changed. It was the latter case which nearly cost them their business.
Once I'd done all the patching - involving being in their offices at very strange hours and often at weekends, owing to the downtimes they were able to schedule for their systems (the Sunday editions have to come out, after all) - I basically became recognised as "a face in the office of the internal IT group", and started to overhear all sorts of discussions around security. When the Sun project manager was replaced, for political reasons (the replacement, Kevin, became and remains a very good pal of mine), the security agenda was escalated so Kevin had visibility of it; so, a contractor was hired in to develop a security strategy and policy, for the enterprise.
After a few weeks, the contractor submitted a draft security policy. I asked to see it, and saw it for the pile of cut-and-paste crap that it was. I told Kevin that I could give him something better by close of business that Friday, and at 16:50 on the Friday, I hit "Send" on my mail client.
Kevin, bless him, agreed I'd done a better job, got rid of the consultant, and took me on as a security proto-geek. Fundamentally, that's why I'm where I am, today.
However, much more amusement was to come.
The customer's senior management decided that they liked the new security strategy and wanted it implemented. So, this being in the days before JASS, the Solaris Security Toolkit (SST), system hardening was done with a manual - and time-consuming - script that I'd written.
About halfway through configuration roll-out, we found that a sysadmin in the Output Services group, had resigned under "something of a cloud" when he was advised that his shift patterns were being changed to something that he considered unreasonable. He had subsequently approached a rival newspaper, with his knowledge of operating practices and access controls, and offered to disable the systems of his former employer, such that there would not be any January 1st, 2000 editions of any of their newspapers.
(A note to my readers: UK newspapers operate on extremely tight financial margins, which are governed by their advertising revenue. In the event that a paper does not hit the news stand before, or at the same time as, its rivals, the contractual clauses with the advertisers are really punitive. For a group which produces several titles, a day's outage on all their titles could result in financial damages so huge that the enterprise is effectively taken to the cleaner's.)
So, our Bad Guy (let's call him Fred, as it's not his name) was basically handing his former employer's competitor the bankruptcy of their primary rival, on a plate.
Fortunately, newspaper editors do have some integrity. Our Man at the rival publication, notified the police.
The first I knew about any of this, was when I and the on-site Sun Technical Project Manager were summoned to the IT Director's office.
We were sworn to secrecy (even from our own Project Manager), and introduced to a couple of extremely cool gentlemen (a Detective Sergeant and Detective Inspector, respectively) from the local police force's High Tech Crime Unit.
Kevin was, therefore, shut out of everything which was going on, for a fortnight; we were still being paid for, but he couldn't know what we were doing. I consider it a measure of his character, that he trusted us to get on with the job in hand.
Forensics gigs (and I've done a few, but don't need to use more than one hand's worth of fingers to count them, even now), tend to run one of two ways; the customer either wishes to mitigate the risk of someone compromising their systems by the same attack, or they want to find the perpetrator and nail them to the fullest extent of the law.
The newspaper firm in question, wanted to do both. Usually this is next to impossible, but we had the extreme luxury of already knowing who our Bad Guy was. So, what we needed to do, was protect the systems he had access to, as best we could, and encourage him to make another attack, so we could find out where he was entering the network from and gather evidence.
Fortunately, Fred was a known entity; we knew exactly which systems he had root access to, and fortunately, these were environments which only processed transient data. For folk outside the newspaper industry: Fred had worked in Output Services, the function of which is to take the fully-formatted pages which emerge from Editorial and work a little magic on them. Specifically, Output Services is where the high-resolution images from Image Services are substituted for the medium-resolution images manipulated in Editorial; it's also where page pairing takes place. "Page pairing" is the process by which 4 newspaper pages are put together into a double-sided sheet a little more than twice the width of an editorially-output page; this dictates what the printers print, and also adorns the pages with the day's date and the page numbers.
So, we set about our forensics gig. We knew Fred wasn't the sharpest tool in the box, but we knew he still potentially had root access to systems vital to next-day publication. So, we got radical. As the systems Fred potentially had root access to were both resilient system-wise - there being multiple boxes configured as failovers, to do the same job - and had mirrored disk arrays, we removed a layer of resilience by breaking the disk mirrors and offlining one set of disk arrays; even if Fred decided to do an "rm -rf /" on a system, we could potentially have it back up within 5 minutes and ready to receive data, in a manner which would adversely affect data flow for less than half an hour, and would most likely confound Fred's efforts.
We couldn't apply any obvious security lock-down to the systems Fred had access to, though, as it was reckoned that he would run rampant and destroy any systems he might have access to, before we could shut him out of them, if he saw any obvious modifications. Talk about playing cat and mouse... Next, the team (comprising Muggins here, Ray the Sun Technical Project Manager, the customer's Head of Networking, the contact point for a particularly important networking application, and our two new friends from the Police) identified network segments that Fred might attack our environment through, and posted network sniffers (interesting boxes from Dolch) on them, to detect any anomalous connections.
We were also very fortunate to be able to persuade the editor of the rival paper, to contact Fred and ask for proof of his hacking capability. This seriously brave guy wore a wire, and recorded his conversation with Fred; the challenge was for Fred to hack the page pairer systems such that on a particular day, and on a particular page, "Thursday" would be changed to "Thrusday", and that would be the evidence that Fred could do what he liked, with Output Services.
The page was hacked, the change was made. The readers didn't notice.
We did, though :-).
It turned out, that Fred had got a new job with the maintainers of the newspaper printers. Now, I don't know how many of my readers, have seen a newspaper printer in action; they're pretty impressive beasts, standing up to 3 storeys high, constructed out of skyscraper-style steel I-beams and running paper at significant width, tension and velocity. Anyway, Fred had joined a firm which manufactured and maintained these beasts, and was using the dial-home fault alert line, to dial-in to the printer and hence bridge out to the enterprise IP network.
We spotted the traffic to the page pairers, once we had Output Services ring-fenced with Dolch boxes.
We had everything in Output Services not only set up for short-term service resumption in the event of system destruction, but had logging seriously cranked-up to detect Fred attempting access.
We also had an out-of-band channel detecting differences between pages submitted by Editorial and pages as paired, but that's a little bit of a trade secret, as to how we did it :-).
Anyway, Fred spent 18 months being housed by Her Majesty (for readers outside the UK, this is a euphemism for a prison sentence).
So, that's how I properly got into security. It's so interesting, I've wanted to stay there ever since. I've pretty much succeeded in doing so, too.
Thursday Mar 13, 2008
By davew on Mar 13, 2008
While I'm not that surprised that the crack has been achieved, especially given what the disclosure paper says, it would appear that the researchers went to lengths the likes of which I've only seen Ross Anderson and his electron microscope- (and laser-) wielding friends go to, before. In particular, deducing the algorithm by 3D modelling of the silicon from electron micrographs, in order to produce the gate pattern, is a new one on me.
Well done to the team involved, especially for their care in stating that only Classic, rather than other Mifare products, is associated with the crack, and that some simple changes to Classic would mitigate their attack method.
Still, once again, they have proved that if a user has physical control over a device and its operating environment, DRM is a non-starter.
Saturday Mar 08, 2008
By davew on Mar 08, 2008
As smartphones increase their memory capacity and implement increasingly sophisticated apps, losing (or being relieved of, under duress) your smartphone is rapidly becoming as serious a disaster as losing (or being relieved of) your laptop. Samsung even ran a billboard ad last year around Heathrow, to the effect of "how could people imagine running their lives without their Samsung smartphone" - to which my obvious reaction was, "what happens if someone nicks it, then?"
So, wouldn't it be excellent if - on parting company with your smartphone - you could make one 'phone call, either from home or a public 'phone box, to both turn your missing smartphone into something about as useful to a ne'er-do-well as a brick, and order a new smartphone which, once you receive it and get it registered, will automagically have access to your address book, documents, music, browser favourites, etc?
This is why what you really need isn't actually a smartphone, but a smartphone-form-factor Sun Ray.
Admittedly, there are one or two downsides to having a Sun Ray instead of a smartphone. You'll need a continuous 3G connection if you want the unit to be usable wherever you are - this would currently limit the take-up of such a unit to folk who very rarely stray outside metropolitan areas with major 3G coverage, such as central London. However, there are enough such people that a device could probably be justified.
Also, the fundamental point of smartphones, and indeed 'phones in general - the ability to make and receive voice calls - is still (AFAIK - my information may now be out of date...) "not quite there yet" on a Sun Ray in terms of interoperable VoIP; that said, I've seen working VoIP implementations on Sun Ray which, while not quite production-ready, appear very close to being so.
Still, I've also been doing some thinking. While a bunch of applications specific to smartphone-form-factor Sun Rays would clearly have to be written bespoke and designed for a small-screen user interface, how might "something which is being used as a smartphone, yet which is being served from Solaris" benefit from security technologies such as Trusted Extensions?
Let's start by considering the 'phone book app. If I run my 'phone book app at a label which strictly dominates the label at which the apps which make my 'phone calls, handle Bluetooth connectivity, etc., run, and I give my 'phone book app the privilege to write data down across labels, then I can set things up such that a connectivity app will only have a number or other connectivity details exposed to it when a connection needs to be made.
Thus, practices such as Bluejacking would cease to be feasible, as the connectivity apps don't have the privilege to access 'phone book data.
Now, let's consider my 'phone-based web browser, or my 'phone-based copy of iTunes or similar. Where I want to run an app which needs to make external connections and upload data, I could run it at the same label as the rest of my connectivity apps, but give it the privilege to write data up across labels. A browser would still need to be able to read up - unless I had a separate "Favourites" app with the privilege to write URLs down and launch my browser at a lower label, much like GlennF's "safer browsing" prototype - which is, on consideration, probably the better way to go.
In fact, splitting apps into separate "uploader" (with write-up priv) and "player" (run at higher label, with "Favourites" potentially having write-down so that updates may be checked for) components, is probably the definitive way to go.
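The label arrangement described above can be modelled in a few lines. This is purely an illustrative sketch: the label names, app names and the dominates() ordering are my own assumptions, not a real Trusted Extensions API.

```python
# Hypothetical model of labelled apps with write-down/write-up privileges.
from dataclasses import dataclass

# A simple linear hierarchy: a higher number strictly dominates a lower one.
LABELS = {"CONNECTIVITY": 0, "PERSONAL": 1}

def dominates(a: str, b: str) -> bool:
    """True if label a dominates label b (linear ordering for this sketch)."""
    return LABELS[a] >= LABELS[b]

@dataclass
class App:
    name: str
    label: str
    can_write_down: bool = False  # e.g. the 'phone book app
    can_write_up: bool = False    # e.g. an "uploader" component

def may_write(src: App, dst_label: str) -> bool:
    if src.label == dst_label:
        return True                      # same label: always permitted
    if dominates(src.label, dst_label):  # writing down needs a privilege
        return src.can_write_down
    return src.can_write_up              # writing up needs a privilege

phone_book = App("phonebook", "PERSONAL", can_write_down=True)
dialler = App("dialler", "CONNECTIVITY")

# The phone book may push a number down to the dialler when a call is made...
assert may_write(phone_book, "CONNECTIVITY")
# ...but the dialler (and anything else at its label, such as the Bluetooth
# stack) can never reach phone book data at the higher label.
assert not may_write(dialler, "PERSONAL")
```

The Bluejacking point falls out directly: the connectivity apps simply lack any privilege that would let them touch data at the 'phone book's label.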
Does this have mileage, or what?
Friday Mar 07, 2008
By davew on Mar 07, 2008
Following Sun's purchase of Innotek - suppliers of the reasonably-fine VirtualBox Type 2 hypervisor - I've been thinking.
OK, so VirtualBox for OS X is still very much beta - shared folders don't work, and networking only works in NAT rather than Bridged mode - but it's still stable and full-featured enough for me to build a Solaris 10 Update 4 image on top of it, complete with Trusted Extensions. In short, "not bad at all" :-).
However, my thinking is taking me down an interesting line of reasoning. Our press release states that VirtualBox is primarily aimed at developers, so I can only hope and assume that one of the first things we'll do with it, now we've acquired it, is thoroughly decorate it with DTrace probes and providers.
Here's where things potentially get fun - although I must first add, that all my musings in this regard, are currently hypothetical.
Consider a system running Solaris 10, or OS X, as a host OS.
Now run VirtualBox, on top of it.
Now run another DTrace-enabled OS, such as Solaris or OS X (again), in a VirtualBox as a guest OS.
Depending on the degree of complexity involved in VirtualBox, particularly regarding its memory management, I wonder whether it might be possible to DTrace activity in the guest OS from the host OS, potentially without the guest OS knowing about it. Being able to do this could have both good and bad repercussions:
- If, from the host OS, you could trap a guest OS' calls to fork() and exec(), you could potentially do Validated Execution for an OS at the hypervisor level, rather than within the OS itself. This not only potentially gives you much greater security - even root on a guest OS can't turn validated execution off - but it means that validated execution could potentially be made OS-heterogeneous.
- You could use DTrace to supplant Solaris Audit, gathering audit information about OS activities at the hypervisor level, where nothing which happens at the OS level can touch it.
- It would make for a great kernel-level debugging tool, where you might not necessarily want (or be able) to use DTrace within the OS itself.
- All of a sudden, "Satan's Computer" becomes real. If you're root on the guest OS, you can still have All Manner of Strange Things happen in your environment, if your hypervisor is pwned, and there's nothing you can do about it. For example, if you take a look at Jon Haslam's posting on how DTrace can be used to read an environment variable for an arbitrary process, consider what DTrace might be able to do, in terms of changing the value of an environment variable, under the feet of the application. If you can make such a change without the app crashing, and such that it notices the new value, Life Gets Interesting.
Friday Feb 15, 2008
By davew on Feb 15, 2008
While Trusted JDS is a reasonably well-featured desktop (although there are some new features which we're expecting to deliver in Update 5), some customers are likely to want to use TX as "the ultimate, luxury KVM switch" from the perspective of allowing access to very few intrinsic capabilities. I've been on something of a voyage of discovery in my lab for a couple of days, figuring out how all this works; I'd like to give a very big tip of the hat to Joerg Barfuth, for his assistance with a number of issues I came across.
I cut my Unix GUI teeth nigh on a decade and a half ago on twm, moved (briefly) to mwm, settled happily into CDE on Solaris, and fvwm2 on Linux, before going significantly Aqua when Apple went OS X. I'd hardly used Gnome at all, until I was asked to strip features out of Trusted JDS for a demo we're showing at an exhibition, next month. So, here we go...
If you've used a Trusted desktop before (and if you haven't, and have the time, download Solaris 10 and give it a go), you might have noticed that the X server in such an environment is a fascinating thing; different elements of the screen as you see it, are rendered by different parts of the platform.
Specifically, in Trusted JDS, the Launch menu, toolbar, trusted screenstripe (which gives you the trusted shield, the password change tool, label builder, graphical role changer, object label display and, privilege permitting, device allocator) and screensaver are all rendered by Trusted Path (in other words, the Global Zone). The desktop workspace, icons and any user apps which are started up, are rendered by the various labelled non-global zones.
The way this works, is that the X server uses multi-level ports (see /etc/security/tsol/tnzonecfg, and you'll see the familiar 6000-range ports included) to move X data between non-global zones and Trusted Path. When a workspace is opened at a label, or a label is changed by the label builder, the X server uses a TX-specific zone_enter() call to implicitly log the user into the zone so that they can do work there; a major way in which zone_enter() differs from zlogin, is that a zone responding to a zone_enter() trusts Trusted Path to have appropriately authenticated the user, so a zone_enter() call doesn't traverse the non-global zone's PAM stack.
So - as the Launch tool runs at Trusted Path, we need to remove access to apps from it there; and as the workspace runs in the labelled zone, we need to restrict access to apps from there.
It's always best to do this as a scratch user. Another point worth being aware of, is that a user has a home directory on Trusted Path and in every zone corresponding to a label within their clearance range; some configuration steps will need iterating across zones.
Anyway, here's where Joerg introduced me to the delights of gconf-editor. Launch it from a terminal running at a label in a non-global zone.
Under apps/nautilus/desktop are the tunables (ending in "_icon_visible") which can prevent the display of "This Computer", "Documents", "Network Places" and "Trash"; removal of "StarOffice" and "Help" is most readily accomplished by highlighting the icons and selecting "Move to Trash".
Once you have a workspace devoid of any icons, go to desktop/gnome/lockdown and enable whichever lockdown options you need, with the exception of disable_save_to_disk (otherwise, you won't be able to do the next bit :-) ).
Using the terminal you launched gconf-editor from (as you may not want to be able to launch a terminal in your new profile :-) ):
$ gconftool-2 --direct --config-source xml::$HOME/.gconf --dump /desktop/gnome/lockdown > mylockdown.xml
$ gconftool-2 --direct --config-source xml::$HOME/.gconf --dump /apps/nautilus/desktop > mybackdrop.xml
Log out, log back in as a user with appropriate privilege on Trusted Path, and cp /zone/<zonename>/root/export/home/<username>/mylockdown.xml and mybackdrop.xml back to somewhere on Trusted Path which labelled zones loopback-mount; hand-edit mylockdown.xml if you wish, to add:
...between the similarly-formatted entity for "disable_printing" and the one for "restrict_application_launching".
As the Launch tool runs on Trusted Path, mylockdown.xml only needs to be imported into gconf once; at Trusted Path, do:
# gconftool-2 --direct --config-source xml::/etc/gconf/gconf.xml.mandatory --load mylockdown.xml
...and then, at each label within the user's clearance range, connect to the appropriate non-global zones (via zlogin, label change, take your pick) and:
# gconftool-2 --direct --config-source xml::/etc/gconf/gconf.xml.mandatory --load $WHEREVER/mylockdown.xml
# gconftool-2 --direct --config-source xml::/etc/gconf/gconf.xml.mandatory --load $WHEREVER/mybackdrop.xml
Note that this assumes you are looking to lock the desktop down for all users, which is normally the case; I'll re-edit this posting shortly, to indicate how different degrees of lockdown can be achieved for different users or roles, at a given label. Of course, if you wish to vary the available applications on a per-label basis for the same user, you just need to create multiple lockdown and backdrop profiles as appropriate, and deploy the relevant profiles at the relevant labels, as above.
Part 2, if all goes to plan, will cover what you can do to achieve the same ends, with Trusted CDE; this will also (very likely) act as a "belt and braces" approach to what I've described above...
Tuesday Feb 12, 2008
By davew on Feb 12, 2008
This is thought-provoking, and I think the Internet Service Providers' Association have it spot-on with their comment on the article.
Specifically, putting a notional Black Hat on, I'm wondering how easy it would be to prevent an ISP from finding out what I was downloading.
Torrent technology isn't something I'm intimately familiar with (not having used it), but I would hope that it incorporates something akin to IM's OTR.
If it doesn't, I'd need to VPN into some bastion host outside my ISP's remit - ideally getting there using Tor or something very much like it - and run whatever Torrent peer app from there (where "there" equals "a country which doesn't have Internet piracy laws").
If the bastion was used for things other than piracy of copyrighted media, an ISP would have a major job on its hands to prove that the heap of ciphertext traversing their infrastructure was the latest Hollywood blockbuster rather than, say, the latest .iso of Solaris 10.
Also, I expect the test case will happen shortly after any new law in this area is enacted; if I can prove I have paid my TV licence fee continually since it started showing in its current format, would it still be illegal for me to download, via a Torrent system, what is supposedly the third-most popular TV series on the P2P networks: "Top Gear"?
Monday Jan 28, 2008
By davew on Jan 28, 2008
Reading Friday's Telegraph and Saturday's Times on the matter (my home local is less right of centre than my office local, in terms of newspaper choice), it would appear that Jérôme Kerviel's activities wouldn't have been curtailed by technology as currently deployed. The three standard controls deployed at SocGen are:
- cash flow monitoring - all transactions are monitored and the flow of cash traced
- "straight-through" automated processing - every trade is performed on a central infrastructure which distributes the details to the accounting, cash management and risk control groups
- the "middle office", where the risk controllers act as gatekeepers, monitoring the flow of cash and mediating any trades or requests for trades from the dealers
As far as I can tell, there are two ways in which this can be prevented from happening again.
1. Fix the human factors, by ensuring that no risk controller or former risk controller can ever get a job as a trader. You'd have to have some sort of central database of risk controllers maintained by the banking industry as a whole, and the risk controllers would likely be annoyed by such an initiative, since good traders are paid significantly better than good risk controllers; it may be necessary to even this up a bit...
2. Actually authenticate the endpoints, as well as the transactions. If a signed trade request is sent, and returned countersigned with an acknowledgement, such that both certificates have organisation names matching the organisation names on the trade-to-be, can be traced back to "known good" root CAs and don't appear on any current CRLs before any funds are transferred, then you have to have parties in both organisations collaborating in order to achieve anything underhand.
Of course, how a CA or CRL is determined to be "known good" is left as an exercise for the reader...
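The countersignature check in point 2 can be sketched as follows. This is a toy model: the field names, the KNOWN_GOOD_ROOTS set and the revocation list are illustrative assumptions; a real implementation would verify X.509 chains and signatures, not compare string fields.

```python
# Hypothetical model of the "authenticate both endpoints" settlement check.
from dataclasses import dataclass

KNOWN_GOOD_ROOTS = {"TrustedBankingRootCA"}  # assumed trust anchors
REVOKED = set()  # serial numbers appearing on current CRLs

@dataclass
class Cert:
    org_name: str
    root_ca: str
    serial: str

@dataclass
class Trade:
    buyer_org: str
    seller_org: str

def settle_allowed(trade: Trade, request_cert: Cert, ack_cert: Cert) -> bool:
    """Funds move only if both certificates name the organisations on the
    trade, chain to a known-good root, and are not revoked."""
    for cert, org in ((request_cert, trade.buyer_org),
                      (ack_cert, trade.seller_org)):
        if cert.org_name != org:
            return False  # certificate doesn't match the trade's party
        if cert.root_ca not in KNOWN_GOOD_ROOTS:
            return False  # chains to an untrusted root
        if cert.serial in REVOKED:
            return False  # appears on a current CRL
    return True

trade = Trade(buyer_org="BankA", seller_org="BankB")
ok = settle_allowed(trade, Cert("BankA", "TrustedBankingRootCA", "01"),
                    Cert("BankB", "TrustedBankingRootCA", "02"))
bad = settle_allowed(trade, Cert("Mallory", "TrustedBankingRootCA", "03"),
                     Cert("BankB", "TrustedBankingRootCA", "02"))
assert ok and not bad
```

The point of the sketch is that a lone insider can't forge the other organisation's half of the handshake; subverting it needs collusion on both sides.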
Anyone for smartcards and Sun Rays, SocGen?
Monday Jan 21, 2008
By davew on Jan 21, 2008
A copy of this article arrived as part of our internal weekly e-newsletter the week before last (and yes, I have been that busy); I would provide a pointer to it, but for some reason I can't seem to find it on http://www.techrepublic.com/. Nonetheless, I believe it merits comment; I hope TechRepublic don't mind me quoting the article (with HTML blockquote tags) in full.
Consumerization: The IT Civil War.

If this really is a war, I think it’s fair to say that IT is losing. Many users are circumventing IT by using widely available technologies such as Yahoo Messenger, Gmail, USB drives, and BlackBerry phones to help them accomplish their tasks at work. The practice is so common that The Wall Street Journal has even published an entire article aimed at helping business users circumvent their own IT departments. I wrote a diatribe about how irresponsible it was for WSJ to publish that article, but that does not diminish the fact that this is happening everywhere and IT has become virtually powerless to stop it.

“It’s almost become a sport for users to vilify IT.” – Jeff Comport

Gartner Analyst Jeff Comport said, “There’s a reason people are trying to use this kind of technology and very often it’s to do their jobs better… We have IT very often coming from a world of budgets, controls, and projects, and they have spent their lives keeping this kind of stuff out.” As a result, “It’s almost become a sport for users to vilify IT,” said Comport.

Let’s take a look at the six consumer technologies that are causing IT the most trouble and then consider what IT can do to turn around a situation that is quickly going from bad to worse in many places.

6. Instant messaging software

Whether it is Yahoo Messenger, Windows Live Messenger, AOL Instant Messenger, Skype, Google Talk, or a variety of other IM clients, the fact is that instant messaging has spread to the point that as many as 20% of business users or more are now running it at work. Those are U.S. stats. The percentage is higher in Asia and far higher among younger workers everywhere. Users typically install the software themselves, often against IT policy. Most of the IM clients send data unencrypted, so even two workers in the same company and on the same network can end up sending corporate secrets out onto the Internet for any hacker to sniff.
There’s also the issue of IM file transfers that can introduce files that have not been scanned by antivirus software. However, IM can also be a good thing. It can relieve e-mail inboxes from worthless chatter and it can help users quickly locate colleagues to solve timely problems. And there are enterprise options from Skype, Microsoft, and others that are making IM much easier for IT to regulate and standardize.

I also believe IM is A Good Thing; I use it myself, on a regular basis (invariably with OTR, when discussing interesting security things with colleagues, even when such discussions are over our internal network).
Granted, IM can be a double-edged sword; having both internal and external contacts available through the same IM interface is a security risk; it's all too easy to accidentally paste something into a chat session with someone outside the company which should instead have been pasted to somewhere inside the company. Here's where Trusted Extensions ("TX") comes into its own; even if, as an organisation, you're not up for doing a major data classification exercise, you could nonetheless have just two labels (call them PUBLIC and INTERNAL, or EXTERNAL and INTERNAL, it makes no difference) such that INTERNAL strictly dominates EXTERNAL (or PUBLIC). If you then - as the norm - run an internal IM instance for internal contacts and an external one for external contacts, and give users the privilege in /etc/user_attr to copy and paste data "upwards" in sensitivity but not "downwards", then they can still bring data in from the world at large, but data inside the organisation stays there. Where internal communications prompt the need to look for something out in the world at large, Glenn Faden has an elegant solution prototyped.
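The two-label cut-and-paste policy amounts to a one-line check. A minimal sketch, assuming a linear PUBLIC-below-INTERNAL ordering (the label names and the paste_allowed() function are my own illustrations, not the actual Trusted Extensions implementation):

```python
# Hypothetical two-label paste policy: data may only flow upwards.
ORDER = {"PUBLIC": 0, "INTERNAL": 1}  # INTERNAL strictly dominates PUBLIC

def paste_allowed(src_label: str, dst_label: str) -> bool:
    """Users hold only the 'paste upwards' privilege: data may move to a
    window whose label dominates the source, never the reverse."""
    return ORDER[dst_label] >= ORDER[src_label]

# Pasting a snippet from the external IM client into an internal chat
# is fine...
assert paste_allowed("PUBLIC", "INTERNAL")
# ...but pasting internal material into a chat with an outside contact
# is refused by the desktop.
assert not paste_allowed("INTERNAL", "PUBLIC")
```

The accidental-paste-to-an-outsider failure mode is thereby closed off by the desktop itself, rather than by user vigilance.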
For folk who have a legitimate need to move data from INTERNAL to PUBLIC - a corporate press officer would need to do this as part of the process of making a press release, for example - a role could be created in RBAC such that the officer would assume the role to relabel the data downwards, then drop back to normal working.
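A minimal sketch of such a role, assuming the selection-downgrade authorization is the one that matters here (the role and user names are invented, and the details are worth verifying against the RBAC and TX man pages):

```shell
# Create a press-office role whose one extra power is relabelling
# data downwards, then permit user jdoe to assume it.
roleadd -m -d /export/home/pressofc -A solaris.label.win.downgrade pressofc
passwd pressofc
usermod -R pressofc jdoe
# Day-to-day work happens as jdoe; 'su pressofc' is reserved for the
# deliberate act of declassifying a given document.
```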
5. Personal smartphones

Now that BlackBerry phones, Palm Treos, and Windows-based phones are priced as low as $200 by many of the big cellular carriers, lots of users who don’t have a spiffy company smartphone are just going out and buying one of their own. Many of them have figured out how to forward their business e-mail to their personal smartphones, which opens up a ton of privacy, regulatory, and security issues. There are secure ways for IT departments to handle this. Turning a blind eye or trying to block it are not valid options.
Owing to the BlackBerry data propagation mechanism (which routes via RIM, with a state machine such that a man-in-the-middle attack could be perpetrated), their use is rightly prohibited within Sun.
TX comes in useful here, too; as the INTERNAL-labelled network isn't connected to the Internet, from its perspective, forwarding email from it to smartphones is going to be "very much less than straightforward" :-). For email on the PUBLIC network, forwarding to mobile devices is a technology which could potentially be embraced within both policy and practice.
Of course, the world is not usually this straightforward and may require some finessing within the labelling scheme; PARTNER and CUSTOMER labels could potentially be inserted between PUBLIC and INTERNAL, such that data associated with these classes of organisation may be forwarded where policy permits.
4. BitTorrent and P2P

Transferring big files is very difficult for most users. E-mail policies usually restrict it. FTP is too slow and often too difficult to configure (and sometimes even blocked by firewalls). IM clients are clunky and often fail at file transfers (usually blocked by firewalls). That’s why some users will turn to P2P programs such as BitTorrent, because they are much more effective. Unfortunately, these programs can also have a lot of baggage since they are regularly used for hosting and transferring illegal music and video files. That doesn’t mean IT should necessarily abandon P2P software altogether. It can often prove extremely useful and efficient. For example, Collanos software can be used for sharing and collaborating on documents between various users in a team or workgroup.
I've had the "email thresholds too low, ftp (and sftp, as part of ssh) blocked, IM file xfer fail" issue before now, and my normal approach in these circumstances is either to use split to divide data into small enough chunks that they can be emailed as several parts, or to burn stuff to CDs and either put them in the post, or take them to the customer by hand.
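For anyone who hasn't driven it before, the split-and-reassemble round trip looks like this (file names and chunk size picked purely for illustration):

```shell
# Make a demo payload to stand in for the real data.
seq 1 100000 > bigfile.bin
# Carve it into 64 KB pieces named part_aa, part_ab, ...
split -b 65536 bigfile.bin part_
# ...email each part_* file separately; at the far end, the shell's
# sorted glob expansion puts the pieces back in order:
cat part_* > reassembled.bin
# cmp stays silent if the round trip was lossless.
cmp bigfile.bin reassembled.bin
```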
If folk need to transfer stuff which takes more than 5 such emails, then they either have an overly-Draconian email policy, or excessively low bandwidth or absolute transfer size limits from their network service provider.
Of course, TX also prevents folk sharing stuff via these mechanisms, that they shouldn't :-).
3. Web mail with GB of storage

Another method that users often employ to transfer large company files is with a consumer e-mail account, such as Gmail, Yahoo Mail, and Hotmail, which all have much larger storage capacity and allow larger file attachments than most corporate mail accounts. The problem is that not only are these systems far less secure than corporate mail servers, but many of them thoroughly index messages and files, and so sensitive corporate data transferred through these mail systems can get spread throughout lots of different servers and search indexes. New Windows storage technologies that do not save multiple copies of the same file can help IT deal with the e-mail storage issue and allow IT administrators to expand storage limits for users. There are also new Exchange plug-ins, such as Mimosa, that offload all attachments from messages and store them separately to streamline inboxes and allow IT to increase quotas.
Hear, hear, TechRepublic; if you haven't already seen my pal Alec's recent talk incorporating his thoughts on this and other subjects, it's worth a watch. For folk in my position, who find it useful to be able to make VMWare images of Solaris with Trusted Extensions configured available for internal download, a home directory of anything less than 20 gigabytes just doesn't cut it.
Also, for what it's worth, our messaging server hasn't been saving multiple copies of the same file, for as long as I can remember; it looks like Exchange just hit on something we've been doing for nigh on a decade ;-).
Naturally, TX will stop people putting internal stuff on external systems, as above...
2. Rogue wireless access points

It’s a wireless world in home networking now. Users who see how easy it is to connect a router to their DSL or cable modem and roam the house wonder why they can’t just do the same thing when they take their laptop from their cubicle to the conference room. If the company doesn’t offer wireless LAN access in their office, many of them just get sub-$100 wireless access points, plug into their Ethernet jack at work, and start roaming the building. Of course, if their desk is at the window next to the parking lot, they don’t realize that they just provided anyone who drives up with a free Internet connection and easy access to the corporate network. IT departments can follow best practices (see TechRepublic’s ultimate guide to enterprise wireless LAN security) to establish their own secure wireless LAN, or they can use products like Xirrus to simplify secure wireless deployments. They can also educate users and use intrusion prevention software to scan for rogue access points.
Not in my home, it isn't.
The best way to do campus / office wireless is to provide encrypted access direct to the Internet; if folk need to access the intranet, they can VPN back in (ensure the keys are only in the INTERNAL keystore), and encryption (even if it's something as trivial to crack as WEP) can at least inconvenience a Bad Guy and ensure that anyone who busts in for general access actually is, provably, a Bad Guy.
Of course, the best way to prevent rogue access points springing up, is to install sufficient non-rogue ones that the users feel no need to install access points of their own :-).
1. USB flash drives

Portable storage is nothing new. Twenty years ago, users were carrying around floppy discs full of files. However, the size of those old floppy discs limited the amount of data that users could take out of the company. Today, with 4-GB USB flash drives costing $40 or less (and flash drives as large as 64 GB now on the market), users can copy all of their My Documents files to a flash drive and walk out the door with them. Or a user could copy a huge chunk of a file server and walk out with it on an unencrypted USB drive.
As a Geek Of A Certain Age, I consider personal email and occasional personal web browsing (news.bbc.co.uk, various security sites and blogs, occasional oddities), via the corporate network to be "perks of the job". Having a nice hosted blog is a perk, too.
If Alec (and the originator of this idea, JP Rangaswami) are to be believed, the geeks who are entering the job market now will consider it a perk of the job to be able to plug their iPods into their office desktops and fire up iTunes. In fact, if they can't do this, it's reckoned they'll be reluctant to work for you.
So, let them - at PUBLIC. In fact, ensure that there's a little bit of infrastructure at PUBLIC for them to get to and run iTunes on, and to use as their temporary file springboard to get stuff to Flickr, etc. Of course, it may be necessary to have this bit of infrastructure automagically rebuild itself from time to time, so ensure the users know to download stuff to their iPods, etc. in safe time windows. Let them mount USB sticks, digital cameras, etc. at PUBLIC, deny them the privilege to do so at INTERNAL, and pragmatically, you're doing OK.
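On the TX side, the device allocation mechanism is what enforces that last bit; as a hedged sketch (the device name here is illustrative, and device_allocate(4) is where the real policy lives):

```shell
# From a workspace at PUBLIC: claim the removable device, use it, release it.
list_devices -l          # see what's allocatable, and at which labels
allocate usb0            # succeeds at PUBLIC...
# ...mount it, copy music / photos / podcasts across, then:
deallocate usb0
# From an INTERNAL workspace the same allocate request is refused,
# provided usb0's allocatable label range tops out at PUBLIC.
```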
Users need to be able to easily transport their files in order to work from home or on the road, transfer documents to partners, etc. IT has to find ways to make it simple for users to do this while also protecting sensitive corporate data. For example, an IT department could educate users about flash drive security, provide encryption software for those who need to use flash drives, or simply provide company-sanctioned flash drives that are preconfigured with encryption and other security standards. The cost of the flash drives would be much cheaper than the legal fees and/or fines of dealing with customer data that slipped into the wrong hands.

Cue Flagstone and T10000 tape drives for really sensitive stuff (or large aggregations of notionally less sensitive stuff), and the likes of ZFS crypto and FileVault for more general day-to-day things.
What will come of all this? Gartner analyst Stephen Prentice said, “The critical thing to understand is that your employees are not doing any of these things … to be awkward. They’re not doing it because they’re trying to break security. They’re simply trying to get their job done… The approach has to be to not go in there and stop them from doing it. Go in there and find what constraint you have put in their way that’s forcing them to do something that is out of your control, and then fix your problem. If you gave people the option of using an in-house, secure, controlled environment that meets all of their needs, they simply aren’t going to have the need to go outside. If you fail to give them that — if you give them restrictions that are unreasonable or stop them doing their job effectively — then they will find another way.”

Gartner Fellow David Mitchell Smith added, “If rogue users start to see some flexibility on the part of the IT department — some genuine interest in wanting to provide what they need — they may be more open to go to them first and say ‘Can you help us provide this,’ as opposed to just going out and doing it. [They could] be part of the solution, instead of part of the problem. But long term, there’s this unstoppable force which is demographics. New people are coming into the workforce, in IT and in non-IT functions, and they are becoming more open-minded and having more and more of an impact. Over time it’s pretty inevitable that the trend is moving toward the more open way of doing things. It’s just a matter of how long it takes and how well it fits into the culture of each organization.”

Ultimately, this “civil war” is merely a sign of two larger problems that IT must address:

1.) There are a lot of IT departments that have policies and attitudes that are stuck in a time warp.
The procedures that allowed IT to deploy important technologies while protecting users from themselves are no longer valid in a world where individual users often have newer and more advanced technologies in their homes than the IT department has in the office. IT is now entering into more of a partnership with users, and policies and attitudes need to reflect that.

2.) There’s a general disconnect and lack of constructive communications between many IT departments and their users. IT departments need to view themselves as customer service organizations, with their users being their primary customers. IT departments have got to lose their paternalistic approach to users and focus their efforts around serving users and enabling them to become more productive. The IT departments that make these changes will thrive. The ones that don’t will see their role within the organization diminished and become prime targets for outsourcing.
This isn't the first place I've seen this - Alec hit it first, AFAIK - but it's worth commenting on.
Monday Dec 31, 2007
By davew on Dec 31, 2007
Following the recent NHS regional authority data leaks, and taking advantage of the lull in workload associated with the festive season, I've been thinking about whether care record centralisation or decentralisation is the better idea.
Currently, I'm in favour of centralisation; this is mostly down to human factors. If a centralised infrastructure needs fewer but more capable sysadmins than the regional authorities currently have, such sysadmins can be found, and measures can be put in place (codes of connection, etc.) such that any data which is legitimately accessed by a regional authority cannot be cached outside the central infrastructure, then centralisation is pragmatically the best bet.
However, I'm open to other opinions and lines of argument.
I've also had a careful re-read of some standards I tend to refer to, from a healthcare-oriented perspective, and doing so raises a number of questions; I was originally planning to blog about what changes might be needed in an end-to-end, centralised electronic patient and care record system in order to maintain compliance with these standards, until I realised that I don't have current and detailed knowledge of what various health authorities are actually using, today.
So, I have a request. If you are a UK-based GP, or know one who wouldn't mind answering a few questions for a security geek, please let me know (either by email - usual Sun format - or in this posting's comments):
- for a typical PC in a GP's surgery, who owns it?
- for ditto, who maintains it, from the perspective of patching, AV, etc?
- what OS and apps does it run?
- what is the nature of the data connection between the GP's surgery and the local trust - who owns it, and who provides it?
- what authentication does a GP have to provide, to access online records or services?
- does said typical PC have internet connectivity, and if so, is this direct or via some relay / proxy in the local authority?
- what does the computer do, when you put a CD or USB stick in it?
If you would like to email me about this (being my preferred means of communication on the subject), please use your NHSnet or doctors.net.uk email address; I'll drop you a quick line back with my thoughts, and this will also serve to verify that the email comes from a valid address...
Monday Dec 24, 2007
By davew on Dec 24, 2007
Robin reckons that PII should be "treated as a controlled substance", and makes a convincing argument to this effect. However, there's an even deeper truth in his statement that PII should be considered to be like "fissile material, or the kinds of materiel covered by arms limitation agreements during the Cold War".
Just like fissile material, PII has a half-life.
If the infamous HMRC CDs have fallen into the hands of a ne'er-do-well, said ne'er-do-well would be wise to sit on them until the media brouhaha has died down, but not so long that much of the data is no longer accurate.
People die, move house, change their names on getting married and divorced - in short, PII changes. For the amount of PII disclosed by HMRC, the analogy can just about be drawn between loss of accuracy over time, and radioactive decay.
In a hundred years' time, the misplaced HMRC data will be entirely useless to someone who wants to try faking identity. In fact, if you look at it from the perspective of the disclosure state machine I put together, if someone was to try to fake an identity based on a piece of "naturally expired" PII in a few years' time, the "expired" PII could serve as a strong indicator of suspicion that they were in possession of the misplaced HMRC data. I sincerely hope that HMRC has realised this, and has made a reference copy of the as-misplaced database such that a "watch-for" list will come into being inside HMRC and slowly grow, based on updates to the live database resulting in increasing discrepancies with the misplaced records.
Potentially, HMRC could even offer a service to other UK Government departments, to check offered identity information against this watch-for list...
Oh, and a happy Newtonmas to all my readers :-)
Wednesday Dec 19, 2007
By davew on Dec 19, 2007
I'm scratching my head over the news that HMRC is offering a substantial reward for the return of their missing child benefit data CDs.
As has been said elsewhere (see posting dated November 24th, 2007), the data hasn't been so much "lost" as "published". If the CDs genuinely have fallen into the hands of a ne'er-do-well, they would certainly have the sense to take a copy of the contents, before attempting to claim the reward - in fact, I idly wonder if the reward is a hook such that, if return is attempted, the returnee will immediately be arrested, have their home thoroughly searched for backup media, and have their computer equipment seized for forensic examination to determine whether such a backup exists on hard disk.
I also idly wonder what HMRC's response would be, if they were to receive multiple, identical copies of the discs, from multiple sources? After all, this is quite possibly the distribution status of the data, by now...
Wednesday Dec 12, 2007
By davew on Dec 12, 2007
While further examples of questionable media handling security within Government are now starting to come out of the woodwork (DWP, DVLA Northern Ireland), I'm also seeing some interesting comments on my previous posting about the HMRC data leak.
While I don't believe everything I read in my blog comments, the enigmatic "wigwam" has kindly pointed me at this - the minutes of evidence presented to the Treasury sub-committee on the breach.
Take a look at Q389 - Q393.
Monday Nov 26, 2007
By davew on Nov 26, 2007
While I'm happy that my own details haven't been leaked as part of the HMRC data leak (not having children is good for my privacy as well as my bank balance and my carbon footprint, it would seem), I'm following the news closely, as more information about the leak is disclosed.
The Chancellor of the Exchequer was interviewed on Radio 4's "Today" programme on Tuesday morning last week, and said something which particularly surprised me. Specifically, he referred to the way in which the data was stored on the missing discs as "password-protected, but not encrypted".
I conjecture that you can't actually have password protection, without encryption.
Consider one of these missing disks. If it was to turn up and you put it in your DVD-ROM drive, you could dd the blocks off it to get yourself a file of anything up to 4-and-a-bit GB in size. If you then grep through it for known cleartext (such as names of folk you know who are parents), you'll get matches unless the data is either compressed or encrypted; it's fair to assume that the files on the disks will have been generated by fresh extraction from a database of some sort, so you're going to be looking at a reasonably sequential set of blocks, without much fragmentation or indirection.
This neatly bypasses any application-layer password system.
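The check itself is a couple of one-liners; here's the shape of it, with a scratch file standing in for the real optical drive (on a live system the input would be something like /dev/dvd, and the sample record below is invented):

```shell
# Stand-in for the raw disc: a scrap of CSV of the sort the extract
# might contain (name, address, and NI number invented).
printf 'John Smith,12 Any Street,AB123456C\n' > disc-standin.img
# Image the blocks off it, as one would off the real device.
dd if=disc-standin.img of=image.bin bs=2048
# -a makes grep search the binary image as if it were text; a hit
# means the payload is neither compressed nor encrypted.
grep -a -i 'smith' image.bin && echo 'cleartext found'
```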
If the files on the disks are simply compressed, you could either reconstruct the compressed data sets from the dd'ed blocks using forensic tools, or simply mount the disks, copy the files to scratch space and decompress them.
Here's where you're likely to hit password protection - at the application layer.
Thinking about what is likely to have been done when marshalling the files to burn onto the disks, it's rather probable that whatever raw data was required was put into a password-protected zip archive (in fact, http://news.bbc.co.uk/1/hi/uk_politics/7106987.stm suggests this is the case).
The zip compression standard indicates that, where password protection is applied, the password is used to unlock a soft keystore from which a symmetric key is extracted, and that key then decrypts the main body of the archive before the usual decompression takes place.
Please note the use of the word "decrypts", Chancellor :-).
Apparently, WinZip 9.x introduced AES encryption, so depending on what version of what zipping app is in use at HMRC, it may even be using a US-formally-approved algorithm.
Granted, the soft keystore needs to be bound up with the data in the file (and it's usually advisable to keep your keys somewhere where your data isn't), but encryption is still encryption. However, for earlier versions of Zip, I'm reliably informed that the PC1 encryption algorithm it uses is rather straightforward to break.
It's also possible that, rather than password-protect a zip archive, HMRC sent the data in some password-protected spreadsheet form; let's look at what happens with StarOffice Spreadsheet and Microsoft Excel, in this regard...
From the OASIS standard for ODF 1.0...
The encryption process takes place in the following multiple stages:
1. A 20-byte SHA1 digest of the user entered password is created and passed to the package component.
2. The package component initializes a random number generator with the current time.
3. The random number generator is used to generate a random 8-byte initialization vector and 16-byte salt for each file.
4. This salt is used together with the 20-byte SHA1 digest of the password to derive a unique 128-bit key for each file. The algorithm used to derive the key is PBKDF2 using HMAC-SHA-1 (see [RFC2898]) with an iteration count of 1024.
5. The derived key is used together with the initialization vector to encrypt the file using the Blowfish algorithm in cipher-feedback (CFB) mode.
For Excel, here's the appropriate quote directly from Microsoft's support site:
"You can use a strong password with the Password to Open feature in conjunction with RC4 level advanced encryption to require a user to enter a password to open an Office file."
Not as explicitly defined as the ODF standard, but then, that's Microsoft for you.
Nonetheless, RC4, if correctly implemented, is Plenty Good Enough to count as "encryption".
Of course, if the HMRC infrastructure had been built on top of Trusted Extensions, the "junior employee" (noting the rumours forming, that more senior staff may have been complicit) would probably not have had the label at which all this data was stored within his clearance range, or the clearance range of a role that he was allowed to assume without passing through a two-person rule; he certainly wouldn't have had the privilege to mount or burn media at that label...
Actually, it looks like "password protection without encryption" has been implemented as a feature, as "Password to modify" in Microsoft Office - but, as you might expect, it doesn't work...