By alecm on Jun 12, 2007
Cryppies will want to listen to track 4 off the album 'Algorhythms', viz 'Alice and Bob'.
This posting is a list of security video blogs which have been posted to the community.
InfoWorld: Should vendors close all security holes?
Fix it or forget it? A reader presents an intriguing counterpoint
In last week's column, I argued that vendors should close all known security holes. A reader wrote me with a somewhat interesting argument that I'm still slightly debating, although my overall conclusion stands: Vendors should close all known security holes, whether publicly discussed or not. The idea behind this is that any existing security vulnerability should be closed to strengthen the product and protect consumers. Sounds great, right?
The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument.
...and I shared the link and article with our security community chat channel, inviting comment. The result surpassed what I was intending to write myself, and provides food for thought:
- (15:36:32) alecm has changed the topic to: http://www.infoworld.com/article/07/05/11/20OPsecadvise_1.html - Comments please before I SLOTD a critique...
- (16:13:03) bartb: alecm: what kind of comments are you looking for? Summary of thoughts: reality says you'll always have bugs, you'll always be short-staffed, so fix those that are publicly known, fix those that have a large-ish impact, fix the remainder if time permits.
- (16:33:14) darrenm: re: the topic, I agree with some of what is being said in that article though not the "defer indefinitely until it is reported" way; priority scheduling for fixes to get patches out for those that are publicly known is a no-brainer.
- (17:32:15) bartb: alecm: schneier's fresh blog posting may be of interest around your planned blog post
- (17:46:27) alecm: My take on the Infoworld article: I think it's a bad idea, I'm with Bart and Darren but probably showing my sys-admin heritage. To me security-bugs need "fixing" ASAP they are found, and yes I expect vendors to do the extant exploits first... but if a hole is wide-open and I am told about it, I can mitigate it in more ways than a software patch. Eg: Firewall port-blocking. Hence why I support full-disclosure within a per-event "reasonable" time period.
- (18:08:45) Avalon: Yes, vendors should close all security holes.
- (18:09:14) sommerfeld: the more interesting question is: given limited development resources, in what order should they be closed.
- (18:09:59) Avalon: There's another question that begs to be asked by that article: what about security holes that are closed without being disclosed publicly? I make that point because quite obviously someone is saying that fixing one (and subsequent patch/fix announcement) draws more attention and therefor exploits than without said publicity.
- (18:11:19) sommerfeld: what about security holes that are closed without the developer realizing that the bug they fixed was exploitable? I have done this. IMHO that's the best kind of security fix - \*NOBODY\* knows about the exploit! The one time in my memory when someone asked me why I didn't publish a security alert on a bugfix, it was because I had been fixing a bunch of gcc -Wall warnings and didn't stop to check whether any of the fixes were exploitable.
- (18:12:38) Avalon: hmmm, the article does mention people reverse engineering patches. Hmmm... Would you rather have only the Red Army know about your bugs or everyone? It's unquestioned that there are teams of Chinese hackers trying to break into military and commercial systems around the world. They've got access to more man power than most hacker organisations ever will...
- (18:18:12) alecm: Avalon: have you a citation for the chinese thing ?
- (18:37:02) Avalon: USA Today.
...and on rolls the paranoid chatter which drives all us security geeks. I thought I would post this and open it up to discussion rather than just leave it hanging.
In summary I would suggest (in slight disagreement with the "fix it when it goes public" position cited by the article) that the consensus amongst our little group is that we should fix everything because that's the right thing to do; but pragmatic prioritisation in the face of exploitation is wise.
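For what it's worth, the "pragmatic prioritisation" position reduces to a simple sort. Here is a minimal sketch in shell, with an entirely invented backlog (the vulnerability names, public/private flags and impact scores are made up for illustration, not taken from any real product):

```shell
# Each line: id,publicly-known(1/0),impact(0-10) - invented data for illustration.
cat > /tmp/backlog.csv <<'EOF'
CVE-AAAA,1,7
CVE-BBBB,0,9
CVE-CCCC,1,9
CVE-DDDD,0,2
EOF
# Publicly-known holes first, then by impact; the fix order falls out of the sort.
sort -t, -k2,2nr -k3,3nr /tmp/backlog.csv
```

Real triage would also weigh exploitability, exposure and customer impact, but the shape - fix the known-and-severe first, then work down the list as time permits - matches the consensus above.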
You just can't rely on that, since you don't always know when you're being exploited.
A man is going on vacation (ie: on holiday) and he's worried about the possibility of someone breaking into his house whilst he's away; so he checks all the window locks from inside the house, steps outside, walks around the house to inspect for anything he's missed - checking that patio doors, etc, are locked - then locks his front door and drives off. What's he done wrong?

...which is my usual schtick for trying to explain the importance of doing things in the right order, because even if you have the right security ingredients you can still mess up by not using them properly, or by not laying them out in a sensible manner. I was blown away by some of the creativity in the responses; the person who went for the jugular and got my typically sought-for answer was Andy Paton:
While he was busy checking the windows and backdoor he left the front door unlocked!!

...which is the obvious flaw in the process; it's astonishing how many people completely miss that. That said - and thank you Andy - this being an open question there is always room for a different perspective, eg: trojan horses:
- Wes W:
- Apparently he's assumed someone hasn't already broken in or compromised existing security already. For example, your vacation man didn't seem to check the interior for a trojan horse (stowaway) and he didn't change the locks.
- Mark Musante:
- He hasn't checked the first floor?
...the architectural and integrational:
- My first thought was that it has to relate to the "then locks his front door", i.e. he hasn't 'tested' his security from the outside, in the state it will actually be in. As the other comment mentions, he has also left the door unlocked while checking! And my second thought was around "and drives off" - the car being present/missing is a clue to his absence, but I can't see much that you can do about that unless you religiously use the garage (which isn't stated either way, so I suspect it isn't that).
...and the slightly tongue-in-cheek operational risk:
- Assuming it's a single-storey house with no means of entrance except doors and windows, and all access needing separate keys: "he checks all the window locks from inside the house" - he should check/test the locks from the outside. "Steps outside" - how, and through what? Lock it from the outside before proceeding. "Checking that patio doors" - how does he protect those? They're a big visual vulnerability. And has he taken steps to make it look like the house is lived-in and not abandoned? :)
...all of these are legitimate and interesting answers; even the last one, which reminds me by analogy of the occasion I saw someone enable system auditing in a particularly nitpicky mode, only to see the machine crash two days later from filling its root partition. This is related to the reason I generally put /var/log and /var/adm on a partition completely separate from root and the normal /var - it's a signature perversity of a Muffett-specified machine, but it leaves your machine at less risk from log-flooding.
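The separate-partition trick looks something like this in /etc/vfstab terms; the device names and slice numbers here are invented for illustration, not taken from any real machine:

```shell
# Illustrative vfstab entries giving the log trees their own slices, so a
# log-flood fills those partitions rather than / (device names invented).
cat > /tmp/vfstab.example <<'EOF'
/dev/dsk/c0t0d0s5 /dev/rdsk/c0t0d0s5 /var/log ufs 1 yes -
/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /var/adm ufs 1 yes -
EOF
# The mount points that are now isolated from the root filesystem:
awk '{ print $3 }' /tmp/vfstab.example
```

The same reasoning applies on any Unix: anything that grows without bound under load or attack deserves its own filesystem.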
- Tom Hawtin:
- He hasn't checked that the iron is switched off. He returns to find a perfectly secure but somewhat charred house. With two weeks worth of milk on the doorstep.
So, next time I have to stand up and give this talk to somebody, I'll have something extra to say. Thank you folks, and thank you for sharing. Thank you also to Tom for this little gem which made me smile:
He should check that the front door is locked, from the inside? My father's old front door - you could open the lock through the letterbox using a handily located small crowbar.

...which just goes to prove that security can be perfectly acceptable if it fits your environment; I still know places where nobody bothers to lock their doors when they go out for the day, but nowadays they seem somehow fewer and further between...
So: A man is going on vacation (ie: on holiday) - and he's worried about the possibility of someone breaking into his house whilst he's away; so he checks all the window locks from inside the house, steps outside, walks around the house to inspect for anything he's missed - checking that patio doors, etc, are locked - then locks his front door and drives off.
What's he done wrong? Feel free to critique. My answer will be posted tomorrow, unless it appears in the comments beforehand. :-)
ps: if you've heard me explain this before, please keep schtum. This is just for fun.
Last week, security expert Bruce Schneier caused a bit of a stir when he said that there shouldn't be a security industry. While his comment engendered a lot of debate, it really wasn't a particularly radical statement. As he's made clear in his latest Wired column, all he meant was that IT vendors should be building security directly into their products, rather than requiring customers to purchase security products and services separately.
...citing Bruce as reported at Silicon.COM:
"The fact this show even exists is a problem. You should not have to come to this show ever. [...] We shouldn't have to come and find a company to secure our email. Email should already be secure. We shouldn't have to buy from somebody to secure our network or servers. Our networks and servers should already be secure."
...and I think he is right, as I find Bruce generally is. My experience bears this out - I have friends who ask "What Anti-Virus Software / Malware Detector / Intrusion Detection System Should I Use?" - and in none of these cases do I actually have an answer for them.
Sometimes they must really wonder what I do for a living, if I'm a "security expert" and don't know what AV software to use.
It's true, however. Given what I use at home (Solaris, Mac, Linux, and a solitary and rarely-booted XP system), plus the manner in which I connect to the Internet (NAT/firewall built into my DSL router), and the fact that I understand the value of keeping security patches up to date, not running services/daemons unless they are necessary, and cycling WEP and login passwords occasionally - with all that in place I don't have to use any specialist security software at all.
Instead I use the tools that come with my network hardware and software platforms - generally some form of Unix - making sure they're all properly deployed. Sometimes I get a hacker knocking on my door, and I've certainly seen a few attempts in my logfiles, but it's not something I fret about: there's very little exposed to attack, and what is exposed is generally well-maintained.
So why should I worry? Beats me. The Silicon.COM article also contains this quote from Graham Cluley at Sophos:
"I can't imagine there ever being a 100 per cent secure operating system because a vital component of programming that operating system is human."
Well yes, Graham, you're right, but one of the things you've left unstated is that there is no such absolute thing as 100% security.
Security is relative: 100% security means "100% adequate" security - that the security features you've deployed are proportionate to the exposure you create by transacting with the rest of the network, plus mitigation of any risks you face in terms of availability ("I can't access my Gmail! Argh!") or physical security ("Someone stole my laptop!").
No, there won't ever be a 100% secure system, but people who care are currently able to get systems which are adequately "secure by default" and if they know how to use and maintain those systems properly then yes, there won't be a security industry any more.
Hello, Susan. :-)
However, last week I posted a video about Web 2.0 security, and I am in some ways delighted that an example of a gap I didn't cover has come to the public consciousness so soon.
Our fearless leader two years ago was described and quoted thusly:
Blogging's advantage, from his perspective, is in the transparency and authenticity that nothing else can provide. With more than 1000 company bloggers, people can see inside Sun in ways that are infinitely more valuable than Federal governance regulations. 'Executives are missing a point. There is no perfect truth despite transparency.' He argued that SEC requirements for quarterly reporting is far from as revealing as 1000 Sun bloggers talking about 'the guts of the company,' on a daily basis in a public forum.
From Schwartz' perspective, blogging is not an appendage to Sun's marketing communications strategy, it is central to it. He believes that the 1000 Sun bloggers contribution hasn't just moved the needle for the company, 'they've moved the whole damned compass. The perception of Sun as a faithful and authentic tech company is now very strong. What blogs have done has authenticated the Sun brand more than a billion dollar ad campaign could have done. I care more about the ink you get from developer community than any other coverage. Sun has experienced a sea change in their perception of us and that has come from blogs. Everyone blogging at Sun is verifying that we possess a culture of tenacity and authenticity.'
...and the flipside of that is summed-up in a nutshell: if you manage to do something which trashes your authenticity, makes you look artificial, opaque, plastic, or disrespectful of the members of your community, then you can suffer in a way that hasn't really had adequate comparison since the days of tar & feathers, stocks or other forms of community social humiliation.
Sun Microsystems has its own internal vocabulary, and one of the phrases which used to be common was that of the CNN Moment - a "damaging public infrastructure failure often experienced by dot-com enterprises" which presumably would be big enough and embarrassing enough to end up on the front page of the eponymous website.
What I am finding is that it is less obvious to some of my colleagues (and customers) that, as mainstream media websites become less relevant, blogs and other communities become more relevant in terms of how people will perceive you and your company; and the distributed nature of blogs means that stories don't get retracted, they get amplified.
So nowadays we should fear "blog moments", or perhaps social-tar-and-feathering, since once humiliation is stuck to your brand then it's awfully hard to wash off.
So there's your security risk for today, and its respective mitigation: if you're going to engage with your community then do respect them and don't junk those amongst them with whom you have an issue; instead you need to engage with your community about the underlying problem - eg: "Our advisers think this is a legal risk to us, so we're very sorry but we're suspending this thread until we sort this out..." - and you'll come out of it a lot cleaner, and with fewer feathers.
And sadly there is no shortcut. No amount of firewalls, VPNs, privilege management, cryptography or methodology will save you from the business risk of not "getting it".
This was a great weekend for WiFi on OpenSolaris (and thus future releases of Solaris and Solaris Express) [build 64]. Not only did we get a driver for the Intel Centrino 3945 chipset, but more importantly (well, at least in the eyes of a security geek like me) we got support for WPA-PSK. I've been working with the project team on this for quite some time now (not as a core developer; mostly design advice and code review) and I'm really glad to see it integrated; I'm really pleased with the architecture and the implementation.
Yeah, I know lots of other operating systems had this already, and now we do too! Combined with NWAM, which integrated its first deliverables in build 62, we are really going somewhere with usability and security for Solaris on laptops.
Now I can put WPA-PSK on my home router again, instead of relying on WEP, not broadcasting my SSID, and MAC address restrictions. Meanwhile the project team are now off developing WPA Enterprise support; I expect to work with them a little as they design and implement that.
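By way of illustration of what WPA-PSK looks like at the client end, here is the widely-known wpa_supplicant.conf shape for a pre-shared-key network. Note this is the Linux/BSD tool's syntax rather than the new Solaris tooling, and the network name and passphrase are made up:

```shell
# Illustrative pre-shared-key client profile (wpa_supplicant.conf syntax;
# the SSID and passphrase are invented).
cat > /tmp/wpa.conf.example <<'EOF'
network={
    ssid="homenet"
    key_mgmt=WPA-PSK
    psk="a-long-made-up-passphrase"
}
EOF
# sanity-check the profile actually requests WPA-PSK key management:
grep -c 'key_mgmt=WPA-PSK' /tmp/wpa.conf.example
```

The point being that the secret is a passphrase shared between router and client, rather than a static WEP key that a passive attacker can recover.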
I attended InfoSec Europe at Olympia in London earlier this week. I find this show generally a little too "PC"-biased sometimes, so I wasn't expecting to get too much out of it. I spent most of the time looking around for encrypted storage solutions and products. Last year I found an excellent hardware-only encrypting disk drive that is approved for UK government use.
This year I found a device by a company called SafeBoot. Initially I almost discounted it because I was expecting it to be Windows-only. The device is a small USB flash drive with a fingerprint reader to access the data - I think it was their phantom product that I saw. While the device can only be configured from Windows, the lock/unlock functionality works on any system. We tried it out on the MacOS X laptop we had with us (which ensures there are no special drivers needed), and it works just fine. What was even nicer is that a simple software eject under MacOS caused the drive to relock again, so I fully expect this to work just the same under Solaris. Under MacOS X the encrypted part of the device - the part you need your fingerprint to unlock - appears as a removable drive with no media in it, until you swipe your fingerprint.
Pretty cool little device, I don't have one at the moment to try it out but it looks promising. I can even see some uses for this in a primarily Solaris based solution, so you might see this or something like it in the future....
Apparently the device can also allow the crypto functionality to be used by the host OS, but only Windows. I wonder if I can get them to write (or collaborate with us to do so) a driver for the OpenSolaris cryptographic framework.
We were in the midst of some on-the-fly rearchitecture discussion (read: "if we replumb it all in a more elegant fashion, what needs to be fixed or added in order to make it safe?") and it turned out that an extra firewall, demarcating a line between some public and private machines, would make matters a lot more secure.
"It'll cost a lot, this new firewall", says their long-haired sysadmin.
"Why", says I?
"Firewall license" says he, and names a largeish four-figure number. Eek. That's more than the hardware!
So one of the things I've never understood - and I've told him this - is the "Cult Of Firewall": the idea that whenever firewalls are mentioned, people must go running towards a "dedicated box or appliance" running "genuine firewall software" for which $$$$$$ are paid.
Sure, in an enterprise context where people bandy words like "five nines" (ie: 99.999% uptime) - or "extreme(ly) high availability", or where you need "management consoles" - then do buy an enterprise solution where you might be able to sue the vendor if it blows up.
But if you are a small-to-medium organisation with your own in-house pet geeks, then why not take advantage of the general-purpose functionality of a general-purpose operating system, and deploy Solaris, Linux or \*BSD as a firewall? Consider your choice carefully and minimise the installation to the utmost, but it'll be a lot cheaper, often perfectly adequate, and more than adequately performant.
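As a sketch of how little "genuine firewall software" a small shop actually needs, here is a minimal IPFilter-style ruleset of the kind such a box would run. The interface name bge0 and the exposed ports are assumptions for illustration, not a recommended policy:

```shell
# Minimal illustrative IPFilter ruleset for a general-purpose Unix box used
# as a firewall (interface name and ports invented; tune to your exposure).
cat > /tmp/ipf.rules.example <<'EOF'
# let outbound traffic through, statefully
pass out quick on bge0 all keep state
# admit only the services we intend to expose
pass in quick on bge0 proto tcp from any to any port = 22 keep state
pass in quick on bge0 proto tcp from any to any port = 80 keep state
# default-deny everything else inbound
block in quick on bge0 all
EOF
# on a live system you would load this with something like: ipf -Fa -f <file>
grep -c '^block' /tmp/ipf.rules.example
```

A handful of lines, stateful filtering, default-deny inbound: the essentials of most appliance configurations, on hardware and software you already own.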
I started at Sun in 1992 and if I had had more business sense back then, and if I had had more money, then I would have cottoned on to the number of SparcStation2's that I was buying, to act as "routers" for our intranet. This observation might have led me to invest in Cisco and its dedicated routers, and made me a tidy profit. Oh well.
But the thing about IT security is that "what goes around, comes around". Maybe it's time for the comeback of the general-purpose operating system, in tiny tasks, on more-than-adequately-powerful hardware?
 yes, this is an intentional pun. :-)