Tuesday May 15, 2007

SLOTD: InfoWorld Writes On Vulnerability Fixing...

So a friend punted this URL over to me:

InfoWorld: Should vendors close all security holes?
Fix it or forget it? A reader presents an intriguing counterpoint

In last week's column, I argued that vendors should close all known security holes. A reader wrote me with a somewhat interesting argument that I'm still slightly debating, although my overall conclusion stands: Vendors should close all known security holes, whether publicly discussed or not. The idea behind this is that any existing security vulnerability should be closed to strengthen the product and protect consumers. Sounds great, right?

The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complaint is made. Before you start disagreeing with this policy, hear out the rest of his argument.

(continues...)

...and I shared the link and article with our security community chat channel, inviting comment. The result surpassed anything I was intending to write myself, and provides food for thought:

  • (15:36:32) alecm has changed the topic to: http://www.infoworld.com/article/07/05/11/20OPsecadvise_1.html - Comments please before I SLOTD a critique...

  • (16:13:03) bartb: alecm: what kind of comments are you looking for? Summary of thoughts: reality says you'll always have bugs, you'll always be short-staffed, so fix those that are publicly known, fix those that have a large-ish impact, fix the remainder if time permits.

  • (16:33:14) darrenm: re: the topic, I agree with some of what is being said in that article though not the "defer indefinitely until it is reported" way; priority scheduling for fixes to get patches out for those that are publicly known is a no-brainer.

  • (17:32:15) bartb: alecm: schneier's fresh blog posting may be of interest around your planned blog post

  • (17:46:27) alecm: My take on the Infoworld article: I think it's a bad idea, I'm with Bart and Darren but probably showing my sys-admin heritage. To me security-bugs need "fixing" ASAP once they are found, and yes I expect vendors to do the extant exploits first... but if a hole is wide-open and I am told about it, I can mitigate it in more ways than a software patch. E.g.: firewall port-blocking. Hence why I support full-disclosure within a per-event "reasonable" time period.

  • (18:08:45) Avalon: Yes, vendors should close all security holes.

  • (18:09:14) sommerfeld: the more interesting question is: given limited development resources, in what order should they be closed?

  • (18:09:59) Avalon: There's another question that begs to be asked by that article: what about security holes that are closed without being disclosed publicly? I make that point because quite obviously someone is saying that fixing one (and the subsequent patch/fix announcement) draws more attention, and therefore exploits, than without said publicity.

  • (18:11:19) sommerfeld: what about security holes that are closed without the developer realizing that the bug they fixed was exploitable? I have done this. IMHO that's the best kind of security fix - *NOBODY* knows about the exploit! The one time in my memory when someone asked me why I didn't publish a security alert on a bugfix, it was because I had been fixing a bunch of gcc -Wall warnings and didn't stop to check whether any of the fixes were exploitable.

  • (18:12:38) Avalon: hmmm, the article does mention people reverse engineering patches. Hmmm... Would you rather have only the Red Army know about your bugs, or everyone? It's unquestioned that there are teams of Chinese hackers trying to break into military and commercial systems around the world. They've got access to more manpower than most hacker organisations ever will...

  • (18:18:12) alecm: Avalon: have you a citation for the Chinese thing?

  • (18:37:02) Avalon: USA Today.

...and on rolls the paranoid chatter which drives all us security geeks. I thought I would post this and open it up to discussion, rather than just leave it hanging.

In summary, I would suggest (in slight disagreement with the "fix it when it goes public" position cited by the article) that the consensus amongst our little group is this: we should fix everything, because that's the right thing to do; but pragmatic prioritisation in the face of exploitation is wise.

You just can't rely on that, since you don't always know when you're being exploited.
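For illustration, here is a minimal sketch in Python of that sort of triage. The Bug fields, the scoring, and the CR identifiers are my own assumptions for the example, not anybody's real patch-scheduling process:

    #!/usr/bin/env python3
    # Hypothetical patch-queue triage: actively-exploited bugs first,
    # publicly-known bugs next, then everything else by raw impact.
    from dataclasses import dataclass

    @dataclass
    class Bug:
        ident: str
        publicly_known: bool  # disclosed on a list, in the press, etc.
        exploit_seen: bool    # evidence of active exploitation
        impact: int           # 1 (cosmetic) .. 10 (remote root)

    def triage_key(bug):
        # False sorts before True, so exploited and publicly-known bugs
        # come first; negating impact puts the worst of the rest up front.
        return (not bug.exploit_seen, not bug.publicly_known, -bug.impact)

    backlog = [
        Bug("CR-1", publicly_known=False, exploit_seen=False, impact=9),
        Bug("CR-2", publicly_known=True,  exploit_seen=False, impact=4),
        Bug("CR-3", publicly_known=True,  exploit_seen=True,  impact=6),
    ]

    for bug in sorted(backlog, key=triage_key):
        print(bug.ident)  # CR-3, then CR-2, then CR-1

Note that exploit_seen is exactly the field you can't trust, per the caveat above - which is why the quiet, unpublicised CR-1 still has to be fixed, just later.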

- alec

Friday May 04, 2007

SLOTD: Schneier, Industry, And 100% Security

So Techdirt writes:

Last week, security expert Bruce Schneier caused a bit of a stir when he said that there shouldn't be a security industry. While his comment engendered a lot of debate, it really wasn't a particularly radical statement. As he's made clear in his latest Wired column, all he meant was that IT vendors should be building security directly into their products, rather than requiring customers to purchase security products and services separately.

...citing Bruce as reported at Silicon.COM:

"The fact this show even exists is a problem. You should not have to come to this show ever. [...] We shouldn't have to come and find a company to secure our email. Email should already be secure. We shouldn't have to buy from somebody to secure our network or servers. Our networks and servers should already be secure."

...and I think he is right, as I find Bruce generally is. My experience bears this out - I have friends who ask "What Anti-Virus Software / Malware Detector / Intrusion Detection System Should I Use?" - and in none of these cases do I actually have an answer for them.

Sometimes they must really wonder what I do for a living, if I'm a "security expert" and don't know what AV software to use.

It's true, however. Given what I use at home (Solaris, Mac, Linux, and a solitary, rarely booted XP system), the manner in which I connect to the Internet (NAT/firewall built in to my DSL router), and the fact that I keep security patches up to date, don't run services/daemons unless they are necessary, and cycle WEP and login passwords occasionally - with all that in place, I don't have to use any specialist security software at all.

Instead I use the tools that come with my network hardware and software platforms - generally some form of Unix - and make sure they're all properly deployed. Sometimes a hacker comes knocking at my door - I've certainly seen a few attempts in my logfiles - but it's not something I fret about, since very little is exposed to attack, and what is exposed is generally well-maintained.
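For the curious, here is a minimal sketch in Python of the sort of self-audit I mean. The host address and port range are placeholders, and a real check would also cover UDP, IPv6, and (most importantly) the view from outside the NAT/firewall:

    #!/usr/bin/env python3
    # Hypothetical exposure check: try each well-known TCP port and
    # report which ones have something listening.
    import socket

    HOST = "127.0.0.1"  # substitute the address you want to audit

    def listening(host, port, timeout=0.25):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    open_ports = [p for p in range(1, 1025) if listening(HOST, p)]
    print("listening:", open_ports or "nothing - as it should be")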

So why should I worry? Beats me. The Silicon.COM article also contains this quote from Graham Cluley at Sophos:

"I can't imagine there ever being a 100 per cent secure operating system because a vital component of programming that operating system is human."

Well yes, Graham, you're right, but one of the things you've left unstated is that there is no such absolute thing as 100% security.

Security is relative: "100% security" means "100% adequate" security - that the security features you've deployed are proportionate to the exposure you create in transacting with the rest of the network, plus mitigation of any risks you face in terms of availability ("I can't access my Gmail! Argh!") or physical security ("Someone stole my laptop!").

No, there won't ever be a 100% secure system; but people who care can already get systems which are adequately "secure by default", and if they know how to use and maintain those systems properly then yes, there won't be a security industry any more.

- alec
