A few corrections to my last blog entry before I go any further: 1) my mother insists that I was closer to 3, not 4, years of age when I threw a fit and demanded (and got) my own library card*, and 2) the name of the devastating fire in Ketchum, Idaho last month was the Castle Rock Fire, not the Castle Creek Fire. I can only plead brain fuzziness based on the amount of smoke I inhaled over the two weeks it was burning.


Now that the fire is over, I have a newfound appreciation for the beautiful, clean, cool and pristine air in Idaho. For the two weeks the 46,000-acre Castle Rock Fire was burning, dense smoke and haze clouded the sky to the point that I could see neither the ski runs at Sun Valley (just a hoot and holler from my house) nor the Boulder Mountains to the north of me. You will never know how beautiful clean air can be until you've lived through several weeks of smoke, ash, and debris falling around you. It's like living through the Apocalypse, particularly the experience of looking across the valley and seeing fire burn down the ridge so fast that it was as if it were being fanned by the Devil himself.


The fire has been hard on people, particularly businesses. It caused the cancellation of a lot of Labor Day activities that were not only a lot of fun but that local merchants depended on to bring in revenue. We are now officially in what is known as "slack season": hardly anybody comes here in fall, though heaven knows why. Fishing, hiking, camping and hunting are all great Idaho fall activities. I once went on a beautiful 6-mile hike to a pristine alpine lake and did not see a single soul other than my hiking buddy and my dog. (Try that in California.) So come on up to Sun Valley, y'all. If there is anything better than terrific natural beauty, it's terrific natural beauty with no crowds.


My other change in perspective (besides a newfound appreciation for clean air) is the way I feel about firefighters. You hear all the time - and most of us believe it - that firefighters are heroes. I never doubted that. But it's one thing to think that in the abstract and another to have experienced it firsthand. I got to see a lot of them in Sun Valley in August, since we had 1600 firefighters in a town of 3000 people. My house was never in any real danger, for which I am grateful. Furthermore, there was no loss of life and no structural damage to anybody's house or business. The critters made out OK, too, though there are a lot of hungry bears wandering around looking for chow. Pretty much every place in town now has a "thanks, firefighters" sign or banner displayed prominently. We really mean it: thank you, wildland firefighters, you saved our town.


Now that the fire is 100% contained, a lot of locals are saying that in the long run it is going to be healthy for the forests that we had a burn; in fact, we were overdue for one. The forest will recover; the wildlife will thrive (so long as cheat grass doesn't crowd out the sage that is a key habitat for many species). It's only been a couple of months since the Trail Creek Fire burned one of my favorite hikes in Sun Valley, but you can already see a sheen of green on the mountains and some new seedlings sprouting up through the blackened detritus. Forests recover, and a periodic burn gets rid of the underbrush that can otherwise build up and contribute to "crown fires" where the fire spreads not along the forest floor, but leaps from treetop to treetop. The difference between a disaster and a blessing in Ketchum was the skill of the firefighters, the grace of God and also the passage and perspective of time.


When you think about it, it's amazing how much of what you see really is based on your perspective. Perspective can include where you are as you look at The Big Picture, where you are in the picture and who else is in the picture.


I was reminded of this recently in a discussion with a state government struggling with open records issues. States keep a lot of data on their citizens to support, among other things, taxation (personal and property) and licensing (driver's, hunting, fishing, construction, "concealed carry" permits and more). The question they were asking was: how much of this data should be online and searchable?


I did not offer to write, critique or edit their state's open records laws, but I did point out to one of their legislators that a lot of concerns over privacy might depend very much on who is accessing the data and why they might want to access the data.


Most people are OK with some data being collected relevant to a transaction between parties. For example, to get a concealed carry permit in the state of Idaho, I needed to give the state some information so they could do a background check on me. I also expect the state of Idaho to keep records about the fact they gave me a concealed carry permit (so that a law enforcement official can independently verify that I have a valid license and not a fake one, for example).


Many people who provide information for a service or transaction become unhappy if that data is accessed or sold or otherwise used for some purpose they didn't agree to. If you are dealing with a government entity like a state, you expect that when you give information to the state (that they need for things like raising taxes and providing services to citizens) they are going to use it for those "stated" purposes (no pun intended) and not for three thousand other things. I would not expect that the Idaho gun permit database would be searchable, say, by a gun ownership organization (or, conversely, by an anti-gun ownership organization). "Taint none of their goldurn business."


When data suppliers' expectations about who accesses what and for what purpose do not match data collectors' uses, it's a problem. For example, if you've ordered books from an online bookseller, the next time you log on, you might get a friendly message that says something like, "Hi, <Your Name>! Based on your last few book purchases, we think you might be interested in the following books..." (In my case, the list will be books on military history or the Hawaiian language.) Many people might think: "Wow! How cool that they know me and can recommend books I might like!"


Now imagine, if you will, the exact same message coming from the FBI**: "Hi, <Your Name>, based on your last five book purchases, we think you might be interested in ..." Many people would be outraged to think that the FBI (or another law enforcement entity) was looking at their book purchases. But, and here is the kicker: it is exactly the same data! Whether the above message is a "service" or an "invasion of privacy" depends on who had access to "my" data, who is doing the data analysis and why they are looking at the data. It's all about perspective.


In the private sector, these discussions take place in the realm of what a company collects, what they use the data for and who they can share the data with. Most companies have privacy policies that forbid collecting data for one stated purpose and using it or sharing it for another purpose that the "collectee" did not agree to, for example.


However, if data is public, or a public record, especially if it is Internet accessible and searchable, potentially anybody can access and analyze the data, for any purpose. My advice to the state was that they ought to hire someone to review the data they already have and figure out all the ways that data access could be misused by the evil-minded, like spear-phishers or stalkers. That is the place to start a legitimate public discussion about "open records;" specifically, how much the citizens of the state want to trade off convenience for privacy, and how much citizen data should be searchable and accessible by someone other than the state agency that collected it. It's all about perspective. 


People's perspectives on data collection can also be colored by the accuracy of the data that is kept. If someone made a mistake in doing a background check on me that led to my being denied a carry permit, I should be able to get that "mistake" corrected. Otherwise, someone down the pike may find that I was once "denied" a carry permit and deny me something else. It's the second law of thermodynamics applied to data: entropy always increases. If data is inaccurate, inaccurate decisions will flow from use of that data.


Along those lines, there is another issue I've opined about a couple of times, and I'd be done with it except the topic keeps rearing its head in different forums, and that is the idea of "automated vulnerability testing your way to security." As much as I think that the use of automated tools can help deliver more security-worthy software and have said so, there are too many discussions of late dominated by the perspective that vendors are all evil, lazy and greedy slugs (ELGSs) that happily ship products with tons of security holes in them. The perspective of people who subscribe to the ELGS theory is that vendors must be forced to submit their code to multiple, random, unvetted tools to "validate" their security.


A differing perspective (mine) is that these tools are useful only to the extent they are used and work in development: they can't "prove" security, and vendors should license and use the tools that work well for them in development. The idea, after all, is to make products better, not have public "rat out" sessions after products have shipped. And I feel really strongly that anybody wanting to run a third party tool against a product should have to prove the tool works properly and accurately. It's only fair.


In fact, they ought to have to prove that the tool is accurate before it's used, otherwise the results may "taint" a vendor (just like a mistake in my background check could color people's perceptions of me forever if it is not corrected).


The idea of "burden of proof" is important for a couple of reasons. One of them is that we are still in the nascent stages of tool usage (if it were easy, everyone would already do it) and some of the tools don't work so well. The last thing industry needs when we are trying to promote and encourage tool usage in development is every customer, or every country, deciding that IT products need to be submitted to 348 different "tool tests."  Aside from annoyance and inefficiency, accepting tools' "vulnerability alarms" without question goes against the grain of how a lot of other things are supposed to and generally do work. For example:

  • People who are put on trial are assumed to be innocent until proven guilty. Hardly anybody gets thrown in jail for 25 years to life without someone (a prosecutor) validating the evidence, presenting it in court, and defending it (from defense challenges). The burden of proof in our court system is on the prosecution, and the standard of conviction is "beyond a reasonable doubt." (A 90% "false alarm rate" of evidence presented in a prosecution would not be "reasonable doubt.")

  • Journalists are expected to check facts before reporting that, for example, a celebrity was caught in a love nest with another celebrity. Furthermore, if journalists get the news wrong, they generally print a retraction or correction. (Of course, at that point, reputational damage may not be "retractable," which is one reason why good journalists are rigorous about fact checking.)

  • Gossip is called "gossip" and not "impartial fact exchange" because so much of it is not true and potentially hurtful or damaging. This is why your mom tells you not to do it. Mom is right, as she almost always is.


The ugly issue in the promise of automated vulnerability tools is that there is no standard for these tools: what they find, or how well they find it. That means anybody can create a tool, point it at a product, and claim to find problems, and all the work falls on the product vendor to prove the product does not have a problem instead of on the tool vendor to prove the tool is accurate. And let me tell you, having to go through hundreds or thousands of "potential vulnerability fire alarms" to validate every one makes security worse, not better, because it takes a scarce resource (a security-aware developer) and puts him or her to work chasing phantoms instead of improving products.
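To put some back-of-the-envelope numbers behind that, here is a sketch of what a noisy tool costs in triage time. All of the figures (finding count, false-alarm rate, minutes per triage) are hypothetical, chosen only to illustrate the arithmetic, not measurements of any real tool:

```python
# Hypothetical cost of triaging a noisy vulnerability scan.
# Every number below is an assumption for illustration only.

findings = 1000            # "potential vulnerability fire alarms" reported by the tool
false_alarm_rate = 0.90    # assumed fraction of findings that are not real bugs
triage_minutes = 30        # assumed time for a developer to validate one finding

real_bugs = round(findings * (1 - false_alarm_rate))
total_hours = findings * triage_minutes / 60
wasted_hours = findings * false_alarm_rate * triage_minutes / 60

print(f"{real_bugs} real issues out of {findings} findings")
print(f"{total_hours:.0f} hours of triage, {wasted_hours:.0f} of them chasing phantoms")
```

Under these assumed numbers, 450 of the 500 triage hours go to phantoms: weeks of a scarce security-aware developer's time spent disproving alarms rather than fixing the handful of real issues.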


Some tool vendors push the "evil vendor" perspective because, to the extent they can convince IT vendors' customers that their products need to be scanned, they create fear, uncertainty, and doubt (FUD) and thus increase the demand for their scanning product. Can't blame them for that: it's capitalism at work. That said, I take the perspective that these tools offer promise, but they need to be validated to prove that they are accurate before anyone can be expected to use them. Only if they are accurate are they useful. If they are inaccurate, they are useless and harmful. (Putting it differently, if IT vendors need to "prove" their products are secure, why shouldn't tool vendors need to "prove" their tools are accurate before anybody would even think of using them? What's sauce for the goose is sauce for the gander.)


Lastly, some of these tools are so "chattery" and "noisy" that it really is like gossip and, like gossip, the damage is done even if there is a retraction. A tool that has a lot of false alarms taints a vendor's brand just like tabloid journalists can print innuendo that damages someone's reputation unjustly. I shouldn't have to prove the coding equivalent of "I did not spend the weekend in a love nest with a celebrity"; the vulnerability tool maker should have to prove that I did.


(Aside: one of my own amazingly wonderful ethical hacking team members just improved one of our internally-developed tools, a protocol fuzzer lovingly called BitRotter, to do more pernicious and nefarious code breaking in a good cause. He's just rechristened it ByteRotter. Thanks, Jeff.)


Clearly, my perspective isn't unbiased, because I work for an IT vendor. I believe in better security, doing more in secure development, and in industry "raising the bar" through better development practice. Automation (and automated tools) can definitely help.


I also believe in accuracy and fairness as basic principles of any business undertaking, because it is only when the haze and smoke and debris is swept away, that you can see - really see - what is there.


I climbed to the top of the ridge behind my house a few days after the Castle Rock Fire was declared 100% contained. The fall rains had come to help soothe the burns, and the winds that a few days prior had been fanning the fire were now whisking the few remaining puffs of smoke out of the valley. It's about a 600 foot climb through sage and scrub, but when I got to the top of the ridge, I could see the Boulder Mountains in the distance, and the ski runs at Sun Valley, still green and beautiful, and the aspens beginning to change color on the mountains that ring the Wood River Valley.


After two weeks of hellish smoke and ash and debris, I could see rightly - 'ike pono, as the Hawaiians say - for miles and miles and miles. There is no better perspective than that.



*  Mom also noted it was far from the last fit I would throw. What can I say? I learned useful business skills early.


** Disclaimer: I know several people who work for the FBI. They have difficult jobs that the rest of us don't understand and take for granted. I am quite sure they have more important things to do than check up on my latest book-buying binge. Ergo, no slight to them was intended nor should be inferred.


For more information:


Book of the week: I just read another book by James Hornfischer: Ship of Ghosts, about the USS Houston, sunk at the Battle of Sunda Strait in March 1942. Many of the survivors were forced to build the Burma Railway. An amazing story of survival and heroism. Definitely worth a read.




Hi Mary Ann,

Nice post. I was particularly interested in your views on automated testing products, and I agree that while they can be useful in helping build better security, they can also sometimes be time wasters or, in my experience, off-putting to the security process. I have been to clients that have used automated test tools and been overwhelmed by the number of issues found; they are then not in a proper position to assess the real level of risk, whether the issues are accurate, or how best to fix them, and often they then give up.

As you said, it's a capitalist world and tools will continue to be written; I'm not sure how you can validate them accurately, though. I have seen the efforts of some to sell 0-days to tool vendors. While this may prove whether a tool can find a previously unpublished and unfixed bug, it's not a valid test of a tool used to assess security. A tool is also only as good as its writers' knowledge of issues. How do you rate a tool that accurately finds a few hundred issues but then misses some key issue? Is it still valid? In the sense of finding issues and helping the overall security process, yes, but it doesn't make a vendor's tool complete or make the software it's testing secure.

In my experience, education is the best course to take. If you understand why software can be insecure, that can help you write more secure software; similarly, if you are deploying a vendor's software, education can help you deploy and configure it securely. Automated tools can help the process, but they shouldn't hinder it; often spotting false positives is a matter of education and experience, or hard work.

Interesting subject, though.

cheers

pete

Posted by Pete Finnigan on September 27, 2007 at 09:58 PM PDT #



