If you are an old security hand, you can skip reading this. If you think "assurance" is something you pay for so your repair bills are covered if someone hits your car, please keep reading.
Way back in the pre-Internet days, I used to say that computer security was kind of a lonely job, because hardly any customers seemed to be really interested in talking about it. There were, of course, some keenly interested customers, including defense and intelligence agencies and a few banks. Most of them were concerned with our security functionality and, to a lesser degree, with how we were building security into everything: a difference I will explain below, and one known as assurance.
Times change. Now, when I meet someone who complains of a virus, it's better-than-even odds that he is talking about the latest digital plague and not a case of the flu. Information technology (IT) has moved way beyond mission-critical applications to things that are literally in the palm of our hands, and is in places we never even thought would (or in some cases should) be computerized ("Turn your crock pot on remotely? There's an app for that!"). More and more of our world is not only IT-based but Internet accessible. Alas, the growth in Internet-accessible whatchamacallits has also led to a growth in Evil Dudes in Upper Wherever wreaking havoc in systems in Anywheresville. This is one big reason that cybersecurity is something (almost) everybody cares about.
Historically, computer security has often been described as "CIA" (Confidentiality, Integrity and Availability):
Confidentiality means that data is protected so that people who don't need to access it can't access it, via restrictions on who can view, change or delete it. For example, at Oracle, I can review my salary online (so can my Human Resources representative), but I cannot look at the salaries of employees who do not report to me.
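The salary example above boils down to an access-control check before every read. Here is a toy sketch of what that check might look like; the names, data structures, and rules are all illustrative assumptions, not how Oracle's actual HR system works.

```python
# Hypothetical confidentiality check: a salary is visible only to the
# employee, their manager, and their HR representative. Illustrative only.

SALARIES = {"alice": 95000, "bob": 80000}
MANAGER_OF = {"bob": "alice"}              # bob reports to alice
HR_REP_OF = {"alice": "hr1", "bob": "hr1"}

def can_view_salary(requester: str, employee: str) -> bool:
    """Return True only if the requester has a need to see this salary."""
    return (
        requester == employee                     # your own salary
        or MANAGER_OF.get(employee) == requester  # you manage the employee
        or HR_REP_OF.get(employee) == requester   # you are their HR rep
    )

def get_salary(requester: str, employee: str) -> int:
    # Deny by default: the data is returned only after the check passes.
    if not can_view_salary(requester, employee):
        raise PermissionError(f"{requester} may not view {employee}'s salary")
    return SALARIES[employee]
```

The important design choice is "deny by default": access is refused unless a rule explicitly grants it, rather than granted unless a rule forbids it.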
Integrity means that the data hasn't been corrupted (technical term: "futzed with"). In other words, you know that "A" means "A" and isn't really "B" that has been garbled to look like "A." Corrupted data is often worse than no data, precisely because you can't trust it. (Wire transfers wouldn't work if extra 0s were mysteriously and randomly appended to amounts.)
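One common way to detect that data has been "futzed with" is to record a cryptographic digest when the data is stored and recompute it on every read. This is a minimal sketch using Python's standard library; note that a plain hash only detects corruption, and if an attacker could also replace the stored digest you would need a keyed MAC or signature instead.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest recorded when the data is stored."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, recorded: str) -> bool:
    """True if the data still matches the digest taken earlier."""
    return hashlib.sha256(data).hexdigest() == recorded

# Example: a wire-transfer amount with a mysteriously appended 0 is caught.
amount = b"transfer $1,000"
digest = fingerprint(amount)
assert verify(amount, digest)
assert not verify(b"transfer $1,0000", digest)
```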
Availability means that you are able to access data (and systems) you have legitimate access to—when you need to. In other words, someone hasn't prevented access by, say, flooding a machine with so many requests that the system just gives up (the digital equivalent of a persistent three-year-old asking for more candy "now mommy now mommy now mommy" to the point where mommy can't think).
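One standard defense against that kind of flooding is rate limiting: each client gets a budget of requests, and requests beyond the budget are refused so one noisy client can't exhaust the system for everyone else. Here is a toy token-bucket sketch (the algorithm is a common technique, not anything specific to this post):

```python
import time

class TokenBucket:
    """Toy rate limiter: a client may burst up to `burst` requests, then is
    throttled to `rate_per_sec` requests per second. Illustrative only."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)       # start with a full budget
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed since the last request.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1             # spend one token on this request
            return True
        return False                     # over budget: refuse, don't queue
```

Refusing excess requests cheaply (instead of trying to serve them all) is what keeps the persistent three-year-old from drowning out everyone else.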
C, I and A are all important attributes of security that may vary in terms of importance from system to system.
Assurance is not CIA; it is the confidence that a system does what it was designed to do, including protecting against specific threats, and that there aren't sneaky ways around the security controls it provides. It's important because, if you don't have a lot of confidence that the CIA cannot be bypassed by Evil Dude (or Evil Dudette), then the CIA isn't useful. If you have a digital door lock, but the lock designer goofed by allowing anybody to unlock the door by typing '98765,' then you don't have any security once Evil Dude figures out that 98765 always gets him into your house (and shares that with everybody else on the Internet).
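The door-lock goof is a real class of bug: a hardcoded "master" credential. A sketch of the flawed check next to a sounder one makes the point; the specific code '98765', the function names, and the parameters below are all made up for illustration.

```python
import hashlib
import hmac
import os

# The flawed lock: a universal code baked into the logic. Every unit
# shipped is now openable by anyone who learns (or reads out) '98765'.
def unlock_flawed(entered: str, owner_code: str) -> bool:
    return entered == owner_code or entered == "98765"   # backdoor!

# A sounder sketch: store only a salted, slow hash of the owner's code,
# compare in constant time, and bake in no universal code at all.
def make_record(code: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", code.encode(), salt, 100_000)
    return salt, stored

def unlock(entered: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", entered.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

No amount of confidentiality, integrity, or availability in the rest of the system helps once `unlock_flawed` ships; assurance is the practice of catching that before it does.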
Here's the definition of assurance that the US Department of Defense uses:
"Software assurance relates to 'the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software.'" (https://acc.dau.mil/CommunityBrowser.aspx?id=25749)
When I started working in security, most security people knew a lot about the CIA of security, but fewer of us—fewer of anybody—thought about the "functions as intended" and "free of vulnerabilities" part. "Functions as intended" is a design aspect of security. That means that a designer not only considered what the software (or hardware) was intended to do, but thought about how someone could try to make the software (or hardware) do what it was not intended to do. Both are important because unless you never deploy a product, it's most likely going to be attacked, somehow, somewhere, by someone. Thinking about how Evil Dude can try to break stuff (and making that hard/unlikely to succeed) is a very important part of "functions as intended."
The "free of vulnerabilities" part is also important; having said that, nobody who knows anything about code would say, "all of our code is absolutely perfect." ("Pride goeth before destruction and a haughty spirit before a fall.") That said, one of the most important aspects of assurance is secure coding. Secure coding practices include training your designers, developers, testers (and yes, even documentation writers) about how code can be broken, so people think about that before starting to code. Having a development process that incorporates security into design, development, testing and maintenance is also important. Security isn't a sort of magic pixie dust you can sprinkle over software or hardware after it's all done to magically make it secure—it is a quality, just as structural integrity is part of a building, not something you slap your head over and think, "dang, I forgot the rebar, I need to add some to this building." It's too late after the concrete has set.

Secure coding practices also include actively looking for coding errors that could be exploited by Evil Dude, triaging those coding errors to determine "how bad is bad," fixing the worst stuff the fastest and making sure that a problem is fixed completely. If Evil Dude can break in by typing '^X,' it's tempting to just redo your code so typing ^X doesn't get Evil Dude anything. But that likely isn't the root of the problem (what about ^Y? What does that do? Or ^Z, ^A...?) Automated tools designed to help find avoidable, preventable defects in software are a huge help (they didn't really exist when I started in security).
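The ^X example is the classic blocklist-versus-allowlist trap in input validation. A sketch of both (the field and character set are assumptions for illustration) shows why the symptom fix doesn't close the hole:

```python
import string

# Symptom fix: strip the one control character Evil Dude used (^X is 0x18).
# ^Y, ^Z, ^A and friends all still get through.
def sanitize_blocklist(s: str) -> str:
    return s.replace("\x18", "")

# Root-cause fix: allow only characters you know are safe for this field
# (here, a hypothetical name field), and drop everything else.
ALLOWED = set(string.ascii_letters + string.digits + " .,-")

def sanitize_allowlist(s: str) -> str:
    return "".join(ch for ch in s if ch in ALLOWED)
```

The allowlist fixes the whole class of bug at once, which is exactly what "making sure that a problem is fixed completely" means in practice.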
Nobody who buys a house expects the house to be 100% perfect, but you'd like to think that the architect hired structural engineers to ensure the walls wouldn't fall over, the contractor had people checking the work all along ("don't skimp on the rebar"), there was a building inspection, etc. Note that even with a really well-designed and well-built house, there is probably a ding or two in the paint somewhere even before you move in; it's probably not letter perfect. Code is like that, too, although a "ding in your code" is probably more significant than a ding in your paint, so there should be far fewer of them.
Assurance matters not only because people who use IT want to know things work as intended—and cannot be easily broken—but because time, money and people are always limited resources. Most companies would rather hire 50 more salespeople to reach more potential customers than hire 50 more people to patch their systems. (For that matter, I'd rather have such strong, repeatable, ingrained secure development practices that instead of hiring 50 more people to fix bad/ insecure code, we can use those 50 people to build new, cool (and secure) stuff.)
Assurance is always going to be good and necessary, even as the baseline technology we are trying to "assure" continues to change. One of the most enjoyable aspects of my job is continuing to "bake security in" as we grow in breadth and as we adapt to changes in the market. Many companies are moving from "buy-build-maintain" their own systems to "rent," by using cloud services. (It makes a lot of sense: companies don't typically build apartment buildings in every city their employees visit: they use "cloud housing," a.k.a. hotels.) The increasing move to cloud services comes with security challenges, but also has a lot of security benefits. If it's hard to find enough IT people to maintain your systems, it's even harder to find enough security people to defend them. A service provider can secure the same thing, 5000 times, much better than 5000 individual customers can. (Or, alternatively, a service provider can secure one big multi-tenant service offering better than the 5000 customers using it can do themselves.) The assurance practices we have adapted from "home grown" software and hardware have already morphed, and will continue to morph, to fit how we build and deliver cloud services.