Revisiting the Laws of Identity

Kim Cameron of Microsoft just reposted a shortened version of his laws of identity.

I really didn't comment much when these were first being developed, though I recall being at a number of forums and conferences where discussion about them took place.

While they've gotten a lot of focus at times, I have my doubts about how important and practical some of the "laws" are. Rather than just parrot the laws, I thought it would be useful to discuss these possible issues and see what others may have found that either mitigates them or brings them into sharper focus.

Here are the shortened laws, each followed by my take on them:

1. People using computers should be in control of giving out information about themselves, just as they are in the physical world.

There are two ways identity information gets populated and shared:

1. We put it there, or

2. Someone else puts it there.

We can all control the first path. I can choose to fill out your web form or not based on whether I will exchange elements of my personal information for the value that you are providing.

For the second path, we use enterprise systems every day where the systems in use have some existing knowledge about us. Marketing databases are bought, built, and sold every day -- often by the same publications that will actively run shrill articles about how your privacy is being invaded at this very moment.

In effect, oftentimes once you've done #1, it's hard to prevent #2. You can give a web site the technical ability to reduce #2 and actively enforce a stricter privacy policy, but the reality is that the Web 2.0 world is often driven by "free" content and services that will drive more, not less, of this collection and sharing.

2. The minimum information needed for the purpose at hand should be released, and only to those who need it. Details should be retained no longer than necessary.

Nothing wrong with this particular ideal.

For example, when I get mailings from third parties as a subscriber to Harvard Business Review or TheStreet.com, they are always sent by those entities, not directly by third parties -- or at least they appear to be.

In general, this is actually a good business practice. Oftentimes the data you're collecting has proprietary business value in itself. If your business has made the decision that you're willing to part with it, you're highly unlikely to worry that there's a law of identity related to this.

You might be a little worried if there is a REAL law related to this. It's not like hospitals can go around selling lists of patients to drug companies. This is where privacy laws come into play.
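
To make the ideal concrete, here's a minimal sketch (Python, with hypothetical purpose and attribute names) of what purpose-based minimal disclosure might look like: the requester states a purpose, only the attributes mapped to that purpose are released, and the release is tagged with an expiry so details aren't retained longer than necessary.

```python
from datetime import datetime, timedelta

# Hypothetical mapping from a stated purpose to the minimal attributes it needs.
PURPOSE_ATTRIBUTES = {
    "shipping": {"name", "street", "city", "postal_code"},
    "age_check": {"birth_date"},
    "newsletter": {"email"},
}

def release_attributes(profile: dict, purpose: str, retention_days: int = 30) -> dict:
    """Release only the attributes needed for the stated purpose,
    tagged with an expiry so details aren't kept longer than necessary."""
    allowed = PURPOSE_ATTRIBUTES.get(purpose, set())
    released = {k: v for k, v in profile.items() if k in allowed}
    released["_expires"] = (datetime.utcnow() + timedelta(days=retention_days)).isoformat()
    return released

profile = {
    "name": "Pat Example",
    "email": "pat@example.com",
    "birth_date": "1980-01-01",
    "street": "1 Main St",
    "city": "Springfield",
    "postal_code": "12345",
}

# A newsletter signup gets the email address and nothing else.
print(release_attributes(profile, "newsletter"))
```

A stricter version would release a derived claim (e.g., "over 18: true") rather than the raw birth date, which is minimal disclosure taken one step further.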

3. It should NOT be possible to automatically link up everything we do in all aspects of how we use the Internet. A single identifier that stitches everything up would have many unintended consequences.

So I guess I should stop using FriendFeed, Facebook, and LinkedIn, eh? :-)

Ok. I know that this isn't what's really being said here. What's really being said is that using a shared identifier across a large number of systems allows people to know things about you that they shouldn't.

True. That said, this is hard to do even within a single enterprise. Are we really on a path toward that kind of convergence across the vast Internet?
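
For what it's worth, one mitigation that addresses this law directly is a pairwise identifier: the identity provider derives a different opaque identifier for each site, so there's never a single shared handle to stitch activity together. A minimal sketch, assuming a master secret held only by the identity provider:

```python
import hmac
import hashlib

def pairwise_id(master_secret: bytes, user_id: str, relying_party: str) -> str:
    """Derive a per-site identifier so two sites can't correlate the same user
    by comparing identifiers. Only the identity provider, which holds the
    secret, can map a pairwise identifier back to the user."""
    message = f"{user_id}|{relying_party}".encode()
    return hmac.new(master_secret, message, hashlib.sha256).hexdigest()

secret = b"identity-provider-master-secret"  # hypothetical; keep this in an HSM in practice

print(pairwise_id(secret, "user42", "socialsite.example"))
print(pairwise_id(secret, "user42", "newssite.example"))
# Different outputs: the two sites see unrelated identifiers for the same user.
```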

4. We need choice in terms of who provides our identity information in different contexts.

All of the references to control remind me of how most Windows firewall products work.

Basically I click on an application or link and get a pop up window in the lower right corner of my screen that says something like this:

"Application XYZ is attempting to access the internet to connect to 192.168.1.5 on port 848. Would you like to allow this? YES/NO"

The one thing all users learn quickly is that if they click YES, the application works. If they click NO, it doesn't. After a while, the pop-up is just another annoyance for the user, such that the actual applications, hosts, and ports aren't even noticed.

Now translate this to most web applications where if I click YES, some amount of information is shared and I can access what I want. If I click NO, nothing is shared, but I can't get access. What do you think the typical user will do? Do users read the EULA and privacy terms before they click?

And keep in mind that if we automate the process of entering all of this information by keeping it on an electronic card, users will actually notice even less about the information they're sharing because they won't be entering it. It'll become Yet Another Dialog to Accept (YADA?).
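
If the dialog is going to be more than a YADA, it at least needs to surface the specifics. Here's a sketch (hypothetical attribute names) of a prompt that enumerates exactly what will be shared and withholds anything not explicitly approved -- though, per the firewall analogy above, most users will likely still just click through:

```python
def consent_prompt(site: str, requested: dict) -> dict:
    """Show each requested attribute individually; anything not
    explicitly approved by the user is withheld."""
    approved = {}
    print(f"{site} is requesting the following information:")
    for name, value in requested.items():
        answer = input(f"  Share {name} = {value!r}? [y/N] ")
        if answer.strip().lower() == "y":
            approved[name] = value
    return approved

# Example: the user can approve the email address but withhold the birth date.
shared = consent_prompt("shop.example", {"email": "pat@example.com",
                                         "birth_date": "1980-01-01"})
print("Released:", shared)
```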

5. The system must be built so we can understand how it works, make rational decisions and protect ourselves.

6. Devices through which we employ identity should offer people the same kinds of identity controls - just as car makers offer similar controls so we can all drive safely.

It's hard to disagree with these last two points. They're very attractive and give users a lot of control.

I do like things such as the new Firefox address bar, which actively helps me figure out whether I landed where I intended:

[Image: Firefox address bar]

I also like the auto-form fill-out functionality in most browsers that makes registering for the myriad of sites easier.

Combined, these let me know that I'm sharing my information with the entity I think I am, and let me see and adjust the information I'm willing to share.

What's missing here is user education. A year ago, you had to look at the link you were following and know the structure of a URL to understand that you were being phished...or just not click on anything. Incremental enhancements, such as those in the address bar, give us a path toward training users to avoid these negative situations without requiring them to be geniuses.
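
To illustrate why URL structure trips people up: the only part of a URL that identifies the site is the host, and phishers exploit that by burying a familiar brand inside a hostname they actually control. A quick sketch, using a made-up phishing URL:

```python
from urllib.parse import urlparse

# A classic phishing pattern: the familiar brand appears in the URL,
# but the actual host is something else entirely.
url = "http://www.paypal.com.account-verify.example/login"

host = urlparse(url).hostname
print(host)  # www.paypal.com.account-verify.example -- not paypal.com
```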

After you've verified that the vendor in question isn't fraudulent in itself, the question becomes whether you want to give the information requested.

Portable identity is probably helpful here, but if I were an enterprise I'd be more focused on the back office.

Just as it's usually a bad waiter who stole your credit card number rather than an evil plot by TGI Friday's, it's not the intent of most organizations to actively compromise private information.

The difference here is that instead of a handful of credit card numbers, we're talking about whole repositories of data.

This may be an identity management problem (e.g. user with too much access and not being audited), but it's just as likely to be a data management problem, backup security process problem, or other issues that can lead to massive insider compromise (accidental or intentional). If you're not solving these in a concerted way, it won't matter much what your privacy policy is except for any liability you've created for yourself.
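
On the identity management side of that list, the kind of audit involved is straightforward to sketch: compare each account's entitlements against a baseline for its role and flag the excess for review. The roles and entitlement names below are hypothetical.

```python
# Hypothetical role baselines: the entitlements each role justifies.
ROLE_BASELINE = {
    "dba": {"prod_db_read", "prod_db_write"},
    "analyst": {"prod_db_read"},
}

accounts = [
    {"user": "alice", "role": "analyst", "entitlements": {"prod_db_read"}},
    {"user": "bob", "role": "analyst", "entitlements": {"prod_db_read", "prod_db_write"}},
]

def excess_entitlements(account: dict) -> set:
    """Return entitlements beyond what the account's role justifies."""
    return account["entitlements"] - ROLE_BASELINE.get(account["role"], set())

for account in accounts:
    extra = excess_entitlements(account)
    if extra:
        # Candidates for revocation, or at least review during access certification.
        print(f"{account['user']} has excess access: {sorted(extra)}")
```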

In Summary...

Not saying that the laws of identity lack value in the real world. Not saying that users shouldn't control their destiny.

Am saying that we need to be careful to ensure that these laws line up with the reality of how people use computers, and that the embodiment of the laws doesn't open up additional risks or keep us from focusing on systemic risks that might be taking place behind the browser in our applications, middleware, databases, directories, and back-office systems.
