The Privacy Oxymoron – How Do I Increase My Privacy AND Still Get a Great Online Experience?




Every day now there is more and more discussion on Privacy. On the one hand you have the Privacy advocates who want nothing more than complete control over every aspect of their Privacy, and on the other hand you have the Govt. and online content providers who want even more detailed information on you.

It’s becoming like a Seinfeld episode – “something’s got to give Jerry!”.

But what? Privacy is really an oxymoron unto itself. If you de-identify data enough it has no value, and in that case the experience isn’t going to be that great, because Web sites are built around figuring out who you are.

Two articles appeared on the Web today:

  1. How ‘Do Not Track’ Could Kill The Internet Startup Economy
  2. Developer Builds Privacy-enhancing Web Browser for Apple Devices

Also I’m starting to see Do Not Track show up in public company filings – saying that it could affect earnings. Let’s face it, the Web has been built on the premise that in exchange for “free” I get to use your information. So it could be a huge drain on resources if this standard gets implemented. And now we’re also seeing new browsers pop up (no pun intended) that basically anonymize your tracks on the Internet, but slow down your experience.

What continues to perplex me is that no one is turning this problem “on its head” and looking at it from a different perspective. It’s an opportunity, not a problem.

Let’s face it, nobody is going to suddenly overturn the last 10 years of the Internet. We’re all addicted to free, and we basically turn a blind eye to Web sites using our private data. However, with Mobile showing up to the party things are beginning to change. Mobile is deemed “really personal” and so we want to be sure that nobody is tracking us while we walk around.

So can we really ever “have our cake and eat it too?”

Well yes – I think we can. I wrote about how in a previous blog (A Contextual Approach to Online Privacy – It’s all about Me) but it bears repeating. What’s going to be needed is a way to placate both “stakeholders” – Me the consumer and You the content provider.

What I want is:

  • Convenience
  • Privacy
  • Control

What the Content Provider wants is:

  • Control
  • Commerce ($$$)

What we have to do is “align” those two sides and give them a way to resolve the differences – when we align those sides you’ll see the real power of the Internet realized for the first time.

So instead of trying to create more complexity, look for more simplicity. Alignment vs. disorder. And as usual the answer will be staring us in the face.


Do Not Track – Cui Bono?



Cui Bono – or in other words, Who Benefits?

Well I’m not really sure. I’ve been doing lots and lots of research into this, and I still can’t figure out how this is going to really benefit anyone other than the programmers who stay employed to try and implement everything. 

Let’s start with the definition of Privacy. There are a lot of them, but for this blog let’s use the one I came up with:

“Privacy is:  My ability to control the collection, flow, and use of My personal information”.

That’s pretty simple. I want a convenient, easy way to control what I share online. If someone abuses the data then I want an easy way to “un-share” that information. So let’s see how DNT enables that.

After launching my browser I go to the Preferences and then the Privacy tab. There I select the check box which says “Tell Websites that I don’t wish to be tracked”. So far so good. Now what is meant to happen is that automagically every Web site I go to will start looking for this incoming message and automatically disable any tracking capability that they may be using.

OK, let’s stop right here. Can you imagine the amount of code they’ll have to wade through to check A) what they’re doing as it relates to tracking, and then B) disable it, or re-program it in the case that I haven’t actually checked the Do Not Track box in the browser? This is an incredible amount of work, and as the saying goes, “what’s in it for me?”

Well, not a lot actually. You’ll have to spend time, money, and effort to rebuild your site so that it supports this new capability. You’ll have to publish new terms of service and new privacy policies, and finally make sure all of it works perfectly. And after doing all of this you may lose ad revenue because you’re no longer sharing customer information.

So let’s sum all this up – spend money, and see a drop in revenue. Hmmm, not what I really wanted.
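As an aside, the consumer’s half of DNT really is tiny: checking the box just adds a DNT: 1 request header. A server-side check could start as small as this sketch (the helper name and decision logic are mine, not part of any real site’s code); everything downstream of that boolean is the expensive part described above.

```python
# Minimal sketch of honoring the DNT request header on the server side.
# The header name "DNT" and the value "1" come from the Do Not Track
# proposal; should_track() is an invented helper, and wiring its result
# into a site's actual tracking code is where the real work lives.

def should_track(headers: dict) -> bool:
    """Return False when the browser sent 'DNT: 1'."""
    return headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # False: user opted out
print(should_track({}))            # True: no preference expressed
```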

However that’s only one side of the equation – what about “Me”… what’s in it for me?

Well not a lot really. You have no way to actually know whether or not you’re being tracked. There’s no change in the amount of data you’re sending – the Web site can still see everything as before. There’s no granular control over what you’re sending and no way to change any of it – or – even add to it. In short it’s a check box with little or no meaning. 

Returning to the question: Do Not Track – Cui Bono?

As far as I can tell – no one. It’s more work for the Web content provider, it could result in a loss of revenue if they implement it, and it’s only a recommendation so there’s no enforcement. For the consumer there’s zero benefit. There’s no improvement to the Web experience and no way to verify whether the content provider is actually honoring the browser setting.

What about an alternative approach?

For that to work you have to look at the stakeholders, and in this case there are two: the user and the content provider. What’s needed is a simple way to share more context with the content provider so they can provide an “enhanced service”. Enhanced services drive new revenue, which is something they want. The “cost” of this is “Trust”. The more I trust, the more I share. The more I share, the greater the potential for revenue.

So for DNT to really succeed it has to provide new revenue opportunities for the content providers who currently offset the cost of supporting the free service by selling your data. The current approach to DNT does not do this.


Privacy – A new definition for the Internet




I think it’s time for a new definition of Privacy. In the last few months I’ve lost count of the number of white papers and books I’ve read on the subject. And yet I found all of them lacking. They never seem to sum things up so that regular folks can understand them. So I thought I’d propose a new definition of Privacy.

Privacy is: My ability to control the collection, flow, and use of My personal information

Whew – that wasn’t so bad was it? As I’ve said on numerous occasions, privacy is about “Me” and my information. Before the Internet the flow was much more easily controlled – however, now that we all have smartphones and are connected 24/7, our data is much harder to control. So any definition of Privacy has to be supportable in both an online and offline world.

Privacy is simple, it’s about Me, My data and how it’s used. There’s no need to make it any more complex than that.


ICOSA & Starto.TV




This was taken at last night’s shoot with the folks over at ICOSA (Starto.TV) – that’s Me in the front center. The show will air next week and it will be a doozy – all about Internet Privacy.

We had a total blast at the shoot. These folks are amazing.

A Contextual Approach to Online Privacy – It’s all about ME



In my last blog (Privacy By Design – The Secret Inside the Internet) I wrote about how the very design of the Web allows us to extend it to support a contextual approach to privacy online. In this post we’ll talk about how you can enable it.

But first a little context (pun intended).

The Internet has introduced disruptions at an unprecedented scale and variety. In doing so it has created a “target rich information environment” that is on par with the Wild, Wild West of yesteryear.

Unfortunately what hasn’t kept up is our approach to Privacy. In fact if anything, it’s completely the opposite of private. Now it appears that everything is for sale. So the challenge becomes one of suitable constraints on the flow of my personal information. Unfortunately this is out of alignment with those companies whose profit comes from the unrestricted flow of my data.

So how do we align these seemingly opposing forces?

As humans, when we interact we use situational controls to share our context – however, up until now there’s been no easy way to give the user this level of control on the Internet. In fact it’s been missing entirely on the client side (the browser), as we seem to be increasingly driven by algorithms on the server side.

Well, let’s look at the two constituents – Me (the client/browser) and the Enterprise (the Web server) that I interact with. What I want is:

  • Convenience
  • Privacy
  • Control

What the Enterprise wants is:

  • Control
  • Commerce ($$$)

So the commonality between the two is “Control”. To resolve this problem we have to introduce a control mechanism for the consumer that allows him/her to conveniently share their privacy settings with the Enterprise in a way that fosters “Trust”. Remember Trust drives commerce.

The control mechanism is a database that contains my “Me” data – the information (context) that I wish to “exchange” in return for increased levels of trust and a better experience. The database is then integrated into the browser via a plugin. Now all we have to do is use the secret discussed in the last post (headers) to add the data to the requests going to the Web server.

Now we have a convenient method to store my data on the device, and a way to easily control what gets shared with the Web server. 

What’s left? The transparency problem. (Or as Prof. Helen Nissenbaum puts it in her essay in “Protecting the Internet as a Public Commons” – the transparency paradox.)

  • Achieving transparency means conveying information handling practices in ways that are relevant and meaningful to the choices individuals must make. Transparency of textual meaning and transparency of practice conflict in all but rare instances

So how do you solve the Transparency Paradox?

You don’t.

It can’t be solved – so don’t go there. Even the Wild, Wild West eventually moved on, and so will we. No matter what we say to the consumer, each person’s ability to determine the risk level from those documents is going to be different. So keep it simple and start establishing levels of Trust that we as humans do understand.

Then the control mechanism comes into play. As we establish more trust we can share more, and if that trust is abused we can remove trust. That’s what’s really been missing on the Web. The ability to turn off what I share, vs. what we have now – without affecting the “User Experience”. If I turn off cookies now, my experience comes to a halt. Whereas if I’m sharing contextual data via headers, the experience can be better or the same – but what it won’t be is worse than it is now.

So there you have it – use a database to store the “Me” data that you want to share. Have built-in controls that allow you to enable or disable the data that gets shared as the trust levels increase between you and the Enterprise Web site.

And it’s only been right in front of us for the last 30 years or so.


Privacy By Design – The Secret Inside the Internet




Shush – I’m going to tell you a secret. It’s one that no one has paid attention to for over a decade now. And like most secrets, it’s been hiding in plain sight. The story starts years ago, but we can skip most of that, and begin in June of 1999 when they finalized the HTTP 1.1 standard for the Internet (RFC 2616).

The Internet is essentially the one ring that binds us all. We can’t remember life before the Internet, and we certainly can’t imagine life without it. So what secrets does it hold that haven’t been exposed already?

Well let’s start with the abstract and see if there’s anything obvious…

The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless, protocol, which can be used for many tasks beyond its use for hypertext, such as name servers and distributed object management systems, through extension of its request methods, error codes and headers. A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred.

Nothing appears to jump out at me, but wait, maybe there is something? Notice in the third sentence that the protocol can be used for many tasks beyond hypertext. That’s interesting – it’s indicating that it’s an “extensible” protocol, i.e. we can add things to it. Which of course begs the question: “what can we add, and how can we add it?”

Well, from the text it says that we can use its request methods (not sure what those are), its error codes (doesn’t sound too exciting), and its headers – now that sounds like it could be interesting. Because a header is “data”, that means I can add new data to the protocol that touches everything on the planet.
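To make the extensibility point concrete, here’s a toy sketch of what a request with one extra header could look like on the wire. The X-Context header name is made up; the key property is that servers simply ignore headers they don’t recognize.

```python
# Toy sketch: compose the raw text of an HTTP/1.1 GET request with one
# extra, invented header (X-Context). Unrecognized headers are ignored
# by servers, which is exactly what makes the protocol extensible.

def http_request(host: str, path: str, extra_headers: dict) -> str:
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]
    lines += [f"{name}: {value}" for name, value in extra_headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

wire = http_request("example.com", "/", {"X-Context": "device=phone"})
print(wire)
```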

Interesting – but we’d better check to see if there are any gotchas.

Oh dear – this document goes on forever and as I get down close to Section 12.1 I spy what could be trouble for the header idea.

Section 12.1 talks about Server Driven Negotiation. (Fancy talk for a Web server receiving a request from a browser.)

This section states…

If the selection of the best representation for a response is made by an algorithm located at the server, it is called server-driven negotiation. Selection is based on the available representations of the response (the dimensions over which it can vary; e.g. language, content-coding, etc.) and the contents of particular header fields in the request message or on other information pertaining to the request (such as the network address of the client)

It then goes on to state…

Server-driven negotiation has disadvantages:

  1. It is impossible for the server to accurately determine what might be “best” for any given user, since that would require complete knowledge of both the capabilities of the user agent and the intended use for the response (e.g., does the user want to view it on screen or print it on paper?).
  2. Having the user agent describe its capabilities in every request can be both very inefficient (given that only a small percentage of responses have multiple representations) and a potential violation of the user’s privacy.
  3. It complicates the implementation of an origin server and the algorithms for generating responses to a request.

Crikey (as they say down under), this does not look too promising. It appears that if we add header data we can expose people’s privacy, slow down the Web server, and worst of all, item 1 above says the server can never know enough to determine what would be good for the user.

So Dear Reader it appears that what I first thought was a secret is not really a secret at all – or is it?

What if…

  • You sent enough data so that the server could accurately determine what might be best for any given user without slowing it down
  • You encrypted the users private data
  • You made it very simple for the origin server to generate a request

Well then Section 12.1’s disadvantages wouldn’t be a disadvantage anymore and in fact could become an “advantage”. And you would have just discovered a way to extend the Internet protocol so that it supported more data.

And that, Dear Reader, is the Privacy By Design secret inside the Internet. The very protocol that binds us can accept new data that allows it to become “Contextually aware” – in essence making it smarter about who we are, what device we’re using and where we are. It also allows us to encrypt that data to ensure privacy, and send it in such a way that the servers of today can easily handle those extra 1,000 bytes or so of data.
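As a sketch of that “what if” list, here’s one way the pieces could fit together. To stay self-contained it uses an HMAC tag plus base64 in place of real encryption, and the shared key is assumed to already exist; an actual design would need proper encryption and key exchange.

```python
import base64
import hashlib
import hmac
import json

# Stand-in sketch: the context blob is JSON, base64 keeps it header-safe,
# and an HMAC tag lets the server verify it wasn't tampered with. Real
# privacy would require actual encryption, which is assumed away here.
SHARED_KEY = b"demo-key"  # hypothetical pre-shared key

def pack_context(context: dict) -> str:
    payload = json.dumps(context, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + tag

def unpack_context(header_value: str) -> dict:
    encoded, tag = header_value.rsplit(".", 1)
    payload = base64.b64decode(encoded)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("context header failed verification")
    return json.loads(payload)

value = pack_context({"device": "phone", "lang": "en"})
print(len(value), "characters on the wire")  # well under ~1,000 bytes
```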

In my next blog I’m going to discuss how you can use this secret to take a contextual approach to online privacy.





A Solution for NIST’s Identity Ecosystem



For today’s blog we’re going to take a look at NIST’s “Identity Ecosystem” and see if it’s possible to build it.

The key attributes of the Identity Ecosystem include privacy, convenience, efficiency, ease-of-use, security, confidence, innovation, and choice. So we have a lot of things to consider when building this solution.

When looking at these kinds of projects I always like to start around 50,000 feet and then zoom down all the way to ground level. So let’s start with one of NIST’s use cases (remembering that our solution has to work for all of them).

Let’s use the Smartphone one:

Parvati does most of her online transactions using her smart phone. She downloads a “digital certificate” from an ID provider that resides as an application on her phone. Used with a single, short PIN or password, the phone’s application is used to prove her identity. She can do all her sensitive transactions, even pay her taxes, through her smart phone without remembering complex passwords whenever and wherever it is convenient for her.

Now let’s add some “context” around the ecosystem.

  • Digital Certificate – could come from anyone, and whatever you use to store it has to accept multiple certificates
    • Not sure why this needs to be an application (after all, it’s just a certificate)
  • We need to add a PIN to this Cert. (this assumes that she’s registered for the Cert. and has already set up her PIN)

Now when Parvati logs into her financial Web sites to conduct business she’ll be required to use this certificate to prove who she is. Here’s where we hit our first real snag. Let’s say I’ve stolen Parvati’s phone; all I have to do is guess her PIN and I can now access anything that Cert. allows me to.

Not good. While I’ve made it convenient and easy to use, I’ve created a huge Identity hole. I can easily masquerade as Parvati just by guessing her PIN. So what could we do better?

Well, how about adding some more “authentication” capabilities? Wikipedia has a great article on “Multi-Factor Authentication”:

US Federal regulators consistently recognize three authentication factors: “Existing authentication methodologies involve three basic ‘factors’:

  1. Something the user knows (e.g., password, PIN)
  2. Something the user has (e.g., ATM card, smart card); and
  3. Something the user is (e.g., biometric characteristic, such as a fingerprint).

Authentication methods that depend on more than one factor are more difficult to compromise than single-factor methods.”

At first blush it appears that the NIST use case is already using two-factor authentication – a PIN and a Cert. assigned to Parvati. However, they’re really just two distinct pieces of information, vs. information from two or more categories.

So in this case what we really need to do for Parvati is add “something the user is”. And for that we could use a fingerprint reader or even a voice print which can be compared with one on record at the financial Web site.
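A toy sketch of what that third factor adds, with big caveats: real biometric matching is fuzzy template comparison, not an exact hash match, so the hash below is only a stand-in, and all the names and sample values are invented.

```python
import hashlib

# Hypothetical sketch of two of the three factors: "something the user
# knows" (the PIN) plus "something the user is" (a biometric sample,
# reduced to a hash here purely to keep the example simple). The cert
# on the phone ("something the user has") is assumed and not modeled.

def authenticate(pin: str, biometric_sample: bytes,
                 stored_pin: str, stored_template_hash: str) -> bool:
    pin_ok = pin == stored_pin                      # factor 1: knows
    bio_ok = (hashlib.sha256(biometric_sample).hexdigest()
              == stored_template_hash)              # factor 3: is
    return pin_ok and bio_ok

record_hash = hashlib.sha256(b"parvati-voiceprint").hexdigest()
print(authenticate("4821", b"parvati-voiceprint", "4821", record_hash))  # True
print(authenticate("0000", b"parvati-voiceprint", "4821", record_hash))  # False
```

The difference from the PIN-only snag above: a guessed PIN is no longer enough, because the thief can’t supply Parvati’s voice print.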

Now let’s return to the use case and update it.

  • Install a secure database on the device – think of this like the wallet you carry in your pocket
  • Allow it to store “anything” from certificates, to voice prints, to data about your device

That’s much better. Now we have lots of distinct pieces of information about “ME” (Identity). All you have to do is merge them into the Web request going to the financial site, which is very straightforward.

Identity takes many forms – but ultimately boils down to something very simple – ME. It’s data that defines who I am, where I am and what device I’m using (so the experience can be optimized).

All you need is a simple way to store it securely (a database) – transmit it securely (HTTPS or Encrypted Headers) and process it with your existing infrastructure (CGI Environment variables which have been around for as long as the Internet has been with us).
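That last point is worth a sketch: per the CGI convention, a gateway exposes each request header to existing server code as an HTTP_* environment variable, so a made-up header like X-Me-Identity needs no new infrastructure to read.

```python
# Sketch of the CGI convention: request headers become HTTP_* environment
# variables that any existing CGI script can read. The X-Me-Identity
# header name is invented for illustration.

def headers_to_cgi_env(headers: dict) -> dict:
    env = {}
    for name, value in headers.items():
        env["HTTP_" + name.upper().replace("-", "_")] = value
    return env

env = headers_to_cgi_env({"X-Me-Identity": "parvati", "User-Agent": "phone"})
print(env["HTTP_X_ME_IDENTITY"])  # what a server-side script would see
```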

If you go back and look at the use cases, you’ll see this solution scales to every one of them without requiring a single change to your current infrastructure.



The Consensual Web

As you know, we’ve been closely watching the discussions regarding the Do Not Track (DNT) initiative.  A key discussion point is about first and third parties and how a third party can become a first party once you click on a “like” button or click through to another site or use an embedded service within the primary site.  But the question arises as to whether or not the average user KNOWS that these actions change the status of first and third parties?

If we cannot determine whose site we are on, then how can we engage in a consensual relationship with the various Web content and service providers?  Here is a case in point:


My journey begins “off the Web”. I have opted in to receive emails, so I have given my consent to the USA ProCycling Challenge organization to contact me. Today I opened this email and clicked on the Read More link.


As I finished reading, I saw a familiar-looking black bar across the top. I looked at the URL and realized I was on a BlogSpot page, not the organization’s own website. Who owns BlogSpot? Google. So now, based upon DNT definitions, Google has become the first party and has a right to capture and use my information (my context) for its own marketing purposes – without, in my opinion, my consent. But according to current DNT definitions I gave my consent the moment I clicked on the link.

I had no reasonable way of knowing that by clicking that email link I’d be sharing my cycling interest with Google.  USA ProCycling’s privacy policy is very clear, “USA Pro Cycling Challenge does not sell, rent, individually post or otherwise disclose any personal information about visitors to unrelated third parties for marketing purposes.”   I did not see anything about Google when I opted in for the emails or read the privacy policy.  There is nothing in the email or on the Web page to make me aware of this change in service providers. In fact, the only reason I know is because I have a personal blog account with BlogSpot (now Blogger) and am paying attention to privacy issues such as DNT and Google’s merged privacy policy, which coincided with their “black bar” page formatting.

I know that this is not some deliberate means to hijack my information, but it highlights the problem with DNT and their definitions of first and third parties.  I never got a chance to provide my consent – or not.  Once again, I have no choice about with whom my data is being shared.

So what is the definition of “Consensual Web”? It was good for Google, but not for me. A good Web experience is more than just serving me relevant ads and custom-sorting my searches. It’s about transparency and respect – and this morning, I’m not feeling very respected.

Oh what a tangled “web” we weave… When first we practice to “deceive!”



With apologies to Sir Walter Scott, I’m going to highlight two words in his quote – web & deceive.

The Internet has created this incredible ecosystem for users to express themselves – however with this increased expression has come increased access to personal information that can be saved on corporate servers, searched and then resold. With the advent of the Mobile Internet, tensions regarding privacy are reaching a boiling point, as my personal information is hijacked on a daily basis by current (and sometimes deceptive) Web practices.

With this in mind we can now peer into this “tangled Web” and perceive yet another wicked problem…

How does marketing communicate with customers in a one-to-one manner consistent with their current context, and do so while both preserving customer identity across multiple digital channels and respecting privacy?

For the last decade or so we’ve only had to worry about the desktop (i.e. a single context). Now with the advent of Mobile we have a shifting context that is incredibly personal, and yet lacks the attributes of the desktop medium (no more big screen, keyboard or mouse). Now marketers are faced with an incredibly complex problem – how to communicate with a consistent but personal voice, and respect the customer’s right to privacy.

In my last post (“Is Building an Identity Ecosystem a ‘Wicked Problem’?”) I introduced the notion of real-time context: the ability to transparently share my Identity and context with a Web server in real time. However, I left off one crucial item – consent – which ties directly into the above problem.

The current practice (which is frequently deceptive) is to bury privacy and data use policy in legalese or Terms of Service. Basically, you sign away all your rights – consent to everything by using the site, and then they can do anything they want to with your data. That sounds just so archaic, so 1999, back when Mobile meant a laptop computer.

So how do we bring privacy and consent into the 21st century? Or should we even bother? I say yes – it’s absolutely worth the bother. Let’s think of it in terms of this simple analogy. Remember when bankers’ hours were 9 to 5, Monday to Friday? But then they found out that everyone was working, and if they wanted to keep their business they needed to adapt to the customer. Well, now we have 24-hour ATMs and can bank on our way home from work or on a Saturday.

Well that’s what’s going to happen with Privacy – and the catalyst is going to be Mobile. It’s too late to put Pandora back in the box – Web-based advertising and behavioral targeting are here to stay.  However what we can do is figure out a “programmatic” solution to play nicely with Pandora. And let me tell you the stakes are HIGH. There are billions of dollars in revenues at stake here, let alone the other wicked problem mentioned above.

So what is the solution? Simple – give me a clear and simple choice. Let me manage what context I am willing to share with a user-driven “Personal Context Manager.” In other words, give me an electronic “ME” database that I have complete control over, and that lives on my devices, not someone else’s servers. Inside that electronic database is my data. It includes personal information, device information and also geo-location information. All combined, it’s a very precise database (or not) on who I am, what device I’m using and where I am.

Now what I need to be able to do is easily share that data with trusted Web sites. The only criterion is my definition of “trust”. If you abuse my trust I can turn it off – and we go back to 1999 – page/content only context. However if we all play nicely in the sandbox, then I’m willing to share my data with you in return for more relevance and value from you.
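Here’s a toy sketch of what such a “Personal Context Manager” could look like; the trust levels, field names, and site names are all invented for illustration.

```python
# Toy "Personal Context Manager": each site gets a trust level that I
# control, and that level decides which fields of my Me-database it sees.
# Level 0 (the default, and the fallback when trust is revoked) shares
# nothing, i.e. we're back to the "1999" page-only context.

ME = {"name": "Chris", "interests": "cycling", "location": "Denver"}
VISIBLE_AT = {0: [], 1: ["interests"], 2: ["interests", "location"], 3: list(ME)}

class PersonalContextManager:
    def __init__(self):
        self.trust = {}  # site -> trust level

    def set_trust(self, site: str, level: int):
        self.trust[site] = level

    def shared_context(self, site: str) -> dict:
        level = self.trust.get(site, 0)  # unknown sites get nothing
        return {k: ME[k] for k in VISIBLE_AT[level]}

pcm = PersonalContextManager()
pcm.set_trust("usaprocycling.example", 2)
print(pcm.shared_context("usaprocycling.example"))
print(pcm.shared_context("tracker.example"))  # untrusted: empty context
```

Revoking trust is just `set_trust(site, 0)` – the “un-share” switch the current Web lacks.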

Think about it for a moment. In the history of browsers there’s never been a way for me to control the data I share. Even the Do Not Track standard doesn’t allow me to do it. And that’s got to change. Only when I determine the trust level can I be confident that online businesses will respect my privacy.

So the answer to these two wicked problems can be summed up as “consensual context”. There’s now a programmatic way to add my consent and my context to the protocol that binds us all – the Internet. And even though my worst case position is 1999 (i.e. what we have now) for those Corporate brands that really want to go the next “Marketing Mile,” they can start with user-controlled consent and establish a new level of trust that crosses over to any “screen” with which I choose to connect to them.