Monday, March 27th, 2017
As a company, we try to remain politically neutral, but on the topic of privacy, we simply cannot sit back and watch our rights be dismantled and our online lives sold to the highest bidder.
For our U.S. followers, please contact your Congressional Representative in the remaining hours of the day and urge them to vote NO tomorrow on SJ Res 34, which rolls back the FCC’s rule requiring ISPs to secure customer permission before selling their data. It is an affront to our constitutional privacy protections. And because of how it has been submitted by the Senate, once voted in it is nearly impossible to reverse, should the real impact become apparent to everyone.
Since our business is built upon retaining consumer choice and privacy, this resolution is particularly alarming. We are not opposed to digital advertising or the selling of data. We simply believe that each consumer has the right to determine with whom and for what purpose their online metadata is shared and used.
The right to privacy is FUNDAMENTAL. The ISPs, Phone Companies and big platform companies can make the “it’s only metadata” argument all day long, but this metadata about where we search, and who we text and call, and what sites we visit, is the context that defines the patterns of our lives. That is why it is so valuable to not only advertisers, but also to governments and even to campaigns, where social media data and behavioral algorithms were used in the Brexit initiative and the last U.S. election to hyper-target citizens in a way that is tantamount, in our opinion, to manipulation – far beyond the usual “marketing” influence of most consumer packaged goods companies.
This bill is subject to the Congressional Review Act, which basically means that, once voted in, it is nearly impossible to reverse. Tell Congress to vote NO on Tuesday, 3/28/17.
Friday, March 3rd, 2017
Recently Jo Pedder of the Information Commissioner’s Office published a great blog – ICO guidance for consent in the GDPR
It’s worth a read, and I suggest downloading a copy of the ‘suggested guidelines’ here: Our first piece of detailed topic-specific GDPR guidance has been published today for public consultation
So, with that in mind, I thought I’d compare that guidance with the current version of the proposed Do Not Track standard, which is being offered as a solution for GDPR compliance.
I think a single image will suffice here to say that it has a way to go.
Thursday, February 23rd, 2017
This caught my eye this morning – CNIL launched today a public consultation on data breaches, profiling and consent under the GDPR. It’s open until 23 March. https://lnkd.in/ewUf-G3
Looks interesting so I clicked on the link. I was taken to this page:
You may be thinking so what? But it was something else that really caught my eye. Did you notice the message at the top of the page? The one that mentions ‘cookies’?
I spotted the word ‘cookies’, but I have no idea what the message is asking me to do, other than that there’s a radio button to click at the end of the message.
So how does this relate to GDPR? It goes a lot deeper than you think. Article 3 of the GDPR (‘Territorial scope’) states the following:
- This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not
- This Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to:
(a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or
(b) the monitoring of their behaviour as far as their behaviour takes place within the Union.
- This Regulation applies to the processing of personal data by a controller not established in the Union, but in a place where Member State law applies by virtue of public international law.
Let’s dig in a little. I checked the settings in my browser and it’s transmitting the following: HTTP_ACCEPT_LANGUAGE=”en-us” — translated, this means my browser told the website that I would like to receive the page in English. This should be the first clue that I’m located outside the EU. Secondly, the website could have looked at my location (a topic for another day). As far as I can tell, it did neither.
What it did do immediately is add two cookies to my browser — even before I’d accepted them! So I personalized the page by indicating my desire for NO cookies, deleted the cookies, and refreshed the page. I then checked local storage and guess what? The cookies had reappeared.
So far, if this were about GDPR compliance, they would have failed: they failed to recognize my location and my preferred language, and they did not respect my consent.
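For anyone who wants to repeat the check, here is a minimal sketch of how to spot cookies set before any consent is given. The sample headers below are invented for illustration, not captured from the CNIL site:

```python
# Sketch: list cookies a site sets in its very first response, i.e. before
# the user has had any chance to consent. Header values are hypothetical.

def cookies_set_before_consent(response_headers):
    """Return the names of cookies set via Set-Cookie headers."""
    names = []
    for key, value in response_headers:
        if key.lower() == "set-cookie":
            # The cookie name is everything before the first '='.
            names.append(value.split("=", 1)[0].strip())
    return names

sample_headers = [
    ("Content-Type", "text/html; charset=utf-8"),
    ("Set-Cookie", "session_id=abc123; Path=/"),
    ("Set-Cookie", "analytics_uid=xyz789; Path=/"),
]

print(cookies_set_before_consent(sample_headers))  # two cookies, set unprompted
```

If this list is non-empty on the first response, cookies were written before consent was ever requested.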
GDPR is far stricter than cookie consent. Obtaining meaningful consent is a MUST, not a SHOULD or a MAY. What seems to be missing from the equation at the moment is what LANGUAGE the consent should be in. The clue comes from the browser – HTTP_ACCEPT_LANGUAGE=”en-us” – after that, it is the job of the data processor to show me a consent page in English if they want to continue offering a service.
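That language negotiation can be sketched in a few lines. The supported-language list and the fallback below are assumptions for illustration:

```python
# Sketch: pick the language for the consent page from Accept-Language.
# SUPPORTED and DEFAULT are assumptions; q-values are ignored for brevity.

SUPPORTED = ["fr", "en", "de"]   # languages the site can serve consent in
DEFAULT = "fr"                   # fall back to the site's home language

def consent_language(accept_language):
    """Parse an Accept-Language value (e.g. 'en-us') into a supported tag."""
    for part in accept_language.split(","):
        # Each part looks like 'en-us' or 'en;q=0.8'; keep the primary tag.
        tag = part.split(";")[0].strip().split("-")[0].lower()
        if tag in SUPPORTED:
            return tag
    return DEFAULT

print(consent_language("en-us"))  # 'en' -> show the consent page in English
```

A production implementation would honor q-values, but even this much would have caught my browser’s preference.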
Conclusion – GDPR, by virtue of its territorial scope, will need to consider language as part of meaningful consent.
GDPR: The Key to Unlocking the Value of a Digital Single Market is Automated Voluntary Consent that is Controlled by Me
Saturday, February 11th, 2017
The question is how?
Consumers want three things:
The business wants:
- Simple integration
- To make money
Now how do we align these two stakeholders?
First we need to understand that the Internet was never designed with either security or privacy in mind.
It was also never designed to know who I am, what device I’m connecting with, or where I am. (Source)
So how do we solve these problems AND show that business can profit from Privacy?
There are three design considerations:
- A cloud-based consent service
- The business manages the consent mechanism
- The consumer simply transmits the consent each time they connect
Two of those options cost money to implement and maintain – one of the options makes money and is self-maintaining.
Without getting too technical let me explain.
Option 1 takes the building and maintenance away from the business, but its design introduces a liability. Why? Because if for any reason the third-party service is unavailable, the business is out of compliance. And the only business model the service has is to charge the business as a broker of consent. Considering how many transactions will require consent, monetizing at that scale is a daunting undertaking, and the cost to the business lowers margin.
Option 2 is costly to build and maintain. However, it does offer a unique advantage: the business is only out of compliance if its own systems are unavailable. It has removed the third-party operational risk; however, it has increased its own costs, which again lowers margin.
Option 3 (there’s now an app for that) is the simplest, cheapest and easiest to implement. Why? Because of its unique design. Each time the consumer connects to your business via the Internet the Web connection carries their real-time consent information in the form of encrypted data that can then be decrypted and read with a simple script. For more information on how the technology supports your business model click here.
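I won’t claim to show the product’s internals, but the general pattern of Option 3 can be sketched. Here, HMAC-signed, base64-encoded JSON stands in for the actual encryption, and the key handling and consent fields are purely hypothetical:

```python
# Sketch of Option 3: consent travels with each request as a signed, encoded
# blob the site can verify and read with a few lines of code. The shared key
# and the consent fields are hypothetical; a real deployment would use proper
# encryption and key management, not this HMAC-signing stand-in.
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # placeholder only

def encode_consent(consent):
    """Serialize a consent dict and append an HMAC signature."""
    payload = json.dumps(consent, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def read_consent(blob):
    """Verify the signature, then return the consent dict."""
    payload_b64, sig = blob.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("consent blob failed verification")
    return json.loads(payload)

blob = encode_consent({"ads": False, "analytics": True})
print(read_consent(blob))  # the site reads real-time consent before responding
```

The point of the design is that the consumer carries the consent; the business only needs the small decode step, not a consent database.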
Consent should always be voluntary – I should have a choice in what data I give for what I receive, and from whom I receive it. By allowing the consumer to remain in control of the collection, flow, use and assignment of their private data, the business can build a more trusted relationship with the consumer without costly infrastructure changes.
This now aligns the consumer with the business and turns privacy into a competitive advantage that can also be profitable.
Friday, February 10th, 2017
I just listened to Paul-Olivier Dehaye’s interview ‘The rise and risk of psychometrics in political campaigns.’
It’s well worth 26 minutes of your time (IMO).
Here are my thoughts (greatly condensed) … To achieve the societal goal of Privacy for all, you need ALL of the following (around the 15-minute mark in the interview):
- Better laws (GDPR) with enforcement
- Better privacy tools for the consumer
- New business models that sustain the current value network (ecosystem)
We designed Choice® (a software tool) for items 2 and 3 above.
It’s sophisticated technology that enables lower-cost business models with a more economically coherent value network while protecting the consumer’s right to privacy.
And it does this by using your existing web infrastructure and current knowledge base. Why is this so important? No behavioral changes for either the consumer OR the Enterprise!
The new business model (item 3) is: ‘Negotiated Digital Commerce’.
This transitions us from the current consumer value proposition of free services in exchange for business defined data use, to the future’s individual value proposition — choice in what I give and who I give it to; for what I get in return.
So, what will drive (force) change? – Regulation that is enforced – GDPR.
May 25, 2018 is just 325 ‘working’ days away.
Thursday, February 2nd, 2017
GDPR is a contextual privacy regulation that requires the Web to Recognize, Respect and Respond to each user’s personal context to enable meaningful consent. Because of this, the Consumer AND the Enterprise need better privacy/security choices.
- Leverage technology to support lower cost business models and connect economically coherent value networks
- Utilize a privacy-by-design framework, that adapts to any country/region/industry and aligns all the stakeholders
- Allow ME (the individual) to expressly consent to the collection, flow, use, and assignment of MY personal data in real time
- Are Contextual
So, what does that look like for the consumer?
And then what does it look like for the Enterprise?
No third parties controlling the consent/compliance database, no waiting to add new services as profitability or cost savings dictate. And you can support a disruption-proof, plug-and-play value network with a click.
Thursday, February 2nd, 2017
From the film “Jerry Maguire” came those now immortal words – “Show me the money” – and it is within the context of those words that we look at HTTP/2 and HTML5 and their effect on your privacy.
First, let me say that within a business context, I believe that IT architecture should support business strategy. It drives efficiency, enables new products and services, and supports healthy margins. But what about when it comes to the Web – a networked infrastructure that belongs to no one person, organization or country? Does this approach support the best interests of all the stakeholders?
The Internet has changed dramatically in the last 10 years. It is fueled by free, ad-supported services and it has gone mobile. That means for the first time ever, I use multiple devices to connect and interact with it. What is immediately apparent to the advertising industry, which fuels these free services, is the real-time need to offer more personalized ads wherever I am, and to whatever device I am using.
Enter two improved specifications that power the Web: HTTP/2 (device agnostic) and HTML5
First we will look at HTTP/2. If you read Section 10.8 carefully, you will see that it has serious privacy issues. It fundamentally changes the Web’s default ‘privacy settings’. While positioned to provide more security around your communications (TLS 1.2) in the ‘name’ of privacy, the actual impact of the change is about tracking you across ‘origins’.
The definition of origin is the point or place where something begins, arises, or is derived – in other words, you and your device. Nothing is more important to the advertising industry than the personalization of ads that are useful to me. They will pay a premium to track me as I move from device to device, from location to location. A consolidated profile that follows me where I go is far more valuable than multiple profiles tied to a desktop, laptop or phone.
HTTP/2 makes that a reality — but at what cost? There is nothing in it so far that makes it more efficient or will result in a better experience on mobile. The security capability of TLS 1.2 is a “nice-to-have” feature, making it harder for hackers to perpetrate a man-in-the-middle attack. Now let us couple the advances of HTTP/2 with those of the latest HTML update, HTML5. Sadly, Section 1.8 uncovers more privacy concerns. The first sentence reveals the issue: “Some features of HTML trade user convenience for a measure of user privacy.”
In general, due to the Internet’s architecture, one user can be distinguished from another by IP address. However, IP addresses do not map perfectly to users; as a user moves from device to device, or from network to network, their IP address changes. Other techniques, like browser fingerprinting, help remove that ambiguity, thereby targeting the individual as they move from device to device.
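To see how little it takes, here is a sketch of header-based fingerprinting. The header values are invented, and real fingerprinting draws on many more signals (canvas rendering, fonts, plugins):

```python
# Sketch: combine a handful of browser-identifying request attributes into a
# stable pseudo-identifier that survives IP/network changes. Values invented.
import hashlib

def fingerprint(headers):
    """Hash a few identifying headers into a short stable identifier."""
    signal = "|".join(headers.get(k, "") for k in
                      ("User-Agent", "Accept-Language", "Accept-Encoding"))
    return hashlib.sha256(signal.encode()).hexdigest()[:16]

home = {"User-Agent": "Mozilla/5.0 (example)",
        "Accept-Language": "en-us",
        "Accept-Encoding": "gzip, br"}
office = dict(home)  # same browser on a different network, so a different IP

# The IP changed, but the fingerprint did not: still the same individual.
print(fingerprint(home) == fingerprint(office))  # True
```

The ambiguity the IP address used to provide simply disappears once the browser itself is the identifier.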
As a consumer, I will have no idea that these changes are taking place. They’re designed to be seamless and require no behavioral changes on my part. I simply carry on believing that my communications are more secure, and yet my privacy is clearly at risk. So why are these changes even being contemplated if there is no measurable benefit to the consumer’s experience?
Firstly, we’re close to the end of phase one of Digital Advertising (The End of Digital Advertising as We Know It). The balance between usability and advertising has been lost, so only the very largest advertising engines on the Internet will survive. Only they have the resources to implement something like HTTP/2 and HTML5, due to their complexity.
Trading convenience for privacy is now a familiar refrain. The average Internet user was not asked to weigh in on their preferences relative to privacy vs. convenience. With no mobile user experience gains, this reinforces the argument that business strategy drove Internet architecture changes. (Europe’s new privacy regime will disrupt the adtech Lumascape)
If you combine the privacy concerns of HTML5 with HTTP/2 you have the perfect solution for Wall Street, but at what cost to the Internet user? It was never about mobile or a better experience – it was all about tracking me across devices, in support of a business strategy that gives me no choice on the tradeoff.
How inconvenient for
EU Reg update: Concerning the respect for private life and the protection of personal data in electronic communications
Thursday, January 26th, 2017
Here’s the link
And here’s the gem…
Information and options for privacy settings to be provided
1. Software placed on the market permitting electronic communications, including the retrieval and presentation of information on the internet, shall offer the option to prevent third parties from storing information on the terminal equipment of an end-user or processing information already stored on that equipment.
2. Upon installation, the software shall inform the end-user about the privacy settings options and, to continue with the installation, require the end-user to consent to a setting.
3. In the case of software which has already been installed on 25 May 2018, the requirements under paragraphs 1 and 2 shall be complied with at the time of the first update of the software, but no later than 25 August 2018.
Long story short — every app will need to be updated along with all the current browsers.
That is NOT a trivial exercise. I can imagine the fights now over the user interface. Get it wrong and you’re out of compliance.
Wednesday, January 25th, 2017
This is going to be quite the challenge for GDPR data processors. Why? Because the browser OEMs have not yet agreed on a standard way to let the user indicate consent, revoke consent, or store consent for the use of their data.
Can you imagine the confusion — everybody is going to want to do it their way, which will result in confusing user interfaces and even more confusing user experiences. And the data processor only has to get it wrong once to be liable, facing very expensive penalties.
Let’s dig into consent a little further…
- The user accesses a web site. The web site MUST send a request for consent to the user before any content is loaded (or it may decide to load the content and ask for consent at the same time).
- The user has to express their consent. If the user doesn’t consent, the loaded page has to disappear or be replaced by something that doesn’t violate the user’s privacy.
- The data processor now has to store the consent and provide a way for the user to revoke it at any time.
- This MUST be done each time the user accesses the site. Why? Because how do you (the data processor) KNOW whether the user has changed their mind?
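Put together, the processor’s obligation looks roughly like this. The function names, store, and purpose labels are illustrative only, not a reference implementation:

```python
# Sketch of the per-request consent burden described above: consent must be
# re-checked on every access, and revocation must take effect immediately.
# All names and the in-memory store are hypothetical simplifications.

consent_store = {}  # user_id -> set of consented processing purposes

def record_consent(user_id, purposes):
    consent_store[user_id] = set(purposes)

def revoke_consent(user_id):
    consent_store.pop(user_id, None)

def handle_request(user_id, purpose):
    """Re-check consent on every request, because it may have been revoked."""
    if purpose in consent_store.get(user_id, set()):
        return "serve personalized content"
    return "serve content without processing personal data"

record_consent("alice", {"analytics"})
print(handle_request("alice", "analytics"))  # consent is present
revoke_consent("alice")
print(handle_request("alice", "analytics"))  # revocation must be respected
```

Even this toy version shows the operational weight: durable storage, revocation handling, and a lookup on every single request.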
To say that this is a serious undertaking is an understatement. Each individual has to be accommodated in real time. Compliance is mandatory to avoid a fine.
Cloud Services like ConsentCheq are already springing up to solve this problem. Only one problem with this approach — you’re now dependent on somebody else for consent. What if their system goes down or is hacked? What happens to all of your consent data?
Do your users ‘consent’ to their consent data being stored on a third party’s servers? I can see the debates going on right now. Your own security requirements may mean this is not an option. So then you have to build everything yourself.
In closing, let me present another option. What if we put the user in charge of the collection, flow, use and consent of their private data? What if every time they came to your website you knew in real time what they did and did not consent to? What if you didn’t have to store that consent?
What would that look like?
Here’s a screenshot — the screen on the right shows exactly what I consent to in real time, and it’s available every time I come to (ONLY) your website. And the really cool thing: you know what I consent to BEFORE you send a response back to me.