Taking GDPR Seriously


For better or for worse, the web is now full of modal screens that block progress until you accept or reject cookies. If you reject them, often you get kicked out. Blocked.

Building 111 online, we know people use the website when they have an urgent medical issue. Most users will be worried to some degree; a panicked state is to be expected. The last thing we would want to do is force them to agree to tracking cookies. We couldn’t just ignore the law though, so we put our heads together to figure out a plan that would be legal, ethical and a good user experience.

Too long; didn’t read? We deleted everything and put users first.

I’ve personally (my own opinion) never liked the idea of using Google Analytics on this kind of website (yes, I know I use it on my blog)... but we were using it daily to figure out how to improve the site. So it was quite a big deal when we decided to scrap it.

How do you scrap Google Analytics?

That bit was actually quite easy. First to go were the third-party scripts (an easy performance win there!), followed by the biggie... removing the cookie banner. We rewrote the cookies policy to explain that we only use first-party, functionally required cookies.

This all felt like a brilliant step forward, especially compared to the alternatives... who wants a third-party tool taking over your site to manage consent for another third-party tool? We were free; we had full control.

So a happily ever after?

Unfortunately not... you see, it turns out analytics are quite important. We started to have all these unanswerable questions: what caused the survey response rate to drop (maybe the GDPR-related text we added?), what’s the browser usage looking like... what are this month’s KPIs looking like?


Google Analytics was a powerful tool that allowed the whole team to easily answer these questions.

Could we replace it with a different tool? Not really. There are some competitors that are more privacy-aware in some ways but it would still mean reinstating the cookie banner and we really do want to be as first-party as possible. I’ve never liked the idea of random companies being able to inject scripts into any site (again, personal opinion!).

Building an analytics platform

As much as I would have enjoyed it... we couldn’t just go ahead and replicate Google Analytics (although it feels like there’s a gap in the market for more self-hosted non-cookie solutions). But we could take a pragmatic approach.

We already had a database for auditing purposes: a way to track a journey on the site without personally identifiable data. It’s used to give Clinical Commissioning Groups (CCGs) around the country a high-level overview. There’s already a Power BI dashboard for them, so we figured maybe we could make one for internal analysis.

It’s impossible to get everything done and priorities change, so it was a while before we got around to implementing these ideas. That all began this week. On Tuesday we got most of the developers into a room to discuss the pros and cons of the various solutions... by this afternoon (Thursday) it was already being tested. I often moan about how long it can take to get updates released, but that’s pretty good going!

There’s no fancy dashboard yet but the new structure for creating events is in place. There was already an existing version, but it only stored strings, which meant it could easily become full of variation and hard to query. So we replaced it with an eventKey and eventValue pair, where the key is an enumeration so it’s easy to organise and query. Currently it just collects browser info; that seemed a good place to start so we can track mobile usage (in the past on GA it’s averaged about 80% mobile vs desktop/tablet, it will be interesting to see if that stays the same). This brought me into the ancient lands of... user agent sniffing.
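To make the idea concrete, here’s a minimal sketch of the key/value structure. Only eventKey and eventValue come from the post; the specific key names and the EventKey object are invented for illustration (our real implementation is server-side .NET, with the key as an enumeration).

```javascript
// Enum-like object standing in for the server-side enumeration: a fixed set
// of keys keeps events consistent and easy to query (names are hypothetical).
const EventKey = Object.freeze({
  BrowserInfo: "BrowserInfo",
  ExternalLinkClick: "ExternalLinkClick",
});

// Build a structured event rather than a free-form string.
function buildEvent(eventKey, eventValue) {
  if (!Object.values(EventKey).includes(eventKey)) {
    throw new Error("Unknown eventKey: " + eventKey);
  }
  return { eventKey, eventValue };
}

const ev = buildEvent(EventKey.BrowserInfo, "Chrome 79 / mobile");
// ev is { eventKey: "BrowserInfo", eventValue: "Chrome 79 / mobile" }
```

The point of the enumeration is that a query like “count all BrowserInfo events” can never miss rows because someone typed “browser-info” one day and “Browser Info” the next.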

Browser sniffing

Thankfully .NET has pretty good browser detection in place, but (probably due to using an older version of the framework) there were quite a few gaps where it got things very wrong. There are far more useful resources on the web than my notes, but I find it useful to keep a record. I created a BrowserInfo class to override BrowserCapabilities; here are some notes:
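User-agent sniffing mostly boils down to ordered pattern matching, because browsers impersonate each other in their UA strings. A simplified sketch of the general technique (these patterns are hypothetical examples, not the production .NET rules in our BrowserInfo class):

```javascript
// Illustrative UA sniffing: check the more specific tokens first, because
// legacy Edge UAs also contain "Chrome", and Chrome UAs also contain "Safari".
function detectBrowser(ua) {
  const rules = [
    { name: "Edge", re: /Edge\/(\d+)/ },
    { name: "Chrome", re: /Chrome\/(\d+)/ },
    { name: "Safari", re: /Version\/(\d+)[\d.]* Safari/ },
    { name: "Firefox", re: /Firefox\/(\d+)/ },
    { name: "IE", re: /Trident\/.*rv:(\d+)/ }, // IE11 dropped the MSIE token
  ];
  for (const { name, re } of rules) {
    const m = ua.match(re);
    if (m) return { name, majorVersion: Number(m[1]) };
  }
  return { name: "Unknown", majorVersion: null };
}

const chromeUa =
  "Mozilla/5.0 (Linux; Android 10) AppleWebKit/537.36 (KHTML, like Gecko) " +
  "Chrome/79.0.3945.116 Mobile Safari/537.36";
// detectBrowser(chromeUa) => { name: "Chrome", majorVersion: 79 }
```

Getting the order wrong is exactly the kind of gap an older framework version can have baked in, which is why overriding the built-in detection was necessary.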

Client side events (proof of concept)

I had a bit of spare time after implementing the server-side events, so I figured I’d stay in the same context and think about how client-side events would work. Usefully, I found out that the database we were using already had a public endpoint, so there was very little backend work required to get client-side events up and running.

In an ideal world it would be similar to Google Tag Manager, with an admin screen and a first-party injected script to add the events. This might happen in the future and would have some benefits, but for now I wondered how I would implement the basic events.

On the homepage of 111 online there are some links to NHS.UK pages. These are external links, so there’s no easy way to see how often they are clicked, but that is very useful to know, as we could change the links to more useful ones or provide more relevant information. So I kept my task simple: implement events for those links.

I did this by adding two data attributes to the links: data-event-trigger and data-event-value. This let me, using jQuery (yep, still a thing), add click handlers to all occurrences of the trigger. The handler then just needed to build an event the website could understand and attach the value.
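The production code used jQuery; this dependency-free sketch shows the same idea under stated assumptions (the endpoint path, the send mechanism and the attribute values are invented for illustration; only the two data attribute names come from the post):

```javascript
// Read the two data attributes off a link into an event object.
function buildLinkEvent(el) {
  return {
    eventKey: el.getAttribute("data-event-trigger"),
    eventValue: el.getAttribute("data-event-value"),
  };
}

// Attach a click handler to every element carrying the trigger attribute.
function registerLinkEvents(doc, send) {
  doc.querySelectorAll("[data-event-trigger]").forEach(function (el) {
    el.addEventListener("click", function () {
      send(buildLinkEvent(el));
    });
  });
}

// In the page this might be wired up like so, posting to the (hypothetical)
// public endpoint path the audit database already exposed:
// registerLinkEvents(document, function (ev) {
//   navigator.sendBeacon("/analytics/events", JSON.stringify(ev));
// });
```

Keeping the event-building separate from the DOM wiring also means the interesting part can be unit-tested without a browser.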

Nice and simple, potentially opening the doors for us to have a good ethical solution for analysing journeys on the site and improving it for everyone.

After speaking to Akil Benjamin at New Adventures 2020, I wrote a follow-up article to this post: How it feels to make the right decision.