Ultimately, it also plays into the hands of the most powerful players in our ecosystem when we oversimplify things and view the relationship between data ethics and effectiveness as a zero-sum game. When it comes to customer data consent and usage, the future is unlikely to be simple for brands. However, if we embrace emerging models that can help level the playing field, it can be fairer.
The tech is exciting, but are we missing the point?
When we talk about privacy in advertising, the conversation usually turns to the technologies and applications we use to protect and enforce it.
Where once the buzz was around FLoC, now we weigh the pros and cons of Google Topics versus segments. We get excited about a new solution, and we sometimes forget to examine the fundamental assumptions that have been made along the way – and the implications they have for the future.
It’s important for brands and other players in the ecosystem to take a step back and ask the questions that get to the heart of the matter…
Is the playing field really level?
Take the example of Google Topics. It’s a nice evolution of FLoC – you’re still dealing in aggregates – so what’s really changing? Well, in the past we relied on media traders; now it’s more about algorithms making media placement decisions.
So, it becomes about feeding the AI and ML models with the best data. The companies with access to more granular data are simply better equipped to develop superior models than those that only have access to aggregate data.
This is where the rhetoric around identity and privacy plays into the hands of the most powerful players in the ecosystem. If tech giants and walled gardens can reinforce their data position this way, it will be extremely difficult for smaller players – with fewer resources and first-party data assets – to compete.
Does aggregate data really protect privacy?
One assumption that typically goes unquestioned in this debate is the extent to which aggregate data actually protects privacy. It’s not as clear-cut as aggregate versus individual data.
You have to ask – how big is the aggregate group? How can it be cross-referenced? What context can be layered on to remove the purported protection that aggregate datasets promise?
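The risk behind these questions can be made concrete. The sketch below uses entirely hypothetical data: an “anonymous” aggregate report keyed on zip code and age bracket is cross-referenced with a public dataset, and any group containing a single person is fully re-identified – the core idea behind k-anonymity failures and linkage attacks.

```python
# Sketch (hypothetical data): how a small aggregate group leaks identity
# when layered with publicly available context.
from collections import Counter

# Aggregate ad-engagement report: (zip code, age bracket) -> clicks.
# No names appear, so it looks private at first glance.
aggregate_report = {
    ("90210", "25-34"): 410,  # large group: many people share this key
    ("90210", "65+"): 3,      # tiny group: only one person matches
}

# Publicly available context, e.g. a voter roll or social profiles.
public_records = [
    {"name": "Alice", "zip": "90210", "age_bracket": "25-34"},
    {"name": "Bob",   "zip": "90210", "age_bracket": "25-34"},
    {"name": "Carol", "zip": "90210", "age_bracket": "65+"},
]

# Count how many known individuals fall into each aggregate group.
group_sizes = Counter((r["zip"], r["age_bracket"]) for r in public_records)

# Any group with exactly one member is re-identified: the "aggregate"
# statistic now belongs to a named individual.
exposed = {}
for key, clicks in aggregate_report.items():
    if group_sizes[key] == 1:
        (name,) = [r["name"] for r in public_records
                   if (r["zip"], r["age_bracket"]) == key]
        exposed[name] = clicks

print(exposed)
```

Here the 25-34 bracket stays safely anonymous, but Carol – the only 65+ resident in the records – is linked to her exact click count. The aggregation itself did nothing; only the group size mattered.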
Is informed consent from people even possible?
Some people compare data consent to a patient’s consent before they undergo a complex surgery. Is it possible for the average Joe to make an informed decision about the risks, or do they just trust their surgeon?
If you believe informed consent is possible – and not all regulators think it is – then you have to ask what measures are needed. In Europe, GDPR requires people to provide consent before their data is processed, which in practice means consent prompts before they can access their favorite content. However, is it truly informed consent when the average person will click anything to get to the content they want online?
Do large-scale walled gardens have an advantage because they are readily trusted, while smaller upcoming brands may garner more skepticism?
What alternative models are possible?
If certain models run the risk of perpetuating an imbalance in the world of adtech and identity, the big question is: what are the alternatives? The key component is building trust in the system in a way that is readily verifiable and doesn’t disadvantage smaller players.
The good news is that there are numerous approaches to building trust and several innovations that make it easier.
Some of them have the potential to get us to the same outcomes we want as an industry—and, crucially, they could level the playing field too. Although privacy wallets in crypto have received bad press in recent years for being used by criminals to hide their transactions, such approaches can enable people to set their own permissions and conditions for access to their data while protecting their identity.
Emerging browsers like Brave utilize “Basic Attention Tokens” to give users more control by providing benefits and rewards in exchange for their data and online activity. Other innovators are developing solutions to help people control – and monetize – their data.
The takeaways for brands
What does this mean for the brands looking ahead to the uncertainties of a post-cookie, privacy-first world? The answer depends on one fundamental question in our space: do you believe everything has to be first-party or not? For example, can individuals give their consent for data sharing between brands in relevant partnerships?
Whatever your answer, it’s easy to see how much easier it is for big, established brands to grab these data opportunities. But what about smaller, newer brands? They probably don’t have huge customer bases to capture data from, or the same power to form data-sharing partnerships or wide-ranging loyalty programs.
That’s where third-party data and trusted data partners come in. They can level the playing field for brands of all types and sizes, especially the ones that can’t build a large first-party data graph from scratch. And when it comes to privacy, they’re governed just as strictly as the companies they deal with – in other words, as strictly as a financial services firm or healthcare company.
Context is everything, so flexibility is key
The other big takeaway for brands is that it all comes down to the challenge you’re solving or the opportunity you’re grabbing. Brands selling insurance are used to dealing with large amounts of sensitive data, and accuracy and reach are super important. If you’re selling soda, not so much – so working with aggregate data may be sufficient for your needs.
We don’t know exactly what the world of identity will look like in the coming years, but we do know one thing. There won’t be a one-size-fits-all solution. As our industry wrestles with fundamental questions about informed consent, data-sharing, and privacy, brands should take heart.
Our goal should be to deliver a fairer future for advertisers and content providers. One where a brand’s size and sway matters less than its ability to combine creativity and pragmatism – identifying its most important customer challenges and finding the best solutions to tackle them.