Data leaks are a design flaw, not bad luck

Odido. The municipality of Epe, where the records of nearly all residents were stolen. The breast-cancer screening programme covering hundreds of thousands of women. Even the Dutch Data Protection Authority, the supervisor itself. In recent months a new leak has seemed to land every week, and the response is always the same: better security, harsher fines, faster mandatory reporting. All sensible, but these are bandages on a deeper wound. Our data is in the wrong place.

A thought experiment to make this plain. In the physical world you have one mailbox. The town hall, your bank, your aunt and the parcel courier all use the same slot in your front door. Nobody finds that strange. We figured it out somewhere in the mid-nineteenth century and have kept it that way ever since.

Digitally we did the opposite. My Bank, My Government, My Energy Provider, My Health Insurer, and dozens more. Each with its own login, its own interface, its own password, its own security level. We accept it because we don't know any better. But say it out loud and it sounds absurd: why would I have to manage dozens of separate digital mailboxes for parties who all simply want to send me something?

The question behind the question

The answer is that this all grew that way; nobody designed it. When the internet went commercial, every organisation built its own customer environment. There was no neutral digital infrastructure, and no party had any interest in creating one. Worse: businesses have an active interest in having you log in with them rather than into some shared mailbox where they would be just one sender among many.

The result is that your data now sits in hundreds of places. At every organisation you have ever done business with, something of yours is stored: sometimes things you knowingly handed over, like a passport scan or a bank-account number, but often things that arose as a by-product, such as your click behaviour, what you buy, when you log in. The list is longer than you'd think. Each of those places is a potential leak. Each one manages your information according to its own standards, its own budget and its own degree of care. And you have no idea what they actually have, how long they keep it, or who has access.

At this point a reader might think: but we have e-mail, don't we? E-mail is exactly what could have been the answer, and partly is. It works as a universal postal infrastructure precisely because it was designed before commercial interests could fragment it. But e-mail is not bound to you as a person, it is just an arbitrary address that anyone can create. Nobody can be sure, on the basis of an e-mail address, that you really are you, and you cannot be sure, on the basis of a sender, that the message really comes from your bank. That is why e-mail works fine for incidental messages but organisations build their own portals for anything that matters. We are missing the layer beneath it: a mailbox that does belong to you, and where everyone is sure they are sending to the real you.

A different starting point

There is a more logical alternative, and it can be summarised in one sentence: data belongs to the person, not to the service.

Imagine that you have one digital mailbox. Anyone who wants to send you something, a bill, a message, a form, sends it there. You decide who is allowed to write to it. The same infrastructure can be used to send information back, to fill in forms, and to share specific data temporarily when that is needed.

The principle reaches further than mail. Much of what organisations now store, they do not in fact need to know. A bank that wants to verify you have a valid identity is not legally required to keep a copy of your passport; a one-off confirmation suffices, yet in practice they keep it anyway. A landlord who wants to know whether your income is sufficient now asks for a payslip, while a verified yes or no from a trusted party would be enough. A web shop that wants to deliver a parcel needs an address only at the moment of delivery, but stores it for years.
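For the landlord example, such a "verified yes or no" could in principle look something like the sketch below. It is illustrative only: an HMAC with a shared demo key stands in for a real digital signature, and every name and key is invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: a trusted issuer (say, an employer or the tax office)
# signs a minimal claim -- "income sufficient: yes/no" -- and the landlord
# verifies that signature without ever seeing a payslip.

ISSUER_KEY = b"demo-issuer-secret"  # real schemes would use asymmetric keys

def issue_attestation(subject: str, claim: str, value: bool) -> dict:
    """The issuer signs a single yes/no claim about a subject."""
    payload = json.dumps({"sub": subject, "claim": claim, "value": value},
                         sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_attestation(att: dict) -> bool:
    """The verifier checks the signature; the yes/no is all it learns."""
    expected = hmac.new(ISSUER_KEY, att["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])

att = issue_attestation("tenant-123", "income_sufficient", True)
print(verify_attestation(att))              # True
print(json.loads(att["payload"])["value"])  # True: the only disclosed fact
```

The point of the design is what is absent: the landlord stores an answer, never the payslip, and learns nothing else about the income.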

The same logic applies to medical data. Instead of having your records spread across every care provider and institution you have ever visited, all of them attack surfaces, you carry the relevant information yourself. A care provider could retrieve what they need to know in an emergency, secured for instance by a combination of biometrics and a professional token issued only to recognised practitioners. Which exact form works best is something to figure out and test, but the principle is clear: no central database to hack, because it no longer exists.

The obvious objections

Two counter-arguments always come up. The first: what if I lose the device that carries all this? That is a real problem, but a technical one, not a fundamental one. People also lose their bank cards, their passports and their house keys. We have workable solutions: multi-factor verification, recovery via a trusted issuer, social recovery via people you have nominated in advance. No solution is perfect, today's included, but the trade-offs are manageable.
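Social recovery, the last of those options, rests on a well-understood mechanism: split a recovery secret so that any k of your n nominated people can jointly restore it, while fewer than k learn nothing. A toy sketch of that idea (Shamir secret sharing over a prime field); real wallets use audited libraries, and the numbers here are for the demo only:

```python
import secrets

PRIME = 2**61 - 1  # a Mersenne prime, large enough for a demo secret

def split_secret(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Give each of n contacts one point on a random degree-(k-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split_secret(secret=424242, k=3, n=5)  # 5 friends, any 3 suffice
print(recover_secret(shares[:3]))   # 424242
print(recover_secret(shares[-3:]))  # 424242
```

The trade-off named in the text is visible in the parameters: a higher k means losing your device is safer, a lower k means recovery is easier.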

The second: this is a lot of work. True. But every large system that ever existed was a lot of work before it existed. The whole postal network was built up step by step over decades in the nineteenth century. The question is not whether it is hard, but whether it is worth doing.

An invitation

What is at stake here is more than IT architecture. It is a design choice about how we, as a society, deal with information, identity and trust. We are now stuck in a model nobody consciously chose, one that grows steadily more costly to maintain: in money, in data leaks, in loss of control. And it stems, fundamentally, from an ingrained pattern: we do it this way because we do it this way.

The technology to do it differently exists. There are small-scale efforts, including in the Netherlands. At the European level eIDAS 2.0 has taken a first step: by the end of December 2026 each member state must have made a digital identity wallet available to its citizens, with which people decide for themselves which data they share with whom. An important start, even if it remains limited to identity verification rather than communication or broader data sovereignty. What is missing is a wider public conversation about what we, as citizens, actually want. The discussion about data sovereignty has so far been led by institutions, not individuals.

Perhaps that is the real starting point. Not "how do we secure our data better", but "why is our data organised in such a way that it has to be so heavily secured in the first place?" The answer to that question shapes what digital societies will look like twenty years from now. And the answer comes only when more people start asking it.