Digital regulation in Europe is entering a new phase: privacy compliance is no longer assessed solely through the lens of the GDPR. For large platforms designated as “gatekeepers” under the Digital Markets Act (DMA), regulators are effectively reshaping how core GDPR concepts – especially lawful bases and consent – work in practice. The result is an emerging, more differentiated data protection landscape where obligations can tighten or loosen depending on an organisation’s market role and systemic impact.
Executive takeaways
- For gatekeepers, “no hierarchy among lawful bases” is becoming less true in practice: DMA obligations funnel certain processing toward consent, with stricter conditions than the GDPR alone typically imposes.
- New interpretive “grey zones” are forming – particularly around what counts as a “supporting service” – creating both opportunity and regulatory risk.
- The same logic is likely to spill into adjacent areas such as AI training, where firms may attempt to position data use as “service support” to access more permissive lawful bases.
- In parallel, simplification initiatives aimed at reducing burdens on smaller firms point toward a two-speed compliance environment: stricter for market-shaping actors, lighter for others.
A quiet but material shift: “lawful basis choice” is tightening for gatekeepers
Under the GDPR, organisations choose a lawful basis for processing personal data from a closed list (consent, contract necessity, legal obligation, vital interests, public task, legitimate interests). The orthodox position has been that there is no formal hierarchy among these bases, so long as the selected basis fits the processing purpose and the organisation can substantiate it.
The DMA changes this calculus for gatekeepers. In several high-value data uses – especially ad targeting and combining data across services – the DMA imposes default prohibitions unless specific conditions are met. The effect is to narrow the lawful bases that are realistically available, pushing firms toward consent more often than a GDPR-only analysis would require.
This matters because lawful basis is not just a legal formality. It determines how product teams design user journeys, how marketers plan data-driven campaigns, how engineering teams structure data pipelines, and how risk teams evidence compliance under scrutiny.
Consent is becoming “more engineered,” not just “more requested”
One of the most consequential developments is how the DMA reframes consent quality. In conventional GDPR practice, providing a genuine alternative (for example, a less personalised option) can help demonstrate that consent is freely given – but it has not consistently been treated as a hard requirement.
For gatekeepers operating under the DMA logic, regulators are signalling a tighter standard: consent should be accompanied by a “specific choice,” and that in turn is linked to offering an equivalent, less personalised alternative. In effect, the consent test becomes more product-architectural: the user must be able to choose a comparable experience without the contested processing – otherwise the consent may not be treated as valid.
The practical implication is straightforward but expensive: gatekeepers may need to redesign interfaces, data flows, and experience variants. Consent becomes less about the banner and more about whether the underlying service design meaningfully supports choice.
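To make the "comparable experience" point concrete, here is a minimal sketch (all names hypothetical, not any gatekeeper's actual implementation) of routing a user to an equivalent, less personalised variant when the contested processing is declined:

```python
from dataclasses import dataclass

@dataclass
class ConsentState:
    """A user's recorded choice for one processing purpose."""
    purpose: str   # e.g. "cross-service ad personalisation"
    granted: bool

def select_experience(consent: ConsentState) -> str:
    """Pick the experience variant to serve.

    The design constraint described above: the non-consent path must
    be a comparable service, not a degraded or blocked one, or the
    consent collected on the other path may not count as freely given.
    """
    if consent.granted:
        return "personalised"
    # Equivalent alternative: same core functionality, without
    # the contested cross-service data use.
    return "contextual"
```

The point of the sketch is that the branch lives at the service-design level, not in the consent banner.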
A second-order effect: regulators are also engineering “systemic compliance”
There is a deeper theme behind the guidance: it treats gatekeepers as actors with a special responsibility to enable compliance across an ecosystem, not only within their own organisation. This is particularly visible where the DMA creates rights for business users and obligations for gatekeepers to facilitate access or portability. The implication is that regulators may lean more often on “legal obligation” reasoning to connect DMA duties back into GDPR compliance structures – especially where multiple controllers interact and the GDPR alone can struggle to operationalise “ecosystem-level” consent and accountability.
For legal and compliance leaders, this is a shift in mindset. The compliance question is no longer only “is our processing lawful?” but also “does our platform design support lawful processing across counterparties who depend on our infrastructure?”
The emerging fault line: “supporting services” and the return of legitimate interests
While parts of the DMA push processing toward consent, another part of the guidance introduces a notable carve-out that could expand the room to rely on legitimate interests. It hinges on the concept that data use between a core platform service and other services “provided together with or in support of” that core service may fall outside certain DMA restrictions.
Where this becomes controversial is the interpretation that online advertising can be treated as a “supporting” service to a core platform service. If advertising is framed this way, it creates a pathway to argue that some cross-use of data can be justified under legitimate interests rather than consent – at least for certain configurations.
This is not a purely academic issue. It affects:
- whether data can move across product boundaries without a consent-first model,
- how firms justify personalisation and monetisation design,
- how compliance teams defend risk positions when regulators scrutinise purpose limitation and user expectations.
It also introduces tension: advertising has long been treated as a distinct purpose in EU privacy thinking, and the DMA itself explicitly recognises online advertising services as a category of “core platform service.” Interpreting advertising as merely “supporting” another core service is likely to attract debate, precisely because it risks undermining the DMA’s intent to constrain data-driven market advantages.
The next battleground: AI training and “service improvement” arguments
Once a “supporting service” exception exists, it is natural for firms to test whether the same reasoning can extend to AI. Many organisations already justify certain AI-related processing under legitimate interests, framing it as necessary for security, fraud prevention, or service improvement. The guidance raises a more pointed question for gatekeepers: if AI-enabled features are positioned as “supporting” the core platform service, could personal data use for training be argued into a more permissive lane than if AI is treated as a separate, standalone service?
The risk is that broad, generic framing – “we train AI to improve the service” – becomes the default justification. Regulators are signalling that these determinations are contextual and require granular analysis. Simply labelling training as “support” or “improvement” is unlikely to be defensible on its own, especially where data combination and cross-use resemble the very behaviours the DMA aims to constrain.
There’s also a convergence effect: even where the DMA is less explicit about AI than it is about advertising, the GDPR’s core principles still apply, and the AI regulatory environment is moving toward stricter requirements for governance and risk controls. That means AI training strategies need to be assessed not only for lawful basis, but also for proportionality, data minimisation, transparency, and accountability in practice.
The bigger picture: a more differentiated EU data protection regime is taking shape
Stepping back, the direction of travel is clear: EU data protection obligations are becoming more stratified. On one side, gatekeepers face tighter constraints and more engineered consent expectations because of their systemic market position. On the other side, EU institutions are also exploring simplification efforts aimed at reducing administrative burden and compliance cost – particularly for smaller organisations.
Put together, these signals point toward a “market-power-sensitive” privacy regime: stricter compliance architecture for actors that shape markets and ecosystems, and streamlined expectations for actors with lower systemic leverage. Whether that differentiation is calibrated well will matter. Too much simplification risks weakening foundational protections; too much tightening risks turning compliance into a barrier that entrenches incumbents.
What legal, compliance, and product teams should do now
The firms most exposed are those at the intersection of platform economics, advertising, and data-intensive product design. But the guidance also creates spillover expectations across the broader B2B ecosystem. A pragmatic response typically involves four workstreams.
1) Re-map lawful bases to actual data flows (not policy statements)
Most organisations already have records of processing activities. The gap is often that the documentation doesn’t reflect the real architecture – what is combined, what is re-used, where data crosses product boundaries, and which purposes are genuinely separable. Start with an engineering-grade map and re-validate lawful bases against it.
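One hedged way to make an "engineering-grade map" actionable is to represent each cross-product data flow as a record and validate it programmatically. The field names and the escalation rule below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

# The closed list of GDPR Article 6 lawful bases.
LAWFUL_BASES = {
    "consent", "contract", "legal_obligation",
    "vital_interests", "public_task", "legitimate_interests",
}

@dataclass
class DataFlow:
    """One edge in the data-flow map: data moving between services."""
    source_service: str
    target_service: str
    purpose: str
    lawful_basis: str
    crosses_product_boundary: bool

def validate(flow: DataFlow) -> list[str]:
    """Flag flows whose documented basis needs re-validation."""
    issues = []
    if flow.lawful_basis not in LAWFUL_BASES:
        issues.append(f"unknown lawful basis: {flow.lawful_basis}")
    # For gatekeeper-style cross-service combination, the DMA default
    # pushes toward consent; anything else should be escalated.
    if flow.crosses_product_boundary and flow.lawful_basis != "consent":
        issues.append("cross-service flow not consent-based: needs legal review")
    return issues
```

Running this over the real flow inventory, rather than over policy statements, surfaces exactly the gaps the paragraph describes.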
2) Treat consent as a product capability
For organisations likely to be treated as “gatekeeper-like” in practice – or those partnering with them – the bar is moving toward “choice by design.” That means designing equivalent alternatives, reducing dark-pattern risk, and making consent evidence auditable at scale. The goal isn’t more consent prompts; it’s defensible consent mechanics.
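A sketch of what "auditable consent evidence" could look like as a data structure, under the assumption of an append-only event log (field names hypothetical). The key design choice is recording that an equivalent alternative was offered, alongside the choice itself:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_consent_event(user_id: str, purpose: str, granted: bool,
                         alternative_offered: bool, ui_version: str) -> dict:
    """Build one auditable consent-evidence record.

    `ui_version` ties the evidence to the exact journey the user saw,
    so the record can be defended against dark-pattern challenges.
    """
    event = {
        "user_id": user_id,
        "purpose": purpose,
        "granted": granted,
        "alternative_offered": alternative_offered,
        "ui_version": ui_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash makes later tampering detectable in the log.
    payload = json.dumps(event, sort_keys=True).encode()
    event["record_hash"] = hashlib.sha256(payload).hexdigest()
    return event
```

Evidence shaped like this scales to audit because each record is self-describing: it captures not just the answer, but the conditions under which it was given.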
3) Stress-test “supporting service” arguments before regulators do
If a compliance position relies on “supporting service” logic, treat it as a high-scrutiny assumption. Define decision criteria, document the rationale, and anticipate counterarguments (including how advertising or AI is classified in your service taxonomy). Build escalation routes so product teams can’t quietly expand scope without legal review.
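The decision criteria can be encoded as an explicit scoring rubric so the assumption is visible and reviewable rather than implicit. The criteria and weights below are illustrative assumptions only; a real rubric would be set by counsel:

```python
def supporting_service_risk(answers: dict[str, bool]) -> str:
    """Score reliance on 'supporting service' logic for one data use.

    `answers` maps each decision criterion to True/False:
      - "service_separately_monetised"
      - "data_combination_involved"
      - "core_service_usable_without_it"
      - "users_would_expect_the_data_use"
    """
    risk = 0
    if answers.get("service_separately_monetised"):
        risk += 2   # looks like a distinct service, not mere support
    if answers.get("data_combination_involved"):
        risk += 2   # resembles the conduct the DMA targets
    if answers.get("core_service_usable_without_it"):
        risk += 1   # weakens the "in support of" framing
    if not answers.get("users_would_expect_the_data_use"):
        risk += 1   # purpose-limitation and expectations concern
    if risk >= 4:
        return "high: escalate before relying on supporting-service logic"
    if risk >= 2:
        return "moderate: document rationale and counterarguments"
    return "lower: still record the decision criteria"
```

Note how an advertising-style use (separately monetised, combination-heavy) scores high almost automatically, mirroring the controversy the article describes.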
4) Build a repeatable governance model for AI-related processing
AI training and deployment decisions are increasingly cross-functional: legal, privacy, security, data science, and product. Governance should make it easy to answer regulator questions quickly: what data is used, why it’s necessary, which basis applies, how risks are mitigated, and how individuals are informed.
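A minimal sketch of a regulator-ready record for one AI processing use, structured around the five questions above (field names are hypothetical, not a mandated format):

```python
from dataclasses import dataclass, asdict

@dataclass
class AIProcessingRecord:
    """One AI training/deployment use, kept regulator-ready."""
    system_name: str
    data_categories: list[str]    # what data is used
    necessity_rationale: str      # why it's necessary
    lawful_basis: str             # which basis applies
    mitigations: list[str]        # how risks are mitigated
    transparency_measure: str     # how individuals are informed

def regulator_answers(record: AIProcessingRecord) -> dict:
    """Return the five standard answers, flagging any unanswered field."""
    answers = asdict(record)
    answers["gaps"] = [field for field, value in answers.items() if not value]
    return answers
```

The value is less the code than the forcing function: any empty field is a question the organisation cannot currently answer.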
The opportunity hidden in the complexity
This regulatory moment is often framed as constraint. But it can also be a competitive lever for organisations that treat compliance as a design and operating-model challenge. Clearer lawful-basis discipline, higher-quality consent journeys, and stronger governance can reduce conduct risk, accelerate product decisions, and improve resilience as enforcement intensifies.
The bigger strategic point is that privacy compliance is no longer “one-size-fits-all.” Market role increasingly shapes compliance expectations – and organisations that recognise that early can build operating models that scale with scrutiny, not just with growth.
