How much is your privacy worth to you?
On Meta's €200 million fine for its "Pay or consent" model
Meta - the company behind platforms such as Facebook, Instagram and WhatsApp - is an evergreen source of topics for anyone interested in their privacy. Recently, it started training its AI models on our public data without anyone’s consent (you can still object to it here, if you don’t want that). And it was fined again in Europe for another inappropriate practice with its users’ data: the infamous “Pay or consent” model1. If you’re not familiar with it, this is what it looks like right now (buried rather deep within the Facebook settings):
This is a new setting Meta implemented in 2023 in response to a series of GDPR fines worth billions. One of these fines was issued because Meta breached the requirements of valid consent. According to Meta, all Facebook users had agreed to the use of their data for targeted advertising simply by accepting the terms and conditions of a “free product”. The Irish Data Protection Commission valued this reasoning at 390 million euros in fines.
As a result, Meta changed its approach: if you want to use its platforms without targeted ads, fine, but then you pay 10 euros a month (recently reduced to 6 euros). If you prefer to proceed with the “free” version of Facebook, then your data will be sold, as a kind of cost of using the service.
So why was Meta fined again?
You don’t need to be a privacy expert to see that this is not exactly a traditional approach to consent. However, the regulatory landscape in the EU has changed recently: in addition to the GDPR, several other digital laws, such as the Digital Markets Act (DMA), have come into force. To see how much weight these laws carry, consider that the Trump administration issued a “fact sheet” to defend American companies from “extortion” such as the fines imposed under the DMA. This fact sheet also stated that
“Regulations that dictate how American companies interact with consumers in the European Union, like the Digital Markets Act and the Digital Services Act, will face scrutiny from the Administration.”
This concern of the US administration is understandable, considering for example that Article 5(2) of the DMA prohibits significant business practices of Big Tech, namely:
to process personal data for targeted advertising purposes,
to combine data between several platforms (“core platform services” in the regulation, e.g. Facebook, Instagram), and with data coming from third parties,
to cross-use the data between core platform services,
to sign in users to combine their data across platforms (so-called “lock-in” effect).
All of this is allowed only where there is a specific choice and consent of the end users (see Article 5(2) DMA). As you can imagine, this goes against the very core of surveillance capitalism, and it definitely gives you, as an average user, stronger control over what happens with your personal data.
As the DMA became applicable in May 2023 (see timeline here), and the pay or consent model was introduced in September 2023, it was only a question of time before the first DMA fine would be issued. This happened only 2 months ago, when Meta was fined another 200 million euros.
What is specific choice and consent?
The DMA is also unique in that it largely builds on the practice and the concepts created under another major digital law, the GDPR. The latter provides a detailed definition of consent, with several conditions. According to the GDPR, consent has to be
Freely given: it has to be free of any imbalance of power, and it has to be a genuine choice. Although this is one of the more philosophical conditions, “freely” should also mean that I should not be charged for choosing the more privacy-friendly option, or at least that choice should not come with a detriment.
Specific: it has to apply to a defined and clear purpose.
Informed: the person giving consent must receive clear information about the purposes of the processing.
A clear indication of your wishes: it has to be clear, from an action or statement of the user, that consent was given.
Revocable: once you have given consent, you have to be able to withdraw it easily.
There’s a great deal of guidelines, case law and literature on what exactly consent is, and there’s a long history of Meta breaching these requirements. In addition, the DMA introduced a new requirement on top of the existing rules on consent: users must be given a specific choice about which platforms their data may be combined across, how their data will be used, and whether it will be used for targeted ads.
On the merits of the DMA fine
One of the key concerns on Meta’s side is that ads account for about 97.5% of its annual global revenue. This means that if you actually asked people whether they want to sell their data in exchange for using social media, this revenue would most likely shrink dramatically, and most Big Tech companies (not only Meta) would need to significantly change their business model, at least in Europe, though other regions may follow with similar requirements (see e.g. a similar fine issued in Nigeria).
On the other hand, Meta - a so-called “gatekeeper” under the DMA - has become indispensable as an advertising platform in the EU and across the globe. The stated goal of the DMA is to contest this and to ensure fair markets in the digital sector. Another concern raised is that Meta has been combining the data of its users for over two decades across Facebook, Instagram, WhatsApp, Messenger, Marketplace, Gaming Play and Dating, which has given Meta clear market power and a position of dominance. All this data can be used by advertisers via Meta’s Ads Services.
The European Commission’s reasoning for the fine notes that many users registered several years ago, at a time when none of Meta’s platforms carried advertisements. Those users could hardly have expected that over time Facebook would not only introduce ads but build a whole ecosystem on targeted advertising, all resting on their “consent”. It was even less foreseeable, at the time of registration, that this would evolve into a “Pay or consent” model.
The Commission also noted that Meta was, in fact, working on an alternative proposal, although the details were blacklined from the decision as a “business secret” (which happened at many points in the decision, raising an issue of transparency). During these discussions, Meta also introduced a new feature, “Additional Ads”, in November 2024. These are the annoying mandatory ad breaks that you might have noticed while doomscrolling on Instagram.
To highlight some other surprising practices from Meta:
They anticipated that almost no users would pay for the privacy-friendly option, and in fact fewer than 1% of European users opted for it.
All data on Meta platforms are connected by specific user identifiers such as “Facebook ID” or “Instagram ID” as part of a unified user environment.
Although you can now choose to separate, for example, your Facebook and Instagram accounts, Meta can still find ways to aggregate your data through these IDs and past information.
They use the term “organic content” for posted and consumed personalised content, implying that it is perfectly natural to share data across all their platforms and with third parties.
So why do we still have to pay for our privacy?
Privacy should be standard, not a premium option. While social media platforms have become an almost inevitable part of our lives and have indeed made it easier to connect with people, they were simply not designed to preserve your privacy or to enable meaningful consent for the use of your data.
While there have been a series of fines against Big Tech, the problem is that the procedures leading to these decisions may take several years, during which the companies make billions in profits while breaching the law. The DMA somewhat changes this landscape: as this decision shows, the Commission was able to issue a fine after only a few months of discussions.
Yet the question remains whether the current US administration will allow these practices, and whether in fact it is possible to “protect” American companies from European regulations.
Call for engagement
What do you think about this fine? Does it make sense to try to regulate American Big Tech from Europe? Or is this just a “European” thing, another case of over-regulation? Feel free to share your thoughts in a comment!
If you enjoyed this post and are interested in data protection, privacy, or AI, subscribe for more and share it with others: