

Meta’s New Year begins with $410M+ in new EU privacy fines





Meta’s operations in Europe are beginning the New Year with more privacy fines and corrective orders. The most recent round of action comes in response to several complaints under the EU General Data Protection Regulation (GDPR) over the legitimacy of the company’s use of behavioral advertising.

The Irish Data Protection Commission (DPC), the lead data protection watchdog in the region for Facebook owner Meta, announced today that it had adopted final decisions in two of these protracted investigations: one against Meta-owned social network Facebook and one against photo-sharing service Instagram.

The DPC’s press release today confirms the European Data Protection Board’s (EDPB) binding decision on these complaints last month, which found that contractual necessity is not an appropriate legal basis for processing personal data for behavioral ads. It also announces financial penalties of €210 million ($223 million) for Facebook and €180 million ($191 million) for Instagram in relation to these complaints.

These new penalties come on top of a slew of privacy fines handed down to Meta in Europe last year, including a €265M fine for a Facebook data-scraping breach, a €405M fine for an Instagram violation of children’s privacy, a €17M fine for a number of earlier Facebook data breaches, and a €60M fine for Facebook cookie consent violations. All told, these penalties add to an already substantial tally of (publicly disclosed) EU data protection and privacy fines against the company.

Indeed, in the first few days of 2023 Meta has already received fines totaling more than half of last year’s regional total, and additional penalties may be on the way.

Corrective measures have also been ordered: per the DPC’s press release, Meta has been given three months to bring its processing into line with the GDPR.

In practice, that means it will have to ask users for consent rather than relying on a claim of contractual necessity to run behavioral ads. (And users who reject its surveillance-based advertising cannot be profiled or targeted.)

Max Schrems, founder of the European privacy rights organization noyb, which brought the initial GDPR complaints, commented in a statement: “This is a severe blow to Meta’s revenues in the EU. People must now be asked if they agree or disagree with the usage of their data for advertising. They must be given a ‘yes’ or ‘no’ choice and are free to alter their decision at any moment. Additionally, the decision guarantees parity with other advertisers who likewise must obtain opt-in consent.”

Given how crucial Meta’s tracking-and-targeting ad model remains to its business, the internet giant is quite likely to dispute the rulings. If it does, that could mean fresh delays while legal challenges to the now-ordered enforcement work their way through the courts. It could therefore be years before Meta submits to correction under EU privacy regulation.

Full details of the disagreements between the data protection authorities, as well as other intriguing facts such as how the level of the fines was established, are still to come, because the DPC’s final decisions on these inquiries have not yet been published.

However, the DPC offers its own perspective on the regulatory disputes in the press release announcing the two final decisions, writing:

The CSAs [concerned supervisory authorities] concurred with the DPC’s findings on the issue of whether Meta Ireland had violated its transparency duties, even if they thought the DPC’s suggested sanctions should be enhanced.

Ten of the 47 CSAs raised objections to other parts of the draft decisions (one of which was subsequently withdrawn in the case of the draft decision relating to the Instagram service). This subset of CSAs believed Meta Ireland should not be permitted to rely on the contract legal basis, on the grounds that the delivery of personalized advertising (as part of the larger suite of personalized services offered within the Facebook and Instagram services) could not be said to be necessary to perform the core elements of what was, in their view, a much more limited form of contract.

The DPC disagreed, taking the view that the Facebook and Instagram services comprise, and indeed appear to be premised on, the provision of a personalized service that includes personalized or behavioral advertising. In effect, these are personalized services that include personalized advertising as an integral element. According to the DPC, this reality is central to the bargain struck between users and their chosen service provider, and forms part of the contract concluded when users accept the Terms of Service.

The DPC’s PR also reveals that the EDPB found Meta breached the GDPR’s fairness principle, on top of the transparency breach the Board agreed with, and that the Board instructed the DPC to (further) raise the level of the fines as a result.

A third decision, against Meta-owned WhatsApp, is still pending at the DPC but is expected in the next week or so. (The regulator tells us this is down to a brief delay in its receipt of the EDPB’s binding decision on that complaint.)

According to noyb, a fine for WhatsApp under that concurrent process is anticipated to be made public by mid-January.

Update: Meta has responded to the decisions in a blog post, asserting that the legal basis it chose for processing people’s data for advertising purposes “respects GDPR.” It also says it intends to appeal both the substance of the rulings and the size of the fines.

In a statement that echoes the DPC’s framing of ad-supported “personalized” services as all or nothing, Meta writes: “Facebook and Instagram are inherently personalised, and we believe that providing each user with their own unique experience – including the ads they see – is a necessary and essential part of that service.”

Meta says it has relied on a legal basis known as “contractual necessity” to serve users behavioral ads based on their online activity, subject to their safety and privacy settings. It also asserts that it would be highly unusual for a social media service not to be customized to each user, while omitting to mention that before it switched to a claim of contractual necessity in 2018, just as the GDPR came into effect, it had relied on a claim of user consent for the processing of ads.

Meta’s blog post further claims that the DPC’s decisions neither forbid personalized advertising on its platform nor require the use of consent as the basis for ads processing.

The claim that Meta can no longer provide personalized advertising across Europe without first obtaining each user’s consent is false, it says, adding that similar firms process data using a range of legal bases and that it is considering a number of options that would let it continue providing users with a fully personalized service.

Regulating “forced consent”
The European privacy rights campaign group noyb took aim at the tech giant’s use of so-called “forced consent” (i.e., requiring users to accept sign-up terms stating they must “agree” to their data being processed for behavioral ads or else be unable to use the service) in May 2018, just as the GDPR came into effect across the European Union.

The Irish regulator’s draft decision on the complaints, disclosed back in October 2021, did not (in contrast to the EDPB’s binding ruling) raise concerns about Meta’s reliance on contractual necessity for running behavioral ads. It did find violations of the GDPR’s transparency rules, concluding that it was doubtful users understood they were agreeing to a Facebook ads contract when they clicked the site’s “I agree” button.

The DPC therefore initially proposed a far smaller penalty (of about $36M) than the financial blow now emerging in the final decisions, which is more than 10x larger (and that is with the WhatsApp final decision still pending).

A much tougher enforcement outcome has been reached through the GDPR’s cooperation mechanism, which loops in other EU data protection authorities (who can, and in this case several did, object to a lead supervisor’s draft decision) and designates the EDPB as the final arbiter when regulators can’t agree among themselves. So in this instance (and not for the first time), the DPC has been directed to reach a different decision than it would have on its own.

The result, as has happened multiple times before, is a higher (and stricter) level of enforcement than Ireland acting alone would have delivered, thanks to the collective regulatory mechanism baked into the GDPR.

The DPC frames the outcome somewhat differently, as a difference of legal interpretations: the EDPB “took a different view on the ‘legal basis’ question,” per the regulator, which adds that the final decisions it adopted on December 31, 2022 “reflect the EDPB’s binding determinations as set out above.” As a result, the DPC’s decisions include findings that Meta Ireland is not permitted to rely on the “contract” legal basis in connection with the delivery of behavioral advertising as part of its Facebook and Instagram services, and that its processing of user data to date in purported reliance on that basis constitutes a violation of Article 6 of the GDPR.

It will be interesting to see whether Meta’s attorneys try to capitalize on the DPC’s now publicly stated view that Facebook and Instagram are “premised on, the provision of a personalised service that includes personalised or behavioral advertising,” and on its (convenient-for-Meta) conflation of personalized services with personalized advertising via the expressed stance that such a conjoined pairing is “central to the bargain struck between users and their chosen service provider.”

It’s odd that the DPC’s position on this issue (as well as Meta’s!) ignores the existence of other, non-privacy-violating types of ads that Meta could use to fund its service, such as contextual advertising.

Its PR also makes no mention of whether Meta might be required to delete all the data it has been unlawfully processing since 2018. Litigation finance companies, meanwhile, are unlikely to pass up the chance to scale privacy class actions.

Additional drama is developing around today’s DPC statement, too: Schrems tweeted his displeasure at being told noyb would not receive the final decision until Meta had had an opportunity to redact the document. “In ten years of litigation, I’ve never seen anything like it,” he added. “F*cking insane.”

(Recall that noyb filed a criminal complaint against the DPC back in 2021, accusing the regulator of corruption and “procedural blackmail” in connection with attempts to block the publication of documents relating to GDPR complaints.)

A press statement from noyb goes further, criticizing what Schrems calls the DPC’s “quite diabolic public relations game.” He writes: “Getting overturned by the EDPB is a big blow for the DPC, but now they seem to at least strive to control the public impression of this issue. I have been involved in litigation for 10 years and have never witnessed a decision being served to one side but not the other. The DPC engages in very diabolic public relations tactics. By preventing noyb or the general public from reading the decision, it attempts to co-write the story of the decision with Meta. Despite being overruled by the EDPB, it appears that the cooperation between Meta and the Irish regulator is still going strong.”

The DPC has also said it is commencing an annulment action against certain “jurisdictional” elements of the EDPB’s decision, another unusual move by the Irish regulator that looks destined only to increase criticism of its friction-generating approach to GDPR enforcement.

Here, the DPC asserts that it disagrees with other aspects of the Board’s decision, accusing the EDPB of exceeding its authority in the dispute resolved under GDPR Article 65.

The trigger appears to be the Board’s legally binding decision also instructing the DPC to conduct what the Irish regulator describes as “a fresh investigation that would span all of Facebook and Instagram’s data processing operations and would examine special categories of personal data that may or may not be processed in the context of those operations.”

Such an investigation, should it actually occur, could drive a stake through the heart of Meta’s privacy-hostile business model in the EU, where legal experts have been warning for years that the tech giant’s consent-less tracking and profiling of citizens violates the bloc’s legal framework on data protection.

It’s notable, then, that the DPC is seeking to avoid opening a thorough inquiry into Meta’s data processing at the EDPB’s behest.

According to its PR, the decisions it has made today “necessarily do not include reference to additional investigations of all Facebook and Instagram data processing operations that were instructed by the EDPB in its binding decisions.” The regulator goes on to explain its objection:

Regarding national independent authorities, the EDPB does not have a general oversight role comparable to that of national courts, nor is it permitted to order and instruct such authority to conduct an unrestricted and speculative investigation. In light of this, the instruction is problematic from a legal standpoint and does not seem to follow the GDPR’s guidelines for collaboration and consistency. The DPC believes it is appropriate to file an action for annulment before the Court of Justice of the EU in order to request the setting aside of the EDPB’s instructions in the event that the directive may represent an overreach on the part of the EDPB.

What the EU General Court will do with the DPC’s complaint is still up in the air.

However, the court last month ruled inadmissible WhatsApp’s legal challenge to an earlier EDPB binding decision on a different GDPR inquiry, one which had similarly and significantly increased the level of enforcement from an earlier DPC draft ruling.


As Editor here at GeekReply, I'm a big fan of all things Geeky. Most of my contributions to the site are technology related, but I'm also a big fan of video games. My genres of choice include RPGs, MMOs, Grand Strategy, and Simulation. If I'm not chasing after the latest gear on my MMO of choice, I'm here at GeekReply reporting on the latest in Geek culture.


Microsoft’s new AI product raises concerns and prompts thoughts of a dystopian future





Microsoft has had an eventful few days. The company’s annual Build conference, held May 21–23 in Seattle, featured a wide range of announcements about upcoming technology, with a significant focus on integrating artificial intelligence seamlessly into future devices. One announcement, however, has already sparked controversy and could potentially lead to legal consequences for the company.

The AI-powered “Recall” feature, which Microsoft describes as a timeline of your PC’s past, has generated both curiosity and concern among experts and everyday users. The feature captures screenshots of your active screen at regular intervals and stores them on your device. Recall can then search through those screenshots as well as users’ past activity: files, photos, emails, browsing history, and more.

According to Dr. Kris Shrishak, an expert in artificial intelligence and privacy, this could be a significant concern for privacy.

“The potential for screenshots being taken during device usage could have a significant impact on individuals,” Shrishak warned, noting that some people may hesitate to visit specific websites or open certain documents, particularly sensitive ones, if Microsoft is continuously capturing screenshots at regular intervals.

Microsoft, for its part, emphasizes the safeguards: privacy was a fundamental consideration in Recall’s design, according to the company, and users can restrict how frequently screenshots are captured.

“Recall snapshots are stored on the local hard disk and are safeguarded through data encryption on your device,” the company explains. “Screenshots in Recall are exclusively associated with individual user profiles and are not shared with other users, made accessible to Microsoft, or utilized for targeted advertisements […]” In other words, other users, applications, and services cannot access or view the screenshots.

While some may find that reassuring, security experts remain skeptical. Microsoft itself cautions that certain sensitive information, such as passwords and financial account numbers, may not be concealed in screenshots. So if a computer’s Recall privacy settings are misconfigured, if it becomes infected with malware, or if it simply malfunctions, the feature could pose a significant security risk.

Muhammad Yahya Patel, the lead security engineer at Check Point, told TechRadar that the feature presents a one-shot opportunity for criminals.

With Recall, Patel explained, attackers get a grab-and-go target: everything they need is stored in a single location, the victim’s screenshot database. Consider the vast amount of information that could be stored on a machine, and the potential harm threat actors could cause with it.

The concerns surrounding Recall are significant enough to have created legal trouble for Microsoft. In the UK, the Information Commissioner’s Office (ICO), the national data protection authority, has initiated inquiries with Microsoft to ensure that user privacy is adequately safeguarded.

In a statement released on Wednesday, the office emphasized the importance of organizations being open and honest with users about how their data is used, and stressed that personal data should only be processed to the extent necessary to fulfill a specific purpose. It is crucial, the office added, for the industry to prioritize data protection from the very beginning and to thoroughly evaluate and address any potential risks to individuals’ rights and freedoms before introducing products to the market.



Almost 40 percent of webpages from 2013 have succumbed to digital decay





Have you been searching for an article you read several years ago but just can’t seem to locate it? If it was published in 2013, there is a good chance it has vanished from the internet. According to recent research from the Pew Research Center, a significant number of webpages created in 2013 have become inaccessible due to “digital decay.” The study revealed that nearly 40 percent of these webpages are no longer accessible.

The new analysis highlights the transient nature of online content, challenging the notion of its permanence. Digital decay refers to the gradual deterioration, corruption, or obsolescence of digital information over time.

The researchers found that 38 percent of the content present in 2013 can no longer be accessed. Broadening the scope of their analysis, they made a further discovery: a full 25 percent of all web pages that existed at some point between 2013 and 2023 are now inaccessible. Typically this happened because the relevant page(s) were deleted or removed from otherwise functional websites.

In this context, the team defined “inaccessible” as a page that is no longer available on the host server, which typically results in a 404 or another error code.
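That working definition is simple enough to sketch in code. The snippet below is an illustrative model, not Pew’s actual tooling (the function names are our own): it classifies a crawled page as “inaccessible” when the host server answers with a 4xx or 5xx status code, such as a 404.

```python
def is_inaccessible(status_code: int) -> bool:
    """Treat any 4xx client error or 5xx server error as an inaccessible page."""
    return 400 <= status_code <= 599

def classify(status_codes):
    """Split a sample of observed HTTP status codes into accessible vs. inaccessible."""
    inaccessible = [c for c in status_codes if is_inaccessible(c)]
    accessible = [c for c in status_codes if not is_inaccessible(c)]
    return accessible, inaccessible

# A toy sample of responses a crawler might record:
# 200 = OK, 301 = redirect (still reachable), 404/410/500 = gone or erroring.
sample = [200, 301, 404, 200, 410, 500, 200]
ok, gone = classify(sample)
print(f"{len(gone)}/{len(sample)} pages inaccessible")  # 3/7 pages inaccessible
```

Note that redirects (3xx) count as accessible under this scheme, which matches the study’s later observation that many links redirect to different URLs without being broken.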

For their analysis, the researchers drew random samples of nearly 1 million webpages from the Common Crawl archives, an internet repository that captures snapshots of the web at various points in time. They sampled pages from the years 2013 to 2023 and subsequently verified whether those pages still existed.

Approximately 25 percent of the pages created during this period were no longer accessible as of October 2023. That total breaks down into two categories of lost content: 16 percent of the pages were individually inaccessible but sat on otherwise accessible root-level domains, while the remaining 9 percent were unreachable because the root domain itself had ceased to exist.

“As expected, the older snapshots in our collection had the highest proportion of inaccessible links,” explained the authors of the report.

By the end of 2023, a significant portion of the pages collected in the 2013 snapshot had disappeared. But even the much more recent 2021 snapshot had decayed, losing approximately one in five pages.

There were also intriguing comparative findings for different types of web pages. The researchers analyzed the reference links on 50,000 English-language Wikipedia pages and found that a significant majority of the sampled pages, 82 percent, contained at least one reference link directing users to an external website. However, 11 percent of the references cited on those pages were no longer accessible.

About 53 percent of the sampled pages had at least one broken reference link, while in approximately 2 percent of cases the sampled source pages were themselves inaccessible or broken.

Government websites showed some interesting patterns too. Approximately 75 percent of the 500,000 government web pages analyzed had at least one link; the average page had 50, and quite a few had many more. Most of these links pointed to secure HTTP (HTTPS) pages, while a small percentage redirected to other pages.

However, approximately 21 percent of the government pages analyzed had at least one broken link, with city government pages proving the most problematic in this regard.

Even news sites were not exempt from the issue. The researchers found that a significant majority of the news sites analyzed, approximately 94 percent, included at least one outbound link directing readers away from the site. The typical page had approximately 20 links, while the top 10 percent of pages had around 56.

As with government websites, the majority of these links pointed to secure HTTP (HTTPS) pages. Approximately 32 percent of the links on these news sites led users to different URLs than the ones initially provided. Around 5 percent of news website links were inaccessible outright, and about 23 percent of all pages contained at least one broken link.

Turning to Twitter (now X), the researchers found that 18 percent of the 5 million tweets sampled from March 2013 to 2023 were no longer accessible.

“In most instances, this occurred because the account that initially shared the tweet had either become private, been suspended, or been deleted entirely,” the researchers explained. For the remaining tweets, the account that originally posted was still visible on the site, but the specific tweet itself had been removed.

Tweets in certain languages proved more susceptible to disappearing or being deleted: a significant portion of Turkish-language tweets, and a smaller share of Arabic-language tweets, were no longer accessible.

Typically, tweets that are removed from the site tend to vanish shortly after being posted.

The report can be found on the Pew Research Center website.



Google’s AI Overviews feature is producing peculiar and potentially hazardous results





Google has recently rolled out AI Overviews, and it’s not going smoothly. The feature launched in the US and is expected to be available worldwide by the end of 2024, but initial results indicate there are some challenges to be addressed. Users on social media were quick to highlight the humorous responses; soon after, a number of people began raising concerns about some of the answers.

AI Overviews is still effectively a tool in testing: after you submit a query, the search engine generates a result using artificial intelligence. But machine learning models lack real understanding; what they possess is the ability to place words and phrases in plausible contexts. So when posed a question, they generate an answer that is expected to align with the query, even if the AI has to improvise, much like the chatbot that fabricated fictional legal cases for a lawyer.

AI Overviews pulls information from actual websites, but it can struggle to distinguish serious answers from those meant to be satirical or comedic. Certain “news” sites, such as The Onion, and social media platforms like Reddit appear to be unintentionally feeding the credulous AI comedic material.



One widely shared example suggested cleaning a washing machine with chlorine bleach and vinegar; the revised response now emphasizes using them separately rather than merely mentioning them in passing. Combining these substances produces chlorine gas, which is extremely toxic.

Speaking of toxicity, Google’s Gemini AI is not performing any better: it misidentified the destroying angel mushroom (Amanita bisporigera) as a white button mushroom. The destroying angel is extremely hazardous, capable of causing severe damage to the liver and kidneys.

“Doctor Google” has evolved into a new and concerning incarnation, with satirical websites supplying some of its comedic advice: AI Overviews will happily inform you that certain dubious activities carry potential health benefits, such as a stronger immune system.

If you wish to challenge scientific findings, it is advisable to conduct your own research.
