Trends in AI data collection by Big Tech and regulatory responses across jurisdictions

Privacy ▪ January 16, 2026

By Saumyaa Naidu, Open Terms Archive team member

In 2025, several big tech companies updated their AI-related policies. Tracking and analysing these updates throughout the year with Open Terms Archive revealed individual shifts and a broader trend: large tech companies are intensifying data collection for AI training, including data from users' interactions with AI tools. The tracking also provided a comparative view of the extent of data collection, sharing, and usage on these platforms across jurisdictions.
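To give a sense of how such policy shifts surface during tracking, here is a minimal sketch of the underlying comparison: reading two saved versions of a policy document and producing a unified diff of the wording. The file paths and names are illustrative assumptions, not Open Terms Archive's actual structure or tooling; the project records successive versions of terms documents, and the analysis in this article rests on comparing such versions over time and across jurisdictions.

```python
import difflib
from pathlib import Path

# Hypothetical local copies of two successive versions of a policy document.
# The paths and file names are illustrative, not Open Terms Archive's layout.
OLD_VERSION = Path("versions/example-privacy-policy-2025-09-01.md")
NEW_VERSION = Path("versions/example-privacy-policy-2025-11-01.md")


def load_lines(path: Path) -> list[str]:
    """Read one saved version of a policy as a list of lines for diffing."""
    return path.read_text(encoding="utf-8").splitlines(keepends=True)


def diff_versions(old: Path, new: Path) -> str:
    """Return a unified diff of the wording added and removed between versions."""
    return "".join(
        difflib.unified_diff(
            load_lines(old),
            load_lines(new),
            fromfile=old.name,
            tofile=new.name,
        )
    )


if __name__ == "__main__":
    print(diff_versions(OLD_VERSION, NEW_VERSION))
```

Running the same kind of comparison on the jurisdiction-specific variants of a policy is what makes divergences such as those described below visible.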

Diverging policies across jurisdictions

Google announced in November that its AI product, Gemini Deep Research, can access user information from Gmail, Drive, and Chat accounts, and rolled out this change globally.

Fragmentation appears in the case of LinkedIn, which began collecting user profile data and public posts to train its generative AI systems and enhance its advertising in the European Union (EU), the United Kingdom (UK), Switzerland, Canada, and Hong Kong. This use of data for AI training was already in effect in all other jurisdictions, including the United States (US). The expansion was made while offering a manual opt-out to users in the newly covered jurisdictions.

A further difference is illustrated by Meta, which simply excluded the EU, the UK, and South Korea from its December policy update. Under this update, data from users' interactions with AI tools is used to personalise ads and content on Instagram and Facebook. In the US and the rest of the world, the change has been applied with no opt-out available to users.

Privacy risks and data cross-sharing

These updates can bring severe privacy risks for people using the platforms. Users' trust in AI chatbots leads them to share personal or sensitive data, which AI systems may later mishandle, leak, or use for unintended purposes.

A common feature of these updates is the cross-sharing of data within the larger corporate entity. LinkedIn, for instance, will share data with other Microsoft-related business entities for the purpose of serving “more personalised and relevant ads.” Meta has likewise integrated data across all its products to personalise features, content, and ads, and will also share the information with other Meta Companies.

Concerns raised by civil society

In response to the change announced by Meta in October, a coalition of 36 US-based¹ civil society organisations wrote a letter calling for Federal Trade Commission (FTC) oversight and the suspension of Meta’s AI data use for advertising.

The letter asks the FTC to “enforce Meta’s existing consent decrees and require disclosure of risk assessments; treat the practice as an unfair and deceptive act under Section 5 of the FTC Act; suspend Meta’s chatbot advertising program pending Commission review; and finalise the long-pending modifications to the 2020 order to strengthen privacy protections, including a proposed prohibition to monetise minors’ data.”

The coalition highlighted that Meta’s initiative is part of a larger strategy to expand surveillance-driven marketing and warned that without FTC intervention now, such invasive AI data practices will become the norm, leaving consumers unprotected. The letter also emphasises how surveillance-driven, behavioural marketing shapes how people spend their money, time, and attention, and can lead to exploitative targeting.

Regulatory background and responses

The US pro-innovation stance

The US has positioned itself as “pro-innovation”, with limited AI regulation. Presently, the US relies on existing regulatory bodies and on guidelines and initiatives from various agencies to address AI issues, the FTC being one of them. While the FTC is legally established as an independent agency, its status is being challenged in an ongoing legal and political debate. The Supreme Court ruling on the matter is due in the coming months and could have far-reaching consequences for other independent agencies in the country. The US also has varying state laws for AI, focusing on transparency, consumer protection, and high-risk systems.

More recently, President Trump signed an executive order on December 11, 2025, to curb state regulation of AI, declaring it a US federal strategic priority and centralising federal oversight. The order directs key agencies to coordinate AI policy and creates an AI litigation task force to challenge state laws and identify state measures seen as hindering US AI leadership. In the order, President Trump reiterated his objective of preserving US leadership and asserting its “dominance” in AI. In contrast to the previous administration’s strong emphasis on mitigating AI-related risks, his administration pursues a strategy that prioritises promoting innovation and responding to China’s expanding technological influence.

The US is also looking to pass the National Defense Authorization Act (NDAA), which will define how AI can be used in defense and intelligence activities such as surveillance and targeting functions.

The EU rights-oriented framework

On the other side of the Atlantic, the EU leads in AI regulation. The AI Act comes to mind first, but earlier wide-scope legislation applies to these topics as well, notably the Digital Services Act (DSA) and the Digital Markets Act (DMA).

The AI Act, passed in 2024 and fully effective by June 2026, classifies AI services by risk levels: minimal, limited, high, and unacceptable, with stricter rules for high-risk AI systems and prohibitions on unacceptable ones.

The DSA enforces platform accountability, transparency, content moderation, and user protections for online services such as social media, online travel platforms, and marketplaces.

The DMA targets “gatekeeper” platforms that provide core platform services, such as search engines, app stores, and messenger services, to foster fairer, more competitive digital markets.

Investigations and fines against Big Tech

Big Tech firms are facing increasing pressure from the EU, in the form of investigations and heavy fines, to comply with AI regulations.

On December 4, the EU launched an antitrust investigation into Meta Platforms over its policy on rolling out AI features in WhatsApp. The European Commission announced that it will investigate whether Meta’s new policy restricts rival AI providers and promotes Meta’s own AI system, integrated into the platform earlier this year, in violation of competition law.

On December 9, the EU began another antitrust investigation, this time into Google, examining whether Google violated EU competition rules by using web publishers’ content for AI-related purposes. It will also assess the extent to which Google’s AI Overviews and AI Mode rely on publishers’ material without providing suitable compensation.

Given this ongoing pressure, Meta has agreed to offer EU residents a less-personalised advertising option starting January 2026 to comply with the DMA. After several months of negotiations with the European Commission, Meta will allow users to access its social media platforms without consenting to extensive data processing for fully personalised ads. The Commission had ruled in April 2025 that Meta had breached the DMA, issued a non-compliance decision over insufficient user choice, and fined the company 200 million euros. Meta appealed the fine, calling it unlawful and discriminatory.

Criticism from both sides

Amid intensifying geopolitical and technological rivalry, the EU is attempting to balance competing priorities: strategic autonomy in AI and global standard-setting influence.

Some tech industry observers and policy analysts have criticised the EU for its stringent rules that may deter the investment and expertise required to build a strong AI ecosystem.

Conversely, the EU has also received criticism from digital rights advocates and legal experts for lowering regulatory standards, thereby prioritising the short-term growth of homegrown AI infrastructure over promoting trustworthy and human-centric AI systems.

Developments indicative of this scaling back of oversight include the shelving of the Liability Directive, the introduction of the Digital Omnibus, delays in DSA enforcement, and the introduction of a new set of AI codes of practice that lack firm restraints.

The Digital Omnibus legislation was passed by the European Parliament through an alliance between the European People’s Party (EPP) and far-right parties, many of which are supported by the US government and by US tech actors such as Elon Musk. The legislation weakens oversight mechanisms for AI systems, which will benefit US tech companies.

The role of cross-jurisdiction monitoring

Cross-jurisdiction monitoring of platform policies with Open Terms Archive played a key role in identifying how platforms roll out their terms differently from one jurisdiction to another, differences that can then be understood in light of regulation and the severity of its enforcement.

LinkedIn’s varying levels of notification about data collection purposes across jurisdictions, and Meta’s offer of consent and greater data control to EU users, are clear examples of how regulation and enforcement shape big tech policies.

The regulatory positions of the US and the EU, and the narratives around them, also bring to light the framing of regulation as an obstacle to innovation or economic growth, a false dichotomy. Regulation is presented here as a trade-off against AI leadership rather than as an essential means of protecting people and smaller players through digital rights and competition laws.


  1. These are US-based organisations, with the exception of 5Rights, which is headquartered in the UK but also operates in the US.