The US national security framing of platform governance: the case of TikTok

Privacy and Surveillance ▪ January 19, 2026

By Asma Sifaoui, Open Terms Archive team member

The US debate over TikTok is often framed through questions of speech, culture, or technological nationalism. In policy and security discussions, however, a different concern has increasingly come to the fore: whether a high-reach data platform, operating under foreign jurisdictional leverage, creates national security exposure that cannot be reliably mitigated through ordinary regulatory mechanisms. Over several years, this framing has become more prominent in US policymaking.

The risk the United States is assessing

The US national security case against TikTok rests on the interaction of three factors, each well established in isolation but unusually concentrated in a single platform.

First, large-scale behavioral data accumulation: TikTok collects categories of data common to major platforms, including device identifiers, engagement signals, inferred interests, and location-related information. What distinguishes TikTok is not novelty but intensity. Its recommendation system depends on continuous, fine-grained behavioral modeling, making user data increasingly predictive over time. At scale, such datasets become strategically valuable beyond advertising, a concern repeatedly raised by US intelligence agencies and congressional committees.

Second, algorithmic control over attention: TikTok is not merely a repository of user-generated content; it is a ranking and amplification system that shapes what hundreds of millions of users see, ignore, or internalise. US policymakers have treated this capability as influence infrastructure, particularly relevant during elections, crises, or geopolitical conflict. This concern aligns with broader US intelligence assessments about information operations and platform-mediated influence, even where no specific manipulation has been publicly proven.

Third, foreign jurisdictional leverage: ByteDance, TikTok’s parent company, operates under a Chinese legal framework that permits state intelligence compulsion under conditions of secrecy. China’s National Intelligence Law obliges organisations to “support, assist, and cooperate” with intelligence work, and related data and security laws reinforce state authority over data access and transfer. From a US national security perspective, the existence of lawful compulsion matters more than evidence of misuse: risk is assessed on capacity and control, not on demonstrated abuse.

Taken together, these factors led US officials to treat TikTok not simply as a privacy issue, but as a system whose future behavior could not be reliably constrained.

From risk recognition to restriction

The US response followed a familiar security trajectory. Initial measures focused on restricting TikTok on government devices and within sensitive environments, reflecting a baseline risk-management posture rather than an attempt to regulate public speech. Over time, scrutiny widened as policymakers confronted a harder question: whether existing legal tools could meaningfully govern a platform whose data practices, algorithms, and governance could evolve beyond US control. This shift mattered. The debate moved away from whether TikTok complied with current law and toward whether the United States could manage future exposure. National security frameworks are designed precisely for this kind of forward-looking assessment.

Why evolving data practices strengthen the security case

Ongoing analysis by Open Terms Archive provides concrete evidence for why US policymakers focused on trajectory rather than assurances. In a memo from October 2025, Open Terms Archive documented TikTok’s expansion of data collection in Europe, the United Kingdom, and Switzerland, including broader collection of precise location data, in-app browser activity, enhanced behavioral signals, and deeper account-linked information.

These changes do not directly govern US users, but they are highly relevant to US national security analysis for three reasons.

First, they demonstrate directionality: TikTok’s data footprint is expanding in scope and sensitivity. Risk assessments that rely on static snapshots of current policy miss how platforms accumulate capability over time.

Second, they reflect platform convergence. Large platforms rarely maintain fundamentally different data architectures across jurisdictions; product features, analytical pipelines, and inference capabilities tend to migrate, so what appears in one market often signals global strategy rather than regional exception.

Third, they underscore strategic optionality. As data collection becomes more granular, the dataset becomes more valuable for profiling, inference, and influence. When such accumulation occurs within a corporate structure subject to foreign legal compulsion, uncertainty compounds. This is exactly the kind of uncertainty national security policy is meant to address.
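The emphasis on trajectory over snapshots can be made concrete: comparing successive versions of a policy text surfaces the clauses added between them. The sketch below uses Python’s standard `difflib` module on two hypothetical policy excerpts (the clause texts and dates are invented for illustration); Open Terms Archive’s actual tooling is far more sophisticated, but the principle of version-to-version comparison is the same.

```python
import difflib

# Two hypothetical snapshots of a privacy policy's data-collection clause.
# The wording is invented for illustration; real snapshots would come from
# an archive of timestamped terms documents.
snapshot_v1 = """We collect device identifiers.
We collect engagement signals such as likes and watch time."""

snapshot_v2 = """We collect device identifiers.
We collect engagement signals such as likes and watch time.
We collect precise location data.
We collect browsing activity within the in-app browser."""

def added_clauses(old: str, new: str) -> list[str]:
    """Return lines present in the new snapshot but absent from the old one."""
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    # Keep '+' lines from the diff body, skipping the '+++' file header.
    return [line[1:] for line in diff
            if line.startswith("+") and not line.startswith("+++")]

for clause in added_clauses(snapshot_v1, snapshot_v2):
    print(clause)
```

Run over a long series of snapshots rather than a single pair, this kind of comparison is what turns a static compliance check into a view of a platform’s direction of travel.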

What this means for other countries

Many US allies have been more hesitant to adopt an explicit national security framing, instead approaching TikTok primarily through data protection and regulatory compliance regimes such as the GDPR. This reflects both stronger baseline privacy law and different institutional traditions regarding the relationship between digital platforms and national security risk.

The Open Terms Archive findings complicate this distinction. If a platform can continue expanding sensitive data collection even within jurisdictions characterised by robust regulatory oversight, then compliance-based approaches alone may not fully address long-term strategic exposure. From a national security perspective, this raises questions about whether existing regulatory frameworks are equipped to account for platform trajectory, accumulated capability, and jurisdictional leverage over time.

TikTok thus highlights a broader governance dilemma. Platforms operating at global scale increasingly function as infrastructure with potential national security relevance, yet oversight mechanisms globally remain largely organised around industry-specific compliance rather than systemic risk. The EU’s Digital Services Act articulates a systemic-risk framework, but its assessment and enforcement in practice remain largely untested. The TikTok case has brought these tensions into clearer view, prompting renewed attention to how data-driven platforms are assessed, governed, and constrained as their capabilities evolve. Whether this scrutiny produces more durable governance frameworks or stays focused on individual platforms remains an open question.