The European Union (EU) enacted the Digital Services Act (DSA) in 2022 as part of a broader regulatory package aimed at creating a safer and more transparent digital environment within the EU. Since most of the large online service providers affected by the DSA are based in the United States, and since the largest of them face additional obligations under the regulation, it is unsurprising that the DSA has been closely scrutinized by these companies and by U.S. officials.
From a U.S. perspective, one of the main concerns with the DSA is its potential impact on freedom of speech. FCC Commissioner Brendan Carr, for instance, has argued that the regulation is “incompatible with America’s free speech tradition.” Similarly, U.S. Vice President J.D. Vance stated during his speech at the Munich Security Conference in February 2025 that the EU’s content moderation policies amount to “authoritarian censorship.” During his tenure as Senator, Vance even suggested that the United States should consider withdrawing from NATO if the alliance fails to adopt a pro–free speech orientation. Leading U.S. tech executives have expressed similar views; for instance, Meta CEO Mark Zuckerberg has accused the EU of institutionalizing censorship through its regulatory approach.
Clash between American and European Free Speech Traditions
The EU, or at least many of its members ("Member States"), has long taken pride in its commitment to fundamental and human rights, consistently ranking highly in international indices. According to the Global Expression Report 2025, which tracks freedom of expression worldwide, the United States scored 85 out of 100, while major Member States such as France, Germany, and Italy recorded similar results. All Member States in the Nordic region scored even higher, each exceeding 90. While the index's methodology is open to debate, it is nonetheless clear that both the United States and most Member States rank among the global leaders in safeguarding freedom of speech.
The modern European free speech tradition emerged as a deliberate reaction to the atrocities of the Second World War. The Nazi regime instrumentalized media campaigns to dehumanize minorities and cultivate a public acceptance of genocide. After the war, many European countries began to conceptualize freedom of speech not as an absolute right but as one balanced against other fundamental rights. This approach was also codified in the European Convention on Human Rights in 1950. Consequently, a shared regulatory tradition developed in many countries, in which certain forms of expression, such as hate speech, incitement to violence, and Holocaust denial, fall outside protected speech and are subject to sanctions.
The American free speech tradition, by contrast, is rooted in resistance to colonial-era suppression and in the prevention of governmental abuse. The First Amendment (1791) prohibits Congress from restricting speech or the press, reflecting deep distrust of government overreach. In landmark cases such as Abrams v. United States (1919) and Whitney v. California (1927), Justices Holmes and Brandeis stressed in separate opinions that open discourse should be preserved and that the appropriate remedy for harmful speech is more speech, on the assumption that truth will ultimately prevail in a free "marketplace of ideas." Later, in cases such as Brandenburg v. Ohio (1969), the Supreme Court entrenched this principle by setting exceptionally high thresholds for restricting speech, thereby making the marketplace of ideas a practical cornerstone of First Amendment jurisprudence. Although it formally limits only state action, the First Amendment tradition has evolved into a broader cultural norm, central to American political identity.
Considering the inherent differences between European and American free speech traditions, it is understandable that the DSA’s core obligation for most large online service providers (so-called “online intermediaries”)—to remove or restrict access to “illegal content” to avoid liability—has drawn criticism from a free speech perspective. Although the DSA does not define “illegal content” and leaves that determination to national legislation and other EU instruments, the examples it provides have raised concern due to their vague and subjective nature. Particularly contentious are categories such as discriminatory expressions and hate speech, which from a U.S. perspective risk enforcement shaped by political or cultural biases rather than clear legal standards. However, this obligation is not new; similar duties existed under the E-Commerce Directive (2000), though in a less harmonized framework granting Member States greater discretion. The DSA primarily aims to enhance transparency and harmonize enforcement rather than introduce many new substantive obligations. This raises a central question: if the DSA does not materially alter existing restrictions, why has it drawn such intense scrutiny from U.S. critics?
Speech Restricted Due to Over-Compliance
From an American free speech standpoint, the main concern lies less in the formal scope of the DSA itself and more in how online intermediaries might, in practice, interpret and apply the regulation in ways that restrict lawful speech in the United States.
First, there is a risk of over-removal. Beyond the “trusted flaggers” appointed by Member States—specialized organizations officially designated to flag illegal content with priority treatment—platforms must also provide mechanisms for individual users to flag potentially illegal material. While professional flaggers are more likely to understand legal thresholds, user-generated flags force platforms into a difficult position. If a company is made aware of potentially illegal content but fails to act, it risks direct liability and fines of up to 6% of its global turnover. Faced with this risk and the administrative costs of assessing flagged content, intermediaries may err on the side of caution and restrict or remove material more broadly than required. This over-removal, sometimes called “collateral censorship,” can, in the worst case, create a chilling effect that limits diverse opinions and undermines democratic discourse.
Second, the reach of restrictions may extend geographically. Under the DSA, online intermediaries are formally required to remove or restrict content only within jurisdictions where it is deemed illegal. However, implementing country-specific restrictions through tools such as geoblocking is costly, technically limited, and easily circumvented by VPNs. To simplify compliance, intermediaries may therefore opt for global removal. In practice, this means that content deemed illegal in a single Member State could be taken down worldwide, including in countries such as the United States where it remains lawful. This phenomenon of indirect cross-border impact is often referred to as the "de facto Brussels Effect."
In an extreme case, the combined effect of over-compliance and the extraterritorial reach of EU regulation through the Brussels Effect could make restrictive European speech standards the de facto benchmark for global content moderation, including in the United States. While this scenario may not reflect current realities, the concern is understandable from a U.S. perspective, given the sharp divergence between the American and European free speech traditions.
Speech Restricted Due to Systemic Risk Mitigation
Beyond the risk of over-compliance, the DSA also introduces some content governance obligations that raise additional concerns. Articles 34 and 35 require “very large online platforms” (so-called VLOPs) and “very large online search engines” (so-called VLOSEs), such as Google, Facebook, Amazon, X, and YouTube, to identify, assess, and mitigate systemic risks arising from the use of their services. These risks include threats to civic discourse, electoral processes, public security, and public health. Unlike the removal of specific instances of illegal content, systemic risk mitigation entails far-reaching, structural interventions that influence platform-wide policies and operations—potentially affecting users outside the EU, including in the United States. In this respect, the DSA extends its focus beyond illegal content to encompass so-called “harmful” material, such as “misinformation” and “disinformation.”
A relevant example is the European Commission's 2024 formal investigation into Meta, which examined whether Facebook and Instagram had sufficiently mitigated systemic risks related to disinformation, particularly in the context of the June 2024 European Parliament elections. The Commission criticized Meta for its alleged failure to adequately address Russian state-linked disinformation campaigns and for its decision to deprecate CrowdTangle, a tool providing real-time access to platform data, without an adequate replacement. This tool, used by researchers and journalists to monitor viral content, was deemed essential for ensuring public transparency and electoral integrity. In its press release, the Commission stated that it suspects Meta of non-compliance with DSA obligations "related to addressing the dissemination of deceptive advertisements, disinformation campaigns and coordinated inauthentic behavior in the EU."
From an American free speech perspective, this kind of systemic monitoring of "harmful" content is particularly controversial. It risks empowering government authorities to define, and indirectly pressure platforms to limit, speech that is not illegal but may be politically or culturally contested. This raises broader concerns about whether regulatory frameworks blur the boundary between legitimate oversight and indirect state influence over online expression. Courts in the United States have begun to grapple with similar tensions, as illustrated by Murthy v. Missouri (2024). In that case, the Fifth Circuit had found that federal agencies "coerced or significantly encouraged" social media platforms to remove or suppress perceived misinformation related to COVID-19 and election topics, a practice it held to violate the First Amendment. Ultimately, however, the Supreme Court declined to address the constitutional questions and resolved the case solely on procedural grounds, holding that the plaintiffs lacked standing.
The core challenge in restricting free speech lies in the inherent difficulty of drawing a clear boundary between what is genuinely “harmful” and what constitutes merely unpopular or dissenting political opinion. While the American free speech tradition addresses this challenge by generally refraining from making such distinctions—except in narrowly defined circumstances—the EU approach, particularly when extending beyond strictly illegal content to encompass broadly defined “harmful” speech, risks entering precarious territory. In doing so, it faces the challenge of making inherently subjective judgments about expression, which, according to some critics, may result in the suppression of legitimate political discourse or dissenting viewpoints.
What should be done?
Even though the DSA cannot be deemed unconstitutional in the U.S., as it is an EU regulation, this does not mean it is without consequences for U.S.-based users or online intermediaries, nor that it is unrelated to the broader American free speech tradition. If companies, motivated by convenience or risk minimization, remove content globally rather than implement country-specific geoblocking, the regulation may directly affect free speech in the United States. Likewise, the DSA’s systemic risk obligations shape the entire operations of VLOPs and VLOSEs, prompting stricter global standards aligned with EU expectations. Consequently, U.S. users may face de facto restrictions on speech otherwise permissible under American principles.
One possible response from the United States would be to introduce counter-regulation that limits the extraterritorial impact of foreign content moderation practices. For instance, in recent years several U.S. states, most notably Texas and Florida, have enacted or proposed legislation aimed at constraining online intermediaries' ability to moderate user content. These measures are often framed as safeguards for free speech, yet they have faced significant constitutional challenges. U.S. courts have repeatedly emphasized that compelling private entities to host or disseminate specific speech constitutes "compelled speech," which the First Amendment prohibits (see, e.g., Miami Herald Publishing Co. v. Tornillo (1974)). In Moody v. NetChoice (2024), the Supreme Court likewise signaled that platforms' content moderation decisions are a form of protected editorial discretion, remanding the challenges to the Texas and Florida laws for further analysis. Consequently, although these state-level efforts express concern over foreign regulatory influence, they face significant constitutional hurdles.
From an EU perspective, it would be both unrealistic and inappropriate to expect it to subordinate its regulatory objectives to the constitutional values of another sovereign state, particularly when those principles diverge from its own normative foundations. This is especially true given that many of the most significant challenges stem not directly from the DSA’s provisions themselves, but from how online intermediaries interpret and apply the regulation in practice. At the same time, given that most major online intermediaries are headquartered in the United States—and that the transatlantic relationship is founded on mutual trust, economic interdependence, and shared democratic values—it is in the EU’s strategic interest to ensure that its regulatory framework does not inadvertently lead to the global dissemination of European free speech standards. This concern is heightened by growing U.S. apprehension that such standards may conflict with First Amendment principles. Accordingly, the EU should proactively engage with its international partners when regulating sensitive issues with likely extraterritorial effects. A transatlantic dialogue could, for instance, be institutionalized through a revived and more focused EU–U.S. Trade and Technology Council (TTC), which already provides a platform for addressing digital governance and regulatory divergences.

Read the article on CSIS's website here.