The term “fake news” generally refers to news content that is false, regardless of the intention behind its creation or dissemination. Information that is false or misleading but shared without the intention to deceive is typically classified as misinformation. Conversely, when false information is deliberately created or distributed with the intention of deceiving others or gaining an advantage, it is considered disinformation.
Misinformation is hardly a recent phenomenon. A paradigmatic pre-social-media example is the infamous Great Moon Hoax of 1835, in which the New York Sun published a series of articles claiming that abundant life had been discovered on the Moon, reportedly with the objective of increasing newspaper sales.
However, the era of social media has fundamentally transformed the landscape of misinformation. The most significant change lies in the removal of traditional constraints on the dissemination of information, enabling content to reach a global audience at lightning speed. Moreover, as the power of online misinformation became clear, it began to be weaponised for political purposes.
While the effects of the Great Moon Hoax were observable, both in terms of deceiving readers and boosting newspaper sales, the overall harm it caused was relatively limited. Today, however, disinformation reaches an unprecedented audience, and the potential consequences have become significantly more severe. The risks extend well beyond individual deception, as shown by the harm caused by disinformation during the COVID-19 pandemic, the Brexit campaign and the 2016 US presidential election. At this stage, the imperative to fight disinformation (and, particularly, malicious deep fakes) is clear, as it poses significant threats to democracy and to the stability of society as a whole. The question of how to address this challenge remains.
The most obvious solution would be to create new legislation, but legislators have been slow to react. Their hesitation is understandable and can be traced back to a long-standing philosophical debate about the legitimacy of authority. Simply put, the issue concerns who holds the legitimate power to determine which rules should be followed and which should not, and on what grounds. This is particularly relevant in the context of disinformation because, much like in the legislative process, some entity (the law, online platforms, or the courts) must decide what constitutes the truth and what does not, and ultimately which content should be removed from online circulation, a rather complex and uncomfortable position to hold.
Another significant challenge is that freedom of speech is strongly protected in Western societies (as it obviously should be). In theory, this means that individuals are generally free to express themselves online without undue restriction, and thus, any limitation on this right is understandably frowned upon.
Without legal guidance, the solutions proposed to date, described below, have largely been ineffective. Disinformation is a complex problem stemming from a range of multidisciplinary factors linked to sociology, psychology, journalism, and cybersecurity. For example, the dissemination of disinformation is significantly driven by cognitive biases: individuals are more likely to share information endorsed by respected figures in order to gain social acceptance, and people more readily believe claims that confirm their existing views (confirmation bias). This highlights the need for a collaborative, cross-sector response.
Although disinformation can be advantageous for platform providers, in the sense that their business models often rely on increasing user engagement and monetising data, both of which fake news can facilitate, these providers have recently started paying closer attention to the issue. They have taken a proactive step by introducing a Code of Conduct, endorsed in February 2025 by the European Board for Digital Services and the European Commission, to be incorporated into the Digital Services Act framework. Over time, they have also tested and implemented various automated detection measures, many of which rely on machine learning techniques. However, these solutions are still in the early stages of development and are not yet fully effective.
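To make the machine-learning approach concrete, the sketch below shows one of the simplest techniques in this family: a supervised text classifier that scores headlines by their estimated likelihood of being false. The toy headlines and labels are invented purely for illustration; production systems are trained on large labelled corpora and combine many additional signals, such as source reputation and propagation patterns.

```python
# Minimal sketch of a supervised "fake news" text classifier.
# The tiny dataset below is hypothetical and for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labelled headlines: 1 = likely false, 0 = likely reliable.
headlines = [
    "Scientists confirm moon is home to thriving bat-people colony",
    "Miracle cure eliminates all known viruses overnight, doctors stunned",
    "Central bank raises interest rates by 0.25 percentage points",
    "Parliament passes annual budget after lengthy debate",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each headline into word-frequency features; logistic
# regression then learns a weighted decision boundary over those features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score a new headline: the output is the estimated probability of falsehood.
prob_false = model.predict_proba(["One weird trick cures every disease"])[0][1]
print(f"Estimated probability of being false: {prob_false:.2f}")
```

In practice, a score like this would only flag content for human review rather than trigger removal on its own, which is part of why such measures remain imperfect.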
Meanwhile, public regulators in Europe are focused on educating users and, in recent years, have actively promoted awareness of the issue, aiming to strengthen users' critical thinking when engaging with news content. However, these efforts may prove inadequate without further research into how to effectively counteract the underlying psychological biases. Indeed, existing research indicates that education alone is insufficient; it must be complemented by additional measures to be effective.
As such, relying solely on public education and on platform providers for content moderation is problematic, particularly given that business interests may compromise impartiality in the latter case. The same concern applies to direct government intervention. This is why, although determining what is true and what is not can be an uncomfortable task, it may ultimately be necessary if we are serious about safeguarding democratic values.
As stated above, philosophers have long debated the foundations of legitimacy. Hobbes argued that political authority, established by the social contract, is legitimate so long as it ensures the protection of citizens; Rousseau, in turn, held that legitimacy arises from the democratic justification of laws; and Kant viewed legitimacy as contingent upon a hypothetical social contract that serves as a test of whether laws align with public reason.
To effectively combat disinformation, it may be necessary to establish a dedicated authority that is truly independent, free from both government and platform ownership or control, and whose legitimacy is grounded in the philosophical principles advanced by Hobbes, Rousseau, and Kant.
Despite some exceptions, the news industry in most democratic countries has generally maintained a degree of impartiality, seeking to ensure that the information it disseminates is grounded in verifiable facts rather than subjective opinion. One potential solution, therefore, could be the establishment of an independent, multidisciplinary committee composed of diverse stakeholders and tasked with fact-checking information, at either a national or European level. Such a committee could be empowered to, among other things:
- Engage with platform providers to address the blocking of accounts that repeatedly disseminate false information, as these accounts are among the primary sources of disinformation on social media;
- Promote research and public education and awareness on the issue;
- Develop and implement standardised labels and tags for content shared online (a minimal illustration of what such a label might contain follows this list).
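Purely as an illustration of the last point, and not as an existing or proposed standard, the sketch below shows what a machine-readable content label might contain; every field name and value is hypothetical.

```python
# Hypothetical sketch of a standardised content label.
# All field names and values are invented for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Verdict(Enum):
    VERIFIED = "verified"      # claims checked against primary sources
    DISPUTED = "disputed"      # credible sources disagree
    FALSE = "false"            # contradicted by verifiable facts
    UNREVIEWED = "unreviewed"  # not yet assessed

@dataclass
class ContentLabel:
    content_id: str            # platform-level identifier of the post
    verdict: Verdict
    reviewer: str              # e.g. the committee or an accredited fact-checker
    rationale: str             # short, human-readable justification
    sources: list[str] = field(default_factory=list)  # links to evidence
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a label a platform could render next to a flagged post.
label = ContentLabel(
    content_id="post-12345",
    verdict=Verdict.DISPUTED,
    reviewer="Independent Fact-Checking Committee",
    rationale="Key statistic could not be traced to any official source.",
    sources=["https://example.org/fact-check/12345"],
)
print(label.verdict.value)
```

A shared format of this kind would matter less for its technical details than for its governance: labels carry weight only if the body issuing them is seen as independent and accountable.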
Although these measures may appear to conflict with democratic principles, it is important to recognise that similar regulatory frameworks already exist. For example, regulatory bodies determine the age classification of films, which is a limitation on individual freedoms that is widely accepted and justified by the need to protect children’s development.
Moreover, many democratic countries (including the UK, Germany and the Netherlands) have independent press councils or media ombudsmen that oversee journalistic ethics and standards. These bodies sometimes impose limits on the absolute freedom of the press to ensure accuracy and protect the public from harm. This illustrates that, when freedoms conflict, preserving democracy and the public good may warrant certain restrictions.