Addressing this challenge is possible thanks to technology that can fight post-truth, although technology alone is not enough: it also requires the involvement of public institutions, companies, technology platforms and verification agencies and, of course, critical reflection among citizens.
Anyone with a smartphone and an internet connection can now consume information without limitations or physical barriers, and create it too. Yet despite this greater access to content, news and sources in recent times, the concept of post-truth has spread at breakneck speed.
What is post-truth?
According to its dictionary definition, post-truth is a neologism that refers to the deliberate distortion of reality and the manipulation of beliefs and emotions with the aim of influencing the population’s opinions and decisions.
The origin of the concept dates back to the early 1990s, when the Serbian-American playwright Steve Tesich used the word post-truth in an article published in the newspaper The Nation. In the years since, with the rise of the mass media and the digital transformation, post-truth has become an even greater challenge for today’s democratic societies.
To tackle this problem, public and private organisations must foster a critical spirit, countering disinformation through the reflective participation of the population, because disinformation and the generation of hoaxes feed the context in which post-truth thrives, manipulating feelings and emotions and relegating objective facts to the background.
Within this context, society can harness technology to combat post-truth, relying on innovative tools to filter sources and instil critical thinking in citizens, thereby helping them to recognise whether information is truthful.
The social consequences of post-truth
The popularisation of post-truth is linked to numerous negative consequences for democratic society, including the creation and propagation of hoaxes. Firstly, post-truth constructs a discourse whose power lies in generating trust in claims and arguments that appear to be true but are, in reality, unfounded.
Moreover, the sheer volume and pace of information to which connectivity exposes citizens means the data often go unverified. Even when a claim is checked and a falsehood is revealed, there are rarely real consequences or repercussions for those responsible, who usually retain their status and legitimacy in the public eye.
In the field of politics, the construction of post-truth is sometimes used to redirect citizens towards a particular political ideology. This affects, above all, the thoughts and behaviour of undecided citizens.
The construction of post-truth has been linked to the phenomena of fake news and hoaxes. Disinformation is nothing new, but, in recent years, particularly during the coronavirus health crisis, the efforts to connect emotionally with citizens and convince them with such discourses have increased, thus conditioning decision-making.
Technological innovation to combat post-truth
In times of crisis, technological innovation has been used as a tool to generate polarisation, spread hate speech and manipulate public opinion. But it shouldn’t be demonised: its benefits far outweigh its harms, and it’s an excellent ally when it comes to stamping out post-truth. Connectivity forms part of the solution to countering misinformation, particularly given the speed at which content spreads.
Data verification and artificial intelligence
The role of verification agencies is becoming increasingly relevant in democratic societies, as their main objective is to ensure that data transmitted en masse register a high degree of truthfulness. Technological tools that help detect post-truth content play this role too. If the two approaches are combined, verification processes can be streamlined and optimised.
Significant progress has already been made in combining professional data verification with technological innovation. The Newtral platform (https://www.newtral.es/) has developed an AI tool called Claim Hunter, which listens to, transcribes and detects statements, focusing chiefly on verifying claims made by politicians. Its purpose is to optimise and automate the work of journalists by locating false and true statements. The application has achieved accuracy above 85% in Spanish, and the team is now working on multilingual AI functions.
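The first step such a tool automates is spotting which sentences in a transcript are "check-worthy" at all, so journalists only verify statements that actually assert something. The sketch below is a minimal, purely illustrative version of that filtering step, using naive regular-expression heuristics; the patterns and function names are assumptions for illustration, not Newtral's actual implementation, which relies on trained language models.

```python
import re

# Hypothetical heuristics: a sentence is "check-worthy" if it asserts
# a quantity or a measurable change. Real systems use trained models.
CHECKWORTHY_PATTERNS = [
    re.compile(r"\b\d+([.,]\d+)?\s*(%|percent|million|billion)", re.I),
    re.compile(r"\b(increased|decreased|fell|rose)\b", re.I),
]

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter on end punctuation followed by a space.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def checkworthy(sentence: str) -> bool:
    # Flag sentences matching any quantitative or trend pattern.
    return any(p.search(sentence) for p in CHECKWORTHY_PATTERNS)

def extract_claims(transcript: str) -> list[str]:
    # Keep only the sentences worth sending to human fact-checkers.
    return [s for s in split_sentences(transcript) if checkworthy(s)]

speech = ("I believe in this country. Unemployment fell by 12% last year. "
          "We will keep working hard.")
print(extract_claims(speech))  # → ['Unemployment fell by 12% last year.']
```

Opinions ("I believe in this country") pass through unflagged, while the verifiable statistic is queued for checking, which is what makes the journalist's workload manageable.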
Auditing algorithms on social media
Beyond private verification agencies, some data experts advocate creating truth commissions to bring order to disinformation ecosystems by means of technological innovation. Another relevant option is a public body that, in partnership with the private sector, drives the creation of an ethical framework and leads changes in the content structures of social platforms and networks. This solution involves auditing the algorithms of the technology firms, which give rise to most of the hoaxes on the internet.
Some algorithms have public code, allowing their data to be inspected; others make it impossible to find out what data they collect and how they are programmed. To get around this, auditors usually assess the algorithm's impact instead: automated accounts are programmed to represent different kinds of users, and the content each algorithm serves them is analysed. In effect, reverse engineering is performed to verify how decision-making takes place on social media.
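The persona-based audit described above can be sketched in a few lines. Everything here is an illustrative assumption: the recommender is a deliberately simplified stand-in for a closed platform algorithm, and the persona profiles and catalogue are invented. The audit logic itself (run identical queries through distinct synthetic users and compare what each one is shown) is the recoverable technique.

```python
# Hypothetical stand-in for a closed ranking algorithm under audit:
# it quietly boosts sensational items for highly engaged users.
def opaque_recommender(profile: dict, catalogue: list[dict]) -> list[dict]:
    def score(item):
        boost = 2.0 if item["sensational"] and profile["engagement"] > 0.5 else 1.0
        return item["quality"] * boost
    return sorted(catalogue, key=score, reverse=True)[:3]

# Invented content catalogue the recommender draws from.
catalogue = [
    {"title": "Budget report", "quality": 0.9, "sensational": False},
    {"title": "Shocking hoax!", "quality": 0.4, "sensational": True},
    {"title": "Fact-check roundup", "quality": 0.8, "sensational": False},
    {"title": "Outrage thread", "quality": 0.5, "sensational": True},
]

# Automated personas representing different kinds of users.
personas = {
    "casual reader": {"engagement": 0.2},
    "heavy scroller": {"engagement": 0.9},
}

# The audit step: measure each persona's exposure to sensational content.
for name, profile in personas.items():
    feed = opaque_recommender(profile, catalogue)
    share = sum(item["sensational"] for item in feed) / len(feed)
    print(f"{name}: {share:.0%} of feed is sensational")
```

A systematic gap between personas (here, the heavy scroller sees far more sensational content than the casual reader) is exactly the kind of behavioural evidence a black-box audit produces without ever reading the platform's source code.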
One of the challenges that brings policy and technology together is the creation of a registry of algorithms compiling the information each one holds. Initiatives are already working on this, such as the Observatory of Algorithms with Social Impact, promoted by the Eticas Foundation. Its tool organises and catalogues the algorithms of the main social platforms by their dominance and social impact, making the use and consequences of those algorithms easier to understand.
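To make the registry idea concrete, the sketch below shows one plausible shape for a registry record and how entries could be catalogued by dominance. The field names and example data are assumptions for illustration only; they are not the observatory's actual schema.

```python
from dataclasses import dataclass

# Hypothetical registry entry; fields are illustrative assumptions.
@dataclass
class AlgorithmRecord:
    name: str
    operator: str
    domain: str           # e.g. "content ranking", "ad targeting"
    dominance: float      # assumed share of users exposed, 0..1
    social_impact: str    # e.g. "high", "medium", "low"

# Invented entries standing in for a real catalogue.
registry = [
    AlgorithmRecord("feed-ranker", "PlatformA", "content ranking", 0.8, "high"),
    AlgorithmRecord("ad-matcher", "PlatformB", "ad targeting", 0.5, "medium"),
    AlgorithmRecord("trend-picker", "PlatformC", "content ranking", 0.3, "high"),
]

# Catalogue by dominance, as a registry would at a much larger scale.
for rec in sorted(registry, key=lambda r: r.dominance, reverse=True):
    print(f"{rec.name} ({rec.operator}): "
          f"dominance={rec.dominance:.0%}, impact={rec.social_impact}")
```

Even this toy structure shows the point of a registry: once entries are comparable on shared fields, regulators and researchers can rank and query algorithms rather than investigate each one from scratch.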