
Chine Labbé: "In disinformation, the big change is the emergence of AI".

Chine Labbé is editor-in-chief and vice-president of NewsGuard. Founded in 2018 in the United States, this start-up monitors disinformation and develops solutions to combat it. A member of Radio France's committee on honesty, independence and pluralism of information and programmes, she previously worked as a reporter for the Reuters news agency in Paris and as a multimedia producer for The Economist in New York. She holds a double degree in journalism from Sciences Po Paris and Columbia University in New York.
Lundis de L'IHEDN, Chine Labbé

WHAT ARE THE RECENT DEVELOPMENTS IN THE FIELD OF DISINFORMATION? ARE YOU SEEING AN INCREASE IN THE PHENOMENON?

Disinformation campaigns have been intensifying and accelerating for several years, particularly since 2020 and the COVID-19 pandemic. Recent conflicts - the invasion of Ukraine, the Israel-Hamas war - have inevitably been marked by this acceleration.

Fundamentally, there is nothing new here: the campaigns are sometimes state-run, sometimes the work of lone disinformers, and it remains very tricky to know who is behind them. They are deployed on poor-quality websites (often repeat offenders in spreading disinformation), but also on social networks, where a mixture of convinced users and inauthentic bots gives the stories substance and virality. Around Ukraine and the Israel-Hamas conflict, a veritable information war is under way.

On the Russian side, the state media and pro-Kremlin accounts continue to deploy various strategies aimed at discrediting the Ukrainian President and undermining Western support for Ukraine, while exploiting stories that may sometimes appear to be of no geopolitical interest to Moscow, but which fuel tensions within Western democracies.

"GEOSTRATEGIC INTERESTS COLLIDE" IN DISINFORMATION

Since the beginning of the war, variations on the myth that Ukraine is a Nazi state have proliferated, and they remain popular. The latest example is the false claim that Volodymyr Zelensky bought a villa in Germany that once belonged to Joseph Goebbels, the Nazi regime's propaganda minister. In reality, the villa still belongs to the state of Berlin.

In the Middle East, various geostrategic interests can be seen colliding in online disinformation campaigns. Without necessarily being at the origin of these stories, the Iranian state media have, for example, exploited numerous false accounts of the conflict in recent months to serve their own interests. A very striking aspect of this conflict is also the proliferation of accusations of staged events (fake casualties, actors playing victims) on both sides, and of fake reports attributed to well-known media outlets, fabricated to lend credibility to a false story.

In terms of form, the big change is the emergence of AI, and in particular of AI-generated news sites: new-generation content farms that masquerade as traditional news sites, but whose content is produced using generative AI, with little or no human supervision.

49 AI-GENERATED SITES LAST MAY, 634 TODAY

In addition to the news sites that spread multiple false stories, and the accounts we follow on social networks, since the beginning of 2023 we have had to monitor these AI-generated sites, which publish hundreds or even thousands of articles every day thanks to a workforce that is non-existent and free. By May 2023, my colleagues at NewsGuard had counted 49 of these sites. To date, they have identified 634. And while for the moment most of these sites seem to have been set up to distribute light content and "get clicks" - the majority of them being fairly harmless - some are already producing viral false stories.

This is the case of Global Village Voice, a content farm that presents itself as a Pakistani news site and last November published an article claiming that Benjamin Netanyahu's psychiatrist had committed suicide. For this article, the site, which specialises in republishing and rewriting with AI articles published by other sites, appears in fact to have recycled a satirical article dating from 2010. The psychiatrist in question does not exist, and therefore could hardly have committed suicide. But the laundering of this false information by an apparently credible site enabled it to find its way onto Iranian state television, providing timely support for an ongoing campaign to portray the Israeli Prime Minister as psychologically unstable.

ARE THE REACTIONS OF THE TARGETED ENTITIES, WHETHER GOVERNMENTS OR COMPANIES, UP TO THE TASK?

In recent years, a number of governments have set up bodies specialising in the detection of disinformation campaigns coming from abroad, such as France's Viginum. Viginum notably identified the inauthentic amplification, by a Russian network called "Recent Reliable News", of images of the Stars of David painted in the 10th arrondissement of Paris last October. France considered that the aim of this campaign was "to create tensions in the public debate in France and Europe". This detection and characterisation work is important.

As for brands targeted by specific disinformation campaigns, they generally try to respond to the false stories targeting them by communicating on the subject. We have seen this in recent weeks with calls to boycott brands presented as supporting Israel, sometimes based on false information that has gone viral... It is important to respond to false stories and to try to set the record straight.

"GETTING THE FACTS RIGHT IS NOT ALWAYS EFFECTIVE".

Unfortunately, setting the record straight once a piece of information has circulated all over the web, generating millions of views, is not always effective. In fact, it has been shown that repeated exposure to a piece of information makes people more likely to believe it, regardless of its veracity, even after they have read a correction (this is known as the illusory truth effect).

The most important issue today is therefore that of engagement with disinformation content on social networks, and therefore regulation. The role of platforms is the crux of the matter. But the other major project that needs to be pursued collectively at the same time is that of supporting quality journalism, which, in the long term, remains the best bulwark against disinformation.

DOES ARTIFICIAL INTELLIGENCE AMPLIFY DISINFORMATION? IS IT A TOOL FOR COMBATING IT?

Until recently, AI, whose risks were well understood, still seemed a theoretical threat. Yes, AI-generated images, videos and audio clips were circulating, but they were not at the heart of most disinformation campaigns. Online, low-tech disinformation methods (doctored official documents, real images and videos taken out of context or presented as coming from one conflict when they actually come from another, etc.) continued - and continue - to work very well. However, recent months have seen an explosion in AI-generated content and its virality, making disinformation more effective and harder to detect.

The most blatant example is undoubtedly the Slovak parliamentary elections of September 2023, when a fabricated audio clip was used to support the (unfounded) story that the elections were about to be rigged. The explosion in deepfakes can be explained in part by the rapid improvement of the associated technologies in recent weeks, particularly with regard to lip movements, which are becoming increasingly realistic.

"CRYING "DEEPFAKE" IS SOMETIMES ENOUGH TO DISCREDIT AUTHENTIC MATERIAL".

Another very interesting phenomenon is that the explosion in AI-generated images has enabled certain disinformers to use the spectre of AI - without using AI itself - to cast doubt on the veracity of very real images, by making people believe that they are inauthentic. Cries of "deepfake" have thus become a widespread technique, sometimes sufficient to discredit authentic media.

And beyond images, videos and audio clips, the use of generative AI chatbots to produce disinformation stories on a large scale and at low cost is a very real threat. Last August, my colleagues at NewsGuard carried out audits of the main AI chatbots (Google's Bard and OpenAI's ChatGPT), and the results are worrying: in 80% of cases (for Bard) to 98% (for ChatGPT), these tools eloquently repeated known false narratives, in the form of essays, press articles, TV scripts and so on. Sometimes these tools complied and, as their only safeguard, added a few paragraphs of disclaimer at the end of the output. So it is easy to imagine how, in the hands of malicious actors, they could turn out to be perfect disinformation assistants.

Russian and Chinese state media have already used ChatGPT as an authoritative source to support some of their anti-American propaganda stories (the idea being that if ChatGPT, an American tool, acknowledges it, it must be true!). If these media were to use such tools to their full potential tomorrow, to create thousands of propaganda articles at low cost and in a matter of seconds, then yes, there is no doubt that AI would amplify disinformation even further.

"IN THE FACE OF A GROWING FLOW OF INFOX, WE NEED TO USE IA".

However, AI can also, if guided by human intelligence, prove to be a powerful tool in the fight against misinformation. At NewsGuard, we have created a catalogue of the main false stories circulating online, as identified by our specialist journalists. On its own, this database can be used by content moderators on platforms, for example, to monitor misinformation. But coupled with artificial intelligence, it can do much more in a matter of seconds, identifying all the instances of each false story (as described by real journalists, who know how to distinguish false claims from opinions).
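To make the idea concrete, here is a minimal, hypothetical Python sketch of such a coupling: matching new posts against a catalogue of known false narratives using sentence embeddings. This is not NewsGuard's actual pipeline; the model name, the catalogue entries and the similarity threshold are all illustrative assumptions.

```python
# Hypothetical sketch: flag posts that resemble known false narratives.
# NOT NewsGuard's actual system; catalogue entries, model and threshold
# are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

# Illustrative catalogue of known false narratives (placeholder text,
# drawn from examples mentioned in this interview).
catalogue = [
    "Volodymyr Zelensky bought a villa in Germany that belonged to Joseph Goebbels",
    "Benjamin Netanyahu's psychiatrist committed suicide",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
catalogue_embeddings = model.encode(catalogue, convert_to_tensor=True)

def match_known_narratives(post: str, threshold: float = 0.6):
    """Return (narrative, score) pairs whose embedding is close to the post's."""
    post_embedding = model.encode(post, convert_to_tensor=True)
    scores = util.cos_sim(post_embedding, catalogue_embeddings)[0]
    return [
        (catalogue[i], float(score))
        for i, score in enumerate(scores)
        if float(score) >= threshold
    ]

# Example: a paraphrased repetition of a known false story.
print(match_known_narratives(
    "Zelensky reportedly purchased Goebbels' former villa near Berlin"
))
```

Note that similarity matching only flags candidate repetitions of a known story; a human reviewer would still need to distinguish a post that spreads the story from one that debunks it.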

Faced with a growing flow of false stories, there is no doubt that we need to use AI to multiply the impact of our efforts in the fight against disinformation. And if tomorrow's generative AI chatbots were trained to resist false stories effectively, to sort through their sources and select the most credible information (and if they stopped "hallucinating", the term used for those moments when an AI invents its answers), then these tools could also prove to be faithful allies in the fight against disinformation.

If you would like to find out more: on 29 January, Chine Labbé will take part, alongside other specialists, in the Lundi de l'IHEDN entitled "Disinformation at the heart of conflicts".

Wars are being fought on the battlefield, in the economic arena, but also in the field of information. Conflicts in the Middle East and Ukraine, the proliferation of coups d'état in the Sahel and Africa, and attempted interference by foreign powers: manipulating the facts has become common practice for a number of geopolitical players.

While no-one is immune to disinformation, particularly on social networks which act as a sounding board, advances in artificial intelligence are only increasing the risks. Understanding the driving forces behind this new war can help us to better anticipate the risks.

With:

  • Christine Dugoin-Clément, researcher at the IAE Paris-Sorbonne Risk Chair and the Observatoire de l'Intelligence Artificielle at Paris 1 Panthéon-Sorbonne.
  • Julien Nocetti, associate researcher with the Russia/Eurasia Centre and the Geopolitics of Technology programme at the French Institute of International Relations (IFRI).
  • Chine Labbé, journalist, editor-in-chief and vice-president in charge of partnerships, Europe and Canada, at NewsGuard.
  • Christophe Lemoine, Deputy Director of Communications, Press and Spokesman at the Ministry of Foreign and European Affairs.

Round table chaired by Julien Le Bot, journalist, writer-director and deputy editor-in-chief of Dessous des cartes (Arte).

Rendezvous on Monday 29 January at 7.30pm