

The Online Far Right’s Secret Language
Code words, hashtags, emojis, and irony: the strategies used to spread extremist ideas on social media and bypass platform restrictions
When reading the word “CAFE,” most people probably think of the dark drink usually served in a cup. At the core of fascist discourse on social media, however, it has a very different meaning: in Spanish, the letters are read as “Camarada Arriba Falange Española” (Comrade Up Spanish Falange), a slogan that signaled the enemy had to be killed. There are other coded terms used to refer to Adolf Hitler (such as the “Austrian painter”), to Benito Mussolini (“the big jaw”), and to glorify Nazism or deny the Holocaust. Coded terms are just one of the strategies used by those who share this content to avoid platform restrictions and spread their message. Another is the meme, used to disguise extremist ideas as humor and reach a younger audience.
These tactics are challenging for platforms. Academic studies and experts consulted by Maldita.es and Facta acknowledge that this type of content is difficult for the social networks on which it spreads to identify and moderate. The experts’ recommendations include training artificial intelligence systems to detect inappropriate content more automatically and adding links to external pages with reliable information that users can verify. Others, such as UNESCO, also advocate educating teachers and students about the consequences of these regimes, as well as warning them about the use of social media to make demands and proclaim slogans in coded language.


This article is the third in an international investigation carried out by Maldita.es (Spain) and Facta (Italy). The project explores how fascist propaganda and disinformation have adapted to the language of social media and the strategies used to avoid platform restrictions and ensure that the content reaches a younger and wider audience.
This investigation was made possible thanks to the support of Journalismfund Europe.
Terms, hashtags, and emojis that seem harmless: hidden tactics to avoid platform restrictions
“Far-right activity on social media is full of visual codes and signals,” explains a 2021 study on the online language of extremist groups. Among these are coded elements (such as phrases, numbers, or symbols), algospeak (altering keywords, for example by substituting numbers for vowels), hashtags, and emojis. The Foundation for Combating Antisemitism (FCAS), which has observed antisemitic messages using this hidden language, states that although they may seem harmless, they allow extremist ideologies “to spread in public spaces without being openly expressed”.
Antisemitic discourse (prejudice or hatred against the Jewish people, which was the basis for the Holocaust) is encoded in the digital environment using numbers such as 1488, which, according to the FCAS, combines a white supremacist slogan about protecting white children (known as the “fourteen words”) and the salute “Heil Hitler” (because H is the eighth letter of the alphabet, so 88 stands for HH). Expressions such as “Have a totally joyful day” correspond to the acronym TJD, which stands for “Total Jewish Death”. The organization explains that extremists reuse this format to attack different minority groups: each variation keeps the same structure but changes the target. To deny the Holocaust, Spanish terms such as “Holocuento”, “Holoengaño” or “Ana Fraude” are used, the last referring to Anne Frank, the German Jewish girl who was a victim of the Holocaust and is known for the diary she wrote about her experiences.
There are also different ways of referring to Adolf Hitler in neo-Nazi propaganda on the internet without naming him directly. Some of these are “Austrian painter” (the Führer wanted to be a painter but was rejected), “the mustache”, “Uncle A”, or his initials, AH. The hashtag #AHTR means “Adolf Hitler was right”.

In Spain, common codes include CAFE (the Falangist slogan that a former Vox deputy in the Andalusian Parliament displayed in her office) and abbreviations such as Æ (a character not used in Spanish, read as “Arriba España”, another Falangist slogan) or FF for “Franco Friday” (an expression of unknown origin used to commemorate Francisco Franco one day a week). The Spanish dictator is sometimes referred to as “el caudillo” or “generalissimo of the armies of land, sea, and air”.

Similarly, Benito Mussolini is referred to by a variety of nicknames in Italy, some affectionate and some mocking. For example: Lui (“He”); Ben (“Benny”), a familiar short form used by friends and supporters; Mascellone (“Big Jaw”); Pelatone (“Big Bald Head”); Crapa pelada (“Bald Head” in Lombard dialect); Gran Babbo (“Big Daddy”); Er puzzone (“The Stinker” in Roman dialect); and Crapùn (“Big Head” in Lombard dialect).
The use of these words is not limited to the internet; some people make money from them. We found a Falangist shop selling keyrings, patches, and pins bearing the initials CAFE (Camarada Arriba Falange Española, or Comrade Up Spanish Falange). In Predappio (Italy), Mussolini’s birthplace, two souvenir shops called “Predappio Tricolore” and “Ferlandia” sell fascist gadgets such as rings, lighters, busts, flags, and T-shirts. Both also have online versions of their shops.

In addition to alphanumeric codes and coded terms, emojis are also used. In antisemitic discourse, Jews or Zionists are referred to as animals (🐀, 🐍, 🐷, and 🐙); in this way, explains the FCAS, users recycle “dehumanizing images” already present in Nazi propaganda. Another animal used to allude to Nazism or Francoism without displaying symbols banned by platforms is the eagle (🦅), which appeared on the Spanish flag during the dictatorship and alongside the swastika in Nazi Germany (the Reichsadler, or German imperial eagle). The raised hand (✋) represents the Roman salute, a gesture linked to fascist movements, and double or triple lightning bolts (⚡️⚡️⚡️) refer to the SS, an organization founded by Hitler in 1925 to protect Nazi party leaders and ensure security during Nazi rule. The symbol ᛋᛋ, which corresponds to the SS insignia, and 卐, the swastika, a hook-shaped cross adopted as the symbol of the Nazi party in 1920, are also used.

These techniques can be summed up in the concept of the “dog whistle”. Just like the tool of the same name, designed to train man’s best friend with ultrasounds that are inaudible (or at least not fully audible) to the human ear, this tactic of political propaganda aims to communicate with a more or less restricted group of adherents through a code of meanings known only within the inner circle and practically incomprehensible, except in a very superficial way, from the outside. The expression thus refers to coded messages that travel on a double track, political and linguistic: harmless and vague for those who do not know their real meaning, heavily loaded with meaning for those able to decode them.
“Dog whistles are used to smuggle in concepts like antisemitism, racial segregation, the violent deportation of people, things that would otherwise be unacceptable”, explains journalist Leonardo Bianchi, an expert on online extremism and digital dynamics. “It is therefore important that they can be decoded by the general public as well, and in this, the work of journalistic outreach and analysis is crucial”. According to Bianchi, the risk is that dog whistles become normalized to the point that they cease to be dog whistles at all, just as has already happened with the phrases “ethnic replacement” and “remigration,” which have now entered public discourse.
According to UNESCO, there are other tactics to avoid platform regulation, such as orthographic obfuscation (deliberate misspellings, character substitutions, homoglyphs, extra spaces, or punctuation); signposting, which refers to relatively benign posts that include links to fringe platforms where less robust moderation allows for more radical, explicit content; multimodality (the combined use of audio, image, and text content, which is harder to screen than text alone); and reframing, which “is framing hateful discourse as free speech or honest investigation”, said UNESCO. “This strategy, sometimes referred to by the acronym ‘JAQ-ing’ (‘just asking questions’), does not in itself contravene the right to free expression. Rather, it exploits the protections of that right to create ‘plausible deniability’ and to blur intent, producing a grey zone where moderation systems are less effective”, the institution adds.
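To see why orthographic obfuscation is so effective against automated moderation, consider the minimal Python sketch below. It is not any platform’s real system: the character mappings, the placeholder blocked term, and the function names are invented purely for illustration. It shows how a naive keyword filter misses an obfuscated term, and how a simple normalization step (undoing number-for-letter substitutions and stripping inserted punctuation) can recover it.

```python
# Minimal illustration (not a real moderation system) of how
# "algospeak" obfuscation evades a naive keyword filter, and how
# normalization can partially counter it.
import unicodedata

# Example number-for-letter and symbol substitutions (illustrative only)
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})
SEPARATORS = " .-_*"

BLOCKLIST = {"exampleslur"}  # placeholder term, not a real slur

def normalize(text: str) -> str:
    # 1. Decompose accented/look-alike characters and drop diacritics
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    # 2. Lowercase and undo common number-for-letter substitutions
    text = text.lower().translate(LEET_MAP)
    # 3. Remove spacing and punctuation inserted to break up the word
    return "".join(c for c in text if c not in SEPARATORS)

def naive_filter(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

def normalized_filter(text: str) -> bool:
    return any(term in normalize(text) for term in BLOCKLIST)

obfuscated = "Ex4mple-slur"
print(naive_filter(obfuscated))       # False: literal match fails
print(normalized_filter(obfuscated))  # True: normalization recovers the term
```

Even with normalization, purely coded terms such as CAFE or “Franco Friday” carry no banned string at all, which is why the studies cited in this article stress context, human labeling, and multilingual coverage rather than keyword lists alone.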
Memes: a tool for hiding extremist messages with “humor”
Online propaganda has learned to use memes (images or videos with a block of text that can be easily shared on social media) as elements of a broader discourse, leveraging pop culture to communicate hate messages without the risk of exposure to criticism, as the study Far right memes: undermining and far from recognizable explains. In particular, fascist propaganda has adopted memes to adapt its discourse to the language of social media and reach young audiences, among whom “the simplified message of memes can awaken their interest in far-right thinking”.
These contents, which tend to be shared within private far-right circles before spreading widely, repeat symbols linked to these movements. For example, this Facebook post shares a meme of Hitler that also features a swastika. This contributes to its spread and normalization among users, but does so in a “humorous” and subtly coded way, providing users with “distance and denial, as they can always claim they were just joking.” Another example is the meme shared in this post on X, with almost 80,000 views, in which a father tells his son, “This year we have to support Franco to the death.” The child thinks he is referring to Franco Mastantuono, an 18-year-old Argentine soccer player who signed with Real Madrid in 2025 and whose name has been used to spread pro-Franco messages on social media.

Naturally, this instrumental use of ironic language is very dangerous and represents a phenomenon that aims to blur more and more the boundaries between humor and disinformation, until the two merge into a single, powerful tool. This is what in jargon is called trolling, namely the habit of interacting online with ironic and provocative messages, promoting extreme positions while avoiding taking responsibility for doing so openly. It is a strange middle ground between humor, hate speech, and disinformation, which is not at all a communication accident but a fully fledged strategy theorized down to the smallest detail.
The method is explicitly claimed and encouraged by one of the leading figures of the international far right, Andrew Anglin, a U.S. neo-Nazi who in 2013 founded the site The Daily Stormer (a title that pays homage to the Nazi weekly Der Stürmer). In a guide he wrote and self-published in 2016, Anglin states in black and white that “when using racist slurs, one should do so in a half-joking manner, like when telling a racist joke that everyone laughs at because it is true. The tone should be light. Most people are not comfortable with material that appears to be pure vitriolic hate, without a hint of irony. The lay reader should not be able to tell whether we are joking or not”. The ultimate goal, Anglin concluded, is to promote a form of “non-ironic Nazism masked as ironic Nazism”.
The ultimate expression of this philosophy was a meme: Pepe the Frog, an anthropomorphic frog that for several years was the official mascot of the U.S. far right. Born as the protagonist of a comic strip, Pepe first became a fairly harmless meme, picked up even by celebrities like singers Katy Perry and Nicki Minaj. Around 2015, however, images of the frog in SS uniform began circulating, associated with Nazi symbols and accompanied by antisemitic and Holocaust-denial slogans. Over time, the meme became inextricably associated with Donald Trump, who in 2016 retweeted an image of himself in the guise of the anthropomorphic frog, thereby giving political legitimacy to the extremist version of the meme. At the time, major U.S. newspapers and Democratic candidate Hillary Clinton spoke out about the meme, highlighting the danger of giving space to such extremist and hazardous propaganda. The future president of the United States, for his part, explained that it was just a meme, a joke.
According to the analysis Far right memes: undermining and far from recognizable, it is difficult to attribute specific consequences to the direct use of a particular meme. However, it asserts that the use of these memes “by the far right contributes to a series of indirect or gradual effects”, such as the normalization of these ideologies’ positions, the formation of groups and identities, and inspiration for extremist actions.
Platforms and content moderation
As studies and research have shown, social media platforms’ algorithms contribute to the uncontrolled spread of propagandistic disinformation and conspiracy theories (monetizing them) and actively promote hate and extremist political actors.
Meta, for example, has been criticized for being overly permissive toward racist content disguised as humor, which was left free to proliferate because it boosted user engagement. After Trump’s victory in the U.S. elections, Meta announced it was loosening its moderation policies on content dealing with topics such as gender issues and immigration, now allowing, for instance, trans people to be described as “mentally ill”.
Furthermore, a 2020 analysis by the Institute for Strategic Dialogue (ISD) showed how Facebook’s algorithm (owned by Meta) actively promoted Holocaust denial content. Typing the word “Holocaust” into Facebook’s search function brought up denialist pages, and the Institute identified at least 36 Facebook groups, with a total of more than 360,000 followers, specifically dedicated to Holocaust denial.
TikTok has allowed the unrestrained spread of neo-fascist messages. According to another ISD report, neo-Nazis and white supremacists on the platform shared propaganda linked to Hitler and sought to recruit new members. Hundreds of extremist accounts on TikTok posted videos promoting Holocaust denial and the glorification of Hitler and Nazi Germany, suggesting that Nazi ideology is a solution to modern problems, such as the alleged invasion of Western countries by migrants. Researchers found that the platform’s algorithm promoted this content even to new users, often young people.
Previously, the Global Network on Extremism and Technology had also reported that TikTok’s algorithm was promoting the glorification of fascist ideology.
To moderate content, platforms rely on large teams of workers (usually based in “Global South” countries with lower labor costs) and on automatic filtering software, especially before publication. In principle, moderation interventions are based on public community guidelines and standards that establish what users can or cannot post.
The messaging app Telegram (where neo-fascist groups often gather) has, from the start, distinguished itself by its firm stance against any attempt at “censorship” and by its minimal effort to moderate interactions among users. Telegram is designed to connect people through groups and channels (formats that simplify community-building) and it also provides users with a series of automated programs called bots, which help streamline interactions within groups. Over time, X has developed and refined an algorithm that rewards polarizing content and xenophobic narratives, compensating the most effective creators through a monetization system. Both platforms, however, have long made explicit their intention to offer their users “free”, censorship-free discussion spaces. Another way of saying this is that X and Telegram are currently safe havens for those who spread disinformation and hate speech, or for those who, as in the case under examination, want to bring back political figures from a tragic past.
X also has problems moderating hateful content. Since Elon Musk purchased the platform, then called Twitter, for $44 billion in October 2022, a series of actions has severely weakened efforts to fight false news and counter hate speech. With the arrival of the Tesla and SpaceX CEO, the platform’s content moderation teams were drastically reduced, tools to report political disinformation were disabled, and access to platform data for disinformation research was restricted. Instead, a bottom-up system to counter disinformation, called “Community Notes”, was implemented, but several independent investigations and analyses have shown it to be a failure.
At the same time, a large number of right-wing activist and conspiracy theorist accounts were readmitted to the platform by Musk, who has long since become a megaphone for the global far right. These are people who were banned before the billionaire’s arrival for violating Twitter’s former moderation policies against hate speech and disinformation. One example is the well-known U.S. far-right conspiracy theorist Alex Jones, ordered to pay billions in damages to the families of the victims of the 2012 massacre at Sandy Hook Elementary School in Newtown, Connecticut, for having claimed that what happened at the school was a hoax. With his readmission to X, Jones was able to start a new online life, reaching an ever-wider audience with his content. According to various studies on X, these actions and decisions have led to an increase on the platform in hate speech, false news, and conspiracy theories connected in particular to far-right political positions.
At the beginning of 2025, the European Commission asked X to hand over internal documents about its algorithms to verify whether there were manipulations in the platform’s internal systems to give posts and far-right politicians greater visibility than other political groups. An investigation into Elon Musk’s social media platform was opened by the cybercrime unit of the Paris Prosecutor’s Office, after receiving a report from French MP Eric Bothorel, who denounced a distortion in the platform’s recommendation algorithms. “There are several clues indicating that Elon Musk is organizing and prioritizing information favorable to the ideology he defends, and that he is distorting the flow of information” on his social media platform, Bothorel said.
Training AI to detect hateful content and giving users access to reliable information: possible solutions for platforms
In general, the social networks mentioned above do not include explicit limitations on this type of content in their community guidelines, but they do restrict posts that promote hate speech. In other words, they do not remove posts that mention Hitler (if, for example, the content is historical or academic in nature), but they do remove those that promote antisemitism or Nazi ideology.
The study Digital Whistles for Dogs: The New Online Language of Extremism claims that processes designed to identify content “are not effective enough to prevent the spread of hateful posts preventively,” so platforms “continue to rely on manual reports to remove content that has already been uploaded.” Moderation tasks are further complicated in the case of memes because “they are presented in an ambiguous or coded form,” according to the analysis Far right memes: undermining and far from recognizable. “Even in cases where ambiguity and humor are far from obvious, protecting people is problematic because intentions and effects are difficult to prove”, it explains.
According to TikTok’s content transparency report, between January and March 2025, 11.5% of the content removed from the platform violated its safety and civility policies. Within that group, almost 10% involved hate speech and hateful behavior.
One of the recommendations made by researchers in Digital Whistles for Dogs: The New Online Language of Extremism is to train artificial intelligence processes “using manual researchers to tag hateful content and provide a broader dataset with which to compare all uploaded content.” This would make it possible to detect inappropriate posts that violate the platforms’ community standards.
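As a rough illustration of that recommendation, the sketch below, assuming the scikit-learn library, trains a small text classifier on posts manually tagged by researchers and then scores a new upload. The tiny dataset, the labels, and the example post are invented for the illustration; a real moderation pipeline would require far larger curated datasets, multilingual coverage, and human review of the model’s decisions.

```python
# Minimal sketch of training a classifier on manually tagged posts,
# roughly in the spirit of the study's recommendation. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical manually tagged examples (1 = violates policy, 0 = acceptable)
posts = [
    "coded slogan praising the dictatorship",
    "historical documentary about the Holocaust",
    "meme glorifying the regime with hidden symbols",
    "museum exhibition on the victims of fascism",
]
labels = [1, 0, 1, 0]

# Character n-grams help catch misspellings and obfuscated variants
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(posts, labels)

# New uploads are scored; the most suspicious are routed to human moderators
new_post = ["another meme glorifying the regime"]
print(model.predict_proba(new_post)[0][1])  # estimated probability of a violation
```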
In its study on Holocaust denial on social media, UNESCO suggests including “fact-checking labels that redirect users to accurate and reliable content.” TikTok is already implementing this system, at least partially. The platform does not allow content that denies or downplays “well-documented historical events that have caused harm to protected groups”, such as denying the existence of the Holocaust. In fact, when you search for “Hitler” or “Nazi” in the platform’s search engine, a message appears: “Remember to consult reliable sources to prevent the spread of hate and false information,” along with a link to a web page of “Facts about the Holocaust”.

However, this is not the case when searching for “Francisco Franco” or “Francoism” on the same network, where content featuring images of the dictator is shared (some of it with thousands of views). Kye Allen, who holds a PhD in International Relations, is a researcher at the University of Oxford, and has authored multiple studies on extremism and social media, asserts that one limitation of content moderation is that the same identification and moderation processes are not applied across different languages. In fact, in 2020, BuzzFeed published an analysis of coronavirus-related misinformation on Facebook and found that the platform had difficulty addressing hoaxes in languages other than English.
Another example: if we search for “Kalergi Plan”, the conspiracy theory against the European Union that alleges a plot by international elites to wipe out the “white race”, TikTok offers no results and instead displays a message: “This phrase could be associated with hateful behavior”. If we do the search in Spanish, reversing the order of the words (“Plan Kalergi”), six videos discussing the theory appear.

Another UNESCO recommendation is to invest in training for content moderators on topics such as Holocaust denial and distortion, and anti-Semitism. Allen agrees that moderators on platforms such as TikTok may not always be aware of certain very specific historical references. One example, according to the expert, could be content commemorating the Blue Division, the Spanish soldiers who fought alongside Hitler during World War II.
Maldita.es has found that, as a general rule, platforms impose more restrictions on content that mentions Hitler or Nazi rhetoric than on other dictators or extremist movements. For example, in Telegram conversations, you can use GIFs (an image format used to create short, repetitive animations from a sequence of frames) of Franco and Mussolini, but there are none of Hitler.

How to protect yourself from masked propaganda
Understanding what drives that “joke” and who is laughing at it is one of the few tools we have to defend ourselves against hate and disinformation, especially the kind that seeks to insinuate itself into our minds by exploiting the natural reflex of laughter. For this reason, the first line of defense against masked propaganda is our critical thinking, a faculty to be trained and exercised constantly on the internet.
The internet, in any case, is a neutral tool, meaning that online environments are not only full of pitfalls but also of possible solutions. Given their highly codified nature, memes and other internet content are also easy to classify, and their roots (that is, the initial structure from which variations on the theme take shape) are freely accessible on the web. In this regard, we recommend the website www.knowyourmeme.com, which since 2007 has collected, classified, and explained web phenomena, making them accessible to the general public. A keyword search on this site can help us understand the meaning and intentions behind any product of online virality and allow us to find the key to avoiding becoming victims of more or less intentional disinformation.
Similarly, it is important to be sure you have fully understood the meaning of a piece of content before sharing it: there should be no room for misinterpretation. If we deem it necessary, we can turn to a quick Google search or spend a few minutes looking into the past behavior of the user who shared the content. In this way, we may discover whether they are part of a broader community, whether they habitually publish propagandistic content cloaked in irony, whether their posts have multiple layers of reading, and whom they are aimed at. Our sharpest weapon, when in doubt, is to place every piece of content within its broader context.