Low-Background AI

Propaganda Generator

Low-Background AI is a term used by John Graham-Cumming[1] that draws on the terminology and practice of recovering steel manufactured before the Trinity nuclear test of July 1945. Recovering pre-test steel mattered because steel made before 1945 carried a low, predictable level of background radiation. Once atmospheric testing began, newly manufactured steel was contaminated with fallout radionuclides on a worldwide scale. Developing and manufacturing sensitive radiological test equipment therefore required Low-Background Steel with predictable radiological properties; equipment built from post-Trinity steel was unreliable because its background radiation was unstable. This condition persisted throughout the Cold War and beyond. Well after the Limited Test Ban Treaty of 1963 was signed, background radiation levels gradually returned to normal, and by around 2008 the practice had become largely unnecessary. The analogy to AI sets the contamination point at November 2022, when ChatGPT was released to the public. Since then, text, photographic, and video sources have become contaminated by AI-generated material. The proliferation of “AI slop”[2] has accelerated at an alarming pace ever since; it is most visible on social media platforms across the internet, but it is by no means limited to social media. Thus, Low-Background AI is material created before November 2022, when content was reliably human-created.

Cash Grab or Propaganda?

This is not the first time concerns like those raised by the end of the low-background era have arisen. Throughout human history, leaders of political and religious movements have deliberately purged human-created material that opposed their ideology. Indeed, it has been a common trait of individuals and movements that seek not only victory but also to obliterate the memory of their perceived enemies, to rewrite history, to silence challenges to their version of events, and to commit cultural genocide. Examples range from the burning of the Temple of Artemis by Herostratus (356 BCE), the British burning of Benin City (1897), the Nazi destruction of Jewish heritage and art (1933-1945), and Mao’s destruction of ancient temples (1966-1976) to the destruction of heritage sites at Palmyra and Nimrud by ISIS (2015). The difference between these past atrocities and AI lies in scope, speed, and intention. The scope is worldwide and spans every accessible medium. The speed is frighteningly fast and without precedent: AI slop is on track to outpace human submissions on several platforms within a short time. According to The Guardian,[3] over 20% of the videos YouTube’s algorithm shows to new users are AI slop. “The video-editing company Kapwing surveyed 15,000 of the world’s most popular YouTube channels – the top 100 in every country – and found that 278 of them contain only AI slop.”[4] That brings us to the third difference: intention. While much AI-generated slop exists purely for profit,[5] there are other motivators; in today’s mixed-slop market, money, attention, and ideology dominate. Slop is used extensively in disinformation and propaganda campaigns on social media because it is a fast, easy, and cheap way to get agenda messaging into the wild[6] with little scrutiny of its claims. And given the sheer volume of slop being pushed, saturation of the message is all but guaranteed, regardless of fact-checking and follow-up refutation.

Ready for Primetime?

OpenAI flipped the switch on April 10, 2025, turning on ChatGPT’s ambient memory, which makes you, the user, part of the AI’s training: the system remembers personal details about you in order to predict how you will behave when using it. This goes well beyond using generative AI tools to edit a photo or video you created, search for synonyms, or rewrite a phrase in a manuscript. It sets up the conditions for an unrestrained, active learning environment. When an AI can choose its own learning data, is unrestrained, or is given an unsuitable dataset, the results have not been good. From ChatGPT’s involvement in murder and suicide cases to Replit’s agent rewriting code and lying about it, Grok turning white supremacist, MyCity encouraging illegal activities, ChatGPT “authors” publishing factually incorrect articles, and a wide range of reports of AI agents inventing or citing data that never existed, it is clear that the technology has significant problems.[7] This raises the question: why? Why are tech companies so invested in forcing AI on the general population? What is the motivation for using AI for content generation? What is the justification for handing such powerful tools to trolls and provocateurs?

A Reckoning

There is a bright spot on the horizon. AI slop has been pushed so hard by opportunistic revenue generators and ideological provocateurs that a majority of potential consumers on social media have grown tired of its proliferation and are paying less attention to AI-generated content. To be sure, niche markets for this kind of ideological propaganda remain: in conspiracy-theory spaces, AI materials are consumed voraciously, but those consumer bases are much smaller. The general population’s attention, though initially captured, seems to be developing a resistance to further inculcation.[8] The backlash has already begun. Debate over AI-generated content now pervades public discourse online and off, including concerns over intellectual property rights and the ease and speed with which disinformation can be deployed. Somewhat in line with the Skynet discussion of the physical dangers posed by autonomous AI systems,[9] there are tangible issues with AI today: cognitive offloading and the potential for cognitive decline in humans are major concerns. Sat Singh proposes in his TEDx presentation[10] that there is something we can do to prevent cognitive decline due to AI: Resist Unthinking, that is, resist offloading our thinking responsibilities, and spend time actively building cognitive skills. When given the option, decline to use AI tools for content creation or for any other task you can do yourself.

What This Means for History and Historians

Historians must recognize that this is a period of unreliable sources. Until the long-term effects of the early age of AI are known and understood, future historians producing historical work on this period should treat all content and information created in this time as unreliable, whether as fact or as an account of events. Without clear provenance, provable data integrity for photographs, audio, and video, and eyewitness documentation, discerning fact from AI fiction is not possible. Historians and archivists working now must therefore pay particular attention to including metadata and documentation attesting to the authenticity of their work and their sources. Exhaustive sourcing of anything produced in this period must be a primary concern if the facts and truthful accounts of this era are to survive into the future. Historians face an unprecedented task: we know, without reservation, that our data will be, and rightly should be, suspect. We must include the tools future historians will need to suss out fact from AI fiction.
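As one concrete illustration of what such documentation might look like, the sketch below (Python, standard library only) records a verifiable SHA-256 hash of a source file in a JSON “sidecar” alongside human-supplied provenance fields. The workflow and field names are hypothetical assumptions for illustration, not an established archival standard; a real project would more likely adopt a formal scheme such as C2PA content credentials or PREMIS preservation metadata. The principle, however, is the same: bind a cryptographic fingerprint of the artifact to a dated, attributable statement of where it came from.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_provenance_sidecar(source: Path, creator: str, description: str) -> Path:
    """Write a JSON sidecar (e.g. photo.jpg.provenance.json) binding a
    cryptographic hash of the source to human-supplied provenance fields.
    The field names are illustrative, not a formal metadata standard."""
    record = {
        "file": source.name,
        "sha256": sha256_of(source),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "creator": creator,          # who made or captured the source
        "description": description,  # what it is and how it was obtained
        "ai_generated": False,       # a declaration by the archivist, not a detection
    }
    sidecar = source.parent / (source.name + ".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Example: document a photograph taken by the archivist.
# write_provenance_sidecar(Path("interview_2025-06-01.jpg"),
#                          creator="J. Historian",
#                          description="Original, unedited photo from a Nikon D750")
```

A future researcher who recomputes the hash and gets the same digest knows the file is byte-for-byte the artifact the archivist described; any mismatch flags the copy as altered or substituted.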


[1] Benj Edwards, “Scientists once hoarded pre-nuclear steel: now we’re hoarding pre-AI content,” Ars Technica, June 10, 2025, https://arstechnica.com/ai/2025/06/why-one-man-is-archiving-human-made-content-from-before-the-ai-explosion/.

[2] Anna Furman, “Merriam-Webster’s word of the year for 2025 is AI ‘slop’,” PBS News, December 15, 2025, https://www.pbs.org/newshour/nation/merriam-websters-word-of-the-year-for-2025-is-ais-slop.

[3] Aisha Down, “More than 20% of videos shown to new YouTube users are ‘AI slop’, study finds,” The Guardian, December 27, 2025, https://www.theguardian.com/technology/2025/dec/27/more-than-20-of-videos-shown-to-new-youtube-users-are-ai-slop-study-finds.

[4] Emphasis added by author.

[5] Ann-Derrick Gaillot and Anna Amarotti, “What the Rise of AI Slop Means for Marketers,” Meltwater, November 27, 2025, https://www.meltwater.com/en/blog/ai-slop-consumer-sentiment-social-listening-analysis.

[6] Kevin Collier, “Large online propaganda campaigns are flooding the internet with ‘AI slop,’ researchers say: Researchers at Graphika say that online propaganda campaigns have flooded the internet with low-quality, AI-generated content,” NBC News, November 19, 2025, https://www.nbcnews.com/tech/security/online-propaganda-campaigns-are-using-ai-slop-researchers-say-rcna244618.

[7] Thor Olavsrud, “10 famous AI disasters,” CIO, December 17, 2025, https://www.cio.com/article/190888/5-famous-analytics-and-ai-disasters.html.

[8] Chase Varga, “AI Slop: When the Internet Drowns in Synthetic Junk,” ListenFirst, September 9, 2025, https://www.listenfirstmedia.com/ai-slop/.

[9] Michael LaBossiere, “Sci-Fi AI: Skynet Threat,” Florida Agricultural and Mechanical University, accessed December 30, 2025, https://www.famu.edu/academics/cypi/hewlett-cyber-policy-institute-blog/sci-fi-ai-skynet-threat.php.

[10] Sat Singh, “AI, Skynet, and why humans are losing the battle,” TEDx Rancho Mirage, September 4, 2025, https://www.youtube.com/watch?v=oYG2kFC2_D4.

The featured image for this article was AI-generated using Adobe Firefly.
The prompt was “A photo-realistic visual representation of an AI writing a non-fiction book about the dangers of using AI to create propaganda disinformation.”