The robot apocalypse hasn't happened yet, but still I can't escape the feeling that something has gone horribly wrong... Cartoon for Dutch newspaper Trouw.
More of my work for Trouw: https://www.trouw.nl/cartoons/tjeerd-royaards~bcb45712/
If the use of #GenerativeAI continues to grow in academia, will universities that wish to maintain a reputation for excellence be forced to examine undergraduates by viva?
It is the only approach I can think of that would work, but it would be hugely expensive.
"Artificial intelligence researchers are grappling with a problem core to their field: how to stop so-called “AI slop” from damaging confidence in the industry’s scientific work.
AI conferences have rushed to restrict the use of large language models for writing and reviewing papers in recent months after being flooded with a wave of poor AI-written content.
Scientists have warned that the surge of low-quality AI-generated material risks eroding trust and the integrity of the sector’s research by introducing false claims and made-up content.
“There is a little bit of irony to the fact that there’s so much enthusiasm for AI shaping other fields when, in reality, our field has gone through this chaotic experience because of the widespread use of AI,” said Inioluwa Deborah Raji, an AI researcher at the University of California, Berkeley.
Recent studies have highlighted the prevalence of the technology in AI research. In August, a study by Stanford University found that up to 22 per cent of computer science papers contained LLM usage."
https://www.ft.com/content/54e274c5-de86-4b3e-96a9-95a46b5e48a0
"Bottom line
The point isn’t that AI makes democracy impossible. The point is that democracy becomes brittle when it’s cheap to counterfeit social proof—when it costs little to run a fake crowd and minutes to manufacture “public opinion.”
The mission is straightforward: make large-scale impersonation and coordination harder to run, easier to detect, and less profitable to sustain. If we get that right, the public square does not need a central authority to decide what is true. It needs conditions where authentic human participation is visible—and where engineered consensus collapses the moment it tries to scale.
Key takeaways
The next wave of influence operations may not look like obvious copy-paste bots. It may look like communities: thousands of AI personas with memory, social identities, distinct styles, and coordinated goals.
The most dangerous outcome is not a single viral lie—it is synthetic consensus: the illusion that “everyone is saying this,” which can quietly bend beliefs and norms.
This is already moving from theory to reality. In July 2024, the U.S. DOJ announced it disrupted a Russia-linked AI-enhanced bot operation involving nearly 1,000 accounts impersonating Americans.
Defenses should not hinge on policing content. They should focus on coordination and provenance: detecting statistically unlikely patterns, stress-testing defenses with simulations, strengthening identity/proof signals, and shifting platform incentives."
https://www.powerofusnewsletter.com/p/when-ai-can-fake-majorities-democracy
#AI #GenerativeAI #AISwarms #Disinformation #Propaganda #SocialMedia #Democracy