Wikipedia bans AI-generated content in its online encyclopedia

https://www.theguardian.com/technology/2026/mar/27/wikipedia-bans-ai

Comments

Eufrat · Mar 28, 2026, 10:32 PM
They seemed open to giving it a try if they were actively involved in the experiment. Instead, it feels like a lot of people don’t really understand how Wikipedia is managed and thought that they could use it as a freeform place to get credibility or just test their pet projects.

Like this attempt, where the bot lectured users who were hostile toward it before it was eventually banned.

https://en.wikipedia.org/wiki/User_talk:TomWikiAssist

hallole · Mar 29, 2026, 3:13 AM
I've encountered AI contributions on Wikipedia, and, although I wonder how they'll enforce such a rule, I think this is the proper stance to take.

I think readers take for granted how concise Wikipedia's prose tends to be. AI, in comparison, seems built to ramble, being overly specific where it doesn't need to be and lacking specificity where it ought to have it.

When you think about it, "what should go on a thing's Wikipedia page?" is an interesting question; the answer certainly isn't "anything and everything." AI just doesn't have a good sense for what belongs, I feel.

_the_inflator · Mar 29, 2026, 8:04 AM
I use the history function from time to time and sometimes catch AI bloat.

I don’t do this systematically, just sometimes out of curiosity.

But it is always the same pattern: bloat, bloat, bloat.

What I view very critically is the so-called gender-neutrality movement, where large bodies of text get rewritten to serve a political agenda.

This is a major loss of quality. Language has been used for hundreds of years and gotten results as a means of communication; if you compare that with the recent declines connected to gender politics, you should be very worried, if you aren't already.

Even if some admins push such agendas, why not add a new mode, like a separate language variant, for those who want it? That would have been the old-school Wikipedia way; instead, the edit wars that actually happened have, sadly, cost Wikipedia massive credibility for me.

justonceokay · Mar 29, 2026, 4:12 AM
AI is tuned to be liked by everybody. Wikipedia is edited to be liked by nobody. An encyclopedia's job is not to be enjoyed.
swingboy · Mar 28, 2026, 10:45 PM
I've contributed a fair amount of primarily AI-generated content over the past few months, which I mainly just edit for the usual AI tropes, and it's pretty much all still up.
longislandguido · Mar 28, 2026, 7:50 PM
Will this open the door to editors' deletion of any content they dislike, under the guise that it might (or might not) be AI generated?

Can't wait for the 80 page Talk threads.

kjkjadksj · Mar 28, 2026, 7:59 PM
Don’t they already do that?
56745742597 · Mar 29, 2026, 1:36 AM
Yes, now they just have one more excuse.
slyall · Mar 28, 2026, 9:43 PM
This policy has been shared a lot by the anti-AI crowd over the last week. They are celebrating it as a major site saying no to AI.

It seems a smaller "win" than most think. Just discourages wholesale rewriting and creation of new articles using AI. Assistance with editing is explicitly allowed.

Kim_Bruning · Mar 29, 2026, 12:54 AM
Right, the Wikipedia rules are not that different from the HN rules. A human needs to be responsible for what finally goes on the page, and that's fair enough. There are some experimental (non-Wikimedia) wikis that use AI for editing, but they haven't taken off yet.
rose-knuckle17 · Mar 29, 2026, 1:49 AM
This is about as intelligent and practical as banning school kids in the 80s from using calculators, based on the logic that "you won't always have one with you".
scared_together · Mar 29, 2026, 5:12 AM
https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_wit...

> Text generated by large language models (LLMs) such as ChatGPT, Gemini, Claude, DeepSeek etc. often violates several of Wikipedia's core content policies. For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for these two exceptions

(The two exceptions are basic copyediting and translation).

I don't see how this is unintelligent or impractical. Wikipedia is trying to protect its core content policies, i.e. the very things that separate Wikipedia from Conservapedia, Grokipedia, RationalWiki, or any other wiki. They are willing to grant exceptions in cases where LLMs are valuable.

And they even acknowledge that:

> Some editors may have similar writing styles to LLMs. More evidence than just stylistic or linguistic signs is needed to justify sanctions

So it seems like the ban is only intended to be used in extremely egregious cases.

cozzyd · Mar 29, 2026, 12:58 AM
If you want an AI encyclopedia, that already exists.
rox_kd · Mar 29, 2026, 12:48 AM
Well on time, tbh. Or at least some sort of better moderation, because there have really been some unfortunate cases imo.
ChrisArchitect · Mar 28, 2026, 8:37 PM
pugchat · Mar 29, 2026, 2:28 AM
[dead]
56745742597 · Mar 29, 2026, 1:36 AM
Even AI slop is too factual for the self-proclaimed arbiters of truth.
jjmarr · Mar 28, 2026, 11:12 PM
This is the traditional "innovator's dilemma", where a skilled profession facing an imperfect technological threat decides not to adopt it until it is too late.

AI generated articles are, on the balance, inferior, except for people that want simple, low quality content.

But LLMs are moving up the value chain with Deep Research. They can give explanations tuned to a reader's knowledge/viewpoints and provide interactive content Wikipedia doesn't support. That is a killer app for math/science topics.

Wikipedia will win against a generic corporate encyclopedia on neutrality/oversight, but it'll lose badly on UX, which is what matters.

I think the tipping point will be direct integration of academic sources into ChatGPT/Claude/Gemini and a "WikiLink" type way to discover interesting follow-up topics.

I can't trust AI answers on serious historical or social-science topics because of the first point. And generally my chat with an AI ends once I get the answer I need, because I can't get rabbitholed into other topics.

Kim_Bruning · Mar 29, 2026, 12:52 AM
It REALLY depends on how you're using the AI. I get the strong impression a lot of people are still at the "I'll write a few prompts and see what happens" stage, and hoping for an answer from the magical oracle; as opposed to really using the tool. This never fails to disappoint.

I might be slightly wrong, but probably not by a lot, yet. Sure there's an element of "holding-it-wrong-ism" in my position. But ... it does actually take practice to get it right, and best practices are badly documented!

That said the situation is changing rapidly: https://news.ycombinator.com/item?id=47547849 "AI bug reports went from junk to legit overnight, says Linux kernel czar"

--

redanddead · Mar 29, 2026, 1:25 AM
It's not supposed to win on UX; its current UX is maybe too conservative, sure.

Of course they banned AI; they could barely allow CSS.