> SlopStop is getting ready! Report processing has not yet started at scale. We are reviewing the initial wave of reports and finalizing our systems for handling them.
> We will start processing reports officially in January. Please continue submitting reports as you find more content!
> SlopStop is Kagi’s community-driven feature for reporting low-quality, mass‑generated AI content (“AI slop”) found in web, image and video search results.

From that definition one might conclude slop is low-quality, mass-generated content, so why limit opposition to the subset that's from "AI"?
> If a domain is found to be mostly AI‑generated (typically more than 80% across its pages), that domain is flagged as AI slop and downranked in web search results.
I think that's pretty clear, no? One AI item is merely AI-generated; a trough of AI items is AI slop.
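To make that concrete, here's a minimal sketch of the domain-level rule, assuming some per-page classifier has already labeled each sampled page. Only the 0.8 cutoff comes from the quoted "more than 80%" figure; the names and the sampling are made up for illustration:

```python
# Sketch of the domain-level rule, assuming per-page labels already exist.
# Only the "more than 80%" cutoff comes from the announcement.
AI_SLOP_THRESHOLD = 0.8

def is_slop_domain(page_is_ai: list[bool], threshold: float = AI_SLOP_THRESHOLD) -> bool:
    """Flag a domain when the fraction of sampled pages classified as
    AI-generated exceeds the threshold."""
    if not page_is_ai:
        return False
    return sum(page_is_ai) / len(page_is_ai) > threshold

# 9 of 10 sampled pages look AI-generated -> the domain gets downranked.
print(is_slop_domain([True] * 9 + [False]))       # True  (90% > 80%)
print(is_slop_domain([True] * 7 + [False] * 3))   # False (70% <= 80%)
```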
Edit, as I think I misunderstood: there's more slop of the AI kind than of any other kind of low-effort content, and I think Kagi is already doing a good job of keeping a neat little index that avoids content farms, AI or otherwise. AI slop just happens to be a little harder to evaluate than regular slop (and in my experience is now more pervasive because it's cheaper to produce).
That works as a good definition for me. Whether or not you want to call it "slop", anything that helps filter out AI-generated content would be useful.
My only concern is that it seems to rely on user reporting, and if those reports include (mistakenly or otherwise) sites that don't have AI-generated content, that could make the tool less useful.
I can see this being weaponised against controversial sites made by humans, with no way to prove they are human.
“Symmetric” user reporting is dearly needed on some websites; as you say, something can be mass-reported with no real recourse.
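Purely to illustrate the mass-reporting risk: if reports are naively counted, a brigade of throwaway accounts can flip a domain. One hypothetical mitigation (not anything Kagi has described) is to weight each report by the reporter's confirmed track record:

```python
# Hypothetical mitigation sketch, not Kagi's actual design: weight each
# report by the reporter's historical precision (the share of their past
# reports that were later confirmed), so mass reports from accounts with
# no track record carry little weight.
def weighted_report_score(reports: list[tuple[str, float]]) -> float:
    return sum(precision for _reporter, precision in reports)

REPORT_SCORE_THRESHOLD = 1.0  # made-up cutoff for triggering human review

brigade = [(f"throwaway{i}", 0.05) for i in range(10)]  # coordinated mass reports
trusted = [("alice", 0.95), ("bob", 0.90)]              # two reporters with good history

print(weighted_report_score(brigade) >= REPORT_SCORE_THRESHOLD)  # False: ten throwaways, ~0.5 total
print(weighted_report_score(trusted) >= REPORT_SCORE_THRESHOLD)  # True: two trusted reporters, ~1.85
```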