I've searched before to check whether those keywords are actually useful.
They work on the NovelAI (NAI) model and its descendants (such as Anything and AbyssOrangeMix), because the model was actually trained on those keywords.
They don't work on models that don't have NAI as an ancestor, or that have been merged so much that the NAI component is diluted.
This is the answer. Almost every anime-based model has NovelAI as one of the "parents" it was merged from, so the keywords NovelAI was trained on still have an effect when using those models.
To build on this, Waifu Diffusion [used the same set of keywords](https://cafeai.notion.site/WD-1-5-Beta-Release-Notes-967d3a5ece054d07bb02cba02e8199b7). They used software to assign an aesthetic score to images scraped from a booru, and tagged each image with one of the following keywords depending on its score (a rough sketch of this mapping follows the list):
* masterpiece
* best quality
* high quality
* medium quality
* normal quality
* low quality
* worst quality
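
The release notes don't spell out the exact thresholds or scoring software, but the tagging step amounts to bucketing images by score. Here's a minimal sketch, assuming a 0-to-1 aesthetic score and made-up cutoffs:

```python
# Hypothetical sketch of score-to-keyword tagging; the thresholds and the
# 0-1 score range are assumptions, not Waifu Diffusion's actual values.

QUALITY_TIERS = [
    (0.90, "masterpiece"),
    (0.80, "best quality"),
    (0.65, "high quality"),
    (0.50, "medium quality"),
    (0.35, "normal quality"),
    (0.20, "low quality"),
]

def quality_keyword(aesthetic_score: float) -> str:
    """Map an aesthetic score to the highest quality tier it clears."""
    for threshold, keyword in QUALITY_TIERS:
        if aesthetic_score >= threshold:
            return keyword
    return "worst quality"

print(quality_keyword(0.83))  # -> "best quality"
```

An image scored 0.83 would be captioned "best quality", so at inference time that phrase steers the model toward the higher-scoring end of its training data.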
I believe this matches what NovelAI did, so putting "low quality" and "worst quality" in the negative prompt also helps to improve the image quality in most anime-based checkpoints.
https://reddit.com/r/StableDiffusion/s/a4HOquhGgT
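
For concreteness, here's a minimal sketch of that negative-prompt trick using the Hugging Face diffusers library; the checkpoint id is a placeholder, and any NAI-descended anime checkpoint should respond similarly:

```python
# Minimal sketch using diffusers; "some/anime-checkpoint" is a placeholder
# for whatever NAI-descended anime model you actually use.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "some/anime-checkpoint",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="masterpiece, best quality, 1girl, solo, looking at viewer",
    # Quality keywords in the negative prompt push generations away from
    # the low-scoring training buckets described above.
    negative_prompt="low quality, worst quality",
).images[0]
image.save("out.png")
```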