[ AI Art ] - Stable Diffusion discussion and sharing 02

696 replies
9 Like 8 Dislike
2023-11-30 12:25:30
Thanks
2023-12-02 21:45:06
inpaint
2023-12-03 00:30:27



2023-12-11 23:36:21
General Kim Jong-un
2023-12-19 16:15:15
Wishing everyone a Merry Christmas in advance

2023-12-21 23:30:31
Does anyone know which model this is? The first few images

AI lewd pics have already come this far?!
https://lih.kg/3577303
- Shared from the LIHKG forum
2024-01-06 21:05:39
Saw a Heidi LoRA on civitai
Which bro made it?
2024-01-07 07:54:44
Is there a link?
2024-01-08 23:41:10
2024-01-09 20:10:00
2024-01-11 11:43:07
Strange. I haven't had a computer to use lately; I'll try it a bit later.
Have any other bros tried it?
2024-01-14 19:18:16
2024-01-17 17:05:42
I'm a newbie. It took me two days of tinkering before I figured out how to install it.
Managed to generate two images.
2024-01-21 18:00:47
Someone has built a tool called Nightshade that claims to stop people from using your images to train AI. The idea is that it makes small changes to your image so that the AI misinterprets what the image depicts.

But now that the model-training experts have tried it, they all say it simply doesn't work at all, and everyone is scratching their heads.
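For context, the mechanism described above is an adversarial/poisoning perturbation. The sketch below is not Nightshade's actual algorithm (which targets text-to-image training data); it is only a minimal illustration of the underlying idea using a pretrained classifier: nudge the pixels by a small, bounded amount so the model reads the image as a different concept. The file names and target class are assumptions, and ImageNet preprocessing is omitted for brevity.

```python
# Minimal sketch of the *idea* behind poisoning perturbations, not Nightshade's
# real method: make a small, bounded pixel change that pushes a pretrained
# vision model toward reading the image as a different concept.
# "my_art.png" and the target class are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
model.requires_grad_(False)                      # only the perturbation is optimized

x = TF.to_tensor(Image.open("my_art.png").convert("RGB").resize((224, 224))).unsqueeze(0)
delta = torch.zeros_like(x, requires_grad=True)  # the perturbation we learn

target = torch.tensor([207])                     # some unrelated ImageNet class (assumption)
eps, step = 8 / 255, 1 / 255                     # keep the change visually small

for _ in range(40):
    loss = F.cross_entropy(model(x + delta), target)
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()        # move toward the wrong concept
        delta.clamp_(-eps, eps)                  # bound the per-pixel change
        delta.grad.zero_()

TF.to_pil_image((x + delta).detach().squeeze(0).clamp(0, 1)).save("my_art_poisoned.png")
```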
2024-02-09 13:46:42
stable-diffusion-webui-forge for low-VRAM machines: huge VRAM and speed improvements

Stable Diffusion Web UI Forge
Stable Diffusion Web UI Forge is a platform on top of Stable Diffusion WebUI (based on Gradio) to make development easier, optimize resource management, and speed up inference.

The name "Forge" is inspired from "Minecraft Forge". This project is aimed at becoming SD WebUI's Forge.

Compared to the original WebUI (for SDXL inference at 1024px), you can expect the speed-ups below:

If you use a common GPU with around 8GB of VRAM, you can expect roughly a 30~45% speed-up in inference speed (it/s); the GPU memory peak (in Task Manager) will drop by about 700MB to 1.3GB; the maximum diffusion resolution (without OOM) will increase about 2x to 3x; and the maximum diffusion batch size (without OOM) will increase about 4x to 6x.

If you use a less powerful GPU with around 6GB of VRAM, you can expect roughly a 60~75% speed-up in inference speed (it/s); the GPU memory peak (in Task Manager) will drop by about 800MB to 1.5GB; the maximum diffusion resolution (without OOM) will increase about 3x; and the maximum diffusion batch size (without OOM) will increase about 4x.

If you use a powerful GPU like a 4090 with 24GB of VRAM, you can expect roughly a 3~6% speed-up in inference speed (it/s); the GPU memory peak (in Task Manager) will drop by about 1GB to 1.4GB; the maximum diffusion resolution (without OOM) will increase about 1.6x; and the maximum diffusion batch size (without OOM) will increase about 2x.

If you use ControlNet with SDXL, the maximum ControlNet count (without OOM) will increase about 2x, and SDXL+ControlNet inference will speed up about 30~45%.
https://www.reddit.com/r/StableDiffusion/comments/1ajxus6/stablediffusionwebuiforge_for_low_vram_machines/
2024-02-09 13:53:56
New features (not available in the original WebUI)
Thanks to the Unet Patcher, many new things are now possible and supported in Forge, including SVD, Z123, masked IP-Adapter, masked ControlNet, PhotoMaker, etc.
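Forge applies these optimizations inside the WebUI itself, so there is nothing to code there. For readers working with diffusers instead, the sketch below shows the same kind of low-VRAM techniques (model offloading, VAE tiling) that trade a little speed for a much lower memory peak. This is not Forge's implementation; the model id and prompt are assumptions.

```python
# Not Forge itself: a rough diffusers sketch of the kind of low-VRAM techniques
# (model offloading, VAE tiling) whose management Forge automates in the WebUI.
# Model id and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()   # keep only the active sub-model on the GPU
pipe.enable_vae_tiling()          # decode large images in tiles to avoid OOM

image = pipe("a watercolor landscape", num_inference_steps=30).images[0]
image.save("sdxl_lowvram.png")
```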
2024-02-10 16:22:30
Wishing everyone great fortune in the Year of the Dragon

2024-02-22 23:38:55
SDXL-Lightning really is pretty fast. 4 steps is OK, and it looks better than Turbo.

https://x.com/c0nsumption_/status/1760526472165359808?s=20
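For anyone who wants to try it outside the WebUI, the snippet below follows the 4-step usage shown on the ByteDance/SDXL-Lightning Hugging Face page (distilled 4-step UNet checkpoint, trailing timestep spacing, CFG disabled); the prompt and output filename are placeholders.

```python
# 4-step SDXL-Lightning inference with diffusers, following the usage shown on
# the ByteDance/SDXL-Lightning model page. Prompt and output name are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "ByteDance/SDXL-Lightning"
ckpt = "sdxl_lightning_4step_unet.safetensors"   # match the checkpoint to the step count

# Load the distilled UNet into the standard SDXL pipeline.
unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(hf_hub_download(repo, ckpt), device="cuda"))
pipe = StableDiffusionXLPipeline.from_pretrained(
    base, unet=unet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Lightning checkpoints expect "trailing" timestep spacing and no CFG.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")

pipe("a girl smiling", num_inference_steps=4, guidance_scale=0).images[0].save("lightning_4step.png")
```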
2024-02-23 12:14:57
Is it really that good? I haven't played with it for a while and it has evolved this fast.
2024-02-23 17:19:39
2024-02-23 17:20:12
I don't have time to play with it for now either
2024-04-11 08:13:41
Using a 4090
i9 13th gen
96GB RAM

I use it to train LoRAs
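Most people train LoRAs with ready-made trainers (for example the kohya_ss scripts) rather than by hand. Just to illustrate what a LoRA structurally is, the sketch below freezes an SDXL UNet and attaches small trainable low-rank adapters to its attention projections via peft; the rank, alpha, and target modules are illustrative assumptions, and no training loop is shown.

```python
# A structural sketch only (not a full training script): freeze the SDXL UNet
# and attach trainable low-rank adapters to its attention projections via peft.
# Rank, alpha, and target modules are illustrative assumptions.
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
unet.requires_grad_(False)          # the base weights stay frozen

lora_config = LoraConfig(
    r=16,                           # rank of the low-rank update
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet.add_adapter(lora_config)       # only these adapter weights would be trained

n_trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
print(f"trainable LoRA parameters: {n_trainable:,}")
```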