I use Easy Diffusion, and I had a similar issue finding relevant documentation since the terminology is different. Ultimately, what I settled on is using resources like the one Hammurobbie provided (https://stable-diffusion-art.com/automatic1111/) while keeping in mind that the documentation uses different terms. For example, in img2img mode Easy Diffusion's 'Guidance Scale' is that guide's 'Denoising strength'. When reading the article, just swap the terms.
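If it helps, here's the kind of lookup I keep handy while reading A1111-oriented guides with Easy Diffusion open. Only the Guidance Scale pair above is from my own experience; the rest of the table is yours to fill in as you spot mismatches:

```python
# Easy Diffusion -> AUTOMATIC1111 terminology map. Not an official
# mapping, just my own reading of the guide; extend as needed.
ED_TO_A1111 = {
    "guidance scale": "denoising strength",  # img2img mode only
}

def translate(term: str) -> str:
    """Return the A1111 name for an Easy Diffusion term, or the term unchanged."""
    return ED_TO_A1111.get(term.lower(), term)

print(translate("Guidance Scale"))  # denoising strength
```

Terms without a known counterpart pass through unchanged, which is usually what you want when skimming a guide.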
To get good reference images, meaning ones relevant to what you want to present to a commissioned artist, the key is learning how to use Denoising/Guidance Scale, ControlNets (an opt-in beta feature in Easy Diffusion), and LoRAs. The general routine I now follow for generating images with Easy Diffusion is:
Write Prompt
Write Negative Prompt
Select Source Image (if using img2img)
Set Guidance Scale to 35%
Set Prompt Strength to 65%
Select ControlNet image and model (each model can give different results; some work better for certain images)
Add LoRAs with desired strengths
Batch generate in groups of 4 (I find this gives me more candidate images than generating one at a time, since each image in the batch uses its own seed by default)
Keep generating in the background, sometimes for hours on end, while adjusting Guidance Scale/Prompt Strength and trying different ControlNets and models until I get what I'm looking for
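For anyone who'd rather script a loop like this than click through a UI, the batch step looks roughly like the sketch below. The `generate` function is a placeholder, not Easy Diffusion's API (Easy Diffusion itself is driven through its web UI); the point is the per-image random seed, which is why batches surface more candidates:

```python
import random

def generate(prompt, negative_prompt, seed, prompt_strength=0.65, guidance=0.35):
    # Stand-in for whatever backend you actually call; purely illustrative.
    # The default strengths mirror the 65%/35% settings from my routine.
    return f"image(seed={seed})"

def batch_of_four(prompt, negative_prompt):
    # Each image gets its own random seed, matching Easy Diffusion's
    # default batch behavior.
    return [generate(prompt, negative_prompt, seed=random.randrange(2**32))
            for _ in range(4)]

images = batch_of_four("castle on a cliff", "blurry, low quality")
print(len(images))  # 4
```

Swapping `generate` for a real backend call and wrapping `batch_of_four` in a loop gives you the "keep generating in the background" part.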
I realize not all of this might align with what you're asking, but this is effectively all I've got in terms of communicable Easy Diffusion knowledge. Everything else is what I've picked up through trial and error, but not to the point where I can express it to others.
I’m not familiar with Easy Diffusion so I can’t answer your specific questions, but if you’ll permit me a moment to shill I’d encourage you to take a look at InvokeAI. I’m linking to their YouTube channel because it’s an informative and well-produced way to see what Invoke is all about. They’ve been putting a ton of energy into its UI and feature set, and the result is a really full-featured and easy-to-use system.
Writing good prompts and training models / LoRAs is an art that you can refine with practice, but there’s something to be said for having (and knowing how to use) good tools. IMO Invoke is one of the best tools available because of its unified canvas, node-based workflow editor, dynamic prompts, etc. I haven’t played with the latest version yet but I’m intrigued by the new batch queue feature too.
This link will probably help. It's a beginner's guide to automatic1111's repo, but it explains most of the features quite well: https://stable-diffusion-art.com/automatic1111/
Hi, I'm not the most knowledgeable about Stable Diffusion or ComfyUI (I'm more of an A1111 webui user myself), but I can try to answer your questions if you give some more information. What specific features are you hoping to learn more about?
This is what I used as well. It was all very unfamiliar but the steps are well written and I got it working with no problems.