ComfyUI workflow examples (collected from Reddit)
A group node that lets the user blend multiple image sources in many ways and apply custom effects, all from a central control panel.

I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. ComfyUI's inpainting and masking aren't perfect; see the second picture. The ComfyUI manual needs updating, in my opinion, so searching Reddit helps too.

It covers the following topics: Introduction to Flux.1; Flux Hardware Requirements; How to install and use Flux.1.

A second pass at 0.2 denoise fixes the blur and soft details. You can just reuse the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for about a month and it's finally ready, so I also made a tutorial on how to use it: https://youtu.be/ppE1W0-LJas. Upcoming tutorial: SDXL LoRA, plus using a 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

I built a free website where you can share and discover thousands of ComfyUI workflows: https://comfyworkflows.com/.

Aug 2, 2024 · Flux Dev. In addition, I provide some sample images that can be imported into the program.

It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting (easily done from the Fast Muter nodes) and Context Switches. Please keep posted images SFW.

This probably isn't the fully recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format.
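The "stay in latent space" trick described above can be sketched in ComfyUI's API-format workflow JSON: the second KSampler takes the first pass's LATENT output directly, so there is no VAE decode/encode between passes. This is a minimal illustrative graph, not a copy of any workflow from the thread; the checkpoint name, sizes, and node IDs are assumptions.

```python
# Hypothetical two-pass graph in ComfyUI API format ("class_type"/"inputs";
# a value like ["5", 0] means "output slot 0 of node 5").
two_pass = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a photo of a castle"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",  # first pass: full denoise
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "LatentUpscale",  # upscale the latent, not the image
          "inputs": {"samples": ["5", 0], "upscale_method": "nearest-exact",
                     "width": 2048, "height": 2048, "crop": "disabled"}},
    "7": {"class_type": "KSampler",  # second pass: 0.2 denoise fixes softness
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["6", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.2}},
}
```

The key point is that node 7's `latent_image` is wired to the LatentUpscale output rather than to a VAEEncode node.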
If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched; I think it was 3DS Max that used it. If "workflow" has only ever been used for ComfyUI's node graphs, I suggest just calling them "node graphs," or simply "nodes."

Open-sourced the nodes and an example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work. Nodes include LoadOpenAIModel. It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server.

SDXL Default ComfyUI workflow. Inside the workflow you will find a box with a note containing instructions and specifications on the settings that optimize its use.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion.

It would require many specific image-manipulation nodes to cut out an image region, pass it through a model, and paste it back.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searg's workflow, then copied the official ComfyUI i2v workflow into it, and I pass whatever image I like into that node. Mine do include workflows, for the most part, in the video description. Everything else is the same.

I used to work with Latent Couple and then the Regional Prompter module for A1111, which let me generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture). Is there a workflow with all features and options combined that I can simply load and use?

2/ Run the step-1 workflow ONCE; all you need to change is where the original frames are and the dimensions of the output you want. Just my two cents.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. The WAS suite has some workflow material in its GitHub links as well.
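For running a saved workflow outside the GUI, a simpler (hedged) variant of the idea above is to POST the API-format JSON to a running ComfyUI server's `/prompt` endpoint; the project described goes further and generates a standalone script, so treat this only as a sketch of the server-based approach.

```python
import json
import urllib.request

def build_prompt_request(graph: dict,
                         server: str = "http://127.0.0.1:8188") -> urllib.request.Request:
    """Build the POST request that submits an API-format workflow graph
    to a running ComfyUI server's /prompt endpoint."""
    body = json.dumps({"prompt": graph}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/prompt", data=body,
        headers={"Content-Type": "application/json"})

# To actually queue a workflow saved via "Save (API Format)":
#   with open("workflow_api.json") as f:
#       urllib.request.urlopen(build_prompt_request(json.load(f)))
```

Note that this requires the JSON exported in API format, not the regular graph save; the port 8188 default is ComfyUI's usual listen address but may differ in your setup.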
The video is just a screenshot of the workflow I used in ComfyUI to get the output files. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

Upscaling ComfyUI workflow.

Potential use cases include: streamlining the creation of a lean app or pipeline deployment that uses a ComfyUI workflow, and running programmatic experiments across prompt/parameter values.

A higher clipskip in A1111 (lower, i.e. more negative, in ComfyUI's terms) equates to LESS detail from CLIP (not to be confused with detail in the image).

You can construct an image-generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, and so on. [If for some reason you want to run something less than 16 frames long, all you need is this part of the workflow.] Warning. An all-in-one workflow would be awesome.

My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is that you create something that fits your needs.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

I'm not very experienced with ComfyUI, so any ideas on how to set up a robust workstation using common tools like img2img, txt2img, the refiner, model merging, LoRAs, etc. are welcome.

ControlNet Depth ComfyUI workflow. Img2Img ComfyUI workflow. About 2.75 s/it with the 14-frame model. Create animations with AnimateDiff.

This piece by Nathan Shipley didn't use this exact workflow, but it is a great example of how powerful and beautiful prompt scheduling can be. Share, discover, and run thousands of ComfyUI workflows.
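The A1111-vs-ComfyUI clipskip relationship above can be captured in a tiny helper. Assumption (hedged): A1111 counts skipped layers positively (default 1), while ComfyUI's "CLIP Set Last Layer" node expresses the same setting as a negative stop-at-layer index (default -1), so A1111 clip skip 2 corresponds to -2 in ComfyUI.

```python
def a1111_to_comfy_clip_skip(clip_skip: int) -> int:
    """Map A1111's positive 'Clip skip' value to the negative
    stop_at_clip_layer value used by ComfyUI's CLIP Set Last Layer node."""
    if clip_skip < 1:
        raise ValueError("A1111 clip skip starts at 1")
    return -clip_skip
```

So the larger the A1111 clip skip, the more negative the ComfyUI value, and (per the text above) the less detail CLIP contributes.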
No LoRAs, no fancy detailing (apart from face detailing). This is just a simple node build off what's given plus some of the newer nodes that have come out. I put the workflow to the test by creating people with hands, etc., and it got very good results. But as a base to start from, it'll work.

https://youtu.be/ppE1W0-LJas - the tutorial.

I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but that was way too optimistic. Still working on the whole thing, but I got the idea down, and the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. But let me know if you need help replicating some of the concepts in my process.

Here is an example of three characters, each with its own pose, outfit, features, and expression. Left: woman wearing full armor, ginger hair, braided hair, hands on hips, serious. Middle: girl, princess dress, blonde hair, tiara, jewels, sitting on a throne, blushing.

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use - img2img, ControlNet, upscale, and so on. I then just sort of pasted them together.

If you asked about how to put it into the PNG: you just need to create the PNG in ComfyUI, and it will automatically contain the workflow as well.

It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but that can be changed to whatever you want. It's the kind of thing that's a bit fiddly to use, so someone else's workflow might be of limited use to you.

Hi everyone, I'm working on a project to generate furnished interiors from images of empty rooms using ComfyUI and Stable Diffusion, but I want to avoid using inpainting.

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows.

The example images on top use the "clip_g" slot on the SDXL encoder on the left, but the default workflow's CLIPText on the right.
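The PNG-embedded workflow mentioned above can also be read back out programmatically. This is a hedged sketch: it assumes ComfyUI stores the graph in uncompressed `tEXt` chunks keyed "workflow" and "prompt", walks the PNG chunk layout by hand, and does not verify CRCs.

```python
import json
import struct
from typing import Optional

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def extract_embedded_workflow(png_bytes: bytes) -> Optional[dict]:
    """Return the JSON stored in a 'workflow' or 'prompt' tEXt chunk,
    or None if the PNG carries neither."""
    if not png_bytes.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIG)
    while pos + 8 <= len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")  # keyword NUL text
            if key in (b"workflow", b"prompt"):
                return json.loads(value.decode("utf-8", "replace"))
        pos += 8 + length + 4  # chunk header + data + CRC
    return None
```

This is why dragging an original PNG into ComfyUI restores the graph, and why a re-encoded or metadata-stripped copy will not.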
LoRA selector (for example, download the SDXL LoRA example from StabilityAI and put it into ComfyUI\models\lora\). VAE selector (download the default VAE from StabilityAI and put it into ComfyUI\models\vae\); just in case there's a better VAE, or a mandatory VAE for some model, in the future, use this selector. Restart ComfyUI.

Hey everyone, got a lot of interest in the documentation we did of 1600+ ComfyUI nodes, and I wanted to share the workflow + nodes we used to do it with GPT-4.

A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060). Welcome to the unofficial ComfyUI subreddit.

Pony, for example, I think always needs clipskip 2 because of how its CLIP is encoded.

For your all-in-one workflow, use the Generate tab. Merging two images together.

Flux Schnell is a distilled 4-step model. Only the LCM Sampler extension is needed, as shown in this video.

It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. I recently switched from A1111 to ComfyUI to mess around with AI-generated images. Ignore the prompts and setup. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

This guide is about how to set up ComfyUI on your Windows computer to run Flux.1.

It's a simpler setup than u/Ferniclestix uses, but I think he likes to generate and inpaint in one session, whereas I generate several images, then import them and inpaint later (like this).

You can then load or drag the following image in ComfyUI to get the workflow. The sample prompt as a test shows a really great result.
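The model-placement paths quoted above can be sketched as a shell session. The filenames here are placeholders standing in for real downloads, and note one hedge: the text says `models\lora\`, but current ComfyUI builds use `models/loras` (plural) for LoRA files.

```shell
# Create placeholder files standing in for the actual downloads (hypothetical names).
touch example-sdxl-lora.safetensors sdxl_vae.safetensors flux1-schnell.sft

# Folder layout as described in the text (loras/, vae/, unet/ under ComfyUI/models).
mkdir -p ComfyUI/models/loras ComfyUI/models/vae ComfyUI/models/unet
mv example-sdxl-lora.safetensors ComfyUI/models/loras/
mv sdxl_vae.safetensors ComfyUI/models/vae/
mv flux1-schnell.sft ComfyUI/models/unet/
```

After moving files in, restart ComfyUI (or refresh the node's file list) so the selectors pick them up.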
A1111 has great categories, like Features and Extensions, that simply show what the repo can do and what add-ons are out there. The examples were generated with the RealisticVision 5.1 checkpoint.

ComfyUI breaks down a workflow into rearrangeable elements, so you can easily make your own. AP Workflow 9.0 for ComfyUI. Breakdown of workflow content. You can then load or drag the following image in ComfyUI to get the workflow.

That being said, here's a 1024x1024 comparison as well. Here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai.

This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far for showing the difference between the Preliminary, Base, and Refiner setups.

Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. Table of contents.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. You can encode then decode back with a normal KSampler using 1.5 with LCM, 4 steps, and 0.2 denoise.

Run any ComfyUI workflow with ZERO setup (free and open source).

While waiting for it, as always, the number of new features and changes snowballed to the point that I must release it as is.

(Same seed, etc., of course.) To make the differences somewhat easier to see, the above image is at 512x512. Ending Workflow.

Flux.1 with ComfyUI. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

Jul 28, 2024 · Over the last few months I have been working on a project with the goal of letting users run ComfyUI workflows from devices other than a desktop, as ComfyUI isn't well suited to devices with smaller screens. Workflow. It's nothing spectacular, but it gives good, consistent results.
Seems very hit-and-miss; most of what I'm getting looks like 2D camera pans.

Comfy Workflows. 4 - The best workflow examples are the GitHub examples pages. How it works: download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

Hopefully this will be useful to you; I found it very helpful. ComfyUI could have workflow screenshots, as the examples repo does, to demonstrate possible usage and the variety of extensions. I think the perfect place for them is the Wiki on GitHub. Civitai has a few workflows as well.

About 1.86 s/it on a 4070 with the 25-frame model.

EDIT: For example, this workflow shows the use of the other prompt windows. (For 12 GB VRAM, the max is about 720p resolution.) Starting workflow.

The idea of this workflow is to sample different parts of the sigma_min, cfg_scale, and steps space with a fixed prompt and seed.

Flux.1 ComfyUI install guidance, workflow, and example. Please share your tips, tricks, and workflows for using this software to create your AI art.

Step 2: Download this sample image. Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.

Just a base sampler and upscaler.

You can't change clipskip and get anything useful from some models (SD 2.0 and Pony, for example) because of how their CLIP is encoded.

You can find the Flux Dev diffusion model weights here. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well.
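The sigma_min/cfg/steps sweep described above can be sketched as a simple grid build. The particular values and the way jobs get submitted are assumptions; substitute whatever enqueues one run in your ComfyUI setup.

```python
from itertools import product

# Hypothetical sweep ranges; the real ones depend on the sampler in use.
SIGMA_MIN = [0.01, 0.03, 0.1]
CFG = [4.0, 7.0, 10.0]
STEPS = [15, 25, 35]

def build_grid(prompt: str, seed: int = 1234):
    """One job description per (sigma_min, cfg, steps) combination,
    with the prompt and seed held fixed across the whole grid."""
    return [{"prompt": prompt, "seed": seed,
             "sigma_min": s, "cfg": c, "steps": n}
            for s, c, n in product(SIGMA_MIN, CFG, STEPS)]
```

Because the prompt and seed never change, any visual difference between the 27 outputs is attributable to the swept parameters alone.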
/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Overview of different versions of Flux.1.

The example pictures do load a workflow, but they don't have a label or text indicating whether it's version 3.1 or not. Workflow image with generated image.

But standard A1111 inpainting works mostly the same as this ComfyUI example you provided. That's where I'd gotten the second workflow I posted from, which got me going.

This workflow by Antzu is a good example of prompt scheduling, which is working well in Comfy thanks to Fitzdorf's great work.

If you have any of those generated images as original PNGs, you can just drop them into ComfyUI and the workflow will load. Surprisingly, I got the most realistic images of all so far. You can find the workflow here and the full image with metadata here.