ComfyUI workflow examples (Reddit)

For AP Workflow 9.0, I worked closely with u/Kijai, u/glibsonoran, u/tzwm, and u/rgthree to test new nodes, optimize parameters (don't ask me about SUPIR), develop new features, and fix bugs. I originally wanted to release 9.0. While waiting for it, as always, the amount of new features and changes snowballed to the point that I had to release it as is.

It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but that can be changed to whatever you like.

ComfyUI Fooocus Inpaint with Segmentation Workflow.

Welcome to the unofficial ComfyUI subreddit.

SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it.

You can find a lot through searching Reddit; the ComfyUI manual needs updating, IMO. You sound very angry.

That being said, here's a 1024x1024 comparison as well. (Same seed, etc., of course.) To make the differences somewhat easier to see, the above image is at 512x512.

And above all, BE NICE.

Standard A1111 inpainting works mostly the same as the ComfyUI example you provided.

This is an example of an image that I generated with the advanced workflow. This is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be skipped.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. Sure, it's not 2.1. How it works: download and drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.
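The drag-and-drop loading described above works because ComfyUI embeds the workflow JSON in the PNG's text chunks (under the keywords "workflow" and "prompt"). Here is a minimal stdlib sketch of reading that metadata back out — the chunk-walking helper is illustrative, not ComfyUI's own code:

```python
import json
import struct

def extract_workflow(png_bytes: bytes):
    """Scan a PNG's tEXt chunks for ComfyUI's embedded workflow JSON."""
    assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(png_bytes):
        # Each chunk: 4-byte length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, text = data.partition(b"\x00")
            if keyword in (b"workflow", b"prompt"):
                return json.loads(text.decode("utf-8"))
        pos += 12 + length
    return None
```

If an image host re-encodes or strips uploaded pictures, these chunks are gone — which is the usual reason a downloaded workflow picture loads nothing.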
Say, for example, you made a ControlNet workflow for copying the pose of an image. It would require many specific image-manipulation nodes to cut out an image region, pass it through the model, and paste it back. Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment.

The AP Workflow wouldn't exist without the incredible work done by all the node authors out there. Belittling their efforts will get you banned. Join the largest ComfyUI community.

Only the LCM Sampler extension is needed, as shown in this video. 9.0 was originally meant to ship with support for the new Stable Diffusion 3, but that was way too optimistic.

The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

Hi there. Civitai has a few workflows as well. I've got three tutorials that can teach you how to set up a decent ComfyUI inpainting workflow.

The first one is very similar to the old workflow and is just called "simple". An example of the images you can generate with this workflow:

4 - The best workflow examples are on the GitHub examples pages. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

To make random (but realistic) examples: the moment you start to want ControlNet in 2 different workflows out of your 10, or you need to fix 4 workflows out of 10 that use the Efficiency Nodes because v2.0, released yesterday, removed the on-board switch to include/exclude the XY Plot input, or you need to manually copy some generation parameters.

Adding the same JSONs to the main repo would only add more hell to the commit history and needlessly duplicate the already existing examples repo.

Breakdown of workflow content.

I played for a few days with ComfyUI and SDXL 1.0. You can find the Flux Dev diffusion model weights here.

No, because it's not there yet. You may need to look externally, as most missing custom nodes that are out of date with the latest ComfyUI may not be detected or shown by the Manager.
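The cut-region, process, paste-back flow described above can be illustrated with a toy pure-Python sketch. A grayscale image is modeled as nested lists, and `fn` is a stand-in for wherever the model would run — the names and shapes are my own, not ComfyUI nodes:

```python
from typing import Callable, List

Image = List[List[int]]  # toy grayscale image: rows of pixel values

def process_region(img: Image, x: int, y: int, w: int, h: int,
                   fn: Callable[[Image], Image]) -> Image:
    """Cut a w*h rectangle at (x, y), run it through `fn`
    (standing in for the inpainting model), and paste the result back."""
    region = [row[x:x + w] for row in img[y:y + h]]
    patched = fn(region)
    out = [row[:] for row in img]          # leave the input untouched
    for dy, row in enumerate(patched):
        out[y + dy][x:x + w] = row
    return out
```

A real workflow also needs mask feathering and latent/pixel conversions around this core, which is why it takes several dedicated nodes.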
Jul 28, 2024 · You can adapt ComfyUI workflows to show only the needed input params in the Visionatrix UI (see docs: https://visionatrix.github.io/VixFlowsDocs/ComfyUI2VixMigration.html).

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use - img2img, ControlNet, upscale, and so on. Is there a workflow with all features and options combined that I can simply load and use?

This is just a simple node build off what's given and some of the newer nodes that have come out.

The best external source would be the @comfyui-chat website, which I believe is from the official ComfyUI team.

You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

A good place to start if you have no idea how any of this works.

I want a ComfyUI workflow that's compatible with SDXL, with the base model, refiner model, hi-res fix, and one LoRA all in one go. I did some experiments and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself: MoonRide workflow v1.2. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with ease of use for end users.

Aug 2, 2024 · Flux Dev.
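The idea of exposing only the needed input params can be sketched against ComfyUI's API-format workflow JSON (a dict of node id → `class_type`/`inputs`). The whitelist below is a made-up example for illustration, not part of Visionatrix:

```python
# API-format workflow: {"<node_id>": {"class_type": ..., "inputs": {...}}}
# Hypothetical whitelist of the params a front end should surface.
EXPOSED = {"KSampler": ["seed", "steps", "cfg"], "CLIPTextEncode": ["text"]}

def visible_params(workflow: dict) -> dict:
    """Collect only whitelisted literal inputs so a UI can render a
    reduced form instead of the full node graph."""
    params = {}
    for node_id, node in workflow.items():
        inputs = node.get("inputs", {})
        for name in EXPOSED.get(node.get("class_type"), []):
            # List-valued inputs are links to other nodes, not user params.
            if name in inputs and not isinstance(inputs[name], list):
                params[f"{node_id}.{name}"] = inputs[name]
    return params
```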
It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. This repo contains examples of what is achievable with ComfyUI. Everything else is the same.

Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel.

You would feel less of a need to build some massive super-workflow because you've created yourself a sub-series of tools with your existing workflows.

AnimateDiff in ComfyUI is an amazing way to generate AI videos. Mine do include workflows, for the most part, in the video description.

ControlNet Depth ComfyUI workflow. And then the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow.

My primary goal was to fully utilise the 2-stage architecture of SDXL - so I have the base and refiner models working as stages in latent space.

Infinite Zoom: lots of pieces to combine with other workflows. 6. Upscaling ComfyUI workflow (second pic). Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. SDXL Default ComfyUI workflow.

Please keep posted images SFW. I meant using an image as input, not video.

This guide is about how to set up ComfyUI on your Windows computer to run Flux. But it is extremely light as we speak.

You feed it an image; it runs through OpenPose, Canny, Lineart, whatever you decide to include.
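The "upscales up to 4096x4096" behaviour mentioned earlier amounts to multiplying both sides and then shrinking uniformly if either would exceed the cap. A small sketch (my own helper, not a ComfyUI node; the 4x factor matches the 4x-UltraSharp default):

```python
def upscale_size(w: int, h: int, factor: float = 4.0, cap: int = 4096):
    """Scale both dimensions by `factor`, then shrink uniformly so
    neither side exceeds `cap`, preserving aspect ratio."""
    tw, th = w * factor, h * factor
    over = max(tw / cap, th / cap, 1.0)   # 1.0 means no clamping needed
    return round(tw / over), round(th / over)
```

For example, a 1536x1024 input scaled 4x would be 6144x4096, so it gets clamped back to 4096 on the long side.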
ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag-and-dropping a picture from that repo.

Motion LoRAs w/ Latent Upscale: This workflow by Kosinkadink is a good example of Motion LoRAs in action. 7. Flux.1 ComfyUI install guidance, workflow and example.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.?

Merging 2 Images together. It covers the following topics: ComfyUI Examples.

It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting (easily done from the Fast Muter nodes) and Context Switches. In this guide I will try to help you get started and give you some starting workflows to work with.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

You can then load or drag the following image into ComfyUI to get the workflow. But it separates the LoRA into another workflow (and it's not based on SDXL either). Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

However, we need it unless there's a slight possibility that some other alternative node pack can do the same process.

The second workflow is called "advanced" and it uses an experimental way to combine prompts for the sampler.
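A blend group like the one described boils down to a handful of per-pixel operations selected from a central control. A toy sketch over 0-255 grayscale nested lists (the mode names and signature are my own; real blend nodes offer many more modes):

```python
def blend(a, b, mode="mix", t=0.5):
    """Blend two equally sized grayscale images (0-255 nested lists).
    `t` is the mix weight for the plain crossfade mode."""
    ops = {
        "mix":      lambda x, y: x * (1 - t) + y * t,
        "multiply": lambda x, y: x * y / 255,
        "screen":   lambda x, y: 255 - (255 - x) * (255 - y) / 255,
    }
    op = ops[mode]
    return [[round(op(x, y)) for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```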
Two workflows included. Both of the workflows in the ComfyUI article use a single image as the input/prompt for the video creation and nothing else.

It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow; creating programmatic experiments for various prompt/parameter values.

AP Workflow 9.0 for ComfyUI. For your all-in-one workflow, use the Generate tab.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Img2Img ComfyUI workflow.

I built a free website where you can share & discover thousands of ComfyUI workflows -- https://comfyworkflows.com/. Share, discover, & run thousands of ComfyUI workflows.

Create animations with AnimateDiff. Now, because I'm not actually an asshole, I'll explain some things.

Hi Antique_Juggernaut_7, this could help me massively. That's the one I'm referring to. https://youtu.be/ppE1W0-LJas - the tutorial.

Hey everyone, got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do it, using GPT-4. It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to.
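For the "programmatic experiments for various prompt/parameter values" use case, the script-export route isn't the only option: a running ComfyUI instance accepts an API-format workflow at `POST /prompt`. A sketch with stdlib only — the host, node id, and field names are assumptions for illustration, and error handling is omitted:

```python
import json
import urllib.request
import uuid

def build_payload(workflow: dict, client_id: str) -> bytes:
    """Wrap an API-format workflow for ComfyUI's POST /prompt endpoint."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """Queue one generation on a locally running ComfyUI server."""
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=build_payload(workflow, uuid.uuid4().hex),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def sweep(workflow: dict, node_id: str, field: str, values):
    """Queue one run per value, e.g. sweeping cfg over [5, 7, 9]."""
    for v in values:
        workflow[node_id]["inputs"][field] = v
        queue_prompt(workflow)
```

The API-format JSON comes from ComfyUI's "Save (API Format)" export, which strips the UI layout and keeps only node ids, class types, and inputs.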
The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3 or not. If you see a few red boxes, be sure to read the Questions section on the page.

It provides a workflow for SDXL (base + refiner).

A lot of people are just discovering this technology and want to show off what they created. These people are exceptional.

The example images on the top are using the "clip_g" slot on the SDXL encoder on the left, but the default workflow CLIPText on the right. An all-in-one workflow would be awesome.

There you just search for the custom node. ComfyUI's inpainting and masking ain't perfect.

I normally dislike providing workflows, because I feel it's better to teach someone to catch a fish than to give them one.

I have a client who has asked me to produce a ComfyUI workflow as the backend for a front-end mobile app (which someone else is developing using React). He wants a basic faceswap workflow.

WAS suite has some workflow stuff in its GitHub links somewhere as well.