Why is ComfyUI faster? (Reddit)

I reinstalled ComfyUI when SDXL came out. I just can't get over how much faster ComfyUI is for me than Automatic1111 on my old Core i7 8700K with my Nvidia RTX 3060 12GB. It is harder to use, but only because it is low to the ground and you have to actually build the workflows.

I like the web UI more, but ComfyUI just gets things done quicker, and I can't figure out why.

But you can achieve this faster in A1111, considering the workflow of ComfyUI.

I don't have an Nvidia GPU, so I'm forced to use CPU mode, and it takes about 40-50 minutes to generate an image with a simple prompt.

I have gotten used to it after switching over from A1111.

Now I've been on ComfyUI for a few months and I won't turn A1111 on anymore. The main reason is of course just how much faster Comfy is.

I'll stay on ComfyUI since it works better for me: it's faster, more customizable, looks better (in that I can arrange nodes where I want), and its updates don't completely break the install for me like A1111's always do.

There have been a number of big changes to the ComfyUI core recently which should improve performance across the board, but there might still be some bugs that slow things down. If it's working fine, then probably all is well; if it isn't, let me know, because it's something I need to look into.

It requires a global Python and Git install (and I recommend conda too, but venv is good enough). That's the most stable way if you are a bit comfortable with Python package management.

After noticing the new UI without the floating toolbar and the top menu, my first reaction was to instinctively revert to the old interface. However, I decided to give it a try.

So I'm getting issues with loading this custom SDXL Turbo model into ComfyUI, and I get the following results: "Global Step: 840000, model_type EPS, adm 0". Do I have to use another…?

Fooocus would be even faster.

That's not to say you can't get equal or better quality in Comfy.

Just launched 2 days ago and putting in LoRA training now.

It is actually faster for me to load a LoRA in ComfyUI than in A1111.

So I can see you are grabbing prompts from other places and trying to get them to work for you, which is not bad.

If you are looking for a straightforward workflow that leads you quickly to a result, then Automatic1111. But if you want to go into more detail and have complete control over your composition, then ComfyUI.

The speed difference is far more noticeable on lower-VRAM setups, as ComfyUI is way more efficient when it comes to using RAM and VRAM.

I've tried everything, reinstalled drivers, reinstalled the app, and still can't get WebUI to run quicker.

I switched to ComfyUI after Automatic1111 broke yet again for me after the SDXL update. It also is much, much faster than Automatic1111.

They're not the same, lmao, so why do people keep saying this? ComfyUI uses the latest version of Torch (2.2) and the latest version of CUDA (12.1) by default, while the literal most recent bundled, ready-to-go zip installation of Automatic1111 uses Torch 1.X and CUDA 11.X, and not even the most recent versions of those the last time I looked at the bundled installer (a couple of weeks ago).
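That Torch/CUDA point is easy to check for your own setup. Below is a minimal sketch (not part of ComfyUI or A1111): run it with the Python interpreter from the environment that actually launches each UI and compare what it reports; the version numbers in the comments are only examples, not guaranteed to match your install.

```python
# Quick check of which PyTorch / CUDA build a given UI's Python environment uses.
# Run it once with ComfyUI's interpreter and once with A1111's (e.g. each venv's python).
import torch

print("torch:", torch.__version__)            # e.g. 2.2.x for a recent ComfyUI portable build
print("CUDA available:", torch.cuda.is_available())
print("built for CUDA:", torch.version.cuda)  # e.g. 12.1 vs 11.8; None on CPU-only builds
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```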
This combo is just as fast as the DDIM one I was using.

This is a workflow I made yesterday, and I've noticed that the second KSampler is about 7x faster, even though the second…

A few weeks ago I did a "spring-cleaning" on my PC and completely wiped my Anaconda environments, packages, etc.

Much faster, and I can't go back.

What are your normal settings for it? I want to give it a try for a bit.

When I first saw ComfyUI I was scared by how many options there are to set. Nodes! Because you can move them around and connect them however you want, you can also tell it to save out an image at any point along the way, which is great, because I often forget that stuff.

Plus, Comfy is faster, and with the ready-made workflows a lot of things…

Faster to start up, faster to load models, faster to gen, faster to change things. It's a real eye-opener after the snail-paced…

For my own purposes, ComfyUI does everything I already used, and it is easy to get running.

All complex workflows and additional things -> ComfyUI. Everything else (txt2img, img2img, ControlNet, IPAdapter, inpaint, etc.) -> the "webUI" part of Swarm.

"flux1-dev-bnb-nf4" is a new Flux model that is nearly 4 times faster than the Flux Dev version and 3 times faster than the Flux Schnell version. The quality compared to FP8 is really close.

It's from SDXL v0.9, but it should give decent image decoding as well.

Why is ComfyUI an order of magnitude faster than A1111?

I watched more carefully, and the reason for the speed difference should have been blatantly obvious to me the first time: the A1111 run was done using the Euler a sampler, while the ComfyUI run was done using DPM++ 2S a, which is about half as fast.
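That sampler mismatch is easy to reproduce outside either UI. The sketch below uses the diffusers library rather than A1111's or ComfyUI's own code paths, and the model id, prompt, and step count are arbitrary placeholders; KDPM2AncestralDiscreteScheduler is used here as a stand-in second-order ancestral sampler (roughly twice the model calls per step in diffusers), which is the kind of difference that can make a sampler look "half as fast" at the same step count.

```python
# Hedged sketch: time the same prompt with two samplers via diffusers.
import time
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerAncestralDiscreteScheduler,
    KDPM2AncestralDiscreteScheduler,
)

# Placeholder model id; swap in whatever SD 1.5-class checkpoint you have locally.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def timed_run(scheduler_cls, steps=20):
    # Swap the sampler while keeping the rest of the pipeline identical.
    pipe.scheduler = scheduler_cls.from_config(pipe.scheduler.config)
    torch.cuda.synchronize()
    start = time.time()
    pipe("a photo of a cat", num_inference_steps=steps)
    torch.cuda.synchronize()
    return time.time() - start

for cls in (EulerAncestralDiscreteScheduler, KDPM2AncestralDiscreteScheduler):
    print(cls.__name__, f"{timed_run(cls):.1f} s for 20 steps")
```

Comparing it/s numbers between UIs only means something when the sampler, step count, resolution, and precision all match.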
I use Automatic1111, but I see a lot of people swapping to ComfyUI. Is it worth it?

Just try it; you'll end up using both like all of us. However, with that being said, I prefer Comfy because you have more flexibility and you can really dial in your images.

ComfyUI is amazing. It is a learning curve, but when you get the gist of it, it's cool to be able to make any pipeline or workflow you wish.

But of course A1111 is still better in the sense of the amount of…

I've been using StableSwarmUI; it's perfect, a mix of both ComfyUI and WebUI.

We actually just revamped ArtroomAI, so now we have all of the speed, features, and support of ComfyUI but without the crazy barrier to entry. We're actually a little bit faster (it's still like 2x faster than…).

I see a lot of stuff for running it in Automatic1111, but can it be used with ComfyUI? In my site-packages directory I see "transformers" but not…

One of the strengths of ComfyUI is that it doesn't share the checkpoint with all the tabs, something you can't do in A1111: there, when you change the checkpoint, it changes it for all the active tabs.

Comfy does launch faster than Auto1111, but the UI will start to freeze if you do a batch or have multiple generations going on at the same time.

Introducing "Fast Creator v1.4", a free workflow for ComfyUI. Hey everyone! I'm excited to share the latest update to my free workflow for ComfyUI, "Fast Creator v1.4". This update includes new features and improvements to make your image creation process…

I had previously used ComfyUI with SDXL 0.9 and it was quite fast on my 8GB VRAM GPU (RTX 3070 Laptop).

I have an M1 MacBook Air with 8 GPU cores (vs. the standard 7).

UPDATE: In Automatic1111, my 3060 (12GB) can generate a 20 base-step, 10 refiner-step 1024x1024 Euler a image in just a few seconds over a minute.

Comfy is about 7% faster than Auto1111 for me with the same settings using SDXL, apart from the --medvram-sdxl launch option on Auto1111 (1024x1024, 3.0 it/s Comfy vs 2.8 it/s Auto1111 using Euler a).

I *don't use* the --cpu option, and these are the results I got using the default ComfyUI workflow and the v1-5-pruned-emaonly.ckpt model.

I am running ComfyUI on a machine with 2x RTX 4090 and am trying to use the ComfyUI_NetDist custom node to run multiple copies of the ComfyUI server, each using a separate GPU, to speed up batch generation. I have tried it (a) with one copy of SDXL running on each GPU and (b) with two copies of SDXL running per GPU.
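For anyone who wants to try the same multi-GPU batching idea without the ComfyUI_NetDist node, a rough sketch along these lines works against ComfyUI's standard POST /prompt endpoint. The ports, the launch commands in the comment, the workflow.json filename, and the KSampler node id are all assumptions for illustration, not details taken from the comments above.

```python
# Hypothetical sketch: spread a batch across two local ComfyUI servers, one per GPU.
# Assumes you started them yourself, e.g.:
#   CUDA_VISIBLE_DEVICES=0 python main.py --port 8188
#   CUDA_VISIBLE_DEVICES=1 python main.py --port 8189
import itertools
import json
import urllib.request

SERVERS = ["http://127.0.0.1:8188", "http://127.0.0.1:8189"]

def queue_prompt(server: str, workflow: dict) -> dict:
    """Queue one workflow (API-format JSON) on a ComfyUI server."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{server}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# workflow.json is a workflow exported with ComfyUI's "Save (API Format)" option.
with open("workflow.json") as f:
    workflow = json.load(f)

# Round-robin 8 copies of the job across the two servers.
for i, server in zip(range(8), itertools.cycle(SERVERS)):
    # Optionally vary the seed per job, e.g.:
    # workflow["3"]["inputs"]["seed"] = 1000 + i   # node id "3" is an assumption
    print(i, queue_prompt(server, workflow))
```

Each server keeps its own model copy in VRAM, so this trades memory for throughput rather than speeding up a single image.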
Comfy is the barebones version. A1111 is like ComfyUI with prebuilt workflows and a GUI for easier usage. But those structures it has prebuilt for you aren't…

From what I've read here, people who prefer Comfy tend to like it better because it is (a) faster and/or more resource-efficient, and/or (b) more flexible and powerful for the deep-diving workflow crafters, the code nerds who make their own nodes, and the wonks who build…

Learn ComfyUI faster (question): how can I proceed? I watched some videos and managed to install ComfyUI, but when I try to load workflows I found on the web or install custom nodes, I get errors about missing nodes, and I can't install them from the manager.

On my rig, it's about 50% faster, so I tend to mass-generate images on ComfyUI, then bring any images I need to fine-tune over to A1111 for inpainting and the like.

I started on A1111. Comfy is faster than A1111, though, and you have a lot of creative freedom to play around with.

I haven't found easy methods to replicate stuff like that in Comfy.

If it allowed more control, then more people would be interested, but it just replaces dropdown menus and windows with nodes. Yes, I know I'm just as sick and tired of A1111 not working as anyone else is, but when you have to build a whole damn machine on your own to do a basic feature that the most basic Stable Diffusion user interface does with a click of a bubble, it's mind-boggling that anyone bothers. Which is why I'm shocked that ComfyUI has had the growth it has.

Hi :) I am using AnimateDiff in ComfyUI to output videos, but the speed feels very slow. I am wondering if this is normal.

I'm using an RTX 3080 10GB GPU, and I do OOM with…

No idea why, but I get like 7.13s/it on ComfyUI, and on WebUI I get like 173s/it.

Try using an fp16 model config in the CheckpointLoader node. That should speed things up a bit on newer cards. It should be at least as fast as the A1111 UI if you do that.

While ComfyUI already provides fast rendering, there are several techniques you can implement to further enhance its performance. In this article, we will explore these steps to help you…
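One way to see why the fp16 advice above tends to help on newer cards is to time half precision against full precision directly. This is a generic PyTorch micro-benchmark, not ComfyUI code; the matrix size and iteration count are arbitrary, and real-world gains in a diffusion UI depend on the whole model and sampler, not a single matmul.

```python
# Illustrative timing of fp32 vs fp16 matrix multiplies on the local GPU.
import time
import torch

def bench(dtype, size=4096, iters=20):
    if not torch.cuda.is_available():
        return None
    a = torch.randn(size, size, device="cuda", dtype=dtype)
    b = torch.randn(size, size, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        _ = a @ b
    torch.cuda.synchronize()
    return (time.time() - start) / iters

for dtype in (torch.float32, torch.float16):
    t = bench(dtype)
    print(dtype, "no CUDA device" if t is None else f"{t * 1000:.1f} ms per matmul")
```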