
Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created, so above all, be nice; belittling their efforts will get you banned.

The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

"The training requirements of our approach consists of 24,602 A100-GPU hours – compared to Stable Diffusion 2.1's 200,000 GPU hours." From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion 2.1.

Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins, so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support give them serious potential… I wonder if Comfy and Invoke will somehow work together, or if things will stay fragmented between all the various UIs.

On Linux with the latest ComfyUI I am getting 3.53 it/s for SDXL and a little over 4 it/s for SD1.5 while creating a 896x1152 image via the Euler-A sampler. So it's either launch arguments that I don't know about for ComfyUI, or some config stuff I've missed with ComfyUI.

Hi r/comfyui, we worked with Dr.Lt.Data to create a command line tool to improve the ergonomics of using ComfyUI. Some main features are: automatically install ComfyUI dependencies, launch and run workflows from the command line, and install and manage custom nodes via cm-cli (ComfyUI-Manager as a CLI). Options: `--install-completion` installs completion for the current shell; `--show-completion` shows completion for the current shell, to copy it or customize the installation.

And with ComfyUI my command line arguments are: `--directml --use-split-cross-attention --lowvram`. The most important thing is to use tiled VAE for decoding, which ensures no out-of-memory at that step. With this combo it now rarely gives out of memory (unless you try crazy things); before, I couldn't even generate with SDXL on ComfyUI at all.

I think a function must always have "self" as its first argument. I'm not sure why, and I don't know if it's specific to Comfy or if it's a general rule for Python. (It is general Python: methods defined on a class take "self" first, and ComfyUI custom nodes are classes.) Anyway, whenever you define a function, never forget the self argument! I have barely scratched the surface, but through personal experience, you will go much further!

I stand corrected. The VAE can be found here and should go in your ComfyUI/models/vae/ folder.

`FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json` … `got prompt` (the normal ComfyUI-Manager and queue messages in the console).

What worked for me was to add a simple command line argument to the file: `--listen 0.0.0.0`. The final line in the run_nvidia_gpu.bat looks like this: `.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram --listen 0.0.0.0`. Additionally, I've added some firewall rules for TCP/UDP for port 8188.

Hey all, is there a way to set a command line argument on startup for ComfyUI to use the second GPU in the system? With Auto1111 you add `set CUDA_VISIBLE_DEVICES=1` to the webui-user.bat file, but with ComfyUI this doesn't seem to work! Thanks! The usual answer: find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file. Open the .bat file with Notepad, make your changes, then save it; every time you run the .bat file, it will load the arguments. Command line arguments can be put in the bat files used to run ComfyUI like this, separated by a space after each command.
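Putting the two snippets above together, here is a sketch of what an edited run_nvidia_gpu.bat could look like for routing the portable build to a second GPU. The flag combination is illustrative, and `set CUDA_VISIBLE_DEVICES=1` is the same environment-variable trick quoted from the Auto1111 thread, not a ComfyUI-specific option:

```bat
rem Sketch of an edited run_nvidia_gpu.bat (portable build assumed).
rem CUDA_VISIBLE_DEVICES=1 hides GPU 0, so ComfyUI lands on the second card.
set CUDA_VISIBLE_DEVICES=1
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --listen 0.0.0.0
pause
```

Recent ComfyUI builds also have a `--cuda-device` argument that selects the GPU directly, if your version includes it.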
Aug 8, 2023 · Wherever you are running main.py, whether that be in a .bat file, a .sh file, or on the command line, you can just add the --lowvram option straight after main.py.

I use an 8GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me.

I think, for me at least for now, with my current laptop, using ComfyUI is the way to go. Using ComfyUI with my GTX 1650 is simply way better than using Automatic1111. I didn't quite understand the part where you can use the venv folder from another webui like A1111 to launch it instead and bypass all the requirements to launch ComfyUI. Thanks in advance for any information.

I keep hearing that A1111 uses the GPU to feed the noise creation part, and ComfyUI uses the CPU.

Aug 2, 2024 · You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB of RAM.

I've seen people say ComfyUI is better than A1111 and gave better results, so I wanted to give it a try, but I can't find a good guide or info on how to install it on an AMD GPU. There are also conflicting resources: the original ComfyUI GitHub page says you need to install DirectML and then somehow run it if you already have A1111, while other places say you need Miniconda/Anaconda to run it… For what it's worth, I somehow got it to magically run with AMD despite the lack of clarity and explanation on the GitHub, and literally no video tutorial on it. Has anyone tried or is still trying?

For the latest daily release, launch ComfyUI with this command line argument: `--front-end-version Comfy-Org/ComfyUI_frontend@latest`. For a specific version, replace `latest` with the desired version number. Do you need it? Only if you want it early.
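As a concrete sketch of that flag, assuming the portable build (a plain `python main.py` launch takes the same argument), both variants look like this; `1.2.3` is a placeholder, not a real release number:

```bat
rem Latest daily frontend release:
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --front-end-version Comfy-Org/ComfyUI_frontend@latest

rem Or pin a specific release (the version number here is a placeholder):
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --front-end-version Comfy-Org/ComfyUI_frontend@1.2.3
```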
So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. (The intended behavior: just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.) Update ComfyUI and all your custom nodes, and make sure you are using the correct models.

I don't find ComfyUI faster: I can make an SDXL image in Automatic1111 in 4.2 seconds with TensorRT (the same image takes 5.6 seconds in ComfyUI), and I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.

ComfyUI is much better suited for studio use than other GUIs available now. Workflows are much more easily reproducible and versionable, and ComfyUI is also trivial to extend with custom nodes. VFX artists are also typically very familiar with node-based UIs, as they are very common in that space. ComfyUI was written with experimentation in mind, so it's easier to do different things in it, and anything that works well gets adopted by the larger community and finds its way into other Stable Diffusion software eventually. You might still have to add the occasional command line argument for extensions or something.

Any idea why the quality is much better in Comfy? I like InvokeAI: it's more user-friendly, and although I aspire to master Comfy, it is disheartening to see a much easier UI give sub-par results. I tested with different SDXL models and tested without the LoRA, but the result is always the same. These images might not be enough (in numbers) for my argument, so I invite you to try it out yourselves and see if it's any different in your case.

Hi all: how to ComfyUI with ZLUDA. All credit goes to the people who did the work! lshqqytiger, LeagueRaINi, Next Tech and AI (YouTuber). I just pieced…

But inpainting in Comfy is still terrible. Not sure if I simply didn't manage to configure it right, but when inpainting faces in 1111 I can make it inpaint at a specific resolution, so I do faces at 512x512.

I am not sure what kind of settings ComfyUI used to achieve such optimization, but if you are using Auto1111, you could disable live preview and enable xformers (what I did before switching to ComfyUI). That helped the speed a bit.

Can you let me know how to fix this issue? I have the following arguments: `--windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory`.
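For reference, that poster's argument stack dropped into a portable-build launch line; this is one user's combination rather than a recommended preset, so trim flags that don't apply to your card:

```bat
rem One user's low-VRAM stack, taken verbatim from the thread above.
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory
pause
```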
Also, if this is new and exciting to you, feel free to post! On vacation for a few days, I installed ComfyUI portable on a USB key and plugged it into a laptop that wasn't too powerful (just the minimum 4 gigabytes of VRAM). Using ComfyUI was a better experience: the images took around 1:50 to 2:25 minutes at 1024x1024 / 1024x768, all with the refiner.

I get 1.6s/it with Comfy as opposed to 4.9s/it with 1111.

Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin. This is a plugin that allows users to run their favorite features from ComfyUI and, at the same time, work on a canvas. Supports: basic txt2img, basic img2img, and inpainting (with auto-generated transparency masks). Keep in mind that SD1.5, SD2.1, and SDXL are all trained on different resolutions, and so models for one will not work with the others.

I just released an open source ComfyUI extension that can translate any native ComfyUI workflow into executable Python code. I originally created it to learn more about the underlying code base powering ComfyUI, but after building it I think it could be useful for anyone in the community who is more comfortable coding than using GUIs. It's very early stage, but I am curious what folks think and excited to update it over time! The goal is to a) make it easy for semi-casual users (e.g. Discord bot users lightly familiar with the models) to supply prompts that involve custom numeric arguments (like # of diffusion steps, LoRA strength, etc.).

I used to do it manually, with symlinks and command arguments and the like. But there's an even easier way now: StabilityMatrix. It's the same thing, but they've done most of the work for you.

Finally I gave up with ComfyUI nodes and wanted my extensions back in A1111.

I have read the section of the GitHub page on installing ComfyUI. It said to follow the instructions for manually installing for Windows. I downloaded the Windows 7-Zip file and ended up, once unzipped, with a large folder of files. I tried installing the dependencies by running the pip install in a terminal window in the ComfyUI folder. For a portable install, launch a terminal in the ComfyUI folder and use `.\python_embeded\python.exe -m pip install [dependency]`.

==[Update]== Launching in CPU mode is successful (`python main.py --cpu`), but of course not ideal. This narrows the problem down to GPU/PyTorch packages. Updating to the correct latest version of PyTorch is what is needed. I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI (the console shows a torchvision `nms' with arguments from …` error). It appears some other AMD GPU users have similar unsolved issues.

Wherever you launch ComfyUI from is where you need to set the launch options, like so: `python main.py --lowvram --auto-launch`.
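A sketch of the same idea for a manual (git clone) install; the install path is hypothetical, and the commented-out line is the last-resort CPU mode from the update above:

```bat
rem Manual install: run main.py from wherever you cloned ComfyUI (path is illustrative).
cd C:\ComfyUI
python main.py --lowvram --auto-launch

rem Last-resort CPU mode (works, but slow):
rem python main.py --cpu
```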
Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend. The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore, starting with installation.

ComfyUI: an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything, and it now supports ControlNets.

Hey, I'm new to using ComfyUI and was wondering if there are command line arguments to add to the launch file like there are in Automatic1111. (Yes: put them in the run_nvidia_gpu.bat file, as in the examples above.)

Here are some examples I did generate using ComfyUI + SDXL 1.0 with the refiner.

Even though I keep hearing people focus the discussion on the time it takes to generate the image (and yes, ComfyUI is faster; I have a 3060), I would like people to be discussing whether the image quality is better in one or the other.

Has anyone managed to implement Krea.AI or Magnific AI in ComfyUI? I've seen the web source code for Krea AI, and I've seen that they use SD 1.5 (+ ControlNet, PatchModel…). I don't know what Magnific AI uses. I haven't managed to reproduce this process in ComfyUI yet.

Hello! I've been playing around with ComfyUI for months now and reached a level where I wanna make my own LoRAs, both character and environment. But where do I begin? Anyone know any good tutorials for a LoRA training beginner? Doing it in ComfyUI or any other SD UI doesn't matter to me, only that it's done locally.

For some reason, it broke my ComfyUI when I did it earlier. I did a clean install right now and it works perfectly.

I only have 4GB VRAM, so I'm just trying to get my settings optimized. I scoured the internet and came across multiple posts saying to add the arguments --xformers --medvram, but these arguments did not work for me: --xformers gave me a minor bump in performance (8s/it vs 11s/it), but it still takes about 10 minutes per image. While I primarily utilize PyTorch cross attention (SDP), I also tested xformers to no avail. (Note that --xformers and --medvram are Automatic1111 arguments; ComfyUI's own memory flags are --normalvram, --lowvram, and --novram.)
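Following on from that note, a minimal sketch for a 4GB card, again assuming the portable build: try `--lowvram` first, and only fall back to `--novram`, ComfyUI's most aggressive memory mode, if decoding still runs out of memory:

```bat
rem 4GB VRAM sketch: start with --lowvram...
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram

rem ...and if you still hit out-of-memory, try --novram instead:
rem .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --novram
```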