ComfyUI best upscale models (Reddit roundup)

Attach a "latent_image" to it, in this case the "upscale latent". "Upscaling with model" is an operation on normal images, and we can use a corresponding model for it, such as 4x_NMKD-Siax_200k.pth. "Latent upscale" is an operation in latent space, and I don't know of any way to use the model mentioned above in latent space.

With a denoise setting of 0.25 I get a good blending of the face without changing the image too much. If you let it get creative (higher denoise), it adds appropriate details.

I've been generating with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and consistently getting the best skin and hair details I've ever seen.

ComfyUI is a powerful, modular diffusion model GUI, API, and backend with a graph/nodes interface: you create nodes and "wire" them together.

It works great with SD 1.5 combined with ControlNet Tile and the Foolhardy upscale model. Always wanted to integrate one myself. It works with any 1.5 model and can be applied to Automatic1111 easily. This way it replicates the SD Upscale / Ultimate SD Upscale scripts from A1111.

Jan 5, 2024 · Click on Install Models in the ComfyUI Manager menu. Search for "upscale" and click Install for the models you want.

So I'm curious what my best option/operation/workflow and upscale model would be. The idea is simple: use the refiner as a model for upscaling instead of using a 1.5 model.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refining: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image. I get good results using stepped upscalers, the Ultimate SD Upscaler, and the like.

Upscale by x1.5 ~ x2: no need for a model, it can be a cheap latent upscale. Sample again at denoise 0.5; you don't need that many steps. Note that latent upscale looks much more detailed, but gets rid of the detail of the original image.
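The "upscale by x1.5 ~ x2" step above implies picking a new resolution. Since SD latents are 1/8 the pixel size, it helps to snap the new dimensions to multiples of 8. A minimal sketch of that arithmetic (my own helper, not a ComfyUI node):

```python
def upscaled_size(width, height, factor=1.5, multiple=8):
    """Scale a resolution by `factor`, snapping to a multiple of 8
    so the result maps cleanly onto an SD latent grid."""
    def snap(v):
        return max(multiple, round(v * factor / multiple) * multiple)
    return snap(width), snap(height)

print(upscaled_size(512, 512))        # 1.5x: (768, 768)
print(upscaled_size(768, 1152, 2.0))  # 2x:   (1536, 2304)
```

The same snapping logic applies whether the resize happens in latent space or on pixels before VAE Encode.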
This generates an SD 1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

r/StableDiffusion: finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6.0-RC.

For comparison, in A1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script, and scale it up.

For upscaling I mainly used the chaiNNer application with models from the Upscale Wiki Model Database, but I also used the fast stable diffusion Automatic1111 Google Colab and the Replicate super-resolution collection. Welcome to the unofficial ComfyUI subreddit.

I rarely use upscale-by-model on its own because of the odd artifacts you can get. It's a lot faster than tiling, but the outputs aren't detailed. As for the comparison: image upscale is less detailed, but more faithful to the image you upscale.

Sep 7, 2024 · Here is an example of how to use upscale models like ESRGAN. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

It's especially amazing with SD 1.5. After generating my images I usually do Hires. fix.

Usually I use two of my workflows. So I was looking through the ComfyUI nodes today and noticed there is a new one called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors. I haven't been able to replicate this in Comfy.

Good for depth and OpenPose; so far so good.
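Tiled upscaling (the Tiled KSampler / Ultimate SD Upscale approach above) just means diffusing the large image in overlapping windows and blending the seams. A rough sketch of how such a tile grid can be computed (my own helper, not the extension's actual code):

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Return (left, top, right, bottom) crop boxes covering the image,
    overlapping by `overlap` px so seams can be blended afterwards."""
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            boxes.append((left, top, min(left + tile, width), min(top + tile, height)))
    return boxes

print(len(tile_boxes(2048, 2048)))  # 25 tiles for a 2048x2048 image
print(len(tile_boxes(512, 512)))    # 1 tile: the whole image
```

Each box would then be sampled at the tile resolution the checkpoint was trained for, which is why 512-ish tiles behave better on SD 1.5 than 1024 ones.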
To get the absolute best upscales requires a variety of techniques, and often regional upscaling at some points.

In 1.6.0-RC it's taking only 7.5GB of VRAM even when swapping in the refiner; use the --medvram-sdxl flag when starting.

For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer.

I'd say it allows a very high level of access and customization, more than A1111, but with added complexity.

A denoise around 0.65 seems to be the best. In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control for how much that model will multiply (often a slider from 1 to 4 or more).

A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

That's because latent upscale turns the base image into noise (blur). Like, I can understand that using the Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node.

Upscaling: increasing the resolution and sharpness at the same time. If you let it get creative (i.e. higher denoise), it adds appropriate details. Model: base SD v1.5.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searg's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.

Jan 13, 2024 · TLDR: Both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, seg/masking) can improve your upscales. For the best results, diffuse again with a low denoise, tiled or via Ultimate Upscale (without scaling!). Still working on the whole thing, but I got the idea down.

Which options on the encoder and decoder nodes would work best for this kind of system?
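The "slider from 1 to 4" behaviour other UIs offer can be emulated with a fixed-scale model: run the 4x model, then resize the result back down by target/model. A sketch of the arithmetic (the function name is my own, for illustration):

```python
def rescale_after_model(orig_w, orig_h, model_scale=4, target_scale=1.5):
    """Emulate an arbitrary upscale slider with a fixed-scale model:
    upscale by `model_scale`, then resize by target/model to hit the target."""
    post_factor = target_scale / model_scale     # e.g. 1.5 / 4 = 0.375
    up_w, up_h = orig_w * model_scale, orig_h * model_scale
    return round(up_w * post_factor), round(up_h * post_factor), post_factor

print(rescale_after_model(512, 768))  # (768, 1152, 0.375)
```

In ComfyUI terms this is the ImageUpscaleWithModel node followed by a plain image resize at `post_factor`.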
I mean tile sizes for the encoder and decoder (512 or 1024?) and the diffusion dtype of the SUPIR model loader: should I leave it as auto, or any ideas? Thank you again and keep up the good work.

Download this first and put it into the folder inside ComfyUI called custom_nodes, then restart ComfyUI. You should then see a new button on the left tab (the last one); click that, then click Missing Custom Nodes, and just install the one. Once you have installed it, restart ComfyUI once more and it should work.

If you want actual detail in a reasonable amount of time, you'll need a 2nd pass with a 2nd sampler.

Does anyone have any suggestions? Would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best. Hope someone can advise. Thanks.

Hi, does anyone know if there's an Upscale Model Blend Node, like with A1111? Being able to get a mix of models in A1111 is great, where two models… From what I've generated so far, the model upscale edges out the Ultimate Upscale slightly. Thanks. For some context, I am trying to upscale images of an anime village, something like Ghibli style. Moreover, batch folder processing was added.

I believe it should work with 8GB VRAM provided your SDXL model and upscale model are not super huge, e.g. a 2X upscaler model.

So my question is: is there a way to upscale an already existing image (in the 250-pixel range) in Comfy, or do I need to do that in A1111? I assume most everything is 512 and higher, based on SD 1.5. Then another node under Loaders: the Load Upscale Model node.

I'm new to the channel and to ComfyUI, and I come looking for a solution to an upscaling problem. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.
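The install advice above boils down to two folders. A sketch of the default ComfyUI layout (the `COMFY` path is a placeholder; adjust it to your install):

```shell
# Sketch only: default ComfyUI folder layout.
COMFY=./ComfyUI
mkdir -p "$COMFY/models/upscale_models" "$COMFY/custom_nodes"
# Upscaler weights (e.g. 4x_foolhardy_Remacri.pth) go in models/upscale_models;
# extensions like ComfyUI_UltimateSDUpscale are cloned into custom_nodes,
# then ComfyUI is restarted so it picks them up.
ls "$COMFY/models"
```

After restarting, ComfyUI Manager's Missing Custom Nodes button handles anything a loaded workflow still needs.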
But basically txt2img, img2img, and 4x upscale with a few different upscalers. The latest version can be downloaded here. For the model, 4x_foolhardy_Remacri.pth is another good option.

Now go back to img2img, mask the important parts of your image, and upscale that.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Hi guys, an all-in-one workflow would be awesome.

For SD 1.5 I'd go for Photon, RealisticVision, or epiCRealism. I'm on SD 1.5, but I have some really old images I'd like to add detail to.

Connect the Load Upscale Model node to the Upscale Image (using Model) node after VAE Decode, then route that image to your preview/save image node. So from VAE Decode you need an "Upscale Image (using Model)" node; the loader is under Loaders.

There is no tiling in the default A1111 hires. fix.

I am curious both which nodes are best for this, and which models. Category: Universal Models, Official Research Models, Art/Pixel Art, Model Collections, Pretrained Models.

You could also try a standard checkpoint with, say, 13 and 30. The downside is that it takes a very long time.

I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler, and upscale from that.

Also, both have a denoise value that drastically changes the result. Is there a way to "pause the flow" before the latent upscale until a switch is flipped, so that one could do the latent upscale only on the images one favors?

Instructions for using any base model have been added to the scripts in the shared post.

The realistic model that worked best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely.

So I made an upscale test workflow that uses the exact same latent input and destination size. Do you all prefer separate workflows or one massive all-encompassing workflow?

In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.
A pixel upscale using a model like UltraSharp is a bit better (and slower), but it'll still be fake detail when examined closely.

The same seed is probably not necessary and can cause bad artifacting through the "burn-in" problem when you stack same-seed samplers. Though, from what someone else stated, it comes down to use case.

If a caption file exists (e.g. from our SOTA batch captioners like LLaVA), it will be used as the prompt.

Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image. Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

It uses CN Tile with Ultimate SD Upscale. Also converted the base model to Juggernaut-XL-v9. Then output everything to Video Combine.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)) and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

I'm trying to combine the Ultimate SD Upscale with a Blur ControlNet like I do in Automatic1111, but I keep getting errors in ComfyUI.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out.

For the second pass: Upscale Latent By roughly 1.5x the original size; Seed: 12345 (same seed); CFG: 3 (same CFG); Steps: 5 (same); Denoise: this is where you have to test. 0.45 is the minimum and fairly jagged.

In this ComfyUI tutorial we look at my favorite upscaler, the Ultimate SD Upscaler; it doesn't seem to get as much attention as it deserves. But it's weird: upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much.

I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.
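The denoise advice scattered through these comments (0.45 minimum and jagged, ~0.65 best, 0.80 often mutated) can be condensed into a rule-of-thumb helper. This is just the thread's heuristic written down, not anything authoritative:

```python
def denoise_hint(denoise):
    """Rule of thumb from the thread for a second-pass upscale sampler:
    below ~0.45 adds little, ~0.5-0.7 is the sweet spot, 0.8+ mutates."""
    if denoise < 0.45:
        return "too low: jagged, little new detail"
    if denoise <= 0.7:
        return "good range"
    return "creative: image content may change"

print(denoise_hint(0.65))  # good range
```

The right value still depends on the checkpoint and tile size, which is why "this is where you have to test" remains the honest answer.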
Put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them.

There's "Latent Upscale By", but I don't want to upscale the latent image. So in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers), but I'm not forced to have them only multiply by 4x.

The aspect ratio of 16:9 is the same from the empty latent and anywhere else that image sizes are used.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

ComfyUI uses a flowchart diagram model. You can also run a regular AI upscale and then a downscale (4x * 0.5), with an ESRGAN model.

Tried the llite custom nodes with lllite models and was impressed.

A denoise of 0.80 is usually mutated but sometimes looks great.

I decided to pit the two head to head, with x4-upscaler-ema.safetensors (the SD 4X Upscale model) on one side. Here are the results; workflow pasted below (I did not bind it to the image metadata because I am using a very custom, weird setup). That's because of the model upscale. Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again. OP: "So, this morning, when I left for…"

Messing around with upscale-by-model on its own is pointless for hires fix. So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).

If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.?
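That KSampler -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By -> VAE Encode -> KSampler chain can be written down in ComfyUI's API (JSON) workflow format. The skeleton below is illustrative only: the class_type names match the stock nodes as I understand them, but the inputs are abbreviated and would need real model, VAE, and conditioning references before the server would accept it.

```python
# Hypothetical API-format skeleton of the two-pass upscale chain.
# A value like ["1", 0] references output 0 of node "1".
workflow = {
    "1": {"class_type": "KSampler", "inputs": {"denoise": 1.0}},  # first pass
    "2": {"class_type": "VAEDecode", "inputs": {"samples": ["1", 0]}},
    "3": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x_foolhardy_Remacri.pth"}},
    "4": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["3", 0], "image": ["2", 0]}},
    "5": {"class_type": "ImageScaleBy",  # downscale the 4x result to 2x
          "inputs": {"image": ["4", 0], "upscale_method": "bilinear",
                     "scale_by": 0.5}},
    "6": {"class_type": "VAEEncode", "inputs": {"pixels": ["5", 0]}},
    "7": {"class_type": "KSampler",   # second pass at low denoise
          "inputs": {"latent_image": ["6", 0], "denoise": 0.5}},
}

order = [workflow[i]["class_type"] for i in "1234567"]
print(" -> ".join(order))
```

Exporting any working graph via "Save (API Format)" in ComfyUI shows the exact input names your installed version expects.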
This model yields way better results. I'm sure I'm just doing something wrong when implementing the CN.

Here is a workflow that I use currently with Ultimate SD Upscale. For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). I don't usually bother going over 4K though; you get diminishing returns on render times with only 8GB VRAM ;P

* If you are going for fine details, don't upscale in 1024x1024 tiles on an SD15 model unless the model is specifically trained on such large sizes.

All of this can be done in Comfy with a few nodes. Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

Best method to upscale faces after doing a faceswap with ReActor? It's a 128px model, so the output faces after faceswapping are blurry and low-res. ReActor has built-in CodeFormer and GFPGAN, but all the advice I've read said to avoid them. It turns out lovely results, but I'm finding that when I get to the upscale stage, the face changes to something very similar every time.

And when purely upscaling, the best upscaler is called LDSR. You can use it on any picture; you will need ComfyUI_UltimateSDUpscale. It has more settings to deal with than Ultimate Upscale, and it's very important to follow all of the recommended settings in the wiki.

Taking the output of a KSampler and running it through a latent upscaling node results in major artifacts (lots of horizontal and vertical lines, and blurring).

I usually do Hires. fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x.
Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait for most outputs. There are also "face detailer" workflows for faces specifically.

If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials.

The 4X upscalers I've tried aren't great with it; I suspect the starting detail is too low. I love to go with an SDXL model for the initial image and a good 1.5 model for the diffusion after scaling; see the workflow for more info.

One does an image upscale and the other a latent upscale. I want to upscale my image with a model, and then select its final size. So latent upscaling gives really nice results, but it is really slow on my 2060 Super.

If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale. E.g. use a 2X upscaler model.