
Given groups=1, weight of size [512, 16, 3, 3], expected input[1, 4, 128, 32] to have 16 channels, but got 4 channels instead #6769

Open
1240022745 opened this issue Feb 10, 2025 · 4 comments
Labels
User Support A user needs help with something, probably not a bug.

Comments

@1240022745

Your question

ComfyUI Error Report

Error Details

  • Node ID: 10
  • Node Type: VAEDecodeTiled
  • Exception Type: RuntimeError
  • Exception Message: Given groups=1, weight of size [512, 16, 3, 3], expected input[1, 4, 128, 32] to have 16 channels, but got 4 channels instead
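The message can be read directly from the tensor shapes: the decoder's first convolution has weight `[512, 16, 3, 3]`, i.e. `out_channels=512` and `in_channels=16` (a 16-channel VAE, as used by FLUX/SD3), while the supplied latent `[1, 4, 128, 32]` has only 4 channels (an SD1.x/SDXL-style latent). A minimal PyTorch sketch (standalone, not ComfyUI code) reproduces the mismatch:

```python
import torch
import torch.nn as nn

# The decoder's first convolution as reported: weight [512, 16, 3, 3],
# i.e. it expects latents with 16 channels (FLUX/SD3-style VAE).
conv_in = nn.Conv2d(16, 512, kernel_size=3, padding=1)

# A 4-channel latent, as produced by an SD1.x/SDXL-style model.
z = torch.randn(1, 4, 128, 32)

try:
    conv_in(z)  # raises RuntimeError: expected 16 channels, got 4
except RuntimeError as e:
    print(e)

# The same latent decodes fine through a 4-channel decoder head:
conv_in_sd = nn.Conv2d(4, 512, kernel_size=3, padding=1)
out = conv_in_sd(z)
print(out.shape)  # torch.Size([1, 512, 128, 32])
```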

Stack Trace

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 320, in decode
    images = vae.decode_tiled(samples["samples"], tile_x=tile_size // compression, tile_y=tile_size // compression, overlap=overlap // compression, tile_t=temporal_size, overlap_t=temporal_overlap)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\comfy\sd.py", line 523, in decode_tiled
    output = self.decode_tiled_(samples, **args)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\comfy\sd.py", line 442, in decode_tiled_
    (comfy.utils.tiled_scale(samples, decode_fn, tile_x // 2, tile_y * 2, overlap, upscale_amount = self.upscale_ratio, output_device=self.output_device, pbar = pbar) +

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\comfy\utils.py", line 976, in tiled_scale
    return tiled_scale_multidim(samples, function, (tile_y, tile_x), overlap=overlap, upscale_amount=upscale_amount, out_channels=out_channels, output_device=output_device, pbar=pbar)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\comfy\utils.py", line 948, in tiled_scale_multidim
    ps = function(s_in).to(output_device)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\comfy\sd.py", line 440, in <lambda>
    decode_fn = lambda a: self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)).float()

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\models\autoencoder.py", line 139, in decode
    x = self.decoder(z, **kwargs)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\modules\diffusionmodules\model.py", line 709, in forward
    h = self.conv_in(z)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 98, in forward
    return super().forward(*args, **kwargs)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)

  File "C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,

System Information

  • ComfyUI Version: 0.3.14
  • Arguments: C:\BaiduNetdiskDownload\ConfyUI-aki\ComfyUI-aki-v1.4\main.py --auto-launch --preview-method auto --disable-cuda-malloc
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 Laptop GPU : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 8585216000
    • VRAM Free: 5195392494
    • Torch VRAM Total: 2516582400
    • Torch VRAM Free: 374040046

@1240022745 1240022745 added the User Support A user needs help with something, probably not a bug. label Feb 10, 2025
@LukeG89

LukeG89 commented Feb 10, 2025

You used the wrong VAE model

@ReJulien

> You used the wrong VAE model

Hi, I have the same exact problem, but I don't know which VAE model is correct then...

@ltdrdata
Collaborator

ltdrdata commented Feb 14, 2025

> You used the wrong VAE model
>
> Hi, I have the same exact problem, but I don't know which VAE model is correct then...

You cannot mix SD diffusion model and FLUX VAE model. Or vice versa. You must always ensure that the VAE model you apply matches the diffusion model.
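One way to catch this before decoding is to compare the latent's channel dimension against the VAE decoder's expected input channels. The helper below is a hypothetical sketch (not a ComfyUI API), assuming direct access to the decoder's `conv_in` layer:

```python
import torch

def check_latent_vae_match(latent: torch.Tensor, conv_in: torch.nn.Conv2d) -> None:
    """Raise a readable error if the latent's channel count does not match
    the VAE decoder's first convolution. Hypothetical helper for illustration."""
    expected = conv_in.in_channels
    got = latent.shape[1]
    if got != expected:
        raise ValueError(
            f"Latent has {got} channels but the VAE decoder expects {expected}. "
            "SD1.x/SDXL latents have 4 channels; FLUX/SD3 latents have 16. "
            "Load the VAE that matches your diffusion model."
        )

# Example: a 4-channel SD latent against a 16-channel (FLUX-style) decoder head.
flux_conv_in = torch.nn.Conv2d(16, 512, kernel_size=3, padding=1)
sd_latent = torch.randn(1, 4, 128, 32)
try:
    check_latent_vae_match(sd_latent, flux_conv_in)
except ValueError as e:
    print(e)
```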

@ReJulien

> You used the wrong VAE model
>
> Hi, I have the same exact problem, but I don't know which VAE model is correct then...
>
> You cannot mix SD diffusion model and FLUX VAE model. Or vice versa. You must always ensure that the VAE model you apply matches the diffusion model.

Yes, of course, that's why I'm confused.
The VAE I loaded should in theory be correct, but now I'm not even sure.
Is this one (for example) correct?
https://civitai.com/models/636193/flux-vaesft
It seems simple, but it's confusing to me! Thanks for the reply!
