Hi there, I am currently trying to reproduce your project and have faced some issues that I cannot resolve; I was wondering if you could help me?
I am currently running the DualSR pretraining using
python train.py -opt options/train/DualSR_pretrain.json
with the JSON configs as follows:
The output shows a mismatch in the input channel size:
20-11-10 00:57:08.525 - INFO: Model [DualSR_pretrain] is created.
20-11-10 00:57:08.525 - INFO: Start training from epoch: 0, iter: 0
Traceback (most recent call last):
File "train.py", line 182, in<module>main()
File "train.py", line 112, in main
model.optimize_parameters(current_step)
File "/home/e517herb/Desktop/GDSR/codes/models/DualSR_pretrain.py", line 68, in optimize_parameters
self.fake_Low, self.fake_high, self.fake_mask = self.netG(self.var_L)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/_utils.py", line 385, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/e517herb/Desktop/GDSR/codes/models/modules/DualSR_RRDB.py", line 89, in forward
x = self.body_ex(x) # shared module, after 2 RRDB passes
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/e517herb/Desktop/GDSR/codes/models/modules/block.py", line 94, in forward
output = x + self.sub(x)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/e517herb/Desktop/GDSR/codes/models/modules/block.py", line 240, in forward
out = self.RDB1(x)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/e517herb/Desktop/GDSR/codes/models/modules/block.py", line 215, in forward
x1 = self.conv1(x)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "/home/e517herb/anaconda3/envs/GDSR/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size 32 64 3 3, expected input[7, 3, 32, 32] to have 64 channels, but got 3 channels instead
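For context, here is a minimal standalone sketch of what the error seems to boil down to, assuming (from the weight shape [32, 64, 3, 3] in the traceback) that the trunk convs expect 64-channel feature maps; the layer names below are mine, not the repo's:

```python
import torch
import torch.nn as nn

# Trunk conv as implied by the traceback: weight of size [32, 64, 3, 3],
# i.e. it expects 64 input channels.
trunk_conv = nn.Conv2d(64, 32, kernel_size=3, padding=1)

# The batch that reaches it is still a raw 3-channel LR image: [7, 3, 32, 32].
lr_batch = torch.randn(7, 3, 32, 32)

try:
    trunk_conv(lr_batch)
except RuntimeError as e:
    print(e)  # "... expected input[7, 3, 32, 32] to have 64 channels, but got 3 channels instead"

# What would avoid it: a head conv lifting the 3 image channels to the trunk's
# 64 feature channels before the shared RRDB body (illustrative only).
head_conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
out = trunk_conv(head_conv(lr_batch))
print(out.shape)  # torch.Size([7, 32, 32, 32])
```

So it looks as if the 3-channel LR tensor reaches the shared RRDB body directly, i.e. either a first 3→64 conv is skipped or the input-channel setting in the JSON does not match what the network expects (the parameter names in my sketch are only illustrative).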
Is there something I did wrong? Thanks in advance!
@wenchen4321 thank you for your quick reply and bug fix. I did make it to training, but the furthest I can get is epoch: 0, iter: 1000. Here is the output: