
Tensor size mismatch error for batch_size>1 - File: gb_encoder.py #53

Open
m-abdelkarim opened this issue Feb 16, 2023 · 1 comment

@m-abdelkarim

m-abdelkarim commented Feb 16, 2023

When batch_size is set to any value > 1, the following error is triggered:

" File "D:\Photorealism\PhotorealismEnhancement\code\epe\network\gb_encoder.py", line 137, in forward
features += classmap[:,c,:,:] * self.class_encoders[c](
RuntimeError: The size of tensor a (2) must match the size of tensor b (128) at non-singleton dimension 1"

The error occurs in the line above. As seen in the second screenshot, the shapes don't match. Even when I try to reshape classmap[:,c,:,:] so that the multiplication works, an assert is thrown in the discriminators.py file, as shown in the third screenshot. Could anyone help in this regard?
[Screenshots: the error traceback, the mismatched tensor shapes, and the assert in discriminators.py]
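
For reference, a minimal sketch of the broadcasting failure (the sizes are made up; features is assumed to be [N, 128, H, W] and classmap [N, num_classes, H, W], consistent with the traceback):

```python
import torch

# Hypothetical sizes standing in for the real ones.
n, feat_ch, h, w = 2, 128, 8, 8
features = torch.zeros(n, feat_ch, h, w)
classmap = torch.rand(n, 12, h, w)
encoded  = torch.rand(n, feat_ch, h, w)   # stand-in for self.class_encoders[c](...)
c = 0

# classmap[:, c, :, :] is [n, h, w]; broadcasting right-aligns it against
# [n, feat_ch, h, w], so n collides with feat_ch at dimension 1 once n > 1.
# features += classmap[:, c, :, :] * encoded   # RuntimeError for n > 1

# Keeping the singleton channel dimension lets it broadcast for any n:
features += classmap[:, c:c+1, :, :] * encoded
print(features.shape)  # torch.Size([2, 128, 8, 8])
```

Note this only makes the multiplication broadcast; the asserts in discriminators.py are a separate batch_size=1 constraint (see the comment below).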

@witchoco

witchoco commented Aug 31, 2023

Yes... I am facing the same issue.
Besides removing the constraints you listed above, there are some other slight modifications I've made in epe/network/discriminators.py, around line 164:
```python
# precomputed segmentation
#_, _, hy, wy = y.shape                  # original implementation (batch_size=1 constraint)
#y = self.embedding(y.reshape(-1))       # original implementation (batch_size=1 constraint)
#y = y.permute(1,0).reshape(1,c,hy,wy)   # original implementation (batch_size=1 constraint)
# changed to:
ny, _, hy, wy = y.shape
y = self.embedding(y.reshape(ny, -1))          # [ny, hy*wy, c]
y = y.permute(0, 2, 1).reshape(ny, c, hy, wy)  # [ny, c, hy, wy]
```

Now it's working... but I'm not sure whether it's correct or whether it affects the results.
I'm not familiar with embeddings.
Could anybody please share some explanation of why the batch_size=1 constraint is there?
Thank you guys so much~
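
For anyone else unsure about the embedding step, a minimal sketch of what the reshape does (num_labels and all sizes here are made-up illustration values, not from the repo). It also checks that the batched path above matches the original batch_size=1 path per sample:

```python
import torch
import torch.nn as nn

ny, c, hy, wy = 2, 16, 4, 4        # batch, embedding dim, height, width (hypothetical)
num_labels = 32                    # number of segmentation label ids (assumed)
embedding = nn.Embedding(num_labels, c)

y = torch.randint(0, num_labels, (ny, 1, hy, wy))  # label map, one id per pixel

# Batched version from the comment above:
e = embedding(y.reshape(ny, -1))                # [ny, hy*wy, c]
e = e.permute(0, 2, 1).reshape(ny, c, hy, wy)   # [ny, c, hy, wy]

# Sanity check: sample 0 matches the original batch_size=1 path.
e0 = embedding(y[0].reshape(-1))                # [hy*wy, c]
e0 = e0.permute(1, 0).reshape(1, c, hy, wy)     # [1, c, hy, wy]
assert torch.allclose(e[0:1], e0)
print(e.shape)  # torch.Size([2, 16, 4, 4])
```

nn.Embedding is just a lookup table from integer label ids to learned c-dimensional vectors, applied independently per element, so batching it this way should not change the per-sample result; the original reshape simply hard-coded a batch of 1.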
