Predicted output image is split

I understand my question is ambiguous, but I couldn’t help asking. I’m working on 3D reconstruction. Basically, I have latent codes. I train two models: one that generates a latent code representing a 3D scene, and one that renders an image from that latent code.

But my output image is split in two. Can you guess why or when this would happen? I wanted to post my code, but since I don’t know which part is wrong and the code is very long, I thought it would be better to ask for some guesses and track down the issue myself.


OMG. Unbelievably, I somehow fixed it. lol

values.shape = [4, 128, 1, 1, 409600]
feature_size = 128
bs (batch_size) = 4

I need to get this into shape [bs, -1, feature_size].

Before (this was causing the split images):

values = values.squeeze().reshape(bs, -1, feature_size)

After (fixed):

values = values.squeeze().permute(0, 2, 1)

I thought those two versions did the same thing?
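To check this myself, here is a small sketch with the shapes above (I shrank the 409600 points down to 16 so it runs quickly; the behavior is the same). Both versions produce the same shape, but reshape refills the new shape in memory order, scattering each point’s 128 features across different rows, while permute keeps every feature vector attached to its point:

```python
import torch

bs, feature_size, n_points = 4, 128, 16  # n_points shrunk for the sketch
values = torch.randn(bs, feature_size, 1, 1, n_points)

# Wrong: reshape reads the squeezed [bs, feature_size, n_points] tensor
# in memory order and refills [bs, -1, feature_size], mixing features
# from different points into the same row.
wrong = values.squeeze().reshape(bs, -1, feature_size)

# Right: permute swaps the feature and point axes, so row i of each
# batch still holds the full feature vector of point i.
right = values.squeeze().permute(0, 2, 1)

print(wrong.shape == right.shape)   # same shape...
print(torch.allclose(wrong, right)) # ...but different contents
```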

Hello @jun_suk_ha,
unfortunately, I cannot say why your images looked the way they did above. But I can say that there is a difference between permute and reshape, even though they can produce tensors of the same shape. For illustration, I have created a small example below. Note that I have dropped the “[1, 1]” dimensions, because they are removed by the .squeeze() call:

import torch

n_features = 6
batch_size = 2
misc = 100
a = torch.randn(batch_size, n_features, misc)
b = a.reshape(batch_size, -1, n_features)
c = a.permute(0, 2, 1)
print(torch.allclose(b, c))  # → False
print(b.shape == c.shape)    # → True
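To make the difference visible element by element, here is an even smaller deterministic sketch (using torch.arange instead of randn): reshape reads the data in flat memory order and refills the target shape, while permute swaps the axes so each (position, feature) pair stays together:

```python
import torch

# A tiny tensor: 1 batch, 2 features, 3 positions.
a = torch.arange(6).reshape(1, 2, 3)
# a[0] = [[0, 1, 2],
#         [3, 4, 5]]   rows are features, columns are positions

# reshape reads elements in memory order and refills the new shape:
b = a.reshape(1, 3, 2)
# b[0] = [[0, 1],
#         [2, 3],
#         [4, 5]]   feature values get mixed across positions

# permute swaps the last two axes, keeping each position's features intact:
c = a.permute(0, 2, 1)
# c[0] = [[0, 3],
#         [1, 4],
#         [2, 5]]   row i holds exactly the features of position i
```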