Change latent space dimension

Hi everyone.

I am trying to create an AE and insert an extra neuron after the encoder part. I can easily do this in the forward pass by concatenating a new neuron onto the encoded data and feeding it into the decoder. However, the backward pass doesn't allow it and fails with mismatched dimensions.

import numpy as np
import torch

# Creating a PyTorch class
class AE(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(9351, 128),
            torch.nn.Linear(128, 9)
        )

        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(10, 18),
            torch.nn.Linear(18, 9351),
            torch.nn.Sigmoid()
        )

    def forward(self, x, x2):
        encoded = self.encoder(x)
        # append the extra neuron to the 9-dim latent code by overwriting .data in place
        encoded.data = torch.FloatTensor(np.concatenate((encoded.data, x2), axis=0))
        decoded = self.decoder(encoded)
        return decoded

model = AE()

# Validation using MSE loss function

loss_function = torch.nn.MSELoss()

# Using an Adam optimizer with lr = 0.1

optimizer = torch.optim.Adam(model.parameters(), lr=1e-1, weight_decay=1e-8)

epochs = 10
outputs = []
losses = []
torch.autograd.set_detect_anomaly(True)

for epoch in range(epochs):
    for i, image in enumerate(loader):
        x2 = [temp[i]]
        reconstructed = model(image, x2)
        optimizer.zero_grad()
        optimizer.step()
        loss = loss_function(reconstructed, image)
        loss.backward(retain_graph=True, inputs=reconstructed)
        losses.append(loss)

RuntimeError: Function UnsqueezeBackward0 returned an invalid gradient at index 0 - got [10] but expected shape compatible with [9]

Any help would be very much appreciated!

Manipulating the internal .data attribute is deprecated and can raise these kinds of errors:

encoded.data = torch.FloatTensor(np.concatenate((encoded.data, x2), axis=0))

as it skips Autograd's checks.
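Here is a minimal sketch (with made-up, smaller shapes) of why the in-place .data assignment breaks the backward pass: autograd has already recorded the encoder output as a 9-element tensor, so the enlarged gradient coming back from later operations no longer fits.

import torch

linear = torch.nn.Linear(4, 9)
x = torch.randn(1, 4)
encoded = linear(x)  # autograd records an output of shape [1, 9]
extra = torch.randn(1, 1)
# bypasses autograd: the tensor silently becomes [1, 10]
encoded.data = torch.cat((encoded.data, extra), dim=1)
encoded.sum().backward()
# raises a RuntimeError along the lines of:
# "... returned an invalid gradient at index 0 - got [1, 10]
#  but expected shape compatible with [1, 9]"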
Could you explain why you want to assign this new tensor to the activation instead of creating a new tensor?

Our data is a signal (a function of temperature), and we would like to have a neuron that carries a piece of fixed information (the temperature). So we thought to add it after the encoder part. However, any better idea is welcome.

Use torch.cat to create the tensor and remove the usage of the .data attribute.
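For example, the forward pass could look like this (a sketch assuming x2 is already a float tensor with a matching batch dimension, e.g. shape [batch_size, 1]; use dim=0 instead if you feed unbatched 1-D vectors):

def forward(self, x, x2):
    encoded = self.encoder(x)
    # out-of-place concatenation keeps the result on the autograd graph
    encoded = torch.cat((encoded, x2), dim=1)
    decoded = self.decoder(encoded)
    return decoded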

Thank you, I managed to concatenate using torch.cat in the forward pass, but I still have the problem in the backward pass :frowning:
RuntimeError: Function UnsqueezeBackward0 returned an invalid gradient at index 0 - got [10] but expected shape compatible with [9]

I can reproduce the issue using your code, but it works for me after applying my suggested fixes:

encoded = torch.cat((encoded, x2), dim=1)

so I guess you are still using your old code?

Indeed. I was using:

encoded.data = torch.cat((encoded, x2), 0)

It works fine now! Thank you.