Different input and output size in autoencoder

I have an autoencoder where the input size is 15, and at the decoder, I only want the first 10 numbers. If I use slicing the autograd will capture that as a grad_fn, I wanted to know will this affect the network behavior? I made a simple example for this and the grads for the sliced out elements were zero.

import torch

a = torch.tensor([1, 2, 3], requires_grad=True, dtype=torch.float)
b = 2 * a
c = 2 * a + b
d = c[:1] # like the slicing in the last layer of the decoder, before computing the loss
d.backward()
print(a.grad) # prints: tensor([4., 0., 0.])
# If I instead backpropagated from all of c, I would get this:
# c.sum().backward()
# print(a.grad) # prints: tensor([4., 4., 4.])

But I want these 5 elements to have an effect on the latent space (does slicing capture that?). If their gradient is zero, the weights that produce them won't change. Is this correct? What is the best way to do this?


Since these elements don't contribute to the final output, their gradient is 0.
Note that in a neural net the weights are usually shared across all the outputs, or you use a softmax layer that couples all the outputs together, and that makes all of the weights in your net get updated.
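A minimal sketch of that coupling, using a hypothetical nn.Linear layer as the last decoder layer (the sizes here are made up): even when the loss only uses a slice of the output, the rows of the shared weight matrix that produce the kept outputs still receive gradients; only the rows for the discarded outputs stay at zero.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

layer = nn.Linear(5, 15)   # hypothetical last decoder layer: latent size 5 -> output size 15
z = torch.randn(1, 5)      # a fake latent vector
out = layer(z)[:, :10]     # keep only the first 10 outputs
loss = out.pow(2).sum()
loss.backward()

# Weight rows for the kept outputs get gradients; rows for the
# sliced-out outputs receive exactly zero gradient.
print(layer.weight.grad[:10].abs().sum())  # nonzero
print(layer.weight.grad[10:].abs().sum())  # tensor(0.)
```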

Thanks for the reply. It is correct that they don't contribute to the final output, which in this case is the output of the decoder. But they do contribute to the output of the encoder, i.e. the latent space.

If they contribute to the latent space, then they will get gradients, yes.
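To illustrate, here is a toy autoencoder sketch (the latent size of 4 and the single-layer encoder/decoder are assumptions, not the original architecture): even though the loss only compares the first 10 reconstructed values, every input column of the encoder weight gets a gradient, because all 15 inputs feed the latent code and the decoder consumes the full latent vector.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical sizes: 15 inputs, latent size 4, decoder output 15,
# but the loss only uses the first 10 reconstructed values.
encoder = nn.Linear(15, 4)
decoder = nn.Linear(4, 15)

x = torch.randn(1, 15)
z = encoder(x)                 # latent code: depends on all 15 inputs
recon = decoder(z)[:, :10]     # keep only the first 10 outputs
loss = (recon - x[:, :10]).pow(2).sum()
loss.backward()

# The encoder weights connected to the last 5 inputs still receive
# gradients, because those inputs contribute to the latent code.
print(encoder.weight.grad[:, 10:].abs().sum())  # nonzero
```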