How can I concatenate to the input tensor another tensor that is initialized randomly and then updated with back-prop? The purpose is to obtain some unsupervised byproduct representation.

Generally, I understand that I can do this by simply concatenating it to the input before feeding the network and setting requires_grad=True on it.

BUT, how can I initialize it once at the beginning, so that throughout training it is only updated and never re-initialized?
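To make this part concrete, here is a sketch of the mechanics I have in mind: create the extra tensor once as an nn.Parameter outside the training loop and hand it to the optimizer together with the model weights (the Conv3d layer is just a placeholder, not my real network):

```python
import torch

# Create the extra channel ONCE, before the training loop,
# so it persists across steps and is only updated by the optimizer.
extra = torch.nn.Parameter(torch.rand(1, 64, 64, 32))

# Placeholder "network": a single conv layer is enough for the sketch.
net = torch.nn.Conv3d(in_channels=2, out_channels=4, kernel_size=3, padding=1)

# Register the extra tensor with the optimizer alongside the model weights.
opt = torch.optim.SGD(list(net.parameters()) + [extra], lr=0.1)

x = torch.rand(1, 64, 64, 32)        # one sample, shape (C, W, H, D)
z = torch.cat([x, extra], dim=0)     # (2, 64, 64, 32)
out = net(z.unsqueeze(0))            # add batch dim -> input (1, 2, 64, 64, 32)
loss = out.mean()                    # dummy loss for the sketch

before = extra.detach().clone()
opt.zero_grad()
loss.backward()
opt.step()                           # extra is updated in place, never re-created
```

The key point is that `extra` lives outside the loop, so each step keeps updating the same tensor.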

ALSO, how can I assign several samples (all belonging to some category the network is not initially aware of) to the same concatenated tensor?

A more formal explanation of what I want:

samples = {X1,X2,…Xn-1,Xn}

X1.shape = (1,64,64,32) (channels, width, height, depth)

initial extra channel = torch.rand((1,64,64,32))

extras = {R1,R2,…Rk-1,Rk}

# notice there are k extras, because k < n is the number of hidden categories

Now suppose the loader randomly loads sample X134 (out of n), which belongs to category 6 (out of k) - so the network f gets as input the tensor Z134:

Z134 = torch.cat([X134,R6],dim=0)

Then, during backprop, only R6 is updated; it is never randomly re-initialized until the end of training.
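Putting the whole thing together, this is roughly the training loop I imagine. Everything here is a placeholder sketch: the Conv3d stands in for my network f, and the sample/category tensors stand in for my real loader.

```python
import torch

n, k = 8, 3                         # n samples, k hidden categories (toy sizes)
C, W, H, D = 1, 64, 64, 32

# One persistent extra tensor per hidden category, initialized ONCE.
extras = torch.nn.ParameterList(
    [torch.nn.Parameter(torch.rand(C, W, H, D)) for _ in range(k)]
)

# Placeholder network f; the real architecture is up to the model.
f = torch.nn.Conv3d(in_channels=2, out_channels=4, kernel_size=3, padding=1)

opt = torch.optim.SGD(list(f.parameters()) + list(extras.parameters()), lr=0.1)

samples = torch.rand(n, C, W, H, D)
categories = torch.randint(0, k, (n,))   # hidden category of each sample

before = [p.detach().clone() for p in extras]

for i in torch.randperm(n):              # the "loader" shuffling the samples
    c = categories[i].item()
    z = torch.cat([samples[i], extras[c]], dim=0)   # (2, W, H, D)
    loss = f(z.unsqueeze(0)).mean()                 # dummy loss for the sketch
    opt.zero_grad()
    loss.backward()
    opt.step()       # with plain SGD only extras[c] has a grad, so only it moves
```

All samples of category c share the same extras[c], so every step on any of them updates that one tensor in place.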

Hope someone can help, thanks in advance!!!