I’m loading my residual blocks inside a loop this way:
for i in range(self.n_residual_blocks):
    self.add_module('residual_block' + str(i + 1), residualBlock(self.lSize))
My batch size is not fixed; it changes between iterations. Is it possible to change the LayerNorm parameters on each iteration when I call the model?
I want it to be something like this, where lnsize is [batchsize, x, y].
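Roughly like the following sketch (the variable names and sizes are illustrative, not my actual code):

```python
import torch
import torch.nn as nn

batch_size, x_dim, y_dim = 4, 8, 16     # illustrative sizes
lnsize = [batch_size, x_dim, y_dim]     # i.e. [batchsize, x, y]
# rebuilt whenever batch_size changes, so the norm shape tracks the batch
ln = nn.LayerNorm(lnsize)
out = ln(torch.randn(batch_size, x_dim, y_dim))
```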
Have a look at the functional API.
I think this would make manipulating the arguments a bit easier.
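A minimal sketch of what that looks like with torch.nn.functional.layer_norm (the tensor sizes here are arbitrary):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 8, 16)                    # batch size can vary per call
# normalize over the trailing dimensions; no learnable parameters involved
out = F.layer_norm(x, normalized_shape=x.shape[1:])
```

Since the functional call takes normalized_shape as an argument, you can pass a different shape on every forward pass instead of fixing it at module construction time.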
I could pass the parameters during model training, but if you print them you’ll see the initial parameters are still there, not the new ones. It seems that once you have initialized the layer with certain parameters you can’t just change them normally. There must be another way to do it.
If you construct LayerNorm with
elementwise_affine=False, it does not have any learnable parameters, and you can use the functional interface as Peter suggests. With
elementwise_affine=True you can still change the batch size; however, normalized_shape (the last dimensions of the tensor) must not change, because the size of the learnable parameters is fixed when you initialize the module.
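A small sketch of both cases (the shapes are made up for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# elementwise_affine=True (the default): normalized_shape is fixed at init,
# but the leading batch dimension is free to vary between calls.
ln = nn.LayerNorm([8, 16])
a = ln(torch.randn(4, 8, 16))   # batch of 4
b = ln(torch.randn(7, 8, 16))   # batch of 7 -- fine, trailing dims unchanged

# elementwise_affine=False: no learnable parameters at all, so you can
# just as well call the functional interface with a per-call shape.
ln_plain = nn.LayerNorm([8, 16], elementwise_affine=False)
x = torch.randn(5, 3, 10)
c = F.layer_norm(x, x.shape[1:])  # a different shape every iteration is fine
```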