Reuse of nn.Sequential()

If I define my net like this:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.block = nn.Sequential(...)

    def forward(self, inputs):
        x0 = self.block(inputs[0])
        x1 = self.block(inputs[1])
        return x0, x1

Will the learnables of self.block be shared between inputs 0 and 1, or will each call to self.block in the forward function automatically create additional learnables?

Yes, self.block is reused in your approach, and so are its trainable parameters. Parameters are registered once, when the module is constructed in __init__; calling the module twice in forward does not create new ones. Autograd builds a computation graph that captures both calls, and during backward the gradients from both paths are accumulated into the same shared parameters.
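
Here is a minimal sketch illustrating this (the Linear layer and its sizes are placeholder assumptions, since your actual block is elided):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder block; the layer sizes are arbitrary for illustration.
        self.block = nn.Sequential(nn.Linear(4, 4), nn.ReLU())

    def forward(self, inputs):
        # Both calls go through the same module instance,
        # so they share the same weight and bias tensors.
        x0 = self.block(inputs[0])
        x1 = self.block(inputs[1])
        return x0, x1

net = Net()
# The parameter count is independent of how often forward calls self.block:
print(sum(p.numel() for p in net.parameters()))  # 20 (4*4 weights + 4 biases)

inputs = [torch.randn(2, 4), torch.randn(2, 4)]
out0, out1 = net(inputs)
(out0.sum() + out1.sum()).backward()
# Gradients from both calls are accumulated into the same shared parameters:
print(net.block[0].weight.grad.shape)  # torch.Size([4, 4])

If you instead want two independent copies of the block with separate parameters, construct two instances (e.g. self.block0 and self.block1) rather than calling the same one twice.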