Layer names in repeated blocks

I want to implement a deep network like ResNet. I want to build a block of conv and linear layers that is repeated throughout the code. I believe the layer names will cause issues during backprop and training because the names repeat across blocks. Is that true? If so, what is the solution? How can I create a block whose layer names change with each repetition?


No, the naming of your variables will not be a problem for autograd.
Autograd works “below” torch.nn, so you can set up your network whichever way you want and it will work just fine.
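To make this concrete, here is a minimal sketch (class and attribute names like `Block` and `blocks` are illustrative, not from your code): when you register repeated blocks through `nn.Sequential` or `nn.ModuleList`, PyTorch automatically prefixes each copy's parameters with its index, so every parameter gets a unique fully qualified name and nothing collides.

```python
import torch
import torch.nn as nn

# Hypothetical reusable block of conv layers with a residual connection.
class Block(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x)) + x  # residual skip connection

class Net(nn.Module):
    def __init__(self, channels=8, num_blocks=3):
        super().__init__()
        # nn.Sequential registers each repeated Block under its own index,
        # so parameter names like "blocks.0.conv.weight" are unique even
        # though every Block uses the same attribute names internally.
        self.blocks = nn.Sequential(*[Block(channels) for _ in range(num_blocks)])

    def forward(self, x):
        return self.blocks(x)

net = Net()
for name, _ in net.named_parameters():
    print(name)
# Prints names such as blocks.0.conv.weight, blocks.1.conv.weight, ...
```

Each repeated block's parameters live at a distinct path in the module hierarchy, and autograd tracks gradients through the tensors themselves, not through these names, so repetition is never an issue.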