Name scope of parameters in PyTorch

Is there a way to control the names of the parameters?
I suppose these are what appear as the keys of the state_dict in PyTorch?

In TensorFlow, it is done like this:

with tf.name_scope("my_namescope"): 
    v1 = tf.Variable(1, name="var1", dtype=tf.float32)

print(v1.name)  # "my_namescope/var1:0"

There are no name scopes in PyTorch yet, so you can’t attach arbitrary names to Variables.
There is one exception though: inside an nn.Module, a Variable assigned to the module gets a name from the attribute it is assigned to (this is what you get from model.named_parameters()).
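To illustrate, here is a minimal sketch (MyModel and its attribute names are made up for this example): the keys returned by named_parameters() are derived from the attribute names, with submodule parameters prefixed by the submodule’s attribute name.

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # The attribute names "linear" and "my_weight" become the
        # parameter name prefixes/keys.
        self.linear = nn.Linear(2, 3)
        self.my_weight = nn.Parameter(torch.ones(4))

model = MyModel()
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
# my_weight (4,)
# linear.weight (3, 2)
# linear.bias (3,)
```

The same names are used as the keys of model.state_dict(), so renaming an attribute changes the checkpoint keys as well.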


Is there any news on this? This feature would be really nice in order to create orderly graphs in tensorboard. The autogenerated graphs can be really, really messy.