A more elegant way of creating nets in PyTorch?

Reusing a module with no weight sharing would be simple: just define a function

def my_module(x):
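
For instance, here is that idea fleshed out into a bare sketch (the layer sizes are my own illustrative choice, and I ignore how ann would persist the layers across training iterations via the mlindex machinery): every call instantiates fresh layers, so no two invocations share weights.

import torch.nn as nn

def my_module(x):
    # Fresh layers on every invocation: call sites never share weights.
    net = nn.Sequential(
        nn.Linear(x.size(-1), 64),
        nn.ReLU(),
        nn.Linear(64, 64),
    )
    return net(x)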

If weight sharing is required then, as Adam said, it would be unnatural to pass an ID. Instead, we could use something like this when defining a module with shared weights:

@torch.ann.shared('my_module')
def my_module(x):
    ...

where @torch.ann.shared would instrument the code, similarly to what the JIT does now, to save the mlindex variable and restore it to its value from the previous invocation of 'my_module'. The 'my_module' argument could default to a line number obtained from Python's inspect module.
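
To make this concrete, here is a rough, self-contained sketch of what such a decorator could do. Everything below is invented for illustration (torch.ann does not exist yet, and _mlindex and _shared_start stand in for ann's internal state):

import inspect
from functools import wraps

# Hypothetical stand-ins for ann's internals: a counter that assigns
# ModuleList slots to layer calls, and a registry remembering the counter
# value each shared function saw on its first invocation.
_mlindex = 0
_shared_start = {}

def shared(name=None):
    # Sketch of what @torch.ann.shared could do; not a real PyTorch API.
    def decorator(fn):
        # Default the key to the function's definition site via inspect.
        key = name or f"{inspect.getsourcefile(fn)}:{fn.__code__.co_firstlineno}"

        @wraps(fn)
        def wrapper(*args, **kwargs):
            global _mlindex
            if key in _shared_start:
                # Rewind the counter to the slots used on the first call,
                # so the same ModuleList entries (and weights) are reused.
                _mlindex = _shared_start[key]
            else:
                _shared_start[key] = _mlindex
            return fn(*args, **kwargs)
        return wrapper
    return decorator

On the first call the wrapper records where the slot counter started; on later calls it rewinds the counter to that point, so the layers created inside the function land on, and reuse, the same ModuleList slots.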

Dynamic and stochastic nets should probably not be handled by ann. To avoid the bad cases like the one Adam presented in the "Stochastic depth" example, we could set an ann.debug flag, which would check the execution context and raise errors like this:

import inspect

# Look one frame up, at the code in the user's net that invoked the module,
# and verify that the current mlindex matches the slot recorded for that
# source location. findIndex is a hypothetical helper mapping a
# (filename, lineno) pair to the expected mlindex.
previous_frame = inspect.currentframe().f_back
frame = inspect.getframeinfo(previous_frame)
assert mlindex == findIndex(frame.filename, frame.lineno)

Though I could imagine that, with a bit more effort, code similar to the above could search for the appropriate mlindex based on the execution context and, if none is found, dynamically create the missing instance in the ModuleList (a sketch of this follows below). So we could handle dynamic nets too, at the small expense of executing a few extra Python lines inside the ann module.
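
Here is a minimal sketch of that lookup-or-create idea, assuming each instance is keyed on the caller's source location; _slots, module_list, and lookup_or_create are names I invented for the illustration:

import inspect
import torch.nn as nn

_slots = {}                    # hypothetical map: (filename, lineno) -> slot
module_list = nn.ModuleList()  # backing store for the created instances

def lookup_or_create(factory):
    # Key the instance on the caller's source location; a dynamic control
    # path that reaches a new call site gets its slot created on demand.
    caller = inspect.getframeinfo(inspect.currentframe().f_back)
    key = (caller.filename, caller.lineno)
    if key not in _slots:
        _slots[key] = len(module_list)
        module_list.append(factory())
    return module_list[_slots[key]]

With that, a line like lookup_or_create(lambda: nn.Linear(64, 64))(x) creates the layer the first time it runs and reuses the same instance, with the same weights, on every later pass through that line.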

-Art