Hi everyone,
I’m currently trying to reproduce the results from “Residual networks behave like ensembles of relatively shallow networks” (https://arxiv.org/abs/1605.06431) in PyTorch, and I was wondering whether there is a way to select specific layers (e.g. the inputs/outputs of the 40th residual block) within a network. The closest things to an answer I can think of are either calling loss.creator.previous_functions[0][0] in a chained manner, or manually listing the hundreds of residual layers when building the model and assigning a return function to each of them. Any advice outside of PyTorch would also be greatly appreciated. Thanks!
I think this answers your question
Thanks for the link. This seems to be a good starting point. I also learned that one can easily access the weight values using
model.state_dict().keys()
which lists the keys of all weights and biases belonging to our model, then access their values with
model.state_dict()['key_name_of_weight']
You might find .items() or .values() useful too.
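To make the above concrete, here is a minimal sketch of inspecting a state_dict. It uses a tiny toy nn.Sequential model as a stand-in for the actual ResNet (the key names like '0.weight' come from this toy model, not from any particular ResNet):

```python
import torch
import torch.nn as nn

# Toy model standing in for a real residual network.
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 2))

# List the keys of all weights and biases belonging to the model.
print(list(model.state_dict().keys()))
# ['0.weight', '0.bias', '2.weight', '2.bias']

# Access one tensor's values by its key name.
w = model.state_dict()['0.weight']
print(w.shape)  # torch.Size([3, 4])

# Iterate over (name, tensor) pairs with .items().
for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape))
```

Note that state_dict() returns the parameters by name; it gives you weights and biases, not the intermediate activations flowing through the network at runtime.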