I’m currently trying to reproduce the results from “Residual networks behave like ensembles of relatively shallow networks” (https://arxiv.org/abs/1605.06431) with PyTorch, and was wondering if there is any way to select specific layers (e.g. the inputs/outputs of the 40th residual block) within a network. The closest thing to an answer that I can think of is either walking loss.creator.previous_functions in a chained manner, or manually listing the hundreds of residual layers when building the model and assigning a return function to each of them. Advice outside of PyTorch would also be greatly appreciated. Thanks
I think this is an answer to your question
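In case it helps, the usual way to capture a specific block's input/output in PyTorch is a forward hook. Here is a minimal sketch on a toy `nn.Sequential` model; in a real torchvision ResNet you would instead hook the actual block, e.g. something like `model.layer3[5]` (the module names here are stand-ins):

```python
import torch
import torch.nn as nn

# Toy model standing in for a deep ResNet.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def save_output(name):
    # Build a hook that stores the hooked module's output under `name`.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register the hook on the submodule whose output we want to inspect.
handle = model[1].register_forward_hook(save_output("relu_out"))

x = torch.randn(2, 8)
_ = model(x)
print(activations["relu_out"].shape)  # torch.Size([2, 16])

handle.remove()  # Remove the hook once it is no longer needed.
```

The hook fires on every forward pass, so you can collect intermediate activations without modifying the model's `forward` or listing every residual block by hand.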
Thanks for the link. This seems to be a good starting point. I also learned that one can easily reach the weight values using `model.state_dict().keys()`, which lists the keys of all weights and biases belonging to the model, and then access their values with the corresponding key, e.g. `model.state_dict()['conv1.weight']`.
You might find `.values()` useful too.