Hi @ptrblck, thanks, I was able to get the proper activations; however, I am not completely convinced about the use of `detach()` or `clone()`.
Please confirm whether my assumption is correct: is `detach()` used to remove the hook once the `forward_hook()` has run for an intermediate layer? I did see that when I iterated to get the next layer's activation, I also got the output from the first hook when `detach()` was not applied.
Secondly, is `clone()` used just to clone the tensor as-is? I tried both `output` and `output.detach()` in the hook function, and both reflected the change after an in-place operation was applied.
```python
activation = {}

def get_activation(name):
    def hook(model, input, output):
        activation[name] = output.clone().detach()
    return hook
```
This is the function that I used; if my understanding is correct, it should do the trick.
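For context, here is a minimal sketch of how I am registering the hook (the two-layer model and the name `'fc1'` are just placeholders for illustration). My understanding is that `detach()` cuts the stored tensor out of the autograd graph (it does not remove the hook), and `clone()` copies the data so a later in-place operation on the layer's output does not overwrite the saved activation:

```python
import torch
import torch.nn as nn

# hypothetical model just for illustration
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

activation = {}

def get_activation(name):
    def hook(model, input, output):
        # clone() copies the data so later in-place ops on `output`
        # do not change the saved activation; detach() removes the
        # copy from the autograd graph
        activation[name] = output.clone().detach()
    return hook

# register the hook on the first linear layer
handle = model[0].register_forward_hook(get_activation('fc1'))

x = torch.randn(1, 4)
out = model(x)

print(activation['fc1'].shape)

# the hook itself is removed explicitly via the handle, not via detach()
handle.remove()
```

Note that removing the hook is done through the handle returned by `register_forward_hook()`, which is separate from what `detach()` does to the stored tensor.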