Is it possible to plot the output of the layers and the layer weights as well? I tried this, and without plotting the weights the error is ValueError: too many values to unpack (expected 2); however, the activation output is printed.
Also, when plotting the layer weights it complains: AttributeError: 'BaseConv' object has no attribute 'weight'.
Could you point me to the line of code throwing the ValueError?
Is the print statement working?
Also, could you post your model architecture, so that we can check why the AttributeError is thrown?
Sorry for the delay in replying. The error is coming from plt.matshow(activations['init_conv']).
Yes, the print statement is working.
I am using your UNet implementation.
Could you print the shape of these activations?
If these activations have more than 3 channels, you would need to split them in order to visualize them as an image.
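A minimal sketch of the splitting idea, assuming the hook stored a standard 4-dimensional conv activation; the shape 1×64×28×28 here is made up purely for illustration:

```python
import torch

# Hypothetical activation as stored by a forward hook:
# [batch_size, channels, height, width]
act = torch.randn(1, 64, 28, 28)

# plt.matshow expects a 2D array, so index one sample and one channel;
# looping over the channel dimension yields one image per channel
single_channels = [act[0, c] for c in range(act.size(1))]

# e.g. plt.matshow(single_channels[0].numpy()) would now work,
# since each element is a 2D [28, 28] tensor
```

Each entry in `single_channels` can then be passed to plt.matshow (or arranged in a grid of subplots) individually.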
Good to hear it's working now!
Well, it's not really necessary in this case, as plt.imshow will also visualize the tensor, but I'm just used to working with numpy in matplotlib. Maybe that's my way of interpreting the "Explicit is better than implicit" part of the Zen of Python.
@ I see. Could you please give me an example of when it's necessary to do this? The only thing I found in the docs is this page, but I couldn't figure out when it's needed.
Generally I would recommend getting the numpy array if you want to use numpy methods directly, or libraries which depend heavily on numpy. I don't really like mixing up both worlds implicitly.
E.g. while plt.imshow works fine, it seems that plt.plot won't take a tensor input, since the ndim attribute is missing.
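For reference, the explicit conversion looks like this (a small sketch; the tensor values are arbitrary):

```python
import torch

x = torch.linspace(0, 1, steps=5)

# Explicit conversion: detach from the autograd graph, move to CPU,
# then get the underlying numpy array
x_np = x.detach().cpu().numpy()

# plt.plot(x_np) now receives a plain numpy array (with .ndim etc.)
```

The detach()/cpu() calls are no-ops when the tensor is already detached and on the CPU, so this chain is a safe default.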
@ptrblck I assume that after training for, say, 10 epochs, if I visualise the weights and the output of each layer, it will display the results of the last epoch. Is that correct? How can I get the results of epoch 1 or 2?
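Since a hook only keeps the most recent forward pass, earlier epochs would have to be saved explicitly as copies. A minimal sketch of that idea (the run_epoch function and the tensor shape are stand-ins, not real training code):

```python
import torch

epoch_activations = {}  # hypothetical store: epoch -> saved activation

def run_epoch(epoch):
    # stand-in for a real forward pass producing an activation
    act = torch.randn(1, 8, 4, 4)
    # clone() so later epochs don't overwrite the saved copy
    epoch_activations[epoch] = act.detach().clone()

for epoch in range(3):
    run_epoch(epoch)

# epoch_activations[0], epoch_activations[1], ... are now all available
```

After training, the activations of any stored epoch can be plotted the same way as the final one.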
Is there a clever way of getting all the features from all the batches in the same torch.tensor?
For now my workaround is to create a single tensor from the list of tensors generated by the hook, but it is a slow process since I have to torch.cat each element of the list.
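torch.cat accepts the whole list at once, so the concatenation can be done in a single call rather than element by element. A small sketch, assuming the hook collected per-batch feature tensors of shape [32, 128]:

```python
import torch

# Hypothetical list of per-batch features collected by a hook
features = [torch.randn(32, 128) for _ in range(10)]

# Concatenate the entire list in one call along the batch dimension,
# instead of calling torch.cat inside a loop for each element
all_features = torch.cat(features, dim=0)  # shape: [320, 128]
```

Doing one cat at the end avoids the repeated reallocation that happens when growing a tensor incrementally.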
Is there a way of activating or deactivating the hooks, without the need of using the remove() method? Because in my case I only need the hooks for the testing, not the training.
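One common workaround is to keep the hook registered but guard its body with a flag, so it becomes a no-op during training. A sketch under that assumption (the `collect` dict and the tiny Linear model are made up for illustration):

```python
import torch
import torch.nn as nn

collect = {"enabled": False}  # hypothetical switch, flipped for testing
activations = {}

def get_activation(name):
    def hook(module, inp, out):
        if collect["enabled"]:       # hook does nothing while disabled
            activations[name] = out.detach()
    return hook

model = nn.Linear(4, 2)
model.register_forward_hook(get_activation("fc"))

model(torch.randn(1, 4))             # "training" pass: nothing stored
collect["enabled"] = True
model(torch.randn(1, 4))             # "testing" pass: activation stored
```

Flipping the flag before and after evaluation gives the same effect as registering and removing the hook, without managing handles.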
If anything is not clear enough, please let me know; English is not my native language.
Thanks in advance!
Hi! I have a similar doubt. I've been using the definition that you posted, but I am not sure whether I have understood how it works. I am working with an autoencoder class (the one I have attached) and I want to extract the outputs of the bottleneck (in my case, layer3), so I am using model.layer3.register_forward_hook(get_activation('layer3')). Am I using it correctly? Thanks!
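That usage matches the usual pattern. A self-contained sketch; note the Autoencoder class below is a placeholder with a layer3 bottleneck, since the attached model isn't shown here:

```python
import torch
import torch.nn as nn

activations = {}

def get_activation(name):
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

# Placeholder autoencoder; the real attached class isn't shown here.
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(16, 8)
        self.layer2 = nn.Linear(8, 4)
        self.layer3 = nn.Linear(4, 2)   # bottleneck

    def forward(self, x):
        x = torch.relu(self.layer1(x))
        x = torch.relu(self.layer2(x))
        return self.layer3(x)

model = Autoencoder()
model.layer3.register_forward_hook(get_activation("layer3"))
model(torch.randn(1, 16))
# activations["layer3"] now holds the bottleneck output
```

After one forward pass, activations["layer3"] contains the detached bottleneck tensor for that input.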