I am interested in visualizing the attention maps of test images and, once the experiment is done, saving all of the attention maps into a separate folder. Can you please give me hints about which parts of the code I would need to change for this purpose?
Additionally, how can I incorporate something like Grad-CAM into this https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html tutorial?
We propose a technique for producing "visual explanations" for decisions from
a large class of CNN-based models, making them more transparent. Our approach -
Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of
any target...
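For reference, the core computation described in the paper: the weight for feature map $k$ and class $c$ is the global-average-pooled gradient, and the map is a ReLU-ed weighted sum of activations.

```latex
\alpha_k^c = \frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^c}{\partial A^k_{ij}},
\qquad
L^c_{\text{Grad-CAM}} = \operatorname{ReLU}\Bigl(\sum_k \alpha_k^c\, A^k\Bigr)
```

Here $A^k$ is the $k$-th feature map of the chosen convolutional layer, $y^c$ the pre-softmax score for class $c$, and $Z$ the number of spatial positions in $A^k$.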
Hi Mona,
I think this is what you're looking for:
import torch
from torch.autograd import Variable
from torch.autograd import Function
from torchvision import models
from torchvision import utils
import cv2
import sys
import numpy as np
import argparse


class FeatureExtractor():
    """ Class for extracting activations and
    registering gradients from targeted intermediate layers """

    def __init__(self, model, target_layers):
        self.model = model
        self.target_layers = target_layers
        self.gradients = []

    def save_gradient(self, grad):
        self.gradients.append(grad)
This file has been truncated.
The code is very clear, so you shouldn't have trouble adapting it to your situation. If you do, let me know.
If you’re interested in other visualizations, you should also take a look at this GitHub repo: