Captum LayerGradCam for vit_b_16 model

Has anyone successfully used LayerGradCam on the vit_b_16 model? I am using

create_feature_extractor(base_model, return_nodes=['encoder'])
to extract the encoder features. I am interested in visualizing the Grad-CAM output.
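For context, here is a minimal sketch of the kind of setup I have in mind. It is not my exact code: instead of the encoder-token output (shape (B, 197, 768), which has no spatial dimensions for GradCAM to pool over), it attributes to conv_proj, the patch-embedding convolution, whose output is (B, 768, 14, 14). The target class index and the random input are placeholders.

```python
import torch
from captum.attr import LayerGradCam, LayerAttribution
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Use the full classifier (not a feature extractor) so the forward pass
# returns class logits, which LayerGradCam needs to take the target gradient.
model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT).eval()

# conv_proj still has spatial dimensions (B, 768, 14, 14), so GradCAM's
# channel-wise pooling works without reshaping the encoder tokens.
layer_gc = LayerGradCam(model, model.conv_proj)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
attr = layer_gc.attribute(x, target=207)  # arbitrary example class index

# Upsample the 14x14 attribution map back to the input resolution for overlay.
upsampled = LayerAttribution.interpolate(attr, (224, 224), interpolate_mode="bilinear")
print(attr.shape, upsampled.shape)  # (1, 1, 14, 14), (1, 1, 224, 224)
```

Attributing to the encoder output instead would need the tokens reshaped back to a 14x14 grid (dropping the class token), which is what I was hoping create_feature_extractor would let me do cleanly.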

Also, when visualizing the gradients of the model (vit_b_16), we observe JPEG-like artifacts (on both JPEG and non-JPEG inputs). However, previous research suggests that compression should not affect model training. Thoughts?
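The gradient visualization is along these lines (a sketch, assuming plain input-gradient saliency via Captum; the preprocessing and target class are placeholders):

```python
import torch
from captum.attr import Saliency
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.DEFAULT).eval()
saliency = Saliency(model)

# Stand-in for a preprocessed 224x224 image; requires_grad is needed for input gradients.
x = torch.randn(1, 3, 224, 224, requires_grad=True)

# Absolute gradient of the chosen class logit w.r.t. the input (abs=True by default).
grads = saliency.attribute(x, target=207)
print(grads.shape)  # torch.Size([1, 3, 224, 224])
```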