Hi,
I’m working on creating exactly reproducible results when backpropagating a loss in a vision transformer back to its input. To clarify, I am modifying the input, not the model weights. Before the forward pass, some transformations are applied to the image, and the gradient is ultimately backpropagated through them. These transformations currently contain a rotation from kornia, but I am changing it to the torchvision rotation. The issue is that both of these implementations use the PyTorch function grid_sample, whose backward CUDA implementation is not deterministic. My current function to enable determinism is the following:
import random

import numpy as np
import torch


def activate_determinism_with_seed(seed: int):
    # Disable cuDNN autotuning and force deterministic cuDNN kernels.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    # warn_only=True: warn instead of raising when an op has no deterministic implementation.
    torch.use_deterministic_algorithms(True, warn_only=True)
    # Seed all RNGs used in the pipeline.
    random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    np.random.seed(seed)
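For reference, a minimal sketch of how I trigger the warning (using the torchvision rotate I am switching to; the shapes and angle here are just placeholders):

import torch
import torchvision.transforms.functional as TF
from torchvision.transforms import InterpolationMode

activate_determinism_with_seed(0)
img = torch.randn(2, 3, 224, 224, device="cuda", requires_grad=True)
rotated = TF.rotate(img, 15.0, interpolation=InterpolationMode.BILINEAR)
# With warn_only=True this emits the non-determinism warning
# for grid_sampler_2d_backward_cuda instead of raising.
rotated.sum().backward()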
As shown above, I indeed get this warning when backpropagating through either the kornia or the torchvision rotation function. Is there a deterministic alternative (one that raises no warning under torch.use_deterministic_algorithms(True)) that runs on the GPU and rotates a tensor of shape B x C x H x W around its center, or do you have recommendations on how to implement one?
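One direction I am considering, in case it helps the discussion: if I read the docs for torch.use_deterministic_algorithms right, torch.gather gets a deterministic CUDA backward (at least for 1-D inputs that require grad) when the flag is on, so bilinear resampling could be written by hand on a flattened tensor instead of going through grid_sample. A rough, untested sketch of that idea (all names are mine; the sign conventions may not match torchvision/kornia exactly):

import math
import torch

def rotate_bilinear_deterministic(images: torch.Tensor, angle_deg: float) -> torch.Tensor:
    # Rotate B x C x H x W images around their center (bilinear, zero padding).
    # Samples via a 1-D gather so the backward pass should stay deterministic
    # under torch.use_deterministic_algorithms(True).
    B, C, H, W = images.shape
    device, dtype = images.device, images.dtype
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)

    # For every output pixel, find the source coordinate via the inverse rotation.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=dtype),
        torch.arange(W, device=device, dtype=dtype),
        indexing="ij",
    )
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    src_x = cos_t * (xs - cx) + sin_t * (ys - cy) + cx
    src_y = -sin_t * (xs - cx) + cos_t * (ys - cy) + cy

    x0, y0 = src_x.floor(), src_y.floor()
    wx, wy = src_x - x0, src_y - y0  # bilinear interpolation weights

    flat = images.reshape(-1)  # 1-D so gather's deterministic backward applies
    base = (torch.arange(B * C, device=device) * (H * W)).view(-1, 1)
    out = torch.zeros_like(images)
    # Accumulate the four neighboring pixels weighted by their bilinear factors.
    for dx, dy, w in (
        (0, 0, (1 - wx) * (1 - wy)),
        (1, 0, wx * (1 - wy)),
        (0, 1, (1 - wx) * wy),
        (1, 1, wx * wy),
    ):
        xi, yi = (x0 + dx).long(), (y0 + dy).long()
        inside = (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
        pix = yi.clamp(0, H - 1) * W + xi.clamp(0, W - 1)  # per-pixel H*W indices
        idx = (base + pix.view(1, -1)).reshape(-1)          # flat B*C*H*W indices
        vals = flat.gather(0, idx).view(B, C, H, W)
        # Out-of-bounds neighbors get zero weight (zero padding).
        out = out + vals * (w * inside.to(dtype)).view(1, 1, H, W)
    return out

Correctness of the conventions aside, would an approach like this be expected to stay deterministic, or is there a ready-made alternative?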