I am currently trying to modify the output of a convolution with a forward hook. So far this works in PyTorch: I get ~67% on Cifar10 before adding the hook and ~35% after adding it. These are acceptable values for this test case. But after exporting the model to ONNX, the accuracy drops to 10%, which is equivalent to random guessing on Cifar10. The forward hook looks like this:
    def forward_hook(module, i_tensor, o_tensor):
        zero_tensor = torch.zeros(
            (o_tensor.shape[0], module.out_channels, o_tensor.shape[2], o_tensor.shape[3])
        ).to(o_tensor.device)
        zero_tensor[:, module.keep_idxs.to(o_tensor.device), :, :] = o_tensor
        return zero_tensor
My current guess is that the slicing (advanced-indexing) assignment is not compatible with ONNX export. Note also that keep_idxs stays constant once it has been set.
So my question is: how can I write this so that the functionality stays the same, but it is compatible with ONNX export? I have already looked through the various tensor methods, but I was unable to find one that recreates what I achieve with the slicing notation.
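For reference, here is a minimal sketch of one possible workaround I considered: replacing the indexed assignment with a multiplication by a constant scatter matrix, which only uses plain tensor ops. The sizes, the variable names, and keep_idxs values below are made up for illustration; whether this actually survives ONNX export better than the indexing version is exactly what I am unsure about.

    import torch

    # Hypothetical sizes for illustration only.
    batch, kept, total, h, w = 2, 3, 5, 4, 4
    keep_idxs = torch.tensor([0, 2, 4])        # channels to keep (constant)
    o_tensor = torch.randn(batch, kept, h, w)  # conv output seen by the hook

    # Constant scatter matrix, built once: S[keep_idxs[j], j] = 1.
    S = torch.zeros(total, kept)
    S[keep_idxs, torch.arange(kept)] = 1.0

    # Spread the kept channels into the full channel dimension without
    # any advanced-indexing assignment.
    zero_tensor = torch.einsum('ck,bkhw->bchw', S, o_tensor)

    # Same result as the slicing version:
    ref = torch.zeros(batch, total, h, w)
    ref[:, keep_idxs] = o_tensor
    assert torch.equal(zero_tensor, ref)
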
EDIT 1: It turns out the entire output after the layer is zero in the ONNX model. So the output is not random data, but all zeros.