How are you comparing the results?
It seems scipy.ndimage.affine_transform specifies the translation part in pixel values, while F.affine_grid expects normalized coordinates in the range [-1, 1] (which you already provided).
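If you want to reuse a pixel-space translation from scipy with F.affine_grid, you could convert it to the normalized range first. A minimal sketch (pixel_shift_to_normalized is a made-up helper; it assumes align_corners=False semantics, where the 2-unit span [-1, 1] covers the full image extent in pixels):

def pixel_shift_to_normalized(shift_px, size_px):
    # two normalized units cover size_px pixels
    return 2.0 * shift_px / size_px

t_norm = pixel_shift_to_normalized(2, 10)  # a 2 pixel shift in a 10 pixel wide image -> 0.4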
This code tries to rotate and translate a line:
import torch
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt

x = torch.eye(10).view(1, 1, 10, 10)  # diagonal line as a test image
theta = torch.zeros(1, 2, 3)
angle = np.pi / 2.
# rotation part of the affine matrix
theta[:, :, :2] = torch.tensor([[np.cos(angle), -np.sin(angle)],
                                [np.sin(angle), np.cos(angle)]])
theta[:, :, 2] = 0.5  # translation in normalized [-1, 1] coordinates
grid = F.affine_grid(theta, x.size(), align_corners=False)
x_trans = F.grid_sample(x, grid, align_corners=False)
plt.figure()
plt.imshow(x.squeeze().numpy())
plt.figure()
plt.imshow(x_trans.squeeze().numpy())
plt.show()
Based on the values of grid, the operation should work.
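You could sanity-check this by printing the grid directly (a quick sketch):

print(grid.shape)              # torch.Size([1, 10, 10, 2])
print(grid.min(), grid.max())  # roughly the [-1, 1] range shifted by the 0.5 translation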
The visualizations look a bit strange, however, which might be due to the interpolation in grid_sample. Maybe someone knows this better.
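One way to check whether the interpolation is the culprit is to sample with mode='nearest' instead of the default bilinear mode (a sketch, reusing x and grid from above):

x_nearest = F.grid_sample(x, grid, mode='nearest', align_corners=False)
plt.imshow(x_nearest.squeeze().numpy())
plt.show()

If the nearest-neighbor output looks clean, the blur in the bilinear result just comes from interpolating between neighboring pixels.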