How to rotate a cube with a simple rotation matrix

I have a rotation matrix:
rotation_A = torch.tensor([[[ 0.7198, -0.6428, -0.2620],
                            [ 0.5266,  0.7516, -0.3973],
                            [ 0.4523,  0.1480,  0.8795]]])

and I want to rotate a cube of size torch.Size([74, 626, 766]).

How can I do that with PyTorch, or with plain Python?

Hi Eran!

If by “cube” you mean that you interpret your 74x626x766 tensor
as a three-dimensional “image,” where each element of the tensor
is a “voxel,” then it is not especially simple to rotate it, and I am not
aware of anything built into pytorch that will do it for you.

To rotate an image (three-dimensional or not), you have to do the
actual rotation, interpolate pixels (voxels), pad or crop, etc.
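
If going through numpy is an option for you, scipy’s ndimage module
implements that whole pipeline. Here is a minimal sketch, assuming you
want to pivot on the center of the volume and fill the uncovered region
with zeros; volume and R below are stand-ins for your own data and
rotation matrix:

import numpy as np
from scipy import ndimage

volume = np.random.rand(74, 626, 766).astype(np.float32)   # stand-in for your data
R = np.array([[0.7198, -0.6428, -0.2620],
              [0.5266,  0.7516, -0.3973],
              [0.4523,  0.1480,  0.8795]])

# affine_transform maps each output coordinate o to the input coordinate
# R @ o + offset; choosing the offset this way pivots on the volume center
center = (np.array(volume.shape) - 1) / 2.0
offset = center - R @ center
rotated = ndimage.affine_transform(volume, R, offset=offset,
                                   order=1, mode='constant', cval=0.0)

Here order=1 requests (tri)linear interpolation, and mode='constant'
with cval=0.0 handles the “pad” part by filling everything that rotates
in from outside the volume with zeros.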

If your “cube” is something different from what I outlined above, could
you describe your use case in more detail?

Best.

K. Frank

Hi KFrank, and thank you for the reply.

I have a CT image (slice, height, width) and I want to rotate it (my goal is data augmentation: after each rotation I sum up all the slices), roughly as sketched below.
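
Hypothetical sketch of the augmentation step I have in mind, where
rotate_volume stands for whatever rotation routine this thread produces:

rotated = rotate_volume(ct_volume, rotation_A)   # (74, 626, 766) -> (74, 626, 766)
projection = rotated.sum(dim=0)                  # sum over the slice axis -> (626, 766)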

I used this code (I saw it on this forum):

rotation_A = torch.tensor([[[0.7198, 0.0,    0.0],
                            [0.5266, 0.7516, 0.0],
                            [0.0,    0.0,    1.0]]])

import torch
import torch.nn.functional as F

input_tensor = cube_TF.squeeze(0).float()   # (d, h, w)
rotation_A = rotation_A.squeeze()           # (3, 3)

def get_3d_locations(d, h, w, device_):
    # voxel coordinates of every output position, in (x, y, z) order
    locations_x = torch.linspace(0, w - 1, w).view(1, 1, 1, w).to(device_).expand(1, d, h, w)
    locations_y = torch.linspace(0, h - 1, h).view(1, 1, h, 1).to(device_).expand(1, d, h, w)
    locations_z = torch.linspace(0, d - 1, d).view(1, d, 1, 1).to(device_).expand(1, d, h, w)
    # stack locations into a (d*h*w, 3, 1) batch of column vectors
    locations_3d = torch.stack([locations_x, locations_y, locations_z], dim=4).view(-1, 3, 1)
    return locations_3d

s = rotation_A   # the 3x3 rotation matrix (the snippet originally loaded this from numpy)
device_ = input_tensor.device
d, h, w = input_tensor.shape
input_tensor = input_tensor.unsqueeze(0).unsqueeze(0)   # (1, 1, d, h, w), as grid_sample expects
# get x, y, z indices of the target 3d data
locations_3d = get_3d_locations(d, h, w, device_)
# rotate the target positions back to the source coordinates
rotated_3d_positions = torch.bmm(s.view(1, 3, 3).expand(d * h * w, 3, 3), locations_3d).view(1, d, h, w, 3)
rot_locs = torch.split(rotated_3d_positions, split_size_or_sections=1, dim=4)
# map voxel indices [0, n - 1] into grid_sample's normalised range [-1, 1]
normalised_locs_x = (2.0 * rot_locs[0] - (w - 1)) / (w - 1)
normalised_locs_y = (2.0 * rot_locs[1] - (h - 1)) / (h - 1)
normalised_locs_z = (2.0 * rot_locs[2] - (d - 1)) / (d - 1)
grid = torch.stack([normalised_locs_x, normalised_locs_y, normalised_locs_z], dim=4).view(1, d, h, w, 3)
# use the destination voxel positions to sample the input 3d data trilinearly
# ('bilinear' means trilinear for 5-d inputs; align_corners=True matches the
# normalisation above, which maps index 0 to -1 and index n - 1 to +1)
rotated_signal = F.grid_sample(input=input_tensor, grid=grid,
                               mode='bilinear', align_corners=True)
rotated_signal = rotated_signal.squeeze(0).squeeze(0)   # back to (d, h, w)

The problem with this code is that it doesn’t create a new, larger grid, so a lot of my image turns black (as if it disappeared). I think it isn’t padding, and I don’t know how to do that.
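
From reading around, I think the black regions appear because the code
rotates about the corner voxel (0, 0, 0), so a large part of the volume
is mapped outside the sampling grid. Would something like this be the
right fix (untested sketch on my side): zero-pad the input first so
nothing gets cropped, and shift the coordinates so the rotation pivots
on the volume center?

import torch.nn.functional as F

pad = 64   # placeholder amount; should depend on how far the corners move
input_tensor = F.pad(input_tensor, (pad,) * 6)   # zero-pad w, h and d
d, h, w = input_tensor.shape[2:]                 # the new, padded sizes
locations_3d = get_3d_locations(d, h, w, device_)

# subtract the center before rotating and add it back afterwards, so the
# rotation pivots on the middle of the volume instead of voxel (0, 0, 0)
center = torch.tensor([(w - 1) / 2.0, (h - 1) / 2.0, (d - 1) / 2.0],
                      device=device_).view(1, 3, 1)
rotated_3d_positions = (torch.bmm(s.view(1, 3, 3).expand(d * h * w, 3, 3),
                                  locations_3d - center) + center).view(1, d, h, w, 3)
# ...then normalise and call grid_sample exactly as above, with the padded d, h, w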