How do I rotate a 3D array?

I have an array which is a DICOM volume with shape [180, 512, 512] = (z, x, y axes).

The values to rotate this 3D array by are:
X-axis: [0.99896972112790, -0.0428598338637, 0.01491746998657]

Y-axis: [0.04301557426519, 0.99902150615699, -0.0102805936812].

I have no idea how to rotate a 3D array with these values.

I think there are a lot of experts here working with numpy and I would really appreciate it if you could help.

Thank you for your reply.

The above values are the ImageOrientationPatient value from DICOM, and I referenced the two sites below.

https://nipy.org/nibabel/dicom/dicom_orientation.html
http://dicomiseasy.blogspot.com/2013/06/getting-oriented-using-image-plane.html

If I understand right, ImageOrientationPatient gives two normalized 3D vectors. In my case,
the X vector is the row direction, [0.99896972112790, -0.0428598338637, 0.01491746998657] = [Xx, Xy, Xz],
and the Y vector is the column direction,
[0.04301557426519, 0.99902150615699, -0.0102805946813] = [Yx, Yy, Yz].

When referring to the first referenced site, the 3x3 rotation matrix is probably:
[Xx, Yx, 0]
[Xy, Yy, 0]
[Xz, Yz, 0]
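
For what it's worth, here is a minimal numpy sketch of building the full 3x3 matrix from these values. The third column above is zero because a single slice has no slice direction; a common assumption (not stated in the question) is to take the slice direction as the cross product of the row and column vectors:

import numpy as np

# row (X) and column (Y) direction cosines from ImageOrientationPatient
row_dir = np.array([0.99896972112790, -0.0428598338637, 0.01491746998657])
col_dir = np.array([0.04301557426519, 0.99902150615699, -0.0102805936812])

# assumption: slice direction = cross product of the row and column
# directions, the usual convention for an orthonormal DICOM orientation
slice_dir = np.cross(row_dir, col_dir)

# the columns of the rotation matrix are the three direction vectors
R = np.stack([row_dir, col_dir, slice_dir], axis=1)

print(R)
print(np.linalg.det(R))  # should be ~1 for a proper rotation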


import torch
import torch.nn.functional as F

# Assume you have 2 values for each z,y,x location
data = torch.rand(2, 64, 256, 256).float()

# let's create a rotation matrix
# for a sanity check, take the identity as the rotation matrix first
rot_mat = torch.FloatTensor([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) # identity
# then test with the following (note: a proper 90-degree rotation around z
# has determinant +1; swapping rows as [[0,1,0],[1,0,0],[0,0,1]] is a reflection)
# rot_mat = torch.FloatTensor([[0, -1, 0], [1, 0, 0], [0, 0, 1]]) # 90 degree rotation around z
print('rotation matrix \n', rot_mat)
print('determinant === ', torch.det(rot_mat))



def get_3d_locations(d, h, w, device_):
    # build the (x, y, z) index of every voxel in the d x h x w volume
    locations_x = torch.linspace(0, w-1, w).view(1, 1, 1, w).to(device_).expand(1, d, h, w)
    locations_y = torch.linspace(0, h-1, h).view(1, 1, h, 1).to(device_).expand(1, d, h, w)
    locations_z = torch.linspace(0, d-1, d).view(1, d, 1, 1).to(device_).expand(1, d, h, w)
    # stack locations into one (d*h*w, 3, 1) column vector per voxel
    locations_3d = torch.stack([locations_x, locations_y, locations_z], dim=4).view(-1, 3, 1)
    return locations_3d

def rotate(input_tensor, rotation_matrix):
    device_ = input_tensor.device
    _, d, h, w = input_tensor.shape
    input_tensor = input_tensor.unsqueeze(0)
    # get x, y, z indices of the target 3d data
    locations_3d = get_3d_locations(d, h, w, device_)
    # rotate target positions into the source coordinate frame
    rotated_3d_positions = torch.bmm(rotation_matrix.view(1, 3, 3).expand(d*h*w, 3, 3), locations_3d).view(1, d, h, w, 3)
    rot_locs = torch.split(rotated_3d_positions, split_size_or_sections=1, dim=4)
    # change the range of the x, y, z locations to [-1, 1], as grid_sample expects
    normalised_locs_x = (2.0*rot_locs[0] - (w-1))/(w-1)
    normalised_locs_y = (2.0*rot_locs[1] - (h-1))/(h-1)
    normalised_locs_z = (2.0*rot_locs[2] - (d-1))/(d-1)
    grid = torch.stack([normalised_locs_x, normalised_locs_y, normalised_locs_z], dim=4).view(1, d, h, w, 3)
    # sample the input 3d data at the rotated positions; use mode='bilinear'
    # here if you want trilinear interpolation instead of nearest-neighbour
    rotated_signal = F.grid_sample(input=input_tensor, grid=grid, mode='nearest', align_corners=True)
    return rotated_signal.squeeze(0)

rotated_data = rotate(data, rot_mat)
print(data.shape)
print(rotated_data.shape)
print(torch.mean(rotated_data - data)) # 0, for identity rotation
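
If the identity sanity check passes, you can then try the real matrix. A short usage sketch, assuming R is the 3x3 numpy matrix built from ImageOrientationPatient as in the earlier sketch:

rot_mat = torch.from_numpy(R).float()  # R from the numpy sketch above (assumption)
rotated_data = rotate(data, rot_mat)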


Thank you for your reply!

(Two screenshots, "Capture 1" and "Capture 2", showing how the rotation/affine matrix was computed.)
You said 3x3 matrix. In my case, I computed the rotation matrix as in the figures above and got a 4x4 matrix.

[[ 2.1219860e-01 9.1372589e-03 -1.4462248e-02 -1.1527188e+02]
[-9.1041764e-03 2.1220960e-01 1.0911685e-02 -9.0879768e+01]
[ 3.1687310e-03 -2.1837775e-03 9.9983585e-01 -7.8977943e+01]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 1.0000000e+00]]

Perhaps the above matrix is correct.

Can you please help me apply a 4x4 matrix to a 3D array again?
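
For reference, a minimal numpy sketch of applying such a 4x4 affine to a voxel index via homogeneous coordinates, using the matrix posted above. Note that whether the index order is (column, row, slice) or (row, column, slice) depends on how the affine was built, so check it against the referenced page:

import numpy as np

affine = np.array([
    [ 2.1219860e-01,  9.1372589e-03, -1.4462248e-02, -1.1527188e+02],
    [-9.1041764e-03,  2.1220960e-01,  1.0911685e-02, -9.0879768e+01],
    [ 3.1687310e-03, -2.1837775e-03,  9.9983585e-01, -7.8977943e+01],
    [ 0.0000000e+00,  0.0000000e+00,  0.0000000e+00,  1.0000000e+00],
])

# map one voxel index to patient (RCS) coordinates
voxel = np.array([0, 511, 0, 1.0])  # homogeneous coordinates; illustrative index
rcs = affine @ voxel
print(rcs[:3])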

Is your 3D data given in pixel coordinates or in the RCS? Can you tell me from which coordinate system to which you want to rotate?

My 3D array is a stack of pixel values obtained with the following code.

import glob
import os
import numpy as np
import pydicom

# PathDicom is the folder with the .dcm files; numericalSort is a
# user-defined key that sorts the filenames by the numbers they contain
dcm_list = glob.glob(os.path.join(PathDicom, "*.dcm"))
dcms = sorted(dcm_list, key=numericalSort)
slices = [pydicom.read_file(dcm) for dcm in dcms]
slices.sort(key=lambda x: float(x.InstanceNumber))
if ('RescaleIntercept' in slices[0] and 'RescaleSlope' in slices[0]):
    slope = slices[0].RescaleSlope
    intercept = slices[0].RescaleIntercept
    image = np.stack([s.pixel_array*slope + intercept for s in slices], axis=0)
else:
    image = np.stack([s.pixel_array for s in slices], axis=0)

The following example shows the columns from the 512th row of the 0th slice.

I want the voxel (pixel) coordinates to be converted to RCS coordinates using the affine formula.

Thanks!

Hi Ted,
I tried your code, and it works. But how can I run it on the GPU?
I added the lines below:

device_gpu = torch.device("cuda") # to gpu
data = torch.rand((2, 64, 256, 256), device=device_gpu).float()

but I got an error when running the rotate function, a RuntimeError:

Traceback (most recent call last)
<ipython-input> in <module>
----> 1 rotated_data = rotate(data, rot_mat)
      2 print(data.shape)
      3 print(rotated_data.shape)
      4 print(torch.mean(rotated_data - data))

<ipython-input> in rotate(input_tensor, rotation_matrix)
     14 locations_3d = get_3d_locations(d, h, w, device_)
     15 # rotate target positions to the source coordinate
---> 16 rotated_3d_positions = torch.bmm(rotation_matrix.view(1, 3, 3).expand(d*h*w, 3, 3), locations_3d).view(1, d,h,w,3)
     17 rot_locs = torch.split(rotated_3d_positions, split_size_or_sections=1, dim=4)
     18 # change the range of x,y,z locations to [-1,1]

RuntimeError: Expected tensor to have CPU Backend, but got tensor with CUDA Backend (while checking arguments for bmm)
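
The error says the two bmm operands live on different devices: data (and hence locations_3d) is on the GPU while rot_mat was created on the CPU. A minimal sketch of one possible fix, moving both tensors to the same device before calling Ted's rotate():

device_gpu = torch.device("cuda")
data = torch.rand((2, 64, 256, 256), device=device_gpu).float()
rot_mat = torch.eye(3, device=device_gpu)  # identity, now on the GPU
rotated_data = rotate(data, rot_mat)

Alternatively, rotate() itself could call rotation_matrix = rotation_matrix.to(input_tensor.device) at the top.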

Hi, Ted
How can I run your code on the GPU? I got a runtime error when running the rotate function.

I am going to remove the answer above (it's wrong) and post a correct solution. What is the shape of the data that you want to rotate? How is your rotation defined (is it a 3x3 matrix)?

My data has shape (depth, height, width) = (512, 512, 2048), and dtype='uint16'. As for the rotation matrix, I have no idea. What I know is that the angle is 30 degrees around the last axis (axis=-1).
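
For what it's worth, a 30-degree rotation about one axis is just a 2D rotation in the other two. A minimal sketch, assuming you want the rotation about the x (width) axis in the [x, y, z] ordering that the rotate() code above uses:

import math
import torch

theta = math.radians(30.0)
c, s = math.cos(theta), math.sin(theta)

# rotation about the x (width) axis; y and z are mixed, x is unchanged
rot_mat = torch.FloatTensor([
    [1, 0,  0],
    [0, c, -s],
    [0, s,  c],
])
print(torch.det(rot_mat))  # ~1.0 for a proper rotation

If your layout means a different axis, move the 2x2 cos/sin block to the corresponding rows and columns.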

For anybody still working on this using Ted's code, I think that it is incomplete. At least in my experiments, the sampling grid ended up outside of the FOV. That is because the fixed point (about which the 3D locations are rotated) is not the center of the image (0, 0) but the upper-left corner (-1, -1). To project it back, you want to add the following lines:

# change the range of x,y,z locations to [-1,1]   !!! values might still be outside [-1,1]
normalised_locs_x = (2.0*rot_locs[0] - (w-1))/(w-1)
normalised_locs_y = (2.0*rot_locs[1] - (h-1))/(h-1)
normalised_locs_z = (2.0*rot_locs[2] - (d-1))/(d-1)
# recenter grid into the FOV
normalised_locs_x = norm(normalised_locs_x, w-1)
normalised_locs_y = norm(normalised_locs_y, h-1)
normalised_locs_z = norm(normalised_locs_z, d-1)
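
The norm helper is not defined in the post above, so here is a minimal sketch of one plausible implementation (an assumption, not the original poster's code). Because the original voxel locations are symmetric about the volume centre, the mean of the rotated, normalised locations equals the normalised rotated centre; subtracting it recenters the grid as if the rotation had been about the centre:

def norm(normalised_locs, dim_size_minus_one):
    # hypothetical recentering helper (not from the original post):
    # shift the normalised locations so their mean is 0, i.e. make the
    # rotation effectively about the volume centre rather than the corner;
    # dim_size_minus_one is kept only to match the calls above
    return normalised_locs - normalised_locs.mean()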