# Custom Convolutional Operations

I have a tensor of size `[4, 3, 50, 50, 50]`. For each spatial location I want to take the 4×3 slice, invert the 3×3 matrix formed by its first three rows, and multiply that inverse by the negated last row. I came up with a solution that uses a lot of `unbind` operations:

```python
z_ls = []
for z in torch.unbind(sum_conv, -1):
    y_ls = []
    for y in torch.unbind(z, -1):
        x_ls = []
        for x in torch.unbind(y, -1):
            # invert the 3x3 block and multiply by the negated last row
            res = torch.inverse(x[:-1, ...]).mm(-x[-1, ...][..., None])  # This should be right
            x_ls.append(res)
        x_ls = torch.cat(x_ls, -1)
        y_ls.append(x_ls)
    y_ls = torch.stack(y_ls, -1)[..., None]
    z_ls.append(y_ls)
z_ls = torch.cat(z_ls, -1)
```

I’m wondering if there is a better way to go about this. Is there a way I can perform some sort of convolution operation and then just modify it so I perform other operations instead of normal matrix multiplication?

I’m not sure I fully understood your question, but the following code gives the same output as your loop.
The `permute` function can be helpful here.

```python
a = torch.rand((4, 3, 50, 50, 50))

# Move the spatial dims to the front: (50, 50, 50, 4, 3)
b = a.permute(2, 3, 4, 0, 1)
c = b[..., :-1, :].contiguous().view(-1, 3, 3)
# Negate the last row to match the -x[-1, ...] in your loop
d = -b[..., -1, :].contiguous().view(-1, 3, 1)

inv = torch.stack([mat.inverse() for mat in torch.unbind(c, 0)])
out = torch.bmm(inv, d)
out = out.transpose(0, 1).contiguous().view(3, 50, 50, 50)
```

Note that I still have to unbind once and loop to compute the inverses.
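On recent PyTorch versions that remaining loop can be avoided too: `torch.linalg.solve` accepts batched inputs, so the whole computation collapses into a single call, and solving the system directly is also more numerically stable than forming the inverse. A minimal sketch, assuming PyTorch ≥ 1.8 (where `torch.linalg` is available) and using smaller spatial dims for brevity:

```python
import torch

torch.manual_seed(0)
a = torch.rand(4, 3, 5, 5, 5)  # stand-in for the (4, 3, 50, 50, 50) tensor

# Move the 4x3 "matrix" dims to the end: (D, H, W, 4, 3)
b = a.permute(2, 3, 4, 0, 1)
A = b[..., :-1, :]                        # (D, H, W, 3, 3) batch of systems
rhs = -b[..., -1:, :].transpose(-1, -2)   # (D, H, W, 3, 1), negated as in the loop

# Batched solve: equivalent to inverse(A) @ rhs at every spatial location
out = torch.linalg.solve(A, rhs)          # (D, H, W, 3, 1)
out = out.squeeze(-1).permute(3, 0, 1, 2) # back to (3, D, H, W)
```

The same reshaping as above applies; only the `stack`/`unbind` loop and the `bmm` are replaced by the one batched `solve` call.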
