The above tensor is supposed to be a batch of B images, with C channels (currently 1 grayscale channel). dimA and dimB are the result of unfolding each image into smaller blocks, and h and w are the pixel dimensions within each block.
Using convNd I have applied a 4D convolution. I would now like to apply 4D batch normalization across this tensor. Is this possible? Is there a more efficient alternative?
Use view to reshape the tensor into a shape BatchNorm1d accepts, apply BatchNorm1d, and then use view again to restore the original shape.
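A minimal sketch of this suggestion, assuming made-up sizes for the (B, C, dimA, dimB, h, w) tensor from the question; BatchNorm1d expects input of shape (N, C, L), so the four trailing dims are flattened into one:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: batch B, channels C, block grid dimA x dimB, block pixels h x w.
B, C, dimA, dimB, h, w = 2, 4, 3, 3, 5, 5
x = torch.randn(B, C, dimA, dimB, h, w)

bn = nn.BatchNorm1d(C)  # normalizes per channel

# Flatten (dimA, dimB, h, w) into a single length dim, normalize, view back.
y = bn(x.view(B, C, -1)).view(B, C, dimA, dimB, h, w)
```

The statistics are the same as a true "BatchNorm4d" would compute, since batch norm reduces over everything except the channel dim anyway.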
Define a BatchNorm4d. Batch norm internally uses torch.batch_norm, which can work on input of any size, but you might not get the cuDNN speedup (I am not sure about this).
Thanks for the response! I’ll implement the view method for now. Do I need to use contiguous in conjunction with view() to keep my tensor in the correct order?
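Trying a quick experiment myself (my own sketch, not from the answers above): view does seem to need the tensor to be contiguous after an operation like transpose, while reshape handles the copy automatically:

```python
import torch

x = torch.randn(2, 4, 3, 3, 5, 5)
t = x.transpose(1, 2)  # transposing makes the tensor non-contiguous

try:
    t.view(2, -1)
except RuntimeError:
    print("view needs contiguous memory")

flat = t.contiguous().view(2, -1)  # explicit copy, then view
flat2 = t.reshape(2, -1)           # copies only when necessary
```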
I looked up the definition of batch_norm and it seems to have two different implementations, one for CPU and one for GPU, as you mentioned. They're also written in C++, which I haven't worked with in a while!
This is the way the 1d, 2d and 3d batch norms are implemented. If you want an N-dimensional batch norm (or don't care about the dimension), you can change _check_input_dim to whatever you need. This way you don't need to worry about changing view, etc.
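A sketch of that approach: subclass _BatchNorm and relax _check_input_dim so it accepts the 6D (B, C, dimA, dimB, h, w) tensor from the question. The name BatchNorm4d is our own here, not part of torch.nn:

```python
import torch
from torch.nn.modules.batchnorm import _BatchNorm

class BatchNorm4d(_BatchNorm):
    # Only the dimension check differs from the built-in 1d/2d/3d variants;
    # the actual normalization is inherited and works on any (N, C, *) input.
    def _check_input_dim(self, input):
        if input.dim() != 6:
            raise ValueError(f"expected 6D input, got {input.dim()}D")

x = torch.randn(2, 4, 3, 3, 5, 5)
bn = BatchNorm4d(num_features=4)
y = bn(x)  # normalized over all dims except the channel dim
```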