Undocumented behavior with functional.normalize()

I have a 4-d tensor, and I would like to (in one go) L1-normalize each of its 3-d slices

tensor[0], tensor[1], etc.

In PyTorch I was able to accomplish this with:

tensor = torch.nn.functional.normalize(tensor, dim=[1,2,3], p=1)

This is odd, since the documentation says that dim must be an int. Either way, I would like to replicate this in libtorch, but I have no idea how.
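
For what it's worth, one workaround I'm considering (untested; it assumes the libtorch tensor methods abs(), sum() over multiple dims, and clamp_min() behave like their Python counterparts) is to do the L1 normalization by hand, dividing each slice by the sum of its absolute values over dims 1, 2, and 3:

#include <torch/torch.h>
#include <iostream>

int main() {
  torch::Tensor t = torch::rand({2, 3, 4, 5});

  // L1-normalize each 3-d slice t[i] by hand: divide by the sum of
  // absolute values over dims 1, 2, 3. clamp_min mirrors the eps=1e-12
  // default that functional.normalize uses to avoid division by zero.
  torch::Tensor norm = t.abs().sum({1, 2, 3}, /*keepdim=*/true).clamp_min(1e-12);
  torch::Tensor normalized = t / norm;

  // Each slice should now sum (in absolute value) to ~1.
  std::cout << normalized.abs().sum({1, 2, 3}) << std::endl;
  return 0;
}

Is this equivalent to the Python call above, or does libtorch's normalize have a direct way to accept multiple dims?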