Confusion about PyTorch dropout

It seems that if I pass in a 2D tensor, say of shape (32, 64), where batch_size = 32 and input_channels = 64, through the torch.nn.Dropout() API with dropout probability p = 0.5, each element is independently zeroed out with probability 0.5 (and the surviving elements are scaled by 1 / (1 - p)).
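A small sketch of that element-wise behavior (tensor shape and p = 0.5 taken from the question above):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

x = torch.ones(32, 64)      # batch_size = 32, input_channels = 64
drop = nn.Dropout(p=0.5)    # element-wise dropout
drop.train()                # dropout is only active in training mode

out = drop(x)

# Each element is independently zeroed with probability 0.5;
# survivors are scaled by 1 / (1 - p) = 2, so every value is 0.0 or 2.0.
print(sorted(out.unique().tolist()))
```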
But can this behavior be modified so that dropout acts along the 1st or 2nd dimension instead, zeroing entire slices independently? For example, the whole row [0, :] would be either all zeros or unchanged, with p = 0.5.