How to pad a batch with zeros

Is there a better way to do this? How can I pad a tensor with zeros without creating a new tensor object? I need my inputs to always have the same batch size, so I want to pad inputs that are smaller than the batch size with zeros. It is like padding with zeros in NLP when a sequence is shorter than the maximum length, except here the padding is along the batch dimension.

Currently I create a new tensor, but because of that my GPU runs out of memory. I don't want to halve the batch size just to handle this operation.

import torch
from torch import nn

class MyModel(nn.Module):
    def __init__(self, batchsize=16):
        super().__init__()
        self.batchsize = batchsize
    
    def forward(self, x):
        b, d = x.shape
        
        print(x.shape) # torch.Size([7, 32])

        if b != self.batchsize: # 2. I need batches of size 16; if this batch isn't 16, I want to pad the rest with zeros
            # 3. So I create a new tensor, but this is bad as it greatly increases the GPU memory required
            new_x = torch.zeros(self.batchsize, d, device=x.device, dtype=x.dtype)
            new_x[0:b, :] = x
            x = new_x
            b = self.batchsize
        
        print(x.shape) # torch.Size([16, 32])

        return x

model = MyModel()
x = torch.randn((7, 32)) # 1. The batch dimension is 7 because this is the last batch, and I don't want to drop_last
y = model(x)
print(y.shape)

If you use a DataLoader, drop_last=True may work.
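
For reference, a minimal sketch of that option, using a hypothetical TensorDataset as a stand-in for the real data; drop_last=True simply discards the final incomplete batch:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical dataset of 103 samples; with batch_size=16 the last batch would only have 7 samples.
dataset = TensorDataset(torch.randn(103, 32))

# drop_last=True discards that incomplete final batch, so every batch has exactly 16 samples.
loader = DataLoader(dataset, batch_size=16, drop_last=True)

for (batch,) in loader:
    print(batch.shape) # always torch.Size([16, 32])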

I intentionally want to avoid drop_last=True.

My current solution is:

import torch.nn.functional as F

x = F.pad(x, (0, 0, 0, self.batchsize - b)) # pad (batchsize - b) zero rows onto the batch dimension
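
For context, a minimal sketch of how that line fits into the forward pass from the question. F.pad orders the padding pairs from the last dimension backwards, so (0, 0, 0, n) pads nothing on the feature dimension and adds n zero rows at the end of the batch dimension:

import torch
from torch import nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self, batchsize=16):
        super().__init__()
        self.batchsize = batchsize

    def forward(self, x):
        b, d = x.shape
        if b != self.batchsize:
            # (left, right, top, bottom): keep the feature dim, pad (batchsize - b) rows below
            x = F.pad(x, (0, 0, 0, self.batchsize - b))
        return x

model = MyModel()
y = model(torch.randn(7, 32))
print(y.shape) # torch.Size([16, 32])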