BucketIterator gives a variable input size

Why does BucketIterator give me a variable input size?

train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size=96, device='cuda', repeat=False,
    sort=False, sort_within_batch=False,
    sort_key=lambda x: len(x.text)
)

Every time I run the code below, it gives me different results:

train_batch = next(iter(train_iterator))
print(train_batch.text.shape)
print(train_batch.LABEL.shape)

First run:
torch.Size([383, 96])
torch.Size([96])

Second run:
torch.Size([525, 96])
torch.Size([96])
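
For context, this is how torchtext pads batches when fix_length is not set on the Field: each batch is padded only to the longest example in that batch, so the sequence dimension (dim 0 here) varies from batch to batch while the batch dimension stays 96. A minimal sketch, assuming the train_iterator built above, that makes the variation visible:

# Print the first few batch shapes: the sequence dimension changes per
# batch, while the batch dimension stays fixed at 96.
for i, batch in enumerate(train_iterator):
    print(i, batch.text.shape)
    if i == 2:
        break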

Also, when I try to run the model:

for index, data in enumerate(train_iterator):
    optimizer.zero_grad()

    prediction = model(data.text).squeeze(1)
    loss = criterion(prediction, data.LABEL)
    break

I get an error about the sizes of the target labels and the predicted labels:

ValueError: Target size (torch.Size([96])) must be the same as input size (torch.Size([501]))
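
This message matches what nn.BCEWithLogitsLoss raises when the prediction and target tensors have different shapes. A standalone sketch that reproduces it, with the shapes taken from the error above (the actual model and criterion in the question may differ):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
prediction = torch.randn(501)  # stand-in for model(data.text).squeeze(1)
target = torch.zeros(96)       # one label per example in the batch

try:
    criterion(prediction, target)
except ValueError as err:
    print(err)  # Target size (torch.Size([96])) must be the same as input size (torch.Size([501]))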

The prediction size should be constant and equal to the target size (96), but it changes on every run.

Thanks!

The issue is solved: I had not provided fix_length in the Field constructor.

TEXT = Field(sequential=True, tokenize=tokenize, lower=True, stop_words=stopwords,
             include_lengths=True, batch_first=True, fix_length=200)
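
With fix_length=200, every example is padded or truncated to exactly 200 tokens, and batch_first=True puts the batch dimension first, so each text batch has a fixed shape of [96, 200]. One caveat: include_lengths=True makes batch.text a (padded_tensor, lengths) tuple rather than a bare tensor. A quick check, assuming the iterators are rebuilt with this Field:

train_batch = next(iter(train_iterator))
text, lengths = train_batch.text  # include_lengths=True returns (data, lengths)
print(text.shape)                 # torch.Size([96, 200]) for every full batch
print(train_batch.LABEL.shape)    # torch.Size([96])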