Is it advisable to use the same Dataset class for training and predicting?

I have recently started using PyTorch and I like it for its object-oriented style. However, I wonder what the recommended workflow is when making predictions with a trained model. I would like to reuse the custom Dataset class that I wrote and already use for training and validating my model. It is a map-style dataset, so it implements the __getitem__ method to return an image and a target:

import torch
from torch.utils.data import Dataset


class CDiscountDataset(Dataset):

    def __init__(self, ...):
        ...

    def __getitem__(self, image_id):
        ...
        return (
            torch.tensor(image, dtype=torch.float),
            torch.tensor(target, dtype=torch.long),
        )

However, when I use this class for prediction there are no targets to return. My current workaround is something like this:

    def __getitem__(self, image_id):
        ...
        if self.predict:  # flag set in __init__ to switch between modes
            return (
                torch.tensor(image, dtype=torch.float),
                np.nan,  # dummy target, ignored at prediction time
            )
        else:
            return (
                torch.tensor(image, dtype=torch.float),
                torch.tensor(target, dtype=torch.long),
            )
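
For context, this is roughly how I consume the dataset in both modes (a simplified sketch; the batch size, the shuffling, and the predict constructor flag stand in for my actual setup):

from torch.utils.data import DataLoader

# Training / validation: targets are real tensors.
train_loader = DataLoader(CDiscountDataset(..., predict=False),
                          batch_size=32, shuffle=True)
for images, targets in train_loader:
    ...  # forward pass, loss, backward pass

# Prediction: the second element is just np.nan, so I ignore it.
test_loader = DataLoader(CDiscountDataset(..., predict=True),
                         batch_size=32, shuffle=False)
for images, _ in test_loader:
    ...  # forward pass only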

However, I wonder if there is a better way to do it. Because this feels a bit unnatural, I have also started wondering whether it is even advisable to use the same class for training and predicting (I believe it should be, but the clunkiness of my solution makes me doubt it). Of course, I could also return just the image instead of a tuple in prediction mode, as in the sketch below, but that still requires an if-else.
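
For illustration, the alternative I mean would look roughly like this, still branching on the same self.predict flag:

    def __getitem__(self, image_id):
        ...
        image = torch.tensor(image, dtype=torch.float)
        if self.predict:
            return image  # no target to return at prediction time
        return image, torch.tensor(target, dtype=torch.long)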