Almost the same file size when saving a model's state_dict vs the full model in Google Colab

As far as I can tell from the documentation and forums, the recommended way to save a model for later inference is:

torch.save(model_name.state_dict(), "path_of_file.pt")

I tried exactly that in a Jupyter notebook on Google Colab after training the model, but it seems to have saved the complete model: the file size is almost the same as when I saved the full model (18 MB). Is it possible for the two file sizes to be the same? And can I verify, without fully loading either file, whether the state_dict was saved successfully?
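One way to check what was actually saved is to load the file onto the CPU with `map_location="cpu"`, which is cheap, and inspect the type of the loaded object: a state_dict comes back as an ordered dict of tensors, while a full model comes back as an `nn.Module`. This is a minimal sketch using a small stand-in model, not the asker's custom class:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the custom nn.Module subclass.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Linear(8, 2))

# Save both ways, as in the question.
torch.save(model.state_dict(), "state_dict.pt")
torch.save(model, "full_model.pt")

# Loading the state_dict file onto the CPU is inexpensive and reveals
# what was serialized: an OrderedDict of parameter/buffer tensors.
obj = torch.load("state_dict.pt", map_location="cpu")
print(type(obj))               # <class 'collections.OrderedDict'>
print(isinstance(obj, dict))   # True
print(list(obj.keys())[:2])    # parameter names like '0.weight', '0.bias'
```

File size alone cannot distinguish the two formats, since both store the same tensor data; the type check above is the reliable test.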

My model is a custom subclass of torch.nn.Module, and my torch version is 2.2.1+cu121.

I have checked the sizes again, and the state_dict file is only about 5 KB smaller than the full model. What could cause this?
My model has around 4 million parameters and roughly 15 layers mixing Conv2d, ReLU, BatchNorm2d, MaxPool2d, Dropout, and Linear. It was set to evaluation mode twice after training (to validate and test), and then I tried to save it.

I would expect the vast majority of a checkpoint's size to come from the tensors in model.state_dict(), not from the model's structure. Saving the full model pickles the module object, which adds only a small amount of class and structure metadata on top of the same tensor data, so a difference of only a few kilobytes is exactly what you'd expect. Your observation is reasonable.
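The arithmetic backs this up: ~4 million float32 parameters at 4 bytes each is already ~15 MB, close to the observed 18 MB file. You can compute the exact tensor payload of any model by summing over its state_dict (shown here on a stand-in `nn.Linear`, not the asker's architecture):

```python
import torch
import torch.nn as nn

# Back-of-the-envelope: ~4 million float32 parameters, 4 bytes each.
approx_mb = 4_000_000 * 4 / 1024**2
print(approx_mb)  # 15.2587890625 -> ~15 MB of raw weight data

# Exact count for a concrete model: sum tensor sizes in the state_dict.
# nn.Linear(1000, 1000) has a 1000x1000 weight and a 1000-element bias.
model = nn.Linear(1000, 1000)
sd = model.state_dict()
nbytes = sum(t.numel() * t.element_size() for t in sd.values())
print(nbytes)  # 4004000 bytes (1000*1000*4 + 1000*4)
```

Note that the state_dict also includes buffers such as BatchNorm2d's running mean and variance, which are saved in both formats, so eval mode before saving makes no difference to the file size.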