What version of torchsummary are you using? EDIT: In most conventional setups you can check it by running `pip show torchsummary` in a terminal and looking at the Version field, which prints output like the block below.
Name: torchsummary
Version: 1.5.1
Summary: Model summary in PyTorch similar to `model.summary()` in Keras
Home-page: https://github.com/sksq96/pytorch-summary
Author: Shubham Chandel @sksq96
Author-email: shubham.zeez@gmail.com
License: UNKNOWN
Location: /usr/local/lib/python3.7/dist-packages
Requires:
Required-by:
Hmm, it looks like you might be using torchsummary (one word) rather than torch-summary (with a hyphen). The one you're using was last updated in 2018; the other was updated in 2020. Looking at the repo, it appears they've since moved over to torchinfo.
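Because the old package, the fork, and the successor all have different distribution names, it can help to check which of them is actually installed. A minimal sketch using only the standard library (`importlib.metadata`, available since Python 3.8):

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version string for a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

# The old package, the hyphenated fork, and the successor:
for name in ("torchsummary", "torch-summary", "torchinfo"):
    print(name, "->", installed_version(name))
```

Whichever of the three prints a version string is the one your `import` statements will resolve against.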
The readme for torchinfo presents this example use:
from torchinfo import summary
model = ConvNet()
batch_size = 16
summary(model, input_size=(batch_size, 1, 28, 28))
So perhaps try installing torchinfo and using it like so:
from torchinfo import summary
summary(loaded_model, input_size=(1, 3, 224, 224))
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = KthreeDUNet2().to(device)
def count_parameters_and_size(model):
    # Count total parameters
    total_params = sum(p.numel() for p in model.parameters())
    # Count trainable parameters
    trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    non_trainable_params = total_params - trainable_params
    # Calculate model size: each parameter is typically stored as float32 (4 bytes)
    model_size_bytes = total_params * 4
    model_size_mb = model_size_bytes / (1024 * 1024)  # Convert to MB
    model_size_gb = model_size_mb / 1024  # Convert to GB
    return {
        'total_parameters': total_params,
        'trainable_parameters': trainable_params,
        'non_trainable_parameters': non_trainable_params,
        'model_size_mb': model_size_mb,
        'model_size_gb': model_size_gb,
    }
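The size arithmetic in that helper can be sanity-checked without torch at all. A minimal torch-free sketch (the 25-million parameter count is just an illustrative assumption, not your model's actual count):

```python
def param_size(total_params, bytes_per_param=4):
    """Estimate model size from a parameter count.
    float32 weights use 4 bytes each; float16 would use bytes_per_param=2."""
    size_mb = total_params * bytes_per_param / (1024 * 1024)
    return {'model_size_mb': size_mb, 'model_size_gb': size_mb / 1024}

# Hypothetical 25-million-parameter model stored in float32:
sizes = param_size(25_000_000)
print(round(sizes['model_size_mb'], 1))  # ~95.4 MB
```

Note this counts only the weights; optimizer state and activations during training can multiply the actual memory footprint several times over.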