How can I read .pt or .pth files?

I finished running my code, and some of the results are .pth files, which I guess contain the loss and validation plots per epoch. But when I unzip them, I see some files in .pkl format. Could you help me figure out how to see my plots? I put the download link here.

Thanks in advance,

If these files were created with torch.save, you can load them via torch.load.
The file extension (.pt or .pth) can be anything. The .pkl files you saw after unzipping are also expected: torch.save writes a zip archive containing the pickled data, so you should load the file with torch.load rather than unzipping it by hand.
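
For example, a minimal sketch (the path and file name are placeholders) that loads a checkpoint and inspects what it actually contains, since a .pth file may hold a state_dict, a full model object, or an arbitrary dictionary:

import torch

# Placeholder path; point this at your own checkpoint file
checkpoint = torch.load("/path_to_Pth_file/checkpoint.pth", map_location="cpu")

print(type(checkpoint))
# A dict may be a plain state_dict (weights only) or a training
# checkpoint that also stores things like losses; list its keys to find out
if isinstance(checkpoint, dict):
    print(checkpoint.keys())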


Thanks, I ran code like this:

import torch

# Path to your .pth or .pt file
file_path = "/path_to_Pth_file/"

# Load the model state dictionary
model_state_dict = torch.load(file_path)

# Print keys in model_state_dict to see what's available
print(model_state_dict.keys())

# Access and use model_state_dict as needed

and the result was something like this:

‘transformer.patch_embed.proj.bias’, ‘transformer.patch_embed.norm.weight’, ‘transformer.patch_embed.norm.bias’, ‘transformer.layers.0.blocks.0.norm1.weight’, ‘transformer.layers.0.blocks.0.norm1.bias’, ‘transformer.layers.0.blocks.0.attn.relative_position_bias_table’, ‘transformer.layers.0.blocks.0.attn.relative_position_index’, ‘transformer.layers.0.blocks.0.attn.qkv.weight’, ‘transformer.layers.0.blocks.0.attn.proj.weight’, ‘transformer.layers.0.blocks.0.attn.proj.bias’, ‘transformer.layers.0.blocks.0.norm2.weight’, ‘transformer.layers.0.blocks.0.norm2.bias’, ‘transformer.layers.0.blocks.0.mlp.fc1.weight’, ‘transformer.layers.0.blocks.0.mlp.fc1.bias’, ‘transformer.layers.0.blocks.0.mlp.fc2.weight’, ‘transformer.layers.0.blocks.0.mlp.fc2.bias’, ‘transformer.layers.0.blocks.1.norm1.weight’, ‘transformer.layers.0.blocks.1.norm1.bias’, ‘transformer.layers.0.blocks.1.attn.relative_position_bias_table’, ‘transformer.layers.0.blocks.1.attn.relative_position_index’, ‘transformer.layers.0.blocks.1.attn.qkv.weight’, ‘transformer.layers.0.blocks.1.attn.proj.weight’, ‘transformer.layers.0.blocks.1.attn.proj.bias’, ‘transformer.layers.0.blocks.1.norm2.weight’, ‘transformer.layers.0.blocks.1.norm2.bias’, ‘transformer.layers.0.blocks.1.mlp.fc1.weight’, ‘transformer.layers.0.blocks.1.mlp.fc1.bias’, ‘transformer.layers.0.blocks.1.mlp.fc2.weight’, ‘transformer.layers.0.blocks.1.mlp.fc2.bias’, ‘transformer.layers.0.downsample.reduction.weight’, ‘transformer.layers.0.downsample.norm.weight’, ‘transformer.layers.0.downsample.norm.bias’, ‘transformer.layers.1.blocks.0.norm1.weight’, ‘transformer.layers.1.blocks.0.norm1.bias’, ‘transformer.layers.1.blocks.0.attn.relative_position_bias_table’, ‘transformer.layers.1.blocks.0.attn.relative_position_index’, ‘transformer.layers.1.blocks.0.attn.qkv.weight’, ‘transformer.layers.1.blocks.0.attn.proj.weight’, ‘transformer.layers.1.blocks.0.attn.proj.bias’, ‘transformer.layers.1.blocks.0.norm2.weight’, ‘transformer.layers.1.blocks.0.norm2.bias’, ‘transformer.layers.1.blocks.0.mlp.fc1.weight’, ‘transformer.layers.1.blocks.0.mlp.fc1.bias’, ‘transformer.layers.1.blocks.0.mlp.fc2.weight’, ‘transformer.layers.1.blocks.0.mlp.fc2.bias’, ‘transformer.layers.1.blocks.1.norm1.weight’, ‘transformer.layers.1.blocks.1.norm1.bias’, ‘transformer.layers.1.blocks.1.attn.relative_position_bias_table’, ‘transformer.layers.1.blocks.1.attn.relative_position_index’, ‘transformer.layers.1.blocks.1.attn.qkv.weight’, ‘transformer.layers.1.blocks.1.attn.proj.weight’, ‘transformer.layers.1.blocks.1.attn.proj.bias’, ‘transformer.layers.1.blocks.1.norm2.weight’, ‘transformer.layers.1.blocks.1.norm2.bias’, ‘transformer.layers.1.blocks.1.mlp.fc1.weight’, ‘transformer.layers.1.blocks.1.mlp.fc1.bias’, ‘transformer.layers.1.blocks.1.mlp.fc2.weight’, ‘transformer.layers.1.blocks.1.mlp.fc2.bias’, ‘transformer.layers.1.downsample.reduction.weight’, ‘transformer.layers.1.downsample.norm.weight’, ‘transformer.layers.1.downsample.norm.bias’, ‘transformer.layers.2.blocks.0.norm1.weight’, ‘transformer.layers.2.blocks.0.norm1.bias’, ‘transformer.layers.2.blocks.0.attn.relative_position_bias_table’, ‘transformer.layers.2.blocks.0.attn.relative_position_index’, ‘transformer.layers.2.blocks.0.attn.qkv.weight’, ‘transformer.layers.2.blocks.0.attn.proj.weight’, ‘transformer.layers.2.blocks.0.attn.proj.bias’, ‘transformer.layers.2.blocks.0.norm2.weight’, ‘transformer.layers.2.blocks.0.norm2.bias’, ‘transformer.layers.2.blocks.0.mlp.fc1.weight’, ‘transformer.layers.2.blocks.0.mlp.fc1.bias’, 
‘transformer.layers.2.blocks.0.mlp.fc2.weight’, ‘transformer.layers.2.blocks.0.mlp.fc2.bias’, ‘transformer.layers.2.blocks.1.norm1.weight’, ‘transformer.layers.2.blocks.1.norm1.bias’, ‘transformer.layers.2.blocks.1.attn.relative_position_bias_table’, ‘transformer.layers.2.blocks.1.attn.relative_position_index’, ‘transformer.layers.2.blocks.1.attn.qkv.weight’, ‘transformer.layers.2.blocks.1.attn.proj.weight’, ‘transformer.layers.2.blocks.1.attn.proj.bias’, ‘transformer.layers.2.blocks.1.norm2.weight’, ‘transformer.layers.2.blocks.1.norm2.bias’, ‘transformer.layers.2.blocks.1.mlp.fc1.weight’, ‘transformer.layers.2.blocks.1.mlp.fc1.bias’, ‘transformer.layers.2.blocks.1.mlp.fc2.weight’, ‘transformer.layers.2.blocks.1.mlp.fc2.bias’, ‘transformer.layers.2.blocks.2.norm1.weight’, ‘transformer.layers.2.blocks.2.norm1.bias’, ‘transformer.layers.2.blocks.2.attn.relative_position_bias_table’, ‘transformer.layers.2.blocks.2.attn.relative_position_index’, ‘transformer.layers.2.blocks.2.attn.qkv.weight’, ‘transformer.layers.2.blocks.2.attn.proj.weight’, ‘transformer.layers.2.blocks.2.attn.proj.bias’, ‘transformer.layers.2.blocks.2.norm2.weight’, ‘transformer.layers.2.blocks.2.norm2.bias’, ‘transformer.layers.2.blocks.2.mlp.fc1.weight’, ‘transformer.layers.2.blocks.2.mlp.fc1.bias’, ‘transformer.layers.2.blocks.2.mlp.fc2.weight’, ‘transformer.layers.2.blocks.2.mlp.fc2.bias’, ‘transformer.layers.2.blocks.3.norm1.weight’, ‘transformer.layers.2.blocks.3.norm1.bias’, ‘transformer.layers.2.blocks.3.attn.relative_position_bias_table’, ‘transformer.layers.2.blocks.3.attn.relative_position_index’, ‘transformer.layers.2.blocks.3.attn.qkv.weight’, ‘transformer.layers.2.blocks.3.attn.proj.weight’, ‘transformer.layers.2.blocks.3.attn.proj.bias’, ‘transformer.layers.2.blocks.3.norm2.weight’, ‘transformer.layers.2.blocks.3.norm2.bias’, ‘transformer.layers.2.blocks.3.mlp.fc1.weight’, ‘transformer.layers.2.blocks.3.mlp.fc1.bias’, ‘transformer.layers.2.blocks.3.mlp.fc2.weight’, ‘transformer.layers.2.blocks.3.mlp.fc2.bias’, ‘transformer.layers.2.downsample.reduction.weight’, ‘transformer.layers.2.downsample.norm.weight’, ‘transformer.layers.2.downsample.norm.bias’, ‘transformer.layers.3.blocks.0.norm1.weight’, ‘transformer.layers.3.blocks.0.norm1.bias’, ‘transformer.layers.3.blocks.0.attn.relative_position_bias_table’, ‘transformer.layers.3.blocks.0.attn.relative_position_index’, ‘transformer.layers.3.blocks.0.attn.qkv.weight’, ‘transformer.layers.3.blocks.0.attn.proj.weight’, ‘transformer.layers.3.blocks.0.attn.proj.bias’, ‘transformer.layers.3.blocks.0.norm2.weight’, ‘transformer.layers.3.blocks.0.norm2.bias’, ‘transformer.layers.3.blocks.0.mlp.fc1.weight’, ‘transformer.layers.3.blocks.0.mlp.fc1.bias’, ‘transformer.layers.3.blocks.0.mlp.fc2.weight’, ‘transformer.layers.3.blocks.0.mlp.fc2.bias’, ‘transformer.layers.3.blocks.1.norm1.weight’, ‘transformer.layers.3.blocks.1.norm1.bias’, ‘transformer.layers.3.blocks.1.attn.relative_position_bias_table’, ‘transformer.layers.3.blocks.1.attn.relative_position_index’, ‘transformer.layers.3.blocks.1.attn.qkv.weight’, ‘transformer.layers.3.blocks.1.attn.proj.weight’, ‘transformer.layers.3.blocks.1.attn.proj.bias’, ‘transformer.layers.3.blocks.1.norm2.weight’, ‘transformer.layers.3.blocks.1.norm2.bias’, ‘transformer.layers.3.blocks.1.mlp.fc1.weight’, ‘transformer.layers.3.blocks.1.mlp.fc1.bias’, ‘transformer.layers.3.blocks.1.mlp.fc2.weight’, ‘transformer.layers.3.blocks.1.mlp.fc2.bias’, ‘transformer.norm0.weight’, ‘transformer.norm0.bias’, ‘transformer.norm1.weight’, 
‘transformer.norm1.bias’, ‘transformer.norm2.weight’, ‘transformer.norm2.bias’, ‘transformer.norm3.weight’, ‘transformer.norm3.bias’, ‘up0.conv1.0.weight’, ‘up0.conv2.0.weight’, ‘up1.conv1.0.weight’, ‘up1.conv2.0.weight’, ‘up2.conv1.0.weight’, ‘up2.conv2.0.weight’, ‘up3.conv1.0.weight’, ‘up3.conv2.0.weight’, ‘up4.conv1.0.weight’, ‘up4.conv2.0.weight’, ‘c1.0.weight’, ‘c2.0.weight’, ‘reg_head.0.weight’, ‘reg_head.0.bias’, ‘spatial_trans.grid’, ‘spatial_trans_seg.grid’])

I am still wondering how I can plot the loss from these keys…

If you look closely, these keys describe the model structure (the weights and biases of its layers), not a training history.
Loss is a computed quantity that requires a predicted output and an actual output. You can use the model_state_dict to reload the model object and run the prediction again.

e.g. code

  • Saving the model:

torch.save(model, "modelFile.pth")

  • In a different Python session, we reload the model, run a single prediction, and calculate the loss:
import numpy as np

import torch
from torch import nn

class linearRegressionOLS(nn.Module):
    def __init__(self):
        super().__init__()
        self.linearModel = nn.Linear(10, 1)

    def forward(self, x):
        return self.linearModel(x)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Reload the full model object saved above with torch.save(model, ...)
model = torch.load("modelFile.pth").to(device)
model.eval()

# Random input data, standing in for the real dataset
X = np.random.rand(100, 10).astype(np.float32)
# Expected output
Y = np.random.randint(2, size=(100, 1)).astype(np.float32)
inputVal = torch.from_numpy(X).to(device)
outputVal = torch.from_numpy(Y).to(device)

criterion = nn.MSELoss()
with torch.no_grad():
    dataOutput = model(inputVal)
# Loss between the actual prediction and the expected value
loss = criterion(dataOutput, outputVal)
print(loss.item())
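
Note that a weights-only checkpoint like the one above cannot give you a loss-vs-epoch plot, because that history was never stored in it. If you want those curves, record the losses during training and save them alongside the weights. A hypothetical sketch (the per-epoch values, model, and file name are made up for illustration):

import torch
from torch import nn
import matplotlib.pyplot as plt

model = nn.Linear(10, 1)  # stand-in for your real model

# Hypothetical per-epoch values you would append inside the training loop
train_losses = [0.9, 0.6, 0.4]
val_losses = [1.0, 0.7, 0.5]

# Save the history together with the weights so both can be reloaded later
torch.save({"model_state_dict": model.state_dict(),
            "train_losses": train_losses,
            "val_losses": val_losses},
           "checkpoint_with_history.pth")

# Later: reload the checkpoint and plot the curves
ckpt = torch.load("checkpoint_with_history.pth", map_location="cpu")
plt.plot(ckpt["train_losses"], label="train loss")
plt.plot(ckpt["val_losses"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()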