.pt version not supported in cpp libtorch

Hello and hope you are doing well,

C++ libtorch version: 1.7
Python torch version: I first tried the conversion with 2.2.1 (the version used to convert other models that worked), then with 1.13.1

I am new to a project where my task is to convert the Depth Anything V2 model (GitHub - DepthAnything/Depth-Anything-V2: [NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation) to .pt so it can be called from a C++ script. My workflow is basically: load the .pth checkpoint (the giant version) into the model in Python, trace it with torch.jit.trace, and save the result as .pt (this is the standard flow to get a .pt and it has worked well so far). When I loaded the converted .pt from the C++ script, I first got an error stating that “model/version is not found”. After inspecting the .pt zip archive, I found the version file under model/.data/version. I manually moved the version file to the expected location, and after running the C++ script again I got:

ERROR: pxGANs::Impl::LoadAndCacheModel() : 'version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at "/pytorch/caffe2/serialize/inline_container.cc":134, please report a bug to PyTorch. Attempted to read a PyTorch file with version 10, but the maximum supported version for reading is 5. Your PyTorch installation may be too old.
Exception raised from init at /pytorch/caffe2/serialize/inline_container.cc:134 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, st...
GANs Inference - Model NOT loaded

I made sure the .pt works in Python after conversion. Afterwards, I manually edited the version file and replaced its content with “5” in a desperate attempt, but that only produced another error (I don’t think this was a good path to follow). From what I understand, the error above comes from the exporting PyTorch writing a newer file format than my libtorch can read. Do you have any suggestions for what I can do without modifying my C++ libtorch environment?
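For completeness, this is a minimal sketch of the kind of check I mean by “working in Python” (the file name and the 574x574 input size are the ones from the tracing script below):

import torch

# Load the traced TorchScript module and run it once on a dummy input,
# mirroring the 1 x 3 x 574 x 574 shape used during tracing.
model = torch.jit.load("depth_anything.pt", map_location="cpu")
model.eval()
with torch.no_grad():
    out = model(torch.rand(1, 3, 574, 574, dtype=torch.float32))
print(out.shape if torch.is_tensor(out) else type(out))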

Here is also the code I used to convert the model to .pt:

from depth_anything_v2.dpt import DepthAnythingV2
import torch

""" Convert model from pth to pt -- from python to cpp """
def convert_model(model, dim, device):
    dummy_tensor = torch.rand(1, 3, dim, dim, dtype=torch.float32).contiguous().to(device)
    return torch.jit.trace(model, dummy_tensor)

if __name__ == '__main__':
    convert_dir = "depth_anything.pt"

    #DEVICE = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
    DEVICE = 'cpu'

    model_configs = {
        'vits': {'encoder': 'vits', 'features': 64, 'out_channels': [48, 96, 192, 384]},
        'vitb': {'encoder': 'vitb', 'features': 128, 'out_channels': [96, 192, 384, 768]},
        'vitl': {'encoder': 'vitl', 'features': 256, 'out_channels': [256, 512, 1024, 1024]},
        'vitg': {'encoder': 'vitg', 'features': 384, 'out_channels': [1536, 1536, 1536, 1536]}
    }

    encoder = 'vitl'
    model_d_anything = DepthAnythingV2(**model_configs[encoder])
    model_d_anything.load_state_dict(torch.load('depth_anything_v2/checkpoints/Depth_Anything_V2_Large.pth', weights_only=True, map_location='cpu'))
    model_d_anything = model_d_anything.eval().to(DEVICE)

    # Trace the model with a 574x574 dummy input, then save it as TorchScript
    script_model = convert_model(model_d_anything, 574, DEVICE)
    script_model.save(convert_dir)
    
    print("Model converted")

And finally, here is the code I used to move model/.data/version to model/version:

import zipfile
import os
import shutil

# Paths
model_path = "depth_anything.pt"  # Replace with your .pt file path
output_path = "depth_anything_repackaged.pt"
temp_folder = "temp_model"

# Step 1: Extract the zip contents
with zipfile.ZipFile(model_path, 'r') as zip_ref:
    zip_ref.extractall(temp_folder)

# Step 2: Move 'version' from '.data/' to 'depth_anything/'
src_folder = os.path.join(temp_folder, "depth_anything", ".data")
dst_folder = os.path.join(temp_folder, "depth_anything")


# Ensure the source file exists
src_file = os.path.join(src_folder, "version")
if os.path.exists(src_file):
    # Move 'version' file to 'depth_anything'
    dst_file = os.path.join(dst_folder, "version")
    shutil.move(src_file, dst_file)
    print(f"Moved 'version' from '.data/' to 'depth_anything/'.")
    
    # Optional: overwrite the 'version' file content (left commented out)
    # with open(dst_file, 'w') as version_file:
    #     version_file.write("5")  # Change version to 5
    #     print("Updated 'version' file content to 5.")
else:
    print(f"'version' file not found in '{src_folder}'. Aborting.")

# Step 3: Repackage the updated model
with zipfile.ZipFile(output_path, 'w') as zip_ref:
    for folder_name, subfolders, filenames in os.walk(temp_folder):
        for filename in filenames:
            file_path = os.path.join(folder_name, filename)
            # Maintain relative structure inside the zip
            archive_name = os.path.relpath(file_path, temp_folder)
            zip_ref.write(file_path, arcname=archive_name)

print(f"Updated model saved as '{output_path}'.")

# Step 4: Cleanup
shutil.rmtree(temp_folder)
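And a quick way to check whether the repackaged archive still loads from Python before handing it to the C++ side (a sketch, using the file name from the script above):

import zipfile
import torch

# Inspect where the 'version' entries ended up after repackaging...
with zipfile.ZipFile("depth_anything_repackaged.pt") as zf:
    print([n for n in zf.namelist() if n.endswith("version")])

# ...and confirm TorchScript can still deserialize the rewritten zip.
model = torch.jit.load("depth_anything_repackaged.pt", map_location="cpu")
print(type(model))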

The error is raised since your libtorch==1.7 installation is too old to execute models created in PyTorch 1.13 or 2.2, so update libtorch.


It’s unfortunate that I have to change libtorch. Thank you very much for your kind answer.