Model Summary for libTorch jit model?

Hi,

Is it possible to get the model summary described in this link, but for a C++ JIT model?

This would be an enhancement over simply printing the model, as described in this link;

Thanks.

rgds,
CL

Just for sharing, this is one way to show the network architecture of a JIT model in C++.

// torchex1.cpp : This file contains the 'main' function. Program execution
// begins and ends there.
//

#include <torch/script.h>

#include <iostream>
#include <memory>

// Print `num` tab characters for indentation.
void tabs(size_t num) {
  for (size_t i = 0; i < num; i++) {
    std::cout << "\t";
  }
}

// Recursively print each module's name, its parameter shapes, and its
// submodules, indented by nesting depth.
void print_modules(const torch::jit::script::Module& module, size_t level = 0) {
  // std::cout << module.name().qualifiedName() << " (\n";
  std::cout << module.name().name() << " (\n";

  for (const auto& parameter : module.get_parameters()) {
    tabs(level + 1);
    std::cout << parameter.name() << '\t';
    std::cout << parameter.value().toTensor().sizes() << '\n';
  }

  for (const auto& child : module.get_modules()) {
    tabs(level + 1);
    print_modules(child, level + 1);
  }

  tabs(level);
  std::cout << ")\n";
}

int main(int argc, const char* argv[]) {
  torch::jit::script::Module container = torch::jit::load("net.pt");
  print_modules(container);
  return 0;
}

The output looks like:

net (
        conv1 (
                weight  [10, 1, 5, 5]
                bias    [10]
        )
        conv2 (
                weight  [20, 10, 5, 5]
                bias    [20]
        )
        conv2_drop (
        )
        fc1 (
                weight  [50, 320]
                bias    [50]
        )
        fc2 (
                weight  [10, 50]
                bias    [10]
        )
)

It may not be the smartest way; if you know a better one, please feel free to share.

thanks.

rgds,
CL


It also looks like there is script::Module::dump(), which will print something like the output below; you can toggle the flags to include only the sections that are relevant to you:

  void dump(
      bool print_method_bodies,  // you probably want this to be `false` for a summary
      bool print_attr_values,
      bool print_param_values) const;

Example output:

module __torch__.M {
  parameters {
  }
  attributes {
    training = True
  }
  methods {
    method forward {
      graph(%self : ClassType<M>,
            %x.1 : Tensor):
        %3 : Tensor = prim::CallMethod[name="other_fn"](%self, %x.1) # ../test.py:36:15
        return (%3)
  
    }
    method other_fn {
      graph(%self : ClassType<M>,
            %x.1 : Tensor):
        %4 : int = prim::Constant[value=1]()
        %3 : int = prim::Constant[value=10]() # ../test.py:33:19
        %5 : Tensor = aten::add(%x.1, %3, %4) # ../test.py:33:15
        return (%5)
  
    }
  }
  submodules {
  }
}
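For completeness, a minimal sketch of calling dump() on the module loaded in the earlier example; the flag values shown are just one plausible choice for a summary view, and this assumes the same net.pt file as above:

```cpp
#include <torch/script.h>

int main() {
  torch::jit::script::Module container = torch::jit::load("net.pt");
  // Suppress method bodies for a compact summary; keep attribute values,
  // skip (potentially huge) parameter values.
  container.dump(/*print_method_bodies=*/false,
                 /*print_attr_values=*/true,
                 /*print_param_values=*/false);
  return 0;
}
```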

Thanks again for the recommendation.