Error when converting PyTorch model to TorchScript: "RuntimeError: undefined value super"

I got this error while trying to convert an nn.Module to TorchScript.

RuntimeError:
undefined value super:
  File "D:\project\Kikai project\DIFRINT\models\correlation\correlation.py", line 147
        def __init__(self):
                super(FunctionCorrelation, self).__init__()
  ~~~~~ <--- HERE
'FunctionCorrelation.__init__' is being compiled since it was called from '__torch__.models.correlation.correlation.FunctionCorrelation'
  File "D:\project\Kikai project\DIFRINT\models\correlation\correlation.py", line 238
        def forward(self, tensorFirst, tensorSecond):
                return FunctionCorrelation.apply(tensorFirst, tensorSecond)
         ~~~~~~~~~~~~~~~~~~~ <--- HERE
'__torch__.models.correlation.correlation.FunctionCorrelation' is being compiled since it was called from 'ModuleCorrelation.forward'
  File "D:\project\Kikai project\DIFRINT\models\correlation\correlation.py", line 238
        def forward(self, tensorFirst, tensorSecond):
                return FunctionCorrelation.apply(tensorFirst, tensorSecond)
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE

This is my code:

class FunctionCorrelation(torch.autograd.Function):
	def __init__(self):
		super(FunctionCorrelation, self).__init__()
		# super().__init__()
	# end

	@staticmethod
	def forward(self, first, second):
		self.save_for_backward(first, second)

and

class ModuleCorrelation(torch.nn.Module):
	def __init__(self):
		super(ModuleCorrelation, self).__init__()
		# super().__init__()
	# end

	def forward(self, tensorFirst, tensorSecond):
		return FunctionCorrelation.apply(tensorFirst, tensorSecond)
	# end

I am using torch==1.6.0, torchvision==0.7.0, cupy-cuda102==7.8.0, Pillow==6.1.0.
I think the problem may come from my use of torch.autograd.Function, since torch==1.6.0 requires forward to be a @staticmethod.
Can someone help me?
Thanks.

At this point, you cannot use autograd.Functions with the JIT.
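To illustrate: if the forward pass uses only ordinary tensor ops, scripting the module works; it is the `FunctionCorrelation.apply` call that the compiler rejects. A minimal sketch (the element-wise product here is a placeholder, not the real correlation kernel):

```python
import torch

class ModuleCorrelation(torch.nn.Module):
    def forward(self, tensorFirst: torch.Tensor, tensorSecond: torch.Tensor) -> torch.Tensor:
        # Plain tensor ops compile fine under torch.jit.script;
        # an autograd.Function.apply call here would raise the error above.
        return tensorFirst * tensorSecond

scripted = torch.jit.script(ModuleCorrelation())
```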

@tom
So is there something that can replace torch.autograd.Function that TorchScript supports?

Well, if you push your autograd function to C++, you can wrap the function with a JIT operator. You cannot currently do that in Python, though.

I once made a prototype of a scriptable function interface, but I never pushed it to the point where it was a mergeable PR.


@tom
In my case, I have a model trained in PyTorch and I want to convert it to C++, so I don't need to train or calculate gradients with the autograd function, just convert the trained model.
Maybe there is a way to replace the autograd function with a normal function so I can script it?

If you trace the model, you can just use torch.jit.is_tracing() to check whether you want the JIT-compatible bit and use the function otherwise.
There also is torch.jit.is_scripting() for the scripting equivalent.
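A sketch of that pattern, assuming the model is traced rather than scripted (the element-wise product stands in for the real correlation computation):

```python
import torch

class ModuleCorrelation(torch.nn.Module):
    def forward(self, first: torch.Tensor, second: torch.Tensor) -> torch.Tensor:
        if torch.jit.is_tracing():
            # Path recorded by torch.jit.trace: plain tensor ops only
            # (placeholder math, not the real correlation kernel).
            return first * second
        # Eager path: free to call the autograd.Function here,
        # e.g. FunctionCorrelation.apply(first, second).
        return first * second
```

Since tracing only records the branch that actually executes, the traced graph never references the autograd function.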

Best regards

Thomas

@tom
thank you very much

Hi, I got the same error:

RuntimeError: undefined value super:

while trying to TorchScript torch.nn.BCEWithLogitsLoss. Do you have any idea?

It seems that in my case the reason is that only instances of nn.Module can be scripted. Therefore, it is necessary to first create an instance of torch.nn.BCEWithLogitsLoss and then script that instance.
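For example, passing the class itself to torch.jit.script fails, while scripting an instance works:

```python
import torch

# torch.jit.script(torch.nn.BCEWithLogitsLoss)   # passing the class fails

# Scripting an instance works:
loss_fn = torch.jit.script(torch.nn.BCEWithLogitsLoss())

logits = torch.tensor([0.0, 2.0])
targets = torch.tensor([0.0, 1.0])
loss = loss_fn(logits, targets)
```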