Where is _TensorBase __mul__ assigned/defined?

Hi, I’m extending PyTorch for complex types, and I’m getting a TypeError when I multiply two complex tensors.

TypeError: mul received an invalid combination of arguments - got (float), but expected one of:

  • (float value)
    didn’t match because some of the arguments have invalid types: (float)
  • (torch.ZFloatTensor other)
    didn’t match because some of the arguments have invalid types: (!float!)

In _TensorBase there is this function, but I can’t find where self.mul() is defined.

def __mul__(self, other):
    return self.mul(other)
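For context, the delegation itself is trivial, so the TypeError must originate in the underlying mul call, not in __mul__. A minimal sketch of that pattern, using a hypothetical Tensor class (not PyTorch’s):

```python
class Tensor:
    """Hypothetical tensor class (not PyTorch's) to illustrate the pattern."""

    def __init__(self, kind):
        self.kind = kind  # e.g. "cpu" or "cuda"

    def mul(self, other):
        # The generated C backend rejects arguments that match no overload;
        # that check is what raises the TypeError, not __mul__ itself.
        if not isinstance(other, Tensor) or other.kind != self.kind:
            raise TypeError("mul received an invalid combination of arguments")
        return Tensor(self.kind)

    def __mul__(self, other):
        # Same thin delegation as in _TensorBase.
        return self.mul(other)
```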

Can somebody help me out?

I think it’s in torch/csrc/generic/methods/TensorMath.cwrap, which is then parsed to generate TensorMethods.cpp

But where can I find the code for TypeError handling?

Hi,

.mul is one of the functions that are automatically generated; you can find its definition here.

Yes I found it. But where is the code generator for TypeError handling?

You should check this discussion, I think your question is answered there.
Let me know if you need more details.

Got it, thanks a lot!

Where is THPUtils_(unpackReal) defined?

As in most of the C/C++ backend, THPUtils_ is a macro that is specialized for each type; you can find its definition here.
Note that this macro is always used inside a “generic” file, for which Real is the type currently being considered.
The function names for THPUtils_(unpackReal) will thus be THFloatUtils_unpackReal for float, THDoubleUtils_unpackReal for double, etc.
You can find the definition of all these functions here.

As a rule of thumb (I am not sure it’s true for 100% of the code base), you have the following:

  • A function called TH(P/C/D/NN/CUNN)XXX_(YYY)(args) (where one or none of P/C/D/NN/CUNN appears as a prefix, depending on the library) has one C symbol per type: for float it is TH(P)FloatXXX_YYY(args), for double TH(P)DoubleXXX_YYY(args), and so on.
  • Such a function is defined either inside a generic/ subfolder as TH(P)XXX_(YYY)(args), or in the main folder (not in a generic subfolder) as TH(P)FloatXXX_YYY(args) (which is the case for THPUtils_(unpackReal)).
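The name pasting described above can be sketched in Python. The real mechanism is C preprocessor token concatenation (TH_CONCAT-style macros); this is only an illustration:

```python
def specialized_name(lib, real, fn):
    # Python stand-in for the preprocessor token pasting done by the
    # TH_CONCAT-style macros (illustration only, not the real mechanism).
    return f"{lib}{real}Utils_{fn}"

# THPUtils_(unpackReal) inside a generic file compiled with Real = Float:
print(specialized_name("TH", "Float", "unpackReal"))   # THFloatUtils_unpackReal
print(specialized_name("TH", "Double", "unpackReal"))  # THDoubleUtils_unpackReal
```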

Thank you. I still haven’t gotten to the bottom of this issue. Maybe someone can spot my mistake. The generated code in TensorMethods.cpp is:

PyObject * THPTensor_(mul)(PyObject *self, PyObject *args, PyObject *kwargs)
{
PyObject *__kw_value = NULL;
PyObject *__kw_other = NULL;
if (kwargs) {
  __kw_value = PyDict_GetItemString(kwargs, "value");
  __kw_other = PyDict_GetItemString(kwargs, "other");
}

HANDLE_TH_ERRORS
int __tuplecount = args ? PyTuple_Size(args) : 0;
int __dictcount = kwargs ? PyDict_Size(kwargs) : 0;
int __argcount = __tuplecount + __dictcount;
PyObject *__out;

__out = kwargs ? PyDict_GetItemString(kwargs, "out") : NULL;
if (__out == Py_None) { __out = NULL; __dictcount--; __argcount--; }



if (__out != NULL &&
      __argcount == 2 &&
      (PyObject*)Py_TYPE(__out) == THPTensorClass &&
      (__tuplecount > 0 || __kw_value) && THPUtils_(checkReal)((__tuplecount > 0 ? PyTuple_GET_ITEM(args, 0) : __kw_value))) {

[…]

} else if (__out != NULL &&
      __argcount == 2 &&
      (PyObject*)Py_TYPE(__out) == THPTensorClass &&
      (__tuplecount > 0 || __kw_value) && THPUtils_(checkPart)((__tuplecount > 0 ? PyTuple_GET_ITEM(args, 0) : __kw_value))) {
  [...]
} else if (__out != NULL &&
      __argcount == 2 &&
      (PyObject*)Py_TYPE(__out) == THPTensorClass &&
      (__tuplecount > 0 || __kw_other) && (PyObject*)Py_TYPE((__tuplecount > 0 ? PyTuple_GET_ITEM(args, 0) : __kw_other)) == THPTensorClass) {

[…]

} else if (__out == NULL &&
      __argcount == 1 &&
      (__tuplecount > 0 || __kw_value) && THPUtils_(checkReal)((__tuplecount > 0 ? PyTuple_GET_ITEM(args, 0) : __kw_value))) {
  [...]

} else if (__out == NULL &&
      __argcount == 1 &&
      (__tuplecount > 0 || __kw_value) && THPUtils_(checkPart)((__tuplecount > 0 ? PyTuple_GET_ITEM(args, 0) : __kw_value))) {
  [...]

} else if (__out == NULL &&
      __argcount == 1 &&
      (__tuplecount > 0 || __kw_other) && (PyObject*)Py_TYPE((__tuplecount > 0 ? PyTuple_GET_ITEM(args, 0) : __kw_other)) == THPTensorClass) {
  
  #if IS_CUDA
  THCPAutoGPU __autogpu_guard = THCPAutoGPU(args, (PyObject*)self);
  #endif
  
  THPTensorPtr _result_guard((THPTensor*) THPTensor_(NewEmpty)());
  if (!_result_guard.get()) return NULL;
  THPTensor* result = _result_guard.get();
  
  
  THTensor* arg_result = ((THPTensor*)result)->cdata;
  THTensor* arg_self = ((THPTensor*)self)->cdata;
  THTensor* arg_other = ((THPTensor*)(__tuplecount > 0 ? PyTuple_GET_ITEM(args, 0) : __kw_other))->cdata;
  
  THTensor *arg_self_save = arg_self;
  THTensorPtr arg_self_guard(nullptr);
  THTensor *arg_other_save = arg_other;
  THTensorPtr arg_other_guard(nullptr);
  
  bool try_expand = !THSize_isSameSizeAs(arg_self->size, arg_self->nDimension,
      arg_other->size, arg_other->nDimension);
  if (try_expand) {
    bool expand_success = false;
    try {
      arg_self_guard =
      THTensor_(new)(LIBRARY_STATE_NOARGS);
      
      arg_other_guard =
      THTensor_(new)(LIBRARY_STATE_NOARGS);
      
      expand_outplace2(LIBRARY_STATE arg_self_guard.get(), arg_other_guard.get(),
          arg_self, arg_other,
          "self", "other", !false);
      expand_success = true;
    } catch (std::exception &e) {}
    if(expand_success) {
      arg_self = arg_self_guard.get();
      arg_other = arg_other_guard.get();
    }
  }
  
  
  PyThreadState *_save = NULL;
  try {
    Py_UNBLOCK_THREADS;
    THTensor_(cmul)(LIBRARY_STATE arg_result, arg_self, arg_other);
    Py_BLOCK_THREADS;
    Py_INCREF(result);
    return (PyObject*)(result);
  } catch (...) {
    if (_save) {
      Py_BLOCK_THREADS;
    }
    throw;
  }
  arg_self = arg_self_save;
  arg_other = arg_other_save;
  
  

}

THPUtils_invalidArguments(args, kwargs, "mul", 3, "(" RealStr " value, #" THPTensorStr " out)", "(" PartStr " value, #" THPTensorStr " out)", "(" THPTensorStr " other, #" THPTensorStr " out)");
return NULL;
END_HANDLE_TH_ERRORS

}
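Stripped of the details, the generated function is an ordered chain of signature checks; only when no branch matches does it fall through to THPUtils_invalidArguments, which builds the TypeError. A minimal Python sketch of that dispatch shape (hypothetical stand-in types, not the real binding):

```python
class ZFloatTensor:
    """Stand-in for torch.ZFloatTensor (cpu) -- illustration only."""

class CudaZFloatTensor:
    """Stand-in for torch.cuda.ZFloatTensor -- illustration only."""

def dispatch_mul(arg):
    # Mirrors the shape of the generated if/else-if chain: each branch
    # tests the argument against one signature, and the first match wins.
    if isinstance(arg, float):
        return "mul(value)"            # the (Real value) / (Part value) branches
    if isinstance(arg, CudaZFloatTensor):
        return "cmul(self, other)"     # the (Tensor other) branch
    # No overload matched: this is where THPUtils_invalidArguments
    # produces the TypeError shown in the original post.
    raise TypeError("mul received an invalid combination of arguments")
```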

And here is the Python stack trace of my call:

File "/home/philipp/projects/scikit-pr/skpr/core/p/models/x.py", line 31, in forward
z = F.cmul(P, O_cropped)

File "/home/philipp/projects/scikit-pr/skpr/nn/functional.py", line 12, in cmul
return CMul().forward(x,y)

File "/home/philipp/projects/scikit-pr/skpr/nn/_functions/CMul.py", line 16, in forward
return x * y

File "/home/philipp/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 840, in __mul__
return self.mul(other)

File "/home/philipp/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py", line 341, in mul
return Mul.apply(self, other)

File "/home/philipp/anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/basic_ops.py", line 66, in forward
return a.mul(b)

TypeError: mul received an invalid combination of arguments - got (torch.ZFloatTensor), but expected one of:

  • (float value)
    didn’t match because some of the arguments have invalid types: (!torch.ZFloatTensor!)
  • (torch.cuda.ZFloatTensor other)
    didn’t match because some of the arguments have invalid types: (!torch.ZFloatTensor!)

Now, since I call mul with two tensors of type torch.cuda.ZFloatTensor, the code in the last [else if] branch should be executed. Somehow one of the tensors is converted to, or recognized as, torch.ZFloatTensor. Does anyone have an idea where this could happen, or how?

The error here:

(torch.cuda.ZFloatTensor other)
didn’t match because some of the arguments have invalid types: (!torch.ZFloatTensor!)

is that your other is a cpu Tensor while a cuda Tensor is expected.
Are you sure both P and O_cropped are on the GPU?

Yes I am 100% sure

The output of

def forward(ctx, x, y):
    ctx.save_for_backward(x,y)
    print 'CMul.forward'
    print 'x', x.size(), type(x.data)
    print 'y', x.size(), type(x.data) 
    return x * y

is

CMul.forward
x torch.Size([732, 1, 1, 128, 128]) <class 'torch.cuda.ZFloatTensor'>
y torch.Size([732, 1, 1, 128, 128]) <class 'torch.cuda.ZFloatTensor'>

There must be some conversion happening, or some type is being recognized incorrectly. I know it’s hard to tell without seeing my modifications, but maybe an educated guess already helps.

How are the tensors wrapped into Python types? Could the error happen in the unwrapping?

OK I am stupid. Sorry

The output of this:

def forward(ctx, x, y):
    ctx.save_for_backward(x,y)
    print 'CMul.forward'  
    print 'x', x.size(), type(x.data)
    print 'y', y.size(), type(y.data) 
    return x * y

is

x torch.Size([732, 1, 1, 128, 128]) <class 'torch.cuda.ZFloatTensor'>
y torch.Size([732, 1, 1, 128, 128]) <class 'torch.ZFloatTensor'>

So that’s where the mistake is. Sorry for wasting your time.

No problem! Glad you found the problem :slight_smile: