markl
October 8, 2019, 9:17pm
1
I see Tensor::slice being called, for example, in this code: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/Integration.cpp
However, slice() is not declared in Tensor.h or in either of the headers it includes:
#pragma once
/*
* We split Tensor.h into TensorBody.h and TensorMethods.h because we want
* all TensorMethods to be inlined, but they depend on the Dispatcher,
* which in turn depends on many other things, which then depend back on Tensor.
*
* We can break this dependency chain by having the dispatcher only depend on
* TensorBody.h and not TensorMethods.h.
*/
#include <ATen/core/TensorBody.h>
#include <ATen/core/TensorMethods.h>
I understand that there is some code generation happening, and perhaps the function is being injected by a Python script.
albanD
(Alban D)
October 8, 2019, 10:23pm
2
I believe it’s implemented here
markl
October 8, 2019, 10:26pm
3
Thanks @albanD. Do you understand how the code generation mechanism works? How does it end up being a member function of Tensor? As it is declared in this file, it is a free-standing function.
tom
(Thomas V)
October 8, 2019, 11:15pm
4
The bindings (at::… and Tensor member functions) are generated from native_functions.yaml – look for variants: method for members.
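For illustration, an entry in native_functions.yaml looks roughly like the sketch below. The exact signature and fields have changed across PyTorch versions, so treat the details as an approximation rather than the actual current entry:

```yaml
# Sketch of a native_functions.yaml entry (details may differ by version).
# "variants: function, method" tells the codegen to emit both a free
# function at::slice(...) and a member function Tensor::slice(...),
# each of which dispatches to the native implementation in
# aten/src/ATen/native/.
- func: slice(Tensor(a) self, int dim=0, int start=0, int end=9223372036854775807, int step=1) -> Tensor(a)
  variants: function, method
```

This is why slice() never appears literally in a checked-in header: the generated TensorMethods.h picks it up from the YAML entry at build time.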
markl
October 9, 2019, 3:05am
6
Thanks @tom. This is kind of a tangent, but what is the purpose of the overload_name here:
- func: func_name[.overload_name](ArgType arg0[=default], ArgType arg1[=default], ...) -> Return
Shouldn’t the argument list be sufficient to uniquely identify an overloaded C++ function? If an overload_name is specified, does it need to also appear in the C++ source somewhere for the code generation to work?
tom
(Thomas V)
October 9, 2019, 10:02am
7
I am not aware of the overload name being used for C++; you can certainly access the overloads as expected.
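For context, overload names appear when several schemas share one func name. A sketch of what such entries look like, using Tensor.to as an example (field names and signatures reproduced from memory, so they may not match the current YAML exactly):

```yaml
# Sketch: overload names (.dtype, .other) give each schema sharing the
# name "to" a unique string identifier. Details may differ by version.
- func: to.dtype(Tensor self, ScalarType dtype, bool non_blocking=False, bool copy=False) -> Tensor
  variants: method
- func: to.other(Tensor self, Tensor other, bool non_blocking=False, bool copy=False) -> Tensor
  variants: method
```

On the C++ side these surface as ordinary overloads of Tensor::to, resolved by argument types as usual, so the overload_name does not need to appear anywhere in the C++ source.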