Can anyone provide a sample of the correct use of torch::nn::functional::pad in C++?
This is the Python code:
input_data = torch.nn.functional.pad(input_data.unsqueeze(1), (2, 2, 0, 0), mode='reflect')
and I can't convert this line to C++ with libtorch. The equivalent function in libtorch is
torch::nn::functional::pad()
What are the types of this function's parameters?
Hi,
The function exists in the source below, but it has not been released yet. If you need it, either code it yourself or use the nightly build.
// usage sample (detail::pad takes the mode and value directly, per the signature below)
torch::nn::functional::detail::pad(
    input_data.unsqueeze(1), {2, 2, 0, 0}, torch::kReplicate, /*value=*/0);
// Excerpt (abbreviated) from the not-yet-released padding implementation:
namespace detail {
inline Tensor pad(const Tensor& input,
                  IntArrayRef pad,
                  PadFuncOptions::mode_t mode,
                  double value) {
  TORCH_CHECK(pad.size() % 2 == 0, "Padding length must be divisible by 2");
  TORCH_CHECK(((int64_t)(pad.size() / 2)) <= input.dim(), "Padding length too large");
  if (c10::get_if<enumtype::kConstant>(&mode)) {
    return torch::constant_pad_nd(input, pad, value);
  } else {
    TORCH_CHECK(
        value == 0,
        // ... (excerpt truncated)

// ... another fragment from the same file:
  if (padding.size() > 4) {
    input = torch::cat({input, _narrow_with_range(input, 4, 0, padding[-5 + padding.size()])}, /*dim=*/4);
    input = torch::cat({_narrow_with_range(input, 4, -(padding[-5 + padding.size()] + padding[-6 + padding.size()]), -padding[-5 + padding.size()]), input}, /*dim=*/4);
  }
  return input;
}
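Note that the question's Python snippet uses mode='reflect' while the C++ sample above uses torch::kReplicate; these modes produce different border values. A minimal pure-Python sketch of the difference on the last dimension (an illustration only, not libtorch code):

```python
def pad1d(seq, left, right, mode):
    """Pad a 1-D sequence the way torch.nn.functional.pad treats the last dim."""
    seq = list(seq)
    if mode == "reflect":
        # Mirror around the edge without repeating the edge element.
        lpad = [seq[i] for i in range(left, 0, -1)]
        rpad = [seq[-2 - i] for i in range(right)]
    elif mode == "replicate":
        # Repeat the edge element.
        lpad = [seq[0]] * left
        rpad = [seq[-1]] * right
    else:
        raise ValueError(mode)
    return lpad + seq + rpad

print(pad1d([1, 2, 3, 4], 2, 2, "reflect"))    # [3, 2, 1, 2, 3, 4, 3, 2]
print(pad1d([1, 2, 3, 4], 2, 2, "replicate"))  # [1, 1, 1, 2, 3, 4, 4, 4]
```

Reflect padding also requires the pad amount to be smaller than the input length, which makes replicate the safer choice for very short inputs.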
Thanks a lot. I can run it on Ubuntu 18.04 with the nightly build. I used this code:
std::vector<int64_t> pad{(int64_t)(filterLength / 2), (int64_t)(filterLength / 2), 0, 0};
torch::nn::functional::PadFuncOptions option(pad);
option.mode(torch::kReplicate);
input_data = torch::nn::functional::pad(input_data.unsqueeze(1), option);
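For reference, the shape effect of that call: if input_data has shape (N, L), unsqueeze(1) gives (N, 1, L), and the padding {p, p, 0, 0} with p = filterLength/2 pads only the last dimension. A small sketch (the concrete numbers are illustrative assumptions, not from the thread):

```python
def padded_shape(n, length, filter_length):
    # unsqueeze(1): (n, length) -> (n, 1, length)
    # pad (p, p, 0, 0): pads the last dim by p on each side, leaves dim 1 alone
    p = filter_length // 2
    return (n, 1, length + 2 * p)

print(padded_shape(8, 100, 16))  # (8, 1, 116)
```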
I think the code below should work. Can you try it and let me know?
torch::nn::functional::pad(
    input_data.unsqueeze(1),
    torch::nn::functional::PadFuncOptions(
        {(int)(filterLength / 2), (int)(filterLength / 2), 0, 0})
        .mode(torch::kReplicate));
At runtime this error occurred:
`terminate called after throwing an instance of 'c10::Error'
what(): Calculated padded input size per channel: (7). Kernel size: (8). Kernel size can't be greater than actual input size (check_shape_forward at ../../aten/src/ATen/native/Convolution.cpp:436)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6a (0x7fbfa34c7bea in /media/gata/Code/denoiser_cpp/libtorch/lib/libc10.so)
frame #1: <unknown function> + 0xbf731c (0x7fbfa42db31c in /media/gata/Code/denoiser_cpp/libtorch/lib/libtorch.so)
frame #2: at::native::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool) + 0x48e (0x7fbfa42e1fae in ..
`
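The message above means the convolution's padded input per channel (7) is smaller than the kernel (8), so the failure is in the conv layer that consumes the padded tensor, not in pad itself. A quick pure-Python sanity check you could run on your shapes (names are illustrative, not from the thread's code):

```python
def conv1d_input_ok(length, pad_left, pad_right, kernel_size):
    # conv requires the padded length to be at least the kernel size
    return (length + pad_left + pad_right) >= kernel_size

print(conv1d_input_ok(7, 0, 0, 8))  # False: the failing case from the error
print(conv1d_input_ok(7, 4, 4, 8))  # True: after padding by 4 on each side
```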
yf225 (PyTorch Developer, Meta)
December 4, 2019, 4:33pm
@Esmaeil_Farhang The equivalent of
input_data = torch.nn.functional.pad(input_data.unsqueeze(1), (2, 2, 0, 0), mode='reflect')
is
input_data = torch::nn::functional::pad(input_data.unsqueeze(1), torch::nn::functional::PadFuncOptions({2, 2, 0, 0}).mode(torch::kReflect));
To debug the runtime error, it’s best to run the equivalent script in Python first and make sure it works, and then translate it to C++ code.