SIGSEGV when trying to use torch::from_blob

To compute the loss for a convolutional model, I am trying to convert my data, which is stored in a 4D std::vector of double, to a torch::Tensor using torch::from_blob.

I am using the Gramian Angular Field to convert time series to images. My code looks like this:

std::vector<std::vector<std::vector<std::vector<double>>>> resallGAF(32);
    // Compute GAF batch
    for (unsigned int i = 0; i < batchSize; i++)
       // Creating data here

    torch::Tensor result = torch::from_blob(resallGAF.data(),
        {static_cast<long>(resallGAF.size()),
         static_cast<long>(resallGAF[0].size()),
         static_cast<long>(resallGAF[0][0].size()),
         static_cast<long>(resallGAF[0][0][0].size())}).clone();

When .clone() is called, I get an EXC_BAD_ACCESS error from Xcode (SIGSEGV). resallGAF has the shape {32, 4, 150, 150}.

When I use similar code to convert a vector of shape {1, 4, 32, 32} to a tensor, everything works fine. When I remove the .clone(), I cannot print or forward the resulting tensor either.

I checked the shapes and the data, and both are valid.
I tried it with libtorch 1.3.1 and 1.4.0, both without CUDA support.

Thanks in advance

I think you need to use contiguous memory, not a vector of vectors. For example:

std::vector<double> resallGAF;

long long DIM0 = 1;
long long DIM1 = 4;
long long DIM2 = 32;
long long DIM3 = 32;

double someValue = 0.0;

for (long long i = 0; i < DIM0; ++i)
   for (long long j = 0; j < DIM1; ++j)
      for (long long k = 0; k < DIM2; ++k)
         for (long long m = 0; m < DIM3; ++m)
            resallGAF.push_back(someValue++); // fill the flat buffer in row-major order

torch::Tensor result = torch::from_blob(resallGAF.data(), { DIM0, DIM1, DIM2, DIM3 }, torch::kDouble);
std::cout << result << std::endl;

Your solution probably only appears to work on small tensors because the out-of-bounds read from ‘from_blob’ happens to stay within a valid memory segment; either way, the data inside the tensor is garbage. A std::vector of std::vectors stores pointers to separately allocated inner buffers, not a single contiguous block of doubles, so reading it as one flat array is undefined behavior.


Thanks for the reply. Your solution would take my data and put it into a 1D tensor, though. I need a 4D tensor to feed it into a Conv2d model.

If I call

std::cout << result.sizes() << std::endl;

I get:
[1, 4, 32, 32]

Isn’t that a 4D tensor ?


Seems like I was confused there for a second; it is indeed a 4D tensor. Thank you for your help, it works now :smiley:
