Kernel death confusion

I tried doing some digging, and it looks like segmentation faults are often related to Cython and memory, so I've attached the line that seems to trigger the issue:

data_train = tensorData.view(-1)[idx].view(tensorData.size())

idx is just torch.randperm(8591). Could this just be the result of an extremely memory-inefficient approach? I used the method listed here. If it's just a memory issue, are there any alternative approaches to shuffling that I should look into?
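For reference, here's a minimal sketch of the alternative I was considering: shuffling whole rows by indexing the first dimension with a permutation, instead of flattening and reassembling. The shape (8591, 4) is just a made-up stand-in for my data:

```python
import torch

# Hypothetical stand-in for the original data: 8591 rows, 4 features.
tensor_data = torch.arange(8591 * 4, dtype=torch.float32).reshape(8591, 4)

# Shuffle whole rows: advanced indexing along dim 0 with a random
# permutation returns a new (copied) tensor, so no view() gymnastics
# on the flattened data are needed.
perm = torch.randperm(tensor_data.size(0))
data_train = tensor_data[perm]

# Same rows, just reordered.
assert data_train.shape == tensor_data.shape
assert torch.equal(data_train.sum(dim=0), tensor_data.sum(dim=0))
```

My understanding is that this keeps row contents intact (the flatten-index-reshape version with randperm(8591) would scramble individual elements across rows unless the tensor is 1-D), but I'd appreciate confirmation.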