When I execute the following code, I get a segmentation fault that I do not really understand:
import torch
from torch import nn
from torch.autograd import Variable
v = Variable(torch.randn(1, 1, 590, 45, 80), volatile=True)
model = nn.Sequential(nn.Conv3d(1,16,5), nn.ELU(),nn.Conv3d(16,16,5))
output = model(v)  # the segfault occurs when running this forward pass
While the tensors are fairly large, they should easily fit into my machine's memory. Is there a maximum tensor size? Thanks for your help.
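For reference, here is a quick back-of-the-envelope estimate of the input tensor's memory footprint (assuming the default float32 dtype, 4 bytes per element):

```python
# Shape of the input tensor: (batch, channels, depth, height, width)
num_elements = 1 * 1 * 590 * 45 * 80
size_mib = num_elements * 4 / 1024**2  # float32 = 4 bytes per element
print(num_elements)        # 2124000
print(round(size_mib, 1))  # ~8.1 MiB
```

So the input itself is only a few megabytes; even the intermediate activations after the 16-channel convolutions should only be on the order of a hundred megabytes.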