When I pass input to `nn.GRU`, I run into the following problem:
```
UserWarning: RNN module weights are not part of single contiguous chunk of memory.
This means they need to be compacted at every call, possibly greatly increasing
memory usage. To compact weights again call flatten_parameters().
  output, h_n = self.gru(concatenated_input.transpose(0, 1), h_0)
...
RuntimeError: cuda runtime error (2) : out of memory at
/opt/conda/conda-bld/pytorch_1502006348621/work/torch/lib/THC/generic/THCStorage.cu:66
```
What could cause this problem?
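For context, here is a minimal sketch of the `flatten_parameters()` call that the warning itself suggests. The `Encoder` module, its sizes, and the tensor shapes are hypothetical stand-ins for my actual code; the forward pass mirrors the line shown in the traceback:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Hypothetical module illustrating the GRU call from the traceback."""

    def __init__(self, input_size=8, hidden_size=16):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size)

    def forward(self, concatenated_input, h_0):
        # Re-pack the RNN weights into one contiguous chunk of memory,
        # as the UserWarning suggests, so cuDNN does not have to copy
        # (compact) them on every forward call.
        self.gru.flatten_parameters()
        # Same call shape as in the traceback: transpose to (seq, batch, feat).
        output, h_n = self.gru(concatenated_input.transpose(0, 1), h_0)
        return output, h_n

enc = Encoder()
x = torch.randn(4, 5, 8)    # (batch=4, seq_len=5, input_size=8) before transpose
h0 = torch.zeros(1, 4, 16)  # (num_layers=1, batch=4, hidden_size=16)
out, hn = enc(x, h0)
print(out.shape, hn.shape)  # torch.Size([5, 4, 16]) torch.Size([1, 4, 16])
```

Even with this call in place, the `RuntimeError` above still occurs on my setup.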