What's the difference between a[0][1] and a[0, 1]?

	representation = Variable(th.zeros(batch_size, max_length, self.HIDDEN_SIZE * 2))

	for i in xrange(batch_size):
		for j in xrange(length[i]):
			representation[i][j] = th.cat((hidden_forward[max_length - length[i] + j][i],
				hidden_backward[max_length - 1 - j][i]), 0)

	return representation

In short, I want to implement a bi-directional RNN.
hidden_forward and hidden_backward are lists of hidden states from a previous RNN.

This code yields the error: "RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it"

However, if I replace representation[i][j] with representation[i, j], the code runs fine.

I'm wondering what the difference is between these two ways of indexing a particular element of a high-dimensional tensor.

When you index with x[i][j], an intermediate Tensor x[i] is created first, and the [j] operation is then applied to it. That intermediate shares storage with x, which is what triggers the in-place error when you assign through it. When you index with x[i, j], the element is selected in a single operation, so there is no intermediate.
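The two-step vs. one-step distinction can be sketched with NumPy, which follows the same indexing semantics (the autograd error itself is specific to older PyTorch Variables, so this only illustrates the intermediate-object point, not the error):

```python
import numpy as np

a = np.zeros((2, 3))

# a[0][1] is two operations: a[0] first materializes an
# intermediate object (row 0, which shares storage with a),
# and then [1] indexes into that intermediate.
row = a[0]            # intermediate, a view on a's storage
row[1] = 7.0          # writes through the intermediate
assert a[0, 1] == 7.0

# a[0, 1] is a single operation: the element is selected
# directly, with no intermediate object created.
a[0, 1] = 9.0
assert a[0][1] == 9.0
```

In old PyTorch, assigning through the intermediate (`representation[i][j] = ...`) was an in-place write on a Variable sharing storage with `representation`, hence the error; `representation[i, j] = ...` avoids creating that intermediate.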