Inserting into and popping from a Tensor?

Let’s say I have two tensors:

A = torch.empty([N, M]).to(device)
B = torch.empty([batch_size, M]).to(device)

The effect I want to achieve is this:

A = torch.cat((B.to(device), A[:-batch_size])).to(device)

In plain English, I just want to insert B at the front of A while keeping A the same size.
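For concreteness, here is a self-contained toy run of that cat version (the sizes 6, 3, and 2 are placeholders I picked just for demonstration):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

N, M, batch_size = 6, 3, 2  # placeholder sizes for demonstration
A = torch.arange(N * M, dtype=torch.float32).reshape(N, M).to(device)
B = -torch.ones(batch_size, M, device=device)

# prepend B and drop the last batch_size rows, so A keeps its shape
A = torch.cat((B, A[:-batch_size]))
print(A.shape)  # torch.Size([6, 3]), unchanged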

However, when I do it with the following code:

A[batch_size:].data = A[:-batch_size].data.to(device)
A[batch_size:].data = B.data.to(device)

Now I get a CUDA error:

RuntimeError: CUDA error: device-side assert triggered

The assert it's referring to seems to be:

Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.

I don't think I understood your question properly, but is this what you're trying to do?

import torch

device = "cpu"  # or "cuda"

a = torch.empty(10, 2).to(device)
b = torch.empty(5, 2).to(device)
batch_size = 5
a[:batch_size, :] = b  # overwrite the first batch_size rows of a with b

Here, device can be either "cpu" or "cuda". The numbers 10, 5, and 2 are just example values for demonstration (standing in for N, batch_size, and M; the order remains the same).

Close, but no. I'm trying to implement a kind of queue: first in, first out. Your last line of code overwrites the front of a, but I want to shift a down so that the existing rows are preserved and insert b in front of them, if that makes sense.
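In other words, something like this, if I'm not mistaken (a rough sketch using torch.roll, which shifts the rows down with wrap-around; the wrapped rows are then overwritten by the new batch):

A = torch.roll(A, shifts=batch_size, dims=0)  # row i moves to row i + batch_size, wrapping around
A[:batch_size] = B  # overwrite the wrapped-around front rows with the new batch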

Oh, but I see your point! Yes, I would do:

A[batch_size:].data = A[:-batch_size].data.to(device)
A[:batch_size].data = B.data.to(device)

I missed that the colon was on the wrong side.

Yeah, is it solved then?

Never mind, still getting the same CUDA error, which I don’t get when I do:

A = torch.cat((B.to(device), A[:-batch_size])).to(device)

I changed it a bit to:

A[batch_size:] = A[:-batch_size].to(device)

and now I'm getting:

RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.

But that's exactly what's done here: python - Is it possible to create a FIFO queue with pyTorch? - Stack Overflow
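Following up for anyone who lands here later: doing what the error message suggests and cloning the overlapping slice before the in-place write seems to resolve it for me (a minimal sketch):

A[batch_size:] = A[:-batch_size].clone()  # clone() so the read no longer aliases the written-to slice
A[:batch_size] = B.to(device)             # then insert B at the front, keeping A's size fixed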