The title may be confusing, as I don’t really know what to call this operation. Basically, what I want to do is as follows:
- suppose that I have a vector a = [2, 3, 4] and an int length 5;
- I want to create a matrix in which row i has a[i] ones followed by zeros, i.e.
[[1, 1, 0, 0, 0],
 [1, 1, 1, 0, 0],
 [1, 1, 1, 1, 0]]
I can of course get this done with F.pad and by enumerating the elements of a. However, since I don’t want to transfer data between the CPU and GPU, is there any other way to do this?
Hi, it’s a strange operation, dude.
You can create tensors directly on the GPU by passing device='cuda' when you construct them.
If you don’t want to pad, you can just create a mask of boolean values and do something like
y_matrix[mask] = 1.
That would be executed directly on the GPU.
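The mask idea above could look something like this (a minimal sketch; the names y_matrix and mask are just illustrative, and the comparison uses broadcasting to build the boolean mask without any CPU round-trip):

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

a = torch.tensor([2, 3, 4], device=device)
length = 5

# Boolean mask: entry (i, j) is True when j < a[i].
# arange has shape (length,), a.unsqueeze(1) has shape (n, 1),
# so the comparison broadcasts to shape (n, length).
mask = torch.arange(length, device=device) < a.unsqueeze(1)

y_matrix = torch.zeros((a.size(0), length), device=device)
y_matrix[mask] = 1.  # everything stays on the GPU
```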
Hope it helps, or maybe someone will find a better way.
Another alternative would be creating a tensor with torch.cuda.FloatTensor (or something like that) containing all the elements, and then reshaping.
Hi, man. Thanks very much for your help. I know this operation may look very strange, but I have to do it. Rather than padding, this actually encodes the original vector
a = [2, 3, 4]. Since I think my current code is too “ugly”, I’m looking for a more elegant way to get it done.
What is your current approach?
Not sure if this one looks better than yours:
device = 'cuda'
a = torch.tensor([2, 3, 4], device=device)
length = 5
res = torch.stack([torch.arange(length, device=device) < a_ for a_ in a])
Thanks for your help. Mine looks like:
ret = torch.zeros((a.size(0), length), device=device)
for i, s in enumerate(a):
    ret[i, :s] = 1.
I’m not sure which one is faster.