Hello, I have a tensor of size BxCxHxW that I assign to CUDA like this:

input=input.to(device)

I have a numpy array of size HxW with type bool, whose values are randomly generated as True or False. Based on the numpy array, I want to change the values in the tensor under a condition:

If the value in the numpy array at position (x, y) is True, change the value in the tensor to zero; otherwise, keep the original value in the tensor.

This is my code

import torch
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
B, C, H, W = 2, 2, 4, 5
input = torch.randn(B, C, H, W)
input = input.to(device)
A_array = np.random.rand(H, W) > 0.5
print(input)
for ind_batch in range(B):
    input[ind_batch, :, A_array] = 0
print(input)

For the above code, I got the following error:

input[ind_batch, :, A_array] = 0
TypeError: can't convert np.ndarray of type numpy.bool_. The only supported types are: double, float, float16, int64, int32, and uint8.
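A minimal sketch of one way around this error on a recent PyTorch version: convert the numpy bool array to a torch mask before indexing (on older versions that reject numpy.bool_, casting to np.uint8 first serves the same role), and index all batches at once instead of looping:

```python
import torch
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
B, C, H, W = 2, 2, 4, 5
input = torch.randn(B, C, H, W, device=device)

A_array = np.random.rand(H, W) > 0.5
# Convert the numpy bool mask to a torch tensor before indexing
mask = torch.from_numpy(A_array).to(device)

# The HxW boolean mask selects over the last two dims for every batch and channel
input[:, :, mask] = 0
```

This also removes the Python loop over the batch dimension, since the mask applies to every batch element at once.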

One more thing: it does not work with a larger A_array. I converted the array to bool to save memory, but when I change it to int, I get a CUDA out-of-memory error.

This is an example:

import torch
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
B, C, H, W, D = 2, 2, 64, 64, 64
input = torch.randn(B, C, H, W, D)
input = input.to(device)
A_array = np.random.rand(H, W, D) > 0.5
A_array = A_array.astype(int)
print(input)
for ind_batch in range(B):
    input[ind_batch, :, A_array] = 0
print(input)

Sorry for the half answer. That approach seems to be very memory-inefficient: trying to run the same code on the CPU gave me a "Tried to allocate 16GB" error message.

I believe the same output can be obtained with masked_fill. I tried it here and believe it to be correct.

Generating A_array with torch.rand resolved the memory issue for this tensor size as well, but masked_fill ultimately handled somewhat larger tensor sizes without crashing. It crashed at (2, 2, 256, 256, 512), which should be a 1 GB tensor, a bit much for my notebook.
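For reference, a minimal sketch of that masked_fill approach, generating the mask directly with torch.rand to avoid the numpy conversion entirely (the variable names are just illustrative):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
B, C, H, W, D = 2, 2, 64, 64, 64
input = torch.randn(B, C, H, W, D, device=device)

# Generate the mask directly as a torch bool tensor
mask = torch.rand(H, W, D, device=device) > 0.5

# masked_fill broadcasts the HxWxD mask over the batch and channel dims,
# avoiding the large intermediate tensors created by advanced integer indexing
output = input.masked_fill(mask, 0)
```

masked_fill returns a new tensor; masked_fill_ does the same fill in place if that is preferred.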

Thanks, it worked well on PyTorch 0.4, but it gives an error on PyTorch 0.3.1. Have you ever seen this error?

File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 78, in __getitem__
    return Index.apply(self, key)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/tensor.py", line 89, in forward
    result = i.index(ctx.index)
TypeError: Performing basic indexing on a tensor and encountered an error indexing dim 2 with an object of type torch.cuda.ByteTensor. The only supported types are integers, slices, numpy scalars, or if indexing with a torch.cuda.LongTensor or torch.cuda.ByteTensor only a single Tensor may be passed.