How to assign a part of tensor to zero?

Hello, I have a tensor of size BxCxHxW that is assigned to CUDA like this:

input = input.to(device)

I have a NumPy array of size HxW with dtype bool, whose values are randomly generated as True or False. Based on this array, I want to change the values in the tensor according to a condition:

If the value in the NumPy array at position (x, y) is True, set the value in the tensor to zero; otherwise, keep the original value in the tensor.

This is my code:

import torch
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
B, C, H, W = 2, 2, 4, 5
input = torch.randn(B, C, H, W)
input = input.to(device)
A_array = np.random.rand(H, W) > 0.5
print(input)
for ind_batch in range(B):
    input[ind_batch, :, A_array] = 0

print(input)

For the above code, I got this error:

input[ind_batch, :, A_array] = 0
TypeError: can't convert np.ndarray of type numpy.bool_. The only supported types are: double, float, float16, int64, int32, and uint8.

How could I fix it? Thanks

A_array = np.random.rand(H, W) > 0.5
A_array = A_array.astype(int)

Adding a type conversion seems to solve it. This just replaces True with 1 and False with 0.
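For illustration, here is a tiny standalone check of what that conversion does (not part of the original snippet):

import numpy as np

# astype(int) maps True -> 1 and False -> 0
m = np.array([[True, False], [False, True]])
print(m.astype(int))
# [[1 0]
#  [0 1]]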

One more thing: it does not work with a larger A_array. I think the reason for using bool is to save memory; when I change it to int, it gives a CUDA out-of-memory error.

This is an example:

import torch
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
B, C, H, W, D = 2, 2, 64, 64, 64
input = torch.randn(B, C, H, W, D)
input = input.to(device)
A_array = np.random.rand(H, W, D) > 0.5
A_array = A_array.astype(int)
print(input)
for ind_batch in range(B):
    input[ind_batch, :, A_array] = 0

print(input)

RuntimeError: CUDA error: out of memory
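For reference, the bool mask really is much smaller than its int counterpart. A quick size check (the exact numbers assume NumPy's default int is 64-bit):

import numpy as np

A_bool = np.random.rand(64, 64, 64) > 0.5
A_int = A_bool.astype(int)
print(A_bool.nbytes)  # 262144 bytes: one byte per element
print(A_int.nbytes)   # 2097152 bytes: eight bytes per element with a 64-bit default int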

Sorry for the half answer. That approach seems to be very memory-inefficient. Trying to run the same code on the CPU gave me a "Tried to allocate 16GB" error message.

I believe the same output can be obtained with masked_fill. I tried it here and believe it to be correct.

Generating A_array with torch.rand resolved the memory issue as well for this tensor size, but masked_fill ultimately handled somewhat larger tensor sizes without crashing. It crashed at (2, 2, 256, 256, 512), which should be about a 1 GB tensor, a bit much for my notebook.

import torch
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
B, C, H, W, D = 2, 2, 64, 64, 64
input = torch.randn(B, C, H, W, D)
input = input.to(device)

# mask of shape (H, W, D); masked_fill broadcasts it against input's trailing dims
A_array = torch.rand(H, W, D) > 0.5
A_array = A_array.to(device)

# fill the masked positions with 0, leaving everything else unchanged
input = input.masked_fill(A_array, 0)
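If memory is still tight, the in-place variant masked_fill_ should avoid allocating a second tensor (a minimal sketch, same setup as above):

# in-place fill: modifies input directly instead of returning a copy
input.masked_fill_(A_array, 0)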

Thanks, it worked well on PyTorch 0.4, but it gives an error in PyTorch 0.3.1. Have you ever seen this error?

File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 78, in __getitem__
    return Index.apply(self, key)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/tensor.py", line 89, in forward
    result = i.index(ctx.index)
TypeError: Performing basic indexing on a tensor and encountered an error indexing dim 2 with an object of type torch.cuda.ByteTensor. The only supported types are integers, slices, numpy scalars, or if indexing with a torch.cuda.LongTensor or torch.cuda.ByteTensor only a single Tensor may be passed.
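I haven't tested this on 0.3.1, but one workaround that avoids advanced indexing entirely is to zero out the masked positions by multiplying with the inverted mask. A rough sketch using the old .cuda() API (the mask broadcasts over the batch and channel dims):

import torch

B, C, H, W, D = 2, 2, 64, 64, 64
input = torch.randn(B, C, H, W, D).cuda()

# ByteTensor mask in 0.3.x; 1 where values should become zero
mask = (torch.rand(H, W, D) > 0.5).cuda()

# keep is 1 where mask is 0, so the multiplication zeroes exactly the masked positions
keep = 1 - mask.float()
input = input * keep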