The current shapes are incompatible, since dim2 of f won’t be broadcast against x.
Since f contains 3 elements, you could align dim2 of f with dim1 of x (the channel dimension), which would work:

f = torch.zeros(1, 1, 3, dtype=torch.float)  # shape: [1, 1, 3]
f[:, :, 0] = 255
f = f.squeeze(1).unsqueeze(2).unsqueeze(3)   # shape: [1, 3, 1, 1]
x = torch.randn(4, 3, 10, 10)                # shape: [N, C, H, W]
res = x * f                                  # f broadcasts over dims 0, 2, and 3
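The same reshape can also be written in a single call; a minimal sketch, assuming the goal is just the [1, 3, 1, 1] shape from above:

```python
import torch

f = torch.zeros(1, 1, 3, dtype=torch.float)
f[:, :, 0] = 255
# view reinterprets the 3 contiguous elements directly as [1, 3, 1, 1],
# equivalent to squeeze(1).unsqueeze(2).unsqueeze(3)
f = f.view(1, 3, 1, 1)

x = torch.randn(4, 3, 10, 10)
res = x * f
print(res.shape)  # torch.Size([4, 3, 10, 10])
```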

I don’t quite understand the use case, though. For an elementwise multiplication, the two tensors must either have the same number of elements or have broadcastable shapes.
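To make the broadcasting rule concrete, here is a small sketch: shapes are compared from the trailing dimension backwards, and each pair of dimensions must either match or one of them must be 1.

```python
import torch

x = torch.randn(4, 3, 10, 10)

# [1, 3, 1, 1] vs [4, 3, 10, 10]: every size-1 dim expands, so this works
f_ok = torch.randn(1, 3, 1, 1)
print((x * f_ok).shape)  # torch.Size([4, 3, 10, 10])

# [1, 1, 3] vs [4, 3, 10, 10]: the trailing dims are 3 vs 10,
# neither is 1, so broadcasting fails
f_bad = torch.randn(1, 1, 3)
try:
    x * f_bad
except RuntimeError as e:
    print("broadcasting failed:", e)
```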