Is there a convenient way to replace all occurrences of 0 in a Tensor with 1?
I’d like to normalize my data, but the standard deviation of some features may happen to be 0, and since we’re dividing by it, I’d like to avoid division-by-zero errors by replacing those zeros.
Thanks for your reply. I just looked up the clamp function, but it’s not exactly what I need. Specifying min=0 would not remove zeros, if I understand it correctly, and picking some value bigger than 0 for min would result in legitimate small values being ‘clamped’ as well…
Some way to iterate over a Tensor’s values and overwrite them where needed would be just as fine for the time being. I’m new to PyTorch/libTorch and I’m having quite some difficulty finding things in the documentation and getting up and running.
torch::where is what you should use, then.
That said, those legitimate small values can still be mean to you: dividing by a tiny standard deviation blows up the normalized values almost as badly as dividing by zero.
In general, it is a good idea to know what you’d do in Python (PyTorch or numpy), which most often translates to libTorch in a straightforward way.
Thanks for the hint on torch::where(), I’ll have a look at that. I have a running model in Python, but data preprocessing is done in pandas/numpy there, which I believe is not easily available in C++.
I decided to use an accessor for now, iterating over the tensor elements and comparing & setting where necessary. It may not be elegant, but it solves the problem at hand. The code looks like:
// 1-D float accessor gives direct element access (CPU tensors only)
auto a = t.accessor<float, 1>();
for (int i = 0; i < a.size(0); i++) {
    if (a[i] == 0.0f)
        a[i] = 1.0f;  // replace zero std with 1 so the division is a no-op
}