Is there a convenient way to replace all occurrences of 0 in a Tensor with 1?
I’d like to normalize my data, but the standard deviation of some features may happen to be 0, and since we divide by it I’d like to avoid division-by-zero errors by replacing those zeros.
You can use where to replace with 1, or clamp to bound from below. Note that very small values might be inconvenient, too.
Thanks for your reply. I just looked up the clamp function, but it’s not exactly what I need. Specifying min=0 would not remove zeros, if I get it right, and picking some value bigger than 0 for min would result in legitimate small values being ‘clamped’ as well…
Some way to iterate over a Tensor’s values and overwrite them where needed would be just as fine for the time being. I’m new to PyTorch/libTorch and I’m having quite some difficulty finding things in the documentation and getting things up and running.
Any hints would be welcome.
torch.where is what you should use, then.
That said, those legitimate small values can be mean to you.
In general, it is a good idea to know what you’d do in Python (PyTorch or numpy), which most often translates to libtorch in a straightforward way.
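To illustrate that advice, here is a hedged sketch in Python (PyTorch) of the normalization case described above, assuming a toy 2-D data tensor with one constant feature; each call used here has a near-identical counterpart in libtorch (torch::where, torch::ones_like):

```python
import torch

# Toy data: the second feature (column) is constant, so its std is 0.
data = torch.tensor([[1.0, 5.0],
                     [3.0, 5.0],
                     [5.0, 5.0]])

mean = data.mean(dim=0)
std = data.std(dim=0)

# Replace zero stds with 1 so dividing by them becomes a no-op
# for those features instead of producing inf/nan.
safe_std = torch.where(std == 0, torch.ones_like(std), std)

normalized = (data - mean) / safe_std
```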
Thanks for the hint on torch::where(), I’ll have a look at that. I have a running model in Python, but data preprocessing is done in pandas/numpy there, which I believe is not easily available in C++.
Hmmm… looking at it I find
Tensor where(const Tensor & condition, const Tensor & other) const;
The Python documentation is rather pythonic and seemingly not applicable to C++, if I am not mistaken. C++ documentation is nowhere to be found…
How am I to provide a condition like ‘value is equal to 0’ to this function as a Tensor?
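For later readers: the condition argument is itself a boolean Tensor, produced by an elementwise comparison such as t == 0. A minimal sketch in Python, which carries over to torch::where in libtorch:

```python
import torch

std = torch.tensor([0.5, 0.0, 2.0])

# The comparison yields a boolean mask tensor; where() then picks,
# element by element, from the two branch tensors based on that mask.
mask = (std == 0)
result = torch.where(mask, torch.ones_like(std), std)
```

Regarding the quoted member signature, Tensor.where(condition, other) in Python keeps the tensor’s own values where the condition holds and takes other elsewhere, so the libtorch method form presumably behaves the same way.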
I decided to use an accessor for now, iterating over the tensor elements and comparing & setting where necessary. It may not be elegant, but it solves the problem at hand. The code looks like:
// 1-D float tensor: overwrite zeros in place
auto a = t.accessor<float, 1>();
for (int i = 0; i < a.size(0); i++) {
    if (a[i] == 0.0f)
        a[i] = 1.0f;
}
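For the record, the same replacement can also be done without an explicit loop via a masked in-place fill; a short Python sketch (masked_fill_ also exists as a Tensor method in libtorch):

```python
import torch

t = torch.tensor([0.0, 2.0, 0.0, 4.0])

# In-place: wherever the mask (t == 0) is true, write 1.0.
t.masked_fill_(t == 0, 1.0)
```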
Thanks for your help! :)