How do I normalize a vector so that all its values lie between 0 and 1 ([0, 1])?

This is one way, but I doubt it is what you wanted as you weren’t very specific.

```
min_v = torch.min(vector)
range_v = torch.max(vector) - min_v
if range_v > 0:
    normalised = (vector - min_v) / range_v
else:
    normalised = torch.zeros(vector.size())
```
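For example, here is that min-max scaling run on a concrete tensor (the sample values are just made up for illustration):

```python
import torch

vector = torch.tensor([2.0, 4.0, 6.0, 10.0])

min_v = torch.min(vector)
range_v = torch.max(vector) - min_v
if range_v > 0:
    # Shift so the minimum is 0, then divide by the range
    normalised = (vector - min_v) / range_v
else:
    # A constant vector has no range; fall back to zeros
    normalised = torch.zeros(vector.size())
# normalised is now tensor([0.0000, 0.2500, 0.5000, 1.0000])
```

After this, the smallest element maps to 0 and the largest to 1.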

I want to make the following division: `tensor_vec / tensor_vec.sum()`

but when I do this I get:

```
RuntimeError: inconsistent tensor size at /opt/conda/conda-bld/pytorch_1501972792122/work/pytorch-0.1.12/torch/lib/TH/generic/THTensorMath.c:87
```

It looks like you have pytorch 0.1.12 installed. It may be time to upgrade.

You’re right, it works with PyTorch 0.3. Isn’t there a way to make it work with 0.1.12?

I have no idea; I have only ever used PyTorch 0.3+.

That said, `tensor_vec.sum()` should output a single scalar value, so you shouldn’t get an inconsistent tensor size error, unless either I have misunderstood what your code does or PyTorch 0.1.12 has a bug.

Broadcasting wasn’t available in version `0.1.12`.

You could try:

```
tensor_vec = tensor_vec / tensor_vec.sum(0).expand_as(tensor_vec)
```
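For instance, on a small 1-D tensor (the values here are my own example), this divides each element by the total so the result sums to 1:

```python
import torch

tensor_vec = torch.tensor([1.0, 2.0, 3.0, 4.0])
# expand_as() broadcasts the scalar sum back to the vector's shape,
# which is what versions without broadcasting support needed explicitly
tensor_vec = tensor_vec / tensor_vec.sum(0).expand_as(tensor_vec)
# tensor_vec is now tensor([0.1000, 0.2000, 0.3000, 0.4000])
```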

This is great! One thing I did was add something to handle tensors with negative values (e.g. one that had been 0 mean scaled at some point):

```
# Push positive before scaling (note: add_() modifies in place;
# plain add() returns a new tensor and would discard the result):
tensor_vec.add_(tensor_vec.min() * -1)
```
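Putting the two steps together, here is a sketch with a tensor containing negative values (the sample values are invented for illustration; the shift is written as a reassignment since `add()` is out-of-place):

```python
import torch

tensor_vec = torch.tensor([-2.0, 0.0, 2.0])
# Shift so the minimum becomes 0 (tensor_vec.add_(...) would also work in place)
tensor_vec = tensor_vec + tensor_vec.min() * -1
# Then scale so the entries sum to 1
tensor_vec = tensor_vec / tensor_vec.sum(0).expand_as(tensor_vec)
# tensor_vec is now roughly tensor([0.0000, 0.3333, 0.6667])
```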

Major caveat: if `tensor_vec` has a column of all 0s, the operation will make that column all NaNs.

So a check would be ideal:

```
x = x / x.sum(0).expand_as(x)
x[torch.isnan(x)] = 0
```
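As a sketch of why the check matters, take a 2-D tensor with an all-zero column (my own example): that column’s sum is 0, so the division produces 0/0 = NaN, and the second line zeroes those entries out:

```python
import torch

x = torch.tensor([[1.0, 0.0],
                  [3.0, 0.0]])
x = x / x.sum(0).expand_as(x)  # second column becomes 0/0 = nan
x[torch.isnan(x)] = 0          # replace the NaNs with 0
# x is now tensor([[0.2500, 0.0000],
#                  [0.7500, 0.0000]])
```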

How do I scale image tensors to [0, 1] for matplotlib? I tried min-max scaling and it didn’t retain my pre-processing effect. Applying a sigmoid seems to work; is this the right way to do it?