# Forward pass: compute predicted y using operations on Variables; these
# are exactly the same operations we used to compute the forward pass using
# Tensors, but we do not need to keep references to intermediate values since
# we are not implementing the backward pass by hand.
y_pred = x.mm(w1).clamp(min=0).mm(w2)
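For context, a minimal runnable sketch of this forward pass (the tensor sizes and random initialization here are assumptions, picked to match the common two-layer-network tutorial setup):

```python
import torch

# Hypothetical sizes: batch of 64, 1000 input features, 100 hidden units, 10 outputs
N, D_in, H, D_out = 64, 1000, 100, 10

x = torch.randn(N, D_in)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

# mm is matrix multiplication; clamp(min=0) zeroes out negatives, i.e. applies ReLU
y_pred = x.mm(w1).clamp(min=0).mm(w2)
print(y_pred.shape)  # torch.Size([64, 10])
```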
I read the documentation for clamp:
torch.clamp(input, min, max, out=None) → Tensor
Clamp all elements in input into the range [min, max] and return a resulting Tensor.
but it didn't make sense to me. Why do we need to do such a weird thing? TensorFlow doesn't "clamp" anything during matrix multiplication, so why does PyTorch?
Oh, I see. Thanks. So there is no relu function? We just use clamp?
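For what it's worth, PyTorch does ship a relu function (torch.relu, or torch.nn.functional.relu); clamp(min=0) is simply an equivalent lower-level way of writing it, which is why the tutorials use it. A quick check of the equivalence:

```python
import torch
import torch.nn.functional as F

t = torch.tensor([-2.0, -0.5, 0.0, 1.5, 3.0])

# clamp(min=0) replaces every negative entry with 0 -- exactly what ReLU does
via_clamp = t.clamp(min=0)
via_relu = F.relu(t)

print(torch.equal(via_clamp, via_relu))  # True
```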
I was very confused because I was just trying to implement linear regression, and there was a random clamp command in most of the examples I've seen, which threw me off.
I think I have a pretty good grasp of the situations that might lead one to use ReLU6, but I'm kind of interested in its history. Got any good links?