That’s not the case: `backward()` calculates the gradients. The `optimizer.step()` method then updates the parameters using those previously calculated gradients (or, as in your case, you update the parameters manually). This tutorial might be helpful.
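A minimal sketch of the split between the two steps (the single parameter, loss, and learning rate here are made up for illustration):

```python
import torch

# One toy parameter and an SGD optimizer over it
w = torch.tensor([1.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

loss = (w * 2.0).sum() ** 2   # some scalar loss
loss.backward()               # computes gradients and stores them in w.grad
print(w.grad)                 # gradients exist now, but w itself is unchanged

optimizer.step()              # uses w.grad to update w in place
optimizer.zero_grad()         # clear gradients before the next backward pass

# The manual-update equivalent of optimizer.step() would be:
#   with torch.no_grad():
#       w -= 0.1 * w.grad
```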
Everything became clear to me after watching the Autograd and Backpropagation sections of the Deep Learning with PyTorch course. Indeed, PyTorch tensors are magic (not just wrappers around fast C arrays): with the `requires_grad` option they track their entire history of computations, so every operand gets a gradient automatically, including the neuron weights!
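You can see that history tracking directly (the values here are just an example):

```python
import torch

x = torch.tensor([3.0], requires_grad=True)
y = x ** 2 + 1        # y remembers how it was built from x
print(y.grad_fn)      # non-None: the computation history is recorded
y.backward()          # walks the recorded graph backward
print(x.grad)         # tensor([6.]) since dy/dx = 2x
```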