Change weights only if loss decreases

Is there anything wrong if I change the weights only when the loss decreases, and otherwise leave them as they are?

import torch
import torch.nn as nn

# Assumed setup for illustration: a toy model and optimizer (not in the original snippet)
model = nn.Linear(3, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

loss_stored = float('inf')
for i in range(10):
  optimizer.zero_grad()
  input = torch.randn(3, 3, requires_grad=True)
  loss = model(input).abs().sum()
  loss.backward()
  print(list(model.parameters())[0].sum())
  if loss.item() < loss_stored:  # compare the plain float, not the graph-attached tensor
    loss_stored = loss.item()
    optimizer.step()             # update only when this sample's loss is a new minimum
  print(list(model.parameters())[0].sum())
  print(loss)

It makes no sense at all. Some samples are harder than others, so their loss will be higher.
Dummy example:
Classify this: [image]

Now classify this: [image]

Lastly classify this: [image]
The last case is harder, so its loss will be higher, but you aren't teaching the network to discern it because you never update the weights on it.
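
For comparison, the standard loop calls optimizer.step() on every iteration, so the hard, high-loss samples are exactly the ones that drive learning. A minimal sketch, assuming the same toy nn.Linear model and SGD optimizer as stand-ins:

import torch
import torch.nn as nn

# Toy stand-ins; substitute your own model, data, and optimizer
model = nn.Linear(3, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for i in range(10):
  optimizer.zero_grad()
  input = torch.randn(3, 3)
  loss = model(input).abs().sum()
  loss.backward()
  optimizer.step()           # update on every sample, easy or hard
  print(i, loss.item())      # monitor the loss instead of gating updates on it

If occasional hard samples make the updates too noisy, the usual remedies are mini-batching or a smaller learning rate, not skipping the update.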