Is there a known issue with torch.optim.lr_scheduler.ReduceLROnPlateau in version 0.3.1b0+2b47480? As soon as I switch to using the scheduler, my loss stays almost constant.
Here is the code I am using:
optimizer = torch.optim.Adam(model.parameters(), lr=0.00003)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=300, verbose=True, min_lr=0.00000001)
And I use the following in my training loop:
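To make the question concrete, here is a minimal self-contained sketch of the kind of loop I mean (the Linear model and the random inputs/targets are placeholders, not my real setup); the relevant part is that ReduceLROnPlateau's step() is given the monitored loss each epoch, since it only lowers the LR after `patience` epochs without improvement:

import torch
import torch.nn as nn
from torch.autograd import Variable  # needed on 0.3.x

# Placeholder model and dummy data, just to illustrate the loop shape.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.00003)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, patience=300, verbose=True, min_lr=0.00000001)

inputs = Variable(torch.randn(32, 10))
targets = Variable(torch.randn(32, 1))

for epoch in range(1000):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    # ReduceLROnPlateau.step() takes the metric it should monitor;
    # loss.data[0] is the 0.3-era way to get the scalar value.
    scheduler.step(loss.data[0])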