How to do projected gradient descent in PyTorch?

I have a variable that is constrained to [0, 1], so to optimize it I was thinking of using projected gradient descent, and I was wondering how to do that. Here is the skeleton I have right now:

for it in range(no_it):
    optimizer.zero_grad()
    # get data D, y
    z = model(D)
    loss = crit(z, y)
    loss.backward()
    optimizer.step()
    # projection step: clamp model.b back into [1e-5, 1.0]
    if model.b.detach().cpu().numpy() > 1.0:
        model.b.data = torch.tensor(1.0).to(device)
    if model.b.detach().cpu().numpy() < 10**-5:
        model.b.data = torch.tensor(10**-5).to(device)

I was wondering whether this is an efficient way to do it. I’m afraid it might fill up the GPU with tensors if I keep creating new ones every iteration. Is there a way to directly change the value in the underlying storage, the way I would with a numpy array?
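
To be concrete, something like the snippet below is what I mean by an in-place update. This is just a sketch of what I think might work (using torch.no_grad() with clamp_), not something I have verified to be the recommended approach:

    # sketch: project b back into [1e-5, 1.0] in place after the
    # optimizer step, without allocating a new tensor each iteration
    with torch.no_grad():
        model.b.clamp_(min=1e-5, max=1.0)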

Note that model.b.size() outputs torch.Size([]), i.e. b is a 0-dimensional (scalar) tensor.
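
Related to that: right now I read the scalar out through .detach().cpu().numpy() just for the comparison. Below is a sketch of what I was considering instead (assuming .item() and fill_ behave the way I think they do), but I am not sure it is any better:

    # since model.b is 0-dimensional, .item() returns a plain Python float
    b_val = model.b.item()
    if b_val > 1.0:
        # overwrite the existing storage instead of assigning a new tensor
        model.b.data.fill_(1.0)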

Any help is appreciated.