PyTorch doesn't use the GPU

PyTorch is not using the GPU at full power. The model and the tensors are on the GPU, and PyTorch does use it:

    import torch

    print('The training continues!')
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0) / 1024 ** 2, 1), 'MB')
    print('Cached:   ', round(torch.cuda.memory_reserved(0) / 1024 ** 2, 1), 'MB')
    # get_device() is my helper that returns the CUDA device when it is available
    double_barrier_net = double_barrier_net.double().to(get_device())

This is the console log:

    cuda:0
    The training continues!
    NVIDIA GeForce RTX 2070 Super
    Memory Usage:
    Allocated: 14.5 MB
    Cached: 22.0 MB
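
For context, this is the pattern I follow for device placement, shown here as a minimal self-contained sketch (the linear layer, batch shapes, and loop below are placeholders, not my real network or data pipeline):

    import torch
    import torch.nn as nn

    # Placeholder model and batches, only to illustrate device placement
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    net = nn.Linear(16, 1).double().to(device)   # the model is moved once
    batches = [torch.randn(64, 16, dtype=torch.float64) for _ in range(10)]

    for x in batches:
        x = x.to(device)        # every batch has to be moved as well
        y = net(x)              # runs on the GPU when both net and x are there
        print(y.device)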

But training is very slow, and my time_waste_test shows that training on the GPU is even slower than on the CPU.
link to logs
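
In case the measurement itself is part of the problem: CUDA calls are asynchronous, so as far as I understand a plain timer can stop before the GPU work has actually finished. Here is a minimal sketch of the kind of CPU vs. GPU comparison I mean, with explicit synchronization (the layer sizes and step count are arbitrary, not my network; I kept float64 to match the .double() call above):

    import time
    import torch
    import torch.nn as nn

    def time_forward(device, steps=100):
        net = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 1)).double().to(device)
        x = torch.randn(256, 512, dtype=torch.float64, device=device)
        net(x)                          # warm-up so one-time CUDA setup is not counted
        if device.type == 'cuda':
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(steps):
            net(x)
        if device.type == 'cuda':
            torch.cuda.synchronize()    # wait for queued kernels before stopping the timer
        return time.perf_counter() - start

    print('CPU:', time_forward(torch.device('cpu')))
    if torch.cuda.is_available():
        print('GPU:', time_forward(torch.device('cuda:0')))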

I have no idea how to fix this.