Hello.
I saw the code
cudnn.fastest = True
in the repo. https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch.
What is the meaning of cudnn.fastest?
Thank you for your answer in advance.
It just signals PyTorch to use the fastest implementation available for operations such as convolutions. When enabled, these flags (that is, cudnn.benchmark and cudnn.fastest) usually consume more memory.
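For context, such flags are usually set once at the top of a training script. A minimal sketch of how the repository uses them (assuming a PyTorch install; as later replies in this thread note, only cudnn.benchmark is a real setting):

```python
import torch

# benchmark=True lets cuDNN profile the available convolution algorithms
# for each input shape and cache the fastest one (may use more memory).
torch.backends.cudnn.benchmark = True

# The flag asked about in this thread. It is not an actual PyTorch
# setting, so this assignment creates an unused attribute and has no effect.
torch.backends.cudnn.fastest = True
```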
It shouldn’t have any effect, as torch.backends.cudnn.fastest is not implemented. In your IDE or here you can see that benchmark, deterministic, and enabled are the known flags.
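The reason the unimplemented flag raises no error is general Python behavior: assigning a new attribute to a module object is silently accepted. A small stdlib-only sketch (using a stand-in module rather than torch.backends.cudnn itself):

```python
import types

# Stand-in for torch.backends.cudnn: Python modules accept assignment of
# arbitrary new attributes without raising an error.
fake_cudnn = types.ModuleType("fake_cudnn")
fake_cudnn.benchmark = False  # a real flag would live like this

# Silently accepted, no AttributeError; but nothing in the library would
# ever read this attribute, so it changes nothing.
fake_cudnn.fastest = True
print(fake_cudnn.fastest)
```

This is why `cudnn.fastest = True` "works" in a script even though PyTorch never consults such a flag.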
benchmark = True will use cudnnFind to profile all available kernels and select the fastest one. The repository seems to only set this new attribute without ever using it in the code.
Thank you so much @ptrblck.
So it means that torch.backends.cudnn.fastest can be deleted from my code, right?
You should be able to delete it, as I cannot find any usage of it.
Since the PyTorch backend doesn’t use this flag, I searched for any usage in the repository, but couldn’t find it either.
I’m not familiar with the repository, but I couldn’t find any “custom PyTorch” installation, which would use this flag.
Thank you so much
Anyway, let me know if you see any difference after removing the flag.
Also, @Shisho_Sama seems to have seen it before, so it would be interesting to know where it was used.
Actually, I guess I mistook (Lua) torch for PyTorch and thought this was available here as well (as cudnn.benchmark is!). This was the case for torch back in the day, if I’m not mistaken.
Since PyTorch was released I have only used cudnn.benchmark myself and never cudnn.fastest, as I remember benchmark handled everything by itself. I have never seen cudnn.fastest in any official examples, and chances are that the repos which do use cudnn.fastest do it out of a habit coming from torch!