What does cudnn.fastest = True do?

Hello.

I saw the code

cudnn.fastest = True

in the repo https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch.

What is the meaning of cudnn.fastest?

Thank you for your answer in advance.

2 Likes

It just signals PyTorch to use the fastest implementation available for operations such as convolutions. When enabled, these flags (cudnn.benchmark and cudnn.fastest) usually consume more memory.
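For context, here is a minimal sketch of how cudnn.benchmark is typically enabled at the top of a training script; the model and input shapes below are placeholders, not taken from the repo:

```python
import torch
import torch.backends.cudnn as cudnn
import torch.nn as nn

# Ask cuDNN to benchmark the available convolution algorithms for the
# observed input shapes and cache the fastest one (most useful when
# input shapes stay constant across iterations).
cudnn.benchmark = True

# Placeholder model/input just to illustrate where the flag takes effect.
model = nn.Conv2d(3, 64, kernel_size=3, padding=1).cuda()
x = torch.randn(8, 3, 224, 224, device="cuda")

out = model(x)  # first call profiles algorithms, later calls reuse the winner
```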

1 Like

It shouldn’t have any effect, as torch.backends.cudnn.fastest is not implemented.
In your IDE or here you can see that benchmark, deterministic, and enabled are known flags.

benchmark = True will use cudnnFind to profile all available kernels and select the fastest one.
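As a quick check of which flags the backend actually exposes, something like this sketch can be run (exact defaults may vary across PyTorch versions):

```python
import torch.backends.cudnn as cudnn

# Flags that torch.backends.cudnn documents:
print(cudnn.enabled)        # whether cuDNN is used at all
print(cudnn.benchmark)      # False by default; True lets cudnnFind pick kernels
print(cudnn.deterministic)  # False by default; True restricts to deterministic algorithms

# `fastest` is not one of them, so in a fresh process it should not exist:
print(hasattr(cudnn, "fastest"))  # expected: False
```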

The repository seems to only set this attribute without using it anywhere in the code.

3 Likes

Thank you so much @ptrblck.
So that means torch.backends.cudnn.fastest can be deleted from my code, right?

You should be able to delete it, as I cannot find any usage of it.
Since the PyTorch backend doesn’t use this flag, I searched for any usage in the repository, but couldn’t find it either.

I’m not familiar with the repository, but I couldn’t find any “custom PyTorch” installation that would use this flag.

1 Like

Thank you so much :slight_smile:

Anyway, let me know if you see any difference after removing the flag.
Also, @Shisho_Sama seems to have seen it before, so it would be interesting to know where it was used.

1 Like

Actually, I guess I mistook Torch for PyTorch and thought this was available here as well (as cudnn.benchmark is!). This was the case for Torch back in the day, if I’m not mistaken.
Since PyTorch was released I have only used cudnn.benchmark myself and never cudnn.fastest, as I remember benchmark handled everything by itself. I have never seen cudnn.fastest in any official examples, and chances are the repos that do use cudnn.fastest do it out of a habit coming from Torch!

2 Likes