Choose backends

It is not obvious to me how to select different backends. In (Lua) Torch, you can easily mix layers from cudnn and cunn. Is there an easy way to do the same in PyTorch?

Right now there’s no way to pick which layers will use cuDNN and which won’t. The backend is chosen automatically, based on whichever option is fastest. There’s only a global switch, torch.backends.cudnn.enabled, that you can set to False to disable cuDNN globally.
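For reference, a minimal sketch of using that global switch:

```python
import torch

# Disables cuDNN globally; every op falls back to the other backends.
torch.backends.cudnn.enabled = False
```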

Just out of curiosity, what’s the reason why you need to select a particular backend?


Thanks for the information. Sometimes I extract extra information from the layers. For example, for the max-pooling layer I have cases that make use of the indices from the cunn implementation; the cudnn implementation does not expose the indices.

So in cases where there is no corresponding cudnn implementation, the cunn option will be selected for that layer?

Yes, if you request the pooling function/module to return indices and cuDNN doesn’t support that, cuDNN will be skipped automatically. We always pick the fastest backend that supports all the specified options.
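For example, requesting pooling indices looks like this (a small sketch; the dispatcher then has to pick a backend that can return them):

```python
import torch
import torch.nn as nn

# Asking for indices rules out backends that can't return them
# (e.g. the cuDNN pooling implementation).
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
x = torch.randn(1, 3, 8, 8)
out, indices = pool(x)  # indices can later be passed to nn.MaxUnpool2d
```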

Just to clarify: if I have a network with some layers supported by cudnn, but the net also contains a layer that cudnn does not support (e.g. FractionalMaxPooling), will the entire net be run using cunn, or just the FractionalMaxPooling layer?

The decision is made on a per-operation basis. When we have a choice of multiple backends for a given op, we filter out those that don’t support the options you chose and pick the fastest of the rest.

For example, cuDNN 5 doesn’t support dilated convolutions. If your model has both dilated and regular convolutions, THCUNN will be used for the dilated ones and cuDNN for the regular ones.
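A minimal sketch of such a mixed model (the layer sizes here are made up for illustration):

```python
import torch
import torch.nn as nn

# The regular convolution can run on cuDNN, while the dilated one
# falls back to THCUNN on cuDNN versions without dilation support.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),               # regular
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=2, dilation=2),  # dilated
).cuda()

out = model(torch.randn(1, 3, 32, 32).cuda())
```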

This has the upside that a single model definition always runs at maximum speed across many different systems, including ones that have only a subset of the backends available (e.g. no CUDA support).


In my case, since I’d like to get deterministic results, I have to set torch.backends.cudnn.enabled = False, which gives much slower performance than using cuDNN.

I’d like to use cuDNN for its deterministic modules, but not for non-deterministic ones such as the convolution kernels. If I could choose backends for each module, I would get some degree of speedup while keeping deterministic results.
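One coarse workaround I can think of (just a sketch, assuming the flag is consulted at each call, which it is in PyTorch): toggle the global switch around the specific ops you want to keep deterministic.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1).cuda()
pool = nn.MaxPool2d(2)
x = torch.randn(1, 3, 32, 32).cuda()

# The cuDNN switch is read at dispatch time, so it can be flipped
# around individual ops: deterministic path for the conv only.
torch.backends.cudnn.enabled = False
y = conv(x)
torch.backends.cudnn.enabled = True  # cuDNN stays on for everything else
y = pool(y)
```

This is clumsy for large models, though, which is why a per-module choice would be nicer.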

Is there really no plan to allow manually choosing backends?

We want to add the ability to pick the backends in a more fine-grained manner, but we haven’t discussed any solutions yet.

Any update on this? Is there a way to choose conv algorithms?