Memory-Compute Tradeoff

Hi Everybody,

Someone recently suggested to me that PyTorch has a memory-compute tradeoff option, i.e. one can configure the library to use faster but more memory-intensive algorithms for convolutional operations (occupying more GPU memory), and vice versa. I have been unable to find said option. Can someone tell me whether such an option exists, or suggest some other method to increase speed at the cost of memory? Thanks.
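
For reference, the closest thing I have found so far is cuDNN's benchmark mode, which autotunes convolution algorithms and can use extra workspace memory. I am not sure this is the option that was meant, but here is a minimal sketch of how I understand it would be used:

```python
import torch
import torch.nn as nn

# Requires a CUDA GPU. Enabling benchmark mode lets cuDNN autotune its
# convolution algorithms; the fastest choice may allocate extra workspace
# memory, so this trades GPU memory for speed.
torch.backends.cudnn.benchmark = True

model = nn.Conv2d(3, 64, kernel_size=3, padding=1).cuda()
x = torch.randn(16, 3, 224, 224, device="cuda")

# The first forward pass for a given input shape triggers the algorithm
# search; later passes with the same shape reuse the winner.
y = model(x)
```

From what I have read, this mainly helps when input shapes stay fixed across iterations; if that is not the setting the suggestion referred to, I would appreciate a pointer to the right one.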