The algorithms used for 1D, 2D and 3D convolution might be slightly different (especially if you use cuDNN), so I am not sure you can predict the runtime and memory footprint without trying it.
The reshape is not a problem with respect to autograd, but it is still one extra operation (even though it is a really cheap one).
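For example, here is a minimal sketch (with made-up shapes and layer sizes) of what that looks like: a 1D convolution expressed as a 2D convolution with a height-1 kernel, where the reshape in and out is the only extra work and autograd tracks it like any other op.

```python
import torch
import torch.nn as nn

x = torch.randn(8, 16, 128)                      # (batch, channels, length)

conv1d = nn.Conv1d(16, 32, kernel_size=3, padding=1)
conv2d = nn.Conv2d(16, 32, kernel_size=(1, 3), padding=(0, 1))

# Reuse the same weights so both paths compute the same result:
# (32, 16, 3) -> (32, 16, 1, 3) matches the Conv2d weight layout.
with torch.no_grad():
    conv2d.weight.copy_(conv1d.weight.unsqueeze(2))
    conv2d.bias.copy_(conv1d.bias)

y1 = conv1d(x)
y2 = conv2d(x.unsqueeze(2)).squeeze(2)           # reshape in, reshape out

# Same math, possibly different kernels under the hood, so allow a
# small numerical tolerance.
print(torch.allclose(y1, y2, atol=1e-6))
```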
If you just want it to work, use the simplest one.
If you really need the last few (potential) percent of performance or memory, then you can benchmark each approach with your actual input sizes.
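A rough sketch of such a benchmark, reusing the toy shapes from above; `torch.utils.benchmark.Timer` handles warmup and CUDA synchronization, which a naive `time.time()` loop would get wrong on the GPU:

```python
import torch
import torch.nn as nn
import torch.utils.benchmark as benchmark

device = "cuda" if torch.cuda.is_available() else "cpu"
x1d = torch.randn(8, 16, 128, device=device)     # replace with your sizes
x2d = x1d.unsqueeze(2)

conv1d = nn.Conv1d(16, 32, 3, padding=1).to(device)
conv2d = nn.Conv2d(16, 32, (1, 3), padding=(0, 1)).to(device)

# Time each variant; the Timer syncs the GPU so the numbers are comparable.
for label, fn in [("conv1d", lambda: conv1d(x1d)),
                  ("conv2d + reshape", lambda: conv2d(x2d).squeeze(2))]:
    t = benchmark.Timer(stmt="fn()", globals={"fn": fn})
    print(label, t.timeit(100))
```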