How do I transform conv2d into conv3d

I am trying to implement LRCN, whose input is (Batch, Time, H, W), so it needs a 5D tensor, and PyTorch has conv3d, which I want to use. But there are many models trained with conv2d. Now I want to transform a PyTorch model that uses conv2d into a new model that uses conv3d. Does that make sense?
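For reference, here is a minimal sketch of the input shape nn.Conv3d actually expects: a 5D tensor arranged as (Batch, Channels, Time, H, W), i.e. the channel dimension comes before time. The layer sizes and clip shape below are just placeholders.

    import torch
    import torch.nn as nn

    # nn.Conv3d expects a 5D input of shape (N, C, T, H, W):
    # batch, channels, time/depth, height, width.
    conv3d = nn.Conv3d(in_channels=3, out_channels=16,
                       kernel_size=(1, 3, 3), padding=(0, 1, 1))

    clip = torch.randn(2, 3, 8, 112, 112)   # (Batch, Channels, Time, H, W)
    out = conv3d(clip)
    print(out.shape)                         # torch.Size([2, 16, 8, 112, 112])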

def filter2d_to_3d(weight2d, weight3d):
    # Replicate the 2D kernel across every temporal slice of the 3D kernel.
    nb_filter, channel, time, h, w = weight3d.size()
    for i in range(time):
        weight3d[:, :, i, :, :] = weight2d.data  # time is usually 1 here
    return weight3d
    # the conv3d weight should then satisfy: weight3d[:, :, 0, :, :] == weight2d.data
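A hypothetical usage sketch of the function above: copy a pretrained 2D kernel into a Conv3d whose temporal kernel size is 1, so the 3D layer behaves like the original 2D convolution applied frame by frame. The layer sizes here are placeholders, not from any particular model.

    import torch
    import torch.nn as nn

    conv2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)            # e.g. taken from a pretrained model
    conv3d = nn.Conv3d(3, 64, kernel_size=(1, 3, 3), padding=(0, 1, 1))

    with torch.no_grad():                       # needed to write into the Conv3d parameters in place
        filter2d_to_3d(conv2d.weight, conv3d.weight)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)

If the temporal kernel size is larger than 1, the replicated weights are usually divided by the temporal extent so activation magnitudes stay comparable (the I3D-style "inflation" trick).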

Check out https://github.com/kenshohara/3D-ResNets-PyTorch

While this does not directly answer your question, it provides networks adapted from 2D architectures that are pretrained on temporal tasks and use the conv3d layers you mention.

Using one of these pretrained networks is likely easier than implementing your own LRCN. Furthermore, their results seem quite good (maybe better than LRCN?).