Recurrent neural networks

Hi, I am trying to add a recurrent neural network layer to this model, but I am having trouble: if I add nn.RNN() I get an error about expecting a 2D/3D tensor. How can I add a recurrent layer to this code snippet? I am still developing the code, so I would appreciate some help.

 N_train = X_train.shape[0]
 N_val = X_val.shape[0]
 C = Y_train.shape[1]
 H = X_train.shape[2]
 W = X_train.shape[3]
 dims_X = [-1, 1, H, W]   # target shape for reshaping flattened X batches back to N x 1 x H x W
 dims_Y = [-1, C, H, W]   # target shape for reshaping flattened Y batches back to N x C x H x W

 train_dset = TensorDataset(torch.from_numpy(X_train).view(N_train, -1),
                            torch.from_numpy(Y_train).view(N_train, -1))

 train_loader = DataLoader(train_dset, batch_size=args.batch_size,
                           num_workers=args.num_workers, shuffle=True)

 val_dset = TensorDataset(torch.from_numpy(X_val).view(N_val, -1),
                            torch.from_numpy(Y_val).view(N_val, -1))

 val_loader = DataLoader(val_dset, batch_size=args.batch_size,
                         num_workers=args.num_workers, shuffle=False)

 model = nn.Sequential(     
   nn.Conv2d(1, 16, kernel_size=33, stride=1, padding=16, bias=True),    
   nn.BatchNorm2d(16),
   nn.Dropout2d(p=0.5, inplace=False),
   nn.ReLU6(inplace=True),

   nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1, bias=True),    
   nn.BatchNorm2d(16),
   nn.Dropout2d(p=0.5, inplace=False),
   nn.ReLU6(inplace=True),

   nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1, bias=True),    
   nn.BatchNorm2d(16),
   nn.Dropout2d(p=0.5, inplace=False),
   nn.ReLU6(inplace=True),

   nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1, bias=True),   
   nn.BatchNorm2d(16),
   nn.Dropout2d(p=0.5, inplace=False),
   nn.ReLU6(inplace=True),

   nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1, bias=True),   
   nn.BatchNorm2d(16),
   nn.Dropout2d(p=0.5, inplace=False),
   nn.ReLU6(inplace=True)
 )

Can you specify where you want to add the nn.RNN() layer?

I want to add it after the 5th layer, that is, after

   nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1, bias=True),   
   nn.BatchNorm2d(16),
   nn.Dropout2d(p=0.5, inplace=False),
   nn.ReLU6(inplace=True)

The output of nn.Conv2d() has shape batch_size x num_channels x height x width. How do you want it to be interpreted as a sequence? nn.RNN() takes a tensor of shape sequence_length x batch_size x input_size as input. I think the error comes from the mismatch between the output shape of nn.Conv2d() and the expected input shape of nn.RNN().
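
For illustration, here is a minimal sketch of one way to bridge the two shapes, treating each row of the conv feature map as one time step. The ConvToRNN wrapper, the hidden_size of 64, and the choice of rows as the sequence axis are my own assumptions, not something taken from your model:

 import torch
 import torch.nn as nn

 class ConvToRNN(nn.Module):
     """Feeds a (batch, channels, height, width) feature map to an RNN,
     treating each row (height index) as one time step."""
     def __init__(self, num_channels, width, hidden_size=64):
         super().__init__()
         # each time step sees all channels of one row: channels * width features
         self.rnn = nn.RNN(input_size=num_channels * width, hidden_size=hidden_size)

     def forward(self, x):
         batch, channels, height, width = x.shape
         # (batch, C, H, W) -> (H, batch, C, W) -> (H, batch, C * W)
         seq = x.permute(2, 0, 1, 3).reshape(height, batch, channels * width)
         out, h_n = self.rnn(seq)  # out: (height, batch, hidden_size)
         return out

 # example usage on a dummy conv output of shape (batch=2, channels=16, H=8, W=8)
 bridge = ConvToRNN(num_channels=16, width=8)
 dummy = torch.randn(2, 16, 8, 8)
 print(bridge(dummy).shape)  # torch.Size([8, 2, 64])

Because nn.Sequential only passes a single tensor from layer to layer, a small wrapper module like this (or an explicit reshape in a custom forward()) is one way to insert nn.RNN() after the conv block; how you map the RNN output back to your target shape then depends on what your loss expects.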

Thank you. Yeah, my model would be used something like this:

loss = train(model, train_loader, loss_function, optimizer, dtype, dims_X, dims_Y, epoch)

So now, how should I add the RNN layer?

@smth please help with this question