In MATLAB we use MatConvNet's `vl_nnconv` with inputs x, F, B, Y. This is how it looks in MATLAB:

Example 1

```
l = net.layers{i}
[res(i).dzdx, dzdw{1}, dzdw{2}] = vl_nnconv(res(i).x, l.weights{1}, l.weights{2}, res(i+1).dzdx)
```
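For context, this call computes the gradients of the loss z with respect to the layer input and weights, given the gradient `res(i+1).dzdx` flowing in from above. A hedged PyTorch sketch of the same computation using autograd (the shapes here are made up for illustration; note PyTorch uses NCHW layout, whereas MatConvNet uses HWC):

```python
import torch
import torch.nn.functional as F

# Illustrative shapes (NCHW); substitute your own tensors.
x = torch.randn(1, 16, 8, 8, requires_grad=True)   # res(i).x
w = torch.randn(32, 16, 3, 3, requires_grad=True)  # l.weights{1}
b = torch.randn(32, requires_grad=True)            # l.weights{2}

y = F.conv2d(x, w, b)        # forward pass through the conv layer
dzdy = torch.randn_like(y)   # res(i+1).dzdx, the incoming gradient

# Backward pass: after this, x.grad, w.grad, b.grad hold
# res(i).dzdx, dzdw{1}, dzdw{2} respectively.
y.backward(dzdy)
print(x.grad.shape, w.grad.shape, b.grad.shape)
```

This reruns the forward pass so autograd can record the graph, which is the idiomatic PyTorch route.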

Example 2

```
l = net.layers{i}
[res(i).dzdx] = vl_nnpool(res(i).x, res(i+1).dzdx)
```

Assume that in Example 2 res(i).x is an array of size (29, 29, 512) and res(i+1).dzdx is an array of size (15, 15, 512). The result, res(i).dzdx, then has shape (29, 29, 512), matching res(i).x.
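A hedged PyTorch sketch of the pooling case with these exact spatial sizes, as a single NCHW batch. A 2x2, stride-2 max pool with `ceil_mode=True` maps 29 to 15; the actual pooling parameters of the original network may differ:

```python
import torch
import torch.nn.functional as F

# res(i).x with shape (29, 29, 512), as a 1x512x29x29 NCHW tensor.
x = torch.randn(1, 512, 29, 29, requires_grad=True)

# Assumed pooling parameters: 2x2 window, stride 2, ceil_mode on.
y = F.max_pool2d(x, kernel_size=2, stride=2, ceil_mode=True)
assert y.shape == (1, 512, 15, 15)

dzdy = torch.randn_like(y)   # res(i+1).dzdx
y.backward(dzdy)             # x.grad is res(i).dzdx
assert x.grad.shape == x.shape   # (1, 512, 29, 29)
```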

I am interested in doing exactly the same thing in PyTorch: computing these gradients starting from just a plain NumPy array (or a PyTorch tensor/variable) standing in for res(i).x and res(i+1).dzdx (and the weights). Is there any way that can be achieved? Thanks.
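One possible sketch, assuming you only have raw tensors and do not want to rebuild an autograd graph: the `torch.nn.grad` helpers compute convolution gradients directly from input, weight, and grad-output tensors, much like the MatConvNet call. The shapes below are illustrative (NCHW, 3x3 kernel, stride 1, no padding):

```python
import torch
from torch.nn import grad

# Illustrative shapes; substitute your own tensors.
x = torch.randn(1, 16, 8, 8)      # res(i).x
w = torch.randn(32, 16, 3, 3)     # l.weights{1}
dzdy = torch.randn(1, 32, 6, 6)   # res(i+1).dzdx (8 - 3 + 1 = 6)

dzdx = grad.conv2d_input(x.shape, w, dzdy)    # res(i).dzdx
dzdw = grad.conv2d_weight(x, w.shape, dzdy)   # dzdw{1}
dzdb = dzdy.sum(dim=(0, 2, 3))                # dzdw{2}: bias gradient
print(dzdx.shape, dzdw.shape, dzdb.shape)
```

A NumPy array would first need converting with `torch.from_numpy`. There is no analogous helper for max pooling, so for `vl_nnpool` the autograd route (rerun the forward pass, then call `backward`) is the practical option.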