Google Colab RuntimeError: mat1 dim 1 must match mat2 dim 0

Hi, I am studying NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, and I found a very useful PyTorch implementation on GitHub. I ran into a problem when running the full code version of NeRF on Google Colab: all the cells work fine except the last one (below the "Run training / validation" cell).

I got the following error at line 113:

RuntimeError: mat1 dim 1 must match mat2 dim 0

It seems like the source code is missing two arguments in the call to the nerf_forward_pass function: the function takes 19 arguments, but the call passes only 17. However, I still get the same error after adding the two missing arguments. How can I solve this problem?

I would be glad if you could provide guidelines or a solution.

Could you post the complete stack trace of the error message?
I guess a linear layer might be using a wrong number of in_features for the incoming activations and will thus yield this shape mismatch.
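To illustrate the guess, here is a minimal sketch (the layer sizes are hypothetical, not taken from the NeRF notebook): a linear layer whose in_features does not match the feature dimension of the incoming activations raises exactly this kind of RuntimeError.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: the layer expects 60 input features, but the
# incoming activations have 63 (e.g. because the positional encoding
# also concatenates the raw xyz coordinates).
fc = nn.Linear(60, 128)
x = torch.randn(1024, 63)

try:
    fc(x)
except RuntimeError as e:
    # Depending on the PyTorch version, the message reads
    # "mat1 dim 1 must match mat2 dim 0" or
    # "mat1 and mat2 shapes cannot be multiplied".
    print(type(e).__name__, e)
```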

I contacted the author; he said he had enabled public editing, which is probably why this error occurred. It is very likely that someone mistakenly edited the code. The author is currently unavailable, though, so it may be a while before he can fix it.

RuntimeError                              Traceback (most recent call last)
<ipython-input-15-885795271e1c> in <module>()
    111         num_fine=num_fine, mode="train", lindisp=False, perturb=True,
    112         encode_position_fn=encode_position_fn,
--> 113         encode_direction_fn=encode_direction_fn
    114     )

7 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/ in linear(input, weight, bias)
   1688     if input.dim() == 2 and bias is not None:
   1689         # fused op is marginally faster
-> 1690         ret = torch.addmm(bias, input, weight.t())
   1691     else:
   1692         output = input.matmul(weight.t())

RuntimeError: mat1 dim 1 must match mat2 dim 0

Add print statements to show the shape of each activation before feeding it into a linear layer in the forward function via:

x = ...
print(x.shape)  # check the feature dimension right before the linear layer
x = self.fc(x)

This would tell you which activation is causing the error, and whether a reshape is done the wrong way or the in_features of the linear layer is set to a wrong value.
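As a sketch of what that debugging looks like (using a hypothetical two-layer MLP, not the actual NeRF model), printing the shape before each linear layer pinpoints where the feature dimension stops matching:

```python
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    # Hypothetical stand-in for the NeRF network, just to show the pattern.
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(63, 128)
        self.fc2 = nn.Linear(128, 4)

    def forward(self, x):
        print("before fc1:", x.shape)  # must end in fc1.in_features (63)
        x = torch.relu(self.fc1(x))
        print("before fc2:", x.shape)  # must end in fc2.in_features (128)
        return self.fc2(x)

out = TinyMLP()(torch.randn(1024, 63))
print("output:", out.shape)  # torch.Size([1024, 4])
```

If one of the printed shapes does not end in the corresponding layer's in_features, that is the layer producing the mismatch.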