One of the variables needed for gradient computation has been modified by an inplace operation (using PyTorch Geometric)

Hi,

Sorry if this isn't too helpful, but maybe you could try updating your PyTorch and PyTorch Geometric versions first.
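For example, you can check what you are currently running with a quick sketch like this (the upgrade command in the comment assumes a pip-based install):

    # Print the installed versions; if they are old, upgrade with e.g.
    # `pip install --upgrade torch torch-geometric`
    import torch
    import torch_geometric

    print(torch.__version__, torch_geometric.__version__)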
I notice that there are two error messages in different sections of the traceback:

     41     losses_q = torch.stack(losses_q).mean(0)
     42     global_optim.zero_grad()
---> 43     losses_q.backward()
     44     global_optim.step()
     46 return model

and

     84 meta_optim.zero_grad()
     85 torch.autograd.set_detect_anomaly(True)
---> 86 loss_q.backward()
     87 # print('meta update')
     88 # for p in self.net.parameters()[:5]:
     89 # 	print(torch.norm(p).item())
     90 meta_optim.step()
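One tip about the set_detect_anomaly(True) call in the second traceback: anomaly mode only produces the extra "which forward op created this tensor" traceback if it is already active during the forward pass, not just before .backward(). A minimal sketch with a stand-in model (your real forward pass goes inside the block):

    import torch

    net = torch.nn.Linear(3, 1)   # stand-in for your model
    x = torch.randn(5, 3)

    # detect_anomaly must wrap the FORWARD pass so autograd can record
    # where each tensor was produced.
    with torch.autograd.detect_anomaly():
        loss_q = net(x).mean()    # forward
        loss_q.backward()         # an error here now also prints the
                                  # traceback of the offending forward op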

I have some similar experience with this issue; I hope it helps.
If there are several .backward() and .step() calls in your system, PyTorch will update your model weights in place at the first .step().
This causes the RuntimeError if any remaining loss was computed from those same (now modified) weights, because the autograd graph still holds references to the tensors the optimizer changed in place.
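Here is a minimal sketch of that failure mode and one way to fix it. The two-layer model, optimizer, and loss names below are stand-ins, not your actual code:

    import torch

    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Linear(4, 1))
    optim = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(8, 4)

    # --- failure pattern: two losses built from the SAME weight tensors ---
    loss_a = model(x).mean()
    loss_b = model(x).pow(2).mean()

    loss_a.backward()
    optim.step()         # updates the weights in place (version counter bump)
    # loss_b.backward()  # <- RuntimeError: one of the variables needed for
    #                    #    gradient computation has been modified by an
    #                    #    inplace operation

    # --- fix: rebuild the remaining loss AFTER the step, so its graph
    #     references the updated weights ---
    optim.zero_grad()
    loss_b = model(x).pow(2).mean()
    loss_b.backward()
    optim.step()

The other common fix is to accumulate all the losses and call .backward() once before the first .step(), so that no weights are modified while a graph still needs them.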
For more background, you could refer here.
This is the only solution I can think of, sorry about that.