I’m running simulated maximum likelihood estimation, using autograd for gradient information. The program runs fine on my GPU, but the kernel in Jupyter and Spyder dies every time I try to run it on my CPU. I tried running the program from the command line instead, but it still stops in the same place without printing any error information.
My machine has 5.2 GB of available memory and the program only consumes 1.4 GB, so I don’t think memory is the issue. I’ve also recently updated PyTorch (stable version) and Anaconda, so none of my packages should be out of date. However, if I run the program without autograd, or if I change .rsample() to .sample(), the problem goes away. I need autograd and .rsample(), though, to get the relevant gradient information. Does anyone know why this might be happening?
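For context, here is a minimal sketch (not my actual model) of why I need .rsample() rather than .sample(): only .rsample() uses the reparameterization trick, which keeps the draw attached to the autograd graph so gradients can flow back to the distribution parameters.

```python
import torch
from torch.distributions import Normal

mu = torch.tensor(0.5, requires_grad=True)
sigma = torch.tensor(1.0, requires_grad=True)
dist = Normal(mu, sigma)

# .rsample() draws via the reparameterization trick (x = mu + sigma * eps),
# so the sample stays connected to the graph and gradients reach mu/sigma.
x = dist.rsample()
x.backward()
print(mu.grad)          # dx/dmu = 1.0 for a Normal draw

# .sample() draws without tracking gradients; the result is detached,
# so there is no gradient path back to the parameters.
y = dist.sample()
print(y.requires_grad)  # False
```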
Edit: I’ve also found that if I restrict the number of simulations to a sufficiently small value, the program does run on the CPU, but it fails to calculate the gradient and returns NaNs instead. The GPU, by contrast, calculates the gradient correctly at each time step. The CPU and GPU also return very different values for the objective function.
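One thing I’ve been trying in order to locate where the NaNs originate on the CPU path is PyTorch’s anomaly detection, which makes backward() raise an error at the exact op that produced a NaN. A toy illustration (not my actual model, just sqrt of a negative number to force a NaN):

```python
import torch

# Anomaly detection makes backward() fail loudly at the op that produced a
# NaN, with a traceback pointing at the offending forward-pass line.
torch.autograd.set_detect_anomaly(True)

# Toy example: sqrt(-1) is NaN, and its backward pass is NaN as well,
# which anomaly mode surfaces as a RuntimeError.
x = torch.tensor(-1.0, requires_grad=True)
y = torch.sqrt(x)

caught = None
try:
    y.backward()
except RuntimeError as err:
    caught = err
    print(caught)  # names the backward function that returned NaN

torch.autograd.set_detect_anomaly(False)
```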