I wasn’t using the nightly version, so I switched to that, but it still wouldn’t give me the traceback. I finally tried running it from the Anaconda prompt, and it gave me this:
[W ..\torch\csrc\autograd\python_anomaly_mode.cpp:60] Warning: Error detected in CheckpointFunctionBackward. Traceback of forward call that caused the error:
File "estimations_new2.py", line 200, in <module>
take_step=take_step, minimizer_kwargs=arg)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\_basinhopping.py", line 679, in basinhopping
accept_tests, disp=disp)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\_basinhopping.py", line 72, in __init__
minres = minimizer(self.x)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\_basinhopping.py", line 284, in __call__
return self.minimizer(self.func, x0, **self.kwargs)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\_minimize.py", line 626, in minimize
constraints, callback=callback, **options)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\slsqp.py", line 370, in _minimize_slsqp
bounds=new_bounds)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 262, in _prepare_scalar_function
finite_diff_rel_step, bounds, epsilon=epsilon)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\_differentiable_functions.py", line 76, in __init__
self._update_fun()
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\_differentiable_functions.py", line 166, in _update_fun
self._update_fun_impl()
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\_differentiable_functions.py", line 73, in update_fun
self.f = fun_wrapped(self.x)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\_differentiable_functions.py", line 70, in fun_wrapped
return fun(x, *args)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 74, in __call__
self._compute_if_needed(x, *args)
File "C:\Users\Peter\Anaconda3\lib\site-packages\scipy\optimize\optimize.py", line 68, in _compute_if_needed
fg = self.fun(x, *args)
File "C:\Users\Peter\Desktop\JMP\Analysis\Original_estimation_3-22-19\myLib_new2.py", line 115, in fun
ll = ll + logL(x, theta, model, data[d])
File "C:\Users\Peter\Desktop\JMP\Analysis\Original_estimation_3-22-19\myLib_new2.py", line 54, in logL
out = checkpoint.checkpoint(fn, x)
File "C:\Users\Peter\Anaconda3\lib\site-packages\torch\utils\checkpoint.py", line 163, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
(function print_stack)
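For reference, the “Traceback of forward call” above is printed by PyTorch’s anomaly detection, which I have turned on while debugging. A minimal sketch of enabling it (the `torch.autograd.detect_anomaly()` context manager works as well) is:

```python
import torch

# Anomaly detection makes autograd record the forward-call stack and print it
# (as above) when an error is later raised during the backward pass.
torch.autograd.set_detect_anomaly(True)
```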
I also discovered that I don’t encounter the error if I run my estimation on only one of the two data sets I’m using, which makes the for loop indexed by d in my original post run only once instead of twice.
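To make that concrete, here is a condensed sketch of the structure the traceback points at. Only the loop indexed by d and the `checkpoint.checkpoint(fn, x)` call mirror my actual code (myLib_new2.py lines 115 and 54 in the traceback); `fn`, `theta`, and the data below are stand-in placeholders:

```python
import torch
from torch.utils import checkpoint

def logL(x, theta, data_d):
    def fn(x_):
        # placeholder for the real per-data-set forward pass
        return ((x_ - data_d) ** 2).sum() * theta
    # the checkpointed forward call (myLib_new2.py line 54 in the traceback)
    return checkpoint.checkpoint(fn, x)

def fun(x, theta, data):
    ll = 0.0
    # the loop indexed by d: runs twice with both data sets,
    # only once when I estimate on a single data set
    for d in range(len(data)):
        ll = ll + logL(x, theta, data[d])
    return ll

# toy usage: with two data sets, the checkpointed function is evaluated twice
x = torch.randn(3, requires_grad=True)
data = [torch.randn(3), torch.randn(3)]
fun(x, 0.5, data).backward()
```

So with both data sets, each objective evaluation goes through the checkpointed function twice, and that is the only case in which the CheckpointFunctionBackward error appears.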