I use `torch.autograd.functional.jacobian(f, x)` to compute the partial derivatives of f with respect to x, but as the output dimension of f grows, the computation time increases.
Does anyone know of a way to speed up the Jacobian calculation in PyTorch?
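Not from the original post, but for context, a minimal sketch of the two fixes usually suggested for this: the experimental `vectorize=True` flag of `torch.autograd.functional.jacobian`, and functorch's `jacrev` (which is what the functorch install below is about). The toy function `f` is an assumption for illustration.

```python
import torch
from torch.autograd.functional import jacobian

def f(x):
    # Toy vector-valued function (hypothetical); its Jacobian is dense.
    return torch.sin(x) * x.sum()

x = torch.randn(100)

# Default strategy: one reverse-mode pass per output element,
# so cost grows with the output dimension.
J_default = jacobian(f, x)

# vectorize=True batches those passes (flagged experimental in the docs).
J_vectorized = jacobian(f, x, vectorize=True)

# functorch equivalent: jacrev composes vmap with reverse-mode autograd.
from functorch import jacrev
J_functorch = jacrev(f)(x)

assert torch.allclose(J_default, J_vectorized)
assert torch.allclose(J_default, J_functorch)
```

If the output dimension keeps growing past the input dimension, forward mode (`jacfwd` in functorch) may scale better, since reverse mode pays one pass per output while forward mode pays one per input.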
functorch has an issue on my machine… the installation reports success:
Successfully built functorch
Installing collected packages: functorch
Successfully installed functorch-0.2.0a0+a2d2f99
but when I run my code:
from functorch import vmap, grad
ModuleNotFoundError: No module named 'functorch'
If you're using Linux, you can check which python3 executable you're using with `which python3`. If you installed functorch, you most likely created a new environment (as stated in the install guide) and installed it there. Also check that you didn't install PyTorch with conda when you installed functorch! If that's all fine, manually check your environment and see where PyTorch is installed and where functorch is installed.
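For instance, a hypothetical diagnostic you could run in the interpreter you think is active (not from the thread itself):

```python
# Print which python binary is running and where each package resolves from.
import sys
print(sys.executable)      # the interpreter actually in use

import torch
print(torch.__file__)      # this environment's PyTorch location

import functorch           # a ModuleNotFoundError here means functorch
print(functorch.__file__)  # was installed into a different environment
```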
Did you follow the instructions in the Google Colab? When you install functorch in a new environment, you'll need to make sure PyTorch (among other dependencies) is reinstalled in the new environment too.
Yes, I followed them without any problem, but when my Google Drive was mounted to load my code, which uses the `functorch` package, I got the error saying there is no module named `functorch`.
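One possible explanation (an assumption, not confirmed in the thread): mounting Google Drive only makes your files visible to the Colab runtime; it does not install any packages, and pip installs do not persist across Colab sessions, so functorch would need to be reinstalled in each new runtime. A sketch of a Colab cell:

```python
# Reinstall functorch into the current Colab runtime (installs do not
# survive runtime restarts; mounting Drive installs nothing by itself).
!pip install functorch

from google.colab import drive
drive.mount('/content/drive')

# Make the code stored on Drive importable (hypothetical path).
import sys
sys.path.append('/content/drive/MyDrive/my_project')

from functorch import vmap, grad  # should now resolve
```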