Auto-gptq installation

Hello, I am trying to install auto-gptq locally and I receive this error (apparently torch is not installed, but it is):

Collecting auto-gptq
Using cached auto_gptq-0.3.2.tar.gz (63 kB)
Installing build dependencies … done
Getting requirements to build wheel … error
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 4294967295
╰─> [1 lines of output]
torch is not installed, please install torch first!
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 4294967295
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

I installed torch:

PS C:\Users\danie> C:\Users\danie\AppData\Local\Programs\Python\Python311\python.exe -m pip install torch
Requirement already satisfied: torch in c:\users\danie\appdata\local\programs\python\python311\lib\site-packages (2.0.1)

But I still receive the error. Why is that? I would appreciate your help very much :)

You could of course remove this check if you are convinced PyTorch is properly installed in your environment, but I would guess the build would fail at a later stage and the error is real, so check whether different environments are being used or created.
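One quick way to check whether different environments are in play: print which interpreter is actually running and whether torch is importable from it. This is a minimal sketch, not specific to auto-gptq:

```python
import importlib.util
import sys

# Which interpreter is running? If this path differs from the one you
# used for "pip install torch", two different environments are in play.
print(sys.executable)

# Check whether torch is importable from this interpreter without
# paying for the full import.
print("torch importable:", importlib.util.find_spec("torch") is not None)
```

Run this with the exact same python.exe you pass to pip.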

Thank you for your reply. I am only using Visual Studio Code to install everything… I do not have different environments. I do not have conda or anything like that.

Does CUDA 12.2 work with the latest torch version? I have CUDA 12.2.

Yes, the PyTorch binaries ship with their own CUDA toolkit and only a properly installed NVIDIA driver is needed. Your locally installed CUDA toolkit will be used if you build PyTorch from source or a custom CUDA extension. The error is also unrelated to CUDA since no installed binary can be found at all.

Okay, I work in the same environment, have cuda, have a NVIDIA driver (I bought the laptop one week ago and updated everything). Is there anything else I could try to make it work?

Yes, you could install PyTorch manually and run a smoke test to make sure that PyTorch itself can be imported and your GPU can be used. Something like this should do it: import torch; print(torch.randn(1).cuda()). If this prints a proper CUDA tensor, your setup looks good. Afterwards try to install auto_gptq. If this then wipes your PyTorch installation or complains again, you could either debug their setup or ask the authors of this package.
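The smoke test above can be written a bit more defensively, so it also tells you when torch itself is not importable from the current interpreter (a sketch, guarded so it does not crash either way):

```python
import importlib.util

# Hedged smoke test: only attempt the CUDA check if torch is importable.
if importlib.util.find_spec("torch") is not None:
    import torch

    print("torch version:", torch.__version__)
    print("cuda available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        # Should print a tensor with device='cuda:0' on a working setup.
        print(torch.randn(1).cuda())
else:
    print("torch is not importable from this interpreter")
```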

Hi ptrblck,

Thank you for all your help.

import torch; print(torch.randn(1).cuda()) showed that CUDA and torch are working together, but I still got the “torch is not installed, please install torch first!” error when installing auto-gptq. I ended up downloading auto-gptq from GitHub and installing it with setup.py; after some minor errors it worked out well, and there was no complaint about “torch is not installed, please install torch first!” even though that check is in the setup.py code.
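A likely reason the source install avoided the message is that pip builds packages in an isolated environment, where the torch installed in your interpreter is not visible to the build step; running setup.py directly skips that isolation. If that guess is right, disabling build isolation for the pip install should also work:

```shell
# Assumption: the "torch is not installed" message comes from pip's
# isolated build environment, which cannot see your installed torch.
# Disabling isolation lets the build reuse the existing installation.
pip install auto-gptq --no-build-isolation
```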

My model is working and I hope everything is fine now, so thank you very much for all your help :) I appreciate it very much!

Cool! Thanks for the update and good to hear it’s working now :)