Installation from source stuck

I’ve installed pytorch 1.12.1 using the following command:
pip3 install torch torchvision torchaudio --extra-index-url

This is what I got installed:

torch 1.12.1+cu116
torchaudio 0.12.1+cu116
torchvision 0.13.1+cu116

Now I’m checking to see if torch is able to find my GPU:

Python 3.10.4 (main, Jun 29 2022, 12:14:53) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
>>> torch.cuda.current_device()
/home/alex/invoicenet/lib/python3.10/site-packages/torch/cuda/ UserWarning: 
    Found GPU0 NVIDIA GeForce 920M which is of cuda capability 3.5.
    PyTorch no longer supports this GPU because it is too old.
    The minimum cuda capability supported by this library is 3.7.
  warnings.warn(old_gpu_warn % (d, name, major, minor, min_arch // 10, min_arch % 10))
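For reference, you can query the compute capability torch sees with `torch.cuda.get_device_capability(0)`, which returns a (major, minor) tuple. A minimal sketch of the comparison behind the warning above, using the (3, 5) and (3, 7) values from the warning itself:

```shell
# Sketch of the capability check: compute capability is a (major, minor)
# pair, and (3, 5) sorts below the (3, 7) minimum the library requires.
python3 -c "cap = (3, 5); min_cap = (3, 7); print('supported' if cap >= min_cap else 'unsupported')"
# → unsupported
```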

Correct me if I’m wrong, but from what I have read, my only options now are to upgrade my hardware or build pytorch from source.

So I tried to install pytorch from source following step by step the guide posted here.

Unfortunately it is not working for me. The build gets stuck after showing this error:

In file included from /home/alex/pytorch/c10/util/ConstexprCrc.h:3,
                 from /home/alex/pytorch/c10/test/util/ConstexprCrc_test.cpp:1:
/home/alex/pytorch/c10/util/IdWrapper.h:42:10: error: ‘size_t’ does not name a type
   42 |   friend size_t hash_value(const concrete_type& v) {
      |          ^~~~~~
/home/alex/pytorch/c10/util/IdWrapper.h:5:1: note: ‘size_t’ is defined in header ‘<cstddef>’; did you forget to ‘#include <cstddef>’?
    4 | #include <functional>
  +++ |+#include <cstddef>
    5 | #include <utility>
/home/alex/pytorch/c10/util/ConstexprCrc.h: In member function ‘std::size_t std::hash<c10::util::crc64_t>::operator()(c10::util::crc64_t) const’:
/home/alex/pytorch/c10/util/IdWrapper.h:74:14: error: ‘hash_value’ was not declared in this scope
   74 |       return hash_value(x);                      \
      |              ^~~~~~~~~~
/home/alex/pytorch/c10/util/ConstexprCrc.h:131:1: note: in expansion of macro ‘C10_DEFINE_HASH_FOR_IDWRAPPER’
  131 | C10_DEFINE_HASH_FOR_IDWRAPPER(c10::util::crc64_t);
      | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[4749/6823] Building CXX object c10/te....dir/util/DeadlockDetection_test.cpp.o
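The GCC note itself points at the fix: IdWrapper.h uses size_t without including <cstddef>. A hedged workaround sketch, demonstrated on a scratch copy of the file’s top lines; the assumption is that applying the same one-liner to c10/util/IdWrapper.h in the real checkout and rerunning the build is enough, which only holds if this is the sole missing include:

```shell
# Demo on a scratch file mirroring the top of IdWrapper.h (assumption: the
# missing <cstddef> include is the only problem in this translation unit).
printf '#include <functional>\n#include <utility>\n' > /tmp/IdWrapper_demo.h

# Insert the include the compiler note recommends before line 1 (GNU sed).
sed -i '1i #include <cstddef>' /tmp/IdWrapper_demo.h

head -n 1 /tmp/IdWrapper_demo.h   # → #include <cstddef>
```

Running the same sed against c10/util/IdWrapper.h in your pytorch checkout and resuming the build would tell you whether any further headers are affected.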

Is there a way to fix this or any other workaround so I can use my GPU to train my models without upgrading my hardware?

I’m using Ubuntu 22. This is my nvidia-smi output in case it helps:

Thu Aug 25 11:11:59 2022       
| NVIDIA-SMI 470.141.03   Driver Version: 470.141.03   CUDA Version: 11.4     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 N/A |                  N/A |
| N/A   38C    P8    N/A /  N/A |      4MiB /  2004MiB |     N/A      Default |
|                               |                      |                  N/A |
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|  No running processes found                                                 |

Is this the first error you are seeing?

/home/alex/pytorch/c10/util/IdWrapper.h:42:10: error: ‘size_t’ does not name a type

or were other errors raised before this issue?
I haven’t seen this error before. It usually points to a missing #include, which is strange, since a current source build works fine in my environment.

Hi, I started the process from scratch and just saw that I also get an earlier error during the cloning phase (git clone --recursive

The cloning ends with the following error:
fatal: Failed to request submodule path ‘third_party/tensorpipe’
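A failed submodule fetch during git clone --recursive is often a transient network problem rather than anything wrong with the repository. A hedged sketch of a retry, run from the root of the partially cloned pytorch directory (both commands are standard git; whether this resolves your case depends on the underlying network error):

```shell
# Run inside the partially cloned pytorch checkout.
# Re-read submodule URLs from .gitmodules, then retry fetching all
# submodules, including third_party/tensorpipe.
git submodule sync --recursive
git submodule update --init --recursive
```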