DeepSpeed library - cannot install - getting error: "unable to compile cuda/cpp extensions without a matching cuda version"

I understand that this is happening because of a CUDA version mismatch, but PyTorch itself otherwise runs just fine.

After running pip install deepspeed I get the error below.

OS - Windows 11
Python 3.10.9

Collecting deepspeed
  Using cached deepspeed-0.8.1.tar.gz (759 kB)
  Preparing metadata ( ... error
  error: subprocess-exited-with-error

  × python egg_info did not run successfully.
  │ exit code: 1
  ╰─> [13 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\PC60\AppData\Local\Temp\pip-install-7aimsv5v\deepspeed_b92cc522763946a986a0e569e5e169e0\", line 166, in <module>
        File "C:\Users\PC60\AppData\Local\Temp\pip-install-7aimsv5v\deepspeed_b92cc522763946a986a0e569e5e169e0\op_builder\", line 623, in builder
          self.build_for_cpu = not assert_no_cuda_mismatch(
        File "C:\Users\PC60\AppData\Local\Temp\pip-install-7aimsv5v\deepspeed_b92cc522763946a986a0e569e5e169e0\op_builder\", line 105, in assert_no_cuda_mismatch
          raise Exception(
      Exception: >- DeepSpeed Op Builder: Installed CUDA version 12.0 does not match the version torch was compiled with 11.7, unable to compile cuda/cpp extensions without a matching cuda version.
       [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
       [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Your locally installed CUDA toolkit (in this case CUDA 12.0) is used to build DeepSpeed's custom CUDA extensions, and it must match the CUDA version the PyTorch binaries were built with (in this case CUDA 11.7).
Either downgrade your local CUDA toolkit to 11.7, or build PyTorch from source using CUDA 12.0.
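You can confirm the mismatch yourself by comparing python -c "import torch; print(torch.version.cuda)" against nvcc --version. Roughly, DeepSpeed's assert_no_cuda_mismatch check compares the two versions and aborts the build when they differ; here is a minimal sketch of that comparison (the function name and the major.minor comparison rule are illustrative assumptions, not DeepSpeed's exact code):

```python
def cuda_versions_match(torch_cuda: str, system_cuda: str) -> bool:
    """Illustrative approximation of DeepSpeed's build-time CUDA check:
    compare the major.minor components of the two version strings."""
    return torch_cuda.split(".")[:2] == system_cuda.split(".")[:2]

# The situation from the error message: torch built with 11.7, toolkit is 12.0
print(cuda_versions_match("11.7", "12.0"))  # False -> build aborts with the exception above
print(cuda_versions_match("11.7", "11.7"))  # True  -> extensions can compile
```

The takeaway is that only the installed toolkit or the PyTorch build can change; no pip flag will make mismatched versions compile.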