Quick conclusion, though not an exact solution: I created a new project, ran the same install command as before, and the error is gone. Something about the original project itself, not the code (the code works unchanged), was causing the error.
Below are the exact commands I ran to uninstall and reinstall.
pip uninstall triton
Found existing installation: triton 2.1.0
Uninstalling triton-2.1.0:
Successfully uninstalled triton-2.1.0
pip uninstall torch -y
Found existing installation: torch 2.1.2
Uninstalling torch-2.1.2:
Successfully uninstalled torch-2.1.2
pip install torch
Collecting torch
Obtaining dependency information for torch from https://files.pythonhosted.org/packages/03/f1/13137340776dd5d5bcfd2574c9c6dfcc7618285035cd77240496e5c1a79b/torch-2.1.2-cp310-cp310-manylinux1_x86_64.whl.metadata
Using cached torch-2.1.2-cp310-cp310-manylinux1_x86_64.whl.metadata (25 kB)
Collecting triton==2.1.0 (from torch)
Obtaining dependency information for triton==2.1.0 from https://files.pythonhosted.org/packages/4d/22/91a8af421c8a8902dde76e6ef3db01b258af16c53d81e8c0d0dc13900a9e/triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata
Using cached triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.3 kB)
Using cached torch-2.1.2-cp310-cp310-manylinux1_x86_64.whl (670.2 MB)
Using cached triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 MB)
Installing collected packages: triton, torch
Successfully installed torch-2.1.2 triton-2.1.0
./.venv/lib/python3.10/site-packages/triton/third_party/cuda/bin/ptxas --version
ptxas: NVIDIA (R) Ptx optimizing assembler
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:13:45_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
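So the bundled ptxas binary exists and runs, yet triton's lookup still fails. Judging from the traceback below, triton 2.1's path_to_ptxas() probes a short list of candidate locations and raises RuntimeError when none yields an executable. Here is a minimal sketch of that kind of search; the exact candidate list and the TRITON_PTXAS_PATH override are assumptions about triton's behavior, not taken from its source:

```python
import os
import shutil

def find_ptxas(candidates):
    """Return the first existing, executable path from candidates.

    Roughly mirrors what triton 2.1's path_to_ptxas() appears to do:
    probe a few locations and raise RuntimeError if all of them fail.
    """
    for path in candidates:
        if path and os.path.isfile(path) and os.access(path, os.X_OK):
            return path
    raise RuntimeError("Cannot find ptxas")

# Candidate locations, in a plausible probe order (the TRITON_PTXAS_PATH
# environment variable is an assumption; verify against your installed
# triton/common/backend.py):
venv_ptxas = ".venv/lib/python3.10/site-packages/triton/third_party/cuda/bin/ptxas"
candidates = [
    os.environ.get("TRITON_PTXAS_PATH", ""),  # explicit override, if set
    venv_ptxas,                               # copy bundled in the wheel
    shutil.which("ptxas") or "",              # anything on PATH
]
```

Running this helper in the broken project against the path verified above would show whether the failure is in the lookup itself or in how the project resolves relative paths.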
After reinstalling, I still receive the error:
Traceback (most recent call last):
File "/home/user_n/PycharmProjects/Resnet18 Tests/main.py", line 371, in <module>
model_ft = torch.compile(model_ft)
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/__init__.py", line 1723, in compile
return torch._dynamo.optimize(backend=backend, nopython=fullgraph, dynamic=dynamic, disable=disable)(model)
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 610, in optimize
compiler_config=backend.get_compiler_config()
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/__init__.py", line 1571, in get_compiler_config
from torch._inductor.compile_fx import get_patched_config_dict
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 38, in <module>
from .fx_passes.joint_graph import joint_graph_passes
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/fx_passes/joint_graph.py", line 8, in <module>
from ..pattern_matcher import (
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 28, in <module>
from .lowering import fallback_node_due_to_unsupported_type
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 4768, in <module>
import_submodule(kernel)
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1492, in import_submodule
importlib.import_module(f"{mod.__name__}.{filename[:-3]}")
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/kernel/bmm.py", line 4, in <module>
from ..select_algorithm import (
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/select_algorithm.py", line 25, in <module>
from .codegen.triton import texpr, TritonKernel, TritonPrinter, TritonScheduling
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/codegen/triton.py", line 26, in <module>
from ..triton_heuristics import AutotuneHint
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/triton_heuristics.py", line 23, in <module>
from .coordinate_descent_tuner import CoordescTuner
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/coordinate_descent_tuner.py", line 8, in <module>
if has_triton():
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/torch/_inductor/utils.py", line 83, in has_triton
import triton
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/__init__.py", line 20, in <module>
from .compiler import compile, CompilationError
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/compiler/__init__.py", line 1, in <module>
from .compiler import CompiledKernel, compile, instance_descriptor
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/compiler/compiler.py", line 27, in <module>
from .code_generator import ast_to_ttir
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/compiler/code_generator.py", line 8, in <module>
from .. import language
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/language/__init__.py", line 4, in <module>
from . import math
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/language/math.py", line 4, in <module>
from . import core
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/language/core.py", line 1376, in <module>
def minimum(x, y):
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/runtime/jit.py", line 542, in jit
return decorator(fn)
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/runtime/jit.py", line 534, in decorator
return JITFunction(
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/runtime/jit.py", line 433, in __init__
self.run = self._make_launcher()
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/runtime/jit.py", line 388, in _make_launcher
scope = {"version_key": version_key(),
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/runtime/jit.py", line 120, in version_key
ptxas = path_to_ptxas()[0]
File "/home/user_n/PycharmProjects/Resnet18 Tests/.venv/lib/python3.10/site-packages/triton/common/backend.py", line 119, in path_to_ptxas
raise RuntimeError("Cannot find ptxas")
RuntimeError: Cannot find ptxas
I tried running the two basic examples from the "Introduction to torch.compile" tutorial, and both gave the same error.
Then I tried the two basic examples again, this time in a fresh new project, and both worked. For the fresh project I installed torch using:
pip install torch
Collecting torch
Obtaining dependency information for torch from https://files.pythonhosted.org/packages/03/f1/13137340776dd5d5bcfd2574c9c6dfcc7618285035cd77240496e5c1a79b/torch-2.1.2-cp310-cp310-manylinux1_x86_64.whl.metadata
Using cached torch-2.1.2-cp310-cp310-manylinux1_x86_64.whl.metadata (25 kB)
Collecting triton==2.1.0 (from torch)
Obtaining dependency information for triton==2.1.0 from https://files.pythonhosted.org/packages/4d/22/91a8af421c8a8902dde76e6ef3db01b258af16c53d81e8c0d0dc13900a9e/triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata
Using cached triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.3 kB)
Using cached torch-2.1.2-cp310-cp310-manylinux1_x86_64.whl (670.2 MB)
Using cached triton-2.1.0-0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 MB)
Successfully installed torch-2.1.2 triton-2.1.0
./.venv/lib/python3.10/site-packages/triton/third_party/cuda/bin/ptxas --version
ptxas: NVIDIA (R) Ptx optimizing assembler
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:13:45_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
One difference I noticed in the fresh project is that MarkupSafe was downloaded fresh:
Obtaining dependency information for MarkupSafe>=2.0 from https://files.pythonhosted.org/packages/36/2a/fab302636634e1f770a26aac212e44cff25522ed3c9189bd8afc9ae2effd/MarkupSafe-2.1.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
Downloading MarkupSafe-2.1.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Downloading MarkupSafe-2.1.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
In my original project, by contrast, pip kept the already-installed, slightly older version:
Requirement already satisfied: MarkupSafe>=2.0 in ./.venv/lib/python3.10/site-packages (from jinja2->torch) (2.1.3)
Could the issue have been the slightly out-of-date MarkupSafe? The answer is no:
pip uninstall MarkupSafe
Found existing installation: MarkupSafe 2.1.3
Uninstalling MarkupSafe-2.1.3:
Successfully uninstalled MarkupSafe-2.1.3
pip install torch
Collecting MarkupSafe>=2.0 (from jinja2->torch)
Obtaining dependency information for MarkupSafe>=2.0 from https://files.pythonhosted.org/packages/36/2a/fab302636634e1f770a26aac212e44cff25522ed3c9189bd8afc9ae2effd/MarkupSafe-2.1.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
Using cached MarkupSafe-2.1.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Using cached MarkupSafe-2.1.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Installing collected packages: MarkupSafe
Successfully installed MarkupSafe-2.1.4
The error persists.
raise RuntimeError("Cannot find ptxas")
RuntimeError: Cannot find ptxas
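One workaround worth trying before abandoning a project in this state: point triton directly at the ptxas binary bundled inside its own wheel. This sketch assumes triton 2.1 consults a TRITON_PTXAS_PATH environment variable (check your installed triton/common/backend.py to confirm); the variable must be set before torch or triton is imported:

```python
import os

# Point triton at the ptxas binary that ships inside its own wheel.
# TRITON_PTXAS_PATH appears to be consulted by triton 2.1's
# path_to_ptxas() -- an assumption, verify in triton/common/backend.py.
# Use an absolute path so it is independent of the working directory.
bundled = os.path.abspath(
    ".venv/lib/python3.10/site-packages/triton/third_party/cuda/bin/ptxas"
)
os.environ["TRITON_PTXAS_PATH"] = bundled

# Only after setting the variable:
#   import torch
#   model_ft = torch.compile(model_ft)
```

If this makes the error disappear in the original project, it would confirm the lookup, not the binary, is what fails there.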
Moving my original code into the fresh project produced no errors. So I believe something in my original project's environment, not the code, was somehow causing the issue.
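For anyone hitting the same wall: a quick way to compare a broken project against a fresh one is to print which interpreter and site-packages each actually resolves at runtime, since an IDE-managed venv can silently point somewhere stale. A small diagnostic:

```python
import sys
import sysconfig

# Print which interpreter and site-packages this project actually uses.
# Diffing this output between the broken and the fresh project can expose
# a stale or mismatched virtual environment.
print("interpreter  :", sys.executable)
print("site-packages:", sysconfig.get_paths()["purelib"])
print("sys.path entries containing 'site-packages':")
for p in sys.path:
    if "site-packages" in p:
        print("  ", p)
```

Run it once from each project (with the project's own run configuration, not a terminal) and compare the paths line by line.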