Topic | Replies | Views | Activity
About the torch._inductor category | 0 | 405 | December 6, 2022
Generate Triton kernels for CPU | 3 | 87 | April 18, 2024
Torch.compile raises error when compiled function calls other functions | 1 | 1270 | March 22, 2024
Output runnable backwards _inductor graph `torch.compile` | 0 | 150 | January 23, 2024
Prologue fusion | 0 | 219 | January 23, 2024
Triton kernel launch in TorchInductor | 1 | 398 | December 7, 2023
Torch.compile too slow with SDXL model | 1 | 577 | December 2, 2023
Getting Triton to generate all kernels | 2 | 635 | October 4, 2023
Inductor CPP codegen for WebAssembly target | 2 | 381 | September 27, 2023
Emit Triton IR/ LLVM IR from TorchInductor | 0 | 337 | September 21, 2023
Inductor CPU C++ backend | 0 | 281 | August 28, 2023
Correct way to avoid torch.compile recompilations | 1 | 611 | August 2, 2023
Torch.compile() error with detach() and with torch.no_grad() | 2 | 586 | April 20, 2023
Xblock is not defined | 1 | 633 | April 19, 2023
Does torch.compile use FlashAttention? | 5 | 2144 | April 10, 2023
Trying to use triton on torch inductor | 1 | 1379 | March 21, 2023
BackendCompilerFailed: _compile_fn raised RuntimeError: Triton requires CUDA 11.4+ | 23 | 3927 | February 17, 2023
Torch 2.0 Dynamo Inductor Does not Work for Huggingface Transformers Text Generation Model | 2 | 2474 | January 27, 2023