| Topic | Replies | Views | Activity |
|---|---|---|---|
| About the torch._inductor category | 0 | 407 | December 6, 2022 |
| Generate Triton kernels for CPU | 4 | 103 | April 26, 2024 |
| Torch.compile raises error when compiled function calls other functions | 1 | 1286 | March 22, 2024 |
| Output runnable backwards _inductor graph `torch.compile` | 0 | 153 | January 23, 2024 |
| Prologue fusion | 0 | 229 | January 23, 2024 |
| Triton kernel launch in TorchInductor | 1 | 407 | December 7, 2023 |
| Torch.compile too slow with SDXL model | 1 | 585 | December 2, 2023 |
| Getting Triton to generate all kernels | 2 | 649 | October 4, 2023 |
| Inductor CPP codegen for WebAssembly target | 2 | 384 | September 27, 2023 |
| Emit Triton IR / LLVM IR from TorchInductor | 0 | 344 | September 21, 2023 |
| Inductor CPU C++ backend | 0 | 282 | August 28, 2023 |
| Correct way to avoid torch.compile recompilations | 1 | 618 | August 2, 2023 |
| Torch.compile() error with detach() and with torch.no_grad() | 2 | 592 | April 20, 2023 |
| Xblock is not defined | 1 | 635 | April 19, 2023 |
| Does torch.compile use FlashAttention? | 5 | 2165 | April 10, 2023 |
| Trying to use triton on torch inductor | 1 | 1393 | March 21, 2023 |
| BackendCompilerFailed: _compile_fn raised RuntimeError: Triton requires CUDA 11.4+ | 23 | 3948 | February 17, 2023 |
| Torch 2.0 Dynamo Inductor Does not Work for Huggingface Transformers Text Generation Model | 2 | 2484 | January 27, 2023 |