Triton_ kernel after torch.compile

Dear all,

I noticed lots of triton_ kernel executions in Nsight Systems after wrapping my custom Triton kernel and other functions with torch.compile.

Can anyone please let me know what these triton_ kernels are?

I’m not sure I understand the question, as it seems you are already using custom Triton kernels, so I assume you know what Triton is. In any case, these are kernels generated by TorchInductor (the backend used by torch.compile) via triton-lang/triton.
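
For context, here is a minimal sketch (my own example, assuming a CUDA build of PyTorch 2.x) of how such kernels come about: TorchInductor fuses the elementwise ops below into a single generated Triton kernel, which then shows up in profilers like Nsight Systems under a `triton_`-prefixed name.

```python
import torch

def fused_pointwise(x, y):
    # Inductor typically fuses these elementwise ops into one generated Triton kernel.
    return torch.relu(x * y + 1.0)

compiled = torch.compile(fused_pointwise)

x = torch.randn(1024, 1024, device="cuda")
y = torch.randn(1024, 1024, device="cuda")
out = compiled(x, y)  # the first call triggers compilation and Triton codegen
```

Running with `TORCH_LOGS="output_code"` prints the generated Triton source, where you can see the `triton_`-prefixed kernel definitions that correspond to the names in your profile.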


Thank you for your kind reply!

Please also see “How do I map a kernel back to Inductor code and graph?” in torch.compile, the missing manual (a Google Doc).
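
As a rough sketch of one way to do that mapping (my own example, not taken from the doc): setting `TORCH_COMPILE_DEBUG=1` asks Inductor to write its artifacts, including the generated Triton source, to a `torch_compile_debug/` directory, and the kernel names there should match what Nsight Systems reports.

```python
import os
os.environ["TORCH_COMPILE_DEBUG"] = "1"  # set before importing torch so Inductor picks it up

import torch

@torch.compile
def f(x):
    return torch.sin(x) + torch.cos(x)

f(torch.randn(4096, device="cuda"))

# Afterwards, grep the dumped output code for the kernel name shown in the profiler:
#   grep -rn "triton_" torch_compile_debug/
```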


Wow, you should totally publish this on Gumroad.