How can we reliably figure out what the parameters of a PTX kernel generated by torch.compile() are?

When we run torch.compile() on torch.matmul(A, B), we get a PTX function with signature:
```
.visible .entry triton_(
.param .u64 triton__param_0,
.param .u64 triton__param_1,
.param .u64 triton__param_2,
.param .u64 triton__param_3,
.param .u32 triton__param_4,
.param .u32 triton__param_5
)
```
I want to be able to run this PTX function standalone, with my own launch code in CUDA (see the sketch below for what I have in mind). In general, how do we know what arguments are being passed in each parameter slot?
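For concreteness, here is a minimal sketch of the kind of standalone launcher I'm trying to write, using the CUDA driver API. The mapping of the six parameters is a guess on my part (three device pointers for A, B, and the output, a fourth pointer, and two 32-bit sizes), and the grid/block/shared-memory values are placeholders; the real values come from Triton's launch metadata, which is exactly what I don't know how to recover from the PTX alone.

```
// Hypothetical standalone launcher for the Triton-generated PTX above.
// Parameter meanings, sizes, and launch dimensions are assumptions.
#include <cuda.h>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

#define CU_CHECK(call)                                          \
    do {                                                        \
        CUresult err_ = (call);                                 \
        if (err_ != CUDA_SUCCESS) {                             \
            const char *msg;                                    \
            cuGetErrorString(err_, &msg);                       \
            fprintf(stderr, "%s failed: %s\n", #call, msg);     \
            exit(1);                                            \
        }                                                       \
    } while (0)

int main() {
    CU_CHECK(cuInit(0));
    CUdevice dev;
    CU_CHECK(cuDeviceGet(&dev, 0));
    CUcontext ctx;
    CU_CHECK(cuCtxCreate(&ctx, 0, dev));

    // "triton_kernel.ptx" is a placeholder filename for the dumped PTX.
    CUmodule mod;
    CU_CHECK(cuModuleLoad(&mod, "triton_kernel.ptx"));

    // The entry name matches the .visible .entry symbol in the PTX.
    CUfunction fn;
    CU_CHECK(cuModuleGetFunction(&fn, mod, "triton_"));

    // Hypothetical problem sizes; the real ones depend on A and B.
    const int M = 1024, N = 1024, K = 1024;

    CUdeviceptr dA, dB, dC, dExtra;
    CU_CHECK(cuMemAlloc(&dA, (size_t)M * K * sizeof(float)));
    CU_CHECK(cuMemAlloc(&dB, (size_t)K * N * sizeof(float)));
    CU_CHECK(cuMemAlloc(&dC, (size_t)M * N * sizeof(float)));
    CU_CHECK(cuMemAlloc(&dExtra, 4));  // guess: the 4th .u64 param is also a pointer

    // Guessed interpretation of the two .u32 params -- this is exactly
    // the mapping I don't know how to recover reliably.
    uint32_t arg4 = (uint32_t)M;
    uint32_t arg5 = (uint32_t)N;

    // One pointer per kernel parameter, in declaration order.
    void *kernelParams[] = { &dA, &dB, &dC, &dExtra, &arg4, &arg5 };

    // Grid/block/shared-memory sizes are placeholders: Triton derives them
    // from its launch metadata (num_warps, grid function), not from the PTX.
    CU_CHECK(cuLaunchKernel(fn,
                            /*gridDimX=*/64, /*gridDimY=*/64, /*gridDimZ=*/1,
                            /*blockDimX=*/128, /*blockDimY=*/1, /*blockDimZ=*/1,
                            /*sharedMemBytes=*/0,
                            /*hStream=*/0,
                            kernelParams, /*extra=*/nullptr));
    CU_CHECK(cuCtxSynchronize());

    printf("kernel launched\n");
    return 0;
}
```

This builds with `nvcc launcher.cu -lcuda` (or a plain C++ compiler plus `-lcuda`), but without knowing what each `.param` actually expects, the argument array and launch configuration above are just guesses.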