How to build PyTorch from source for ROCm?

I'm asking this question after trying to build PyTorch for months with no results.
I hope some devs can help.
I built PyTorch from source in a local Python environment for the text-generation-ui app on Linux.
The compile finishes, but libhipblas.so, the other ROCm .so files, and the rocblas folder are missing from the torch/lib folder. Those libraries are required for text-generation-ui to run.
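
For reference, here is a minimal sketch of how I check what ended up in torch/lib of the active install. The note that the official wheels bundle these libraries while a source build typically links against the system ROCm in /opt/rocm is my working assumption, not something I found confirmed in the docs:

```python
import os
import torch

# Locate the torch/lib directory of whichever torch is importable in this env.
torch_lib = os.path.join(os.path.dirname(torch.__file__), "lib")
contents = sorted(os.listdir(torch_lib))
print("\n".join(contents))

# The official ROCm wheels ship entries such as libhipblas.so and a rocblas/
# directory here; in my source build they are absent, presumably because the
# build links against the system ROCm install instead of bundling it.
for name in ("libhipblas.so", "rocblas"):
    print(name, "->", "present" if name in contents else "missing")
```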

I tested with different Python versions from 3.8 to 3.11, different GCC versions from 11 to 13, and two different distros: Ubuntu and Arch.
No luck.

Yes, I could use the nightly wheel or one of the published wheels, which do include those .so libraries, but that would defeat the purpose.
I'm using a 7900 XTX GPU and have been trying to make it work for ML/AI since the ROCm 5.5 beta was released. The released PyTorch wheels always lag behind the ROCm versions, which is why I need to build PyTorch from source. At the time of writing, ROCm 5.6 has been released, with 5.6.1 in beta.
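
As a quick way to confirm whether a given build was actually compiled with HIP support, this is the small check I run (as far as I know, ROCm devices are exposed through the torch.cuda API):

```python
import torch

# Quick sanity check for a ROCm/HIP-enabled build.
print(torch.__version__)           # source builds usually show a +git suffix
print(torch.version.hip)           # HIP version string, or None for a non-ROCm build
print(torch.cuda.is_available())   # ROCm devices are exposed via the cuda API

if torch.cuda.is_available():
    # On a 7900 XTX this should report the Navi 31 / gfx1100 device.
    print(torch.cuda.get_device_name(0))
```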

My question is: are there any other specific flags, CMake setups, or different settings, beyond the guide provided on the page

to build PyTorch from source so that the resulting install has libraries at least similar to the released wheels?
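
For context, this is the recipe I have been following, expressed as a small Python driver so it runs inside the same virtualenv as text-generation-ui. The environment variables and the hipify step below are my reading of the official build guide; gfx1100 as the architecture for the 7900 XTX is an assumption on my part:

```python
import os
import subprocess

env = dict(os.environ)
env.update({
    "ROCM_PATH": "/opt/rocm",        # where the ROCm toolchain is installed
    "USE_ROCM": "1",                 # build the HIP backend
    "USE_CUDA": "0",                 # make sure the CUDA backend stays off
    "PYTORCH_ROCM_ARCH": "gfx1100",  # only build kernels for the 7900 XTX
    "MAX_JOBS": "8",                 # limit parallelism to keep RAM usage sane
})

# Step 1: hipify the CUDA sources in-tree (run from the pytorch checkout).
subprocess.run(["python", "tools/amd_build/build_amd.py"], check=True, env=env)

# Step 2: compile and install into the active environment.
subprocess.run(["python", "setup.py", "develop"], check=True, env=env)
```

Even with this, the resulting torch/lib is missing the libraries mentioned above.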

I have searched GitHub issues and online guides for building PyTorch, with no results.
I have posted the question on the PyTorch GitHub, with no answers.
I hope some pros can help here.

Thank you.
@ptrblck @tom

Please don't tag specific users, as it could discourage others from posting a valid answer and could just create noise.

To your question: I don't know, as I'm not familiar with ROCm.

Back in the olden days of Radeon VII, I used the recipe on my blog.
I’d suspect that the overall process is similar, but I haven’t really tried recently.