CUDA 12 introduces a new family of library-management APIs, cuLibrary*, but I cannot find any related roadmap in the PyTorch repo. Does PyTorch have a plan to use these APIs?
I’m working on AI infrastructure and wonder whether we need to support cuLibrary soon, in case our users running PyTorch need it. Or can I postpone this if most frameworks do not need cuLibrary support?
I captured these APIs while tracing the CUDA 12 samples. It seems nvcc-generated code calls them during kernel registration at program startup. But I guess torch won’t call APIs such as cuLibraryGetGlobal at runtime?
We don’t plan to use these APIs soon, but of course we can add them for interesting use cases. I don’t understand your concern, or what exactly you want to maintain: these are CUDA driver APIs, which already need to be present if you want to execute any GPU workload.
Thanks for your reply. Yes, you are right: I found it’s necessary to support cuLibraryLoadData, cuLibraryUnload, etc., but I guess APIs such as cuLibraryGetUnifiedFunction are rarely called.
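For context, here is a minimal sketch (untested, requires a CUDA 12+ driver and a real compiled module image; the symbol names `my_kernel` and `my_global` are hypothetical) of the load/lookup/unload sequence discussed above. It shows where cuLibraryLoadData/cuLibraryUnload sit relative to the lookup calls like cuLibraryGetGlobal:

```c
#include <cuda.h>
#include <stdio.h>

/* Sketch of the cuLibrary_* call sequence that nvcc-generated
 * registration code issues, assuming `image` points to an in-memory
 * cubin/fatbin/PTX image. Not a definitive implementation. */
int load_and_lookup(const void *image)
{
    CUlibrary lib;
    CUkernel kern;
    CUdeviceptr dptr;
    size_t bytes;

    /* cuInit must precede any other driver API call. */
    if (cuInit(0) != CUDA_SUCCESS)
        return -1;

    /* Load the image as a context-independent library -- the CUDA 12
     * counterpart of the older cuModuleLoadData path. */
    if (cuLibraryLoadData(&lib, image,
                          NULL, NULL, 0,   /* no JIT options */
                          NULL, NULL, 0)   /* no library options */
        != CUDA_SUCCESS)
        return -1;

    /* Hypothetical symbol names; the real ones depend on the image. */
    if (cuLibraryGetKernel(&kern, lib, "my_kernel") != CUDA_SUCCESS)
        fprintf(stderr, "kernel lookup failed\n");

    /* The call in question: resolve a __device__ global's address. */
    if (cuLibraryGetGlobal(&dptr, &bytes, lib, "my_global") != CUDA_SUCCESS)
        fprintf(stderr, "global lookup failed\n");

    return cuLibraryUnload(lib) == CUDA_SUCCESS ? 0 : -1;
}
```

If you intercept driver calls, the pair to support first is cuLibraryLoadData/cuLibraryUnload; the Get* lookups only appear when the program actually resolves a symbol through the library handle.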