Is there any SDK to help use the GPU on Orin in PCIe endpoint mode?
I am using an x86 PC and a Jetson Orin connected by a PCIe cable, with the x86 PC as root complex (RC) and the Orin in endpoint mode. As I understand it, some data can be transferred through the shared memory, but is there any SDK that helps the GPU on the Orin access that data, so it can then be used by CUDA and PyTorch?
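To make my goal clearer, here is a rough sketch of what I hope to do on the Orin side. The device node name "/dev/pcie_ep_shared" is made up, and I do not know whether mmap()ing the shared region and then calling cudaHostRegister() on it is actually supported in endpoint mode, so please treat this only as an illustration of the idea:

```cpp
// Rough sketch only. Assumptions (not from the docs): that the endpoint
// function driver can expose the shared RAM as a char device
// ("/dev/pcie_ep_shared" is a made-up name), that it can be mmap()ed from
// user space, and that cudaHostRegister() is allowed on such a mapping.

#include <cuda_runtime.h>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Toy kernel that touches the shared region from the Orin GPU.
__global__ void fill(unsigned char *buf, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) buf[i] = 0xAB;
}

int main() {
    const char *path = "/dev/pcie_ep_shared";  // hypothetical device node
    const size_t size = 4096;                  // one page, as in the docs

    int fd = open(path, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    void *ptr = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    // Pin the mapping so CUDA can use it (zero-copy); is this supported here?
    cudaError_t err = cudaHostRegister(ptr, size, cudaHostRegisterMapped);
    if (err != cudaSuccess) {
        printf("cudaHostRegister failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    unsigned char *dptr = nullptr;
    cudaHostGetDevicePointer(reinterpret_cast<void **>(&dptr), ptr, 0);

    fill<<<(unsigned)((size + 255) / 256), 256>>>(dptr, size);
    cudaDeviceSynchronize();

    cudaHostUnregister(ptr);
    munmap(ptr, size);
    close(fd);
    return 0;
}
```

If something like this works, I assume a PyTorch tensor could then wrap the same buffer, but that is exactly the part I am unsure about, hence the questions below.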
Here are my questions:
Can the shared memory be made larger, and what is the upper limit?
According to https://docs.nvidia.com/jetson/archives/r35.1/DeveloperGuide/text/SD/Communications/PcieEndpointMode.html, the endpoint "exposes a page of RAM to the root port system".
Is there any SDK that helps use the GPU on the Orin in endpoint mode?
According to https://forums.developer.nvidia.com/t/can-jetson-orin-support-nccl/232845/6, Jetson cannot use NCCL, only NvSciStream. Can NvSciStream solve my problem, and how would I use it?