Hello everyone,
Correct me if I am wrong, but from what I understand the Oculus API is not in charge of actually rendering the scene; that is done through calls to the rendering API (DX11, Vulkan...).
Oculus API calls "only" take the submitted texture and distort it to compensate for the Oculus lens shape (plus, surely, a lot of other things: sensor handling, ASW, etc.).
Thus, with a low-level API like Vulkan, it is up to the game developer (or game-engine developer) to implement features like multi-GPU (mGPU) rendering, leveraging the full hardware access this kind of rendering API provides.
I doubt Oculus will ever implement such things at the API level.
I hope ED will dig into this when implementing Vulkan, because I believe mGPU rendering is key to providing the rendering power needed for higher-resolution VR.