Over the years Unity has refined its single-pass pipeline, and with the upcoming instanced mode the overhead of stereoscopic 3D now falls mostly on the GPU. There is one big drawback for some of us though: it only supports one camera per eye. Monoscopic shooter games that feature scopes can get by with lowering the FOV of the entire camera to achieve a zoom effect. That is not possible in VR; we have to render the scene a second time, typically into a RenderTexture that is then displayed in the scene. This means the entire game has to be optimized for the case where the player has a scope up. It's a big problem for any VR game using weapons with scopes (there are plenty of us).
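For context, the workaround described above usually looks something like this minimal sketch (the class and field names are illustrative, not from any particular project):

```csharp
using UnityEngine;

// Sketch of the usual VR scope workaround: an extra camera renders the
// zoomed view into a RenderTexture that is shown on the scope lens.
public class ScopeView : MonoBehaviour
{
    public Camera scopeCamera;     // extra camera placed at the scope
    public Renderer lensRenderer;  // quad/mesh displaying the zoomed image
    public float zoomFov = 10f;    // narrow FOV produces the zoom effect

    RenderTexture scopeTexture;

    void Start()
    {
        scopeTexture = new RenderTexture(1024, 1024, 24);
        scopeCamera.targetTexture = scopeTexture;
        scopeCamera.fieldOfView = zoomFov;
        lensRenderer.material.mainTexture = scopeTexture;
    }

    // The whole scene is drawn again for this camera every frame,
    // which is the extra batch/SetPass cost described above.
}
```

Because this extra camera sits outside the single-pass stereo pipeline, it duplicates culling and draw calls for the whole scene, which is exactly the cost a built-in third camera slot could avoid.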
Here is a video showing our current scopes and the problem (which is hard to capture on video): if you view it at 60 fps you can see frame drops when the scope is aligned with the eye, while everything is perfectly smooth when the scope is down.
If Unity could support a third camera (or an nth, as the case may be) in the pipeline, it would save us a lot of batches and SetPass calls, and open up plenty of awesome possibilities for both stereoscopic and monoscopic games. The stereoscopic shader helper functions seem to indicate that Left/Right is only a convenience, and that the implementation actually works with camera indices — which in turn suggests it might be possible to expose control over the number of cameras and their transforms.
I've also created a forum post about it: https://forum.unity.com/threads/ideas-that-can-benefit-ut.513085