The image is rendered to cover the entirety of the HMD panel. It is expected and normal that you don’t see all of it when looking straight ahead, but that is just an illusion: you can see some if not most of it in your peripheral vision, depending on how well you adjust your HMD and how far you set the eye relief (if your headset has that adjustment, like the Vive or Index).
The problem is that, due to the lens deformation, a better strategy for visual crispness is to render the centre region with higher detail/resolution and the periphery with lower resolution. Strictly speaking there is only 1 way to do this: render the image distorted with the inverse of the lens distortion, so that once seen through the lens the distortion cancels out and the image appears planar.
However, rendering in such a way is not complex from a pixel shader perspective, but it introduces lots of problems for pixel shader code relying on a standard projection. In effect, the mapping between a pixel’s (x,y) coordinate and the corresponding 3D world coordinate is no longer direct, which makes screen-space algorithms much harder to implement, for example.
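To give an idea of what this pre-distortion means, here is a minimal sketch (not actual headset or FS2020 code; the radial model and the k1/k2 coefficients are made up for illustration) of remapping a normalized screen coordinate. The point is simply that the remap is non-linear, which is exactly what breaks the screen-space assumptions above:

```cpp
#include <cmath>

// Toy radial (barrel-style) remap of a normalized screen coordinate.
// A real HMD runtime supplies per-eye, per-colour-channel distortion data;
// k1/k2 below are illustrative coefficients only, and the exact direction of
// the remap (distort vs. un-distort) depends on the convention used.
struct Vec2 { float x, y; };

Vec2 DistortUV(Vec2 uv)
{
    const float k1 = 0.22f, k2 = 0.24f;             // made-up coefficients
    const float cx = uv.x - 0.5f, cy = uv.y - 0.5f; // centre the coordinate
    const float r2 = cx * cx + cy * cy;             // squared radius from centre
    const float scale = 1.0f + k1 * r2 + k2 * r2 * r2; // radial polynomial
    return { 0.5f + cx * scale, 0.5f + cy * scale };
}
```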
This is why there are 2 approaches to solve this in an “easy” way:
- render the image in a way that maximizes the pixel density of the centre region and minimizes the pixel density of the outer regions. This naively consists of dividing the image into 9 regions (3x3), with the centre one bigger and the side ones smaller. It is quite easy to implement and in practice works quite well.
- render the image in a way that maximizes the legibility of the centre region with supersampling, and minimizes the quality of the outer regions with undersampling. This is implemented at the video card driver level, and in practice on Nvidia it is quite easy to use.
The former method gives higher pixel density in the centre region without changing the overall rendering cost: you render as many pixels as before over the same total render resolution. The VR API expects a traditional undistorted render, which means that once the rendering is done there is an additional mandatory step (which doesn’t cost much either): projecting the 9 areas back onto an evenly spaced 3x3 grid.
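Here is a hedged sketch of what that 3x3 split could look like on the application side. The SplitMultiRes3x3 helper and the 0.7 centre share are mine and purely illustrative; real implementations (e.g. Nvidia Multi-Res Shading) expose the split points and scale factors as tunable parameters:

```cpp
#include <array>

// The render target keeps the same total resolution, but the centre cell is
// given a larger share of it than the share it occupies in the final image,
// so its pixel density goes up while the periphery's goes down.
struct Viewport { float x, y, width, height; };

std::array<Viewport, 9> SplitMultiRes3x3(float renderW, float renderH)
{
    // Illustrative numbers: the centre covers ~50% of the final image per
    // axis but gets 70% of the render target per axis, i.e. ~1.4x density in
    // the centre and ~0.6x on the borders, for the same total pixel count.
    const float centreShare = 0.7f;
    const float borderShare = (1.0f - centreShare) * 0.5f;

    const float colW[3] = { renderW * borderShare, renderW * centreShare, renderW * borderShare };
    const float rowH[3] = { renderH * borderShare, renderH * centreShare, renderH * borderShare };

    std::array<Viewport, 9> cells{};
    float y = 0.0f;
    for (int r = 0; r < 3; ++r) {
        float x = 0.0f;
        for (int c = 0; c < 3; ++c) {
            cells[r * 3 + c] = { x, y, colW[c], rowH[r] };
            x += colW[c];
        }
        y += rowH[r];
    }
    return cells; // one projection/viewport per cell; unwarp to an evenly
                  // spaced 3x3 grid before submitting to the VR API
}
```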
The latter method gives higher sampling quality in the centre region, also without changing the overall rendering cost: you render as many pixels as before over the same total render resolution. Here it is the video card driver that modifies how the pixel shading is handled internally, so that it effectively raises the sampling rate in the centre and reduces it on the periphery. This doesn’t require projecting back to an undistorted view prior to submission to the VR API, because the render is already undistorted.
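For context only: this driver-level variant needs no application code at all, but it builds on the same variable rate shading capability of the hardware that is also exposed directly to applications, for example through D3D12. A minimal capability check could look like this (a sketch, assuming an already created ID3D12Device):

```cpp
#include <windows.h>
#include <d3d12.h>

// Query whether the GPU exposes variable rate shading and, for Tier 2, the
// tile size the shading-rate image works on (16x16 on current Nvidia GPUs).
bool SupportsVrsTier2(ID3D12Device* device, UINT* tileSize)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &options6, sizeof(options6))))
        return false;

    if (tileSize) *tileSize = options6.ShadingRateImageTileSize;
    return options6.VariableShadingRateTier >= D3D12_VARIABLE_SHADING_RATE_TIER_2;
}
```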
Both methods are supposed to even out the rendering cost. In other words, with the same resolution the rendering cost is similar; the difference is where that cost is allocated the most, and in both cases it is the centre region. Pushing these techniques a little further, you can also use them to maintain the same rendering cost with a higher render resolution: it is just a matter of “degrading” the sampling rate even more on the periphery while keeping the same sampling rate in the centre, for example.
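As a rough back-of-the-envelope check (the numbers are mine, purely illustrative, not measurements from any game): raise the resolution by 1.4x per axis (about 2x the pixels), keep the centre at full rate, shade the rest at a quarter rate, and the shading work lands roughly where it started.

```cpp
#include <cstdio>

int main()
{
    const double resScale      = 1.4 * 1.4; // ~2x the pixels overall
    const double centreShare   = 0.35;      // fraction of pixels kept at full rate
    const double peripheryRate = 0.25;      // 1 shade per 2x2 block elsewhere

    // Relative shading cost vs. a 1.0 baseline at the original resolution.
    const double cost = resScale * (centreShare * 1.0 + (1.0 - centreShare) * peripheryRate);
    std::printf("relative shading cost: %.2f (baseline = 1.00)\n", cost); // ~1.00
    return 0;
}
```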
The latter technique with Nvidia goes 1 step further: at the rendering engine level, you tell the driver the sampling rate directly, per 16x16-pixel block. This means you can selectively undersample regions of the view known not to hold much detail (moving objects, moving ground, the periphery) and supersample the regions known to require more detail (static or close-by objects, etc.). This is super easy to implement with the Nvidia API (only 1 function call per frame), and you can derive the sampling rate texture (the 16x16-pixel blocks) directly from the TAA motion vector texture, for example, as a basis.
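I’m not quoting the Nvidia API from memory here, so as an illustration of the same idea here is a sketch using the standard D3D12 Tier 2 variable rate shading path instead: one byte per 16x16 tile in a shading-rate image that gets bound with RSSetShadingRateImage. The BuildShadingRateTiles helper and the motion thresholds are mine, and I assume the per-tile maximum motion has already been reduced from the TAA motion vector texture:

```cpp
#include <cstdint>
#include <vector>
#include <windows.h>
#include <d3d12.h>

// Build the per-tile shading rates (one byte per 16x16 tile) from a buffer
// holding the maximum motion magnitude per tile, derived beforehand from the
// TAA motion-vector texture. Thresholds are illustrative, not tuned values.
std::vector<uint8_t> BuildShadingRateTiles(const std::vector<float>& tileMotion,
                                           uint32_t tilesX, uint32_t tilesY)
{
    std::vector<uint8_t> rates(tilesX * tilesY, D3D12_SHADING_RATE_1X1);
    for (uint32_t i = 0; i < tilesX * tilesY; ++i)
    {
        const float motion = tileMotion[i];        // pixels of motion this frame
        if (motion > 8.0f)
            rates[i] = D3D12_SHADING_RATE_4X4;     // fast-moving: shade 1 pixel in 16
        else if (motion > 2.0f)
            rates[i] = D3D12_SHADING_RATE_2X2;     // moderate motion: shade 1 pixel in 4
        // else: static or near-static content keeps the full 1x1 rate
    }
    // Copy the result into an R8_UINT texture and bind it once per frame with
    // ID3D12GraphicsCommandList5::RSSetShadingRateImage().
    return rates;
}
```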
PS: this idea of using the TAA buffer is one I thought about when looking at how the Nvidia API works. It might be highly dependent on game content though, and it might prove not to work well in practice with FS2020. A couple of months ago I had the chance to discuss it with a developer from a studio that had implemented the Nvidia API. He was giving a presentation about how they implemented it in their game, which was quite complex, with feature and edge detection, etc. They hadn’t thought about just using the TAA buffer, but he told me he found the idea worth trying.