Quest 3 and MSFS - please make reports here

I haven’t tested thoroughly, but a few people have reported lately that DX12 works best with HAGS ON. It does work for me, with no regular stutters (never thought I’d ever say that; in a couple of years with the Reverb G2 it was NEVER the case, I always had some sort of regular stuttering or “rubber band” effect).

It makes sense to me, as it only takes a 130 Mbps bitrate to look amazing without any noticeable compression artifacts, compared to H.264 needing 600-800 Mbps to achieve that. My guess is that the network connection is the most unreliable element, and giving it breathing room leads to better results. Latency can be lower with H.264, but I don’t see any issues with latency, which VD reports as mostly 63 to 67 ms. That’s borderline, but again, I don’t feel any latency, so it must be OK.

It’s called “Remove HAM mask” or something like that. When I trigger it I can see the video expanding in the periphery. And I remember Matt saying that it doesn’t affect performance at all, so why not.

Not sure if it makes any difference. Turbo on in OXRTK does seem to make a difference.

Both are legit ways to operate the 3D cache, but there is a better way (I think) that I’m using. I set “auto” in BIOS, which means Windows tends to prefer the faster CCD1 cores - because why not use faster cores for everything? Generally, only games benefit from the 3D cache, so I’m using Process Lasso to force all games onto CCD0 and everything else onto CCD1 (and I run a motion simulator that I built, along with a bunch of satellite software). I use “CPU sets”, which is a softer version of CPU affinity, meaning things can use other cores if they need to, but cores in a CPU set get priority. Yes, using Game Mode and proper driver configuration can automate putting games on CCD0, but it won’t force everything else onto CCD1. My way, I free up the CCD0 cores for MSFS and VR, and I put the VR processes on the top 4 cores of CCD0 just in case data needs to be transferred from the game to VR, because that’s slower between CCDs than within a single CCD. Everything else, including most system processes and all the satellite software, runs on CCD1. A rough scripted version of the same idea is sketched below.
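
If anyone would rather script this than click around in Process Lasso, Windows exposes the same “soft” mechanism through its CPU-set API (which, as far as I know, is what Process Lasso’s CPU sets option is built on). Below is a rough, untested sketch rather than a polished tool: it lists the system CPU sets, keeps the ones belonging to CCD0 (assuming CCD0 is cores 0-7; verify that for your own CPU), and applies them as the default CPU sets of a target process given by PID. Unlike hard affinity, the scheduler can still spill threads onto other cores if it really needs to.

```c
// Rough sketch: apply "soft" CPU sets (not hard affinity) to one process.
// Assumption: CCD0 is physical cores 0-7 (CoreIndex < 8); adjust for your CPU.
#define _WIN32_WINNT 0x0A00
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);

    // First call asks how big the CPU-set table is, second call fetches it.
    ULONG len = 0;
    GetSystemCpuSetInformation(NULL, 0, &len, GetCurrentProcess(), 0);
    PSYSTEM_CPU_SET_INFORMATION info = (PSYSTEM_CPU_SET_INFORMATION)malloc(len);
    if (!info || !GetSystemCpuSetInformation(info, len, &len, GetCurrentProcess(), 0)) {
        fprintf(stderr, "GetSystemCpuSetInformation failed (%lu)\n", GetLastError());
        return 1;
    }

    // Collect the CPU-set IDs that belong to CCD0 (assumed: CoreIndex 0-7).
    ULONG ids[64];
    ULONG count = 0;
    for (BYTE *p = (BYTE *)info; p < (BYTE *)info + len; ) {
        PSYSTEM_CPU_SET_INFORMATION e = (PSYSTEM_CPU_SET_INFORMATION)p;
        if (e->Type == CpuSetInformation && e->CpuSet.CoreIndex < 8 && count < 64)
            ids[count++] = e->CpuSet.Id;
        p += e->Size;   // entries are variable-sized; Size tells us how far to step
    }

    // Apply the default CPU sets to the target process. The scheduler prefers
    // these cores but may still use others if they are all busy.
    HANDLE h = OpenProcess(PROCESS_SET_LIMITED_INFORMATION, FALSE, pid);
    if (!h || !SetProcessDefaultCpuSets(h, ids, count)) {
        fprintf(stderr, "SetProcessDefaultCpuSets failed (%lu)\n", GetLastError());
        return 1;
    }
    printf("Assigned PID %lu to %lu CPU sets on CCD0\n", pid, count);
    CloseHandle(h);
    free(info);
    return 0;
}
```

You’d compile it with a current Windows SDK and run it against the sim’s process ID, or do the mirror image (CoreIndex >= 8) for everything you want nudged onto CCD1. Process Lasso just does all of this for you, plus persistence across launches.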

Mouse? I hate the mouse in VR with a passion! So I’m biased. I do use the latest beta of VD. Check “pass controller data” and it’s all automatic. Switching to hand tracking is done via the Quest 3 settings; it can be automatic with more or less sensitivity, or by double-tapping the controllers together, or both.

As for replacing the mouse: well, I’d say it’s good enough for the most part, but only if you’re willing to compromise. It’s also very subjective. Hand tracking is by nature less precise than VR controllers, but because you don’t need to reach for controllers, it’s much more intuitive. VR controller implementation in MSFS is, well… let’s say, far from perfect. Hands emulating controllers can only be as good as the controllers, so they carry all the associated problems and add less precision on top. But the mouse is a huge immersion-breaker for me: it’s a 2-dimensional representation of a 3-dimensional movement. Plus, it can be infuriating to hunt for click spots when it doesn’t want to attach to a particular 3D surface. With hands you gain a huge immersion boost: you can naturally take your hand off the yoke or throttle and reach out to flip a switch, etc.

I usually still use a physical VR controller for the initial FMC programming, but mostly because in my motion rig, if I reach down low with a hand, the hand tracking and pinch trigger become very unreliable; my hand is obscured by the frame of the rig and it’s just dark there. Buttons on the dash work just fine, as do the overhead switches. I can probably do FMC adjustments in flight with my hands; I haven’t tried it yet. The laser mode is much less precise, because tracking errors accumulate further away from the hand, and when you pinch, the hand moves, so it’s a bit challenging to use the laser. I try to only use direct mode for hand tracking when I can. Tapping your left finger to your right palm emulates the B button, so if that’s set to switch, it does that. Sometimes I have to tap twice, though, but it works. I always switch it off that way, because otherwise it’s easy to accidentally interact with random things while grabbing the yoke.

I can try that. I haven’t really tested it thoroughly; I just set it based on hearsay and on the fact that Resizable BAR supposedly requires HAGS ON to function, which is a good thing, I suppose.

I think it has something to do with CPU-to-GPU data transfer: along with Resizable BAR, it all allows for less waiting and faster communication between the CPU and GPU, in theory.
