renderToTexture intercept and synthetic vision implementation

I’ve got two related questions, so I thought I’d ask them together.

I saw that in P3D, one can create a camera and enable renderToTexture to map that camera to a VCockpit texture to drive a gauge.
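For reference, here's roughly what I have in mind, pieced together from the SDK docs. The exact parameter names are from memory, so treat them as assumptions rather than a verified config:

```ini
; aircraft.cfg -- camera definition that renders to a texture
; (parameter names approximate; check the P3D SDK camera docs)
[CameraDefinition.001]
Title = "GaugeCamera"
Guid = {11111111-2222-3333-4444-555555555555}
Origin = Center
RenderToTexture = TRUE

; panel.cfg -- a VCockpit section draws onto a $-prefixed texture
; referenced by a material in the VC model
[VCockpit01]
size_mm = 512,512
texture = $GaugeScreen
```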

I have not tested this yet, but I’d like to intercept the resulting texture and perform some operations on it before it’s displayed in the cockpit. Does anyone know if this is possible?

To investigate, I thought the synthetic vision component of the G1000 might be implemented this way, but digging into it, it seems it’s actually some kind of Bing map control with all the interesting parts hidden behind Coherent. I guess that means it’s C++ code somewhere in the sim itself rather than something defined in HTML or JS.

My goal is to create something like the synthetic vision display, i.e. some kind of false-color view. It would be even better if I could get actual mesh or raycast data from the camera, but a texture would be a great start.
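The false-color part itself is just a per-pixel transform once the texture is accessible as pixel data. A minimal sketch of the idea in Python/NumPy (purely illustrative; in the sim this would presumably live in a shader or gauge code, not Python):

```python
import numpy as np

def false_color(gray: np.ndarray) -> np.ndarray:
    """Map a normalized grayscale image (H, W) in [0, 1] to an RGB
    false-color image (H, W, 3) using a simple blue->green->red ramp."""
    g = np.clip(gray, 0.0, 1.0)
    r = np.clip(2.0 * g - 1.0, 0.0, 1.0)   # red ramps up in the top half
    b = np.clip(1.0 - 2.0 * g, 0.0, 1.0)   # blue ramps down in the bottom half
    grn = 1.0 - r - b                      # green peaks in the middle
    return np.stack([r, grn, b], axis=-1)

# Example: a horizontal gradient becomes a blue-to-red ramp
img = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
rgb = false_color(img)
print(rgb.shape)  # (4, 8, 3)
```

The same mapping could drive, say, a terrain-elevation or depth visualization if the intercepted texture carries that information in one channel.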