I recently did some measurements over London of data and RAM use with different settings.
Interestingly, autogen data costs more memory than PG, but that could be because the draw distance is lower with PG data enabled over London.
The flight was from London City to Heathrow at 500 ft, measuring data streamed from the server, max committed physical RAM, and max VRAM in use during the flight.
No PG Data

| Terrain LOD | Data streamed | Max RAM | Max VRAM |
| --- | --- | --- | --- |
| 800 | 0.74 GiB | 20.4 GB | 5.0 GB |
| 400 | 0.50 GiB | 15.2 GB | 4.1 GB |
| 200 | 0.34 GiB | 13.2 GB | 3.7 GB |
| 100 | 0.21 GiB | 11.2 GB | 3.7 GB |
| 50 | 0.16 GiB | 9.8 GB | 3.5 GB |
PG Data

| Terrain LOD | Data streamed | Max RAM | Max VRAM |
| --- | --- | --- | --- |
| 800 | 5.25 GiB | 14.4 GB | 4.5 GB |
| 400 | 3.45 GiB | 15.1 GB | 4.1 GB |
| 200 | 2.04 GiB | 12.3 GB | 3.5 GB |
| 100 | 1.21 GiB | 10.6 GB | 3.7 GB |
| 50 | 0.60 GiB | 9.9 GB | 3.5 GB |
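For anyone who wants to play with the numbers, here is a quick Python sketch using only the streamed-data figures from the tables above. It shows that on this flight, PG streams roughly 4x to 7x as much data as autogen at the same Terrain LOD, with the ratio growing as LOD increases:

```python
# Data streamed (GiB) per Terrain LOD, copied from the tables above.
no_pg = {800: 0.74, 400: 0.50, 200: 0.34, 100: 0.21, 50: 0.16}
pg = {800: 5.25, 400: 3.45, 200: 2.04, 100: 1.21, 50: 0.60}

for lod in sorted(no_pg, reverse=True):
    ratio = pg[lod] / no_pg[lod]
    print(f"Terrain {lod}: PG streams {ratio:.1f}x the data of autogen")
# Output: 7.1x at 800, 6.9x at 400, 6.0x at 200, 5.8x at 100, 3.8x at 50
```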
This is much better than it was before:
At Terrain 200, the initial load committed 23.8 GB of RAM, rising to 41 GB while landing at Heathrow.
However, the draw distance at Terrain 200 was much farther back then than it is now in London, and that same flight consumed 4.38 GiB of data. Terrain 150 consumed 3.49 GiB, which is roughly what Terrain 400 consumes now (3.45 GiB).
(Draw distance in London was reduced a lot to compensate for the higher data density. The difference in NY is much smaller. It seems the higher-density PG areas received a draw distance reduction to keep data consumption similar to lower-density PG areas.)
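A quick back-of-envelope comparison of the old and new streamed-data figures, again using only the numbers from this post:

```python
# Streamed data (GiB) on the same flight, before vs. after the change.
old = {200: 4.38, 150: 3.49}
new = {800: 5.25, 400: 3.45}

# Old Terrain 150 matches new Terrain 400 almost exactly, and old Terrain 200
# sits between new Terrain 400 and new Terrain 800.
print(f"old Terrain 150: {old[150]} GiB vs new Terrain 400: {new[400]} GiB")
print(f"old Terrain 200: {old[200]} GiB vs new Terrain 800: {new[800]} GiB")
```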