I believe it depends on the area. Some mostly urban areas have photogrammetric data, but that’s currently limited to, IIRC, the continental US (CONUS), Canada, various parts of mostly western Europe, Australia, and Japan (see Link to all photogrammetry cities and airports? and the custom Google map linked there). Everywhere else, unless it’s a handcrafted 3D asset, my understanding is that the buildings are autogenerated by Blackshark.ai, which analyzes the satellite imagery and, using data from sources like OpenStreetMap, makes a guess at what each building should look like.
While the autogen buildings look believable and have cleaner lines, they can feel repetitive, and you notice the lack of uniqueness compared to photogrammetry of the real buildings. The autogen buildings also often don’t resemble the real buildings as seen from the side, since the AI is usually guessing from nothing more than the roof.
I think most of the photogrammetry is used as-is; in most cases no one has manually cleaned up the geometry and textures to look better than the original point-cloud-and-imagery construct.