When you are talking about AI code being able to distinguish whether a large square drawn on the ground is a barn or an apartment building, you have to completely divorce your expectations from what your own brain can do. I’ve actually dabbled in AI programming and it is the most humbling thing you can do. Our eyes and minds can almost instantaneously scan a photo and, through decades of experience, quickly size up a scene and say yeah, that’s a barn. We are not going pixel by pixel through the image trying to figure out where the edges of some blob of colors are, as computer code must do. We skip all those steps by accessing some tiny memory of a farm we once saw, immediately seeing that the image is similar and that this must be a barn because it sits next to those other buildings and surroundings. For computer code to do that takes an immense amount of programming and the application of complicated rules to what different parts of the image tell you.
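To make the "pixel by pixel" point concrete, here is a minimal sketch of the kind of low-level grind code has to do just to find the edges of a blob of colors: a simple gradient pass over every pixel. The tiny toy image, the threshold, and the function name are all assumptions for illustration; a real pipeline would lean on a vision library and do far more than this.

```python
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Return a boolean mask marking pixels where brightness changes sharply."""
    # Horizontal and vertical brightness differences at every interior pixel.
    dx = gray[1:-1, 2:] - gray[1:-1, :-2]
    dy = gray[2:, 1:-1] - gray[:-2, 1:-1]
    magnitude = np.hypot(dx, dy)      # gradient strength at each pixel
    return magnitude > threshold      # call it an "edge" where the change is big

# Toy 6x6 "image": a bright square (a roof, maybe) on a dark background.
img = np.zeros((6, 6), dtype=float)
img[2:5, 2:5] = 200.0
print(edge_map(img).astype(int))
```

Even after all that per-pixel work, all the code has is an outline of a bright rectangle; deciding whether that rectangle is a barn roof is still an entirely separate problem.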
What defines a farm in an overhead image? A big green field and just a few buildings? Fine. Well, guess what: that rule doesn’t always hold true. There are thousands of different farm layouts, different colors of crops, different sizes of buildings. There are large multi-story buildings sitting out in the middle of nowhere. Should we turn them all into barns?
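Here is a toy version of that "big green field plus a few buildings equals farm" rule, just to show how brittle it is. The inputs (fraction of green pixels, count of building-sized blobs) and the thresholds are assumptions made up for this sketch, not anyone's real classifier.

```python
def looks_like_farm(green_fraction: float, building_count: int) -> bool:
    # Naive rule: mostly vegetation, only a handful of structures.
    return green_fraction > 0.7 and building_count <= 5

# A plowed (brown) field with one barn fails the rule; a golf course with a
# clubhouse passes it. Same rule, wrong answers in both directions.
print(looks_like_farm(green_fraction=0.2, building_count=1))  # False -> a real farm, missed
print(looks_like_farm(green_fraction=0.9, building_count=2))  # True  -> golf course labeled a farm
```

Every exception you patch in adds another rule, and every new rule has its own exceptions, which is exactly why the hand-coded approach gets overwhelming so quickly.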