I’ve noticed this all over the vast farming communities of Canada. Thousands of square kilometers of cattle, corn, and various other staples, and suddenly…an apartment building!
For anyone who doesn’t know what’s involved in making this change and is curious, I have added some details below. Keep in mind that I am not an expert in this area; I’ve only worked with neural nets on a small scale. Please don’t do your dissertation using my explanation:
The way these neural nets work is that first Blackshark (the company that provides the AI building-generation capabilities) needs to train the neural network to spot farms and rural structures.
Training a neural net involves building a set of training data (satellite pictures of farms, rural roads, crops in various conditions, etc.) for it to learn from. Next, people individually annotate that enormous collection of pictures with labels (this is a farm, that is a silo, that is an irrigation ditch, and so on). It’s a lot like teaching someone with flash cards. Do this a hundred thousand times, and the AI should pick up on the pattern fairly quickly. Then you send it off on its own without any help, see what it comes up with, review for mistakes, and try to correct them.
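To make that loop concrete, here’s a toy sketch in Python. The real pipeline would use deep networks on actual satellite imagery; everything here (the feature numbers, the label names, the nearest-centroid “training”) is made up purely to illustrate the label → train → classify steps.

```python
# Toy version of the labelling-and-training loop described above.
# Each "tile" is just a made-up 3-number feature vector (imagine
# greenness, edge density, roof-likeness) so the sketch stays
# self-contained -- a real system would train a CNN on raw pixels.

from collections import defaultdict

# Step 1: humans annotate tiles with labels ("this is a farm...")
labelled_tiles = [
    ([0.9, 0.1, 0.0], "crop_field"),
    ([0.8, 0.2, 0.1], "crop_field"),
    ([0.2, 0.7, 0.9], "silo"),
    ([0.3, 0.8, 0.8], "silo"),
    ([0.1, 0.9, 0.2], "rural_road"),
    ([0.2, 0.8, 0.1], "rural_road"),
]

# Step 2: "train" -- here, just average each label's feature vectors
def train(tiles):
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for features, label in tiles:
        for i, value in enumerate(features):
            sums[label][i] += value
        counts[label] += 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

# Step 3: send it off on its own -- classify an unseen tile by
# whichever label's average it sits closest to
def classify(model, features):
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

model = train(labelled_tiles)
print(classify(model, [0.85, 0.15, 0.05]))  # → crop_field
```

Step 4 (review for mistakes) is just running `classify` over a held-out batch of labelled tiles and eyeballing where the predictions disagree with the human labels.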
The challenge comes when the AI does something really crazy and unexpected, like turning a grain silo, which seems so obvious to us, into some sort of garish satanic monolith or maybe an apartment block. The AI has no way to communicate WHY it made the choice it did (unlike a person you’re teaching), so the trainer is left to use their engineering intuition to determine why the neural net is acting crazy in one particular edge case. This can be a time-consuming process, as there are many reasons it can break: not enough training data, too much training data, bad labelling, etc.
You’ve also got some requirements that make this extra interesting: the AI system needs to place a building in exactly the right spot so it matches up with the map. It needs to generate, within a set of parameters or “limits”, the right building out of a big set of choices. And it needs to work with varying qualities of satellite imagery and cope with weather, shadows, and so on.
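The “right building within limits” part can be pictured as a filtering step. This is purely my guess at the shape of it; the building names, dimensions, and selection rule below are all invented for illustration.

```python
# Hypothetical sketch: given a detected footprint and some limits,
# pick the best-fitting asset from a (made-up) building library.

buildings = [
    {"name": "barn_small", "width": 12, "height": 6},
    {"name": "silo_tall",  "width": 6,  "height": 20},
    {"name": "farmhouse",  "width": 15, "height": 8},
]

def pick_building(footprint_width, max_height, library):
    # keep only candidates that respect the height limit...
    candidates = [b for b in library if b["height"] <= max_height]
    # ...then take the one whose width best matches the footprint
    return min(candidates, key=lambda b: abs(b["width"] - footprint_width))

print(pick_building(14, 10, buildings)["name"])  # → farmhouse
```

A real system would juggle far more constraints (roof shape, region-specific architecture, image-derived colour), but the idea of constraining a big library of choices down to one candidate is the same.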
As far as I know, every time they make a change, the ENTIRE dataset for the WORLD needs to be completely re-generated. This involves spinning up a large fleet of compute instances (servers specifically configured for this type of resource-intensive work), often hundreds or thousands of machines, and having them run at full tilt for many hours, possibly even days.
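Structurally, that regeneration is an embarrassingly parallel fan-out: split the world into tiles and hand each tile to a worker. A real fleet would use a cloud batch service across thousands of machines; this sketch (tile IDs, worker function, and all) is invented, and just fans a handful of pretend tile jobs out to local threads to show the shape.

```python
# Illustrative only: fanning a "regenerate everything" job out to
# workers. Each call stands in for hours of real scenery generation.
from concurrent.futures import ThreadPoolExecutor

def regenerate_tile(tile_id):
    # placeholder for the actual per-tile generation work
    return f"tile_{tile_id}: regenerated"

tile_ids = range(8)  # the real job would cover millions of tiles
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(regenerate_tile, tile_ids))
print(results[0])  # → tile_0: regenerated
```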
Now granted, they’ve streamlined the process quite a lot, but this is still an involved and often tedious process.
The takeaway is that Blackshark.ai seems to have a pretty good system for world generation - I have no doubt this will continue to improve. Every time they train the system, it will get better and better. All hope is not lost!