I can only do guesswork here, and add to what has already been said by @N316TS: the issue is most likely not related to the (completeness or correctness of the) translation itself, but rather to how the “Chinese characters” are stored. Keyword: “character encoding”.
Without going too much into boring, technical details: characters (letters and numbers like a, b, c, …, z and 1, 2, 3, …, 0 etc.) are “mapped” onto numbers, because “numbers” are what computers “understand” (binary numbers, 0s and 1s, to be specific).
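To make that concrete, here is a quick sketch in Python (the idea is language-independent): every character has a number behind it, and you can look at both directions of the mapping:

```python
# Characters are just numbers under the hood; ord() and chr()
# expose the two directions of the mapping (here: Unicode code points).
print(ord('a'))       # 97    - the number behind 'a'
print(ord('中'))      # 20013 - the number behind a Chinese character
print(chr(97))        # 'a'   - and back from number to character
print(chr(20013))     # '中'
```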
Now there are essentially two design criteria:
- What shall the largest number be, or in other words: “How many characters do we want to support”?
- Which character should be mapped onto what number?
As you can imagine, there is a huge number of ways (permutations) in which such a mapping could be done. That’s why there are standard mappings. Not quite the oldest, but perhaps the best-known such standard is called ASCII.
The problem here: it encodes characters with only 7 bits, allowing for only 128 (= 2^7, “2 to the power of 7”) characters - including “special characters” like TAB, RETURN (“line break”) and whatnot. There are also several variants of “extended ASCII” codes like “Latin-1” and so forth, making use of the full 8 bits in a byte, which makes it possible to encode umlauts like ä, ö and ü and also characters with accents like é, â etc. - but in the end we’re talking about a maximum of 256 characters (only).
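You can see both limits in practice with a small Python sketch: ASCII tops out at 128 characters, Latin-1 at 256, and neither can hold a Chinese character:

```python
# 7-bit ASCII: 128 characters - no umlauts, no Chinese.
print('abc'.encode('ascii'))       # b'abc' - works
try:
    'ä'.encode('ascii')            # fails: code point 228 > 127
except UnicodeEncodeError as e:
    print(e)

# 8-bit "extended ASCII" (Latin-1): 256 characters - umlauts OK, Chinese still impossible.
print('ä'.encode('latin-1'))       # b'\xe4' - one byte, value 228
try:
    '中'.encode('latin-1')         # fails: code point 20013 > 255
except UnicodeEncodeError as e:
    print(e)
```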
This all worked great from the 1960s onward, for a couple of decades, because most software was available in English only, and those “funny Europeans with their Ümläuts” were served by the “extended ASCII” mappings (where the fun was already starting, because the “extended” part varies from country to country: Latin-1, Latin-2, … up to Latin-15). Not to mention that essentially every operating system vendor came up with its own encodings, like Windows-1252 (CP-1252). But we digress.
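That “varies from country to country” part is exactly where the classic garbled-text effect (“mojibake”) comes from: the very same byte means different characters in different mappings. A small Python illustration:

```python
# One and the same byte, interpreted under different "extended ASCII" mappings:
b = bytes([0xA4])
print(b.decode('latin-1'))             # '¤' - generic currency sign in Latin-1
print(b.decode('iso8859-15'))          # '€' - Euro sign in Latin-15
print(bytes([0x80]).decode('cp1252'))  # '€' again, but at a different byte in Windows-1252
```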
Now the world has, over the last couple of thousand years, moved on and invented way more than just 256 different characters. Enter Chinese (and emoji ;))!
What gives? We need more space! 8 bits per character are simply not enough; we need 16 bits (65’536 = 2^16 possible characters) or even 32 bits per character (room for many, many, many emojis). Enter Unicode, which tries to define “the ultimate character mapping, once and for all”.
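A quick Python sketch of why even 16 bits are no longer enough, and how Unicode text actually ends up as bytes (via encodings like UTF-8 or UTF-16):

```python
# A Chinese character still fits into 16 bits...
print(hex(ord('中')))               # 0x4e2d  - fits in 16 bits
# ...but many emoji do not:
print(hex(ord('😀')))               # 0x1f600 - needs more than 16 bits
# The same character takes a different number of bytes depending on the encoding:
print('中'.encode('utf-8'))         # 3 bytes: b'\xe4\xb8\xad'
print('中'.encode('utf-16-le'))     # 2 bytes
print('😀'.encode('utf-8'))         # 4 bytes
print('😀'.encode('utf-16-le'))     # 4 bytes (a "surrogate pair")
```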
(Now Unicode by itself is so complex - keyword: “combining characters” - that it is a frequent root cause of various security exploits in mobile messaging services and their desktop applications. But that’s a story for another time.)
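To give just one taste of that complexity: with combining characters, two strings can look identical on screen and still not be equal - exactly the kind of ambiguity exploit authors love. A Python sketch:

```python
import unicodedata

# Two ways to write 'é': one precomposed code point vs. 'e' + combining accent.
a = '\u00e9'          # 'é' as a single code point
b = 'e\u0301'         # 'e' followed by COMBINING ACUTE ACCENT
print(a, b)           # look the same on screen...
print(a == b)         # False - ...but are not equal code-point-for-code-point
# Normalization folds both into one canonical form:
print(unicodedata.normalize('NFC', a) == unicodedata.normalize('NFC', b))  # True
```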
But yes, the implication here is that the existing text storage used in the FSX legacy code needs to be identified (meaning: someone needs to search the code and find every “user-readable text” that may ever be presented to the user on screen - note: by far not every “text” in a computer program is meant to be shown to the user on screen) and, most importantly, changed “to Unicode”. This may or may not be trivial work, depending on whether the text also needs to be “persisted” (in some file or database, and later read back into memory), which means “conversion code” may also be involved.
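Purely as an illustration of what such “conversion code” might look like, here is a Python sketch - the file name and the assumption that the legacy files are stored in Windows-1252 are both made up for the example; the actual FSX file formats and encodings would have to be determined first:

```python
# Hypothetical one-off migration: read a legacy text file in its old
# 8-bit encoding and write it back out as UTF-8.
# 'legacy_menu.txt' and the cp1252 assumption are illustrative only.
with open('legacy_menu.txt', 'r', encoding='cp1252') as f:
    text = f.read()           # bytes -> Unicode string (decoding)

with open('menu_utf8.txt', 'w', encoding='utf-8') as f:
    f.write(text)             # Unicode string -> bytes (encoding)
```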
All this may make “Simplified Chinese” not so simple to support after all.