Discussion: April 1st, 2021 Development Update

Hi @Steeler2340, in general my response is: you seem to be making what you think I was suggesting way, WAY more complicated than it needs to be. Here’s how I see things:

  1. Performance bugs are slipping out into the public. This has happened multiple times.

  2. As much as you like to rail on a test only showing “something that changed yesterday, who knows what, is the culprit” - we’re currently at “hey we just did a release and people are having performance issues. Something that changed in the last 8 weeks, who knows what, is the culprit”. Finding out that something yesterday impacted performance would most likely be a huge step forward.

  3. Turing completeness has absolutely nothing to do with this conversation

  4. Please, please stop throwing Murphy’s law into this and implying that tests are so horribly unreliable that there isn’t any value in this. I’ve already started benchmarking various flights around France to compare before and after performance. MSFS has not been crashing every 10 seconds, my FPS monitoring software hasn’t been crashing every 10 seconds, my computer hasn’t been crashing every 10 seconds, and I have yet to have a Windows update drop my performance. Be real here.

I feel like you are arguing that automated tests shouldn’t be started until you build a huge, complex, sophisticated, overly complicated infrastructure and process to support them. That’s nuts. Start simple to catch the most obvious bugs first, and refine (and expand) over time.
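To make the “start simple” point concrete: a crude automated check along these lines is only a few lines of code. This is just a sketch; the function name, the 5% threshold, and the FPS numbers are all hypothetical, not anything Asobo actually runs.

```python
# Hypothetical sketch: flag a performance regression by comparing mean FPS
# from the same fixed benchmark flight before and after a change.
# The 5% threshold and the sample numbers are invented for illustration.
from statistics import mean

def has_fps_regression(baseline_fps, candidate_fps, threshold=0.05):
    """Return True if mean FPS dropped by more than `threshold` (a fraction)."""
    base = mean(baseline_fps)
    cand = mean(candidate_fps)
    return (base - cand) / base > threshold

# Example: nightly benchmark samples (hypothetical numbers)
yesterday = [42.1, 41.8, 42.5, 41.9]
today = [36.0, 35.4, 36.2, 35.8]
print(has_fps_regression(yesterday, today))  # a roughly 15% drop trips the check
```

Nothing fancy, but it would already turn “something in the last 8 weeks broke performance” into “something since last night broke performance”.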

In my opinion of course,

That is because software engineering IS a complex business. That’s what I have been trying to tell you.

And the rest of your response shows that you haven’t understood the core message of my posting either. It is just too complex a topic for the layman, it seems.

For instance:

Well, that was not even remotely my point here. So I don’t understand why you keep repeating that your test was doable. Yes! It is doable! But not useful, for the reasons that I was repeatedly trying to explain to you, in simple terms.

But it seems that I failed. So go read and educate yourself with the link that I gave you, the one about the “testing pyramid” by Martin Fowler. If automated testing is of any interest to you…


Ahh, I see, “software engineering = deliberately making something more complicated than it needs to be”. Got it.


Horses for courses as well. To test the Benelux/France world update you need people with a good knowledge of the geographical areas. If you change the flight model or avionics you need experienced simmers and real-world pilots. Things like UI and CTD problems need more professional developers/testers.

No, it’s how you use it. If you fly GA / VFR it is (mostly) great and (mostly) works. Too much of the IFR / tubeliner stuff is unusable.

The common excuse of “they’re not study level” is just that. This isn’t someone complaining that some tiny light doesn’t operate; it’s basic things like the autopilot. Again, there are excuses such as “fly by hand” and the ever popular “yes, but look at the graphics” (you can’t see them at 10k) but these are just arm-waving distractions.

No. You didn’t. Software engineering is mastering complexity by breaking it down into small, easily solvable pieces. Divide et impera is a very fundamental principle in software engineering.

Your suggestion of “flying around for hours and checking for performance regressions” was a nice idea, but as outlined before: useless as an automated test - because it specifically does not fall into the “small and easy” category.

And that was my entire point.


Your point seems to translate into exactly what’s happening now… patches being released apparently without even doing basic, simple automated tests to determine whether code changes have hosed up performance. And then instead of trying to figure out which change made yesterday caused the issue, they have to look over which of the changes from the last two months caused it.

I have yet to see you actually suggest something that would help Microsoft get a handle on the situation, other than attempting to implement something that in your own words would be horribly complicated, and thus too expensive and take too long to implement.

What would YOU do?

This is a huge assumption you’re making. If true, it would be a risk that large companies don’t take. MS would have been out of business years ago if their testing process was slipshod.

I’m thinking that Asobo has at least one or two PCs running MSFS that run into the same issues as the rest of us, like dealing with GPU drivers and Windows updates.

Look, I really want to close the discussion about “automated tests” here, as you always seem to understand the complete opposite of what I am trying to explain. I said that instead of testing “large chunks” (aka “smoke tests”, which also have a justified existence!) it is best practice to create many, many, MANY (unit) tests for small parts, and fewer tests (“integration tests”, “UI tests”, …) the “higher up the testing pyramid” you go. Simply said.

And I gave you concrete examples of “testable units”, e.g. the “geometry LOD generator”; refer again to my previous posts. And I even gave you a link to an article by Martin Fowler (he’s like the Henry Ford of computer science) to give you more background information about what I meant.
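For what it’s worth, a “testable unit” in that spirit might look like the sketch below: a hypothetical LOD-selection function with a couple of fast unit tests. The function name, the distance cutoffs and the LOD levels are all invented for illustration; this is not Asobo’s actual code.

```python
# Hypothetical example of a small "testable unit": pick a geometry LOD level
# from the camera distance. Cutoffs and levels are invented for illustration.
def select_lod(distance_m):
    """Return a LOD level: 0 = full detail ... 3 = lowest detail."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    if distance_m < 500:
        return 0
    if distance_m < 2000:
        return 1
    if distance_m < 8000:
        return 2
    return 3

# Fast, deterministic unit tests: they run in milliseconds, and when one
# fails you know exactly which "unit" is broken.
assert select_lod(0) == 0
assert select_lod(1999) == 1
assert select_lod(2000) == 2
assert select_lod(50_000) == 3
```

That is the “small and easy” category: each test pins down one behaviour of one function, so a failure points straight at the culprit.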

So I never said not to write any automated tests at all. And yeah, about the complexity of all this, let me finish up by saying that (in IT, but also generally in life) “it is easy to come up with a complex/hard solution, but exponentially harder to come up with an easy solution”. And writing good automated tests which immediately tell you which “unit” failed is in the latter category as well.

Or said differently: your “fly around for hours and detect memory leaks” test is easy to implement (“just set up a flight plan and let the simulation run, monitor the memory and FPS, …”), but complex/hard to derive any meaningful information from (“which change exactly is responsible for the memory / performance leak? Or was it some company network issue instead? Can we quickly rerun the test? Oh no, we can’t; it takes another night…” etc.).
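Just to make that attribution question concrete: if the same benchmark were run once per nightly build, narrowing the regression down to one build is a simple scan over the history. A sketch, with a hypothetical function and invented build IDs and FPS numbers:

```python
# Hypothetical sketch: given mean benchmark FPS per nightly build (in order),
# report the first build where FPS dropped more than `threshold` versus the
# previous build. Build IDs and numbers are invented for illustration.
def first_regressing_build(history, threshold=0.05):
    builds = list(history.items())
    for (prev_id, prev_fps), (cur_id, cur_fps) in zip(builds, builds[1:]):
        if (prev_fps - cur_fps) / prev_fps > threshold:
            return cur_id  # first build that introduced the drop
    return None  # no regression beyond the threshold

nightly = {"b101": 42.0, "b102": 41.8, "b103": 35.9, "b104": 36.1}
print(first_regressing_build(nightly))  # → b103
```

Of course this only works if the history exists in the first place, which is exactly the trade-off being argued about here: each data point still costs a full benchmark run.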

Maybe it is now clearer what I meant? Again, read the article by Martin Fowler that I gave you, if testing is of any interest to you. And follow the links therein, too :wink:

Again, that’s absolutely not what I said: I never said that implementing good tests is “too expensive” or “takes too long”: yes, it is hard to come up with an easy solution. But it’s well worth it, because once you have an easy solution you profit forever™.

“Make it as simple as possible, but not simpler!” - that’s the mantra that I have been taught during my computer science studies. And that - making solutions simple, easy and elegant - is the hard part in computer science (and in life in general - in fact, professor N. Wirth “stole” that quote from Albert Einstein ;))!

But I digress again…

And that is the bigger picture here that I had in my mind all the time: well - nothing! That’s the point. I am a happy camper!

I am not in the group of people that cries out loud “Show stopper! Can’t play this anymore!” because some terrain looks glitchy, some FPS are lost or, God forbid, “the trees are too high in front of my home airport!” (yes, really! We had such a post a couple of months ago).

I am not in the group of people that demands that “only certified pilots be part of some testing program”.

I am not in the group of people that insists they purchased a simulator instead of a game ("… but it’s called Flight Simulator!").

I am not in the group of people that expects the basic airplanes that come “out of the box” to behave as closely as possible to reality, or that every button or autopilot behaves correctly / is implemented at all.

To be frank, the only reason I noticed a drop in performance with the previous update (what was it, “World Update 3”?) was because people wrote about it here in the forum. Yes, sure: stuff was a bit slower for me, too. But since I never bothered to even open the FPS counter I simply shrugged my shoulders and carried on. Because in the area that I fly the drop was hardly noticeable (especially on my “mid-range” system, an iMac from 2017 - that system struggles anyway to keep anything above 20 FPS at Full HD resolution with “High” settings. And I am fine with that!)

I appreciate your motivation to improve the product. Do so by reporting concise error reports and suggestions.

But I can’t stand the attitude of “how come that Asobo makes things worse with every patch! Rant!” because

a) as a software engineer I know about the complexity
b) it’s a freakin’ game for only 130 dollars! I don’t even expect them to have this unit-tested all over the place (except for their “core gaming engine” perhaps) (*)
c) and most importantly: it is not true! Every update improved things overall!

Yes: the flaps issue was annoying and, as confirmed by Asobo, due to a “code merge” which accidentally merged “experimental code” that was not supposed to go into the “stable branch” (it was probably a simple number / multiplier that was wrong). Yes, the FPS performance drop was noticeable (especially now that I paid attention and the latest update really improved performance again). But it was quickly acknowledged by Asobo - here in the forum even! - and fixed with the next monthly update!

And that’s really all one can expect from a game. I have already had a LOT of fun, the graphics (clouds!) look epic and everything that is to come I regard as absolute BONUS! (And yes, I bought the Premium Deluxe (?) version, for nothing more than to “fund this project” - I couldn’t even tell you which airplanes or airports I got “extra” for that price).

So please all: stay constructive, don’t cry out like you have been betrayed when some aircraft system does not work the way it does in a multi-billion dollar aircraft. It’s a game! Despite being called simulator.

But the fact that people are able to enhance the aircraft, like the A320NX project, and the fact that Asobo keeps updating / improving the existing aircraft as well, shows that there is huge potential and huge dedication from Asobo / MS behind all this!

(*) Maybe someone here in the forum works in the gaming industry and can tell us about automated testing when producing a game? I could imagine that “game engines” that are meant to support a number of games are heavily covered with tests. And sure, Asobo is talking about a 10-year support timeframe, so I’d guess they also have a bunch of tests. But in general I don’t expect that a single game is heavily covered with tests, is it? Because you develop it, you sell it, you provide a couple of patches for a while… you move on to the next game… I am not trying to “downplay” game development here at all, that’s just my guess based on “economic reasoning”. But I’d be interested to hear about it.


For the love of Pete, and his brother Repeat.:slightly_smiling_face:

Yes, yes, and yes! 2 weeks to test with random people that aren’t vetted or trusted. Really? I say slow this train down and do things the right way. ASOBO your schedule and procedure isn’t realistic for true testing. I don’t get why a software company partnered with Microsoft, that is in charge of one of the biggest flight sim releases in history, would operate like this. Or why MS would let them.

One or two? They should have 50 different hardware/driver configurations all running 24/7


I have a question on something that is driving me batty

Vetted? Just exactly how do you propose to “vet” people for a task as mundane as determining whether or not a game is working properly?

You see, I’m not the company that said they could build this sim. ASOBO are the ones that are a big software developer that said they could do it.

It’s not like beta testing is a new idea anyway. This has been done successfully for decades and it’s been figured out already so just rinse and repeat.

It’s hard to disagree there. I filed eight (8!) Zendesk tickets yesterday. Today I took the Icon A5 for a flight and already filed another issue (the AOA gauge in the Icon is broken since World Update 4).

I get the impression that no pre-release testing was done at all. The bugs I filed were not hard-to-reproduce IFR issues that happen halfway into a 10-hour flight. I entered the cockpit, left the runway, and noticed the AOA gauge does nothing. Of course this plane also shows well-known high-priority bugs like the 50%-throttle after leaving the menu, the broken cloud layers in ATIS, and the missing initial legs in the flight plan display. It took two minutes to see four bugs. And the Icon A5 is a standard edition plane.

Yes, I would like to see a World Update for Germany at some point, but at the moment I would prefer no further world updates for the remainder of the year. I would much prefer to see trivial bugs fixed and fewer regressions going forward.


Another plane, another bug. The DA40NG has the RPM in the red even at 50% throttle (I recall another plane currently has the same issue). Yes, I created my second ticket today on Zendesk. It’s kinda hard to find a plane that didn’t break with World Update 4, really.

Don’t get me wrong, I absolutely love the sim but it’s kind of hard to actually enjoy it lately.


My discussion rule: be hard on the topic, but not on the person. Dear Asobo and Microsoft employees, please try to forgive the rude tone in the comments and try not to get frustrated.
Hopefully you can hear the pain between the angry outbursts. The pain about “everybody is playing the new version, but I am stuck in the download loop” or “before the last update the game had little problems, but now I have big problems like only 10 frames per second and I can not go back to the old version”.

This is “two steps forward, one step back”. We all want only forward. My thinking: Asobo/Microsoft can do better - something like “76T Bishop one frame per second” can be tested, found and fixed before the public release. But for better quality Asobo/Microsoft has to slow down the pace - no more updates every two weeks. Do the managers want to do so? I don’t know.

To make it crystal clear: just changing the cadence from two weeks to one month does not cut it. First, the number of development branches has to go down, to say 4 branches maximum. Second, a baseline has to be defined - if you want a realistic flight model, you have to agree on what a realistic flight model is. Third, the automated testing has to cover as much of the baseline as possible. Fourth, automated testing has to be supported by free testing: real testers with “golden bug-finding fingers” have to do free testing, and some of these testers shall come from the community.

As Fred Brooks told us “there is no silver bullet”. But following my four suggestions will help your software quality as it helped our software quality (I design, program and test mission critical systems).

Yes. But we are all living in a real world, and “only forward” does not exist. As empirically proven every.second.that.passes.on.this.planet.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.