More data from this morning, around 7am CEST.
| # | Start | Duration | Interval |
|---|-------|----------|----------|
| 1 | 06:56:04 | 15.1s | — |
| 2 | 06:56:56 | 15.1s | 52.5s |
| 3 | 06:57:50 | 14.8s | 53.5s |
| 4 | 06:58:45 | 15.1s | 54.6s |
| 5 | 06:59:40 | 15.1s | 55.7s |
| 6 | 07:00:37 | 15.1s | 57.0s |
| 7 | 07:01:34 | 15.1s | 57.1s |
| 8 | 07:02:33 | 15.1s | 59.2s |
| 9 | 07:03:33 | 15.1s | 59.4s |
| 10 | 07:04:34 | 15.1s | 61.4s |
| 11 | 07:05:37 | 15.1s | 62.5s |
| 12 | 07:06:40 | 15.1s | 63.7s |
| 13 | 07:07:45 | 15.1s | 64.4s |
| 14 | 07:08:50 | 15.1s | 65.3s |
| 15 | 07:09:57 | 15.1s | 66.9s |
| 16 | 07:11:05 | 19.6s | 67.2s |
| 17 | 07:12:18 | 15.1s | 73.0s |
| 18 | 07:13:26 | 15.1s | 68.2s |
| 19 | 07:14:36 | 15.1s | 69.5s |
| 20 | 07:15:47 | 15.1s | 71.0s |
| 21 | 07:16:59 | 15.1s | 72.0s |
| 22 | 07:18:12 | 15.1s | 73.2s |
| 23 | 07:19:26 | 15.1s | 74.2s |
| 24 | 07:24:41 | 15.1s | 75.3s |
| 25 | 07:26:00 | 15.1s | 79.0s |
| 26 | 07:27:20 | 15.1s | 79.6s |
| 27 | 07:28:41 | 15.1s | 81.0s |
| 28 | 07:30:04 | 15.1s | 82.7s |
| 29 | 07:31:28 | 15.1s | 84.0s |
| 30 | 07:32:53 | 15.1s | 85.0s |
| 31 | 07:34:19 | 15.1s | 86.3s |
| 32 | 07:35:47 | 15.1s | 87.8s |
| 33 | 07:37:16 | 15.1s | 88.8s |
| 34 | 07:38:46 | 15.1s | 89.6s |
| 35 | 07:40:17 | 15.1s | 91.2s |
| 36 | 07:41:49 | 15.4s | 92.0s |
| 37 | 07:43:22 | 15.1s | 93.4s |
| 38 | 07:44:57 | 15.1s | 94.6s |
| 39 | 07:46:32 | 15.1s | 95.5s |
| 40 | 07:48:09 | 15.1s | 97.2s |
| 41 | 07:49:47 | 15.1s | 97.8s |
| 42 | 07:51:26 | 15.1s | 99.2s |
- Duration: constant at 15.1s (one outlier: #16 at 19.6s)
- Interval: increases linearly, interval ≈ 52.2 + 1.12 × cycle_number seconds (R² = 0.998); see the fit sketch below
- More than 20% of the session was spent frozen
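A minimal sketch of that fit (Python/numpy; the intervals are copied from the table above):

```python
# Least-squares fit of interval vs. cycle number; reproduces
# interval ≈ 52.2 + 1.12 × cycle (R² ≈ 0.998) on the data above.
import numpy as np

intervals = np.array([
    52.5, 53.5, 54.6, 55.7, 57.0, 57.1, 59.2, 59.4, 61.4, 62.5,
    63.7, 64.4, 65.3, 66.9, 67.2, 73.0, 68.2, 69.5, 71.0, 72.0,
    73.2, 74.2, 75.3, 79.0, 79.6, 81.0, 82.7, 84.0, 85.0, 86.3,
    87.8, 88.8, 89.6, 91.2, 92.0, 93.4, 94.6, 95.5, 97.2, 97.8,
    99.2,
])
cycle = np.arange(2, len(intervals) + 2)  # first interval belongs to cycle #2

slope, intercept = np.polyfit(cycle, intervals, 1)
pred = intercept + slope * cycle
r2 = 1 - ((intervals - pred) ** 2).sum() / ((intervals - intervals.mean()) ** 2).sum()
print(f"interval ≈ {intercept:.1f} + {slope:.2f} × cycle  (R² = {r2:.3f})")
```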
The freezes can hit anywhere: in the menu, in cinematics, in flight.
I took snapshots of the TCP connections before, during, and after each freeze, but could not identify a correlation with specific requests. I still believe they're triggered by something external; since I can't provide evidence, though, you'll have to take that on faith.
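For reference, the snapshots came from something like this (psutil; the process name and the 1s poll are assumptions, not my exact script):

```python
# Sketch: diff the sim's TCP connection set once per second and log
# opens/closes. "FlightSimulator.exe" is an assumed process name.
import time
import psutil

def sim_connections(name="FlightSimulator.exe"):
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            return {(c.raddr.ip, c.raddr.port, c.status)
                    for c in proc.connections(kind="tcp") if c.raddr}
    return set()

prev = sim_connections()
while True:
    time.sleep(1)
    cur = sim_connections()
    for conn in cur - prev:
        print(time.strftime("%H:%M:%S"), "opened", conn)
    for conn in prev - cur:
        print(time.strftime("%H:%M:%S"), "closed", conn)
    prev = cur
```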
Note that I had been running sim sessions for hours before the freezes started, including two 1h+ flights without any freezes whatsoever. I restarted the sim a bunch of times, and I even restarted my system once (for unrelated reasons).
Since memory exhaustion has been brought up as a likely culprit: my system has 64GB of RAM and a 4090, and only the sim is unresponsive during the freezes; the rest of the system appears completely unaffected. Nonetheless, I added some memory metrics to my script, so next time I encounter the issue I'll find out whether it's related to resource usage on my system. I might even check whether the issue persists when I run the sim with an empty Community/.
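The memory metrics addition is roughly this (psutil again; the field set is Windows-specific, and the process name is an assumption):

```python
# Sketch: per-second memory sampling of the sim process with psutil.
# On Windows, memory_info() exposes private bytes and num_page_faults.
import time
import psutil

def find_sim(name="FlightSimulator.exe"):  # process name is an assumption
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            return proc
    raise RuntimeError(f"{name} not running")

sim = find_sim()
while True:
    mem = sim.memory_info()  # Windows: rss (working set), private, num_page_faults
    sys_mem = psutil.virtual_memory()
    print(time.strftime("%H:%M:%S"),
          f"WS={mem.rss / 2**30:.2f}GB",
          f"Priv={mem.private / 2**30:.2f}GB",
          f"Faults={mem.num_page_faults}",
          f"SysAvail={sys_mem.available / 2**30:.2f}GB")
    time.sleep(1)
```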
Time passes.
============================================================
FREEZE SUMMARY — 7 events
| # | Start | Dur | Interval | WS(pre) | WS(post) | WS delta | Priv(pre) | Priv(post) | Priv delta | Faults |
|---|-------|-----|----------|---------|----------|----------|-----------|------------|------------|--------|
| 1 | 16:41:02 | 15.0s | — | 13.77GB | 13.79GB | +24.5MB | 21.08GB | 21.15GB | +66.5MB | +46153 |
| 2 | 16:42:04 | 15.0s | 61.8s | 13.72GB | 13.72GB | +76KB | 21.18GB | 21.06GB | -129.4MB | +434 |
| 3 | 16:43:06 | 15.0s | 62.7s | 13.68GB | 13.68GB | +492KB | 20.96GB | 20.96GB | +204KB | +633 |
| 4 | 16:44:10 | 15.0s | 63.9s | 13.69GB | 13.69GB | +396KB | 21.00GB | 20.94GB | -64.1MB | +147 |
| 5 | 16:45:15 | 15.1s | 64.7s | 13.74GB | 13.74GB | +564KB | 21.07GB | 21.07GB | +452KB | +209 |
| 6 | 16:46:21 | 15.0s | 66.0s | 13.76GB | 13.76GB | +760KB | 21.07GB | 21.08GB | +844KB | +313 |
| 7 | 16:47:28 | 15.1s | 67.2s | 13.83GB | 13.83GB | -260KB | 21.14GB | 21.14GB | -1.7MB | +483 |
Durations: 15.0s, 15.0s, 15.0s, 15.0s, 15.1s, 15.0s, 15.1s (avg 15.0s)
Intervals: 61.8s, 62.7s, 63.9s, 64.7s, 66.0s, 67.2s (avg 64.4s)
**Intervals are increasing (avg +1.1s per cycle)**
── Memory Growth ──
Session: 6.7 minutes (7 freezes)
Working set: 13.77GB → 13.83GB (+65.2MB over session)
Private mem: 21.08GB → 21.14GB (+58.1MB over session)
Page faults: 13459152 → 13943382 (+484230 over session)
System avail: 36.64GB → 36.25GB (-402.6MB)
Pagefile used: 36.45GB → 37.07GB (+631.3MB)
── Per-Freeze Deltas ──
WS avg=+3.8MB min=-260KB max=+24.5MB
Private avg=-18.2MB min=-129.4MB max=+66.5MB
Faults avg=+6910 min=+147 max=+46153
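(The delta summary is nothing fancy: min/avg/max over the per-freeze post − pre values. A sketch reproducing the WS line, with the byte counts hand-copied from the table and KB/MB assumed binary:)

```python
# Sketch: per-freeze working-set delta summary, matching "WS avg=+3.8MB".
MiB, KiB = 2**20, 2**10
ws_deltas = [24.5 * MiB, 76 * KiB, 492 * KiB, 396 * KiB,
             564 * KiB, 760 * KiB, -260 * KiB]  # one (post - pre) per freeze

mb = [d / MiB for d in ws_deltas]
print(f"WS avg={sum(mb) / len(mb):+.1f}MB min={min(mb):+.1f}MB max={max(mb):+.1f}MB")
# -> WS avg=+3.8MB min=-0.3MB max=+24.5MB
```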
I asked Claude to analyze the raw data:
7 freezes, 16:41–16:47, all ~15.0s.
Intervals: 61.8, 62.7, 63.9, 64.7, 66.0, 67.2s (+1.07s/cycle; equivalently, the start times fit a quadratic with residuals <0.12s).
**Memory is NOT the cause**
- System has 36GB free RAM, pagefile at 29%. No memory pressure.
- WS steady at ~13.7GB; Private oscillates around 21–21.8GB with no sustained growth.
- No memory threshold is being hit.
**What happens during vs. after freezes**
During freeze: mostly inert. When something does happen, it's a discrete ~64MB drop in Private within the first 1–2s (freezes 2 and 4), then silence for ~13s. The 64MB quantum suggests a specific allocator block size (a large-page region or a mapped file). Most freezes (3, 5, 6, 7) show zero Private change.
After freeze (the real action): Every freeze is followed by a burst of 100–600MB allocation and 5,000–25,000 page faults/s in the first 5 seconds. The sim is catching up on a backlog that accumulated while it was frozen.
**Bottom line**
Memory monitoring confirms the freeze is not memory-related. The culprit is an internal sim operation that blocks the main thread for a fixed 15s timeout. The memory consequences (post-freeze allocation burst) are a side effect, not the cause. This is almost certainly a backend service call or internal synchronization point in the sim engine.
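If anyone wants to check that "catch-up burst" against their own samples: the burst is just the per-second derivative of the cumulative page-fault counter. A sketch (the sample layout is an assumption about my logging format):

```python
# Sketch: page-fault rate from consecutive samples of the cumulative
# counter; sustained rates above ~5000/s right after a freeze are the
# catch-up burst described above.
def fault_rates(samples):
    """samples: [(timestamp_seconds, cumulative_page_faults), ...]"""
    return [(t1, (f1 - f0) / (t1 - t0))
            for (t0, f0), (t1, f1) in zip(samples, samples[1:])
            if t1 > t0]
```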
Doesn’t look like a memory issue to Claude and me. Any ideas what else to check?