hannes changed the topic of #mirage to: MirageOS are OCaml unikernels - https://mirage.io - this channel is logged at http://irclog.whitequark.org/mirage/ - MirageOS 3.7.1 is released - happy hacking!
Haudegen has quit [Quit: I'm gone.]
adhux0x0f0x3f has quit [Remote host closed the connection]
Haudegen has joined #mirage
ahou has joined #mirage
kuya has quit [Ping timeout: 240 seconds]
ahou has quit [Read error: Connection timed out]
verg has joined #mirage
mahmudov has quit [Ping timeout: 260 seconds]
_whitelogger_ has joined #mirage
aedc has joined #mirage
_whitelogger has joined #mirage
aedc has quit [Ping timeout: 240 seconds]
Haudegen has quit [Quit: I'm gone.]
aedc has joined #mirage
Haudegen has joined #mirage
adhux0x0f0x3f has joined #mirage
<hannes> and, any luck with a firewall that does compactions every 10 seconds?
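A firewall that compacts every 10 seconds, as asked about here, could be sketched as a recursive Lwt timer. This is only an illustration, not the actual firewall code from the ticket; `OS.Time.sleep_ns` and `Duration.of_sec` are assumed to be the sleep primitives available in the MirageOS platform libraries of that era:

```ocaml
(* Hypothetical sketch: force a heap compaction every 10 seconds
   inside a Mirage unikernel's Lwt event loop. *)
let rec compact_loop () =
  let open Lwt.Infix in
  OS.Time.sleep_ns (Duration.of_sec 10) >>= fun () ->
  Gc.compact ();   (* full major GC cycle plus heap compaction *)
  compact_loop ()
```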
Haudegen has quit [Quit: I'm gone.]
Haudegen has joined #mirage
aedc has quit [Ping timeout: 246 seconds]
Haudegen has quit [Quit: I'm gone.]
mahmudov has joined #mirage
Haudegen has joined #mirage
mahmudov has quit [Remote host closed the connection]
mahmudov has joined #mirage
<verg> hannes: didn't try that, but the one doing a compaction every time before printing ram stats seems to prevent "long term leakage".
<verg> has been running 20h now, and besides two dips (68 and 64 MB) it has been hovering around 72 MB the whole time.
<verg> (oh, actually the dips look like i was messing around manually at the time)
<verg> added an update to the ticket
<verg> restarting the mfwt this client is behind, see you on the other side.
mezu has joined #mirage
verg has quit [Ping timeout: 265 seconds]
<hannes> and earlier it was ~40 MB, now 72 MB?
<mezu> it used to be 42MB total with ~ half that free, for the last few years.
<mezu> now it seems every client vm takes 4MB+ mem, with GC being ... lazy.
<mezu> the 72MB is the "free" space for the single-vm-but-restarting-hourly usecase ("backup fw vm")
<hannes> mmmhhh
<hannes> so it is more like 28?
<mezu> i gave the backupvm 100MB for now, and the main one (lots of clients, fewer restarts) 300MB.
<hannes> and earlier 21?
<hannes> i am mainly interested in the used memory, not the assigned, nor the free ;p
<mezu> well, "used" is a complicated topic if it reports "65MB free" and "out of memory" at the same time. :P
<hannes> i agree
<hannes> the next question would be whom do you ask for bytes free/allocated?
<mezu> i mainly cranked up the total so far to see if that makes a difference ... which supports the "fragmentation" theory.
<mezu> so i guess OS.MM.Heap_pages.used and OS.MM.Heap_pages.total
<mezu> (my Gc.compact() call is in line 13 there)
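A hedged sketch of what such a "compact before printing ram stats" helper might look like, combining the `OS.MM.Heap_pages.used`/`OS.MM.Heap_pages.total` counters named above with the OCaml GC's own view; the 4 KiB page size and the `Logs` call are assumptions, not taken from the ticket:

```ocaml
(* Sketch: compact first, then report both the mini-os view
   (heap pages) and the OCaml GC's own view (gc-managed words).
   Assumes 4 KiB pages; io-pages/cstructs live outside the
   gc-managed heap, so the two numbers will disagree. *)
let print_ram_stats () =
  Gc.compact ();
  let used = OS.MM.Heap_pages.used ()
  and total = OS.MM.Heap_pages.total () in
  let gc = Gc.stat () in
  Logs.info (fun m ->
      m "heap pages: %d/%d used (%d KiB free); ocaml heap: %d words"
        used total ((total - used) * 4) gc.Gc.heap_words)
```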
<hannes> so it is what mini-os thinks about memory, which is fine (good); there's as well the OCaml gc whom you could ask (will report less used memory since the io-page/cstruct is not part of the ocaml gc-managed heap)
<mezu> hm. which one is Gc.compact() hitting? or both?
<hannes> tl;dr: the Gc.compact calls help you and we should figure out how to trigger this implicitly :)
<hannes> Gc.compact is mainly concerned with the gc heap, but it also takes care to free those non-gc-managed pieces by running finalisers..
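The finaliser mechanism hannes refers to can be shown in plain OCaml, independent of Mirage: a finaliser attached with `Gc.finalise` runs once the GC collects the value, and `Gc.compact` forces a full major cycle so pending finalisers fire promptly instead of "eventually". The `Bytes.create 4096` below is just a stand-in for an io-page:

```ocaml
(* Minimal illustration: the finaliser releases the "resource"
   as a side effect of a forced full collection. *)
let make () =
  let p = Bytes.create 4096 in   (* stand-in for an io-page *)
  Gc.finalise (fun _ -> print_endline "io-page released") p

let () =
  make ();       (* the bytes are unreachable once [make] returns *)
  Gc.compact ()  (* full major GC: the finaliser runs now *)
```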
<mezu> yes, that would be one of the two issues i see currently. "call Gc.compact more frequently than daily". :P
<mezu> (the other is the "4mb per client".)
<hannes> was this "4mb per client" there before? (or did you not look, ...)?
Hrundi_V_Bakshi has joined #mirage
<mezu> i never really looked, but i would have hit a "4mb per client" limit _very_ hard a lot of times.
<hannes> ok
<mezu> my sys-mfwt has 17 running downstream vms right now, that is an "average" situation for it.
<mezu> (and it used to run on 42MB and i would have to do log digging to say how much used to be reported as free)
<hannes> i checked for tooling in respect to the new statmemprof -- unfortunately the tooling is not yet there (see https://github.com/ocaml/ocaml/pull/9230#issuecomment-580730423)
<mezu> you mentioned Gc triggers were removed in some place(s)?
<hannes> 17 * 4 > 42 in any case, so there seems to be a memory regression
<hannes> i mentioned that the calls to Gc.compact (triggered by Io_page.get) are happening less often
<mezu> another unverified impression: it seems to "leak harder" after memory got actually used, like "more likely to not free up memory if the client vm shutting down did a bunch of serious traffic"
<mezu> (which again fits with fragmentation)
<hannes> and to add to the pile of things to investigate is https://github.com/ocaml/ocaml/pull/8809 -- a best-fit allocator which should help against fragmentation. NB this code is only in OCaml 4.10.0+ (which is not released, but in beta2)
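For reference, the best-fit allocator from that PR is exposed in OCaml 4.10 as major-heap allocation policy 2, selectable at runtime without recompiling the compiler; the snippet below is a sketch based on the 4.10 `Gc` documentation rather than anything tested in the log:

```ocaml
(* Switch the major-heap allocation policy to best-fit
   (policy 2 in OCaml 4.10), to reduce heap fragmentation. *)
let () =
  Gc.set { (Gc.get ()) with Gc.allocation_policy = 2 }
```

The same switch can be made from the environment with `OCAMLRUNPARAM=a=2`.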
jnavila has joined #mirage
jnavila has quit [Remote host closed the connection]
Hrundi_V_Bakshi has quit [Ping timeout: 268 seconds]
Haudegen has quit [Ping timeout: 265 seconds]
Hrundi_V_Bakshi has joined #mirage
Haudegen has joined #mirage
Hrundi_V_Bakshi has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]