fche changed the topic of #systemtap to: http://sourceware.org/systemtap; email systemtap@sourceware.org if answers here not timely, conversations may be logged
hpt has joined #systemtap
slowfranklin has joined #systemtap
KDr2 has joined #systemtap
KDr2 has quit [Read error: Connection reset by peer]
KDr2 has joined #systemtap
hpt has quit [Ping timeout: 245 seconds]
sscox has quit [Ping timeout: 246 seconds]
orivej_ has quit [Ping timeout: 250 seconds]
wcohen has quit [Ping timeout: 272 seconds]
orivej has joined #systemtap
<ggherdov> Hello, is there a way to have systemtap collect a large amount of data (a few gigabytes)? I tried declaring an array of 100 million entries and it says "parse error: array size out of range".
<ggherdov> I understand the common usage is to compute summaries (averages, histograms, etc.), but right now it would be handy to capture all hits of a tracepoint (with timestamps), keep them in memory, and dump them all at "probe end".
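A minimal sketch of the capture-everything approach ggherdov describes, assuming a hypothetical tracepoint (kernel.trace("sched_switch") is used purely as an example) and an array far smaller than the 100-million-entry one that trips the parser:

    # one timestamp per tracepoint hit, dumped at shutdown; the array is
    # statically allocated, locked kernel memory (sizes and names are examples)
    global ts[1000000]
    global n

    probe kernel.trace("sched_switch") {
      ts[n] = gettimeofday_ns()
      n++
    }

    probe end {
      for (i = 0; i < n; i++)    # a dump loop this long is what runs into MAXACTION later
        printf("%d\n", ts[i])
    }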
sscox has joined #systemtap
<fche> ggherdov, I assume we're talking about normal linux-kernel-module runtimes ... then beware: arrays etc. are all statically allocated, locked kernel RAM.
<fche> that said, on a huge-memory machine, I suppose in principle we should be able to suck up a lot of memory if a user really needs that
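Back-of-the-envelope, and purely under an assumed figure: if each map entry costs a few dozen bytes of bookkeeping on top of its 8-byte value, 100,000,000 entries would pin somewhere in the low gigabytes of locked kernel RAM, which is consistent with the "few gigabytes" ggherdov mentioned above.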
<fche> not sure where our limits are TBH.
<fche> but yeah, that particular limit I'd be glad to dismiss
<fche> parse.cxx lines 2336/2337
<fche> compare to INT_MAX instead, I guess
<ggherdov> fche: I see, thanks
<fche> would you like to draft a one-liner patch to change that limit?
<ggherdov> fche: the thing is, I'm now more convinced that I'm approaching my problem the wrong way. I look at systemtap and ask "give me all the data, we'll see later what I do with it"... which is more of a job for perf or ftrace (if you don't want sampling).
<fche> we think of stap as a multi-paradigm sort of tool - you can (should be able to) do it either way
<fche> that said, in situ filtering / analysis is something pretty unique to stap
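A sketch of that in-situ style, again with hypothetical probe points and names: filter and aggregate inside the handler so only a small summary, rather than every event, has to sit in kernel memory:

    global hits

    probe kernel.trace("sched_switch") {
      if (pid() == 0) next       # example filter: skip the idle task
      hits[cpu()] <<< 1          # aggregate per CPU instead of storing each hit
    }

    probe end {
      foreach (c+ in hits)
        printf("cpu%d: %d hits\n", c, @count(hits[c]))
    }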
<ggherdov> fche: anyways, if I have a large array of data and want to save it at "probe end" time, I'd do that in a loop that iterates over the array elements, and I'd probably need to change MAXACTION as well. Is there a hardcoded limit for MAXACTION too?
orivej has quit [Ping timeout: 250 seconds]
<fche> No, just the macro & its default value
<fche> you'd probably want to use --suppress-time-limits though
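So, assuming the probe-end dump loop is what exceeds the default, the invocation might look like either of the following (dump.stp is a hypothetical script name, and whether --suppress-time-limits also needs guru mode depends on the systemtap version):

    # raise the per-probe statement limit explicitly ...
    stap -DMAXACTION=200000000 dump.stp
    # ... or disable the time/action limits wholesale (may require -g)
    stap -g --suppress-time-limits dump.stp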