<cr1901_modern> azonenberg: Where does "54 tons" come from? I haven't taken a statics/dynamics course in years lol
<azonenberg> cr1901_modern: that was going by some fire dept training slides i found on google; my training used a slightly lower number (6000 pounds per point of contact for 4x4 timbers)
<azonenberg> So 3 tons * 16 points of contact = 48 tons load limit for a 4x4 piece box crib
<azonenberg> probably depends on the type of wood that's common for construction in your part of the country
<azonenberg> compressive strength varies
<azonenberg> i used the range as i don't know what kind of wood they used in the video
<azonenberg> The nice thing about wood for supporting things like this is that it fails gradually and loudly
<azonenberg> So you have ample warning if you DO exceed the safe load limit
<azonenberg> But in the case of that video i am pretty confident they have a massive margin
* cr1901_modern looks up what a "crib" is
<cr1901_modern> Well... I'm a little surprised that such a small "point of contact" has such a large weight limit
<cr1901_modern> But Idk much materials science so lol
<cr1901_modern> azonenberg: Pedantic: Correct me if I'm wrong, but I don't think that crib is holding up the entire weight of the ferris wheel. Wouldn't it be holding up "weight of ferris wheel" * "proportion of surface area of ferris wheel under crib"?
<azonenberg> cr1901_modern: I was extrapolating assuming a similar structure under all 4 legs
<azonenberg> or at least, uniform distribution of load
<azonenberg> s.t. that one crib was holding 1/16 the total weight
<cr1901_modern> azonenberg: I'm just remembering a stupid experiment in high school that demonstrated pressure by balancing a textbook on top of a sharpened pencil, and then placing the pencil on top of the textbook. Same weight, different surface area contact (thus different pressure)
<azonenberg> cr1901_modern: the primary concern in that situation is crushing the wood
<azonenberg> and for a 4x4" contact area the rule of thumb i was taught was 6000 pounds
<azonenberg> or 375 PSI, but that's harder to remember
<azonenberg> Sounds like that has about a factor of ten safety margin
<azonenberg> at least
<azonenberg> as the compressive strength of most of those woods is many thousands of psi
<cr1901_modern> I don't see anything close to 375 PSI
<cr1901_modern> So the rule of thumb is a factor of 10 :P?
<azonenberg> the rule of thumb probably includes a healthy safety margin
<azonenberg> I was taught that 6k pounds per point of contact on a 4x4" overlap
<azonenberg> was the *safe working load*
<cr1901_modern> azonenberg: You seem to imply with your link that compressive strength changes with surface area.
<cr1901_modern> "(2:28:47 PM) azonenberg: Sounds like that has about a factor of ten safety margin"
<cr1901_modern> I'm very confused now
<cr1901_modern> If your rule of thumb is 375 PSI, all of those woods have strengths in the thousands of PSI, and you don't know whether the rule of thumb compensates for that
<cr1901_modern> (2:31:21 PM) azonenberg: the rule of thumb probably includes a healthy safety margin
<cr1901_modern> then where the hell did 6000 come from XD
<whitequark> what
<whitequark> 6000 pounds is a force, 375 psi is force per unit area
<cr1901_modern> I know. But all those woods in the link are THOUSANDS of PSI
<cr1901_modern> But azonenberg said the rule of thumb doesn't account for that (or doesn't know for sure whether it does)
<cr1901_modern> So I'm wondering why it's "so low" relative to wood's real strength
<cr1901_modern> If it's a safety factor, cool! No more questions from me :P
<azonenberg> cr1901_modern: 6000 pounds is ~375 PSI * 16 square inches contact area
<azonenberg> for two 4x4 inch timbers at right angles
<azonenberg> We only train with 4 inch wide lumber, larger pieces are obviously stronger
<azonenberg> So I figure the rule of thumb has a ~10x margin of safety vs the actual failure strength, which is what's linked in that page
<cr1901_modern> azonenberg: Ack. Makes sense now
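
Putting the figures above in one place (a quick sanity check of the numbers quoted in the conversation, not an engineering reference):

    contact_area_in2 = 4 * 4            # one 4x4" timber crossing, in square inches
    working_load_psi = 375              # the rule-of-thumb safe working load
    per_contact_lb = working_load_psi * contact_area_in2   # 6000 lb = 3 tons
    crib_capacity_tons = per_contact_lb * 16 / 2000        # 16 contacts -> 48 tons
    print(per_contact_lb, crib_capacity_tons)              # 6000 48.0
    # Typical construction softwoods crush at several thousand psi, so a
    # 375 psi working load leaves roughly the 10x margin discussed above.
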
<azonenberg> in other news...
<azonenberg> Trying to actually follow the full dependency graph for gcc is...
<azonenberg> nontrivial
<azonenberg> i'm in the middle of trying to puzzle over why gcc is trying to include a header file (via several layers of indirection)
<azonenberg> that didn't show up when i did a dependency scan
<azonenberg> Deep in /usr/include/c++/$VERSION/bits/
<cr1901_modern> B/c gcc paths are a clusterfuck that only Ian Lance Taylor understands, that's why
<azonenberg> Lol
<azonenberg> well for build reproducibility and caching purposes
<cr1901_modern> (or if the other devs understand, they just give glib responses on the mailing list whereas Ian will at least politely give the real answer)
<azonenberg> i need to have the FULL dependency list known in advance
<azonenberg> Because i actually build with -nostdinc
<azonenberg> and explicitly copy the hashed versions of the include files to the build directory in the __sysinclude__ directory, which i then -I
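
A minimal sketch of what such an invocation could look like; the paths, file names, and directory layout here are illustrative, not Splash's actual code:

    import subprocess

    # Hypothetical layout: hashed copies of the system headers were staged
    # under the build directory ahead of time by the dependency scan.
    sysinc = "__sysinclude__/GNU_C++_4.9.2_i386-linux-gnu"
    subprocess.check_call([
        "g++", "-nostdinc",     # never read /usr/include behind our back
        "-I" + sysinc,          # only the pre-hashed header copies are visible
        "-c", "foo.cpp", "-o", "foo.o",
    ])

Any header the scan missed then fails loudly instead of being picked up from the host system.
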
<cr1901_modern> azonenberg: Honestly, drop the mailing list a line and hope Ian responds
<azonenberg> cr1901_modern: i don't know if it's a bug in gcc or how i'm using it yet
<azonenberg> still investigating
<azonenberg> All i know right now is
<azonenberg> __sysinclude__/GNU_C++_4.9.2_i386-linux-gnu/bits/stl_function.h:1084:31: fatal error: backward/binders.h: No such file or directory
<azonenberg> # include <backward/binders.h>
<azonenberg> the include is there and should have been picked up by the dep scan
<azonenberg> unsure if g++ -E failed to see it, or if i parsed something wrong
<cr1901_modern> erm... o.0;
<azonenberg> here's the verbose output i'm staring at now
<azonenberg> oho here's the bug
<azonenberg> /usr/include/c++/4.9/backward/binders.h resolves to __sysinclude__/GNU_C++_4.9.2_i386-linux-gnu/binders.h
<azonenberg> that should turn into __sysinclude__/GNU_C++_4.9.2_i386-linux-gnu/backward/binders.h
<azonenberg> so it's in my code, i don't know what's wrong yet
<azonenberg> but something in my logic that converts system-specific include paths to abstract include paths is borked
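
In sketch form, the fix is to keep the resolved path relative to whichever system search directory matched, rather than just its basename (hypothetical code, not the actual Splash logic):

    import os

    SEARCH_ROOTS = ["/usr/include/c++/4.9", "/usr/include"]
    TOOLCHAIN = "__sysinclude__/GNU_C++_4.9.2_i386-linux-gnu"

    def to_abstract(path):
        for root in SEARCH_ROOTS:
            if path.startswith(root + "/"):
                # Keep the subdirectory structure, so backward/binders.h
                # stays under backward/ instead of landing at the top level.
                return os.path.join(TOOLCHAIN, os.path.relpath(path, root))
        raise ValueError("not under a known system include dir: " + path)

    print(to_abstract("/usr/include/c++/4.9/backward/binders.h"))
    # __sysinclude__/GNU_C++_4.9.2_i386-linux-gnu/backward/binders.h
    # (a buggy basename()-style mapping would drop the backward/ component)
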
<cr1901_modern> Is -nostdinc to remove platform differences?
<azonenberg> Basically, i pick one node and declare it the golden node
<azonenberg> for each toolchain
<azonenberg> all include files, libraries, etc are resolved on that node
<azonenberg> hashed and put into the shared cache
<azonenberg> then all build nodes use -nostdinc and pull all includes and libs from the cache by hash ID
<azonenberg> this means as long as you have the same gcc executable on each node, the rest of the system configuration is irrelevant for reproducibility
<azonenberg> Using -nostdinc also ensures that if i ever fail to detect a file in dependency scanning, the compile will abort
<azonenberg> rather than silently using a file not in my dependency list
<azonenberg> and potentially breaking reproducibility or failing to rebuild when it should
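
One way the golden-node side of that could look, sketched with hypothetical names (a real implementation would also record the toolchain version):

    import hashlib, os, shutil

    def populate_cache(search_roots, cache_dir):
        index = {}                          # relative include path -> hash
        for root in search_roots:
            for dirpath, _, files in os.walk(root):
                for name in files:
                    full = os.path.join(dirpath, name)
                    with open(full, "rb") as f:
                        h = hashlib.sha256(f.read()).hexdigest()
                    index[os.path.relpath(full, root)] = h
                    shutil.copy(full, os.path.join(cache_dir, h))  # store by hash ID
        return index
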
<cr1901_modern> Interesting idea
<azonenberg> i'll be doing the same thing with ld when i get to that point
<azonenberg> right now i'm still working on compilation
<azonenberg> well, more likely i'll link with gcc and not ld since it hides some of the complexities, but same deal under the hood
<azonenberg> basically internally to the system i have a "working copy" object for each connected client
<azonenberg> that maps relative path names, including the virtual __sysinclude__/$COMPILER/ directory under the project root, to sha-256 hashes
<azonenberg> which identify immutable objects like headers or libraries or a specific version of a compiled source file
<cr1901_modern> "working copy" object?
<azonenberg> So the way Splash handles builds is a bit more like a cloud-y environment than something like make
<azonenberg> You have a central control server (eventually maybe >1, i haven't implemented scalability there yet)
<azonenberg> a bunch of build servers
<azonenberg> and a bunch of developer clients
<azonenberg> Each dev runs a service on their workstation that runs an inotify watcher on their checkout of the code
<azonenberg> any time a file changes it gets hashed and the hash is pushed to the control server
<azonenberg> and the server records the (path, hash) tuple
<azonenberg> if the server doesn't have content for that hash yet, it asks the client to send it
<azonenberg> So the control server has the union of every file every dev has in their working copy
<azonenberg> every compiled object
<azonenberg> every system header, every library
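
The per-change client flow might look roughly like this, with the inotify plumbing and the network protocol stubbed out (all names hypothetical):

    import hashlib

    working_copy = {}       # control server's view: relative path -> hash
    content_store = {}      # control server's content, keyed by hash

    def on_file_changed(relpath, data):
        # called by the client's inotify watcher when a file is written
        h = hashlib.sha256(data).hexdigest()
        working_copy[relpath] = h          # record the (path, hash) tuple
        if h not in content_store:         # server has never seen this content
            content_store[h] = data        # ...so the client uploads it
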
<azonenberg> Every time a file changes, it runs a dependency scan to find the new set of deps for the output object
<azonenberg> And the dependencies are recorded by hash, keyed on the hash of the source file plus the compile flags
<azonenberg> So when build time comes around, it just pushes the set of dependency hashes and file names to the build server
<azonenberg> if the build server doesn't have content for those hashes in its local cache, it pulls them from the control server
<cr1901_modern> ahh cool
<azonenberg> compiles, pushes the binary and stdout/stderr back to the control server to get cached
<azonenberg> now if i push my changes
<azonenberg> you pull them
<azonenberg> and you hit build, you don't actually run a compile
<azonenberg> you get a binary from the control server's cache
<azonenberg> If you revert to a version of the code that you recently built, and build
<azonenberg> you don't even have to go to the server
<azonenberg> b/c the binary is probably in your clientside cache
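
Sketching that build-node flow (hypothetical names; local_cache and control_server stand in for the real cache and RPC layer):

    import hashlib

    def build(dep_hashes, local_cache, control_server):
        for h in dep_hashes:
            if h not in local_cache:                # local miss ->
                local_cache[h] = control_server[h]  # pull blob from control server
        # a job is identified by the exact set of input hashes (plus flags, omitted)
        job_id = hashlib.sha256("".join(sorted(dep_hashes)).encode()).hexdigest()
        if job_id in local_cache:
            return local_cache[job_id]  # someone already built this exact input set
        # otherwise compile here, then push binary + stdout back upstream
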
<azonenberg> with FPGA stuff that has hour-or-more build times, you can imagine this is a time-saver
<azonenberg> it's less of a win with C/C++ but it's easier to use the same infrastructure for the entire project
<azonenberg> especially when you have things like an fpga bitstream that includes a compiled C binary in block ram, etc
<azonenberg> You need cross-language cross-toolchain dependency tracking
<cr1901_modern> Better have a shitload of hard drive space for this
<cr1901_modern> *hard disk space is cheap something something*
<azonenberg> The cache will (eventually) have a cap and LRU eviction
<azonenberg> Right now it grows without bound :p
<azonenberg> But that isn't actually a big problem
<azonenberg> because i am constantly flushing the cache to test code changes with a clean slate :p
<azonenberg> oh so one of the other nice things about this system
<azonenberg> It caches stdout of the build as well as the binary
<azonenberg> So you will still see every compiler warning even when doing an incremental build of a different file
<azonenberg> i HATE making a tiny change and rebuilding and not seeing all of the warnings from before
<azonenberg> just because i didn't change the file the warning came from
<azonenberg> makes it easy to miss them
<azonenberg> and a make clean + make just to see the forgotten warnings is annoying
<azonenberg> vs just having them show up all the time, every time
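
A toy version of that replay behavior (the cache record layout here is a hypothetical, not Splash's actual format):

    import sys

    def run_or_replay(job_id, cache, compile_fn):
        if job_id not in cache:
            # compile_fn returns {"binary": ..., "stdout": ..., "stderr": ...}
            cache[job_id] = compile_fn()
        result = cache[job_id]
        sys.stdout.write(result["stdout"])   # old warnings reappear on every build
        sys.stderr.write(result["stderr"])
        return result["binary"]
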
<azonenberg> One of my biggest goals for this system is determinism
<azonenberg> No matter what is in the cache when you hit build, you get the same binaries (bit for bit) and same stdout