whyrusleeping changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- code of conduct at https://github.com/ipfs/community/blob/master/code-of-conduct.md -- sprints + work org at https://github.com/ipfs/pm/ -- community info at https://github.com/ipfs/community/
rschulman__ has joined #ipfs
sbruce has joined #ipfs
tilgovi_ has joined #ipfs
<whyrusleeping> pjz: it doesnt work, lol
<whyrusleeping> we've tried
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> jbenet: did you take a look at that gist?
<pjz> :( ah well, worth a shot :)
<pjz> whyrusleeping: what failed?
<whyrusleeping> some syscalls and networking stuff
<jbenet> whyrusleeping: interesting
www has quit [Ping timeout: 252 seconds]
<jbenet> whyrusleeping: could be a good wrapper of ipns. Definitely need something like that to make it easy to do things. The consistency and so on is complicated but yeah
rschulman__ has quit [Quit: rschulman__]
tilgovi has quit [*.net *.split]
domanic has quit [*.net *.split]
<whyrusleeping> yeah, the implementation i've got going for the docker registry 'works'
<whyrusleeping> but it will break if you try to do any other ipns stuff at the same time
<whyrusleeping> and its pretty slow
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to feat/patch-create: http://git.io/vYJV6
<ipfsbot> go-ipfs/feat/patch-create cb89a6e Jeromy: let rm understand paths...
ThomasWaldmann has joined #ipfs
hellertime has joined #ipfs
pfraze has quit [Remote host closed the connection]
<whyrusleeping> jbenet: what is the expected behaviour if i try to create a link with no name?
www1 has joined #ipfs
sbruce has quit [Read error: Connection reset by peer]
kalmi has joined #ipfs
sbruce has joined #ipfs
<jbenet> Error. We're moving to named links anyway
<whyrusleeping> cool cool
rschulman__ has joined #ipfs
<rschulman__> whyrusleeping: Just looked at that gist you posted earlier. All of those would be really helpful!
domanic has joined #ipfs
<ei-slackbot-ipfs> <zramsay> vendoring the go-ipfs-shell requires a ton of packages
<ei-slackbot-ipfs> <zramsay> would it be better to have one package that imports all of them
zramsay has joined #ipfs
kalmi has quit [Remote host closed the connection]
zramsay has quit [Client Quit]
domanic has quit [Ping timeout: 255 seconds]
<whyrusleeping> rschulman__: glad you think so!
<whyrusleeping> zramsay: i can work on that for you!
<whyrusleeping> @zramsay: o/
<whyrusleeping> its mostly me being lazy and not vendoring the commands lib into ipfs-shell
<ei-slackbot-ipfs> <zramsay> @whyareyousleeping: gotcha
<whyrusleeping> lol
<ei-slackbot-ipfs> <zramsay> learned a lot about godep in the process
<whyrusleeping> i hate godep
<ei-slackbot-ipfs> <zramsay> eventually had to `godep save -r` from the ipfs repo
<whyrusleeping> yeah, 'godep save -r ./...' is pretty much all i know how to do
<whyrusleeping> yeah, i'm pretty excited for the vendor flag
<whyrusleeping> it will help
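For reference, the godep invocation discussed here, run from the repo root, saves dependencies and rewrites import paths (-r) to point at the vendored copies:

    godep save -r ./...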
spikebike has quit [Ping timeout: 265 seconds]
spikebike has joined #ipfs
pfraze has joined #ipfs
ReactorScram has quit [Ping timeout: 264 seconds]
ReactorScram has joined #ipfs
rschulman__ has quit [Quit: rschulman__]
<alu> okay IPFS
<alu> I need to understand IPNS
<whyrusleeping> thats not quite how ipns works
<whyrusleeping> let me know if that link makes sense
<whyrusleeping> and also
<alu> eh
<alu> Error: keychains not yet implemented
<whyrusleeping> from what command?
<alu> ipfs name publish avision QmW3dBdpDezmSVhwQUbzk61i2KkTijwvWJbcG3KJiyszng
<whyrusleeping> yeah, ipfs name publish QmW3dBdpDezmSVhwQUbzk61i2KkTijwvWJbcG3KJiyszng
<whyrusleeping> we're getting closer to having the extensible naming working
<alu> woops haha
<whyrusleeping> but for now we're stuck with only being able to use your peer ID
<alu> should I remember these hashes somewhere
<alu> published to <hash>: <hash>
<whyrusleeping> eh, one of them is the one you typed, and the other is your peer ID
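For reference, alu's corrected publish flow; until extensible naming lands, the name you publish under is always your own peer ID, so no name argument is given (values below are placeholders):

    ipfs name publish QmW3dBdpDezmSVhwQUbzk61i2KkTijwvWJbcG3KJiyszng
    # published to <your peer ID>: <the hash you typed>
    ipfs name resolve <your peer ID>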
reit has joined #ipfs
reit has quit [Client Quit]
reit has joined #ipfs
<whyrusleeping> jbenet: this random EOF on the API failure is getting more annoying by the minute
<whyrusleeping> i want to stop what i'm doing to fix it, but thats just going farther down the rabbit hole
* jbenet wonders if it's related to the other rabbit hole on the http api we found before and punted on
<whyrusleeping> i reallllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllyyyy hope not
<whyrusleeping> and that only broke when i changed things
<jbenet> That's a lot of ls
<whyrusleeping> and it broke consistently
<jbenet> whyrusleeping: ok quick sanity check on filesystems
<whyrusleeping> alright
<whyrusleeping> lets check some sanity
<whyrusleeping> first, which filesytem?
<jbenet> whyrusleeping: it occurs to me that -- assuming files were always leaf data with no links -- unixfs can be represented by _only_ a directory
<whyrusleeping> hfs+?
<whyrusleeping> so intermediate blocks could just look like directories?
<jbenet> (I know the interesting part in _our_ unixfs is the dag chunking, and concatenation, but that's really more about data importing, not unix
<whyrusleeping> how would you tell a root level intermediate block from a large chunked file?
<jbenet> No, I just mean, ignore all the chunking for now, I think files are just any mdag object that have a "raw data" output, no?
<jbenet> So the interesting part becomes the directories?
<whyrusleeping> i mean, sure, if we dont chunk anything
<whyrusleeping> and that depends on your definition of interesting
mdem has quit [Quit: Connection closed for inactivity]
<whyrusleeping> jbenet: i wrote a program using ipfs-shell that gets that api error pretty readily
<whyrusleeping> any idea at all how to start debugging http api crap?
<whyrusleeping> ngrok is a bit too slow, because i have to throw a few hundred requests at it
<jbenet> Blame I'm working on linked data version of the core ipfs data structure. I'm thinking about how to layer stackstream. I need a way to extract the first layer of links out of a stackstream object. Also another possible reconciliation atm is to make stackstream a type, which can be applied or referenced. And a File to be a {link to a stackstream "combinator",
<jbenet> links to sub chunks, raw data}
<jbenet> whyrusleeping: Hm not sure. Maybe ask daviddias or mappum?
<whyrusleeping> daviddias: mappum *poke*
<jbenet> Blame also, when CALLing, does the child's OUTPUT get placed on the stack, or written out? I ask because the example on the readme implies the latter? Or I'm reading it wrong
<daviddias> reading
<whyrusleeping> tldr: http api randomly breaks, i can repro by throwing a ton of requests at it
<daviddias> is this related to the fact that pinbot sometimes can't make requests? Or for all use cases of the API
<daviddias> breaks as in, "process throws and dies"?
<whyrusleeping> daviddias: yeap, this is the problem pinbot is hitting
<jbenet> whyrusleeping: it's possible that it's stuck handling requests?
<whyrusleeping> and no, i either get an EOF client side, or connection reset by peer
<whyrusleeping> jbenet: i dont do requests async
<whyrusleeping> one after another
<jbenet> whyrusleeping: does it _die_ or get _stuck_ with N requests
<jbenet> whyrusleeping: so it just halts at Nth
<jbenet> whyrusleeping: or craps out?
<whyrusleeping> it doesnt halt, it errors out
<whyrusleeping> one sec
<jbenet> So: serially: kkkkkkkkkkkfffffff (where k is ok request and f is fail/error (that does indeed return)) ?
<whyrusleeping> kkkkkkkkkkkkkkkkkkf
<whyrusleeping> and i stop my program
<whyrusleeping> but without restarting the daemon, i can get more successes
<daviddias> woa, it is even from a different process. for a moment I thought it might be related with the http connection pool
<jbenet> whyrusleeping: what is the shape of those successes? Do the failures build up? Or one-offs?
<whyrusleeping> one offs
<jbenet> And, only when stressed?
* jbenet feels like an Http Doctor
<daviddias> does it get back to normal by itself or do you have to kill the api?
<whyrusleeping> i mean, i can put a sleep between each of the calls and see
<jbenet> Like if you pause for 100ms between requests, can you still trigger fail?
<whyrusleeping> i dont have to kill the api
<whyrusleeping> i'll try pausing
* jbenet wants to find out if it's just the network stack rejecting requests when under load
<whyrusleeping> well, requests arent coming in concurrently, i dont see why it would be load
<jbenet> Truez
<whyrusleeping> lol, adding in 100ms sleep is making it take 5ever
<jbenet> Wow, _five_ever? The universe will die before it finishes
<whyrusleeping> well, without the sleep, it fails less than 30 calls in
<whyrusleeping> with the sleep, it hits hundreds of calls
<jbenet> Odd.
<whyrusleeping> actually
<whyrusleeping> with the sleep, i havent seen it fail
<whyrusleeping> awesome
<whyrusleeping> now
<whyrusleeping> building
<whyrusleeping> i get: "write error: No space left on device"
<whyrusleeping> gonna go to sleep or something
<whyrusleeping> if anyone wants to try and debug this, i pushed the repro program to go-ipfs-shell/tests
<jbenet> whyrusleeping: will try it out
<jbenet> whyrusleeping: sleep well!
<whyrusleeping> <3
<jbenet> whyrusleeping: before you go-- wat do I do to try out registry now?
<whyrusleeping> I can push
<jbenet> Thanks
<whyrusleeping> build cmd/registry
<whyrusleeping> and then run it 'registry ipfsconfig.yaml'
<whyrusleeping> the config is in the root of the repo
<whyrusleeping> make sure you have an ipfs node running
<whyrusleeping> and DONT touch ipns while its running
<whyrusleeping> youll need to have latest ipfs-shell
<whyrusleeping> and my branch (feat/patch-create) of go-ipfs checked out
<jbenet> Like don't ipns name publish?
<jbenet> Can you put all that in a readme?
<whyrusleeping> i would, but its still "WIP" so i didnt want to change the readme temporarily
<jbenet> Oh I mean _a_ readme somewhere could be a gist. I just want to make sure that it's exhaustive and all instructions are indeed there
<whyrusleeping> ah, okay
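Collecting whyrusleeping's instructions above in one place (a sketch; the project is still WIP, per the discussion):

    # prerequisites: latest ipfs-shell, the feat/patch-create branch
    # of go-ipfs checked out, and a running ipfs node
    go build ./cmd/registry
    ./registry ipfsconfig.yaml   # the config lives in the root of the repo
    # don't touch ipns while the registry is running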
* whyrusleeping puts his pants back on
hellertime has quit [Quit: Leaving.]
<jbenet> <3 thanks
* whyrusleeping takes his pants off again
<whyrusleeping> gnite all!
<jbenet> Night!
* zignig averts his eyes and waves.
<zignig> jbenet: o/
<zignig> how goes ipfsing ?
<jbenet> hey zignig great! We've a couple meetups tomorrow and following day if you want to come to US NW
www1 has quit [Ping timeout: 244 seconds]
Leer10 has quit [Quit: Leaving]
<zignig> it's a long swim and i don't think i'd make it in time ;)
<zignig> but you can do a demo of astralboot if you like .... ;)
<alu> if I wanna setup an IPFS seedbox
<alu> any special steps or best practices recommended?
<zignig> alu, not really. Only remember that IPFS is in heavy development, so its a moving target at the moment.
<zignig> what are you going to be using it for ?
<alu> distributed virtual reality 3D model repositories
<alu> i have a huge collection of 3D models im creating a 3D model viewer for inside janus
<alu> people can link to them / download or whatever
<zignig> nice, ipfs is a good fit , as people download them they make them available to everyone else.
<alu> heres a screenshot of the first possible look
<zignig> ls
<alu> the main model is in the middle with 10 portals surrounding it are hyperlinks to similar recommended models
<alu> main model will be on a rotating platform
<zignig> VR hyperlinks, very nice.
<alu> web pages as rooms and links as portals
<alu> its a collective spatial walkthrough of the internet
<alu> so I can have multiple hosts seed it right
<zignig> ls
* zignig selects the correct window :/
mdem has joined #ipfs
pfraze has quit [Remote host closed the connection]
sharky has joined #ipfs
Tv` has quit [Quit: Connection closed for inactivity]
keroberos has quit [Ping timeout: 252 seconds]
AndChat|77184 has joined #ipfs
mildred has joined #ipfs
reit has quit [Ping timeout: 244 seconds]
keroberos has joined #ipfs
AndChat|77184 has quit [Quit: Bye]
sendrecv has joined #ipfs
sendrecv has left #ipfs [#ipfs]
reit has joined #ipfs
atomotic has joined #ipfs
kbala has quit [Quit: Connection closed for inactivity]
domanic has joined #ipfs
mdem has quit [Quit: Connection closed for inactivity]
<cryptix> hellowww
domanic has quit [Ping timeout: 260 seconds]
mildred has quit [Quit: Leaving.]
null_radix has quit [Excess Flood]
null_radix has joined #ipfs
reit has quit [Ping timeout: 260 seconds]
rschulman__ has joined #ipfs
mildred has joined #ipfs
rschulman__ has quit [Quit: rschulman__]
kalmi has joined #ipfs
cow_2001 has quit [Quit: ASCII Muhammad - @o<-<]
mildred has quit [Quit: Leaving.]
rschulman__ has joined #ipfs
mildred has joined #ipfs
rschulman__ has quit [Quit: rschulman__]
hellertime has joined #ipfs
mildred has quit [Quit: Leaving.]
mildred has joined #ipfs
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
tilgovi_ has quit [Quit: No Ping reply in 180 seconds.]
tilgovi has joined #ipfs
<Blame> jbenet: OUTPUT always pushes to output.
<Blame> not to the stack. When CALL is performed, it shares the same stack with the parent. It can "return" values just by putting them on the stack.
<Blame> I think the confusion is from the fact there are 2 types of arguments: stack-args and inline-args
<Blame> inline args let us bootstrap values onto the stack, and to make some behaviors defined at "assembly time" rather than runtime.
<Blame> so "BLOCKID" in the example that is getting 'PUT" is the identifier for the block, which is then being 'CALL'ed (which consumes the identifier of the block to call from the stack to allow for some metaprogramming)
<Blame> since this is Turing complete, unless we add more explicit directives (like 'output this raw chunk') you will not be able to perfectly predict what chunks are being called without running the assembly. But you might be able to do some reasonable predicting.
<Blame> In the example use case, I am assuming that the 'CALL'ed blocks contain similar blocks and leaf nodes contain 'put rawdata; output' rather than the raw content itself. We could add a directive to 'put raw content of block on stack' or 'output raw content of given block'
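A toy Go sketch of the semantics Blame lays out; the opcode set, types, and block store here are hypothetical, only the behavior comes from the discussion: CALL consumes a block id from the stack and runs that block on the same stack, while OUTPUT writes to the output stream rather than back to the stack:

    package main

    import "fmt"

    type op struct {
        code string // "PUT", "CALL" or "OUTPUT", as named in the discussion
        arg  []byte // inline-arg, if any
    }

    // blocks is a stand-in for the content-addressed block store.
    var blocks = map[string][]op{}

    // run executes ops against a stack shared with the caller,
    // appending anything OUTPUT to out.
    func run(ops []op, stack *[][]byte, out *[][]byte) {
        for _, o := range ops {
            switch o.code {
            case "PUT": // bootstrap an inline-arg onto the stack
                *stack = append(*stack, o.arg)
            case "CALL": // pop a block id, run that block on the *same* stack
                n := len(*stack) - 1
                id := string((*stack)[n])
                *stack = (*stack)[:n]
                run(blocks[id], stack, out)
            case "OUTPUT": // pop and emit to output, not back to the stack
                n := len(*stack) - 1
                *out = append(*out, (*stack)[n])
                *stack = (*stack)[:n]
            }
        }
    }

    func main() {
        // a leaf node: 'put rawdata; output'
        blocks["LEAF"] = []op{{"PUT", []byte("raw data")}, {"OUTPUT", nil}}
        // a parent: PUT the block id, then CALL consumes it from the stack
        var stack, out [][]byte
        run([]op{{"PUT", []byte("LEAF")}, {"CALL", nil}}, &stack, &out)
        fmt.Printf("%s\n", out) // [raw data]
    }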
reit has joined #ipfs
dignifiedquire has joined #ipfs
atomotic has joined #ipfs
reit has quit [Quit: Leaving]
rschulman__ has joined #ipfs
pfraze has joined #ipfs
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
therealplato has joined #ipfs
mdem has joined #ipfs
Tv` has joined #ipfs
www1 has joined #ipfs
<whyrusleeping> lets see if i cant figure out this http crap
<lgierth> whyrusleeping: godspeed
<lgierth> o/
<lgierth> i owe you one if you can fix it, it makes my solarnet deploys fail out of the blue every now and then
mildred has quit [Ping timeout: 272 seconds]
<whyrusleeping> lgierth: yeah, i cant even push a docker image to the registry i made, it fails pretty often
<whyrusleeping> so, what i'm seeing so far is that the request gets made and rejected before the http handler gets called server side
<whyrusleeping> which is really not what i wanted to discover
<whyrusleeping> i was hoping we had something like 'if rand.Intn(100) == 0 {failSilently(now)}'
ThomasWaldmann has quit [Ping timeout: 255 seconds]
cmars has quit [Ping timeout: 264 seconds]
edsu_ has quit [Ping timeout: 255 seconds]
<whyrusleeping> and the error client side is coming from http.DefaultClient.Do(...)
<whyrusleeping> which is not code i'm going to try and debug
krl has quit [Ping timeout: 255 seconds]
edsu has joined #ipfs
krl has joined #ipfs
cmars has joined #ipfs
kalmi has quit [Ping timeout: 264 seconds]
<whyrusleeping> TIL: we have a root redirect option on our gateways
keroberos has quit [Excess Flood]
<rschulman__> whyrusleeping: I feel bad for you but better you than me.
* whyrusleeping shakes his fist at rschulman__
<lgierth> whyrusleeping: rejected as in conn refused, eh?
<whyrusleeping> lgierth: yeah
<whyrusleeping> i also just found a few data races in the daemon/commands lib
<whyrusleeping> does anyone else hate the editor that github is using for gists now?
con_ has quit [Ping timeout: 255 seconds]
<whyrusleeping> heres one of the races: https://gist.github.com/whyrusleeping/5ba411c4d02c92c99c66
<lgierth> ipfs-paste \o/
con_ has joined #ipfs
<whyrusleeping> lgierth: except i keep my nodes off the main network :P
<whyrusleeping> so i dont spam the network with my crap, and so daemon startup times are faster for development
<lgierth> right on makes sense
<rschulman__> whyrusleeping: so courteous
<whyrusleeping> lol, i think i found a bug in go: https://gist.github.com/whyrusleeping/1850fe43999ffd9b2243
<whyrusleeping> look at the stack on that one
ThomasWaldmann has joined #ipfs
keroberos has joined #ipfs
<whyrusleeping> ^ i'm getting the same errors that he is getting
headbite has joined #ipfs
<whyrusleeping> i'm going to go with 'go is broken'
<whyrusleeping> lgierth: o/
<tperson> The transport is broken :P time to write your own.
<whyrusleeping> fffffffffff
<whyrusleeping> googled 'golang http transport sucks' brought me to this link http://stackoverflow.com/questions/17948827/reusing-http-connections-in-golang
<whyrusleeping> and the solution of reading everything from the body and closing it every time appears to work...
<whyrusleeping> maybe
<whyrusleeping> nevermind, it just makes the requests hang
<whyrusleeping> blech
<tperson> Hang?
<tperson> Like it can't close the connection
<whyrusleeping> no, hanging on a read because i closed the connection, lol
<whyrusleeping> one sec, trying something
mildred has joined #ipfs
<Tv`> goleveldb using scanf for parsing something?
<Tv`> that scares me
<whyrusleeping> Tv`: i think thats not a real stack trace
<whyrusleeping> oh, yeah
<Tv`> well that part looks believable
<rschulman__> whyrusleeping: I think i have a solution for you
<tperson> http problem?
<rschulman__> Use rustlang :P
<whyrusleeping> rschulman__: i fixed the problem
<whyrusleeping> its ugly
<Tv`> calling scanf from parsesomething
<Tv`> now you have been given ownership of 2 problems
<Tv`> but yeah either there's a code bug, the stack is busted, or it misunderstood the stack
<whyrusleeping> tperson: kinda
<whyrusleeping> basically, with go's http transport, you *have* to close your response Bodys when youre done with them
<whyrusleeping> if you dont, things break horribly
<whyrusleeping> depending on how it feels
<whyrusleeping> Tv`: weird part though, is that it starts to fail well before running out of file descriptors
<whyrusleeping> like, five requests in it will fail
<Tv`> that's what resource leaks look like
<whyrusleeping> and by horribly, i mean, sporadically and in a manner thats not easy to reproduce
<whyrusleeping> and also by throwing one of three errors
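The fix whyrusleeping lands on later in the thread is to always drain and close response bodies on the client. A minimal sketch of that pattern in Go (not the actual go-ipfs code; fetch is a hypothetical helper):

    import (
        "io"
        "io/ioutil"
        "net/http"
    )

    func fetch(url string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        // always drain and close the body, even for empty responses,
        // so the transport can safely reuse (or discard) the connection
        defer resp.Body.Close()
        _, err = io.Copy(ioutil.Discard, resp.Body)
        return err
    }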
<ipfsbot> [go-ipfs] whyrusleeping force-pushed feat/patch-create from cb89a6e to 95041f5: http://git.io/vYLXR
<ipfsbot> go-ipfs/feat/patch-create eeedf05 Jeromy: allow patch to optionally create intermediate dirs...
<ipfsbot> go-ipfs/feat/patch-create a52650d Jeromy: let rm understand paths...
<ipfsbot> go-ipfs/feat/patch-create 95041f5 Jeromy: more tests and better path handling in object...
Encrypt has joined #ipfs
<rschulman__> whyrusleeping: Maybe we could do something simpler in the near term by adding an “ipfs file type” command that just returns “file”, “directory”, or “none”?
<rschulman__> sorry, this is about the ipns stuff from earlier.
mildred has quit [Ping timeout: 252 seconds]
<whyrusleeping> rschulman__: how can we tell something that is a file from something that just randomly has the same data as a file?
<rschulman__> jbenet was implying earlier that there is data in a unixfs node, though I haven’t figured out how to find it yet.
<rschulman__> that is unique data other than just
<rschulman__> whatever
<rschulman__> :)
bedeho has quit [Ping timeout: 246 seconds]
vijayee_ has joined #ipfs
<richardlitt> jbenet: your site is down, I think.
<whyrusleeping> richardlitt: which site?
<jbenet> Blame: ah that makes sense. Hm, I would've expected the output of one module to feed into the caller, that way the caller can compose the outputs (a bit like pipes?). Maybe we can remove OUTPUT and instead "output" whatever is left on the stack? That could get a bit complicated. Another stack of outputs? --- I guess streaming is where it matters, want to be
<jbenet> able to output as we go. Hmmm
Encrypt has quit [Quit: Quitte]
<jbenet> Blame: also in terms of predicting I just want to predict the object ids called by the current module. Easier than all objects it ever references, but probably still subject to the Turing completeness problem
<jbenet> whyrusleeping: :( wow golang/http
<ipfsbot> [go-ipfs] whyrusleeping created fix/http-client-close (+1 new commit): http://git.io/vYLNa
<ipfsbot> go-ipfs/fix/http-client-close 50af582 Jeromy: attempt at properly closing http response bodies...
mildred has joined #ipfs
<jbenet> rschulman__ right now you have to parse the protobuf, and there's no way of knowing outside of the object whether it is unixfs or not; it depends on the operation (whether it tries to ls it as unixfs). Soon it will be able to know
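A rough sketch of the protobuf check jbenet describes, for rschulman__'s file/directory/none idea; the import paths and generated names are assumptions based on go-ipfs's unixfs package, and, per whyrusleeping's objection above, arbitrary data can coincidentally parse as a valid protobuf:

    import (
        proto "github.com/gogo/protobuf/proto" // whichever proto lib go-ipfs vendors
        ftpb "github.com/ipfs/go-ipfs/unixfs/pb"
    )

    // fileType guesses whether a merkledag node's Data field holds a
    // unixfs file, a directory, or neither.
    func fileType(data []byte) string {
        pb := new(ftpb.Data)
        if err := proto.Unmarshal(data, pb); err != nil {
            return "none"
        }
        switch pb.GetType() {
        case ftpb.Data_File:
            return "file"
        case ftpb.Data_Directory:
            return "directory"
        default:
            return "none"
        }
    }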
<whyrusleeping> jbenet: yeah, its kinda fucked up and we have to hack around it
<jbenet> richardlitt: thanks I'll check it out
<richardlitt> jbenet: no problem
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1507: attempt at properly closing http response bodies (master...fix/http-client-close) http://git.io/vYLAc
<jbenet> It's busted alright. Need to get to laptop. I'm going to start 302ing to ipfs gateway. More reliable
<jbenet> Commented
mildred has quit [Read error: Connection reset by peer]
mildred has joined #ipfs
kalmi has joined #ipfs
<rschulman__> jbenet: ahh, I see, that’s too bad.
<whyrusleeping> jbenet: where in the cmdslib were you thinking?
<whyrusleeping> maybe i misunderstood
tilgovi has quit [Ping timeout: 240 seconds]
<whyrusleeping> jbenet: we havent run into this before because the ipfs command only runs one API call per process instance
<whyrusleeping> with the shell, we reuse the client (and subsequently, the transport)
ThomasWaldmann has quit [Remote host closed the connection]
ThomasWaldmann has joined #ipfs
<whyrusleeping> why does the http transport need to cache connections?
<whyrusleeping> meh, lets disable that shit
<jbenet> whyrusleeping: cache connections to avoid creating a new TCP connection per request
<jbenet> whyrusleeping: otherwise TCP handshake, no?
<whyrusleeping> but we make a new connection *anyways*
<whyrusleeping> nothing we do supports reusing connections
<whyrusleeping> AFAIK
<whyrusleeping> although i know little about http
<jbenet> whyrusleeping: afaict -- from those links you showed -- this is a problem in the client, not the server. so closing the http request when done is the thing to do. i guess we don't know _when_ we end because requests may return, but keep writing...
<jbenet> oh no, that's not requests, that's responses.
<jbenet> whyrusleeping: i think the HTTP client reuses connections by default
<jbenet> but im not sure
<whyrusleeping> i think it tries to, and breaks
<whyrusleeping> there are several bugs open with similar behaviour
<whyrusleeping> its because the transport reuse connection logic is broken
<cryptix> hey guys
<daviddias> whyrusleeping it is trying to reuse the socket from the listener side? Even after having it closed?
<whyrusleeping> its trying to reuse the socket clientside i beleive
<jbenet> daviddias: no, the client. this is ok to do, and part of http.
<daviddias> I remember hitting some bugs in Node a while ago (now solved) where the requests multiplexed over one connection would be all aborted if one of the requests was canceled, but that was because of the connection pool
<jbenet> it's just that the lib implementation has bugs
atrapado has joined #ipfs
<daviddias> ok. But wasn't whyrusleeping starting the requests from different processes? (which should mean different connection pools? )
<jbenet> daviddias: no, that's when it works fine. it breaks when it is the same process.
<jbenet> whyrusleeping: I'm ok using a new transport per request to bypass the bug, but it's going to be slower, particularly in command pipelines
<cryptix> sounds like request 2 is stuck until request 1 was handled when the client reuses the connection
<cryptix> (which is expected imho if you use the default pooling)
<daviddias> jbenet: ^ that was the test I was thinking of from yesterday
<jbenet> daviddias: that process is running many requests and printing out the first one that fails, i think. whyrusleeping?
<daviddias> aaaaaah! Get it now
<daviddias> yeah, so I ran into the same problem in the past and had to set http agent to false so that the impl didn't do connection reuse
<daviddias> and yes, adds a lot of overhead
<jbenet> daviddias: funny that neither go or node got it right :)
<whyrusleeping> jbenet: correct
<jbenet> whyrusleeping: we could go through all the commands and try to close them. i mean they _should_ close things.
<jbenet> whyrusleeping: but in that case, we should probably have a test that verifies this per cmd. not sue if we can create a generic test that hits every request and tests.
<whyrusleeping> jbenet: or i can just do this: https://github.com/ipfs/go-ipfs/pull/1507#issuecomment-123809155
hellertime has quit [Quit: Leaving.]
<cryptix> whyrusleeping: if you go with tha transport, you might as well stick the client in the cmdlib client and not create a new one for each send
<cryptix> that*
<whyrusleeping> cryptix: true
<jbenet> no, that will trigger the same problem i think.
<cryptix> jbenet: it disables the conn polling
<cryptix> pooling**
<jbenet> it does not disable conn pooling, it only disables keepalives, no?
<whyrusleeping> jbenet: conn pooling and keepalives are one and the same according to go
<cryptix> .// DisableKeepAlives, if true, prevents re-use of TCP connections // between different HTTP requests.
<jbenet> oh interesting
<whyrusleeping> yeah
<whyrusleeping> cryptix beat me to it
<jbenet> ok im down. note the DefaultTransport has other opts
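For context, "the DefaultTransport has other opts" refers to settings like the proxy and dial timeouts. A sketch of a replacement client that keeps Go 1.4-era defaults while disabling connection reuse (the values mirror the stdlib of the time; double-check them against your Go version):

    import (
        "net"
        "net/http"
        "time"
    )

    var noKeepAliveClient = &http.Client{
        Transport: &http.Transport{
            Proxy: http.ProxyFromEnvironment,
            Dial: (&net.Dialer{
                Timeout:   30 * time.Second,
                KeepAlive: 30 * time.Second,
            }).Dial,
            TLSHandshakeTimeout: 10 * time.Second,
            // the workaround under discussion: no connection reuse
            DisableKeepAlives: true,
        },
    }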
<cryptix> its an edge case, i dont think its 'broke in go'.. how much handholding do you want at this point?
<cryptix> well... thats another story
<cryptix> 'it looks like the first time the http client properly returns
<cryptix> empty body but the transport ends up returning the broken connection back to the pool.'
<whyrusleeping> if i make two separate requests, i shouldnt have to wait for one to 'finish' before i can launch the other
<whyrusleeping> thats a bug
<cryptix> nope
<cryptix> not in my book.
<cryptix> you need to know about the pooling, granted - but its exactly what the pooling is for, not making tcp handshakes for every request
<whyrusleeping> so if http.Get('X') and another http.Get('X') are made to the same server
<cryptix> if you want that.. or the first req you fire off will block indef (long polling anyone?), you need to switch off the pooling or use a separate transport for the first one
<whyrusleeping> and one of them fails because the other isnt 'done yet'
<whyrusleeping> thats not a bug?
<cryptix> personally, i dont think so
<whyrusleeping> then they should mark that very clearly somewhere where everyone can see it
<cryptix> you assume the runtime (or http lib) governs your requests and decides at some point to make a separate tcp con for the 2nd request
<cryptix> well.. its in the docs
<whyrusleeping> i'm able to repro with http.Get("...")
<jbenet> cryptix: the http lib is different from the http protocol. if the http lib allows you to do this: http://gateway.ipfs.io/ipfs/QmTjUt6gbLRT1ucbM7SGV6uZU9KiGK7z6m689Drc7XCSFc/paste it shouldn't break
<whyrusleeping> from import "net/http"
tilgovi has joined #ipfs
<whyrusleeping> a global variable like that, especially one thats critical (and a default) for most of the http calls, shouldnt have thread safety or resource leakage issues that blatant
<cryptix> sigh
<cryptix> yea... sometimes i wonder why you guys hack this in go if you are offended by most of its decisions
<cryptix> just dont use the default transport and be done with it
<whyrusleeping> its not a problem with a decision, its a *BUG IN THE CODEBASE* --> https://github.com/golang/go/issues/8946
<jbenet> okay okay everyone relax :)
<whyrusleeping> blech, http
<cryptix> sorry everyone, whyrusleeping especially
<whyrusleeping> its alright, lol
<jbenet> languages are not perfect -- faaaaaar from it. every language you run into will have TONS of bugs. both in the code and in the decisions. just _using_ a language doesnt mean "i agree with everything the language thinks is right to do". even HTTP is riddled with problems that people do not agree with. its sad, but yeah. And, we can try and improve languages,
<jbenet> but first we must be able to tell what we think is wrong.
<jbenet> and we're not offended by most of its decisions, otherwise, yeah, we wouldn't be using go. personally, i'm offended by only a few. i think maybe there's a sample bias here, where we only talk about things that we disagree with, as agreement is not worthy of note?
<jbenet> s/offended/disappointed\/annoyed/
<cryptix> yea, i see
<jbenet> for example, i really like the type signatures. and how interfaces are written-- super nice :)
<cryptix> still think 8946 isnt the problem here.. its a corner case of broken pool reuse when the server trashes the clients http connection. is that what the ipfs server is doing here?
<cryptix> if so, why does the server terminate a connection mid flight anyway?
<jbenet> that's a good point o/ -- we **should** be closing requests correctly
<whyrusleeping> the server *doesnt* close requests afaik
<whyrusleeping> it waits for the client to close the request
<cryptix> yup. client has to close the bodies
<cryptix> otherwise it fills up the pool
<cryptix> (which is why i alluded to long polling on the same transport being a problematic assumption)
<whyrusleeping> i think we should just use a new transport per request
<jbenet> whyrusleeping: when will those open requests close then?
<jbenet> _something_ has to close them.
www1 has quit [Ping timeout: 240 seconds]
<whyrusleeping> jbenet: so then my currently open PR
semidreamless has joined #ipfs
<jbenet> whyrusleeping doesnt it also mean we have to go through all the commands and make sure they close the body?
<jbenet> whyrusleeping: i imagine one test could test all the commands and make sure the body of the response writer is closed correctly (passing a mocked closer), but not sure.
<whyrusleeping> jbenet: yeah, we *should* for correctness, but since we only use one connection per command invocation
<whyrusleeping> and the process dies and closes all its fd's
<whyrusleeping> it doesnt matter for the cli stuff
<whyrusleeping> but ipfs-shell consumers will need to close
<whyrusleeping> and thats going to be complicated as all hell
<jbenet> the API should be correct as consumers wont all be using our code (cli or shell)
<jbenet> like, if curl-ing breaks...
<whyrusleeping> the API *is* correct
<whyrusleeping> the client code is whats incorrect
<jbenet> i meant the command responses
<jbenet> do all the command responses close the body of the response correctly?
<sprintbot> Sprint Checkin! [whyrusleeping jbenet cryptix wking lgierth krl kbala_ rht__ daviddias dPow chriscool gatesvp]
<whyrusleeping> sprintbot: working on obnoxious http fun stuff
<jbenet> sprintbot: hopping trains and working on ipfsld and talk this eve
<daviddias> sprintbot: working on the routing layer stuff
semidreamless has quit [Quit: Leaving...]
<vijayee_> jbenet: I'm about done with a tool to do the tour in a workshopper fashion, but I don't know what type of test cases the different chapters should have, or how the standalone tool should connect to the local ipfs to verify the user completed the commands
kalmi has quit [Remote host closed the connection]
<whyrusleeping> jbenet: what exactly do you mean by 'do all the command responses close the body of the response correctly?'
<whyrusleeping> for the CLI, i would be calling close as the last thing in cmd/ipfs/main.go
<jbenet> vijayee: for connecting, can use https://github.com/ipfs/go-ipfs-shell in go or https://www.npmjs.org/package/ipfs-api in node
<jbenet> whyrusleeping: i mean that in a Run() function of a command, if we return a channel or a reader, do we close it correctly?
<jbenet> otherwise the server will never close the response
<jbenet> (i imagine we do some of this right, or the cli client would hang
<whyrusleeping> yeah, we do that part right
<whyrusleeping> commands server side wont hang because of client inaction
<jbenet> well, i think they will (backpressure)
<jbenet> backpressure from tcp -> http response writer -> command response writer
<jbenet> (and this is correct)
<jbenet> but i mean, _will it close always_ when done?
<jbenet> cryptix's point made me think we may not be closing them all correctly
<whyrusleeping> jbenet: as far as i know, that all is correct
<whyrusleeping> i wouldnt be surprised if someone proved me wrong
<jbenet> hahaha
<whyrusleeping> but thats my current understanding of the codebase
<jbenet> i'll take a look
* whyrusleeping goes for a walk
border has joined #ipfs
<jbenet> yep, we're fine: "Body is closed after it is sent." and ResponseWriter has no Close http://golang.org/pkg/net/http/#ResponseWriter
<vijayee_> jbenet: thanks, I'll check out the go library. At some point will have to figure out what all the tests should be for passing each chapter
<cryptix> jbenet: just wanted to link this when we got into the http mess... :)
<cryptix> screams ipfs all over (at least the linking and distribution stuff)
<jbenet> cryptix: haha :)
kbala has joined #ipfs
<jbenet> just started reading it
<vijayee_> whoa, eight months in iranian solitary confinement
<vijayee_> for a blog?
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<whyrusleeping> jbenet: so, thoughts on http stuff?
<whyrusleeping> i need to get this fixed before i can continue pretty much anything else
<whyrusleeping> with either of those patches the docker stuff works, i was able to push an image to my local registry
<rschulman__> jbenet: Just reading that as well. Was mentioned by the guy in the IPFS twitter stream. :)
<rschulman__> assuming that might be you, cryptix?
<ogd> jbenet: where the hacks at today
<cryptix> rschulman__: oObsi is one of my twitter accs, yes :)
<rschulman__> :)
<cryptix> was quite interesting to have a 'i knew this translation flew through my timeline earlier'-moment with THAT article
<jbenet> ogd: ^H
<jbenet> ogd: i'll be there ~3pm. we've a meetup tonight: attending.io/events/ipfs-portland-meetup-the-permanent-distributed-web
<jbenet> cryptix: +1 for posting
<cryptix> i'll go and make some music now - had too much android today to do more coding
<jbenet> whyrusleeping: https://github.com/ipfs/go-ipfs/pull/1507/files works ? what ends up closing the client-side-response-body ?
vijayee_ has joined #ipfs
<whyrusleeping> os.Exit(1)
<whyrusleeping> in ipfs-shell i call res.Close() when i'm done
<whyrusleeping> jbenet: o.
rschulman__ has quit [Quit: rschulman__]
<border> why cool things always happen in portland
<border> ?
mdem is now known as semidreamless
www has joined #ipfs
<whyrusleeping> cryptix: something going on with nasa today?
Encrypt has joined #ipfs
<cryptix> crewshift in about an hr
<Tv`> totally read that without the f
<cryptix> :)
rschulman__ has joined #ipfs
<jbenet> whyrusleeping: ipfs-shell doesnt use the cmds lib directly then?
<jbenet> nvm it does
<jbenet> should close automatically when reaching the end of the io.Reader
<jbenet> (we should, or return an io.ReadCloser and hope the user closes.
<whyrusleeping> something something io.Copy(io.Pipe(), resp.Reader()) && resp.Close()
<whyrusleeping> and asynchronously return the other pipe
<whyrusleeping> any other ideas on how to do it if not with a close like this?
<whyrusleeping> (and i think everyone was against one transport per client)
<jbenet> whyrusleeping: a close like that looks fine to me.
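Spelled out, the close-on-EOF pipe pattern whyrusleeping sketched above ("io.Copy(io.Pipe(), resp.Reader()) && resp.Close()"); the Response type here is hypothetical and the real ipfs-shell code may differ:

    import "io"

    type Response struct {
        body io.ReadCloser // the underlying http.Response.Body
    }

    // Reader returns a reader that closes the HTTP body once it has
    // been fully consumed, so callers never call Close themselves.
    func (r *Response) Reader() io.Reader {
        pr, pw := io.Pipe()
        go func() {
            _, err := io.Copy(pw, r.body)
            r.body.Close()         // release the connection back to the pool
            pw.CloseWithError(err) // a nil err surfaces as io.EOF to the reader
        }()
        return pr
    }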
<whyrusleeping> jbenet: also, using a transport per client doesnt leave open file descriptors laying about
<jbenet> how so? (i thought using a transport per client wastes strictly more resources, not less).
<whyrusleeping> hrm, looking at it again i guess it depends on which api call i make
<whyrusleeping> Close route it is
zorun_ has quit [Ping timeout: 256 seconds]
zorun has joined #ipfs
<jbenet> richardlitt: well that was fun. the server which is running static.benet.ai is old, hasnt been updated in a while. even the docker images had been gc-ed so docker couldnt start after a failure. had to find the same image lying about in another server, and transfer it over. its in ipfs now, so yay.
<jbenet> whyrusleeping: docker save + docker load = easiest way to move docker images into ipfs
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/http-client-close from 50af582 to 8286aba: http://git.io/vYqla
<ipfsbot> go-ipfs/fix/http-client-close 8286aba Jeromy: attempt at properly closing http response bodies...
<jbenet> {save, load} = images. {export, import} = containers.
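In shell terms, roughly (a sketch; <hash> is whatever ipfs add prints):

    docker save myimage > myimage.tar
    ipfs add myimage.tar             # prints <hash>
    ipfs cat <hash> | docker load    # docker load reads the tar from stdin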
rschulman__ has quit [Quit: rschulman__]
<jbenet> richardlitt: thanks again for letting me know! (im almost at the point of switching to ipfs completely)
<whyrusleeping> jbenet: one other way we could do it, is to have the resp.Reader() return an io.ReadCloser
<whyrusleeping> so you can close that instead of having to call close on the response object
<jbenet> whyrusleeping: "13:11 <•jbenet> (we should, or return an io.ReadCloser and hope the user closes."
<jbenet> the problem with that is it's very annoying for the user.
<jbenet> i think the "close on EOF" makes sesne
<jbenet> sense*
<whyrusleeping> alright
<whyrusleeping> so 1507 is good? want to CR?
dignifiedquire has quit [Quit: dignifiedquire]
<jbenet> waiting for tests
<ipfsbot> [go-ipfs] jbenet deleted fix/http-client-close at 8286aba: http://git.io/vYquI
<whyrusleeping> jbenet: how do i rerun tests on circleCI?
<whyrusleeping> oh, nvm, had to be signed in
<whyrusleeping> duh
<jbenet> that helps :)
<jbenet> btw, you gave them permissions to everything?
<jbenet> to sign you up, circle asks for R/W to everything.
<whyrusleeping> yeah, i didnt see any options to turn down perms
<jbenet> (i use ipfsbot)
<whyrusleeping> ah, thats probably a good idea....
<jbenet> (creds on meldium)
<whyrusleeping> right
<whyrusleeping> let me go do that instead
<jbenet> i think you can disable perms in github settings somewhere.
<whyrusleeping> yeap, already done
<jbenet> wonder how many companies one can own through the various services.
<whyrusleeping> i normally remove perms from things i'm not using any more
<jbenet> would be good to have an http page that shows a list of services and #s of repos one can own through them.
<whyrusleeping> too bad that info isnt very public
<jbenet> could help them step up their security game, and make github finally solve the granular auth problem.
<jbenet> it is for public repos.
<jbenet> can crawl github or their sites.
<whyrusleeping> oooh, thats true
rschulman__ has joined #ipfs
rschulman__ has quit [Client Quit]
<whyrusleeping> interesting... i can still restart builds on travis even after revoking its github permissions
rschulman__ has joined #ipfs
rschulman__ has quit [Client Quit]
why-via-nc has joined #ipfs
border has quit [Ping timeout: 256 seconds]
border_ has joined #ipfs
<why-via-nc> i can irc through netcat :P
<why-via-nc> i love text based protocols
why-via-nc has quit [Remote host closed the connection]
<ipfsbot> [go-ipfs] whyrusleeping force-pushed feat/patch-create from 95041f5 to f1d8b2f: http://git.io/vYLXR
<ipfsbot> go-ipfs/feat/patch-create d4a322e Jeromy: allow patch to optionally create intermediate dirs...
<ipfsbot> go-ipfs/feat/patch-create d0d3564 Jeromy: let rm understand paths...
<ipfsbot> go-ipfs/feat/patch-create f1d8b2f Jeromy: more tests and better path handling in object...
<alu> I wish I could go to that IPFS meetup
<alu> I'm on the other side of the country tho
headbite has quit [Ping timeout: 246 seconds]
<jbenet> alu: start one!
<jbenet> i think you and rschulman are both in DC area?
<alu> yeah
<alu> idk who rschulman is but it might be cool to dedicate a night at the hackerspace
therealplato has quit [Ping timeout: 246 seconds]
<jbenet> can someone make a map for ipfs/community with our locations? bonus points if it tracks the peer.ID of a node we can individually configure via a file on github or ipfs :)
<alu> after DEFCON I want to move out to the west coast
<alu> i been busy packing all day
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<thefinn93> daviddias: oh you're going to Portland to do a meetup?
<thefinn93> just got the mail from the ctrl-h mailing list
<thefinn93> wasnt expecting to see ipfs there
<thefinn93> not sure why, it makes total sense
therealplato has joined #ipfs
<daviddias> thanks to kyledrake, yes :)
<thefinn93> cool
<daviddias> we've been hacking from CTRLH since Monday and we'll be back to Seattle tomorrow
<thefinn93> ah cool
<whyrusleeping> lgierth: ping
<alu> I'm trying to convince my friend from lainchan to come
<alu> he's in portland
<alu> gave a talk at dockercon, fucking brilliant for his age especially
<whyrusleeping> alu: would be cool to have him there, we're looking for people who are proficient with containers and container workflows
<alu> I'm trying hard lol
<lgierth> whyrusleeping: pong
<lgierth> oh hey i have a voice
<whyrusleeping> yep! makes it easier for me to see when youre online
<whyrusleeping> we should probably update pinbot with the latest ipfs-shell code
<whyrusleeping> it oughta fix the random failures we're seeing
<lgierth> make pinbot_ref && ansible-playbook solarnet.yml
<lgierth> :)
<whyrusleeping> something something unrecognized command 'ansible'
<whyrusleeping> maybe its better now
<lgierth> mhk, how do we give pinbot a newer ipfs-shell? i guess it'll go get the current master of it right?
<lgierth> so it'd only need a simple rebuild
<whyrusleeping> yeah, if its a fresh go-get
<lgierth> good good, on its way
mildred has quit [Quit: Leaving.]
G-Ray has joined #ipfs
pinbot has quit [Remote host closed the connection]
pinbot has joined #ipfs
<lgierth> whyrusleeping: there it is
* lgierth back to the cleaning chores
<whyrusleeping> wooo!
<whyrusleeping> thanks!
<whyrusleeping> we have commits made on every hour of every day of the week: https://github.com/ipfs/go-ipfs/graphs/punch-card
<whyrusleeping> pretty neat
<jbenet> :)
<whyrusleeping> jbenet: question
<whyrusleeping> what happens in the case where s.ConnsToPeer doesnt have any connections yet?
<whyrusleeping> or is that not possible?
<jbenet> my thing is probably broken. it should have conns, but the async nature makes it possible not to, as you (and travis) have pointed out
Encrypt has quit [Quit: Quitte]
www1 has joined #ipfs
www has quit [Ping timeout: 246 seconds]
semidreamless has quit [Quit: Connection closed for inactivity]
G-Ray has quit [Quit: Konversation terminated!]
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/notif-test from dbc7a8f to 348f0bf: http://git.io/vYmmi
<ipfsbot> go-ipfs/fix/notif-test b60f494 Jeromy: fix race condition in notifications test...
<ipfsbot> go-ipfs/fix/notif-test 348f0bf Jeromy: fix same test in swarm...
notduncansmith has joined #ipfs
caseorganic has joined #ipfs
sff has joined #ipfs
notduncansmith has quit [Ping timeout: 260 seconds]
sff_ has quit [Ping timeout: 244 seconds]
atrapado has quit [Quit: Leaving]
<jbenet> whyrusleeping any luck?
<jbenet> daviddias, mappum, kbala, krl, kyledrake, caseorganic: do you want to present/demo anything tonight? i was thinking of having half the time (or more) be just demos :)
<caseorganic> jbenet: i'll likely just listen and maybe add some to the post
<jbenet> caseorganic +1
<daviddias> I know krl has been prepping the IPFS apps demo
<caseorganic> bret: want to demo anything this evening at the IPFS meetup?
<daviddias> I got nothing though :(
<bret> caseorganic: ermm.. all i have is a raspi2 running ipfs
<bret> i could demo that, but its not that interesting... just a node
<caseorganic> bret: ah, that's pretty neat though. see you this evening!
<bret> yeah, I can talk about how to set it up
<jbenet> bret: sweet
<daviddias> bret awesome!
<caseorganic> bret++
<bret> \o/
<whyrusleeping> jbenet: i think that the changes i made in p2p/net/mock/ are better
<whyrusleeping> wanna take a look at them?
<bret> are people going to be at ctlh earlier than 7?
<whyrusleeping> bret: i think they have been there all day
<caseorganic> bret: you're free to come by earlier!
<caseorganic> bret: we're all at ctlh right now. i've been here since 10a
<bret> ill prob grab dinner and take the max over
<lgierth> jbenet: regarding cjdns, i'm looking into multihash and secio first, and typing up what i have in mind so far. please yell if that's totally the wrong end
<lgierth> whyrusleeping daviddias ^
<lgierth> cjdns' cryptoauth is basically what secio is too: an encrypted stream between two nodes
<lgierth> (except it's not actually a stream cause it doesn't provide reliability)
<lgierth> i figure lack of reliability is a problem
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/notif-test from 348f0bf to 47cd70f: http://git.io/vYmmi
<ipfsbot> go-ipfs/fix/notif-test 47cd70f Jeromy: fix same test in swarm...
<whyrusleeping> jbenet: ^ thats probably the best i can do for that test
www1 has quit [Ping timeout: 240 seconds]
www has joined #ipfs
<jbenet> lgierth: lack of reliability is ok for us, particularly if we allow udp.
<jbenet> lgierth: (quic is another story, so far we've ... relied ... on reliability
<jbenet> lgierth: but a lot of our protocols fit in one packet.