notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
jbenet: did you take a look at that gist?
<pjz>
:( ah well, worth a shot :)
<pjz>
whyrusleeping: what failed?
<whyrusleeping>
some syscalls and networking stuff
<jbenet>
whyrusleeping: interesting
www has quit [Ping timeout: 252 seconds]
<jbenet>
whyrusleeping: could be a good wrapper of ipns. Definitely need something like that to make it easy to do things. The consistency and so on is complicated but yeah
rschulman__ has quit [Quit: rschulman__]
tilgovi has quit [*.net *.split]
domanic has quit [*.net *.split]
<whyrusleeping>
yeah, the implementation i've got going for the docker registry 'works'
<whyrusleeping>
but it will break if you try to do any other ipns stuff at the same time
<whyrusleeping>
and its pretty slow
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to feat/patch-create: http://git.io/vYJV6
<ipfsbot>
go-ipfs/feat/patch-create cb89a6e Jeromy: let rm understand paths...
ThomasWaldmann has joined #ipfs
hellertime has joined #ipfs
pfraze has quit [Remote host closed the connection]
<whyrusleeping>
jbenet: what is the expected behaviour if i try to create a link with no name?
www1 has joined #ipfs
sbruce has quit [Read error: Connection reset by peer]
kalmi has joined #ipfs
sbruce has joined #ipfs
<jbenet>
Error. We're moving to named links anyway
<whyrusleeping>
cool cool
rschulman__ has joined #ipfs
<rschulman__>
whyrusleeping: Just looked at that gist you posted earlier. All of those would be really helpful!
domanic has joined #ipfs
<ei-slackbot-ipfs>
<zramsay> vendoring the go-ipfs-shell requires a ton of packages
<ei-slackbot-ipfs>
<zramsay> would it be better to have one package that imports all of them
zramsay has joined #ipfs
kalmi has quit [Remote host closed the connection]
zramsay has quit [Client Quit]
domanic has quit [Ping timeout: 255 seconds]
<whyrusleeping>
rschulman__: glad you think so!
<whyrusleeping>
zramsay: i can work on that for you!
<whyrusleeping>
@zramsay: o/
<whyrusleeping>
its mostly me being lazy and not vendoring the commands lib into ipfs-shell
<alu>
ipfs name publish avision QmW3dBdpDezmSVhwQUbzk61i2KkTijwvWJbcG3KJiyszng
<whyrusleeping>
yeah, ipfs name publish QmW3dBdpDezmSVhwQUbzk61i2KkTijwvWJbcG3KJiyszng
<whyrusleeping>
we're getting closer to having the extensible naming working
<alu>
woops haha
<whyrusleeping>
but for now we're stuck with only being able to use your peer ID
<alu>
should I remember these hashes somewhere
<alu>
published to <hash>: <hash>
<whyrusleeping>
eh, one of them is the one you typed, and the other is your peer ID
reit has joined #ipfs
reit has quit [Client Quit]
reit has joined #ipfs
<whyrusleeping>
jbenet: this random EOF on the API failure is getting more annoying by the minute
<whyrusleeping>
i want to stop what i'm doing to fix it, but thats just going farther down the rabbit hole
* jbenet
wonders if it's related to the other rabbit hole on the http api we found before and punted on
<whyrusleeping>
i reallllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllyyyy hope not
<whyrusleeping>
and that only broke when i changed things
<jbenet>
That's a lot of ls
<whyrusleeping>
and it broke consistently
<jbenet>
whyrusleeping: ok quick sanity check on filesystems
<whyrusleeping>
alright
<whyrusleeping>
lets check some sanity
<whyrusleeping>
first, which filesystem?
<jbenet>
whyrusleeping: it occurs to me that -- assuming files were always leaf data with no links -- unixfs can be represented by _only_ a directory
<whyrusleeping>
hfs+?
<whyrusleeping>
so intermediate blocks could just look like directories?
<jbenet>
(I know the interesting part in _our_ unixfs is the dag chunking, and concatenation, but that's really more about data importing, not unix)
<whyrusleeping>
how would you tell a root level intermediate block from a large chunked file?
<jbenet>
No, I just mean, ignore all the chunking for now, I think files are just any mdag object that have a "raw data" output, no?
<jbenet>
So the interesting part becomes the directories?
<whyrusleeping>
i mean, sure, if we dont chunk anything
<whyrusleeping>
and that depends on your definition of interesting
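The simplification jbenet floats above -- files are just leaf objects with raw data, directories are objects with named links -- can be sketched in a few lines of Go. The `Node`, `Link`, and `Classify` names are illustrative only, not go-ipfs's actual structures, and the sketch deliberately ignores chunking (whyrusleeping's objection: a root-level intermediate block of a chunked file would look like a directory under this rule):

```go
package main

import "fmt"

// Link is a hypothetical named merkledag link.
type Link struct {
	Name string
	Hash string
}

// Node is a hypothetical merkledag object: raw data plus named links.
type Node struct {
	Data  []byte
	Links []Link
}

// Classify applies the simplified rule from the discussion: raw data and
// no links means "file", any links means "directory".
func Classify(n Node) string {
	switch {
	case len(n.Links) == 0 && len(n.Data) > 0:
		return "file"
	case len(n.Links) > 0:
		return "directory"
	default:
		return "empty"
	}
}

func main() {
	file := Node{Data: []byte("raw leaf data")}
	dir := Node{Links: []Link{{Name: "a.txt", Hash: "Qm..."}}}
	fmt.Println(Classify(file), Classify(dir))
}
```

Once chunking returns, this breaks exactly as whyrusleeping says: a chunked file's root node has links too, so links-vs-no-links alone can't distinguish it from a directory.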
mdem has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
jbenet: i wrote a program using ipfs-shell that gets that api error pretty readily
<whyrusleeping>
any idea at all how to start debugging http api crap?
<whyrusleeping>
ngrok is a bit too slow, because i have to throw a few hundred requests at it
<jbenet>
Blame: I'm working on a linked data version of the core ipfs data structure. I'm thinking about how to layer stackstream. I need a way to extract the first layer of links out of a stackstream object. Also another possible reconciliation atm is to make stackstream a type, which can be applied or referenced. And a File to be a {link to a stackstream "combinator",
<jbenet>
links to sub chunks, raw data}
<jbenet>
whyrusleeping: Hm not sure. Maybe ask daviddias or mappum?
<whyrusleeping>
daviddias: mappum *poke*
<jbenet>
Blame also, when CALLing, does the child's OUTPUT get placed on the stack, or written out? I ask because the example on the readme implies the latter? Or I'm reading it wrong
<daviddias>
reading
<whyrusleeping>
tldr: http api randomly breaks, i can repro by throwing a ton of requests at it
<daviddias>
is this related to the fact that pinbot sometimes can't make requests? Or for all use cases of the API
<daviddias>
breaks as in, "process throws and dies"?
<whyrusleeping>
daviddias: yeap, this is the problem pinbot is hitting
<jbenet>
whyrusleeping: it's possible that it's stuck handling requests?
<whyrusleeping>
and no, i either get an EOF client side, or connection reset by peer
<whyrusleeping>
jbenet: i dont do requests async
<whyrusleeping>
one after another
<jbenet>
whyrusleeping: does it _die_ or get _stuck_ with N requests
tilgovi_ has quit [Quit: No Ping reply in 180 seconds.]
tilgovi has joined #ipfs
<Blame>
jbenet: OUTPUT always pushes to output.
<Blame>
not to the stack. When CALL is performed, it shares the same stack with the parent. It can "return" values just by putting them on the stack.
<Blame>
I think the confusion is from the fact there are 2 types of arguments: stack-args and inline-args
<Blame>
inline args let us bootstrap values onto the stack, and make some behaviors defined at "assembly time" rather than runtime.
<Blame>
so "BLOCKID" in the example that is getting 'PUT' is the identifier for the block, which is then being 'CALL'ed (which consumes the identifier of the block to call from the stack, to allow for some metaprogramming)
<Blame>
since this is Turing complete, unless we add more explicit directives (like 'output this raw chunk') you will not be able to perfectly predict what chunks are being called without running the assembly. But you might be able to do some reasonable predicting.
<Blame>
In the example use case, I am assuming that the 'CALL'ed blocks contain similar blocks and leaf nodes contain 'put rawdata; output' rather than the raw content itself. We could add a directive to 'put raw content of block on stack' or 'output raw content of given block'
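Blame's semantics above -- OUTPUT always writes to the output stream rather than the stack, and CALL consumes a block id from the shared stack -- can be sketched as a tiny stack machine. Everything here (`Op`, `Machine`, the instruction names as Go values) is an illustrative reconstruction of the described behavior, not any real stackstream implementation:

```go
package main

import "fmt"

// Op is one assembled instruction. PUT carries an inline-arg; CALL and
// OUTPUT take their operands from the stack.
type Op struct {
	Name string // "PUT", "CALL", or "OUTPUT"
	Arg  string // inline-arg for PUT; empty otherwise
}

// Machine holds assembled blocks keyed by id, plus the single shared
// stack and the output stream.
type Machine struct {
	Blocks map[string][]Op
	Stack  []string
	Output []string
}

// Run executes one block. CALLed blocks share the parent's stack, so a
// callee can "return" values just by leaving them there.
func (m *Machine) Run(block string) error {
	for _, op := range m.Blocks[block] {
		switch op.Name {
		case "PUT":
			m.Stack = append(m.Stack, op.Arg)
		case "OUTPUT":
			// pops to output, never back onto the stack
			m.Output = append(m.Output, m.pop())
		case "CALL":
			// the callee's id comes from the stack (metaprogramming)
			if err := m.Run(m.pop()); err != nil {
				return err
			}
		default:
			return fmt.Errorf("unknown op %q", op.Name)
		}
	}
	return nil
}

func (m *Machine) pop() string {
	v := m.Stack[len(m.Stack)-1]
	m.Stack = m.Stack[:len(m.Stack)-1]
	return v
}

func main() {
	m := &Machine{Blocks: map[string][]Op{
		// root CALLs two leaves in order
		"root": {{"PUT", "leaf1"}, {"CALL", ""}, {"PUT", "leaf2"}, {"CALL", ""}},
		// leaves follow the "put rawdata; output" pattern from the example
		"leaf1": {{"PUT", "hello "}, {"OUTPUT", ""}},
		"leaf2": {{"PUT", "world"}, {"OUTPUT", ""}},
	}}
	if err := m.Run("root"); err != nil {
		panic(err)
	}
	fmt.Println(m.Output) // leaf data, concatenated in call order
}
```

Note how this also illustrates the prediction problem jbenet raises: which block ids get CALLed depends on what earlier instructions left on the stack, so in general you can't enumerate them without running the assembly.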
reit has joined #ipfs
dignifiedquire has joined #ipfs
atomotic has joined #ipfs
reit has quit [Quit: Leaving]
rschulman__ has joined #ipfs
<ipfsbot>
go-ipfs/feat/patch-create a52650d Jeromy: let rm understand paths...
<ipfsbot>
go-ipfs/feat/patch-create 95041f5 Jeromy: more tests and better path handling in object...
Encrypt has joined #ipfs
<rschulman__>
whyrusleeping: Maybe we could do something simpler in the near term by adding a “ipfs file type” command that just returns “file”, “directory”, or “none”?
<rschulman__>
sorry, this is about the ipns stuff from earlier.
mildred has quit [Ping timeout: 252 seconds]
<whyrusleeping>
rschulman__: how can we tell something that is a file from something that just randomly has the same data as a file?
<rschulman__>
jbenet was implying earlier that there is data in a unixfs node, though I haven’t figured out how to find it yet.
<rschulman__>
that is unique data other than just
<rschulman__>
whatever
<rschulman__>
:)
bedeho has quit [Ping timeout: 246 seconds]
vijayee_ has joined #ipfs
<richardlitt>
jbenet: your site is down, I think.
<whyrusleeping>
richardlitt: which site?
<jbenet>
Blame: ah that makes sense. Hm, I would've expected the output of one module to feed into the caller, that way the caller can compose the outputs (a bit like pipes?). Maybe we can remove OUTPUT and instead "output" whatever is left on the stack? That could get a bit complicated. Another stack of outputs? --- I guess streaming is where it matters, want to be
<jbenet>
able to output as we go. Hmmm
Encrypt has quit [Quit: Quitte]
<jbenet>
Blame: also in terms of predicting, I just want to predict the object ids called by the current module. Easier than all objects it ever references, but probably still subject to the Turing completeness problem
<jbenet>
rschulman__: right now you have to parse the protobuf, and there's no way of knowing outside of the object whether it is unixfs or not; it depends on the operation (whether it tries to interpret it as unixfs). Soon it will be able to know
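A minimal sketch of the kind of peek rschulman__'s proposed "ipfs file type" command would do, assuming the unixfs Data message keeps its Type enum in protobuf field 1 with 1 = Directory and 2 = File (as in go-ipfs's unixfs.proto). The `fileType` helper is hypothetical, it only handles single-byte varints, and it shows exactly the problem whyrusleeping raises: arbitrary raw bytes that happen to start with the right tag would be misclassified, because nothing outside the object says whether it is unixfs at all.

```go
package main

import "fmt"

// fileType inspects the first protobuf field of a node's Data payload.
// A unixfs payload starts with tag byte 0x08 (field 1, wire type varint),
// followed by the Type enum value. Anything else reports "none".
func fileType(data []byte) string {
	if len(data) < 2 || data[0] != 0x08 {
		return "none"
	}
	switch data[1] {
	case 1:
		return "directory"
	case 2:
		return "file"
	default:
		return "none"
	}
}

func main() {
	fmt.Println(fileType([]byte{0x08, 0x02})) // header of a unixfs File
	fmt.Println(fileType([]byte{0x08, 0x01})) // header of a unixfs Directory
	fmt.Println(fileType([]byte("raw bytes"))) // not a unixfs envelope
}
```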
<whyrusleeping>
jbenet: yeah, its kinda fucked up and we have to hack around it
<jbenet>
richardlitt: thanks I'll check it out
<richardlitt>
jbenet: no problem
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1507: attempt at properly closing http response bodies (master...fix/http-client-close) http://git.io/vYLAc
<jbenet>
It's busted alright. Need to get to laptop. I'm going to start 302ing to ipfs gateway. More reliable
<whyrusleeping>
but we make a new connection *anyways*
<whyrusleeping>
nothing we do supports reusing connections
<whyrusleeping>
AFAIK
<whyrusleeping>
although i know little about http
<jbenet>
whyrusleeping: afaict -- from those links you showed -- this is a problem in the client, not the server. so closing the http request when done is the thing to do. i guess we don't know _when_ we end because requests may return, but keep writing...
<daviddias>
whyrusleeping it is trying to reuse the socket from the listener side? Even after having it closed?
<whyrusleeping>
its trying to reuse the socket clientside i believe
<jbenet>
daviddias: no, the client. this is ok to do, and part of http.
<daviddias>
I remember hitting some bugs in Node a while ago (now solved) where the requests multiplexed over one connection would be all aborted if one of the requests was canceled, but that was because of the connection pool
<jbenet>
it's just that the lib implementation has bugs
atrapado has joined #ipfs
<daviddias>
ok. But wasn't whyrusleeping starting the requests from different processes? (which should mean different connection pools? )
<jbenet>
daviddias: no, that's when it works fine. it breaks when it is the same process.
<jbenet>
whyrusleeping: I'm ok using a new transport per request to bypass the bug, but it's going to be slower, particularly in command pipelines
<cryptix>
sounds like request 2 is stuck until request 1 was handled when the client reuses the connection
<cryptix>
(which is expected imho if you use the default pooling)
<daviddias>
jbenet: ^ that was the test I was thinking of from yesterday
<jbenet>
daviddias: that process is running many requests and printing out the first one that fails, i think. whyrusleeping ?
<daviddias>
aaaaaah! Get it now
<daviddias>
yeah, so I ran into the same problem in the past and had to set http agent to false so that the impl didn't do connection reuse
<daviddias>
and yes, adds a lot of overhead
<jbenet>
daviddias: funny that neither go or node got it right :)
<whyrusleeping>
jbenet: correct
<jbenet>
whyrusleeping: we could go through all the commands and try to close them. i mean they _should_ close things.
<jbenet>
whyrusleeping: but in that case, we should probably have a test that verifies this per cmd. not sure if we can create a generic test that hits every request and tests.
<cryptix>
'it looks like the first time the http client properly returns
<cryptix>
empty body but the transport ends up returning the broken connection back to the pool.'
<whyrusleeping>
if i make two separate requests, i shouldnt have to wait for one to 'finish' before i can launch the other
<whyrusleeping>
thats a bug
<cryptix>
nope
<cryptix>
not in my book.
<cryptix>
you need to know about the pooling, granted - but it's exactly what the pooling is for: not making tcp handshakes for every request
<whyrusleeping>
so if http.Get('X') and another http.Get('X') are made to the same server
<cryptix>
if you want that.. or the first req you fire off will block indef (long polling anyone?), you need to switch off the pooling or use a separate transport for the first one
<whyrusleeping>
and one of them fails because the other isnt 'done yet'
<whyrusleeping>
thats not a bug?
<cryptix>
personally, i dont think so
<whyrusleeping>
then they should mark that very clearly somewhere where everyone can see it
<cryptix>
you assume the runtime (or http lib) governs your requests and decides at some point to make a separate tcp con for the 2nd request
<cryptix>
well.. its in the docs
<whyrusleeping>
i'm able to repro with http.Get("...")
<whyrusleeping>
a global variable like that, especially one that's critical (and a default) for most of the http calls, shouldn't have thread-safety or resource-leakage issues that blatant
<cryptix>
sigh
<cryptix>
yea... sometimes i wonder why you guys hack this in go if you are offended by most of its decisions
<cryptix>
just dont use the default transport and be done with it
<cryptix>
sorry everyone, whyrusleeping especially
<whyrusleeping>
its alright, lol
<jbenet>
languages are not perfect -- faaaaaar from it. every language you run into will have TONS of bugs. both in the code and in the decisions. just _using_ a language doesnt mean "i agree with everything the language thinks is right to do". even HTTP is riddled with problems, that people do not agree with. its sad, but yeah. And, we can try and improve languages,
<jbenet>
but first we must be able to tell what we think is wrong.
<jbenet>
and we're not offended by most of its decisions, otherwise, yeah, we wouldn't be using go. personally, i'm offended by only a few. i think maybe there's a sample bias here, where we only talk about things that we disagree with, as agreement is not worthy of note?
<jbenet>
s/offended/disappointed\/annoyed/
<cryptix>
yea, i see
<jbenet>
for example, i really like the type signatures. and how interfaces are written-- super nice :)
<cryptix>
still think 8946 isnt the problem here.. its a corner case of broken pool reuse when the server trashes the client's http connection. is that what the ipfs server is doing here?
<cryptix>
if so, why does the server terminate a connection mid flight anyway?
<jbenet>
that's a good point o/ -- we **should** be closing requests correctly
<whyrusleeping>
the server *doesnt* close requests afaik
<whyrusleeping>
it waits for the client to close the request
<cryptix>
yup. client has to close the bodies
<cryptix>
otherwise it fills up the pool
<cryptix>
(which is why i alluded to long polling on the same transport being a problematic assumption)
<whyrusleeping>
i think we should just use a new transport per request
<jbenet>
whyrusleeping: when will those open requests close then?
<jbenet>
_something_ has to close them.
www1 has quit [Ping timeout: 240 seconds]
<whyrusleeping>
jbenet: so then my currently open PR
semidreamless has joined #ipfs
<jbenet>
whyrusleeping doesnt it also mean we have to go through all the commands and make sure they close the body?
<jbenet>
whyrusleeping: i imagine one test could test all the commands and make sure the body of the response writer is closed correctly (passing a mocked closer), but not sure.
<whyrusleeping>
jbenet: yeah, we *should* for correctness, but since we only use one connection per command invocation
<whyrusleeping>
and the process dies and closes all its fd's
<whyrusleeping>
it doesnt matter for the cli stuff
<whyrusleeping>
but ipfs-shell consumers will need to close
<whyrusleeping>
and thats going to be complicated as all hell
<jbenet>
the API should be correct as consumers wont all be using our code (cli or shell)
<jbenet>
like, if curl-ing breaks...
<whyrusleeping>
the API *is* correct
<whyrusleeping>
the client code is whats incorrect
<jbenet>
i meant the command responses
<jbenet>
do all the command responses close the body of the response correctly?
<whyrusleeping>
sprintbot: working on obnoxious http fun stuff
<jbenet>
sprintbot: hopping trains and working on ipfsld and talk this eve
<daviddias>
sprintbot: working on the routing layer stuff
semidreamless has quit [Quit: Leaving...]
<vijayee_>
jbenet: I'm about done with a tool to do the tour in a workshopper fashion but I don't know what type of test cases the different chapters should have and also how should the standalone tool connect to the local ipfs to verify the user completed the commands
kalmi has quit [Remote host closed the connection]
<whyrusleeping>
jbenet: what exactly do you mean by 'do all the command responses close the body of the response correctly?'
<whyrusleeping>
for the CLI, i would be calling close as the last thing in cmd/ipfs/main.go
<whyrusleeping>
and asynchronously return the other pipe
<whyrusleeping>
any other ideas on how to do it if not with a close like this?
<whyrusleeping>
(and i think everyone was against one transport per client)
<jbenet>
whyrusleeping: a close like that looks fine to me.
<whyrusleeping>
jbenet: also, using a transport per client doesnt leave open file descriptors laying about
<jbenet>
how so? (i thought using a transport per client wastes strictly more resources, not less).
<whyrusleeping>
hrm, looking at it again i guess it depends on which api call i make
<whyrusleeping>
Close route it is
zorun_ has quit [Ping timeout: 256 seconds]
zorun has joined #ipfs
<jbenet>
richardlitt: well that was fun. the server which is running static.benet.ai is old, hasnt been updated in a while. even the docker images had been gc-ed so docker couldnt start after a failure. had to find the same image lying about in another server, and transfer it over. its in ipfs now, so yay.
<jbenet>
whyrusleeping: docker save + docker load = easiest way to move docker images into ipfs
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/http-client-close from 50af582 to 8286aba: http://git.io/vYqla
<ipfsbot>
go-ipfs/feat/patch-create d0d3564 Jeromy: let rm understand paths...
<ipfsbot>
go-ipfs/feat/patch-create f1d8b2f Jeromy: more tests and better path handling in object...
<alu>
I wish I could go to that IPFS meetup
<alu>
I'm on the other side of the country tho
headbite has quit [Ping timeout: 246 seconds]
<jbenet>
alu: start one!
<jbenet>
i think you and rschulman are both in DC area?
<alu>
yeah
<alu>
idk who rschulman is but it might be cool to dedicate a night at the hackerspace
therealplato has quit [Ping timeout: 246 seconds]
<jbenet>
can someone make a map for ipfs/community with our locations? bonus points if it tracks the peer.ID of a node we can individually configure via a file on github or ipfs :)
<alu>
after DEFCON I want to move out to the west coast
notduncansmith has quit [Ping timeout: 260 seconds]
sff_ has quit [Ping timeout: 244 seconds]
atrapado has quit [Quit: Leaving]
<jbenet>
whyrusleeping any luck?
<jbenet>
daviddias, mappum, kbala, krl, kyledrake, caseorganic: do you want to present/demo anything tonight? i was thinking of having half the time (or more) be just demos :)
<caseorganic>
jbenet: i'll likely just listen and maybe add some to the post
<jbenet>
caseorganic +1
<daviddias>
I know krl has been prepping the IPFS apps demo
<caseorganic>
bret: want to demo anything this evening at the IPFS meetup?
<daviddias>
I got nothing though :(
<bret>
caseorganic: ermm.. all i have is a raspi2 running ipfs
<bret>
i could demo that, but its not that interesting... just a node
<caseorganic>
bret: ah, that's pretty neat though. see you this evening!
<bret>
yeah, I can talk about how to set it up
<jbenet>
bret: sweet
<daviddias>
bret awesome!
<caseorganic>
bret++
<bret>
\o/
<whyrusleeping>
jbenet: i think that the changes i made in p2p/net/mock/ are better
<whyrusleeping>
wanna take a look at them?
<bret>
are people going to be at ctlh earlier than 7?
<whyrusleeping>
bret: i think they have been there all day
<caseorganic>
bret: you're free to come by earlier!
<caseorganic>
bret: we're all at ctlh right now. i've been here since 10a
<bret>
ill prob grab dinner and take the max over
<lgierth>
jbenet: regarding cjdns, i'm looking into multihash and secio first, and typing up what i have in mind so far. please yell if that's totally the wrong end
<lgierth>
whyrusleeping daviddias ^
<lgierth>
cjdns' cryptoauth is basically what secio is too: an encrypted stream between two nodes
<lgierth>
(except it's not actually a stream cause it doesn't provide reliability)
<lgierth>
i figure lack of reliability is a problem
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/notif-test from 348f0bf to 47cd70f: http://git.io/vYmmi
<ipfsbot>
go-ipfs/fix/notif-test 47cd70f Jeromy: fix same test in swarm...
<whyrusleeping>
jbenet: ^ thats probably the best i can do for that test
www1 has quit [Ping timeout: 240 seconds]
www has joined #ipfs
<jbenet>
lgierth: lack of reliability is ok for us, particularly if we allow udp.
<jbenet>
lgierth: (quic is another story, so far we've ... relied ... on reliability
<jbenet>
lgierth: but a lot of our protocols fit in one packet.