notduncansmith has quit [Read error: Connection reset by peer]
inconshr_ has joined #ipfs
inconshreveable has quit [Ping timeout: 256 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
patcon has quit [Ping timeout: 246 seconds]
<jbenet>
whyrusleeping that's not lame, that's totally fine.
uhhyeahbret has quit [Quit: WeeChat 1.2]
uhhyeahbret has joined #ipfs
<zignig>
how goes ipfsing? been busy here at work and at home.
<zignig>
not had much programming time ! ( NOOOOOOOOOOOOOOO! ) ;)
hpk has quit [Ping timeout: 255 seconds]
reit has quit [Quit: Leaving]
reit has joined #ipfs
Wallacoloo has quit [Ping timeout: 276 seconds]
hpk has joined #ipfs
patcon has joined #ipfs
tilgovi has joined #ipfs
patcon has quit [Ping timeout: 245 seconds]
rht__ has joined #ipfs
<rht__>
sorry noob question on godep: e.g. how to `godep update github.com/jbenet/goprocess`? It errs "cannot find package x in any of $GOROOT, $GOPATH"
<jbenet>
Need to go get it first then godep update <pkg>
<jbenet>
Doesn't play nice with go's universal pathing and github's fork model
<rht__>
the err is on other dependencies, not the goprocess package
<rht__>
likely because of issues with git submodule
<rht__>
so the question should probably be how to go get the godeps?
nessence_ has joined #ipfs
dread-alexandria has joined #ipfs
dread-alexandria has quit [Quit: dread-alexandria]
sharky has quit [Ping timeout: 252 seconds]
sharky has joined #ipfs
kbala has quit [Quit: Connection closed for inactivity]
mildred has joined #ipfs
Blame has quit [Quit: Connection closed for inactivity]
nessence_ has quit [Remote host closed the connection]
nessence_ has joined #ipfs
kbala has joined #ipfs
dread-alexandria has joined #ipfs
mildred has quit [Quit: Leaving.]
mildred has joined #ipfs
pfraze has quit [Remote host closed the connection]
reit has quit [Quit: Leaving]
zabirauf has joined #ipfs
Wallacoloo has joined #ipfs
tilgovi has quit [Remote host closed the connection]
tilgovi has joined #ipfs
mildred1 has joined #ipfs
mildred has quit [Ping timeout: 244 seconds]
zabirauf has quit [Ping timeout: 256 seconds]
mildred1 has quit [Ping timeout: 272 seconds]
Tv` has quit [Quit: Connection closed for inactivity]
tilgovi has quit [Remote host closed the connection]
mildred has joined #ipfs
zabirauf has joined #ipfs
rht__ has quit [Quit: Connection closed for inactivity]
<jbenet>
If you go get a package successfully it gets all the deps. Can you give me a command that fails so I can try to amend it?
manu has quit [Ping timeout: 255 seconds]
inconshreveable has joined #ipfs
inconshr_ has quit [Ping timeout: 256 seconds]
www has joined #ipfs
compleatang has joined #ipfs
<ipfsbot>
[go-ipfs] jbenet created fix-fuse-err (+1 new commit): http://git.io/vL0c8
<ipfsbot>
go-ipfs/fix-fuse-err e675377 Juan Batiz-Benet: fix fuse mount error in linux...
compleatang has quit [Quit: Leaving.]
<ipfsbot>
[go-ipfs] jbenet force-pushed fix-fuse-err from e675377 to bc85a63: http://git.io/vL0CD
<ipfsbot>
go-ipfs/fix-fuse-err bc85a63 Juan Batiz-Benet: fix fuse mount error in linux...
<ipfsbot>
[go-ipfs] jbenet opened pull request #1391: fix fuse mount error in linux (master...fix-fuse-err) http://git.io/vL0CQ
nemik has quit [Read error: Connection reset by peer]
nemik has joined #ipfs
<ipfsbot>
[go-ipfs] jbenet closed pull request #1391: fix fuse mount error in linux (master...fix-fuse-err) http://git.io/vL0CQ
timgws has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<ipfsbot>
[go-ipfs] jbenet created update-goprocess-dep (+2 new commits): http://git.io/vL0RX
<ipfsbot>
go-ipfs/update-goprocess-dep c8abef0 Juan Batiz-Benet: godeps hack: chriscool/go-sleep remains vendored...
<ipfsbot>
go-ipfs/update-goprocess-dep c274a3f Juan Batiz-Benet: update goprocess dep...
compleatang has joined #ipfs
<jbenet>
cryptix o/
kbala has quit [Quit: Connection closed for inactivity]
<cryptix>
hey jbenet :)
<daviddias>
Morning cryptix :)
inconshreveable has quit [Remote host closed the connection]
<cryptix>
hey daviddias
<cryptix>
jbenet: i fell into the trap of starting to implement a protocol helper for git yesterday. but could use some insight from somebody more familiar with the interface.. the one man document i found was only somewhat helpful so far
<cryptix>
maybe chris or whyrusleeping can shed some light on my questions
<jbenet>
cryptix: very cool. im not super familiar but maybe i can take a look?
<cryptix>
sure. so far i only dealt with the 'fetch' capability (thats all gittorrent implements), i can fetch a bare repo under ipfs://$path already and answer the 'list' command, which wants a list of hashes and refs and then requests 'fetch $sha1 HEAD' (basically all refs afterwards)
<cryptix>
but i'm not sure on the format it expects then
<cryptix>
like, how to answer that 'fetch $hash $ref' command is beyond me from the docs
<jbenet>
cryptix: links pls?
<cryptix>
i basically wanted a 'post-hook' that publishes commits to ipfs since dtnconf. whyrusleeping's git-ipfs-rehost already does a lot of that
<jbenet>
also maybe we want ipfs://ipfs/<hash> and ipfs://ipns/<hash> -- otherwise we'd need two protocol handlers
<jbenet>
though not sure, i'm as annoyed by "ipfs://ipfs" as anyone :)
<cryptix>
yea.. once the basics are done, having two helpers for ipfs and ipns is pretty trivial
<cryptix>
its just that it wants $proto://$path or $proto::$path which really annoys me but meh..
<cryptix>
btw git clone http://$gateway/$path from git-ipfs-rehost already works - i just wanted nicer integration. someday you could have a 'git push' capable ipns remote for instance
<jbenet>
yeah exactly, i think this is definitely wanted
<jbenet>
im looking at the git source to find remote impls
<jbenet>
yep -- may want to ask cjb when he's online. (nyc, so should be around in a few hours)
<cryptix>
afaict it directly fetches the $sha1 hashes from the 'fetch $sha1 $refName' command requests
<cryptix>
maybe we could have another git-ipfs-rehost that unpacks the commits in a way that we can directly request those sha1 hashes from ipfs but yea.. thats where my git understanding gets muddy :)
<jbenet>
because we re-wrap all our objects with our merkledag format
<jbenet>
so the graph changes a bit
<jbenet>
what we could do is fetch the objects and look into them
<jbenet>
or have an "import git graph" thing that creates objects where link _names_ are the git sha1 hashes
<jbenet>
so we'd have mappings like $sha1 : $ipfsmultihash
<cryptix>
yup that sounds promising
<jbenet>
the git-ipfs-rehost is a great hack because it leverages the fact that git repos + the http transport use unix files :) -- but this protocol is lower level
<jbenet>
ok so-- can you help me trace a full request here? like what does git ask from our handler?
<cryptix>
yup - i guess you could also get away with just dumping the bare repo in the .git dir but i think its better to follow its rules :)
<jbenet>
(may be useful to write it out as a set of pseudocode function calls in a gist)
<cryptix>
jbenet: sure - lets be ipfs agnostic for a second
<cjb>
if you run "git upload-pack ." in a repo you can see how it'd prepare the remote side of a fetch
<cjb>
(the normal way to do the fetch negotiation is to pipe git fetch-pack on one end to git upload-pack on the other end)
<jbenet>
ahh nice
<jbenet>
cjb: good work figuring all this stuff out.
<cjb>
`export GIT_TRACE=true; export GIT_TRACE_PACKET=true` was pretty helpful
<cjb>
if you do that and then run `git daemon`, you get a both-ways packet log and can see the whole transaction happening
Blame has joined #ipfs
<cjb>
though what I *really* wanted to help understand this was a way to see each function call a Unix binary makes -- like strace/ltrace, but for internal function calls rather than system/library calls
<cjb>
the fact that this tool doesn't exist is seriously my least favorite thing about Unix :)
<cjb>
I think dtrace can do it, but I'm on Linux
<jbenet>
cjb: yeah that'd be really nice. go is getting something like that soon -- though not sure if it will be only for special builds or arbitrary binaries
<cryptix>
that would be nice - the different languages involved dont make that easier (c, perl, python?)
<cjb>
cryptix: yeah, just for C/C++ is fine for me, I wanted it for the Git source
patcon has joined #ipfs
williamcotton has joined #ipfs
compleatang has quit [Quit: Leaving.]
<jbenet>
daviddias without spdy-- maybe take a look + get a feel for the go-ipfs dht implementation? (warning, it's very raw and basic. we've lots of improvement to do) -- the protobuf message says most of what you need: https://github.com/ipfs/go-ipfs/blob/master/routing/dht/pb/dht.proto -- ignore the coral part. can also take a look at the handlers here
equim has quit [Ping timeout: 246 seconds]
<cjb>
oleavr: nice! :)
equim has joined #ipfs
www has quit [Ping timeout: 265 seconds]
domanic has quit [Ping timeout: 256 seconds]
pfraze has joined #ipfs
<krl>
whyrusleeping: how get bw stats again?
<whyrusleeping>
krl: ipfs stats bw
<whyrusleeping>
and -poll to receive continuous updates
<krl>
ok cool thx
<krl>
didn't realize not all commands are listed in --help
<whyrusleeping>
krl: and with wm trays on the bottom: xfce, i3, and that one windows OS
<krl>
windows has placement support in menubar now
<whyrusleeping>
ah
<krl>
should just be xfce and those that might need some magic
<whyrusleeping>
does anyone know how to test for a specific exit code in a sharness test?
<cryptix>
whyrusleeping: $?
<whyrusleeping>
cryptix: thats it, thanks
<whyrusleeping>
hrm... i think theres something in sharness that does it
<whyrusleeping>
test_expect_code
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to http-eventlog: http://git.io/vLuPa
<ipfsbot>
go-ipfs/http-eventlog b4d1f6f Jeromy: add sharness test for log endpoint...
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed close-notify from 155c7ec to bdc03c6: http://git.io/vLuMZ
<ipfsbot>
go-ipfs/close-notify bdc03c6 Jeromy: select with context on closenotify signal...
marklock has quit [Ping timeout: 246 seconds]
domanic has joined #ipfs
<whyrusleeping>
lgierth: ping
<lgierth>
whyrusleeping: pong
<lgierth>
but gotta run in a few
<whyrusleeping>
ah, was just wondering what the gateway node setup is now
<whyrusleeping>
is it just the daemon running on the bare node?
<whyrusleeping>
or is there docker still involved?
<lgierth>
daemon in a docker container (:8080,:4001), nginx in a docker container (:80)
<whyrusleeping>
cool
djoot has joined #ipfs
blame1 has joined #ipfs
www has joined #ipfs
blame1 has quit [Remote host closed the connection]
blame1 has joined #ipfs
Blame has quit []
blame1 has quit [Remote host closed the connection]
Blame has joined #ipfs
Encrypt has joined #ipfs
<krl>
what would be the easiest way to start a daemon in offline mode? and toggle it later?
<whyrusleeping>
krl: we cant do that yet
<krl>
ok, i'll leave it out then for now
<whyrusleeping>
but you could 'kinda' do it by removing all bootstrap addresses
<whyrusleeping>
and then if you want to turn it 'on' you can swarm connect to some bootstrap peers
Tv` has joined #ipfs
<krl>
i just realized this will also not be very nice for new users trying to look at the webui
<whyrusleeping>
but you cant swarm disconnect yet
<whyrusleeping>
krl: yeah... it kinda has to load, lol
<krl>
ok, i'll drop it for now
<krl>
jbenet mocked some other things up that i'm not sure are there yet
<whyrusleeping>
yeah
<krl>
like stats on how many objects and how much local storage i used
<whyrusleeping>
krl: you could get number of objects with ipfs refs local
<whyrusleeping>
and total storage used could be calculated by enumerating each block and stat'ing it
<whyrusleeping>
(although, thats kinda slow)
<krl>
well, yeah, won't do that
<krl>
refs local | wc -l might not be the nicest way of doing this either
<krl>
lots of wasted time (b58 encoding, etc)
<whyrusleeping>
yeah
<krl>
i'm showing # peers now, will make everything else work before i look at more stats
<whyrusleeping>
it could be run in the background, with a 'calculating...' message until its done
<krl>
yeah, i'm doing the menu as a react app, so you can just shoot event stats for display at it
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to feat/filter: http://git.io/vLzKg
<ipfsbot>
go-ipfs/feat/filter 4e92b5b Jeromy: move filter to separate package...
domanic has quit [Remote host closed the connection]
<whyrusleeping>
krl: electron app works nicely :)
<krl>
whyrusleeping: it's getting better :)
<whyrusleeping>
only complaint is that it doesnt appear to clean up after itself nicely
<whyrusleeping>
but i'm sure thats in the works still
<krl>
whyrusleeping: a test for this is in PR
<whyrusleeping>
saweet
inconshreveable has joined #ipfs
domanic has joined #ipfs
Encrypt has quit [Quit: Eating time!]
domanic has quit [Ping timeout: 246 seconds]
<daviddias>
Folks, leaving earlier today to attend a Web/Tech/community event in Lisbon :) will be back later at night :)
<whyrusleeping>
daviddias: have fun!
<whyrusleeping>
and evangelize ipfs ;)
<daviddias>
Thank you :) it will be one good opportunity for that!
<whyrusleeping>
:D
<daviddias>
Since these kinds of events are scarcer around here (compared to where you live) I need to make sure to attend them all :p
patcon has quit [Ping timeout: 256 seconds]
<alu>
hey guys
<alu>
do you think its a good idea to use a rolling reseeded hash for pulling auth codes
<alu>
SHA512(InitialNonce + password) = H1
<alu>
next auth key
<alu>
we're testing openbazaar / bitcoin in janus
<alu>
SHA512(TimeofAuthCreation + password) = h2
<alu>
so the Time of Auth Creation is sent in a notification to the phone when the new two-factor request is created (it arrives in the headers)
<whyrusleeping>
alu: hrm...
<whyrusleeping>
that could work.
<whyrusleeping>
although, i am not a crypto expert, it could still be broken
www has quit [Ping timeout: 252 seconds]
<alu>
okay
<alu>
me n a friend are testing it now
m0ns00n has joined #ipfs
dread-alexandria has quit [Quit: dread-alexandria]
dandroid has joined #ipfs
williamcotton has quit [Ping timeout: 264 seconds]
* whyrusleeping
has a new favorite go package
kbala has joined #ipfs
domanic has joined #ipfs
rht__ has quit [Quit: Connection closed for inactivity]
alu has quit [Quit: WeeChat 0.3.8]
dandroid has quit [Ping timeout: 246 seconds]
temet has joined #ipfs
<temet>
hai hai
<whyrusleeping>
temet: hey there!
<temet>
looking through docs, about to set up ipfs on my vps
<temet>
and home server
<whyrusleeping>
temet: sweet :)
www has joined #ipfs
<whyrusleeping>
temet: let us know if anything is confusing or needs work
<temet>
so my question (as i RTFM in tandem) is, do you mirror all nodes 1:1
alu has joined #ipfs
<whyrusleeping>
temet: what do you mean mirror?
<whyrusleeping>
as in, the content on them?
<alu>
back
<temet>
Okay, so lets say im peered with node x, do i automatically mirror all of x's additions to the ipfs network
<whyrusleeping>
temet: no, a node only stores content that they add themselves, or content that they request
<temet>
okay awesome
<whyrusleeping>
i feel like we need to have that answer somewhere more visible, it gets asked a lot
<temet>
yeah, i'm looking at security implications first, so that i at least know what to button down after installation.
<whyrusleeping>
temet: yeah, ipfs is designed to not trust anyone but yourself
<whyrusleeping>
so other nodes content stays on other nodes unless you explicitly request it
domanic has quit [Ping timeout: 272 seconds]
inconshreveable has quit [Read error: Connection reset by peer]
<temet>
have any other tools been created to subscribe to, let's call it, content channels? That is, let's say you have a manifest file that is always synced, and it is distributed so that nodes always retrieve the newest versions of those files (e.g. a file listing for an entire static website)
inconshreveable has joined #ipfs
<temet>
if not, that's fine, i was looking at how extensible ipfs is, which is my draw to it
<whyrusleeping>
we just discuss things in issues there
<whyrusleeping>
nothing actually gets committed to it
<lucasoutloud>
Oh, so it's kind of like using DHT, isn't it
<temet>
ah okay
<whyrusleeping>
lucasoutloud: content addressing uses the hash of a file to request it, that way, when you get the file, you can hash it to know if its what youre expecting
<whyrusleeping>
ipfs has all of that baked into the lowest layers so you can guarantee youre getting what you want
<temet>
it would also be trivial to bundle (along with a content channel) for non-repudiation issues
<temet>
i.e. detached sigs for content
<lucasoutloud>
whyrusleeping, nice, very nice. Thanks for your help!
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed feat/filter from 4e92b5b to 263b587: http://git.io/vLZYL
<ipfsbot>
go-ipfs/feat/filter 2631816 Jeromy: add in basic address dial filtering...
<ipfsbot>
go-ipfs/feat/filter 90ec71b Jeromy: broke filters out into a struct...
<ipfsbot>
go-ipfs/feat/filter ddc7aac Jeromy: filter incoming connections and add a test of functionality...
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed http-eventlog from b4d1f6f to 96e98a8: http://git.io/vLg9m
<ipfsbot>
go-ipfs/http-eventlog a676b5a Jeromy: move eventlogs to an http endpoint...
<ipfsbot>
go-ipfs/http-eventlog 67be6bb Jeromy: skip logs when no writers connected...
<ipfsbot>
go-ipfs/http-eventlog 90896f2 Jeromy: clean up unused log options...
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed close-notify from bdc03c6 to 85685bd: http://git.io/vLuMZ
<ipfsbot>
go-ipfs/close-notify 85685bd Jeromy: select with context on closenotify signal...
<ipfsbot>
go-ipfs/close-notify 22eb796 Jeromy: cancel contexts if http connections closes...
Guest96_ has joined #ipfs
warner has joined #ipfs
atrapado has joined #ipfs
williamcotton has joined #ipfs
Wallacoloo has quit [Quit: Leaving.]
temujin has joined #ipfs
<whyrusleeping>
jbenet: putting multistream into ipfs for protocol selection is fairly easy
<whyrusleeping>
but the hard part is getting it set up so the muxer can be selected
<whyrusleeping>
and i'm fairly certain we are going to have to put it into go-peerstream
<whyrusleeping>
or, maybe we could make multistream a transport for peerstream?
<whyrusleeping>
hmmm...
* whyrusleeping
tinkers
Encrypt has quit [Quit: Sleeping time!]
G-Ray has joined #ipfs
mildred has quit [Quit: Leaving.]
<jbenet>
krl: we could have a special command made for this, like `ipfs repo stats` which could count up the objects, total storage size, pins, etc
mildred has joined #ipfs
<jbenet>
sprintbot: lots of CR. fixed some bugs last night. wire + sys diagrams today.
<jbenet>
temet: yeah that (content channel) sounds good. we had some similar but very early thoughts around pub/sub groups using records on the dht.
<jbenet>
whyrusleeping: go-peerstream doesnt touch the bits in the connections, it only accepts + manages them.
<jbenet>
whyrusleeping: this should probably go within p2p/net/conn and p2p/host
<jbenet>
whyrusleeping: errr i think p2p/protocol
<temet>
hmm should it be implemented that high up?
<jbenet>
temet: the content channels will need a way to find each other, that's what the "routing" layer (today a dht) is for
void has joined #ipfs
<temet>
or... we can keep it within the content dispersal
<temet>
methodologies ... i.e. manifest file + signature
<jbenet>
temet: what do you mean?
<jbenet>
temet: im not following.
<temet>
and have those things distribute through ipfs.. and not have it sit so low on the OSI stack
<jbenet>
ahh ok.
<temet>
i.e. RSS feeds
<jbenet>
temet: well i think you'd want a mutable name, which is roughly what i described above.
<jbenet>
temet: but what you say makes sense-- you can probably do it on top of ipfs/ipns entirely.
<temet>
since it would be much easier to implement
<temet>
since it remains agnostic to ipfs internals
<jbenet>
yeah makes sense.
dandroid has joined #ipfs
tilgovi has quit [Ping timeout: 252 seconds]
www has quit [Ping timeout: 272 seconds]
<whyrusleeping>
jbenet, having it in p2p/protocol works for streams
<whyrusleeping>
but we have different handlers for streams vs raw connections
therealplato has quit [Ping timeout: 244 seconds]
mildred has quit [Quit: Leaving.]
<jbenet>
whyrusleeping: well we can lift up where the streaming muxer is added-- oh crap. you're right, it's under go-peerstream
<whyrusleeping>
yeap
<temet>
can you define any data node as a stream?
dandroid has quit [Ping timeout: 246 seconds]
<temet>
i.e. mknod fifo pipes
<jbenet>
temet: not in the lowest level, cause those are mutable. you could construct a "pipe object" that gives details where to connect to to receive the stream
<jbenet>
or _whom_ to, (ie. avoid listing addresses)
<temet>
will the stream be normally propagated through dht's? or will they be static fixed points like a video stream running on ffserver?
<temet>
basically, is it just a "hyperlink" to where to find the origin, or does the stream propagate through the nodes like a file would
tilgovi has joined #ipfs
<jbenet>
whyrusleeping that's tricky. could put one mss before go-peerstream and if they select non-muxer, use a simple base case transport for go-peerstream that doesnt allow opening streams
<jbenet>
whyrusleeping and then another mss on top of go-peerstream that routes to the proper handlers. maybe not "another", maybe the thing under go-peerstream (before) just checks the header, and selects the "go-peerstream transport" (spdy, yamux, single stream) based on that
<jbenet>
whyrusleeping i can write a single stream transport for you
<jbenet>
hey wking around?
<wking>
yeah
m0ns00n has quit [Quit: Leaving]
<wking>
are we figuring out transport negotiation with multistream-select? ;)
tilgovi has quit [Quit: No Ping reply in 180 seconds.]
tilgovi has joined #ipfs
<kbala>
hi jbenet, when would you like to talk today?
<jbenet>
wking: no, wanted to talk about docker
<wking>
ok
<wking>
sorry I've been slow there
<jbenet>
kbala: i could talk sooner, but maybe we could make it regular? 17:30 in both days? -- unless you need input now?
timgws has joined #ipfs
<kbala>
jbenet: 17:30 is good, i don't need input right away
dread-alexandria has quit [Quit: dread-alexandria]
<whyrusleeping>
jbenet: i was thinking about a single stream transport
<whyrusleeping>
i wasnt sure what would happen if the rest of the codebase decided to try and open streams on it
<whyrusleeping>
runs through all the commands and makes sure that they can marshal their types properly
pfraze has quit [Remote host closed the connection]
void has quit [Quit: ChatZilla 0.9.91.1 [Firefox 38.0/20150511103818]]
<jbenet>
kbala ok
<jbenet>
whyrusleeping: nice, that's useful.
<whyrusleeping>
yeah, its not quite working yet... some of the commands' types have interfaces
<jbenet>
whyrusleeping: re stream, not sure, i think a bunch of things arent going to work :/
<whyrusleeping>
and i cant generate random data to fill interfaces easily
<whyrusleeping>
jbenet: yeah... i think go-ipfs is going to shit itself if we disable streams optionally
<cryptix>
whyrusleeping: re random data: for tests? what kind of interfaces?
tilgovi has quit [Ping timeout: 272 seconds]
<whyrusleeping>
its having trouble generating random multiaddrs inside one of the structs
<whyrusleeping>
but i can implement quick.Generator on the struct to get around that
<whyrusleeping>
reflection in go is so much more fun than in c#
<cryptix>
random values for structs? i've seen something for that a while back.. thought you meant semi-random garbage for parsing tests
tilgovi has joined #ipfs
patcon has quit [Ping timeout: 246 seconds]
<whyrusleeping>
cryptix: yeah, take a look at testing/quick.Value
<whyrusleeping>
its pretty nifty
<whyrusleeping>
i'm basically creating a new object of a given type using that, marshaling it to json and back, and then calling reflect.DeepEqual on it
<cryptix>
nice
* spikebike
ponders how to edit a forked go package from github that includes a bunch of imports from the non-forked package.
<cryptix>
spikebike: not sure what you want to achieve. usually you can get by with just adding your forked remote to the repo, to publish your changes while leaving the imports intact
<whyrusleeping>
spikebike: cd $GOPATH/src/github.com/ipfs/go-ipfs && git remote add myfork git@github.com:spikebike/go-ipfs
<whyrusleeping>
git fetch myfork
<spikebike>
whyrusleeping: ah, perfect, thanks.
<jbenet>
sugarpuff: looks good!
<sugarpuff>
jbenet: thx!
<spikebike>
heh, unrelated to IPFS, just troubleshot a problem where advertising IPv6 but having it not work resulted in email delivery failures.
<spikebike>
RFCs and best practices advise against configuring mail servers that way, but vt.edu does.
G-Ray has quit [Remote host closed the connection]
pfraze has joined #ipfs
afdudley has quit [Ping timeout: 272 seconds]
afdudley has joined #ipfs
Guest96_ has quit [*.net *.split]
temet has quit [*.net *.split]
pinbot has quit [*.net *.split]
barnacs has quit [*.net *.split]
spikebike has quit [*.net *.split]
switchy has quit [*.net *.split]
sbruce has quit [Quit: WeeChat 0.4.2]
Guest96_ has joined #ipfs
temet has joined #ipfs
pinbot has joined #ipfs
spikebike has joined #ipfs
switchy has joined #ipfs
barnacs has joined #ipfs
<ipfsbot>
[go-ipfs] jbenet deleted http-eventlog at 96e98a8: http://git.io/vL2NY
<whyrusleeping>
jbenet: thoughts on multistream stuff?
<whyrusleeping>
do we want to require stream muxing?
<wking>
jbenet, whyrusleeping: If go-peerstream is about a muxed stream (and it seems to be), I don't think we want to jump through hoops to support go-peerstream over unmuxed transport
<whyrusleeping>
yeah
<jbenet>
i think its going to be way more work to create a separate way to manage the connections
<jbenet>
go-peerstream takes care of the conn management for the p2p/net stack
<jbenet>
it's going to be tricky + possibly cause problems to create a separate route in the p2p/net/swarm thing
<jbenet>
i do think it should be possible to speak only to one protocol.
atrapado has quit [Quit: Leaving]
<jbenet>
and yeah its annoying right now because the rest of ipfs expects to be able to open other streams
<jbenet>
but it may actually work out ok
<wking>
so does "go-peerstream over unmuxed transport" open up new unmuxed connections when you try to add a new stream?
<cryptix>
yey for now writing out eventlog!!
<cryptix>
also rht's work on consolidating logging and the ctx cleanup - very nice, i like that stuff
<jbenet>
wking yeah we could try to do that
<jbenet>
seems a bit tricky though
<whyrusleeping>
like, dial the peer again and call that a 'NewStream' ?
<whyrusleeping>
that seems really tricky indeed
<wking>
yeah. But I don't know how else you'd support something that needed parallel streams
therealplato has joined #ipfs
<whyrusleeping>
i think supporting the ability to speak a single protocol is a very noble cause
<whyrusleeping>
but its going to be a massive headache
lucasoutloud has quit [Ping timeout: 264 seconds]
<jbenet>
then maybe lets ease into it
<wking>
I think other alternatives are "don't support go-peerstream over a unmuxed transport" or "buffer/error if somebody tries to write to the non-top stream on an unmuxed transport"
<jbenet>
whyrusleeping: what if the first mss you find only has the muxers, e.g. yamux and spdy
<jbenet>
and the next one has the individual protocols
<whyrusleeping>
jbenet: yeah, thats easy
<jbenet>
lets do that for now until we find a legitimate need for speaking single protocol
<whyrusleeping>
do you mind if the first mss is a 'go-peerstream transport' ?
<whyrusleeping>
it makes it pretty easy to write
<jbenet>
a go-peerstream transport that selects other transports?
<whyrusleeping>
yeah
<jbenet>
that sounds good to me
<whyrusleeping>
cool cool
temujin has quit [Ping timeout: 246 seconds]
<jbenet>
+1 on eventlogs
<jbenet>
its going to be sooo fast now
<whyrusleeping>
yeah, the memory consumption is already better
<whyrusleeping>
although, i've noticed recently that go-ipfs doesnt ever shut down cleanly...
<whyrusleeping>
it appears to have regressed at some point
nessence has quit [Remote host closed the connection]
<whyrusleeping>
and looking at the logs, it feels like context cancellation isnt propagating correctly
<whyrusleeping>
i see goroutines sitting in a select on a context that *should* have been cancelled
nessence_ has quit [Remote host closed the connection]
<cryptix>
ipv6 is also back? nice
<jbenet>
this will probably get easier to reason about when we split things back up
<jbenet>
we can look at the interfaces per module carefully
<jbenet>
whyrusleeping: can you push out gx?
<jbenet>
we should have an automatic pinner somewhere
<jbenet>
we could just run an http pinbot with a size cap
<whyrusleeping>
yeah, i can work more on gx
<jbenet>
brb
<whyrusleeping>
no, youre not allowed to leave
<whyrusleeping>
mars is using 30MB of memory right now
<cryptix>
i found the concept interesting but the impl frightens me
<cryptix>
but thats just my php bias
<spikebike>
cryptix: yeah, kinda odd, why not just provide an ssl proxy which would be way more efficient.
<cryptix>
i actually like the aspect that it is not my browser rendering the page
<cryptix>
like no js/css profiling at all
<spikebike>
doesn't opera do something like that?
<spikebike>
some magic proxy to minimize bandwidth and battery usage
<whyrusleeping>
spikebike: yeah, opera turbo or something
<cryptix>
i think they downsample images, and compress stuff but i'm not sure
williamcotton has quit [Ping timeout: 272 seconds]
<spikebike>
IPv6 definitely helps p2p things, but I do wonder if p2p services need a proxy to be practical for high adoption of users
<spikebike>
something like a wall wart/plug based arm, or a rasp pi type unit. That way it can track the blockchain/dht membership/events/messages, trade storage, bandwidth, CPU, etc.
<spikebike>
then the user's phone can efficiently (battery, cpu, and bandwidth) poll the p2p network, trading in on the resources your proxy earned.
<whyrusleeping>
spikebike: we've thought about having delegate nodes
<whyrusleeping>
where you can set up your ipfs node in the cloud
<whyrusleeping>
and have all your phones requests work through it
nessence has joined #ipfs
<whyrusleeping>
so you save data and dont have to worry about maintaining tons of dht connections
<spikebike>
Yeah. I've thought about that for an email like system. Your rasp-pi maintains your inbox+folders (larger), trades storage/cpu/bandwidth with its peers, then your light weight client could check in as necessary.... even if your Rasp Pi died.
<spikebike>
bitmessage for instance is VERY intensive on bandwidth
<spikebike>
said proxy/p2p in the cloud could use power efficient push notification for events of note.
<cryptix>
jbenet whyrusleeping: guys.. i'm very impressed that this just worked. i added data on nodeA, then started getting it on nodeB and C. upload bw from A was slow, and since i knew another mirror of the file, i started wgetting it on B. while the initial get was still running, i started adding it as well. and the 'ipfs get' sped up to the speed of wget and completed without any issues. kudos guys. i'd have guessed either the add or the get to belly up on this one.
<whyrusleeping>
cryptix: :D
<whyrusleeping>
bitswaps 'get' is pretty robust
<cryptix>
also the way the commands are wired up that the new data was able to complete the running get.. yea.. just shows the good layering, i guess :)
<cryptix>
'get' memory usage still seems pretty excessive. i'm at 163megs res ram while the file is only 33m/100. not sure whats going on. maybe it could hold off on reconstruction until more of the data is in the local repo? for slow transfers holding onto these large buffers seems weird.. maybe its something not cleaning up right too - wouldnt know what needs to stick in ram if the data is already streamed out
<spikebike>
cryptix: sounds a bit high, but I'd be more interested in what happened when you download something 4x as large
<cryptix>
spikebike: i'm running this on a pi to see how ipfs reacts under constraints :)
<cryptix>
shooting with something larger than the system total ram feels unfair :))
<cryptix>
at least for starters
<spikebike>
old pi or new pi?
<cryptix>
old pi ^^
<spikebike>
mines got 1GB ram or so, downloading a 128mb file sounds reasonable to me
<jbenet>
back :) cryptix: glad to hear it! :) that's mostly whyrusleeping's good systems hacking
<cryptix>
yup - still i'm not sure what the reasonable overhead is for downloading any file.. i'd assume it shouldnt need more than 1x of the filesize in the process
<cryptix>
maybe 1.2 but at some point you should be able to start streaming out and release stuff
<jbenet>
cryptix: try it with ipfs cat?
<jbenet>
ipfs cat should be less mem intensive than get
<jbenet>
(it does less)
<cryptix>
jbenet: will try next, i want to see if it can manage first :)
<spikebike>
cryptix: that's tricky when it comes in out of order, doubly so if you download rare blocks first and want them available for other peers
<cryptix>
spikebike: you can hold them in the fs repo.. would be tricky for nodes that 'just' have an s3 repo though..