whyrusleeping changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- code of conduct at https://github.com/ipfs/community/blob/master/code-of-conduct.md -- sprints + work org at https://github.com/ipfs/pm/ -- community info at https://github.com/ipfs/community/
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
freedaemon has quit [Remote host closed the connection]
<kbala> hey whyrusleeping, in mock_peernet what does the struct{} value represent in the `connsByPeer map[peer.ID]map[*conn]struct{}` map
<whyrusleeping> kbala: it represents existence
gordonb has quit [Quit: gordonb]
<voxelot> add function didn't throw errors but dont see the hash in the return log
<voxelot> still not finding fs
_whitelogger____ has joined #ipfs
<kbala> storing the value?
<jbenet> kbala sorry for time fail again today -- travel + other things constraining time. maybe email me Qs and will get back late tonight
<rschulman> krl: ping?
voxelot has quit [Ping timeout: 244 seconds]
<kbala> jbenet: np and that sounds good
thelinuxkid has quit [Quit: Leaving.]
cdata has quit [Ping timeout: 256 seconds]
tilgovi has quit [Read error: Connection reset by peer]
tilgovi has joined #ipfs
tilgovi has quit [Read error: Connection reset by peer]
tilgovi has joined #ipfs
<whyrusleeping> kbala: yeah, so in go, the pattern used to implement a set of X is to make a 'map[X]struct{}'
<whyrusleeping> since a struct{} doesnt take any space in memory
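The pattern can be sketched like this (hypothetical type names, not the mocknet code): the `struct{}` value carries no data, so the map is a pure membership set.

```go
package main

import "fmt"

// peerSet is a set of peer IDs using the Go set idiom: a map whose value
// type is the zero-size struct{}, so only the keys matter.
type peerSet map[string]struct{}

func (s peerSet) Add(id string)      { s[id] = struct{}{} }
func (s peerSet) Has(id string) bool { _, ok := s[id]; return ok }
func (s peerSet) Remove(id string)   { delete(s, id) }

func main() {
	s := peerSet{}
	s.Add("QmPeerA")
	fmt.Println(s.Has("QmPeerA")) // membership test via the comma-ok idiom
	fmt.Println(s.Has("QmPeerB"))
}
```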
<rschulman> whyrusleeping: what is the argument to “cat” in the RPC again?
<whyrusleeping> rschulman: an ipfs path
<whyrusleeping> (a hash is a valid ipfs path)
<rschulman> no, I know
<rschulman> sorry
<rschulman> I meant literally
<rschulman> 5001/api/v0/cat?ipfs-path=<hash>
<rschulman> is that right?
<whyrusleeping> s/ipfs-path/arg/
<rschulman> ARG!
<rschulman> haha
<rschulman> thanks
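So the corrected call is `5001/api/v0/cat?arg=<hash>`. A tiny Go helper sketching the URL construction — the host and port assume the daemon's default API address, and `catURL` is a hypothetical name, not part of any client library:

```go
package main

import (
	"fmt"
	"net/url"
)

// catURL builds the HTTP API request URL for `ipfs cat`. The query
// parameter is named "arg", not "ipfs-path".
func catURL(hash string) string {
	v := url.Values{}
	v.Set("arg", hash)
	return "http://127.0.0.1:5001/api/v0/cat?" + v.Encode()
}

func main() {
	fmt.Println(catURL("QmExampleHash"))
}
```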
<rschulman> this javascript api is really handy, but refuses to just let the RPC return normal json
<rschulman> and is putting it in this stream thing
* zignig is getting the same issue.
<zignig> ls is fine , cat is borked.
<zignig> whyrusleeping: FIXK
<whyrusleeping> zignig: huh?
<whyrusleeping> cat in what context?
<whyrusleeping> are we talking about node-ipfs-api?
<rschulman> zignig: Haha, I don’t think whyrusleeping has anything to do with the api
<whyrusleeping> or the go api?
<rschulman> the node one, I think
<whyrusleeping> i'm literally just now using the node api's cat
<whyrusleeping> i havent gotten it working yet
<zignig> whyrusleeping: should have read the irc backlog.
<zignig> the go-ipfs http api is giving me grief
<whyrusleeping> zignig: hrm... how is cat broken?
<whyrusleeping> or better, where in astralboot is it breaking?
<zignig> getting random EOF , connection closed and 500 Internal server errors.
<whyrusleeping> zignig: oh yeah, i was asking in #go-nuts earlier, and they said we're likely hitting a bug
<whyrusleeping> what you should do is dont use the default http client
<rschulman> oh, sorry, I just presumed, zignig!
<whyrusleeping> build your own client like i do here:
* whyrusleeping finds the code
<zignig> whyrusleeping: it has been working just fine until the trailers and magic chunking streaming stuff was added in.
<whyrusleeping> yeap
<whyrusleeping> its because we're hijacking the connection server side
<zignig> shouldn't the default http client just work (TM) ?
<whyrusleeping> and the pooling client has a bug in it that cant deal with it that well
<zignig> why the change for hijacking ?
<whyrusleeping> so we can implement trailers
<whyrusleeping> the motivation for that entire change is this:
<whyrusleeping> we do something like 'ipfs cat X'
<whyrusleeping> and we have the first block of X, so we return '200 STATUS OK'
<whyrusleeping> and start streaming the data back
<whyrusleeping> we request the next blocks
<whyrusleeping> but something breaks
<whyrusleeping> so we stop the stream
<whyrusleeping> but how do we tell the client that it broke?
<whyrusleeping> answer:
<whyrusleeping> we use trailers
<whyrusleeping> how do we do trailers?
<whyrusleeping> according to go: you dont
<whyrusleeping> so we have to implement it ourselves by hijacking
<zignig> how many clients actually implement trailers ?
<whyrusleeping> go's client does
<whyrusleeping> and curl does with some finagling
<zignig> ok I understand why now.
<whyrusleeping> yeah, it was very frustrating for me working through it all
<whyrusleeping> and getting those same random failures
<zignig> can we just default to normal http requests for cat , and have a streaming flag ?
<zignig> or have 'ipfs stream'
<whyrusleeping> what difference would that make?
<zignig> astralboot would work :P
<whyrusleeping> lol
<zignig> also with the streaming interface there is no content-size flags.
<whyrusleeping> we still set them
<whyrusleeping> i think
<zignig> nope
<whyrusleeping> hrm...
<whyrusleeping> how is the interface different on streaming?
<whyrusleeping> dont you still just read from the response Body?
<zignig> yep, but the content length is always unset , chunking is always on
<zignig> XStream header is set.
<whyrusleeping> how does that change what youre doing?
<whyrusleeping> i believe for cat, XStream has always been set
<zignig> and oh yes, and it randomly fails
<zignig> I use the content-length to hand to clients upstream ( astralboot does a kind of proxy thing with io.Copy )
cdata has joined #ipfs
<whyrusleeping> it should fix the random failures
<whyrusleeping> brb, getting food
<rschulman> ok, I’m hitting the sack
<rschulman> g’nite all
voxelot has joined #ipfs
voxelot has joined #ipfs
<zignig> whyrusleeping: turning off keep alives worked !
<zignig> yay.
<zignig> Still getting no Content-Length headers, which kind of borks out my ipfs filesystem.
domanic has quit [Ping timeout: 250 seconds]
Leer10 has quit [Remote host closed the connection]
<whyrusleeping> zignig: what did you need those for?
<whyrusleeping> zignig: huh, we should totally be still sending length headers on cat
<whyrusleeping> i'm setting them
<whyrusleeping> zignig: you found a bug, thanks!
<whyrusleeping> well, kinda?
<whyrusleeping> i'm confused, i think go's http library will automatically write the headers you set out for you if you just return from the handler func
<whyrusleeping> which makes sense i guess
<whyrusleeping> but for some reason
<whyrusleeping> even though i am explicitly setting a content length header, its not showing up
<zignig> whyrusleeping: that's me bug finder .... ;)
<zignig> as for the content length , I need to get them so I can send them on to the client.
<zignig> when the iPXE client is getting it's image it uses the content length for the download.
<whyrusleeping> you grab that from the 'cat' though?
<whyrusleeping> where in astralboot is this?
<zignig> without content length image download takes ~2 minutes , with ~8 seconds.
<whyrusleeping> >.>
<whyrusleeping> how?
<zignig> I think it's how it handles chunking.
<zignig> I'm grabbing the size via content length for ipfsfs and just using file size for local file system.
<zignig> still the merkle dag has that info , it would be good to pass it as Content-Length properly.
<whyrusleeping> what happens if you set the Content-Length header to -1?
<whyrusleeping> zignig: so the thing is, i am passing it down
cdata has quit [Ping timeout: 240 seconds]
<whyrusleeping> i can see it being written
<whyrusleeping> but on the receiving end, its not there
<zignig> the http client library says - http: invalid Content-Length of "-1"
<whyrusleeping> thats weird.
<zignig> yep weird.
<whyrusleeping> what if instead of setting the content length, you set 'Transfer-Encoding' to 'chunked'
<whyrusleeping> how does that affect the time?
<zignig> still slow.
<zignig> ok , that is weird.
<zignig> it hands me a length
<zignig> but the go http client does not.
<whyrusleeping> right? i've no idea whats going on
<zignig> chunked and Content-Length don't work together.
<whyrusleeping> huh
<whyrusleeping> we unfortunately have to use chunked now
<whyrusleeping> but...
<whyrusleeping> zignig: hit /api/v0/file/ls/<hash>
<whyrusleeping> and youll get the true size back
<whyrusleeping> alternatively
<whyrusleeping> we could just set the size in a different header :P
<whyrusleeping> whatever works best for you
<zignig> that would work, but it's an extra ipfs request.
<whyrusleeping> yeah, i feel ya
<whyrusleeping> would the extra header work for you?
<zignig> there just seems to be somthing really broken.
<whyrusleeping> ?
<zignig> it would , but I wanted it to be default http transport.
<zignig> will investigate further tonight.
<whyrusleeping> zignig: let me know how that goes, very interested
<whyrusleeping> maybe bring it up in #go-nuts?
<whyrusleeping> also, why does the default transport matter?
<zignig> so I could point astral boot at any http server and have it work.
Leer10 has joined #ipfs
<whyrusleeping> but the non-default transport works just the same as the default one?
<whyrusleeping> you can still point it at any http server
<whyrusleeping> all that transport change does is disable connection pooling
<whyrusleeping> which isnt used through the ipfs api that i can tell
<zignig> i'll just convert to /file/ls
<zignig> but you may want to look at _why_ keep alives are not working with the hijacking.
<whyrusleeping> zignig: its a golang http client bug
<whyrusleeping> as far as i can tell
cdata has joined #ipfs
thelinuxkid has joined #ipfs
thelinuxkid has quit [Client Quit]
cdata has quit [Ping timeout: 264 seconds]
zabirauf has joined #ipfs
cdata has joined #ipfs
tilgovi has quit [Ping timeout: 256 seconds]
cdata1 has joined #ipfs
cdata has quit [Ping timeout: 255 seconds]
cdata2 has joined #ipfs
tilgovi has joined #ipfs
cdata1 has quit [Ping timeout: 244 seconds]
cdata2 has quit [Ping timeout: 246 seconds]
sharky has quit [Ping timeout: 272 seconds]
cdata2 has joined #ipfs
sharky has joined #ipfs
voxelot has quit [Ping timeout: 252 seconds]
zabirauf has quit [Ping timeout: 252 seconds]
voxelot has joined #ipfs
voxelot has joined #ipfs
<zignig> whyrusleeping: you still there ?
<whyrusleeping> zignig: i shouldnt be
<whyrusleeping> lol
<zignig> ... but you are ....
<zignig> How much pain would it be to include a File-Size header ?
<whyrusleeping> yeah... i should have gone to sleep about an hour ago
<whyrusleeping> a File-Size header wouldnt be bad at all
<whyrusleeping> it would probably be like a two line change
<spikebike> Decentralization for the web, including IPFS URL above ^
<zignig> that would be AWESOME !
<whyrusleeping> spikebike: i saw that this morning, its pretty good
<whyrusleeping> zignig: could you file an issue for me? I'll write that up in the morning
<whyrusleeping> in the meantime you could apply this diff: https://gist.github.com/whyrusleeping/be292d8a1633b388536e
<zignig> issue filed .
<zignig> :)
<zignig> #1546
<zignig> so why arern't you sleeping whyrusleeping ?
<zignig> :P
cdata2 has quit [Ping timeout: 250 seconds]
cdata2 has joined #ipfs
slothbag has joined #ipfs
<slothbag> hi guys, just upgraded to 0.3.6, still getting 403 - Forbidden on the webui
<slothbag> do i need to change a config settings or something?
domanic has joined #ipfs
slothbag has quit [Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/]
cdata2 has quit [Ping timeout: 244 seconds]
<cryptix> hello ipfolkfs :)
domanic has quit [Remote host closed the connection]
voxelot has quit [Ping timeout: 240 seconds]
jatb has joined #ipfs
<daviddias> hey hey :)
domanic has joined #ipfs
<cryptix> hey david how are you?
mildred has joined #ipfs
Tv` has quit [Quit: Connection closed for inactivity]
sbruce has quit [Ping timeout: 244 seconds]
<daviddias> doing good
<daviddias> taking the pre flight hours to tackle some random stuff
<daviddias> almost going to Berlin :)
<daviddias> and how are you?
<cryptix> oh distracting myself with nerdisms to not go insane over this stupid country
kbala has quit [Quit: Connection closed for inactivity]
<daviddias> :)
<daviddias> just learned that Win10 has a built in Package Manager
<daviddias> what an era!
<cryptix> not holding my breath on that one.. :)
mildred has quit [Quit: Leaving.]
randito has joined #ipfs
<randito> hello everyone
www has joined #ipfs
<cryptix> hey randito
dignifiedquire has joined #ipfs
mildred has joined #ipfs
cdata2 has joined #ipfs
cdata2 has quit [Ping timeout: 255 seconds]
reit has quit [Remote host closed the connection]
reit has joined #ipfs
reit has quit [Remote host closed the connection]
reit has joined #ipfs
randito has quit [Quit: leaving]
reit has quit [Remote host closed the connection]
reit has joined #ipfs
hellertime has joined #ipfs
cjb has joined #ipfs
cdata2 has joined #ipfs
cdata2 has quit [Ping timeout: 246 seconds]
www has quit [Ping timeout: 255 seconds]
voxelot has joined #ipfs
m0ns00n has joined #ipfs
<m0ns00n> Hoi
slothbag has joined #ipfs
null_radix has quit [Excess Flood]
<cryptix> hey m0ns00n
<m0ns00n> Hey!
<m0ns00n> Just back from the states.
null_radix has joined #ipfs
<m0ns00n> Demoed our system and participated in the 30th anniversary celebration for the Amiga computer :)
<ReactorScram> what system
<m0ns00n> It should have been a 10000 person + event. :)
<m0ns00n> ReactorScram: FriendUP (friendos.com)
<m0ns00n> Finally got some verification from top engineers.
<m0ns00n> So we can with confidence say we have something unique.
<m0ns00n> Visited the Raspberry PI dude in San Francisco.
<m0ns00n> the authors of PHP
<m0ns00n> in Zend.
<m0ns00n> And much much more.
<m0ns00n> :)
voxelot has quit [Ping timeout: 240 seconds]
m0ns00n has quit [Remote host closed the connection]
slothbag has quit [Remote host closed the connection]
<pguth2> We were asking ourselves how much you can rely on pinned data. Say I pin file A. Can I safely remove file A to save storage space (so I don't have one copy in my regular FS and one in IPFS)?
<pguth2> (ping ThomasWaldmann)
Gaboose has joined #ipfs
<rschulman> pguth2: IPFS is still alpha software. While that is generally the point of pinning a hash, I’m not sure I would trust it just yet to keep vital data safe.
pfraze has joined #ipfs
<Gaboose> is there a way to do a "group pinning" with ipfs? i.e. release blocks that are replicated a lot, but reacquire them if they get rare
<Gaboose> as in "pinning as a group of users"
<rschulman> Gaboose: You may be interested in the bitswap protocol
<rschulman> its discussed in the paper
<rschulman> it doesn’t do precisely what you’re talking about, but its similar.
<pguth2> We thought along those lines too, thanks rschulman
therealplato has joined #ipfs
hellertime has quit [Read error: Connection reset by peer]
hellertime has joined #ipfs
<whyrusleeping> got 5 eth letting my desktop mine overnight, not sure how much that really is
<cryptix> gmorning whyrusleeping :)
freedaemon has joined #ipfs
<rschulman> whyrusleeping: Nice.
<rschulman> GPU mining, I assume?
<rschulman> really wish I had my desktop setup so I could mine some eth'
<rschulman> my understanding is that a simple contract usually costs around .01 eth to get on the blockchain
<whyrusleeping> rschulman: yeah, gpu
<rschulman> all I have is my work assigned macbook air
<rschulman> has a GPU in it, but I don’t want to break things I don’t own. :)
<whyrusleeping> rschulman: that probably wont get you anywhere
<whyrusleeping> lol
<whyrusleeping> cryptix: gmornin!
<rschulman> how you doing this morning?
<rschulman> You’re up early. :)
<whyrusleeping> yeah, i just decided to get on my laptop before leaving the house
<whyrusleeping> i feel like crap
<whyrusleeping> which normally goes away once i get coffee
<rschulman> getting sick?
<rschulman> ah
<whyrusleeping> nah, i just hate mornings
<rschulman> you’re living in seattle now, right?
<whyrusleeping> yep!
<rschulman> cool
<rschulman> whyrusleeping: What’s going on with filecoin these days?
<rschulman> mostly quiet?
fleeky has quit [Quit: Leaving]
mildred has quit [Quit: Leaving.]
_whitelogger____ has joined #ipfs
Encrypt has joined #ipfs
<whyrusleeping> rschulman: waiting on ipfs to be more better
<whyrusleeping> our team doesnt have enough bandwidth for sustained development on both projects
Tv` has joined #ipfs
sbruce has joined #ipfs
therealplato has quit [Read error: Connection reset by peer]
<Gaboose> does anyone use the ipfs dht for custom app purposes?
<whyrusleeping> Gaboose: not that I am aware of, although i do see a decent amount of random traffic on it from time to time
<Gaboose> wondering what kind of things can be implemented on it
therealplato has joined #ipfs
<whyrusleeping> Gaboose: well, the dht itself is really just a KV store
<whyrusleeping> one that you can access from anywhere
<Gaboose> yea, but freely editable one
<Gaboose> so you can't trust it
<whyrusleeping> Gaboose: you cant *rely* on it, but you can trust it if you sign your data
voxelot has joined #ipfs
voxelot has joined #ipfs
<whyrusleeping> certain record types are treated specially by the network though
<whyrusleeping> for example, nobody can overwrite your public key stored in the dht
<whyrusleeping> and nobody can overwrite an ipns entry of a key they dont own
<voxelot> how's cjdns coming? i'd really like to study up on that and work with it
<whyrusleeping> voxelot: lgierth is the one working on that
<Gaboose> whyrusleeping: it'd be nice for the dht to have custom record types like that
<Gaboose> unwritable if you don't own the key
<Gaboose> or conflict free data types
<Gaboose> like grow-only sets
<Gaboose> or increase-only counters
<whyrusleeping> we have grow-only sets
<whyrusleeping> we currently use them to announce who has which blocks
<Gaboose> cool!
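A grow-only set (G-Set) is the simplest CRDT: replicas only ever add elements and merge by union, so concurrent updates always converge. A minimal sketch with a hypothetical type, not the go-ipfs provider-record code:

```go
package main

import "fmt"

// GSet is a grow-only set: Add and Merge only, never remove, so any two
// replicas converge no matter the order updates arrive in.
type GSet map[string]struct{}

func (s GSet) Add(v string) { s[v] = struct{}{} }

// Merge folds another replica's elements in; set union is the CRDT join.
func (s GSet) Merge(other GSet) {
	for v := range other {
		s[v] = struct{}{}
	}
}

func main() {
	a, b := GSet{}, GSet{}
	a.Add("block1")
	b.Add("block2")
	a.Merge(b) // union: {block1, block2}
	fmt.Println(len(a))
}
```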
<whyrusleeping> there are going to be a lot of changes coming soon to the dht to make it better and faster
<cryptix> Gaboose: a friend of mine wrote his thesis about a dht like that... (about time he comes back from his traveling)
<Gaboose> i assume it's not as easy to use as putting a "crdt:" as prefix to the value
<whyrusleeping> Gaboose: right now, no. but we are hoping to make it that easy
atrapado has joined #ipfs
<whyrusleeping> the difficult part is making sure that it cant be abused
<Gaboose> how do you mean?
<Gaboose> network slow down by the abundance of set elements?
<Gaboose> attacker not conforming to the grow only spec? the rest of the network won't propagate his puts
<whyrusleeping> network slow down by spam
<Gaboose> that can be done already
<lgierth> voxelot: there's a bit of code that handles peer discovery via cjdns: https://github.com/ipfs/go-ipfs/pull/1316
<lgierth> voxelot: regarding integration of cjdns itself, there's no code yet
<whyrusleeping> yeah, the network right now is weak to abuse, if we're going to 'fix' the dht, we want to make it much safer
<lgierth> we're all gonna be in one place for battlemesh and cccamp for the next two weeks so that might change :)
<Gaboose> whyrusleeping: how soon is that?
<alu> Yo guys
<whyrusleeping> Gaboose: hopefully just a few weeks
<Gaboose> wow
ruby32 has joined #ipfs
<whyrusleeping> the dht refactor and ipns upgrade is the top on my list of things i need to accomplish
<whyrusleeping> theres some work jbenet is doing that i'm waiting on first
<voxelot> thanks for the link lgierth, i'll start looking into
<whyrusleeping> alu: quickly! add it to ipfs :D
<lgierth> voxelot: if you have questions about how the components of cjdns work together, i'm here
<alu> alright
<voxelot> thanks, i'm sure i will :)
<alu> └(・-・)┘
<whyrusleeping> lol
<whyrusleeping> hrmmm, the links are weird
<whyrusleeping> we need a tool to make it easy to import websites
<alu> woops
<alu> uhm
<alu> httrack?
<alu> wget?
<whyrusleeping> oh, maybe just some of the links are wonky
<whyrusleeping> like the people link
<alu> im going to use httrack
<alu> i just wget the site
<alu> it was uh
<whyrusleeping> ah
<alu> not the way to do it
<jbenet> yeah some of the links are broken, alu if you get it fixed up, I'll write up a blog post + tweet it at TBL
<alu> the wizard is working its magic now ;)
<whyrusleeping> jbenet: i wrote some javascripts
gordonb has joined #ipfs
<jbenet> whyrusleeping: :0
<whyrusleeping> i'm making a dropbox thing, and refactoring the ipnsfs stuff in the process
<jbenet> where?
<jbenet> nice!
<whyrusleeping> i'm using feross's drag and drop thing to get files onto it
<whyrusleeping> but after i pulled that in i decided i didnt want to use patch
<whyrusleeping> so i'm making ipnsfs more generic (not required to be an ipns thing) and i'm gonna use that
<whyrusleeping> made some tweaks to the api mockup: https://gist.github.com/whyrusleeping/296323e370a1af67ecc3
cdata2 has joined #ipfs
cdata2 has quit [Ping timeout: 260 seconds]
<Gaboose> whyrusleeping: "we currently use them to announce who has which blocks" how is that grow only? what about when blocks get gc'ed
<Gaboose> and is there a place where i can track progress on the dht upgrade?
www has joined #ipfs
<Gaboose> cryptix: i would love to see that thesis you mentioned :)
<whyrusleeping> Gaboose: records of who has what expire after 24 hours
<rschulman> whyrusleeping: thank you for doing that!
<rschulman> do want
gordonb has quit [Quit: gordonb]
gordonb has joined #ipfs
<whyrusleeping> rschulman: any feedback on that interface?
<whyrusleeping> or about the naming of things?
<rschulman> question: are you envisioning multiple namespaces?
<rschulman> not tied to ipns?
<rschulman> but given a name that points to a different hash every time something changes?
<whyrusleeping> yeah, so when you create a new 'filesystem' you would specify its type
<whyrusleeping> one type would be a key backed ipns filesystem
<jbenet> whyrusleeping ipns/mfs/kfs lgtm :)
<alu> yo
<alu> check out the newest openbazaar
<whyrusleeping> gordonb: hey
<whyrusleeping> your friend who is doing VR stuff
<whyrusleeping> should talk to alu here
<alu> o/
<whyrusleeping> hes doing VR stuff on ipfs
<rschulman> whyrusleeping: I love it, please make it happen!
<whyrusleeping> rschulman: okay, lol
<rschulman> :)
dignifiedquire has quit [Quit: dignifiedquire]
<alu> IPFS is showing great promise in being part of a decentralized metaverse design :)
<alu> we'll see where it goes
<Gaboose> i've seen you guys mention here and there namecoin, openbazaar, custom blockchain apps
<Gaboose> have you heard about ethereum, it's like a motherbed for such things
<alu> yeah
<whyrusleeping> Gaboose: yeah
<alu> I met that dude
<jbenet> whyrusleeping: can we bundle go-ipfs in FF / Chrome extensions?
<whyrusleeping> jbenet: uhm... good question
<whyrusleeping> gordonb: can we?
<gordonb> jbenet: yeah, it’s possible to bundle in FF extension. Not sure about Chrome.
<jbenet> whyrusleeping: we need an implementation of ipfs-shell interface that uses an embedded a node when there isnt a local gateway-- like it checks first, if not uses embedded node.
<jbenet> node*
cdata2 has joined #ipfs
cdata2 has quit [Ping timeout: 260 seconds]
<whyrusleeping> hmmm
<whyrusleeping> why not put that responsibility on the caller?
<whyrusleeping> i want to get good at ephemeral node stuff, but thats going to require a cleanup of the construction process
<jbenet> whyrusleeping: we need to come up with "the appropriate way to check + decide what to do", because that way people can build things without having to worry about the complexity themselves.
<whyrusleeping> fair enough
bmcorser has joined #ipfs
cdata2 has joined #ipfs
<Gaboose> does 'ipfs dht findprovs' show all providers of a hash?
<Gaboose> if not, is there a way to get/estimate the rarity of a file?
<whyrusleeping> Gaboose: it shows as many providers as it finds
<whyrusleeping> up to 20 i think
gordonb has quit [Quit: gordonb]
<whyrusleeping> building a package import visualization for ipfs breaks my computer
<cryptix> :)
<whyrusleeping> its using all my ram
<whyrusleeping> and all my swap
<cryptix> i wonder if it breaks godoc.org
<cryptix> (hope they cache those)
<cryptix> nope.. looks like they dont :x
<whyrusleeping> gonna try rendering it on my big machine
<pjz> whyrusleeping: you should stop building it on that 486
pfraze has quit [Remote host closed the connection]
<whyrusleeping> pjz: processor : 39
<whyrusleeping> vendor_id : GenuineIntel
<whyrusleeping> cpu family : 6
<whyrusleeping> model : 62
<whyrusleeping> model name : Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
<pjz> 40 cores?
<whyrusleeping> yipyip
<pjz> that's... impressive
<whyrusleeping> MemAvailable: 123247968 kB
<pjz> 120GB?
<whyrusleeping> or rather: MemTotal: 131999644 kB
<whyrusleeping> 128GB
<pjz> oh, 128GB
* pjz's poor little laptop is only a quad-i7 w/ 16GB of RAM
<cryptix> what are you using to make the vis?
<pjz> presumably graphviz?
<cryptix> yea.. just use godoc.org then ^^
<whyrusleeping> davecheneys glyph tool
<whyrusleeping> it just keeps eating ram. i'm not sure if it will ever stop
<pjz> memleak!
<whyrusleeping> we're at 31.1GB used now
<whyrusleeping> it doesnt appear to be climbing anymore
<cryptix> looking at that graph, i think its not a leak ^^
<whyrusleeping> 32
<cryptix> which pkg are you targeting ?
<whyrusleeping> the tool
<whyrusleeping> cmd/ipfs
Encrypt has quit [Quit: Quitte]
<whyrusleeping> yeah, i'm killing this, its never gonna finish
<pjz> enh, let it run in the background for a while
<whyrusleeping> it hit 70GB ram
<whyrusleeping> i dont want it to start starving my VMs
<whyrusleeping> lol
<pjz> alternately: stop it and restart it under 'time' so you get specs on how long it takes. Then report it as a bug :)
<cryptix> whyrusleeping: gddo made it in 50 secs
<cryptix> i wonder what they run on
<cryptix> also, totally incomprehensible ^^
<whyrusleeping> yeah, its probably better code, lol
<cryptix> QmY85zkPdKixWhtSVBuDXANR3Xc2LtXWn8omjkKWCeAhUE < without stdlib
<alu> oi..
<alu> hmm
<lgierth> heh what a mess
<whyrusleeping> i want to spend some time looking at packages that are imported and used in just one place
<whyrusleeping> and see if we really actually need to be using that
<whyrusleeping> this is just ridiculous
<whyrusleeping> like, go-uuid is used only by go-datastore
<whyrusleeping> to generate a random key
<whyrusleeping> which i dont even think is a function we ever call
<whyrusleeping> and it could just use crypto/rand and an alphabet to make a random string
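That suggested replacement is only a few lines — a sketch with a hypothetical helper name; the simple modulo mapping introduces a slight bias toward the start of the alphabet, which is harmless for an opaque random key:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// randomKey builds a random string from crypto/rand and a fixed alphabet,
// as suggested in place of pulling in go-uuid.
func randomKey(n int) (string, error) {
	const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	for i, b := range buf {
		buf[i] = alphabet[int(b)%len(alphabet)] // slight modulo bias, fine here
	}
	return string(buf), nil
}

func main() {
	k, err := randomKey(16)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(k))
}
```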
www has quit [Ping timeout: 256 seconds]
<cryptix> yea, i thought about that when i moved uuid but..
<cryptix> yaaaawn.. too much boring hacking and an iphone screen replacement.. i need to get outside for a minute
<cryptix> if i dont get hijacked i might be able to take a look at git-remote-ipfs again
<whyrusleeping> cryptix: wooo!
<cryptix> btw there are two nodes build with 1.5b3 up since yesterday
<cryptix> which reminds me - lgierth are you collecting GC stats on the public gateway?
<cryptix> whyrusleeping: i guess you know that too
<whyrusleeping> gc stats... i think so
<lgierth> cryptix: only gc duration at the moment
<whyrusleeping> and re: gc stuff, good!
<whyrusleeping> er, not gc, i meant go1.5
<lgierth> it's not too hard to track more metrics
<whyrusleeping> lgierth: do we capture things like panics?
<cryptix> i really feared the crash i saw was our code being weird ^^
<cryptix> it being an actual fuckup in the c->go runtime conversion was.. relaxing kind of ^^
<lgierth> whyrusleeping: no - like, number of panics?
* cryptix needs to brush up on prometheus query lang
<cryptix> but after my bike trip :) l8ters
<whyrusleeping> lgierth: like, the stack dumps from panics
<whyrusleeping> cryptix: have fun!
<lgierth> whyrusleeping: if they're in the logs, we have them. but not really tracked anywhere
<alu> oh btw
<whyrusleeping> okay
mildred has joined #ipfs
* lgierth heading home
<whyrusleeping> the file exists
gordonb has joined #ipfs
<whyrusleeping> and it contains the 404 page
<alu> o.o
<alu> you keep finding the weird stuff
<alu> if thats the only error
<alu> ill swoop it up
<whyrusleeping> lol
<whyrusleeping> i just browse through it
<alu> I scanned a desk with phone
<alu> and now importing into IPFS hosted janusVR room
<alu> heres a preview :P
<alu> its fucking huge, rising out of the ground
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<alu> okay i made a video
<alu> i think my fucking music player popped in
<alu> it didnt
<alu> phew
<whyrusleeping> yer a wizard
gordonb has quit [Quit: gordonb]
<alu> :)
<alu> Can you view it?
gordonb has joined #ipfs
<whyrusleeping> yeeep, i see
<whyrusleeping> thats a weird lookin desk
<alu> lol
<alu> ... yeah it is
<whyrusleeping> whoa
<whyrusleeping> raising the dead
<whyrusleeping> and an acer laptop :P
<whyrusleeping> how did you record that?
<alu> the video?
<alu> I used simplescreenrecorder
<whyrusleeping> no, the desk
gordonb has quit [Quit: gordonb]
dignifiedquire has joined #ipfs
pfraze has joined #ipfs
<alu> you guys
<alu> look at the speed difference !!!
<alu> IPFS vs regular file server
<whyrusleeping> wat
notduncansmith has joined #ipfs
<whyrusleeping> thats pretty sweet
notduncansmith has quit [Read error: Connection reset by peer]
<alu> how does it do that
<alu> are people also seeding it?
<alu> they dont seed it just by clicking the link tho
<whyrusleeping> i guess it managed to get on a couple of gateway servers
<whyrusleeping> from you and I clicking it
<whyrusleeping> the gateways will reseed things for a little while
<whyrusleeping> automatically
<alu> woah..
<alu> okay is there a guide to setting up gateway servers?
<alu> did someone dockerize it
mildred1 has joined #ipfs
<whyrusleeping> you can just run 'ipfs daemon' on a node with a public ip
<whyrusleeping> and set the config gateway address to 0.0.0.0:8080 instead of 127.0.0.1:8080
<whyrusleeping> and then use nginx or something to proxy port 80 over to it
mildred has quit [Ping timeout: 272 seconds]
<jbenet> alu: yes: https://github.com/ipfs/blog/blob/master/src/1-run-ipfs-on-docker/index.md i've been a bum, behind on publishing it -_-
<alu> after you hit publish is it time to pour the champagne
<alu> ima test it out
<alu> had to restart X server ;_;
mildred1 has quit [Quit: Leaving.]
step21 has quit [Ping timeout: 244 seconds]
gordonb has joined #ipfs
mildred has joined #ipfs
<voxelot> eth -m on -G -a <coinbase> -i -v 8 //
<voxelot> that works right?
<voxelot> oops wrong channel :p
<whyrusleeping> voxelot: idk, i used geth
step21_ has joined #ipfs
<voxelot> used geth with eth?
<voxelot> might try that
<whyrusleeping> nah, just the geth thing by itself
<whyrusleeping> it was easy to install and has gpu mining
step21_ is now known as step21
gordonb has quit [Client Quit]
<voxelot> ohh geth has gpu now?
<whyrusleeping> ive got 5 eth so far, not sure what thats worth...
<voxelot> haha
<voxelot> me either
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
gordonb has joined #ipfs
<Gaboose> kraken already accepts bids for ether
<Gaboose> not sure if the price is going to change when they open the gates for asks
<Gaboose> but now it looks like 5 eth = 2 - 10 eur
<whyrusleeping> huh, cool
Gaboose has quit [Remote host closed the connection]
<sprintbot> Sprint Checkin! [whyrusleeping jbenet cryptix wking lgierth krl kbala_ rht__ daviddias dPow chriscool gatesvp]
<whyrusleeping> sprintbot: working on a dropbox type app and also working on ipnsfs->mfs stuff
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
gordonb has quit [Quit: gordonb]
gordonb has joined #ipfs
<pjz> is there an IPFS shell?
<pjz> like an old-school FTP client
<whyrusleeping> pjz: nope
<whyrusleeping> could probably make one
hellertime has quit [Quit: Leaving.]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
cdata2 has quit [Quit: WeeChat 1.2]
voxelot has quit [Ping timeout: 256 seconds]
alu has quit [Ping timeout: 246 seconds]
thelinuxkid has joined #ipfs
<Luzifer> Hmm… http://knut.cc/image/0M1E2f3c0u0s `$ git describe --tags` => "v0.3.6" (rev d50def3)
<Luzifer> jbenet: o/
<whyrusleeping> Luzifer: 'export API_ORIGIN="*"'
<whyrusleeping> and rerun
<whyrusleeping> i'm fixing that today
alu has joined #ipfs
<jbenet> That's very sketch tho so be careful
<jbenet> * is dangerous
<Luzifer> works…
<whyrusleeping> would be great if we had tests for the webui
Eudaimonstro has joined #ipfs
<whyrusleeping> woooo... go1.5 doubles build times
<whyrusleeping> so excited
<jbenet> whyrusleeping: maybe switch constants to have the 5001? May want to ship this as a 0.3.7 fix...
<jbenet> Or 0.3.6-fix? What etiquette do we want?
<jbenet> 0.3.7 is fine with me
<jbenet> Ideally would grab port from config though.
<Luzifer> we're not doing semantic versioning, are we?
<jbenet> Luzifer: no not yet
<whyrusleeping> jbenet: could do 0.3.6-1
<whyrusleeping> use kernel versioning semantics
<jbenet> We could but I still want a vanity version in front, so <vanity>.<major>.<minor>.<patch>
<whyrusleeping> ah, so we're 0.0.3.6 right now then?
<Luzifer> O_O
notduncansmith has joined #ipfs
* whyrusleeping is a little confused
notduncansmith has quit [Read error: Connection reset by peer]
<Luzifer> that will confuse like everyone…
<jbenet> whyrusleeping: no, 0.3.6.1
<whyrusleeping> ah
* Luzifer likes semantic versioning and sticks to it :D
<jbenet> Luzifer: really? Were you very confused the first time you saw semver?
<jbenet> Luzifer semver doesn't work with end users.
<Luzifer> jbenet: and adding more dots and numbers works betteR?
<jbenet> End users only need to know about <vanity>.<major>
<whyrusleeping> wtf is vanity?
<jbenet> If that
<Luzifer> whyrusleeping: +1
<jbenet> A number the product developers use to signify a logical difference with _major_, fundamental changes from one product to another. Say iterm1 and iterm2, or os10
<jbenet> A number you can put in print and have it mean something, not a number that will be different tomorrow.
<Luzifer> mac os uses 3 numbers… 10.10.4…
<jbenet> Luzifer that you see, they have more under the hood and internally
<jbenet> And that sure as hell it's semver ;)
<jbenet> Isnt*
<jbenet> luzifer look at chrome or FF numbers
<whyrusleeping> http://semver.org/
<whyrusleeping> semver == three numbers, right?
<Luzifer> plus maybe additions like `-rc4`
<Luzifer> but in general yes.
<Luzifer> major.minor.patch… patch for bugfixes, minor for backwards compatible feature additions, major for breaking changes
<jbenet> We've discussed this in the past. Look at chan logs. Essentially: <vanity>.<semver>
* Luzifer just answered whyrusleepings question and now drives to the gym
gendale__ is now known as gendale_
<jbenet> Luzifer: yeah :) I just mean for more reasoning and otters perspectives
<jbenet> Others. Wow autocorrect.
<whyrusleeping> otters think you smell like fish
<jbenet> Not anymore. No longer by pike place
<freedaemon> lol @ autocorrect
<daviddias> Plaaanes ✈️
<daviddias> whyrusleeping: doubles build times? Why is that a good thing?
bmcorser has quit [Quit: Connection closed for inactivity]
<jbenet> daviddias: sarcasm \o/
<jbenet> daviddias: aren't planes so tiring?
<jbenet> I'm in line at EWR. You?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet> whyrusleeping: want to fix the constants to use user's port and push it as 0.3.6.1 ? I can check and merge before taking off.
<whyrusleeping> sure
<jbenet> whyrusleeping: did we update the version number in the code?
<whyrusleeping> jbenet: yeah
<jbenet> Would be good to run webui tests with phantomjs
<whyrusleeping> jbenet: this is really ugly.
hellertime has joined #ipfs
<whyrusleeping> pulling the ports from the config and the protocols from some random global
<jbenet> What random global?
<whyrusleeping> localApiBlahThing = []string{"https://localhost", ....}
<jbenet> Thats a constant for default cors. Which is always local host/127.
<whyrusleeping> yeah
<whyrusleeping> i also wish multiaddr was easy to use
<jbenet> It's a constant.
<whyrusleeping> i cant easily pull an ip address or port out of a string
<jbenet> Btw put a tag like <api-port> in the constant to be replaced. Should NOT just stick 5001 on every origin, that will not work.
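A sketch of the tag-substitution jbenet describes: keep a `<api-port>` placeholder in the default-origin constant and fill in the listener's real port, instead of hard-coding 5001 on every origin. The constant's name and entries here are hypothetical stand-ins for the go-ipfs defaults.

```go
package main

import (
	"fmt"
	"strings"
)

// defaultLocalOrigins is an illustrative stand-in for the default
// CORS-origin constant, carrying the <api-port> tag to be replaced.
var defaultLocalOrigins = []string{
	"http://localhost:<api-port>",
	"http://127.0.0.1:<api-port>",
	"https://localhost:<api-port>",
}

// originsForPort substitutes the actual API listener port into each
// default origin, so a node listening on a non-default port still
// gets correct localhost origins.
func originsForPort(port string) []string {
	out := make([]string, len(defaultLocalOrigins))
	for i, o := range defaultLocalOrigins {
		out[i] = strings.Replace(o, "<api-port>", port, 1)
	}
	return out
}

func main() {
	fmt.Println(originsForPort("5001"))
}
```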
gordonb has quit [Quit: gordonb]
<whyrusleeping> what?
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to fix/allowed-origins: http://git.io/vOqsP
<ipfsbot> go-ipfs/fix/allowed-origins 852e9f0 Jeromy: pull port from config and use protocol on origin check...
<whyrusleeping> jbenet: o/
gordonb has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
thelinuxkid has quit [Quit: Leaving.]
<whyrusleeping> jbenet: the commands/http/handler_test.go tests are broken and i havent the faintest clue whats going on in there
voxelot has joined #ipfs
voxelot has joined #ipfs
kbala has joined #ipfs
www has joined #ipfs
<jbenet> why are the constants in two places?
dignifiedquire has quit [Quit: dignifiedquire]
<jbenet> whyrusleeping you're overwriting cfg.CORSOpts.AllowedOrigins always.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
dignifiedquire has joined #ipfs
<whyrusleeping> jbenet: i dont even know what to say
<jbenet> i'll fix it :)
<whyrusleeping> im doing that url parse thing because the origin has the path on the end
<whyrusleeping> the /ipfs/Qmasbaskdjalsgjs part
mildred has quit [Quit: Leaving.]
Encrypt has joined #ipfs
<jbenet> whyrusleeping: ... no? origins are supposed to only be the [scheme://host:port] part of the url
<whyrusleeping> jbenet: thats weird, thats not whats being sent
domanic has quit [Ping timeout: 246 seconds]
<whyrusleeping> the value returned from the header contained the path
<whyrusleeping> which is why i did the url parsing
<jbenet> whyrusleeping what browser??
<jbenet> also, this ServeOption stuff is so convoluted.
notduncansmith has joined #ipfs
<jbenet> doenst even get the server, not sure why.
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> jbenet: chrome
tilgovi has quit [Ping timeout: 244 seconds]
<alu> I want to set up dedicated seeders
<dignifiedquire> daviddias: Found and fixed two bugs today in node-ipfs-swarm :)
<whyrusleeping> dignifiedquire: woo!
dignifiedquire has quit [Quit: dignifiedquire]
<jbenet> whyrusleeping: btw, multiaddr at least lets you split on ("/") and check things for "tcp" and "udp", whereas a net.Addr may or may not have ports, and may or may not be ip6
<jbenet> so you may get: "1.2.3.4:5555" or "[::1]"
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<lgierth> sprintbot: blocklist/whitelist
<lgierth> whyrusleeping: i assume we'll have localhost/whitelist too then?
Encrypt has quit [Quit: Quitte]
* lgierth heading to c-base
<ipfsbot> [go-ipfs] jbenet force-pushed fix/allowed-origins from 852e9f0 to e5eccd8: http://git.io/vOq9M
<ipfsbot> go-ipfs/fix/allowed-origins 6b67c09 Juan Batiz-Benet: corehttp: add net.Listener to ServeOption...
<ipfsbot> go-ipfs/fix/allowed-origins e5eccd8 Juan Batiz-Benet: fix cors: defaults should take the port of the listener...
<whyrusleeping> jbenet: oh yeah, multiaddr is definitely easier than normal net addrs
<jbenet> whyrusleeping: can you CR that real fast? o/ and test with webui?
<jbenet> (it works for me)
<whyrusleeping> sure
<whyrusleeping> jbenet: still appears to break for me...
<jbenet> it works for me :/ -- why is your chrome sending the whole url as an origin??
<jbenet> can you search for that? im fixing test
<whyrusleeping> okay
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> jbenet: apparently thats normal
<jbenet> link?
<whyrusleeping> both seem like the path is expected to be there
<jbenet> whyrusleeping: oh referer is fine
<jbenet> _origin_ is always scheme://<host>:<port>
<jbenet> (so you're right we need to parse out the origin from the referer
<whyrusleeping> ah, i see the difference
<whyrusleeping> we check "Origin" on one, and .Referrer() on the other
<whyrusleeping> my chrome is probably not setting the path on Origin
<whyrusleeping> why is http so complicated?
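The distinction the two just worked out: an Origin header is only ever `scheme://host:port`, while a Referer carries the full URL including the path (the `/ipfs/Qmasbaskdjalsgjs` part), so the origin has to be parsed back out of it. A minimal sketch of that reduction with `net/url`:

```go
package main

import (
	"fmt"
	"net/url"
)

// originOf reduces a full URL, as found in a Referer header, to the
// scheme://host[:port] form that an Origin header carries.
func originOf(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	// u.Host already includes the port when present; the path
	// (e.g. /ipfs/Qmasbaskdjalsgjs) is simply dropped.
	return u.Scheme + "://" + u.Host, nil
}

func main() {
	o, err := originOf("http://localhost:5001/ipfs/Qmasbaskdjalsgjs")
	if err != nil {
		panic(err)
	}
	fmt.Println(o)
}
```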
hellertime has quit [Quit: Leaving.]
<jbenet> whyrusleeping pull + test once more?
<jbenet> i fixed tests.
<ipfsbot> [go-ipfs] jbenet force-pushed fix/allowed-origins from e5eccd8 to 9d06375: http://git.io/vOq9M
<ipfsbot> go-ipfs/fix/allowed-origins 9d06375 Juan Batiz-Benet: fix cors: defaults should take the port of the listener...
<whyrusleeping> on it
ruby32 has quit [Quit: Leaving]
<whyrusleeping> i realized you force pushed after git opened up a commit message editor for me
<whyrusleeping> lol
<whyrusleeping> yay! it works!
notduncansmith has joined #ipfs
<whyrusleeping> ship it
notduncansmith has quit [Read error: Connection reset by peer]
therealplato has quit [Ping timeout: 246 seconds]
<whyrusleeping> jbenet: o/
<lgierth> cannot use addr (type "github.com/jbenet/go-multiaddr".Multiaddr) as type "github.com/jbenet/go-multiaddr-net/Godeps/_workspace/src/github.com/jbenet/go-multiaddr".Multiaddr in argument to manet.Listen
<lgierth> :):)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> lgierth: yeah, thats intensely obnoxious
<whyrusleeping> Multiaddr should probably be an interface tbh
<lgierth> whyrusleeping: why?
<lgierth> or, how. the inner go newb is asking
<whyrusleeping> well, making it an interface backed by a private concrete type would prevent that issue you are seeing
<whyrusleeping> it wouldnt care if the type was exactly the same, it would just do a method check
<whyrusleeping> and if the method sets match, it wouldnt care
<lgierth> ah. yep
<lgierth> thank you
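The pattern whyrusleeping describes, sketched with illustrative names rather than the actual go-multiaddr API: export an interface backed by a private concrete type. Because Go interfaces match on method sets rather than on package identity, two vendored copies of the package would still satisfy each other's interface, avoiding the "cannot use addr ... in argument" error above.

```go
package main

import "fmt"

// Multiaddr is the exported interface; callers never see the
// concrete type, only the method set.
type Multiaddr interface {
	String() string
}

// multiaddr is the private concrete implementation.
type multiaddr struct{ s string }

func (m *multiaddr) String() string { return m.s }

// NewMultiaddr is the only way to construct one.
func NewMultiaddr(s string) Multiaddr { return &multiaddr{s: s} }

// Listen accepts any value whose method set matches, regardless of
// which (possibly vendored) package defined its concrete type.
func Listen(a Multiaddr) string { return "listening on " + a.String() }

func main() {
	fmt.Println(Listen(NewMultiaddr("/ip4/127.0.0.1/tcp/4001")))
}
```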
thelinuxkid has joined #ipfs
<whyrusleeping> mappum: ping
thelinuxkid has quit [Client Quit]
thelinuxkid has joined #ipfs
<whyrusleeping> is it bad to return values in javascript?
<whyrusleeping> it seems like everyone just passes in a callback to pass the result of the function to
<lgierth> returning is fine
<lgierth> just don't block :)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> so dont do while (true) {}
<whyrusleeping> got it
www1 has joined #ipfs
www has quit [Ping timeout: 246 seconds]
voxelot has quit [Ping timeout: 246 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
thelinuxkid has quit [Quit: Leaving.]
www1 has quit [Ping timeout: 250 seconds]
voxelot has joined #ipfs
voxelot has joined #ipfs
Eudaimonstro has quit [Remote host closed the connection]
SouBE has joined #ipfs
<SouBE> guys. I'm new to the IPFS concept. but I'm wondering if it could make it possible to implement distributed caching HTTP proxies
<SouBE> proxy to proxy content updates
<lgierth> that would be amazing
domanic has joined #ipfs
<lgierth> ipfs as a content cache
<SouBE> right
<lgierth> do you have something in mind?
<lgierth> cause i'm sure many would love to see that happen
<lgierth> (me too)
<SouBE> imagine you're on a cruise ship with a dodgy satellite uplink but you could share HTTP caches with fellow passengers
<SouBE> or a long haul flight
<lgierth> exactly :)
<lgierth> would be so great to finally have that
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> SouBE: so you can kinda already do that, as long as youre browsing content within ipfs
<whyrusleeping> you'll request the content from peers that you have contact with
freedaemon has quit [Remote host closed the connection]
<SouBE> yes, but that requires that the original content is already in the IPFS network. in the HTTP proxy concept there needs to be a way to fetch content into the IPFS network from HTTP first
<whyrusleeping> SouBE: yeah
<SouBE> maybe HTTP URIs could be hashed too?
MatrixBridge has quit [Ping timeout: 256 seconds]
<whyrusleeping> thats one way to go about it, have a lookup table from uri to the content hash
<SouBE> if there's a hash that represents a URI like http://ipfs.io/styles/img/ipfs-logo-white.png, then you just need to have a local proxy on your machine that translates browser URI requests to IPFS hashes
<SouBE> like IPFS references directories, it could probably reference URIs in a similar way
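The lookup-table idea being discussed can be reduced to a map from HTTP URI to content hash. In a real system this mapping would itself need to be distributed and kept fresh (the thread later suggests IPNS for that); the hash value below is made up for illustration.

```go
package main

import "fmt"

// uriToHash is the lookup table from HTTP URI to the IPFS hash of
// the cached content. A real deployment would persist and share it.
var uriToHash = map[string]string{}

// cache records the hash obtained after adding fetched content to IPFS.
func cache(uri, hash string) { uriToHash[uri] = hash }

// lookup answers: do we already have an IPFS object for this URI?
func lookup(uri string) (string, bool) {
	h, ok := uriToHash[uri]
	return h, ok
}

func main() {
	// "QmExampleHash" is a placeholder, not a real IPFS hash.
	cache("http://ipfs.io/styles/img/ipfs-logo-white.png", "QmExampleHash")
	if h, ok := lookup("http://ipfs.io/styles/img/ipfs-logo-white.png"); ok {
		fmt.Println("serve from IPFS:", h)
	}
}
```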
<clever> biggest problem with making an http cache is dynamic content
MatrixBridge has joined #ipfs
<SouBE> well that's a problem with current HTTP caches too. you just need to adhere to the origin's Cache-Control headers
<SouBE> but well designed apps lets proxies to cache at least some parts of it
<clever> even with cache-control headers, you cant know for sure if the content has changed or not
<lgierth> could write a plugin for varnish
<lgierth> ipfs upstreams
<jbenet> SouBE: that's an interesting idea-- we could run the HTTP Request/Response pair through some filters, that a) first determine if there's anything to cache, b) return a hash to look up.
<clever> the cache control headers tell you how long to keep the data and if you should recheck
<lgierth> instead of http upstreams
<clever> SouBE: IPFS relies on the contents of a given hash never changing, but even with cache-control headers, a given url can change at a later time
<Tv`> clever: that's what etag isfor
<clever> but not everything implements it
<clever> yeah, you could use the etag to solve some things
<Tv`> clever: not everything is safely cacheable
<Tv`> etag is the opt-in
<clever> yep, but some http servers may not send an etag for the cachable stuff
<clever> older servers
<clever> and for some things like a forum, you may want to cache an older copy of the threads, for offline viewing
<clever> and the cache-control headers just deny that entirely
<SouBE> maybe URIs cannot be directly mapped to hashes. there probably needs to be a distributed cache lookup process that finds the most recent version of a URI's content in the IPFS network. the result of that lookup process is a hash to an IPFS object
<SouBE> and clients should narrow their cache lookups with time windows, for instance "does anybody have content of URI X that is no older than 60 minutes?"
<clever> there is a python program called http-replicator, which acts as a passive proxy, while saving everything in the correct directory structure
<clever> you could then just ipfs add the whole cached dir
<SouBE> nice idea
<clever> squid's caching doesnt maintain the filenames on disk, only in its internal db
<SouBE> also polipo has on-disk cache
<SouBE> in IPFS, can only the creator of a directory update its contents?
<clever> it works just like git, all directories are read only
<clever> and the hash of the dir, is just the hash of its contents (name+hash of each child)
<SouBE> oh, ok
<clever> so if you do modify a directory, you can reuse the sub-dirs and files you didnt change, and your new version has a new hash
<clever> in git, every commit contains the hash for the root directory, which forms a tree containing every file in that version
notduncansmith has joined #ipfs
<clever> so git doesnt manage diffs between versions, it manages full snapshots, the state of every file, at every commit
notduncansmith has quit [Read error: Connection reset by peer]
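clever's description ("the hash of the dir is just the hash of its contents, name+hash of each child") can be sketched in a few lines. Real git and IPFS objects use a proper binary serialization; this toy version just hashes a sorted name=hash list, which is enough to show why an unchanged subtree keeps its hash and can be reused between versions.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// dirHash derives a directory's hash from the names and hashes of
// its children. Sorting the names makes the hash deterministic
// regardless of map iteration order.
func dirHash(children map[string]string) string {
	names := make([]string, 0, len(children))
	for n := range children {
		names = append(names, n)
	}
	sort.Strings(names)
	h := sha256.New()
	for _, n := range names {
		fmt.Fprintf(h, "%s=%s\n", n, children[n])
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	v1 := dirHash(map[string]string{"a.txt": "hashA", "sub": "hashSub"})
	v2 := dirHash(map[string]string{"a.txt": "hashA2", "sub": "hashSub"})
	// Changing one child changes the directory hash, while the
	// untouched "sub" entry could be reused as-is in the new tree.
	fmt.Println(v1 != v2)
}
```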
<SouBE> so current version of IPFS could allow me to collect a massive HTTP cache with Polipo proxy, share a snapshot of it with IPFS and then others could use it with Linux OverlayFS
<clever> sounds right
<clever> and once you publish that root hash, it is effectively read-only
<clever> so once one person verifies the hash is safe, everybody can trust that it hasnt been modified/trojaned
<SouBE> we just would need to have a daemon/script that changes the underlying snapshot of cache directory to a newer one as soon as I release a new snapshot. for that some kind of distributed data feed would be required
<clever> IPNS may do that
<SouBE> true
Eudaimonstro has joined #ipfs
<clever> from what ive seen, IPNS is just a key=value store, mapping your public key to an IPFS dir object
<clever> that sort of lets you modify a directory without having to share things with somebody
<SouBE> there could be a bot that crawls the web constantly and shares its cache with IPFS. people could suggest and vote about sites the crawler is fetching
<SouBE> and a new snapshot could be released like in every 30 minutes
<clever> in terms of storage, the crawler would need to maintain a full copy of everything in ipfs
<clever> and to avoid doubling up, you would be best if you modify your http cache to read/store directly into ipfs
<SouBE> yah
<clever> and once you share the root hash, people can just download what they want from you
<clever> and anybody else that has it
<lgierth> whyrusleeping: i'm writing the other half of blocklist, and i'm thinking we could store it in ipfs itself, couldn't we?
<lgierth> ah mh updates