jbenet changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/jbenet/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- code of conduct at https://github.com/ipfs/community/blob/master/code-of-conduct.md -- sprint at https://github.com/ipfs/pm/issues/7
anshukla has quit [Remote host closed the connection]
anshukla has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
anshukla has quit [Ping timeout: 246 seconds]
inconshreveable has joined #ipfs
inconshr_ has joined #ipfs
anshukla has joined #ipfs
inconshreveable has quit [Ping timeout: 248 seconds]
stackmutt has joined #ipfs
stackmut_ has quit [Ping timeout: 252 seconds]
anshukla has quit [Read error: Connection reset by peer]
anshukla has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
hellertime has joined #ipfs
reit has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
therealplato1 has joined #ipfs
therealplato has quit [Ping timeout: 264 seconds]
pfraze has quit [Remote host closed the connection]
hellertime has quit [Read error: No route to host]
hellertime has joined #ipfs
nessence has joined #ipfs
inconshr_ has quit [Ping timeout: 248 seconds]
therealplato has joined #ipfs
therealplato1 has quit [Ping timeout: 252 seconds]
Wallacoloo has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Wallacoloo has quit [Quit: Leaving.]
tilgovi has quit [Ping timeout: 256 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
tilgovi has joined #ipfs
anshukla has quit [Remote host closed the connection]
anshukla has joined #ipfs
www has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
pfraze has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
headbite has quit [Quit: Leaving.]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
headbite has joined #ipfs
anshukla has quit [Remote host closed the connection]
anshukla has joined #ipfs
reit has joined #ipfs
anshukla has quit [Ping timeout: 265 seconds]
temet has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
tilgovi has quit [Ping timeout: 256 seconds]
williamcotton has quit [Ping timeout: 246 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
hellertime has quit [Quit: Leaving.]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
anshukla has joined #ipfs
temet has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<daviddias> whyrusleeping skeptical as in "we have a better alternative" or as in "we are doomed" ? :)
anshukla has quit [Remote host closed the connection]
anshukla has joined #ipfs
temet has joined #ipfs
Wallacoloo has joined #ipfs
tilgovi has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<reit> i'm having trouble adding things, i'm trying to add a large folder full of images but ipfs eats all my ram, then all my swap, then it crashes whenever i try to do so
<reit> i could write a script to add all the files individually and then stitch them together with object patch i guess
<reit> for reference: there are >10000 files in the directory in question, add -r doesn't seem to cope with it
<wking> reit: yeah, it tries to buffer them all in memory at the moment, because the commands package is set up to optionally handle this with a single POST to the daemon's API
<wking> if you don't want to go the object-patch route, you could try just adding them to a FUSE-mounted /ipns/local/... directory
temet has quit [Ping timeout: 252 seconds]
<reit> oh i see, so right now the maximum add is less than (free ram + free swap) / 2 (because, i assume, the api would grab a copy at the same time)
<reit> haven't really looked into ipns too much, what is /local for?
<wking> /ipns/local is just an alias for /ipns/<your-local-node's-ID>
<wking> and that maximum limit sounds plausible, although the sender could potentially be clever and free memory as it writes it out. I'd expect we're not that clever at the moment, but I haven't looked at the code ;)
<wking> and if you run the add without a local daemon running, you don't have to worry about the copy (but I think you still have to worry about the buffer-the-whole-tree-before-writing-anything issue)
sharky has quit [Ping timeout: 264 seconds]
<reit> gotcha
<reit> so from your description, can i assume that /ipns/local is attached to/updated by your local hidden root node?
notduncansmith has joined #ipfs
<reit> that is to say, resolving it would result in a directory node containing all your directly pinned hashes?
therealplato1 has joined #ipfs
sharky has joined #ipfs
temet has joined #ipfs
notduncansmith has quit [Ping timeout: 255 seconds]
therealplato has quit [Ping timeout: 248 seconds]
<wking> reit: no, it's a separate idea from pinning
pfraze has quit [Remote host closed the connection]
<wking> there's a more detailed description, but with occasional stale phrasing, in section 3.7 of https://github.com/ipfs/papers/blob/master/ipfs-cap2pfs/ipfs-p2p-file-system.pdf
<whyrusleeping> daviddias: skeptical as in "this might suck"
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<reit> thanks wking
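A minimal Go sketch of the workaround reit proposes above: add each file on its own, so memory use stays per-file, then stitch the resulting hashes into a directory with object patch. The CLI subcommands used (`ipfs add -q`, `ipfs object new unixfs-dir`, `ipfs object patch ... add-link`) are real go-ipfs commands of this era; the helper itself is hypothetical and flattens the tree by basename for brevity.

```go
// addtree.go: hypothetical helper, not part of go-ipfs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// ipfs shells out to the real CLI and returns trimmed stdout.
func ipfs(args ...string) (string, error) {
	out, err := exec.Command("ipfs", args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Start from an empty unixfs directory object.
	root, err := ipfs("object", "new", "unixfs-dir")
	if err != nil {
		panic(err)
	}
	err = filepath.Walk(os.Args[1], func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hash, err := ipfs("add", "-q", path) // one file at a time: constant memory
		if err != nil {
			return err
		}
		// Each patch returns the hash of a new root with the link added.
		// Note: links by basename, so this flattens subdirectories.
		root, err = ipfs("object", "patch", root, "add-link", info.Name(), hash)
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(root) // final directory hash
}
```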
dread-alexandria has joined #ipfs
* zignig_ hands whyrusleeping a freshly polished de-skepticizer.
reit has quit [Ping timeout: 252 seconds]
* whyrusleeping looks at the device and is skeptical
temet has quit [Ping timeout: 256 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has joined #ipfs
chriscool has joined #ipfs
<zignig_> whyrusleeping: press the mauve button !
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Blame2 has joined #ipfs
Blame has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 252 seconds]
chriscool has quit [Ping timeout: 252 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Tv` has quit [Quit: Connection closed for inactivity]
torpor has joined #ipfs
temet has joined #ipfs
Wallacoloo has quit [Ping timeout: 248 seconds]
temet has quit [Read error: No route to host]
mildred has joined #ipfs
notduncansmith has joined #ipfs
williamcotton has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
anshukla has quit [Remote host closed the connection]
temet has joined #ipfs
anshukla has joined #ipfs
williamcotton has quit [Ping timeout: 256 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 276 seconds]
<ipfsbot> [go-ipfs] wking pushed 2 new commits to tk/check-for-non-file-or-dir-modes: http://git.io/vtvHn
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes e9dd5f7 W. Trevor King: commands: Pass FileInfo between the cli and daemon with a MIME header...
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes 43fc75d W. Trevor King: commands/http/handler: Write a trailer with any error messages...
dread-alexandria has quit [Ping timeout: 255 seconds]
temet has joined #ipfs
dread-alexandria has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<ipfsbot> [go-ipfs] wking force-pushed tk/check-for-non-file-or-dir-modes from 43fc75d to 1ed0e24: http://git.io/vLyfs
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes 1ed0e24 W. Trevor King: commands/http/handler: Write a trailer with any error messages...
<ipfsbot> [go-ipfs] wking force-pushed tk/check-for-non-file-or-dir-modes from 1ed0e24 to d593388: http://git.io/vLyfs
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes 46b7889 W. Trevor King: commands/http/handler: Write a trailer with any error messages...
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes d593388 W. Trevor King: commands: Pass FileInfo between the cli and daemon with a MIME header...
<ipfsbot> [go-ipfs] wking force-pushed tk/check-for-non-file-or-dir-modes from d593388 to dd9bad0: http://git.io/vLyfs
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes a71c07a W. Trevor King: commands: Pass FileInfo between the cli and daemon with a MIME header...
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes 2412cca W. Trevor King: commands/http/handler: Write a trailer with any error messages...
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes 49c8aab W. Trevor King: t0040: Test named-pipe errors...
temet has quit [Ping timeout: 246 seconds]
tilgovi has quit [Ping timeout: 248 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has joined #ipfs
temet has quit [Ping timeout: 256 seconds]
warner has quit [Read error: Connection reset by peer]
warner has joined #ipfs
atomotic has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 246 seconds]
anshukla has quit [Remote host closed the connection]
inconshreveable has joined #ipfs
anshukla has joined #ipfs
temet has joined #ipfs
<ipfsbot> [go-ipfs] wking pushed 1 new commit to tk/check-for-non-file-or-dir-modes: http://git.io/vtf3u
<ipfsbot> go-ipfs/tk/check-for-non-file-or-dir-modes a9eb139 W. Trevor King: Treat symlinks as symlinks...
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 250 seconds]
gunn has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
therealplato has joined #ipfs
therealplato1 has quit [Ping timeout: 264 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has joined #ipfs
djoot has quit [Ping timeout: 252 seconds]
djoot has joined #ipfs
temet has quit [Ping timeout: 250 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has joined #ipfs
temet has quit [Ping timeout: 246 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has joined #ipfs
slothbag has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 246 seconds]
temet has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
kbala has quit [Quit: Connection closed for inactivity]
<lgierth> krl: will take a shower and be on my way
guest449 has joined #ipfs
temet has quit [Ping timeout: 255 seconds]
crest has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
torpor has quit [Quit: Leaving.]
crest has joined #ipfs
gunn has quit [Quit: Textual IRC Client: www.textualapp.com]
temet has joined #ipfs
hellertime has joined #ipfs
guest449 has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 255 seconds]
<slothbag> In the brave new IPFS world we need a better way of keeping track of content hashes.. so many times i've pinned content and then it's very hard to find it again.. IPFS webui needs a file-explorer-like interface :)
Blame2 has quit [Remote host closed the connection]
Blame has joined #ipfs
temet has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
atomotic has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
therealplato1 has joined #ipfs
temet has quit [Read error: No route to host]
therealplato has quit [Ping timeout: 264 seconds]
therealplato has joined #ipfs
therealplato1 has quit [Ping timeout: 248 seconds]
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
temet has joined #ipfs
<ehd> how can i make my local ipfs binary talk to a remote ipfs server provided the API port is available to the client?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<lgierth> krl: ping
temet has quit [Ping timeout: 246 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has joined #ipfs
atomotic has joined #ipfs
<krl> lgierth: omw!
slothbag has quit [Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/]
<lgierth> krl: don't hurry
<lgierth> i'm gonna go buy pants so i'll be there in ~ 1 h
<lgierth> see you in a bit
<krl> k cool
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 246 seconds]
reit has joined #ipfs
reit has quit [Client Quit]
reit has joined #ipfs
temet has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 264 seconds]
hellertime has quit [Read error: Connection reset by peer]
hellertime has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has joined #ipfs
domanic has joined #ipfs
temet has quit [Ping timeout: 265 seconds]
<zignig_> !pin QmarZJvnfZjwxg7iMwmzCLVoCVefhhc1ADkNdStLWvbmMH
zignig_ is now known as zignig
<zignig> !pin QmarZJvnfZjwxg7iMwmzCLVoCVefhhc1ADkNdStLWvbmMH
<pinbot> now pinning QmarZJvnfZjwxg7iMwmzCLVoCVefhhc1ADkNdStLWvbmMH
notduncansmith has joined #ipfs
<pinbot> pin QmarZJvnfZjwxg7iMwmzCLVoCVefhhc1ADkNdStLWvbmMH successful!
<zignig> !botsnack
<pinbot> om nom nom
notduncansmith has quit [Ping timeout: 246 seconds]
<reit> are there any plans to make it possible to pin ipns addresses such that it is possible to create an auto-updating mirror/backup of your web site/archive?
<ehd> uhh, that would be great
<ehd> also i would make my 24/7 ipfs host pin my other ipfs instances
<reit> yeah that's what i'm wanting to achieve too
<reit> i mean, it shouldn't be too hard to hack together a script to periodically run ipfs and re-resolve the name, and then if it's changed, recursively pin the new hash and unpin the old
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
<reit> but i'd be a little concerned Something Bad would happen with the recursive unpinning
<reit> if it's not a properly integrated feature
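The polling mirror reit describes could look roughly like the sketch below. The subcommands (`ipfs name resolve`, `ipfs pin add -r`, `ipfs pin rm -r`) exist; the tool, its interval, and its error handling are illustrative. Pinning the new root before unpinning the old keeps the window for reit's "Something Bad" small.

```go
// repin.go: hypothetical polling mirror, not a shipped tool.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
	"time"
)

func ipfs(args ...string) (string, error) {
	out, err := exec.Command("ipfs", args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := os.Args[1] // e.g. /ipns/<peer-id>
	current := ""
	for {
		// Re-resolve; output is the current target path/hash.
		target, err := ipfs("name", "resolve", name)
		if err == nil && target != current {
			// Pin the new root before touching the old one, so the
			// data is never left unreferenced in between.
			if _, err := ipfs("pin", "add", "-r", target); err == nil {
				if current != "" {
					ipfs("pin", "rm", "-r", current) // the step reit is wary of
				}
				current = target
				log.Println("now mirroring", target)
			}
		}
		time.Sleep(10 * time.Minute)
	}
}
```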
temet has joined #ipfs
hellertime has quit [Read error: Connection reset by peer]
hellertime1 has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
temet has quit [Ping timeout: 264 seconds]
ruby32 has joined #ipfs
<lgierth> docker and ipv6 is @fun@
<lgierth> "fun"
therealplato1 has joined #ipfs
<zignig> for certain imaginary values of fun, 2+3i fun units.
therealplato has quit [Ping timeout: 250 seconds]
<zignig> does anyone have a cool ipfs idea?
pfraze has joined #ipfs
<mmuller_> zignig: I'd like to build an OS or a cryptocurrency around it :-)
<zignig> mmuller_: block chain is a cool idea, messy though.
<zignig> starting with a web of trust and dangling block chains off it would be a good start.
<zignig> I have been booting os's out of ipfs for a while. fixed a major bug just yesterday...
therealplato has joined #ipfs
therealplato1 has quit [Ping timeout: 276 seconds]
<mmuller_> yeah, I think it was you who brought up running a rkt container off of ipfs a while ago?
<zignig> yeah, that was moi. ;). with some more changes we should be able to boot as many ipfs instances as you can hold in RAM.
<zignig> good for network scale testing.
hellertime1 has quit [Read error: Connection reset by peer]
hellertime has joined #ipfs
<mmuller_> I actually have a low priority work goal for running containers off of IPFS instead of images.
<mmuller_> trying to get my intern to do it :-)
<zignig> more Stick less Carrot. >;D
<mmuller_> heh
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
therealplato has quit [Ping timeout: 255 seconds]
<ipfsbot> [go-ipfs] wking pushed 1 new commit to tk/changelog: http://git.io/vtUkh
<ipfsbot> go-ipfs/tk/changelog f381a7d W. Trevor King: CHANGELOG.md: Add a note about #1414...
<reit> zignig: decentralizing archive.is would be a good step forwards
<reit> you could advertise it as privately mirror-able, unalterable, secure, and so on
therealplato has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
williamcotton has joined #ipfs
<lgierth> ok i don't wanna do any docker anymore :/
<lgierth> i want ipv6 more than i want containerization
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<lgierth> ʕノ•ᴥ•ʔノ ︵ ┻━┻
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nessence has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
williamc_ has joined #ipfs
williamcotton has quit [Read error: Connection reset by peer]
<whyrusleeping> lgierth: whats up?
<lgierth> whyrusleeping: it doesn't properly bind ipv6 ports when running the daemon with --userland-proxy=false
<lgierth> without that flag, ipv6 connections end up in nginx as ipv4 connections from 172.17.42.1 (docker0 bridge)
<lgierth> which is useless for me because i wanna use REMOTE_ADDR for authentication
<whyrusleeping> oooooh, yeah.
<whyrusleeping> that issue bit us too a while back. we had 'addrsplosion'
<whyrusleeping> it was bad
<lgierth> i'm trying to switch it around now, binding nginx only to ::
<whyrusleeping> computers died
<lgierth> i don't care about ipv4 traffic ending up as ipv6
<lgierth> yeah ok that doesn't work at all
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<lgierth> gonna get some chocolate and then try one more thing: using the official nginx image instead of dockerfile/nginx
prosodyContext_ is now known as prosodyContext
dread-alexandr-1 has joined #ipfs
dread-alexandria has quit [Ping timeout: 246 seconds]
domanic has quit [Ping timeout: 256 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
mildred has quit [Quit: Leaving.]
williamc_ has quit [Ping timeout: 256 seconds]
Tv` has joined #ipfs
patcon has joined #ipfs
<krl> jbenet: i pushed version bump to ipfsd-ctl/master and npm published, is this an acceptable procedure?
<krl> or should we do this as pull requests too?
krl has quit [Quit: WeeChat 0.3.8]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<richardlitt> hey ya'll
krl has joined #ipfs
<richardlitt> Does anyone by any chance know of a good resource for understanding the reason for the `_` in front of vars for CouchDB // PouchDB? Working on a project with jbenet and a bit confused by it.
<lgierth> richardlitt: that's just couchdb's convention
<lgierth> to keep them separate from actual document keys
<richardlitt> So, if I'm creating a _session doc, do I really need the _?
<lgierth> yes
<richardlitt> I keep running across an `illegal_database_name` error for it.
<richardlitt> `'Name: \'_session\'. Only lowercase characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -, and / are allowed. Must begin with a letter.'`
<lgierth> the session database is a bit special, that's all i know
<richardlitt> I just don't understand why we're using _ in front of _session if that's an issue for the DB
<lgierth> my last encounter with couchdb is long ago
<lgierth> _session is a thing of couchdb
<richardlitt> ugh. Ok. Thanks!
<lgierth> afaik
<richardlitt> I'll keep looking for documentation on it
<richardlitt> At the moment this doesn't make any sense.
<lgierth> whyrusleeping: the second time i get "nope we won't do ipv6 NAT" slapped in my face. the other week it was the openwrt devs, this time docker
Encrypt has joined #ipfs
<lgierth> ლ(ಠ益ಠლ)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> lgierth: :(
hrjet has joined #ipfs
<lgierth> "now with ipv6 support" haha
* lgierth ranting
<lgierth> gonna run nginx within the host instead of the container
<grawity> nginx sets V6ONLY for :: nowadays, doesn't it
<lgierth> nginx does fine with both ipv6 and dual
afdudley0 is now known as afdudley
<lgierth> it's docker that refuses to directly pass ipv6 traffic to the container
<whyrusleeping> lgierth: tell shykes :P
<lgierth> we want ipv6 nat!!
<whyrusleeping> i've got some weirdness happening after implementing the mark and sweep GC
<whyrusleeping> every time i reinit a node, there are three random keys that get GC'ed
<whyrusleeping> i cant figure out what they are
<lgierth> what's the mark criteria?
<whyrusleeping> direct pin, recursive pin, or a child of a recursive pin
domanic has joined #ipfs
<whyrusleeping> (or one of the objects the pinner is using internally)
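The mark criteria whyrusleeping lists translate into a small traversal. The interfaces below are hypothetical stand-ins for the real go-ipfs pinner, DAG, and blockstore; only the shape of mark-and-sweep is the point.

```go
// gc.go: toy illustration of the mark criteria described above.
package gc

type Key string

type Pinner interface {
	DirectKeys() []Key
	RecursiveKeys() []Key
	InternalKeys() []Key // objects the pinner itself stores
}

type DAG interface {
	Children(Key) []Key
}

// Mark walks everything reachable from the pin sets.
func Mark(p Pinner, d DAG) map[Key]bool {
	marked := make(map[Key]bool)
	var walk func(Key)
	walk = func(k Key) {
		if marked[k] {
			return
		}
		marked[k] = true
		for _, c := range d.Children(k) {
			walk(c)
		}
	}
	for _, k := range p.RecursiveKeys() {
		walk(k) // recursive pins keep their whole subgraph
	}
	for _, k := range p.DirectKeys() {
		marked[k] = true // direct pins keep only themselves
	}
	for _, k := range p.InternalKeys() {
		walk(k) // the pinner's own bookkeeping objects
	}
	return marked
}

// Sweep deletes every stored block the mark phase didn't reach.
func Sweep(all []Key, marked map[Key]bool, del func(Key) error) {
	for _, k := range all {
		if !marked[k] {
			del(k)
		}
	}
}
```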
<wking> lgierth: I haven't used it myself yet, but I was under the impression that Docker will (since v1.5) forward IPv6. https://docs.docker.com/articles/networking/#ipv6
<lgierth> wking: forward, but not masquerade
<wking> ah, right
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<lgierth> godspeed whyrusleeping
<lgierth> i'm gonna head home
<lgierth> this dark hacker space doesn't do me good
<whyrusleeping> lgierth: travel safely!
domanic has quit [Ping timeout: 250 seconds]
jb55_ has joined #ipfs
williamcotton has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to feat/mark-n-sweep: http://git.io/vtTrv
<ipfsbot> go-ipfs/feat/mark-n-sweep 42a73fe Jeromy: dont GC blocks used by pinner...
inconshreveable has joined #ipfs
nsh has joined #ipfs
nsh has quit [Changing host]
inconshr_ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
inconshreveable has quit [Read error: Connection reset by peer]
anshukla has quit [Ping timeout: 264 seconds]
Encrypt has quit [Quit: Quitte]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
rht__ has quit [Quit: Connection closed for inactivity]
dread-alexandria has joined #ipfs
dread-alexandr-1 has quit [Ping timeout: 252 seconds]
domanic has joined #ipfs
notduncansmith has joined #ipfs
* jbenet is alive, and catching up.
notduncansmith has quit [Read error: Connection reset by peer]
<krl> o/
dread-alexandria has quit [Quit: dread-alexandria]
jibber11 has joined #ipfs
Encrypt has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<sprintbot> Sprint Checkin! [whyrusleeping jbenet cryptix wking lgierth krl kbala_ rht__ daviddias dPow chriscool gatesvp]
<krl> sprintbot: fixed most of the issues pointed out with html-menu, will wrap that up and start looking at app separation
<lgierth> sprintbot: fought an uphill ipv6 battle against docker today... i've figured it out now and will move nginx on the gateways out of the container into the host
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet> sprintbot: lots of CR + email today.
<jbenet> incomingggggg----
<jbenet> daviddias: yeah let's talk about node-ipfs today. either of the suggested times (14:00 or 15:00 PDT) works for me -- you whyrusleeping?
<jbenet> daviddias: maybe we need to write the tests indutny asked for.
<jbenet> wking: i also think we should only have blocks that are merkle-objects but i think whyrusleeping uses them for other things.
<jbenet> One way to obviate this problem is to define raw data as a merkle-object with only data, which allows us to point to arbitrary things people give us. But that would require a change in format and i'm not sure it's TRTTD yet.
<jbenet> lgierth: is there a readme somewhere for "setup a vpn with cjdns"?
<jbenet> spikebike: we'll make the ipfs hoodies available soon, once we confirm it's the design we want :) -- still need to get mine. We'll also subsidize part of the cost for contributors.
<jbenet> "<wking> reit: yeah it tries to buffer them all in memory at the moment" (re: ipfs add.) Woahhhh srs? how did i miss this? we should fix that. cc whyrusleeping
<jbenet> slothbag: yeah agreed, want to spec out what "the explorer of your dreams" would do? we've a few descriptions lying around but the more we get, the more we'll hit the common subset of desired features.
<jbenet> ehd: this is clunky right now, you have to fool the cli: make a repo.lock and make a config with the "Addresses.API" key set to the address of the remote API server. (this is crap, we should fix it, let's maybe file an issue about it)
<jbenet> mmuller_ "running containers off of IPFS instead of images." we're working on this too. take a look at https://github.com/docker/distribution/compare/master...wking:ipfs-storage-driver and the older https://github.com/ipfs/container-demos
<jbenet> wking: sorry for delay will get to review that today. BTW, i dont want to PR against docker until we're absolutely sure this is TRTTD. they're serving tons of people that require everything to be very safe + correct. while docker things def have bugs, i dont want _us_ to introduce problems. So let's build + CR + merge in our own form, and then run it in
<jbenet> production with the gateways + other machines for a while.
<jbenet> lgierth: "re docker + ipv6" that sounds annoying :( -- we can ask in #docker or something. I can also put us in touch with someone there that handles all the networking stuff. Im sure they want to fix things and make them nicer for us.
<jbenet> lgierth: oh that thread's stupid. i responded https://github.com/docker/docker/issues/11518#issuecomment-114983255
<jbenet> done---
<mmuller_> jbenet: sweet! A match made in heaven. :-)
<jbenet> mmuller_ if you guys want to work with us on all this, feel welcome to
<mmuller_> oh, I would love to work with you on all of this. It is time I lack at this point.
<dPow> Good afternoon everyone!
<lgierth> jbenet: thanks for chiming in, i've subscribed to the issue. but for now i'm gonna move nginx out of the container since i wanna match fc00::/8 addresses for authentication. PR incoming
<jbenet> lgierth: sgtm
<lgierth> jbenet: regarding cjdns-vpn, there's this for ipv6: https://github.com/hyperboria/cjdns/blob/master/doc/tunnel.md -- and this for ipv4 (from "client setup" onwards): https://github.com/berlinmeshnet/visp
<lgierth> i've asked in #docker and will stick around a bit
<lgierth> wtf
<lgierth> malte janduda
<lgierth> i went to school with that dude
<mmuller_> jbenet: I was actually considering something even more aggressive (unless I'm missing it in container-demos)
<mmuller_> specifically, having docker (or rkt) mount /ipfs/$HASH as the read-only layer and dispensing with the image entirely
<jbenet> mmuller_ yeah we want to head that direction, and something even _more_ aggressive: replace the layers entirely with an "ipfs commit" that works like git commits.
<jbenet> so the "image" is a commit hash.
<mmuller_> cool
<jbenet> lgierth: small world :)
<lgierth> yeah indeed
<jbenet> dPow: good afternoon o/
<lgierth> \o/
<lgierth> whyrusleeping: happy birthday! http://asset-3.soup.io/asset/12306/5855_37f7.gif
inconshr_ has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet> lgierth: that one is amazing
<lgierth> right?! soviet fairytale adaptations rock
<jbenet> lgierth: there's a full russian hobbit. https://youtu.be/Sl7w2Z0vGpA -- with a much more hardcore gandalf https://youtu.be/Sl7w2Z0vGpA?t=226 (if that is really what is being said)
<jbenet> (note, not real subtitles ;) )
<lgierth> haha great
<lgierth> timmy owes me money :D
<lgierth> ah i see
<lgierth> it's like the lord of the weed
<jbenet> hahah
inconshreveable has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> oh geeze
<jbenet> daviddias, mafintosh, whyrusleeping: more stream fun: https://github.com/hashicorp/yamux/issues/7
tilgovi has joined #ipfs
<jbenet> daviddias, mafintosh: yamux is _not_ spdy and _not_ http2. it's just _based on_ spdy.
<whyrusleeping> doesnt QUIC do streams in a similar manner to SPDY as well?
* jbenet checks
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet> oh another fun thing "The lower 64 bits of the sequence number may be used as part of a cryptographic nonce; therefore, a QUIC endpoint must not send a packet with a sequence number that cannot be represented in 64 bits. If a QUIC endpoint transmits a packet with a sequence number of (2^64-1), that packet must include a CONNECTION_CLOSE frame with an error
<jbenet> code of QUIC_SEQUENCE_NUMBER_LIMIT_REACHED, and the endpoint must not transmit any additional packets."
<jbenet> 2^64 is a lot of packets, so i see why they're fine with this.
<jbenet> 2^64 / 100000 pkts/s / 3600s/hr / 24hr/day / 365days/yr = >5M years.
<jbenet> :)
<whyrusleeping> i guess thats reasonable...
<spikebike> heh, ya
<spikebike> for encryption 64 bits isn't much, for anything else it is
<jbenet> one would have to be doing 100B pkts/s before this was a reasonable concern.
<whyrusleeping> yeah, 100B/s is pretty small if youre brute forcing something
<spikebike> thus the protection of IPv6, I personally have 2^68 IPs at home.
<whyrusleeping> i can buy that much power on amazon for like $100/hr
<jbenet> i guess 1T pkts/s to be conservative. when will we be in danger of hitting that? 1-2 decades?
<spikebike> in my experience for most things it's not the packet/sec that's continuously increasing as much as the bandwidth
<whyrusleeping> the answer to "when will X operations per second be normal?" is always your best guess divided by two
<jbenet> (sigh i dislike when protocols have time bombs, what's reasonable today is what people will certainly be hurt by in a few decades, and _they will still use the thing because it's so hard to change protocols_ (eg we're still on IPv4 and we'll keep using IPv4 for a long time))
<spikebike> I suspect the QUIC standard will be long gone before 2^64 is an issue
<jbenet> spikebike: yeah that's what everyone always thinks.
<spikebike> dunno, 2^64 * 100 bytes is an astonishing amount of data/bandwidth
<spikebike> I was talking to someone about zfs and their 128 bit inode numbers
<spikebike> that would be justified by 2^65 bits in a file system
<spikebike> At the time that would have been the entire planet's disk production for several years assuming that disks got bigger by a factor of 2 per year.... in a single file system
<jbenet> spikebike: we still use TCP/IPv4 today. that spec has changed many times in its history, but it's a show of the lifetime of protocols.
<spikebike> er, I think it was 2^65 sectors
<spikebike> jbenet: yeah, but tcp/IP is much lower in the stack than QUIC
<spikebike> how many billion computers depend heavily on QUIC? Apps?
<jbenet> today.
<spikebike> jbenet: would it be hard to just close the connection at 2^64-2 and try a reconnect?
<jbenet> that's what it's supposed to do, but note that is an annoying thing to deal with. it's a protocol wart. from an "occam's razor / parsimony" approach to science, this is a "more complex" thing to deal with.
<spikebike> the linux kernel had a similar problem with the jiffies counter, which was 2^32 1000ths of a second
<spikebike> their response was to make it smaller, so that rollover was triggered more often and it was easier to ensure it was handled correctly
<jbenet> there's a weird phenomenon in protocols where the "simple case today" is not actually simpler, it's simple given current assumptions and more complex given a different set of assumptions.
<jbenet> spikebike: yeah +1 to that approach. it doesnt hide the problems.
hellertime has quit [Quit: Leaving.]
<spikebike> even assuming 10G being common, and 100 byte packets, that's still over 40k years
<whyrusleeping> 40k years starting today without factoring in an increase in power over the years
<whyrusleeping> like, there are some computational tasks where its faster to wait ten years before starting the operation
<spikebike> assuming a doubling every decade that's still 160 years
<whyrusleeping> thats a better number
<spikebike> even so, machine uptimes are finite, and by nature a given node will be talking to many peers, not just one; that probably gives you another factor of 50
<spikebike> few protocols fail to evolve over a decade
<spikebike> take a closer analog, like say http
<whyrusleeping> except TCP
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> http has failed to evolve for the past 20 years, its finally trying to now
<spikebike> tcp is indeed entrenched, I wouldn't argue against it. However it seems exceedingly unlikely that something like QUIC is going to become as entrenched
<spikebike> especially when TLS and SPDY are a moving target as well.
<spikebike> just like protobufs is changing as are the dozen or so similar alternatives
<spikebike> sure if I was netflix and wanted to send all content to subscribers over a single peer <-> peer connection I'd worry about 2^64*100
<spikebike> Seems like sha256 has a much shorter lifetime than 2^64*100 bytes per connection
<jbenet> spikebike: my issue is less with this specific case, and more with the general case of introducing complexity to the protocol disguised as simplicity. you cant just implement it and put it in a thing, ship it, and expect it to be running just fine decades from now without warts appearing. "disconnect and reconnect" is an annoying thing you have to know about
<jbenet> and wrap beforehand.
<spikebike> I'm all for elegance and planning. Just seems like time/energy is better spent elsewhere and this particular limit seems very unlikely to ever be experienced.
www has quit [Quit: Leaving.]
www has joined #ipfs
<jbenet> yeah, this particular limit is not a big deal, but it does raise questions about what other timebombs may be there.
<spikebike> I do wonder if quantum computing will result in public key chaos.
<jbenet> the linux approach is the right one, it's like sha, it's well established that you'll have to switch eventually, so you can prepare for it.
<whyrusleeping> Tv`: question that I may have asked before, but why does the flatfs write to a tmpfile and then move the file into its location?
<whyrusleeping> whats the advantage over just writing the target file?
<Tv`> whyrusleeping: what if it crashes while writing
ryepdx has quit [Ping timeout: 256 seconds]
<jbenet> the stream thing is a hidden unadvertised problem, in the class of things that most users will never know about until things start failing unexpectedly, and i'm sensitive to that sort of thing
<whyrusleeping> Tv`: ah, so we dont get partially written files
<jbenet> (and sure, maybe they'll never reach it in this particular case, but it's still an annoying thing. also for context: https://github.com/hashicorp/yamux/issues/7 )
<whyrusleeping> got it
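The pattern Tv` describes, sketched in Go under the assumption of a POSIX filesystem: write to a tempfile in the same directory, fsync it, and only then rename it over the final name, so a crash never leaves a partially written (or zero-filled) object under its final name.

```go
// atomicwrite.go: minimal sketch of the tempfile-then-rename pattern.
package main

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

func atomicWrite(path string, data []byte) error {
	// Tempfile must live in the same directory so the rename stays
	// within one filesystem (and therefore stays atomic).
	tmp, err := ioutil.TempFile(filepath.Dir(path), "put-")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // harmless no-op after a successful rename
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil { // data is durable before it's visible
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// rename is atomic on POSIX: readers see the old state or the new one.
	return os.Rename(tmp.Name(), path)
}

func main() {
	if err := atomicWrite("example.data", []byte("block")); err != nil {
		panic(err)
	}
}
```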
ryepdx has joined #ipfs
<jbenet> spikebike: this is a related but different problem. the id here is 32bits, so can run into it fast. (2^32 / 2 bidirection) / 1000 streams-opened/sec / 3600 sec/hr / 24 hr/day = hit it in 24.8 days
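Both back-of-envelope calculations from the discussion, mechanized; nothing here beyond the arithmetic already quoted above.

```go
// rollover.go: the 2^64-packet and 2^32-stream-id estimates, verified.
package main

import (
	"fmt"
	"math"
)

func main() {
	const secsPerYear = 3600.0 * 24 * 365

	// 64-bit QUIC sequence numbers at 100k packets/sec.
	years := math.Pow(2, 64) / 100e3 / secsPerYear
	fmt.Printf("64-bit seqnos: %.1fM years\n", years/1e6) // ~5.8, i.e. ">5M years"

	// 2^32 stream ids, half per direction, at 1000 streams opened/sec.
	days := math.Pow(2, 31) / 1000 / (3600 * 24)
	fmt.Printf("32-bit stream ids: %.1f days\n", days) // ~24.9, the "24.8 days" above
}
```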
mildred has joined #ipfs
<spikebike> jbenet: sure, 2^32 is easy to hit in numerous cases.... 2^64 isn't just twice as big though ;-)
<jbenet> lol ofc
<spikebike> 2^64 doesn't sound that big, but it could handle giving each person on the planet (let's assume 4 billion for ease) their own complete ipv4 space.
<spikebike> and packets can't be a single byte, so you are 50-100x that
<jbenet> btw, spdy itself also uses 2^32 stream ids.
<spikebike> but handles rollover?
<whyrusleeping> PSA: the new chromebook pixels are actually really nice
<spikebike> whyrusleeping: I want one
notduncansmith has joined #ipfs
<whyrusleeping> its *very* comparable to a MBP
notduncansmith has quit [Read error: Connection reset by peer]
<spikebike> it will be quite some time before the entire disk production of the planet will be more than 2^64*100 bytes
<spikebike> and the laws of physics are going to put a damper on the doubling thing
<spikebike> from what I can tell world disk supply is something like 140M drives at (guess) around 1TB each
<jbenet> spikebike: it does not, spdy also advises breaking and reconnecting.
<jbenet> http2 kept this, and quic does the same.
<spikebike> ah, makese sense
<spikebike> connections are generally intended to be rather ephemeral
<jbenet> intended by protocols like these.
<jbenet> TCP handles rollover of seqnos just fine.
<spikebike> sadly storage bits per device over the last decade seem to be decreasing, not increasing
<jbenet> btw, i bet these engineers would've chosen something else if they werent designing specifically to multiplex browser streams.
<jbenet> browser connections are highly ephemeral
<jbenet> they would've not made these assumptions when thinking about SPDY/QUIC the way we are: connecting long-lived processes
<spikebike> phones have tiny storage, and even new $2,000 laptops have less storage than my 5 year old asus for $680.
jibber11 has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<spikebike> (like say the mentioned chromebook pixel)
<jbenet> spikebike: i think mobile storage bits are decreasing mostly because media playback is the main consumer use of storage, and media playback is now done via streaming (because library size is too big and because of copyright (i kid you not, "streaming" is understood differently legally than "download" even though both ship the same bits to your computer)).
<whyrusleeping> yeah, because streaming implies ephemeral
<spikebike> true
<whyrusleeping> and download implies permanent
<jbenet> but give it a couple years, netflix and co will be doing "library preloading" into home devices.
<jbenet> it's already happening at the ISP level.
<jbenet> because media resolution is also increasing. 4K video is not small.
<spikebike> yeah, some devices even allow that today
<jbenet> so netflix is (right now) working on predicting what you will watch and shipping it to you optimistically
<spikebike> I think play.google.com allows that today... download for offline viewing
* daviddias arrives home
<spikebike> yeah and netflix has a 45x6TB widget they will ship to any ISP for free
<spikebike> my asus tablet came with transformers on it
<daviddias> jbenet whyrusleeping I'm good to talk on node-ipfs now, are you available?
<spikebike> er a license... and a click to download and keep
<whyrusleeping> Tv`: would it be cheaper to write to the target file, and then stat it afterwards to check the correct number of bytes were written?
<whyrusleeping> deleting it on failure
<Tv`> whyrusleeping: it might also contain zeroes
<whyrusleeping> daviddias: my internet connection ATM is spotty, can chat via text
<whyrusleeping> Tv`: like a sparse file?
<spikebike> whyrusleeping: it's fastest to preallocate, standard procedure for most downloads is to download it to a different name and then rename/move (a very cheap operation) when done
<spikebike> like say rsync, most torrent clients, etc.
<Tv`> whyrusleeping: like a file that got space allocated to it, but the write to that space never hit the disk
<whyrusleeping> ah, okay
<jbenet> daviddias i'm available for video or text whenever
<whyrusleeping> i'm working on implementing a batch transaction interface for the datastore
<spikebike> not to mention the numerous failure modes (full disk, out of memory, process hung, reboot, panic, etc.)
Encrypt has quit [Quit: Quitte]
<daviddias> whyrusleeping what about in 30 mins or more? I'm thinking in going to the office now.
<jbenet> whyrusleeping: suggest only doing it for put. get batch semantics are hard to achieve. (makes it so you have to lock the datastore completely and not allow other interleaved ops. e.g. a naive batch put on fsdatastore just delays the sync call, but if you also want to batch reads, you have to add a lock and prevent other puts too.)
<jbenet> or... maybe just relax the "batch get" definition to not provide consistent view of the data.
<whyrusleeping> jbenet: yeah, i was thinking the latter
<spikebike> torrent clients typically use a different name (or directory); once the download completes they checksum the entire file (even though every downloaded block had a checksum), and only rename/publish/move it once the entire checksum is verified
<whyrusleeping> some things like s3 are going to have good gains on batch get
tilgovi has quit [Ping timeout: 256 seconds]
<jbenet> i.e. "go put a, A1 ; go batch( get a ) ; go put a, A2" may retrieve A1 or A2.
<whyrusleeping> yeah
<jbenet> providing consistency should be another thing (interface) altogether.
<whyrusleeping> also, i dont know how good of an improvement we're going to see with batch get on the flatfs
<whyrusleeping> the directory syncs are all different directories
pfraze has quit [Remote host closed the connection]
<whyrusleeping> unless two keys happen to be in the same bucket (unlikely)
<jbenet> whyrusleeping: i'd punt on it for now, just solve the problem at hand. can always fix it later (if its hidden behind the batch interface).
<whyrusleeping> well, i'm thinking its not going to be worth our time now
<jbenet> still thinking of doing b := ds.Batch(); b.Put(). b.Put(). b.Get(). b.Commit() ?
<whyrusleeping> yeah, thats the interface i have written up
<whyrusleeping> i have it implemented
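A sketch of the interface shape jbenet spells out just below (b := ds.Batch(); b.Put(); b.Get(); b.Commit()), with the relaxed semantics agreed in this exchange: Gets inside a batch promise no consistent view, and a pass-through implementation (worst case: same perf as no batch) is still correct. Names are illustrative, not the shipped go-ipfs API.

```go
// batch.go: illustrative datastore batch interface.
package datastore

type Key string

type Datastore interface {
	Put(Key, []byte) error
	Get(Key) ([]byte, error)
	Batch() Batch
}

type Batch interface {
	Put(Key, []byte) error
	Get(Key) ([]byte, error) // no consistent view: may observe interleaved Puts
	Commit() error           // syncs/flushes may be delayed until here
}

// passthroughBatch is the worst case mentioned below: identical performance
// to not batching at all, which is still a correct implementation.
type passthroughBatch struct{ ds Datastore }

func (b passthroughBatch) Put(k Key, v []byte) error { return b.ds.Put(k, v) }
func (b passthroughBatch) Get(k Key) ([]byte, error) { return b.ds.Get(k) }
func (b passthroughBatch) Commit() error             { return nil }
```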
<jbenet> yeah so it's ok if the b.Get()'s dont do anything special/different.
<whyrusleeping> issue is that we arent really saving any dir sync calls
<jbenet> worst case same perf as not doing a batch.
<jbenet> and if we magically fix it later, that's cool too.
<jbenet> oh really?
<whyrusleeping> yeah, since the flatfs is laid out /QmXX/YY/ZZZZZ
<jbenet> right?
<whyrusleeping> the YY dir is the one being synced
<jbenet> isnt there a flush for ZZZZ and YY?
<jbenet> both*
<whyrusleeping> ZZZZ is the file
<whyrusleeping> YY is getting synced
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> thats the file
<jbenet> yeah i know, but sync has more overhead than just the thing it touches
<whyrusleeping> no?
<jbenet> (at least on some kernels / fs-es)
m0ns00n has joined #ipfs
<m0ns00n> ;)
<whyrusleeping> sync'ing a file just syncs the file AFAIK
<whyrusleeping> which only touches that inode
<whyrusleeping> dir syncs happen on line 83 and 134
<jbenet> flushing the relevant buffer cache pages includes more than just touching _those_ pages.
<jbenet> it may cause a disk sync too, there's contention somewhere
<jbenet> it's not externalized in the client interface, but there is contention at various spots all the way down to the hardware.
<jbenet> like, disk caches are synced i think
<jbenet> (would depend on the disk being smart enough to allow concurrent calls for different sectors, etc.)
<whyrusleeping> what i *want* to do is make the syncfs syscall
<whyrusleeping> i could do syscall.Syscall(syscall.SYS_SYNCFS)
pfraze has joined #ipfs
* whyrusleeping ponders
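For reference, the syscall being pondered, as a Linux-only Go sketch. Note the caveat jbenet raises just below: syncfs(2) flushes the entire filesystem containing the fd, not just our own writes.

```go
// syncfs.go: Linux-only sketch of the syncfs syscall.
package main

import (
	"fmt"
	"os"
	"syscall"
)

// syncFilesystemOf issues syncfs(2) on the filesystem containing f:
// one syscall instead of one fsync per file, but it flushes everything
// dirty on that whole filesystem.
func syncFilesystemOf(f *os.File) error {
	_, _, errno := syscall.Syscall(syscall.SYS_SYNCFS, f.Fd(), 0, 0)
	if errno != 0 {
		return errno
	}
	return nil
}

func main() {
	f, err := os.Open(".") // any fd on the target filesystem will do
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println("syncfs:", syncFilesystemOf(f))
}
```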
<daviddias> re: node-ipfs - For the sync up: the number one priority is to get interop. Thanks to Indutny's new module `spdy-transport`, we are very close to having a full spdystream implementation in Node (and from that a node-peerstream). I've been playing with it to see how it behaves using both implementations together, but the node implementation throws an error on
<daviddias> the frame parsing (framer.js #L133). Indutny is looking for help on building more tests https://github.com/indutny/node-spdy/issues/208#issuecomment-113980811 which I'm happy to help with, of course :)
<jbenet> whyrusleeping: that would sync the entire filesystem :/
<jbenet> whyrusleeping: is there a "sync" version that takes a list of fds?
<whyrusleeping> jbenet: nope.
<jbenet> daviddias: may be good to help a bit on the tests
<whyrusleeping> i can just make multiple fsync calls
<jbenet> whyrusleeping: doesn't that obviate the batch then?
<jbenet> are any coalesced?
<jbenet> daviddias: is the error between node-spdy-transport and go-spdystream ?
<jbenet> daviddias: or node-spdy-transport and itself?
<daviddias> re: node-ipfs - As for the DHT, I've started understanding how it was implemented (thread https://github.com/ipfs/go-ipfs/issues/1396) and started hacking something on a separate branch. Since there will be changes to how it is implemented in go-ipfs too, and without the final spec, I guess my best shot is really to implement it as I understand it from go-ipfs,
<daviddias> writing a spec as I go, so that jbenet can recommend/guide me on how it actually should be implemented (for now or a future version)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<daviddias> jbenet the error is between go-spdystream (client) and node-spdy-transport (server)
<daviddias> if I go with go-spdystream (server) and node-spdy-transport (client), it just stays silent/blocks (but that might be because I'm missing some options or something for the client; they are still not documented and I might be missing something in the code)
<jbenet> daviddias: re dht, that sounds good to me. sorry on delay re spec. yeah maybe try putting your impl thoughts on a doc as we go and we can synthesize that into the spec as we go
<whyrusleeping> jbenet: i think that some will be coalesced, the disk may decide to cache a bunch at once, making subsequent ops noops
<jbenet> whyrusleeping: i'd ask Tv` what to do here, he know flatfs + filesystems better than me.
<jbenet> whyrusleeping: ah yeah, if you issue the writes before any of the syncs, the latter syncs will likely just noop.
<Tv`> having two conversations, i can answer very pointed questions but can't read scrollback right now
<jbenet> daviddias: and node-transport works with itself just fine, right?
<whyrusleeping> Tv`: so, we're working on implmenting a batched put for the datastore
<whyrusleeping> with the goal of improving perf by reducing syscall costs
<jbenet> daviddias: i think the tests may reveal the problem. wonder if we can PDD this and test both go-spdystream and node-spdy-transport?
<jbenet> not sure if it's a good time investment though.
<whyrusleeping> daviddias: do you have a place for me to start looking at go-spdy/node-spdy interop?
<Tv`> whyrusleeping: sweet. you can coalesce the dir syncs when they happen to be in the same dirs, the OS should coalesce all the syncs if you just first do writes and then do syncs
<whyrusleeping> Tv`: so save all the fd's until the end and loop over them, calling Sync()?
<Tv`> whyrusleeping: there's no better general api for coalescing syncs, currently
<Tv`> whyrusleeping: yeah
<daviddias> jbenet I didn't manage to get node-spdy-transport to work as a client yet (just as a server), I believe it might be due to some options I'm missing (that is why having function arguments instead of an options obj sometimes works best, as documentation). I have to ping Indutny to document that.
<whyrusleeping> mmkay, sounds good
<daviddias> whyrusleeping I pushed a test here - https://github.com/diasdavid/spdy-cross-test
<Tv`> whyrusleeping: you're gonna run into the limits of the syscall api pretty soon, and that's why i kept making noises about arena storage
<jbenet> guessing the tests here dont do much yet: https://github.com/indutny/spdy-transport/blob/master/test/both/transport-test.js ? (are these the ones indutny asked for help with?)
<Tv`> whyrusleeping: with arena storage, you can transform this whole thing into essentially the same problem as writing a journal
<Tv`> but it's more complexity
<daviddias> jbenet: Indutny asked for spdy parser tests like these https://github.com/indutny/spdy-transport/blob/master/test/http2/parser-test.js
<daviddias> which means that the parser in spdy has no test yet (kind of makes sense it breaks, I guess)
mildred has quit [Quit: Leaving.]
<daviddias> jbenet I agree that helping with the tests might identify the problem and be all that we need for it to interop. and if there are still impl differences, PDD all the way :D
<whyrusleeping> Tv`: arena storage?
* whyrusleeping googles
<Tv`> whyrusleeping: multiple objects in one file
<Tv`> whyrusleeping: probably with a header that contains a hash, etc
<whyrusleeping> ooooh, yeah. that would be nice
<jbenet> Tv` oh on our end? almost packfiles? (i guess minus the compression)
<Tv`> as an alternative to flatfs
<jbenet> right
<Tv`> with N times the complexity and M times the performance (M<<N, most likely)
<Tv`> and it gets worse because we're talking about the generic ipfs datastore, *not* ipfs objects, so it's not a CAS, and now you could end up with multiple versions of the same object in different arenas
<Tv`> i'm kinda hoping to do that for bazil at some point, but that's a pure CAS, so the problem is a lot simpler there
<whyrusleeping> Tv`: personally, i'd not mind having a CAS store for ipfs
<whyrusleeping> the datastore abstraction is nice... but perf is nicer
m0ns00n has quit [Quit: Leaving]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<ehd> jbenet: thanks for getting back. if it helps i'll file an issue for easier ipfs client mode
<jbenet> ehd: yes please. i think something like `ipfs --api-addr=/ip4/... <cmd>` or `IPFS_API_ADDR=/ip4/... ipfs <cmd>` would be ideal
<jbenet> (env var easier to implement but more error prone.)
<jbenet> (and more convenient)
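A sketch of how the proposed selection could resolve, assuming the `--api-addr` flag and `IPFS_API_ADDR` variable from jbenet's suggestion above; neither is a shipped go-ipfs feature at this point, and the precedence shown is just one plausible choice.

```go
// apiaddr.go: hypothetical client-mode address selection.
package main

import (
	"flag"
	"fmt"
	"os"
)

func apiAddr() string {
	addr := flag.String("api-addr", "", "multiaddr of a remote API server")
	flag.Parse()
	if *addr != "" { // explicit flag wins: least error prone
		return *addr
	}
	if env := os.Getenv("IPFS_API_ADDR"); env != "" {
		return env // convenient, but easy to forget it's set
	}
	return "" // fall back to the Addresses.API key in the local repo config
}

func main() {
	fmt.Println("api:", apiAddr())
}
```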
mildred has joined #ipfs
inconshreveable has quit [Ping timeout: 256 seconds]
mildred has quit [Ping timeout: 246 seconds]
notduncansmith has joined #ipfs
<jbenet> thanks ehd
notduncansmith has quit [Read error: Connection reset by peer]
<ehd> thank you :)
<jbenet> or rather: "thanks evil hacker dude" much better that way
<ehd> hah. by the way, grncdr and i have put up VPSes as our main ipfs nodes for working on the editor. it didn't work too well behind our probably terrible NAT setups
<jbenet> ehd: oh sorry, let me merge that
<ehd> merge what?
<jbenet> so hetzner stops thinking ipfs is a botnet
<whyrusleeping> lol...
<spikebike> ehd: do your VPSs have ipv6?
<ehd> haha, mine is on DO
<ehd> spikebike: disabled it. did not want the docker ipv6 pain.
<ipfsbot> [go-ipfs] jbenet deleted feat/filter at 0bf6b39: http://git.io/vtI9X
<spikebike> ehd: DO VPSs are all docker?
<jbenet> whyrusleeping ehd or someone want to write up an example and link to it from https://github.com/ipfs/go-ipfs/issues/1226
<jbenet> ehd: wait, DO gave you a netscan?
<ehd> @jbenet: no
<jbenet> ehd: hang on, wait, i'm assuming things. why "didn't work too well"?
<ehd> spikebike: no. i used docker-machine to create a fresh ubuntu based host via DO's API
<spikebike> ehd: ah, gotcha, I'm pondering moving to a different VPS provider to get ipv6
<ehd> jbenet: yeah, things didn't work so well as in the ipfs daemon needed restarts, we couldn't connect to each other or discover each other, and his fantastic router crapped out a few times
<spikebike> (and would run a IPFS node on it)
<jbenet> ehd: really? i run nodes on DO, our gateways are on DO
<ehd> spikebike: DO officially has ipv6 with plain ubuntu or whatever you like
<jbenet> ehd: we use docker for the gateways, and i run my node with initd script
<ehd> jbenet: we just moved to DO. before we were running locally on our home networks
<ehd> well, I did, no idea where he's hosting his
jbenet changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/jbenet/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- code of conduct at https://github.com/ipfs/community/blob/master/code-of-conduct.md -- sprints + work org at https://github.com/ipfs/pm/ -- community info at https://github.com/ipfs/community/
<whyrusleeping> ;w
<whyrusleeping> yes, write this buffer...
<jbenet> whyrusleeping winky duckface?
<whyrusleeping> nope, just my fingers trying to save a vim buffer
<ehd> neat
<ehd> docker-machine sets up the docker daemon as auto-starting via upstart. then i just created a container for ipfs with a restart policy of always. this will work great until my droplet is destroyed at some point
kbala has joined #ipfs
<jbenet> yeah that should be fine.
<ehd> setting up my own VPS, for my own stuff reminded me of when i was young and got my first root <3
<ehd> now my mobile phone's internet is faster than the fiber connection at the office. where did we go wrong? :D
<spikebike> ehd: if only 30 minutes of office traffic didn't zorch your battery ;-)
<spikebike> or in many cases zorch your monthly quota
<ehd> 1–2 minutes at 150mbit/s are enough to burn through it :)
tilgovi has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
inconshreveable has joined #ipfs
jb55- has joined #ipfs
jb55_ has quit []
therealplato has quit [Ping timeout: 272 seconds]
patcon has quit [Ping timeout: 252 seconds]
<Tv`> whyrusleeping: alright i'm back, if you have more stuff you want to talk about / if i didn't really respond earlier
<Tv`> ehd: my experience has been that typical LTE is faster than typical Time-Warner Cable.. :-/
<Tv`> also, my phone tethering wifi is called Hot Pocket, for a reason
<Blame> warning WIP: http://rawgit.com/BrendanBenshoof/c8bfbe19cdf93aae02eb/raw/c23c8129e2e0fea00adbfb6a79e26f27da27d978/chat.html You will need to resize the textarea because I suck at making pretty things in html
therealplato has joined #ipfs
<Tv`> whyrusleeping: i think the route toward pure CAS is what jbenet has been talking about as "ipfs objects for everything", where in the end all the mutable state you need is a singular root hash that's the current state of the node and the rest sits in CAS, or something like that
<whyrusleeping> Tv`: i think i got all my questions answered, but just a sanity check:
notduncansmith has joined #ipfs
<whyrusleeping> do all tempfile writes, then sync and close all tempfiles, then rename all tempfiles, then sync all the dirs, then sync the root dir
notduncansmith has quit [Read error: Connection reset by peer]
domanic has quit [Ping timeout: 264 seconds]
<Tv`> whyrusleeping: close/rename order is irrelevant, for dir sync you probably want a map[string]struct{} or something to do just one sync even if multiple files were created, and the root dir sync is only really needed if you created new dirs underneath
<Tv`> whyrusleeping: tl;dr "yes"
<whyrusleeping> okay, sweet
<whyrusleeping> you can sync after renaming?
<Tv`> that would ruin the point
<whyrusleeping> er, close
<Tv`> yeah it doesn't matter
<whyrusleeping> oh, odd, but alright
<Tv`> it's perhaps simpler to think of when you close first, then rename
<Tv`> but it's just an open file, you can rename open files on unix
jibber11 has joined #ipfs
<Tv`> then again, no guarantees for windows
<whyrusleeping> lol
<Tv`> whyrusleeping: collecting the dir basenames into a map[string]*os.File seems like a good idea
<Tv`> then at the end you can range that and sync the ones there
<Tv`> not sure if that'll really help ;)
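The sanity-checked sequence, with Tv`'s map trick, as one hedged Go sketch: write and fsync every tempfile first so the kernel can coalesce, rename everything into place, then sync each affected directory exactly once. Not the real flatfs code, and error paths leave tempfiles behind for brevity.

```go
// batchput.go: sketch of the batched write sequence discussed above.
package main

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

func batchPut(files map[string][]byte) error {
	tmps := make(map[string]string)   // final path -> temp path
	dirs := make(map[string]struct{}) // dirs that need one sync each

	// 1. Write + fsync + close all tempfiles first, so the later
	//    syncs can coalesce in the kernel.
	for path, data := range files {
		dir := filepath.Dir(path)
		tmp, err := ioutil.TempFile(dir, "put-")
		if err != nil {
			return err
		}
		if _, err := tmp.Write(data); err != nil {
			tmp.Close()
			return err
		}
		if err := tmp.Sync(); err != nil {
			tmp.Close()
			return err
		}
		tmp.Close()
		tmps[path] = tmp.Name()
		dirs[dir] = struct{}{}
	}

	// 2. Rename everything into place (close/rename order is irrelevant).
	for path, tmp := range tmps {
		if err := os.Rename(tmp, path); err != nil {
			return err
		}
	}

	// 3. One directory sync per directory, however many files landed there.
	for dir := range dirs {
		d, err := os.Open(dir)
		if err != nil {
			return err
		}
		if err := d.Sync(); err != nil {
			d.Close()
			return err
		}
		d.Close()
	}
	return nil
}

func main() {
	batchPut(map[string][]byte{"example.data": []byte("block")})
}
```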
<Tv`> i don't expect it to help much on ext4
<Tv`> ext4 sync tends to mean "sync everything in journal up to this point"
<whyrusleeping> lol
<Tv`> btrfs can reorder and sync only the parts you want
<Tv`> but of course it suffers from the mad CoW disease
<Tv`> and sometimes goes off to count beans for a few seconds
<Tv`> and my usual summary of xfs is that it's a freight train: never really that fast, but very steady speed
jibber11 has quit [Client Quit]
<whyrusleeping> i love my zfs
jb55- is now known as jb55
<Tv`> i find btrfs to be very nice for a workload that has regular idle moments (so its bookkeeping can catch up)
<Tv`> as in, my computers
<Tv`> for ceph, btrfs was regularly the fastest for a while, until it needed to do some catch-up work
<Tv`> so i wouldn't recommend it for a 100% busy system
williamcotton has quit [Ping timeout: 256 seconds]
<whyrusleeping> yeah, but most workloads are fairly bursty no?
<Tv`> exactly
<Tv`> btrfs does fine when not pushed too hard; xfs is best when really pushed consistently (it was designed to cope with e.g. streaming video to disk)
* whyrusleeping decides he should probably kill ext4 on his laptop
<Tv`> ext4 was always somewhere in the middle between those: fairly long delays on sync, but not many hiccups on the reading side
ruby32 has quit [Quit: ruby32]
jibber11 has joined #ipfs
<okket> I do not trust btrfs: a few months ago I had an unclean partition that caused my system to hang during boot, sometimes. it took weeks until I found out that an fsck would solve this problem. i never had such problems with ext2/3/4 or even zfs.
<whyrusleeping> okket: i've seen that on my ext4 partition before
<whyrusleeping> once
<spikebike> yeah, btrfs is a very cool idea, but I'll wait till a major linux distro switches to it for 6 months
<spikebike> if people are screaming bloody murder then it will be good enough for me
<whyrusleeping> spikebike: its one of the default options in ubuntu since at least 14.04
<whyrusleeping> and fedora
<whyrusleeping> since like 19
<spikebike> By default I mean use ISO -> accept defaults and end up with btrfs /
<whyrusleeping> like, it wont do it if you just click continue a bunch, but its in a dropdown on the installer
<whyrusleeping> yeah...
<spikebike> fedora has claimed they will do that for the n+1 version for some 5 versions or so
<spikebike> one of the major docker folks recently gave up on btrfs which raised a flag with me
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
jibber11 has quit [Client Quit]
<whyrusleeping> spikebike: yeah, i'm not sure what the docker thing was
<okket> imho ext2 for /boot, ext3/4 for /root, and zfs (maybe xfs, never tried it) for big data / raid
<whyrusleeping> zfs is very nice if youre running raid
jibber11 has joined #ipfs
<whyrusleeping> just scrub it every so often
<spikebike> zfs is designed for jbod though
<whyrusleeping> their raidz6 implementation is very nice
<spikebike> and yes all raids should be scrubbed regularly
<whyrusleeping> but yeah, they work well with jbod
<spikebike> whyrusleeping: heh, dunno, the whole "raidz6 is never faster than a single disk" thing bothers the hell out of me
<whyrusleeping> its faster than a single disk
<whyrusleeping> i have five WD reds and i get ~510MB/s reads
<spikebike> depends on the workload
<spikebike> try random reads
<okket> yes, mirror > raid, you can achieve redundancy with enough mirrors without sacrificing speed / latency
<spikebike> I noticed the lack of scaling, tracked down one of the developers' blogs, and they said yeah, performance is never significantly better than a single disk
<spikebike> their answer was multiple raidz6's per pool to actually increase performance
<okket> also resilvering is way way faster with mirrors
<spikebike> the main problem is zfs doesn't trust the disk checksums, so even a read of a single sector has to hit every disk and check the checksum
<spikebike> so for sequential that's not a big deal, you hit all disks anyways
<whyrusleeping> not every disk, but just the disks with parity for that read
<spikebike> but for random reads it sucks
<spikebike> whyrusleeping: the parity calculation for raidz requires reading N-1 disks
<spikebike> raidz2 = n-2
<whyrusleeping> mmm, right
<spikebike> so basically it doesn't scale at all
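A rough worked version of the scaling argument above: if one disk serves R random-read IOPS, a single raidz2 vdev of n disks still serves roughly R, because every read has to touch n-2 data disks to verify the stripe checksum; a pool of m such vdevs serves roughly m*R, which is why multiple raidz vdevs per pool is the suggested way to actually scale performance.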
<jbenet> whyrusleeping what about `--dry-run` instead of `--only-hash` ?
<jbenet> --dry-run is common to see in things
<whyrusleeping> jbenet: i proposed that the other day and someone thought it was a bad idea
<whyrusleeping> wking: was that you?
<whyrusleeping> either wking or cryptix
jibber11 has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<whyrusleeping> jbenet: my batch put for flatfs gets close to a 3x speedup on putting 20MB of 100k files
<whyrusleeping> (all at once)
<spikebike> whyrusleeping: I support HPC clusters (over a dozen) and the large number of users/apps turns even fairly sequential workloads into random access. So the difference between ext4+software RAID vs zfs is pretty substantial for my building blocks (16, 24, or 36 disks).
tilgovi has quit [Ping timeout: 248 seconds]
<jbenet> **shrug** ok.
<spikebike> I thought it was the linux port, but tracked it down to the checksum thing. I never trusted the old disk checksums much (1 in 512 chance of random corruption not being detected), but the newer 4k disks have a quite nice checksum.
<spikebike> a friend verified it on his x86 solaris box and got similar numbers
<whyrusleeping> huh, weird
<wking> whyrusleeping: I don't remember mentioning anything about --dry-run, and I can't grep it in my IRC logs or GitHub searches
<ipfsbot> [go-ipfs] jbenet closed pull request #1417: add option to only hash input (master...feat/only-hash) http://git.io/vLFt6
headbite has quit [Remote host closed the connection]
jibber11 has joined #ipfs
<whyrusleeping> jbenet: should we keep read batching as part of the interface?
headbite has joined #ipfs
inconshreveable has quit [Ping timeout: 248 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<spikebike> ah, better article on coreos + Btrfs
<jbenet> whyrusleeping: i think so? but it's ok if it doesnt do anything.
pfraze has quit [Remote host closed the connection]
<whyrusleeping> yeah, it doesnt need to do anything special
<whyrusleeping> but for the s3 datastore
jibber11 has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<whyrusleeping> it will be really nice to have
<Tv`> spikebike: the docker use case was ridiculous amounts of CoW cloning though; older btrfs versions have known problems where 1) they consume more metadata space than they reserve, resulting in ENOSPC 2) hang for ~2 minutes pruning old CoW metadata when removing subvolumes
<Tv`> that's not something that triggers in normal use, and my understanding is latest versions behave a lot better too
<whyrusleeping> jbenet: i wish the datastore interface was for []byte's
<whyrusleeping> i know we've had this conversation before
<whyrusleeping> but every time i touch the code, i want it to just be []bytes
<Tv`> whyrusleeping: +1 interface{} is silly when realistically things are written out anyway
<whyrusleeping> jbenet: we outnumber you. i guess that means we win and get to change it :P
<Tv`> passing anything non-[]byte is just asking for an explosion from a type assertion
<wking> jbenet: You still need to push your module-overview Gist as an ipfs/specs PR ;)
www1 has joined #ipfs
<whyrusleeping> jbenet: you still need to design the 'i love whyrusleeping' poster
<jbenet> whyrusleeping Tv` in memory only datastores that use straight up objects and never serialize to bytes should be possible. most "middle-ware" or "shim" datastores dont care. if you need []byte somewhere, do that where you need to, that's fine. export two interfaces, one with interface{} and one with []byte.
dread-alexandria has joined #ipfs
<jbenet> whyrusleeping: i dont recall signing up for that.
<jbenet> but fine.
<Tv`> jbenet: how many in-memory Datastores does ipfs use?
<whyrusleeping> oh, well okay
<jbenet> Tv` i use some in tools i use.
<whyrusleeping> so, we can keep go-datastore the way it is
<whyrusleeping> and then we can *also* have ipfs-datastore
<whyrusleeping> that uses bytes
<Tv`> frankly, unvendoring go-datastore and slapping a simpler version in ipfs itself would be my route
www has quit [Ping timeout: 252 seconds]
<jbenet> whatever it is, it will be vendored. we're ripping out all these things from the main repo.
<jbenet> we dont have to use go-datastore, but we'll need an equivalent thing. we need to be able to swap out datastores and add things like S3 and so on easily. the requirement for a pluggable interface does not change. given that, there's really only a few ways to make a "get/put" interface.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet> and im not going to waste much time discussing this now.
<jbenet> whyrusleeping: just export another interface for []byte if you need it from go-datastore. i strongly suspect you dont **need** it.
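A sketch of the "export two interfaces" idea jbenet describes, using hypothetical names rather than the real go-datastore API — one generic interface for in-memory and shim datastores, one []byte-specialized interface for backends that always serialize (leveldb, S3, flatfs):

    package datastoredemo

    // Key is a simplified stand-in for go-datastore's key type.
    type Key string

    // Datastore is the generic pluggable get/put interface.
    type Datastore interface {
        Put(key Key, value interface{}) error
        Get(key Key) (interface{}, error)
    }

    // BytesDatastore is the []byte variant; implementations can satisfy
    // both, and callers that need raw bytes depend on this one.
    type BytesDatastore interface {
        Put(key Key, value []byte) error
        Get(key Key) ([]byte, error)
    }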
<whyrusleeping> i dont need it
<jbenet> wking: ack will do
<whyrusleeping> but i'd really like the batch transaction.Put method to not have to return an error
<whyrusleeping> which it will need if it has to cast interface{} to []byte
<whyrusleeping> (leveldb's batching uses []byte)
<whyrusleeping> maybe it should just return an error...
<whyrusleeping> meh
<jbenet> it should return an error.
<whyrusleeping> mehhh
<whyrusleeping> fine
<jbenet> the transaction.Put should match the interface exactly.
<jbenet> it should be castable to a Datastore
<whyrusleeping> nope
<jbenet> (except query)
<whyrusleeping> transaction.Get cant really return a value
<whyrusleeping> so i have it accept a callback
<jbenet> it can return a channel with a value?
<jbenet> **shrug**
<jbenet> it doesnt have to
<jbenet> it would be nice if i could but whatever
<whyrusleeping> idk, do you prefer a channel? that would get really weird i think
<jbenet> at some point we're just fighting go's weak types and that's no fun
jibber11 has joined #ipfs
jibber11 has quit [Client Quit]
<jbenet> i think a "type GetResult struct { Value interface{}; Err error }" and having transaction.Get return "<-chan GetResult" may be simpler than a callback.
<jbenet> callbacks are... callbacks.
jibber11 has joined #ipfs
<whyrusleeping> hmmmmm
<jbenet> no idea though, you're making this, do whatever.
<whyrusleeping> well, i want the CR process to not take forever, so i'd like to hash things out before getting to that point
<jbenet> fair enough
<whyrusleeping> my concern with returning a struct thing is that the user then has to import that package if they want to do anything other than wait on the channel then and there
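A sketch of the channel-returning transaction.Get floated above, reusing the Key type from the earlier sketch; names are assumptions, not a committed API. whyrusleeping's caveat stands: doing anything beyond an immediate receive means importing the package that defines GetResult:

    // GetResult pairs a value with an error, as jbenet suggests.
    type GetResult struct {
        Value interface{}
        Err   error
    }

    // Transaction batches operations; Put matches Datastore.Put exactly,
    // including the error return discussed above.
    type Transaction interface {
        Put(key Key, value interface{}) error
        Get(key Key) <-chan GetResult
    }

    // the "wait on the channel then and there" usage:
    //   res := <-txn.Get(key)
    //   if res.Err != nil { /* handle */ }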