whyrusleeping changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- code of conduct at https://github.com/ipfs/community/blob/master/code-of-conduct.md -- sprints + work org at https://github.com/ipfs/pm/ -- community info at https://github.com/ipfs/community/
<ion> yay
<lgierth> +++
<spikebike> hrm, about 10% of my "ipfs swarm peers" results in Error: read tcp 127.0.0.1:47038->127.0.0.1:5001: use of closed network connection
<whyrusleeping> >.>
<ion> whyrusleeping: Will 0.4.0 have µTP? That would probably solve my bufferbloat problem better than TCP with a bandwidth limit setting.
<whyrusleeping> thats... interesting
<spikebike> if I run it once a minute, often 10-20 work, then 1 or 2 in a row die
<whyrusleeping> ion: we are going to be using UDT
<whyrusleeping> there isnt a good enough utp implementation in go yet
<ion> whyrusleeping: Will 0.4.0 have UDT? :-)
<whyrusleeping> ion: UDT is slated for 0.3.8 :)
<ion> whyrusleeping: Awesome
voxelot has quit [Ping timeout: 250 seconds]
<ion> I’m curious and haven’t seen this mentioned in anything i’ve read about IPFS so far: how will interplanetary communication be handled when you occasionally have a link (with a latency measured in minutes) between two planets and people would like to communicate between them?
<whyrusleeping> ion: it shouldnt be too different honestly, will just need to have higher timeouts
<whyrusleeping> we may want to include some extra delays before sending data through bitswap to see if we get block cancel
<whyrusleeping> and scale that based on the latency of the link
<whyrusleeping> but thats just an optimization
<ion> ok
<spikebike> found a cool paper about sorting peers into different DHTs based on latency
<whyrusleeping> spikebike: coral?
<spikebike> I don't think so, umm.
<spikebike> ah, nevermind, yeah, I think it's coral
jamescarlyle has joined #ipfs
<spikebike> covers 3 DHTs, one planet wide.. maybe we'll need a 4th, solar system wide
Spinnaker has quit [Ping timeout: 252 seconds]
RX14 has quit [Ping timeout: 252 seconds]
RX14 has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
jnerule has quit [Ping timeout: 252 seconds]
sbruce has quit [Ping timeout: 252 seconds]
cblgh has joined #ipfs
jamescarlyle has quit [Ping timeout: 246 seconds]
lithp has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
samiswellcool has quit [Quit: Connection closed for inactivity]
<ion> Do you envision ISPs having IPFS cache boxes for their customers?
<achin> i wonder if, with a udp stack, you can more easily simulate network loss and latency (when compared to TCP)
<achin> ion: i assume that would require IPFS to be smart enough to pull from closer nodes before pulling from far away nodes
<whyrusleeping> achin: thats definitely planned, and right now it happens just as a result of the way bitswap is naively written
<ion> achin: Yeah, it would require some kind of cooperation between the nodes and the cache box.
<whyrusleeping> whoever responds first generally is the one who sends the block
<spikebike> the other DHT optimization paper I was looking at was this one: http://people.kth.se/~rauljc/p2p11/jimenez2011subsecond.pdf
jfis has joined #ipfs
<achin> maybe over time, an IPFS node can build up a sense of how responsive/fast a given peer is, and try to prefer it over others. this might naturally yield a behavior where fast cache boxes get preferred (as well as other lan-local nodes)
<fazo> yeah, another issue is that the fastest machine to answer (low latency) may have a lot less upload bandwidth than another machine with higher ping
<achin> (this seems like an interesting research topic. i wonder if there are any papers on this)
sbruce has joined #ipfs
<fazo> yeah bitswap is gonna be hard to design properly
<fazo> but even now the network works fine though
FunkyELF has quit [Quit: Leaving]
<achin> ion: it seems like your cache use-case is not too dissimilar from your interplanetary case -- there might be only a few nodes between Earth and Mars that have a good connection. they would have to "proxy" all traffic between the planets
hellertime has joined #ipfs
<achin> so you either need a configuration option to say "use this gateway node", or design the peer-to-peer heuristics so that it can naturally discover any bottlenecks in the topology
<fazo> or just set up mars with servers with huge storage to cache everything
<fazo> so that everything downloaded from earth using ipfs gets cached
<fazo> assuming mars is the minority.
<ion> fazo: Well, I have been bitten by the naive bitswap implementation. The daemon saturates my upstream bandwidth (initial swipe typo: my BBQ anxiety) at 250-ish kB/s while actual content makes it from A to B at 30 to 60 kB/s.
<spikebike> I've seen a ton of similar papers that explore various peer preferences based on latency, bandwidth, and network proximity (hops)
<fazo> also assuming that on mars everyone has at least gigabit marsnet connection
<achin> (it's ok, i have BBQ anxiety all the time)
<ion> Because of TCP and bufferbloat, that also results in huge latency in all communications.
<fazo> ion: wow, 250 kB/s upload. I can only manage 25 when there's no congestion
<fazo> I can't even use VoIP ffs
<ion> fazo: How fast does IPFS move content for you?
<fazo> If I want to use VoIP or upload stuff I need to use my mobile connection, which is 4G at dozens of MB/s. Too bad I have incredibly absurd monthly caps
<fazo> ion: fast enough, I built a 1.3 MB app and people could use it fine
<fazo> can't complain
<whyrusleeping> (no, go ahead and complain, it might make us work faster)
<fazo> considering I run OpenWRT with hardcore QoS scripts on my router
nicolagreco has joined #ipfs
<nicolagreco> jbenet: around?
<fazo> the frustration of a slow connection is horrible, no matter what I do I can't go faster than it does
<nicolagreco> I had some issue with my connection until now
<achin> ion: can you remind me the iftop command you recommended to observe ipfs traffic?
<ion> achin: Sorry, I'm in bed. Something like iftop -i eth0 -BP 'port 4001' but that's probably wrong in some regard.
<achin> that gets me close enough, thanks
<spikebike> iftop -i eth0 -BP -f 'port 4001'
<spikebike> that looks plausible anyways
<achin> things aren't too chatty at the moment -- about 20KB/s TX and 10KB/s RX
jvalleroy has quit [Quit: Leaving]
<spikebike> uranus.i.ipfs.io seems reliably one of the chattier nodes
<spikebike> didn't know we had that much bandwidth to uranus ;-)
<ion> achin: Try downloading a large thing [insert your mom joke]. To see the useful bandwidth, ipfs cat something | pv >/dev/null. Perhaps with a parameter for pv to use a long window for measuring the average, or just wait until the end and it will say the average for the entire download.
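pv's -a flag prints the running average rather than the instantaneous rate; a minimal sketch of the measurement ion describes (the hash is a placeholder):

    $ ipfs cat /ipfs/QmSomeLargeFile | pv -a > /dev/null   # average useful throughput, ignoring protocol overhead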
<achin> is uranus one of the bootstrap nodes?
<achin> anybody have a largish thing for me to download?
<achin> oh, i'll use davidar's PGP dump
magneto1 has quit [Ping timeout: 250 seconds]
<ipfsbot> [go-ipfs] whyrusleeping force-pushed ipns/patches from 3b29e26 to 8f5c610: http://git.io/vn0bZ
<ipfsbot> go-ipfs/ipns/patches dac65e7 Jeromy: Fix dht queries...
<ipfsbot> go-ipfs/ipns/patches 8f5c610 Jeromy: Implement ipns republisher...
lithp has joined #ipfs
<achin> ion: so while downloading a 38MB file, i see that the rate at which i'm writing to disk is about half of the overall network traffic
<achin> about 600KB/s versus 1200KB/s
<achin> but traffic dies down to its idle level pretty quickly
so has quit [Remote host closed the connection]
apophis has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nessence has quit [Ping timeout: 255 seconds]
<spikebike> achin: what exact command were you using?
<achin> ipfs cat /ipfs/Qmbs2DxMBraF3U8F7vLAarGmZaSFry3vVY5zytuN3BxwaY/sks-dump-0002.pgp | pv -ab > /dev/null
<achin> sudo iftop -i enp5s0f0 -f "port 4001" -BP
<ipfsbot> [go-ipfs] willglynn opened pull request #1740: Implement path validations (master...path_validation) http://git.io/vnELA
<spikebike> ah, k, was worried you skipped the -i and were counting it twice
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to mfs-redo: http://git.io/vnEtv
<ipfsbot> go-ipfs/mfs-redo 748103b Jeromy: fixup comments...
<spikebike> Qmb8zoHBRxxpqmNk6k4hAimCxoCJ5BZ2KP7GhrBh8dG561 is a 2.2MB animation
<achin> probably too small to be useful for bandwidth measurements
<spikebike> yeah, depends on how fast your link is
intern has quit [Quit: Page closed]
<spikebike> I could put an ISO in if ya need it
<achin> naw, i'm good, thanks
<fazo> I've got a dump of every xkcd comic. It's 90 MB. Good luck downloading it from my computer though
<fazo> It's partly backed up on my server. Still, downloading from both might get you to 60-80 kB/s
<clever> if anybody else has ever saved a comic, they can pitch in
<fazo> yeah that's true
<fazo> last time people were reporting comics loading pretty fast, probably there's someone else serving the images
<whyrusleeping> fazo: yeah, when i was browsing it was pretty quick
jedahan has joined #ipfs
<fazo> ipfs is really working better than expected for me.
<spikebike> yeah, I've been generally impressed with the performance
<achin> fazo: hash?
jedahan has quit [Read error: Connection reset by peer]
<fazo> /ipfs/QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme
<fazo> Someone should build a viewer app
<fazo> included is also the script I wrote to back up xkcd.
<achin> neato
<fazo> yeah it crashed the first time it got to comic 404
<fazo> should have expected that LOL
<clever> the sort is a bit off: 1, 10, 100
Quiark has joined #ipfs
<fazo> yeah it's because it's string based not number based
<fazo> another reason to write a viewer app
apophis has quit [Quit: This computer has gone to sleep]
<achin> things are loading pretty quickly. about 700ms to load a directory, and 2000ms to load the img
<clever> 101, thats just wrong! lol
<fazo> achin: keep in mind I have 20 kB/s upload.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to private-blocks: http://git.io/vnEmG
<ipfsbot> go-ipfs/private-blocks ffbb648 Jeromy: comment corerepo gc method better...
<fazo> anyone want to try the new version of my bookmarking app? /ipfs/QmfX93JbzAzVZ6DKED1LyxzXeJ6Q1svZcTCnWJS82eryLd
<spikebike> my fuse mount of xkcd isn't very happy, not just slow, but errors
<whyrusleeping> spikebike: errors? such as?
<spikebike> $ ls
<spikebike> ls: cannot access 10 - Pi Equals: No such file or directory
<fazo> folders have spaces in the name, maybe it's that
bedeho has joined #ipfs
<spikebike> and stuff like: dr-xr-xr-x 0 bill bill 0 Aug 30 1754 1055 - Kickstarter
<spikebike> -????????? ? ? ? ? ? 1056 - Felidae
<whyrusleeping> thats interesting
<fazo> yeah I noticed the ??????? too.
<whyrusleeping> this on linux?
<fazo> didn't give me issues though
<whyrusleeping> or osx?
<fazo> Linux for me
<whyrusleeping> ipfs >= 0.3.8?
<spikebike> linux here as well
<spikebike> "AgentVersion": "go-ipfs/0.3.8-dev",
nessence has joined #ipfs
<fazo> ~ ❯❯❯ ipfs version ⏎
<fazo> ipfs version 0.3.8-dev
<fazo> whoops, copied an extra line
<whyrusleeping> mind filing an issue?
<fazo> I'm almost in bed (3 am here)
<spikebike> I can
<fazo> but if spikebike doesn't do it, I will
<fazo> thanks
<spikebike> or will soon
<fazo> well, I'll do it tomorrow if you didn't. I hope to remember to check
<fazo> looks like I'm going to bed. Bye!
fazo has quit [Quit: leaving]
Spinnaker has joined #ipfs
<spikebike> ah, well my image viewer didn't like that some of the files named .png are .jpg's
<clever> ive stumbled upon much worse things, gifs that end in .jpg
Leer10 has joined #ipfs
<achin> `ipfs ls QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme` is hanging, oddly
<achin> (yet my local read-only gateway can browse to that dir)
<achin> i wonder if this is related to your fuse issues
<ion> achin: Are you in a fuse directory while running that command perchance?
<achin> the ipfs ls command? no
<nicolagreco> ./msg jbenet
<nicolagreco> oops
notduncansmith has joined #ipfs
<achin> trying to `ls /ipfs/QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme` is yielding the same fuse errors reported by spikebike
notduncansmith has quit [Read error: Connection reset by peer]
<achin> maybe it's some timeout in the fuse code
<_jkh_> OK, I have a slight newb question here: What exactly is the idea behind the ipfs fuse mount?
apophis has joined #ipfs
<_jkh_> I mean, it’s not writable so you can’t use it as a synonym for “ipfs add” (at least, it’s not writable on the FreeBSD implementation)
<_jkh_> and it doesn’t appear to show any view of a global namespace
<_jkh_> maybe the FreeBSD fuse support is just broken?
<achin> eventually it might be writable, but for now, think of it as a filesystem version of the read-only http gateway
<whyrusleeping> _jkh_: maybe? the /ipfs mount is readonly, but the /ipns mount is writeable
<achin> you give it a hash and it fetches it, figures out if it is a directory or a file, and presents the contents accordingly
<ion> /ipns/local is writeable on Linux at least.
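A minimal sketch of the mounts described above, assuming a running daemon and the default mountpoints:

    $ ipfs mount                  # attaches read-only /ipfs and writable /ipns via fuse
    $ cat /ipfs/<some-hash>       # fetch-on-read, equivalent to the read-only http gateway
    $ cp note.txt /ipns/local/    # /ipns/local is the writable view of your own namespace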
simonv3 has quit [Quit: Connection closed for inactivity]
bjp has quit [Ping timeout: 240 seconds]
<_jkh_> Hmmm. Let me try adding some additional content
<_jkh_> grab a hash...
<_jkh_> achin: Ah, so /ipfs/meh is functionally equivalent to http://ipfs.io/ipfs/meh - ?
<achin> try this: cat /ipfs/QmdTYN7W8G7mw5hsvJYa32vp1sJmHAPMPc7kuhVPresvfT
<achin> _jkh_: yep
<_jkh_> ahhhh, OK, now the light bulb illuminates. Thanks, achin
<_jkh_> Hmmm. I wonder if that’s re-exportable over SMB. I must try that out on FreeNAS
<_jkh_> and no I have no real idea why you’d want to do that but...
<achin> sounds like dragon territory
<_jkh_> I like dragons!
<_jkh_> achin: also, hello achin. ;)
<spikebike> I think I'm just having fuse issues
<spikebike> $ ipfs cat /ipfs/QmdTYN7W8G7mw5hsvJYa32vp1sJmHAPMPc7kuhVPresvfT
<spikebike> Hello _jkh_
<achin> spikebike: i think there is something weird with this directory
<spikebike> ya
<spikebike> like ipfs downloaded the first N bytes of the dir, but not the rest
<spikebike> ls seems to often work
<spikebike> but ls -al takes a long time
<spikebike> (even in a small dir)
<ion> stat(filename) seems to result in a block corresponding to the file being downloaded.
<ion> It would be more efficient if the directory object had the needed information.
<achin> that is probably a limitation of fuse -- it might not be smart enough to ask the parent directory for the metadata of one of its children
<spikebike> @kona://ipfs/QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme$ cd "997 - Wait Wait"
<spikebike> even that takes a LONG time
voxelot has joined #ipfs
<spikebike> minutes anyways
<achin> that sounds way too long
<spikebike> not sure if it doesn't like the size of the dir or if it doesn't like the spaces
<achin> it's unlikely the spaces are to blame for a slowness problem
<achin> but i say that without knowing the ipfs code
<achin> (when things are coded without assuming that directories can contain spaces, they generally just fail)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<spikebike> if anyone can add to this with examples that would be great:
<whyrusleeping> _jkh_: you *should* be able to re-export it with smb, although there are definitely kinks in the fuse system, so do let us know when you find them
<achin> looks good to me. one formatting suggestion: turn each of your examples into blocks, by putting ``` at the start and end of each
jfis has quit [Ping timeout: 252 seconds]
<whyrusleeping> spikebike: thanks for posting the issue
<spikebike> whyrusleeping: sure, happy to help, I wonder if it's just uplink limited and there are timeouts
<spikebike> presumably that would get better over time
<whyrusleeping> spikebike: it might be timeouts, and they just arent manifested very well through the fuse interface
<whyrusleeping> i wonder if fuse understands some sort of ETIMEDOUT type error...
<achin> is `ipfs ls QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme` working for anyone else?
<spikebike> very slow
* spikebike waits
<achin> i don't understand this. i see 8 providers, and loading http://localhost:8080/ipfs/QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme is instant.
<spikebike> maybe there's only so many outstanding requests possible and they are all in the process of timing out
<zignig> interesting thing that fits into ipfs quite nicely : http://pages.pathcom.com/~vadco/dawg.html
<spikebike> my ipfs ls is still hanging
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> achin: 'ipfs ls <hash>' will query each link of the root you give it to be able to print metadata for you
<whyrusleeping> although, if it loads on the gateway, then that means the gateway has done the same...
<achin> what kind of metadata? i thought it just printed out the size and name, which are stored in the links array of the root
<whyrusleeping> we grab the node type, to tell if its a directory or a file
<whyrusleeping> (or other)
<achin> ah, ok
<spikebike> ah, interesting
<spikebike> ERRO[19:22:38:000] Path Resolve error: context deadline exceeded module=core/server
wopi has quit [Read error: Connection reset by peer]
<spikebike> $ ipfs ls QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme
<spikebike> is still hanging
<whyrusleeping> yeah, still hanging for me on my node too
<achin> (i confirm that running `ipfs refs` on that hash is instant, as i'd expect)
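The difference is visible from the CLI; a rough comparison using the same directory hash:

    $ ipfs refs QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme   # instant: the link hashes are stored in the root block itself
    $ ipfs ls QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme     # may hang: fetches every child block just to learn its type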
wopi has joined #ipfs
<whyrusleeping> 'ipfs bitswap wantlist'
<whyrusleeping> see what its waiting on
<achin> only 6 things
<spikebike> ya 6 here as well
<whyrusleeping> interesting, theres only one thing on my wantlist, and i have the block
<achin> same size? here is what i'm waiting on: QmPScbj7RDYxqDwWxLMk3Zt7kh6ieLEtAkXCZvmGf6WtFV
nessence_ has joined #ipfs
<achin> uh, you should fetch that file. that's not itself the thing i'm waiting on
nessence has quit [Ping timeout: 244 seconds]
<spikebike> weird
<whyrusleeping> mines slowly progressing actually
<spikebike> $ ipfs ls QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme
<spikebike> that never finished
<spikebike> but this does instantly
<spikebike> should be equivalent (ipfs command and localhost are running on same machine)
<spikebike> biking home I'll be online later
voxelot has quit [Ping timeout: 256 seconds]
Quiark_ has joined #ipfs
Quiark has quit [Ping timeout: 260 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
notduncansmith has quit [Read error: Connection reset by peer]
apophis has joined #ipfs
apophis has quit [Client Quit]
Spinnaker has quit [Ping timeout: 252 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
apophis has joined #ipfs
apophis has quit [Ping timeout: 240 seconds]
apophis has joined #ipfs
sseagull has quit [Quit: leaving]
apophis has quit [Client Quit]
apophis has joined #ipfs
lithp has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
notduncansmith has joined #ipfs
od1n has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
cypher has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Eudaimonstro has joined #ipfs
kerozene has quit [Ping timeout: 244 seconds]
steve__ has quit [Ping timeout: 255 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nessence_ has quit [Read error: Connection reset by peer]
lithp has joined #ipfs
<davidar> whyrusleeping (IRC): object get is also instant
kerozene has joined #ipfs
cypher has joined #ipfs
notduncansmith has joined #ipfs
_jkh_ has quit [Read error: Connection reset by peer]
notduncansmith has quit [Read error: Connection reset by peer]
_jkh_ has joined #ipfs
<whyrusleeping> davidar: echo $CONTEXT
<davidar> whyrusleeping (IRC): ?
<davidar> QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme
<davidar> gateway and object get are instant
<davidar> ls hangs
hellertime has quit [Quit: Leaving.]
<whyrusleeping> davidar: yeah, ls is fetching all of the child blocks
<whyrusleeping> and it looks like one of them is not available or something...
<davidar> whyrusleeping (IRC): is it a good idea for ls to do that?
<whyrusleeping> davidar: well, that depends on your expectations of what information ls provides
<davidar> whyrusleeping (IRC): the same information as the dir index the gateway provides, is what I would have expected
<whyrusleeping> the gateway also runs the same command
rozap has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Quiark_ has quit [Ping timeout: 264 seconds]
nicolagreco has quit [Quit: nicolagreco]
amstocker has joined #ipfs
Tv` has quit [Quit: Connection closed for inactivity]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
_jkh_ has quit [Changing host]
_jkh_ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
jfis has joined #ipfs
amstocker has quit [Ping timeout: 265 seconds]
<zignig> whyrusleeping: shouldn't ls just get the top two levels of blocks ?
<davidar> whyrusleeping (IRC): so why is the gateway instant but ls isn't?
_jkh_ has quit [Read error: Connection reset by peer]
_jkh_ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
amstocker has joined #ipfs
Quiark_ has joined #ipfs
brab has joined #ipfs
akhavr has quit [Quit: akhavr]
jamescarlyle has joined #ipfs
jamescarlyle has quit [Remote host closed the connection]
akhavr has joined #ipfs
captain_morgan has quit [Ping timeout: 240 seconds]
elima_ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
amstocker has quit [Ping timeout: 244 seconds]
ygrek has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
akhavr has quit [Ping timeout: 240 seconds]
akhavr has joined #ipfs
rendar has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
captain_morgan has joined #ipfs
stickyboy has joined #ipfs
<davidar> hi zignig
amstocker has joined #ipfs
<zignig> davidar: hey :)
<davidar> zignig, ever since i learned of dawg's, I've had an urge to write a library simply for the meme-value :p
<davidar> yo dawg, I heard you like DAGs, so I put a DAWG in your MerkleDAG...
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
rozap has quit [Ping timeout: 244 seconds]
<zignig> davidar: I have the same thought...
<zignig> interesting application for the data structure.
<davidar> yeah, I was thinking about it in the context of search indexes
<davidar> stuff a bunch of (word,location) pairs into a dawg
<zignig> yes, there was a paper posted here some time back about indexing dhts.
<davidar> yeah, i think i read it, and briefly talked to you about it
<zignig> oh yes, yes you did.
wopi has quit [Read error: Connection reset by peer]
<zignig> ipns really needs to stabilize before much of this can move forward.
* zignig pokes whyrusleeping
<zignig> ;P
wopi has joined #ipfs
<davidar> hehe
captain_morgan has quit [Ping timeout: 240 seconds]
amstocker has quit [Ping timeout: 240 seconds]
<davidar> zignig, also ipld
<zignig> I like the idea , but i'm not sure where it sits.
<davidar> which would probably be ready by now if I didn't keep arguing about the details :p
<zignig> is it built into ipfs ? or an overlay thing.
<ipfsbot> [webui] masylum opened pull request #82: Bypass CORS in development by proxying `Origin` headers correctly (master...feature/bypass-cors-in-developement) http://git.io/vnu3W
<davidar> zignig, yeah, I'm still not entirely clear on the implementation details either
<davidar> my basic understanding is
<davidar> ipld add massive-database.json
<davidar> and it gets transparently broken into small ipfs blocks and lazily re-assembled at the other end
<davidar> so i can do something like
<davidar> ipld get massive-db.json/foo/bar/baz/7
<davidar> and it will only grab the blocks it needs to fulfill the request
<davidar> (ie only a small piece of the full db)
<zignig> ok, not at all what I thought.
<zignig> I thought it was a way to attach metadata to ipfs objects so you can give them context.
<davidar> yeah, there's also that
xeon-enouf has quit [Ping timeout: 250 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<davidar> tbh a lot of the confusion is because different people think it has different purposes
<davidar> and nobody has really explained it well enough
xeon-enouf has joined #ipfs
<davidar> my personal opinion is that (apart from renaming ipld to something more helpful), the metadata stuff should be layered on top, rather than part of the spec
<zignig> I have skimmed that one before.
blame has quit [K-Lined]
ffmad has quit [K-Lined]
oleavr has quit [K-Lined]
ogd has quit [K-Lined]
Luzifer has quit [K-Lined]
true_droid has quit [K-Lined]
RJ2 has quit [K-Lined]
nrw has quit [K-Lined]
robmyers has quit [K-Lined]
sindresorhus has quit [K-Lined]
ehd has quit [K-Lined]
jhiesey has quit [K-Lined]
zmanian has quit [K-Lined]
cleichner has quit [K-Lined]
kumavis has quit [K-Lined]
sugarpuff has quit [K-Lined]
tibor has quit [K-Lined]
mafintosh has quit [K-Lined]
bigbluehat has quit [K-Lined]
grncdr has quit [K-Lined]
daviddias has quit [K-Lined]
mikolalysenko has quit [K-Lined]
yoshuawuyts has quit [K-Lined]
dandroid has quit [K-Lined]
SoreGums has quit [K-Lined]
jonl has quit [K-Lined]
prosodyC has quit [K-Lined]
feross has quit [K-Lined]
henriquev has quit [K-Lined]
vonzipper has quit [K-Lined]
anderspree has quit [K-Lined]
rubiojr has quit [K-Lined]
leeola has quit [K-Lined]
hosh has quit [K-Lined]
lohkey has quit [K-Lined]
kevin`` has quit [K-Lined]
risk has quit [K-Lined]
mappum has quit [K-Lined]
richardlitt has quit [K-Lined]
karissa has quit [K-Lined]
caseorganic has quit [K-Lined]
kyledrake has quit [K-Lined]
zrl has quit [K-Lined]
bret has quit [K-Lined]
<davidar> we've also been arguing about whether type information should be an explicit feature
screensaver has joined #ipfs
<davidar> actually, I'm just going to start calling it ipdb :p
<zignig> I think, arbitrary metadata with a few well defined keys (mime-type, predecessor, blah) attached to objects.
<zignig> ipdb , not dbddbdip2/plural6
<zignig> ?
<davidar> lol
xeon-enouf has quit [Ping timeout: 240 seconds]
karissa has joined #ipfs
<zignig> boom ! netsplit.
<davidar> so, there's essentially two separate issues that have gotten all mixed up
henriquev has joined #ipfs
<zignig> yes.
<davidar> 1) transparently persisting arbitrary data structures to ipfs
kyledrake has joined #ipfs
richardlitt has joined #ipfs
<davidar> 2) a standard way of specifying metadata
kumavis has joined #ipfs
xeon-enouf has joined #ipfs
<davidar> jbenet: is that ^ about right?
bret has joined #ipfs
* zignig lifts the trapdoor to see if jbenet is hiding in the basement.
<davidar> well, I'm pretty sure that's right
<davidar> lol
<davidar> if it's not, my head's gonna 'splode
<davidar> anyway, personally i think it would make sense to cleanly separate the two issues
<davidar> zignig, haha, you just reminded me of that show that used to be on abc
<zignig> that's what I was thinking too. love that show....
<zignig> BURKE! where's my dinner ?!
<davidar> hehe, i'd totally forgotten about that show until you said that, it was awesome
<davidar> BURKE FOR IPFS MASCOT!!!
Guest96 has joined #ipfs
<zignig> he's the right golang blue too!
Soft has joined #ipfs
prosodyC has joined #ipfs
stickyboy has left #ipfs [#ipfs]
<Guest96> Hi guys. Quick Q regarding unpinning. I had pinned the IETF RFC archive and tried to unpin it when I got an error "too many open files" (which, incidentally I would also get all the time when just doing an ls on it which was why I was trying to remove it and start over). The problem is I can now no longer do an rm on the hash since ipfs says it's no longer pinned but I worry that it broke in the middle of recursively unpinning and I'm left
<Guest96> with a bunch of pinned cruft. How do I check this?
<zignig> nice. ;)
<zignig> will have a go at a blender model of Burke, if I get some time.
<zignig> Guest96: if you try to ls the object without running the daemon, you can tell if it is local.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<Guest96> Well I know the top hash is gone but I don't have a reference to all the subhashes so I can't really ls or cat them.
<davidar> Guest96 (IRC): if you don't care about anything else you already have pinned, nuking .ipfs is also an option
<Guest96> Unfortunately I do care.
<davidar> hrm
<davidar> maybe add it again, increase your file descriptor limit, and then try removing again?
<davidar> zignig, s/blender/playdoh :p
bedeho has quit [Ping timeout: 252 seconds]
<Guest96> Ok, thx. Where do I increase the limit?
<davidar> Guest96 (IRC): what os are you running?
Luzifer has joined #ipfs
<Guest96> OS X, El Cap
<davidar> ah, no idea how to do it on osx
<Guest96> So this is an OS limitation rather than an ipfs issue?
<davidar> both, i'd say
<davidar> depends on how many files you think is reasonable for a process to have open
<Guest96> Hm, I'd rather that ipfs gets this sorted than fiddle with the os limitations - I presume they're there for a reason.
<Guest96> Seems odd anyway that it should need to have that many files open just to ls them.
<zignig> it's to do with the way the networking is done as well; the utp transport should help significantly.
<Guest96> The fun of running an alpha :)
<davidar> maybe bump that last one to get it prioritised
<davidar> haha, yeah
<Guest96> Yeah, my ipfs daemon window is flooded with ERRO[09:09:10:000] too many open files, retrying in %dms200 module=flatfs
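A hedged sketch of the recovery davidar suggests, assuming ulimit (or the OS X equivalent) and a placeholder hash for the RFC archive root:

    $ ulimit -n 4096                  # raise this shell's open-file limit before restarting the daemon
    $ ipfs pin add -r <archive-hash>  # re-pin so the pin state is consistent again
    $ ipfs pin rm -r <archive-hash>   # then unpin cleanly
    $ ipfs repo gc                    # and drop the now-unpinned blocks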
elima_ has quit [Ping timeout: 252 seconds]
* davidar brb
Guest96 is now known as NeoTeo
jbenet has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
dignifiedquire has joined #ipfs
atomotic has joined #ipfs
elima_ has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
true_droid has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<cryptix> i want 'save this page and all its assets to ipfs' MEOW!
Soft has quit [Read error: Connection reset by peer]
Soft has joined #ipfs
slothbag has joined #ipfs
samiswellcool has joined #ipfs
hpk has left #ipfs ["WeeChat 1.1.1"]
yosafbridge has quit [Ping timeout: 240 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
yosafbridge has joined #ipfs
multivac has joined #ipfs
<davidar> .title
<cryptix> :)
<cryptix> Luzifer: lol - yea.. maybe next week when my gf is on holidays :))
<cryptix> davidar: will update - the coralCDN approach looks interesting
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
ygrek has quit [Ping timeout: 272 seconds]
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<davidar> cryptix, yeah, makes it easier since you can rely on the browser to find all the required assets
brab has quit [Remote host closed the connection]
<davidar> also easier than writing a full proxy, I imagine
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<cryptix> i basically just want it for archival purposes and documentation
magneto1 has joined #ipfs
<ipfsbot> [go-ipfs] rht created test/gz (+1 new commit): http://git.io/vnu9u
<ipfsbot> go-ipfs/test/gz 6038c51 rht: Sharness: add test for compression level...
<ipfsbot> [go-ipfs] rht opened pull request #1742: Sharness: add test for compression level (master...test/gz) http://git.io/vnuHn
Encrypt has joined #ipfs
<davidar> cryptix: wget -pk http://...
caseorganic has joined #ipfs
feross has joined #ipfs
rubiojr has joined #ipfs
ogd has joined #ipfs
nrw has joined #ipfs
SoreGums has joined #ipfs
RJ2 has joined #ipfs
brab has joined #ipfs
screensaver has quit [Remote host closed the connection]
<davidar> is another option if you just want something quickly
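Chaining the two gives a quick page-to-ipfs archive; a sketch with a placeholder URL:

    $ wget -pk -P mirror http://example.com/page.html   # -p fetches page requisites, -k rewrites links to work locally
    $ ipfs add -r mirror                                # the last hash printed is the root of the mirrored tree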
cleichner has joined #ipfs
daviddias has joined #ipfs
ygrek has joined #ipfs
zrl has joined #ipfs
yoshuawuyts has joined #ipfs
screensaver has joined #ipfs
dignifiedquire has quit [Read error: Connection reset by peer]
mikolalysenko has joined #ipfs
hosh has joined #ipfs
dignifiedquire has joined #ipfs
bigbluehat has joined #ipfs
ehd has joined #ipfs
zmanian has joined #ipfs
lohkey has joined #ipfs
grncdr has joined #ipfs
vonzipper has joined #ipfs
kevin`` has joined #ipfs
leeola has joined #ipfs
tibor has joined #ipfs
sindresorhus has joined #ipfs
dandroid has joined #ipfs
risk has joined #ipfs
jonl has joined #ipfs
mafintosh has joined #ipfs
oleavr has joined #ipfs
robmyers has joined #ipfs
mappum has joined #ipfs
ffmad has joined #ipfs
notduncansmith has joined #ipfs
jhiesey has joined #ipfs
notduncansmith has quit [Ping timeout: 244 seconds]
blame has joined #ipfs
sugarpuff has joined #ipfs
G-Ray has joined #ipfs
anderspree has joined #ipfs
elima_ has quit [Quit: Leaving]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Quiark_ has quit [Ping timeout: 246 seconds]
<ion> ‘I want “save this page and recursively everything it links to to IPFS”’
<cryptix> ion: also fine by me :))
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<ion> It would be cool if CoralCDN switched to IPFS one day.
<ion> As in, provide the same service but use IPFS as the backend.
<ion> Services like CoralCDN and the Wayback Machine would suddenly be able to interact.
JasonWoof has quit [Ping timeout: 260 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
<daviddias> Aye IPFS people
Encrypt has quit [Quit: Postgraduate Tea Time]
<daviddias> Just opened this issue as a way to communicate my progress and where I'm at each week on the node-ipfs impl https://github.com/ipfs/node-ipfs/issues/30
<multivac> [ captain.log - IPFS Node.js implementation · Issue #30 · ipfs/node-ipfs · GitHub ] - github.com
<daviddias> it also has some points where I could use some help from other people
<rendar> cryptix: what is the coralcdn approach you mentioned before?
<daviddias> with time, more stuff will be added (I'm avoiding putting there what I know is considerably unstable in terms of API, so that I don't confuse anyone)
<ion> Cool. Does go-ipfs have an equivalent? I’d like to subscribe to your newsletter.
<daviddias> Also richardlitt is making a pretty sweet IPFS book that explains all the crazy acronyms we keep using in convo (sorry about that, blame the network people :P) https://github.com/RichardLitt/ipfs-textbook
<multivac> [ RichardLitt/ipfs-textbook · GitHub ] - github.com
<daviddias> ion: yes :) we want to do that too. Right now the best way to subscribe is to watch the ipfs/node-ipfs repo
<daviddias> so you get a notification + email (if you have github email notifications) each time we shout something
<ion> Is subscribing to issues/30 not enough to get emails about updates to it?
<daviddias> ion: can't promise emails, that depends on your github settings, but you will certainly get github notifications at minimum
<multivac> [ Configuring notification emails - User Documentation ] - help.github.com
<daviddias> hope it helps
<ion> I mean, given that i at least get emails about new comments to issues.
atomotic has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<multivac> [ rehost.ipfs.io · Issue #92 · ipfs/infrastructure · GitHub ] - github.com
<davidar> .w DAWG
<multivac> [WIKIPEDIA] Dawg | "Dawg or DAWG may refer to:Directed acyclic word graph, a computer science data structureDirected acyclic word graph, an alternative name for the Deterministic acyclic finite state automaton computer science data structureDawg, the nickname of American mandolinist David GrismanDawg, a fictional dog..." | https://en.wikipedia.org/wiki/Dawg
<dignifiedquire> daviddias: could you give me a hint on how to best extract “used storage” “count of objects” and “location” through the node api so I can display those stats
<dignifiedquire> also upload and download speed
<daviddias> sure, in a jiffy
<daviddias> upload and download speed is not really a thing
<daviddias> when a file is added, it is added to the local node
<multivac> [ electron-app/init.js at master · ipfs/electron-app · GitHub ] - github.com
<daviddias> this is how to get the number of peers connected
<multivac> [ electron-app/init.js at master · ipfs/electron-app · GitHub ] - github.com
<clever> how does it advertise the availability of that to the network? or how does the network discover your node holds a copy?
hellertime has joined #ipfs
<dignifiedquire> I’m referring to this mockup from jbenet: https://github.com/ipfs/electron-app/issues/3#issuecomment-111764862 it says upload bw and download b
<multivac> [ menu bar improvements · Issue #3 · ipfs/electron-app · GitHub ] - github.com
<daviddias> clever: we use a DHT and a Record System to find which nodes hold what
<clever> daviddias: i'm guessing its key = nodelist, where key is an ipfs hash?
<daviddias> every stats comes from: https://github.com/ipfs/node-ipfsd-ctl
<multivac> [ ipfs/node-ipfsd-ctl · GitHub ] - github.com
<daviddias> dignifiedquire: sorry if it is misleading, but that really is a mockup; right now, we are able to provide the same stats that are available through the ipfs cli
<daviddias> jbenet: designed the mock up considering everything we want to have
<dignifiedquire> okay,
<dignifiedquire> but location and object stats are available through the webui as far as I can see?
<daviddias> clever: if you know DHT, it is pretty much that. If you are new to them, check the awesome gifs jbenet makes https://github.com/ipfs/specs/tree/master/protocol#merkledag-paths :D
<multivac> [ specs/protocol at master · ipfs/specs · GitHub ] - github.com
<daviddias> location is done through a clever hack from krl
<daviddias> object stats, yes
<clever> daviddias: i have been messing with DHT's for years
<clever> daviddias: so when your node comes online, or you add an object, it has to scan the DHT to find the node nearest to every piece of every file you have
<clever> and publish the whole mess
<clever> and possibly renew them at regular intervals?
<daviddias> we don't use the DHT to store data
<daviddias> we use the DHT to store pointers
<clever> yeah
<clever> but if i have 5000 files in my local node, it has to publish 5000 pointers to myself
<dignifiedquire> daviddias: this I guess? https://github.com/ipfs/webui/blob/master/js/getlocation.js
<multivac> [ webui/getlocation.js at master · ipfs/webui · GitHub ] - github.com
<daviddias> (like DHT generation 2.0 when Kademlia was generalized)
<clever> each on a different node
<clever> because the pointer is held on the node 'nearest' to the objects hash
<daviddias> clever: if you have 5000 MerkleDAG objs, yep, you publish 5000 pointers
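Those pointers are the DHT's provider records, and they can be inspected from the CLI; a rough sketch with placeholder arguments:

    $ ipfs dht findprovs <block-hash>   # ask the DHT which peers have published a pointer for this block
    $ ipfs dht query <peer-id>          # walk toward the peers nearest a given key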
<davidar> .w trie
<multivac> [WIKIPEDIA] Trie | "In computer science, a trie, also called digital tree and sometimes radix tree or prefix tree (as they can be searched by prefixes), is an ordered tree data structure that is used to store a dynamic set or associative array where the keys are usually strings. Unlike a binary search tree, no node in the..." | https://en.wikipedia.org/wiki/Trie
<clever> thats what i thought
<daviddias> dignifiedquire: you found it :)
<clever> daviddias: how long until those pointers expire?
<daviddias> clever: if I'm not mistaken, the default is half a day
<daviddias> but it can be really up to the user
<clever> ah
<clever> so it will have to re-publish them every 12 hours
<daviddias> as said, the default :)
<clever> sounds like a good config option for long running nodes
<daviddias> if you have a very stable node
<daviddias> exactly, that is what I was going for
<clever> yeah
<daviddias> or if you use IPFS for an internal data store (like DynamoDB did with Chord) where you control the Churn Rate
<clever> is the pointer to your temporary dht key, a long term public key, or your ip?
<daviddias> you can even have expiration based on health checks instead of time
<daviddias> an IPFS node id is generated from its Public Key
<clever> ah, so even if i restart my ipfs node,those pointers will remain valid?
<clever> and IP changes as well
<daviddias> yap
<daviddias> which is cool for mobile clients
<clever> and thats likely also key=ip+port in the DHT?
<daviddias> we don't use IP for keys
<daviddias> especially because we use multiaddr and not IP:Port pairs
<clever> hmmm, does the dht use random keys on each startup for the nodeid, or the long term one from the pubkey?
<daviddias> NodeID = B58.encode(pubkey)
<daviddias> or I mean
<clever> ah, so your nodeid on the dht is constant
bedeho has joined #ipfs
<daviddias> NodeID = multihash(pubkey, sha-256)
<daviddias> and we represent them as B58 encodings for readability
<clever> thats just normal sha256?
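The resulting id is visible locally; a one-line sketch:

    $ ipfs id -f '<id>\n'   # prints this node's id: the base58 multihash (sha2-256) of its public key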
<daviddias> dignifiedquire: sorry, I feel I owe you an answer
<daviddias> https://github.com/ipfs/node-ipfsd-ctl is where you will get all the stats and information from the IPFS node
<multivac> [ ipfs/node-ipfsd-ctl · GitHub ] - github.com
<daviddias> I know it has 0 useful documentation
<daviddias> Could you please open issues with what you need to have access to
<daviddias> so one of us can pick that and make the proper function call and add it to the docs
<dignifiedquire> sure, will do
<clever> so your dht id changes on each startup
<clever> daviddias: the other DHT based project i'm following uses separate keypairs for dht and long-term use
<clever> and a single user cant be tracked via his dht id
<dignifiedquire> should I add the location stuff into electron-app or should I add it to ipfsd-ctl ?
<clever> daviddias: it sounds like i can find your NodeID by just seeing who had file X first (doing a query after you post a file)
<daviddias> dignifiedquire: good question
<clever> daviddias: and then i can find your current address at any time, no matter how much you change it
<daviddias> right now it happens on JS browser land
<daviddias> but it could even be a property of the ipfs cli
<daviddias> that would be cool! :)
<daviddias> to test VPN and other tunnel configurations
<clever> thats more of a security problem in my eyes
<dignifiedquire> okay, will see if I can get it working, and if it does I’ll make a pr for ipfsd-ctl
<daviddias> clever: security has several dimensions and concerns
<daviddias> right now, your IP:port is exposed several times on the internet
<clever> yeah
<daviddias> and constantly
<clever> all depends on what your protecting against
notduncansmith has joined #ipfs
<daviddias> exactly
<clever> tox does the same thing, but there is no long term id that can be linked to your ip:port
<daviddias> so, I agree that you have a valid point
notduncansmith has quit [Read error: Connection reset by peer]
<clever> so its imposible to find out where you went after you restart the client
<multivac> [ Need stats reports for electron-app · Issue #18 · ipfs/node-ipfsd-ctl · GitHub ] - github.com
<daviddias> but depending on what you are doing, it is important to know how the system works and if it works for the specific case
<daviddias> that said, we also want to have 'delegated routing'
<daviddias> which means having proxy nodes for the DHT queries
<daviddias> and also! :D we want to support onion routing as a transport
<daviddias> all the options! :D
<clever> daviddias: for tox, there is a dht keypair, that gets remade on each startup, and it is possible to find the ip:port of a dht public key
<clever> but the dht key changes easily, so i cant trace a person
<clever> it then uses onion routing to contact ~5 random nodes, and tells them the long term public key
<clever> due to the onion, those 5 nodes dont know your dht key, or your ip
NeoTeo has quit [Quit: Textual IRC Client: www.textualapp.com]
<clever> and they act as a proxy for anybody wishing to contact you directly
<clever> though i dont see ipfs needing direct contact by a long-term id
<clever> you only ever need to know what short-term id's have a copy of file X
<daviddias> we support multi transports
<daviddias> these include from tcp, udp, to webrtc, websockets and even onion routing
<daviddias> so we can offer those properties by layering tor underneath
<daviddias> if the users have a need for it
<clever> tox does 90% of the work over udp, but the 1st hop can be tcp for clients under special situations
<daviddias> need to run now
<clever> that 1st hop acts as a tcp->udp bridge
<daviddias> (sorry)
<clever> kk
<daviddias> will be back after lunch time
<clever> i'll probly be heading to bed within the next 4 hours
<rendar> guys, what is ipld all about?
<dignifiedquire> daviddias: copy and paste ftw: it worked http://grab.by/KBaC
<davidar> .tell jbenet people keep asking me what ipld is about, but I still don't completely understand the overall goals either
<multivac> davidar: I'll pass that on when jbenet is around.
Soft has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nicolagreco has joined #ipfs
atomotic has joined #ipfs
nicolagreco has quit [Ping timeout: 250 seconds]
voxelot has joined #ipfs
voxelot has joined #ipfs
G-Ray has quit [Quit: Konversation terminated!]
G-Ray has joined #ipfs
<dignifiedquire> daviddias: I’m getting 403 forbidden responses in the webui for the xhr requests when starting the daemon from the electron-app, but all works fine when I start `ipfs daemon` on the terminal, any ideas what’s going wrong there?
<dignifiedquire> it’s running though, I’m getting stats and everything through the api
G-Ray has quit [Ping timeout: 252 seconds]
Soft has joined #ipfs
JasonWoof has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
atomotic has quit [Ping timeout: 246 seconds]
ygrek has quit [Remote host closed the connection]
ygrek has joined #ipfs
fazo has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<zignig> o/
<zignig> multivac: hello
kerozene has quit [Max SendQ exceeded]
voxelot has quit [Ping timeout: 264 seconds]
* daviddias back
<daviddias> dignifiedquire: might be the missing --unrestricted-api
<daviddias> although the webui was working before
<daviddias> which version of IPFS are you running
<dignifiedquire> I updated dependencies
<dignifiedquire> so might be the reason
<daviddias> whyrusleeping: any ideas on that?
wopi has quit [Read error: Connection reset by peer]
<multivac> [ ipfs/node-ipfs-api · GitHub ] - github.com
kerozene has joined #ipfs
wopi has joined #ipfs
yosafbridge has quit [Ping timeout: 240 seconds]
<dignifiedquire> how do I pass that via the api?
<daviddias> it was working before without any special option, not sure what changed to require a different option now
<daviddias> let's wait for feedback from whyrusleeping and/or krl
<richardlitt> @jbenet @whyrusleeping @daviddias @davidar @lgierth @krl @mildred @dignifiedquire @amstoker: Please add your To Dos to the sprint! https://etherpad.mozilla.org/nTqGOVynPT
<multivac> [ MoPad: nTqGOVynPT ] - etherpad.mozilla.org
Spinnaker has joined #ipfs
<ion> to DoS
slothbag has quit [Quit: Leaving.]
yosafbridge has joined #ipfs
<richardlitt> Thanks dignifiedquire.
<dignifiedquire> daviddias: I’ve upgraded https://github.com/ipfs/node-ipfsd-ctl/ to version 0.5 so my guess is that the issue is somewhere around here: https://github.com/ipfs/node-ipfsd-ctl/compare/75f39e9244a1b6888bd2c8a3090c630c9537e558...master
<multivac> [ ipfs/node-ipfsd-ctl · GitHub ] - github.com
<multivac> [ Comparing 75f39e9244a1b6888bd2c8a3090c630c9537e558...master · ipfs/node-ipfsd-ctl · GitHub ] - github.com
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
kerozene has quit [Max SendQ exceeded]
<lgierth> davidar: can we have multivac not resolve urls? it's a bit useless most of the time
kerozene has joined #ipfs
<lgierth> richardlitt: will do, thank you! i'll just quickly need to cycle somewhere
<davidar> lgierth: ok :(
multivac has quit [Quit: KeyboardInterrupt]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<richardlitt> lgierth: No rush, will merge tonight. Just; today! :D
<richardlitt> fazo: Thanks for the help! I reformatted the document a bit.
<fazo> I'm glad you liked my answers :)
<fazo> I'm actually writing more right now.
<richardlitt> Cool! Be sure to rebase
<fazo> I also think that when the textbook gets expanded enough, we should host it on ipfs and build a viewer for it
<richardlitt> Drastic style changes.
<richardlitt> Yeah.
<fazo> richardlitt: yeah of course
<richardlitt> Also, I'm thinking; issues is not the way to do this.
<richardlitt> I think we should just do PRs, for everything, so no information is lost.
<fazo> uhm sounds good to me
<fazo> I had a look at the IPFS Example viewer, it's actually very simple
<richardlitt> I'm not sure. Just an idea. Maybe I'll open an issue about that.
<fazo> we can build a fork for the textbook
<richardlitt> and yes, goal is to eventually have it on ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nicolagreco has joined #ipfs
<richardlitt> Anyone have a good suggestion for a better DNS provider than DigitalOcean? With an extensive API and better key security options?
<ion> Running PowerDNS on your own server :-P
<richardlitt> jbenet: should I open the DNS issue in ipfs/ipfs? ipfs/infrastructure has no watchers, really...
ygrek_ has joined #ipfs
<richardlitt> Huh. That might work.
ygrek has quit [Ping timeout: 250 seconds]
<ion> (Among other things, PowerDNS is nicer to configure than Bind, has a better security history to my knowledge and has a nice interface to delegate certain queries to code of your own.)
<fazo> richardlitt: done rebasing. that was long
dignifiedquire has quit [Quit: dignifiedquire]
* ion gave a thought to commit histories within IPFS and realized that the HTTP gateway can provide an awesome repository browser.
dignifiedquire has joined #ipfs
<fazo> I was wondering if git can be integrated to use ipfs to ref commits and other objects and ipns for branches without forking it
<JasonWoof> is there a way I can publish a site (ie some html pages that link to each other) on ipfs/ipns so that people can have a URL that links to a particular version of my site? Requirements: 1. pages link to each other, not just linking to older pages. 2. people can still access that version, even if I don't want them to (ie I update stuff)
<fazo> JasonWoof: yes
<JasonWoof> good. how?
notduncansmith has joined #ipfs
<JasonWoof> can ipfs have path names at the end?
notduncansmith has quit [Read error: Connection reset by peer]
<fazo> JasonWoof: just add the folder with the index.html of your site inside
<fazo> yes it can
<fazo> I'll give you an example
<ion> JasonWoof: The ipfs link will point to a particular version, an ipns link can point to the newest version, and given pointers to previous versions you can traverse to any version you want.
<fazo> JasonWoof: this is a web app running on IPFS: /ipfs/QmfX93JbzAzVZ6DKED1LyxzXeJ6Q1svZcTCnWJS82eryLd
<fazo> all the static files (js, css) are hosted via IPFS. Even the fonts are
<fazo> and they all use relative path names to load
<fazo> another example: gateway.ipfs.io/ipfs//ipfs/QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme/about/
<fazo> this one has a relative path at the end
<_fil_> the last one gives an error
<_fil_> Path Resolve error: multihash length inconsistent: &{124 130 [210]}
<fazo> _fil_: there is a double slash in the link, my mistake
<fazo> gateway.ipfs.io/ipfs/ipfs/QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme/about/
<fazo> this one should work
<_fil_> nope
<_fil_> there is a double ipfs/ that's why
<_fil_> :p
<fazo> oh yeah
<fazo> my bad, again
<fazo> gateway.ipfs.io/ipfs/QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme/about/
<_fil_> np
<fazo> if you remove the /about/ at the end, you can see the root folder which has all the xkcd comics
<JasonWoof> ahh, got it working
<fazo> JasonWoof: it's great to host static content behind NAT :)
<JasonWoof> oh, that's a relief. before I didn't think you could have path names after ipfs/..../
<JasonWoof> and I realized that that made it so you couldn't have two files linking to each other, because without path names, you can only link by hash, and when you change the link, you change the hash, and chicken-and-egg they can't link to each other
<JasonWoof> so when you have path names, the ipfs hash is something akin to a git commit hash
<JasonWoof> ?
<fazo> JasonWoof: by the way, to solve your versions problem: when you change the website, add a "Previous Version" link that points to the old hash, then use ipfs add to add the new version of the website and publish its hash to your ipns. This way, every time you make a modification, using ipns people can see your latest version and click a link to go back 1 revision
<fazo> for example now you have an hash for your website right?
<fazo> when you change something, also add a link to that hash. Then add the modified website folder to ipfs again and you get a new hash. This new hash points to the new website which also happens to have a link to the old one
<fazo> then you use "ipns publish <the_new_hash>" so that your ipns points to the latest version of the website
<fazo> you can of course think of an ipfs hash as a git commit hash and an ipns hash as a git branch
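A condensed sketch of that workflow, with the site directory and hashes hypothetical:

    $ OLD=$(ipfs add -r -q site | tail -n1)   # root hash of the current site
    # ...edit site/, adding a "Previous Version" link to /ipfs/$OLD...
    $ NEW=$(ipfs add -r -q site | tail -n1)   # root hash of the new revision
    $ ipfs name publish $NEW                  # /ipns/<your-peer-id> now resolves to the new root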
<JasonWoof> I'm not running ipfs yet. I'm just learning about it at this point, and seeing if it'd be suitable to build on top of. I have two p2p projects in mind: 1. automated backups onto your friends computers 2. collaboration/consensus building around page edits/etc using trust network
<JasonWoof> oh, that's so good!
<fazo> JasonWoof: both already possible with the current implementation, but you'll be able to do more and better when stuff gets updated
<fazo> for example there is a native js implementation in the works. When that's done, you could have a totally static website act as reddit or google docs
<JasonWoof> I thought all those pieces happened per-file, I didn't realize that the "commit"-like thing was in ipfs too
<ion> Just like a git repo has many kinds of objects, so does IPFS. In the case of a directory object in IPFS, it’s exactly equivalent to a directory object in git. Both have blobs, too, and IPFS is going to have commit objects as well.
<JasonWoof> is there a plan to add filetype data? last I checked the http proxy was guessing what mime type to send based on file contents
<fazo> JasonWoof: you can mount the entire IPFS in a folder if you want (like /ipfs) and then "cd /ipfs/<hash>" and see the files
<fazo> JasonWoof: I don't think there are plans for that I'm afraid
<JasonWoof> fazo: cool. I assumed it would be all files, but now there'd be directories in there?
<fazo> JasonWoof: yes, a directory is just a list of file hashes and their name
<fazo> JasonWoof: so when you open a directory in the browser it doesn't download the contents, just the names and their hashes
<JasonWoof> I'm worried about this conflict of interest: website owners (many of them) want control, so they only give out the kind of web address that will auto-update to the latest thing they publish (so ipNs)
<fazo> and if you have a 20GB directory and change 1 file, you don't have to readd everything else
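That map can be inspected directly; a sketch using the xkcd directory hash from earlier (object get was instant on it above):

    $ ipfs object get QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme   # JSON with a "Links" array of {Name, Hash, Size} entries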
<JasonWoof> visitors (sometimes) want to see a particular version, or have some assurance that a particular version (of a page or whole site) doesn't disappear just because the author decided it's done
<fazo> JasonWoof: well, nothing stops you from storing the ipfs hash corresponding to that ipns name all the time
<fazo> JasonWoof: nothing can disappear on IPFS as long as someone has a copy
<JasonWoof> so I'm still curious how it sticks together
<fazo> you could have a program track changes to the ipns name and store the hashes of every revision
<JasonWoof> example: joe publishes a site, I have a link to /ipns/.../foo/bar/baz
<ion> If you care about the historical content of an IPNS object, just arrange your node to keep it pinned.
<fazo> yeah, ion's idea is nice
<fazo> pinning a hash means that when your local ipfs node does garbage collection, it doesn't delete that hash
<JasonWoof> I'm ok with just being able to get a link to a particular version (ipfs) from a link to the latest (ipns)
<JasonWoof> so you can pin ipfs and ipns?
<ion> IPNS only provides a pointer to the latest version.
<ion> IPNS provides a map from public keys to IPFS hashes.
<fazo> JasonWoof: don't know if pinning ipns is implemented, but it sure is possible
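A minimal sketch of ion's suggestion, assuming a local daemon and placeholder names:

    # find the ipfs hash the ipns name currently points at
    ipfs name resolve /ipns/<pubkey>    # -> /ipfs/<hash>

    # pin that snapshot recursively so your node's GC never drops it
    ipfs pin add -r <hash>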
<JasonWoof> so, there's ipns/[jo's hash]/foo/[1,2,3]/about. is there one ipfs record for jo's hash? or is there one for foo/1/about, etc, or one for each directory?
<fazo> JasonWoof: every file and directory has its own hash
<JasonWoof> I'm curious: at what level of the directory hierarchy can I switch to ipfs?
<ion> /ipns/[jo's pubkey] points to /ipfs/[jo's hash] which is a directory object recursively holding foo, foo/1, foo/1/about etc.
<fazo> A directory is just a map of names:hashes, like this: { "foo": "foos_hash", "bar": "bars_hash" }
<fazo> JasonWoof: any level
<achin> ion: are you still able to resolve my ipns name?
<ion> There is a hash for each of foo, foo/1 and foo/1/about; the parent directory object will point to it.
<ion> achin: yes
<achin> ok cool, it survived
<fazo> JasonWoof: of course you can go lower but not higher unless you know the hash of a parent directory
Leer10 has quit [Read error: Connection reset by peer]
<fazo> also you can have hundreds of directories containing the same file, and to IPFS that's the same identical file since the hash is the same
<fazo> even if in different directories it has different names
<fazo> it's like symlinking happening automatically
<achin> in this context, "deduplication" is generally the term that is used
<JasonWoof> That's what I'm wondering... can I go higher? If I have an ipns url of a sub-sub-directory, can I get to an ipfs link for that version of the whole site? I don't think this is required, but it seems like it might be useful
zoro has joined #ipfs
<achin> nope
<JasonWoof> I understand that I can go from /ipns/.../ to /ipfs/.../
<fazo> JasonWoof: no, you can't do that, because there may be a lot of folders containing the same file, so how is IPFS supposed to know which one you want to go up to?
<fazo> the way IPFS is designed, that can't be implemented :(
<ion> A search engine indexing IPFS would be able to help.
<ion> given a thing in its index
pfraze has joined #ipfs
<JasonWoof> I mean go up a level in ipNs
<fazo> ion: I plan to write design docs describing one soon
<ion> There is no such thing as a level in IPNS; it's a flat map from public keys to IPFS objects.
<JasonWoof> maybe I'm being silly. if I want to go up in ipns, I just resolve an ipns url with fewer path segments on the end
<fazo> JasonWoof: yeah, of course it would work, but you can't go higher than the hash
<fazo> ion: interested in designing an ipfs search engine with me?
<JasonWoof> I understand tracking stuff at the directory-listing level. Git tracks all sub-directories together (except for submodules, which are often problematic)... but git doesn't scale up very well past a certain point, and it's cool if you can make sites that scale up further
<achin> i missed the conversation above about filesystem metadata (i'll skim it later), but it would be neat if IPFS eventually grew better support for building non-filesystem data structures
<fazo> achin: could be done at application level, like resolving a directory structure into a JSON
<ion> fazo: I’m happy to participate in discussion on this channel if i have something to say but i’m afraid i don’t have the energy to do any actual work for it, sorry. But it is an interesting project for sure.
<fazo> ion: don't worry :) I understand perfectly
<achin> fazo: yes, but that seems unnecessary
<JasonWoof> I think it might work out well to store file types in the directory listing
<fazo> So my big plan is to build both a search engine and a community board. The search engine looks easier
<fazo> JasonWoof: yep, the MerkleDAG (file structure used by IPFS) is designed to be generic
<ion> The MIME type doesn’t belong in the directory list but in the file object (along with the links to the chunks).
<JasonWoof> so if people actually want the same file contents with different types (or their tools insert it with different types, or their tool doesn't know the type), then we can still share storage space for the file contents, and have our sites/etc serve the file with the mime-type that we want
<ion> The directory object could of course provide that information redundantly (along with file size).
<ion> But the file itself should be authoritative.
<JasonWoof> ion: oh, i didn't realize files were chunked
<JasonWoof> I was hoping we could have the same file contents with different mime types, and (in that case) only have one "copy"/hash of the file contents
<JasonWoof> if that can be done efficiently without putting it in the directory, then... cool
<ion> JasonWoof: Run ipfs object get <hash> to see a JSON representation of an object.
<JasonWoof> oh, I'm so relieved that the file trees are stored in ipfs
yosafbridge has quit [Ping timeout: 246 seconds]
<achin> ok, neat so with `ipfs object` you can create a non-unixfs thing. for example: QmS7LsfgSaubKeVMEK7AzboXzG3b2XtTcEnNk8imcNXbpi
<achin> you can run 'ipfs ls' on that thing, but trying to 'ipfs get' it will fail (which is expected)
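Roughly what that looks like on the command line. A unixfs directory object is just links plus a small Data blob; hashes here are placeholders and the output is trimmed:

    $ ipfs object get <dir_hash>
    {
      "Links": [
        { "Name": "foo", "Hash": "Qm...", "Size": 123 },
        { "Name": "bar", "Hash": "Qm...", "Size": 456 }
      ],
      "Data": "\u0008\u0001"
    }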
<JasonWoof> oh, this is so cool. I think I can use this
<ion> achin: QmfLu2kVu6pu5yjSuj1mJkDhPtvS4tC2V1nXkMN2AxCte6%
<achin> what is that % doing at the end?
yosafbridge has joined #ipfs
<ion> Whoops, i copied it by mistake. My shell just added it to signify that the output did not have a terminating newline.
kevin`` has quit []
kevin`` has joined #ipfs
<achin> it's kinda goofy that you have to construct a json object to create a new object with arbitrary data
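For reference, the json-object dance achin means looks something like this; the link hash is a placeholder:

    echo '{"Data":"arbitrary data","Links":[{"Name":"child","Hash":"<some_hash>","Size":0}]}' \
      | ipfs object put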
<JasonWoof> one more question... then I gotta go out for food:
pfraze has quit [Remote host closed the connection]
<JasonWoof> when my computer resolves ipns/(hash)/foo/bar (a file, not a directory)
dlight has quit [Ping timeout: 250 seconds]
<JasonWoof> can/do I get the ipFs of /foo, or does/can it resolve /bar without getting the /foo directory listing?
<achin> it first gets {hash} which tells it about foo which tells it about bar
<samiswellcool> I think it resolves hash->foo->bar
<achin> so you have to get the whole chain
<JasonWoof> sweet!
<ion> When resolving /ipns/<pubkey>/foo/bar, <pubkey> is resolved to <hash> and then /ipfs/<hash>/foo/bar is traversed to get the hash of bar and get the object.
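A sketch of doing that traversal by hand, with placeholder hashes:

    ipfs name resolve /ipns/<pubkey>    # -> /ipfs/<root_hash>
    ipfs ls <root_hash>                 # lists foo -> <foo_hash>
    ipfs ls <foo_hash>                  # lists bar -> <bar_hash>
    ipfs cat <bar_hash>                 # fetch the file contents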
pfraze has joined #ipfs
<JasonWoof> so awesome
<JasonWoof> I love it when I get inner-workings type details (after thinking a bunch about what I want to do) and find out that the inner-workings are well designed / suitable for what I want to do
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
Not__ has joined #ipfs
Not_ has quit [Ping timeout: 265 seconds]
dlight has joined #ipfs
bsm1175321 has joined #ipfs
sseagull has joined #ipfs
atomotic has joined #ipfs
nicolagreco has quit [Quit: nicolagreco]
brab has quit [Ping timeout: 244 seconds]
<rendar> ion: so in the case of /ipfs/<hash>/foo/bar, that <hash> is a hash of all the hashes of objects (or blobs) contained in /foo/bar?
<fazo> rendar: no, it's a hash of a map from names of objects to their hashes (which you view as a folder in the gui or in the file system)
<fazo> like this: { "folderName": "folderHash", "someOtherFile": "itsHash" }
G-Ray has joined #ipfs
<fazo> it's not exactly represented that way but it gets pretty close
<rendar> fazo: hmm
<lgierth> davidar: i just have trouble keeping up with the chan and the bot's url resolution doesn't particularly help ;)
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
<rendar> fazo: because i'm used to the way git does it. in git you have e.g. /a5/b398ed88c... which is a blob which in turn represents a directory object that has all the file hashes inside, so that hash a5b39... is a hash of *hashes* (and not names, as you mentioned) of git's blobs. in a design like that, you have the directory object represented as /.git/objects/<hash>. so in ipfs, if i
<rendar> have /ipfs/<hash>/files, does that mean i can browse the hashed directory? because it seems in git you can't do that
<rendar> fazo: what i mean is, in git you cannot refer to objects such as /<dir_hash>/files ... you only have /<dir_hash> pointing to a blob containing all the other hashes you need
<fazo> rendar: correct, you can browse /ipfs/<hash> just as if it were a normal directory
<fazo> well, in IPFS you can because /ipfs/<dir_hash> doesn't only have the hashes of the contents but also the names
<rendar> fazo: ok, but is that actually a normal directory? i mean, git saves directories as hashed file-blobs in the filesystem; does ipfs exploit the filesystem and save a directory by creating a real one in the underlying filesystem?
proullon has left #ipfs [#ipfs]
<fazo> rendar: no, IPFS doesn't use the filesystem to store data
<fazo> rendar: I mean, not directly
<rendar> ok
<fazo> rendar: IPFS directory objects are MerkleDAG objects if I'm not mistaken: you can mount them on a regular file system though
<fazo> you can mount IPFS in a folder and access all data using the filesystem
<rendar> so i cannot refer to /ipfs/<hash>/files with a normal filesystem path, but only through ipfs commands, right?
<fazo> yes
<rendar> i see
<fazo> if you mount IPFS using "ipfs mount" you can also refer to /ipfs/<hash>/files with a normal filesystem
<pjz> rendar: well, you can if you mount the dir use fuse
<fazo> for example: "ls /ipfs/<some_folder_hash>" would work
<pjz> s/use/using/
<rendar> pjz: exactly
<ion> rendar: IPFS blob, directory and (future) commit objects are semantically the same as the equivalent git objects.
<rendar> ion: i see
<pjz> and ipns published names are semantically like refs/heads, but are limited right now
Spinnaker has quit [Ping timeout: 256 seconds]
<rendar> so when i refer to /ipfs/<hash>/files ... what are the actual operations ipfs executes to let me get the file?
<fazo> it gets <hash> then gets the hash of "files" from that object
<fazo> then gets "files" and there you go
<fazo> s/gets "files"/gets "files"'s hash/
nicolagreco has joined #ipfs
<ion> Look up the link called “files” in the directory object that is <hash>, then get the object by the link hash.
<ion> Just like a git directory object.
<rendar> hmm
<rendar> ok, but how does it use the merkledag tree in such a query?
Spinnaker has joined #ipfs
<ion> Uh, just like it was described above.
<ion> Get object, follow link, rinse, repeat.
<zoro> how can ipfs be called the permanent internet if a file is not distributed until someone else downloads and seeds it? it would have been better if ipfs forced the nearest nodes to store a file whether they want it or not, depending on how much they add and in proportion to the data they share; that way all files would be distributed
apophis has joined #ipfs
<ion> The topology is distributed, redundancy if an orthogonal concept.
<ion> is
wopi has quit [Read error: Connection reset by peer]
thomasreggi has joined #ipfs
wopi has joined #ipfs
thomasreggi has quit [Remote host closed the connection]
rasty has joined #ipfs
<ion> If people want the content, redundancy will emerge. If no one wants it, there's no particular need to default to storing it in a redundant manner.
<rendar> ion: well, ok, i see that, but when i ask git to give me the file with hash ab5deec8... it simply opens /ab/5dee.. and ipfs can do that as well, so how do both git and ipfs use the merkledag tree in such a case? what is the purpose?
dignifiedquire has quit [Quit: dignifiedquire]
pfraze has quit [Remote host closed the connection]
dignifiedquire has joined #ipfs
<ion> It's a mapping from the hash of an object to said object. That has a number of benefits. When you know a hash and request the object, you know for certain you got the right one by computing its hash (assuming a good enough hash function). Everyone holding the same content will assign it the same identifier.
<rendar> ion: mmm
<rendar> ion: ok, but to map from the hash of an object to the said object, isn't git simply opening the file whose name is that hash string?
<ion> Yes
<ion> Or asking a server to open that file and to send it to you
<rendar> ion: exactly
lithp has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<rendar> ion: so, when you have such a mapping, why is a merkledag tree needed at all? what other advantages does it bring you?
<whyrusleeping> rendar: how will you know that what i've sent you is what you've requested?
<rendar> whyrusleeping: you mean between 2 ipfs nodes?
<whyrusleeping> rendar: yeah
<rendar> whyrusleeping: i guess by hashing the data i get..
<rendar> then i check the hash the server sent is the same as the hash of data i got from the network
<ion> rendar: That mapping *is* the Merkle DAG.
<ion> The DAG means objects can link to others but there can be no loops.
<rendar> ion: what? having hashes and opening files whose filename is the hash string, how can this be the merkle dag?
<ion> Any system where you can store objects and get them back by their hash.
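The invariant ion is pointing at is easy to see on the command line; a quick sketch (the printed hash is whatever your node computes):

    # identical bytes always produce the identical hash, whoever adds them
    echo 'hello world' | ipfs add -q    # -> Qm... (some hash)
    echo 'hello world' | ipfs add -q    # -> the same hash again

    # -n computes the hash without storing anything, handy for checking
    # that data you received matches the hash you asked for
    echo 'hello world' | ipfs add -qn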
pfraze has joined #ipfs
* drathir wonders if it isn't faster to search if single files with names were represented as hash_of_file/hash_of_name or something similar? or is there no difference?
<rendar> ion: from what i know, a merkle tree lets you have a root hash; then when you *change* a file (a leaf) you rehash all the objects along that branch, up to the root hash, right?
<ion> rendar: There's no difference to updating any other immutable data structure. You recreate the objects on the path from the change to the root but all the other pointers can point to the previous counterparts.
<rendar> ion: ok, but let's consider this: if i can run int fd = open("abe3cdee...", ...); <- this is actually a mapping, because i'm mapping a hash string to its real content.. then i have 'fd' which i can read from and write to a socket to send the file
<rendar> so, why other data structures?
<ion> rendar: Which other data structures are you referring to?
G-Ray has quit [Ping timeout: 252 seconds]
<rendar> ion: merkle dag of course
thomasreggi has joined #ipfs
<rendar> ion: if i have a hash, and i need the file content of that hash, i map that by simply doing fd = open("abe3cdee...", ...); why would i need other data structures?
<ion> A directory full of objects named after their hashes *is* a Merkle DAG, no other data structures are really involved here.
nicolagreco has quit [Quit: nicolagreco]
<rendar> ion: ok
<rendar> ion: but from the code, i saw ipfs implements a merkledag
<ion> Yes
<rendar> so, why? if it stores blobs in the underlying filesystem with git, and you said that it is actually a merkledag, why implement one? i can't follow this
Tv` has joined #ipfs
<fazo> I don't think it uses git, it just takes it as inspiration
<fazo> it stores blocks as a merkledag
<fazo> but I'm probably wrong
<rendar> fazo: sorry
<rendar> i meant *like* git
<whyrusleeping> we store blocks on disk in a flat filesystem approach in a similar way to how git does it
<rendar> not that it is using git..
<ion> This one knows how to transparently get the object from other peers on the Internet if you don't have the one you requested on your computer.
thomasreggi has quit [Remote host closed the connection]
nicolagreco has joined #ipfs
<fazo> it is like git because it reuses data and stores patches, but I'm not 100% sure about this
<rendar> whyrusleeping: by a flat filesystem approach, you mean each file/block is stored as a new file whose name is its hash?
<ion> Git doesn't actually store patches.
<whyrusleeping> fazo: its like git because it stores data as content addressed
<whyrusleeping> rendar: yes, each file/block is stored as a file whose name is its hash
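You can poke at that layer directly; a small sketch with a placeholder hash (the on-disk layout varies by go-ipfs version, and older releases kept blocks in leveldb):

    ipfs block stat <hash>                # prints the block's key and size
    ipfs block get <hash> | head -c 64    # raw bytes, addressed purely by hash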
thomasreggi has joined #ipfs
<fazo> thanks for clarifying :)
thomasreggi has quit [Remote host closed the connection]
* whyrusleeping slowly wakes up
<whyrusleeping> i wish coffee was more of an instant effect
<fazo> whyrusleeping: try alternative sleep cycles
<rendar> whyrusleeping: ok, according to ion, this way of storing files/blocks is a merkledag, but i can't get this: why would ipfs implement a merkledag structure, then?
<fazo> unless your daily routine is too fixed, you can make it work
<fazo> rendar: how do you store directories without a merkledag?
nicolagreco has quit [Client Quit]
<whyrusleeping> fazo: my hours arent super fixed, i could probably try polyphasic, but in the past its made me feel really weird
<whyrusleeping> and not super productive
<whyrusleeping> rendar: objects have a merkledag format
<whyrusleeping> which means that the data in any given object, is a merkledag object with data and links
<rendar> fazo: by creating a file whose name is the hash of the directory's content, and storing inside that file all the hashes that directory contains
<whyrusleeping> each object is stored independently on disk
<rendar> hmmm
<fazo> rendar: that way you can't store file names inside a directory object
<rendar> fazo: you can
<whyrusleeping> rendar: think of how real filesystems like ext4 work
<rendar> fazo: you simply store names+hashes
<whyrusleeping> they store files in a b-tree
<rendar> whyrusleeping yeah
<fazo> rendar: yes, but if you do that you've just created a data structure
<whyrusleeping> but really, theyre just writing chunks to the disk in a linear manner
<rendar> fazo: so?
<rendar> whyrusleeping: that's right
<whyrusleeping> all the data is just stored in a large array that is your disk
<fazo> so why not use the merkledag?
<rendar> whyrusleeping: ok i think i'm getting that
<fazo> I think the merkledag addresses other concerns, but I'm not sure
thomasreggi has joined #ipfs
<rendar> whyrusleeping: you mean that no matter how ipfs stores files/blocks, the important thing is that every directory's hash is the hash of its *content*, up to the root directory, which is the ultimate hash of all contents, like a git commit, or git HEAD
<rendar> whyrusleeping: and no matter how it's implemented or stored, *THAT* is a merkledag?
<fazo> rendar: maybe we should read the merkledag spec: https://github.com/ipfs/specs/tree/master/merkledag
<whyrusleeping> rendar: yeap
<whyrusleeping> youre getting it now
<whyrusleeping> we used to store things in leveldb
wopi has quit [Read error: Connection reset by peer]
<whyrusleeping> and we want to store things in s3 in the future
lithp has joined #ipfs
<rendar> whyrusleeping: ok now i see
<whyrusleeping> but that doesnt matter
<whyrusleeping> its still a merkledag
<rendar> fazo: thanks
<rendar> whyrusleeping: that's because the main property of a merkle dag is having a tree of hashes that hash the *content* of the child nodes
wopi has joined #ipfs
<rendar> whyrusleeping: but in this way, ipfs is not a merkledag tree, it's a merkledag N-tree, since you can store N directories in one directory and not only 2..
<ion> It's not a tree, it's a DAG
<rendar> ops, sorry
<rendar> you're right
<rendar> we defined it as merkle *dag* since the start of the discussion
<rendar> but when i thought of a merkle-tree i always thought of a binary tree
nessence has joined #ipfs
<zoro> how is one of my bootstrap peers serving my file when i'm offline? (i mean, it didn't download the file)
<whyrusleeping> zoro: the file is probably cached somewhere
nicolagreco has joined #ipfs
thomasreggi has quit [Remote host closed the connection]
ygrek_ has quit [Ping timeout: 244 seconds]
<zoro> @whyrusleeping: how can ipfs be called the permanent internet if a file is not distributed until someone else downloads and seeds it? it would have been better if ipfs forced the nearest nodes to store a file whether they want it or not, depending on how many files they add and in proportion to the data of other peers they also store; that way all files would be distributed and permanent
compleatang has quit [Quit: Leaving.]
<lithp> The current web is not permanent, no matter what
<ion> zoro: Did you see my response?
<lithp> ipfs can be made permanent, for content you wish to have kept around
<fazo> zoro: I have an answer for that
<zoro> ion: yes, but i didn't get the answer
<fazo> zoro: https://github.com/fazo96/ipfs-textbook search for "What do you mean by permanent web?"
<whyrusleeping> zoro: permanent refers to the fact that links will never change, if you have an ipfs path to your content, it will always unambiguously refer to that content, and even if you cant access the data at the moment, it doesnt give you different data instead
compleatang has joined #ipfs
<zoro> @fazo: this is helpful
<fazo> zoro: IPFS uses links that don't tell your computer where the data is, but what it is, so that no matter who has it, as long as their packets reach you, you can get the file
<zoro> @whyrusleeping : yes now i get it
<fazo> and everyone that views or downloads a file stores it for a little while (or forever depending on their settings)
<whyrusleeping> 'a little while' is currently defined as 'until they remember to run a GC'
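Concretely, the pin/GC interplay whyrusleeping describes looks like this (placeholder hash):

    ipfs pin add <hash>    # pinned content survives garbage collection
    ipfs repo gc           # drops cached blocks that are not pinned
    ipfs pin ls            # show what is currently pinned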
<fazo> whyrusleeping: me and richardlitt gave a big bump to the ipfs textbook yesterday
<whyrusleeping> ipfs textbook?
<whyrusleeping> this is new to me
<fazo> whyrusleeping: It's an effort to build a manual to explain IPFS to non technical people
<whyrusleeping> oooooh, me gusta mucho
<fazo> yeah we plan to leverage the example viewer to publish the textbook on ipfs
<whyrusleeping> awesome :)
akhavr has quit [Ping timeout: 250 seconds]
<ion> whyrusleeping: My two cents regarding your website efforts: it's great to have a video that explains and sells the thing on the front page, but it's bad to have multiple. *I* watched them because I was already interested but a random visitor might be considered about which one she should watch and just leave for easier stimulation in the form of cat gifs.
<ion> Uh
nicolagreco has quit [Quit: nicolagreco]
<ion> Confused about …
<fazo> ion: I remember seeing an active issue about fixing just that
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
zoro has quit [Quit: Page closed]
<whyrusleeping> ion: yeah, agreed
akhavr has joined #ipfs
_jkh_ has quit [Changing host]
_jkh_ has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping force-pushed ipns/patches from 8f5c610 to 513bc6e: http://git.io/vn0bZ
<ipfsbot> go-ipfs/ipns/patches 671ae36 Jeromy: ipns record selection via sequence numbers...
<ipfsbot> go-ipfs/ipns/patches 513bc6e Jeromy: Implement ipns republisher...
<ipfsbot> go-ipfs/ipns/patches 7a63fd9 Jeromy: Fix dht queries...
nessence has quit [Remote host closed the connection]
john__ has joined #ipfs
<ion> Hehe. I was googling for the GitHub compare URL format to see the diff from 8f5c610 to 513bc6e and encountered this: “Edit: These branches have been removed since the publishing of this post and can no longer be shown.” https://github.com/blog/612-introducing-github-compare-view
magneto1 has quit [Ping timeout: 240 seconds]
nessence has joined #ipfs
captain_morgan has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
apophis has joined #ipfs
thomasreggi has joined #ipfs
akhavr has quit [Remote host closed the connection]
apophis has quit [Ping timeout: 265 seconds]
akhavr has joined #ipfs
Encrypt has joined #ipfs
thomasreggi has quit [Remote host closed the connection]
thomasreggi has joined #ipfs
apophis has joined #ipfs
akhavr has quit [Ping timeout: 272 seconds]
thomasreggi has quit [Remote host closed the connection]
thomasreggi has joined #ipfs
Encrypt has quit [Ping timeout: 252 seconds]
<kyledrake> whyrusleeping ion I've got some ideas for improving the site's introduction that I'm looking forward to working on.
<ion> whyrusleeping, jbenet: When designing IPFS commit objects, please consider adding a variant of a merge object whose meaning is history rewriting: it will still link to all parents but user interfaces are meant to render it as if only the first parent existed unless the user requests to see the “rewritten” history in the secondary parent. This would alleviate the need for force pushes Git may have in order
<ion> to keep the history of a branch clean.
magneto1 has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
magneto1 has quit [Ping timeout: 250 seconds]
jedahan has joined #ipfs
apophis has quit [Ping timeout: 250 seconds]
atomotic has joined #ipfs
apophis has joined #ipfs
rendar has quit [Ping timeout: 246 seconds]
akhavr has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
rendar has joined #ipfs
jedahan has quit [Quit: Textual IRC Client: www.textualapp.com]
simonv3 has joined #ipfs
pod is now known as p0d
p0d is now known as pod
ygrek_ has joined #ipfs
atomotic has left #ipfs ["Textual IRC Client: www.textualapp.com"]
ryepdx has quit [Ping timeout: 240 seconds]
ryepdx has joined #ipfs
<whyrusleeping> ion: interesting
<whyrusleeping> i wonder when we will see it in chrome beta
bsm1175321 has quit [Ping timeout: 255 seconds]
ygrek_ has quit [Ping timeout: 260 seconds]
vijayee_ has joined #ipfs
bsm1175321 has joined #ipfs
dlight has quit [Read error: Connection reset by peer]
lgierth has quit [Quit: WeeChat 1.0.1]
lgierth has joined #ipfs
lgierth has quit [Client Quit]
<ion> Re: https://github.com/ipfs/go-ipfs/issues/1738, couldn't the new version add a UDT listener automatically when seeing that the config was written using an earlier version of go-ipfs the last time?
lgierth has joined #ipfs
lgierth has quit [Client Quit]
lgierth has joined #ipfs
<whyrusleeping> ion: the issue is that its difficult to tell if the user has changed their config
<whyrusleeping> and we also dont want to have to leave code in the codebase forever that says 'if prev.version == X, do Y'
<whyrusleeping> and IMO, automatically changing the users configs isnt cool
<ion> ok
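For anyone wanting to opt in by hand, editing the swarm listeners looks roughly like this; the udt multiaddr shown is an assumption, not a confirmed default:

    ipfs config Addresses.Swarm     # show current listen addresses

    # replace the list, adding a UDT listener alongside TCP
    ipfs config --json Addresses.Swarm \
      '["/ip4/0.0.0.0/tcp/4001", "/ip4/0.0.0.0/udp/4002/udt"]'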
Encrypt has joined #ipfs
<spikebike> standard operating procedure for the debian family is to offer the upgraded config, with the choices to upgrade, not upgrade, or display the difference. If upgrade is selected, the config file is backed up before the change.
samiswellcool has quit [Quit: Connection closed for inactivity]
<spikebike> wow, brotli looks better than I expected
kerozene has quit [Max SendQ exceeded]
kerozene has joined #ipfs
<fps> about the debian config file updates
<fps> that process is actually quite convenient for the package maintainer
<fps> it's basically just mentioning which files are config files in one of the control files
<fps> details are documented in the debian package maintainer documentation
<fps> which is horrible
<fps> but reasonably complete
RX14 has quit [Quit: Fuck this shit, I'm out!]
magneto1 has joined #ipfs
Encrypt has quit [Ping timeout: 246 seconds]
RX14 has joined #ipfs
RX14 has quit [Remote host closed the connection]
RX14 has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
zignig has quit [Ping timeout: 255 seconds]
nessence has quit [Remote host closed the connection]
<ion> The sucky part is that apt/dpkg doesn't keep track of the default config in the presently installed version of a package, which would allow it to do a three-way merge with the default config in the new version.
zignig has joined #ipfs
nicolagreco has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
nicolagreco has quit [Client Quit]
wopi has joined #ipfs
nicolagreco has joined #ipfs
nicolagreco has quit [Client Quit]
<fps> ion: i suppose :)
Encrypt has joined #ipfs
<ion> If the system knew what changed in the package instead of just what changes there are between your modified config and the new package version, updates would involve much less manual merging.
nessence has joined #ipfs
nessence has joined #ipfs
<fps> ion: true :) is there a distro with such a smart package manager?
<fps> and also, why not just maintain all configs from packages in a git repo? ;) i mean by the package manager..
<fps> and while we're at it
compleatang has quit [Quit: Leaving.]
<fps> why not make all packages subrepositories in a git repo. the whole distribution being a git repo
<fps> lalala
* fps dances around like a madman
john__ has quit [Ping timeout: 246 seconds]
<vijayee_> what are the future plans around ipns
<vijayee_> will I be able to have multiple routes to hashes from a peerid?
jamescarlyle has joined #ipfs
apophis has joined #ipfs
<fazo> fps: NixOS does that
<fazo> the whole distro is a git repo.
<fps> ok
<fazo> it also does a lot of other amazing things for example it installs the OS using a single configuration file that details how your bootloader, kernel, desktop environment, packages, configs are set
ben_____ has joined #ipfs
<fazo> so if you back up that single configuration file and use it to reinstall you get the same exact setup back
<fazo> it also has previous configurations stored in the bootloader so if you change something and it doesn't even boot anymore, you can boot an old version
<fps> sounds pretty great. opened for later reading and testing in a vm :)
<fazo> it's my distro of choice
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<fazo> it's a revolutionary distro, I suggest looking at their website
<fps> i have the manual open right now
<fazo> I can replicate my system just by saving configuration.nix and my /home/fazo
<fazo> I did it a few weeks ago to move my install to another disk
* fps drools
<fazo> I did this:
<fazo> mkfs.ext4 /dev/sdb1
<fazo> mount /dev/sdb1 /mnt
<ben_____> has Bitswap been implemented in ipfs?
<fazo> nixos-install
<fazo> cp -r ~ /mnt/home/fazo
<fazo> rebooted and it worked
<fazo> I had an exact copy in the other disk :)
<fazo> nixos-install uses /mnt if you don't tell it where to install
<fps> obligtory question: many packages? :)
<fazo> it's also a source based distro so that you can customize packages (similar to gentoo) BUT it uses a binary cache so that it doesn't take 200 hours to install gnome
<drathir> lol wth is goin there?
<fps> *obligatory
<fazo> fps: yes, not as many as arch though
<fazo> they're very easy to write. A go package, for example the ipfs one, is 4 lines long.
<fps> fazo: the easier it is to add packages, the more it will grow i suppose
<fazo> yep, it's the easiest build system I ever used, mostly because of nix-shell
<fazo> nix-shell is a script that opens a shell with the environment you tell it to load, without changing your system
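Typical usage, as an illustration (the package names are examples):

    nix-shell -p go git    # throwaway shell with go and git, nothing installed system-wide
    exit                   # back to your normal environment, system untouched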
<fps> ok, you got me sold. i wanted to finish writing a parser tonight. but now i'll try setting it up in a vbox
<fazo> lol I'm.. glad I guess
<fazo> it needs more people doing packages
Spinnaker has quit [Ping timeout: 252 seconds]
<fps> oh nice. quote "Using virtual appliances in Open Virtualization Format (OVF) that can be imported into VirtualBox. These are available from the NixOS homepage.
<fazo> just make sure to switch to the unstable channel after you install, because stable is very old (december) and NixOS exploded in popularity in the last months
<fazo> so unstable made giant additions in the last months
<fazo> also the newest stable is due in a few weeks tops
<drathir> fazo: !ping
simonv3 has quit [Quit: Connection closed for inactivity]
<fazo> drathir: ?
<drathir> fazo: checking if You are not a spam bot...
<fazo> oh I saw the message now.
<fazo> nah, ask whyrusleeping or jbenet. I've been around for a little while
<fazo> it's sad that you saw me as a spam bot lol
<drathir> fazo: good to know... ^^
<fps> you failed the turing test
<fazo> maybe I am a spam bot.
<fazo> :(
Spinnaker has joined #ipfs
<noffle> darned nixos promo bots
<drathir> fazo: taking into mind the conversation with fps, yeah, it pretty much looks like it ;p
<fazo> I know what IPFS is and how to use it... but do I know what it *feels* like to use it?
<noffle> (nixos looks pretty spiffy though. looks like a good replacement once my arch install dies and I whine about how much work re-config is)
<drathir> fazo: btw you were the next in the queue for a !ping check, but as i see it's not needed ;p
<drathir> fazo: > fps
<fazo> yeah, when I saw fps complaining about distros not having features that NixOS has, I had to step in
<fazo> Sorry, I don't know what !ping is
<fps> !ping fps
<fps> hmm
<fps> fps: !ping
<drathir> i saw "similar" bot fights conversations before that why like to check ;p
<drathir> fazo fps sorry btw...
<fps> is !ping a no-op?
<fazo> drathir: np ^^ it was funny
jvalleroy has joined #ipfs
<fps> drathir: don't worry :)
jvalleroy has left #ipfs [#ipfs]
<fazo> by the way there is a problem with the IPFS package in NixOS (nothing major)
<fazo> if you run the daemon from a systemd service, mounting on fuse doesn't work
* drathir sees only one replacement for arch... it's openbsd...
<fazo> drathir: why not NixOS?
<fazo> it's a new distro but skyrocketing in popularity and development speed.
<fazo> and one of the only actually revolutionary distros
<drathir> bc arch is stable as a rock and openbsd is the same... i don't really like too-young distros...
<fazo> drathir: NixOS stable will be more stable than arch. Don't know about BSD
ben_____ has quit [Quit: Page closed]
<fazo> also NixOS has a number of features that make it a lot more stable, like rolling back to the state before an update
<fazo> and reproducible configurations
<fps> now you're pushing it deliberately? :)
<fazo> eeeh sorry
<fps> maybe it's a hybrid system
<drathir> also /me has a working arch installation older than 6y...
<fazo> stop me before it's too late
<fps> part human, part machine :)
<fazo> if I'm part machine, I hope I only run free software
<drathir> maybe after a few years, if it will still exist, i'll take a look at it ^^
atrapado has joined #ipfs
chriscool has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
<lithp> scumbag life, the first time we tried to look at our source code the universe charged us $3 billion
wopi has joined #ipfs
<zmanian> tor-dev
zmanian has left #ipfs [#ipfs]
jamescarlyle has quit [Ping timeout: 240 seconds]
jamescarlyle has joined #ipfs
bedeho has quit [Ping timeout: 240 seconds]
apophis has quit [Quit: This computer has gone to sleep]
apophis has joined #ipfs
zmanian has joined #ipfs
chriscool has quit [Ping timeout: 240 seconds]
Gennaker has joined #ipfs
Spinnaker has quit [Read error: Connection reset by peer]
ansuz has quit [Ping timeout: 252 seconds]
<fps> oh nice. vbox was killed by the oom killer during nixos-install
<fps> let's see how it copes with that ;)
ansuz has joined #ipfs
jhiesey has quit [Ping timeout: 252 seconds]
screensaver has quit [Remote host closed the connection]
jhiesey has joined #ipfs
feross has quit [Ping timeout: 252 seconds]
thomasreggi has quit [Remote host closed the connection]
jarofghosts has quit [Ping timeout: 252 seconds]
jarofghosts has joined #ipfs
feross has joined #ipfs
rvsv has joined #ipfs
pfraze has quit [Remote host closed the connection]
bsm1175321 has quit [Ping timeout: 260 seconds]
dstokes has quit [Ping timeout: 252 seconds]
sbruce has quit [Ping timeout: 252 seconds]
sbruce has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
Soft has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
dstokes has joined #ipfs
CounterPillow has quit [Read error: Connection reset by peer]
CounterPillow has joined #ipfs
Encrypt has quit [Quit: Sleeping time!]
jamescarlyle has quit [Remote host closed the connection]
Soft has joined #ipfs
apophis has quit [Read error: Connection reset by peer]
apophis has joined #ipfs
pfraze has joined #ipfs
doei has quit [Ping timeout: 250 seconds]
nessence has quit [Remote host closed the connection]
nf has joined #ipfs
<nf> i'm getting this error "rm: cannot remove ‘ipfs’: Transport endpoint is not connected".....how can i unmount a folder
<achin> sudo umount /ipfs /ipns
nicolagreco has joined #ipfs
nicolagreco has quit [Client Quit]
<whyrusleeping> and when that tells you it fails because fuse is annoying, you can try 'sudo umount -f /dev/fuse'
<whyrusleeping> 'fuck you fuse, i do what i want'
<rendar> whyrusleeping but that will unmount all the mounted fuse filesystems, right?
<whyrusleeping> rendar: yeah. but it sure tells fuse who is boss
<rendar> lol yeah
<rendar> whyrusleeping: do you know what ipld is all about?
<whyrusleeping> rendar: uhm
<whyrusleeping> linked data
<rendar> what you mean?
<whyrusleeping> its supposed to be the 'new format for ipfs objects'
<rendar> like connecting one file to another one?
<achin> less important than 'what' is 'where' (as in where to find info about ipld)
<rendar> whyrusleeping: how that works?
<whyrusleeping> i'm really not the best person to ask about linked data, jbenet is the one working on
nicolagreco has joined #ipfs
<whyrusleeping> it
nicolagreco has quit [Client Quit]
<whyrusleeping> theres a repo for it here: https://github.com/ipfs/go-ipld
<rendar> whyrusleeping: ok thanks, but i was just asking for some clues to at least understand what you mean by "linked"
fiatjaf has joined #ipfs
<whyrusleeping> rendar: i honestly dont understand it well enough myself to give any sort of a reasonable answer
<whyrusleeping> i know its supposed to be utilizing json-ld to some extent: http://json-ld.org/
<whyrusleeping> and that jsonld page is probably a good place to start
<rendar> i see
<achin> we need to clone the ipfs team. there is not enough of them
<achin> a minimum of 6 whyrusleepings
<whyrusleeping> six of me? i dont think they make enough coffee for that
<achin> i assume if we can clone you, we can also clone the coffee maker
<whyrusleeping> oooooh, you are a bit more forward thinking than i...
<achin> i am sure if you had a little extra coffee, you would have gotten there
<whyrusleeping> lol
rendar has quit []
<richardlitt> @jbenet @whyrusleeping @daviddias @davidar @krl @mildred @amstoker Will expect To dos by tomorrow! https://etherpad.mozilla.org/nTqGOVynPT
<whyrusleeping> richardlitt: ah, thanks for the reminder!
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<noffle> richardlitt: when does the aforementioned sprint begin? monday 00:00 of each week?
<whyrusleeping> noffle: yeap!
<richardlitt> :)
lithp has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<noffle> thanks!
doei has joined #ipfs
simonv3 has joined #ipfs
doei has quit [Ping timeout: 240 seconds]