<spikebike>
hrm, about 10% of my "ipfs swarm peers" results in Error: read tcp 127.0.0.1:47038->127.0.0.1:5001: use of closed network connection
<whyrusleeping>
>.>
<ion>
whyrusleeping: Will 0.4.0 have µTP? That would probably solve my bufferbloat problem better than TCP with a bandwidth limit setting.
<whyrusleeping>
thats... interesting
<spikebike>
if I run once a minute often 10-20 work, then 1 or 2 in a row die
<whyrusleeping>
ion: we are going to be using UDT
<whyrusleeping>
there isnt a good enough utp implementation in go yet
<ion>
whyrusleeping: Will 0.4.0 have UDT? :-)
<whyrusleeping>
ion: UDT is slated for 0.3.8 :)
<ion>
whyrusleeping: Awesome
voxelot has quit [Ping timeout: 250 seconds]
<ion>
I’m curious and haven’t seen this mentioned in anything i’ve read about IPFS so far: how will interplanetary communication be handled when you occasionally have a link (with a latency measured in minutes) between two planets and people would like to communicate between them?
<whyrusleeping>
ion: it shouldnt be too different honestly, will just need to have higher timeouts
<whyrusleeping>
we may want to include some extra delays before sending data through bitswap to see if we get block cancel
<whyrusleeping>
and scale that based on the latency of the link
<whyrusleeping>
but thats just an optimization
<ion>
ok
<spikebike>
found a cool paper about sorting peers into different DHTs based on latency
<whyrusleeping>
spikebike: coral?
<spikebike>
I don't think so, umm.
<spikebike>
ah, nevermind, yeah, I think it's coral
<achin>
maybe over time, an IPFS node can build up a sense of how responsive/fast a given peer is, and try to prefer it over others. this might naturally yield a behavior where fast cache boxes get preferred (as well as other lan-local nodes)
<fazo>
yeah, another issue is that the fastest machine to answer (low latency) may have a lot less upload bandwidth than another machine with higher ping
<achin>
(this seems like an interesting research topic. i wonder if there are any papers on this)
sbruce has joined #ipfs
<fazo>
yeah bitswap is gonna be hard to design properly
<fazo>
but even now the network works fine though
FunkyELF has quit [Quit: Leaving]
<achin>
ion: it seems like your cache use-case is not too dissimilar from your interplanetary case -- there might be only a few nodes between Earth and Mars that have a good connection. they would have to "proxy" all traffic between the planets
hellertime has joined #ipfs
<achin>
so you either need a configuration option to say "use this gateway node", or design the peer-to-peer heuristics so that it can naturally discover any bottlenecks in the topology
<fazo>
or just set up mars with servers with huge storage to cache everything
<fazo>
so that everything downloaded from earth using ipfs gets cached
<fazo>
assuming mars is the minority.
<ion>
fazo: Well, I have been bitten by the naive bitswap implementation. The daemon saturates my upstream bandwidth (initial swipe typo: my BBQ anxiety) at 250-ish kB/s while actual content makes it from A to B at 30 to 60 kB/s.
<spikebike>
I've seen a ton of similar papers that explore various peer preferences based on latency, bandwidth, and network proximity (hops)
<fazo>
also assuming that on mars everyone has at least gigabit marsnet connection
<achin>
(it's ok, i have BBQ anxiety all the time)
<ion>
Because of TCP and bufferbloat, that also results in huge latency in all communications.
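go-ipfs had no bandwidth-limit setting at this point, so the usual workaround for the saturation/bufferbloat ion describes is to shape upstream traffic at the OS level. A minimal sketch (the interface name and the 200kbit rate are assumptions, adjust for your link):
```
# cap upstream with a token bucket filter so the daemon can't fill the buffers
sudo tc qdisc add dev eth0 root tbf rate 200kbit burst 32kbit latency 400ms

# undo it
sudo tc qdisc del dev eth0 root
```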
<fazo>
ion: wow, 250 kB/s upload. I can only manage 25 when there's no congestion
<fazo>
I can't even use VoIP ffs
<ion>
fazo: How fast does IPFS move content for you?
<fazo>
If I want to use VoIP or upload stuff I need to use my mobile connection, which is 4G at dozens of mB/s. Too bad I have incredibly absurd monthly caps
<fazo>
ion: fast enough, I built a 1.3 MB app and people could use it fine
<fazo>
can't complain
<whyrusleeping>
(no, go ahead and complain, it might make us work faster)
<fazo>
considering I run OpenWRT with hardcore QoS scripts on my router
nicolagreco has joined #ipfs
<nicolagreco>
jbenet: around?
<fazo>
the frustration of a slow connection is horrible, no matter what I do I can't go faster than it does
<nicolagreco>
I had some issue with my connection until now
<achin>
ion: can you remind me the iftop command you recommended to observe ipfs traffic?
<ion>
achin: Sorry, I'm in bed. Something like iftop -i eth0 -BP 'port 4001' but that's probably wrong in some regard.
<achin>
that gets me close enough, thanks
<spikebike>
iftop -i eth0 -BP -f 'port 4001'
<spikebike>
that looks plausible anyways
<achin>
things aren't too chatty at the moment -- about 20KB/s TX and 10KB/s RX
jvalleroy has quit [Quit: Leaving]
<spikebike>
uranus.i.ipfs.io seems reliably one of the chattier nodes
<spikebike>
didn't know we had that much bandwidth to uranus ;-)
<ion>
achin: Try downloading a large thing [insert your mom joke]. To see the useful bandwidth, ipfs cat something | pv >/dev/null. Perhaps with a parameter for pv to use a long window for measuring the average, or just wait until the end and it will say the average for the entire download.
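Concretely, the measurement ion describes might look something like this (pv's -a prints the average rate and -i sets the reporting interval; the hash is a placeholder):
```
# running average of the useful throughput, reported every 10 seconds
ipfs cat QmSomeLargeThing | pv -a -i 10 > /dev/null

# or just wait for it to finish and read the final average
ipfs cat QmSomeLargeThing | pv > /dev/null
```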
<achin>
is uranus one of the bootstrap nodes?
<achin>
anybody have a largish thing for me to download?
<achin>
oh, i'll use davidar's PGP dump
magneto1 has quit [Ping timeout: 250 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed ipns/patches from 3b29e26 to 8f5c610: http://git.io/vn0bZ
<whyrusleeping>
_jkh_: you *should* be able to rexport it with smb, although there are definitely kinks in the fuse system, so do let us know when you find them
<achin>
looks good to me. one formatting suggestion: turn each of your examples into blocks, by putting ``` at the start and end of each
jfis has quit [Ping timeout: 252 seconds]
<whyrusleeping>
spikebike: thanks for posting the issue
<spikebike>
whyrusleeping: sure, happy to help, I wonder if it's just uplink limited and there are timeouts
<spikebike>
presumably that would get better over time
<whyrusleeping>
spikebike: it might be timeouts, and they just arent manifested very well through the fuse interface
<whyrusleeping>
i wonder if fuse understands some sort of ETIMEOUT type error...
<achin>
is `ipfs ls QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme` working for anyone else?
<whyrusleeping>
davidar: yeah, ls is fetching all of the child blocks
<whyrusleeping>
and it looks like one of them is not available or something...
<davidar>
whyrusleeping (IRC): is it a good idea for ls to do that?
<whyrusleeping>
davidar: well, that depends on your expectations of what information ls provides
<davidar>
whyrusleeping (IRC): the same information as the dir index the gateway provides, is what I would have expected
<whyrusleeping>
the gateway also runs the same command
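For what it's worth, if you only want the names and hashes out of the root object without touching the children, something like `ipfs object links` should be enough (a sketch, using the hash from above):
```
# fetches only the root directory object and prints its links
ipfs object links QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme

# ipfs ls additionally fetches each child block to report type/size, hence the stall
ipfs ls QmPVP4sDre9rtYahGvcjv3Fqet3oQyqrH5xS33d4YBVFme
```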
rozap has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Quiark_ has quit [Ping timeout: 264 seconds]
nicolagreco has quit [Quit: nicolagreco]
amstocker has joined #ipfs
Tv` has quit [Quit: Connection closed for inactivity]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
_jkh_ has quit [Changing host]
_jkh_ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
jfis has joined #ipfs
amstocker has quit [Ping timeout: 265 seconds]
<zignig>
whyrusleeping: shouldn't ls just get the top two levels of blocks ?
<davidar>
whyrusleeping (IRC): so why is the gateway instant but ls isn't?
_jkh_ has quit [Read error: Connection reset by peer]
_jkh_ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
amstocker has joined #ipfs
Quiark_ has joined #ipfs
brab has joined #ipfs
akhavr has quit [Quit: akhavr]
jamescarlyle has joined #ipfs
jamescarlyle has quit [Remote host closed the connection]
akhavr has joined #ipfs
captain_morgan has quit [Ping timeout: 240 seconds]
elima_ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
amstocker has quit [Ping timeout: 244 seconds]
ygrek has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
akhavr has quit [Ping timeout: 240 seconds]
akhavr has joined #ipfs
rendar has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
captain_morgan has joined #ipfs
stickyboy has joined #ipfs
<davidar>
hi zignig
amstocker has joined #ipfs
<zignig>
davidar: hey :)
<davidar>
zignig, ever since i learned of dawg's, I've had an urge to write a library simply for the meme-value :p
<davidar>
you dawg, I heard you like DAGs, so I put a DAWG in your MerkleDAG...
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
rozap has quit [Ping timeout: 244 seconds]
<zignig>
davidar: I have the same thought...
<zignig>
interesting application for the data structure.
<davidar>
yeah, I was thinking about it in the context of search indexes
<davidar>
stuff a bunch of (word,location) pairs into a dawg
<zignig>
yes, there was a paper posted here some time back about indexing dhts.
<davidar>
yeah, i think i read it, and briefly talked to you about it
<zignig>
oh yes, yes you did.
wopi has quit [Read error: Connection reset by peer]
<zignig>
ipns really needs to stabilize before much of this can move forward.
* zignig
pokes whyrusleeping
<zignig>
;P
wopi has joined #ipfs
<davidar>
hehe
captain_morgan has quit [Ping timeout: 240 seconds]
amstocker has quit [Ping timeout: 240 seconds]
<davidar>
zignig, also ipld
<zignig>
I like the idea , but i'm not sure where it sits.
<davidar>
which would probably be ready by now if I didn't keep arguing about the details :p
<zignig>
is it built into ipfs ? or an overlay thing.
<ipfsbot>
[webui] masylum opened pull request #82: Bypass CORS in development by proxying `Origin` headers correctly (master...feature/bypass-cors-in-developement) http://git.io/vnu3W
<davidar>
zignig, yeah, I'm still not entirely clear on the implementation details either
<davidar>
my basic understanding is
<davidar>
ipld add massive-database.json
<davidar>
and it gets transparently broken into small ipfs blocks and lazily re-assembled at the other end
<davidar>
so i can do something like
<davidar>
ipld get massive-db.json/foo/bar/baz/7
<davidar>
and it will only grab the blocks it needs to fulfill the request
<davidar>
(ie only a small piece of the full db)
<zignig>
ok, not at all what I thought.
<zignig>
I thought it was a way to attach metadata to ipfs object so you can give them context.
<davidar>
yeah, there's also that
xeon-enouf has quit [Ping timeout: 250 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<davidar>
tbh a lot of the confusion is because different people think it has different purposes
<davidar>
and nobody has really explained it well enough
<davidar>
my personal opinion is that (apart from renaming ipld to something more helpful), the metadata stuff should be layered on top, rather than part of the spec
<zignig>
I have skimmed that one before.
blame has quit [K-Lined]
ffmad has quit [K-Lined]
oleavr has quit [K-Lined]
ogd has quit [K-Lined]
Luzifer has quit [K-Lined]
true_droid has quit [K-Lined]
RJ2 has quit [K-Lined]
nrw has quit [K-Lined]
robmyers has quit [K-Lined]
sindresorhus has quit [K-Lined]
ehd has quit [K-Lined]
jhiesey has quit [K-Lined]
zmanian has quit [K-Lined]
cleichner has quit [K-Lined]
kumavis has quit [K-Lined]
sugarpuff has quit [K-Lined]
tibor has quit [K-Lined]
mafintosh has quit [K-Lined]
bigbluehat has quit [K-Lined]
grncdr has quit [K-Lined]
daviddias has quit [K-Lined]
mikolalysenko has quit [K-Lined]
yoshuawuyts has quit [K-Lined]
dandroid has quit [K-Lined]
SoreGums has quit [K-Lined]
jonl has quit [K-Lined]
prosodyC has quit [K-Lined]
feross has quit [K-Lined]
henriquev has quit [K-Lined]
vonzipper has quit [K-Lined]
anderspree has quit [K-Lined]
rubiojr has quit [K-Lined]
leeola has quit [K-Lined]
hosh has quit [K-Lined]
lohkey has quit [K-Lined]
kevin`` has quit [K-Lined]
risk has quit [K-Lined]
mappum has quit [K-Lined]
richardlitt has quit [K-Lined]
karissa has quit [K-Lined]
caseorganic has quit [K-Lined]
kyledrake has quit [K-Lined]
zrl has quit [K-Lined]
bret has quit [K-Lined]
<davidar>
we've also been arguing about whether type information should be an explicit feature
screensaver has joined #ipfs
<davidar>
actually, I'm just going to start calling it ipdb :p
<zignig>
I think, arbitrary metadata with a few well defined keys (mime-type, predecessor, blah) to objects.
<zignig>
ipdb , not dbddbdip2/plural6
<zignig>
?
<davidar>
lol
xeon-enouf has quit [Ping timeout: 240 seconds]
karissa has joined #ipfs
<zignig>
boom ! netsplit.
<davidar>
so, there's essentially two separate issues that have gotten all mixed up
henriquev has joined #ipfs
<zignig>
yes.
<davidar>
1) transparently persisting arbitrary data structures to ipfs
kyledrake has joined #ipfs
richardlitt has joined #ipfs
<davidar>
2) a standard way of specifying metadata
kumavis has joined #ipfs
xeon-enouf has joined #ipfs
<davidar>
jbenet: is that ^ about right?
bret has joined #ipfs
* zignig
lifts the trapdoor to see if jbenet is hiding in the basement.
<davidar>
well, I'm pretty sure that's right
<davidar>
lol
<davidar>
if it's not, my head's gonna 'splode
<davidar>
anyway, personally i think it would make sense to cleanly separate the two issues
<davidar>
zignig, haha, you just reminded me of that show that used to be on abc
<Guest96>
Hi guys. Quick Q regarding unpinning. I had pinned the IETF RFC archive and tried to unpin it when I got an error "too many open files" (which, incidentally I would also get all the time when just doing an ls on it which was why I was trying to remove it and start over). The problem is I can now no longer do an rm on the hash since ipfs says it's no longer pinned but I worry that it broke in the middle of recursively unpinning and I'm left
<Guest96>
with a bunch of pinned cruft. How do I check this?
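A rough way to check for leftover pins (a sketch against the go-ipfs CLI of the time; <root-hash> is whatever was originally pinned):
```
# is the root still pinned recursively?
ipfs pin ls --type=recursive | grep <root-hash>

# list everything pinned, any way at all, and compare against what you expect
ipfs pin ls --type=all

# blocks that are merely cached (not pinned) are reclaimed by
ipfs repo gc
```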
<daviddias>
it also has some points where I could use some help from other people
<rendar>
cryptix: what is the coralcdn approach you mentioned before?
<daviddias>
with time, more stuff will be added (I'm avoiding putting there what I know is considerably unstable in terms of API, so that I don't confuse anyone)
<ion>
Cool. Does go-ipfs have an equivalent? I’d like to subscribe to your newsletter.
<daviddias>
Also richardlitt is making an IPFS book which is going to be pretty sweet; it explains all the crazy acronyms we keep using in convo (sorry about that, blame the network people :P ) https://github.com/RichardLitt/ipfs-textbook
<multivac>
[WIKIPEDIA] Dawg | "Dawg or DAWG may refer to:Directed acyclic word graph, a computer science data structureDirected acyclic word graph, an alternative name for the Deterministic acyclic finite state automaton computer science data structureDawg, the nickname of American mandolinist David GrismanDawg, a fictional dog..." | https://en.wikipedia.org/wiki/Dawg
<dignifiedquire>
daviddias: could you give me a hint on how to best extract “used storage” “count of objects” and “location” through the node api so I can display those stats
<dignifiedquire>
also upload and download speed
<daviddias>
sure, in a jiffy
<daviddias>
upload and download speed is not really a thing
<daviddias>
when a file is added, it is added to the local node
<daviddias>
dignifiedquire: sorry if it is misleading, but that is really a mockup, right now, we are able to provide the same stats that are available through the ipfs cli
<daviddias>
jbenet: designed the mock up considering everything we want to have
<dignifiedquire>
okay,
<dignifiedquire>
but location and object stats are available through the webui as far as I can see?
<daviddias>
location is done through a clever hack from krl
<daviddias>
object stats, yes
<clever>
daviddias: i have been messing with DHT's for years
<clever>
daviddias: so when your node comes online, or you add an object, it has to scan the DHT to find the node nearest to every piece of every file you have
<clever>
and publish the whole mess
<clever>
and possibly renew them at regular intervals?
<daviddias>
we don't use the DHT to store data
<daviddias>
we use the DHT to store pointers
<clever>
yeah
<clever>
but if i have 5000 files in my local node, it has to publish 5000 pointers to myself
<daviddias>
(like DHT generation 2.0 when Kademlia was generalized)
<clever>
each on a different node
<clever>
because the pointer is held on the node 'nearest' to the objects hash
<daviddias>
clever: if you have 5000 MerkleDAG objs, yep, you publish 5000 pointers
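A rough way to see how many provider records your own node is on the hook for, assuming it announces every local block (sketch):
```
# count the blocks in the local repo; each one gets a pointer published to the DHT
ipfs refs local | wc -l
```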
<davidar>
.w trie
<multivac>
[WIKIPEDIA] Trie | "In computer science, a trie, also called digital tree and sometimes radix tree or prefix tree (as they can be searched by prefixes), is an ordered tree data structure that is used to store a dynamic set or associative array where the keys are usually strings. Unlike a binary search tree, no node in the..." | https://en.wikipedia.org/wiki/Trie
<clever>
thats what i thought
<daviddias>
dignifiedquire: you found it :)
<clever>
daviddias: how long until those pointers expire?
<daviddias>
clever: if I'm not mistaken, the default is half a day
<daviddias>
but it can be really up to the user
<clever>
ah
<clever>
so it will have to re-publish them every 12 hours
<daviddias>
as said, the default :)
<clever>
sounds like a good config option for long running nodes
<daviddias>
if you have a very stable node
<daviddias>
exactly, that is what I was going for
<clever>
yeah
<daviddias>
or if you use IPFS for an internal data store (like DynamoDB did with Chord) where you control the churn rate
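For reference, later go-ipfs releases expose the republish period as a config key (something like Reprovider.Interval); it may not have existed in the version discussed here, so treat this as an assumption:
```
# hypothetical for this era; the key exists in later go-ipfs versions
ipfs config Reprovider.Interval 24h
```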
<clever>
is the pointer to your temporary dht key?, a long term public key?, or your ip?
<daviddias>
you can even have expiration based on health checks instead of time
<daviddias>
an IPFS node id is generated from its public key
<clever>
ah, so even if i restart my ipfs node,those pointers will remain valid?
<clever>
and IP changes as well
<daviddias>
yap
<daviddias>
which is cool for mobile clients
<clever>
and thats likely also key=ip+port in the DHT?
<daviddias>
we don't use IP for keys
<daviddias>
especially because we use multiaddr and not IP:Port pairs
<clever>
hmmm, does the dht use random keys on each startup for the nodeid, or the long term one from the pubkey?
<daviddias>
NodeID = B58.encode(pubkey)
<daviddias>
or I mean
<clever>
ah, so your nodeid on the dht is constant
bedeho has joined #ipfs
<daviddias>
NodeID = multihash(pubkey, sha-256)
<daviddias>
and we represent them as B58 encodings for readability
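You can see your own NodeID (the multihash of your public key, rendered in base58) with `ipfs id`; output roughly like (abbreviated, values are placeholders):
```
ipfs id
# {
#   "ID": "QmYourPeerId...",          <- multihash(pubkey), base58-encoded
#   "PublicKey": "CAAS...",
#   "Addresses": ["/ip4/1.2.3.4/tcp/4001/ipfs/QmYourPeerId..."]
# }
```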
<clever>
thats just normal sha256?
<daviddias>
dignifiedquire: sorry, I feel I still owe you an answer
<clever>
and they act as a proxy for anybody wishing to contact you directly
<clever>
though i dont see ipfs needing direct contact by a long-term id
<clever>
you only ever need to know what short-term id's have a copy of file X
<daviddias>
we support multi transports
<daviddias>
these include from tcp, udp, to webrtc, websockets and even onion routing
<daviddias>
so we can offer those properties by layering tor underneath
<daviddias>
if the users have a need for it
<clever>
tox does 90% of the work over udp, but the 1st hop can be tcp for clients under special situations
<daviddias>
need to run now
<clever>
that 1st hop acts as a tcp->udp bridge
<daviddias>
(sorry)
<clever>
kk
<daviddias>
will be back after lunch time
<clever>
i'll probly be heading to bed within the next 4 hours
<rendar>
guys, what is ipld all about?
<dignifiedquire>
daviddias: copy and paste ftw: it worked http://grab.by/KBaC
<davidar>
.tell jbenet people keep asking me what ipld is about, but I still don't completely understand the overall goals either
<multivac>
davidar: I'll pass that on when jbenet is around.
Soft has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nicolagreco has joined #ipfs
atomotic has joined #ipfs
nicolagreco has quit [Ping timeout: 250 seconds]
voxelot has joined #ipfs
voxelot has joined #ipfs
G-Ray has quit [Quit: Konversation terminated!]
G-Ray has joined #ipfs
<dignifiedquire>
daviddias: I’m getting 403 forbidden responses in the webui for the xhr requests when starting the daemon from the electron-app, but all works fine when I start `ipfs daemon` on the terminal, any ideas what’s going wrong there?
<dignifiedquire>
it’s running though, I’m getting stats and everything through the api
G-Ray has quit [Ping timeout: 252 seconds]
Soft has joined #ipfs
JasonWoof has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
atomotic has quit [Ping timeout: 246 seconds]
ygrek has quit [Remote host closed the connection]
ygrek has joined #ipfs
fazo has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<zignig>
o/
<zignig>
multivac: hello
kerozene has quit [Max SendQ exceeded]
voxelot has quit [Ping timeout: 264 seconds]
* daviddias
back
<daviddias>
dignifiedquire: might be the missing --unrestricted-api
<daviddias>
although the webui was working before
<daviddias>
which version of IPFS are you running
<dignifiedquire>
I updated dependencies
<dignifiedquire>
so might be the reason
<daviddias>
whyrusleeping: any ideas on that?
wopi has quit [Read error: Connection reset by peer]
<daviddias>
it was working before without any special option, not sure what changed to require a different option now
<daviddias>
let's wait for feedback from whyrusleeping and/or krl
<richardlitt>
@jbenet @whyrusleeping @daviddias @davidar @lgierth @krl @mildred @dignifiedquire @amstoker: Please add your To Dos to the sprint! https://etherpad.mozilla.org/nTqGOVynPT
notduncansmith has quit [Read error: Connection reset by peer]
kerozene has quit [Max SendQ exceeded]
<lgierth>
davidar: can we have multivac not resolve urls? it's a bit useless most of the time
kerozene has joined #ipfs
<lgierth>
richardlitt: will do, thank you! i'll just quickly need to cycle somewhere
<davidar>
lgierth: ok :(
multivac has quit [Quit: KeyboardInterrupt]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<richardlitt>
lgierth: No rush, will merge tonight. Just; today! :D
<richardlitt>
fazo: Thanks for the help! I reformatted the document a bit.
<fazo>
I'm glad you liked my answers :)
<fazo>
I'm actually writing more right now.
<richardlitt>
Cool! Be sure to rebase
<fazo>
I also think that when the textbook gets expanded enough, we should host it on ipfs and build a viewer for it
<richardlitt>
Drastic style changes.
<richardlitt>
Yeah.
<fazo>
richardlitt: yeah of course
<richardlitt>
Also, I'm thinking; issues is not the way to do this.
<richardlitt>
I think we should just do PRs, for everything, so no information is lost.
<fazo>
uhm sounds good to me
<fazo>
I had a look at the IPFS Example viewer, it's actually very simple
<richardlitt>
I'm not sure. Just an idea. Maybe I'll open an issue about that.
<fazo>
we can build a fork for the textbook
<richardlitt>
and yes, goal is to eventually have it on fazo
<richardlitt>
*ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nicolagreco has joined #ipfs
<richardlitt>
Anyone have a good suggestion for a better DNS provider than DigitalOcean? With an extensive API and better key security options?
<ion>
Running PowerDNS on your own server :-P
<richardlitt>
jbenet: should I open the DNS issue in ipfs/ipfs? ipfs/infrastructure has no watchers, really...
ygrek_ has joined #ipfs
<richardlitt>
Huh. That might work.
ygrek has quit [Ping timeout: 250 seconds]
<ion>
(Among other things, PowerDNS is nicer to configure than Bind, has a better security history to my knowledge and has a nice interface to delegate certain queries to code of your own.)
<fazo>
richardlitt: done rebasing. that was long
dignifiedquire has quit [Quit: dignifiedquire]
* ion
gave a thought to commit histories within IPFS and realized that the HTTP gateway can provide an awesome repository browser.
dignifiedquire has joined #ipfs
<fazo>
I was wondering if git can be integrated to use ipfs to ref commits and other objects and ipns for branches without forking it
<JasonWoof>
is there a way I can publish a site (ie some html pages that link to each other) on ipfs/ipns so that people can have a URL that links to a particular version of my site? Requirements: 1. pages link to each other, not just linking to older pages. 2. people can still access that version, even if I don't want them to (ie I update stuff)
<fazo>
JasonWoof: yes
<JasonWoof>
good. how?
notduncansmith has joined #ipfs
<JasonWoof>
can ipfs have path names at the end?
notduncansmith has quit [Read error: Connection reset by peer]
<fazo>
JasonWoof: just add the folder with the index.html of your site inside
<fazo>
yes it can
<fazo>
I'll give you an example
<ion>
JasonWoof: The ipfs link will point to a particular version, an ipns link can point to the newest version, and given pointers to previous versions you can traverse to any version you want.
<fazo>
JasonWoof: this is a web app running on IPFS: /ipfs/QmfX93JbzAzVZ6DKED1LyxzXeJ6Q1svZcTCnWJS82eryLd
<fazo>
if you remove the /about/ at the end, you can see the root folder which has all the xkcd comics
<JasonWoof>
ahh, got it working
<fazo>
JasonWoof: it's great to host static content behind NAT :)
<JasonWoof>
oh, that's a relief. before I didn't think you could have path names after ipfs/..../
<JasonWoof>
and I realized that that made it so you couldn't have two files linking to each other, because without path names, you can only link by hash, and when you change the link, you change the hash, and chicken-and-egg they can't link to each other
<JasonWoof>
so when you have path names, the ipfs hash is something akin to a git commit hash
<JasonWoof>
?
<fazo>
JasonWoof: by the way to solve your versions problem, when you change the website add a "Previous Version" link that points to the old hash, then use ipfs add to add the new version of the website and publish its hash to your ipns. This way, every time you make a modification, using ipns people can see your latest version and click a link to go back 1 revision
<fazo>
for example now you have an hash for your website right?
<fazo>
when you change something, also add a link to that hash. Then add the modified website folder to ipfs again and you get a new hash. This new hash points to the new website which also happens to have a link to the old one
<fazo>
then you use "ipns publish <the_new_hash>" so that your ipns points to the latest version of the website
<fazo>
you can of course think of an ipfs hash as a git commit hash and an ipns hash as a git branch
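Spelled out as commands, fazo's workflow is roughly the following (a sketch; in the go-ipfs CLI the publish step lives under `ipfs name`, and the paths/hashes are placeholders):
```
# add the new version of the site; the last hash printed is the root directory
ipfs add -r ./mysite

# point your IPNS name at the new root
ipfs name publish QmNewRootHash

# /ipns/<your-peer-id>/ now resolves to the new version, while the old
# /ipfs/QmOldRootHash link keeps resolving to the previous one
```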
<JasonWoof>
I'm not running ipfs yet. I'm just learning about it at this point, and seeing if it'd be suitable to build on top of. I have two p2p projects in mind: 1. automated backups onto your friends' computers 2. collaboration/consensus building around page edits/etc using a trust network
<JasonWoof>
oh, that's so good!
<fazo>
JasonWoof: both already possible with the current implementation, but you'll be able to do more and better when stuff gets updated
<fazo>
for example there is a native js implementation in the works. When that's done, you could have a totally static website act as reddit or google docs
<JasonWoof>
I thought all those pieces happened per-file, I didn't realize that the "commit"-like thing was in ipfs too
<ion>
Just like a git repo has many kinds of objects, so does an ipfs repo. In the case of a directory object in IPFS, it’s exactly equivalent to a directory object in git. Both have blobs, too, and IPFS is going to have commit objects as well.
<ion>
uh, s/an ipfs repo/IPFS/
<JasonWoof>
is there a plan to add filetype data? last I checked the http proxy was guessing what mime type to send based on file contents
<fazo>
JasonWoof: you can mount the entire IPFS in a folder if you want (like /ipfs) and then "cd /ipfs/<hash>" and see the files
<fazo>
JasonWoof: I don't think there are plans for that I'm afraid
<JasonWoof>
fazo: cool. I assumed it would be all files, but now there'd be directories in there?
<fazo>
JasonWoof: yes, a directory is just a list of file hashes and their name
<fazo>
JasonWoof: so when you open a directory in the browser it doesn't download the contents, just the names and their hashes
<JasonWoof>
I'm worried about this conflict of interest: website owners (many of them) want control, so they only give out the kind of web address that will auto-update to the latest thing they publish (so ipNs)
<fazo>
and if you have a 20GB directory and change 1 file, you don't have to readd everything else
<JasonWoof>
visitors (sometimes) want to see a particular version, or have some assurance that a particular version (of a page or whole site) doesn't disappear just because the author decided it's done
<fazo>
JasonWoof: well, nothing stops you from storing the ipfs hash corresponding to that ipns name all the time
<fazo>
JasonWoof: nothing can disappear on IPFS as long as someone has a copy
<JasonWoof>
so I'm still curious how it sticks together
<fazo>
you could have a program track changes to the ipns name and store the hashes of every revision
<JasonWoof>
example: joe publishes a site, I have a link to /ipns/.../foo/bar/baz
<ion>
If you care about the historical content of an IPNS object, just arrange your node to keep it pinned.
<fazo>
yeah, ion's idea is nice
<fazo>
pinning an hash means that when your local ipfs node does garbage collection, it doesn't delete that hash
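Keeping a historical version around is just a recursive pin on its root hash, something like (QmOldSiteVersion is a placeholder):
```
# pin that version's root so GC never collects it or its children
ipfs pin add -r QmOldSiteVersion

# later, when you stop caring about it
ipfs pin rm QmOldSiteVersion
ipfs repo gc
```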
<JasonWoof>
I'm ok with just being able to get a link to a particular version (ipfs) from a link to the latest (ipns)
<JasonWoof>
so you can pin ipfs and ipns?
<ion>
IPNS only provides a pointer to the latest version.
<ion>
IPNS provides a map from public keys to IPFS hashes.
<fazo>
JasonWoof: don't know if pinning ipns is implemented, but it sure is possible
<JasonWoof>
so, there's ipns/[jo's hash]/foo/[1,2,3]/about. is there one ipfs record for jo's hash? or is there one for foo/1/about, etc, or one for each directory?
<fazo>
JasonWoof: every file and directory has its own hash
<JasonWoof>
I'm curious: at what level of the directory hierarchy can I switch to ipfs?
<ion>
/ipns/[jo's pubkey] points to /ipfs/[jo's hash] which is a directory object recursively holding foo, foo/1, foo/1/about etc.
<fazo>
A directory is just a map of names:hashes, like this: [ "foo":"foos_hash", "bar": "bars_hash" ]
<fazo>
JasonWoof: any level
<achin>
ion: are you still able to resolve my ipns name?
<ion>
There is a hash for each of foo, foo/1, foo/1/about, the parent directory object will point to it.
<ion>
achin: yes
<achin>
ok cool, it survived
<fazo>
JasonWoof: of course you can go lower but not higher unless you know the hash of a parent directory
Leer10 has quit [Read error: Connection reset by peer]
<fazo>
also you can have hundreds of directories containing the same file, and to IPFS that's the same identical file since the hash is the same
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<fazo>
even if in different directories it has different names
<fazo>
it's like symlinking happening automatically
<achin>
in this context, "deduplication" is generally the term that is used
<JasonWoof>
That's what I'm wondering... can I go higher? If I have ipns url of a sub, sub directory, can I get to an ipfs link for that version of the whole site? I don't think this is required, but it seems like it might be useful
zoro has joined #ipfs
<achin>
nope
<JasonWoof>
I understand that I can go from /ipns/.../ to /ipfs/.../
<fazo>
JasonWoof: no, you can't do that, because there may be a lot of folders containing the same file, so how is IPFS supposed to know which one you want to go up to?
<fazo>
the way IPFS is designed, that can't be implemented :(
<ion>
A search engine indexing IPFS would be able to help.
<ion>
given a thing in its index
pfraze has joined #ipfs
<JasonWoof>
I mean go up a level in ipNs
<fazo>
ion: I plan to write design docs describing one soon
<ion>
There is no such a thing as a level in IPNS, it’s a flat map from public keys to IPFS objects.
<JasonWoof>
maybe I'm being silly. if I want to go up in ipns, I just resolve an ipns url with fewer path segments on the end
<fazo>
JasonWoof: yeah of course it would work, but can't go higher than the hash
<fazo>
ion: interested in designing an ipfs search engine with me?
<JasonWoof>
I understand tracking stuff at the directory-listing level. Git tracks all sub-directories together (except for submodules, which are often problematic)... but git doesn't scale up very well past a certain point, and it's cool if you can make sites that scale up further
<achin>
i missed the conversation above about filesystem metadata (i'll skim it later), but it would be neat if IPFS eventually grew better support for building non-filesystem data structures
<fazo>
achin: could be done at application level, like resolving a directory structure into a JSON
<ion>
fazo: I’m happy to participate in discussion on this channel if i have something to say but i’m afraid i don’t have the energy to do any actual work for it, sorry. But it is an interesting project for sure.
<fazo>
ion: don't worry :) I understand perfectly
<achin>
fazo: yes, but that seems unnecessary
<JasonWoof>
I think it might work out well to store file types in the directory listing
<fazo>
So my big plan is to build both a search engine and a community board. The search engine looks easier
<fazo>
JasonWoof: yep, the MerkleDAG (file structure used by IPFS) is designed to be generic
<ion>
The MIME type doesn’t belong in the directory list but in the file object (along with the links to the chunks).
<JasonWoof>
so if people actually want the same file contents with different types (or, just if their tools insert it with different types, or their tool doesn't know the type) then we can still share storage space for the file contents, and have our sites/etc serve the file with the mime-type that we want
<ion>
The directory object could of course provide that information redundantly (along with file size).
<ion>
But the file itself should be authoritative.
<JasonWoof>
ion: oh, i didn't realize files were chunked
<JasonWoof>
I was hoping we could have the same file contents with different mime types, and (in that case) only have one "copy"/hash of the file contents
<JasonWoof>
if that can be done efficiently without putting it in the directory, then... cool
<ion>
JasonWoof: Run ipfs object get <hash> to see a JSON representation of an object.
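For a directory, the JSON looks roughly like this (abbreviated sketch; Data carries the unixfs type marker, Links the named children, and the hashes are placeholders):
```
ipfs object get QmSomeDirHash
# {
#   "Links": [
#     { "Name": "index.html", "Hash": "Qm...", "Size": 1234 },
#     { "Name": "about",      "Hash": "Qm...", "Size": 5678 }
#   ],
#   "Data": "\u0008\u0001"
# }
```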
<JasonWoof>
oh, I'm so relieved that the file trees are stored in ipfs
yosafbridge has quit [Ping timeout: 246 seconds]
<achin>
ok, neat so with `ipfs object` you can create a non-unixfs thing. for example: QmS7LsfgSaubKeVMEK7AzboXzG3b2XtTcEnNk8imcNXbpi
<achin>
you can run 'ipfs ls' on that thing, but trying to 'ipfs get' it will fail (which is expected)
<JasonWoof>
oh, this is so cool. I think I can use this
notduncansmith has quit [Read error: Connection reset by peer]
<ion>
Whoops, i copied it by mistake. My shell just added it to signify that the output did not have a terminating newline.
kevin`` has quit []
kevin`` has joined #ipfs
<achin>
it's kinda goofy that you have to construct a json object to create a new object with arbitrary data
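The JSON round-trip achin is describing looks something like this (a sketch; the field names are those accepted by `ipfs object put`, and the linked hash is a placeholder):
```
cat > node.json <<'EOF'
{
  "Data": "some arbitrary data",
  "Links": [
    { "Name": "child", "Hash": "QmSomeExistingHash", "Size": 10 }
  ]
}
EOF

# stores the object and prints the hash of the new node
ipfs object put node.json
```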
<JasonWoof>
one more question... then I gotta go out for food:
pfraze has quit [Remote host closed the connection]
<JasonWoof>
when my computer resolves ipns/(hash)/foo/bar (a file, not a directory)
dlight has quit [Ping timeout: 250 seconds]
<JasonWoof>
can/do I get the ipFs of /foo, or does/can it resolve /bar without getting the /foo directory listing?
<achin>
it first gets {hash} which tells it about foo which tells it about bar
<samiswellcool>
I think it resolves hash->foo->bar
<achin>
so you have to get the whole chain
<JasonWoof>
sweet!
<ion>
When resolving /ipns/<pubkey>/foo/bar, <pubkey> is resolved to <hash> and then /ipfs/<hash>/foo/bar is traversed to get the hash of bar and get the object.
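As commands, that resolution is roughly (a sketch; the peer id, root hash and path are placeholders):
```
# step 1: IPNS lookup, public-key hash -> current root
ipfs name resolve QmJosPeerId        # prints something like /ipfs/QmRootHash

# step 2: walk the path one directory object at a time
ipfs ls /ipfs/QmRootHash/foo         # lists foo's links, including "bar"
ipfs cat /ipfs/QmRootHash/foo/bar    # fetches the file itself
```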
pfraze has joined #ipfs
<JasonWoof>
so awesome
<JasonWoof>
I love it when I get inner-workings type details (after thinking a bunch about what I want to do) and find out that the inner-workings are well designed / suitable for what I want to do
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
Not__ has joined #ipfs
Not_ has quit [Ping timeout: 265 seconds]
dlight has joined #ipfs
bsm1175321 has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
sseagull has joined #ipfs
atomotic has joined #ipfs
nicolagreco has quit [Quit: nicolagreco]
brab has quit [Ping timeout: 244 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<rendar>
ion: so in the case of /ipfs/<hash>/foo/bar that <hash> is an hash of all the hashes of objects (or blobs) contained in /foo/bar?
<fazo>
rendar: no, it's an hash of a map of names of objects mapped to their hash (which you view as a folder in the gui or in the file system)
<fazo>
like this: [ "folderName": "folderHash", "someOtherFile": "itsHash" ]
G-Ray has joined #ipfs
<fazo>
it's not exactly represented that way but it gets pretty close
<rendar>
fazo: hmm
<lgierth>
davidar: i just have trouble keeping up with the chan and the bot's url resolution doesn't particularly help ;)
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
<rendar>
fazo: because i'm used to the git way of doing things, in git you have e.g. /a5/b398ed88c... which is a blob which in turn represents a directory object which has all the file hashes inside, so that hash a5b39... is a hash of *hashes* (and not names as you mentioned) of the git blobs, so in a design like this, you have the directory object represented as /.git/objects/<hash>, so in ipfs if
<rendar>
i have /ipfs/<hash>/files that means i can browse the hashed directory? because it seems in git you can't do that
<rendar>
fazo: what i mean is, in git you cannot refer to objects such as /<dir_hash>/files ... you only have /<dir_hash> pointing to a blob containing all the other hashes you need
<fazo>
rendar: correct, you can browse /ipfs/<hash> just as if it was a normal directory
<fazo>
well, in IPFS you can because /ipfs/<dir_hash> doesn't only have the hashes of the contents but also the names
<rendar>
fazo: ok, but is that actually a normal directory? i mean, git saves directories as hashed files-blobs in the filesystem, does ipfs exploits the filesystem and saves a directory by creating a real one in the underlying filesystem?
proullon has left #ipfs [#ipfs]
<fazo>
rendar: no, IPFS doesn't use the filesystem to store data
<fazo>
rendar: I mean not directly
<rendar>
ok
<fazo>
rendar: IPFS directory objects are MerkleDAG objects if I'm not mistaken: you can mount them on a regular file system though
<fazo>
you can mount IPFS in a folder and access all data using the filesystem
<rendar>
so i cannot refer to /ipfs/<hash>/files with a normal filesystem path, but through ipfs commands, right?
<fazo>
yes
<rendar>
i see
<fazo>
if you mount IPFS using "ipfs mount" you can also refer to /ipfs/<hash>/files with a normal filesystem path
<pjz>
rendar: well, you can if you mount the dir use fuse
<fazo>
for example: "ls /ipfs/<some_folder_hash>" would work
<pjz>
s/use/using/
<rendar>
pjz: exactly
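A quick sketch of the fuse route (assuming a running daemon and the default /ipfs and /ipns mountpoints; the hash is a placeholder):
```
# default mountpoints are /ipfs and /ipns, which must exist and be writable
ipfs mount

ls /ipfs/QmSomeFolderHash            # directory listing through the kernel
cat /ipfs/QmSomeFolderHash/readme    # blocks are fetched on demand
```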
<ion>
rendar: IPFS blob, directory and (future) commit objects are semantically the same as the equivalent git objects.
<rendar>
ion: i see
<pjz>
and ipns published names are semantically like refs/heads, but are limited right now
Spinnaker has quit [Ping timeout: 256 seconds]
<rendar>
so when i refer to /ipfs/<hash>/files ... what are the actual operations ipfs executes to let me get the file?
<fazo>
it gets <hash> then gets the hash of "files" from that object
<fazo>
then gets "files" and there you go
<fazo>
s/gets "files"/gets "files"'s hash/
nicolagreco has joined #ipfs
<ion>
Look up the link called “files” in the directory object that is <hash>, then get the object by the link hash.
<ion>
Just like a git directory object.
<rendar>
hmm
<rendar>
ok but how it uses the merkledag tree in such a query?
Spinnaker has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<ion>
Uh, just like it was described above.
<ion>
Get object, follow link, rinse, repeat.
<zoro>
how is ipfs called the permanent internet if a file is not distributed until someone else downloads and seeds it? it would have been better if ipfs forced the nearest nodes to store a file whether they want it or not, depending on how much they add themselves and in proportion to their shared data, so they also keep other peers' data; like this, all files would be distributed
apophis has joined #ipfs
<ion>
The topology is distributed; redundancy is an orthogonal concept.
wopi has quit [Read error: Connection reset by peer]
thomasreggi has joined #ipfs
wopi has joined #ipfs
thomasreggi has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
rasty has joined #ipfs
<ion>
If people want the content, redundancy will emerge. If no one wants it, there's no particular need to default to storing it in a redundant manner.
<rendar>
ion: well, ok, i see that, but when i ask git to give me the file with hash ab5deec8... it simply opens /ab/5dee.. and ipfs can do that as well, so how do both git and ipfs use the merkledag tree in such a case? what is the purpose?
dignifiedquire has quit [Quit: dignifiedquire]
pfraze has quit [Remote host closed the connection]
dignifiedquire has joined #ipfs
<ion>
It's a mapping from the hash of an object to said object. That has a number of benefits. When you know a hash and request the object, you know for certain you got the right one by computing its hash (assuming a good enough hash function). Everyone holding the same content will assign it the same identifier.
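A tiny demonstration of that property: the identifier is derived purely from the content, so the same bytes always get the same hash no matter who adds them (sketch; -q prints only the hash):
```
echo "hello ipfs" > a.txt
cp a.txt b.txt

ipfs add -q a.txt    # prints some hash QmX...
ipfs add -q b.txt    # prints the exact same hash: same content, same identifier
```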
<rendar>
ion: mmm
<rendar>
ion: ok, but to map from the hash of an object to the said object, isn't git simply opening the file whose name is that hash string?
<ion>
Yes
<ion>
Or asking a server to open that file and to send it to you
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<rendar>
ion: exactly
lithp has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<rendar>
ion: so, when you have such a mapping, why is a merkledag tree needed at all? what other advantages does it bring?
<whyrusleeping>
rendar: how will you know that what i've sent you is what you've requested?
<rendar>
whyrusleeping: you mean between 2 ipfs nodes?
<whyrusleeping>
rendar: yeah
<rendar>
whyrusleeping: i guess by hashing the data i get..
<rendar>
then i check the hash the server sent is the same as the hash of data i got from the network
<ion>
rendar: That mapping *is* the Merkle DAG.
<ion>
The DAG means objects can link to others but there can be no loops.
<rendar>
ion: what? having hashes and opening files whose filename is the hash string, how this can be the merkle dag?
<ion>
Any system where you can store objects and get them back by their hash.
pfraze has joined #ipfs
* drathir
wonder isnt a faster to search if single files with names have represented in way of hash_of_file/hash_of_name or somethin similar? or no difference?
<rendar>
ion: from what i know the merkle trees lets you to have a root hash, then when you *change* a file (a leaf) you rehash all the objects in the tree branch, until the root hash, right?
<ion>
rendar: There's no difference to updating any other immutable data structure. You recreate the objects on the path from the change to the root but all the other pointers can point to the previous counterparts.
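You can watch that path-copying from the shell: change one file and only the objects on the path up to the root get new hashes (sketch; the directory layout is a placeholder):
```
ipfs add -r ./site              # note the hash printed for each file and for the root
echo "tweak" >> ./site/foo/bar.txt
ipfs add -r ./site              # bar.txt, foo/ and the root get new hashes;
                                # every untouched file keeps its old hash and isn't re-stored
```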
<rendar>
ion: ok, but lets consider this: if i can run int fd = open("abe3cdee...", ...); <- this is actually a mapping, because i'm mapping an hash string to its real content.. then i have 'fd' which i can read from and write to a socket to send the file
<rendar>
so, why other data structures?
<ion>
rendar: Which other data structures are you referring to?
G-Ray has quit [Ping timeout: 252 seconds]
<rendar>
ion: merkle dag of course
thomasreggi has joined #ipfs
<rendar>
ion: if i have an hash, and i need the file content of that hash, so i map that by simply doing fd = open("abe3cdee...", ...); why i'd need other data structures?
<ion>
A directory full of objects named after their hashes *is* a Merkle DAG, no other data structures are really involved here.
nicolagreco has quit [Quit: nicolagreco]
<rendar>
ion: ok
<rendar>
ion: but from the code, i saw ipfs implements a merkledag
<ion>
Yes
<rendar>
so, why? if it stores blobs in the underlying filesystem with git, and you said that it is actually a merkledag, why implement one? i can't follow this
Tv` has joined #ipfs
<fazo>
I don't think it uses git, it just takes it as inspiration
<fazo>
it stores blocks as a merkledag
<fazo>
but I'm probably wrong
<rendar>
fazo: sorry
<rendar>
i meant *like* git
<whyrusleeping>
we store blocks on disk in a flat filesystem approach in a similar way to how git does it
<rendar>
not it is using git..
<ion>
This one knows how to transparently get the object from other peers on the Internet if you don't have the one you requested on your computer.
thomasreggi has quit [Remote host closed the connection]
nicolagreco has joined #ipfs
<fazo>
it is like git because it reuses data and stores patches, but I'm not 100% sure about this
<rendar>
whyrusleeping: with the flat filesystem approach you mean each file/block is stored as a new file whose name is its hash?
<ion>
Git doesn't actually store patches.
<whyrusleeping>
fazo: its like git because it stores data as content addressed
<whyrusleeping>
rendar: yes, each file/block is stored as a file whose name is its hash
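On a default install you can peek at that flat store directly (a sketch; the repo path assumes the default ~/.ipfs, and the exact on-disk sharding has varied between go-ipfs versions):
```
ls ~/.ipfs/blocks | head
# shard directories named after a prefix of the block hash, each holding
# files whose names encode the hash of the block they contain
```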
thomasreggi has joined #ipfs
<fazo>
thanks for clarifying :)
thomasreggi has quit [Remote host closed the connection]
* whyrusleeping
slowly wakes up
<whyrusleeping>
i wish coffee was more of an instant effect
<fazo>
whyrusleeping: try alternative sleep cycles
<rendar>
whyrusleeping: ok, according to ion, this way of storing files/blocks is a merkledag, but i can't get this, then why would ipfs implement a merkledag structure?
<fazo>
unless you have a too fixed day routine you can make it work
<fazo>
rendar: how do you store directories without a merkledag?
nicolagreco has quit [Client Quit]
<whyrusleeping>
fazo: my hours arent super fixed, i could probably try polyphasic, but in the past its made me feel really weird
<whyrusleeping>
and not super productive
<whyrusleeping>
rendar: objects have a merkledag format
<whyrusleeping>
which means that the data in any given object, is a merkledag object with data and links
<rendar>
fazo: by creating a file whose name is the directory's hash, and storing inside that file all the hashes that directory contains
<whyrusleeping>
each object is stored independently on disk
<rendar>
hmmm
<fazo>
rendar: that way you can't store file names inside a directory object
<rendar>
fazo: you can
<whyrusleeping>
rendar: think of how real filesystems like ext4 work
<rendar>
fazo: you simply store names+hashes
<whyrusleeping>
they store files in a b-tree
<rendar>
whyrusleeping yeah
<fazo>
rendar: yes, but if you do that you just created a data structure
<whyrusleeping>
but really, theyre just writing chunks to the disk in a linear manner
<rendar>
fazo: so?
<rendar>
whyrusleeping: that's right
<whyrusleeping>
all the data is just stored in a large array that is your disk
<fazo>
so why not using the merkledag?
<rendar>
whyrusleeping: ok i think i'm getting that
<fazo>
I think the merkledag addresses other concerns, but I'm not sure
thomasreggi has joined #ipfs
<rendar>
whyrusleeping: you mean that no matter how ipfs stores files/blocks, the important thing is that every directory is the hash of its *content* until the root directory which is the ultimate hash of all contents, like a git commit, or git HEAD
<rendar>
whyrusleeping: and no matter how its implemented or stored, but *THAT* is a merkledag?
<whyrusleeping>
we used to store things in leveldb
wopi has quit [Read error: Connection reset by peer]
<whyrusleeping>
and we want to store things in s3 in the future
lithp has joined #ipfs
<rendar>
whyrusleeping: ok now i see
<whyrusleeping>
but that doesnt matter
<whyrusleeping>
its still a merkledag
<rendar>
fazo: thanks
<rendar>
whyrusleeping: that's because the main property of a merkle dag is having a tree of hashes, that hash the *content* of the child nodes
wopi has joined #ipfs
<rendar>
whyrusleeping: but in this way, ipfs is not a merkledag tree, but it's a merkledag N-tree since you can store N directories in one directory and not only 2..
<ion>
It's not a tree, it's a DAG
<rendar>
ops, sorry
<rendar>
you're right
<rendar>
we defined it as merkle *dag* since the start of the discussion
<rendar>
but when i thought to a merkle-tree i always thought to a binary tree
nessence has joined #ipfs
<zoro>
how is one of my bootstrap peers serving my file when i'm offline? (i mean, it didn't download the file)
<whyrusleeping>
zoro: the file is probably cached somewhere
nicolagreco has joined #ipfs
thomasreggi has quit [Remote host closed the connection]
ygrek_ has quit [Ping timeout: 244 seconds]
<zoro>
@whyrusleeping: how is ipfs called the permanent internet if a file is not distributed until someone else downloads and seeds it? it would have been better if ipfs forced the nearest nodes to store a file whether they want it or not, depending on how many files they add and in proportion to the other peers' data they also have to store; like this, all files would be distributed and be permanent
compleatang has quit [Quit: Leaving.]
notduncansmith has joined #ipfs
<lithp>
The current web is not permanent, no matter what
<ion>
zoro: Did you see my response?
notduncansmith has quit [Read error: Connection reset by peer]
<lithp>
ipfs can be made permanent, for content you wish to have kept around
<whyrusleeping>
zoro: permanent refers to the fact that links will never change, if you have an ipfs path to your content, it will always unambiguously refer to that content, and even if you cant access the data at the moment, it doesnt give you different data instead
compleatang has joined #ipfs
<zoro>
@fazo: this is helpful
<fazo>
zoro: IPFS uses links that don't tell your computer where the data is, but what it is, so that no matter who has it, as long as their packets reach you, you can get the file
<zoro>
@whyrusleeping : yes now i get it
<fazo>
and everyone that views or downloads a file stores it for a little while (or forever depending on their settings)
<whyrusleeping>
'a little while' is currently defined as 'until they remember to run a GC'
<fazo>
whyrusleeping: me and richardlitt gave a big bump to the ipfs textbook yesterday
<whyrusleeping>
ipfs textbook?
<whyrusleeping>
this is new to me
<fazo>
whyrusleeping: It's an effort to build a manual to explain IPFS to non technical people
<whyrusleeping>
oooooh, me gusta mucho
<fazo>
yeah we plan to leverage the example viewer to publish the textbook on ipfs
<whyrusleeping>
awesome :)
akhavr has quit [Ping timeout: 250 seconds]
<ion>
whyrusleeping: My two cents regarding your website efforts: it's great to have a video that explains and sells the thing on the front page, but it's bad to have multiple. *I* watched them because I was already interested but a random visitor might be confused about which one she should watch and just leave for easier stimulation in the form of cat gifs.
nicolagreco has quit [Quit: nicolagreco]
<fazo>
ion: I remember seeing an active issue about fixing just that
nessence has quit [Remote host closed the connection]
john__ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<ion>
Hehe. I was googling for the GitHub compare URL format to see the diff from 8f5c610 to 513bc6e and encountered this: “Edit: These branches have been removed since the publishing of this post and can no longer be shown.” https://github.com/blog/612-introducing-github-compare-view
notduncansmith has quit [Read error: Connection reset by peer]
apophis has quit [Quit: This computer has gone to sleep]
apophis has joined #ipfs
thomasreggi has joined #ipfs
akhavr has quit [Remote host closed the connection]
apophis has quit [Ping timeout: 265 seconds]
akhavr has joined #ipfs
Encrypt has joined #ipfs
thomasreggi has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
thomasreggi has joined #ipfs
apophis has joined #ipfs
akhavr has quit [Ping timeout: 272 seconds]
thomasreggi has quit [Remote host closed the connection]
thomasreggi has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Encrypt has quit [Ping timeout: 252 seconds]
<kyledrake>
whyrusleeping ion I've got some ideas for improving the site for the introduction that I'm looking forward to working on.
<ion>
whyrusleeping, jbenet: When designing IPFS commit objects, please consider adding a variant of a merge object whose meaning is history rewriting: it will still link to all parents but user interfaces are meant to render it as if only the first parent existed unless the user requests to see the “rewritten” history in the secondary parent. This would alleviate the need for force pushes Git may have in order
<ion>
to keep the history of a branch clean.
magneto1 has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
<whyrusleeping>
i wonder when we will see it in chrome beta
bsm1175321 has quit [Ping timeout: 255 seconds]
ygrek_ has quit [Ping timeout: 260 seconds]
vijayee_ has joined #ipfs
bsm1175321 has joined #ipfs
dlight has quit [Read error: Connection reset by peer]
lgierth has quit [Quit: WeeChat 1.0.1]
lgierth has joined #ipfs
lgierth has quit [Client Quit]
<ion>
Re: https://github.com/ipfs/go-ipfs/issues/1738, couldn’t the new version add a UDT listener automatically when seeing that the config was written using an earlier version of go-ipfs the last time?
notduncansmith has joined #ipfs
lgierth has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
lgierth has quit [Client Quit]
lgierth has joined #ipfs
<whyrusleeping>
ion: the issue is that its difficult to tell if the user has changed their config
<whyrusleeping>
and we also dont want to have to leave code in the codebase forever that says 'if prev.version == X, do Y'
<whyrusleeping>
and IMO, automatically changing the users configs isnt cool
<ion>
ok
Encrypt has joined #ipfs
<spikebike>
standard operating procedure for the debian family is to offer the upgraded config: you can accept the upgrade, keep your version, or display the difference. If the upgrade is selected, the config file is backed up before the change.
samiswellcool has quit [Quit: Connection closed for inactivity]
notduncansmith has joined #ipfs
<spikebike>
wow, brotli looks better than I expected
notduncansmith has quit [Read error: Connection reset by peer]
kerozene has quit [Max SendQ exceeded]
kerozene has joined #ipfs
<fps>
about the debian config file updates
<fps>
that process is actually quite convenient for the package maintainer
<fps>
it's basically just mentioning which files are config files in one of the control files
<fps>
details are documented in the debian package maintainer documentation
<fps>
which is horrible
<fps>
but reasonably complete
RX14 has quit [Quit: Fuck this shit, I'm out!]
magneto1 has joined #ipfs
Encrypt has quit [Ping timeout: 246 seconds]
RX14 has joined #ipfs
RX14 has quit [Remote host closed the connection]
RX14 has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
zignig has quit [Ping timeout: 255 seconds]
nessence has quit [Remote host closed the connection]
<ion>
The sucky part is that apt/dpkg doesn't keep track of the default config in the presently installed version of a package which would allow it to do a three-way merge with the default config in the new version.
zignig has joined #ipfs
nicolagreco has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
nicolagreco has quit [Client Quit]
wopi has joined #ipfs
nicolagreco has joined #ipfs
nicolagreco has quit [Client Quit]
<fps>
ion: i suppose :)
Encrypt has joined #ipfs
<ion>
If the system knew what changed in the package instead of just what changes there are between your modified config and the new package version, updates would involve much less manual merging.
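A rough sketch of ion's point (this is not apt/dpkg code, and the config lines are made up): if the package manager kept the old default config as a merge base, a line-wise three-way merge could apply the packager's changes while keeping the user's edits, and only genuine conflicts would need manual attention.

    package main

    import (
        "fmt"
        "strings"
    )

    // merge3 is a deliberately naive line-by-line three-way merge. It assumes
    // all three files have the same number of lines, which real configs won't,
    // but it shows how the old default ("base") removes the ambiguity.
    func merge3(base, user, newDefault string) []string {
        b := strings.Split(base, "\n")
        u := strings.Split(user, "\n")
        n := strings.Split(newDefault, "\n")
        out := make([]string, len(b))
        for i := range b {
            switch {
            case u[i] == b[i]:
                out[i] = n[i] // user never touched this line: take the new default
            case n[i] == b[i]:
                out[i] = u[i] // packager never touched it: keep the user's edit
            case u[i] == n[i]:
                out[i] = u[i] // both made the same change
            default:
                out[i] = "<<< CONFLICT >>> " + u[i] + " | " + n[i]
            }
        }
        return out
    }

    func main() {
        base := "Port 22\nLogLevel INFO\nUseDNS yes"   // old default shipped with the package
        user := "Port 2222\nLogLevel INFO\nUseDNS yes" // user changed the port
        newDef := "Port 22\nLogLevel INFO\nUseDNS no"  // packager changed UseDNS
        fmt.Println(strings.Join(merge3(base, user, newDef), "\n"))
        // prints: Port 2222 / LogLevel INFO / UseDNS no -- no prompt needed
    }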
nessence has joined #ipfs
nessence has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<fps>
ion: true :) is there a distro with such a smart package manager?
<fps>
and also, why not just maintain all configs from packages in a git repo? ;) i mean, managed by the package manager..
<fps>
and while we're at it
compleatang has quit [Quit: Leaving.]
<fps>
why not make all packages subrepositories in a git repo. the whole distribution being a git repo
<fps>
lalala
* fps
dances around like a madman
john__ has quit [Ping timeout: 246 seconds]
<vijayee_>
what are the future plans around ipns
<vijayee_>
will I be able to have multiple routes to hashes from a peerid?
jamescarlyle has joined #ipfs
apophis has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<fazo>
it also does a lot of other amazing things, for example it installs the OS using a single configuration file that details how your bootloader, kernel, desktop environment, packages, and configs are set up
ben_____ has joined #ipfs
<fazo>
so if you back up that single configuration file and use it to reinstall you get the same exact setup back
<fazo>
it also has previous configurations stored in the bootloader so if you change something and it doesn't even boot anymore, you can boot an old version
<fps>
sounds pretty great. opened for later reading and testing in a vm :)
<fazo>
it's a revolutionary distro, I suggest looking at their website
<fps>
i have the manual open right now
<fazo>
I can replicate my system just by saving configuration.nix and my /home/fazo
<fazo>
I did it a few weeks ago to move my install to another disk
* fps
drools
<fazo>
I did this:
<fazo>
mkfs.ext4 /dev/sdb1
<fazo>
mount /dev/sdb1 /mnt
<ben_____>
has Bitswap been implemented in ipfs?
<fazo>
nixos-install
<fazo>
cp -r ~ /mnt/home/fazo
<fazo>
rebooted and it worked
<fazo>
I had an exact copy in the other disk :)
<fazo>
nixos-install uses /mnt if you don't tell it where to install
<fps>
obligtory question: many packages? :)
<fazo>
it's also a source based distro so that you can customize packages (similar to gentoo) BUT it uses a binary cache so that it doesn't take 200 hours to install gnome
<drathir>
lol wth is going on there?
<fps>
*obligatory
<fazo>
fps: yes, not as many as arch though
<fazo>
they're very easy to write. A Go package, for example the ipfs one, is 4 lines long.
<fps>
fazo: the easier it is to add packages, the more it will grow i suppose
<fazo>
yep, it's the easiest build system I ever used, mostly because of nix-shell
<fazo>
nix-shell is a script that opens a shell with the environment you tell it to load, without changing your system
<fps>
ok, you got me sold. i wanted to finish writing a parser tonight. but now i'll try setting it up in a vbox
<fazo>
lol I'm.. glad I guess
<fazo>
it needs more people doing packages
Spinnaker has quit [Ping timeout: 252 seconds]
<fps>
oh nice. quote "Using virtual appliances in Open Virtualization Format (OVF) that can be imported into VirtualBox. These are available from the NixOS homepage."
<fazo>
just make sure to switch to the unstable channel after you install, because stable is very old (December) and NixOS exploded in popularity in the last few months
<fazo>
so unstable has seen giant additions in the last few months
<fazo>
also the newest stable is due in a few weeks tops
<drathir>
fazo: !ping
simonv3 has quit [Quit: Connection closed for inactivity]
<fazo>
drathir: ?
<drathir>
fazo: checking if You are not a spam bot...
<fazo>
oh I saw the message now.
<fazo>
nah, ask whyrusleeping or jbenet. I've been around for a little while
<fazo>
it's sad that you saw me as a spam bot lol
<drathir>
fazo: good to know... ^^
<fps>
you failed the turing test
<fazo>
maybe I am a spam bot.
<fazo>
:(
Spinnaker has joined #ipfs
<noffle>
darned nixos promo bots
<drathir>
fazo: taking into account the conversation with fps, yeah, it pretty much looks like it ;p
<fazo>
I know what IPFS is and how to use it... but do I know what it *feels* like to use it?
<noffle>
(nixos looks pretty spiffy though. looks like a good replacement once my arch install dies and I whine about how much work re-config is)
<drathir>
fazo: btw you were next in the queue for a !ping check, but as I see it's not needed ;p
<drathir>
fazo: > fps
<fazo>
yeah, when I saw fps complaining about distros not having features that NixOS has, I had to step in
<fazo>
Sorry, I don't know what !ping is
<fps>
!ping fps
<fps>
hmm
<fps>
fps: !ping
<drathir>
i've seen "similar" bot-fight conversations before, that's why I like to check ;p
<drathir>
fazo fps sorry btw...
<fps>
is !ping a no-op?
<fazo>
drathir: np ^^ it was funny
jvalleroy has joined #ipfs
<fps>
drathir: don't worry :)
jvalleroy has left #ipfs [#ipfs]
<fazo>
by the way there is a problem with the IPFS package in NixOS (nothing major)
<fazo>
if you run the daemon from a systemd service, mounting on fuse doesn't work
* drathir
only one replacement for arch I see... it's openbsd...
<fazo>
drathir: why not NixOS?
<fazo>
it's a new distro but skyrocketing in popularity and development speed.
<fazo>
and one of the only actually revolutionary distros
<drathir>
bc arch is stable as a rock and openbsd is the same... I don't really like distros that are too young...
<fazo>
drathir: NixOS stable will be more stable than arch. Don't know about BSD
ben_____ has quit [Quit: Page closed]
notduncansmith has joined #ipfs
<fazo>
also NixOS has a number of features that make it a lot more stable, like rolling back to the state before an update
<fazo>
and reproducible configurations
notduncansmith has quit [Read error: Connection reset by peer]
<fps>
now you're pushing it deliberately? :)
<fazo>
eeeh sorry
<fps>
maybe it's a hybrid system
<drathir>
also, I have a working arch installation that's older than 6 years...
<fazo>
stop me before it's too late
<fps>
part human, part machine :)
<fazo>
if I'm part machine, I hope I only run free software
<drathir>
maybe in a few years, if it still exists, I'll take a look at it ^^
atrapado has joined #ipfs
chriscool has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
<lithp>
scumbag life, the first time we tried to look at our source code the universe charged us $3 billion
wopi has joined #ipfs
<zmanian>
tor-dev
zmanian has left #ipfs [#ipfs]
jamescarlyle has quit [Ping timeout: 240 seconds]
jamescarlyle has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
bedeho has quit [Ping timeout: 240 seconds]
apophis has quit [Quit: This computer has gone to sleep]
apophis has joined #ipfs
zmanian has joined #ipfs
chriscool has quit [Ping timeout: 240 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Gennaker has joined #ipfs
Spinnaker has quit [Read error: Connection reset by peer]
ansuz has quit [Ping timeout: 252 seconds]
<fps>
oh nice. vbox was killed by the oom killer during nixos-install
<fps>
let's see how it copes with that ;)
ansuz has joined #ipfs
jhiesey has quit [Ping timeout: 252 seconds]
screensaver has quit [Remote host closed the connection]
jhiesey has joined #ipfs
feross has quit [Ping timeout: 252 seconds]
thomasreggi has quit [Remote host closed the connection]
jarofghosts has quit [Ping timeout: 252 seconds]
jarofghosts has joined #ipfs
feross has joined #ipfs
rvsv has joined #ipfs
pfraze has quit [Remote host closed the connection]
bsm1175321 has quit [Ping timeout: 260 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
dstokes has quit [Ping timeout: 252 seconds]
sbruce has quit [Ping timeout: 252 seconds]
sbruce has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
Soft has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
dstokes has joined #ipfs
CounterPillow has quit [Read error: Connection reset by peer]
CounterPillow has joined #ipfs
Encrypt has quit [Quit: Sleeping time!]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
jamescarlyle has quit [Remote host closed the connection]
Soft has joined #ipfs
apophis has quit [Read error: Connection reset by peer]
apophis has joined #ipfs
pfraze has joined #ipfs
doei has quit [Ping timeout: 250 seconds]
nessence has quit [Remote host closed the connection]
nf has joined #ipfs
<nf>
i'm getting this error "rm: cannot remove ‘ipfs’: Transport endpoint is not connected".....how can i unmount a folder
<achin>
sudo umount /ipfs /ipns
nicolagreco has joined #ipfs
nicolagreco has quit [Client Quit]
<whyrusleeping>
and when that tells you it fails because fuse is annoying, you can try 'sudo umount -f /dev/fuse'
<whyrusleeping>
'fuck you fuse, i do what i want'
<rendar>
whyrusleeping: but that will unmount all the mounted fuse filesystems, right?
<whyrusleeping>
rendar: yeah. but it sure tells fuse who is boss
<rendar>
lol yeah
<rendar>
whyrusleeping: do you know what ipld is all about?
<whyrusleeping>
rendar: uhm
<whyrusleeping>
linked data
<rendar>
what you mean?
<whyrusleeping>
it's supposed to be the 'new format for ipfs objects'
<rendar>
like connecting one file to another one?
<achin>
less important than 'what' is 'where' (as in where to find info about ipld)
<rendar>
whyrusleeping: how that works?
<whyrusleeping>
i'm really not the best person to ask about linked data, jbenet is the one working on it
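As a rough, hedged illustration of what a "linked data" object format could look like (the names and hashes below are placeholders, not a real IPLD format): an object model in which any object can carry named links to other objects by hash, so directories, commits, and file layouts can all share one link mechanism.

    package main

    import "fmt"

    // Link is a named pointer to another object, identified by its hash.
    type Link struct {
        Name string
        Hash string
    }

    // Node is a minimal "linked data" object: some local data plus named links
    // out to other objects.
    type Node struct {
        Data  string
        Links []Link
    }

    func main() {
        dir := Node{
            Data: "directory",
            Links: []Link{
                {Name: "README.md", Hash: "QmReadmePlaceholder"}, // placeholder hashes
                {Name: "src", Hash: "QmSrcDirPlaceholder"},
            },
        }
        // Resolving a path like /ipfs/<dirhash>/README.md would mean following
        // the link whose Name matches each path segment.
        for _, l := range dir.Links {
            fmt.Println(l.Name, "->", l.Hash)
        }
    }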