IlanGodik has quit [Quit: Connection closed for inactivity]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
kvda has joined #ipfs
lkcl has quit [Ping timeout: 250 seconds]
lkcl has joined #ipfs
se3000 has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
maxlath has quit [Quit: maxlath]
jholden has joined #ipfs
fleeky__ has joined #ipfs
fleeky_ has quit [Ping timeout: 244 seconds]
saintromuald has quit [Ping timeout: 244 seconds]
saintromuald has joined #ipfs
jedahan has joined #ipfs
Kane` has joined #ipfs
ecloud has quit [Ping timeout: 268 seconds]
Boomerang has quit [Quit: leaving]
dignifiedquire has quit [Quit: Connection closed for inactivity]
se3000 has joined #ipfs
jholden has quit [Ping timeout: 268 seconds]
flyingzumwalt has quit [Quit: Connection closed for inactivity]
kklas has quit [Quit: Leaving]
captain_morgan has quit [Remote host closed the connection]
captain_morgan has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Changing host]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
infinity0 has joined #ipfs
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jedahan has joined #ipfs
ivo_ has quit [Ping timeout: 245 seconds]
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
anonymuse has quit [Remote host closed the connection]
kvda has joined #ipfs
ivo_ has joined #ipfs
am2on[m]1 is now known as am2on
anonymuse has joined #ipfs
infinity0 has quit [Ping timeout: 260 seconds]
mildred1 has joined #ipfs
infinity0 has joined #ipfs
mildred has quit [Ping timeout: 250 seconds]
se3000 has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
<pjz>
why would an `ipfs object stat` command take a long time?
<Mateon1>
pjz: Possibly because you don't have the block, and the network is slow to respond. Otherwise, if you do have the block, no idea.
anonymuse has quit [Ping timeout: 240 seconds]
kulelu88 has quit [Quit: Leaving]
<pjz>
okay, it doesn't actually require fetching the blocks fully or anything, right?
robattila256 has quit [Ping timeout: 250 seconds]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
anonymuse has joined #ipfs
anonymuse has quit [Remote host closed the connection]
zanadar has joined #ipfs
<Mateon1>
pjz: It requires fetching one block fully - the block described by the hash. But the blocks linked under that hash don't have to be fetched.
<pjz>
Mateon1: ah! okay, good to know!
* pjz
still working on pinbits.io
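A minimal sketch of the same query through the daemon's HTTP API, assuming the go-ipfs-api Go client: only the root block is needed to answer, which is why the call blocks exactly when that one block has to come from the network.

    package main

    import (
        "fmt"
        "log"

        shell "github.com/ipfs/go-ipfs-api"
    )

    func main() {
        // Talk to the local daemon's HTTP API.
        sh := shell.NewShell("localhost:5001")

        // ObjectStat needs only the root block: the daemon reads the
        // sizes recorded on that block's links, so no children are
        // fetched. If the root block isn't local, this waits on the
        // network - the slowness pjz observed.
        stat, err := sh.ObjectStat("QmW4Zd911qv7uaRM2vi79YsfHvZ93skxiCxLnwpgpf7hp3")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("NumLinks:      ", stat.NumLinks)
        fmt.Println("BlockSize:     ", stat.BlockSize)
        fmt.Println("CumulativeSize:", stat.CumulativeSize)
    }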
<Mateon1>
Also, note that CumulativeSize might suffer from overflow. I've seen a zipbomb on IPFS which had the directory size jumping all over the place.
<whyrusleeping>
i saw a zipbomb that was 2^100 bytes, lol
<pjz>
well, if someone tries to pin a zipbomb, it won't last very long ;)
<Mateon1>
pjz: The point is, you can overflow the size counter such that it looks like a few megabytes (or even a few bytes) even if you store a terabyte of data within.
galois_dmz has joined #ipfs
<pjz>
oic
<pjz>
but you're fine as long as you never unzip it
<pjz>
it's just blocks on ipfs
DiCE1904 has quit [Ping timeout: 260 seconds]
<Mateon1>
pjz: Just saying that in case you want to pin 'small' things only. It's difficult to tell if a thing is actually small due to the overflow.
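The overflow Mateon1 describes is plain unsigned wraparound: a node's CumulativeSize is its own size plus the sizes its links claim, and nothing verifies the claims. A self-contained sketch of the arithmetic (not real go-ipfs code):

    package main

    import "fmt"

    func main() {
        // A directory's CumulativeSize is its own block size plus the
        // Tsize claimed by each link. The claims are unverified, so a
        // hostile DAG can wrap the uint64 sum.
        own := uint64(1024)               // the directory block itself
        realSize := uint64(723) << 30     // a link to ~723 GiB of real data
        crafted := -(own + realSize) + 42 // a link with a bogus, huge Tsize

        cumulative := own + realSize + crafted
        fmt.Printf("apparent CumulativeSize: %d bytes\n", cumulative) // prints 42
    }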
<jbenet>
whyrusleeping: red = not done, green = done. or it will be that. see github.com/jbenet/depviz
<pjz>
Mateon1: so you're saying that `ipfs object stat` can be wrong due to an overflow?
<Mateon1>
pjz: Yep, exactly
<pjz>
Mateon1: is there any way to detect said overflow? shouldn't ipfs itself catch that somehow?
<Mateon1>
Meh, I can't find any link to that zipbomb to illustrate the behavior, so I'll sacrifice my mfs filesystem
<Mateon1>
pjz: Not really, except by fetching the blocks under the directory affected by the overflow and checking whether their sizes all match
zanadar has quit [Ping timeout: 245 seconds]
<pjz>
what about just checking that the size fetched hasn't exceeded the size expected?
<pjz>
I mean, I wouldn't expect to catch it beforehand
<pjz>
but if I go to actually download the thing, it should shut off at or near the expected cumulative size, no?
jedahan has joined #ipfs
DiCE1904 has joined #ipfs
<whyrusleeping>
pjz: thats what i would do
<whyrusleeping>
check the root, and use its reported size to limit reading the rest
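A sketch of that strategy. The Node and BlockGetter shapes below are simplified stand-ins for the real merkledag interfaces, not the go-ipfs API; the point is the accounting: abort as soon as the bytes actually fetched exceed the size the root advertised.

    package merklefetch

    import (
        "context"
        "errors"
    )

    // Node and BlockGetter are illustrative stand-ins, not go-ipfs types.
    type Node interface {
        RawSize() uint64 // size of this block
        Links() []string // identifiers of child blocks
    }

    type BlockGetter interface {
        Get(ctx context.Context, id string) (Node, error)
    }

    var ErrSizeExceeded = errors.New("fetched more than the root's claimed cumulative size")

    // FetchLimited walks the graph under rootID breadth-first, failing
    // once total fetched bytes exceed what the root claimed. This bounds
    // the damage from an overflowed or forged CumulativeSize.
    func FetchLimited(ctx context.Context, bg BlockGetter, rootID string, claimed uint64) error {
        var fetched uint64
        queue := []string{rootID}
        seen := map[string]bool{}

        for len(queue) > 0 {
            id := queue[0]
            queue = queue[1:]
            if seen[id] {
                continue
            }
            seen[id] = true

            n, err := bg.Get(ctx, id)
            if err != nil {
                return err
            }
            fetched += n.RawSize()
            if fetched > claimed {
                return ErrSizeExceeded
            }
            queue = append(queue, n.Links()...)
        }
        return nil
    }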
anonymuse has joined #ipfs
<pjz>
whyrusleeping: ...but all I'm doing is 'ipfs pin'... and then I have to wait.
<whyrusleeping>
ah
<whyrusleeping>
well
<pjz>
whyrusleeping: so, as I said, shouldn't *ipfs* catch this?
<whyrusleeping>
thats a tricky question
<pjz>
cumulative size is 64 bit, right?
<whyrusleeping>
the merkledag size field is only officially part of the protobuf encoding of those nodes
<whyrusleeping>
and its not enforced
<whyrusleeping>
its just a helper really
<whyrusleeping>
with CBOR, you can construct objects however you please, and there won't necessarily be any form of size reporting
<whyrusleeping>
so the problem of "how big is this graph" becomes difficult
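For orientation, the size being discussed lives on the links of a protobuf merkledag node, roughly the following shape (a simplified Go rendering of the PBNode/PBLink schema). Tsize is advisory, which is exactly the problem: nothing enforces it, and a CBOR object need not carry anything like it at all.

    package pbschema

    // Simplified rendering of the protobuf merkledag schema used by
    // go-ipfs. Tsize is the *claimed* cumulative size of the subgraph
    // behind Hash; it is a convenience, not enforced anywhere.
    type PBLink struct {
        Hash  []byte // multihash of the child block
        Name  string // link name (e.g. a file name in a directory)
        Tsize uint64 // claimed total size of the linked subgraph
    }

    type PBNode struct {
        Data  []byte   // opaque payload (e.g. unixfs metadata)
        Links []PBLink // ordered links to child nodes
    }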
<pjz>
so you're saying that there's no reliable way to know that some random addr won't fill up your disks?
<whyrusleeping>
currently, yes
<pjz>
how do the gateways and web clients deal with this?
<whyrusleeping>
disk fills up -> gateway node borks and crashes -> gc -> restart the node
<Mateon1>
whyrusleeping: That doesn't seem like the perfect solution... I think we have to wait for working gc at runtime.
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<whyrusleeping>
but gateway requests get cancelled after one hour, so thats been basically rate limiting those sorts of things
<whyrusleeping>
Mateon1: yeah, its definitely not the best solution
<pjz>
but someone with fast enough net to your gateway could probably fill your disk
<whyrusleeping>
its not even a solution really, its just how we've set up our infra, and it happens to deal with things without going wrong
<whyrusleeping>
pjz: assuming that they can bitswap stuff to us fast enough
<whyrusleeping>
yea
<whyrusleeping>
this has been on our todo list for a while, just hasnt been high priority
<pjz>
okay, well, until it works, how hard would it be to add a config variable max_download_size that killed the download of any hashes over that size?
<whyrusleeping>
about as difficult as actually solving the problem
<whyrusleeping>
the thing is that we need to be able to track 'requests' individually
<whyrusleeping>
and see how much data a given request (like, ipfs cat, or ipfs pin) is using
<whyrusleeping>
which is a bit of insight we don't have right now
<whyrusleeping>
It can very likely be implemented in merkledag.FetchGraph
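One way that per-request insight could look, sketched under assumptions (none of these types exist in go-ipfs): attach a byte budget to the request's context when `ipfs cat` or `ipfs pin` starts, and have the block-fetch path charge it for every block it pulls in.

    package budget

    import (
        "context"
        "errors"
        "sync/atomic"
    )

    var ErrBudgetExceeded = errors.New("request exceeded its download budget")

    type ctxKey struct{}

    // Budget is a shared countdown of bytes that one request (cat, pin,
    // get, ...) may cause to be fetched from the network.
    type Budget struct{ remaining int64 }

    func WithBudget(ctx context.Context, maxBytes int64) context.Context {
        return context.WithValue(ctx, ctxKey{}, &Budget{remaining: maxBytes})
    }

    // Charge is called from the fetch path with the size of each block
    // fetched for this request; it fails once the budget runs out.
    func Charge(ctx context.Context, n int64) error {
        b, ok := ctx.Value(ctxKey{}).(*Budget)
        if !ok {
            return nil // request has no budget configured
        }
        if atomic.AddInt64(&b.remaining, -n) < 0 {
            return ErrBudgetExceeded
        }
        return nil
    }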
<Mateon1>
Anyway, the number that 64 bits can store is incredible. Right now I'm trying to demonstrate the overflow issue, but I'm at 13 EB, probably close to the overflow, but it took many iterations anyway
<Mateon1>
Doesn't help that I am doing it all by hand...
<Mateon1>
Right... I was one iteration from overflow
<whyrusleeping>
yeah, 64 bits is pretty large
<whyrusleeping>
although i think the actual on-disk format is a varint, so its not *technically* limited to 64 bits
<whyrusleeping>
just the implementation is IIRC
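Go's encoding/binary varints illustrate the point: the wire format has no fixed width (seven payload bits per byte, plus a continuation bit), but the implementation tops out at uint64.

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    func main() {
        // The format could grow indefinitely; the Go implementation,
        // like the protobuf one, simply stops at 64 bits.
        buf := make([]byte, binary.MaxVarintLen64)

        n := binary.PutUvarint(buf, 1<<63) // largest power of two in a uint64
        fmt.Printf("2^63 encodes to %d bytes: %x\n", n, buf[:n])

        v, _ := binary.Uvarint(buf[:n])
        fmt.Println("decodes back to:", v)
    }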
galois_dmz has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
nunofmn has quit [Quit: Connection closed for inactivity]
mguentner has quit [Quit: WeeChat 1.6]
galois_dmz has quit [Ping timeout: 252 seconds]
<Mateon1>
Here's a demo of the overflow: https://ipfs.io/ipfs/QmW4Zd911qv7uaRM2vi79YsfHvZ93skxiCxLnwpgpf7hp3 - the size seems to be 11 GB but the folder hides the npm repo hash that's 723GB. I could drive down the size even lower if I wanted to spend the time to do so, potentially even to zero. But this is just a quick demo.
<whyrusleeping>
Mateon1: mind filing an issue about it?
mguentner has joined #ipfs
<Mateon1>
Right now, I'm taking care of something, but I can do so later.
<whyrusleeping>
Mateon1: thanks!
geemili has quit [Ping timeout: 240 seconds]
byouko has joined #ipfs
<byouko>
hey
xethyrion has joined #ipfs
_whitelogger has joined #ipfs
byouko has quit [Quit: Leaving]
jholden has joined #ipfs
infinity0 has quit [Ping timeout: 260 seconds]
mguentner has quit [Ping timeout: 260 seconds]
infinity0 has joined #ipfs
jholden has quit [Ping timeout: 260 seconds]
mguentner has joined #ipfs
rgrinberg has quit [Remote host closed the connection]
<noffle>
thanks richardlitt for representing ipget today :)
chris613 has quit [Quit: Leaving.]
jedahan has joined #ipfs
kvda has joined #ipfs
_whitelogger has joined #ipfs
anonymuse has quit [Remote host closed the connection]
mildred1 has quit [Ping timeout: 252 seconds]
rendar has joined #ipfs
wallacoloo___ has joined #ipfs
ulrichard has joined #ipfs
pfrazee has quit [Remote host closed the connection]
<dignifiedquire>
is the branch part on the main repo or another?
<victorbjelkholm>
dignifiedquire: from my fork
IlanGodik has joined #ipfs
<victorbjelkholm>
dignifiedquire: seems environment variables are removed for PRs coming from forks, maybe we should disable sauce labs on those?
d6e has joined #ipfs
<victorbjelkholm>
I'll redo the current PR I have to come from ipfs/js-ipfs but we should fix this, since all PRs from external contributors would fail right now
<dignifiedquire>
yes, in theory it shouldn't run sauce labs if it can't find the credentials
<victorbjelkholm>
dignifiedquire: should I open an issue about it in js-ipfs for now?
<dignifiedquire>
victorbjelkholm: open an issue on aegir please
<victorbjelkholm>
dignifiedquire: will do, thanks
jmjatlanta has joined #ipfs
wahah has joined #ipfs
<wahah>
hey guys, in nodejs libp2p, if a node has multiple incoming connections, do you suggest creating a new multiaddr to handle each one, or just creating one multiaddr and letting all nodes connect to it?
cemerick has joined #ipfs
lkcl has quit [Read error: Connection reset by peer]
lkcl has joined #ipfs
<toXel>
hey! is it possible to run IPFS behind a proxy/firewall?
<wahah>
hey guys, in nodejs libp2p, if a node has multiple incoming connections, do you suggest creating a new multiaddr to handle each one, or just creating one multiaddr and letting all nodes connect to it?
wahah has quit [Quit: ChatZilla 0.9.93 [Firefox 50.0/20161114144739]]
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
jokoon has quit [Quit: Leaving]
infinity0 has quit [Remote host closed the connection]
JustinDrake has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<pinbot>
now pinning /ipfs/QmWJe7zvzbKKoZ6rbmTxgbczn9FeS7zovrcawMkNYuqyuf
<pinbot>
[host 2] failed to pin /ipfs/QmWJe7zvzbKKoZ6rbmTxgbczn9FeS7zovrcawMkNYuqyuf: cannot store pin state: write /data/ipfs/datastore/369121.log: no space left on device
<dignifiedquire>
why do I need varints? all the sha3/keccak hashes are fine without varint
<dignifiedquire>
we can add varints later down the line when we actually add blake2
niek has joined #ipfs
niekie has quit [Read error: Connection reset by peer]
niek has quit [Read error: Connection reset by peer]
niekie has joined #ipfs
r04r is now known as WIldchild3
WIldchild3 is now known as r04r
ashark has quit [Ping timeout: 240 seconds]
<lgierth>
JustinDrake: hey thanks for that write up
<lgierth>
super useful
<JustinDrake>
Cool :) Thanks for IPFS.
galois_dmz has quit [Remote host closed the connection]
<JustinDrake>
@lgierth: I tried to find talks by you on YouTube. Are there any?
<lgierth>
hah not yet
<lgierth>
there will be more with packet switching progressing
<lgierth>
there's a lot to explain there technically, but also conceptionally and politically
<lgierth>
i gotta run, this cafe is closing
krs93 has joined #ipfs
<lgierth>
the political tl;dr of packet switching is this: the way a network is organized shapes what we can do with it, and an overlay network allows us to create spaces along our own rules, so it's essentially a shot at a militant democracy in networking :)
<lgierth>
(also just to avoid confusion, overlay network is the thing, packet switching is just an implementation detail)
sprint-helper has quit [Remote host closed the connection]
sprint-helper has joined #ipfs
<JustinDrake>
I share that vision for that overlay network; some of the technical details I'm not sure about
<lgierth>
yeah there's a lot of information on that missing or scattered
<lgierth>
just getting started :)
<lgierth>
ok gotta go
<JustinDrake>
I think you should give a talk on packet switching!
PseudoNoob has joined #ipfs
<JustinDrake>
I'm based in Cambridge, UK and can organise for you to speak if you're interested
ulrichard has quit [Remote host closed the connection]
maxlath has quit [Ping timeout: 245 seconds]
rugu has joined #ipfs
ylp1 has left #ipfs [#ipfs]
<rugu>
in libp2p, if a node listens on a multiaddr with a particular protocol, can multiple nodes dial to the same address of the primary host node?
mildred1 has quit [Ping timeout: 245 seconds]
maxlath has joined #ipfs
<JustinDrake>
rugu: Yes
<rugu>
JustinDrake: Do they have to connect on the same protocol or just the same address? Because when I try, information gets passed only between the primary node and the last one that connected.
<JustinDrake>
What protocol is that?
maxlath has quit [Quit: maxlath]
Bill-Daffodil-14 has quit [Remote host closed the connection]
<rugu>
well basically, the example is a chat between two people; I want to extend it to 3 or more.
<JustinDrake>
I understand. I think you should be able to get this chat example to work with n people out of the box. I'm very new to IPFS though and haven't played with that example. You could try filing an issue.
<rugu>
ah well, I actually am the one who pushed this example, and am now stuck while trying to expand on further applications of libp2p :D
<rugu>
daviddias: Hey, I had a question about libp2p and its use in ipfs. The chat example I had made a PR for: if I were to extend it to a 3-way or n-way channel, would I need each node to have multiple multiaddrs or just one?
<daviddias>
whyrusleeping: when are you flying btw? :)
<whyrusleeping>
daviddias: sunday
<whyrusleeping>
i get into SFO around 11:30am
robattila256 has joined #ipfs
<daviddias>
rugu: well, you just need 'one', but you might be chatting over multiple transports, so you would need one for each transport
google77 has quit [Quit: Lost terminal]
<daviddias>
whyrusleeping: nice, early, I arrive at 1pm :)
<rugu>
in what sense? The listener handles an incoming connection at its address with some protocol, while the dialer node just, well, dials to it.
<whyrusleeping>
daviddias: niceee
<whyrusleeping>
:D
dmr has quit [Quit: Leaving]
<daviddias>
rugu: the first question is: if you were to do it with a TCP socket, how many ports would you have to listen on to have multiple other processes dial to you?
<whyrusleeping>
rugu: hes saying, you might want to use tcp and udp and websockets
<whyrusleeping>
which means you need three addresses
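The same answer in code: one listen multiaddr per transport, and any number of peers can dial each of them; every inbound stream gets its own handler invocation. A go-libp2p sketch (js-libp2p is analogous; option names and import paths vary across libp2p versions):

    package main

    import (
        "fmt"
        "log"

        libp2p "github.com/libp2p/go-libp2p"
        "github.com/libp2p/go-libp2p/core/network"
    )

    func main() {
        // One listen address per *transport*, not per peer: any number
        // of remote peers can dial the same address concurrently.
        h, err := libp2p.New(
            libp2p.ListenAddrStrings(
                "/ip4/0.0.0.0/tcp/9000",         // TCP transport
                "/ip4/0.0.0.0/udp/9000/quic-v1", // QUIC transport
            ),
        )
        if err != nil {
            log.Fatal(err)
        }

        // The handler runs once per inbound stream, so an n-way chat is
        // just n-1 concurrent streams on the same listen address.
        h.SetStreamHandler("/chat/1.0.0", func(s network.Stream) {
            fmt.Println("new chat stream from", s.Conn().RemotePeer())
            // read/write on s here, then s.Close()
        })

        select {} // keep the host alive
    }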
<rugu>
In the ipfs repo, I noticed each node starts with some basic bootstrap peers in its address book, which I assume are hosted by you guys.
rgrinberg has joined #ipfs
<rugu>
now say I have a file I am sharing on ipfs, and my friend, who is also running a daemon, gets its hash and goes to the ipfs link to fetch the file. Does the file get routed through the bootstrap peers?
<rugu>
or is it straight communication between my daemon and his?
<Kubuxu>
using the bootstrap nodes you guys find each other and then connect directly
<rugu>
Kubuxu: hmm, because in the ipfs code, in the libp2p file, only the code for dialing to a node is given. If my daemon gets the other daemon's address through the bootstrap nodes, then which of us "handles" the connection and which one "dials" to it?
ashark has joined #ipfs
<Kubuxu>
there isn't really a difference
<Kubuxu>
but in this case, if you are trying to fetch a file from him and you are not connected, then you will be dialing to him
<rugu>
My point is that a redundant connection should not be made.
<rugu>
if node 1 fetches a file from node 2, then node 1 dials to node 2. Now if, after a bit, node 2 fetches a file from node 1, the connection already exists
<rugu>
and no new connection is made?
<mib_kd743naq>
the change from \x72 to \x55 for raw chunks was committed into the cid repo a while ago
<mib_kd743naq>
is there an ETA when go-ipfs will pull that in (and subsequently ipfs.io will upgrade) ?
<mib_kd743naq>
whyrusleeping: ^^
<mib_kd743naq>
Kubuxu: ^^
teaspoon_ has joined #ipfs
mpierce has joined #ipfs
<whyrusleeping>
mib_kd743naq: sometime yesterday
<whyrusleeping>
ipfs.io upgrading can happen today
<whyrusleeping>
rugu: we only open one connection between each pair of peers
<mib_kd743naq>
whyrusleeping: awesome, thank you!
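The codec change is visible in go-cid, where the raw codec is 0x55. A sketch of minting a CIDv1 for a raw block, assuming current go-cid and go-multihash import paths:

    package main

    import (
        "fmt"
        "log"

        cid "github.com/ipfs/go-cid"
        mh "github.com/multiformats/go-multihash"
    )

    func main() {
        data := []byte("hello raw block")

        // Hash the block, then wrap the multihash in a CIDv1 with the
        // raw codec (cid.Raw == 0x55, the value the table settled on).
        h, err := mh.Sum(data, mh.SHA2_256, -1)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(cid.NewCidV1(cid.Raw, h)) // prints a base32 CIDv1
    }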
mildred1 has joined #ipfs
dcallagh has quit [Ping timeout: 268 seconds]
anewuser has joined #ipfs
JustinDrake has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
ygrek has joined #ipfs
dcallagh has joined #ipfs
<whyrusleeping>
mib_kd743naq: gonna take a look at your bugs today
<mib_kd743naq>
whyrusleeping: no rush, I have a lot of other stuff to test in the meantime
rugu has quit [Quit: ChatZilla 0.9.93 [Firefox 50.0/20161114144739]]
<whyrusleeping>
mib_kd743naq: also, on your 'faster block put' thing
<whyrusleeping>
if you write your own API client
<mib_kd743naq>
whyrusleeping: in fact the wishlist I *just* filed, would accelerate my testing by an order of magnitude, so I can file new bugs faster ;)
<whyrusleeping>
you can reuse http connections
<mib_kd743naq>
I do keepalive - it already reuses everything
ashark has quit [Ping timeout: 268 seconds]
<whyrusleeping>
huh, can you measure the perf on that?
<whyrusleeping>
so we can get an idea of how much overhead there is?
<mib_kd743naq>
hmmm... yes I could but that'd be tomorrow
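A rough harness for that measurement: hammer block/put through a single http.Client, which reuses the TCP connection as long as response bodies are drained. The endpoint is go-ipfs's /api/v0/block/put; the multipart field name "file" follows the usual curl examples and is an assumption here.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"
        "mime/multipart"
        "net/http"
        "time"
    )

    // putBlock posts one block to the daemon's HTTP API.
    func putBlock(c *http.Client, data []byte) error {
        var body bytes.Buffer
        w := multipart.NewWriter(&body)
        part, err := w.CreateFormFile("file", "block")
        if err != nil {
            return err
        }
        part.Write(data)
        w.Close()

        req, err := http.NewRequest("POST", "http://127.0.0.1:5001/api/v0/block/put", &body)
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", w.FormDataContentType())

        resp, err := c.Do(req)
        if err != nil {
            return err
        }
        // Draining the body is what lets net/http reuse the connection.
        io.Copy(io.Discard, resp.Body)
        return resp.Body.Close()
    }

    func main() {
        c := &http.Client{} // the default transport keeps connections alive
        block := bytes.Repeat([]byte("x"), 1024)

        const n = 1000
        start := time.Now()
        for i := 0; i < n; i++ {
            if err := putBlock(c, block); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Printf("%d puts, %v per put\n", n, time.Since(start)/n)
    }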
<whyrusleeping>
dignifiedquire: looks like it got slower
<dignifiedquire>
Oo
<dignifiedquire>
that would mean js-ipfs was faster Oo
<whyrusleeping>
ooOoo
<dignifiedquire>
yeahhh which is why I'm confused
ashark has joined #ipfs
<dignifiedquire>
this is self compiled 0.4.4
<whyrusleeping>
make sure to run the go-ipfs daemon with --offline
<dignifiedquire>
also ipfs-whatever fails on master
<dignifiedquire>
okay trying that
<whyrusleeping>
otherwise youre going to be dealing with the network
<whyrusleeping>
it fails on master? uh oh
<whyrusleeping>
mind filing a bug report?
<dignifiedquire>
sure
apiarian_work has quit [Ping timeout: 248 seconds]
<dignifiedquire>
running with offline improves things, reload
cemerick has joined #ipfs
jchevalay has quit [Ping timeout: 260 seconds]
<whyrusleeping>
huh, interesting
<ruby32>
hi all, another question similar to my last. is there a megaupload-type service, essentially a web front-end for ipfs, that isn't specific to images like ipfs.pics?
<ruby32>
i'm guessing that would be simple enough to build myself "in a day" :p
<mib_kd743naq>
whyrusleeping: the more I think about the batch interface, the more sense it makes to stream it as protobuf frames: then I can add whatever types I need in "one pipe"
<whyrusleeping>
mib_kd743naq: hrm... that seems like a bit more complexity
jmjatlanta has quit [Read error: Connection reset by peer]
<mib_kd743naq>
whyrusleeping: true, but on the other hand it is a "power user interface"
<whyrusleeping>
this is true
<whyrusleeping>
so, the way i see this working is as a `--input-enc` option (similar to how the object put commands work)
<Kubuxu>
whyrusleeping: do you have something you want me to focus right now?
<whyrusleeping>
Kubuxu: you would make me so incredibly happy if you could make/update the code coverage listings and maybe improve a couple of them
<mib_kd743naq>
whyrusleeping: like ` ... | ipfs block put --input-enc batch-stream - ` ?
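The batch-stream encoding proposed here doesn't exist, but the framing under discussion is presumably the standard varint-length-delimited protobuf convention: each frame is a uvarint length followed by that many bytes of message, so one pipe can carry a mix of message types as long as both ends agree on the order. A sketch:

    package stream

    import (
        "bufio"
        "encoding/binary"
        "io"
    )

    // WriteFrame emits one varint-length-delimited frame.
    func WriteFrame(w io.Writer, msg []byte) error {
        var lenBuf [binary.MaxVarintLen64]byte
        n := binary.PutUvarint(lenBuf[:], uint64(len(msg)))
        if _, err := w.Write(lenBuf[:n]); err != nil {
            return err
        }
        _, err := w.Write(msg)
        return err
    }

    // ReadFrame reads the next frame from a buffered reader.
    func ReadFrame(r *bufio.Reader) ([]byte, error) {
        size, err := binary.ReadUvarint(r)
        if err != nil {
            return nil, err
        }
        buf := make([]byte, size)
        _, err = io.ReadFull(r, buf)
        return buf, err
    }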
<whyrusleeping>
next time it happens, find the peer who has the things you want, and check `ipfs swarm peers -v` to make sure there are two bitswap streams open to them
<Kubuxu>
kk
anonymuse has joined #ipfs
ZaZ has quit [Read error: Connection reset by peer]
teaspoon_ has quit [Ping timeout: 256 seconds]
atrapado_ has joined #ipfs
teaspoon_ has joined #ipfs
JustinDrake has joined #ipfs
maxlath has joined #ipfs
<ansuz>
hi whyrusleeping
<ansuz>
is your ipfs node running?
<whyrusleeping>
no, it tried to run away so i killed it
<ansuz>
rip
<whyrusleeping>
ansuz: come to the US and visit me
<whyrusleeping>
i went all the way to paris just to see you once
galois_dmz has joined #ipfs
<ansuz>
twitter told me that donald trump would put me in a camp
kulelu88 has joined #ipfs
<ansuz>
I'm visiting toronto in january
<ansuz>
toronto is basically seattle
mildred has joined #ipfs
bsm117532 has quit [Ping timeout: 248 seconds]
<whyrusleeping>
i mean
<whyrusleeping>
like
<whyrusleeping>
kinda
<whyrusleeping>
theres a tower and water
<whyrusleeping>
and buildings
<ansuz>
and really good weed
<whyrusleeping>
oh that too
<ansuz>
basically seattle
<ansuz>
s/rain/snow/g
<whyrusleeping>
fair
<whyrusleeping>
but if you come to america, we can hang out in the trump camps together
<whyrusleeping>
making "made in america" products
<ansuz>
lol
<ansuz>
that sounds like a lot of fun
<ansuz>
maybe you can have the IPFS helicopter come pick me up?
<ansuz>
I'm just a lowly XWiki employee
bsm117532 has joined #ipfs
<whyrusleeping>
no IPFS helicopter, but the Protocol Labs airship might be able to stop by paris in the next loop around the planet
<ansuz>
if you can get me to toronto for january 13th, then I can cancel my flight
<ansuz>
the toronto meshnet conference is happening on the 14th
<theseb>
Kubuxu: wait...then what about all that stuff in the docs about an internet that will live forever with censorship resistance?
<theseb>
i was believin all the feel good hype
jedahan has joined #ipfs
jonnycrunch has quit [Client Quit]
<Kubuxu>
it isn't possible without breaking the laws of physics :/
<Kubuxu>
and economy
<Kubuxu>
and game theory
<Kubuxu>
:/
<Kubuxu>
I am off for today
jedahan_ has quit [Ping timeout: 265 seconds]
<lgierth>
game theory doesn't really have laws anyhow
<lgierth>
same for economy :D
<lgierth>
good night
<theseb>
from ipfs.io... "IPFS provides historic versioning (like git) and makes it simple to set up resilient networks for mirroring of data."... WHO does all that storage?
jonnycrunch has joined #ipfs
<Kubuxu>
where do you see in that sentence that someone stores something
<Kubuxu>
'makes it simple to set up'
jonnycrunch has quit [Client Quit]
<Kubuxu>
right now, for example, you have the Internet Archive, which is struggling to crawl the web and store data in a form that allows for viewing
<Kubuxu>
it would be much easier with IPFS
<Kubuxu>
lgierth: base theorems of those sciences could be called laws
<whyrusleeping>
made another little tool to make my life easier
maxlath has quit [Quit: maxlath]
galois_dmz has joined #ipfs
galois_dmz has quit [Remote host closed the connection]
elico has quit [Quit: Leaving.]
<lgierth>
<3
sprint-helper has quit [Remote host closed the connection]
<lgierth>
my life already feels a little easier
sprint-helper has joined #ipfs
ashark has quit [Ping timeout: 240 seconds]
Qwertie has quit [Ping timeout: 265 seconds]
galois_dmz has joined #ipfs
cblgh is now known as testerino2123
Qwertie has joined #ipfs
SuperPhly has joined #ipfs
pfrazee has quit [Remote host closed the connection]
testerino2123 is now known as cblgh
<theseb>
Kubuxu: when you mentioned internet archives
<theseb>
Kubuxu: why did you say ipfs would make viewing easier?
galois_dmz has quit [Remote host closed the connection]
chris613 has joined #ipfs
anonymuse has quit []
<deltab>
theseb: by naming content using a verifiable hash, and not the name of a particular server to get it from, ipfs makes it possible to get content from anywhere on the network that has it
mirza has joined #ipfs
<deltab>
you can download content from others that have it, not just the central server, thereby reducing load on the central server and making more efficient use of the network
<deltab>
the more popular the content, the more nodes have it, so the more providers there are
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<deltab>
indeed there doesn't even have to be a central server
<theseb>
thanks
SuperPhly has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<deltab>
though if you want content to stay available 24/7 and it's not super popular, you'll need to keep a node with the content online, or get someone else to do that for you
wallacoloo___ has joined #ipfs
<deltab>
so that's not so different; on the other hand, anyone can do that, not just the original publisher
<deltab>
so there is the *potential* for content to stay around indefinitely, or to reappear after disappearing
<whyrusleeping>
plus, if it reappears after disappearing, you can guarantee it's the right stuff
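That guarantee is just re-hashing: before accepting a block from anyone, a node can check the bytes against the multihash in their name. A sketch with go-multihash:

    package main

    import (
        "bytes"
        "fmt"
        "log"

        mh "github.com/multiformats/go-multihash"
    )

    // verified reports whether data re-hashes to the multihash we asked
    // for - the reason re-appearing content is "the right stuff": the
    // name commits to the bytes.
    func verified(expected mh.Multihash, data []byte) bool {
        dec, err := mh.Decode(expected)
        if err != nil {
            return false
        }
        got, err := mh.Sum(data, dec.Code, dec.Length)
        if err != nil {
            return false
        }
        return bytes.Equal(got, expected)
    }

    func main() {
        data := []byte("some block fetched from an untrusted peer")
        want, err := mh.Sum(data, mh.SHA2_256, -1)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("matches:", verified(want, data))
    }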
ulrichard has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
galois_dmz has quit [Remote host closed the connection]
yoosi has left #ipfs ["WeeChat 1.6"]
SuperPhly has joined #ipfs
palkeo has quit [Quit: Konversation terminated!]
SuperPhly has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
se3000 has quit [Quit: My iMac has gone to sleep. ZZZzzz…]