<sonatagreen>
if net neutrality gets destroyed we'll start prioritizing workarounds, I think
<sonatagreen>
people are smart
fingertoe has quit [Ping timeout: 246 seconds]
inconshreveable has quit [Client Quit]
<mungojelly>
the net doesn't even try to be neutral in terms of how much data any particular client gets from any particular server. heck the internet isn't neutral now in any sense. but certainly you can pay servers to serve things to you.
<sonatagreen>
that's. not what net neutrality means.
jamie_k_ has quit [Ping timeout: 256 seconds]
<sonatagreen>
i mean, yes that's a problem that needs to be solved
<sonatagreen>
but there is absolutely room for things to get worse.
<codehero>
net neutrality just means that all packets get treated equally
<ion>
That could theoretically be fixed by smarter algorithms; net neutrality violations could not.
<codehero>
so to make it ipfs-relevant: when your isp starts to lower your bandwidth for ipfs traffic or even completely blocks it, they'd be violating net neutrality
<codehero>
the only workaround for net neutrality violations is encryption, but there's also the possibility that encryption has to be approved, and headerless encryption (encryption that just looks like random data) will be blocked
<spikebike>
codehero: that's not practical
<codehero>
well, yeah
<codehero>
luckily openvpn and ssh are widely used
<spikebike>
it's more like the opposite
<spikebike>
we have a whitelist of trusted partners that don't count against quota, and here's your tiny monthly bandwidth allocation
<codehero>
but then you still need to make sure that the destination server has net neutrality
<spikebike>
codehero: yeah, and SSL can tunnel anything
<spikebike>
not too hard to make anything look like browser+ssl traffic
<codehero>
true
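A minimal sketch of the tunneling idea discussed above (carrying arbitrary traffic over SSH so it looks like ordinary encrypted traffic); the host name and port are hypothetical:

    # open a local SOCKS proxy that carries everything over SSH
    ssh -D 1080 -N user@example.com
    # point applications at it, e.g.:
    curl --socks5-hostname 127.0.0.1:1080 http://ipfs.io/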
<mungojelly>
if the way we're being "fair" is that packets are treated equally whatever server they come from, then the advantage goes to whoever has more servers, hence everyone buying zillions of servers everywhere, and you have an equal right to trickle your stuff through the middle
<mungojelly>
the net is nothing like neutral in any practical sense, it is controlled in practice by a small number of people in a small number of organizations
<codehero>
really. we should work on open internet infrastructure
<spikebike>
ISPs are treading lightly so far.
<codehero>
except for verizon
<spikebike>
fortunately there are some things people scream bloody murder for
<codehero>
and at&t
<spikebike>
when comcast says "don't use netflix use our streaming video service" people laugh
<spikebike>
or cry
<spikebike>
and the politicians actually seem to care for once
<codehero>
and then we have politicians like jeb bush who proudly say they'll get rid of obama's new net neutrality laws
<codehero>
sure, the laws aren't ideal, but just getting rid of them isn't going to fix things
<spikebike>
Jeb Bush, once a front-runner for the Republican presidential nomination, is slashing pay across the board for his struggling campaign as he attempts to regain traction just 100 days before the party’s first nominating contest.
<spikebike>
to his peril ;-)
<codehero>
oh yeah. comcast... i forgot about comcast. actually i think all isps are against net neutrality
<spikebike>
some are for it
<spikebike>
like say sonic
<codehero>
really?
<codehero>
oh
<codehero>
i think google is too, if i'm not mistaken
<spikebike>
but yeah, none of the huge ones
<spikebike>
yeah
<codehero>
but google fiber isn't widespread yet
<codehero>
unfortunately
<spikebike>
well of course, since they're the ones everyone wants to block
<codehero>
heh. yeah
<spikebike>
netflix needs to become an isp ;-)
<mungojelly>
the web has been intentionally slow-walked ever since it was invented, advanced only in limited ways that benefit established players, that's why it's so easy for things like ipfs now to point out the emperor has no clothes
<codehero>
spikebike: i think we should just get rid of isps
<sonatagreen>
codehero, there are people working on that
<spikebike>
yeah, I've been hoping for some p2p wireless protocol for smartphones, the density is high enough in many places to make it practical
<codehero>
sonatagreen: oh, really?
<codehero>
oh. mesh networks
<codehero>
yeah
<codehero>
the problem is that you really need to get more than the majority involved to make that work
<sonatagreen>
maybe
<mungojelly>
mesh networking would dominate right now except that it's illegal. people don't seem to consider that even though they sorta know it.
<sonatagreen>
it's illegal?
<sonatagreen>
i thought it was just unpopular
<mungojelly>
people try to get meshes to work in the small amount of messy spectrum that's not so much designated for them as accidentally left less regulated at the moment. it's illegal to broadcast on most frequencies.
<spikebike>
not illegal
<spikebike>
yeah, but it's plenty legal on some
<mungojelly>
it is illegal to broadcast on most frequencies, including all of the most useful frequencies for mesh networking. you know this if you allow yourself to think about it.
<spikebike>
the wifi ranges are fairly open for example
<sonatagreen>
wifi is legal and likely to remain so
<spikebike>
yeah there would be a backlash if people tried to take it
<sonatagreen>
and you can pry sneakernet out of my cold dead hands
<spikebike>
amusingly $billions were spent upgrading networks to 4g in the usa
<spikebike>
and for the most part people are happy to use most of their bandwidth at home/work/coffeeshop on wifi
<spikebike>
so they hoped to make huge $$$ on it and it's not been nearly as profitable as they hoped
<codehero>
hm. i thought there were laws limiting wifi range
<spikebike>
nope
<codehero>
like the signal can't leave your property
<spikebike>
going 50 miles is plenty legal
<spikebike>
ha, no
<codehero>
oh. okay
<codehero>
that's cool
<mungojelly>
it's just a shitty range for going very far, it's amazing what they've squeezed out of it.
<spikebike>
hell in most apartments I can pick up a dozen
<spikebike>
well that's where mesh comes in
<spikebike>
say a college campus likely has several phones in any given small area
<sonatagreen>
...I should look into doing mesh networking
<spikebike>
and if you are latency tolerant most highways work as well
<mungojelly>
mark my words they will make transmission on those frequencies illegal as well if it's any threat. consider the power dynamics rather than superficial claims.
<spikebike>
wifi is costing the 4G folks billions
<spikebike>
they were hoping bandwidth caps would make them huge money, instead most consumers just watch youtube and music streaming over wifi
<sonatagreen>
we need a secure distributed way to pay people for bandwidth
<spikebike>
dunno, I like the bit-torrent method
<spikebike>
tit-for-tat
Not_ has quit [Ping timeout: 268 seconds]
<sonatagreen>
mmm, i think that's solving a different problem
<spikebike>
no fancy reputation, slow blockchains, or proof of work, just something that works and my 10-year-old can understand
<spikebike>
yeah ipfs isn't a replacement for cellular or mesh networks
charley has joined #ipfs
<spikebike>
IPFS is a bandwidth consumer, not a bandwidth provider
<sonatagreen>
right
<sonatagreen>
i was getting offtopic
<mungojelly>
is bittorrent really tit-for-tat, isn't it more like disconnecting from noncompliant peers (but sometimes randomly giving them another chance)
<mungojelly>
bittorrent is an algorithm for sharing bandwidth, it does nothing to reward storage, it doesn't reward seeds in other words
charley has quit [Ping timeout: 260 seconds]
<bitemyapp>
mungojelly: the problem is that unless you're using a private tracker, there's no way to reward seeds.
<bitemyapp>
mungojelly: seeders, pretty much by definition, have everything you have to offer
<mungojelly>
you can give them pieces from some other file. bittorrent just happens not to negotiate about that.
<mungojelly>
i mean for a good reason obviously, that complicates things tremendously
Guest73396 has joined #ipfs
screensaver has quit [Ping timeout: 244 seconds]
fingertoe has joined #ipfs
<whyrusleeping>
mungojelly: bittorrent uses rarest first and choking mostly
<mungojelly>
yeah i was just reading about it yesterday, it's an awesome algorithm
<whyrusleeping>
yeap!
<whyrusleeping>
propshare is really neat too, but not really effective if only partially deployed
<sonatagreen>
does ipfs's bitswap do the trade-with-other-files thing?
<sonatagreen>
also, uh, how does "you never download any file you haven't explicitly asked for" work with interplanetary (or interstellar!) latency?
bedeho has joined #ipfs
<mungojelly>
you transmit a request to the planet containing your data, and they send back immediately the data you requested, and then minutes later you receive it. so you ask for whole big trees at once i guess. :D
<sonatagreen>
so it's at least two trips instead of one
<mungojelly>
maybe both sides preemptively send new data to be cached though i guess
<sonatagreen>
we definitely need 'subscribe to an ipns', where your peers know you want ongoing updates
<sonatagreen>
definitely some time within the next hundred years
<sonatagreen>
gonna be vital, mark my words.
cemerick has joined #ipfs
<fingertoe>
RSS feeds ought not be above the paygrade of IPFS.
<ion>
jbenet has talked about nodes preëmptively getting content you might want soon near you.
<mungojelly>
hm should i add an rss to my ipns
legobanana has joined #ipfs
<sonatagreen>
sure
<ion>
uh, that was not the right word
<ion>
predictively
<sonatagreen>
maybe, i feel weird about that for reasons i can't quite articulate
<sonatagreen>
something about privacy/analytics and also net neutrality
hellertime has joined #ipfs
<mungojelly>
privacy is great but publicity also helps sometimes, this chat for instance just broadcasts these messages senselessly which works fine since we're just chatting
<sonatagreen>
sure but
<sonatagreen>
i don't want my neighbors to be /encouraged/ to keep logs of my downloading porn
<sonatagreen>
it's a bad default expectation to set
fingertoe has quit [Ping timeout: 246 seconds]
<ion>
It’s not like IPFS is going to broadcast your browsing habits to third parties, more like someone running a server with a lot of storage near you with a cron job mirroring Wikipedia, the latest packages for $distro and whatever else their users/customers are likely to want.
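A minimal sketch of the mirroring ion describes, assuming the mirror publishes under an IPNS name; the peer ID, script path, and cron schedule are hypothetical:

    # resolve the mirror's current root hash and pin it recursively
    hash=$(ipfs name resolve QmMirrorPeerId)
    ipfs pin add -r "$hash"
    # run nightly from cron, e.g.:
    # 0 3 * * * /usr/local/bin/pin-mirror.sh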
fingertoe has joined #ipfs
<sonatagreen>
mm
<M-davidar>
ion (IRC): well, anyone can listen to which blocks you're requesting
<sonatagreen>
sure
<ion>
People already do that with Youtube/Netflix cache boxes in your ISP’s data center etc. IPFS makes that transparently possible for any content.
<sonatagreen>
but it's one thing for that to be currently possible
<ion>
M-davidar: yeah
<sonatagreen>
and another to expect and rely on it like it's a good thing
<sonatagreen>
i want to treat it as a bug rather than a feature, even if it's labeled WONTFIX
Not_ has joined #ipfs
<sonatagreen>
feature request:
<M-davidar>
I suspect the recommended thing to do is to use something like tor for sensitive transfers
<M-davidar>
But obviously that has performance issues
<ion>
None of this *has* to be based on analytics.
<sonatagreen>
when doing 'ipfs add -r foo', symlinks to /ipfs/QmFoo or /ipns/QmBar should be translated into ipfs object links
<sonatagreen>
at least optionally
<M-davidar>
sonatagreen (IRC): I think that used to be the case, but they needed actual symlinks
<sonatagreen>
understandable
<M-davidar>
But yeah, it should still be an option
<sonatagreen>
yeah
<M-davidar>
Especially since the gateway can't resolve symlinks
<M-davidar>
I've argued with whyrusleeping about it previously ;)
<sonatagreen>
especially if I want to gradually start using ipfs /as/ my filesystem, replacing files with symlinks to the ipfs hashes
<sonatagreen>
...hm.
pfraze has quit [Remote host closed the connection]
norn has quit [Ping timeout: 268 seconds]
<sonatagreen>
there should be tools to support that use case.
<sonatagreen>
hmmmmm.
<M-davidar>
sonatagreen (IRC): why do you need symlinks then?
<sonatagreen>
user-friendly names
<M-davidar>
You don't need symlinks for that...
<sonatagreen>
and updateable, ideally when i edit a file it should transparently rewrite the link to point to the new content
<M-davidar>
Yeah, that's what mfs is for
<M-davidar>
ipfs files --help
pfraze has joined #ipfs
Johannean has quit [Killed (Sigyn (Spam is off topic on freenode.))]
<sonatagreen>
mfs?
hoboprimate has quit [Quit: hoboprimate]
<sonatagreen>
...ooh. ooooooh.
<sonatagreen>
with --mount
<sonatagreen>
/ipns/local ~just works~
<sonatagreen>
it does all the things
* sonatagreen
excite
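For reference, a quick sketch of the mfs usage being celebrated here, as exposed by the 0.4.0-era `ipfs files` commands (the paths and hash are hypothetical):

    ipfs files mkdir /docs
    ipfs files cp /ipfs/QmSomeHash /docs/notes.txt   # link existing content into the tree
    ipfs files ls /docs
    ipfs files stat /   # the root hash changes as you edit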
Guest73396 has quit [Ping timeout: 252 seconds]
<sonatagreen>
/slight/ issue of publishing to the universe by default, but eh
<mungojelly>
that's a safe assumption unless you're airgapped
<M-davidar>
ion (IRC): there is a valid argument that ipfs allows a lot more tracking by having everything public (like bitcoin transactions)
<M-davidar>
Not sure what the plan is to deal with that
<sonatagreen>
'use tor'
<sonatagreen>
i mean, no need to reinvent the wheel, right?
<mungojelly>
sometimes the supposed trackability of bitcoin does result in some pretty graphs of things. but usually no one bothers. so who knows what that means.
<M-davidar>
Yeah, but tor is like using dialup
<sonatagreen>
i2p is pretty fast
<mungojelly>
tor is altruistic.
lazyUL has joined #ipfs
<M-davidar>
mungojelly (IRC): yeah, but they have those mixing services for bitcoin
<M-davidar>
Or whatever they're called
captain_morgan has joined #ipfs
<M-davidar>
sonatagreen (IRC): how does i2p differ from tor?
<mungojelly>
i'm sure we could think of how to confuse things on ipfs if we wanted.
<mungojelly>
yeah i haven't really looked into i2p what's the deal
<mungojelly>
should i use that?
<sonatagreen>
M-davidar, i2p focuses on what tor calls hidden services or .onion sites, and sacrifices what tor calls exit nodes
<sonatagreen>
it's more cloistered as a result, but it works very well at what it does
<M-davidar>
sonatagreen (IRC): sounds a lot like hyperboria then
<mungojelly>
but anyway the thing is all the talk about efficiency and if we can fudge things to get it to work, that's talking about video at this point. images, programs, pictures, all easy. video will be easy next.
<sonatagreen>
hyperboria is, if i understand correctly, a replacement for, like, isps
<sonatagreen>
whereas i2p is an anonymizing overlay network like tor or freenet
<sonatagreen>
they're solving different problems
<mungojelly>
so we could keep talking about how things are impossibly infeasibly inefficient because they can only transmit video and not VR, or we could just enjoy what's easy for once.
<sonatagreen>
mungojelly, that's a really interesting philosophy
norn has joined #ipfs
<sonatagreen>
a sort of retro internet
<mungojelly>
yeah i want to start something on ipfs now that's like gopher aesthetic
<mungojelly>
gopher clearly had unexpressed potential
<M-davidar>
sonatagreen (IRC): in hyperboria you're anonymous to everyone but your immediate peers iirc
lazyUL has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<M-davidar>
How does i2p implement anonymity?
<mungojelly>
i'm guessing by onioning??
<sonatagreen>
i think it's broadly similar to tor, i'm fuzzy on the details
<sonatagreen>
they call it 'garlic routing'
<mungojelly>
YUM :)
<whyrusleeping>
ipfs has 'wooden stake' router
<whyrusleeping>
kill vampire twice as good as garlic router!
<sonatagreen>
haha
<mungojelly>
we need like a general protocol for things pointing back to previous versions of themselves
<sonatagreen>
git?
<mungojelly>
yeah well it has to integrate with that graph for sure
<sonatagreen>
hm
<sonatagreen>
does anyone know where to get the ipfs hashes for neocities?
<mungojelly>
it doesn't quite fit right somehow, like git has the previous versions as in it'll reconstitute them for me, but i can't ipfs pin the versions, we're all talking different hashes i guess!?
<mungojelly>
yeah i couldn't find them either, it said on the profile pages but i didn't understand where that was
<sonatagreen>
hmm
domanic has joined #ipfs
rehelmin has joined #ipfs
border0464 has quit [Ping timeout: 256 seconds]
<mungojelly>
new on /ipns/QmSy5XjpgqhD4t436czqP1sgcGSxx7n4p7JC8hVEDsnDs7 is a silly python script that downloads the first few keys it hears on the network, plus some cool fonts it fished up when i tried it, anyone recognize them? :D
border0464 has joined #ipfs
<mungojelly>
it seemed like ipfs.io couldn't see it until i reset my daemon; if it doesn't work i reset the daemon to poke it into working. what might be more productive behavior than that, looking at an error log or something? :)
voxelot has joined #ipfs
daviddias_ is now known as daviddias
<daviddias>
whyrusleeping: ahahhaha
<ipfsbot>
[node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWECF
<ipfsbot>
node-ipfs-api/solid-tests 39072b0 David Dias: update ipfsd
bedeho has quit [Ping timeout: 260 seconds]
<grahamperrin>
I fell asleep earlier with not enough scrollback to see whether anyone answered my question about Wuala: is anyone from the 'old' Wuala involved in IPFS?
<grahamperrin>
thanks alu I have two cats to keep me company and Friday night was three and a half hours ago :-)
r04r is now known as zz_r04r
<grahamperrin>
oh, erm, awesome cats have left the room.
grahamperrin has quit [Quit: Leaving]
grahamperrin has joined #ipfs
predynastic has quit [Remote host closed the connection]
rubrician has joined #ipfs
O47m341 has joined #ipfs
Oatmeal has quit [Ping timeout: 240 seconds]
<deltab>
mungojelly: you can pin older hashes, if you can find them. git-style version history is planned, but hasn't been defined yet
<mungojelly>
maybe i'll have my little publish.sh hack also append the hash to an old hashes file (:
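A sketch of what such a publish.sh might look like; the directory and file names are hypothetical, and the awk parsing assumes the usual "added <hash> <name>" output of ipfs add:

    #!/bin/sh
    # add the site, publish it under this node's ipns name, and remember old hashes
    hash=$(ipfs add -r site/ | tail -n 1 | awk '{print $2}')
    ipfs name publish "$hash"
    echo "$hash" >> old-hashes.txt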
rehelmin has quit [Quit: Leaving.]
<mungojelly>
i had been thinking for the past few months, couldn't you just make a web with hashes pointing to things, and it seemed so doable but not really quite, but here all/most of the work is done so i can just play with it yay
<daviddias>
you have to have go-ipfs under the github.com folder
<daviddias>
for the deps path to work
<codehero>
oh
<codehero>
let's see
<codehero>
ah. there we go
<codehero>
hrm. what does mfs mean?
gperrin has joined #ipfs
grahamperrin has quit [Read error: Connection reset by peer]
bedeho has quit [Ping timeout: 255 seconds]
<daviddias>
mutable file system
<daviddias>
mfs === unixfs 2.0 === files API
gperrin has quit [Ping timeout: 260 seconds]
<daviddias>
you might see those names all over the code; when we release 0.4.0, things will be more consistent
voxelot has quit [Ping timeout: 256 seconds]
voxelot has joined #ipfs
voxelot has quit [Client Quit]
grahamperrin has joined #ipfs
<codehero>
ohhh. sweet
<ion>
Oh, heh. I have been noticing work on this “mfs” thing and waiting in expectation for an “ipfs mfs” command. :-P
schizogregarine has joined #ipfs
grahamperrin has quit [Remote host closed the connection]
Guest73396 has quit [Ping timeout: 240 seconds]
<ipfsbot>
[node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWEwe
<ipfsbot>
node-ipfs-api/solid-tests e45161b David Dias: reorg tests, update PR state
Guest73396 has joined #ipfs
schizogregarine has quit [Remote host closed the connection]
<daviddias>
ion: it is available on 0.4.0 as `files`
<daviddias>
it is pretty sweet, I must say :)
shode has joined #ipfs
qgnox has quit [Ping timeout: 246 seconds]
qgnox has joined #ipfs
rehelmin has quit [Quit: Leaving.]
<ipfsbot>
[go-ipfs] chriscool force-pushed sharness-coverage-script from d2b57d4 to e1ef4d6: http://git.io/vCQp5
<ipfsbot>
go-ipfs/sharness-coverage-script 97a80e9 Christian Couder: Add first version of sharness_test_coverage_helper.sh...
<ipfsbot>
go-ipfs/sharness-coverage-script 5abf294 Christian Couder: coverage_helper: remove .ipfs...
<ipfsbot>
go-ipfs/sharness-coverage-script 741c0e5 Christian Couder: coverage_helper: remove more cruft...
<ipfsbot>
[node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWErg
<ipfsbot>
node-ipfs-api/solid-tests 4edcebd David Dias: list tests
sharky has quit [Ping timeout: 240 seconds]
sharky has joined #ipfs
hellertime has quit [Quit: Leaving.]
Guest73396 has quit []
tinybike has joined #ipfs
qgnox has quit [Ping timeout: 268 seconds]
pfraze has quit [Remote host closed the connection]
ruoti has joined #ipfs
<ipfsbot>
[node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWEK8
<ipfsbot>
node-ipfs-api/solid-tests 3b0a786 David Dias: rm unnecessary comments
pfraze has joined #ipfs
acidhax_ has joined #ipfs
kode54 has joined #ipfs
pfraze has quit [Remote host closed the connection]
acidhax_ has quit [Remote host closed the connection]
acidhax has joined #ipfs
acidhax has quit [Remote host closed the connection]
acidhax has joined #ipfs
<kode54>
ipfs.pics seems like an interesting idea, but the code up on github looks like a small horror story
<kode54>
part of the interface involves a node.js server that exists only to shell out to the ipfs command
acidhax has quit [Remote host closed the connection]
<kode54>
the php parts do a lot of requests with backtick quoted curl commands
<Xe>
yeah
<kode54>
the only limitation it appears to place on the images is that it shrinks them to a maximum size of 1080x1080 using imagemagick's convert command, and limits uploads to 100 images per hour
<Xe>
it is less than ideal
<kode54>
yours?
<Xe>
no
<kode54>
it isn't too bad
<Xe>
god no
<Xe>
i would have done it in go
<kode54>
oh
<Xe>
and embedded ipfs into it
<Xe>
lol
<kode54>
it also returns an error on any image larger than 7MB, rather than attempting to fetch and display it
Guest73396 has joined #ipfs
<kode54>
which is an interesting safety
<kode54>
the ipfs interface page also pipes the request from a curl command on the node.js thing to check the size
<kode54>
then pipes the actual file from the ipfs daemon
<kode54>
before either emitting it as is, or sending an attach header first
<kode54>
I guess ipfs doesn't have any metadata on the files it contains
<kode54>
so it merely sends a fixed image/jpeg header and a filename of test.jpg
<kode54>
not saying I could have done better
<kode54>
because I wouldn't have even tried
pfraze has joined #ipfs
Guest73396 has quit [Ping timeout: 246 seconds]
legobanana has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
cemerick has quit [Ping timeout: 265 seconds]
<ipfsbot>
[go-ipfs] chriscool pushed 1 new commit to sharness-coverage-script: http://git.io/vWEPe
<ipfsbot>
go-ipfs/sharness-coverage-script 1daca55 Christian Couder: coverage_helper: use 'ipfs commands --flags'
fingertoe has quit [Ping timeout: 246 seconds]
<Stskeeps>
is anybody investigating the mobile angle of ipfs? i.e. taking into consideration bandwidth concerns/process wakeups/not being a full dht node?
fingertoe has joined #ipfs
<whyrusleeping>
Stskeeps: not as of yet.
<whyrusleeping>
one idea we had was just to delegate all calls to a trusted node
<whyrusleeping>
you could set up your server/vps to be your ipfs node
<whyrusleeping>
and your phone would just proxy all requests through that
Tv` has quit [Quit: Connection closed for inactivity]
<Stskeeps>
whyrusleeping: yeah, one way but not really sustainable for larger deployments.. i guess it comes down to what actually happens in idle state, when fetching files etc everything is fine to take up resources
<Stskeeps>
(i do mobile OS's for a living)
<Stskeeps>
is there any way to flip ipfs into being very verbose about what it's doing at a given moment?
<Stskeeps>
ipfs daemon that is
pfraze has quit [Remote host closed the connection]
domanic has quit [Ping timeout: 256 seconds]
<whyrusleeping>
oh,yeah
<whyrusleeping>
a few ways
<whyrusleeping>
first is running 'ipfs log level all debug'
<whyrusleeping>
(there is a bug in the log level setting that i will be fixing soon, but that command should work)
<Stskeeps>
thanks
<whyrusleeping>
Stskeeps: the other way is to run 'ipfs log tail'
<whyrusleeping>
that will output json eventlogs
<whyrusleeping>
for everything 'significant' the daemon is doing
<Stskeeps>
and the dht model was kademlia? need to look into my old P2P course materials..
<whyrusleeping>
yeap! i recommend the kademlia paper, its quite good
<Guest73396>
Nice : ) I was trying on my terminal earlier using this: https://ipfs.io/docs/install/ but it didn't work until I started using ./ipfs rather than ipfs before all commands
ilyaigpetrov has joined #ipfs
<ilyaigpetrov>
Hi, ipfs. I was reading some pub/sub proposals and came up with a gossip alternative [ http://git.io/vWuLt ] for multicast. I would appreciate it if you could tell me whether it is useless or not, in ipfs and in general. Thanks.
acknowledged has joined #ipfs
<Stskeeps>
i see various mentions of filesystem-level encryption + signing support .. do i understand that correctly as when inserting a file into ipfs i can encrypt it through it and view it again through something like http://localhost:8080/ipfs/$hash?key=passphrase
<ilyaigpetrov>
Stskeeps: as I see it: you may drop a link to a file, where the url contains an identity and a file hash, and the file itself contains the identity's signature. This way you may associate files with identities via signing.
<ilyaigpetrov>
the url looks like /ipns/identity_key/object_hash
<M-davidar>
Stskeeps (IRC): AFAIK your local node will be able to decrypt it transparently
Guest73396 has quit [Ping timeout: 256 seconds]
arpu has quit [Quit: Ex-Chat]
sysadam has quit [Quit: Page closed]
<ilyaigpetrov>
Stskeeps: see the ipfs paper, section on ipns
<Stskeeps>
thanks
<ipfsbot>
[go-ipfs] chriscool pushed 1 new commit to sharness-coverage-script: http://git.io/vWuYF
<ipfsbot>
go-ipfs/sharness-coverage-script d5da16f Christian Couder: test/Makefile: add coverage target...
martinkl_ has joined #ipfs
acknowledged has quit [Ping timeout: 265 seconds]
<cryptix>
gmorning
quincunxial has joined #ipfs
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
arpu has joined #ipfs
border0464 has quit [Ping timeout: 256 seconds]
ei-slackbot-ipfs has quit [Remote host closed the connection]
border0464 has joined #ipfs
border0464 has quit [Remote host closed the connection]
gritzko_ has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dignifiedquire has quit [Quit: dignifiedquire]
Encrypt has quit [Read error: Connection reset by peer]
Encrypt has joined #ipfs
harlan_ has quit [Quit: Connection closed for inactivity]
gritzko_ has quit [Ping timeout: 256 seconds]
dignifiedquire has joined #ipfs
zz_r04r is now known as r04r
gritzko_ has joined #ipfs
Encrypt has quit [Quit: Quitte]
NeoTeo has quit [Quit: ZZZzzz…]
dignifiedquire has quit [Quit: dignifiedquire]
nicolagreco has joined #ipfs
gritzko_ has quit [Remote host closed the connection]
jo_mo has joined #ipfs
kerozene has quit [Ping timeout: 240 seconds]
dignifiedquire has joined #ipfs
kerozene has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
shadeheim has joined #ipfs
<shadeheim>
hi. I've just been reading up on ipfs. If I host a static website, do I have to promote it and serve something in demand to people as with a torrent, or is it automatically distributed to all users running the service?
nicolagreco has quit [Quit: nicolagreco]
<shadeheim>
e.g., if I host for a day and go offline, will it be accessible for a period of time, or would it need to be popular?
<shadeheim>
still accessible to others, I mean
<edcryptickiller>
I don't know. I think freenet is a better way to do that.
<edcryptickiller>
But turn your website offline and try to access it using ipfs.io
<shadeheim>
yep suppose it doesn't hurt to try. might learn something
<jo_mo>
Someone else notes that you'd use filecoin for that
<mungojelly>
we can manually pin something for you to keep it alive if you'd like
<mungojelly>
or the gateway will automatically keep something for you for a while, but then it does have to forget things sooner or later
<shadeheim>
I'm not sure I have anything worth sharing yet, but thanks for the offer. I'm interested in the idea of keeping something like a time capsule though. that wouldn't get lost
<jo_mo>
mungojelly: does that mean when someone distributes illegal content and then requests it from a couple of gateways, they'll host the content (for some time)?
<mungojelly>
jo_mo: sure, of course. what, do you think it runs the data through a legality filter? it's just data as far as the gateway knows
<shadeheim>
mungojelly: Im getting Path Resolve error: could not resolve name.
<jo_mo>
mungojelly: no I don't expect it to filter anything, I just fear tons of gateways getting charged because they distribute e.g. movies
<mungojelly>
shadeheim: yeah it still says that often, sometimes it works
<shadeheim>
will that get better as ipfs gets more users?
<shadeheim>
I see it now
<mungojelly>
i think it'll get better if it's improved, mostly. it's very alpha. the algorithm for exchanging data is still simple/bad, etc.
<shadeheim>
would webpages need to have absolute links with the hashes in to other pages?
<mungojelly>
more users could make it work either better or worse or a little of both :D
<mungojelly>
shadeheim: you can link to a hash of a document, or you can link to the hash of an agent you trust to update the link to the document
<gamemanj>
like, say, the person who wrote the document :)
<shadeheim>
I'm talking specifically about if I uploaded my own site. say I had a sitemap, with links to other pages in a directory
<mungojelly>
so like my hash is QmSy5XjpgqhD4t436czqP1sgcGSxx7n4p7JC8hVEDsnDs7 and right now i'm pointing it at QmRzcgYDHSua7D4CUJjHC9TG97qCF6tS1WDrN7CPmg1fTU
dignifiedquire has joined #ipfs
<shadeheim>
so you point to an agent?
<mungojelly>
you can link either to /ipns/QmSy5XjpgqhD4t436czqP1sgcGSxx7n4p7JC8hVEDsnDs7/something.txt and then you'll get whatever i point to as being something.txt, or you can point to /ipfs/QmRzcgYDHSua7D4CUJjHC9TG97qCF6tS1WDrN7CPmg1fTU/something.txt and that will ALWAYS refer to that SAME something.txt (but someone will have to be hosting it for it to be retrievable)
<mungojelly>
if you say "ipfs cat QmRzcgYDHSua7D4CUJjHC9TG97qCF6tS1WDrN7CPmg1fTU/something.txt" it should say back "hmmmmmmmmm yeah" and then if you say "ipfs get" or "ipfs pin" on that hash you'll take your own permanent copy, Make Copies For Yourself as francis e dec said
edcryptickiller has left #ipfs [#ipfs]
grahamperrin has joined #ipfs
<mungojelly>
but if you say "ipfs cat QmSy5XjpgqhD4t436czqP1sgcGSxx7n4p7JC8hVEDsnDs7/something.txt" then i can change at any time what that means
<shadeheim>
interesting
<shadeheim>
so to get someone to "host" a version of your site, you would ask them to pin it, and then link people to their hash?
Encrypt has joined #ipfs
<mungojelly>
shadeheim: if different people pin the same file(s), they always get the same hash, so if two people add the same thing, they'll both help when there are requests for that hash
<mungojelly>
if there's a single bit different, the hash of the file will be different, and requests for one or the other will be treated separately
<shadeheim>
so the download of that hash will be faster with more people pinning?
<shadeheim>
and active
<mungojelly>
yeah, it should be, the more copies in the network the more chances you have to get the pieces you need.
<mungojelly>
at the moment it's still very sloppy about trading bits. i believe the current algorithm is that when you request a hash, anyone on the network who has that hash immediately starts sending you everything they have of it.
<mungojelly>
there's only a few hundred nodes on the network so it doesn't matter much yet.
<locusf>
excuse me, but where is the ipfs paper at?
<locusf>
mungojelly: ok, could that be in the topic if theres room?
<mungojelly>
i don't have ops so you can change it if i can :)
<shadeheim>
mungojelly: so the files don't get downloaded in chunks from faster peers right now?
<mungojelly>
so i just changed what QmSy5XjpgqhD4t436czqP1sgcGSxx7n4p7JC8hVEDsnDs7/something.txt points to, now it points at a file that says "i have changed this", the old version is still available on the network but only if you know its hash
<mungojelly>
shadeheim: i tried to find it in the code and i couldn't actually find it (but i don't speak go really). from what i hear, it's still just totally naive.
<shadeheim>
I do like the simplicity of it
<mungojelly>
protocol labs seems to have a plan where they get rich by making a blockchain where you can negotiate storage rental, so they're working on that i think rather than any cooperative system
grahamperrin has quit [Quit: Leaving]
<locusf>
mungojelly: yeah ok :)
Guest73396 has joined #ipfs
legobanana has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
hoboprimate has joined #ipfs
nicolagreco has joined #ipfs
thomasre- has quit [Remote host closed the connection]
geir_ has quit [Remote host closed the connection]
<ipfsbot>
[node-ipfs-api] Dignifiedquire opened pull request #83: [discuss] Switch to superagent instead of raw http.request (master...superagent) http://git.io/vWuQO
dignifiedquire has quit [Quit: dignifiedquire]
cemerick has joined #ipfs
dignifiedquire has joined #ipfs
<ion>
shadeheim: (I'm responding before having read the rest of the discussion.) You need to have a node seeding the content. Others who have explicitly visited your site may be serving the files they happened to load for a while. Filecoin will let you incentivize others to keep seeding the content in the future.
sonatagreen has joined #ipfs
<ion>
jo_mo: American gateways will be able to comply with DMCA and be legally safe (along with the negative sides of DMCA, e.g. trolls existing because you can't charge for “mistaken” requests).
domsch has joined #ipfs
<domsch>
what is the best way to keep track of hashes?
<domsch>
pretty much just save it locally in a file?
<sonatagreen>
that's pretty much what I've been doing
<domsch>
yeh I'll just do that then
<domsch>
btw do you suggest a specific server provider?
<domsch>
I suppose DigitalOcean will be fine? Any spec requirements?
dignifiedquire has quit [Quit: dignifiedquire]
<jo_mo>
ion: not sure how it is in the US, but in Germany people get charged a lot when they use torrents for copyrighted content. Copyright holders can't detect downloading, but since downloaders seed the files too, they get charged for it. The legal issue I see with IPFS is that you can request a file from anyone and they start seeding it
<sonatagreen>
are there any good jurisdictions?
<ion>
mungojelly: Links within your site should probably be relative in general, and the pages could have a <link rel="last" href="/ipns/…/this-page"> as a pointer to the latest version.
<shadeheim>
ion: would pages get indexed by search engines this way?
<ion>
shadeheim: Pages on public gateways are already getting indexed by search engines. If IPFS gains traction, search engines will implement native IPFS support.
<shadeheim>
interesting
<shadeheim>
I like the look of Filecoin.
<shadeheim>
is it coming any time soon?
<ion>
They are prioritizing IPFS work at the moment because Filecoin will need a robust IPFS.
<shadeheim>
true
<shadeheim>
I really think it could take off if they get it right
<shadeheim>
just the legal grey areas involved could put people off
<mungojelly>
copying something is a read and a write. stopping something from being copied means watching whether writes are similar to reads. god is the final appeals court and has ruled.
dysbulic has joined #ipfs
quincunxial has quit [Remote host closed the connection]
dignifiedquire has joined #ipfs
grasper has joined #ipfs
martinkl_ has joined #ipfs
akarthik_ has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
rendar has quit [Ping timeout: 256 seconds]
nicolagreco has quit [Quit: nicolagreco]
fiatjaf has joined #ipfs
nicolagreco has joined #ipfs
rendar has joined #ipfs
shadeheim has quit [Read error: Connection reset by peer]
Encrypt has quit [Read error: Connection reset by peer]
Guest73396_w has joined #ipfs
Encrypt has joined #ipfs
cemerick has quit [Ping timeout: 250 seconds]
Guest73396 has quit [Ping timeout: 252 seconds]
edward_ has joined #ipfs
<edward_>
hi all
<edward_>
I have just installed ipfs on an aws server
<achin>
hi
<edward_>
i want to try the webui
<edward_>
but even though I opened the port on my security group
Guest73396 has joined #ipfs
<edward_>
i can't access the webui
<achin>
the webui by default listens on a local address
<achin>
so it's not accessible from outside the machine
<achin>
you can either restart teh daemon and have it listen on a public IP, or use ssh tunneling
Guest73396_w has quit [Ping timeout: 250 seconds]
mercora has quit [Ping timeout: 246 seconds]
<achin>
i would probably prefer the ssh tunneling method, since it'll keep your ipfs node secure
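A sketch of the tunneling achin suggests, assuming the daemon's API listens on the default port 5001; the host name is hypothetical:

    # forward the remote API port to localhost, then browse the webui locally
    ssh -N -L 5001:127.0.0.1:5001 user@your-aws-host
    # now open http://127.0.0.1:5001/webui in a local browser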
Guest73396 has quit [Ping timeout: 260 seconds]
captain_morgan has joined #ipfs
<edward_>
ok, I'll go search for this
<sonatagreen>
Is there an api/cli way to generate a new ipns keypair?
dysbulic has quit [Ping timeout: 246 seconds]
grahamperrin has joined #ipfs
<gamemanj>
well, you could always do mv ~/.ipfs ~/.ipfs-bk ; ipfs init ; cp ~/.ipfs/config $SOMEPLACE ; rm -r ~/.ipfs ; mv ~/.ipfs-bk ~/.ipfs
<sonatagreen>
eww
<gamemanj>
yeah, I don't like it either
<gamemanj>
I just run my set of IPFS key-crunching processes in separate users... maybe I'm insane?
<achin>
i'd look at go-ipfs. the code to generate a keypair is probably pretty easy to extract into a small standalone program
mercora has joined #ipfs
nicolagreco has quit [Quit: nicolagreco]
nicolagreco has joined #ipfs
dysbulic has joined #ipfs
<sonatagreen>
I specifically want it to be available through the API
pfraze has joined #ipfs
<ion>
sonatagreen: Key management is planned but not implemented.
shadeheim has joined #ipfs
<edward_>
exit
edward_ has quit [Quit: leaving]
<sonatagreen>
The docs say: ipfs name publish [<name>] <ipfs-path>
<sonatagreen>
if I specify [<name>], where does it look to find the corresponding privkey?
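For reference, the form implemented at this point publishes under the node's own identity key (per the keystore discussion above, named keys are still planned); the hash is a placeholder:

    ipfs name publish /ipfs/QmSomeHash   # publish under this node's peer ID
    ipfs name resolve                    # resolve your own name back to the hash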
captain_morgan has quit [Ping timeout: 246 seconds]
<haadcode>
whyrusleeping: sounds good. looking forward to getting my hands on that and implementing stuff using it. what are the deps for it?
<achin>
ilyaigpetrov: it wouldn't be a DAG then
<revolve>
whyrusleeping: would you like to have a look at my p2p caching proxy?
<revolve>
it's in python
<ilyaigpetrov>
achin: I came up with another solution, IPNS should be resolved to a blockchain of commits.
bedeho has joined #ipfs
<ilyaigpetrov>
achin: so you can always look up old commits and they stay within the network forever
<haadcode>
whyrusleeping: I suppose those same deps (or the keystore itself?) are a dependency for ipns to be able to publish to any key pair?
<achin>
what are "commits" in the context of ipfs?
<ilyaigpetrov>
achin: let's put it this way: it is a hash of an object kept in ipfs
<ilyaigpetrov>
achin: is a commit in git similar to a blockchain?
<ion>
A git commit is an object that has metadata (timestamp, message, author etc), zero or more pointers to parent commits and one pointer to a directory tree object.
<ilyaigpetrov>
yes, similar
<whyrusleeping>
haadcode: the deps are mostly the dev0.4.0 branch
<whyrusleeping>
a git commit, an ipfs object and a blockchain block are all merkle[tree/dag] nodes
* whyrusleeping
mostly afk today
<haadcode>
whyrusleeping: you mean 0.4.0 is a dependency for the keystore or for ipns being able to publish to any key pair?
<haadcode>
whyrusleeping: nevertheless, sounds good and hoping you'll get to it soon enough :)
<ilyaigpetrov>
Even if ipns resolved to a commit, it would still be easy to kill the ipns holder.
<whyrusleeping>
haadcode: both
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<whyrusleeping>
ilyaigpetrov: there is no code for this right now, but there is nothing preventing other peers from republishing an ipns record on behalf of another peer
<whyrusleeping>
the author can publish the record with an initial large EOL, and then other peers may retransmit it through the network up until that EOL
<whyrusleeping>
(and the EOL can be hundreds of years in the future if they want)
grasper has quit [Killed (Sigyn (Spam is off topic on freenode.))]
<ilyaigpetrov>
EOL = End of Life, I guess. So, if the EOL is set far in the future, nobody (not even the creator) can overwrite it?
countertendency has joined #ipfs
<ilyaigpetrov>
well, ipns which resolves to a commit is still a nice idea, isn't it? or needless complexity?
<ilyaigpetrov>
by commit I mean a node of a linked list pointing backwards in history.
<achin>
i guess i'm a little confused, because that's pretty much what IPNS is right now, if you assume "linked list" to mean "DAG"
martinkl_ has joined #ipfs
ludbb has joined #ipfs
gperrin has joined #ipfs
countertendency has quit [Ping timeout: 264 seconds]
<ludbb>
hey, can you help me understand how files are distributed across nodes, if distributed at all? my ~/.ipfs/datastore and ~/.ipfs/blocks are rather small, so it certainly doesn't work like a single store. But what happens when I ipfs add some file? Does some node always keep all the added files, is there a setting for that? What happens if I remove the file? Can someone or a group of people create some sort of whitelist?
<ion>
When you “ipfs add” a file, it will be pinned, i.e. added to a list of objects to be retained when you run “ipfs repo gc”. You can list the objects you have pinned with “ipfs pin ls” and you can unpin an object with “ipfs pin rm <hash>”.
<ion>
The objects are stored in ~/.ipfs/blocks.
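Putting ion's explanation together as a minimal command sketch (the file name and hash are placeholders):

    ipfs add myfile.txt      # stores the file locally and pins it
    ipfs pin ls              # list pinned objects
    ipfs pin rm QmSomeHash   # unpin an object
    ipfs repo gc             # unpinned blocks become eligible for removal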
<ludbb>
is ipfs add similar to git add then git commit?
<ion>
“ipfs add” doesn’t create anything like a commit object, what it creates is like tree and blob objects in git.
<ludbb>
it creates a hash though
<achin>
git has a concept of an "index" where you can stage things before you add them to the repo. ipfs doesn't have this, though
<ion>
Yes, every object has a hash in IPFS and in Git.
<ion>
Not only commit objects.
<ludbb>
achin: isn't that because ipfs add roughly skips the "git add" part and combines "git add" with "git commit"?
<achin>
yes, you could think of it like that. but don't try too hard to map every ipfs operation to an operation in git (it won't be very useful)
<ludbb>
ok, so what happens after an ipfs add? does my node try to send copies to nodes close to mine based on the dht?
<demize>
ipfs add is more like git-hash-object
<ludbb>
sure sure, sorry about trying to establish something like that
<achin>
no. no data gets sent when you "ipfs add" something
<ludbb>
uhm, something must be sent because I can visit ipfs.io/ipfs/<that-hash> and see its contents
<achin>
that's because the ipfs.io gateways ask the DHT "who has this hash", and your node replied "i have it"
<ludbb>
ah nice!
<achin>
so only when the ipfs gateway nodes asked for a hash did your node send something over the network
<ludbb>
thank you, pretty cool indeed
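A minimal sketch of the flow achin describes, with a placeholder hash:

    ipfs add hello.txt                     # the local node stores and pins it, printing QmSomeHash
    curl https://ipfs.io/ipfs/QmSomeHash   # the gateway asks the DHT who has it; your node answers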
<ludbb>
so how can I send copies of my files? or is that not possible?
<achin>
this means if you add something, but no one ever asks for it, and then your node goes offline, that data is effectively gone
<ion>
ludbb: Nothing is sent before the node running at ipfs.io requests that file from your node, and it will never do that unless someone asked it from the gateway.
<ludbb>
or, rather, how can I distribute copies? or is that something that ipfs doesn't do?
<ion>
ludbb: By giving someone the hash and waiting for them to download it.
<ludbb>
hmm.. I see
<ion>
ludbb: A future project, Filecoin, will add a mechanism for incentivizing others to store content on your behalf.
<ludbb>
why the coin part? is there a project that precedes that to implement some kind of blockchain?
<Stskeeps>
ooi, will ipfs pin download it immediately and first return when downloaded?
<achin>
Stskeeps: yep
<Stskeeps>
oki
<ion>
ludbb: It will be a project that implements some kind of a blockchain.
<achin>
so if it looks like "ipfs pin" is hanging, it's because it's busy downloading stuff
<ludbb>
how can I tell if something is hanging with ipfs, btw?
<ludbb>
the ipfs log tail spits too much information
<achin>
demize: i think you're right, that's the closest analog in git
<ludbb>
and ~/.ipfs/logs is always empty
Encrypt has quit [Quit: Quitte]
<achin>
ludbb: i'm not sure. you'll soon get an idea about how long certain commands normally take and then use your experience to figure out if it's stuck or not
<ion>
Better status indicators in the output is a thing that will happen at some point.
<achin>
some operations have a progress indicator, though, like "ipfs get"
<ion>
That one can update at a very slow rate.
pseudospherical has joined #ipfs
ygrek_ has joined #ipfs
<achin>
yes, but at least it updates :)
<ludbb>
ipfs pin ls shows "recursive" next to each hash, what exactly it means here?
<ludbb>
isn't there documentation about this instead of examples?
<achin>
the examples do a pretty good job explaining these concepts, so they're useful as documentation too
<ion>
ludbb: It is possible to choose whether to pin all its links transitively along with an object or not, “recursive” means all the links are also kept around.
<ludbb>
thank you
<mungojelly>
oh ok so we can make an ipfs html web like this file:///ipfs/QmbtE6DV5f61ARDdK2D78Ya1yoBi72MUEBtyxsFJajuP7g ! does any other web like that exist to link to?
<mungojelly>
that just has links as <a href="/ipfs/<hash>">something else</a>
<ludbb>
so in the pin ls is there something I should do when it says "recursive"? I'm trying to understand how to make a copy of an existing node and deploy that to a new node. It seems I can combine pin ls with cat, save all to a tar file and then run ipfs tar add in the other node
<codehero>
you could do something like "ipfs:<hash>" but the client needs to know the target program
<ludbb>
but it seems there's no docs for the tar subcommand, I'm not sure what it actually expects
gperrin has quit [Quit: Leaving]
<demize>
ludbb: `ipfs tar --help`
<ludbb>
I read that..
<ludbb>
all it says is "export a tar file from ipfs" and "import a tar file into ipfs"
grahamperrin has quit [Quit: Leaving]
<demize>
Yes?
<ludbb>
what exactly should the tar contain? a bunch of files named whatever I want?
<demize>
uh
<demize>
It literally just imports a tarball into ipfs. So whatever you want it to contain.
<ludbb>
I'm trying to recreate a node elsewhere, so I want to make sure it matches the existing node
<ludbb>
should the files be named equal to its hash in ipfs?
<ludbb>
or that doesn't matter?
<demize>
The contents of the tarball is completely irrelevant to ipfs.
<mungojelly>
hopefully tar cat and tar add successfully form a loop and meet back at the same hash
sonatagreen has quit [Ping timeout: 264 seconds]
<ludbb>
I see, but I'm not sure why you're assuming this is all obvious when there's barely any documentation about it. Let me give a different example: I run ipfs add X, ipfs add Y, this gives me two hashes, is the path in tar cat one of those hashes so I would need to run tar cat twice?
<ludbb>
or can I do tar -czvf a.tar.gz X Y and then ipfs add a.tar.gz ?
<ludbb>
or does it have to be a plain .tar that isn't gzip-compressed?
<demize>
You take a tarball. You run `ipfs tar add` on it. You take that hash. You run `ipfs tar cat` on that hash.
<demize>
And you get the tarball back.
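A sketch of the round trip demize describes; the file names are hypothetical, and <that-hash> stands for whatever hash ipfs tar add prints:

    tar -cf foo.tar file1 file2
    ipfs tar add foo.tar                       # prints a hash for the imported tarball
    ipfs tar cat <that-hash> > roundtrip.tar
    cmp foo.tar roundtrip.tar                  # the round trip should be byte-identical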
<ludbb>
ok, but if I want to get the individual files inside the tar, is that possible or not? because based on what you just told me it's not
<ludbb>
ipfs/<hash-of-tar>/<something-inside-the-tar?> is that how it works?
<ludbb>
if it just consumes a tar and spits back a tar, why is that even needed? ipfs add would do the same no?
<mungojelly>
i don't know what it does either but it sounds easy enough to try?
<demize>
The tarball isn't extracted and added as a directory, no, but you can certainly get the links of the individual files, yes.
<ludbb>
yeah.. I'm going to stop asking, sorry for bothering you all.
<demize>
Anyway, the difference is that it's not a single big blob of binary data.
<mungojelly>
i went to try it out but i forget how to make a tar
<demize>
You can play around with `ipfs tar add` and then `ipfs object links` and `ipfs object data`
<demize>
mungojelly: tar -cf foo.tar file1 file2 ...
<mungojelly>
ok -f marks the outfile, -c tells it to make a tar, seems redundant, tar, TAR, TAR!! make a tar, tar! tar.
<demize>
No, it tells it to create an archive.
<ludbb>
mungojelly: how do you tell whether it's extracting or creating a tar?
jo_mo has quit [Quit: jo_mo]
<demize>
As opposed to extracting one.
<ludbb>
yep
<mungojelly>
i thought extracting was "untar"
chriscool has quit [Ping timeout: 268 seconds]
<ludbb>
tar -x..
ygrek_ has quit [Quit: ygrek_]
<ludbb>
at least this, I can guarantee you, is heavily documented
acidhax has joined #ipfs
sonatagreen has joined #ipfs
<mungojelly>
so there's no other webpages yet with /ipfs/ links? is that a bad idea? seems easy enough to make a few and see what happens
<mungojelly>
no just links that say /ipfs/<hash> so it goes through fuse
<sonatagreen>
by webpages with /ipfs/ links, do you mean websites hosted on ipfs, or public gateways, or what?
jo_mo has joined #ipfs
<mungojelly>
like this one file:///ipfs/QmbtE6DV5f61ARDdK2D78Ya1yoBi72MUEBtyxsFJajuP7g
<ludbb>
I severely doubt more than a handful of people have /ipfs mounted, so maybe what will happen is that nothing will get displayed. Besides, isn't your browser going to refuse to load that? just like it will refuse to load file:///etc/passwd?
<mungojelly>
that goes to a crappy handwritten html page that links to another page that says "hello world" and that's all, a sad little empty web
<mungojelly>
only a handful of people use ipfs at all so of course you're right in general, but it works fine for me
<ludbb>
what about the second part, where your browser refuses to load file:/// unless you're executing from file:/// already?
jamie_k_ has joined #ipfs
<ludbb>
even then it might refuse; firefox used to refuse to load anything that involved ..
<mungojelly>
hm what? i don't know that rule
grahamperrin has joined #ipfs
<achin>
when writing webpages to be hosted in IPFS, the current recommendation is to use links like <a href="/ipfs/QmHash">foo</a>
<mungojelly>
achin: what pages are there linked like that?
<mungojelly>
if someone would kindly link to /ipfs/QmbtE6DV5f61ARDdK2D78Ya1yoBi72MUEBtyxsFJajuP7g then i could try your page and see if the links only work for me because i already have the data :)
sonatagreen has joined #ipfs
border0464 has joined #ipfs
<mungojelly>
i guess i'll also bother to set up a second node here hrm
<ludbb>
that's linking to https://ipfs.io/ipfs/<hash>, if you're wondering mungojelly
<ludbb>
what's the actual point in trying to load from file:///ipfs/ ? if the person is running ipfs then localhost:port/ipfs/<hash> will work just fine with <a href="/ipfs/..."> links
<achin>
right. that's why href="/ipfs/QmHash" is the recommendation. because it'll work if you've loaded the page from an ipfs.io gateway, your own localhost gateway, or your local file system with the /ipfs/ fuse mount
<mungojelly>
achin: ok got it! the relative links work either with the gateway or locally depending on which way you go. that's awesome.
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<mungojelly>
but um there's no pages linking to one another? and cat pictures? just links to all of xkcd and hello worlds so far or what's going on?!
<achin>
yes, and so do absolute URLs (URLs that start with a slash). but the key is to not include the protocol scheme (http:// or file://... leave that off)
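To make achin's point concrete: a scheme-less link such as <a href="/ipfs/QmSomeHash">foo</a> resolves against whichever root the page was loaded from (the hash is a placeholder):

    https://ipfs.io/ipfs/QmSomeHash         # public gateway
    http://localhost:8080/ipfs/QmSomeHash   # local gateway
    file:///ipfs/QmSomeHash                 # local fuse mount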
nicolagreco has quit [Quit: nicolagreco]
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
grahamperrin has quit [Quit: Leaving]
nicolagreco has joined #ipfs
grahamperrin has joined #ipfs
<achin>
i'm not sure what you mean "there is no pages linking to one another" -- we just created a few test pages that do just this
<mungojelly>
yes there's those now.. is that all? :o
grahamperrin has quit [Client Quit]
<ludbb>
mungojelly: you never know, you can't easily share them
fingertoe has joined #ipfs
<achin>
no, not at all. but i don't have a list of all webpages on IPFS
<mungojelly>
well if i find out about any i'm going to link to them, but i just keep hearing there's the integer sequences, and i love those sequences but i don't fear the loss of that data :)
nicolagreco has quit [Client Quit]
<ludbb>
I'm doing some experiments too mungojelly, I can share something later but it might not have cat pictures
<sonatagreen>
Not cats, but cute animal pictures: /ipfs/QmPsoJ3qSxegFwVr4kKbtdzaMBHTjFZ7uBLkMPzsBdQ2QM
<mungojelly>
there actually is an example cat picture or two, i should dig up those to link to
acidhax has quit [Remote host closed the connection]
<mungojelly>
sonatagreen: ok yay thanks i linked to there :)
acidhax_ has joined #ipfs
voxelot has joined #ipfs
acidhax_ has quit [Remote host closed the connection]
<ion>
ludbb: If both nodes are running, you can just transfer the pin list to the other and pin the list there and let IPFS take care of moving the data around.
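A sketch of the pin-list transfer ion describes, assuming the first column of ipfs pin ls output is the hash; the file name is hypothetical:

    # on the old node: collect the recursive pins
    ipfs pin ls --type=recursive | cut -d' ' -f1 > pins.txt
    # on the new node: pin each hash; the data is fetched over the network
    xargs -n 1 ipfs pin add -r < pins.txt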
<mungojelly>
i linked to it and pinned it. what a satisfying feeling, to make a link without being completely helpless whether it'll stay lunk.
<sonatagreen>
i know right? <3
reit has quit [Quit: Leaving]
<achin>
idea for an IPFS service -- you provide a hash for a .gif, and it converts it to a webm and gives you back the webm hash
<sonatagreen>
This, this is what the web is supposed to be!
<mungojelly>
cool yeah we could expose services like that right here even
<mungojelly>
dear imaginary convertbot: convert QmFoo to .gif please
<mungojelly>
(make sure it doesn't work without the please to teach people manners)
<sonatagreen>
haha
cemerick has joined #ipfs
NeoTeo has joined #ipfs
<mungojelly>
it's like a wonderland web. everything's the same but everything's different. links never break but they all point backwards.
<sonatagreen>
Everything points backwards really, you just never noticed it.
martinkl_ has joined #ipfs
<sonatagreen>
A link means "Something cool was here last I checked, if you're lucky it might still be there now."
<mungojelly>
no that's just the problem with the old web, things point forwards tentatively, they're like "whatever untrustworthy.corporation says to do some indefinite length of time in the future" which is such a vague link it's unbelievable but the future is the one little thing it gives you
<mungojelly>
like here's a link to the up-to-datest news from a bunch of trolls http://reddit.com/ but there's no way for me to link to reddit when it was good except archive.org
<sonatagreen>
oh this reminds me, I want a way to do a hybrid ipfs/ipns link
<sonatagreen>
like
<mungojelly>
wiki was a brilliant idea because it made a little bit of the web a little bit like ipfs for as much as you could trust the wiki host at least
<sonatagreen>
it tells you the ipns to ask for the latest version, but also the ipfs of the last known version as a fallback
<mungojelly>
does html not have a syntax for fallbacks? i don't remember one but there really ought to be
<sonatagreen>
good question
reit has joined #ipfs
<ion>
sonatagreen: I'd be happy with pages having a <link rel="last"> pointing to an IPNS address and IPFS-capable browsers detecting if there's a newer version and letting you get to it with one click. It might be even nicer if IPFS objects themselves could have an optional link to an IPNS location so that detection would work for any file format.
<sonatagreen>
Those are both good suggestions.
<sonatagreen>
I want as much integration as possible to make it seamless and effortless, this is an important feature.
<sonatagreen>
And the reverse operation, it should be very easy to go from an ipns to the currently-pointed ipfs.
<revolve>
the idea here is to default to the most replicated revision of a resource, but to also permit you to select others manually
<sonatagreen>
I'm not feeling it
<revolve>
what's up with it?
jamie_k_ has quit [Ping timeout: 265 seconds]
<sonatagreen>
(1) Yet Another Fancy Daemon Thing is automatically an uphill battle, I only have so much attention (and ram on my netbook) to run things.
jamie_k___ has joined #ipfs
<sonatagreen>
(2) it feels like it's solving a different problem than the one i have
<sonatagreen>
like, that's for wikis, this is for blogs/rss
<sonatagreen>
(3) it feels messy/squishy/inelegant
<revolve>
what's for wikis?
<sonatagreen>
uroko
<revolve>
no, it's for hypermedia
<sonatagreen>
ok but like
<revolve>
it's a web caching proxy
<sonatagreen>
in terms of what kind of use case it works well for
<revolve>
but like, it coordinates with other users of the same software using the same routing mechanism as IPFS/bittorrent
<sonatagreen>
it's for things with many authors and complicated consensus-finding
<revolve>
I've also got to tell you that the ram usage is minimal
<sonatagreen>
whereas I'm talking about things with one author and you just need to pick the latest version
<sonatagreen>
that's very good (grumbles at ipfs taking 10-20%) but brainshare stands
<revolve>
sonatagreen: what's for things with many authors?
<sonatagreen>
uroko
<revolve>
when I say revision I mean one singular version of the entire resource
<revolve>
IE a url pointing to an image is dealt with as being a revision of that path on that domain
chriscool has quit [Ping timeout: 264 seconds]
<sonatagreen>
also, 'most replicated' sounds vulnerable to Sybil attacks
<revolve>
someone changes the file at that location, visiting again results in a new revision object, at that path, on that domain.
<revolve>
and when the site goes down it will then ask peers if they know anyone with data for that path
<revolve>
who can then tell you who has data for the hashes of which revisions
<revolve>
you'd then probably opt for the most replicated and settle on which peers to ask for the data, make sure it matches, and then serve that in place
<revolve>
so nothing disappears so long as someone online has a copy of something and they're willing to share when asked
<revolve>
yeah it's really early days yet; implementing S/Kademlia and EigenTrust is still to do
jamie_k__ has joined #ipfs
<revolve>
so this thing just backs up the web with its link structure intact
jamie_k___ has quit [Ping timeout: 268 seconds]
dignifiedquire has joined #ipfs
<sonatagreen>
i. i really don't. like.
<sonatagreen>
why.
<sonatagreen>
why should i pick this over all the other things out there.
<revolve>
Bit hard to parse what you're saying there. Those are intended to be questions, right?
<sonatagreen>
those are intended to be verbal flailing.
<sonatagreen>
i am having difficulty organizing my thoughts into coherent.
<revolve>
OK so what alternatives are you thinking of?
<sonatagreen>
first i have to struggle to understand what problems it's solving.
<revolve>
IPFS: afaict it's solving a different issue and it replicates data by telling peer nodes to store. Maelstrom: Windows-only, is not libre.
<sonatagreen>
it seems like, uroko is fundamentally a key->value thing.
<revolve>
Uroko: You store /only/ what you request. Things can disappear through neglect.
<sonatagreen>
you query a name/key/url, and get back a value/content/data.
<sonatagreen>
yes?
<revolve>
that's akin to saying the web is fundamentally a url->resource thing. yes.
<sonatagreen>
and the association between keys and values is determined by... a popularity contest?
<revolve>
no it permits multiple strategies; most replicated, most recent, least recent, least replicated
hoboprimate has joined #ipfs
<revolve>
the dht is about 1kloc if you'd like to hack on it, too
<ludbb>
hmm.. how do I enable CORS on my local node?
<sonatagreen>
but at some point you're just asking people their personal opinions on what value should be associated with a given key, and then somehow collating those opinions to form your own opinion?
<sonatagreen>
there's no, like, crypto authenticating some opinions as more inherently valid than others?
<revolve>
no; you saying you will share a resource means you will visit peers whose node IDs are near to the hash of the url you're storing for (or the hash of the content for in-network resources)
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
<revolve>
anyone wanting to retrieve that data will know to visit those peers through the same reasoning you followed. they'll then be referred to you for the data.
<ludbb>
thank you sonatagreen
<revolve>
then they just make a request to your instance and you replicate some data to a peer
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<revolve>
revisions are private by default though; you wouldn't be visiting all the nodes whose node IDs are near to the urls you're visiting and telling them you're storing data for all your casual browsing, unless you want to do that
<ludbb>
sonatagreen: that didn't seem to work, in the config I see the API section now includes the HTTPHeaders, should it be set for Gateway instead?
<sonatagreen>
ummmm
<sonatagreen>
i don't know, help
<ludbb>
when I try curl -X OPTIONS <local-gateway> it returns Method OPTIONS not allowed: read only access, maybe that's the issue?
<sonatagreen>
did you try restarting the daemon?
<ludbb>
yes
<sonatagreen>
possibly
<sonatagreen>
ok, if you go into your ~/.ipfs/config (or wherever the config file is on nonlinux)
<revolve>
sonatagreen: are you on osx?
<sonatagreen>
no, lubuntu
<revolve>
k
<ludbb>
I'm on osx, but the config is at the same place. Something I should look for?
<sonatagreen>
and near the top should be a stanza with like "API": { ...stuff... }
<ludbb>
sure, it's there
<sonatagreen>
I'm going to paste a link in a minute to what I have there in my config
<sonatagreen>
as soon as this finishes uploading
<sonatagreen>
dunno why it's taking so long
<sonatagreen>
*should* be /ipfs/QmcwWLSEBzcQUCZ9bKzmLtQ1KRdGGfH9S15WssoVzs53eG
<sonatagreen>
oh, my daemon wasn't running
<sonatagreen>
ok, that link definitely *should* work now
<ludbb>
ah, right, let me try that. makes more sense now
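For reference, the usual go-ipfs way to set those headers from the command line (this writes into the API stanza of ~/.ipfs/config; restart the daemon afterwards, and treat http://example.com as a placeholder origin):

    # allow cross-origin requests against the API on port 5001
    ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://example.com"]'
    ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["GET", "POST", "PUT"]'
    # note: the read-only gateway on 8080 rejects OPTIONS, which is the error seen above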
<sonatagreen>
So for my blog I'm serving up a directory with webfonts
<sonatagreen>
and because of how it's licensed, I'm doing the /whole/ directory, which includes a lot of tiny files for individual glyphs
<ludbb>
I don't know.. this doesn't work for me, the browser never sends an OPTIONS request
legobanana has joined #ipfs
<ludbb>
maybe it'd be easier if I run ipfs add and then use that
<sonatagreen>
and I would really like to not have to recursively trawl that whole big thing every time I want to insert an update to my blog
<ludbb>
(ipfs add on the .js file that is trying to load the other file)
<sonatagreen>
but I don't know of a good way to include a link to a precomputed hash in a directory structure that I'm adding recursively
<sonatagreen>
this would be /very helpful/
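One way this can work today, assuming the object patch add-link subcommand, a hypothetical QmFonts hash for the precomputed directory, and that the fonts are kept out of the tree being re-added:

    # add the blog without the fonts, then graft the precomputed directory in by hash
    root=$(ipfs add -r -q blog/ | tail -n1)
    root=$(ipfs object patch "$root" add-link fonts QmFonts)
    ipfs name publish "$root"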
<ludbb>
hm... if I have a resource R that I'm changing constantly, and so ipfs add R will return a new hash after each change, is there a way to use ipfs name to link to R so the hash for the latest "ipfs add R" is always the same?
<ludbb>
that is, ipfs name resolve <some hash> would get resolved to the latest hash from R?
<sonatagreen>
mmm, maybe.
<sonatagreen>
normally you would inform ipfs of updates by doing "ipfs name publish <latest hash>"
<ludbb>
oh, that always produces the same hash?
<sonatagreen>
but i think if you keep (a copy of?) R directly under /ipns/local then that might do it
<sonatagreen>
yes, the ID of your node
<ludbb>
so I would use the ID to point to R?
<sonatagreen>
yes
silotis has quit [Remote host closed the connection]
<ludbb>
ah, interesting. That solves it indeed
<ludbb>
thanks again
<sonatagreen>
(if we're communicating successfully at least, maybe try it to make sure)
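Put together, the publish/resolve loop being described looks roughly like this (QmYourPeerID standing in for your node's ID):

    # after each change to R, republish under the stable node ID
    hash=$(ipfs add -q R | tail -n1)
    ipfs name publish "$hash"
    # anyone can then follow the stable name to the latest hash
    ipfs name resolve /ipns/QmYourPeerID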
silotis has joined #ipfs
r04r is now known as b3cky
b3cky is now known as r04r
<ludbb>
do other nodes cache this name resolution too?
<sonatagreen>
I'm unclear on that
<sonatagreen>
(see /ipfs/Qmd96rUFx2xzJXK1EanSJVJrS99qibvxgyWnz5vDhtKbUR for a simple example of updating a name)
<ludbb>
sure, that example is what I meant
<sonatagreen>
excellent
<ludbb>
:)
harlan_ has joined #ipfs
<locusf>
what was the use case of using ethereum combined with ipfs?
<sonatagreen>
I forget
<Stskeeps>
locusf: 'blockchain storage'?
<Stskeeps>
or how was it
<sonatagreen>
I think something along the lines of stuff that's server-side dynamic
<locusf>
oh ok
<sonatagreen>
Is it possible to tell from a bare hash whether it's /ipfs/ or /ipns/ ?
<achin>
nope
<ion>
Well, if it returns a public key, you can see if there’s an IPNS signature by it. That doesn’t work yet but AFAIK they are meaning to unify them in that way.
<achin>
but you can't know a priori if a hash is a peerID hash or a merkle node hash
<ion>
yeah
<sonatagreen>
I thought there might be some metadata in the first few bits
<sonatagreen>
or something
<ion>
The metadata only tells the hash algorithm and length.
<sonatagreen>
but that makes sense
<ion>
Nothing about what was hashed.
<victorbjelkholm>
uh-oh, just found out I can't have multiple node-ipfs-api instances connecting to many nodes at the same time...
<ion>
ouch
wopi has quit [Read error: Connection reset by peer]
<victorbjelkholm>
How can I now build my giant cluster of IPFS machines?! :'(
wopi has joined #ipfs
<ion>
Finish davidar’s Haskell API and use it. :-P
<victorbjelkholm>
hah, a lot of things have to happen before I even dip my toe in Haskell
<victorbjelkholm>
I'm comfy in my little evergreen js world
cemerick has quit [Ping timeout: 264 seconds]
pseudospherical has quit [Ping timeout: 250 seconds]
<flounders>
It's not too hard to pick up.
<flounders>
You will feel like you have a major headache until you've got some experience with it though.
<ion>
I’m not sure that’s any different from learning any other class of programming languages for the first time.
<flounders>
It's not, but going from one imperative language to another I at least didn't expect to feel like I was starting over.
<ion>
Sure, going from ML to Haskell will also be easy.
<victorbjelkholm>
Trying to pick up Clojure now, more interesting changing paradigm. But it's hard, lots of new concepts
Encrypt has joined #ipfs
<achin>
new things are always great. expands your experience
<ion>
achin: Yeah. You’ll also end up being better in the languages you already knew. Although you may also end up being more frustrated with them if you have encountered a much nicer language.
<achin>
spoken like a true haskeller! :P
<achin>
(you are right, of course)
NeoTeo has quit [Quit: ZZZzzz…]
<ion>
I knew for years that it would be good to have a clear boundary between impure and pure code, but I failed to really put that into practice. The cultures of all the languages I knew so far did not help. I finally got a lot of practice doing that when learning Haskell, and that experience helps me do it in any language.
<flounders>
Pattern matching, strong typing and efficient recursion spoil you.
<vanila>
pattern matching is so good i had to add it to scheme
suprarenalin has joined #ipfs
<achin>
does anyone know if the public read-only gateway can return something like the output of "ipfs object get"?
<ion>
achin: I have never seen a mention of it having that capability anywhere, FWIW.
<achin>
bummer
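Your own node's HTTP API can do it, though; a quick check against the default API port, with QmFoo as a placeholder hash:

    # the HTTP API exposes object/get even though the read-only gateway doesn't
    curl "http://127.0.0.1:5001/api/v0/object/get?arg=QmFoo"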
<vanila>
there is a program called ipfs on freebsd!
<vanila>
"saves and restores information for NAT and state tables"
<achin>
hehe
<vanila>
that's a shame.. have to call the real ipfs with full path
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
wtbrk has quit [Quit: Leaving]
<sonatagreen>
*shrug*
<sonatagreen>
a couple design suggestions, though:
<sonatagreen>
1) it should (optionally?) ping wikipedia first, and only use the cached version if the live site can't be reached or the page has been deleted or something
<domsch>
that is what it's doing right now
<sonatagreen>
so that it's possible at least in principle to stay up-to-date
<sonatagreen>
the readme suggests the opposite
<sonatagreen>
it says it looks for a cached version first, only going to wikipedia if it can't find one
<sonatagreen>
2) it should be able to grab other wikis as well, if possible
<domsch>
yeh the problem is that by constantly calling Wikipedia we're flooding their API and they are not really happy about that (discussed it with them previously)
<sonatagreen>
aha
<domsch>
so I'd much rather run a script once a day that updates the backed-up pages
<domsch>
2) yeh definitely agree with that. Right now this is just a super simple proof of concept
<domsch>
adding support for other wikis won't be difficult at all
<sonatagreen>
I was under the impression that it only calls WP if a user actually requests the particular page
<domsch>
since they all use the same API
<sonatagreen>
so they shouldn't get pinged /more/ than if we just used them directly
<domsch>
in a previous version (the instant-wikipedia one), I pretty much made an API call on each key enter (i.e. keyup)
amade has quit [Quit: leaving]
<domsch>
but yeh in this right now it's not *that* big of an issue. But it's sort of against the philosophy to go back to Wikipedia after it's been backed up on IPFS
<sonatagreen>
sure
<sonatagreen>
but... in that case you need some sort of within-ipfs way to update, don't you?
gamemanj has quit [Ping timeout: 250 seconds]
<sonatagreen>
or are you okay with being stuck on one particular version of each page forever
<victorbjelkholm>
domsch, wow! Awesome, great work with it. Looks interesting
<domsch>
yeh like I don't think there is any other way; perhaps call the Wikipedia API once a day with all the IPFS-stored pages and compare them to see if there are any changes
<sonatagreen>
I was imagining that it would call the API for a page whenever a user requests that page
<sonatagreen>
or, possibly, whenever a user requests that page, unless the cached version is less than 24 hours old
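The policy sonatagreen describes might look something like this, assuming a local cache directory, hypothetical $page and $cached_hash variables, and Wikipedia's REST endpoint:

    # serve the pinned copy if it is under a day old, otherwise re-fetch and re-add
    if [ -n "$(find "cache/$page.html" -mtime -1 2>/dev/null)" ]; then
        ipfs cat "$cached_hash"
    else
        curl -s "https://en.wikipedia.org/api/rest_v1/page/html/$page" > "cache/$page.html"
        cached_hash=$(ipfs add -q "cache/$page.html")
    fi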
<domsch>
Victor you made IPFSBin right? Thanks a lot for making it, helped me out a lot in figuring out the Node stream ^_^
<sonatagreen>
ipfsbin is very helpful
<victorbjelkholm>
domsch, hah, yeah, "made" since it's really simple. But glad that you liked it :)
<victorbjelkholm>
have some interesting additions to it, but need another (bah) project to be running before
<domsch>
I'll hopefully launch my Ethereum voting application this week, just need to make some last tweaks
wopi has quit [Read error: Connection reset by peer]
<domsch>
btw Victor, where are you hosting ipfsbin?
wopi has joined #ipfs
<victorbjelkholm>
domsch, sounds like a cool idea. Digital Ocean I think
<victorbjelkholm>
yeah, is DO
<victorbjelkholm>
but it's trivial, I haven't yet dared to run anything serious at DO
<domsch>
what are the specs of your server?
<victorbjelkholm>
is the simplest one
<domsch>
ou ok that's nice, thanks :)
<victorbjelkholm>
the only thing it's doing is serving static content and running/contacting the ipfs daemon
<kpcyrd>
the memory leak is pretty bad. I have 16GB of ram and that's still not enough to add 120GB of files without intensive swapping
<ion>
kpcyrd: As a workaround, you can run “ipfs add” while the daemon is not running.
<kpcyrd>
ion: ipfs daemon isn't running yet. ipfs add is allocating tens of gigabytes
<ion>
kpcyrd: ok :-\
<kpcyrd>
and oom kicked in..
<fiatjaf>
does anyone have station binaries for windows somewhere?
ludbb has quit [Quit: leaving]
<kpcyrd>
I'm not sure where the leak happens.. it obviously doesn't keep all files in ram because that would fail much faster
<kpcyrd>
but > 16 GB is a lot of data
NeoTeo has joined #ipfs
NeoTeo has quit [Remote host closed the connection]
<victorbjelkholm>
dignifiedquire, thanks for the input about the config thingy. Do you know concretely where the issue is? Trying to pinpoint it right now
<fingertoe>
Error: Failed to set config value: Wrong config type, expected bool (maybe use --json?) - I am trying to run mount, but it isn't playing nice.. Doesn't like the
<fingertoe>
"DontCheckOSXFUSE": true, line
Encrypt has quit [Quit: sleeping time!]
<fingertoe>
I am using OSx
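Per the hint in the error itself, booleans have to go through the JSON setter; something like this should take (key name exactly as quoted in the chat):

    # the plain string form fails with "expected bool"; pass it as JSON instead
    ipfs config --json DontCheckOSXFUSE true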
<dignifiedquire>
victorbjelkholm: no sorry haven’t looked more closely, will try to check tomorrow
<sonatagreen>
gotta say, this really looks like a design error:
<victorbjelkholm>
dignifiedquire, ah, ok. No worries, thanks for the feedback though
<vanila>
since when can you put HTML code/templates inside js?
<ansuz>
reactjs
<ansuz>
facebook made it, I think
<vanila>
is there a way to estimate how many people are hosting a particular hash?
<victorbjelkholm>
ansuz mappum the hash is simply because it was simple to setup
<victorbjelkholm>
took me about 2 hours to do the entire thing! But I need to have some rehosting of the content, so building a solution for that now, so I can focus on ipfsbin after that :)
<victorbjelkholm>
vanila, html inside js is called jsx
<sonatagreen>
right, the old hash doesn't seem to have actually got saved and I lost the data, /ipfs/QmcbXUmc7fdLEHhUg7BXm6gB3w3xFc39NoF1gcHdkYZ93S
<sonatagreen>
...hmm, does it linkify if i do this? ipfs://QmcbXUmc7fdLEHhUg7BXm6gB3w3xFc39NoF1gcHdkYZ93S/
<sonatagreen>
alas, no.
<ion>
Does in this terminal. Although that URI scheme would be wrong for IPFS.
<sonatagreen>
would it?
<sonatagreen>
I mean, I see that
<ion>
A hyperlink from the document ipfs://QmFoo into /ipfs/QmBar would resolve as ipfs://QmFoo/ipfs/QmBar
<ion>
The URI scheme for IPFS will look like <scheme>:/ipfs/QmFoo where <scheme> is still undecided to my knowledge. It might be “fs” unless someone has a better idea.
<sonatagreen>
in my browser (with the firefox gateway redirect plugin), ipfs:QmBar is interpreted as /ipfs/QmBar, and ipfs:/ipfs/QmBar is interpreted unhelpfully as /ipfs/ipfs/QmBar
<sonatagreen>
and... ipfs:QmBar seems correct to me?
<sonatagreen>
ipfs is how you're retrieving the content, and QmBar is the name/identifier of the thing you want
<sonatagreen>
though ipfs:/ipfs/QmBar should also work
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
<ion>
/ipfs/QmBar is the standard path of the content on IPFS. If you want to use a URI, just add <scheme>: in front but don’t change the path.
<shadeheim>
is it possible to build a revocation ability into ipfs for the original uploader of a file? so if e.g. a bully makes a website about someone when they are a kid, but decides it's cruel when they're more mature, they would have the power to completely destroy the traces of it (at least that version of it), or at the very least their endorsement of it?
<ansuz>
unpin it and make a new file apologizing for being a jerk
harlan_ has quit [Quit: Connection closed for inactivity]
<ion>
The disadvantages of others having the technological ability to dictate to my node “hey, that data you have, delete it” would be greater than any advantages.
<shadeheim>
I think so too, but I think there ought to be metadata in the file to say when the original uploader no longer endorses its circulation, perhaps with an ability for them to add a description/link to their new file
pfraze has joined #ipfs
<mungojelly>
tombstones
<shadeheim>
not in the file itself because it would alter the hash. that would be the problem though I suppose
<ion>
shadeheim: IPFS objects having an optional link to an IPNS path and IPFS-capable browsers being able to say “there is a newer version of this page, click here to open it” should cover that.
<shadeheim>
yep
<shadeheim>
I mean it isn't like the web doesn't already have basic metadata to change how a page/file gets distributed, so I don't think it would add too much bloat to add some basic extra fields
<shadeheim>
there would need to be some standards set though, so it didn't change all the time
<sonatagreen>
(as far as possible, use existing standards please)
<sonatagreen>
shadeheim, a thing the ex-bully could do in that case would be to simply unpin the file from their own node, so that they were no longer participating in distributing it
<sonatagreen>
and maybe contact other people who were hosting it and request that they stop distributing it as well
<sonatagreen>
(there would be no way to force them to do so, of course, but they might be persuaded by reasoned arguments)
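Concretely, withdrawing your own copy is just an unpin plus garbage collection (QmFoo standing in for the regretted file's hash):

    # stop hosting the file yourself; other nodes keep whatever copies they hold
    ipfs pin rm QmFoo
    ipfs repo gc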