voxelot has joined #ipfs
<zignig>
whyrusleeping: turning off keep-alives worked!
<zignig>
yay.
<zignig>
Still getting no Content-Length headers, which kind of borks out my ipfs filesystem.
domanic has quit [Ping timeout: 250 seconds]
Leer10 has quit [Remote host closed the connection]
<whyrusleeping>
zignig: what did you need those for?
<whyrusleeping>
zignig: huh, we should totally still be sending length headers on cat
<whyrusleeping>
i'm setting them
<whyrusleeping>
zignig: you found a bug, thanks!
<whyrusleeping>
well, kinda?
<whyrusleeping>
i'm confused, i think go's http library automatically writes out the headers you set if you just return from the handler func
<whyrusleeping>
which makes sense i guess
<whyrusleeping>
but for some reason
<whyrusleeping>
even though i am explicitly setting a content length header, it's not showing up
<zignig>
whyrusleeping: that's me, bug finder .... ;)
<zignig>
as for the content length, I need it so I can send it on to the client.
<zignig>
when the iPXE client is getting its image it uses the content length for the download.
<whyrusleeping>
you grab that from the 'cat' though?
<whyrusleeping>
where in astralboot is this?
<zignig>
without content length the image download takes ~2 minutes, with it ~8 seconds.
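What zignig and whyrusleeping are running into follows from how Go's net/http works: response headers only take effect if they are set before the first call to Write or WriteHeader; anything set afterwards is silently dropped, and a response streamed without a length falls back to chunked transfer encoding. A minimal sketch (a hypothetical handler, not the actual go-ipfs gateway code):

```go
package main

import (
	"net/http"
	"strconv"
)

func handler(w http.ResponseWriter, r *http.Request) {
	body := []byte("hello from the gateway\n")

	// Works: the header is set before the first Write.
	w.Header().Set("Content-Length", strconv.Itoa(len(body)))
	w.Write(body)

	// Would NOT work: headers set after Write are silently ignored,
	// and without a length Go streams the body with chunked encoding.
	// w.Header().Set("Content-Length", "...") // too late here
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```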
mildred has quit [Quit: Leaving.]
randito has joined #ipfs
<randito>
hello everyone
www has joined #ipfs
<cryptix>
hey randito
dignifiedquire has joined #ipfs
mildred has joined #ipfs
cdata2 has joined #ipfs
cdata2 has quit [Ping timeout: 255 seconds]
reit has quit [Remote host closed the connection]
reit has joined #ipfs
reit has quit [Remote host closed the connection]
reit has joined #ipfs
randito has quit [Quit: leaving]
reit has quit [Remote host closed the connection]
reit has joined #ipfs
hellertime has joined #ipfs
cjb has joined #ipfs
cdata2 has joined #ipfs
cdata2 has quit [Ping timeout: 246 seconds]
www has quit [Ping timeout: 255 seconds]
voxelot has joined #ipfs
m0ns00n has joined #ipfs
<m0ns00n>
Hoi
slothbag has joined #ipfs
null_radix has quit [Excess Flood]
<cryptix>
hey m0ns00n
<m0ns00n>
Hey!
<m0ns00n>
Just back from the states.
null_radix has joined #ipfs
<m0ns00n>
Demoed our system and participated in the 30th anniversary celebration for the Amiga computer :)
<ReactorScram>
what system
<m0ns00n>
It should have been a 10,000+ person event. :)
<m0ns00n>
ReactorScram: FriendUP (friendos.com)
<m0ns00n>
Finally got some verification from top engineers.
<m0ns00n>
So we can with confidence say we have something unique.
<m0ns00n>
Visited the Raspberry Pi dude in San Francisco.
<m0ns00n>
the authors of PHP
<m0ns00n>
at Zend.
<m0ns00n>
And much much more.
<m0ns00n>
:)
voxelot has quit [Ping timeout: 240 seconds]
m0ns00n has quit [Remote host closed the connection]
slothbag has quit [Remote host closed the connection]
<pguth2>
We were asking ourselves how much you can rely on pinned data. Say I pin file A. Can I safely remove file A to save storage space (so I don't have one copy in my regular FS and one in IPFS)?
<pguth2>
(ping ThomasWaldmann)
Gaboose has joined #ipfs
<rschulman>
pguth2: IPFS is still alpha software. While that is generally the point of pinning a hash, I’m not sure I would trust it just yet to keep vital data safe.
pfraze has joined #ipfs
<Gaboose>
is there a way to do a "group pinning" with ipfs? i.e. release blocks that are replicated a lot, but reacquire them if they get rare
<Gaboose>
as in "pinning as a group of users"
<rschulman>
Gaboose: You may be interested in the bitswap protocol
<rschulman>
it's discussed in the paper
<rschulman>
it doesn’t do precisely what you’re talking about, but it's similar.
<pguth2>
We thought along those lines too, thanks rschulman
therealplato has joined #ipfs
hellertime has quit [Read error: Connection reset by peer]
hellertime has joined #ipfs
<whyrusleeping>
got 5 eth letting my desktop mine overnight, not sure how much that really is
<cryptix>
gmorning whyrusleeping :)
freedaemon has joined #ipfs
<rschulman>
whyrusleeping: Nice.
<rschulman>
GPU mining, I assume?
<rschulman>
really wish I had my desktop setup so I could mine some eth'
<rschulman>
my understanding is that a simple contract usually costs around .01 eth to get on the blockchain
<whyrusleeping>
rschulman: yeah, gpu
<rschulman>
all I have is my work assigned macbook air
<rschulman>
has a GPU in it, but I don’t want to break things I don’t own. :)
<whyrusleeping>
rschulman: that probably wont get you anywhere
<whyrusleeping>
lol
<whyrusleeping>
cryptix: gmornin!
<rschulman>
how you doing this morning?
<rschulman>
You’re up early. :)
<whyrusleeping>
yeah, i just decided to get on my laptop before leaving the house
<whyrusleeping>
i feel like crap
<whyrusleeping>
which normally goes away once i get coffee
<rschulman>
getting sick?
<rschulman>
ah
<whyrusleeping>
nah, i just hate mornings
<rschulman>
you’re living in seattle now, right?
<whyrusleeping>
yep!
<rschulman>
cool
<rschulman>
whyrusleeping: What’s going on with filecoin these days?
<rschulman>
mostly quiet?
fleeky has quit [Quit: Leaving]
mildred has quit [Quit: Leaving.]
_whitelogger____ has joined #ipfs
Encrypt has joined #ipfs
<whyrusleeping>
rschulman: waiting on ipfs to be more better
<whyrusleeping>
our team doesnt have enough bandwidth for sustained development on both projects
Tv` has joined #ipfs
sbruce has joined #ipfs
therealplato has quit [Read error: Connection reset by peer]
<Gaboose>
does anyone use the ipfs dht for custom app purposes?
<whyrusleeping>
Gaboose: not that I am aware of, although i do see a decent amount of random traffic on it from time to time
<Gaboose>
wondering what kind of things can be implemented on it
therealplato has joined #ipfs
<whyrusleeping>
Gaboose: well, the dht itself is really just a KV store
<whyrusleeping>
one that you can access from anywhere
<Gaboose>
yea, but it's a freely editable one
<Gaboose>
so you can't trust it
<whyrusleeping>
Gaboose: you cant *rely* on it, but you can trust it if you sign your data
voxelot has joined #ipfs
<whyrusleeping>
certain record types are treated specially by the network though
<whyrusleeping>
for example, nobody can overwrite your public key stored in the dht
<whyrusleeping>
and nobody can overwrite an ipns entry of a key they dont own
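The "sign your data" idea, sketched in Go (a hypothetical record format, not the actual go-ipfs DHT code): a value in a freely writable KV store can still be trusted if the publisher signs it and readers verify against the publisher's public key.

```go
package main

import (
	"crypto/ed25519"
	"fmt"
)

func main() {
	// The publisher generates a keypair once; the public key is shared.
	pub, priv, _ := ed25519.GenerateKey(nil)

	// The value to publish, e.g. a pointer to some content.
	value := []byte("/ipfs/<some-hash>")
	sig := ed25519.Sign(priv, value)

	// Anyone fetching (value, sig) from the DHT can check authenticity,
	// even though the DHT itself accepts writes from anybody.
	if ed25519.Verify(pub, value, sig) {
		fmt.Println("record is authentic")
	}
}
```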
<voxelot>
how's cjdns coming? i'd really like to study up on that and work with it
<whyrusleeping>
voxelot: lgierth is the one working on that
<Gaboose>
whyrusleeping: it'd be nice for the dht to have custom record types like that
<Gaboose>
unwritable if you don't own the key
<Gaboose>
or conflict free data types
<Gaboose>
like grow-only sets
<Gaboose>
or increase-only counters
<whyrusleeping>
we have grow-only sets
<whyrusleeping>
we currently use them to announce who has which blocks
<Gaboose>
cool!
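A grow-only set (G-Set) is the simplest CRDT: the only operation is add, and merging two replicas is plain set union, which is commutative, associative, and idempotent, so replicas converge no matter what order updates arrive in. A minimal sketch in Go (illustrative only, not the go-ipfs implementation):

```go
package main

import "fmt"

// GSet is a grow-only set: elements can be added but never removed.
type GSet struct {
	elems map[string]struct{}
}

func NewGSet() *GSet { return &GSet{elems: make(map[string]struct{})} }

func (s *GSet) Add(e string) { s.elems[e] = struct{}{} }

func (s *GSet) Contains(e string) bool { _, ok := s.elems[e]; return ok }

// Merge takes the union of two replicas; applying it in any order,
// any number of times, yields the same result.
func (s *GSet) Merge(other *GSet) {
	for e := range other.elems {
		s.elems[e] = struct{}{}
	}
}

func main() {
	a, b := NewGSet(), NewGSet()
	a.Add("block-1")
	b.Add("block-2")
	a.Merge(b)
	fmt.Println(a.Contains("block-2")) // true
}
```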
<whyrusleeping>
there are going to be a lot of changes coming soon to the dht to make it better and faster
<cryptix>
Gaboose: a friend of mine wrote his thesis about a dht like that... (about time he comes back from his traveling)
<Gaboose>
i assume it's not as easy to use as putting a "crdt:" prefix on the value
<whyrusleeping>
Gaboose: right now, no. but we are hoping to make it that easy
atrapado has joined #ipfs
<whyrusleeping>
the difficult part is making sure that it cant be abused
<Gaboose>
how do you mean?
<Gaboose>
a network slowdown from the abundance of set elements?
<Gaboose>
an attacker not conforming to the grow-only spec? the rest of the network won't propagate his puts
<rschulman>
whyrusleeping: I love it, please make it happen!
<whyrusleeping>
rschulman: okay, lol
<rschulman>
:)
dignifiedquire has quit [Quit: dignifiedquire]
<alu>
IPFS is showing great promise in being part of a decentralized metaverse design :)
<alu>
we'll see where it goes
<Gaboose>
i've seen you guys mention here and there namecoin, openbazaar, custom blockchain apps
<Gaboose>
have you heard about ethereum? it's like a hotbed for such things
<alu>
yeah
<whyrusleeping>
Gaboose: yeah
<alu>
I met that dude
<jbenet>
whyrusleeping: can we bundle go-ipfs in FF / Chrome extensions?
<whyrusleeping>
jbenet: uhm... good question
<whyrusleeping>
gordonb: can we?
<gordonb>
jbenet: yeah, it’s possible to bundle in FF extension. Not sure about Chrome.
<jbenet>
whyrusleeping: we need an implementation of the ipfs-shell interface that uses an embedded node when there isn't a local gateway-- like it checks first, and if not, uses the embedded node.
cdata2 has joined #ipfs
cdata2 has quit [Ping timeout: 260 seconds]
<whyrusleeping>
hmmm
<whyrusleeping>
why not put that responsibility on the caller?
<whyrusleeping>
i want to get good at ephemeral node stuff, but thats going to require a cleanup of the construction process
<jbenet>
whyrusleeping: we need to come up with "the appropriate way to check + decide what to do", because that way people can build things without having to worry about the complexity themselves.
<whyrusleeping>
fair enough
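One plausible shape for that check (a sketch only; the port and the fallback are assumptions, not the actual ipfs-shell API): probe the local API address first, and construct an embedded node only when nothing is listening.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// localDaemonUp reports whether something is listening on the default
// API port. A real implementation would hit an actual HTTP endpoint
// rather than merely dialing.
func localDaemonUp() bool {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:5001", 500*time.Millisecond)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if localDaemonUp() {
		fmt.Println("using the local gateway")
	} else {
		fmt.Println("falling back to an embedded node") // would construct one here
	}
}
```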
bmcorser has joined #ipfs
cdata2 has joined #ipfs
<Gaboose>
does 'ipfs dht findprovs' show all providers of a hash?
<Gaboose>
if not, is there a way to get/estimate the rarity of a file?
<whyrusleeping>
Gaboose: it shows as many providers as it finds
<whyrusleeping>
up to 20 i think
gordonb has quit [Quit: gordonb]
<whyrusleeping>
building a package import visualization for ipfs breaks my computer
<whyrusleeping>
would be great if we had tests for the webui
Eudaimonstro has joined #ipfs
<whyrusleeping>
woooo... go1.5 doubles build times
<whyrusleeping>
so excited
<jbenet>
whyrusleeping: maybe switch constants to have the 5001? May want to ship this as a 0.3.7 fix...
<jbenet>
Or 0.3.6-fix? What etiquette do we want?
<jbenet>
0.3.7 is fine with me
<jbenet>
Ideally would grab port from config though.
<Luzifer>
we're not doing semantic versioning, are we?
<jbenet>
Luzifer: no not yet
<whyrusleeping>
jbenet: could do 0.3.6-1
<whyrusleeping>
use kernel versioning semantics
<jbenet>
We could but I still want a vanity version in front, so <vanity>.<major>.<minor>.<patch>
<whyrusleeping>
ah, so we're 0.0.3.6 right now then?
<Luzifer>
O_O
* whyrusleeping
is a little confused
<Luzifer>
that will confuse like everyone…
<jbenet>
whyrusleeping: no, 0.3.6.1
<whyrusleeping>
ah
* Luzifer
likes semantic versioning and sticks to it :D
<jbenet>
Luzifer: really? Were you very confused the first time you saw semver?
<jbenet>
Luzifer semver doesn't work with end users.
<Luzifer>
jbenet: and adding more dots and numbers works better?
<jbenet>
End users only need to know about <vanity>.<major>
<whyrusleeping>
wtf is vanity?
<jbenet>
If that
<Luzifer>
whyrusleeping: +1
<jbenet>
A number the product developers use to signify a logical difference with _major_, fundamental changes from one product to another. Say iterm1 and iterm2, or os10
<jbenet>
A number you can put in print and have it mean something, not a number that will be different tomorrow.
<Luzifer>
mac os uses 3 numbers… 10.10.4…
<jbenet>
Luzifer: that's just what you see; they have more under the hood and internally
dignifiedquire has joined #ipfs
<whyrusleeping>
jbenet: i dont even know what to say
<jbenet>
i'll fix it :)
<whyrusleeping>
im doing that url parse thing because the origin has the path on the end
<whyrusleeping>
the /ipfs/Qmasbaskdjalsgjs part
mildred has quit [Quit: Leaving.]
Encrypt has joined #ipfs
<jbenet>
whyrusleeping: ... no? origins are supposed to only be the [scheme://host:port] part of the url
<whyrusleeping>
jbenet: thats weird, thats not whats being sent
domanic has quit [Ping timeout: 246 seconds]
<whyrusleeping>
the value returned from the header contained the path
<whyrusleeping>
which is why i did the url parsing
<jbenet>
whyrusleeping what browser??
<jbenet>
also, this ServeOption stuff is so convoluted.
<jbenet>
doesnt even get the server, not sure why.
<whyrusleeping>
jbenet: chrome
tilgovi has quit [Ping timeout: 244 seconds]
<alu>
I want to set up dedicated seeders
<dignifiedquire>
daviddias: Found and fixed two bugs today in node-ipfs-swarm :)
<whyrusleeping>
dignifiedquire: woo!
dignifiedquire has quit [Quit: dignifiedquire]
<jbenet>
whyrusleeping: btw, multiaddr at least lets you split on ("/") and check things for "tcp" and "udp", whereas a net.Addr may or may not have ports, and may or may not be ip6
<jbenet>
so you may get: "1.2.3.4:5555" or "[::1]"
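The point being: a multiaddr is self-describing, so splitting on "/" yields explicit protocol names, whereas a bare net.Addr string needs format-specific parsing. A quick illustration (plain string handling, not the go-multiaddr API):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	ma := "/ip4/1.2.3.4/tcp/5555"
	parts := strings.Split(strings.TrimPrefix(ma, "/"), "/")
	fmt.Println(parts) // [ip4 1.2.3.4 tcp 5555] -- protocols are named explicitly

	// Compare: "1.2.3.4:5555" and "[::1]" need different parsing entirely,
	// and the second one doesn't even carry a port.
}
```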
<lgierth>
sprintbot: blocklist/whitelist
<lgierth>
whyrusleeping: i assume we'll have localhost/whitelist too then?
Encrypt has quit [Quit: Quitte]
* lgierth
heading to c-base
<ipfsbot>
[go-ipfs] jbenet force-pushed fix/allowed-origins from 852e9f0 to e5eccd8: http://git.io/vOq9M
<ipfsbot>
go-ipfs/fix/allowed-origins 6b67c09 Juan Batiz-Benet: corehttp: add net.Listener to ServeOption...
<ipfsbot>
go-ipfs/fix/allowed-origins e5eccd8 Juan Batiz-Benet: fix cors: defaults should take the port of the listener...
<whyrusleeping>
jbenet: oh yeah, multiaddr is definitely easier than normal net addrs
<jbenet>
whyrusleeping: can you CR that real fast? o/ and test with webui?
<jbenet>
(it works for me)
<whyrusleeping>
sure
<whyrusleeping>
jbenet: still appears to break for me...
<jbenet>
it works for me :/ -- why is your chrome sending the whole url as an origin??
<jbenet>
can you search for that? im fixing test
<whyrusleeping>
okay
<whyrusleeping>
both seem like the path is expected to be there
<jbenet>
whyrusleeping: oh referer is fine
<jbenet>
_origin_ is always scheme://<host>:<port>
<jbenet>
(so you're right, we need to parse the origin out of the referer)
<whyrusleeping>
ah, i see the difference
<whyrusleeping>
we check "Origin" on one, and .Referrer() on the other
<whyrusleeping>
my chrome is probably not setting the path on Origin
<whyrusleeping>
why is http so complicated?
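The distinction they landed on: Origin carries only scheme://host:port, while Referer carries the full URL including the path, so the path has to be stripped before comparing against allowed origins. A sketch of that normalization (illustrative, not the go-ipfs corehttp code):

```go
package main

import (
	"fmt"
	"net/url"
)

// originOf reduces a full referer URL to its origin, dropping
// path, query, and fragment.
func originOf(referer string) (string, error) {
	u, err := url.Parse(referer)
	if err != nil {
		return "", err
	}
	return u.Scheme + "://" + u.Host, nil
}

func main() {
	o, _ := originOf("http://localhost:5001/ipfs/Qmasbaskdjalsgjs")
	fmt.Println(o) // http://localhost:5001
}
```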
hellertime has quit [Quit: Leaving.]
<jbenet>
whyrusleeping pull + test once more?
<jbenet>
i fixed tests.
<ipfsbot>
[go-ipfs] jbenet force-pushed fix/allowed-origins from e5eccd8 to 9d06375: http://git.io/vOq9M
<ipfsbot>
go-ipfs/fix/allowed-origins 9d06375 Juan Batiz-Benet: fix cors: defaults should take the port of the listener...
<whyrusleeping>
on it
ruby32 has quit [Quit: Leaving]
<whyrusleeping>
i realized you force pushed after git opened up a commit message editor for me
<whyrusleeping>
lol
<whyrusleeping>
yay! it works!
<whyrusleeping>
ship it
therealplato has quit [Ping timeout: 246 seconds]
<whyrusleeping>
jbenet: o/
<lgierth>
cannot use addr (type "github.com/jbenet/go-multiaddr".Multiaddr) as type "github.com/jbenet/go-multiaddr-net/Godeps/_workspace/src/github.com/jbenet/go-multiaddr".Multiaddr in argument to manet.Listen
<lgierth>
:):)
<whyrusleeping>
Multiaddr should probably be an interface tbh
<lgierth>
whyrusleeping: why?
<lgierth>
or, how. the inner go newb is asking
<whyrusleeping>
well, making it an interface backed by a private concrete type would prevent that issue you are seeing
<whyrusleeping>
it wouldnt care if the type was exactly the same, it would just do a method check
<whyrusleeping>
and if the method sets match, it wouldnt care
<lgierth>
ah. yep
<lgierth>
thank you
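Why the interface helps: Go interfaces are satisfied structurally, so two vendored copies of a package can both satisfy the same interface even though their concrete types are distinct and unassignable to each other. A toy illustration (hypothetical method set, not the real go-multiaddr API):

```go
package main

import "fmt"

// If Multiaddr were an interface...
type Multiaddr interface {
	String() string
}

// ...any concrete type with a matching method set satisfies it,
// regardless of which vendored copy of a package defined it.
type addr struct{ s string }

func (a addr) String() string { return a.s }

// Listen accepts anything satisfying the interface, so the
// "type A is not type B" vendoring error goes away.
func Listen(m Multiaddr) {
	fmt.Println("listening on", m.String())
}

func main() {
	Listen(addr{"/ip4/127.0.0.1/tcp/4001"})
}
```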
thelinuxkid has joined #ipfs
<whyrusleeping>
mappum: ping
thelinuxkid has quit [Client Quit]
thelinuxkid has joined #ipfs
<whyrusleeping>
is it bad to return values in javascript?
<whyrusleeping>
it seems like everyone just passes in a callback to receive the result of the function
<lgierth>
returning is fine
<lgierth>
just don't block :)
<whyrusleeping>
so dont do while (true) {}
<whyrusleeping>
got it
www1 has joined #ipfs
www has quit [Ping timeout: 246 seconds]
voxelot has quit [Ping timeout: 246 seconds]
thelinuxkid has quit [Quit: Leaving.]
www1 has quit [Ping timeout: 250 seconds]
voxelot has joined #ipfs
Eudaimonstro has quit [Remote host closed the connection]
SouBE has joined #ipfs
<SouBE>
guys, I'm new to the IPFS concept, but I'm wondering if it could make it possible to implement distributed caching HTTP proxies
<SouBE>
proxy to proxy content updates
<lgierth>
that would be amazing
domanic has joined #ipfs
<lgierth>
ipfs as a content cache
<SouBE>
right
<lgierth>
do you have something in mind?
<lgierth>
cause i'm sure many would love to see that happen
<lgierth>
(me too)
<SouBE>
imagine you're on a cruise ship with a dodgy satellite uplink but you could share HTTP caches with fellow passengers
<SouBE>
or a long haul flight
<lgierth>
exactly :)
<lgierth>
would be so great to finally have that
<whyrusleeping>
SouBE: so you can kinda already do that, as long as youre browsing content within ipfs
<whyrusleeping>
you'll request the content from peers that you have contact with
freedaemon has quit [Remote host closed the connection]
<SouBE>
yes, but that requires that the original content is already in the IPFS network. in the HTTP proxy concept there needs to be a way to fetch content into the IPFS network from HTTP first
<whyrusleeping>
SouBE: yeah
<SouBE>
maybe HTTP URIs could be hashed too?
MatrixBridge has quit [Ping timeout: 256 seconds]
<whyrusleeping>
thats one way to go about it, have a lookup table from uri to the content hash
<SouBE>
if there's a hash that represents a URI like http://ipfs.io/styles/img/ipfs-logo-white.png, then you just need to have a local proxy on your machine that translates browser URI requests to IPFS hashes
<SouBE>
like IPFS references directories, it could probably reference URIs in a similar way
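A toy version of what SouBE describes (everything here is hypothetical: the lookup table, the gateway port, and serving misses with an error rather than forwarding to the origin):

```go
package main

import (
	"io"
	"net/http"
)

// uriToHash stands in for whatever URI-to-content-hash lookup would exist.
var uriToHash = map[string]string{
	"http://ipfs.io/styles/img/ipfs-logo-white.png": "<content-hash>",
}

func proxy(w http.ResponseWriter, r *http.Request) {
	if hash, ok := uriToHash[r.URL.String()]; ok {
		// Cache hit: serve from the local IPFS gateway instead of the origin.
		resp, err := http.Get("http://localhost:8080/ipfs/" + hash)
		if err == nil {
			defer resp.Body.Close()
			io.Copy(w, resp.Body)
			return
		}
	}
	// A real proxy would forward the request to the origin here.
	http.Error(w, "not cached", http.StatusBadGateway)
}

func main() {
	http.ListenAndServe(":3128", http.HandlerFunc(proxy))
}
```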
<clever>
biggest problem with making an http cache is dynamic content
MatrixBridge has joined #ipfs
<SouBE>
well that's a problem with current HTTP caches too. you just need to adhere to the origin's Cache-Control headers
<SouBE>
but well designed apps let proxies cache at least some parts of it
<clever>
even with cache-control headers, you cant know for sure if the content has changed or not
<lgierth>
could write a plugin for varnish
<lgierth>
ipfs upstreams
<jbenet>
SouBE: that's an interesting idea-- we could run the HTTP Request/Response pair through some filters that a) first determine if there's anything to cache, and b) return a hash to look up.
<clever>
the cache control headers tell you how long to keep the data and if you should recheck
<lgierth>
instead of http upstreams
<clever>
SouBE: IPFS relies on the contents of a given hash never changing, but even with cache-control headers, a given url can change at a later time
<Tv`>
clever: that's what etag is for
<clever>
but not everything implements it
<clever>
yeah, you could use the etag to solve some things
<Tv`>
clever: not everything is safely cacheable
<Tv`>
etag is the opt-in
<clever>
yep, but some http servers may not send an etag for the cacheable stuff
<clever>
older servers
<clever>
and for some things like a forum, you may want to cache an older copy of the threads, for offline viewing
<clever>
and the cache-control headers just deny that entirely
<SouBE>
maybe URIs cannot be directly mapped to hashes. there probably needs to be a distributed cache lookup process that finds the most recent version of a URI's content in the IPFS network. the result of that lookup is a hash of an IPFS object
<SouBE>
and clients should narrow their cache lookups with time windows, for instance "does anybody have content of URI X that is no older than 60 minutes?"
<clever>
there is a python program called http-replicator, which acts as a passive proxy, while saving everything in the correct directory structure
<clever>
you could then just ipfs add the whole cached dir
<SouBE>
nice idea
<clever>
squid's caching doesnt maintain the filenames on disk, only in its internal db
<SouBE>
also polipo has on-disk cache
<SouBE>
in IPFS, can only the creator of a directory update its contents?
<clever>
it works just like git, all directories are read only
<clever>
and the hash of the dir is just the hash of its contents (name+hash of each child)
<SouBE>
oh, ok
<clever>
so if you do modify a directory, you can reuse the sub-dirs and files you didnt change, and your new version has a new hash
<clever>
in git, every commit contains the hash for the root directory, which forms a tree containing every file in that version
<clever>
so git doesnt manage diffs between versions, it manages full snapshots: the state of every file at every commit
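The scheme clever is describing, in miniature: a directory's hash is computed from the names and hashes of its children, so an unchanged subtree keeps its hash and can be reused across versions. A self-contained sketch (sha256 here; the real systems use their own hashing and encoding):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// Node is either a file (Data set) or a directory (Children set).
type Node struct {
	Data     []byte
	Children map[string]*Node
}

// Hash derives a content address: files hash their bytes, directories
// hash their (name, child-hash) pairs in a deterministic order.
func Hash(n *Node) [32]byte {
	if n.Children == nil {
		return sha256.Sum256(n.Data)
	}
	names := make([]string, 0, len(n.Children))
	for name := range n.Children {
		names = append(names, name)
	}
	sort.Strings(names)
	h := sha256.New()
	for _, name := range names {
		child := Hash(n.Children[name])
		h.Write([]byte(name))
		h.Write(child[:])
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	dir := &Node{Children: map[string]*Node{
		"readme.txt": {Data: []byte("hello")},
	}}
	fmt.Printf("%x\n", Hash(dir))
}
```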
<SouBE>
so the current version of IPFS could allow me to collect a massive HTTP cache with a Polipo proxy, share a snapshot of it over IPFS, and then others could use it with Linux OverlayFS
<clever>
sounds right
<clever>
and once you publish that root hash, it is effectively read-only
<clever>
so once one person verifies the hash is safe, everybody can trust that it hasnt been modified/trojaned
<SouBE>
we would just need a daemon/script that switches the underlying snapshot of the cache directory to a newer one as soon as I release one. for that, some kind of distributed data feed would be required
<clever>
IPNS may do that
<SouBE>
true
Eudaimonstro has joined #ipfs
<clever>
from what ive seen, IPNS is just a key=value store, mapping your public key to an IPFS dir object
<clever>
that sort of lets you modify a directory without having to share things with somebody
<SouBE>
there could be a bot that crawls the web constantly and shares its cache over IPFS. people could suggest and vote on the sites the crawler fetches
<SouBE>
and a new snapshot could be released like in every 30 minutes
<clever>
in terms of storage, the crawler would need to maintain a full copy of everything in ipfs
<clever>
and to avoid doubling up, it would be best to modify your http cache to read/store directly into ipfs
<SouBE>
yah
<clever>
and once you share the root hash, people can just download what they want from you
<clever>
and anybody else that has it
<lgierth>
whyrusleeping: i'm writing the other half of blocklist, and i'm thinking we could store it in ipfs itself, couldn't we?