anonymuse has quit [Remote host closed the connection]
tinybike has joined #ipfs
fleeky has joined #ipfs
fractex has quit [Ping timeout: 250 seconds]
flyingzumwalt has joined #ipfs
flyingzumwalt has quit [Client Quit]
computerfreak has quit [Remote host closed the connection]
fleeky has quit [Ping timeout: 244 seconds]
chris613 has quit [Ping timeout: 244 seconds]
fleeky has joined #ipfs
matoro has joined #ipfs
fractex has joined #ipfs
<Remram[m]>
`/ipns/ipfs.io` works on my host, but I don't see a TXT record for ipfs.io? How does that work?
chris613 has joined #ipfs
fleeky has quit [Ping timeout: 252 seconds]
fleeky has joined #ipfs
<whyrusleeping>
Remram[m]: it also will use _dnslink.DOMAIN
anonymuse has joined #ipfs
anonymuse has quit [Remote host closed the connection]
<Remram[m]>
_dnslink.ipfs.io. 83 IN TXT "dnslink=/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n"
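For reference, the record above carries its target after a `dnslink=` prefix; extracting it can be sketched in shell (the helper name is invented, and a real resolver would first fetch the TXT record itself, trying both `_dnslink.DOMAIN` and `DOMAIN`):

```shell
# Hypothetical helper: extract the target path from a dnslink TXT value.
# A real resolver first fetches the TXT record itself (e.g. via dig),
# trying both _dnslink.DOMAIN and DOMAIN.
parse_dnslink() {
  case "$1" in
    dnslink=*) printf '%s\n' "${1#dnslink=}" ;;
    *) return 1 ;;
  esac
}

parse_dnslink 'dnslink=/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n'
# prints: /ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n
```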
<Remram[m]>
I see
anonymuse has joined #ipfs
<Remram[m]>
Shouldn't there be a mention of ipfs in the identifier for these entries instead of "dnslink" which could be anything? :P
<ekleog>
hmm… why isn't that a dnslink=/ipns/foobar, with foobar being a key? afaiu, it would make updates to the website less dependent on the delay implied by dns
<deltab>
Remram[m]: no git-like commits or history yet: I think the plan is to build that on IPLD (linked data), which is in development
<Remram[m]>
I was wondering that as well
<jakobvarmose>
ekleog, But that would make it dependent on ipns updates, which are also slow
<deltab>
is that because it's not possible to tell which of two conflicting records is newer?
<ekleog>
hmm, I thought I understood that ipns was faster to propagate than dns
rardiol has quit [Ping timeout: 244 seconds]
<deltab>
e.g. you publish Qmfoo and later Qmbar, and queries bring back both
dignifiedquire has quit [Quit: Connection closed for inactivity]
<ekleog>
deltab: there is a version number in ipns records
<jakobvarmose>
deltab, I don't know why it's slow
<deltab>
oh, didn't know that
<ekleog>
so when you get back both you send back the newer to everyone who sent you the old
<ekleog>
(according to the .pdf which was the only source of current-ish information I found)
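That rule — when two conflicting records meet, the higher version number wins — can be sketched as a shell toy (the `seq:value` encoding here is invented for illustration; real IPNS records are signed protobufs):

```shell
# Toy model of the rule: encode a record as "seq:value"; of two conflicting
# records, the one with the larger sequence number is the one to keep
# (and to send back to whoever handed you the stale one).
newer_record() {
  s1=${1%%:*}
  s2=${2%%:*}
  if [ "$s1" -ge "$s2" ]; then
    printf '%s\n' "$1"
  else
    printf '%s\n' "$2"
  fi
}

newer_record '3:QmFoo' '7:QmBar'   # prints: 7:QmBar
```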
<jakobvarmose>
deltab, As I understand it, it's intended as a more extensible format than the current merkledag format
<Remram[m]>
Also I have no idea what `ipfs files` is. I get that it creates a root directory and puts stuff into it, but is that published anywhere?
<jakobvarmose>
So it can replace both merkledag and unixfs. And probably be used for a lot of other things too
<deltab>
Remram[m]: yeah, as I understand it it's a mutable interface to the underlying immutable structure, like git's index
decadentism has joined #ipfs
ebel has quit [Ping timeout: 260 seconds]
<deltab>
maybe that's just mfs though
<Remram[m]>
So at some point you are supposed to get the id of the root with `ipfs files stat /` and put that in ipns somewhere?
<jakobvarmose>
Remram[m], Theoretically the gateway could stream the files without storing them, but I'm pretty sure it stores them until the next garbage collection
ebel has joined #ipfs
fleeky has joined #ipfs
tmg has joined #ipfs
fleeky has quit [Ping timeout: 244 seconds]
<Remram[m]>
Also what files does the webui list?
<Remram[m]>
it definitely doesn't list every file I have, it's not the pinned ones either
gmcquillan has quit [Ping timeout: 244 seconds]
fleeky has joined #ipfs
anonymuse has quit [Remote host closed the connection]
apiarian has quit [Ping timeout: 252 seconds]
JesseW has joined #ipfs
apiarian has joined #ipfs
r04r is now known as zz_r04r
mgue has quit [Read error: Connection reset by peer]
<Remram[m]>
the code seems to indicate that it lists pinned objects
tmg has quit [Ping timeout: 252 seconds]
Oatmeal has quit [Ping timeout: 240 seconds]
jgantunes has quit [Quit: Connection closed for inactivity]
decadentism has quit [Ping timeout: 264 seconds]
reit has joined #ipfs
mgue has quit [Ping timeout: 265 seconds]
fleeky has joined #ipfs
tmg has joined #ipfs
fleeky has quit [Ping timeout: 250 seconds]
ELLIOTTCABLE is now known as ec
jaboja has quit [Ping timeout: 244 seconds]
mgue has joined #ipfs
Oatmeal has joined #ipfs
cemerick has joined #ipfs
flyingzumwalt has joined #ipfs
tmg has quit [Quit: leaving]
tmg has joined #ipfs
chris613 has quit [Ping timeout: 250 seconds]
chris613 has joined #ipfs
flyingzumwalt has quit [Client Quit]
Israel_ has joined #ipfs
ygrek_ has quit [Ping timeout: 244 seconds]
Israel_ has quit [Read error: Connection reset by peer]
<Remram[m]>
Is there a way to pin a ipns path?
<Remram[m]>
Get the new values automatically, drop the old content and pin the new one?
<whyrusleeping>
Remram[m]: define 'automatically'
<Remram[m]>
without the user entering commands
<Remram[m]>
My use case is a laptop or not-always-on machine producing and publishing content, and another machine acting as a "seedbox" always having the latest version of the data pinned
<Remram[m]>
or backup machine
<Remram[m]>
or gateway
<Tangent128>
(how long can one count on the /ipns/ record persisting in the case of publisher down, seedbox up, anyways?)
<Remram[m]>
Well that's also something you'd want to "pin" but I assume it's harder
<whyrusleeping>
Tangent128: up to 24 hours if nobody is republishing it
<whyrusleeping>
you can have a third party republish the existing record for some time after that
<Tangent128>
Only the person with the privkey can republish, correct?
jakobvarmose has quit [Read error: Connection reset by peer]
intercat has quit [Read error: Connection reset by peer]
<whyrusleeping>
Remram[m]: so, does this seedbox just periodically poll for the latest values?
<Remram[m]>
Well until you have pubsub working
<Tangent128>
oh, is there a way to say you want to republish a given record?
<whyrusleeping>
Remram[m]: heh, then you already know where this train of thought goes
<whyrusleeping>
Tangent128: not yet, that's getting higher and higher on the todo list
<whyrusleeping>
mind actually filing a feature request issue for it?
<whyrusleeping>
Remram[m]: yeap, once we have a pubsub implementation we're going to make more progress there
<whyrusleeping>
we could however mock out how that system could work, and implement it using polling instead of pubsub, for now
<whyrusleeping>
if you're interested in contributing that would be a really great thing to have done :)
mgue has quit [Quit: WeeChat 1.5]
<Remram[m]>
I don't really think it's blocking, in fact you could implement this without polling either -- doing this whenever a resolve happens (possibly manually) would be progress already
<Remram[m]>
I don't really know Go but I could look into it if I get some time
<whyrusleeping>
Remram[m]: you have to be careful with the expectations of the command though, users might expect that 'oh, it's pinned'
<whyrusleeping>
when it really won't happen until X criteria is met
<Remram[m]>
well as long as something is pinned we're good, even if it's the old version
<Remram[m]>
right now scripting this is hard because you have to keep track of what to unpin
<Remram[m]>
(also, yes, there needs to be a way to keep the ipns record around)
<Tangent128>
It'd probably be nice if something like "ipfs files cp /ipns/hash /named-pin" could work.
<Tangent128>
As far as just having a "update pin once" command you could cron
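A cron-able "update pin once" step like the one floated here could be sketched as below. Everything in it is hypothetical: the real work would use `ipfs name resolve`, `ipfs pin add -r`, and `ipfs pin rm`, which need a running daemon, so this toy only prints the actions while keeping the awkward part — remembering what to unpin — in a small state file:

```shell
# Hypothetical "update pin once" step, suitable for cron. The real commands
# (needing a running daemon) would be roughly:
#   new=$(ipfs name resolve /ipns/<peer-id>)
#   ipfs pin add -r "$new" && ipfs pin rm "$old"
# This toy just prints the actions; the state file tracks the previous pin.
update_pin_once() {
  new="$1"      # freshly resolved /ipfs/ path
  state="$2"    # file remembering the previously pinned path
  old=$(cat "$state" 2>/dev/null)
  if [ "$new" = "$old" ]; then
    echo "unchanged"
    return 0
  fi
  echo "pin add $new"
  if [ -n "$old" ]; then
    echo "pin rm $old"
  fi
  printf '%s\n' "$new" > "$state"
}

# Demo with a throwaway state file:
state=$(mktemp)
update_pin_once /ipfs/QmOldRoot "$state"   # prints: pin add /ipfs/QmOldRoot
update_pin_once /ipfs/QmNewRoot "$state"   # prints: pin add /ipfs/QmNewRoot, pin rm /ipfs/QmOldRoot
rm -f "$state"
```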
mgue has joined #ipfs
<whyrusleeping>
Tangent128: that should have the effect of pinning the object in question
<whyrusleeping>
except cp by default does shallow copies, so it won't be guaranteed to be on your machine
<whyrusleeping>
(could maybe add a flag to cp that ensures all data gets pulled locally)
<Tangent128>
(to clarify, if you access the shallow data, the files you fetch will be thereafter pinned, correct?)
<whyrusleeping>
Tangent128: yeap
<whyrusleeping>
the files namespace does a 'best effort' pin
<whyrusleeping>
it will keep things around if you have them
<whyrusleeping>
but if you don't, it's okay with it
<Tangent128>
So pins are just GC roots, not a promise to fetch it. The pin command just enforces the fetch on its own?
<whyrusleeping>
yeap
<whyrusleeping>
well
<whyrusleeping>
the guarantee is that something pinned recursively will "always stay local"
doctorwhat has quit [Ping timeout: 240 seconds]
<whyrusleeping>
or rather, recursive pin = "this object and all of its descendants will always be local"
<whyrusleeping>
the files namespace uses a special pin that we haven't exposed yet, the best effort pin
<whyrusleeping>
which says "any subset of this content I appear to be interested in, keep around. If I don't already have it, don't bother"
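The model in this exchange — pins act as GC roots, and a collection pass drops unrooted local blocks — as a tiny shell toy (flat hash lists, no links or best-effort subtleties; purely illustrative, not real ipfs behaviour):

```shell
# Toy GC (not real ipfs): pins are the GC roots; a collection pass keeps
# only local blocks that are pinned and drops the rest. The real thing also
# keeps everything reachable from a recursive pin, and treats best-effort
# pins as "keep if already present".
gc_survivors() {
  pins="$1"      # space-separated pinned hashes
  blocks="$2"    # space-separated local blocks
  for b in $blocks; do
    case " $pins " in
      *" $b "*) printf '%s\n' "$b" ;;   # pinned: survives gc
    esac
  done
}

gc_survivors "QmA QmB" "QmA QmB QmC QmD"   # prints QmA and QmB; QmC/QmD are collected
```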
Bheru27 has quit [Read error: Connection reset by peer]
Bheru27 has joined #ipfs
rardiol has quit [Ping timeout: 260 seconds]
wavis has quit [Quit: Connection closed for inactivity]
JesseW has quit [Ping timeout: 240 seconds]
j12t_ has quit [Remote host closed the connection]
j12t has joined #ipfs
anonymuse has quit [Remote host closed the connection]
j12t has quit [Ping timeout: 240 seconds]
anewuser has quit [Quit: anewuser]
mgue has quit [Ping timeout: 264 seconds]
quoo2 has joined #ipfs
quoo1 has quit [Ping timeout: 264 seconds]
DmZDsfZoQv has quit [Remote host closed the connection]
mgue has joined #ipfs
VmtcXzLmiz has joined #ipfs
VmtcXzLmiz has quit [Remote host closed the connection]
chris613 has quit [Quit: Leaving.]
<Remram[m]>
Is there a way to publish something without making a copy of the data (original + ipfs store)?
<Remram[m]>
that stuff can be big
<Tangent128>
Would moving it into the store and symlinking to /ipfs/ work?
<Remram[m]>
The reverse would be a lot better but I guess it's a solution
pfrazee has quit [Remote host closed the connection]
cemerick has quit [Ping timeout: 276 seconds]
JesseW has joined #ipfs
rgrinberg has quit [Ping timeout: 250 seconds]
<Remram[m]>
What about a way to allow multiple users (private keys) to update some namespace (as in ipns)? I guess it's doable at routing level (like orbit-db does?) but hmm
grewalsat has quit []
Tv` has quit [Quit: Connection closed for inactivity]
_rht has joined #ipfs
j12t has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping created deps/rm-randbo (+1 new commit): https://git.io/v67mZ
<ipfsbot>
go-ipfs/deps/rm-randbo d1de49c Jeromy: remove randbo dep, its no longer needed...
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #3118: remove randbo dep, its no longer needed (master...deps/rm-randbo) https://git.io/v67ml
cryptix has joined #ipfs
cryptix has quit [Ping timeout: 244 seconds]
herzmeister has quit [Ping timeout: 276 seconds]
toxip has quit [Ping timeout: 240 seconds]
herzmeister has joined #ipfs
toxip has joined #ipfs
M-brain has joined #ipfs
zombu2 has joined #ipfs
<zombu2>
evening
tmg has quit [Ping timeout: 240 seconds]
dignifiedquire has joined #ipfs
cryptix has joined #ipfs
mgue has quit [Ping timeout: 264 seconds]
Mateon1 has joined #ipfs
ygrek_ has joined #ipfs
jakobvarmose has joined #ipfs
jakobvarmose_ has joined #ipfs
JesseW has quit [Ping timeout: 265 seconds]
m0ns00n has joined #ipfs
mgue has joined #ipfs
toxip has quit [Ping timeout: 258 seconds]
toxip has joined #ipfs
j12t has quit [Ping timeout: 244 seconds]
<daviddias>
dignifiedquire: checking your PR now
<daviddias>
so, by reducing the dependencies, namely, ipfs-merkle-dag
<daviddias>
the memory issues are gone?
<daviddias>
ah, circle is still not happy
G-Ray has joined #ipfs
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #473: Update chai-enzyme to version 0.5.1
zorglub27 has joined #ipfs
j12t has joined #ipfs
s_kunk has quit [Ping timeout: 258 seconds]
jgantunes has joined #ipfs
f[x] has joined #ipfs
_rht has quit [Quit: Connection closed for inactivity]
ygrek_ has quit [Ping timeout: 252 seconds]
mw[m] has joined #ipfs
herzmeister has quit [Quit: Leaving]
<dignifiedquire>
daviddias: it's by reducing the dependencies inherent to wreck that I got it back on a realistic level
<daviddias>
yeah, makes webpack go less nuts
<dignifiedquire>
we can probably get circle back if we make the 15MB file test smaller
<daviddias>
The 15MB was added as a sweet spot because we were not flushing things properly before
<daviddias>
but then again
<daviddias>
15MB out of 1.7GB of the natural vm size
<daviddias>
Webpack just has some sort of file explosion
<dignifiedquire>
but it doesn't make sense, js-ipfs works fine and has many more dependencies
<daviddias>
good point
fleeky has joined #ipfs
herzmeister has joined #ipfs
s_kunk has joined #ipfs
j12t has quit [Ping timeout: 276 seconds]
<ipfsbot>
[js-ipfs] diasdavid pushed 1 new commit to feat/interface-ipfs-core-over-ipfs-api-tests: https://git.io/v67BM
<ipfsbot>
js-ipfs/feat/interface-ipfs-core-over-ipfs-api-tests 5f02303 David Dias: feat(object-http): support protobuf encoded values
fleeky has quit [Remote host closed the connection]
toxip has quit [Ping timeout: 240 seconds]
fleeky has joined #ipfs
toxip has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid pushed 1 new commit to feat/interface-ipfs-core-over-ipfs-api-tests: https://git.io/v670P
<ipfsbot>
js-ipfs/feat/interface-ipfs-core-over-ipfs-api-tests f7a668d David Dias: feat(config-http): return error if value is invalid
f[x] has quit [Ping timeout: 250 seconds]
<dignifiedquire>
daviddias: sooo, we need to talk about streaming and js-ipfs-api
<daviddias>
with what regards?
<dignifiedquire>
with regards to ipfs log
<dignifiedquire>
fetch is not able to do streaming in the browser, nor is it able to cancel a request so "ipfs log" just hangs there, as the request is never ended by the server either
<dignifiedquire>
right, but we still need to write a template + the faq for richard
infinity0 has quit [Remote host closed the connection]
<dignifiedquire>
I'm improving the streaming solution on js-ipfs-api right now, then will open up libp2p-swarm, and go through everything that's not working yet
<daviddias>
sounds good, thank you :)
infinity0 has joined #ipfs
cow_2001 has joined #ipfs
nothingmuch has quit [Ping timeout: 276 seconds]
nothingmuch has joined #ipfs
mvorg has joined #ipfs
cryptix has quit [Ping timeout: 244 seconds]
<dignifiedquire>
daviddias: webrtc tests are not passing on the current pull branch
compleatang has quit [Quit: Leaving.]
<dignifiedquire>
were they passing for you?
<daviddias>
they need the interface-connection to be linked
<dignifiedquire>
it is
<daviddias>
hm, they pass to me
<daviddias>
what error are you getting?
<dignifiedquire>
now it's passing
<dignifiedquire>
relinked it
<dignifiedquire>
nvm
<daviddias>
Could you report that on the PR though? It is easier than shifting focus
<dignifiedquire>
daviddias: sweet, Chrome actually has stream support + cancel in fetch so I can at least support real streaming there :)
<daviddias>
yeah, that is what fetch was designed for, right?
<daviddias>
However, does go-ipfs http-api do the thing that a http-api should do for streaming?
<dignifiedquire>
well it just sends data endlessly on a http request
<dignifiedquire>
that's the right thing, for a definition of streaming ;)
<dignifiedquire>
sweet log tests passing properly in Chrome :)
<daviddias>
rad!
toxip has quit [Ping timeout: 244 seconds]
ebel has quit [Ping timeout: 260 seconds]
toxip has joined #ipfs
ebel has joined #ipfs
computerfreak has joined #ipfs
<ipfsbot>
[js-ipfs-api] diasdavid created feat/block-api-fix (+1 new commit): https://git.io/v676o
<ipfsbot>
js-ipfs-api/feat/block-api-fix 88fb60a David Dias: feat(block): make the block api follow the interface definition
<ipfsbot>
[js-ipfs-api] diasdavid opened pull request #356: feat(block): make the block api follow the interface definition (master...feat/block-api-fix) https://git.io/v6767
Encrypt has quit [Quit: Quitte]
zorglub27 has quit [Ping timeout: 244 seconds]
<ipfsbot>
[js-ipfs-api] dignifiedquire pushed 1 new commit to fetch: https://git.io/v67Xq
<ipfsbot>
js-ipfs-api/fetch 25bf1d1 Friedel Ziegelmayer: fix: log works in Node.js and Chrome
Encrypt has joined #ipfs
zorglub27 has joined #ipfs
apiarian has quit [Ping timeout: 244 seconds]
apiarian has joined #ipfs
Encrypt has quit [Quit: Quitte]
cemerick has joined #ipfs
<dignifiedquire>
daviddias: it'll be a bit more work getting things ready for other browsers, as I need to shim things to use xhr fallback
<dignifiedquire>
not sure it is worth it though
<dignifiedquire>
will work on pull things for now
cryptix has joined #ipfs
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
<zignig>
o/
<jakobvarmose_>
I never noticed before, but go-ipfs uses a lot of memory (12.5 GB virtual right now). Anybody know why?
<ion>
Virtual memory is irrelevant, you'll want to look at RSS
<jakobvarmose_>
resident is at 5.3 GB, so that is also pretty high I think
<ion>
agreed
herzmeister has quit [Quit: Leaving]
<Kubuxu>
jakobvarmose_: which version are you running? We fixed a few memory leaks in 0.4.3-rc
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #474: Update cheerio to version 0.22.0
<dignifiedquire>
daviddias: do we need to support stream muxing with multiplex in libp2p-swarm? because if so we need to port multiplex
<daviddias>
We don't need
<daviddias>
we have it to showcase how libp2p is versatile
<daviddias>
but multiplex is less reliable than spdy
<dignifiedquire>
okay so I skip the tests and open an issue for porting multiplex?
<daviddias>
yep
<daviddias>
tag with the help wanted and difficulty
abbaZaba has joined #ipfs
abbaZaba has quit [Client Quit]
ashark_ has joined #ipfs
<dignifiedquire>
done
<dignifiedquire>
making good progress on the swarm tests, most things are passing now :)
<daviddias>
:D
<daviddias>
spdy as well?
<dignifiedquire>
spdy in node is passing on libp2p-swarm tests yes
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-aegir-7.0.1 (+1 new commit): https://git.io/v65k5
<ipfsbot>
js-ipfs/greenkeeper-aegir-7.0.1 537c302 greenkeeperio-bot: chore(package): update aegir to version 7.0.1...
<dignifiedquire>
I hate aegir 7
<daviddias>
:(
<dignifiedquire>
-.-
<dignifiedquire>
but it's super bad for a library to ship a polyfill that attaches itself globally
<dignifiedquire>
but babel is not good enough to transpile everything
<dignifiedquire>
so really not sure what to do
<dignifiedquire>
I do get the feeling we should just include the polyfill in our tests + update the documentation that if you are running in a pure ES5 environment you need to include the polyfill yourself
<dignifiedquire>
that's the suggested approach from the babel docs
<dignifiedquire>
for libraries
<dignifiedquire>
and we are shipping libraries here
<daviddias>
but that defeats the purpose
<dignifiedquire>
no it does not
<daviddias>
of having a nice ES5 version
<daviddias>
*our purpose
Encrypt has quit [Ping timeout: 276 seconds]
pfrazee has joined #ipfs
<dignifiedquire>
but for what price?
<daviddias>
hmm
<dignifiedquire>
I have already spend days on this
<dignifiedquire>
*spent
<daviddias>
The ES5 version is a dev enabler
<dignifiedquire>
for whom?
<dignifiedquire>
who is actually running against a pure es5 env, without transpiling?
Kay[m] has left #ipfs ["User left"]
<daviddias>
Asking the devs that want to do require('module') to add a polyfill, when require('module/src') doesn't require anything, means that we are just adding an extra step and therefore defeats our purpose of creating a version to reduce friction
<daviddias>
I could understand that we are not in 2015 anymore and that shipping ES6 code with the current Node.js and browser landscape is totally ok
<daviddias>
reducing buildstep madness and dev time
<dignifiedquire>
but for those running in node they don't need to include the polyfill
<dignifiedquire>
the node build doesn't change
<dignifiedquire>
and we run through ci to ensure that things are working without polyfill in node >=4
<dignifiedquire>
and if you run require in the browser you already have to do some work
<dignifiedquire>
and in all current browsers things will still work even without a polyifll
<dignifiedquire>
*polyfill
<daviddias>
Could you make the case on an issue?
<dignifiedquire>
where?
<daviddias>
One thing to make a strong point for it is having one of these banners with the browser versions we support http://npmjs.org/stream-http
<daviddias>
ipfs/community
<daviddias>
it influences the js-dev-guidelines
<dignifiedquire>
that banner comes when we actually run our tests there
<daviddias>
it can even be a PR to the js-dev-guidelines
<dignifiedquire>
which we haven't set up yet
<dignifiedquire>
and will 10x our ci runtime
<dignifiedquire>
unless we pay for it
<daviddias>
dignifiedquire: can't we have that just for release?
<dignifiedquire>
no, because you want to have these tests on PRs so you actually know if you break something
<dignifiedquire>
otherwise you are like, oh let's make a release, but then all the things are broken
<dignifiedquire>
you could do it only on master though
<daviddias>
Well, I believe that it is unrealistic to expect to run tests on 40 platforms per commit and expect devs to be productive
<ipfsbot>
[js-ipfs-api] diasdavid created fix/http-api-doesnt-care (+1 new commit): https://git.io/v654U
<ipfsbot>
js-ipfs-api/fix/http-api-doesnt-care 5e0ede9 David Dias: fix(block): cope with the fact that sometimes go-ipfs sends x-stream, other times it doesn't
spilotro has joined #ipfs
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #476: Update css-loader to version 0.24.0
<ipfsbot>
[js-ipfs-api] diasdavid deleted fix/http-api-doesnt-care at 5e0ede9: https://git.io/v65Ba
<dignifiedquire>
npm installs are on an all time slow today :(
<ipfsbot>
[js-ipfs-api] diasdavid tagged v8.0.1 at b049041: https://git.io/v65Rc
<Remram[m]>
It seems to me that ipfs desperately needs a better way of pinning stuff
<Remram[m]>
It's the one claim from your website that is not really realized
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-ipfs-api-8.0.1 (+1 new commit): https://git.io/v650T
<ipfsbot>
js-ipfs/greenkeeper-ipfs-api-8.0.1 2cf67f1 greenkeeperio-bot: chore(package): update ipfs-api to version 8.0.1...
<Remram[m]>
"Each network node stores only content it is interested in" < actually you store content you looked up before, and still it might be pruned from under you
cketti has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid pushed 1 new commit to feat/interface-ipfs-core-over-ipfs-api-tests: https://git.io/v65E8
<ipfsbot>
js-ipfs/feat/interface-ipfs-core-over-ipfs-api-tests 5e6387d David Dias: feat(block-core): add compliance with interface-ipfs-core on block-API
peterix has quit [Quit: No Ping reply in 180 seconds.]
peterix has joined #ipfs
j12t has joined #ipfs
rardiol has joined #ipfs
tmg has quit [Ping timeout: 265 seconds]
zorglub27 has quit [Remote host closed the connection]
zorglub27 has joined #ipfs
tmg has joined #ipfs
<ipfsbot>
[js-ipfs-api] greenkeeperio-bot opened pull request #360: Update aegir to version 7.0.1
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #477: Update eslint-config-standard to version 6.0.0
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #478: Update ipfs-api to version 8.0.1
<whyrusleeping>
Remram[m]: 'ipfs pin add'
Akaibu has quit [Quit: Connection closed for inactivity]
ajp has quit [Ping timeout: 258 seconds]
<Remram[m]>
e.g. I want to store part of archive.org's content (not the full content, but I want to help)
<Remram[m]>
manually pinning IPFS hashes doesn't realize that goal
<whyrusleeping>
Remram[m]: alright, ideally, what does your contribution to that effort look like?
<whyrusleeping>
what UX would you like to see?
ajp has joined #ipfs
cemerick has joined #ipfs
cyberwolf has joined #ipfs
tmg has quit [Ping timeout: 258 seconds]
cyberwolf has quit [Quit: Konversation terminated!]
<pjz>
Also, pinning may not be as useful as caching
<pjz>
if people are really using ipfs to access archive.org stuff, it might be useful for you to have your ipfs node pull up the 'most popular' pages daily so it will cache (and serve) them
<pjz>
then you'd be saving archive.org bandwidth
<whyrusleeping>
yeah, thats definitely something we're interested in doing
<7JTABLZG6>
[js-ipfs] diasdavid closed pull request #433: Update interface-ipfs-core to version 0.14.0
<1JTAA3WLF>
[js-ipfs-api] diasdavid deleted greenkeeper-interface-ipfs-core-0.14.0 at 7f98b23: https://git.io/v65oX
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-interface-ipfs-core-0.14.0 at 6b13811: https://git.io/v65o1
<pjz>
oh, hm, IWBNI 'most downloaded' hashes were queryable
<pjz>
maybe 'most downloaded in the last <timeperiod>'
<ipfsbot>
[js-ipfs] greenkeeperio-bot opened pull request #436: Update ipfs-api to version 8.0.0
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #479: Update ipfs-api to version 8.0.0
<pjz>
then people could help just by saying 'ipfs sharetheload <node>' and it would query the node for what was most-downloaded, download one copy and advertise it
<pjz>
maybe that would be useful?
<ipfsbot>
[go-ipfs] whyrusleeping pushed 2 new commits to master: https://git.io/v65Kt
<ipfsbot>
go-ipfs/master 6b7c2fe Jeromy: improve test coverage on merkledag package...
opal has quit [Quit: Have you eaten ground beef recently? Ground beef is the result of everything outside of the cow's bones (including nerves) being ground up. Ground beef often contains prions (misfolded proteins) which due to mammalic protein metabolism act virally and wi]
opal has joined #ipfs
jpaglier has joined #ipfs
jpaglier has left #ipfs [#ipfs]
cwahlers has quit [Read error: Connection reset by peer]
<whyrusleeping>
if you have any other questions beyond that, ask for clarification there
<Kubuxu>
I am off for today, good night.
<zombu2>
nn
<whyrusleeping>
in regards to a 'central node', we currently have some gateway nodes that are used for bootstrapping your connections to the network. They are not necessary and you can either use your own nodes for bootstrapping, or rely on other methods (such as multicast DNS)