<dbolser>
dang, missed the message to me.. who spake?
<dbolser>
(I got the alert, but lastlog is empty)
<ion>
<Poeticode> dbolser: hm I'm a networking newb, so I don't quite know what that means, but I've just got apache forwarding to my local IPFS gateway with some caching. Then got a TXT record on my DNS that points to my most recent IPFS hash.
<dbolser>
ion: thanks
<dbolser>
Poeticode: ahh... I'm afraid I don't understand that!
<JCaesar>
Hm, can I somehow find out how many nodes a pin add will require? I've got something like "Fetched/Processed 29372 nodes" and am wondering how many more…
<fiatjaf_>
on js-libp2p, what exactly must I do to send a simple text message to someone else? Suppose I've already found the peers and I'm about to call .dial() on them. Should I specify a name for my protocol in the dial() call, and on the other end call .handle() with that same protocol name? Then what? Should I call connect()? What do I do with the Connection object passed to the .dial() and .handle() callbacks?
<daviddias>
hey hey
<daviddias>
yeah fiatjaf_, we use pull-streams because of error handling and memory allocation (Node.js Streams are brutal), see more info here -> https://github.com/ipfs/js-ipfs/issues/362
<daviddias>
fiatjaf_: when you get that conn object, you just write to it
<daviddias>
so that you get more familiar with all the Streams family
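daviddias' answer can be sketched without the real library: in pull-streams a stream is just a pair of functions, and the conn you get from dial()/handle() is a duplex built from them. A toy version of that calling convention (illustrative names only, not the js-libp2p or `pull-stream` API):

```javascript
// Toy version of the pull-stream handshake: a source is a function
// (abort, cb) and a sink is a function (source). This is NOT the real
// `pull-stream` module, just the calling convention it uses.

// A source that yields each item of an array, then signals end.
function values (items) {
  let i = 0
  return function source (abort, cb) {
    if (abort) return cb(abort)
    if (i >= items.length) return cb(true) // `true` signals a clean end
    cb(null, items[i++])
  }
}

// A sink that drains a source and hands the collected values to `done`.
function collect (done) {
  return function sink (source) {
    const out = []
    ;(function next () {
      source(null, function (end, value) {
        if (end) return done(end === true ? null : end, out)
        out.push(value)
        next()
      })
    })()
  }
}

// "Writing to the conn" amounts to wiring a source into the conn's sink:
let collected = []
collect(function (err, result) { collected = result })(values(['hello', 'world']))
console.log(collected.join(' ')) // prints "hello world"
```

The appeal daviddias alludes to is visible even in the toy: errors and backpressure both travel through the same `(abort, cb)` handshake, with no event-emitter machinery.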
<victorbjelkholm>
this just hit Show HN: "Instead of uploading the dropped files to an URL, this subclass of Dropzone.js publishes them to IPFS with js-ipfs (no running local nodes needed)." Looks sweet https://github.com/fiatjaf/ipfs-dropzone
<victorbjelkholm>
seems to be made by fiatjaf who exists here, great work :)
<victorbjelkholm>
fiatjaf: on the website, would be nice to be able to zoom out enough to see the entire world
<bege>
@fiatjaf_ thanks
<Poeticode>
dbolser: what's the confusing part? I'm running an Apache Webserver that points to my IPFS node. Then in my DNS, I created a TXT record with the value `dnslink=/ipfs/$most_recent_hash_from_running_ipfs_add/`
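Poeticode's setup boils down to two pieces of config. The domain, port, and hash below are illustrative, and depending on the go-ipfs version the TXT record goes on the domain itself or on a `_dnslink.` subdomain:

```
; DNS zone: a TXT record carrying the dnslink (hash is illustrative)
example.com.  300  IN  TXT  "dnslink=/ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"

# Apache: reverse-proxy the site to the local IPFS gateway (default :8080)
<VirtualHost *:80>
    ServerName example.com
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

Publishing a new version then means updating the TXT record to the latest hash from `ipfs add`, as Poeticode describes.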
<psidhu>
I have a bit of confusion about how ipfs works. I understand that you can add things to ipfs; what I don't necessarily understand is node to node. Can you set up your own private ipfs network?
<psidhu>
also, what happens when a node goes away, does the data (file) disappear as well?
<letmutx>
"what happens when a node goes away, does the data (file) disappear as well?" depends on whether any other node has it.
<psidhu>
but I thought ipfs removes duplicate files?
<psidhu>
which was my next question.. is there caching? sounds like there is from your response.
<letmutx>
yes, there is caching.
<letmutx>
duplicates may not be permanent
<psidhu>
so the protocol helps determine where there's more latency, then caches a copy of the file to a node in that area?
<letmutx>
yes. when you transfer data from one node to another, intermediate nodes may cache it.
<psidhu>
ah, that makes sense, especially with the p2p aspect of it
<letmutx>
"you setup your own private ipfs network" - removing the default bootstrap nodes and adding your own node to the bootstrap list may help you set up a private network.
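letmutx's suggestion corresponds to editing the `Bootstrap` list in the IPFS config (`~/.ipfs/config` by default). The multiaddr and peer ID below are illustrative placeholders for your own node:

```json
"Bootstrap": [
  "/ip4/192.168.1.10/tcp/4001/ipfs/QmYourBootstrapNodePeerIdGoesHere"
]
```

With the default entries removed and only your own nodes listed, your peers will only discover each other; note that this is isolation by obscurity, not authentication, so it doesn't by itself stop an outside node that already knows your addresses from connecting.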
<psidhu>
hm, ok. but there's no explicit support for that currently?
<letmutx>
not really sure.
<psidhu>
gotcha
<psidhu>
well thanks for the help. cleared up some stuff for me.
<gozala>
daviddias: is there somewhere I can read about ipfs pubsub? I'm somewhat confused how it fits into a content-addressable file system given that it's neither content-addressable nor a file system :)
<gozala>
daviddias: or would it be fair to assume that it's some other protocol that just happened to be bundled with ipfs for convenience?
<lgierth>
gozala: yes the latter
<Ronsor_>
I'd say the latter thing
<Ronsor_>
it's convenient to have a method of two-way communication with any sort of p2p network
<lgierth>
it's one of the primitives exposed by libp2p, and ipfs exposes that to the API/CLI
<lgierth>
gozala: pubsub is very useful for exchanging ipfs hashes ;) ipns is in the process of being based on pubsub too (in addition to the dht as a more permanent store)
<Ronsor_>
pubsub ipns is faster
<Ronsor_>
dhts are slow
<Ronsor_>
which is not good if you have a changing resource and a need to do continual lookups
<Ronsor_>
for torrents, it's fine since once it starts you have the peers
<daviddias>
gozala: exactly what lgierth brought up
<daviddias>
through a good PubSub network, you can enable many other distributed web scenarios
<gozala>
lgierth: daviddias thanks, that does confirm my intuition about it and gives me stuff to read up on
<daviddias>
gozala: something along the lines of "what IP Multicast never got a chance to be"
<gozala>
daviddias: lgierth So I have this angle that I’m interested in exploring hence my inquiry here
<lgierth>
shoot
<gozala>
I don’t know how closely you’ve been following the beaker browser stuff
<lgierth>
on and off
<lgierth>
(i'm loving it)
<Ronsor_>
isn't beaker browser mac only?
<gozala>
but there is a ton of interesting stuff that has been coming off of it lately
<gozala>
Ronsor_: it’s electron based so I’d assume it’s not
<gozala>
lgierth: specifically there is a notion of apps that are loaded off dat:// urls
<gozala>
where you can read from any archive and write to an archive you’re controlling
<gozala>
lgierth: it’s actually moving towards slightly different path
<gozala>
where app (viewer) is on one dat
<gozala>
and store is another
<gozala>
and there are now twitter, medium, and gist clones
<gozala>
and more and more is coming up every day
<lgierth>
in ipfs this would basically boil down to wiring the name.publish() api to a UI element for picking a key/identity (simplest case)
<gozala>
There are some limitations that I would not go into (unless asked to) that IPFS would not have
<daviddias>
gozala yeah, I've seen that they are pushing lots of experiments. Had a good chat with Tara at mozfest and how they pick the direction to try new things
<daviddias>
gozala are you familiar with MFS?
<gozala>
MFS ?
<gozala>
daviddias: acronym does not ring a ball
<lgierth>
ipfs-companion lately started exposing window.ipfs to content pages, so that any site/app could modify itself and publish the new ipns entry
<gozala>
bell
<daviddias>
an mfs instance and an IPNS name and you have a full File System API and a way to subscribe and publish
<lgierth>
it's called files
<lgierth>
wasn't mfs just an early codename?
<daviddias>
gozala: we haven't documented it properly, but the gist is that if you do "ipfs files --help" you get access to the API that lets you create a virtual directory tree on IPFS
<daviddias>
it feels just like a folder
<daviddias>
it has paths /a/b/c
<daviddias>
you can write files
<daviddias>
copy files
<daviddias>
mv files
<lgierth>
pull in files from somewhere else
<gozala>
daviddias: oh I see what you’re telling me
<daviddias>
lgierth: MFS is the internal subsystem. Files is the overarching API that does that and more things
<lgierth>
ack
<lgierth>
right, it's called mfs in go-ipfs internals too
<lgierth>
oh, let me file an issue for the name.publish() => UI dialog thing
<gozala>
daviddias: lgierth what I was trying to say is that I think beaker is a very successful vehicle for running distributed app experiments
<daviddias>
and so, MFS is smart enough that if you write to /a/b/c/d it will update the internal directory structure (names to hashes), so that if you need to send an update of the folder to the network you can just `ipfs files stat /` and get the latest root hash for your directory
<lgierth>
yeah it is indeed
<daviddias>
true that
<gozala>
Other than just having a browser, it’s also IMO due to the simple API exposed to app authors
<gozala>
I would be interested in having something like beaker for IPFS
<gozala>
or maybe even part of the beaker
<lgierth>
have you seen ipfs-companion in brave?
<gozala>
daviddias: lgierth related (but not apparently) to this: are peer-ids ipns hashes?
<gozala>
I guess what I’m trying to point out is, beaker ended up exposing quite a small API to enable experimentation
<lgierth>
yeah a domain specific API, instead of a general purpose API
<lgierth>
well i guess it's general purpose for beaker's domain :)
<lgierth>
i think the building blocks for a similar api in ipfs are there
<gozala>
furthermore it would be nice if there was a somewhat common beaker-like API that app authors could use
<gozala>
lgierth: daviddias in other words I’m interested in pursuing beaker under IPFS
<daviddias>
the reality is that IPFS as a protocol does more things, and I don't mean there isn't something we can do about it; rather, instead of trying to trim the protocol, we can just have nice building blocks on top that users can pick for their use case
<daviddias>
like ipfs-pubsub-room
<daviddias>
or y-ipfs-connector
<gozala>
and would like to find out if I could get some support etc..
<daviddias>
absolutely
<daviddias>
if I understand correctly, there is an intent for the beaker browser to start using muon (same as brave) instead of electron
<daviddias>
which means that the "Component extensions" will be there
<gozala>
I was not aware of that
<daviddias>
for beaker as well
<gozala>
daviddias: I’m not following; I’m not sure I understand the 'Component extensions’ reference here
<daviddias>
In Brave, thanks to Muon, we can run IPFS in a Component extension which is basically an extension with special privileges such as access to WebRTC and ability to register protocol handlers
<gozala>
I see
<gozala>
daviddias: I guess I’m less interested in extensions and more in making ipfs an integral part, but that’s a different matter
<gozala>
daviddias: BTW how does the ipfs protocol handler deal with content origin?
<Ronsor_>
content origin
<Ronsor_>
not that problem again
<gozala>
when I prototyped that with electron there was no way to control that
<Ronsor_>
so umm yeah
<Ronsor_>
still waiting on those base32 hashes
<Ronsor_>
case insensitivity is necessary
<lgierth>
Ronsor_: it's fine in muon, not an issue here
<gozala>
Ronsor_: yeah I know, I was the one complaining about it :)
<gozala>
lgierth: oh how’s that ?
<daviddias>
gozala: I'm also interested in making IPFS a protocol that is accessible and integrated in every Web Browser and I guess I'm now more focused on the path towards that after the many conversations and experiments :D
<lgierth>
yeah you were super helpful in figuring out ipfs:// URLs and dweb: URIs
<daviddias>
Ronsor_: we have CIDs which can be represented in base32 or base16
<Ronsor_>
cool
<lgierth>
daviddias: the issue is that browsers usually just lowercase the host part (i.e. the hash in ipfs://). QmFoo => qmfoo => trash
<gozala>
daviddias: that’s totally reasonable, I’m not trying to pursue switching gears, just trying to figure out if I can experiment with this
<Ronsor_>
even if they didn't, case sensitive hosts would confuse the crap out of most users
<Ronsor_>
since "FaCeBook.com" is the same as "facebook.com"
<Ronsor_>
ideally we could replace the "web" with an ipfs-based network
<Ronsor_>
where all pages are markdown
<daviddias>
gozala: lots of good feedback here. I talked a ton with pedrot and others about the urgency of getting a nice, documented File System-like API through IPFS to solve the lack of a proper File System API in the browser; we need to get back at it
<Ronsor_>
daviddias: what about FileReader?
<Ronsor_>
any real filesystem api would be a security issue unless it's sandboxed
<Ronsor_>
then it'd be 'why not fake it on indexeddb or localstorage?'
<daviddias>
When IPFS is running on your browser tab, it is using indexeddb
<Ronsor_>
I know that
<gozala>
daviddias: lgierth: Let me rephrase my question about the content origin
<gozala>
so I noticed in the brave demo that urls are ipfs://hash
<Ronsor_>
I'm just saying the lack of a proper file system api isn't technically a problem since it's a necessary (IMO) limitation
<gozala>
so I assume you either went with a non-standard url, and the origin is likely `ipfs://` or something, or you figured out a way to fix that somehow
<gozala>
I’m curious which one ?
<lgierth>
it's supposed to be a WHATWG URL with a non-special scheme
<lgierth>
we were hoping it'd just obey the same origin logic
<lgierth>
but haven't tested it yet to be honest
<Kythyria[m]>
Well, you _could_ create a sandboxed filesystem API, but there's little call for it unless porting things that assume a filesystem too much.
<lgierth>
gozala: as in, my expectation is the origin would simply be ipfs://QmFoo for a URL of ipfs://QmFoo/some/path.html
<Kythyria[m]>
(and it's quite common to make assumptions that don't hold up on the original OS anyway)
<gozala>
lgierth: I think it’s going to be "ipfs://" unless muon does something different there
<daviddias>
lgierth: gozala right now it doesn’t do that, as the protocol handler in muon basically says "someone requested on your protocol, now you figure it out"
<lgierth>
just because it's a non-special scheme?
<daviddias>
And we haven’t gotten to the part of implementing such security protections
<gozala>
makes it standard scheme which causes the host to be lowercased
<lgierth>
> Otherwise the scheme will behave like the file protocol, but without the ability to resolve relative URLs.
<gozala>
if registerStandardSchemes is not called
<lgierth>
aha nice
<gozala>
then it’s not lowercased but origin appears to be just scheme (on electron)
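The behavior lgierth and gozala are probing can be checked against the WHATWG URL parser that ships with Node. This only shows what the URL spec mandates; what muon/electron actually do with a registered `ipfs://` handler may differ:

```javascript
// WHATWG URL behavior for special vs non-special schemes (Node's built-in URL).

// Special schemes (http, https, ws, file, ...) lowercase the host:
const special = new URL('http://QmFoo.example/some/path.html')
console.log(special.hostname)   // "qmfoo.example" — case is lost

// Non-special schemes get an "opaque host": case is preserved...
const ipfsUrl = new URL('ipfs://QmFoo/some/path.html')
console.log(ipfsUrl.hostname)   // "QmFoo" — case preserved

// ...but the origin is opaque, serialized as "null", so same-origin
// logic gets no help from the parser for ipfs:// URLs:
console.log(ipfsUrl.origin)     // "null"
```

This matches lgierth's hope at the parsing level (the hash survives), while confirming gozala's observation that the origin ends up effectively scheme-only unless the browser adds its own origin logic.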
<lgierth>
does firefox do this similarly? i.e. non-standard schemes are handled like file:?
<lgierth>
ah wait, hehe
<gozala>
at least on surface API in Muon and electron is the same
<lgierth>
generally no additional url schemes in firefox, yet :)
<lgierth>
ok, awesome, so this reads to me as: we'll do the base32 CID migration, and then we can also have secure origins
<gozala>
lgierth: I’m not sure what the state of affairs is in firefox right now
<gozala>
but it was similar to electron’s but there was more controls
<gozala>
like you could do redirects
<lgierth>
no progress -- we'll try to actively contribute to it
<gozala>
lgierth: so is the plan to just switch to base32 everywhere
<lgierth>
reason we need the base32 CID migration is that currently the default hash encoding is case-sensitive, which would lead to very annoying UX issues (user copy-pastes hash, hash gets lowercased, user gets error)
<gozala>
or do redirect kind of a thing ?
<lgierth>
switch
<lgierth>
and where possible redirect to the same hash but encoded in base32
<gozala>
ah ok
<gozala>
lgierth: yeah so redirect isn’t possible sadly in electron as far as I can tell
<gozala>
but again if base32 becomes canonical way it’s probably not a big deal
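lgierth's copy-paste concern is easy to demonstrate: base58btc is case-sensitive, so lowercasing destroys the hash, while RFC 4648 base32 decodes identically in either case. The decoder below is hand-rolled for illustration; real code would use the multibase/CID libraries:

```javascript
// base58btc (the default Qm... encoding) treats 'L' and 'l' as different
// digits, so lowercasing produces a different name entirely.
const cid = 'QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG' // illustrative hash
console.log(cid.toLowerCase() === cid) // false — the hash is destroyed

// RFC 4648 base32 is case-insensitive by convention: normalize, then decode.
const BASE32 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'
function base32DecodeToHex (s) {
  let bits = ''
  for (const ch of s.toUpperCase().replace(/=+$/, '')) {
    bits += BASE32.indexOf(ch).toString(2).padStart(5, '0')
  }
  let hex = ''
  for (let i = 0; i + 8 <= bits.length; i += 8) {
    hex += parseInt(bits.slice(i, i + 8), 2).toString(16).padStart(2, '0')
  }
  return hex
}

// 'MFRGGZDF' is base32 for the bytes "abcde"; case does not matter:
console.log(base32DecodeToHex('MFRGGZDF') === base32DecodeToHex('mfrggzdf')) // true
```

So a base32 CID survives both the host-lowercasing browsers do and a user's sloppy copy-paste, which is the whole motivation for the migration discussed here.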
<lgierth>
we already want to do this anyway because on the gateway everything is the same origin too :/ "https://ipfs.io" -- and suborigins aren't going to be implemented everywhere anytime soon. so we want to redirect ipfs.io/ipfs/hash to hashbase32.ipfs.dweb.link, so that it gets its own origin
<gozala>
cool
<lgierth>
we haven't yet checked if browsers generally allow domain parts to be longer than the 63 bytes allowed for "labels" in the host name spec
<gozala>
lgierth: so one last question
<gozala>
or rather rephrase of it
<lgierth>
we might have to split with a dot :/ and then accept dots as padding, sigh :/
<lgierth>
yeah
<gozala>
so say I have this beaker ipfs browser
<gozala>
and I connect to some ipfs://Queuou
<gozala>
is there a way I can tell how many peers there are for that specific content?
<lgierth>
nope
<lgierth>
you can get "at least X"
<gozala>
lgierth: maybe I should explain my use case rather than ask specific question
<lgierth>
it's `ipfs dht findprovs <hash> | wc -l` with go-ipfs, i'm not 100% sure js-ipfs has it too
<gozala>
lgierth: so what I’d like to be able to do is have UI
<gozala>
that shows who seeded you the content
<gozala>
actually let me take a few steps back
<gozala>
idea is when you start a browser first
<gozala>
it sets you up
<gozala>
meaning it creates an IPNS node and lets you mount a profile, that is, some metadata in ipfs
<gozala>
like your avatar maybe links etc..
<gozala>
so your browser is essentially a node
<gozala>
then when you load some app from ipfs / ipns network
<gozala>
I would like to let you explore who contributed in getting it to you
<gozala>
and potentially allow you to explore their profiles (assuming they created ones and decided to make it public)
<lgierth>
yeah -- basically like an enhanced progress report. not just "100%" - 1.3 MB from QmSomeNode, 4.2 MB from QmOtherNode"
<lgierth>
* not just "100%", but also "[...]"
<gozala>
lgierth: yeah that’s part of it but another aspect is building social connections
<gozala>
like to discover their feed and potentially send them a message and what not
<lgierth>
sounds great :):)
<gozala>
lgierth: so with that I’m trying to understand what info is available on peers ?
<lgierth>
there is no info directly connecting the data you fetched from the peers you fetched it from
<gozala>
lgierth: in some way I’d need to have a mapping from the peer seeding the content to their ipns node hash
<lgierth>
you can do a findprovs call separately, and likely it includes some peers you want, but might just be *any* peers that the network reports have the content
<lgierth>
this info can be added though
<lgierth>
as kind of an extended progress report, similar to the progress on files.add()/cat(), but with more detailed peer info
<lgierth>
then you run into the question of how long you want to keep this info though. whether to store it, etc.
<gozala>
lgierth: so let me make sure I understand. 1. There is no way currently to tell what data come from which peers, but could possibly be added). 2. You could do `findprovs` which I presume gives you some peers in the network right now that have that content.
<lgierth>
exactly ^
<gozala>
lgierth: ok
<gozala>
lgierth: so say I did get the list of peers, is there a way to map from each to their ipns hash?
<lgierth>
also note that the dht is only one possible content routing mechanism (= find nodes who have certain content), there's also room for mechanisms that are locally aware
<lgierth>
their peerid is a possible ipns hash. but they might not be publishing any ipns at all. or they might be publishing with a different ipns key
<gozala>
lgierth: ok perfect
<gozala>
yeah if peers want to stay anonymous that’s totally fine
<lgierth>
it's basically only incidental that the PeerID (which is a libp2p-level concept) is the same as the default IPNS hash. they are not bound to each other at all apart from that
<gozala>
which is what not publishing, or publishing under a different ipns key, would be
<gozala>
lgierth: that’s a good coincidence as I was trying to do it in dat :)
<lgierth>
:) it's an incident that might become less likely in the future
<gozala>
sigh, ok
<gozala>
so how then would you recommend doing this mapping?
<lgierth>
a gap i noticed here: a convention for how to broadcast updates. for example it could say, if you "fork" a thing, publish the new ipns or ipfs hash to a pubsub channel named after the thing's original hash
<gozala>
that would not depend on this (potentially temporary) incident
<lgierth>
then the original publisher could listen for "forks" of their thing, and add them to a history/network data structure that's linked to from the thing
<gozala>
lgierth: wait you lost me there
<lgierth>
or that history/network data structure could be maintained in a distributed fashion (=> crdt)
<gozala>
what do you mean by “fork” a thing ?
<lgierth>
display it, make a change, publish it, find other people reading/working on the same document (different hashes/versions, but same document)
<gozala>
all I’m trying to do is to allow a peer to provide its metadata
<gozala>
lgierth: I think we’re talking about different things here
<lgierth>
a peer as in ipns, yes? not a peer as in libp2p
<lgierth>
because ipfs content and ipns entries are completely decoupled from the actual node daemon
<gozala>
peer as in nodes that have content that I’m looking at in my browser now
<gozala>
so either peers that have ipfs://mytwitterclone
<lgierth>
so these abstractions don't really allow "let's send a message to the publisher of this thing" in the sense of just opening a stream
<gozala>
or ipns://mytwitterclone
<lgierth>
you'd use pubsub to communicate with them on a certain topic derived from the thing
<gozala>
lgierth: oh I see what you’re talking about
<gozala>
lgierth: my assumption there was
<gozala>
that my profile will list my mailbox address, as in a pubsub channel that I’m watching
<gozala>
so you can send me a message
<lgierth>
(or: an ipfs hash of a message)
<gozala>
yeah
<gozala>
so from what I gathered currently if I get peers via findprovs
<gozala>
and then look at their `ipns://peer.id/inbox`
<gozala>
assuming such exists it will have a channel where I can publish to send a message
<gozala>
more like `ipns://${peer.id}/inbox`
<gozala>
but then it also seems that peer.id just happens to be an ipns address today, and that might change
<gozala>
so my question was how do I get from `peer` to some `ipfs/ipns` address that they chose to put their metadata under ?