pfrazee has quit [Remote host closed the connection]
<whyrusleeping>
jbenet: fantastic, thank you!
<MikeFair>
Has anyone considered how, technically, to implement a consensus algorithm over the ipfs swarm?
<AniSkywalker>
MikeFair raft?
<AniSkywalker>
Oh do you mean cluster?
<MikeFair>
nm the question; so I had a thought about how code running on many nodes can communicate that they are part of the same swarm/code by registering announcements of the same ipns address
<MikeFair>
AniSkywalker: Yeah; clustering; how does the network know "where to send a message"?
<MikeFair>
I guess I'm actually talking about the more general case
aquentson1 has quit [Ping timeout: 240 seconds]
<MikeFair>
How do I write code that presents a service to the network via a cluster; something where accessing the address is connecting to the service
fleeky_ has joined #ipfs
<MikeFair>
jbenet, whyrusleeping: We had a problem earlier with people accessing: ipfs name resolve -r daclubhouse.net ; and ipfs ls /ipns/daclubhouse.net
<MikeFair>
whyrusleeping: How about just in the CLI?
<MikeFair>
whyrusleeping: In principal I've experienced/learned ; lazy evaluation of linked resources is always better
<MikeFair>
whyrusleeping: err principle; you can provide a smart agent to precache links in an async manner if it's predicted you'll actually need them ; but it's always two separate requests from different agents
<MikeFair>
That dirty little secret that says "anything that's expected to be there, if it isn't local to the source, will invariably not be there for more reasons than previously thought" :)
<MikeFair>
Even local to the source stuff won't be there; though it's statistically rare enough to ignore; and when they do happen; typically it's an indicator of bigger problems
<MikeFair>
(like files we wrote to a local hard drive are fairly good at being there next time we look; resources on linked machines outside of our control; not as much)
<lgierth>
dryajov: resolve is handled by the browser
<dryajov>
diasdavid: without that, what does supporting DNS mean in this context?
<dryajov>
right...
<MikeFair>
I'm looking forward to in-network DNS hosting :)
<lgierth>
it's just a workaround, once browsers let us do dns lookups manually, we can do resolve() too. what it effectively means is that you have no control over whether it'll resolve A or AAAA
<lgierth>
other impls (go) are already getting resolve()
<lgierth>
js-ipfs needs to understand the same addresses
<dryajov>
ah... I see... so this basically means to support the new dns multiaddress format and do the correct url mapping for websockets
<lgierth>
yeah, and /wss
<lgierth>
which websockets over https, where you need a domain name matching the certificate
<lgierth>
*which is
<dryajov>
right... which is what webrtc-star is doing right now...
<MikeFair>
dryajov: I'm just reading that comment now; but if i'm reading that right it's trying to choose which connection to open based on the DNS multi-responses? Am I right?
<lgierth>
yeah resolve() turns /dns4/example.com into /ip4/1.2.3.4
* MikeFair
is +1 on /dns (it's what I expected to see based on /ipfs /ipns)
<dryajov>
right..
<dryajov>
basically, formatting the dns multiaddr to a correct ws/http url
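The mapping dryajov describes — turning a dns websocket multiaddr into a dialable URL — could be sketched in Go like this; the hand-rolled string parsing is a simplified stand-in for the real go-multiaddr library:

```go
package main

import (
	"fmt"
	"strings"
)

// wsURLFromMultiaddr turns a websocket multiaddr like
// /dns4/example.com/tcp/443/wss into a dialable URL (wss://example.com:443).
func wsURLFromMultiaddr(ma string) (string, error) {
	parts := strings.Split(strings.Trim(ma, "/"), "/")
	// expected shape: dns4|dns6, host, "tcp", port, "ws"|"wss"
	if len(parts) < 5 || parts[2] != "tcp" {
		return "", fmt.Errorf("unsupported multiaddr: %s", ma)
	}
	host, port := parts[1], parts[3]
	switch parts[4] {
	case "ws":
		return fmt.Sprintf("ws://%s:%s", host, port), nil
	case "wss":
		return fmt.Sprintf("wss://%s:%s", host, port), nil
	default:
		return "", fmt.Errorf("not a websocket multiaddr: %s", ma)
	}
}

func main() {
	u, _ := wsURLFromMultiaddr("/dns4/example.com/tcp/443/wss")
	fmt.Println(u) // wss://example.com:443
}
```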
<lgierth>
this is intentionally minimal and only what we needed right now
<lgierth>
/dns is a larger issue and we weren't looking to solve it all
<MikeFair>
(A) that typically means 'open the first'; (B) Why not open them all if you're connecting to a node that can deal with multipathing (some connections will fail); and (C) if not all, then the first 4 and the first 6
<dryajov>
diasdavid lgierth the tests in webrtc-star are failing because the dns multiaddrs are /ws/ not /wss/... just thought i'd mention it here... will push a PR shortly
<lgierth>
MikeFair: resolve() returns an array of multiaddrs
<lgierth>
what you pick is up to the caller
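A resolve() with the shape lgierth describes — one resolved multiaddr per DNS answer, selection left to the caller — might look like this sketch. The lookup function is injected purely to keep the example self-contained; a real version would back it with net.Resolver:

```go
package main

import (
	"fmt"
	"strings"
)

// resolve expands a /dns4 or /dns6 multiaddr into one /ip4 or /ip6 multiaddr
// per DNS answer, preserving the encapsulated transport (e.g. /tcp/443/wss).
func resolve(ma string, lookup func(host string) []string) []string {
	parts := strings.Split(strings.Trim(ma, "/"), "/")
	ipProto := map[string]string{"dns4": "ip4", "dns6": "ip6"}[parts[0]]
	if len(parts) < 2 || ipProto == "" {
		return []string{ma} // nothing to resolve; hand it back unchanged
	}
	rest := strings.Join(parts[2:], "/")
	var out []string
	for _, ip := range lookup(parts[1]) {
		addr := fmt.Sprintf("/%s/%s", ipProto, ip)
		if rest != "" {
			addr += "/" + rest
		}
		out = append(out, addr)
	}
	return out
}

func main() {
	lookup := func(host string) []string { return []string{"1.2.3.4", "5.6.7.8"} }
	fmt.Println(resolve("/dns4/ipfs.io/tcp/443/wss", lookup))
}
```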
<lgierth>
dryajov: nice, thanks :)
<MikeFair>
lgierth: oh; I thought you were talking about that you are the caller and figuring out what to do about the array of multiaddrs
Foxcool has joined #ipfs
<lgierth>
oh ok. usually applications nowadays do round robin if it's multiple A records
<lgierth>
and some things do round robin for AAAA too
<MikeFair>
lgierth: but the applications can't make that decision so the resolvers round robin the requests
<lgierth>
the ones that don't do AAAA round robin, always pick the first response, claiming something about backward compat
<lgierth>
(i can look it up if you want to know)
<lgierth>
yeah
<lgierth>
it might make sense to have a resolveOne() function too, not sure
<lgierth>
i wasn't ready to make that call without implementing and playing with it first :)
<lgierth>
what do you think?
<lgierth>
can you think of other resolution schemes apart from round robin and always-first?
<lgierth>
oh and the multipath thing you said
<MikeFair>
lgierth: I can think of an option "one" that isn't "always first"; there's also priority weightings
<MikeFair>
lgierth: but from a users API perspective; give me all and give me all/0/
<MikeFair>
lgierth: give me all/1/
<MikeFair>
if you're just talking about responses to a query
<lgierth>
give me all, word
<MikeFair>
with priority weightings it's like round robin, but not all answers are equal (typically the weights differ because they aren't the same bandwidth)
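The priority-weighting scheme MikeFair mentions (as in SRV records, where weight roughly tracks capacity) can be sketched as a weighted pick; the random roll is a parameter here only so the example stays deterministic:

```go
package main

import "fmt"

// weightedPick selects one address with probability proportional to its
// weight, the way SRV-style weighting favours higher-bandwidth answers.
// roll must return a value in [0, total); pass rand.Intn in real use.
func weightedPick(addrs []string, weights []int, roll func(total int) int) string {
	total := 0
	for _, w := range weights {
		total += w
	}
	r := roll(total)
	for i, w := range weights {
		if r < w {
			return addrs[i]
		}
		r -= w
	}
	return addrs[len(addrs)-1] // unreachable while roll stays in range
}

func main() {
	addrs := []string{"/ip4/1.2.3.4/tcp/443", "/ip4/5.6.7.8/tcp/443"}
	// a fixed roll of 0 always lands in the first (heavier) bucket
	fmt.Println(weightedPick(addrs, []int{3, 1}, func(total int) int { return 0 }))
}
```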
cyanobacteria has quit [Ping timeout: 245 seconds]
<MikeFair>
but to make sure I understand this right; is this about looking up addresses of a DAG entry based on the DNS name as content?
<lgierth>
no, purely about addressing things on the network
<lgierth>
doesn't even have to be ipfs nodes
<lgierth>
dnslink isn't affected
<MikeFair>
Oh so data has a multiaddr
<MikeFair>
(all the nodes that provide it)
<MikeFair>
?
<lgierth>
no this is about addressing processes :) /dns4/ipfs.io/tcp/443/wss/ipfs/Qmfoo addresses an ipfs process on the internet
<lgierth>
then you can connect to it and see which multistream protocols it speaks
<MikeFair>
I'd expect that to expand ipfs.io into multiple ipv4 addresses
<lgierth>
the /ipfs space there is confusing because it's *not* the same as in fs:/ipfs/* paths (we'll be switching it to /p2p because of this confusion, and because it's really addressing libp2p nodes, not ipfs nodes)
<lgierth>
MikeFair: yeah cool :)
<lgierth>
/dns4.resolve() => [/dns4, /dns4, ...], and same for /dns6
<MikeFair>
like /dns4/ipfs.io/10.20.50.2/tcp/443/
<MikeFair>
or ipfs.io/dns4/a/10.20.50.2/tcp/443/
<lgierth>
you'd keep the unresolved multiaddr around to keep the dns context, i think
<MikeFair>
lgierth: humans are going to type names; keep the addrs grouped or the programmer will go nuts trying to keep things efficient
<lgierth>
because what you propose means the /dns4 part can now have 1, or 2, or 3 segments, and that makes it hard to work with
<MikeFair>
lgierth: an embedded array works too
<MikeFair>
lgierth: I was more just pointing out that resolve should reply with "what was asked" and "all you need"
<MikeFair>
I guess dns4 == A and dns6 == AAAA
<lgierth>
yes
<lgierth>
i'm more wondering about why to keep the domain name in the resolve() result multiaddrs
<MikeFair>
but it makes it hard to query SRV records if you don't keep the record type
<lgierth>
ah. yeah this all doesn't cover any other dns records
<MikeFair>
lgierth: Think of it as "cluster name" or "authority context"
<lgierth>
and is intentionally minimal in order to not make unneccessary decisions now which will bite us later ;)
<lgierth>
later = when people come up with the need for SRV or MX records
<MikeFair>
lgierth: if you remove the name that was used to look it up; it will be less useful to cache
wallacoloo_____ has quit [Quit: wallacoloo_____]
suttonwilliamd_ has quit [Ping timeout: 256 seconds]
Foxcool has quit [Ping timeout: 276 seconds]
<lgierth>
yes this will very likely work differently than /dns4 and /dns6
<lgierth>
i haven't looked into SRV in the context of multiaddr, to be honest
<lgierth>
purely A and AAAA
<MikeFair>
lgierth: So just so I'm clear; what/when is this resolve expected to be called?
<lgierth>
when you want to dial the address
<MikeFair>
lgierth: SRV records reply with PORT and ADDRESS
<MikeFair>
lgierth: they look different _service._proto._srv.domain.dom iirc
<lgierth>
yeah
<lgierth>
just saying nobody has brought up an interest in SRV+multiaddr so far, so nobody has looked into it either i guess :)
<MikeFair>
lgierth: So if I'm the code; my user has just typed "get me /ipfs/someCID"
<MikeFair>
lgierth: hehe; which is why I'm kind of pointing out, keeping the record types around; even if they're usually ignored; won't bite you
<lgierth>
ok but you haven't told me yet why it's *useful* for the case of dns4 and dns6 :)
<MikeFair>
lgierth: Me being the code then say "Hey Daemon! Resolve /ipfs/someCID" to which I'm expected to open connections
<lgierth>
i have one case in mind where it might be useful, but is probably not worth the added complexity
<MikeFair>
lgierth: Oh because applications (like bonjour and zeroconf) use SRV records for service registration and announcement
<MikeFair>
it's how resolution works in those systems; A records aren't used
<lgierth>
that's cool, i need to resolve A and AAAA records though ;)
<MikeFair>
it includes the 443 part that's been blindly hardcoded into the response above
<MikeFair>
lgierth: Not exactly; if someone types the properly formatted DNS name of an SRV record; you're supposed to use that reply
Boomerang has quit [Remote host closed the connection]
<MikeFair>
which is why they look different to say "Don't use an A!"
<MikeFair>
and if you just want the DNS resolved address, why not just use the local system's DNS resolver?
suttonwilliamd has joined #ipfs
<MikeFair>
It seems that what you're really expressing isn't dns4 / dns6 but ipv4 and ipv6
<MikeFair>
you're trying to open an IP connection to that address
<MikeFair>
s/address/content id
<MikeFair>
the user provided ipfs.io
<MikeFair>
but what you're seeking is a way to actually open a link to ipfs.io; and so ipfs.io gets transformed to its "DNS Reply"/port/servicename/ipfs/Qm...
<lgierth>
yeah, connect to ipfs.io, over ipv4
<lgierth>
what would "DNS Reply"/port/servicename/ look like?
<MikeFair>
So ipfs.io/ipfs/wss/Qm... -> [/ipv4/x.y.w.z/tcp/443/wss, ...]
<lgierth>
(multiaddrs are supposed to mirror how the protocols encapsulate each other)
<lgierth>
and contain *all* information neccessary to make the connection
<MikeFair>
then it's: [/dns/ipfs.io/ipv4/x.y.w.z/tcp/443/wss, ...]
<lgierth>
that's why they're explicit about ip, ipv4, tcp, the port, etc.
<lgierth>
hrm
<lgierth>
i see what you mean, mhhh
<MikeFair>
then it's: [/dns/ipfs.io/A/x.y.w.z/tcp/443/wss, ...]
<lgierth>
i think we can keep A out of that. it's an implementation detail of dns
<lgierth>
might be different with SRV or other types, but A and AAAA are directly coupled to ipv4 and v6
<MikeFair>
It's the record type of the dns object; but ok
tmg has joined #ipfs
<lgierth>
then i need to know that A means ipv4, and i need to know it *outside* of the resolver, so that's added complexity
<lgierth>
but i think /dns4/ipfs.io/ip4/1.2.3.4/tcp/443 might be useful
<lgierth>
i.e. keeping the whole /dns4 part
<lgierth>
and if there's an /ip4 part encapsulated in that, the resolver knows it's already resolved, which is simple enough
<MikeFair>
it's not dns4
HostFat_ has joined #ipfs
AniSkywalker has joined #ipfs
<AniSkywalker>
This is getting borderline ridiculous but here I am.
<MikeFair>
it's dns returning an array of results, some ipv4 and some ipv6
HostFat__ has quit [Ping timeout: 260 seconds]
<lgierth>
right -- just walked away from directly using the /dns protocol name because i wanted to keep it minimal
<MikeFair>
like udp versus tcp ports
<MikeFair>
why /dns4 and not /dns again?
<lgierth>
because /dns means thinking about *all of dns*
<lgierth>
and i wanted just dns for ipv4, and dns for ipv6
<MikeFair>
this handles all of dns /dns/ipfs.io/ipv4/w.x.y.z/tcp/port
<MikeFair>
You just call an A record ipv4
<SchrodingersScat>
has anyone ever uploaded ipfs to ipfs?
<lgierth>
my impression of that idea ^ was that you'd resolve /dns/ipfs.io to /dns/ipfs.io/ip4/1.2.3.4
<AniSkywalker>
SchrodingersScat Sure, gx :)
<lgierth>
we can't *require* to have the ip address in there, because that'd defeat most of the purpose of using dns names
<MikeFair>
lgierth: I get your point
<MikeFair>
no, that's not what I was thinking
<MikeFair>
but now that I see it you're way it starts /ipv4/
<MikeFair>
err your way
<lgierth>
it's a very abstract and complicated topic and i tried to take only on a very small chunk of that problem space
tmg has quit [Ping timeout: 240 seconds]
<MikeFair>
So yeah /dns/ipfs.io/webconference/Qm... -> [/ipv4/w.x.y.z/tcp/N/...
<MikeFair>
I'd personally rather see /dns/ipfs.io/... -> [/ipfs/CID1, /ipfs/CID2]
<MikeFair>
and then those resolve to peer links
<lgierth>
yeah that'd be for example /dnslink/ipfs.io => [/ipfs/CID1, ...]
<lgierth>
right now it's mashed into /ipns :/
<lgierth>
should be cleanly separated
<MikeFair>
CID in this case could point to a service hosted by a peer
<lgierth>
ah, a peerid
<lgierth>
word
<lgierth>
/dnsaddr :)
<MikeFair>
but it's a specific thing that peer is providing
<lgierth>
same concept as dnslink, but for looking up multiaddrs in TXT records
fleeky__ has joined #ipfs
<MikeFair>
I'm just trying to stick with ipfs as ipfs for addressing -- peer ids are the moral equivalent of ip addresses
<MikeFair>
my ipfs API should "open a link to this ContentAddress"
<AniSkywalker>
hsanjuan I don't even know how to put this... I've literally refactored the entire ipfs-cluster codebase.
<MikeFair>
and that because a link to peer address providing that content
<MikeFair>
s/because/becomes
<MikeFair>
which involves some magic in the daemon to figure "how" to connect me
<lgierth>
some magic = peer routing
<lgierth>
:)
<lgierth>
you can ipfs swarm connect /ipfs/Qmfoo
<MikeFair>
right but I'm just saying from a programmer's API; my resolver never gives me anything but an ipfs content address
<lgierth>
in fact it'll ignore the address if you do ipfs swarm connect /ip4/..., and just unconditionally do peer routing
<lgierth>
MikeFair: oh yeah <3 got it
<MikeFair>
And what I like about that is /ipfs/ContentAddressOfMultiPartyConference
<MikeFair>
And there's a bunch of peer ids that are announcing they provide that
<lgierth>
and then everybody just listens for pubsub messages with that path as its topic
<lgierth>
(that's how orbit works i think)
<lgierth>
you have an object being the channel metadata, and then nodes publishing pubsub message with their latest data for that channel
<MikeFair>
In this case; the connector code would query "who provides: Qm..." and that would return PeerIds; which could have links established :)
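The flow MikeFair describes — ask who provides a CID, then dial those peers — as illustrative Go. Routing and Dialer here are hypothetical stand-ins for the daemon's content routing and swarm-connect functionality, not real libp2p interfaces:

```go
package main

import "fmt"

// Routing and Dialer are hypothetical stand-ins for "who provides Qm...?"
// lookups and swarm-connect dialing.
type Routing interface {
	FindProviders(cid string) []string // returns peer IDs
}

type Dialer interface {
	Connect(peerID string) error
}

// connectToProviders looks up the providers of a CID and dials each one,
// returning the peers it reached.
func connectToProviders(r Routing, d Dialer, cid string) []string {
	var connected []string
	for _, pid := range r.FindProviders(cid) {
		if err := d.Connect(pid); err == nil {
			connected = append(connected, pid)
		}
	}
	return connected
}

// Stub implementations so the sketch runs without a daemon.
type stubRouting struct{ providers []string }

func (s stubRouting) FindProviders(cid string) []string { return s.providers }

type stubDialer struct{}

func (stubDialer) Connect(peerID string) error { return nil }

func main() {
	r := stubRouting{providers: []string{"QmPeerA", "QmPeerB"}}
	fmt.Println(connectToProviders(r, stubDialer{}, "QmSomeContent"))
}
```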
<MikeFair>
lgierth: I was thinking stream oriented connections
<MikeFair>
lgierth: ipfs session Qm....
<lgierth>
i think ipfs already does everything you just said
ShalokShalom has quit [Quit: No Ping reply in 180 seconds.]
<lgierth>
pubsub, content routing, peer routing
<lgierth>
the routing is exposed as `ipfs dht findprovs` and `ipfs dht findpeer`
<lgierth>
ok going to bed
ShalokShalom has joined #ipfs
<lgierth>
thanks for the discussion!
<MikeFair>
lgierth: yes, exactly what I'm saying; this is ipfs native type material; I don't think they allow me to provide a bidirectional comm link to those endpoints
<MikeFair>
lgierth: that's the new thing
voxelot has quit [Read error: Connection reset by peer]
<lgierth>
aah, exposing the streams on the cli/api
voxelot has joined #ipfs
<MikeFair>
lgierth: my local ipfs based app opens a pipe/connection to the daemon and the daemon replies "ok you're now CID Qm..."
<lgierth>
Magik6k has a pull request exposing corenet to the cli
<lgierth>
and do e.g. ip tunnels over it
<lgierth>
MikeFair: i think you could do that with just a bit of glue, and that corenet PR
<MikeFair>
lgierth: great! because that's what the programmer's API should resolve to; so when I want to do a video conference over ipfs; I don't think about webSockets
<MikeFair>
lgierth: My ipfs library could; but not me :)
<MikeFair>
I think I saw it earlier; but I'll look again
<lgierth>
well check out how e.g. openbazaar or mediachain are implementing their protocols on top of libp2p
<lgierth>
the interface that you get is a stream
<MikeFair>
lgierth: I agree to your point about the resolution; if the query is for dns then only a CNAME record would include /dns in the response
<lgierth>
(unrelated to the corenet PR)
<MikeFair>
I'm also hoping to "authenticate" to ipfs and get some context of a "user session" that apps can store data into
<MikeFair>
kind of like "localStorage" on a browser but is really "ipfsStorage" so my session "lives in the cloud" :)
<MikeFair>
lgierth: Thanks for being open to the chat! :)
<AniSkywalker>
lgierth So I have a gigantic refactor of ipfs-cluster. What are the chances it gets merged?
<AniSkywalker>
hsanjuan isn't around for a few days
MajorScale[m] has joined #ipfs
cyanobacteria has joined #ipfs
<lgierth>
great -- you'll have to wait for hsanjuan with that though
<MikeFair>
AniSkywalker: what's it do?
<lgierth>
good night!
<MikeFair>
lgierth: g'night!
<AniSkywalker>
MikeFair completely reorganizes ipfs-cluster, changes aspects of it to be more Go-like.
<MikeFair>
cool
<MikeFair>
Go likes beating me to a pulp before yielding to me will ;)
<MikeFair>
err my
wallacoloo_____ has joined #ipfs
<AniSkywalker>
Oh, also swapped the logger with logrus
<AniSkywalker>
MikeFair it's actually ridiculous how much refactoring has taken place
<AniSkywalker>
I don't even know if it will compile at first--cyclic dependencies :(
<MikeFair>
AniSkywalker: on a git branch I assume?
<MikeFair>
passes all tests :)
<AniSkywalker>
Fork + branch + local right now :P
<MikeFair>
push to fork; what's it like to compile
<MikeFair>
I'd be willing to try clone / build
<MikeFair>
I haven't installed many Go-related packages yet so I'd likely run afoul of any dependency problems; I have run into the situation where a dependency used an older version of a vendor-provided package that caused method resolution to fail on the newer functions
<MikeFair>
(that i was using)
<MikeFair>
(I was modifying their module at the time)
<AniSkywalker>
Not done yet :P
<AniSkywalker>
Once I'm getting close to 95% or so I'll push it
<AniSkywalker>
I'm going through and making error messages useful MikeFair (as I switch them to logrus)
<AniSkywalker>
Especially for RPC; one of the reasons logrus is great is because it exports to data-friendly formats if needed.
* MikeFair
relaxes in a sigh of contentment; "meaningful messages" ahhhh. :)
<AniSkywalker>
Thus, making RPC error debugging useful can provide us with valuable insight into the protocol.
AkhILman has quit [Read error: No route to host]
<AniSkywalker>
Does 'logrus.WithError(err).WithFields(logrus.Fields{"peerId": pid, "svcName": "Cluster", "svcMethod": "PeerManagerRmPeerShutdown", "clusterId": c.id}).Error("error calling RPC")' count as meaningful?
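For reference, logrus's JSONFormatter turns an entry like that into a single flat JSON object. A stdlib-only approximation of that output shape (not the real logrus API) might look like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// logWithFields approximates what logrus's JSONFormatter emits for a
// WithFields(...).Error(msg) call: one flat JSON object carrying the level,
// the message, and every structured field, ready for logstash-style ingestion.
// (Go's json.Marshal sorts map keys, so the output is deterministic.)
func logWithFields(level, msg string, fields map[string]string) string {
	entry := map[string]string{"level": level, "msg": msg}
	for k, v := range fields {
		entry[k] = v
	}
	b, _ := json.Marshal(entry)
	return string(b)
}

func main() {
	fmt.Println(logWithFields("error", "error calling RPC",
		map[string]string{"svcName": "Cluster", "svcMethod": "PeerManagerRmPeerShutdown"}))
}
```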
tmg has joined #ipfs
PrinceOfPeeves has quit [Quit: Leaving]
pfrazee has quit [Remote host closed the connection]
chris613 has quit [Quit: Leaving.]
AkhILman has joined #ipfs
<MikeFair>
AniSkywalker: Only if you can also append whatever it was that caused it to decide it had a problem ;)
<AniSkywalker>
Yeah, that's the 'WithError' part
<MikeFair>
AniSkywalker: oh, yeah;
<MikeFair>
"Failed to establish RPC connection"
<MikeFair>
(the concept of the message: "What was I doing" and "What happened when I did")
<AniSkywalker>
Yeah, hold on got an updated version of that
<MikeFair>
that ipfs cluster link page is a pretty dense read for a newcomer
<MikeFair>
Wouldn't that be "Error shutting down Peer Link" or something like that
<MikeFair>
AniSkywalker: Do you know the results of the cluster as a virtual peer node conversation?
<AniSkywalker>
What I'm going for is, search for RPC errors, filter by type.
<AniSkywalker>
MikeFair that's what the format's about
<AniSkywalker>
Since we're basically also debugging the protocol, it's useful to know these things.
<AniSkywalker>
I.e. "what's the most frequent RPC error?"
<MikeFair>
AniSkywalker: ahh "RPC Error: Failed to execute Remote Procedure"
<AniSkywalker>
Oh I dunno, sure
<MikeFair>
AniSkywalker: ahh "RPC Error: Failed to execute Peer Shutdown"
<AniSkywalker>
Oh is that what it says?
<MikeFair>
PeerManagerRmPeerShutdown
<AniSkywalker>
Where are you looking MikeFair?
<MikeFair>
svcmethod
<MikeFair>
from your hastepin link
geir_ has quit [Quit: SafeBouncer - znc.safenetwork.org:4876]
<AniSkywalker>
MikeFair are you suggesting an error message or saying that's the result? :P
<MikeFair>
AniSkywalker: Oh I was suggesting the error message so that when I search/filter for "RPC Error" I'd easily see the service name on the same line
<AniSkywalker>
Well, it's more about machine consumption.
<AniSkywalker>
it's rough but it's the general idea
<whyrusleeping>
switching logging to logrus from what?
<AniSkywalker>
logger
<AniSkywalker>
whyrusleeping because we need to be able to analyze the log messages, especially RPC drops, etc
<AniSkywalker>
Which logrus is especially good at--log analysis
<AniSkywalker>
plus it looks cool, but mostly the former ^.^
<hsanjuan>
AniSkywalker: please ask before getting into big endeavours. cluster uses ipfs/go-log not by chance, but because that is why the whole libp2p stack uses
tr909 has quit [Ping timeout: 240 seconds]
<AniSkywalker>
Is there a reason for that?
<hsanjuan>
also small incremental PRs will have better chances for a merge than huge refactors
<hsanjuan>
s/why/what the whole libp2p stack uses
<AniSkywalker>
hsanjuan according to the 'go-log' repo, "It currently uses a modified version of go-logging to implement the standard printf-style log output."
<whyrusleeping>
Yeah, we use the logging package we use because it more cleanly separates 'event' type logs from 'trace' type logs
<whyrusleeping>
We spent a lot of time using various logging packages (including logrus) and ended up where we are
<AniSkywalker>
whyrusleeping logrus does that IMHO better, allowing you to specify the levels and also different fields, etc.
<AniSkywalker>
whyrusleeping was there a specific reason to reject logrus?
<AniSkywalker>
Because it makes analyzing protocol failures very easy. I.E. noticing RPC drops
<hsanjuan>
MikeFair: it's true the ipfs-cluster README is dense. Would you help me by explaining how it would look best for you as a newcomer (best if you put it in an issue)
mguentner has quit [Quit: WeeChat 1.7]
<whyrusleeping>
unnecessary complexity, a lack of nice logging features that i wanted (primarily, the lack of filename/line numbers in log messages)
<whyrusleeping>
our current logging library can do what you want just fine, and if you have specific cases where it could be better, i would be open to improving that package
anewuser has quit [Ping timeout: 252 seconds]
<AniSkywalker>
whyrusleeping that's entirely possible IIRC, but the main reason I like logrus is because ipfs-cluster will likely be running not on local nodes but on production machines, where being able to categorize and aggregate logs is a huge plus.
<whyrusleeping>
what i said stands.
john1 has joined #ipfs
<AniSkywalker>
That's fine. I respect the decision. In that case, I'd like these features: the ability to add fields/errors attached separately from the message, to make aggregation easy and feasible
<AniSkywalker>
The ability to parse the log easily and query it in logstash etc. is uniquely critical to ipfs-cluster, given the target environment.
mguentner has joined #ipfs
shizy has joined #ipfs
<MikeFair>
AniSkywalker: Could you aggregate go-logger with the "WithFields" concept?
<MikeFair>
(or whatever it is the project is using?)
<AniSkywalker>
with logrus, you would set the output formatter to JSON or similar and load it into logstash
<MikeFair>
AniSkywalker: And more to your point of production machines; tapping into a stream exposed by a remote PeerID for its logs would be really useful
<MikeFair>
AniSkywalker: It also says it's "completely compatible" with something; could that mean you could submit the logrus output through the ipfs project's logger
<whyrusleeping>
AniSkywalker: also, avoid comitting your editor config files
<AniSkywalker>
Oh my bad
<AniSkywalker>
thought I put that in gitignore
<MikeFair>
AniSkywalker: It sounds like you're advocating for a particular text message format and calling API; both of which could be sent to a printf-style logger
tr909 has joined #ipfs
<redlizard>
Greetings. I'm doing some research into ipfs, and I wondered whether there are any estimates as to the number of active users of ipfs?
<redlizard>
I understand that accurate measurements are tricky if not impossible, of course.
<dansup>
there are dozens of us, dozens!
<AniSkywalker>
whyrusleeping could you follow the dht to figure out a user count?
<whyrusleeping>
redlizard: there are roughly 500 nodes online at any given moment (consistently throughout the day) on the main network
anewuser has joined #ipfs
<whyrusleeping>
redlizard: But every given day I see 5,000 or more nodes come online and offline
<redlizard>
whyrusleeping: Alright, thanks! That's an excellent indication to start with.
<AniSkywalker>
whyrusleeping my last "complaint" was that in the logging that I did see, there was a lot of "fmt.Sprintf("something something something: %s, %s" , foo, bar)" when there would be no context for what the replacements are.
<AniSkywalker>
Whether or not that's the logging library's fault I don't know, but I do know logrus forces the developer to think about each log message.
<redlizard>
whyrusleeping: So maybe 20,000 users using it regularly, perhaps?
<AniSkywalker>
redlizard that's not unreasonable if a bit conservative
<AniSkywalker>
There's a lot of us :)
<redlizard>
AniSkywalker: I would sort of expect users running ipfs at all to mostly have it running whenever their computer is awake and internet connected.
<AniSkywalker>
When I'm trying to run a cluster on Google cloud platform and I want to feed the log data into logstash to figure out where the RPC bottlenecks are, that's logrus. Also, the events you show are essentially what I showed in the hastebin, if not the primary purpose.
<redlizard>
It's usually run as a daemon that is there whether you are actively using it or not, right?
<AniSkywalker>
redlizard Well ipfs yes, but we're working on ipfs-cluster
<AniSkywalker>
Which is a bunch of nodes coming together to act like one node in the network.
<AniSkywalker>
The use-case is for companies trying to offer hosted IPFS pinning, or groups of people with similar concerns (federal climate data, etc) redlizard
<redlizard>
AniSkywalker: Hm... I expect that raises the (number of users occasionally pulling data off ipfs) / (number of connected nodes on a daily basis) ratio considerably.
<MikeFair>
redlizard: Yes; if you want to count "using it" as serving/participating; but if you extend that definition to include "consumers of content" then you include all people who see ipfs.io
<AniSkywalker>
Well, they act like one node. It certainly increases the commercial possibilities.
anewuser has quit [Ping timeout: 256 seconds]
<AniSkywalker>
redlizard what if I told you that Dropbox already uses something similar to IPFS for their backend storage?
<AniSkywalker>
They chunk data into blocks, much like us, and store those blocks.
<redlizard>
AniSkywalker: That's kinda cheating given that ipfs is largely a "collect a bunch of well-proven techniques into one awesome whole" project, no? ;-)
<redlizard>
Naturally the backing techniques are in common use!
<AniSkywalker>
Obviously not well known if FTP is still popular.
<AniSkywalker>
However, you're correct.
<AniSkywalker>
IPFS is an assemblage of the best parts of all the great technologies.
<MikeFair>
(*At least for its intended use cases; it doesn't do anonymity or privacy exceptionally well)
<redlizard>
MikeFair: Clearly y'all should focus on libeverything instead :-)
<MikeFair>
redlizard: of course! despite the mathematical proofs that define such a thing to be impossible!
<redlizard>
Bah. Excuses.
<AniSkywalker>
We have libp2p for libeverything
<AniSkywalker>
and multiformats
<MikeFair>
redlizard: Like I can't authenticate you are so-and-so with any level of assurances and keep your identity hidden from myself at the same time :)
<redlizard>
MikeFair: Next you're gonna tell me that ipfs doesn't do full CAP either!
<MikeFair>
redlizard: of course it does! I'm just going to redefine the parameters of success for what full CAP means in any given situation! :)
<MikeFair>
problem was already solved! :)
<AniSkywalker>
Caddyserver might be able to warn people about compromised TLS
* MikeFair
thinks he should file a PR to redefine/declare all problems solved by changing the definition of success to whatever it did at the time. :)
<redlizard>
MikeFair: I'm a fan of the "write TRIVIAL in increasingly large fonts until you successfully intimidate the audience" technique.
<MikeFair>
hehe
Guest17873 has joined #ipfs
<MikeFair>
redlizard: I'm curious if you're thinking of readers of the ipfs.io website as "active users"
<MikeFair>
(it's likely not the only one; just the only one I know of)
shizy has quit [Quit: WeeChat 1.7]
<redlizard>
MikeFair: The ipfs.io webserver serves an ipfs mount?
<redlizard>
Why of course it does.
mguentner2 has joined #ipfs
<redlizard>
Anyway, I'm only counting people running the actual software :)
<redlizard>
[One of them, anyway.]
<AniSkywalker>
That's the best thing... you can have IPFS and most people don't even know it.
mguentner has quit [Ping timeout: 240 seconds]
* MikeFair
nods.
<MikeFair>
redlizard: Beyond a mount; it's a gateway
<MikeFair>
redlizard: So http://ipfs.io/ip[fn]s/Qm... will serve anything from ipfs
<MikeFair>
redlizard: Handy for "Could other people on the planet really see that thing I put out there?"
<Mateon1>
substack: I should probably clarify, as from my understanding, in order to actually render anything using these tiles you have to parse recursively all the way down. Is that correct?
<Mateon1>
I meant more like vector tiles
<substack>
Mateon1: you have to read the meta.json files to figure out your extents
<substack>
or you can use the peermaps command
<substack>
npm i -g peermaps
<substack>
for example you can run this command to print out all the o5m files for hawaii: peermaps files -156.064270 18.9136925 -154.8093872 20.2712955
<Mateon1>
substack: Right, but in order to render an area, you have to parse all o5m tiles intersecting that area. Sorry if I'm using the wrong terminology here, not a mapping addict :)
<substack>
yes
<substack>
I'm working on a tile format
<Mateon1>
I'm asking if it's possible to get "rough" tiles at different scales, like is useful for a map viewer
<substack>
there's nothing to do that right now
<substack>
I mean there are tools to do this in osm land but there's a lot of shit to wade through
<substack>
and when you find something it's generally pretty hard to stand it up correctly
mildred4 has joined #ipfs
<substack>
it was already quite a feat to get ad-hoc extracts working, but there's lots more to be done
ecloud_wfh is now known as ecloud
[1]MikeFair has joined #ipfs
anewuser has quit [Quit: anewuser]
MikeFair has quit [Ping timeout: 260 seconds]
[1]MikeFair is now known as MikeFair
<Mateon1>
Huh, why does 'ipfs pin' only request one hash at a time? That seems incredibly inefficient.
<Mateon1>
I can probably do it faster with some parallel xargs
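Mateon1's parallel-pin idea as a Go worker pool; the pin callback is a stand-in for whatever issues the actual `ipfs pin add <hash>` request:

```go
package main

import (
	"fmt"
	"sync"
)

// pinAll pins hashes with up to `workers` concurrent requests; pin is the
// injected per-hash operation (e.g. an HTTP call to the daemon's pin API).
func pinAll(hashes []string, workers int, pin func(h string) error) map[string]error {
	var mu sync.Mutex
	results := make(map[string]error, len(hashes))
	jobs := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for h := range jobs {
				err := pin(h)
				mu.Lock()
				results[h] = err
				mu.Unlock()
			}
		}()
	}
	for _, h := range hashes {
		jobs <- h
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	pinned := pinAll([]string{"Qm1", "Qm2", "Qm3"}, 2, func(h string) error { return nil })
	fmt.Println(len(pinned)) // 3
}
```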
<JustinDrake>
Is there a flag to opt-in to CIDv1, e.g. for `ipfs add`?
chungy has joined #ipfs
<whyrusleeping>
JustinDrake: not yet. I can probably add that quickly though
JustinDrake has quit [Quit: JustinDrake]
pfrazee has quit [Read error: Connection reset by peer]
bastianilso has joined #ipfs
kanred has quit [Read error: Connection reset by peer]
kanred has joined #ipfs
<grusr>
Hi. I have a folder with some movies from the Blender Foundation's open projects, https://www.blender.org/features/projects/. Sintel or Big Buck Bunny might be the ones most people know of. The movies are creative commons licensed. Is it ok to paste the link here?
<lgierth>
grusr: yes sure
<lgierth>
i think sintel is already somewhere out there
<grusr>
Yep, I have seen Sintel out there, I have some more copies with higher resolution.
<lgierth>
grusr: what kind of connection is your node on? just wondering if it's bitswap not behaving very stable or whether it's the connection
<lgierth>
kyledrake: cool i'm back
<kyledrake>
K one minute, need to get laptop
cemerick has quit [Ping timeout: 240 seconds]
Guest154554[m] has joined #ipfs
<grusr>
lgierth : I am on a fibre connection. I have 3 nodes, one of them is sitting outside of my firewall and all the files should be on that node with 100Mbit/s up.
john1 has quit [Ping timeout: 276 seconds]
mildred1 has joined #ipfs
JustinDrake has quit [Quit: JustinDrake]
<grusr>
lgierth: The other 2 nodes have some of the data, on different ips but on the same connection one of those is behind a firewall and a VPN but should have an open port and the other one is behind a VPN but without the firewall.
<lgierth>
cool .)
<grusr>
lgierth: Are you putting all data from that hash onto the storage node?