Kubuxu changed the topic of #ipfs to: go-ipfs v0.4.5 is out: https://dist.ipfs.io/go-ipfs | Week 5+6: 1) IPLD https://git.io/vDkS7 2) CI/CD https://git.io/vDk9v 3) Orbit https://git.io/vDk9U | Roadmap: https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0
shizy has quit [Ping timeout: 240 seconds]
wallacoloo_____ has joined #ipfs
pfrazee has quit [Remote host closed the connection]
<whyrusleeping> jbenet: fantastic, thank you!
<MikeFair> Has anyone considered how, technically, to implement a consensus algorithm over the ipfs swarm?
<AniSkywalker> MikeFair raft?
<AniSkywalker> Oh do you mean cluster?
<MikeFair> nm the question; so I had a thought about how code running on many nodes can signal they are part of the same swarm/codebase by announcing the same ipns address
<MikeFair> AniSkywalker: Yeah; clustering; how does the network know "where to send a message"?
<AniSkywalker> MikeFair https://raft.github.io/
<AniSkywalker> [How It's Made intro]
<MikeFair> I'm not talking about a particular algo
maxlath has quit [Quit: maxlath]
<MikeFair> I guess I'm actually talking about the more general case
aquentson1 has quit [Ping timeout: 240 seconds]
<MikeFair> How do I write code that presents a service to the network via a cluster; something where accessing the address is connecting to the service
fleeky_ has joined #ipfs
<MikeFair> jbenet, whyrusleeping: We had a problem earlier with people accessing: ipfs name resolve -r daclubhouse.net ; and ipfs ls /ipns/daclubhouse.net
<MikeFair> http://ipfs.io/ipns/daclubhouse.net worked fine; and I could access them on my own node ok
<MikeFair> I was able to issue: ipfs dht findpeer theirpeerid
<MikeFair> How can we tell why that didn't work; what would I have looked for?
fleeky__ has quit [Ping timeout: 258 seconds]
pfrazee has joined #ipfs
<whyrusleeping> MikeFair: the ipfs name resolve command works for me
<MikeFair> whyrusleeping: That was the thing we saw ; worked for some
<MikeFair> it obviously worked for ipfs.io
<whyrusleeping> MikeFair: yeah, heres the issue
<whyrusleeping> that entry points to a folder
<MikeFair> (because we were all able to see the data from it)
<whyrusleeping> that folder contains an index.html
<whyrusleeping> so when you request it through the gateway, it just finds the index.html and serves that
<whyrusleeping> and it works
<whyrusleeping> but when you 'ls' a directory, it tries to resolve each of the objects in that directory
<whyrusleeping> so if one of those objects is unavailable, the ls won't complete
<MikeFair> whyrusleeping: ipfs name resolve -r daclubhouse.net returned "could not resolve" it never got that far
<whyrusleeping> MikeFair: hrm, is your daemon running?
<MikeFair> whyrusleeping: Ahh but I can see about the ls hanging issue
<MikeFair> yep
<whyrusleeping> the name resolve just worked for me
<whyrusleeping> but i think you might need to add the /ipns/daclubhouse.net
<whyrusleeping> the prefix is important i believe
<MikeFair> yeah; that was what we were seeing; two brand new nodes and kythyria[m] couldn't resolve
<whyrusleeping> 'ipfs ls --resolve-type=false /ipns/daclubhouse.net' should work
<MikeFair> whyrusleeping: they couldn't even find the id of the directory the resolve points to
<MikeFair> but that might have been an "object is missing" problem
maciejh has quit [Ping timeout: 276 seconds]
<MikeFair> whyrusleeping: I've pinned that directory several times; but maybe an object was out of cache already
<whyrusleeping> "could not resolve" is an ipns dht issue
<MikeFair> Hmm; and if a missing object were the case; why could I run ipfs ls locally on my machine?
<MikeFair> whyrusleeping: known issue; or just a thatswhereitsgottabe type issue
matoro has quit [Ping timeout: 252 seconds]
<whyrusleeping> That might be a bug?
<whyrusleeping> its worth reporting
espadrine has quit [Ping timeout: 260 seconds]
<MikeFair> where?
<MikeFair> go-ipfs?
<MikeFair> whyrusleeping: And upgrading --resolve-type=false to be the default would be "useful"
<MikeFair> whyrusleeping: I tested the prefix thing; at least with ipfs name resolve -r, either way works
dignifiedquire has quit [Quit: Connection closed for inactivity]
matoro has joined #ipfs
<whyrusleeping> yes, in go-ipfs
<whyrusleeping> and yeah, that might be a good thing to have as the default
<whyrusleeping> but it essentially constitutes a breaking change in the API
<whyrusleeping> so we have to discuss and come to agreement on it
AniSkywalker has quit [Quit: Textual IRC Client: www.textualapp.com]
<MikeFair> whyrusleeping: How about just in the CLI?
<MikeFair> whyrusleeping: In principle I've experienced/learned; lazy evaluation of linked resources is always better
<MikeFair> whyrusleeping: you can provide a smart agent to precache links in an async manner if it predicts you'll actually need them; but it's always two separate requests from different agents
<MikeFair> That dirty little secret that says "anything expected to 'just be there', if it isn't local to the source, will invariably not be there for more reasons than previously thought" :)
<MikeFair> Even local to the source stuff won't be there; though it's statistically rare enough to ignore; and when they do happen; typically it's an indicator of bigger problems
<MikeFair> (like files we wrote to a local hard drive are fairly good at being there next time we look; resources on linked machines outside of our control; not as much)
chris613 has joined #ipfs
cemerick has joined #ipfs
john1 has joined #ipfs
john1 has quit [Ping timeout: 240 seconds]
cyanobacteria has joined #ipfs
_whitelogger has joined #ipfs
jeraldv has joined #ipfs
cemerick has quit [Ping timeout: 268 seconds]
matoro has quit [Ping timeout: 260 seconds]
tmg has quit [Ping timeout: 268 seconds]
<dryajov> diasdavid: just getting around to working on this - https://github.com/ipfs/js-ipfs/pull/740
matoro has joined #ipfs
<MikeFair> Would someone walk through how/why changing the data address that an ipns address references doesn't change the ipns address?
<MikeFair> I get this part: "an ipns address is the hash of its public key"
<dryajov> daviddias: got a couple of questions tho... I looked at the discussion regarding DNS for multiaddr here - https://github.com/multiformats/multiaddr/issues/22#issuecomment-268015055, and it makes sense except for the part where the resolve isn't implemented
<lgierth> dryajov: resolve is handled by the browser
<dryajov> diasdavid: without that, what does supporting DNS mean in this context?
<dryajov> right...
<MikeFair> I'm looking forward to in-network DNS hosting :)
<lgierth> it's just a workaround, once browsers let us do dns lookups manually, we can do resolve() too. what it effectively means is that you have no control over whether it'll resolve A or AAAA
<lgierth> other impls (go) are already getting resolve()
<lgierth> js-ipfs needs to understand the same addresses
<dryajov> ah... I see... so this basically means to support the new dns multiaddress format and do the correct url mapping for websockets
<lgierth> yeah, and /wss
<lgierth> which is websockets over https, where you need a domain name matching the certificate
<dryajov> right... which is what webrtc-star is doing right now...
<MikeFair> dryajov: I'm just reading that comment now; but if I'm reading it right, it's trying to choose which connection to open based on the DNS multi-responses? Am I right?
<lgierth> yeah resolve() turns /dns4/example.com into /ip4/1.2.3.4
* MikeFair is +1 on /dns (it's what I expected to see based on /ipfs /ipns)
<dryajov> right..
<dryajov> basically, formatting the dns multiaddr to a correct ws/http url
<lgierth> this is intentionally minimal and only what we needed right now
<lgierth> /dns is a larger issue and we weren't looking to solve it all
<MikeFair> (A) that typically means 'open the first'; (B) Why not open them all if you're connecting to a node that can deal with multipathing (some connections will fail); and (C) if not all, then the first 4 and the first 6
<dryajov> diasdavid lgierth the tests in webrtc-star are failing because the dns multiaddrs are /ws/ not /wss/... just thought i'd mention it here... will push a PR shortly
<lgierth> MikeFair: resolve() returns an array of multiaddrs
<lgierth> what you pick is up to the caller
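A minimal sketch in Go of the resolve() step lgierth describes: expanding a /dns4/ multiaddr into one /ip4/ multiaddr per A record and handing the whole array to the caller. The string handling is hand-rolled for illustration; the real logic lives in go-multiaddr's resolver, whose exact API is not shown in this log.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // resolveDNS4 expands /dns4/<name>/... into one /ip4/<addr>/... multiaddr
    // per A record, leaving the encapsulated part (e.g. /tcp/443/wss) intact.
    func resolveDNS4(maddr string) ([]string, error) {
        parts := strings.SplitN(strings.TrimPrefix(maddr, "/dns4/"), "/", 2)
        name, rest := parts[0], ""
        if len(parts) == 2 {
            rest = "/" + parts[1]
        }
        ips, err := net.LookupIP(name)
        if err != nil {
            return nil, err
        }
        var out []string
        for _, ip := range ips {
            if ip4 := ip.To4(); ip4 != nil { // /dns4 maps to A records only
                out = append(out, "/ip4/"+ip4.String()+rest)
            }
        }
        return out, nil
    }

    func main() {
        addrs, err := resolveDNS4("/dns4/ipfs.io/tcp/443/wss")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs) // e.g. [/ip4/1.2.3.4/tcp/443/wss ...]
    }

Returning the full array leaves round robin, always-first, or multipath dialing up to the caller, which matches the design being discussed here.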
<lgierth> dryajov: nice, thanks :)
<MikeFair> lgierth: oh; I thought you meant that you are the caller, figuring out what to do about the array of multiaddrs
Foxcool has joined #ipfs
<lgierth> oh ok. usually applications nowadays do round robin if it's multiple A records
<lgierth> and some things do round robin for AAAA too
<MikeFair> lgierth: but the applications can't make that decision so the resolvers round robin the requests
<lgierth> the ones that don't do AAAA round robin, always pick the first response, claiming something about backward compat
<lgierth> (i can look it up if you want to know)
<lgierth> yeah
<lgierth> it might make sense to have a resolveOne() function too, not sure
<lgierth> i wasn't ready to make that call without implementing and playing with it first :)
<lgierth> what do you think?
<lgierth> can you think of other resolution schemes apart from round robin and always-first?
<lgierth> oh and the multipath thing you said
<MikeFair> lgierth: I can think of an option "one" that isn't "always first"; there's also priority weightings
<MikeFair> lgierth: but from a users API perspective; give me all and give me all/0/
<MikeFair> lgierth: give me all/1/
<MikeFair> if you're just talking about responses to a query
<lgierth> give me all, word
<MikeFair> with priority weightings it's like round robin, but not all answers are equal (typically the weights differ because the links aren't the same bandwidth)
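A small Go sketch of the priority-weighting scheme MikeFair mentions: pick one address from resolve()'s results with probability proportional to a weight (e.g. measured bandwidth). The addresses and weights here are invented for illustration.

    package main

    import (
        "fmt"
        "math/rand"
    )

    type weighted struct {
        addr   string
        weight int
    }

    // pick returns one address, chosen with probability proportional
    // to its weight.
    func pick(addrs []weighted) string {
        total := 0
        for _, a := range addrs {
            total += a.weight
        }
        n := rand.Intn(total)
        for _, a := range addrs {
            if n < a.weight {
                return a.addr
            }
            n -= a.weight
        }
        return addrs[len(addrs)-1].addr
    }

    func main() {
        addrs := []weighted{
            {"/ip4/1.2.3.4/tcp/443", 70}, // fat pipe
            {"/ip4/5.6.7.8/tcp/443", 30}, // slower mirror
        }
        fmt.Println(pick(addrs))
    }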
cyanobacteria has quit [Ping timeout: 245 seconds]
<MikeFair> but to make sure I understand this right; is this about looking up addresses of a DAG entry based on the DNS name as content?
<lgierth> no, purely about addressing things on the network
<lgierth> doesn't even have to be ipfs nodes
<lgierth> dnslink isn't affected
<MikeFair> Oh so data has a multiaddr
<MikeFair> (all the nodes that provide it)
<MikeFair> ?
<lgierth> no this is about addressing processes :) /dns4/ipfs.io/tcp/443/wss/ipfs/Qmfoo addresses an ipfs process on the internet
<lgierth> then you can connect to it and see which multistream protocols it speaks
<MikeFair> I'd expect that to expand ipfs.io into multiple ipv4 addresses
<lgierth> the /ipfs space there is confusing because it's *not* the same as in fs:/ipfs/* paths (we'll be switching it to /p2p because of this confusion, and because it's really addressing libp2p nodes, not ipfs nodes)
<lgierth> MikeFair: yeah cool :)
<lgierth> /dns4.resolve() => [/dns4, /dns4, ...], and same for /dns6
<MikeFair> like /dns4/ipfs.io/10.20.50.2/tcp/443/
<MikeFair> or ipfs.io/dns4/a/10.20.50.2/tcp/443/
<lgierth> you'd keep the unresolved multiaddr around to keep the dns context, i think
<MikeFair> lgierth: humans are going to type names; keep the addrs grouped or the programmer will go nuts trying to keep things efficient
<lgierth> because what you propose means the /dns4 part can now have 1, or 2, or 3 segments, and that makes it hard to work with
<MikeFair> lgierth: an embedded array works too
<MikeFair> lgierth: I was more just pointing out that resolve should reply with "what was asked" and "all you need"
<MikeFair> I guess dns4 == A and dns6 == AAAA
<lgierth> yes
<lgierth> i'm more wondering about why to keep the domain name in the resolve() result multiaddrs
<MikeFair> but it makes it hard to query SRV records if you don't keep the record type
<lgierth> ah. yeah this all doesn't cover any other dns records
<MikeFair> lgierth: Think of it as "cluster name" or "authority context"
<lgierth> and is intentionally minimal in order to not make unneccessary decisions now which will bite us later ;)
<lgierth> later = when people come up with the need for SRV or MX records
<MikeFair> lgierth: if you remove the name that was used to look it up; it will be less useful to cache
wallacoloo_____ has quit [Quit: wallacoloo_____]
suttonwilliamd_ has quit [Ping timeout: 256 seconds]
Foxcool has quit [Ping timeout: 276 seconds]
<lgierth> yes this will very likely work differently than /dns4 and /dns6
<lgierth> i haven't looked into SRV in the context of multiaddr, to be honest
<lgierth> purely A and AAAA
<MikeFair> lgierth: So just so I'm clear; what/when is this resolve expected to be called?
<lgierth> when you want to dial the address
<MikeFair> lgierth: SRV records reply with PORT and ADDRESS
<MikeFair> lgierth: they look different _service._proto._srv.domain.dom iirc
<lgierth> yeah
<lgierth> just saying nobody has brought up an interest in SRV+multiaddr so far, so nobody has looked into it either i guess :)
<MikeFair> lgierth: So if I'm the code; my user has just typed "get me /ipfs/someCID"
<MikeFair> lgierth: hehe; which is why I'm kind of pointing out, keeping the record types around, even if they're usually ignored, won't bite you
<lgierth> ok but you haven't told me yet why it's *useful* for the case of dns4 and dns6 :)
<MikeFair> lgierth: Me being the code, I then say "Hey Daemon! Resolve /ipfs/someCID", to which I'm expected to open connections
<lgierth> i have one case in mind where it might be useful, but is probably not worth the added complexity
<MikeFair> lgierth: Oh because applications (like bonjour and zeroconf) use SRV records for service registration and announcement
<MikeFair> it's how resolution works in those systems; A records aren't used
<lgierth> that's cool, i need to resolve A and AAAA records though ;)
<MikeFair> it includes the 443 part that's been blindly hardcoded into the response above
<MikeFair> lgierth: Not exactly; if someone types the properly formatted DNS name of an SRV record; you're supposed to use that reply
Boomerang has quit [Remote host closed the connection]
<MikeFair> which is why they look different to say "Don't use an A!"
<MikeFair> and if you just want the DNS resolved address, why not just use the local system's DNS resolver?
suttonwilliamd has joined #ipfs
<MikeFair> It seems that what you're really expressing isn't dns4 / dns6 but ipv4 and ipv6
<MikeFair> you're trying to open an IP connection to that content id
<MikeFair> the user provided ipfs.io
<MikeFair> but what you're seeking is a way to actually open a link to ipfs.io; and so ipfs.io gets transformed to its "DNS Reply"/port/servicename/ipfs/Qm...
<lgierth> yeah, connect to ipfs.io, over ipv4
<lgierth> what would "DNS Reply"/port/servicename/ look like?
<MikeFair> So ipfs.io/ipfs/wss/Qm... -> [/ipv4/x.y.w.z/tcp/443/wss, ...]
<lgierth> (multiaddrs are supposed to mirror how the protocols encapsulate each other)
<lgierth> and contain *all* information neccessary to make the connection
<MikeFair> then it's: [/dns/ipfs.io/ipv4/x.y.w.z/tcp/443/wss, ...]
<lgierth> that's why they're explicit about ip, ipv4, tcp, the port, etc.
<lgierth> hrm
<lgierth> i see what you mean, mhhh
<MikeFair> then it's: [/dns/ipfs.io/A/x.y.w.z/tcp/443/wss, ...]
<lgierth> i think we can keep A out of that. it's an implementation detail of dns
<lgierth> might be different with SRV or other types, but A and AAAA are directly coupled to ipv4 and v6
<MikeFair> It's the record type of the dns object; but ok
tmg has joined #ipfs
<lgierth> then i need to know that A means ipv4, and i need to know it *outside* of the resolver, so that's added complexity
<lgierth> but i think /dns4/ipfs.io/ip4/1.2.3.4/tcp/443 might be useful
<lgierth> i.e. keeping the whole /dns4 part
<lgierth> and if there's an /ip4 part encapsulated in that, the resolver knows it's already resolved, which is simple enough
<MikeFair> it's not dns4
HostFat_ has joined #ipfs
AniSkywalker has joined #ipfs
<AniSkywalker> This is getting borderline ridiculous but here I am.
<MikeFair> it's dns returning an array of results, some ipv4 and some ipv6
HostFat__ has quit [Ping timeout: 260 seconds]
<lgierth> right -- just walked away from directly using the /dns protocol name because i wanted to keep it minimal
<MikeFair> like udp versus tcp ports
<MikeFair> why /dns4 and not /dns again?
<lgierth> because /dns means thinking about *all of dns*
<lgierth> and i wanted just dns for ipv4, and dns for ipv6
<MikeFair> this handles all of dns /dns/ipfs.io/ipv4/w.x.y.z/tcp/port
<MikeFair> You just call an A record ipv4
<SchrodingersScat> has anyone ever uploaded ipfs to ipfs?
<lgierth> my impression of that idea ^ was that you'd resolve /dns/ipfs.io to /dns/ipfs.io/ip4/1.2.3.4
<AniSkywalker> SchrodingersScat Sure, gx :)
<lgierth> we can't *require* to have the ip address in there, because that'd defeat most of the purpose of using dns names
<MikeFair> lgierth: I get your point
<MikeFair> no, that's not what I was thinking
<MikeFair> but now that I see it your way, it starts /ipv4/
<lgierth> it's a very abstract and complicated topic and i tried to take only on a very small chunk of that problem space
tmg has quit [Ping timeout: 240 seconds]
<MikeFair> So yeah /dns/ipfs.io/webconference/Qm... -> [/ipv4/w.x.y.z/tcp/N/...
<MikeFair> I'd personally rather see /dns/ipfs.io/... -> [/ipfs/CID1, /ipfs/CID2]
<MikeFair> and then those resolve to peer links
<lgierth> yeah that'd be for example /dnslink/ipfs.io => [/ipfs/CID1, ...]
<lgierth> right now it's mashed into /ipns :/
<lgierth> should be cleanly separated
<MikeFair> CID in this case could point to a service hosted by a peer
<lgierth> ah, a peerid
<lgierth> word
<lgierth> /dnsaddr :)
<MikeFair> but it's a specific thing that peer is providing
<lgierth> same concept as dnslink, but for looking up multiaddrs in TXT records
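For reference, a rough Go sketch of the dnslink half of this: look up TXT records and pull out a dnslink=/ipfs/... value. The _dnslink. prefix follows the dnslink convention (records were often placed on the bare domain as well at the time); error handling is minimal.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // resolveDNSLink returns the dnslink target for a domain, e.g. /ipfs/Qm...
    func resolveDNSLink(domain string) (string, error) {
        txts, err := net.LookupTXT("_dnslink." + domain)
        if err != nil {
            return "", err
        }
        for _, txt := range txts {
            if strings.HasPrefix(txt, "dnslink=") {
                return strings.TrimPrefix(txt, "dnslink="), nil
            }
        }
        return "", fmt.Errorf("no dnslink record on %s", domain)
    }

    func main() {
        path, err := resolveDNSLink("ipfs.io")
        if err != nil {
            panic(err)
        }
        fmt.Println(path) // e.g. /ipfs/Qm...
    }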
fleeky__ has joined #ipfs
<MikeFair> I'm just trying to stick with ipfs as ipfs for addressing -- peer ids are the moral equivalent of ip addresses
fleeky_ has quit [Ping timeout: 256 seconds]
<AniSkywalker> When you refactor too hard http://imgur.com/a/9L9YP
<MikeFair> my ipfs API should "open a link to this ContentAddress"
<AniSkywalker> hsanjuan I don't even know how to put this... I've literally refactored the entire ipfs-cluster codebase.
<MikeFair> and that becomes a link to a peer address providing that content
<MikeFair> which involves some magic in the daemon to figure "how" to connect me
<lgierth> some magic = peer routing
<lgierth> :)
<lgierth> you can ipfs swarm connect /ipfs/Qmfoo
<MikeFair> right but I'm just saying from a programmer's API; my resolver never gives me anything but an ipfs content address
<lgierth> in fact it'll ignore the address if you do ipfs swarm connect /ip4/..., and just unconditionally do peer routing
<lgierth> MikeFair: oh yeah <3 got it
<MikeFair> And what I like about that is /ipfs/ContentAddressOfMultiPartyConference
<MikeFair> And there's a bunch of peer ids that are announcing they provide that
<lgierth> and then everybody just listens for pubsub messages with that path as its topic
<lgierth> (that's how orbit works i think)
<lgierth> you have an object being the channel metadata, and then nodes publishing pubsub message with their latest data for that channel
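A rough sketch of that listening pattern against go-ipfs's HTTP API. Pubsub was experimental at this point (the daemon needs --enable-pubsub-experiment), the topic name here is made up, and the exact response framing may differ between versions.

    package main

    import (
        "bufio"
        "fmt"
        "net/http"
        "net/url"
    )

    func main() {
        topic := "/orbitdb/QmExampleChannel" // hypothetical topic
        resp, err := http.Post(
            "http://127.0.0.1:5001/api/v0/pubsub/sub?arg="+url.QueryEscape(topic),
            "", nil)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        sc := bufio.NewScanner(resp.Body) // one JSON message per line
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
    }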
<MikeFair> In this case; the connector code would query "who provides: Qm..." and that would return PeerIds; which could have links established :)
<MikeFair> lgierth: I was thinking stream oriented connections
<MikeFair> lgierth: ipfs session Qm....
<lgierth> i think ipfs already does everything you just said
ShalokShalom has quit [Quit: No Ping reply in 180 seconds.]
<lgierth> pubsub, content routing, peer routing
<lgierth> the routing is exposed as `ipfs dht findprovs` and `ipfs dht findpeer`
<lgierth> ok going to bed
ShalokShalom has joined #ipfs
<lgierth> thanks for the discussion!
<MikeFair> lgierth: yes, exactly what I'm saying; this is ipfs native type material; I don't think they allow me to provide a bidirectional comm link to those endpoints
<MikeFair> lgierth: that's the new thing
voxelot has quit [Read error: Connection reset by peer]
<lgierth> aah, exposing the streams on the cli/api
voxelot has joined #ipfs
<MikeFair> lgierth: my local ipfs based app opens a pipe/connection to the daemon and the daemon replies "ok you're now CID Qm..."
<lgierth> Magik6k has a pull request exposing corenet to the cli
<lgierth> and do e.g. ip tunnels over it
<lgierth> MikeFair: i think you could do that with just a bit of glue, and that corenet PR
<MikeFair> lgierth: great! because that's what the programmer's API should resolve to; so when I want to do a video conference over ipfs; I don't think about webSockets
<MikeFair> lgierth: My ipfs library could; but not me :)
<lgierth> ah, got it
<lgierth> yep
<MikeFair> I think I saw it earlier; but I'll look again
<lgierth> well check out how e.g. openbazaar or mediachain are implementing their protocols on top of libp2p
<lgierth> the interface that you get is a stream
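That stream interface looks roughly like this with go-libp2p; the protocol name is made up and the constructor API has shifted across releases, so treat this as a sketch.

    package main

    import (
        "bufio"
        "fmt"

        "github.com/libp2p/go-libp2p"
        "github.com/libp2p/go-libp2p/core/network"
    )

    func main() {
        h, err := libp2p.New() // default host; options elided
        if err != nil {
            panic(err)
        }
        defer h.Close()

        // Any peer that dials us on this (hypothetical) protocol id
        // gets a raw bidirectional stream.
        h.SetStreamHandler("/myapp/chat/1.0.0", func(s network.Stream) {
            line, _ := bufio.NewReader(s).ReadString('\n')
            fmt.Print("got: ", line)
            s.Close()
        })
        select {} // keep the process alive
    }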
<MikeFair> lgierth: I agree with your point about the resolution; if the query is for dns then only a CNAME record would include /dns in the response
<lgierth> (unrelated to the corenet PR)
<MikeFair> I'm also hoping to "authenticate" to ipfs and get some context of a "user session" that apps can store data into
<MikeFair> kind of like "localStorage" on a browser but is really "ipfsStorage" so my session "lives in the cloud" :)
<MikeFair> lgierth: Thanks for being open to the chat! :)
<AniSkywalker> lgierth So I have a gigantic refactor of ipfs-cluster. What are the chances it gets merged?
<AniSkywalker> hsanjuan isn't around for a few days
MajorScale[m] has joined #ipfs
cyanobacteria has joined #ipfs
<lgierth> great -- you'll have to wait for hsanjuan with that though
<MikeFair> AniSkywalker: what's it do?
<lgierth> good night!
<MikeFair> lgierth: g'night!
<AniSkywalker> MikeFair completely reorganizes ipfs-cluster, changes aspects of it to be more Go-like.
<MikeFair> cool
<MikeFair> Go likes beating me to a pulp before yielding to my will ;)
wallacoloo_____ has joined #ipfs
<AniSkywalker> Oh, also swapped the logger with logrus
<AniSkywalker> MikeFair it's actually ridiculous how much refactoring has taken place
<AniSkywalker> I don't even know if it will compile at first--cyclic dependencies :(
<MikeFair> AniSkywalker: on a git branch I assume?
<MikeFair> passes all tests :)
<AniSkywalker> Fork + branch + local right now :P
<MikeFair> push to fork; what's it like to compile
<MikeFair> I'd be willing to try clone / build
<MikeFair> I haven't installed many Go related packages yet so I'd likely run afoul of dependency problems; I have run into the situation where a dependency used an older version of a vendor provided package that caused method resolution to fail on the newer functions
<MikeFair> (that i was using)
<MikeFair> (I was modifying their module at the time)
<AniSkywalker> Not done yet :P
<AniSkywalker> Once I'm getting close to 95% or so I'll push it
<AniSkywalker> I'm going through and making error messages useful MikeFair (as I switch them to logrus)
<AniSkywalker> Especially for RPC; one of the reasons logrus is great is that it exports to data-friendly formats if needed.
* MikeFair relaxes in a sigh of contentment; "meaningful messages" ahhhh. :)
<AniSkywalker> Thus, making RPC error debugging useful can provide us with valuable insight into the protocol.
AkhILman has quit [Read error: No route to host]
<AniSkywalker> Does 'logrus.WithError(err).WithFields(logrus.Fields{"peerId": pid, "svcName": "Cluster", "svcMethod": "PeerManagerRmPeerShutdown", "clusterId": c.id}).Error("error calling RPC")' count as meaningful?
tmg has joined #ipfs
PrinceOfPeeves has quit [Quit: Leaving]
pfrazee has quit [Remote host closed the connection]
chris613 has quit [Quit: Leaving.]
AkhILman has joined #ipfs
<MikeFair> AniSkywalker: Only if you can also append whatever it was that caused it to decide it had a problem ;)
<AniSkywalker> Yeah, that's the 'WithError' part
<MikeFair> AniSkywalker: oh, yeah;
<MikeFair> "Failed to establish RPC connection"
<MikeFair> (the concept of the message: "What was I doing" and "What happened when I did it")
<AniSkywalker> Yeah, hold on got an updated version of that
<AniSkywalker> MikeFair https://hastebin.com/abagotevax.vbs
<MikeFair> that ipfs cluster link page is a pretty dense read for a newcomer
<MikeFair> Wouldn't that be "Error shutting down Peer Link" or something like that
<MikeFair> AniSkywalker: Do you know the results of the cluster as a virtual peer node conversation?
<AniSkywalker> What I'm going for is, search for RPC errors, filter by type.
<AniSkywalker> MikeFair that's what the format's about
<AniSkywalker> Since we're basically also debugging the protocol, it's useful to know these things.
<AniSkywalker> I.e. "what's the most frequent RPC error?"
<MikeFair> AniSkywalker: ahh "RPC Error: Failed to execute Remote Procedure"
<AniSkywalker> Oh I dunno, sure
<MikeFair> AniSkywalker: ahh "RPC Error: Failed to execute Peer Shutdown"
<AniSkywalker> Oh is that what it says?
<MikeFair> PeerManagerRmPeerShutdown
<AniSkywalker> Where are you looking MikeFair?
<MikeFair> svcmethod
<MikeFair> from your hastepin link
geir_ has quit [Quit: SafeBouncer - znc.safenetwork.org:4876]
<AniSkywalker> MikeFair are you suggesting an error message or saying that's the result? :P
<MikeFair> AniSkywalker: Oh I was suggesting the error message so that when I search/filter for "RPC Error" I'd easily see the service name on the same line
<AniSkywalker> Well, it's more about machine consumption.
<AniSkywalker> Which leads to your consumption.
<AniSkywalker> Have a look at the outputs of https://github.com/sirupsen/logrus
<MikeFair> if it's for a machine; then great
<MikeFair> you've got all the fields
<AniSkywalker> Well it's for both
<AniSkywalker> You can search it like that with this too :P
<MikeFair> He who writes the code makes the call
<MikeFair> :)
<MikeFair> I like the way it sounds from what's been described
<AniSkywalker> I'll push it as soon as I finish this file
<MikeFair> You've got the fields in there; I was concerned I'd only see "error" or "error in procedure" on the line that my search text returned
<AniSkywalker> warning: it probably won't compile at first
<AniSkywalker> still have to play around with structure
<AniSkywalker> Main objectives were switching the logger to logrus and making the structure a bit more sane
<whyrusleeping> I'm gone for like, an hour, and theres 262 new messages in chat? damn
<whyrusleeping> its busy in here
<AniSkywalker> MikeFair done :D
<AniSkywalker> whyrusleeping I just refactored ipfs-cluster
<AniSkywalker> pushing a preliminary. Not sure it even builds but hey it's progress
<whyrusleeping> refactored?
redlizard has joined #ipfs
<whyrusleeping> that seems... unnecessary?
<whyrusleeping> and vague
<AniSkywalker> whyrusleeping you'll see, I completely re-organized it to make it modular
<whyrusleeping> mmmm
<AniSkywalker> Switched logging to logrus too
<MikeFair> whyrusleeping: And upgraded the error message reporting
<AniSkywalker> whyrusleeping MikeFair https://github.com/20zinnm/ipfs-cluster/tree/refactor
<AniSkywalker> it's rough but it's the general idea
<whyrusleeping> switching logging to logrus from what?
<AniSkywalker> logger
<AniSkywalker> whyrusleeping because we need to be able to analyze the log messages, especially RPC drops, etc
<AniSkywalker> Which logrus is especially good at--log analysis
<AniSkywalker> plus it looks cool, but mostly the former ^.^
<hsanjuan> AniSkywalker: please ask before getting into big endeavours. cluster uses ipfs/go-log not by chance, but because that is what the whole libp2p stack uses
tr909 has quit [Ping timeout: 240 seconds]
<AniSkywalker> Is there a reason for that?
<hsanjuan> also small incremental PRs will have better chances for a merge than huge refactors
<AniSkywalker> hsanjuan according to the 'go-log' repo, "It currently uses a modified version of go-logging to implement the standard printf-style log output."
<whyrusleeping> Yeah, we use the logging package we use because it more cleanly separates 'event' type logs from 'trace' type logs
<whyrusleeping> We spent a lot of time using various logging packages (including logrus) and ended up where we are
<AniSkywalker> whyrusleeping logrus does that IMHO better, allowing you to specify the levels and also different fields, etc.
<AniSkywalker> whyrusleeping was there a specific reason to reject logrus?
<AniSkywalker> Because it makes analyzing protocol failures very easy, i.e. noticing RPC drops
<hsanjuan> MikeFair: it's true the ipfs-cluster README is dense. Would you help me by explaining how it would look best for you as a newcomer (best if you put it in an issue)
mguentner has quit [Quit: WeeChat 1.7]
<whyrusleeping> unnecessary complexity, a lack of nice logging features that i wanted (primarily, the lack of filename/line numbers in log messages)
<whyrusleeping> our current logging library can do what you want just fine, and if you have specific cases where it could be better, i would be open to improving that package
anewuser has quit [Ping timeout: 252 seconds]
<AniSkywalker> whyrusleeping that's entirely possible IIRC, but the main reason I like logrus is because ipfs-cluster will likely be running not on local nodes but on production machines, where being able to categorize and aggregate logs is a huge plus.
<whyrusleeping> what i said stands.
john1 has joined #ipfs
<AniSkywalker> That's fine. I respect the decision. In that case, I'd like these features: the ability to add fields/errors attached separately from the message, to make aggregation easy and feasible
<AniSkywalker> The ability to parse the log easily and query it in logstash etc. is uniquely critical to ipfs-cluster, given the target environment.
mguentner has joined #ipfs
shizy has joined #ipfs
<MikeFair> AniSkywalker: Could you aggregate go-logger with the "WithFields" concept?
<MikeFair> (or whatever it is the project is using?)
<AniSkywalker> with logrus, you would set the output formatter to JSON or similar and load it into logstash
<AniSkywalker> query away!
<AniSkywalker> MikeFair see the top of the readme https://github.com/sirupsen/logrus
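For concreteness, here is roughly what that looks like with logrus's JSON formatter; the field names echo the hastebin example above, and the error is fabricated.

    package main

    import (
        "errors"

        "github.com/sirupsen/logrus"
    )

    func main() {
        // Machine-friendly output: one JSON object per log line,
        // ready for logstash or similar aggregators.
        logrus.SetFormatter(&logrus.JSONFormatter{})
        logrus.WithError(errors.New("connection refused")).
            WithFields(logrus.Fields{
                "svcName":   "Cluster",
                "svcMethod": "PeerManagerRmPeerShutdown",
            }).Error("error calling RPC")
        // => {"error":"connection refused","level":"error","msg":"error calling RPC",...}
    }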
<MikeFair> AniSkywalker: And more to your point of production machines; tapping into a log stream exposed by a remote PeerID would be really useful
<whyrusleeping> AniSkywalker: 'Event' and 'EventBegin' are what you want: https://github.com/ipfs/go-log/blob/master/log.go#L74
<MikeFair> AniSkywalker: What I'm saying is upgrade the middle layer that the project has already standardized on and give it new features
<whyrusleeping> also, see `ipfs log tail` for an example of its output
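For comparison, the go-log event pattern whyrusleeping points at looks roughly like this, assuming the go-log API of that era; the subsystem, event name, and helper are made up.

    package main

    import (
        "context"

        logging "github.com/ipfs/go-log"
    )

    var log = logging.Logger("cluster")

    func rmPeer(ctx context.Context, pid string) {
        // Emits a structured begin/end event pair alongside normal log
        // lines; in go-ipfs these events show up under `ipfs log tail`.
        evt := log.EventBegin(ctx, "peerManagerRmPeerShutdown",
            logging.LoggableMap{"peerId": pid})
        defer evt.Done()
        // ... actual shutdown work ...
    }

    func main() {
        rmPeer(context.Background(), "QmExamplePeer")
    }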
<AniSkywalker> I see, so you like the ability to set a time window? whyrusleeping
<AniSkywalker> That's interesting. I'll take a look at that.
<AniSkywalker> Also whyrusleeping I forgot https://texlution.com/post/colog-prefix-based-logging-in-golang/
<MikeFair> AniSkywalker: It also says it's "completely compatible" with something; could that mean you could submit the logrus output through the ipfs project's logger
<whyrusleeping> AniSkywalker: also, avoid comitting your editor config files
<AniSkywalker> Oh my bad
<AniSkywalker> thought I put that in gitignore
<MikeFair> AniSkywalker: It sounds like you're advocating for a particular text message format and calling API; both of which could be sent to a PrintF style logger
tr909 has joined #ipfs
<redlizard> Greetings. I'm doing some research into ipfs, and I wondered whether there are any estimates as to the number of active users of ipfs?
<redlizard> I understand that accurate measurements are tricky if not impossible, of course.
<dansup> there are dozens of us, dozens!
<AniSkywalker> whyrusleeping could you follow the dht to figure out a user count?
<whyrusleeping> redlizard: there are roughly 500 nodes online at any given moment (consistently throughout the day) on the main network
anewuser has joined #ipfs
<whyrusleeping> redlizard: But every given day I see 5,000 or more nodes come online and offline
<redlizard> whyrusleeping: Alright, thanks! That's an excellent indication to start with.
<AniSkywalker> whyrusleeping my last "complaint" was that in the logging that I did see, there was a lot of "fmt.Sprintf("something something something: %s, %s", foo, bar)" when there would be no context for what the replacements are.
<AniSkywalker> Whether or not that's the logging library's fault I don't know, but I do know logrus forces the developer to think about each log message.
<AniSkywalker> Which ultimately leads to beautiful things like https://hastebin.com/abagotevax.vbs
<redlizard> whyrusleeping: So maybe 20,000 users using it regularly, perhaps?
<AniSkywalker> redlizard that's not unreasonable if a bit conservative
<AniSkywalker> There's a lot of us :)
<redlizard> AniSkywalker: I would sort of expect users running ipfs at all to mostly have it running whenever their computer is awake and internet connected.
<AniSkywalker> When I'm trying to run a cluster on Google cloud platform and I want to feed the log data into logstash to figure out where the RPC bottlenecks are, that's logrus. Also, the events you show are essentially what I showed in the hastebin, if not the primary purpose.
<redlizard> It's usually run as a daemon that is there whether you are actively using it or not, right?
<AniSkywalker> redlizard Well ipfs yes, but we're working on ipfs-cluster
<AniSkywalker> Which is a bunch of nodes coming together to act like one node in the network.
<AniSkywalker> redlizard https://github.com/ipfs/ipfs-cluster
* redlizard reads
<AniSkywalker> The use-case is for companies trying to offer hosted IPFS pinning, or groups of people with similar concerns (federal climate data, etc) redlizard
<redlizard> AniSkywalker: Hm... I expect that raises the (number of users occasionally pulling data off ipfs) / (number of connected nodes on a daily basis) ratio considerably.
<MikeFair> redlizard: Yes; if you want to count "using it" as serving/participating; but if you extend that definition to include "consumers of content" then you include all people who see ipfs.io
<AniSkywalker> Well, they act like one node. It certainly increases the commercial possibilities.
anewuser has quit [Ping timeout: 256 seconds]
<AniSkywalker> redlizard what if I told you that Dropbox already uses something similar to IPFS for their backend storage?
<AniSkywalker> They chunk data into blocks, much like us, and store those blocks.
<redlizard> AniSkywalker: That's kinda cheating given that ipfs is largely a "collect a bunch of well-proven techniques into one awesome whole" project, no? ;-)
<redlizard> Naturally the backing techniques are in common use!
<AniSkywalker> Obviously not well known if FTP is still popular.
<AniSkywalker> However, you're correct.
<AniSkywalker> IPFS is an assemblage of the best parts of all the great technologies.
<MikeFair> (*At least for its intended use cases; it doesn't do anonymity or privacy exceptionally well)
<redlizard> MikeFair: Clearly y'all should focus on libeverything instead :-)
<MikeFair> redlizard: of course! despite the mathematical proofs that define such a thing to be impossible!
<redlizard> Bah. Excuses.
<AniSkywalker> We have libp2p for libeverything
<AniSkywalker> and multiformats
<MikeFair> redlizard: Like I can't authenticate you are so-and-so with any level of assurances and keep your identity hidden from myself at the same time :)
<redlizard> MikeFair: Next you're gonna tell me that ipfs doesn't do full CAP either!
<MikeFair> redlizard: of course it does! I'm just going to redefine the parameters of success for what full CAP means in any given situation! :)
<MikeFair> problem was already solved! :)
<AniSkywalker> Caddyserver might be able to warn people about compromised TLS
anewuser has joined #ipfs
* MikeFair thinks he should file a PR to declare all problems solved by redefining success as whatever it did at the time. :)
<redlizard> MikeFair: I'm a fan of the "write TRIVIAL in increasingly large fonts until you successfully intimidate the audience" technique.
<MikeFair> hehe
Guest17873 has joined #ipfs
<MikeFair> redlizard: I'm curious if you're thinking of readers of the ipfs.io website as "active users"
<MikeFair> (it's likely not the only one; just the only one I know of)
shizy has quit [Quit: WeeChat 1.7]
<redlizard> MikeFair: The ipfs.io webserver serves an ipfs mount?
<redlizard> Why of course it does.
mguentner2 has joined #ipfs
<redlizard> Anyway, I'm only counting people running the actual software :)
<redlizard> [One of them, anyway.]
<AniSkywalker> That's the best thing... you can have IPFS and most people don't even know it.
mguentner has quit [Ping timeout: 240 seconds]
* MikeFair nods.
<MikeFair> redlizard: Beyond a mount; it's a gateway
<MikeFair> redlizard: So http://ipfs.io/ip[fn]s/Qm... will serve anything from ipfs
<MikeFair> redlizard: Handy for "Could other people on the planet really see that thing I put out there?"
<redlizard> Ah, neat!
Guest17873 has quit [K-Lined]
chungy has quit [Ping timeout: 240 seconds]
apiarian has quit [Ping timeout: 268 seconds]
chungy has joined #ipfs
<AniSkywalker> ok night
<MikeFair> AniSkywalker: gnight
<AniSkywalker> MikeFair pushed a few changes
<AniSkywalker> ok im done for tonight
AniSkywalker has quit [Quit: Textual IRC Client: www.textualapp.com]
yar has quit [Excess Flood]
yar has joined #ipfs
<whyrusleeping> redlizard: yeah, i wouldnt feel out of place saying 20,000
<whyrusleeping> There are quite a few different groups that are using it in their own networks
<whyrusleeping> for example, open bazaars network is completely separated from the main network right now
<MikeFair> whyrusleeping: Do you know what it takes for the daemon to expose localhost:5001/api?
<whyrusleeping> MikeFair: its exposed locally, on the loopback address
<whyrusleeping> are you saying you want to expose it to external things?
<MikeFair> no I'm saying I get 404 Not Found
<whyrusleeping> thats normal, theres nothing at that exact endpoint
<whyrusleeping> try like, /api/v0/id
<MikeFair> oh nm; it's the particular api call ippastebin was hitting
<MikeFair> Access-Control-Allow-Origin
<MikeFair> thanks
<MikeFair> whyrusleeping: Where is the work done on the internode binary transmission protocol?
<whyrusleeping> libp2p
<MikeFair> Hmm; does that have the concept of messages/objects
<MikeFair> Specifically I'd like to try out a concept of "interned strings" on nodes
<MikeFair> Something that goes for replacing all strings with a 24-bit hash + 8 bit collision id
<MikeFair> So all frequently repeated text would get turned into a 32-bit id in the network
<MikeFair> (4 bytes)
<MikeFair> well 5 because of the type identifier that says "this is an interned string"
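A toy Go sketch of that interning idea: each string maps to a 32-bit id built from a 24-bit hash plus an 8-bit collision counter. The hash choice and table layout are invented for illustration.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // Interner maps strings to 32-bit ids: 24 bits of FNV-1a hash plus an
    // 8-bit collision counter, so frequently repeated text travels as 4 bytes
    // (5 with a type tag, as noted above).
    type Interner struct {
        byID   map[uint32]string
        byText map[string]uint32
        nextC  map[uint32]uint8 // next free collision slot per 24-bit hash
    }

    func NewInterner() *Interner {
        return &Interner{
            byID:   map[uint32]string{},
            byText: map[string]uint32{},
            nextC:  map[uint32]uint8{},
        }
    }

    func (in *Interner) Intern(s string) uint32 {
        if id, ok := in.byText[s]; ok {
            return id
        }
        h := fnv.New32a()
        h.Write([]byte(s))
        h24 := h.Sum32() >> 8 // keep the top 24 bits
        // The low byte disambiguates collisions; a real design would handle
        // more than 256 collisions per hash bucket.
        id := h24<<8 | uint32(in.nextC[h24])
        in.nextC[h24]++
        in.byID[id] = s
        in.byText[s] = id
        return id
    }

    func main() {
        in := NewInterner()
        fmt.Printf("%08x\n", in.Intern("/ipfs/QmExample"))
    }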
aquentson has joined #ipfs
tmg has quit [Ping timeout: 260 seconds]
Foxcool has joined #ipfs
mildred has quit [Ping timeout: 256 seconds]
<MikeFair> How do I see the DAG entry for : QmeBmMobTdv2m7HqhT8NygjguUL5Ra3SXQD5KVfNPsLEKa
<MikeFair> nm
muvlon has quit [Ping timeout: 245 seconds]
mildred has joined #ipfs
ygrek has quit [Ping timeout: 260 seconds]
ShalokShalom has quit [Quit: No Ping reply in 180 seconds.]
helminthiasis has quit [Remote host closed the connection]
muvlon has joined #ipfs
ShalokShalom has joined #ipfs
speos has joined #ipfs
saintromuald has quit [Ping timeout: 245 seconds]
ulrichard has joined #ipfs
ygrek has joined #ipfs
w00dsman has joined #ipfs
Foxcool has quit [Ping timeout: 264 seconds]
mildred has quit [Ping timeout: 264 seconds]
captain_morgan has quit [Remote host closed the connection]
saintromuald has joined #ipfs
dignifiedquire has joined #ipfs
Foxcool has joined #ipfs
wallacoloo_____ has quit [Quit: wallacoloo_____]
mildred has joined #ipfs
maciejh has joined #ipfs
Foxcool has quit [Ping timeout: 240 seconds]
saintromuald has quit [Ping timeout: 240 seconds]
arpu has quit [Ping timeout: 240 seconds]
Foxcool has joined #ipfs
ylp has joined #ipfs
chungy has quit [Quit: ZNC - http://znc.in]
arpu has joined #ipfs
captain_morgan has joined #ipfs
gmoro has joined #ipfs
<jbenet> @daviddias or other js-ipfs-api people: in node, why does `ipfsApi(...).add(path, ...)` NOT give me a nice fs? it gives me multipart garbage: https://ipfs.io/ipfs/QmbPJ7vRoT4HBHJAMyu5YhtuzyNByYMBxxW9h8qW9no45e (it thinks that's the resulting hash)
chungy has joined #ipfs
<daviddias> jbenet: checking
Mizzu has joined #ipfs
Foxcool has quit [Ping timeout: 240 seconds]
jeraldv has quit [Ping timeout: 240 seconds]
jeraldv has joined #ipfs
ShalokShalom_ has joined #ipfs
dem is now known as demize
ShalokShalom has quit [Ping timeout: 276 seconds]
ShalokShalom_ has quit [Read error: Connection reset by peer]
<dryajov> @jbenet: wrong content type?
G-Ray_ has joined #ipfs
rendar has joined #ipfs
robattila256 has quit [Ping timeout: 260 seconds]
saintromuald has joined #ipfs
<substack> text rendering adventure complete, now I can focus on some other shit for a while https://gateway.ipfs.io/ipfs/QmSvzc7DZQpvJESF3oqkAKomfY9QCoEcoqbRS2rUqvDxhR
<jbenet> substack: that's pretty cool
<substack> jbenet: also mikola has got p2p routing working!
<substack> I'm going to whip up some visuals for it
<jbenet> wow, that's great news
<substack> still some perf gains to be had but it seems to generate plausible results
<r0kk3rz> substack: that sounds really really cool
<substack> will know for sure when we actually can draw the routes
<substack> still need some other pieces like a p2p geocoder but ¯\_(ツ)_/¯
cyanobacteria has quit [Ping timeout: 258 seconds]
<substack> soon: p2p offline google maps
<Mateon1> substack: Ah, that's cool. Have you created tiles from your peermaps data? Or does everybody have to make their own from the osm/o5m files?
<damongant> a wild substack
<damongant> I love your github mate.
maciejh has quit [Ping timeout: 240 seconds]
<substack> Mateon1: QmNPkqYfis1XV2CcAyE9ByttxGnvvtVJ4VfFXtbBWnd7fW
<Mateon1> substack: I should probably clarify, as from my understanding, in order to actually render anything using these tiles you have to parse recursively all the way down. Is that correct?
<Mateon1> I meant more like vector tiles
<substack> Mateon1: you have to read the meta.json files to figure out your extents
<substack> or you can use the peermaps command
<substack> npm i -g peermaps
<substack> for example you can run this command to print out all the o5m files for hawaii: peermaps files -156.064270 18.9136925 -154.8093872 20.2712955
<Mateon1> substack: Right, but in order to render an area, you have to parse all o5m tiles intersecting that area. Sorry if I'm using the wrong terminology here, not a mapping addict :)
<substack> yes
<substack> I'm working on a tile format
<Mateon1> I'm asking if it's possible to get "rough" tiles at different scales, like is useful for a map viewer
<substack> there's nothing to do that right now
<substack> I mean there are tools to do this in osm land but there's a lot of shit to wade through
<substack> and when you find something it's generally pretty hard to stand it up correctly
mildred4 has joined #ipfs
<substack> it was already quite a feat to get ad-hoc extracts working, but there's lots more to be done
ecloud_wfh is now known as ecloud
[1]MikeFair has joined #ipfs
anewuser has quit [Quit: anewuser]
MikeFair has quit [Ping timeout: 260 seconds]
[1]MikeFair is now known as MikeFair
<Mateon1> Huh, why does 'ipfs pin' only request one hash at a time? That seems incredibly inefficient.
<Mateon1> I can probably do it faster with some parallel xargs
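A rough Go sketch of that workaround, in the spirit of Mateon1's parallel xargs: pin many hashes concurrently against the local API. The /api/v0/pin/add endpoint is the standard one; the hash list and concurrency limit are placeholders.

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        hashes := []string{"QmHashOne", "QmHashTwo", "QmHashThree"} // placeholders
        var wg sync.WaitGroup
        sem := make(chan struct{}, 8) // at most 8 pins in flight
        for _, h := range hashes {
            wg.Add(1)
            go func(h string) {
                defer wg.Done()
                sem <- struct{}{}
                defer func() { <-sem }()
                resp, err := http.Post(
                    "http://127.0.0.1:5001/api/v0/pin/add?arg="+h, "", nil)
                if err != nil {
                    fmt.Println("pin failed:", h, err)
                    return
                }
                resp.Body.Close()
            }(h)
        }
        wg.Wait()
    }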
betei2 has joined #ipfs
betei2 has quit [Client Quit]
maxlath has joined #ipfs
<jbenet> !pin QmVNk5fhsdzBdKxNBXodet6DsWQCjAWbmooBWxRscLeqRq
<pinbot> usage: !pin <hash> <label>
<jbenet> !pin QmVNk5fhsdzBdKxNBXodet6DsWQCjAWbmooBWxRscLeqRq asciinema-selfhost intro
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmVNk5fhsdzBdKxNBXodet6DsWQCjAWbmooBWxRscLeqRq
<jbenet> !pin QmdxuPWGuCQrf4zGQtxYqxUWJDMWG5wRScn6SpfLKGTTAx asciinema-selfhost intro cover
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/QmdxuPWGuCQrf4zGQtxYqxUWJDMWG5wRScn6SpfLKGTTAx
wkennington has quit [Quit: Leaving]
Foxcool has joined #ipfs
tmg has joined #ipfs
aquentson has quit [Ping timeout: 256 seconds]
maciejh has joined #ipfs
aquentson has joined #ipfs
vapid has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
anewuser has joined #ipfs
cornu[m] has joined #ipfs
mildred has quit [Ping timeout: 240 seconds]
mildred4 has quit [Ping timeout: 240 seconds]
gmoro has quit [Remote host closed the connection]
gmoro has joined #ipfs
<Mateon1> Oh wow, generating map tiles is a pain
<Mateon1> I'm definitely not doing this on Windows
<Mateon1> Needs Mapnik, PostgreSQL + PostGIS, and I haven't read through the whole wiki page yet
john1 has quit [Quit: WeeChat 1.7]
espadrine has joined #ipfs
Encrypt has joined #ipfs
mildred has joined #ipfs
mildred4 has joined #ipfs
john1 has joined #ipfs
ygrek has quit [Ping timeout: 255 seconds]
kthnnlg has joined #ipfs
bastianilso has joined #ipfs
jkilpatr_ has quit [Ping timeout: 276 seconds]
Encrypt has quit [Quit: Quit]
bastianilso has quit [Read error: Connection reset by peer]
bastianilso has joined #ipfs
betei2 has joined #ipfs
kthnnlg has quit [Remote host closed the connection]
UHck_ has joined #ipfs
jkilpatr has joined #ipfs
cemerick has joined #ipfs
cemerick has quit [Ping timeout: 276 seconds]
cemerick has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
ebarch has joined #ipfs
UHck_ has quit [Ping timeout: 256 seconds]
pcre has quit [Ping timeout: 240 seconds]
Foxcool has quit [Ping timeout: 258 seconds]
Foxcool has joined #ipfs
Encrypt has joined #ipfs
slothbag has quit [Quit: Leaving.]
caiogondim has joined #ipfs
arpu has quit [Remote host closed the connection]
PrinceOfPeeves has joined #ipfs
suttonwilliamd_ has joined #ipfs
baikal has quit [Ping timeout: 240 seconds]
[0__0] has quit [Remote host closed the connection]
[0__0] has joined #ipfs
suttonwilliamd has quit [Ping timeout: 240 seconds]
Guest154195[m] has joined #ipfs
dPow has quit [Read error: Connection reset by peer]
baikal has joined #ipfs
kode54 has quit [Ping timeout: 258 seconds]
dPow has joined #ipfs
aquentson1 has joined #ipfs
aquentson has quit [Ping timeout: 245 seconds]
Caterpillar has joined #ipfs
cemerick has quit [Ping timeout: 268 seconds]
maxlath has joined #ipfs
kode54 has joined #ipfs
kanzure has quit [Ping timeout: 240 seconds]
kanzure has joined #ipfs
cwahlers has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Foxcool has quit [Ping timeout: 240 seconds]
suttonwilliamd__ has joined #ipfs
suttonwilliamd_ has quit [Ping timeout: 256 seconds]
DiCE1904 has joined #ipfs
AniSkywalker has joined #ipfs
<haad> !pin Qmaf9b7UDuyfSYRUuHQae7zrBH7V31Y5qEaRstNXCQvJbQ orbit-test
<pinbot> now pinning on 8 nodes
<pinbot> pinned on 8 of 8 nodes (0 failures) -- https://ipfs.io/ipfs/Qmaf9b7UDuyfSYRUuHQae7zrBH7V31Y5qEaRstNXCQvJbQ
shizy has joined #ipfs
<haad> if anyone has a second, jump in to Orbit fromthe link ^, wanna test something
<haad> join #ipfs
<betei2> I'm in
<haad> see anything?
<betei2> nope - just an empty channel
<haad> I see someone else
suttonwilliamd_ has joined #ipfs
suttonwilliamd__ has quit [Ping timeout: 240 seconds]
tmg has quit [Ping timeout: 240 seconds]
cwahlers has joined #ipfs
ashark has joined #ipfs
<haad> thank you whoever was lol, got tested what I needed to test :)
Foxcool has joined #ipfs
Foxcool has quit [Ping timeout: 258 seconds]
jkilpatr has quit [Quit: Leaving]
jkilpatr has joined #ipfs
lgierth changed the topic of #ipfs to: go-ipfs v0.4.5 is out: https://dist.ipfs.io/go-ipfs | Week 7+8: 1) IPFS in web browsers https://git.io/vDyDE 2) Private networks https://git.io/vDyDh 3) Cluster https://git.io/vDyyt | Roadmap: https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://
lgierth changed the topic of #ipfs to: go-ipfs v0.4.5 is out: https://dist.ipfs.io/go-ipfs | Week 7+8: 1) Web browsers https://git.io/vDyDE 2) Private networks https://git.io/vDyDh 3) Cluster https://git.io/vDyyt | Roadmap: https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/v
lgierth changed the topic of #ipfs to: go-ipfs v0.4.5: https://dist.ipfs.io/go-ipfs | Week 7+8: 1) Web browsers https://git.io/vDyDE 2) Private networks https://git.io/vDyDh 3) Cluster https://git.io/vDyyt | Roadmap: https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0
<lgierth> sorry about the topic spam
bastianilso has quit [Quit: bastianilso]
sprint-helper1 has quit [Remote host closed the connection]
sprint-helper has joined #ipfs
Foxcool has joined #ipfs
espadrine_ has joined #ipfs
<Mateon1> haad: Might have been me, I opened orbit up a few hours ago, and had no peers, so left it in the background
espadrine has quit [Ping timeout: 260 seconds]
Foxcool has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
AniSkywalker has quit [Quit: Textual IRC Client: www.textualapp.com]
<richardlitt> sprint-helper: botsnack
<sprint-helper> om nom nom
<richardlitt> sprint-helper: help
<sprint-helper> Feedback: https://github.com/ipfs/sprint-helper
<sprint-helper> Correct usage: sprint-helper: announce <args> | next | now | tomorrow | help
Foxcool has joined #ipfs
Foxcool has quit [Ping timeout: 258 seconds]
speos has quit [Remote host closed the connection]
Foxcool has joined #ipfs
maxlath has quit [Ping timeout: 258 seconds]
ylp has quit [Quit: Leaving.]
w00dsman has quit [Quit: Leaving]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
ulrichard has quit [Remote host closed the connection]
maxlath has joined #ipfs
Caterpillar has joined #ipfs
sprint-helper has quit [Remote host closed the connection]
sprint-helper has joined #ipfs
Foxcool has quit [Ping timeout: 240 seconds]
mildred4 has quit [Ping timeout: 240 seconds]
krzysiekj has quit [Ping timeout: 258 seconds]
krzysiekj has joined #ipfs
cemerick has joined #ipfs
mildred has quit [Ping timeout: 255 seconds]
<nicolagreco> If anyone is into VR, please participate into this: https://github.com/ipfs/notes/issues/219 !
AniSkywalker has joined #ipfs
MikeFair has quit [Ping timeout: 256 seconds]
mildred has joined #ipfs
matoro has quit [Ping timeout: 256 seconds]
AniSkywalker has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
ygrek has joined #ipfs
ShalokShalom has joined #ipfs
matoro has joined #ipfs
AniSkywalker has joined #ipfs
AniSkywalker has quit [Client Quit]
Boomerang has joined #ipfs
mildred has quit [Ping timeout: 255 seconds]
<kevina> whyrusleeping, Kubuxu: could we get https://github.com/ipfs/go-ipfs/pull/3693 in?
Boomerang has quit [Ping timeout: 240 seconds]
<kevina> want to avoid another conflict when rebasing https://github.com/ipfs/go-ipfs/pull/3671 (pin progress)
<kevina> cool, thanks
Boomerang has joined #ipfs
kanred has joined #ipfs
Encrypt has quit [Quit: Quit]
mildred has joined #ipfs
ygrek has quit [Ping timeout: 245 seconds]
G-Ray_ has quit [Quit: G-Ray_]
galois_d_ has joined #ipfs
maxlath has quit [Ping timeout: 260 seconds]
dimitarvp has joined #ipfs
galois_dmz has quit [Ping timeout: 256 seconds]
Vladislav has joined #ipfs
Guest154416[m] has joined #ipfs
aquentson has joined #ipfs
aquentson1 has quit [Ping timeout: 240 seconds]
robattila256 has joined #ipfs
A124 has quit [Ping timeout: 240 seconds]
maciejh has quit [Ping timeout: 240 seconds]
A124 has joined #ipfs
matoro has quit [Ping timeout: 240 seconds]
Caterpillar has quit [Quit: bfoijdsfojdsfoijdeoijfdsgoijdsfoijsdoijfsoidjfoijdsf]
jkilpatr has quit [Quit: Leaving]
jkilpatr has joined #ipfs
ygrek has joined #ipfs
cyanobacteria has joined #ipfs
matoro has joined #ipfs
espadrine_ has quit [Ping timeout: 240 seconds]
betei2 has quit [Quit: Lost terminal]
Aranjedeath has quit [Quit: Three sheets to the wind]
leeola has joined #ipfs
Aranjedeath has joined #ipfs
galois_d_ has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
galois_dmz has quit [Client Quit]
Caterpillar has joined #ipfs
maxlath has joined #ipfs
espadrine has joined #ipfs
cemerick has quit [Ping timeout: 240 seconds]
cemerick has joined #ipfs
Vladislav has quit [Ping timeout: 258 seconds]
Encrypt has joined #ipfs
jkilpatr has quit [Quit: Leaving]
jkilpatr has joined #ipfs
whereswaldon has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
jkilpatr has quit [Client Quit]
jkilpatr has joined #ipfs
grusr has joined #ipfs
cemerick has quit [Ping timeout: 256 seconds]
cemerick has joined #ipfs
whereswaldon has quit [Remote host closed the connection]
patience has joined #ipfs
Boomerang has quit [Quit: Lost terminal]
Mizzu has quit [Ping timeout: 240 seconds]
chungy has quit [Ping timeout: 245 seconds]
JustinDrake has joined #ipfs
<JustinDrake> Is there a flag to opt-in to CIDv1, e.g. for `ipfs add`?
chungy has joined #ipfs
<whyrusleeping> JustinDrake: not yet. I can probably add that quickly though
JustinDrake has quit [Quit: JustinDrake]
pfrazee has quit [Read error: Connection reset by peer]
bastianilso has joined #ipfs
kanred has quit [Read error: Connection reset by peer]
kanred has joined #ipfs
<grusr> Hi. I have a folder with some movies from the Blender Foundation's open projects, https://www.blender.org/features/projects/. Sintel or Big Buck Bunny might be the ones most people know of. The movies are creative commons licensed. Is it ok to paste the link here?
<lgierth> grusr: yes sure
<lgierth> i think sintel is already somewhere out there
<grusr> Yep, I have seen Sintel out there, I have some more copies with higher resolution.
<grusr> QmS6wQ5842gmBtsfApLKaNwkfmrSYePiJgXm2LRSvQntr2 Folder with movies from the Blender Foundation's open projects, https://www.blender.org/features/projects/
pfrazee has joined #ipfs
<lgierth> if that's cool for you i'll copy that to one of our storage nodes
<grusr> lgierth: Please do.
<lgierth> :)
<kyledrake> lgierth still down for a call in 20 mins?
<lgierth> kyledrake: yep! gotta run to the supermarket but will bbe back in a few
<kyledrake> lgierth take your time!
<kyledrake> Ping me when you're ready
gmoro_ has joined #ipfs
gmoro has quit [Ping timeout: 240 seconds]
cxl000 has joined #ipfs
garrettr[m] has joined #ipfs
cemerick has quit [Ping timeout: 240 seconds]
cemerick has joined #ipfs
JustinDrake has joined #ipfs
<JustinDrake> @whyrusleeping: That would be great. Should I open an issue?
<whyrusleeping> JustinDrake: yes please!
<lgierth> grusr: what kind of connection is your node on? just wondering if it's bitswap not behaving very stably or whether it's the connection
<lgierth> kyledrake: cool i'm back
<kyledrake> K one minute, need to get laptop
cemerick has quit [Ping timeout: 240 seconds]
Guest154554[m] has joined #ipfs
<grusr> lgierth : I am on a fibre connection. I have 3 nodes, one of them is sitting outside of my firewall and all the files should be on that node with 100Mbit/s up.
john1 has quit [Ping timeout: 276 seconds]
mildred1 has joined #ipfs
JustinDrake has quit [Quit: JustinDrake]
<grusr> lgierth: The other 2 nodes have some of the data, on different ips but on the same connection; one of those is behind a firewall and a VPN but should have an open port, and the other one is behind a VPN but without the firewall.
pfrazee has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
mildred has quit [Ping timeout: 252 seconds]
mikeal has joined #ipfs
galois_d_ has joined #ipfs
Mizzu has joined #ipfs
galois_dmz has quit [Ping timeout: 256 seconds]
jkilpatr has quit [Ping timeout: 255 seconds]
galois_d_ has quit [Client Quit]
UraniumFighters1 has joined #ipfs
UraniumFighters1 has left #ipfs [#ipfs]
galois_dmz has joined #ipfs
cxl000 has quit [Quit: Leaving]
JustinDrake has joined #ipfs
JustinDrake has quit [Client Quit]
grosscol has joined #ipfs
pfrazee has joined #ipfs
slothbag has joined #ipfs
aquentson1 has joined #ipfs
aquentson has quit [Ping timeout: 260 seconds]
<lgierth> cool .)
sprint-helper1 has joined #ipfs
sprint-helper has quit [Write error: Broken pipe]
jkilpatr has joined #ipfs
bastianilso has quit [Quit: bastianilso]
Mizzu has quit [Ping timeout: 264 seconds]
pcre has joined #ipfs
grosscol has quit [Quit: Leaving]
kanred has quit [Remote host closed the connection]
<grusr> lgierth: Are you putting all data from that hash onto the storage node?
<lgierth> yeah
<lgierth> just ipfs refs -r
Arathorn has quit [Quit: ZNC - http://znc.in]
patience has quit [K-Lined]
Guest15002 has joined #ipfs
<grusr> lgierth: Nice. There is a lot more data from these movies that can be published for free, I might do some more if I get the time.
<lgierth> grusr: awesome :) check out github.com/ipfs/archives too
<lgierth> we try to get dataset hashes, and the code used to produce them in there
<Mateon1> I wish WARC files were slightly more seamless to work with, if that was the case, I would archive _everything_
palkeo has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
tmg has joined #ipfs
<grusr> lgierth: Have been doing a little bit of reading in /archives, only been on ipfs for a few days so there is much to learn yet.
<lgierth> it's the best rabbit hole i've ever been in
Mossfeldt has joined #ipfs
rawtaz has quit [Quit: bailing]
pfrazee_ has joined #ipfs
pfrazee has quit [Ping timeout: 264 seconds]
MikeFair has joined #ipfs
wallacoloo_____ has joined #ipfs
MikeFair has quit [Ping timeout: 240 seconds]
MikeFair has joined #ipfs
pfrazee_ has quit [Ping timeout: 240 seconds]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
pfrazee has joined #ipfs
ashark has quit [Ping timeout: 276 seconds]
pfrazee has quit [Ping timeout: 255 seconds]
Guest15002 has quit [K-Lined]
Encrypt has quit [Quit: Sleeping time!]
pcre has quit [Remote host closed the connection]
galois_d_ has joined #ipfs
slumberproof has joined #ipfs
fleeky__ has quit [Ping timeout: 260 seconds]
aquentson has joined #ipfs
fleeky_ has joined #ipfs
chriscool has quit [Ping timeout: 255 seconds]
galois_dmz has quit [Ping timeout: 256 seconds]
aquentson1 has quit [Ping timeout: 276 seconds]
mikeal has quit [Quit: Connection closed for inactivity]
stoopkid has quit [Quit: Connection closed for inactivity]
espadrine has quit [Ping timeout: 240 seconds]
realisation has joined #ipfs
matoro has quit [Ping timeout: 255 seconds]
shizy has quit [Ping timeout: 260 seconds]