stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.18 and js-ipfs 0.34 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Con
<void09> yes that works in the scenario of one stable tunnel
<void09> but many ephemeral ones?
lambskin[m] has joined #ipfs
ctOS has joined #ipfs
<MikeFair> void09: I thought TOR would reroute the tunnel over time as the "stable tunnel" kept running
<MikeFair> void09: Or is the route through the TOR network established at connection time
<MikeFair> You can certainly tear down and rebuild the ssh connection periodically
<void09> not exactly sure how the tor internals work and to what degree they can be changed, but most setups do change tunnels quite often (Tor Browser, Whonix gateway)
<MikeFair> That way the IPFS network stays stable
<void09> yes, ssh does have autoreconnect on connection drop, and it will just use whatever path tor has open at the moment
<void09> i'm just thinking one tor tunnel doesn't have that much bandwidth
<MikeFair> The folks talking about it seem to say you run TOR client on both sides, then have the SSH traffic redirected through TOR using onion routing
<void09> oh but that's way too many hops
<void09> 6 hops
<void09> the speed would be crap serving just one peer.. let alone 100
<MikeFair> I really don't understand what you're trying to do then; anything "over TOR" in my book, would mean using the onion routing
<void09> uhm.. either way it's called onion routing i think
<MikeFair> how are you not going to get crap BW using TOR endpoints?
<void09> what I meant was using tor entry point -> middle node -> exit node
<void09> the tor-to-tor tunnel using .onion addresses uses the hidden service thingie, which doubles the amount of hops/nodes used
<void09> and the speed will be capped by the slowest of the 6 tor nodes you use
<void09> much worse chance of getting crap speed than with 3 :)
<void09> tor can be pretty fast these days, if not too many hops
<void09> I got 20-30 Mbps regularly downloading files
<void09> I remember when I discovered I could use tor to download torrents behind my student campus's restrictive proxy (only ports 80 and 443)
<void09> I know how ssh over tor would work, it's just not enough by default
<ctOS> .onion domains generally get faster performance than anything needing to route through an exit node. there is enormous relay bandwidth in tor, but limited exit bandwidth. plus, you end up needing fewer hops when going to a .onion compared to entry->relay->exit->wherever.
<void09> really ?
<ctOS> Yeah.
<void09> my experience accessing hidden services has been pretty shitty
<void09> laggy/slow
<void09> but perhaps they all run on shit servers
DnrkasEFF1 has quit [Read error: Connection reset by peer]
<ctOS> void09: then you're misunderstanding what happens.
DnrkasEFF has joined #ipfs
<lambskin[m]> yeah everyone goes through only a few fast nodes on tor
<lambskin[m]> as far as I know on i2p everyone contributes which I think is interesting
<lambskin[m]> so it should be faster the more people use it rather than slower
hurikhan77 has joined #ipfs
<void09> yes, especially if some fat pipe nodes donate all their bw :P
<ctOS> The thing is, how is the onion service operator to limit bandwidth/resources per connection? They don't know anything about you! So per-connection and overall bandwidth is usually restricted quite heavily on any public onion service.
<void09> ctOS, never thought of that :o makes sense
<ctOS> lambskin[m]: it's also riskier from a legal perspective as you don't have the option to not be an entry node.
<void09> isn't there perfect plausible deniability in i2p ?
<lambskin[m]> you don't? couldn't you just not share the clearnet?
<void09> well, if the laws of the place you live in make that relevant
<lambskin[m]> using it for clearnet access isn't even that interesting
<void09> share the clearnet?
<ctOS> void09: my own site just shuts down the onion service if it gets too much traffic. I notably use opportunistic onion routing for a publicly available website to increase performance for tor browser users.
BeerHall has joined #ipfs
<lambskin[m]> isn't that what an entrynode is for?
<void09> aren't we talking about i2p ?
<ctOS> lambskin[m]: well, known terrorist A connects to your IP 1. that kind of connection would attract attention.
<void09> there's no "entry nodes" in i2p afaik
<void09> everybody is equal
<ctOS> there is. the first connection in the chain is the entry. however you want to think about it.
<lambskin[m]> well I mean, plausible deniablity
<lambskin[m]> like he said
kakra has quit [Ping timeout: 245 seconds]
neutron[m] has joined #ipfs
<ctOS> lambskin[m]: sure, but that is a riskier and more time consuming strategy than not having to deny anything in the first place.
<void09> ctOS, but the whole idea is that nobody but the one connecting through you to make the tunnel knows for sure you are the entry, right?
<ctOS> void09: if you're monitoring IP 2 and it makes a connection to IP 1, it doesn't really matter where in the chain you are. The only direct link is for entry nodes, however.
<lambskin[m]> I don't think they're going to blame the person a known terrorist chooses to connect to
<ctOS> lambskin[m]: you'd be one-hop removed from a known terrorist. given what we know of the NSA, you'd definitely be monitored for life.
<lambskin[m]> if you use i2p you're already on a list somewhere
<void09> well if they are NSA they know you are using i2p in the first place, from traffic analysis
<void09> and thus they wouldn't just randomly target all direct peers. it's not like a trusted peer network where you connect through your friends
<lambskin[m]> the only solution is to make these things ubiquitous
<lambskin[m]> so that everyone is using them in some form
<void09> exactly
<ctOS> that is what we have tor for.
<MikeFair> Has anyone considered making a transport layer in IPFS work over I2P or TOR? (aka make /tor/xyz.onion; and whatever the equivalent of an i2p address is work?)
<void09> you mean /ipfs/xyz.onion ?
<void09> on xyz.onion/ipfs/X123..
<MikeFair> lambskin[m]: Actually, they very much blame the "enablers" of terrorists
<ctOS> MikeFair: you don't need a transport layer. just set up an onion service address, force all your connections through tor, and change the announcement in your config to only announce your onion service.
<MikeFair> lambskin[m]: Which is why you have to have "Terms of Service" and such
<void09> modern technology is the enabler of terrorists
<lambskin[m]> but they can't blame everyone
<lambskin[m]> it would be like blaming google
<MikeFair> void09: No, I'm thinking of multiaddresses in IPFS
<void09> ctOS, we were just talking about this with MikeFair a bit earlier
<MikeFair> lambskin[m]: It depends on a lot of conditions;
<void09> not sure how much of the conversation you followed
<MikeFair> lambskin[m]: Generally speaking you are responsible for what happens on your own equipment
<ctOS> void09: poor access to mental health services, lack of opportunities, and poverty should be way higher on your list there than technology.
<void09> ctOS, my aim is to host a node behind tor (be it an exit node, or .onion service)
<MikeFair> lambskin[m]: Being an open relay to the world and completely burying your head in the sand as to how/what anyone is using your open relay for doesn't absolve you of responsibility
<void09> ctOS, I didn't mean that literally, I just meant when searching for what enables terrorists, technologically. pretty much everything post-GSM. the internet itself
<lambskin[m]> yeah but if everyone was using i2p it would be impossible to figure out who was hosting or connecting to what
<lambskin[m]> they caught that one guy making bomb threats at his university because he was the only person there using tor
<void09> not impossible, just terribly difficult. the more difficult as more people use it
<ctOS> void09: what you'd need to get an all onion-routed service is a bootstrap server on and for .onion addresses.
<lambskin[m]> it would definitely be harder to catch someone doing sketchy stuff by trying to find a flaw in the technology than it would be by some other method
<void09> ctOS, i'd need a longer explanation than that. not even sure what that means
<MikeFair> ctOS: That's kind of what I was curious about, what would it take to make IPFS daemons understand /tor/xyz.onion and /i2p/address addresses in multiaddresses
<void09> It would take devs not on the payroll of the NSA :D
<void09> (tinfoil hat mode on)
<ctOS> MikeFair: nothing. use a dns address and dnstor (tordns? whatever) route all traffic through a virtual network interface with a transparent tor SOCKS proxy. done.
<MikeFair> hehe - well yes, I believe the US government has gone for the 51% attack on the TOR network; but I'm not looking for privacy or anonymity here; I'm really just looking to eliminate the whole "private/public IP/Port" connection issues
<ctOS> void09: you can configure what address you announce to DHT from your client. change that config to only announce your onion address. now, you could connect to the general-purpose global DHT network but with poor results. ideally, you'd want to connect to other onion-routed peers so a dedicated tor DHT bootstrapping node for tor peers should give you much better results.
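A minimal sketch of the announce-only-onion setup ctOS describes, using go-ipfs's Addresses.Announce config key; the onion id below is a made-up placeholder, and stock go-ipfs 0.4.x has no onion transport of its own, so tor still has to carry the actual connections:

    # announce only a (hypothetical) onion address instead of the
    # inferred LAN/loopback addresses; restart the daemon afterwards
    ipfs config --json Addresses.Announce '["/onion/abcdefghijklmnop:4001"]'
    ipfs config show | grep -A 3 Announce   # check what will be announced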
<MikeFair> So when I look at my "ipfs id" list of "addresses"; I'd see "/dns/somedomain.name/ipfs/MYPEERID" ?
<ctOS> MikeFair: don't quote me on the exact format, but I believe it's "/dns/ctOSspewspewgarbagehashhash.onion/ipfs/MYPEERID"
<ctOS> clients capable of resolving the domain would know how to connect to it.
skywavesurfer_ has quit [Quit: ZNC - https://znc.in]
<ctOS> I had this working back in October. I'll see if I made some notes about it somewhere ... .
<MikeFair> ctOS: So the IPFS daemon doesn't have to listen to anything but its own local IPv4/v6 addresses?
<void09> ctOS, yes but this would be limited to only .onion to .onion exchanges..
<MikeFair> Again, mostly I'm just noticing that TOR/I2P have also had to solve the "connect through firewalls" dilemma
<void09> because the normal ipfs gateways don't understand an .onion domain and routing
<ctOS> MikeFair: it would listen on a given port on the local loopback interface. the onion service would be configured to listen on the same port and forward traffic into/outfrom tor.
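The tor side of that setup would look roughly like this (paths and ports illustrative; the torrc lines go in tor's config, and the ipfs command keeps the daemon off public interfaces):

    # torrc: onion service forwarding port 4001 to loopback
    HiddenServiceDir /var/lib/tor/ipfs_onion/
    HiddenServicePort 4001 127.0.0.1:4001

    # go-ipfs: listen on loopback only
    ipfs config --json Addresses.Swarm '["/ip4/127.0.0.1/tcp/4001"]'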
skywavesurfer has joined #ipfs
<ctOS> void09: that is also what you would want, yes.
<ctOS> If you're thinking of connecting to the public DHT through an exit relay then abort - abort - abort. It's a terrible idea.
<void09> ctOS, well, I don't know. I was thinking of using the exit nodes at first. would be much simpler
<void09> lol what :D
<void09> I just did that
<void09> it works but for a short while
<ctOS> Yeah, stop doing that.
<MikeFair> I was looking at this line from the I2P WikiPedia page: "and even the end points ("destinations") are cryptographic identifiers (essentially a pair of public keys), so that neither sender nor recipient of a message need to reveal their IP address to the other side or to third-party observers."
<void09> actually it works but the results are inconsistent.. probably due to the frequent address changes
<ctOS> void09: run "ipfs id"
<void09> ctOS, QmPHZLsnPKZu59mFSjxsPh4n1ud3LDyMmyhmYnMkMftzaC
<MikeFair> Given that comment, it seems that IPFS over I2P could really help make Peer connections easier
<ctOS> void09: look in the "addresses" section
<void09> MikeFair, please do read on how i2p works... it's quite fascinating
<ctOS> which addresses are you announcing?
<void09> sses": [
<void09> "/ip4/127.0.0.1/tcp/4001/ipfs/QmPHZLsnPKZu59mFSjxsPh4n1ud3LDyMmyhmYnMkMftzaC",
<void09> "/ip6/::1/tcp/4001/ipfs/QmPHZLsnPKZu59mFSjxsPh4n1ud3LDyMmyhmYnMkMftzaC",
<void09> "/ip4/10.152.152.11/tcp/4001/ipfs/QmPHZLsnPKZu59mFSjxsPh4n1ud3LDyMmyhmYnMkMftzaC"
<void09> hmm doesn't look so well :<
<ctOS> how would anyone possibly connect to those addresses?
<void09> how on earth did I get a file shared and a connection to download from this gateway then
<void09> but it did
<void09> somehow
Xenguy has quit [Ping timeout: 245 seconds]
<ctOS> you may have been able to download a file from the network, but you've definitely not been able to announce a new file that would be routeable to anyone on the network.
<void09> ctOS, yes I did
<void09> I downloaded cat.jpg and posted a file which I managed to download
<void09> from another machine
<ctOS> on your local network?
<ctOS> did you disable local peer discovery?
Xenguy has joined #ipfs
<void09> uhm kind of, but it should be isolated. it's a vm i set up in virtualbox with networking set to "Internal network"
<void09> for the purposes of forwarding all traffic through tor
<void09> which is the whonix gateway
<void09> restarting node now to show
<ctOS> void09: well, given the three addresses you announced. how would anyone outside your own local network know how to connect to you?
<void09> I have no idea but it worked, after a while
<MikeFair> ctOS: He was able to share /ipfs/QmRUSUiEJ1qxG9cptYoQHopPRrAEs6wV1TMHhtdKzn9Q29
<void09> also other 3 subsequent versions of that :)
<ctOS> void09: try sharing a new file, and ask someone here to fetch the file.
<void09> ok, QmbmkPjCe7mkeWeG4YnpobWz1bRVmgAH6ECk41oJuft4MC
<MikeFair> ctOS: Oh, you know, perhaps because of its size it's ending up in the DHT directly
<void09> what ? :\
<void09> dht caches tiny files?
<ctOS> y
<void09> ok then nvm. crap experiment :D
<void09> how big do I have to make it so it doesn't get saved by dht?
<ctOS> source 5 megabytes from /dev/rand
<ctOS> dd if=/dev/urandom of=experimental.garbage bs=5M count=1
<void09> yeah ok, just have to wait a bit for the daemon to properly connect
<MikeFair> void09: It seems: Small values (equal to or less than 1KB) are stored directly on the DHT.
<MikeFair> That may have changed since the white paper was published
<ctOS> Anyhow, it won't work. You've made yourself unreachable. You need to manually configure routeable announcement address(es). Your options are: your public IP address, or an onion service address listening on your machine.
<void09> can't I use the tor exit node addresses?
<D_> I found a prov for that ...Juft4MC address
<ctOS> This is why people say "don't use BitTorrent over Tor!"
<D_> One of the addresses that findpeer provides for the peer seems to be on the Internet, though, so
<void09> they just don't want all the bandwidth sucked up :P
<ctOS> void09: no. you don't have any control over those. you can open a port temporarily (like up to 45 seconds or something.) That isn't useful as DHT moves slower than you can reannounce a new address.
<D_> And yes it's a tor exit node
<void09> ok just added 5 mb of junk QmcUUCRyrGaYxMtyEWAmQHb3rX25DrEBPcDAderXT8KDbF
<void09> D_ wow :D
<D_> 23.129.64.105
<void09> how was that possible then ?
<D_> Well you published it
<D_> I can't connect to you through it so I can't fetch the actual data
<ctOS> run "ipfs id" again and look for that IP
<void09> ipfs id shows the same ip addresses
<void09> localhost/lan
<ctOS> void09: not the one D_ saw?
<void09> is the ipfs add file.txt persistent across node restarts?
<D_> It pins by default unless you tell it not to
<void09> ok so it is
<D_> yep
<ctOS> the open port on the tor exit node should close within seconds. the port will remain closed for some time before it's released for reuse.
<ctOS> I'm surprised it got included, though. the bootstrapping servers must have rewritten the announcement to include the source IP.
<void09> yes, somebody rewrote it somewhere, cause it seems ipfs does not fetch the external address.
<ctOS> I was not aware that was a thing.
<void09> but wait, how does ipfs work behind NAT with no UPnP then?
<void09> it's the same situation there, more or less
<void09> are there nat hole punching relays inside ipfs ?
<ctOS> Regardless, you shouldn't be doing this. You'll be announcing tons of useless/short-lived/soon-to-be-unreachable IP/port pairs.
<ctOS> Stick to announcing your onion service address.
<void09> ctOS, I haven't got one yet :)
<ctOS> Ideally to a DHT bootstrapper living inside .onion.
<void09> are you worrying about me spamming the dht ?
<void09> what if a million ip addresses did it, each with 10,000 entries?
<void09> my tor experiments shouldn't really matter, right ? :)
<void09> as in, drop in an ocean
<ctOS> void09: I'm more worried about the pointless traffic you'll be directing at the tor exit relays.
nighty- has joined #ipfs
<void09> oh, other ipfs nodes will think they're ipfs nodes, right?
<ctOS> well, if you announce a super popular hash with no other peers ... voilà: ddos.
MikeFair has quit [Ping timeout: 255 seconds]
<void09> how is that hash super popular if there are no other peers ?
mischat has joined #ipfs
<ctOS> think of it this way: you're taking out magazine subscriptions for various magazines, and then moving to another house before the first magazine gets to your front door. there you take out subscriptions for the same magazines, and then move again. and again.
<ctOS> void09: you've just discovered the cure for cancer and are sharing it freely on IPFS at hashX.
ddahl has quit [Ping timeout: 250 seconds]
<void09> ctOS, yeah I can visualize how that works. I thought the DHT was faster than a tor tunnel lifetime
<void09> ctOS, well if I can cause harm just toying around like a noob, then maybe you should be more concerned about what an ill-intentioned attacker can do ?
<void09> harm as in pointless traffic
<void09> anyway, no popular hashes shared here. internet these days has much bandwidth. no harm done :P
<void09> i ran a speedtest.. 20mbps on the exit node. just that speedtest is more traffic than all the ipfs experimenting I could do in a day
ddahl has joined #ipfs
<ctOS> Anyhow, you need to solve the problem of what address to announce before you continue down this path.
mischat has quit [Ping timeout: 258 seconds]
<void09> hm. indeed. I guess I can just spam all of the tor exit node addresses in use, right?
<void09> ipfs cat /ipfs/QmcUUCRyrGaYxMtyEWAmQHb3rX25DrEBPcDAderXT8KDbF
<void09> this worked eventually :D
<void09> i left it open.. and i got a "Bell" notification for Konsole
<void09> the 5mb random spam
ddahl_ has joined #ipfs
<void09> all 5.4MB of it
ddahl has quit [Read error: Connection reset by peer]
<void09> maybe I should add a movie, see what happens
The_8472 has quit [Ping timeout: 252 seconds]
dimitarvp has quit [Quit: Bye]
The_8472 has joined #ipfs
}ls{ has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
_whitelogger has joined #ipfs
gmoro has quit [Ping timeout: 268 seconds]
nonono has quit [Ping timeout: 255 seconds]
xcm has quit [Remote host closed the connection]
ddahl_ has joined #ipfs
xcm has joined #ipfs
mischat has joined #ipfs
ygrek has joined #ipfs
mischat has quit [Ping timeout: 252 seconds]
mischat has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
mischat has quit [Ping timeout: 252 seconds]
ebarch has quit [Quit: The Lounge - https://thelounge.chat]
DnrkasEFF has quit [Quit: Leaving.]
mischat has joined #ipfs
DnrkasEFF has joined #ipfs
ddahl_ has joined #ipfs
DnrkasEFF has left #ipfs [#ipfs]
renich has joined #ipfs
mischat has quit [Ping timeout: 258 seconds]
zeden has joined #ipfs
mischat has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
ebarch has joined #ipfs
Xenguy has quit [Ping timeout: 245 seconds]
Xenguy has joined #ipfs
Belkaar has quit [Ping timeout: 250 seconds]
xcm has quit [Read error: Connection reset by peer]
Belkaar has joined #ipfs
Belkaar has joined #ipfs
xcm has joined #ipfs
mischat has quit [Ping timeout: 264 seconds]
ddahl_ has joined #ipfs
mischat has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
mischat has quit [Ping timeout: 258 seconds]
ddahl_ has joined #ipfs
Mateon1 has quit [Ping timeout: 245 seconds]
shizy has quit [Quit: WeeChat 2.4]
shizy has joined #ipfs
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
ygrek has quit [Ping timeout: 250 seconds]
ddahl_ has quit [Ping timeout: 264 seconds]
appa_ has quit [Ping timeout: 246 seconds]
user_51 has quit [Ping timeout: 245 seconds]
user_51 has joined #ipfs
ddahl_ has joined #ipfs
}ls{ has quit [Ping timeout: 255 seconds]
placer14 has quit [Quit: placer14]
daMaestro has quit [Quit: Leaving]
ddahl_ has quit [Ping timeout: 264 seconds]
}ls{ has joined #ipfs
placer14 has joined #ipfs
ctOS has quit [Quit: Connection closed for inactivity]
ddahl_ has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
lordcirth has quit [Read error: Connection reset by peer]
lordcirth has joined #ipfs
placer14 has quit [Quit: placer14]
ddahl_ has joined #ipfs
Xenguy has quit [Ping timeout: 246 seconds]
OUv is now known as OliverUv
Xenguy has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
zeden has quit [Quit: WeeChat 2.3]
ddahl_ has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
lordcirth has quit [Remote host closed the connection]
lordcirth has joined #ipfs
colinbrosseau1 has joined #ipfs
MDude has quit [Ping timeout: 250 seconds]
ddahl_ has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
blakej[m] has joined #ipfs
ddahl_ has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
lordcirth has quit [Remote host closed the connection]
lordcirth has joined #ipfs
xlued has quit [Quit: The Lounge - https://thelounge.github.io]
xlued has joined #ipfs
<lordcirth> If I want to download the same large file(s) to, say, 30 PCs in a LAN, and I install go-ipfs on all 30, mount /ipfs, and tell them all to pull, IPFS ought to use 1-2x the file size, right? Are there any config tweaks I should make to optimize this usecase?
<lordcirth> They are unfortunately firewalled from outside connections, but do have internet access.
ddahl_ has joined #ipfs
Mateon2 has joined #ipfs
lordcirth has quit [Remote host closed the connection]
lordcirth has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
_whitelogger has joined #ipfs
ddahl_ has joined #ipfs
ignaloidas has quit [Quit: Leaving]
ddahl_ has quit [Ping timeout: 250 seconds]
lidel` has joined #ipfs
ddahl_ has joined #ipfs
lidel has quit [Ping timeout: 246 seconds]
lidel` is now known as lidel
ddahl_ has quit [Ping timeout: 250 seconds]
ddahl_ has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
juh_[m] has left #ipfs ["User left"]
zane has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
zane has joined #ipfs
zane has quit [Client Quit]
renich has quit [Ping timeout: 245 seconds]
ddahl_ has joined #ipfs
clemo has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
Chaos[m] is now known as chaos[m]
<graylan[m]> best way I know is opening the port, then ipfs swarm connect <peer>, then put the pins in a .txt and use ipfs pin add to add them, on each node...
<graylan[m]> also the txt file can't have any bad characters or spaces or you get a ref error
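A rough sketch of graylan's recipe; the peer multiaddr is a made-up example, and pins.txt is assumed to hold one clean CID per line (no spaces, per the warning above):

    # connect straight to the node that already has the data,
    # then pin everything listed in pins.txt
    PEER="/ip4/203.0.113.7/tcp/4001/ipfs/QmYourPeerIdHere"   # hypothetical
    ipfs swarm connect "$PEER"
    while read -r cid; do
      ipfs pin add "$cid"
    done < pins.txt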
lostSquirrel has joined #ipfs
ddahl_ has joined #ipfs
ylp has joined #ipfs
fazo has joined #ipfs
IRCsum has quit [Remote host closed the connection]
IRCsum has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
Pulse2496 has joined #ipfs
KaitoDaumoto has joined #ipfs
Xexe has left #ipfs [#ipfs]
ddahl_ has joined #ipfs
appa has quit [Ping timeout: 272 seconds]
blallo has quit [Ping timeout: 268 seconds]
lord| has quit [Ping timeout: 255 seconds]
lord| has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
manray has quit [Ping timeout: 264 seconds]
ddahl_ has joined #ipfs
BertyCoX- has joined #ipfs
KaitoDaumoto has quit [Ping timeout: 255 seconds]
Ai9zO5AP has joined #ipfs
manray has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
Caterpillar has quit [Read error: Connection reset by peer]
Caterpillar has joined #ipfs
Caterpillar has quit [Remote host closed the connection]
ddahl_ has joined #ipfs
Caterpillar has joined #ipfs
mischat has joined #ipfs
mischat has quit [Remote host closed the connection]
mischat has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
zakzakzak[m] is now known as hogehogehoge[m]
mischat has quit []
mischat has joined #ipfs
ddahl_ has joined #ipfs
xcm has quit [Read error: Connection reset by peer]
xcm has joined #ipfs
mischat_ has joined #ipfs
fxpester has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
mischat has quit [Ping timeout: 250 seconds]
chiui has joined #ipfs
mischat_ has quit [Ping timeout: 255 seconds]
vmx has joined #ipfs
mischat has joined #ipfs
spinza has quit [Quit: Coyote finally caught up with me...]
ddahl_ has joined #ipfs
mischat has quit [Remote host closed the connection]
mischat has joined #ipfs
mischat has quit [Ping timeout: 255 seconds]
mowcat has joined #ipfs
ddahl_ has quit [Ping timeout: 252 seconds]
spinza has joined #ipfs
mischat has joined #ipfs
mischat has quit [Remote host closed the connection]
ddahl_ has joined #ipfs
mischat has joined #ipfs
mikro2nd has joined #ipfs
mischat_ has joined #ipfs
mischat has quit [Remote host closed the connection]
ddahl_ has quit [Ping timeout: 250 seconds]
ddahl_ has joined #ipfs
jpaa_ has quit [Remote host closed the connection]
ddahl_ has quit [Ping timeout: 264 seconds]
gmoro has joined #ipfs
BeerHall has quit [Remote host closed the connection]
BeerHall has joined #ipfs
mischat_ has quit [Remote host closed the connection]
mischat has joined #ipfs
fazo has quit [Read error: Connection reset by peer]
fazo has joined #ipfs
ddahl_ has joined #ipfs
ctOS has joined #ipfs
mischat has quit [Ping timeout: 272 seconds]
ddahl_ has quit [Ping timeout: 250 seconds]
dasj19 has joined #ipfs
ddahl_ has joined #ipfs
mischat has joined #ipfs
ddahl_ has quit [Ping timeout: 264 seconds]
fazo has quit [Read error: Connection reset by peer]
fazo has joined #ipfs
fazo has quit [Client Quit]
fazo has joined #ipfs
ddahl_ has joined #ipfs
ddahl_ has quit [Ping timeout: 250 seconds]
ddahl_ has joined #ipfs
mikro2nd has quit [Ping timeout: 245 seconds]
fazo_ has joined #ipfs
fazo has quit [Read error: Connection reset by peer]
fazo_ is now known as fazo
malaclyps has quit [Read error: Connection reset by peer]
malaclyps has joined #ipfs
_whitelogger has joined #ipfs
Pulse2496 has quit [Quit: ⠟⠥⠊⠞]
placer14 has joined #ipfs
zeden has joined #ipfs
zeden has quit [Client Quit]
zeden has joined #ipfs
driedstr[m] has joined #ipfs
mischat_ has joined #ipfs
whhy001[m] has joined #ipfs
mischat has quit [Ping timeout: 244 seconds]
mischat_ has quit [Remote host closed the connection]
mischat has joined #ipfs
mischat_ has joined #ipfs
toxync01 has quit [Ping timeout: 246 seconds]
mischat has quit [Ping timeout: 268 seconds]
toxync01 has joined #ipfs
achin has quit [Changing host]
achin has joined #ipfs
ctOS has quit [Quit: Connection closed for inactivity]
appa_ has joined #ipfs
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
ctOS has joined #ipfs
wking has quit [Ping timeout: 245 seconds]
ddahl_ has quit [Quit: Leaving]
djdv has quit [Quit: brb]
djdv has joined #ipfs
ddahl has joined #ipfs
ZaZ has joined #ipfs
ygrek has joined #ipfs
ddahl has quit [Ping timeout: 264 seconds]
dimitarvp has joined #ipfs
placer14 has quit [Quit: placer14]
Guest30504 has quit [Remote host closed the connection]
ddahl has joined #ipfs
mowcat has quit [Remote host closed the connection]
placer14 has joined #ipfs
mikro2nd has joined #ipfs
matt-h has joined #ipfs
ddahl has quit [Ping timeout: 268 seconds]
BeerHall has quit [Quit: BeerHall]
driedstr[m] has left #ipfs ["User left"]
ddahl has joined #ipfs
mischat_ has quit [Remote host closed the connection]
mischat has joined #ipfs
MDude has joined #ipfs
ddahl has quit [Ping timeout: 250 seconds]
IRCsum has quit [Remote host closed the connection]
mischat has quit [Ping timeout: 250 seconds]
IRCsum has joined #ipfs
bycunka[m] has joined #ipfs
bycunka[m] has left #ipfs [#ipfs]
ddahl has joined #ipfs
mischat has joined #ipfs
f0x has quit [Remote host closed the connection]
mateusbs17|afk is now known as mateusbs17
f0x has joined #ipfs
f0x has quit [Remote host closed the connection]
f0x has joined #ipfs
ddahl has quit [Ping timeout: 264 seconds]
clemo has quit [Ping timeout: 255 seconds]
ddahl has joined #ipfs
lordcirth_ has joined #ipfs
fxpester has quit [Ping timeout: 244 seconds]
ddahl has quit [Ping timeout: 264 seconds]
<pburton[m]> Please join us for the IPFS Weekly Call! https://github.com/ipfs/team-mgmt/issues/881
ddahl has joined #ipfs
ddahl has quit [Ping timeout: 252 seconds]
matth has joined #ipfs
justyns[m] has joined #ipfs
matt-h has quit [Ping timeout: 246 seconds]
pecastro_ has joined #ipfs
pecastro has quit [Ping timeout: 244 seconds]
ddahl has joined #ipfs
<lordcirth_> pburton[m], Am I reading correctly that it starts in ~40 minutes?
mischat_ has joined #ipfs
mischat has quit [Ping timeout: 240 seconds]
wking has joined #ipfs
FabrixXm[m]1 has left #ipfs ["User left"]
IRCsum has quit [Remote host closed the connection]
IRCsum has joined #ipfs
whhy001[m] has left #ipfs ["User left"]
ddahl has quit [Ping timeout: 250 seconds]
IRCsum has quit [Remote host closed the connection]
nighty- has quit [Quit: Disappears in a puff of smoke]
IRCsum has joined #ipfs
IRCsum has quit [Remote host closed the connection]
jpaa has joined #ipfs
IRCsum has joined #ipfs
<shoku[m]> Is there a recurring calendar invite btw?
<shoku[m]> I’ll start attending 😃
dasj19 has quit [Quit: dasj19]
manray has quit [Ping timeout: 255 seconds]
kruemelmonster[m has joined #ipfs
ylp has quit [Quit: Leaving.]
ddahl has joined #ipfs
zane has joined #ipfs
mischat_ has quit [Remote host closed the connection]
ddahl has quit [Ping timeout: 250 seconds]
yeehi has joined #ipfs
mischat has joined #ipfs
<jacobheun> It's in the community calendar. You can find the link here https://github.com/ipfs/community#calendar
mischat has quit [Ping timeout: 255 seconds]
rain1 has left #ipfs ["WeeChat 1.6"]
ddahl has joined #ipfs
MikeFair has joined #ipfs
ddahl has quit [Ping timeout: 264 seconds]
BertyCoX- has quit [Ping timeout: 245 seconds]
lidel has quit [Ping timeout: 246 seconds]
KaitoDaumoto has joined #ipfs
<shoku> So, dumb question, but I'm on the team call right now
<shoku> Sounds super Protocol Labs specific
<shoku> Is this open to everyone, or IPFS contributors only?
<shoku> (is Portia on irc?)
lidel has joined #ipfs
ddahl has joined #ipfs
Kakky has joined #ipfs
ZaZ1 has joined #ipfs
<lordcirth_> shoku, I'm pretty sure lurking is for everyone, at least.
ZaZ has quit [Ping timeout: 244 seconds]
<postables[m]> AFAIK (haven't attended because I'm never up earlier than 11am or 12pm lmao) it's protocol labs check-ins, progress updates, etc... but it's open to the community for participation and transparency purposes
<postables[m]> So kinda like a stand-up, but anyone can join and see what's going on (I could be slightly off on this)
<shoku> Yep that's what it is apparently :)
plexigras has joined #ipfs
vmx has quit [Remote host closed the connection]
<aschmahmann[m]> postables, shoku: It depends on the meeting. While all the meetings on the public calendar are open to all, some of the meetings happen to have more non-protocol labs contributors attending regularly and some have less.
ddahl has quit [Ping timeout: 268 seconds]
<shoku[m]> Gotcha, thanks
<shoku[m]> so you're working on js-ipfs?
mischat has joined #ipfs
chiui has quit [Ping timeout: 272 seconds]
thomasan_ has joined #ipfs
<lordcirth_> I am hoping to do a PR to go-ipfs soon, adding --human to 'ipfs bitswap stat'
clemo has joined #ipfs
<lordcirth_> Hopefully not hard.
thomasan_ has quit [Remote host closed the connection]
dasj19 has joined #ipfs
nuh^ has quit []
seba- has quit [Ping timeout: 245 seconds]
seba- has joined #ipfs
<postables[m]> aschmahmann: maybe if I'm awake early enough one day I'll attend 😂
thomasan_ has joined #ipfs
<aschmahmann[m]> shoku: not part of js-ipfs, but was crashing today. Mostly on the Go side.
<aschmahmann[m]> postables: sounds good :)
mischat has quit [Remote host closed the connection]
BenLubar has quit [Quit: ZNC 1.7.1 - https://znc.in]
BenLubar has joined #ipfs
fazo has quit [Quit: fazo]
manray has joined #ipfs
Kakky has quit [Quit: Leaving]
thomasan_ has quit [Remote host closed the connection]
thomasan_ has joined #ipfs
manray has quit [Ping timeout: 246 seconds]
vkli[m] has joined #ipfs
manray has joined #ipfs
henriquev has joined #ipfs
renich has joined #ipfs
arthur has joined #ipfs
renich has quit [Remote host closed the connection]
dqx_ has joined #ipfs
renich has joined #ipfs
mischat has joined #ipfs
dqx_ has quit [Quit: .]
ZaZ1 has quit [Quit: Leaving]
rain1 has joined #ipfs
rain1 has quit [Client Quit]
rain1 has joined #ipfs
jesse22 has joined #ipfs
joocain2 has quit [Remote host closed the connection]
joocain2_ has joined #ipfs
clemo has quit [Ping timeout: 245 seconds]
blallo has joined #ipfs
clemo has joined #ipfs
georgyo has joined #ipfs
thegoose51 has joined #ipfs
renich_ has joined #ipfs
renich has quit [Ping timeout: 240 seconds]
mikro2nd has quit [Ping timeout: 245 seconds]
jesse22_ has joined #ipfs
jesse22 has quit [Ping timeout: 250 seconds]
jesse22_ has quit [Client Quit]
joeyh has joined #ipfs
jesse22 has joined #ipfs
stoopkid has joined #ipfs
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
M100c1p43r[m] has joined #ipfs
Ai9zO5AP has quit [Ping timeout: 246 seconds]
Ai9zO5AP has joined #ipfs
<stebalien> go-ipfs 0.4.19-rc1 has been released: https://dist.ipfs.io/go-ipfs/v0.4.19-rc1
<stebalien> Please test extensively, we'd like to cut a final release ASAP.
jesse22 has joined #ipfs
stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.19-rc1 and js-ipfs 0.34 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of
jesse22 has quit [Client Quit]
sknebel has quit [Ping timeout: 250 seconds]
<lordcirth_> cool!!
<mattober[m]> stebalien: do you have a changelog for 0.4.19?
<lordcirth_> I see that it's already in snap. (snap install ipfs --channel=edge)
<mattober[m]> thank you!
jesse22 has joined #ipfs
<lordcirth_> I wish I had the network speed to set EnableRelayHop
<lordcirth_> "On to the gateway, there's a new Gateway.NoFetch option to configure the gateway to only serve locally present files. This makes it possible to run an IPFS node as a gateway to serve content of your choosing without acting like a public proxy."
<lordcirth_> Nice!
<lordcirth_> Orgs / companies will like that
<mattober[m]> oh my god that's huge
<mattober[m]> i just saw that
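For reference, enabling the option the changelog names is a config one-liner (restart the daemon for it to take effect):

    # serve only locally present content from the gateway
    ipfs config --json Gateway.NoFetch true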
<lordcirth_> And DHT perf. Can't wait to try that when I get home
<lordcirth_> stebalien, as part of testing, should we try using the badger datastore?
<lordcirth_> Also, how can I migrate to badger? Is there a (good) way to dump everything I have in datastore to disk so I can re-add it?
<lordcirth_> If I just do 'something | xargs ipfs get' then I would write both file A, and a directory containing file A, and so on.
<stebalien> lordcirth_: please do. It's still experimental but we'd like to switch to it by default in the next release.
<lordcirth_> stebalien, why does the "Badger has reached 1.0" link point to a project called Dgraph? Is badger on top of dgraph? A rename?
<Obo[m]1> @stebalien Is it very simple to convert from the current DB to Badger?
<stebalien> lordcirth_: dgraph makes badger
<stebalien> GitHub - ipfs/ipfs-ds-convert: Command-line tool for converting datastores (e.g. from FlatFS to Badger) - https://github.com/ipfs/ipfs-ds-convert
<stebalien> Warning, it's slow and may eat your data. Backup first.
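A rough outline of the conversion flow, paraphrasing the tool's README; treat it as a sketch and heed the backup warning:

    # 1. back up the whole repo first, as stebalien says
    cp -a ~/.ipfs ~/.ipfs.bak
    # 2. stop the daemon
    # 3. edit ~/.ipfs/config so Datastore.Spec describes badger instead of flatfs
    # 4. run the converter against the repo
    ipfs-ds-convert convert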
<lordcirth_> I'm in the situation where I have 500GB of hard drive space to spare, but my internet is terrible. So I will happily keep backups / snapshots of everything under the sun
<lordcirth_> stebalien, thanks! by "slow", do you know what it tends to bottleneck on? CPU?
<Obo[m]1> Will definitely be backing up. Is there any way to verify a conversion completed without errors?
<lordcirth_> ipfs pin verify might be useful?
<Obo[m]1> If not, I may have to do some manual ipfs-cluster magic
<lordcirth_> stebalien, should the daemon be stopped or running during the convert?
<stebalien> lordcirth_: I think the issue is just disk io. Note: I've never actually used it. I've just been told that it's slow.
<stebalien> lordcirth_: stopped.
<lordcirth_> Ah ok. Yeah I would expect a naive conversion that goes one object at a time to be a lot of seeks.
sknebel has joined #ipfs
<stebalien> I'm guessing the main problem is syncing. There's really not much we can do about seeking.
<lordcirth_> Hmm, currently my blocks are on a ZFS raid1 of 7200rpm HDDs. I wonder if adding a ZIL on my SSD would speed up the convert.
Ai9zO5AP has quit [Quit: WeeChat 2.3]
thomasan_ has quit [Remote host closed the connection]
thomasan_ has joined #ipfs
thomasan_ has quit [Remote host closed the connection]
luc1ph3r[m] has left #ipfs ["User left"]
thomasan_ has joined #ipfs
appa has joined #ipfs
<MikeFair> Is there a command to list all the currently cached CIDs?
<MikeFair> One "brute force" way to verify would be to re-add all the existing CID data and ensure that no new CIDs appeared
<MikeFair> (If the data extracted from requesting the CID doesn't re-add as the same CID, then extracting the data changed/corrupted it; otherwise you got back out what was put in)
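For the listing part, "ipfs refs local" prints every block CID the node holds. MikeFair's re-add check could look roughly like this at the pin level; it only works if the content was originally added with the default chunker and CID version, so treat it as a sketch:

    # re-fetch each recursive pin and check it re-adds to the same CID
    for cid in $(ipfs pin ls --type=recursive -q); do
      ipfs get "$cid" -o "/tmp/$cid"
      readded=$(ipfs add -r -Q --only-hash "/tmp/$cid")
      [ "$readded" = "$cid" ] || echo "mismatch: $cid -> $readded"
    done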
astrojuanlu[m] has joined #ipfs
plexigras has quit [Ping timeout: 246 seconds]
dasj19 has quit [Quit: dasj19]
thomasan_ has quit [Remote host closed the connection]
thomasan_ has joined #ipfs
<MikeFair> Has anyone thought of a way to create a distributed build system using IPFS?
<MikeFair> Rather than making my machine recompile all its own files; somehow associating an object file to a particular source file for a particular build configuration
<MikeFair> and architecture?
<MikeFair> (and compiler obviously)
<rain1> that seems like it could work
magicbit has quit [Ping timeout: 246 seconds]
magicbit has joined #ipfs
<MikeFair> Could IPLD be used to "publish" the mappings?
<MikeFair> Not sure how trustable that would be though
zeden has quit [Quit: WeeChat 2.3]
DnrkasEFF has joined #ipfs
WhizzWr has quit [Quit: Bye!]
caveat has quit [Excess Flood]
WhizzWr has joined #ipfs
caveat has joined #ipfs
nivekuil has quit [Ping timeout: 264 seconds]
lnostdal has quit [Ping timeout: 244 seconds]
renich has joined #ipfs
renich_ has quit [Ping timeout: 255 seconds]
<lordcirth> MikeFair, with reproducible builds, you could have some number of volunteers run a build. They all get the same binary, they each sign that it's the real one, and you have a trust list where if N of M trusted people signed it, you install it.
nivekuil has joined #ipfs
<MikeFair> I could see something like that
lnostdal has joined #ipfs
spinza has quit [Quit: Coyote finally caught up with me...]
<MikeFair> I've had a hard time creating any kind of N of M validation over IPFS though
<MikeFair> Hmm, I suppose some kind of PubSub system; where a machine can say "What's the object file for XYZ" where XYZ somehow describes the required configuration and source file
<lordcirth> MikeFair, each volunteer has an IPNS name - their identity that you trust for building purposes. It points to a directory containing the hashes of all their finished builds.
<MikeFair> lordcirth: The question is how does your machine know to ask them for it
<lordcirth> For each v in volunteers, lookup /ipns/$v/pkgname/version/sha256sum.
<lordcirth> At some point you need the human user to choose who to trust, that's unavoidable
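lordcirth's lookup loop, sketched as a shell check; the key list, the /ipns/<key>/<pkg>/<ver>/sha256sum layout, and the 2-of-3 threshold are all assumptions taken from the discussion, not an existing scheme:

    # hypothetical N-of-M agreement check over volunteers' IPNS trees
    KEYS="QmVolunteerA QmVolunteerB QmVolunteerC"   # trusted IPNS keys (placeholders)
    PKG=examplepkg; VER=1.0; NEED=2
    agree=0; want=""
    for k in $KEYS; do
      sum=$(ipfs cat "/ipns/$k/$PKG/$VER/sha256sum" 2>/dev/null) || continue
      [ -z "$want" ] && want=$sum            # first answer becomes the reference
      [ "$sum" = "$want" ] && agree=$((agree+1))
    done
    [ "$agree" -ge "$NEED" ] && echo "accepting build $want ($agree matching signers)"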
<MikeFair> Oh, so maybe in the GH repo there's an "ipnskey" address?
<MikeFair> that represents the project's reproducible builds repository
<lordcirth> You would probably want the author(s) to be on the list, yes
<MikeFair> Okay, this is getting somewhere; assuming the Project has a ProjKey, which is an IPNS address to an IPLD database and is also used as a PubSub channel
thegoose51 has quit [Quit: i died]
<MikeFair> Some listener service could be subbed to the ProjKey PubSub topic/channel and listen for volunteers to broadcast object file CIDs
<MikeFair> That listener service could correlate the incoming signed attestations
<MikeFair> And rebuild/republish the IPLD database; then update the IPNS key
<MikeFair> I guess it doesn't necessarily even have to be an IPLD database; just a really well-thought-out build tree
<MikeFair> I was thinking IPLD to store/correlate all the configuration options
spinza has joined #ipfs
vyzo has quit [Quit: Leaving.]
vyzo has joined #ipfs
thomasan_ has quit [Remote host closed the connection]
thomasan_ has joined #ipfs
lostSquirrel has quit [Ping timeout: 245 seconds]
thomasan_ has quit [Remote host closed the connection]
mischat has quit [Remote host closed the connection]
thomasan_ has joined #ipfs
renich has quit [Quit: renich]
abaisteabaisteab has joined #ipfs
<postables[m]> @MikeFair: there's an IPLD docker registry
<postables[m]> Obo: `ipfs repo verify` - if your repo didn't convert correctly you wouldn't be able to start your node
<MikeFair> postables[m]: This is kind of a similar idea; how do people add new entries to the registry; I suspect it's similar to some kind of register/rebuild process
<postables[m]> I mean if it's a registry you have opened to the public, it's pretty easy: just let other people access it.
<MikeFair> the problem is updating
<MikeFair> People need to be able to publish to it
<postables[m]> Run your IPFS node with the IPLD docker registry and have it publicly accessible and it should take care of everything.
<postables[m]> How is updating the problem? You would probably need some layer of authorization so that people can't just openly remove stuff
<MikeFair> postables[m]: Because people need to make object files and publish them
<MikeFair> and there's no way to just "trust" that the correlated object file is the right file
<postables[m]> Well you only let people update objects they have published.
<postables[m]> I don't think there's any way you can safely do this without some layer of authentication
<MikeFair> right, we were talking about people signing their attestations and object files; using N of M
<MikeFair> perhaps some kind of reputation score (prefer project devs over others?)
<MikeFair> it's still just a brainstorming idea at the moment
<MikeFair> But it seems feasible that if I could describe the config file + arch + platform + compiler (+ compiler options) as a directory name or IPLD db name or some other identifier; then you ought to be able to download the binary build files
<MikeFair> What I was hoping/thinking is that people could publish their own distinct build trees
IRCsum has quit [Remote host closed the connection]
Taoki has joined #ipfs
IRCsum has joined #ipfs
yeehi has quit [Ping timeout: 245 seconds]
IRCsum has quit [Remote host closed the connection]
dimitarvp has quit [Quit: Bye]
IRCsum has joined #ipfs
abaisteabaisteab has left #ipfs [#ipfs]
mowcat has joined #ipfs
hchchchchchchchc has joined #ipfs
Michaelt3chguy[m has joined #ipfs
sammacbeth has quit [Quit: Ping timeout (120 seconds)]
sammacbeth has joined #ipfs
}ls{ has quit [Excess Flood]
masterdonx has quit [Ping timeout: 246 seconds]
BertyCoX- has joined #ipfs
KaitoDaumoto has quit [Ping timeout: 246 seconds]
masterdonx has joined #ipfs