stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.5.1 and js-ipfs 0.43.1 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
mir100 has joined #ipfs
maxzor has quit [Ping timeout: 246 seconds]
jesse22 has quit [Ping timeout: 244 seconds]
spossiba has quit [Ping timeout: 240 seconds]
codebam has quit [Quit: leaving...]
spossiba has joined #ipfs
codebam has joined #ipfs
cubemonk1y has quit [Remote host closed the connection]
Ecran10 has quit [Quit: Going offline, see ya! (www.adiirc.com)]
catonano has quit [Ping timeout: 260 seconds]
Taoki has joined #ipfs
Jesin has joined #ipfs
cipres has quit [Ping timeout: 240 seconds]
mowcat has quit [Remote host closed the connection]
MrSparkle has joined #ipfs
Mateon1 has quit [Remote host closed the connection]
codebam has quit [Quit: leaving...]
<void09> hi. is there some default upload bandwidth throttle in the ipfs node? using latest git, no custom configs
<void09> asking because I added a video file on another pc in my network and attempted to play it from here. the video pauses every second for a bit to load. I thought maybe it used the external ip instead of the local ip, although that shouldn't be an issue as I have a gbit internet connection.
<void09> ran jnettop and I see it's streaming from 192.168.0.x
<void09> but the speeds are somewhere between 2-3 MB/s, and not more
Belkaar has quit [Ping timeout: 264 seconds]
Belkaar_ has joined #ipfs
voidwalker09[m] has joined #ipfs
RoboFlex13 has joined #ipfs
<void09> ah, i just found the problem. was using --mount to have ipfs accessible on /ipfs path. seems the performance there is quite terrible for some reason ?
RoboFlex12 has quit [Ping timeout: 246 seconds]
<void09> seeing up to 45MB/s on http://localhost:8080..
Ringtailed-Fox has joined #ipfs
Ringtailed-Fox has quit [Read error: Connection reset by peer]
Mateon1 has joined #ipfs
RingtailedFox has quit [Ping timeout: 258 seconds]
ry60003333 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Acacia has quit [Read error: Connection reset by peer]
Acacia has joined #ipfs
spinza has quit [Ping timeout: 258 seconds]
ry60003333 has joined #ipfs
bmwiedemann2 has joined #ipfs
JankLoogi has joined #ipfs
spinza has joined #ipfs
<void09> ipfs-api-mount is doing 11.5MB/s. a lot better than the 2.5MB/s of the daemon mount, but still not great.
jrt is now known as Guest72464
Guest72464 has quit [Killed (adams.freenode.net (Nickname regained by services))]
jrt has joined #ipfs
<void09> can i actually add a file to ipfs on linux without duplicating it ? i did ipfs add --nocopy, after ipfs config --json Experimental.FilestoreEnabled true, and restarting the daemon. it just stops after 768kb.
<void09> what am I doing wrong ?
codebam_ has joined #ipfs
<JCaesar> That's odd. Otherwise everything is working fine, you can read that file, and add files to ipfs without nocopy?
codebam_ is now known as codebam
<void09> yes, all is fine otherwise
user_51 has joined #ipfs
<void09> hm wait, i see the process is reading from disk.. 600MB/s
<void09> although the progress bar stopped at 768KB
Jeanne-Kamikaze has joined #ipfs
<void09> ok after waiting for a few minutes and probably reading the whole 78GB of the file, I get : Error: cannot add filestore references outside ipfs root (/home/
<void09> wouldn't it have been easier if it checked for this before reading the whole file ?
<void09> this is so silly.. now i have to symlink everything I add to ipfs ?
<void09> I come back after a year and see ipfs is exactly as dysfunctional as last time :\
<void09> is there some hidden option somewhere that allows me to add files outside /home/user ?
user_51_ has quit [Ping timeout: 240 seconds]
zeden has quit [Quit: WeeChat 2.9]
<JCaesar> ln -s / root ?
<JCaesar> (I'm also curious about what happens when you hand the urlstore file urls. anyway, no changes here.)
<JCaesar> And no, ipfs is not quite as dysfunctional as a year ago, I feel. The DHT/node state changes have really improved resolution times a lot.
<void09> I mean for the most basic use case I can think of, adding a file outside /home to ipfs
<void09> all my sensitive data is in /home anyway, the rest is just media
<JCaesar> The odd part is that that's an artificial limitation without any way of disabling it…
<void09> ok, i linked it and now it works
<void09> but the performance is rather disappointing.. i mean, it can't saturate a gbit link, not even on the local network ? :/
<void09> I'm seeing 30-45MB/s with wget from localhost http
<void09> maybe there's latency issues accumulating, and no read ahead cache
jesse22 has joined #ipfs
jesse22 has quit [Client Quit]
mindCrime has quit [Ping timeout: 244 seconds]
<void09> even serving cached blocks is just 200MB/s :/
dqx_ has quit [Ping timeout: 256 seconds]
jcea has quit [Quit: jcea]
<jessicara> yeah, i noticed when i got it working it was only about 700k out, where if it were hitting bandwidth limits it would be around 2100k out. unfortunately it wasn't a good test because apparently i have intermittent 40%-ish packet loss
<jessicara> tcp really does not like losing packets
<void09> 40% packet loss ? :/
<jessicara> yeah, will find out what this is within 2 days probably
<jessicara> i suspect it's the router isp gave years ago finally admitting defeat
<JCaesar> at 40% you're not gonna get ISDN speeds, no?
<jessicara> given it's all fine, tested all the cables, but it's every link involving it going up/down randomly
<jessicara> depends, there's windows of time without the packet loss
<jessicara> depends when it happens
<void09> I had my router doing that when running ipfs last year. too many connections at once
<jessicara> yeah, thought that might be the case here too so tried with nothing running. still same behaviour after everything rebooted to clear it out and a new ip
Nact has joined #ipfs
vmagic[m] has joined #ipfs
Belkaar_ has quit [Ping timeout: 260 seconds]
Belkaar has joined #ipfs
Belkaar has quit [Changing host]
Belkaar has joined #ipfs
chanbakjsd[m] has joined #ipfs
bmwiedemann2 has quit [Ping timeout: 260 seconds]
MrSparkle has quit [Ping timeout: 244 seconds]
Jeanne-Kamikaze has quit [Ping timeout: 272 seconds]
MrSparkle has joined #ipfs
lektrik has quit [Ping timeout: 264 seconds]
va3vna has joined #ipfs
codebam has quit [Quit: leaving...]
codebam has joined #ipfs
tsrt^ has quit []
ffl^ has joined #ipfs
codebam has quit [Quit: leaving...]
codebam has joined #ipfs
detran_ has quit [Quit: ZNC 1.7.5 - https://znc.in]
detran has joined #ipfs
Nact has quit [Quit: Konversation terminated!]
TheLugal__ has left #ipfs [#ipfs]
detran has quit [Quit: ZNC 1.7.5 - https://znc.in]
Thominus has quit [Ping timeout: 240 seconds]
stoopkid has joined #ipfs
bmwiedemann2 has joined #ipfs
ylp has joined #ipfs
pecastro has joined #ipfs
supercoven has joined #ipfs
VoiceOfReason has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
silotis has quit [Quit: No Ping reply in 180 seconds.]
VoiceOfReason has joined #ipfs
silotis has joined #ipfs
ylp has quit [Ping timeout: 272 seconds]
ylp has joined #ipfs
mrt_ has quit [Ping timeout: 240 seconds]
BPC has quit [Ping timeout: 260 seconds]
mrte has joined #ipfs
coniptor has quit [Ping timeout: 240 seconds]
<ipfs-stackbot1> New IPFS question on StackOverflow: how to upload files to IPFS using nodejs - https://stackoverflow.com/questions/63916136/how-to-upload-files-to-ipfs-using-nodejs
maggotbrain has quit [Ping timeout: 244 seconds]
ffl^ has quit []
coniptor has joined #ipfs
maggotbrain has joined #ipfs
aifele[m] has quit [Quit: Idle for 30+ days]
crcastle[m] has quit [Quit: Idle for 30+ days]
liqsliu[m]1 has quit [Quit: Idle for 30+ days]
grahamsnz[m] has quit [Quit: Idle for 30+ days]
spossiba has quit [Ping timeout: 244 seconds]
BPC has joined #ipfs
<chanbakjsd[m]> Huh. There's inactive kicks here? Interesting
stoopkid has quit [Quit: Connection closed for inactivity]
coniptor has quit [Ping timeout: 260 seconds]
spossiba has joined #ipfs
Colpop4323 has joined #ipfs
Colpop4323 has quit [Remote host closed the connection]
B-Baka[m] has joined #ipfs
Colpop4323 has joined #ipfs
noone has joined #ipfs
Colpop4323 has quit [Remote host closed the connection]
VoiceOfReason has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
noone has quit [Remote host closed the connection]
Colpop4323 has joined #ipfs
dharmateja has joined #ipfs
VoiceOfReason has joined #ipfs
Colpop4323 has quit [Remote host closed the connection]
maggotbrain has quit [Ping timeout: 272 seconds]
Ecran10 has joined #ipfs
leotaku has quit [Ping timeout: 272 seconds]
Colpop4323 has joined #ipfs
Colpop4323 has quit [Remote host closed the connection]
lrl[m] has joined #ipfs
lrl[m] has left #ipfs [#ipfs]
leotaku has joined #ipfs
maggotbrain has joined #ipfs
worc3131 has quit [Ping timeout: 260 seconds]
discopatrick has joined #ipfs
ipfs-stackbot1 has quit [Remote host closed the connection]
ipfs-stackbot1 has joined #ipfs
<deltab> chanbakjsd[m]: just for the bridged users, I think, due to the persistent history in Matrix
AnotherFoxGuy has quit [Remote host closed the connection]
drathir_tor has quit [Remote host closed the connection]
drathir_tor has joined #ipfs
AnotherFoxGuy has joined #ipfs
AnotherFoxGuy has quit [Client Quit]
AnotherFoxGuy has joined #ipfs
aLeSD has joined #ipfs
Ecran10 has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<void09> any devs here ? when will we have 10gbps transfer speeds over the network ? :)
<void09> local 10gbps network of course*
dharmateja has quit [Ping timeout: 256 seconds]
Swant has quit [Read error: Connection reset by peer]
aLeSD has quit [Quit: Leaving]
hurikhan77 has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
hurikhan77 has joined #ipfs
<tomoxtechno[m]> <void09 "any devs here ? when will we hav"> what speed do you get locally?
zeden has joined #ipfs
<void09> tomoxtechno[m]: I get 200MB/s when wget-ing from localhost http gateway, if the file is already cached
<void09> 35-45MB/s from local network (1gbps link)
<void09> using latest git as of 12 hours ago
<swedneck> well you're not gonna get 10gbps over a 1gbps link :P
<void09> I know, but if I get just 35MB/s consistently over 1gbps at 0.1ms network latency, no way it's going to go faster if I change nics to 10gbps
<void09> I would have tested over 10gbps but there's no point
<swedneck> fwiw i don't think wget from your gateway is optimal, there's probably all kinds of overhead added
<void09> Also not sure how to translate that http gateway 35MB/s to a filesystem mount.. the python api mount tool only does 10-11MB/s :(
<void09> swedneck: really now ? what would be optimal then ? ipfs cat ?
<swedneck> probably `ipfs get`?
<void09> ok let me try
<swedneck> although then you're bottlenecked by writing to a file..
<swedneck> i mean, testing multiple things can't hurt, and should give you a better overview of performance
<void09> writing shouldn't be bottlenecked, I've got a 3GB/s nvme
<void09> also reading from ssd raid array on the host
<swedneck> then i think `ipfs get` is the most "realistic" test
<void09> same speed with ipfs get, 30-50MB/s, averaging at about 39MB/s or so
<void09> maybe just a tad faster
<swedneck> yeah i seem to be getting the same
dharmateja has joined #ipfs
<void09> long way to go from 40MB/s to 900MB/s :(
jessicara has quit [Ping timeout: 260 seconds]
kapcom01[m] has joined #ipfs
jessicara has joined #ipfs
MasseR has quit [Ping timeout: 240 seconds]
MasseR has joined #ipfs
<aschmahmann[m]> void09: 900MB/s > 1gbps, but nonetheless you're right, Bitswap doesn't really make full use of the available bandwidth. I haven't looked too much into the Bitswap internals but I suspect this boils down to Bitswap being a request-response protocol where we ask peers for blocks of data; this makes the download speed dependent on latency, shape of the graph, and size of the wantlist (I'm not sure what the max is here).
<aschmahmann[m]> For high throughput 1-to-1 downloads GraphSync should be plenty fast; however, go-ipfs currently only has the ability to upload data to peers using GraphSync, not download data using it (FYI it's not a high priority to make go-ipfs use GraphSync for fetching data since it still requires a bunch of work to set it up to work well in many-to-1 downloading situations and there are other things at the top of the TODO list)
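The request-response point above can be made concrete with a toy bandwidth-delay model (my own back-of-envelope illustration, not Bitswap's actual windowing; the RTT, link speed, and window numbers are assumptions):

```python
KiB = 1024

def throughput_bound(block_size: int, rtt_s: float, window: int) -> float:
    """Bytes/second ceiling imposed by latency for a windowed
    request-response protocol: one RTT moves window * block_size bytes."""
    return window * block_size / rtt_s

def window_needed(bandwidth_bytes_s: float, rtt_s: float, block_size: int) -> float:
    """Outstanding blocks needed to fill the pipe (bandwidth-delay product)."""
    return bandwidth_bytes_s * rtt_s / block_size

# 256 KiB blocks fetched one at a time over an assumed 50 ms WAN path:
wan_single = throughput_bound(256 * KiB, 0.05, 1)     # latency-bound, ~5 MB/s
# blocks that must be in flight to saturate 1 Gbps (125 MB/s) at 50 ms RTT:
needed = window_needed(125_000_000, 0.05, 256 * KiB)  # roughly two dozen
```

At LAN-scale 0.1 ms latency the same bound is in the GB/s range, so latency alone cannot explain the 40MB/s observed locally, but over a WAN it quickly dominates unless many requests are kept in flight.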
Adbray has quit [Quit: Ah! By Brain!]
Caterpillar2 has joined #ipfs
Caterpillar has quit [Ping timeout: 256 seconds]
Caterpillar has joined #ipfs
Caterpillar2 has quit [Ping timeout: 256 seconds]
mokulus has joined #ipfs
coniptor has joined #ipfs
<void09> oh I see. actually 10gbps is 1250MB/s, but 900MB/s or so should be a realistic speed to get
<void09> yeah but the situation is pretty bad if it can only do 30-40MB/s at 0.1ms network latency
<void09> I can get 70MB/s torrent downloads from 1 peer, so bittorrent must be doing something better :)
<void09> 70MB/s from peers in other countries
<void09> probably multiple parallel block requests. does ipfs only ask for one block at a time?
<aschmahmann[m]> void09: If you are in control of the shape of the data you want there are some parameters you can tweak in `ipfs add` to increase your download speed. For example, you could increase the chunk size from 256kiB to 1MiB (see `ipfs add --help` for more info)
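Some quick arithmetic on the 78 GB file from earlier (pure block counting, treating GB as GiB) shows what the suggested chunk-size change buys: four times fewer blocks to request.

```python
import math

GiB = 1024 ** 3
size = 78 * GiB  # the 78 GB file discussed above

def leaf_blocks(file_size: int, chunk_size: int) -> int:
    """Number of leaf blocks a file splits into at a given chunk size."""
    return math.ceil(file_size / chunk_size)

leaves_default = leaf_blocks(size, 256 * 1024)   # default 256 KiB chunks
leaves_1m = leaf_blocks(size, 1024 * 1024)       # 1 MiB chunks: 4x fewer
```

Fewer blocks means fewer entries to request and track, which is why the larger chunk size helps a latency-sensitive request-response transfer.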
ylp has quit [Read error: Connection reset by peer]
ylp1 has joined #ipfs
<void09> oh cool, will try that. won't that change hash though ?
mokulus has quit [Quit: WeeChat 2.9]
<aschmahmann[m]> IPFS asks for many hashes at a time, but if the DAG is deep it might need to read a couple layers before it expands enough. Yes, it will change the hash.
Adbray has joined #ipfs
mokulus has joined #ipfs
worc3131 has joined #ipfs
<aschmahmann[m]> > 70MB/s from peers in other countries
<aschmahmann[m]> Multiple peers or a single peer? Only speculating on the two protocols, my guess is that there are some optimizations you can do when there's a single swarm for a single dataset like BitTorrent has (e.g. if my swarm has 10 peers and there are a million blocks I could tell them to each send me block numbers where n % peer number = 0). With Bitswap this is a little more complicated since there is no .torrent file with
<aschmahmann[m]> all the blocks listed and it's not guaranteed that people with a root node also have the child nodes.
Thominus has joined #ipfs
<aschmahmann[m]> This is part of why a download protocol that operates on graphs instead of arbitrary blocks should be able to reach higher throughputs.
<void09> 70MB/s for a single peer. Can get 220MB/s which is the maximum of my 2 connections on a swarm with a few fast peers
<ipfsbot> Andrei Vukolov @twdragon posted in --migrate option leads to impossibility of daemon restart - https://discuss.ipfs.io/t/migrate-option-leads-to-impossibility-of-daemon-restart/9067/1
bmwiedemann2 has quit [Ping timeout: 256 seconds]
<void09> blah how do you use a 1MB chunk size when adding
<void09> ipfs add --chunker=size-1048576 --nocopy
<aschmahmann[m]> looks right
KempfCreative has joined #ipfs
PyHedgehog has joined #ipfs
Adbray has quit [Quit: Ah! By Brain!]
<void09> but that different hash on different chunk size kills it for me :( defeats the purpose
dharmateja has quit [Quit: WeeChat 2.3]
<void09> was there really no way to have the same main hash ?
<aschmahmann[m]> That's why I said "If you are in control of the shape of the data", if you're publishing it then you can control the shape but if you're just one of many people who have this data then it won't help.
<aschmahmann[m]> changing any part of the graph at all results in a different root hash. That's what makes the data immutable and fully self describing.
<aschmahmann[m]> > was there really no way to have the same main hash ?
<void09> yes but, change default chunk size in ipfs => new files added won't be joined with the old ones
<void09> old links will die
<void09> not sure why ipfs defaults to 256kb, i'd have used something like 2MB instead
<swedneck> for deduplication, i'm pretty sure
bmwiedemann2 has joined #ipfs
<swedneck> with smaller block sizes, you can deduplicate more
jr has joined #ipfs
<void09> not sure if anyone did an actual scientific measurement over a huge real world dataset, but at first sight, I very seriously doubt it's worth the extra overhead
<void09> you will "duplicate" a lot more in metadata traffic than you'll ever gain by deduplicating 1.7MB worth of data
_jrjsmrtn has quit [Ping timeout: 260 seconds]
<aschmahmann[m]> > old links will die
<aschmahmann[m]> That's not really accurate. Changing the default chunk size in ipfs means that if we both add the file X to IPFS we won't get the same DAG, but you can still download existing DAGs just fine.
<aschmahmann[m]> Given that in order to download data using IPFS someone has to give you a CID (or a mutable pointer that turns into a CID) this means you're already using somebody else's formatting. I'm not sure how common the use case is of 1000 people all `ipfs add`ing some shared data and wanting the same CID to pop out. There are definitely use cases, but I don't think it's particularly common.
<void09> usually modified files change in size too, so that will ruin deduplication anyway
__jrjsmrtn__ has joined #ipfs
<void09> I was thinking of ipfs as a ed2k/torrent network replacement
<void09> on ed2k you get the same hash for the same file added, always. in torrent you have one hash per dataset that changes with block size, so it's useless anyway. but the torrent v2 spec creates per-file hashes too, so that will be fixed.
<aschmahmann[m]> > modified files change in size too, so that will ruin deduplication anyway
<aschmahmann[m]> That's not necessarily true, e.g. append-only logs. Additionally, there are other chunking schemes like rabin or buzzhash that utilize the content to determine chunk boundaries. This makes even arbitrary inserts into files share a very large portion of their data
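The rabin/buzzhash idea can be illustrated with a minimal content-defined chunker. This is a toy (a windowed byte-sum with an assumed ~1/64 boundary probability and ~64-byte average chunks, nothing like the real chunkers' parameters): an insertion shifts every fixed-size chunk after the edit, but content-defined boundaries re-synchronize because they depend only on local content.

```python
import random

def cdc_chunks(data: bytes, window: int = 8, mask: int = 0x3F) -> list[bytes]:
    """Toy content-defined chunker: cut wherever the sum of the last
    `window` bytes satisfies the mask condition."""
    chunks, start = [], 0
    for i in range(window - 1, len(data)):
        if (sum(data[i - window + 1:i + 1]) & mask) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    chunks.append(data[start:])
    return chunks

def fixed_chunks(data: bytes, size: int = 64) -> list[bytes]:
    """Fixed-size chunker for comparison."""
    return [data[i:i + size] for i in range(0, len(data), size)]

random.seed(0)
base = bytes(random.randrange(256) for _ in range(16384))
edited = base[:5000] + b"INSERT" + base[5000:]  # 6 bytes added mid-file

# Fixed-size chunks after the insertion point all shift and stop matching;
# content-defined chunks re-align, so almost all of them still dedupe.
shared_cdc = set(cdc_chunks(base)) & set(cdc_chunks(edited))
shared_fixed = set(fixed_chunks(base)) & set(fixed_chunks(edited))
```

Only the handful of chunks touching the edited region change under the content-defined scheme, which is the property that makes mid-file inserts dedupe well.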
<void09> if hashes are prone to change, then the network will fragment itself and it will defeat the purpose
<void09> oh, i had no idea about those
<void09> but then someone has to actually use those chunking schemes
<void09> which will result in a different hash
<void09> so the swarm will be split
<swedneck> unless someone else adds the same data with another chunker, nothing will change
<void09> yes, but the chunker not being the default one, means that if you want the advantages of mid-file modification deduplication, you have to generate another hash
<aschmahmann[m]> void09: what is the scenario you're imagining in which we both add the same data to IPFS and are sad about the different resulting hashes? I'm sure such scenarios exist, but understanding where you're coming from will help.
<void09> aschmahmann[m]: simple, immortal permanent links. files that never die
<swedneck> i feel like you're misunderstanding something about how IPFS works
<void09> that's the whole appeal of content based addressing
<void09> some content = some address, not multiple addresses
<void09> and if sometimes in the future something needs to be changed about that, for security or whatever reasons, there should be a mechanism for the whole of ipfs nodes to automatically migrate to the new hashes, and nodes that link old ones to new ones should exist, so that old links do not become obsolete
<swedneck> if you have already added some data to ipfs and shared the CID, then as long as someone keeps serving that data it will never become unavailable
prataprc has joined #ipfs
<void09> yes, but i cannot co-share with other peers that have a different hashing method used
<swedneck> it doesn't matter if you stop hosting the data in favour of a new CID, if someone else on the network has already pinned it then anyone can keep downloading the old CID
<swedneck> void09: why not? You can absolutely pin 2 CIDs that contain the same data at the same time
<void09> yes but I will be the only one serving the old CID
<swedneck> unless someone else pins it, which is extremely easy to do
<void09> yes but it won't happen automatically, that was my point
<void09> anyway 256kb seems a bit small to me. the losses are probably a lot bigger than the potential gains
<void09> internet speeds are getting faster and faster.. lots of people now on 100-1000mbps download speeds
<void09> I feel like trying to save a few hundred kb on a tiny percent of the total potential files on ipfs, while adding metadata overhead and latency to every request, is just hurting the whole thing uselessly
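For a rough sense of the byte overhead being argued about here (the ~100 bytes of per-block CID/wantlist bookkeeping is my assumption, not a measured Bitswap figure), the fixed per-block byte cost is tiny at either chunk size; the real cost of small blocks is the extra request round trips.

```python
GiB = 1024 ** 3
OVERHEAD_PER_BLOCK = 100  # bytes of per-block bookkeeping -- an assumption

def overhead_fraction(file_size: int, chunk_size: int) -> float:
    """Per-block metadata bytes as a fraction of the payload."""
    blocks = -(-file_size // chunk_size)  # ceiling division
    return blocks * OVERHEAD_PER_BLOCK / file_size

frac_256k = overhead_fraction(10 * GiB, 256 * 1024)   # well under 0.1%
frac_1m = overhead_fraction(10 * GiB, 1024 * 1024)    # 4x smaller still
```

So under these assumptions the metadata bytes themselves are negligible either way; the chunk-size tradeoff is really dedup granularity versus number of requests.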
<aschmahmann[m]> > yes, but i cannot co-share with other peers that have a different hashing method used
<aschmahmann[m]> again, can you give a more complete scenario? For example take the following scenario:
<aschmahmann[m]> Alice makes a new file called AliceFile and adds it to IPFS, she publishes the CID QmABC on her blog and over time people start downloading it. Two years later the default chunker changes, and when Bob goes to Alice's blog and sees QmABC he gets it from Alice and all the other users who have already shared that blog. The fact that the default chunker has changed in no way impacts the file Bob gets.
<void09> I'm not thinking about blogs, websites and such here.. that's very mutable content over the course of time
<void09> I'm thinking about bigger files, media, linux isos, etc
jr has quit [Quit: leaving]
<void09> which have to remain immutable, and new peers with newer ipfs versions sharing that file shouldn't end up in a split swarm
<void09> unless there's a mechanism to join those 2 swarms, but then you'd have to be aware it's the same file, just using two different hashes
<swedneck> "remain immutable"?
<aschmahmann[m]> My example above wasn't about the file type it was more about how do you even DISCOVER the CID of the file. The example above shows AliceFile being linked from a blog, but there's no reason it couldn't be a Linux ISO, it's just a link
<swedneck> why would multiple people add a linux iso to their nodes, instead of getting the CID of the ISO from something like the distro's website?
<aschmahmann[m]> ^^ this
<swedneck> it doesn't seem to be a big problem for torrent, so i don't see why it'd be a problem for IPFS
<swedneck> the only thing i would change is making a content-aware chunker the default, so you don't have to manually specify a different chunking method for large binary files
mokulus has quit [Quit: WeeChat 2.9]
dharmateja has joined #ipfs
jcea has joined #ipfs
bmwiedemann2 has quit [Ping timeout: 272 seconds]
detran has joined #ipfs
<void09> swedneck: yes that's a good idea
<void09> yeah switching to 1MB blocks improved ipfs get speed, from 40MB/s to 65MB/s
<void09> still not maximum, but a big improvement nonetheless
<void09> is 1MB the max block size you can set?
<aschmahmann[m]> yep, that's the highest things are guaranteed to work with. The reason there is a max at all is basically to prevent various types of attacks (e.g. someone sending you GBs of garbage data).
<aschmahmann[m]> btw there's some on-going work on evaluating different chunking strategies that ribasushi is interested in looking into. He started with https://github.com/ipfs-shipyard/DAGger to give us some evaluation tools.
<jessicara> one scenario i can think of is restoring partially corrupted files, and only having your own hash known
<void09> heh I can stream 100mbps 4k video from myself, to ipfs gateway, and back to myself :D
Taoki has joined #ipfs
<jessicara> but normally in that case, like with torrents, would seek out a specific cid/torrent which has it first
<jessicara> (hash from before the corruption)
<void09> for anyone with 100+mbps download to try
<void09> also works with vlc it seems
dharmateja has quit [Ping timeout: 244 seconds]
Taoki has quit [Remote host closed the connection]
<void09> or mpv http://localhost:8080/ipfs/QmZbh6sLwy3Fq3QrBGCvW5uMmzhn1NTeAfBp3kyDVtTbKX I guess since most people here probably run ipfs themselves
<jessicara> for now it's off here, everything slightly heavy is
<void09> it's 100mbps , video + sound + whatever overhead
<jessicara> if you're testing lines, it's not a good idea to be bombarding them
<void09> it's off ?
<jessicara> my end is
<void09> oh you mean ipfs node?
<jessicara> and i don't have 100mbps
<jessicara> yeah
<void09> not sure if it's still a router killer in the latest git :D
<void09> I remember I had to reboot the router every once in a while.. took some time until I figured out it was ipfs
<jessicara> yeah, this is a concern. just got a new router to try instead...
<jessicara> and whatever was going on with the old one became permanent
<swedneck> i haven't had any router issues
<void09> it was isp provided router, not built for thousands of live connections
<jessicara> nat and udp might be a problem with these
Taoki has joined #ipfs
Taoki has quit [Client Quit]
nst^ has joined #ipfs
Taoki has joined #ipfs
stoopkid has joined #ipfs
cipres has joined #ipfs
Colpop4323 has joined #ipfs
Colpop4323 has quit [Remote host closed the connection]
outreached has quit [Remote host closed the connection]
<Mikaela[meow]> I have had router issues which have been helped by using a VPN and I haven't figured out if https://1.1.1.1/beta also helps
Guest31244 has quit [Changing host]
Guest31244 has joined #ipfs
Guest31244 has joined #ipfs
Guest31244 is now known as georgyo[m]
georgyo[m] has quit [Quit: authenticating]
georgyo[m] has joined #ipfs
militias has joined #ipfs
prataprc has quit [Quit: Connection closed for inactivity]
spossiba has quit [Ping timeout: 240 seconds]
coniptor has quit [Ping timeout: 256 seconds]
spossiba has joined #ipfs
Newami has joined #ipfs
coniptor has joined #ipfs
teknari[m] has quit [Quit: Idle for 30+ days]
godgunman[m] has quit [Quit: Idle for 30+ days]
raucao has quit [Ping timeout: 265 seconds]
raucao has joined #ipfs
Newami has quit [Quit: Leaving]
cipres has quit [Ping timeout: 272 seconds]
Ecran10 has joined #ipfs
dqx_ has joined #ipfs
kiryin has joined #ipfs
ylp1 has quit [Quit: Leaving.]
Ecran10 has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<void09> var DefaultBufSize = 1048576
<void09> DefaultBufSize is the buffer size for gets. for now, 1MB, which is ~4 blocks. TODO: does this need to be configurable?
<void09> anyone know anything about this ? how can it be changed ? would it affect download performance at high speeds?
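A quick sanity check of the quoted comment, plus why that constant interacts with the chunk-size experiment above: with the default 256 KiB chunks a 1 MiB read buffer does hold 4 blocks, but with 1 MiB chunks it holds exactly one, i.e. no read-ahead margin at all (whether that matters for throughput is the open question here).

```python
DEFAULT_BUF_SIZE = 1048576  # the DefaultBufSize quoted from the source

def blocks_buffered(chunk_size: int, buf_size: int = DEFAULT_BUF_SIZE) -> int:
    """How many whole blocks of a given chunk size fit in the read buffer."""
    return buf_size // chunk_size

buffered_256k = blocks_buffered(256 * 1024)   # default chunks: 4 per buffer
buffered_1m = blocks_buffered(1024 * 1024)    # 1 MiB chunks: 1 per buffer
```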
zeden has quit [Quit: WeeChat 2.9]
militias has quit [Remote host closed the connection]
Caterpillar has quit [Ping timeout: 258 seconds]
Caterpillar has joined #ipfs
bmwiedemann2 has joined #ipfs
rvklein[m] has joined #ipfs
PISCES has joined #ipfs
<swedneck> i'd probably suggest opening an issue on github
dethos has joined #ipfs
cipres has joined #ipfs
drathir_tor has quit [Remote host closed the connection]
drathir_tor has joined #ipfs
cubemonk1y has joined #ipfs
Caterpillar2 has joined #ipfs
chessboard has joined #ipfs
Caterpillar has quit [Ping timeout: 265 seconds]
Caterpillar2 is now known as Caterpillar
stoopkid has quit [Quit: Connection closed for inactivity]
dethos has quit [Ping timeout: 240 seconds]
Ecran10 has joined #ipfs
<rvklein[m]> hello
<swedneck> o/
nst^ has quit []
ViCi has quit [Ping timeout: 264 seconds]
mindCrime has joined #ipfs
ViCi has joined #ipfs
bmwiedemann2 has quit [Ping timeout: 260 seconds]
cipres has quit [Ping timeout: 256 seconds]
supercoven has quit [Ping timeout: 272 seconds]
cubemonk1y has quit [Quit: Lost terminal]
Taoki has joined #ipfs
qqqqqq has joined #ipfs
qqqqqq has left #ipfs [#ipfs]
RoboFlex13 has quit [Quit: Leaving]
mindCrime_ has joined #ipfs
mindCrime has quit [Ping timeout: 240 seconds]
ry60003333 has quit [Ping timeout: 244 seconds]
dethos has joined #ipfs
cipres has joined #ipfs
Caterpillar2 has joined #ipfs
Caterpillar has quit [Ping timeout: 264 seconds]
Caterpillar has joined #ipfs
Caterpillar2 has quit [Ping timeout: 272 seconds]
cipres has quit [Ping timeout: 272 seconds]
<ipfsbot> @system posted in Welcome to IPFS Weekly 105 - https://discuss.ipfs.io/t/welcome-to-ipfs-weekly-105/9069/1
KempfCreative has quit [Ping timeout: 264 seconds]
ry60003333 has joined #ipfs
chessboard has quit [Remote host closed the connection]
graffiti has joined #ipfs
tryte has quit [Remote host closed the connection]
Thominus has quit [Remote host closed the connection]
tryte has joined #ipfs
mowcat has joined #ipfs
carla[m] has left #ipfs ["User left"]
codebam has quit [Quit: leaving...]
Thominus has joined #ipfs
cipres has joined #ipfs
codebam has joined #ipfs
zeden has joined #ipfs
codebam has quit [Read error: Connection reset by peer]
ipfs-stackbot1 has quit [Remote host closed the connection]
MDude has joined #ipfs
ipfs-stackbot1 has joined #ipfs
dethos has quit [Ping timeout: 272 seconds]
dqx_ has quit [Read error: Connection reset by peer]
dqx_ has joined #ipfs
crest has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
crest has joined #ipfs
pecastro has quit [Ping timeout: 240 seconds]
`Alison has quit [Ping timeout: 244 seconds]
hexa- has quit [Quit: WeeChat 2.7.1]