stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.5.0 and js-ipfs 0.43.1 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
Ecran has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<JCaesar>
rubenkelevra: That may be true, but as long as you still have dnslink in the loop, IPFS doesn't even solve that.
<RubenKelevra[m]>
Well, my domain uses DNSSEC. But I didn't even want to use DNS at all - they can just use IPNS to sign the database folder CID
KindTwo has joined #ipfs
jorge_ has quit [Quit: jorge_]
<JCaesar>
Also, I would prefer it if not every "network layer" has to be responsible for security on its own. Coincidentally, archlinux has a solution for that https://bugs.archlinux.org/task/45657 and just somehow doesn't use it.
KindOne has quit [Ping timeout: 272 seconds]
<JCaesar>
(Also, I think right now with 0.5.0, the speed is plenty as is.)
KindTwo is now known as KindOne
<JCaesar>
Leaving aside the discussion of whether it's beneficial or not, I want you to face one thing: it's not going to happen in the short run. Maybe it has some chance of happening at all if you do it yourself. At least in my experience, pacman isn't exactly a fast-moving project.
<RubenKelevra[m]>
Well, it wasn't in the past, but in the last year it's been progressing rather quickly. :)
<RubenKelevra[m]>
Also, there's interest in the idea - just not enough trust in the IPFS project as a whole at the moment. I'm sure this will change quite soon when Filecoin launches.
<RubenKelevra[m]>
> Also, I would prefer it if not every "network layer" has to be responsible for security on its own.
<JCaesar>
Hm, then: Couldn't this level of trust be increased in the arch community by demonstrating the usefulness of the IPFS arch mirror without first requiring modifications to pacman?
<RubenKelevra[m]>
True. But I don't like the idea of trusting just some random host with my database updates.
<RubenKelevra[m]>
ˈt͡sɛːzaɐ̯: isn't that what I'm already doing right now?
<RubenKelevra[m]>
also: the modifications would be minuscule: if you add an "ipfs-download-host" to the settings, pacman would first try to fetch the file via its CID from that HTTP gateway, and then from the first mirror server.
<RubenKelevra[m]>
Nothing would change for the standard installation, unless you add the URL of a web gateway.
<RubenKelevra[m]>
Since it already knows what the filename is, basically all the information is already there; you just need the CID for each file.
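A rough sketch of the lookup order described above (everything here is hypothetical for illustration - the gateway, mirror, setting name, and CID are made up, not real pacman options):

```shell
# Hypothetical sketch: build the download URL list for a package,
# trying an IPFS HTTP gateway (by CID) before the plain mirror.
ipfs_gateway="https://gateway.example.org/ipfs"
mirror="https://mirror.example.org/archlinux/core/os/x86_64"

urls_for() {
    pkg="$1"
    cid="$2"
    # gateway first, if we know the CID for this package
    if [ -n "$cid" ]; then
        echo "$ipfs_gateway/$cid?filename=$pkg"
    fi
    # then the ordinary mirror, exactly what pacman does today
    echo "$mirror/$pkg"
}

urls_for "pacman-5.2.2-1-x86_64.pkg.tar.zst" "bafyexamplecid"
```

The point is that the fallback logic is tiny: with no gateway or CID configured, the list degrades to exactly the mirror URL pacman uses now.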
<RubenKelevra[m]>
I think the reason they hesitate is just that IPFS is still somewhat in the process of becoming rock solid.
<RubenKelevra[m]>
I mean, there are still like 5-6 blocking bugs before I would recommend it for a production system. But it's getting there. :)
<JCaesar>
nyeh, I do think their hesitation runs deeper than that. IPFS certainly isn't the only solution in this space, and it would be neat for pacman to be agnostic about that.
<RubenKelevra[m]>
I get that... at first glance there are some other solutions. But I still think IPFS is the only viable one.
<RubenKelevra[m]>
I mean, which other solution offers clusters plus simple "following" of cluster content, like IPFS does?
stavros has joined #ipfs
<stavros>
I have an IPFS cluster installation with one node, and it's pinning some files. However, the number of pins queued decreases and then increases again without me pinning anything new. Does IPFS cluster retry failed files?
benjamingr__ has quit [Quit: Connection closed for inactivity]
zerogue has quit [Quit: Goodbye Cruel World]
zerogue has joined #ipfs
Belkaar has joined #ipfs
Belkaar has joined #ipfs
Belkaar_ has quit [Ping timeout: 272 seconds]
<RubenKelevra[m]>
stavros: The IPFS cluster daemon tries to repin failed pins. There's a timeout for pinning commands in the cluster's config file. This timeout gets reset on each block that is received; if blocks come in too slowly, the cluster will jump to the next job and recover the pin later.
<stavros>
RubenKelevra[m], I have set the timeout to something appropriate, but I need the cluster to have a longer retry interval, otherwise it keeps retrying the same failed pins and never gets to new ones. Is there some way I can adjust that?
<RubenKelevra[m]>
stavros: yes
<RubenKelevra[m]>
stavros: IIRC this should be pin_recover_interval
<stavros>
RubenKelevra[m], ahh, thank you!
<RubenKelevra[m]>
stavros: I use a 1-minute timeout for pin/unpin, which is rather long. Just remember, the pin timeout gets reset after each 256 KByte block received. So if your source can keep up with delivering 256 KByte to each of your n cluster members every x seconds, you probably want that value plus some buffer on top.
<RubenKelevra[m]>
stavros: You can also adjust how many concurrent pins happen; I use 1 because of the I/O load on HDD followers. But if you only have SSD followers, this doesn't matter.
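For reference, the knobs discussed above live in ipfs-cluster's `service.json`. Field locations here are from memory and the values are illustrative - check them against your own config:

```json
{
  "cluster": {
    "pin_recover_interval": "1h0m0s"
  },
  "ipfs_connector": {
    "ipfshttp": {
      "pin_timeout": "1m0s"
    }
  },
  "pin_tracker": {
    "stateless": {
      "concurrent_pins": 1
    }
  }
}
```

`pin_timeout` is the per-pin timeout that resets on received blocks, `pin_recover_interval` controls how often failed pins are retried, and `concurrent_pins` caps the number of simultaneous pin operations.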
<stavros>
My pins aren't very available so I use 50 concurrent, 1 min for pin_timeout sounds about right, no?
<RubenKelevra[m]>
stavros: 50 concurrent sounds pretty high
<RubenKelevra[m]>
stavros: I would start with 1, and increase it if you're not satisfied with the speed.
<stavros>
RubenKelevra[m], It's not about the speed, my problem is that one unresponsive pin just freezes every other one
<stavros>
And I have a few thousand of those, so it basically never gets around to pinning anything
<RubenKelevra[m]>
stavros: ah! Got you. How large are the pins?
<RubenKelevra[m]>
Are these individual files, or full folders with stuff in them? Fetching folders can take many round trips, and that can lead to a stall as well (I guess)
<stavros>
A bit of both, but mostly full folders
<stavros>
Pins range from a few kb to some gb
<stavros>
Right now IPFS is only doing 200 KB/s with 50 pins on an SSD
<RubenKelevra[m]>
Well, if you run 50 jobs at the same time, the sending side might have a hard time keeping up with all that random I/O
<stavros>
It's not a single side, it's disparate pins from all over the network, so that should be fine
<RubenKelevra[m]>
ah! okay
<stavros>
Your suggestion seems to have basically fixed my problem, thanks!
<stavros>
I looked at the docs but didn't realize cluster would repin
<RubenKelevra[m]>
stavros: cool!
<RubenKelevra[m]>
stavros: If you run 50 simultaneous pin operations, make sure to increase the connection limits on the ipfs daemon. That way connections don't need to be dropped and re-established all the time when some data is on the same peers.
<stavros>
RubenKelevra[m], Hmm, which parameter is that?
<RubenKelevra[m]>
stavros: on the ipfs config, Swarm.ConnMgr.HighWater and Swarm.ConnMgr.LowWater
ipfs-stackbot1 has quit [Remote host closed the connection]
<stavros>
RubenKelevra[m], Oh, I thought those were related to memory... They're set to 600/900, is that reasonable?
<RubenKelevra[m]>
That's the default
<stavros>
Hmm, maybe I should double it
<RubenKelevra[m]>
So when 900 connections are reached, the conn manager will kill connections down to 600
<RubenKelevra[m]>
(its a bit more complex than that)
<stavros>
Oh wow, that much? I'll bump it up some more, thanks... How large is large?
<RubenKelevra[m]>
stavros: 10000 connections is probably fine for your workload. It avoids your daemon having to close connections that might be useful in the future; the other side will (mostly) close connections instead.
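For reference, these watermarks live in the `Swarm.ConnMgr` section of the go-ipfs config file; the fragment below shows the defaults mentioned above (raise `LowWater` and `HighWater` together):

```json
{
  "Swarm": {
    "ConnMgr": {
      "Type": "basic",
      "LowWater": 600,
      "HighWater": 900,
      "GracePeriod": "20s"
    }
  }
}
```

They can also be changed from the CLI, e.g. `ipfs config --json Swarm.ConnMgr.HighWater 10000`, followed by a daemon restart.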
buttocks has quit [Remote host closed the connection]
<RubenKelevra[m]>
stavros: yeah ... it uses quite a lot of memory if you can REALLY run 50 simultaneous pins and aren't just waiting for connections to be established ;)
<stavros>
Hmm
<RubenKelevra[m]>
I would go for 4 or maybe 8 simultaneous pinning operations.
<stavros>
Even though most will just be idle?
<RubenKelevra[m]>
But disclaimer: I'm just a user of IPFS, not a developer - maybe wait for a developer to answer your questions :)
<stavros>
RubenKelevra[m], well this is useful, so I'll experiment with your suggestion and see
<stavros>
I've set it to 8 for now
<RubenKelevra[m]>
stavros: Yeah, you still have to search the DHT 50 times simultaneously for information
<RubenKelevra[m]>
That's probably what was blocking you earlier on
<stavros>
Oh hmm
<RubenKelevra[m]>
stavros: and make sure not to use QUIC at the moment, there are some leaking-timer issues (I don't think they're fixed in 0.5.1 yet). All experimental options should be deactivated if you don't want to experiment ;)
<stavros>
That's not enabled by default, is it?
<stavros>
RubenKelevra[m], so I don't need to have --recover in my crontab, right? It happens automatically
<RubenKelevra[m]>
stavros: QUIC can be activated in the ipfs settings - but it isn't active by default. Just wanted to make sure you're not running out of memory because of it
<stavros>
RubenKelevra[m], yeah, I didn't enable it, thanks
<RubenKelevra[m]>
stavros: I don't use crontabs at all for IPFS - what are you referring to?
<stavros>
Running recovery manually, looks like it's unnecessary
<RubenKelevra[m]>
ah!
<RubenKelevra[m]>
no you don't have to
<stavros>
Great, thank you for the info! The cluster seems much happier now
<RubenKelevra[m]>
stavros: welcome
sscriptwr has joined #ipfs
turona has quit [Ping timeout: 265 seconds]
turona has joined #ipfs
jcea has quit [Quit: jcea]
john33579 has joined #ipfs
john33579 has left #ipfs [#ipfs]
stavros has quit [Remote host closed the connection]
gavlee has quit []
mowcat has joined #ipfs
mowcat has quit [Remote host closed the connection]
KempfCreative has quit [Ping timeout: 246 seconds]
Taoki has quit [Read error: Connection reset by peer]
Taoki has joined #ipfs
xcm has quit [Killed (orwell.freenode.net (Nickname regained by services))]
xcm has joined #ipfs
katian_ has joined #ipfs
xcm has quit [Ping timeout: 246 seconds]
xcm has joined #ipfs
katian_ has quit [Remote host closed the connection]
daregap has joined #ipfs
craigo has quit [Ping timeout: 240 seconds]
gavlee has joined #ipfs
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
xcm has quit [Read error: Connection reset by peer]
xcm has joined #ipfs
lufi has joined #ipfs
lufi has quit [Changing host]
alephnull has quit [Ping timeout: 256 seconds]
alephnull has joined #ipfs
rendar has joined #ipfs
sz0 has joined #ipfs
bengates has joined #ipfs
bengates has quit [Remote host closed the connection]
bengates has joined #ipfs
pecastro has joined #ipfs
bengates has quit [Quit: Leaving...]
fabianhjr has quit [Quit: Leaving.]
bengates has joined #ipfs
bengates has quit [Remote host closed the connection]
bengates has joined #ipfs
dethos has joined #ipfs
user_51 has joined #ipfs
risk has joined #ipfs
bleb_ has joined #ipfs
xentec_ has joined #ipfs
SOO7- has joined #ipfs
Ecran has joined #ipfs
Vaelatern_ has joined #ipfs
chokeonit has joined #ipfs
xentec has quit [*.net *.split]
peeonyou has quit [*.net *.split]
Vaelatern has quit [*.net *.split]
nsh has quit [*.net *.split]
bleb has quit [*.net *.split]
SOO7 has quit [*.net *.split]
SOO7- is now known as SOO7
endvra has quit [Read error: Connection reset by peer]
k_jenner has quit [Remote host closed the connection]
k_jenner has joined #ipfs
cipres has joined #ipfs
ZaZ has quit [Read error: Connection reset by peer]
fazo96 has joined #ipfs
amitizle has joined #ipfs
cipres has quit [Quit: leaving]
Sajesajama_ has quit [Quit: Leaving]
Sajesajama has joined #ipfs
KeiraT has quit [Ping timeout: 240 seconds]
Soo_Slow has joined #ipfs
KeiraT has joined #ipfs
amitizle has left #ipfs ["aight imma head"]
KempfCreative has joined #ipfs
fazo96 has quit [Ping timeout: 260 seconds]
daregap has quit [Quit: daregap]
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
jcea has joined #ipfs
opal has quit [Quit: No Ping reply in 180 seconds.]
opal has joined #ipfs
jigawatt has joined #ipfs
joocain2 has quit [Remote host closed the connection]
joocain2 has joined #ipfs
jesse22 has joined #ipfs
p8m has joined #ipfs
Newami has joined #ipfs
KempfCreative has quit [Remote host closed the connection]
Newami has quit [Remote host closed the connection]
KempfCreative has joined #ipfs
dethos has joined #ipfs
ipfs-stackbot1 has quit [Remote host closed the connection]
ipfs-stackbot1 has joined #ipfs
jorge_ has joined #ipfs
is_null has joined #ipfs
p8m has quit [Quit: birdd]
p8m has joined #ipfs
p8m has quit [Remote host closed the connection]
p8m has joined #ipfs
cxl000 has quit [Ping timeout: 260 seconds]
Vaelatern_ is now known as Vaelatern
cxl000 has joined #ipfs
<dadepo[m]>
Is it just me, or does it often take really long for the chat rooms to load?
ZaZ has joined #ipfs
<swedneck1>
you're on matrix.org, which is slow
<swedneck1>
also that's not really relevant to this room, that's a #matrix:matrix.org thing
<dadepo[m]>
<swedneck1 "also that's not really relevant "> got it
vyzo has quit [Ping timeout: 260 seconds]
<xergiok[m]>
Is it possible to see how many peers are sharing a specific hash?
<swedneck1>
`ipfs dht findprovs <key>`
<swedneck1>
"Find peers that can provide a specific value, given a key."
<xergiok[m]>
<key> being the CID?
<swedneck1>
e.g. `ipfs dht findprovs /ipfs/bafybeidhycjoffkqxrzx3gutcf5quq3tczkikxfsocdayt6o3lzqiqqj4q` gives me 20 peers
<xergiok[m]>
Gotcha. Thanks!
hqdruxn08__ has joined #ipfs
<swedneck1>
pipe it into `wc -l` to get a number instead of a list of peers
hqdruxn08_ has quit [Ping timeout: 256 seconds]
AnsgarSchmidt[m] has joined #ipfs
sz0 has joined #ipfs
<TraderOne[m]>
For IPFS adoption it would be good to make a deal with distributions to use it for ISOs.
Arwalk has quit [Read error: Connection reset by peer]
<TraderOne[m]>
But for some reason even torrent ISO distribution isn't standard; most stick to plain HTTP with mirrors. It's much easier to keep IPFS up to date than mirrors.
Arwalk has joined #ipfs
__jrjsmrtn__ has joined #ipfs
_jrjsmrtn has quit [Ping timeout: 240 seconds]
bengates has quit [Read error: Connection reset by peer]
bengates has joined #ipfs
kivutar has quit [Ping timeout: 256 seconds]
kivutar has joined #ipfs
xelra has quit [Remote host closed the connection]
jesse22 has quit [Ping timeout: 260 seconds]
xelra has joined #ipfs
jesse22 has joined #ipfs
<swedneck1>
i presume it's just because people don't bother with torrent
<swedneck1>
would be lovely if they could just add a js node to their websites and have people download directly from the ipfs network
bengates has quit [Ping timeout: 260 seconds]
mowcat has quit [Remote host closed the connection]
jorge_ has quit [Quit: jorge_]
lsrugo[m] has joined #ipfs
jorge_ has joined #ipfs
jorge_ has quit [Ping timeout: 260 seconds]
jorge_ has joined #ipfs
jorge_ has quit [Read error: Connection reset by peer]
jorge_ has joined #ipfs
Xeyame has quit [Quit: Client quit]
Xeyame has joined #ipfs
Arwalk has quit [Read error: Connection reset by peer]
puminya[m] has joined #ipfs
Arwalk has joined #ipfs
opal has quit [Ping timeout: 240 seconds]
opal has joined #ipfs
xcm has quit [Read error: Connection reset by peer]
rendar has quit []
xcm has joined #ipfs
fabianhjr has joined #ipfs
<swedneck1>
is there any way to see the deduplicated size of an object?
<swedneck1>
`ipfs object stat` seems to show the size as it would be when downloaded to disk
aLeSD has joined #ipfs
jorge_ has quit [Quit: jorge_]
fazo96 has joined #ipfs
MrSparkle has quit [Remote host closed the connection]
MrSparkle has joined #ipfs
jorge_ has joined #ipfs
jorge_ has quit [Client Quit]
Xeyame has quit [Quit: Client quit]
jorge_ has joined #ipfs
<danimal-moo[m]>
NixOS uses hash-addressable packages, and would probably love to have their vast Nixpkgs repo distributed over IPFS. There have been discussions about that in the past, but they stalled on technical issues. The peeps in the nixos room could probably help you gather historical data on past attempts in that direction.
<danimal-moo[m]>
IIRC, one problem was that Git and IPFS use different hash algorithms, and some sort of hash-translation key-value store would need to be built and deployed
<danimal-moo[m]>
And nixpkgs is just a git repo
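A toy sketch of that hash-translation idea - purely illustrative, with made-up hashes; a real deployment would need a shared, updatable store rather than a flat file:

```shell
# Toy key-value store mapping git object hashes to IPFS CIDs.
# Both columns below are fake example values.
cat > hash-map.tsv <<'EOF'
a1b2c3d4	bafyexampleone
e5f6a7b8	bafyexampletwo
EOF

# Look up the CID for a given git hash (empty output if unknown).
cid_for() {
    awk -v h="$1" '$1 == h { print $2 }' hash-map.tsv
}

cid_for e5f6a7b8   # prints bafyexampletwo
```

The hard part isn't the lookup itself but keeping such a mapping authoritative and in sync with the git repo, which is presumably where the past attempts stalled.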
jorge_ has quit [Quit: jorge_]
Xeyame has joined #ipfs
<danimal-moo[m]>
ISO distros would be a much simpler target, actually
aerth has left #ipfs [#ipfs]
sscriptwr has quit [Ping timeout: 258 seconds]
Aranjedeath has joined #ipfs
bbigras has joined #ipfs
Constantinoples has joined #ipfs
fleeky has quit [Ping timeout: 240 seconds]
jorge_ has joined #ipfs
craigo has quit [Quit: Leaving]
fleeky has joined #ipfs
jorge_ has quit [Read error: Connection reset by peer]
jorge_ has joined #ipfs
JackK has quit [Remote host closed the connection]
caskd has quit [Read error: Connection reset by peer]
caskd has joined #ipfs
D_ has quit [Ping timeout: 240 seconds]
JackK has joined #ipfs
D_ has joined #ipfs
jorge_ has quit [Quit: jorge_]
endvra has quit [Read error: Connection reset by peer]
jorge_ has joined #ipfs
jorge_ has quit [Remote host closed the connection]
jorge_ has joined #ipfs
endvra has joined #ipfs
mowcat has joined #ipfs
<aLeSD>
hi all. How could I set the maximum number of connection for my ipfs and ipfs-cluster node ? they are killing my router
dqx_ has joined #ipfs
jorge_ has quit [Read error: Connection reset by peer]
Jesin has quit [Quit: Leaving]
Pie-jacker875 has quit [Read error: Connection reset by peer]
thr3m has joined #ipfs
Pie-jacker875 has joined #ipfs
Jesin has joined #ipfs
thr3m has quit [Remote host closed the connection]
seba- has quit [Ping timeout: 264 seconds]
jorge_ has joined #ipfs
jorge_ has quit [Read error: Connection reset by peer]
seba- has joined #ipfs
jorge_ has joined #ipfs
dqx_ has quit [Ping timeout: 260 seconds]
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
fazo96 has quit [Ping timeout: 260 seconds]
ZaZ has quit [Read error: Connection reset by peer]
jorge_ has quit [Remote host closed the connection]
rodolf0 has joined #ipfs
jorge_ has joined #ipfs
citizen__ has joined #ipfs
KempfCreative has quit [Ping timeout: 264 seconds]
citizen__ has quit [Remote host closed the connection]
citizen_stig has joined #ipfs
mowcat has quit [Remote host closed the connection]
herronjo[m] has joined #ipfs
citizen_stig has quit [Ping timeout: 260 seconds]
Soo_Slow has quit [Quit: Soo_Slow]
bren2010 has quit [Ping timeout: 272 seconds]
bren2010 has joined #ipfs
citizen_stig has joined #ipfs
citizen_stig has quit [Ping timeout: 272 seconds]
fleeky has quit [Quit: Ex-Chat]
fimbaz_ has quit [Quit: ZNC 1.7.2+deb3 - https://znc.in]
fleeky has joined #ipfs
fimbaz has joined #ipfs
xcm has quit [Read error: Connection reset by peer]
xcm has joined #ipfs
chokeonit has quit [Ping timeout: 260 seconds]
peeonyou has joined #ipfs
dqx_ has joined #ipfs
nimaje has quit [Read error: Connection reset by peer]
risk has quit [Quit: Connection closed for inactivity]
nimaje has joined #ipfs
indisturbed has joined #ipfs
citizen_stig has joined #ipfs
dqx_ has quit [Ping timeout: 256 seconds]
citizen_stig has quit [Ping timeout: 244 seconds]
dqx_ has joined #ipfs
akymaky[m] has joined #ipfs
ThereIwas has joined #ipfs
rodolf0 has quit [Quit: Leaving]
<danimal-moo[m]>
aLeSD: Looking at this old issue from 2015, https://github.com/ipfs/go-ipfs/issues/1482, it appears that setting connection limits is still a to-do item for go-ipfs. Does anyone know if this is still relevant in 2020?
<danimal-moo[m]>
NVM, see last comment, regarding `Swarm.ConnMgr.HighWater`
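For aLeSD's router problem, the same connection-manager knobs can be turned down instead of up. The numbers below are only a starting guess for a weak home router, not recommended values:

```shell
# Lower the watermarks so go-ipfs keeps fewer simultaneous connections open
ipfs config --json Swarm.ConnMgr.LowWater 100
ipfs config --json Swarm.ConnMgr.HighWater 200
ipfs config Swarm.ConnMgr.GracePeriod 20s
# restart the ipfs daemon afterwards for the change to take effect
```

Note this only bounds go-ipfs; ipfs-cluster's own libp2p host makes additional connections on top.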
<danimal-moo[m]>
I have a rather large question of my own, I just don't want to bury aLeSD's Q before it gets any action
pecastro has quit [Ping timeout: 256 seconds]
Ecran has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<danimal-moo[m]>
After some thought, I can simplify my question. Does anyone know of an IPFS proxy that would accept an HTTPS URL, fetch and pin the content at that URL, and return an IPFS CID/link?
jorge__ has joined #ipfs
<danimal-moo[m]>
I would be interacting with this proxy from client-side JS.
benjamingr__ has quit [Quit: Connection closed for inactivity]
jorge_ has quit [Ping timeout: 256 seconds]
cipres has quit [Quit: leaving]
`Alison has quit [Quit: ZNC 1.7.4+deb4 - https://znc.in]