aschmahmann changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.7.0 and js-ipfs 0.52.3 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 260 seconds]
tango-rango[m] has left #ipfs ["User left"]
eldritch has quit [Read error: Connection reset by peer]
arandomcomrade has joined #ipfs
jarodit has joined #ipfs
andi- has quit [Ping timeout: 248 seconds]
arcatech has quit [Quit: Be back later.]
_jrjsmrtn has joined #ipfs
__jrjsmrtn__ has quit [Ping timeout: 240 seconds]
andi- has joined #ipfs
CGretski has joined #ipfs
eldritch has joined #ipfs
GvP has quit [Quit: Going offline, see ya!]
cris has quit []
liyouhong has joined #ipfs
CGretski has quit [Quit: No Ping reply in 180 seconds.]
cris has joined #ipfs
arcatech has joined #ipfs
sanya2[m] has joined #ipfs
arcatech has quit [Ping timeout: 258 seconds]
sanya2[m] has left #ipfs ["User left"]
CGretski has joined #ipfs
mowcat has quit [Remote host closed the connection]
arcatech has joined #ipfs
jcea has quit [Ping timeout: 250 seconds]
lewky has quit [Ping timeout: 252 seconds]
arcatech has quit [Ping timeout: 245 seconds]
arcatech has joined #ipfs
GvP has joined #ipfs
arcatech has quit [Quit: Be back later.]
CGretski has quit [Remote host closed the connection]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 240 seconds]
arcatech has joined #ipfs
arcatech has quit [Quit: Be back later.]
arcatech has joined #ipfs
ipfs-stackbot has quit [Remote host closed the connection]
ipfs-stackbot has joined #ipfs
arcatech has quit [Ping timeout: 258 seconds]
jesse22 has joined #ipfs
lewky has joined #ipfs
jrt is now known as Guest52626
Guest52626 has quit [Killed (card.freenode.net (Nickname regained by services))]
jrt has joined #ipfs
CGretski has joined #ipfs
arcatech has joined #ipfs
arcatech has quit [Ping timeout: 250 seconds]
KempfCreative has joined #ipfs
opa has joined #ipfs
opa7331 has quit [Ping timeout: 258 seconds]
baojg has joined #ipfs
CGretski has quit [Quit: No Ping reply in 180 seconds.]
dpl has quit [Ping timeout: 260 seconds]
KempfCreative has quit [Ping timeout: 260 seconds]
kesenai has joined #ipfs
voker57 has quit [Quit: No Ping reply in 180 seconds.]
voker57 has joined #ipfs
baojg has quit [Remote host closed the connection]
arcatech has joined #ipfs
baojg has joined #ipfs
mindCrime_ has quit [Ping timeout: 268 seconds]
arcatech has quit [Ping timeout: 245 seconds]
<ipfsbot> ehsan shariati @ehsan6sha posted in Ipfs-cluster with relay - https://discuss.ipfs.io/t/ipfs-cluster-with-relay/10669/1
kn0rki has joined #ipfs
zeden has quit [Quit: WeeChat 3.0.1]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 268 seconds]
jarodit has quit [Ping timeout: 240 seconds]
jesse22_ has joined #ipfs
jesse22 has quit [Ping timeout: 250 seconds]
hurikhan77 has quit [Ping timeout: 246 seconds]
arcatech has joined #ipfs
baojg has quit [Remote host closed the connection]
jesse22_ has quit [Remote host closed the connection]
arcatech has quit [Client Quit]
jesse22 has joined #ipfs
jesse22 has quit [Ping timeout: 252 seconds]
jesse22 has joined #ipfs
jarodit has joined #ipfs
LiftLeft has quit [Ping timeout: 268 seconds]
Raina[m] has joined #ipfs
bengates has joined #ipfs
bengates has quit [Read error: Connection reset by peer]
devnull has joined #ipfs
jesse22 has quit [Ping timeout: 250 seconds]
devnull is now known as Guest20947
baojg has joined #ipfs
bsdman has quit [Ping timeout: 252 seconds]
bsdman has joined #ipfs
hurikhan77 has joined #ipfs
baojg has quit [Remote host closed the connection]
baojg has joined #ipfs
Guest20947 has quit [Quit: Guest20947]
Guest20947 has joined #ipfs
royal_screwup21 has joined #ipfs
pakxo_ has joined #ipfs
royal_screwup21 has quit [Ping timeout: 240 seconds]
pakxo has quit [Ping timeout: 240 seconds]
pakxo_ is now known as pakxo
arandomcomrade has quit [Read error: Connection reset by peer]
devnull has joined #ipfs
devnull is now known as Guest92979
tedious has joined #ipfs
<fyrri> morning
bengates has joined #ipfs
ylp has joined #ipfs
dpl has joined #ipfs
bengates has quit [Read error: Connection reset by peer]
bengates has joined #ipfs
pecastro has joined #ipfs
bengates has quit [Remote host closed the connection]
bengates has joined #ipfs
bengates has quit [Read error: Connection reset by peer]
GvP_ has joined #ipfs
bengates has joined #ipfs
GvP has quit [Ping timeout: 268 seconds]
bengates has quit [Remote host closed the connection]
bengates has joined #ipfs
bengates has quit [Read error: Connection reset by peer]
bengates has joined #ipfs
Nact has joined #ipfs
royal_screwup21 has joined #ipfs
aLeSD_ has quit [Quit: Leaving]
aLeSD has joined #ipfs
Ringtailed-Fox has quit [Read error: Connection reset by peer]
Ringtailed_Fox has joined #ipfs
royal_screwup21 has quit [Ping timeout: 260 seconds]
sknebel_ is now known as sknebel
baojg has quit [Remote host closed the connection]
iltutmish[m] has quit [Quit: Idle for 30+ days]
flomaysta[m] has quit [Quit: Idle for 30+ days]
kallisti5[m] has quit [Quit: Idle for 30+ days]
la_rochefoucauld has quit [Quit: Idle for 30+ days]
holy_back[m] has quit [Quit: Idle for 30+ days]
tibo1503[m] has quit [Quit: Idle for 30+ days]
DavidMc[m] has quit [Quit: Idle for 30+ days]
dickinthewood[m] has quit [Quit: Idle for 30+ days]
jrt is now known as Guest49620
Guest49620 has quit [Killed (rothfuss.freenode.net (Nickname regained by services))]
jrt has joined #ipfs
jrt has quit [Ping timeout: 268 seconds]
astroanax has quit [Quit: quit]
royal_screwup21 has joined #ipfs
baojg has joined #ipfs
baojg has quit [Remote host closed the connection]
dpl_ has joined #ipfs
dpl has quit [Ping timeout: 240 seconds]
grumble has quit [Quit: K-Lined]
grumble has joined #ipfs
Ringtailed_Fox has quit [Read error: Connection reset by peer]
kesenai has quit [Ping timeout: 252 seconds]
Ringtailed_Fox has joined #ipfs
Encrypt has joined #ipfs
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
baojg has joined #ipfs
jrt has joined #ipfs
royal_screwup21 has quit [Ping timeout: 252 seconds]
royal_screwup21 has joined #ipfs
arthuredelstein has joined #ipfs
kesenai has joined #ipfs
kesenai has quit [Remote host closed the connection]
lawid has quit [Ping timeout: 240 seconds]
lawid has joined #ipfs
tech_exorcist has joined #ipfs
_jrjsmrtn has quit [Ping timeout: 246 seconds]
lawid has quit [Ping timeout: 260 seconds]
__jrjsmrtn__ has joined #ipfs
dpl__ has joined #ipfs
dpl_ has quit [Ping timeout: 240 seconds]
lawid has joined #ipfs
Caterpillar2 is now known as Caterpillar
royal_screwup21 has quit [Ping timeout: 260 seconds]
Magic_ has joined #ipfs
lawid_ has joined #ipfs
lawid has quit [Ping timeout: 268 seconds]
o1lo01ol1o has joined #ipfs
kafl has joined #ipfs
ZeusEd has joined #ipfs
konubinix has quit [Quit: Coyote finally caught me]
konubinix has joined #ipfs
konubinix has quit [Client Quit]
baojg_ has joined #ipfs
baojg has quit [Ping timeout: 258 seconds]
jcea has joined #ipfs
ZeusEd has left #ipfs [#ipfs]
royal_screwup21 has joined #ipfs
Ringtailed-Fox has joined #ipfs
Ringtailed_Fox has quit [Read error: Connection reset by peer]
<AmariyahWhite[m]> morning
zeden has joined #ipfs
Ringtailed-Fox has quit [Read error: Connection reset by peer]
Ringtailed-Fox has joined #ipfs
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
justanotherdude has joined #ipfs
royal_screwup21 has quit [Ping timeout: 260 seconds]
dpl_ has joined #ipfs
dpl__ has quit [Ping timeout: 260 seconds]
Guest9 has joined #ipfs
|NecoDiscord[m] has quit [Ping timeout: 248 seconds]
MissLavenderDisc has quit [Ping timeout: 248 seconds]
MissLavenderDisc has joined #ipfs
|NecoDiscord[m] has joined #ipfs
Guest9 has quit [Client Quit]
nikolayclfx has joined #ipfs
bsm1175321 has joined #ipfs
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 268 seconds]
<LevelUp[m]> How fast is IPFS? Should we use a different server for caching IPFS data to display previews faster?
<McSinyx[m]> why don't you cache it on that node directly?
<Discordian[m]> IPFS is as fast as the nodes the data is on. From my experience, it's surprisingly fast
<LevelUp[m]> I mean using another server as an intermediary to the end user; is this worth it?
<Discordian[m]> Like something that isn't IPFS?
<LevelUp[m]> Yeah like S3
cp- has quit [Quit: Disappeared in a puff of smoke]
<Discordian[m]> It's up to you, really. If you don't have the bandwidth to serve the content you want to serve to the scale you expect, you could always use S3 as well.
baojg_ has quit [Remote host closed the connection]
theseb has joined #ipfs
royal_screwup21 has joined #ipfs
Encrypt has quit [Quit: Quit]
XORed has quit [Read error: Connection reset by peer]
XORed has joined #ipfs
<eleitl[m]> Speed of serving content would depend on I/O as well. So if you have SSD instances, that would help.
<eleitl[m]> When testing a few days ago I was pulling down 5 MB/s from a single spindle on a slow low-end server, on a 1G network. With about 2 TB pinned, filestore. Files of a few MB each. Need to test SSD and 10G on beefy servers. And, of course, if you have many IPFS nodes it will add up.
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
<eleitl[m]> While speaking about testing, I'm seeing a slowdown on my 6 TB dataset pin (filestore, ~/.ipfs on SSD). Probably, I/O overhead. Badgerdb would have been probably faster, but no can do with only 4 GB RAM (71% of it full right now, pinning job 67% through, ETA ~2 days).
<eleitl[m]> I'll try badgerdb on a 2 TB dataset, same server specs.
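For reference, the filestore and Badger datastore being benchmarked here are both opt-in in go-ipfs 0.7; a sketch of enabling them (the dataset path is illustrative):

```shell
# Use the Badger datastore for a fresh repo (experimental profile)
ipfs init --profile badgerds

# Enable the filestore, which references files in place
# instead of copying their blocks into the repo
ipfs config --json Experimental.FilestoreEnabled true

# Add without duplicating data into the datastore (requires the filestore)
ipfs add --nocopy -r /data/dataset
```

Note that `--profile badgerds` only applies at `ipfs init` time; converting an existing flatfs repo requires a datastore migration.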
john2gb0 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 240 seconds]
john2gb0 has quit [Read error: Connection reset by peer]
john2gb0 has joined #ipfs
john2gb0 has quit [Read error: Connection reset by peer]
john2gb0 has joined #ipfs
john2gb0 has quit [Read error: Connection reset by peer]
ceuric01[m] has joined #ipfs
john2gb0 has joined #ipfs
Magic_ has quit [Ping timeout: 245 seconds]
john2gb0 has quit [Read error: Connection reset by peer]
john2gb0 has joined #ipfs
<Discordian[m]> I like reading your benchmarks
john2gb0 has quit [Read error: Connection reset by peer]
john2gb0 has joined #ipfs
john2gb0 has quit [Read error: Connection reset by peer]
dpl_ has quit [Ping timeout: 260 seconds]
john2gb0 has joined #ipfs
john2gb0 has quit [Read error: Connection reset by peer]
john2gb0 has joined #ipfs
john2gb0 has quit [Read error: Connection reset by peer]
john2gb0 has joined #ipfs
john2gb0 has quit [Read error: Connection reset by peer]
john2gb0 has joined #ipfs
ipfs-stackbot has quit [Remote host closed the connection]
ipfs-stackbot has joined #ipfs
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 240 seconds]
Magic_ has joined #ipfs
<ipfsbot> @WouterGlorieux posted in I'm developing a ranked choice voting app on IPFS: Hivemind - https://discuss.ipfs.io/t/im-developing-a-ranked-choice-voting-app-on-ipfs-hivemind/10674/1
obensource has quit [Ping timeout: 268 seconds]
theseb has quit [Quit: Leaving]
LiftLeft has joined #ipfs
<ipfsbot> Andrea Macchieraldo @macchie posted in Peer Discovery across different Networks - https://discuss.ipfs.io/t/peer-discovery-across-different-networks/10675/1
upekkha has quit [Quit: upekkha]
upekkha has joined #ipfs
cp- has joined #ipfs
venue has joined #ipfs
arcatech has joined #ipfs
venue has quit [Quit: venue]
bflanagin[m] has quit [Quit: Idle for 30+ days]
mmumu[m] has quit [Quit: Idle for 30+ days]
tech_exorcist has quit [Remote host closed the connection]
tech_exorcist has joined #ipfs
<ipfsbot> @system posted in Welcome to IPFS Weekly 129 - https://discuss.ipfs.io/t/welcome-to-ipfs-weekly-129/10679/1
supercoven has joined #ipfs
mindCrime_ has joined #ipfs
bengates has quit [Remote host closed the connection]
bengates has joined #ipfs
drathir_tor has quit [Ping timeout: 240 seconds]
ylp has quit [Quit: Leaving.]
ib07 has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
bengates has quit [Ping timeout: 246 seconds]
drathir_tor has joined #ipfs
Magic_ has quit [Ping timeout: 246 seconds]
mowcat has joined #ipfs
gogs has joined #ipfs
gogs has left #ipfs [#ipfs]
<RomeSilvanus[m]> Hmm. I managed to nuke my ipfs install and re-set it up
veegee has joined #ipfs
<RomeSilvanus[m]> Using `DontHash: true` is taking around 15 seconds per 700MB file
<RomeSilvanus[m]> Still faster than before, but shouldn't checking only the modtime & size be almost instant?
<Discordian[m]> That doesn't sound very fast :S, it's literally just running Stat on it :S
<Discordian[m]> I figured it'd be fairly quick, unless seek times are really that bad (they were on my old HDD)
<RomeSilvanus[m]> I can't really believe it's that, since other apps like rclone go through lots of files very fast
<RomeSilvanus[m]> Just some bigger files
<Discordian[m]> Hmm, I should make it explicitly say if it's hashing or using size+modtime
<Discordian[m]> Maybe it's hashing, could be typo, or wrong version?
Ringtailed-Fox has quit [Ping timeout: 245 seconds]
<RomeSilvanus[m]> It's 5.0, the one you linked before
royal_screwup21 has joined #ipfs
<RomeSilvanus[m]> I mean, it did the whole folder in 10 minutes instead of 45 minutes now.
<Discordian[m]> Oh well that does seem significant at least
<Discordian[m]> Oh most of those files took under 1s in that output too
jesse22 has joined #ipfs
<RomeSilvanus[m]> It finished almost instantly
<Discordian[m]> Oh shit wow
<Discordian[m]> Gonna have to look into how it does that
dotdotok[m] has joined #ipfs
<Discordian[m]> Your times there too, look like it could be hashing still, hard to say.. if it's slow though (and really not hashing), and it's because of os.Stat, gonna have to look into whatever rsync does.
<ipfsbot> DONG @songbo posted in The file uploaded with windows desktop can not be checked or got via https://ipfs.io/ipfs/xxxxxx .I wonder if the file is uploaded - https://discuss.ipfs.io/t/the-file-uploaded-with-windows-desktop-can-not-be-checked-or-got-via-https-ipfs-io-ipfs-xxxxxx-i-wonder-if-the-file-is-uploaded/10680/1
<Discordian[m]> Gonna change that output, I'd do it rn, but I'm really sick
<RomeSilvanus[m]> Don't die.
<RomeSilvanus[m]> Which even then seems to be significantly faster than ipfs-sync hashing stuff.
<Discordian[m]> Yeah that's how I do it, more-or-less, just isn't as fast for some reason, that's why I suspect it's not really hashing, or os.Stat is slow
<Discordian[m]> Oh yeah 4min23s is pretty fast
<RomeSilvanus[m]> Honestly I can only go from their docs since I can't read Go, but if they can do it that fast then you should also be able to in some way.
<Discordian[m]> Yeah I can read their source code no problem
<RomeSilvanus[m]> So there seems to be a lot of potential for speed ups.
<RomeSilvanus[m]> Rclone also does a standard 4 files in parallel.
<Discordian[m]> Honestly shocked it's not insanely fast as-is, makes me miss C a bit, but I'm sure I'll get it figured out
<Discordian[m]> Oh 4 in parallel? I don't do that
<Discordian[m]> I figured that wouldn't have a benefit on HDDs, no? Or do they optimise?
venue has joined #ipfs
<RomeSilvanus[m]> Idk, it's pretty fast even if I set it to something like 20 checks
arcatech has quit [Quit: Be back later.]
<Discordian[m]> If parallel processing has a benefit, that's pretty easy to handle with Go
<Discordian[m]> I'd look into it, but I'm trying to not die, and the code is a bit torn open from what I was working on last night
<paraz[m]> Heat... Sore throat? Hot water bottle on your throat. Sinus? Hot water bottle on your face. Diarrhea vomiting? Hot water bottle on your gut. And stay hydrated. Best wishes!
<RomeSilvanus[m]> If you die I will inherit the contents of your fridge.
<RomeSilvanus[m]> Also get well soon !
<Discordian[m]> Haha thanks guys, I'm sure I'll be good to go soon, just gotta take it easy for at least a day
<Discordian[m]> Also my fridge doesn't have much, I mostly eat ramen.
<RomeSilvanus[m]> w e e b
<Discordian[m]> Haha I like good spicy ramen for breakfast, good start to the day
<Discordian[m]> <RomeSilvanus[m] "w e e b"> なに?! ("What?!")
<RomeSilvanus[m]> ㄚ卂爪卂ㄒ乇Ҝㄩᗪ卂丂ㄒㄖ卩
<Discordian[m]> So definitely has a benefit
venue has left #ipfs [#ipfs]
Newami has joined #ipfs
heii has joined #ipfs
Newami has quit [Quit: Leaving]
arcatech has joined #ipfs
Guest92979 has quit [Remote host closed the connection]
Guest20947 has quit [Remote host closed the connection]
jarodit has quit [Remote host closed the connection]
Nact has quit [Read error: Connection reset by peer]
r357[m] has joined #ipfs
arcatech has quit [Ping timeout: 258 seconds]
arcatech has joined #ipfs
arcatech_ has joined #ipfs
arcatech has quit [Ping timeout: 265 seconds]
heii has quit [Quit: heii]
drathir_tor has quit [Remote host closed the connection]
drathir_tor has joined #ipfs
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
lawid_ has quit [Ping timeout: 252 seconds]
R11R[m] has left #ipfs ["User left"]
lawid has joined #ipfs
royal_screwup21 has quit [Ping timeout: 240 seconds]
ahmed517 has joined #ipfs
lawid has quit [Ping timeout: 265 seconds]
arthuredelstein has quit [Ping timeout: 268 seconds]
dpl_ has joined #ipfs
Encrypt has joined #ipfs
arcatech has joined #ipfs
arcatech_ has quit [Ping timeout: 258 seconds]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 260 seconds]
royal_screwup21 has joined #ipfs
kn0rki has quit [Quit: Leaving]
arcatech has quit [Quit: Be back later.]
royal_screwup21 has quit [Quit: Connection closed]
o1lo01ol1o has quit [Remote host closed the connection]
royal_screwup21 has joined #ipfs
justanotherdude has quit [Quit: RAGEQUIT]
justanotherdude has joined #ipfs
xelra_ has joined #ipfs
xelra has quit [Ping timeout: 240 seconds]
leixy has joined #ipfs
atymchuk has joined #ipfs
nemo-12345 has joined #ipfs
leixy has quit []
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
justanotherdude has quit [Quit: RAGEQUIT]
royal_screwup21 has quit [Ping timeout: 240 seconds]
atymchuk_ has joined #ipfs
<RomeSilvanus[m]> It's doing that a lot now
<RomeSilvanus[m]> Basically any file
atymchuk has quit [Ping timeout: 260 seconds]
Arwalk has quit [Ping timeout: 252 seconds]
Arwalk has joined #ipfs
<RomeSilvanus[m]> Hm, do you still collect every file and folder before going through them?
supercoven has quit [Ping timeout: 260 seconds]
leixy has joined #ipfs
leixy has quit [Client Quit]
<AmariyahWhite[m]> yes
royal_screwup21 has joined #ipfs
venue has joined #ipfs
royal_screwup21 has quit [Ping timeout: 265 seconds]
venue has quit [Client Quit]
venue has joined #ipfs
atymchuk has joined #ipfs
atymchuk_ has quit [Ping timeout: 240 seconds]
ahmed517 has quit [Quit: Connection closed for inactivity]
atymchuk has quit [Ping timeout: 240 seconds]
<Discordian[m]> Yes, it has a whole list in memory from the hash step
<Discordian[m]> Idk why it's finding duplicates, or even how
<RomeSilvanus[m]> This is a new ipfs install without anything added before
<Discordian[m]> Should be harmless, but a bit strange
dpl_ has quit [Read error: Connection reset by peer]
<Discordian[m]> Old version ensured there were no duplicates, new version ignores the situation and moves on (faster)
<RomeSilvanus[m]> Well it’s trying to add it multiple times
<Discordian[m]> Very peculiar, is there any pattern to the duplicates? I just use filepath.Walk to build the list
<RomeSilvanus[m]> I don't see any. It looks like it just started randomly.
<Discordian[m]> Could the process have somehow gotten rebooted?
<Discordian[m]> I'll investigate anyways, wonder if a link or something to do with case-sensitivity caused it. Either way worst case I can throw the list in a map (doesn't allow duplicates), and just do it that way
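The map fallback mentioned here works because Go map keys are unique, so inserting a path twice is a no-op. A minimal sketch of that dedupe step (function name is hypothetical):

```go
package main

import "fmt"

// dedupe returns paths with duplicates removed, preserving first-seen order.
// A map[string]struct{} acts as a set: re-inserting an existing key does nothing.
func dedupe(paths []string) []string {
	seen := make(map[string]struct{}, len(paths))
	out := make([]string, 0, len(paths))
	for _, p := range paths {
		if _, ok := seen[p]; ok {
			continue // already queued, skip the duplicate
		}
		seen[p] = struct{}{}
		out = append(out, p)
	}
	return out
}

func main() {
	fmt.Println(dedupe([]string{"a/1", "a/2", "a/1"})) // [a/1 a/2]
}
```

Note this only masks the symptom; it wouldn't explain *why* the walk produced duplicates (links or case-insensitive filesystems remain the likely suspects).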
<RomeSilvanus[m]> Seems to have run normally
venue has quit [Quit: venue]
<RomeSilvanus[m]> Hmm okay
<RomeSilvanus[m]> I see in the logs that it says [ipns name] not found, generating
venue has joined #ipfs
<RomeSilvanus[m]> But it threw that in the middle after it already did a lot of files
<RomeSilvanus[m]> Then right after it started from the beginning
<RomeSilvanus[m]> So somehow it got reset while adding files to ipfs
<Discordian[m]> Very strange, do you happen to have multiple entries?
<RomeSilvanus[m]> Entries what?
<Discordian[m]> In your config, do you happen to have the same dir added twice?
<RomeSilvanus[m]> No
<Discordian[m]> Because "not found, generating" is a first step
<Discordian[m]> Right after hashing, it checks IPNS
<Discordian[m]> Hmm or maybe there is a duplication happening, I haven't seen it happen before, I'll have to read the code
<RomeSilvanus[m]> Hmm. When I did the rclone speed test it actually synced some new files and renamed a few directories. There were lots of not-found errors right before the "not found, generating" message because the folders got renamed.
<RomeSilvanus[m]> Maybe that threw it off.
<Discordian[m]> Oh yeah a rename could throw it off, but it should correct itself
<RomeSilvanus[m]> I see, it went through all the entries but couldn't find them. Then apparently that was the last folder in the list and after that it started anew.
Encrypt has quit [Quit: Quit]
<RomeSilvanus[m]> Well, it did continue, just tried to add every file again with the error until it got to the next dir in the config.
<Discordian[m]> Idk you might be right though, I can't think clearly, and I haven't seen the bug, but looking at the code, it looks like it might be adding twice on new keys.
<Discordian[m]> Like in your exact scenario, to me, I think you're right, I think it does it twice.
Mateon2 has joined #ipfs
Mateon1 has quit [Ping timeout: 246 seconds]
Mateon2 is now known as Mateon1
<Discordian[m]> Interesting, I think it's a bit of relic code from before the DB was stable
<Discordian[m]> Yeah sorry about that, almost certain that's exactly what's happening there. Probably didn't catch it before, because before it'd just overwrite without error, now it moves on, and displays the error
<Discordian[m]> Before DB, it didn't add twice because there was no DB to compare it to. After DB, it didn't add twice because I already had generated keys
<RomeSilvanus[m]> But it already made a key for it at the start.
<Discordian[m]> Oh true wtf
<Discordian[m]> (This is why I'm not coding rn lmao)
psyk has joined #ipfs
<Discordian[m]> So it actually grabs a list of keys, checks if any are ipfs-sync keys, if they are, THEN it checks the DB
<Discordian[m]> If they aren't, then it generates a key, and adds the dir
<Discordian[m]> Eh only sorta
<RomeSilvanus[m]> The error occurred after it finished its run. It made a list at the beginning, but while it was going through that list files changed, so it couldn't find them anymore. And when it reached the end of the list it started again. So something happened there that prevented it from going to the next config entry.
<Discordian[m]> It'll do the DB step first, and add all the files, then check if the key actually exists still
<Discordian[m]> So it may have broke in 2 ways, but I'm pretty sure fresh keys will always add twice
<Discordian[m]> Funny, idk how I missed that before
<Discordian[m]> I should check if DB is enabled on that second add, if it is, skip the second add.
<RomeSilvanus[m]> Also wouldn't it be faster to just look ahead a few folders and simultaneously start the hashing process instead of generating the full file list at first?
<RomeSilvanus[m]> Because looking ahead shouldn't take that much I/O so you can just run the hashing at the same time.
<Discordian[m]> I don't think it takes that long to build the filelist itself though
<Discordian[m]> Like what it does is build a hashmap, then process the hashmap.
<RomeSilvanus[m]> Well. The 2TB/15M files folder started at 21:27. It's 00:16 now.
<Discordian[m]> Yeah but you're well after the file list has been generated
<RomeSilvanus[m]> I thought it does this after every restart?
<Discordian[m]> It does the hashing step, not the adding step on every restart
<Discordian[m]> It only does the add if the file actually updates
<RomeSilvanus[m]> I'm not sure if we're talking about the same thing right now.
<RomeSilvanus[m]> I meant the initial step where it says hashing on the root dir in the config, not the individual files.
<Discordian[m]> Oh okay
<Discordian[m]> How long does that step take? That's running `filePathWalkDir(path)`
<Discordian[m]> Hours?
<RomeSilvanus[m]> > Well. The 2TB/15M files folder started at 21:27. It's 00:16 now.
<RomeSilvanus[m]> Still running
<Discordian[m]> It's not on individual files yet? Sheesh, I thought you were adding files, or that was the smaller dir a while ago?
<RomeSilvanus[m]> That was the 27GB dir.
<Discordian[m]> Ah
<RomeSilvanus[m]> It was done hashing in like a second.
<Discordian[m]> That's good at least
<tedious> Wow what kind of hardware hashes 27gb in a second?
<Discordian[m]> It's not really hashing
<Discordian[m]> Just need to update the outputted message
venue has quit [Quit: venue]
<RomeSilvanus[m]> A really old i7 with 8 cores and 4 slow-ass HDDs
<Discordian[m]> He turned hashing off IIRC so it's just comparing filesize+modtime
<tedious> I don't really know what that means then.
<Discordian[m]> Yeah I'll look into making it hash as it walks I suppose
venue has joined #ipfs
<RomeSilvanus[m]> That's why I mean you could just collect files and check/add them at the same time. It just seems like an obvious speedup to run these tasks in parallel instead of wasting available I/O by waiting until the file-collection step is finished.
<Discordian[m]> Honestly I wonder if I can do it concurrently, have it build the list at the same time it processes the list.
<RomeSilvanus[m]> Which should be even more visible on SSDs since they have really high IOPS
royal_screwup21 has joined #ipfs
<Discordian[m]> Yeah I can have it walk, and simply block when the process pool is empty, waiting for more files. That way you can have the stats runs and whatnot at the same time it tries to build a larger list.
<Discordian[m]> Probably need twice as much memory, gonna have to calc your memory req
<Discordian[m]> At this rate I'll just use redis lmao
<RomeSilvanus[m]> I mean, don't give yourself too much work. It was just an idea. :v
<Discordian[m]> Nah it's all good fun
venue has quit [Client Quit]
<RomeSilvanus[m]> A few other applications such as rclone work that way. They just do the listing and checking at the same time. Their progress indicators such as files/size/time just jump higher every few seconds while they go through the filesystem.
<Discordian[m]> Yeah I'll do that, I'll calculate memory usage too, might need to just rely on LevelDB a bit more, and my convenient map a bit less
<Discordian[m]> Haha it certainly jumps sometimes
<Discordian[m]> Apparently 5mil files would only be around 190mb memory usage anyways on idle
<Discordian[m]> Unless the file paths were super long, gonna recalc
<Discordian[m]> Jumped to 649mb est lmao
<tedious> Oh does path length increase memory usage?
<Discordian[m]> Yeah it does
<tedious> Ok I need to plan for that then.
<Discordian[m]> We're talking about ipfs-sync BTW, not just the IPFS daemon
<tedious> Is that the thing that automatically downloads updates to pins?
<RomeSilvanus[m]> I have some files with Asian characters and I use ⁄ and ፡ as replacements for / and : in some files and folders. So some paths can get kinda long if I understand UTF-8 correctly.
<Discordian[m]> <tedious "Is that the thing that automatic"> Yeah
<tedious> Hmm ok.
<Discordian[m]> <RomeSilvanus[m] "I have some files with asian cha"> Hmm, I also realised I actually store path in memory twice rn, for convenience lmao
<tedious> And if you pin somebody's file that's 20 folders deep in their tree do you replicate the folders too?
venue has joined #ipfs
<RomeSilvanus[m]> Wouldn't one global variable suffice?
poundwise has joined #ipfs
<Discordian[m]> <tedious "And if you pin somebody's file t"> Currently it only supports syncing your own files/directories from your filesystem, onto IPFS. Soon it'll support syncing other people's pins, and when it does, you'll be able to map out what directories and files you want to sync
<Discordian[m]> <RomeSilvanus[m] "Wouldn't one global variable suf"> It is in one global variable.
<RomeSilvanus[m]> Just because you said storing it twice
<Discordian[m]> I'm going to change it to just use leveldb and do away with the global in-memory hash table.
<Discordian[m]> Also going to remove the no-db feature in the process. It's next-to useless, no one uses it, and it's the only reason I have the in-memory table.
<RomeSilvanus[m]> Actually my overly complicated youtube downloader uses ⁄ and ፡ a lot and generally keeps every UTF-8 character except / and : (which it replaces), so internally these paths are a lot longer than the standard 250 bytes (probably). Idk how much more memory all these UTF-8 characters take.
<tedious> Discordian[m]: Ok so if somebody is sharing /ipfs/data/category/sub-category/sub-sub-category/sub-sub-sub-category/2021/03/31/topic/folder/file.something I could tell it to just store everything in /ipfs/data/ipfs-hash-here-1234567890etc/ ?
<Discordian[m]> Yeah, it'll remap whatever you want to MFS
<tedious> Oh but it still needs to store the chunks somewhere else right?
<Discordian[m]> I want people to be able to pin portions of websites they use, and keep those pins updated
<Discordian[m]> Yeah it'll still have to store the pieces of data you want to save
<Discordian[m]> You won't have to store the entire recursive pin though
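The remapping described here is already possible by hand with go-ipfs's MFS commands; the CID and paths below are placeholders, not values from this conversation:

```shell
# Mount a subdirectory of someone else's tree at a path of your choosing;
# only the blocks you actually read or pin get fetched
ipfs files mkdir -p /data
ipfs files cp /ipfs/<their-cid>/topic/folder /data/folder

# Pin just that portion so it stays available locally
ipfs pin add "$(ipfs files stat --hash /data/folder)"
```

`ipfs files cp` is lazy (it copies references, not data), which is what makes pinning a portion of a large tree cheap.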
<tedious> I really should just get this stuff running and make a test install then throw it away after I figure out how to do things smartly.
<tedious> I'm clueless so far.
<Discordian[m]> Honestly messing around with it is a fast way to learn
<Discordian[m]> I'm barely 2 months deep myself
<tedious> Has anyone written a little book about what to expect and how to lay things out?
<tedious> I've watched a bunch of videos but they all download 1 file and call it a day.
<Discordian[m]> No, but I will be making videos on how my tools work, how to build tools like ipfs-sync, and new ideas.
<Discordian[m]> Well maybe someone made a book, but I haven't heard of it
<tedious> Well I hope you get them linked on the website or pinned to the github somewhere obvious.
<Discordian[m]> Yeah I found conventional IPFS things hard to find, so I set out to build some
<Discordian[m]> The videos? Yeah I can do that
<Discordian[m]> They'll be sponsored
<tedious> Nice!
<Discordian[m]> The goal is really to show other developers the building blocks, and how they can be used. Going to try to make it as accessible as possible so really anyone can follow along.
<RomeSilvanus[m]> I think go-filestore might be dead
<Discordian[m]> Nah the chatter was just moved to go-ipfs, I need to sift through where they're at
<Discordian[m]> Tbh I might end up writing a Go plugin so ipfs-sync can interface with go-ipfs directly, running the same code as IPFS without the overhead.
<RomeSilvanus[m]> I only see a bot keeping some dependencies updated and no other code changes for months
<Discordian[m]> Yeah, because the library itself is mostly feature complete; the way go-ipfs uses it still evolves sometimes though
<RomeSilvanus[m]> Oh!
<tedious> Have they made an easier way for people to share large collections of pins with friends or is it still just sending huge text files of hashes and pinning them with a script or something?
<Discordian[m]> Yeah took me about a day to figure that out
<Discordian[m]> I'm working on that with ipfs-sync, but you could use a collaborative cluster rn I believe
<Discordian[m]> Sharing pins is a big thing I want
<Discordian[m]> So ipfs-sync will have that before v1
<Discordian[m]> There are 2 issues that, when complete, will make that really simple
nemo-12345 has quit [Quit: nemo-12345]
<tedious> Well mad respect for you if you've only been doing this for 2 months and you're already making big tools for everyone. :)
<Discordian[m]> Thanks! It's honestly easier than it looks, I think. I believe my videos on the subject will make lots of people feel like the process is actually quite simple fundamentally.
<tedious> Only if you're a programmer. :)
<RomeSilvanus[m]> Don't forget to tell everyone to like and subscribe 15 times
<tedious> I'm not so I just have to hope everyone else does that work.
<Discordian[m]> Idr how I end my videos, I'll check
<Discordian[m]> <tedious "I'm not so I just have to hope e"> Oh yeah, you'll definitely have results to play with 🙂
<Discordian[m]> What OS do you use BTW?
<tedious> I love linux. :)
<Discordian[m]> I tell people to like, sub, and comment, but only once lol
<Discordian[m]> Ayy, me too. ipfs-sync probably doesn't work on Windows yet, but no one has asked for it either
tech_exorcist has quit [Remote host closed the connection]
tech_exorcist has joined #ipfs
<tedious> I just like it so much more.
<Discordian[m]> Me too 🙂
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
KempfCreative has joined #ipfs
drathir_tor has quit [Ping timeout: 240 seconds]
<ipfsbot> ehsan shariati @ehsan6sha posted in Ipfs-cluster equivalent for swarm connect? - https://discuss.ipfs.io/t/ipfs-cluster-equivalent-for-swarm-connect/10687/1
<RomeSilvanus[m]> Uh! It managed to do it!
<RomeSilvanus[m]> After 4 hours!
pecastro has quit [Ping timeout: 260 seconds]
<RomeSilvanus[m]> What format is the .log file actually supposed to be?
<RomeSilvanus[m]> I see some plaintext and a lot of
drathir_tor has joined #ipfs
bsm1175321 has quit [Ping timeout: 260 seconds]
KempfCreative has quit [Ping timeout: 252 seconds]
venue has left #ipfs [#ipfs]
bsm1175321 has joined #ipfs
opal has quit [Ping timeout: 240 seconds]
<Discordian[m]> It just dumps text to stderr or something using `log`
jesse22 has quit [Ping timeout: 258 seconds]
hhes has quit [Ping timeout: 248 seconds]
hhes has joined #ipfs
jarodit has joined #ipfs
bsm1175321 has quit [Ping timeout: 240 seconds]