aschmahmann changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.7.0 and js-ipfs 0.52.3 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
<Discordian[m]>
I figure sha224 should be fine, fast enough, shouldn't have to worry about collisions
<RomeSilvanus[m]>
Idk, something simpler and faster, like md5 or sha1. Since it's just going through local files it should be fine.
KeiraT has quit [Remote host closed the connection]
<lemmi>
RomeSilvanus[m]: there are soooo many faster and better options than md5/sha1. personally i like the blake family
<lemmi>
not too exotic, but still very fast and secure
<Discordian[m]>
xxhash seems to beat murmur
natinso- has quit [Remote host closed the connection]
natinso^ has quit [Remote host closed the connection]
The_8472 has quit [Ping timeout: 240 seconds]
<RomeSilvanus[m]>
Yeh, as I said, I don't really know hashing stuff that well. My assumption was just that you could probably use the fastest hashing algorithm there is since it's only for a local db and would probably speed it up compared to sha256 since there aren't really any security concerns such as malicious attacks.
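(For illustration: the kind of hashing being discussed, sketched in Go. The tool's actual code isn't shown in the log; this assumes the widely used github.com/cespare/xxhash/v2 package and simply streams a file through XXH64, the fast non-cryptographic style of hash being preferred here over sha224/md5/sha1 for a local index.)

```go
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/cespare/xxhash/v2" // assumed third-party xxhash implementation
)

// hashFile streams a file through xxhash (XXH64) without loading it all into memory.
func hashFile(path string) (uint64, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	h := xxhash.New()
	if _, err := io.Copy(h, f); err != nil {
		return 0, err
	}
	return h.Sum64(), nil
}

func main() {
	sum, err := hashFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	fmt.Printf("%016x\n", sum)
}
```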
<Discordian[m]>
Totally untested BTW, but should work fine
<Discordian[m]>
It'll probably update every file though so, there's that
ipfs-stackbot has quit [Remote host closed the connection]
<Discordian[m]>
The db will be even smaller now
<RomeSilvanus[m]>
Around 25 seconds for a 1.5 GB file
<Discordian[m]>
It does seem faster, but it's hard to tell because it was already extremely fast for me lol
<RomeSilvanus[m]>
Is it actually making something like a file list before it starts hashing files?
<Discordian[m]>
Yeah
<RomeSilvanus[m]>
Okay, that explains why it doesn't show any new output on my 6TB folder
<RomeSilvanus[m]>
It just takes ages to make a file list
<Discordian[m]>
Wow how many files? lmao
<RomeSilvanus[m]>
Damn you, HDDs
<Discordian[m]>
I've only tested it with SSDs
<RomeSilvanus[m]>
Around 12 or 14 million I think
<RomeSilvanus[m]>
Mostly <1MB files
<Discordian[m]>
Holy shit lmfao
The_8472 has joined #ipfs
<RomeSilvanus[m]>
That's one of the smaller folders
LiftLeft has quit [Quit: Bye]
<Discordian[m]>
1.2GB of 18k files took around 3s on my SSD lol
<Discordian[m]>
For the local db step at least
<Discordian[m]>
Not adding it all into IPFS, oh no, that takes forever
LiftLeft has joined #ipfs
<Discordian[m]>
I can probably improve that quite a bit, if `files/write` gets fixed, that'll speed it up too I think
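(For context, `files/write` here refers to the go-ipfs MFS RPC endpoint /api/v0/files/write, the HTTP counterpart of `ipfs files write`. Below is a minimal Go sketch of calling it, assuming a local daemon API on 127.0.0.1:5001; it is not the tool's actual code, and it buffers the file in memory for simplicity.)

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"net/url"
	"os"
)

// writeToMFS copies a local file into the node's MFS at mfsPath via the
// /api/v0/files/write endpoint (the RPC form of `ipfs files write`).
func writeToMFS(apiAddr, localPath, mfsPath string) error {
	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()

	// Build a multipart body containing the file contents.
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, err := w.CreateFormFile("file", mfsPath)
	if err != nil {
		return err
	}
	if _, err := io.Copy(part, f); err != nil {
		return err
	}
	if err := w.Close(); err != nil {
		return err
	}

	// Mirrors `ipfs files write --create --parents --truncate`.
	q := url.Values{}
	q.Set("arg", mfsPath)
	q.Set("create", "true")
	q.Set("parents", "true")
	q.Set("truncate", "true")

	resp, err := http.Post(apiAddr+"/api/v0/files/write?"+q.Encode(), w.FormDataContentType(), &body)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		msg, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("files/write: %s: %s", resp.Status, msg)
	}
	return nil
}

func main() {
	// Usage: writemfs <local path> <MFS path>
	if err := writeToMFS("http://127.0.0.1:5001", os.Args[1], os.Args[2]); err != nil {
		panic(err)
	}
}
```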
<RomeSilvanus[m]>
Do you actually need to make a whole filelist first? Why not just start at the root and go through all folders, only looking ahead a bit?
willscott has quit [Quit: -]
<Discordian[m]>
I could modify it to not do that, currently it's using a generic function I use in all sorts of places
<RomeSilvanus[m]>
Ah yes, it's re-adding all files to ipfs now since the hashes mismatch
<Discordian[m]>
Yup lmao
<RomeSilvanus[m]>
Seems kinda like it would be faster to get a folder done if you just go with a bit of lookahead
<RomeSilvanus[m]>
At least on non-SSDs
<RomeSilvanus[m]>
I'd switch to an SSD server but they're too expensive for the amount of space I need
<Discordian[m]>
Yeah I bet
<Discordian[m]>
Well it'll be about the same speed I'd imagine. As when it builds the file list, it's just looking up data in the filesystem, no?
<Discordian[m]>
I thought that's all listings did anyways
<RomeSilvanus[m]>
Maybe right. I kinda assumed it would be faster since you could start hashing files right away while simultaneously collecting them.
<RomeSilvanus[m]>
Instead of having to wait for the process to finish first.
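(A sketch of the "hash while walking" idea being suggested, assuming Go: a directory walker feeds paths into a buffered channel, the "bit of lookahead", while worker goroutines hash concurrently with xxhash, so hashing starts before the full listing is done. This illustrates the idea only; it is not the tool's actual code.)

```go
package main

import (
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
	"sync"

	"github.com/cespare/xxhash/v2" // assumed, as in the earlier sketch
)

// hashTree walks root and hashes files concurrently, so hashing begins as soon
// as the first paths are discovered instead of after a full file list is built.
func hashTree(root string, workers int) map[string]uint64 {
	paths := make(chan string, 256) // small "lookahead" buffer
	results := make(map[string]uint64)

	var mu sync.Mutex
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range paths {
				f, err := os.Open(p)
				if err != nil {
					continue // skip unreadable files in this sketch
				}
				h := xxhash.New()
				_, err = io.Copy(h, f)
				f.Close()
				if err != nil {
					continue
				}
				mu.Lock()
				results[p] = h.Sum64()
				mu.Unlock()
			}
		}()
	}

	// The walker only lists directory entries; the workers touch file contents.
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err == nil && d.Type().IsRegular() {
			paths <- p
		}
		return nil
	})
	close(paths)
	wg.Wait()
	return results
}

func main() {
	for p, sum := range hashTree(os.Args[1], 4) {
		fmt.Printf("%016x  %s\n", sum, p)
	}
}
```

On an HDD with millions of small files, the walk itself (directory metadata reads) and the file reads compete for the same disk, so the overlap helps less than it would on an SSD.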
<Discordian[m]>
Might be if it's touching the files at all
<Discordian[m]>
Not sure if it is or not, really. Doesn't look like it is
<Discordian[m]>
This way just makes the code easier to work with tbh
<RomeSilvanus[m]>
Why can't I randomly find like 1PB of SSDs on the street ;_;
<Discordian[m]>
lmao that'd be nice
<RomeSilvanus[m]>
Sorry to bother you with all my plebeian HDD problems :v
<Discordian[m]>
Haha it's cool, I'm sure you just want to see it hashing sooner to see the new speed lol
plexigras2 has quit [Ping timeout: 265 seconds]
<Discordian[m]>
Looks like over a few GB, you'll definitely save a few seconds with the new algo. So several TB is definitely some time savings.
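(A rough way to sanity-check that claim on your own hardware, assuming Go and the same xxhash package as above: hash one in-memory buffer with SHA-256 and with XXH64 and compare the timings. On HDDs the disk, not the hash, is usually the bottleneck, so real savings will be smaller than this micro-benchmark suggests.)

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"time"

	"github.com/cespare/xxhash/v2" // assumed, as above
)

func main() {
	buf := make([]byte, 1<<28) // 256 MiB of zeros; real file data will differ somewhat

	start := time.Now()
	sha256.Sum256(buf)
	fmt.Printf("sha256: %v\n", time.Since(start))

	start = time.Now()
	xxhash.Sum64(buf)
	fmt.Printf("xxhash: %v\n", time.Since(start))
}
```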
<RomeSilvanus[m]>
Nice
<RomeSilvanus[m]>
👌
<Discordian[m]>
Thanks for the recommendation, love finding new things. Now I use 3, 3rd-party libs lol
pecastro has quit [Ping timeout: 256 seconds]
obsidianorder[m] has joined #ipfs
baojg has quit [Remote host closed the connection]
baojg has joined #ipfs
mowcat has quit [Remote host closed the connection]
baojg has quit [Remote host closed the connection]
__jrjsmrtn__ has quit [Ping timeout: 246 seconds]
_jrjsmrtn has joined #ipfs
mindCrime_ has joined #ipfs
mindCrime_ has quit [Ping timeout: 264 seconds]
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 264 seconds]
}ls{ has quit [Ping timeout: 240 seconds]
}ls{ has joined #ipfs
gwillen has joined #ipfs
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
baojg has joined #ipfs
mindCrime_ has joined #ipfs
jcea has quit [Ping timeout: 240 seconds]
royal_screwup21 has joined #ipfs
mindCrime_ has quit [Ping timeout: 246 seconds]
royal_screwup21 has quit [Ping timeout: 260 seconds]
shadowslug[m] has joined #ipfs
royal_screwup21 has joined #ipfs
kuno1 is now known as thekuno
royal_screwup21 has quit [Ping timeout: 240 seconds]
chiui has joined #ipfs
tedious has joined #ipfs
chiuii has quit [Ping timeout: 246 seconds]
tedious has quit [K-Lined]
elusive has quit [Ping timeout: 260 seconds]
chachasmooth has quit [Ping timeout: 265 seconds]
chachasmooth has joined #ipfs
jrt is now known as Guest94799
jrt has joined #ipfs
monkey__ has joined #ipfs
chiui has quit [Ping timeout: 244 seconds]
Guest94799 has quit [Ping timeout: 256 seconds]
Arwalk has quit [Read error: Connection reset by peer]
}ls{ has quit [Quit: real life interrupt]
Arwalk has joined #ipfs
monkey__ has quit [Quit: Connection closed]
decentral has quit [Ping timeout: 246 seconds]
joocain2 has quit [Remote host closed the connection]
joocain2 has joined #ipfs
PhilipChan[m] has joined #ipfs
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 246 seconds]
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
opa has joined #ipfs
opa7331 has quit [Ping timeout: 265 seconds]
Nact has quit [Quit: Konversation terminated!]
Ringtailed-Fox has quit [Read error: Connection reset by peer]
Ringtailed_Fox has joined #ipfs
thekuno has quit [Quit: WeeChat 3.0.1]
treora has quit [Quit: blub blub.]
treora has joined #ipfs
Arwalk has quit [Read error: Connection reset by peer]
TheMeltdown_ has joined #ipfs
Arwalk has joined #ipfs
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
arcatech has quit [Quit: Be back later.]
Ringtailed_Fox has quit [Read error: Connection reset by peer]
Ringtailed_Fox has joined #ipfs
veegee has quit [Quit: veegee]
Arwalk has quit [Read error: Connection reset by peer]
<rak[m]>
can a relay with peer ID QmW9m57aiBDHAkKj9nmFSEn7ZqrcF1fZS4bipsTCHburei exist on IP 147.75.195.153, as well as on a different IP like B.X.Y.Z simultaneously?
<rak[m]>
or can it only be on 147.75.195.153?
<rak[m]>
who manages those three relays anyway?
veegee has joined #ipfs
ctOS has quit [Quit: Connection closed for inactivity]