rpifan has quit [Remote host closed the connection]
anewuser has joined #ipfs
further has joined #ipfs
further has quit [Ping timeout: 240 seconds]
Guest12644 has quit [Remote host closed the connection]
Lymkwi has joined #ipfs
Lymkwi is now known as Guest76095
<jnes>
What happens when I run ipfs add? Is the file uploaded somewhere?
jkilpatr has quit [Ping timeout: 255 seconds]
<jnes>
or does it save it to some local index for my own daemon?
<jnes>
ok, so it's added locally somewhere. Can I list the files that I've added?
<engdesart>
jnes: IIRC, it's broken down into blocks if above a certain size and stored locally, and it is broadcast over the network that "computer A has blocks X, Y, Z..." etc.
<engdesart>
The hashes that specify which blocks represent which files are broadcast too.
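A rough way to see that chunking locally, assuming go-ipfs; somefile.bin and QmSomething are placeholders, not real output:
    ipfs add somefile.bin           # prints the root hash, e.g. "added QmSomething somefile.bin"
    ipfs object links QmSomething   # shows the child blocks a large file was split into
    ipfs refs -r QmSomething        # lists every block hash reachable from that root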
dimitarvp has quit [Quit: Bye]
anewuser has quit [Quit: anewuser]
infinity0 has quit [Remote host closed the connection]
<jnes>
How am I btw supposed to know where files end up after running `ipfs get #` ?
<jnes>
... `jsipfs cat QmP4LhCW1Uvm3TyMZ3sZandojef7W1j3przv82BSa1LjdH` that is
<jnes>
A different, larger file is available on neither gateway.ipfs.io nor my own local machine.. on the machine from which the file is published, I can naturally retrieve it, though.
ccii has joined #ipfs
ccii1 has quit [Ping timeout: 246 seconds]
fetlock_coco_fis has joined #ipfs
<Kubuxu>
jnes: possibly connectivity problems; if you are using js-ipfs, that might be part of the problem
<Kubuxu>
nodejs NAT traversal might be an issue there
<Kubuxu>
I would recommend using go-ipfs on a normal system
<Kubuxu>
also we could write a book named "Weird things NATs do"
}ls{ has quit [Quit: real life interrupt]
shizy has joined #ipfs
<pjz>
jnes: ipfs pin ls --type=recursive
<pjz>
jnes: and you can tack a path on after the hash, if you did a recursive add
<jfmherokiller[m]>
I suggest people avoid running ipfs pin ls without the type param because it can run for a long time
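For illustration, with a placeholder hash and path (treat the defaults as a sketch, per the warning above):
    ipfs pin ls --type=recursive           # only the root hashes you added; much cheaper than walking every pin
    ipfs pin ls                            # without --type it defaults to all pins, which can take a long time on a big repo
    ipfs cat QmSomething/docs/readme.txt   # a path tacked on after the hash works if the add was recursive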
hashkarma has joined #ipfs
<hashkarma>
I have 100,000 hashes of various files. Is there a cryptographic hash function or system that will allow me to verify the integrity of any one file with just the hash of all 100,000 hashes?
<hashkarma>
some of these 100,000 files were stored in IPFS and lost over the course of time
<hashkarma>
the hash of the 100,000 hashes is preserved on Ethereum
<hashkarma>
at a future date, I have the content address of one file and would like to use the hash of hashes stored on Ethereum to verify that this file existed when the hash of hashes was created
<hashkarma>
storing a trivial amount of additional bits for the files that still exist is within scope; no information, including these additional bits, is available for the lost files
<hashkarma>
the ability to do something like this would make IPFS a lot more valuable without having to worry about Ethereum's scaling issues
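The thread doesn't name a construction, but the usual fit for this requirement is a Merkle tree over the 100,000 hashes: only the root goes on Ethereum, and each surviving file keeps its sibling hashes as the "trivial additional bits". A minimal sketch with four stand-in leaves, assuming sha256sum and plain hex concatenation (a real scheme would pin down the exact encoding):
    # four stand-in leaf hashes (placeholders for four of the 100,000 file hashes)
    H1=$(printf 'file-1' | sha256sum | cut -d' ' -f1)
    H2=$(printf 'file-2' | sha256sum | cut -d' ' -f1)
    H3=$(printf 'file-3' | sha256sum | cut -d' ' -f1)
    H4=$(printf 'file-4' | sha256sum | cut -d' ' -f1)
    # interior nodes and root; the root is what would be stored on Ethereum
    H12=$(printf '%s%s' "$H1" "$H2" | sha256sum | cut -d' ' -f1)
    H34=$(printf '%s%s' "$H3" "$H4" | sha256sum | cut -d' ' -f1)
    ROOT=$(printf '%s%s' "$H12" "$H34" | sha256sum | cut -d' ' -f1)
    # to prove file-2 later you only need H2 plus the siblings H1 and H34,
    # even if every other file (and its extra bits) has been lost
    CHECK=$(printf '%s%s' "$H1" "$H2" | sha256sum | cut -d' ' -f1)
    CHECK=$(printf '%s%s' "$CHECK" "$H34" | sha256sum | cut -d' ' -f1)
    [ "$CHECK" = "$ROOT" ] && echo "file-2's hash was in the original set"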
jhand has quit [Quit: Connection closed for inactivity]
<jfmherokiller[m]>
well mixing a bitcoin thing with a filesystem sounds like it would be a lot of trouble and possibly costly
hashkarma has quit [Quit: Page closed]
ajbouh has quit [Ping timeout: 276 seconds]
hashkarma has joined #ipfs
<hashkarma>
well, Peter Todd said that it is not a lot of trouble
Bhootrk_ has quit [Read error: Connection reset by peer]
akagetsu01 has joined #ipfs
tglman has quit [Ping timeout: 240 seconds]
m0ns00n has joined #ipfs
tglman has joined #ipfs
aceluck has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
museless has quit [Ping timeout: 255 seconds]
jkilpatr has joined #ipfs
_whitelogger has joined #ipfs
xelra has quit [Ping timeout: 240 seconds]
jaboja has joined #ipfs
rendar has joined #ipfs
bingus has quit [Ping timeout: 248 seconds]
droman has joined #ipfs
droman has quit [Remote host closed the connection]
droman has joined #ipfs
droman has quit [Remote host closed the connection]
<limbo_>
Anyone know how to speed up hashing files for adding them to ipfs? I want to add them in place, but the CPU on the machine they're on is far too slow to do that.
xelra has joined #ipfs
<limbo_>
Also, where can I find an arm64 package for Debian? There's only an amd64 snap available.
clownpriest has joined #ipfs
bingus has joined #ipfs
tglman has quit [Ping timeout: 276 seconds]
dimitarvp has joined #ipfs
igorline has quit [Ping timeout: 240 seconds]
ylp has quit [Ping timeout: 248 seconds]
mildred has joined #ipfs
akagetsu01 has quit [Quit: Connection closed for inactivity]
shizy has joined #ipfs
<cehteh>
limbo_: if hashing becomes CPU-bound rather than I/O-bound, there isn't much you can do; I'd expect the hashing algorithms are already pretty well optimized (optimizing them further would be the only way to improve it)
clownpriest has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
anewuser has quit [Read error: Connection reset by peer]
anewuser has joined #ipfs
<limbo_>
Is there even a way of explicitly telling ipfs what the hash is for a file?
<cehteh>
guess not, that would be dangerous
mildred has quit [Ping timeout: 240 seconds]
Guest76095 has quit [Remote host closed the connection]
tglman has joined #ipfs
<limbo_>
Why? Isn't it possible to edit the files after the fact anyway?
<limbo_>
I'm thinking of a use case like people adding a torrent to a client for files they already have.
Lymkwi has joined #ipfs
Lymkwi is now known as Guest12569
<limbo_>
I sort of had this problem before, when I uninstalled and reinstalled the ipfs snap package in dev mode, and none of the files I added were there anymore.
shizy has quit [Ping timeout: 240 seconds]
<cehteh>
no, editing files in place is impossible; ipfs is content-addressed
clownpriest has joined #ipfs
mildred has joined #ipfs
<r0kk3rz>
limbo_: torrents tend to hash the file again anyway to verify
bielewelt has quit [Quit: Leaving.]
jaboja has quit [Ping timeout: 240 seconds]
kaotisk has joined #ipfs
maxlath has quit [Quit: maxlath]
anewuser has quit [Ping timeout: 240 seconds]
m0ns00n has quit [Quit: quit]
anewuser has joined #ipfs
m0ns00n has joined #ipfs
pcctw has joined #ipfs
<limbo_>
cehteh: even if you use the in-place flag when adding?
<limbo_>
r0kk3rz: Depends on the client. The most important verification is when data's received.
<limbo_>
Just mentioning it because I want to add about 2TB of content in place, and that would probably take a couple of days.
<Noxarivis[m]>
Does anyone know of an already-working P2P application that uses a DHT over Tor? (preferably with >1000 nodes)
<Magik6k>
limbo_, you can try using blake2b, it should be faster -> ipfs add --hash blake2b-256
<Magik6k>
(or blake2s if you are on 32bit)
<limbo_>
Magik6k: I'll try it out. I presume ipfs is smart enough to not hash files more than once if it doesn't need to, right?
jaboja has joined #ipfs
<Magik6k>
If it can't detect that that's the case, it probably will. When a file is being added, it is only hashed once.
<Magik6k>
limbo_, you can also run the add with --local so it isn't announced to the network while adding, and after the add just run 'ipfs dht provide rootHash'
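Putting the two suggestions together as a sketch; --local and the hash function come from the advice above, and the path and root hash are placeholders:
    ipfs add -r --local --hash blake2b-256 /data/archive   # hash with blake2b and skip announcing while adding
    ipfs dht provide QmRootHash                             # announce the root to the DHT once the add has finished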
<limbo_>
I plan on making these files available. does announcing to the network make the add process slower?
<limbo_>
Another issue: "Error: cannot add filestore references outside ipfs root (<my home folder>)"
<Magik6k>
when you run provide, the files should become available
<limbo_>
What's the proper way to deal with this problem?
<limbo_>
Magik6k: good to know. I'll keep that in mind if/when I use this seriously.
clownpriest has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<r0kk3rz>
so, you're trying to hash 2tb with an rpi or something?
<limbo_>
r0kk3rz: pretty much.
<limbo_>
Why can't I add filestore references outside of the root directory?
koo6 has quit [Ping timeout: 246 seconds]
<limbo_>
adding a symlink worked just fine.
anewuser has quit [Ping timeout: 240 seconds]
<limbo_>
Magik6k: thanks for that advice, it's running about 40% faster. I presume this will result in different hashes though.
<Magik6k>
Yes, it will
<limbo_>
so, will other nodes be unable to find these files if they use a different hash? If so, how does that not cause data duplication?
anewuser has joined #ipfs
tg has quit [Ping timeout: 255 seconds]
<r0kk3rz>
it will cause duplication
<r0kk3rz>
but so long as all nodes use the blake2 algo it will be ok
<r0kk3rz>
for adding, i mean
jkilpatr has quit [Quit: Leaving]
jkilpatr has joined #ipfs
savoir-faire is now known as linuxdood
<limbo_>
is there any system in place, or planned, to map hashes to hashes for the same files made with different algorithms?
clownpriest has joined #ipfs
<limbo_>
Hypothetically, there could be multiple copies of this data I'm adding already stored, and I'd have no way of knowing without hashing it with every available algorithm.
<r0kk3rz>
I don't think that really makes sense; generally most things will use the default sha256
shizy has joined #ipfs
ckwaldon has quit [Ping timeout: 240 seconds]
ashark has joined #ipfs
<r0kk3rz>
a better solution is probably to just add it on another machine, and copy the .ipfs directory
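One way that repo-copy approach could look, assuming go-ipfs with the default repo location and placeholder host/paths; stop the daemons on both machines before copying:
    # on the fast machine, add the content there first
    ipfs add -r /data/archive
    # then sync the whole repo (default ~/.ipfs) onto the slow box
    rsync -a ~/.ipfs/ slowbox:~/.ipfs/
    # note this also copies the node's identity and config, so don't run both copies at once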
jaboja has quit [Ping timeout: 276 seconds]
linuxdood is now known as linuxdood__
linuxdood__ is now known as savoir-faire
tg has joined #ipfs
rodolf0 has joined #ipfs
Guest12569 has quit [Remote host closed the connection]
Lymkwi has joined #ipfs
Lymkwi is now known as Guest52443
<cehteh>
and the fuse frontend will prolly never be fixed :/
jaboja has joined #ipfs
jaboja has quit [Ping timeout: 240 seconds]
jaboja has joined #ipfs
<clownpriest>
you guys think we'll ever have to deal with gov trying to regulate ipfs?
<clownpriest>
i could see it happening
<r0kk3rz>
of course, it's what governments do
<clownpriest>
"let me get this straight....you want to dissolve the monopoly of data control and democratize the cloud? BUWAHAHAHAHA......i dont think so"
<clownpriest>
"silly hackers...we control the clouds"
jhand has joined #ipfs
Milanello1998 has joined #ipfs
cornland has joined #ipfs
<clownpriest>
frankly surprised you've gotten this far. maybe they'll just buy up all the filecoin and that's how they subvert it
ulrichard has quit [Remote host closed the connection]
jaboja has quit [Ping timeout: 248 seconds]
<fiatjaf>
how can I publish something to IPFS but not broadcast it, such that it is public but visible only to those who know the hash?
<r0kk3rz>
fiatjaf: I don't think that's possible
ylp has joined #ipfs
ylp has quit [Client Quit]
<limbo_>
r0kk3rz: I don't suppose it's possible to do that with --nocopy, is it?
<r0kk3rz>
limbo_: nocopy just keeps the file where it is, rather than copying it to the blockstore
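A minimal sketch of that setup, assuming go-ipfs with the experimental filestore enabled; the path is a placeholder, and per the error earlier the referenced files apparently have to live under whatever root the node allows:
    ipfs config --json Experimental.FilestoreEnabled true
    ipfs add -r --nocopy /data/archive   # stores references to the files in place instead of copying blocks into ~/.ipfs
    ipfs filestore verify                # checks that the referenced files are still present and unmodified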
<limbo_>
so, I could still do that as long as the directory it's added from is in the same place?