<film42424242>
horrified: For example, each hash from `ipfs refs QmcNvmYWQgzeqkeT5bTUCTggXPhL4gWLEJnAKWtiaCyTGX`
<film42424242>
I would think in the MerkleDAG each hash would point to the hash that references the file (the one I pasted above). So I'm not sure how that's traced backwards to each ref.
<film42424242>
I think I'm missing something pretty obvious, but can't figure out what that is.
<horrified>
wait, how can a chunk know about what file it is a part of?
<horrified>
that's like putting a file's hash sum into the file itself - not possible
<Mateon1>
That's impossible, like repeatedly break SHA256 impossible
<film42424242>
How does the file know about its chunks? That's my question.
<horrified>
a block of data can not know what other blocks reference it
<Mateon1>
It has a list of hashes
<Mateon1>
You can examine it with `ipfs dag get Hash`
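(For concreteness, a rough command-line sketch of that: the CID is the one pasted earlier, and the comments describe what the output roughly contains rather than captured output.)

    # Fetch the block behind the root CID and decode it as an IPLD node.
    # For a chunked file the node carries a list of links to the chunk
    # blocks; for a directory the links carry the entry names as well.
    ipfs dag get QmcNvmYWQgzeqkeT5bTUCTggXPhL4gWLEJnAKWtiaCyTGX

    # The same links, viewed through the older object-level commands:
    ipfs object links QmcNvmYWQgzeqkeT5bTUCTggXPhL4gWLEJnAKWtiaCyTGX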
<film42424242>
Thanks Mateon1 !
<Mateon1>
Note that this isn't the exact format, as files and directories use the unixfs format on top of protobuf, while ipfs dag explores IPLD on cbor
<Mateon1>
AKA, `ipfs dag get | ipfs dag put -` will NOT result in the same file
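(A sketch of why the round trip changes things; the exact default flags are an assumption about the go-ipfs version in use at the time.)

    # 'dag get' prints the unixfs/protobuf node re-encoded as JSON ...
    ipfs dag get QmcNvmYWQgzeqkeT5bTUCTggXPhL4gWLEJnAKWtiaCyTGX > node.json
    # ... while 'dag put' stores its JSON input as a dag-cbor node by
    # default, so the resulting block, and therefore the CID, differs
    # from the original protobuf block.
    ipfs dag put node.json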
<film42424242>
horrified: Yeah, I understand that. I'm just not sure how the data structure is set up. If everything is in a merkle tree, how does the "file hash" know about its chunks?
<Mateon1>
The hash itself knows nothing, you have to ask the IPFS network for a block that matches that hash
<film42424242>
That is assuming chunks are the leaf nodes here.
<film42424242>
Oh gotcha. So I first ask for info about the block containing the hash and someone in the network answers saying, "hey I know the children nodes for this hash" ?
<film42424242>
I'm sure this seems like a pretty basic implementation question. For what it's worth, I was reading in the ipfs/specs repo, but the merkle dag page doesn't get far before the "WIP" message.
<Mateon1>
Nope, you get a block that matches that cryptographic hash. In case it's a directory, the block *contains* a reference (more hashes) to all files it contains
<Mateon1>
It's a list of pairs of the form "filename", Hash
<Mateon1>
Wrapped up in protobuf binary format
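(One way to see those name/hash pairs without decoding the protobuf by hand; substitute any directory CID for the placeholder below.)

    # Decodes the unixfs directory node and prints one
    # "<child-hash> <size> <entry-name>" line per entry.
    ipfs ls <directory-cid>

    # The raw links straight from the protobuf node:
    ipfs object links <directory-cid>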
<film42424242>
Oh gotcha!
<film42424242>
Does that cause some issues for things like the wikipedia mirror on ipfs that has millions of files in one directory?
<Mateon1>
Yes, that's why we implemented directory sharding, which splits directories into a HAMT data structure, saved as IPFS blocks
<Mateon1>
(Hash Array Mapped Trie)
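(At the time, directory sharding was an experimental go-ipfs feature; the config key below is the documented experimental flag, and the directory name is made up.)

    # Opt in to HAMT directory sharding (experimental in go-ipfs 0.4.x)
    ipfs config --json Experimental.ShardingEnabled true
    # Adding a very large directory will then produce sharded nodes
    ipfs add -r ./huge-directory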
* film42424242
just had his mind blown
<horrified>
what does HAMT look like, conceptually?
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 240 seconds]
Mateon3 is now known as Mateon1
<Mateon1>
I timed out, sorry if I missed anything
<horrified>
Mateon1: can you describe in a couple of words what HAMT implementation looks like?
<Mateon1>
Nope :P
<Mateon1>
It's dark magic to me
<Magik6k>
horrified, it basically splits the directory into smaller 'buckets'; you can see it by looking at the English wiki mirror, for example: `ipfs dag get QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/58wiki/`
<film42424242>
^ That's what I was just looking at.
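(What to expect from that, roughly; the comments describe an expectation based on how the HAMT sharding works, not captured output, and the exact link-name format is version-dependent.)

    # Raw view: the node's links are shard buckets whose names start with
    # a short hash prefix rather than plain article names.
    ipfs dag get QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/58wiki/

    # Resolved view: walks the shards and lists the actual entries
    # (slow and very large for a directory with millions of files).
    ipfs ls QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/58wiki/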
<film42424242>
Thanks for all the help! My knowledge increased like 1000x tonight :)
<FritzTheCat[m]>
gentoo is for ricers lol
<FritzTheCat[m]>
really
<jfmherokiller[m]>
I've noticed an interesting issue: if you add a directory containing a lot of files that are identical, the adding process can fail because it can try to access a file piece or a "put-#" file which already has a file handle open
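(A minimal sketch of how one might try to reproduce that report; the file names and count are made up, and whether it actually fails will depend on the platform and go-ipfs version.)

    # A directory full of identical files
    mkdir dupes
    echo "identical content" > dupes/page-0.html
    for i in $(seq 1 5000); do cp dupes/page-0.html "dupes/page-$i.html"; done

    # Reported above to sometimes fail with an open-file-handle / "put-#" error
    ipfs add -r dupes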
<kpcyrd>
jfmherokiller[m]: which operating system are you on?
<kpcyrd>
that sounds like windows
<jfmherokiller[m]>
well i was getting that issue on osx
<jfmherokiller[m]>
it was when i was making a backup of bash.org
<jfmherokiller[m]>
also does there exist a simple way to wrap the http services ipfs provides in https?
<jfmherokiller[m]>
something like `nc 127.0.0.1 5001 <&1 | nc -l 8001 >&0` but for https?
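(One way to get the https equivalent of that nc pipe is a small TLS-terminating proxy in front of the API port; this is a sketch using socat and a self-signed certificate, with arbitrary port numbers and file names. The browser will still have to trust the self-signed cert, and the API's CORS settings still apply.)

    # One-off self-signed certificate for localhost
    openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
        -days 365 -subj "/CN=localhost"
    cat key.pem cert.pem > localhost.pem

    # Listen with TLS on 5002 and forward plaintext to the API on 5001
    socat "OPENSSL-LISTEN:5002,reuseaddr,fork,cert=localhost.pem,verify=0" \
        TCP:127.0.0.1:5001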
<jfmherokiller[m]>
well I mainly just want to expose an SSL version for my own use on localhost, because for whatever reason the ipfs-api js library, when used in the browser, is being forced to use https
<jfmherokiller[m]>
this doesn't happen if I run it from localhost:8080, but if I run it from ipfs.io it does
<kpcyrd>
jfmherokiller[m]: this is due to browser security restrictions. I think there's an RFC by Google that tries to define localhost as 127.0.0.1 and therefore declare it secure, partly because of this
<jfmherokiller[m]>
i think i can get around it by using an older version of the library but that library also has bugs
<kpcyrd>
jfmherokiller[m]: do you mean the node.js client or the browser client?
<jfmherokiller[m]>
browser
<jfmherokiller[m]>
Chrome specifically
<kpcyrd>
that is probably not going to work either. your browser is blocking this