<redfish>
galois_dmz: did you happen to measure the traffic for larger files (and compare it to the file size)? (this data would be interesting to see)
pfraze has quit [Remote host closed the connection]
gmcquillan has quit [Quit: gmcquillan]
Encrypt has quit [Quit: Sleeping time!]
pfraze has joined #ipfs
pfraze has quit [Remote host closed the connection]
ashark has joined #ipfs
rgrinberg has quit [Ping timeout: 276 seconds]
chungy has quit [Ping timeout: 260 seconds]
wallacoloo has joined #ipfs
pfraze has joined #ipfs
ashark has quit [Ping timeout: 240 seconds]
chungy has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
chungy has quit [Ping timeout: 258 seconds]
r04r is now known as zz_r04r
pfraze has quit [Remote host closed the connection]
chungy has joined #ipfs
dmr has joined #ipfs
patcon has joined #ipfs
<galois_dmz>
I had not done such measurements, but I can do a quick one now.
rgrinberg has joined #ipfs
dmr has quit [Ping timeout: 272 seconds]
cketti has quit [Quit: Leaving]
<galois_dmz>
Creating a 1kB file causes 150-200kB of network traffic per node; creating a 10kB or 100kB file causes about the same traffic. Creating a 1MB file causes slightly more traffic (and results in a file where "ipfs files stat" lists 4 child blocks, instead of the 1 it listed for all of the smaller files).
<galois_dmz>
Creating a 10MB file causes way more traffic: about 1.1MB for the creating node, and about 550kB for each of the other two nodes.
<galois_dmz>
(that 10MB file has 44 child blocks)
<galois_dmz>
Anyway, it seems like a whole lot of overhead for maintaining the DHT, especially for small files. But I don't know what's expected, having little prior experience with ipfs.
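A rough way to reproduce this kind of measurement from the shell (a sketch, assuming a running go-ipfs daemon; `ipfs stats bw` reports cumulative totals, so the deltas also include background DHT chatter):
    # create test files of increasing size and watch the bandwidth counters around each add
    for size in 1 10 100 1024 10240; do
        dd if=/dev/urandom of=test_${size}k bs=1k count=${size} 2>/dev/null
        ipfs stats bw                 # note TotalIn/TotalOut before the add
        ipfs add -q test_${size}k
        ipfs stats bw                 # note them again; the difference approximates the cost of this add
    done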
chungy has quit [Ping timeout: 260 seconds]
<galois_dmz>
Anyway, I'm heading out of the office (I'm using this for a project at work), but I'll stay on the channel and see what I see when I come back after the long weekend. Thanks in advance for any insights/assistance!
chris613 has joined #ipfs
chungy has joined #ipfs
pfraze has joined #ipfs
chris6132 has quit [Ping timeout: 260 seconds]
pfraze has quit [Remote host closed the connection]
patcon has quit [Ping timeout: 244 seconds]
wiretapped-cb has quit [Ping timeout: 260 seconds]
damongant has quit [Ping timeout: 260 seconds]
yosafbridge has quit [Ping timeout: 260 seconds]
yosafbridge has joined #ipfs
wiretapped-cb has joined #ipfs
damongant has joined #ipfs
Senji has quit [Ping timeout: 276 seconds]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
pfraze has joined #ipfs
pfraze has quit [Remote host closed the connection]
fleeky has joined #ipfs
ashark has joined #ipfs
patcon has joined #ipfs
rgrinberg has quit [Ping timeout: 244 seconds]
fleeky has quit [Ping timeout: 240 seconds]
reit has joined #ipfs
pfraze has joined #ipfs
patcon has quit [Ping timeout: 240 seconds]
fleeky has joined #ipfs
dignifiedquire has quit [Quit: Connection closed for inactivity]
patcon has joined #ipfs
chungy has quit [Ping timeout: 250 seconds]
chungy has joined #ipfs
jedahan has joined #ipfs
mfranzwa has joined #ipfs
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
chungy has quit [Ping timeout: 260 seconds]
chungy has joined #ipfs
mfranzwa has quit [Ping timeout: 258 seconds]
PrinceOfPeeves has quit [Quit: Leaving]
metaf5 has joined #ipfs
taaz has joined #ipfs
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
patcon has quit [Ping timeout: 276 seconds]
dmr has joined #ipfs
jedahan has joined #ipfs
jedahan has quit [Max SendQ exceeded]
jedahan has joined #ipfs
nonaTure has joined #ipfs
pfraze has quit [Remote host closed the connection]
matoro has quit [Ping timeout: 260 seconds]
chungy has quit [Ping timeout: 260 seconds]
chungy has joined #ipfs
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
nonaTure has quit [Ping timeout: 276 seconds]
Looking has quit [Ping timeout: 244 seconds]
mpi has joined #ipfs
jedahan has joined #ipfs
chriscool has joined #ipfs
mpi_ has joined #ipfs
mpi has quit [Read error: Connection reset by peer]
<Kubuxu>
galois_dmz: yes, the overhead is quite high and it is on our todo list to resolve. The relation you see is because internally ipfs uses 256K blocks.
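The 256K chunking is easy to see from the CLI (a sketch; the hashes are placeholders and will differ):
    dd if=/dev/urandom of=small.bin bs=1k count=100    # 100kB, fits in a single block
    dd if=/dev/urandom of=big.bin bs=1M count=1        # 1MB, gets split into 256K chunks
    ipfs add -q small.bin big.bin                      # prints one hash per file
    ipfs object links <hash-of-small.bin>              # no output: no child blocks
    ipfs object links <hash-of-big.bin>                # lists 4 child block hashes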
Tv` has quit [Quit: Connection closed for inactivity]
chungy has quit [Ping timeout: 250 seconds]
nonaTure has joined #ipfs
chungy has joined #ipfs
chungy has quit [Ping timeout: 260 seconds]
jaboja has joined #ipfs
chungy has joined #ipfs
chris613 has quit [Read error: Connection reset by peer]
diffalot has quit [Remote host closed the connection]
diffalot has joined #ipfs
diffalot has quit [Changing host]
diffalot has joined #ipfs
wallacoloo has quit [Ping timeout: 258 seconds]
jaboja has quit [Ping timeout: 250 seconds]
nonaTure has quit [Ping timeout: 260 seconds]
ashark has quit [Ping timeout: 276 seconds]
jaboja has joined #ipfs
ygrek has joined #ipfs
jaboja has quit [Remote host closed the connection]
slothbag has joined #ipfs
kerozene has quit [Ping timeout: 260 seconds]
M-alwi has joined #ipfs
slothbag has quit [Quit: slothbag]
od1n has joined #ipfs
dmr has quit [Ping timeout: 250 seconds]
palkeo has quit [Ping timeout: 260 seconds]
apophis has joined #ipfs
reit has quit [Ping timeout: 244 seconds]
disgusting_wall has quit [Quit: Connection closed for inactivity]
Arakela007 has joined #ipfs
chungy has quit [Ping timeout: 260 seconds]
chungy has joined #ipfs
jedahan has quit [Read error: Connection reset by peer]
jedahan has joined #ipfs
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
sahib has joined #ipfs
reit has joined #ipfs
kerozene has joined #ipfs
mpi_ has quit [Remote host closed the connection]
od1n has quit [Quit: WeeChat 1.4]
davidar_ has joined #ipfs
reit has quit [Ping timeout: 244 seconds]
cketti has joined #ipfs
bertschneider has joined #ipfs
bertschneider has quit [Ping timeout: 264 seconds]
zorglub27 has joined #ipfs
rendar has joined #ipfs
MahaDev has quit [Remote host closed the connection]
dignifiedquire has joined #ipfs
mpi has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
chron0 has joined #ipfs
<chron0>
ahoy
Encrypt has joined #ipfs
Bheru27 has quit [Remote host closed the connection]
slothbag has joined #ipfs
wallacoloo has joined #ipfs
slothbag has quit [Quit: slothbag]
wallacoloo has quit [Read error: Connection reset by peer]
wallacoloo has joined #ipfs
zz_r04r is now known as r04r
ygrek has quit [Ping timeout: 260 seconds]
Boomerang has joined #ipfs
Looking has joined #ipfs
Encrypt has quit [Quit: Lunch time!]
slothbag has joined #ipfs
jokoon has joined #ipfs
zorglub27 has quit [Ping timeout: 250 seconds]
slothbag has quit [Quit: slothbag]
slothbag has joined #ipfs
ghtdak has quit [Ping timeout: 244 seconds]
ghtdak has joined #ipfs
Looking has quit [Ping timeout: 272 seconds]
<chron0>
jbenet: thanks for pushing and taking care of this issue - enjoyed your talks on YT and am very happy to see some light in terms of offline, permanence and sovereignty for the future. Speed/latency will of course also be appreciated, but it wasn't my primary concern; I was looking for a distributed data layer for wiki-like map overlays: https://apollo.open-resource.org/lab:dspace
<chron0>
would it be feasible to store all of that in IPFS to have a fully decentralized data/map storage layer?
wallacoloo has quit [Quit: wallacoloo]
jokoon has quit [Quit: Leaving]
<ipfsbot>
[js-ipfs] diasdavid force-pushed test/network-turbulence from 3c4279c to 0a01d7a: https://git.io/vrMiG
<ipfsbot>
js-ipfs/test/network-turbulence 0a01d7a David Dias: more swarm tests
ashark has joined #ipfs
Foxcool_ has quit [Read error: Connection reset by peer]
reit has joined #ipfs
Foxcool_ has joined #ipfs
ashark has quit [Ping timeout: 240 seconds]
<ipfsbot>
[js-ipfs] diasdavid opened pull request #287: WIP (master...test/network-turbulence) https://git.io/vr91r
patcon has joined #ipfs
ashark has joined #ipfs
mpi_ has joined #ipfs
mpi has quit [Read error: Connection reset by peer]
ashark has quit [Ping timeout: 250 seconds]
<ipfsbot>
[js-ipfs] diasdavid force-pushed test/network-turbulence from 0a01d7a to 5b8a084: https://git.io/vrMiG
<ipfsbot>
js-ipfs/test/network-turbulence 5b8a084 David Dias: more swarm tests
apiarian has quit [Ping timeout: 264 seconds]
asie has quit [Ping timeout: 252 seconds]
apiarian has joined #ipfs
asie has joined #ipfs
matoro has joined #ipfs
zorglub27 has joined #ipfs
reit has quit [Ping timeout: 276 seconds]
mpi_ has quit [Remote host closed the connection]
slothbag has left #ipfs [#ipfs]
patcon has quit [Ping timeout: 240 seconds]
patcon has joined #ipfs
<r0kk3rz>
lgierth: no need for android app, i can just run it normally on my sailfishos phone because its normal linux
<r0kk3rz>
same with ipfs :)
<lgierth>
ok cool
<lgierth>
that works too
johnchalekson has quit [Ping timeout: 250 seconds]
mpi has joined #ipfs
<Yatekii>
hmm I wonder if ipfs would be suitable to synchronize data for myself, like contacts, calendar and stuff
<Yatekii>
also what I don't get yet is how it is managed who has access to the files (I feel like, from the demos I have seen, all nodes are interconnected and public?) and how it is managed that I always get the newest hash. Also it uses merkle trees, right? if the leaves are hashes of the data blocks and all parent nodes are hashes of the children, don't I have to update the entire tree anytime data changes? But that isn't the case for git (which uses
<Yatekii>
merkle trees as well?) maybe I am asking stupid things (if so, hit me) but I struggle to figure it out :S
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-joi-8.2.0 (+1 new commit): https://git.io/vr95f
<ipfsbot>
js-ipfs/greenkeeper-joi-8.2.0 fc4b950 greenkeeperio-bot: chore(package): update joi to version 8.2.0...
pfraze has joined #ipfs
<patagonicus>
Yatekii: Anyone can access files (or parts of files) if they know their hash, which is a crypto hash, so brute forcing it shouldn't be a concern. You should encrypt sensitive information, though. With ipns you can have an address that can be updated. Adding files to IPFS doesn't "upload" them; data is only transferred once some other node requests it. Unchanged data is only transferred once. Git does the
<patagonicus>
same, but it can optimize things a bit better than IPFS as it's a somewhat different use case.
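A minimal sketch of the "encrypt sensitive information" approach (assumes gpg is available; the hash printed by `ipfs add` is a placeholder):
    gpg --symmetric --output contacts.vcf.gpg contacts.vcf   # encrypt locally before adding
    ipfs add -q contacts.vcf.gpg                              # prints the content hash, e.g. Qm...
    ipfs cat <that-hash> | gpg --decrypt > contacts.vcf       # anyone else fetching the hash only gets ciphertext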
<r0kk3rz>
Yatekii: things dont change in ipfs, only new things added
<r0kk3rz>
Yatekii: i feel syncthing is probably better suited to this use case anyway, its more private and automagic
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-joi-8.2.0 at fc4b950: https://git.io/vr95u
mpi_ has joined #ipfs
mpi has quit [Ping timeout: 272 seconds]
<Yatekii>
patagonicus: wow that was really short and makes full sense, ty! One thing tho: sure it's a crypto hash, but it is still finite. One could randomly fish for interesting files, right? That's different from needing to know the network, the server within it, the keys for the server, and the specific path to the file, right? Sure, the probability is still low, but kinda higher, no? (No expert here)
<Yatekii>
r0kk3rz: i know its additive, that's why i was confused :)
A124 has quit [Quit: '']
<apiarian>
that's probably where the "encrypt sensitive information" bit comes in. so even if things do get fished out, the data itself is still encrypted
<Yatekii>
r0kk3rz: what do you mean by syncing? As i understand it, ipfs is automagic syncing too :P
<patagonicus>
My guess is that a) it's still really hard to find a hash that is actually available in the net and b) if you do it's most likely to be a "boring" file.
<r0kk3rz>
Yatekii: google syncthing, it's an open source app
<Yatekii>
r0kk3rz: ohh i misread, sorry!
<r0kk3rz>
Yatekii: IPFS isnt a syncing tool, its a transport layer of sorts
<r0kk3rz>
it could be a syncing tool, but it isnt yet
<Yatekii>
r0kk3rz: how is a syncing tool different? ;) it transports the newest version
<r0kk3rz>
Yatekii: IPFS doesnt do that. your node gets what you tell it to
<r0kk3rz>
if something changes, or a new file is added, you need to tell it to go and get it, it doesnt do it itself
Boomerang has quit [Ping timeout: 240 seconds]
<Yatekii>
r0kk3rz: ofc, I would write that part of the syncing ;) I just mean as a layer :) instead of DAV or the like (which horribly sucks imo)
<r0kk3rz>
sure if you write some kind of notification system and use IPNS, then you can do this
<r0kk3rz>
i have a number of ideas along this line for use with IPFS, but ipfs is only one part
<Yatekii>
also what I do not fully understand: ofc ipfs just adds data instead of altering it. but if the leaves of a merkle tree are hashes of the data and the parents are hashes of the leaves etc., then if data gets added the hashes would have to change too, right?
<Yatekii>
somehow I got a logical twist there :S
<Yatekii>
and I don't seem to fix it myself :S
matoro has quit [Ping timeout: 276 seconds]
<Yatekii>
r0kk3rz: ok great, I will have a look :)
<apiarian>
any change in the data of a child will require a re-hashing and recreation of all of the parents up to the root
<Yatekii>
the point is that I don't like current solutions like owncloud or sabre or anything ... they are bloated with bullshit code in ugly languages ... so I was evaluating options to do my own simple rest API with a simple webui and then a mate of mine posted me the link to ipfs (just randomly) and I really got curious
<Yatekii>
hmm
<apiarian>
so if you're thinking about a data structure rooted at the node's ipns that will be updating various bits and pieces often, it is (probably?) better to have a wide shallow tree than a deeply nested tree
<Yatekii>
yeah I guess so
<apiarian>
but then also think about maybe laying out bits that will change together within a unified subtree so that they don't disrupt more of the tree than necessary on change
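This is easy to see by re-adding a directory after touching one file: only the hashes on the path from that file up to the root change, and untouched siblings keep their old hashes (a sketch; the file names are made up):
    mkdir -p site/posts site/images
    echo "hello" > site/posts/a.txt
    echo "img"   > site/images/b.jpg
    ipfs add -r site                      # note the hash printed for each entry and for site/
    echo "hello, edited" > site/posts/a.txt
    ipfs add -r site                      # a.txt, posts/ and site/ get new hashes; images/ and b.jpg do not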
<Yatekii>
ohhh I guess I understand now :D
<r0kk3rz>
instead of files on ipfs, you can just record raw blocks if you want
matoro has joined #ipfs
<Yatekii>
ya
<Yatekii>
r0kk3rz: sadly that ipfs-log is in js ...
<ipfsbot>
[js-ipfs] diasdavid force-pushed test/network-turbulence from 5b8a084 to 0784281: https://git.io/vrMiG
<ipfsbot>
js-ipfs/test/network-turbulence 0784281 David Dias: more swarm tests
<Yatekii>
btw question: I know this is not the job of ipfs, but how do you plan on overcoming the firewall/nat issues that exist right now and limit p2p?
<Yatekii>
(I mean without a central broker)
rgrinberg has joined #ipfs
<Yatekii>
also: if right now I want to grant individual users access to a file on my server, I create different logins; whether it's webui or ssh or whatever doesn't matter. with public-key encryption of a file that doesn't work, right? how would you do that?
<r0kk3rz>
Yatekii: its pretty cool, works in a similar fashion to IPFS, but obviously targeted towards keeping folders in sync between devices
<ipfsbot>
[js-ipfs] diasdavid force-pushed test/network-turbulence from 0784281 to 0910494: https://git.io/vrMiG
<ipfsbot>
js-ipfs/test/network-turbulence 0910494 David Dias: more swarm tests
espadrine has joined #ipfs
pfraze has quit [Remote host closed the connection]
sahib has quit [Ping timeout: 276 seconds]
disgusting_wall has joined #ipfs
screensaver has joined #ipfs
chris613 has joined #ipfs
chris613 has quit [Client Quit]
dguttman has joined #ipfs
<Yatekii>
r0kk3rz: I just feel that ipfs is more bare than syncthing, so I can build the syncing layer the way I want
<r0kk3rz>
Yatekii: of course, if you want to roll your own by all means do so!
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-joi-8.2.1 (+1 new commit): https://git.io/vr9NS
<ipfsbot>
js-ipfs/greenkeeper-joi-8.2.1 3013cd3 greenkeeperio-bot: chore(package): update joi to version 8.2.1...
<Yatekii>
r0kk3rz: I just couldn't find any nice webui with a nice sync tool yet :( nice mail clients are nonexistent too :(
palkeo has joined #ipfs
dguttman has quit [Quit: dguttman]
zorglub27 has quit [Quit: zorglub27]
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-joi-8.2.1 at 3013cd3: https://git.io/vr9AY
ygrek has joined #ipfs
lothar_m has joined #ipfs
matoro has quit [Ping timeout: 276 seconds]
<Yatekii>
guys, question: I am tinkering a little with go-ipfs, which works really nicely
<Yatekii>
one question tho: if I have __hash__/filename, how come? I mean, isn't every file a hash, so I would have __hash1__/__hash2__ etc ...?
<Yatekii>
hmm actually it makes quite a lot of sense that I can access data in a directory I know the hash of
<Yatekii>
hmm
<Yatekii>
buuut if I can do that, can't I do __hash__/../.. etc until I reach the root to access all files in ipfs?
rgrinberg has quit [Ping timeout: 246 seconds]
dmr has joined #ipfs
reit has joined #ipfs
sahib has joined #ipfs
pfraze has joined #ipfs
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
Looking has joined #ipfs
PrinceOfPeeves has joined #ipfs
<Yatekii>
Btw next question: if i delete or alter a file, and i never streamed it to other nodes, is the old version gone or does my ipfs node somehow store it?
dmr has quit [Ping timeout: 272 seconds]
<r0kk3rz>
Yatekii: it lives in your ipfs cache
<r0kk3rz>
unless you clear it, or enough stuff gets added that it gets garbage collected
<Yatekii>
Also: how does go-ipfs know which file is addressed by the hash? Does it have a db that keeps track of it? Sorry for all the questions .. If that's written somewhere please point me to it :) i have watched all the talks on the webpage
<r0kk3rz>
its in the cache, the original file is not needed
<Yatekii>
Hmm okay, so if i add too many files it can basically happen that my node forgets what files i have so it wont deliver them to others anymore even tho it has the files?
jedahan has joined #ipfs
Foxcool_ has quit [Ping timeout: 260 seconds]
<r0kk3rz>
Yatekii: you can pin files, not sure if it does that by default yet
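For reference, the pin and garbage-collection commands look like this (a sketch; in go-ipfs, content you add yourself is pinned by default, content you merely fetched is not):
    ipfs pin add <hash>            # protect a block (and its children) from garbage collection
    ipfs pin ls --type=recursive   # list what is pinned
    ipfs repo gc                   # drop unpinned blocks from the local cache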
Foxcool has joined #ipfs
<Yatekii>
Hmm
Encrypt has joined #ipfs
Encrypt has quit [Client Quit]
taaem has joined #ipfs
Encrypt has joined #ipfs
lothar_m has quit [Quit: WeeChat 1.5-dev]
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
patcon has quit [Ping timeout: 276 seconds]
cemerick has joined #ipfs
ashark has joined #ipfs
Foxcool has quit [Ping timeout: 276 seconds]
Foxcool has joined #ipfs
mpi_ has quit [Remote host closed the connection]
<Yatekii>
hmm there is no ipfs fuse driver here yet, right?
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-libp2p-swarm-0.19.1 (+1 new commit): https://git.io/vrHej
<ipfsbot>
js-ipfs/greenkeeper-libp2p-swarm-0.19.1 66232e7 greenkeeperio-bot: chore(package): update libp2p-swarm to version 0.19.1...
<patagonicus>
Yatekii: It's built in as ipfs mount. Mounts to /ipfs and /ipns by default.
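Rough sketch of getting the FUSE mount up (assumes FUSE is installed and the default mountpoints exist):
    sudo mkdir -p /ipfs /ipns
    sudo chown $USER /ipfs /ipns
    ipfs daemon --mount            # or run `ipfs mount` in another shell while the daemon is running
    ls /ipfs/<some-hash>           # read-only view of any content by hash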
<apiarian>
is this where the ipld work is happening? https://github.com/ipfs/go-ipfs/tree/ipld . and then there's https://github.com/ipfs/go-ipld . what would be the way to go if I wanted to use ipld instead of the current {data:{}, links:[]} structure in my project? how to help with the go-ipfs ipld migration efforts?
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-libp2p-swarm-0.19.1 at 66232e7: https://git.io/vrHvc
<apiarian>
or is the goal to just use IPLD-formatted data within the Data part of the merkledag.Node, and then have the Links be populated by analyzing the contents of that Data?
<Yatekii>
patagonicus: do I explicitly need to mount it or does ipfs daemon work? because atm I have no /ipfs :S
ashark has quit [Ping timeout: 276 seconds]
Arakela007 has quit [Remote host closed the connection]
Stebalien has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-libp2p-swarm-0.19.2 (+1 new commit): https://git.io/vrHfu
<ipfsbot>
js-ipfs/greenkeeper-libp2p-swarm-0.19.2 15ed88f greenkeeperio-bot: chore(package): update libp2p-swarm to version 0.19.2...
<Yatekii>
Hmm could it be right that permission-management is not easy with ipfs?
<espadrine>
Write protection is pretty much guaranteed in ipfs
<Yatekii>
espadrine: "pretty mcuh" hehe
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-libp2p-swarm-0.19.2 at 15ed88f: https://git.io/vrHfx
<Yatekii>
the problem is still there, even tho I know it's highly improbable
<Yatekii>
another question: how scalable is ipfs? I mean doesn't it get pretty huge if I always pile data on top and broadcast it? also how does search/routing work? do I publish: hey I got this new data or do I say: hey I need this data and others answer?
<espadrine>
it uses Kademlia, I think
sahib has quit [Ping timeout: 250 seconds]
<Stebalien>
Yatekii: Currently, you only store what you download. IPFS (ignoring IPNS) doesn't really have routing. You just ask your peers for things.
<Yatekii>
gotta read about that then, I just looked it up very quickly earlier but didn't really read on
cemerick has quit [Ping timeout: 260 seconds]
<Stebalien>
Using a protocol called bitswap. In the future, other data exchange protocols will be added.
<Yatekii>
hmm so I just ask every peer and hope one has it?
<Yatekii>
ok
<Yatekii>
ty
<Stebalien>
Yatekii: As far as I know, that's how it *currently* works. It does use kademlia but only for IPNS (IIRC).
<espadrine>
I think it's for both
<Stebalien>
In general, it's supposed to be extensible.
<espadrine>
Kademlia is a DHT. It ensures that your query will only require log(n) hops.
<Yatekii>
hmm ok I gotta read a lot I guess
<Yatekii>
because atm I do not understand how ipns works. does it map from filenames to hashes?
<Stebalien>
I think it only uses the dht to *find* peers and lookup IPNS names. The DHT doesn't map blocks to peers.
<Yatekii>
what if there are equal filenames that contain different stuff?
<Stebalien>
IPNS maps public keys to hashes.
<Yatekii>
also: I now managed to mount ipfs, but when I do cd or ls as explained in the tut it just hangs. question is: doesn't it find peers that have the file, or what's the problem? can I somehow get a status? also, if I fetch a huge file I don't get a status, it will just block I guess?
<Stebalien>
If you know the associated private key, you can create a mapping. The newest mapping (based on some timestamp) wins.
<Yatekii>
ok
<Yatekii>
so I can basically retain my root with my privatekey
<Stebalien>
You can use `ipfs bitswap stat` to figure out what blocks you're trying to download
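For reference (a sketch; both subcommands are part of the go-ipfs CLI):
    ipfs bitswap stat        # blocks received, duplicate blocks, current wantlist, partners
    ipfs bitswap wantlist    # just the block keys this node is still asking peers for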
<Stebalien>
Yatekii: "so I can basically retain my root with my privatekey"?
<Yatekii>
thanks!
<Yatekii>
Stebalien: well I don't fully understand the purpose of ipns :S
<Stebalien>
Yatekii: Ah. It gives you mutability.
<Yatekii>
Stebalien: yes, well, so I can make a public key per file? and that way I can always find the newest version of my file?
<Stebalien>
Yatekii: Basically, in IPFS, a name always names the same thing. In IPNS, the owner of a private key can change where the associated public key points.
<Yatekii>
yeah right
<Yatekii>
so I got that now I guess :) tnaks a ton!
<Stebalien>
Yatekii: Generally, you don't have one per file. You just point your public key at the root of some directory tree. Whenever you update the file, you update the hashes of the directory tree and make your key point at the new root.
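From the CLI, that update-and-republish flow looks roughly like this (a sketch; the node's own key is used, and the hashes are placeholders):
    ipfs add -r mydir                    # the last hash printed is the new directory root
    ipfs name publish <new-root-hash>    # point this node's peer ID (public key) at the new root
    ipfs name resolve <peer-id>          # anyone can look up the current root for that key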
<Yatekii>
Stebalien: yes
<Yatekii>
Stebalien: sorry another question :(
<Yatekii>
:
<Stebalien>
Yatekii: Don't be sorry. That's why I joined this channel.
<Yatekii>
that's what they often do in the tut/explanation vids
<Yatekii>
so obv. the hash points to a dir
<Yatekii>
and bar is a file in it
<Yatekii>
BUT
<Yatekii>
why is the file not a hash too? :S I mean it surely wont store a snapshot of the whole directory right?
<Yatekii>
so the files have to be versioned too. how can it find the right file by name instead of hash?
<Yatekii>
(does that question make sense? :S)
<Yatekii>
yeah well I sometimes ask too much, but I have really been watching the vids on Juan's YouTube and read the website, but I still have questions :S
<Stebalien>
Yatekii: It is hashed. Basically, /ipfs/QmSh5e7S6fdcu75LAbXNZAFY2nGyZUJXyLCJDvn2zRkWyC/ maps names (in this case "bar") to the hash of bar.
<Stebalien>
Yatekii: You can actually resolve /ipfs/QmSh5e7S6fdcu75LAbXNZAFY2nGyZUJXyLCJDvn2zRkWyC/bar to get its hash.
<Stebalien>
Yatekii: I believe the command is "ipfs resolve ..."
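For example (using the directory hash from above; the resolved output is a placeholder):
    ipfs ls /ipfs/QmSh5e7S6fdcu75LAbXNZAFY2nGyZUJXyLCJDvn2zRkWyC            # child names with their hashes and sizes
    ipfs resolve /ipfs/QmSh5e7S6fdcu75LAbXNZAFY2nGyZUJXyLCJDvn2zRkWyC/bar   # prints /ipfs/<hash-of-bar>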
<Yatekii>
Stebalien: oh wow I am so dumb
<Yatekii>
ty!
<Stebalien>
Yatekii: No. This isn't obvious. It's called a merkledag if you're interested.
<Yatekii>
Stebalien: I read about the merkle tree
<Yatekii>
but it didn't make 100% sense tbh but yes
<Yatekii>
I understand the concept
<Yatekii>
not sure if the dag is different from the tree
<Yatekii>
ah is it a directed acyclic (?) graph or sth, so it would be a tree?
<Stebalien>
Yatekii: From the directory tree? Well, it is but that's more of an implementation detail (large files are split into multiple nodes in the DAG).
<Yatekii>
the point is that I don't get how everything can be immutable if you have to actually alter the tree when data is added (altering is just adding too)
<r0kk3rz>
Yatekii: because it's hashed: you change the file, you change the hash
Encrypt has quit [Quit: Quitte]
<Stebalien>
Yatekii: If you modify the tree, you get a new tree. The cool thing about merkledags is that you don't need to copy the entire tree, you can just re-use the parts of the tree that you didn't modify.
<voxelot>
Yatekii: right, it is immutable because altering any node of the tree will result in changing the root-level hash, so you create a copy of the merkledag and update the state with a new immutable object when you edit
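`ipfs object patch` makes the copy-on-write behaviour concrete: it builds a new root that re-uses every child you didn't touch (a sketch; hashes are placeholders, and the argument order differs slightly between go-ipfs versions):
    ipfs object patch add-link <dir-hash> notes.txt <file-hash>   # prints a NEW directory hash
    # the old <dir-hash> is untouched and still resolvable; both roots share all other children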
pfraze has quit [Remote host closed the connection]
<espadrine>
you end up having the old directory without the file you added, and that old directory is still around
<voxelot>
it gives nice reasoning about the data, with a diff of what changed, so state updates (like Stebalien says) will know to only update what's needed, which is good for rendering
<Stebalien>
voxelot: I wouldn't use IPFS for rendering. It will take more time to hash a single node than to just re-render an entire frame from scratch. :) (you could use a merkledag but with a faster non-cryptographic hash function).
<espadrine>
rendering what?
<Yatekii>
okay thanks so much guys!
<Yatekii>
espadrine: voxels ofc :P
<espadrine>
ofc
matoro has joined #ipfs
<voxelot>
Stebalien that is true. But what if I want to distribute the state/frame? Think the performance would be high enough for, say, a blog post?
<voxelot>
And also we can use other hash funcs. I wonder how react render works
Bheru27 has joined #ipfs
<Stebalien>
voxelot: I see. Yes, that makes sense. Basically, only mess with the parts of the DOM that have changed.
erde74 has joined #ipfs
Stebalien has quit [Remote host closed the connection]
Looking has quit [Ping timeout: 272 seconds]
Looking has joined #ipfs
zorglub27 has quit [Quit: zorglub27]
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-joi-8.3.0 (+1 new commit): https://git.io/vrHkO
<ipfsbot>
js-ipfs/greenkeeper-joi-8.3.0 5355fb2 greenkeeperio-bot: chore(package): update joi to version 8.3.0...
erde74 has quit [Quit: Verlassend]
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-joi-8.3.0 at 5355fb2: https://git.io/vrHka
reit has quit [Ping timeout: 240 seconds]
leer10 has joined #ipfs
Encrypt has joined #ipfs
matoro has quit [Ping timeout: 240 seconds]
jaboja has joined #ipfs
rgrinberg has joined #ipfs
jedahan has joined #ipfs
herzmeister has quit [Ping timeout: 260 seconds]
sahib has joined #ipfs
herzmeister has joined #ipfs
mildred1 has joined #ipfs
ygrek has quit [Ping timeout: 260 seconds]
rendar has quit [Ping timeout: 244 seconds]
rgrinberg has quit [Ping timeout: 244 seconds]
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
chriscool has quit [Quit: Leaving.]
jedahan has joined #ipfs
arpu has quit [Ping timeout: 244 seconds]
mildred1 has quit [Ping timeout: 244 seconds]
jedahan has quit [Client Quit]
Oatmeal has quit [Ping timeout: 260 seconds]
arpu has joined #ipfs
rendar has joined #ipfs
Encrypt has quit [Quit: Sleeping time!]
<daviddias>
dignifiedquire: do you happen to be around for some quick CR? :)
sahib has quit [Ping timeout: 260 seconds]
Oatmeal has joined #ipfs
arpu_ has joined #ipfs
cemerick has joined #ipfs
cketti has quit [Quit: Leaving]
<dignifiedquire>
daviddias: still around?
<daviddias>
yep
<dignifiedquire>
what do you want me to look at
<daviddias>
I realise it is super late in CET, don't pay attention to what I say :P You can do it after, I'll just npm link all the things meanwhile :)
<daviddias>
and another question I had was whether you had a clever way to crash a separate browser instance through tests
<daviddias>
I'm pretty confident the thing is going to work, but if we had tests to make sure WebRTC doesn't explode with an error that I'm not catching, it would be great
<dignifiedquire>
can you execute code in that instance?
<daviddias>
yeah, the idea would be "spawn two libp2p nodes in different browser instances, crash one of them, make sure the other continues intact and that updates its peerBook and muxedConn"
<dignifiedquire>
create a stackoverflow
<dignifiedquire>
that should kill things pretty consistently and is easy to do
<daviddias>
ahhah that is perfect
<daviddias>
just need to spawn two instances, but I guess that is why we have karma-peer :D
ering has quit [Quit: WeeChat 1.3]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<dignifiedquire>
daviddias: you have comments :P
<daviddias>
good ones? :P
<dignifiedquire>
daviddias: they will make your code better :P
tmg has joined #ipfs
matoro has joined #ipfs
<daviddias>
thank you :)
matoro has quit [Ping timeout: 244 seconds]
cemerick has quit [Ping timeout: 276 seconds]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
patcon has joined #ipfs
fuznuts has quit [Read error: Connection reset by peer]
fuznuts has joined #ipfs
nskelsey has quit [Ping timeout: 246 seconds]
nskelsey has joined #ipfs
rhalff_ has joined #ipfs
matoro has joined #ipfs
rhalff has quit [Disconnected by services]
rhalff_ has quit [Client Quit]
rhalff has joined #ipfs
<nothingmuch>
is there a backup tool (intended for creating backups) that uses merkledag (or some other hash based DAG) internally? (git doesn't count ;-)
<nothingmuch>
by creating backups I mean compact, rarely accessed, encrypted and expirable
ed_t has quit [Read error: Connection reset by peer]
<lgierth>
i'm not sure it exactly uses a merkledag structure
patcon has quit [Ping timeout: 276 seconds]
<nothingmuch>
I'm familiar with it, it does come close, but it's kind of a DIY approach, and I'd rather trust someone else to design a usable backup tool ;-)
matoro has quit [Ping timeout: 250 seconds]
<lgierth>
brad fitzpatrick is pretty high on the list of people i'd trust with designing something like that ;)
<nothingmuch>
oh i trust him a lot
<nothingmuch>
i don't trust myself to make good decisions long term about using camlistore
<nothingmuch>
it assumes I want to store things for life
<nothingmuch>
and gives a bunch of server APIs to construct a more intricate design should I want to
<nothingmuch>
i guess... how would you write a backup script with camlistore is kind of another possible question
<nothingmuch>
(to backup camlistore data itself, assuming I can't afford to sync it all, forever to some cloud storage)
<lgierth>
you could put encrypted data into ipfs
<lgierth>
there's no tooling for that yet
<lgierth>
but you can
<nothingmuch>
*nod*. I'll probably end up doing something like that, using camlistore to organize important personal files, and creating encrypted blob stores to later sync through ipfs to wherever I find cheap redundancy opportunities
<nothingmuch>
(for example my dad just bought a new imac, with way too much storage space, obviously helping him use it is the right thing to do =)
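One way the encrypted-blobs-in-ipfs idea could look in practice (a sketch, not a real backup tool; assumes gpg, and the returned hash still has to be recorded somewhere safe):
    tar czf - ~/important | gpg --symmetric -o backup.tar.gz.gpg
    ipfs add -q backup.tar.gz.gpg                         # prints the content hash to pin/replicate elsewhere
    ipfs cat <backup-hash> | gpg --decrypt | tar xzf -    # restore later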