stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.23 and js-ipfs 0.41 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of
opal has quit [Ping timeout: 240 seconds]
opal has joined #ipfs
}ls{ has quit [Ping timeout: 256 seconds]
dethos has quit [Ping timeout: 265 seconds]
llorllale has quit [Quit: WeeChat 2.8]
receivership has quit [Ping timeout: 265 seconds]
ipfs-stackbot has quit [Remote host closed the connection]
ipfs-stackbot has joined #ipfs
Belkaar has quit [Ping timeout: 240 seconds]
Belkaar has joined #ipfs
Belkaar has joined #ipfs
jcea has quit [Quit: jcea]
xcm is now known as Guest15637
Guest15637 has quit [Read error: Connection reset by peer]
xcm has joined #ipfs
ygrek has quit [Ping timeout: 240 seconds]
rents has joined #ipfs
user_51 has quit [Ping timeout: 256 seconds]
user_51 has joined #ipfs
llorllale has joined #ipfs
gavlee_ has quit []
aLeSD has quit [Remote host closed the connection]
aLeSD has joined #ipfs
aLeSD has quit [Remote host closed the connection]
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
xcm has quit [Read error: Connection reset by peer]
xcm has joined #ipfs
InvisibleRasta has quit [Quit: WeeChat 2.3]
InvisibleRasta has joined #ipfs
Newami has joined #ipfs
Newami has quit [Remote host closed the connection]
redfish has quit [Ping timeout: 246 seconds]
KempfCreative has quit [Ping timeout: 250 seconds]
Fessus_ has joined #ipfs
Fessus has quit [Ping timeout: 260 seconds]
redfish has joined #ipfs
Fessus_ has quit [Remote host closed the connection]
Fessus_ has joined #ipfs
Fessus_ has quit [Remote host closed the connection]
Fessus_ has joined #ipfs
MDude has quit [Quit: Going offline, see ya! (www.adiirc.com)]
fleeky has quit [Ping timeout: 260 seconds]
Guest78710 has quit [Ping timeout: 265 seconds]
pepol has joined #ipfs
pepol is now known as Guest63503
<anders5070[m]> anyone know how i could use cids i already know about to make directories easily without needing to add the files again?
<anders5070[m]> seems like i need to use unixfs but not really sure atm
fleeky has joined #ipfs
<RubenKelevra> Take a look at this updated help text:
<RubenKelevra> anders#5070: yes that's easily possible.
<anders5070[m]> thank you i will look at this rn
<anders5070[m]> it would be good to stay away from the mfs api and i don't think this will work for me. i want to be able to get the cid of directories by combining cids of files and directories without having to upload the file multiple times.
<anders5070[m]> i will basically be making something similar to an mfs on top of a crdt log
<anders5070[m]> so i want to be able to recalculate the cid of directories by content they contain that i already know about, without having to upload the root directory each time there is an update
<anders5070[m]> the reason i cant use mfs for this is because with the crdt log old updates can come in later and be merged so the state of the fs could change dramatically
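A sketch of one way to do what anders5070 describes with go-ipfs's object-patch commands: build a directory CID purely from already-known child CIDs, without re-adding any file content. The file names and `<file-cid>`/`<dir-cid>` values below are placeholders.

```sh
# No file data is re-added here; only new directory nodes are created.
EMPTY=$(ipfs object new unixfs-dir)                      # CID of an empty unixfs directory
DIR=$(ipfs object patch "$EMPTY" add-link a.txt <file-cid>)
DIR=$(ipfs object patch "$DIR" add-link sub <dir-cid>)
echo "$DIR"                                              # CID of the combined directory
```

Each `add-link` returns the CID of a new directory node referencing the existing blocks, which fits the "recalculate the root CID on every CRDT merge" use case.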
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
eneren[m] has left #ipfs ["User left"]
rendar has joined #ipfs
Deepman[m] has left #ipfs ["User left"]
cparker has quit [Ping timeout: 260 seconds]
cparker has joined #ipfs
RingtailedFox has quit [Ping timeout: 260 seconds]
Nact has joined #ipfs
fleeky has quit [Ping timeout: 240 seconds]
fleeky has joined #ipfs
aaaaaa has joined #ipfs
aaaaaa has quit [Client Quit]
Nact has quit [Ping timeout: 256 seconds]
Nact has joined #ipfs
moinmoin has joined #ipfs
<moinmoin> Hi all.. so I understand that in IPFS, new content does not get "pushed" into the "network" unless that content is requested; then it moves to the node requesting it. I have a use case where I need some content hosted on IPFS, but when the content is updated, it should be "pushed" to nodes already hosting blocks from that content
<moinmoin> Does IPFS support this use case?
<moinmoin> Like some pubsub mechanism for updates to be pushed into the network to those who care about it
<it-xp[m]> ```js
<it-xp[m]> const { password = 'OPEN DATA ENCRYPTION LAYER!', pubHash } = context;
<it-xp[m]> ```
<it-xp[m]> but this shows here too )
<ensrationis[m]> <moinmoin "Hi all..so I understand in IPFS,"> Example with sensor data https://youtu.be/fV9b841ms5M
}ls{ has joined #ipfs
<moinmoin> ensrationis[m] ok... looks promising. I see nodes can subscribe to "topics". It would be interesting if nodes could subscribe to "ipns names", as that would be the perfect setup for the use case I have. Content is hosted and referenced via IPNS; nodes subscribe to it via IPNS. When the content gets updated, they receive the updates
<ensrationis[m]> <moinmoin "ensrationis ok...looks promising"> IMHO: better to keep the actual channel id for your pubsub messaging in some public blockchain (ethereum as an example)
<moinmoin> I have little knowledge about blockchain/ethereum - so I ask, why would that be a better setup?
<aschmahmann[m]> > It would be interesting if nodes can subscribe to "ipns names"
<aschmahmann[m]> They can, that's IPNS over PubSub https://github.com/ipfs/go-ipfs/blob/master/docs/experimental-features.md
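Per the experimental-features doc linked above, the feature is enabled with a daemon flag. A minimal sketch; the peer ID is a placeholder:

```sh
# Both the publishing and the resolving node start the daemon with the flag:
ipfs daemon --enable-namesys-pubsub
# Resolving an IPNS name then joins its pubsub topic, so later record
# updates are pushed to subscribers instead of requiring fresh DHT lookups:
ipfs name resolve /ipns/<publisher-peer-id>
```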
<moinmoin> aschmahmann[m] that is interesting. Thanks for the link
<ensrationis[m]> <moinmoin "I have little knowledge about bl"> IPNS will be the central point of your system, so it will be ok, but if you want to be "more p2p", try to find an approach to keep the actual channel id without a single point of failure
<moinmoin> ok
<aschmahmann[m]> moinmoin: IPNS over PubSub will only push the update to the IPNS record (i.e. instead of being version 1 with path /ipfs/QmABC it's now version 2 with path /ipfs/QmXYZ); actually resolving the data pointed to by IPNS would require a separate hook. Also, I'm not sure offhand if IPNS's `--stream` flag will allow you to stream PubSub results indefinitely or not.
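Since the pushed record update does not fetch the content itself, the "separate hook" could take a shape like the following hypothetical watch loop (name, interval, and peer ID are placeholders):

```sh
# Periodically re-resolve the IPNS name and pin whatever it now points at.
NAME="/ipns/<publisher-peer-id>"
LAST=""
while sleep 30; do
  CUR=$(ipfs name resolve "$NAME") || continue
  if [ "$CUR" != "$LAST" ]; then
    ipfs pin add "$CUR"   # fetches and keeps the new version locally
    LAST="$CUR"
  fi
done
```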
<moinmoin> hmmm, so it is not an update of the "contents"?
adasumizox has joined #ipfs
Nact has quit [Ping timeout: 265 seconds]
<aschmahmann[m]> By itself IPNS is just the "pointer" to the latest content. Content resolution itself is taken care of through IPFS resolution. There may be other subsystems (e.g. the mount system) which will automatically resolve the contents for you. However, once you get the latest "pointer" from IPNS, resolving via IPFS should be pretty straightforward.
Nact has joined #ipfs
<aschmahmann[m]> Also, you're likely to be "lucky": IPFS probably won't have to do an internal DHT request for the content, since the peers you're connected to in the pubsub channel likely already have the data you're looking for, and you'll fetch it straight from them.
<moinmoin> aschmahmann[m] cool. It seems I have the pieces I need to make things work. I am still fuzzy about how IPNS works; I guess that is why I don't have a crystal clear picture of the setup... but I will start poking around soon :)
Nact has quit [Quit: Konversation terminated!]
<aschmahmann[m]> SGTM, do some reading through the docs and if you're still confused come on back with questions πŸ™‚
fleeky has quit [Ping timeout: 256 seconds]
<TraderOne[m]> When publishing a website to IPFS and viewing it using the IPFS plugin in a browser, how are MIME types determined? Do we have some support for metadata which can be attached to a file in IPFS?
cparker has quit [Remote host closed the connection]
fleeky has joined #ipfs
Adbray has quit [Quit: Ah! By Brain!]
MDude has joined #ipfs
fleeky has quit [Ping timeout: 256 seconds]
vroom has quit [Read error: Connection reset by peer]
vroom has joined #ipfs
cparker has joined #ipfs
cipres has joined #ipfs
fleeky has joined #ipfs
__jrjsmrtn__ has quit [Ping timeout: 250 seconds]
__jrjsmrtn__ has joined #ipfs
<swedneck> i know js-ipfs can embed metadata to ipfs objects, but i don't think go-ipfs has such capability
moinmoin has quit [Remote host closed the connection]
<swedneck> i think mime types are decided like any other time a file has no explicit metadata, though i'm not sure how that's actually done
<swedneck> something about magic numbers?
Arwalk has quit [Read error: Connection reset by peer]
kivutar has quit [Ping timeout: 258 seconds]
dethos has joined #ipfs
Pie-jacker875 has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
Arwalk has joined #ipfs
Pie-jacker875 has joined #ipfs
kivutar has joined #ipfs
<ipfsbot> Ilya @Ilya_Krav posted in Ipfs pubsub C# error unity - https://discuss.ipfs.io/t/ipfs-pubsub-c-error-unity/7710/1
ZaZ has joined #ipfs
hurikhan77 has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
fazo96 has joined #ipfs
Ecran has joined #ipfs
hurikhan77 has joined #ipfs
RingtailedFox has joined #ipfs
RingtailedFox has quit [Ping timeout: 260 seconds]
cipres has quit [Ping timeout: 256 seconds]
pecastro has joined #ipfs
tsrt^ has quit []
kivutar has quit [Ping timeout: 260 seconds]
kivutar has joined #ipfs
<RubenKelevra> <it-xp[m] "> <@it-xp:matrix.org> ok, imagin"> Also, IPFS is always e2e encrypted. All transfers are point to point and encryption is done with the node keys.
tsrt^ has joined #ipfs
<RubenKelevra> it-xp: don't get me wrong, I'd like to be able to secretly share data over IPFS, but your approach currently makes no sense to me.
ZaZ has quit [Read error: Connection reset by peer]
Mikaela has quit [Quit: Mikaela]
Mikaela has joined #ipfs
InvisibleRasta has quit [Quit: WeeChat 2.3]
kivutar has quit [Ping timeout: 250 seconds]
invra has joined #ipfs
fleeky has quit [Ping timeout: 265 seconds]
jcea has joined #ipfs
invra has quit [Quit: WeeChat 2.3]
kivutar has joined #ipfs
invra has joined #ipfs
fleeky has joined #ipfs
foxcpp1 has quit [Quit: Looks like my relay decided to commit suicide]
ipfs-stackbot has quit [Remote host closed the connection]
adasumizox has quit [Quit: WeeChat 2.8]
lovetox has joined #ipfs
lovetox has left #ipfs [#ipfs]
lovetox has joined #ipfs
lovetox has left #ipfs [#ipfs]
ipfs-stackbot has joined #ipfs
cparker[r] has joined #ipfs
lovetox has joined #ipfs
mauz555 has joined #ipfs
lovetox has left #ipfs [#ipfs]
lovetox has joined #ipfs
cparker has quit [Ping timeout: 260 seconds]
lovetox has left #ipfs [#ipfs]
lovetox has joined #ipfs
jcea has quit [Remote host closed the connection]
fazo96 has quit [Ping timeout: 258 seconds]
kivutar has quit [Ping timeout: 250 seconds]
rents has quit [Ping timeout: 260 seconds]
jcea has joined #ipfs
fazo96 has joined #ipfs
fleeky has quit [Ping timeout: 240 seconds]
kivutar has joined #ipfs
Mikaela has quit [Quit: Mikaela]
Mikaela has joined #ipfs
fazo96 has quit [Ping timeout: 256 seconds]
Taoki has joined #ipfs
fleeky has joined #ipfs
<TraderOne[m]> what about IPv6-only nodes, will they be part of the new 0.5 DHT?
<TraderOne[m]> I have about 30% ipv6 traffic today
maxzor has quit [Ping timeout: 260 seconds]
fazo96 has joined #ipfs
Taoki has quit [Remote host closed the connection]
Taoki has joined #ipfs
fazo96 has quit [Ping timeout: 265 seconds]
Taoki has quit [Remote host closed the connection]
Taoki has joined #ipfs
maxzor has joined #ipfs
saki_ has joined #ipfs
kivutar has quit [Ping timeout: 265 seconds]
lovetox has left #ipfs [#ipfs]
kivutar has joined #ipfs
Newami has joined #ipfs
Newami has quit [Client Quit]
KempfCreative has joined #ipfs
llorllale has quit [Ping timeout: 256 seconds]
zeden has joined #ipfs
llorllale has joined #ipfs
KempfCreative has quit [Ping timeout: 256 seconds]
kivutar has quit [Ping timeout: 256 seconds]
KempfCreative has joined #ipfs
kivutar has joined #ipfs
lordcirth has quit [Remote host closed the connection]
RingtailedFox has joined #ipfs
Guest63503 has quit [Ping timeout: 258 seconds]
pepol has joined #ipfs
pepol is now known as Guest78728
Jybz has joined #ipfs
tryte has quit [Ping timeout: 240 seconds]
<TraderOne[m]> If I pin a folder in the webui MFS - is it not recursive?
Jybz has quit [Excess Flood]
tryte has joined #ipfs
Jybz has joined #ipfs
developer24 has joined #ipfs
developer24 has quit [Max SendQ exceeded]
Taoki has joined #ipfs
<anders5070[m]> @RubenKelevra#0000 would encrypting the data before storing it in ipfs be more effective? i want to look at a way to keep data stored in ipfs more private for things like user storage.
kivutar has quit [Ping timeout: 256 seconds]
<TraderOne[m]> Pinning a folder in the webui is recursive, it's just not indicated with an icon on subfolders
kivutar has joined #ipfs
attero has joined #ipfs
cparker[r] has quit [Remote host closed the connection]
developer24 has joined #ipfs
cparker has joined #ipfs
<TraderOne[m]> There is a problem if you announce both ipv4 and ipv6 and ipv6 is blocked by a firewall by mistake: then cloudflare never retries with ipv4.
KempfCreative has quit [Ping timeout: 250 seconds]
brombek[m] has joined #ipfs
nicolas__ has joined #ipfs
eldritch has quit [Quit: ZNC 1.6.6+deb1ubuntu0.1 - http://znc.in]
nicolas__ has quit [Remote host closed the connection]
ljmf00 has quit [Read error: Connection reset by peer]
ljmf00_ has joined #ipfs
cipres has joined #ipfs
stavros has joined #ipfs
<stavros> Hello
<stavros> I had some corruption on my local node's filesystem, is there a way I can ask the node to delete the corrupt data?
<stavros> I have ipfs-cluster running over my node and am getting "failed to decode Protocol Buffers: incorrectly formatted merkledag node:"
<stavros> Is that because of the node having corrupt data?
<TraderOne[m]> I just delete everything and do cluster follow over
<stavros> What's follow over?
<stavros> Oh, there's only one node in the cluster, me (don't ask)
pecastro has quit [Ping timeout: 258 seconds]
<stavros> Does the node's changed identity matter to the cluster?
kivutar has quit [Ping timeout: 256 seconds]
<stavros> I keep getting "context canceled" from the node
<TraderOne[m]> yes it matters
<stavros> Is there any way I can add the new node's identity?
<stavros> Or recover my data?
eldritch has joined #ipfs
Jesin has quit [Quit: Leaving]
<RubenKelevra> <stavros "Is there any way I can add the n"> Well, yes, the config file of ipfs holds the private and public key and the config folder for the cluster also holds a file with the cluster-identity.
kivutar has joined #ipfs
<RubenKelevra> But as long as your blocks folder is the same, the data is there :)
<stavros> RubenKelevra, the cluster is fine, what got corrupt is the node config and some of the data in the blocks dir
<stavros> So now I need help restoring the cluster and deleting the corrupt data
Jesin has joined #ipfs
<stavros> Is the datastore directory important?
<stavros> Yeah, it looks like the old data won't work, I keep getting "corrupted merkledag"
<stavros> Can't IPFS tell which data is valid? The CIDs themselves are hashes, so presumably it knows what's good
developer24 has quit [Quit: Leaving]
<stavros> TraderOne[m], doesn't look like the identity of the node matters
fleeky has quit [Ping timeout: 265 seconds]
mowcat has joined #ipfs
rendar has quit []
jrt3 has joined #ipfs
<it-xp[m]> <RubenKelevra "it-xp: don't get me wrong, I lik"> I mean that no data should ever be shareable in unencrypted form, in principle )
fleeky has joined #ipfs
jrt2 has joined #ipfs
jrt has quit [Ping timeout: 240 seconds]
<it-xp[m]> so even open data is just data encrypted with default keys, well known to the members of the global broadcast domain
jrt3 has quit [Ping timeout: 250 seconds]
fengling has quit [Ping timeout: 258 seconds]
<stavros> How can I ask IPFS-cluster to repin all pins?
<stavros> Even though it thinks they're already pinned, I guess
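One way to re-trigger pinning for items a cluster tracks. Flag names vary between ipfs-cluster versions, so treat this as a sketch and verify against `ipfs-cluster-ctl --help`:

```sh
ipfs-cluster-ctl status --filter error   # list tracked items whose pin failed
ipfs-cluster-ctl recover --all           # re-trigger pin operations for errored items
```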
fengling has joined #ipfs
RingtailedFox has quit [Ping timeout: 265 seconds]
ZaZ has joined #ipfs
kivutar has quit [Ping timeout: 258 seconds]
RingtailedFox has joined #ipfs
<stavros> Can someone help me fix my cluster, or create a new one?
KempfCreative has joined #ipfs
kivutar has joined #ipfs
vvoz has joined #ipfs
vvoz has quit [Remote host closed the connection]
rjzxrscdjz[m] has joined #ipfs
saki_ has quit [Read error: Connection reset by peer]
mowotter has joined #ipfs
pecastro has joined #ipfs
dandle has joined #ipfs
mowcat has quit [Ping timeout: 250 seconds]
fleeky has quit [Ping timeout: 264 seconds]
Alin[m] has joined #ipfs
fleeky has joined #ipfs
maxzor has quit [Ping timeout: 258 seconds]
astronavt has joined #ipfs
yue6688[m] has joined #ipfs
cathadan[m] has joined #ipfs
kivutar has quit [Ping timeout: 264 seconds]
kivutar has joined #ipfs
<stavros> I keep getting "context canceled"
<stavros> On a new node and a new cluster
mowotter has quit [Remote host closed the connection]
<ripply[m]> that's just a generic error; golang has contexts that get passed around, so it could mean something is timing out and cancelling the context, which basically stops processing in that context
<stavros> Can I ignore it? I've deleted the node and cluster three times now and still get that error
<ripply[m]> it really depends what is causing it, I am not too familiar with the ipfs code, just libp2p
<stavros> Hmm, I see, thanks
<stavros> ripply[m], do you know how I can reinitialize a node?
<ripply[m]> nope
<stavros> Hmm, thanks
ljmf00_ has quit [Quit: ZNC 1.7.5 - https://znc.in]
ljmf00 has joined #ipfs
mauz555 has quit [Remote host closed the connection]
llorllale has quit [Quit: WeeChat 2.8]
<RubenKelevra> <it-xp[m] "so, even open data, is data, enc"> I think I know what you mean, but if you match the unencrypted CIDs to locally encrypted data, you won't have gained anything.
<RubenKelevra> stavros: I can help you. What happened?
<stavros> RubenKelevra, I have a node and a cluster that only has that node in it (for unimportant reasons). I got disk corruption on the disk with the node data, and I'm trying to recover as much as I can
<RubenKelevra> <stavros "RubenKelevra, the cluster is fin"> so you got the old node config, which holds the private key?
<stavros> I made a new node and cluster, connected the old node to the new node, and added all the CIDs to the new cluster to pin
mauz555 has joined #ipfs
<RubenKelevra> that should be easy. :)
<stavros> RubenKelevra, no, unfortunately the config was corrupt as well
mauz555 has quit [Read error: Connection reset by peer]
invra has quit [Quit: WeeChat 2.3]
<RubenKelevra> yeah, but this should be easy to recover. :)
<stavros> Some of the data might be corrupt too though :/
<RubenKelevra> Do you have a copy of the (uncorrupted) data somewhere on a backup?
<stavros> I keep getting a lot of "ERROR core/comma: context canceled pin.go:132" on the new node
<stavros> RubenKelevra, no, if I did I'd just restore that :P
invra has joined #ipfs
<RubenKelevra> oh wonderful :D
<stavros> Haha yep
<RubenKelevra> okay, forget about the cluster, that doesn't really matter.
<stavros> Yeah, I don't really need the cluster
<stavros> It's just for an easier API to pin stuff
<RubenKelevra> which kind of storage do you use for the blocks, flatfs?
<RubenKelevra> flatfs creates a "blocks" folder under ~/.ipfs
<stavros> Yes
<RubenKelevra> what kind of corruption are we dealing with? Is the filesystem broken or is the harddrive broken?
<stavros> The filesystem was broken (part of the filesystem was overwritten)
<stavros> I managed to recover data by using a backup superblock
<RubenKelevra> ah okay
<RubenKelevra> then copy the blocks folder away from this filesystem
<stavros> Already done, yep
<stavros> I copied everything I recovered into a different directory and ran a node on it
<RubenKelevra> you can now put those blocks in the folder of your new node
<RubenKelevra> (shut it down first - for obvious reasons), then rename the blocks folder and put the backed up one there
<stavros> I tried that but I got errors about a corrupt merkledag
<stavros> So I ran a clean new node and connected the two
<stavros> So now I want to pin all the hashes on the new one so they transfer over the network
<RubenKelevra> yeah, we clean up the corruptions
<stavros> IPFS does, you mean?
<RubenKelevra> well, you can let ipfs check its datastore
<stavros> Oh, how can I do that?
<RubenKelevra> don't start the node with the old blocks; instead, run `ipfs repo verify`
<RubenKelevra> it will print out everything which is broken
<stavros> Oh wow, nice
<stavros> Does it also check the keystore and all the other files?
<stavros> Sorry, I mean blocks and datastore
<RubenKelevra> I actually don't know :)
<RubenKelevra> the datastore should just hold metadata, like which pins you've pinned etc. πŸ€”
<stavros> Hmm, so I can delete that and it will still look things up by hash fine?
<RubenKelevra> but I might be wrong here
<stavros> I'll probably just leave it
<stavros> Since the node works reasonably, I think transferring all the data by pinning on the new node and connecting the two together should be fine, no?
<stavros> Except I keep getting a lot of that error
<RubenKelevra> well, you can just run `ipfs repo verify` and then remove everything which is corrupted: `ipfs block rm <hash>`
<stavros> And it's a brand new node, it's odd
<stavros> Ah, fantastic
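The verify-then-remove step can be scripted. A hedged sketch: `ipfs repo verify`'s output format differs between versions, so adjust the awk match to the lines your node actually prints before trusting it:

```sh
# Remove every block that `ipfs repo verify` reports as corrupt.
ipfs repo verify 2>&1 | awk '/corrupt/ {print $2}' | while read -r cid; do
  ipfs block rm "$cid"
done
```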
<RubenKelevra> <stavros "Since the node works reasonably,"> well, by default ipfs doesn't check that its data is correct before sending it - it gets checked on the second node which receives it, and this will cause constant resends until something times out - I guess
<RubenKelevra> you can change this behaviour in the config, called `HashOnRead`
<RubenKelevra> But I don't know what a node will do if it encounters an erroneous checksum ... maybe it will just print a warning πŸ€”
<stavros> Oh hmm, I see
<stavros> I will change that, thank you
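The setting mentioned above lives under the Datastore section of the go-ipfs config:

```sh
# Verify block hashes as they are read back from the datastore:
ipfs config --json Datastore.HashOnRead true
# restart the daemon for the change to take effect
```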
Jybz has quit [Quit: Konversation terminated!]
cparker has quit [Ping timeout: 260 seconds]
<stavros> Do you have any idea why I might be getting "context canceled" errors?
<RubenKelevra> Context cancelled just means something has "timed out"
<RubenKelevra> or actively canceled
cparker has joined #ipfs
<RubenKelevra> you'll see this for example when you let the cluster do operations on a node, since the cluster config sets a lot of timeouts - if something is too slow, it will be cancelled and retried later
<stavros> Ah, that doesn't sound too bad
cheetypants has joined #ipfs
cheet has quit [Ping timeout: 256 seconds]
cheetypants has quit [Ping timeout: 256 seconds]
cheet has joined #ipfs
invra has quit [Quit: WeeChat 2.3]
kivutar has quit [Ping timeout: 256 seconds]
<stavros> RubenKelevra, the pinning seems to be going well, thank you for your help!
<RubenKelevra> stavros: cool!
<RubenKelevra> You may want to look into backup solutions now :P
cparker has quit [Ping timeout: 260 seconds]
<RubenKelevra> stavros: I like to recommend this one: http://duplicity.nongnu.org/
<stavros> RubenKelevra, my server is backed up regularly, but apparently the provider doesn't back up their network volumes :/
<stavros> Btw Restic is much better than duplicity :P
Smashnet has joined #ipfs
cparker has joined #ipfs
<RubenKelevra> what's better with restic than duplicity? :)
<stavros> What do you mean?
<RubenKelevra> you said it's better... why?
<stavros> Oh, you don't need to do full backups then diffs on those
<stavros> Basically, all these things: https://www.stavros.io/posts/holy-grail-backups/
<stavros> Borg is great, Restic is equally good
<stavros> Use whichever you prefer, Restic is slightly more modern but both are great
<stavros> They also do deduplication, you can mount the encrypted remote as a local fs
<stavros> So you can restore specific single files
<RubenKelevra> Ah okay. I don't use anything fancy, just a 40 line bash script to do my backups :)
<RubenKelevra> a zfs on a luks on a usb stick
<stavros> Is zfs your backup fs or your regular fs?
<RubenKelevra> zfs is the backup fs
<RubenKelevra> I switched just to btrfs on my main desktop :)
kivutar has joined #ipfs
<stavros> Ah, how do you like btrfs?
<RubenKelevra> it's very sparse on features ...
<stavros> Hm yeah
<stavros> I assume it doesn't have something like zfs send?
<RubenKelevra> it has
<stavros> Is that not good for backups?
<RubenKelevra> I just don't want to use it, because it might trash the data on the receiving end... you know?
Taoki has joined #ipfs
<stavros> Hm, yeah
<stavros> The good thing about Restic is that you can also have it verify the data
<RubenKelevra> I just want to read the data, as the operating system does, and save it somewhere else - without having to care about the integrity of the filesystem
<stavros> On the remote, so it doesn't need to transmit all the data back
<stavros> Right
<RubenKelevra> well, you can just run a scrub on the zfs storage and it will verify the integrity
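The scrub workflow mentioned above, concretely; the pool name "backup" is a placeholder:

```sh
zpool scrub backup       # start a full verification pass over all copies in the pool
zpool status -v backup   # shows scrub progress plus any checksum errors found/repaired
```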
<stavros> Yeah, but that's hard to do for offsite backups
<stavros> Unless you control the server
<stavros> I use rsync.net and send the data there
Taoki has quit [Remote host closed the connection]
<stavros> I pay $45/yr/TB, seems worth it for remote backups
<RubenKelevra> Well, I don't have to back up much ... I have 64 GB sticks and alternate between 2 sticks
<RubenKelevra> gzip compression, deduplication etc on
<stavros> My issue with that was that, if there's a house fire, I'm going to lose the sticks as well
<RubenKelevra> easy: Just don't set your house on fire :D
<RubenKelevra> well, I alternate between 2 sticks since I store one offsite, and just switch them :)
<stavros> I mean, I also have a rule "don't format your VPS network volumes with all your data", yet here we are :P
<stavros> RubenKelevra, that sounds reasonable
<stavros> I prefer to not have to think of it, just to have it automatic
<stavros> For a few bucks per year, I never have to think about it
<RubenKelevra> Yeah, I get that - but I have the rule that I don't trust other peoples computers :D
redfish has quit [Ping timeout: 246 seconds]
<RubenKelevra> I just need some common files back, say GPG key, SSH keys etc. :)
<RubenKelevra> and they don't change THAT often
<RubenKelevra> btw zfs can also encrypt a datastore and still check the integrity, and it can send and receive the data even while encrypted, without decryption :)
<stavros> Hmm, I need to look into that
<RubenKelevra> so you can send an encrypted datastore to a backup provider and the backup provider can run a scrub over your data and make sure the redundancy works, without having the key
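That is what ZFS raw sends are for; a sketch with placeholder dataset and host names:

```sh
zfs snapshot pool/data@backup1
# -w/--raw ships the blocks still encrypted; the receiving side can store
# and scrub them without ever holding the encryption key:
zfs send -w pool/data@backup1 | ssh backuphost zfs receive backuppool/data
```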
<stavros> RubenKelevra, I don't trust other people's computers either, that's why I check the integrity :P
Smashnet has quit [Quit: Leaving]
Taoki has joined #ipfs
<stavros> RubenKelevra, that's a great feature, what's it called?
Smashnet has joined #ipfs
<RubenKelevra> well, the built-in encryption is just made that way. :)
<stavros> Oh huh
<stavros> I guess the checksum is stored over the encrypted data, huh
<RubenKelevra> both
<stavros> Err, I mean calculated
<RubenKelevra> yeah
<stavros> Though they probably do authenticated encryption I guess, not just checksums at that point
<RubenKelevra> when it's unlocked, you can scrub the unencrypted data too
<RubenKelevra> @stav
<stavros> Hmm, what's the difference there?
<stavros> Isn't scrubbing just verifying you can read the blocks?
<stavros> If the encrypted data is good, why would the decrypted data not be?
<RubenKelevra> Scrubbing reads the blocks from all available copies, checksums the data, and verifies it
<stavros> Ah
<RubenKelevra> while just reading all data would not verify all sources
<stavros> What kind of sources are there?
<stavros> Isn't it just the disk(s)?
se7en has quit [Quit: ZNC - https://znc.in]
se7en has joined #ipfs
<RubenKelevra> Well, you can add different types of disks... like a cache device, a writeback cache, a device which stores just small blocks, and you can spread a pool with multiple copies (raid1-like) or with 1, 2, or 3 levels of mathematical redundancy (like raid5, raid6 etc.), and you can set a "copies" level on a dataset: everything stored there will be stored twice.
Smashnet has quit [Quit: Leaving]
<stavros> Hm, yeah, but when you scrub a pool, it scrubs all devices, no?
<stavros> My raidz does, for example
<RubenKelevra> yes
<RubenKelevra> but it also scrubs the memory itself, so everything currently cached there
<stavros> Ah, right
<RubenKelevra> when you just read a file, it might be delivered from the cache, but there might be broken redundancy somewhere; scrub will find and fix that. That's why it's so slow. :)
<stavros> True
redfish has joined #ipfs
<RubenKelevra> And for encryption: a loaded key lets scrub check not only the integrity of the encrypted block and its metadata, but also verify the data cryptographically with the hmac
<RubenKelevra> so you don't rely on the checksum itself, but also verify cryptographically that the data is exactly what you stored there and nobody has tampered with it (and also modified the hash)
<stavros> Makes sense
<stavros> That looks great, thank you
pecastro has quit [Quit: Lost terminal]
KempfCreative has quit [Ping timeout: 265 seconds]
zeden has quit [Quit: WeeChat 2.8]
zeden has joined #ipfs
RingtailedFox has quit [Ping timeout: 260 seconds]
skreech has left #ipfs [#ipfs]
cparker has quit [Ping timeout: 260 seconds]
invra has joined #ipfs
AlexShadowDiscor has joined #ipfs
cparker has joined #ipfs
llorllale has joined #ipfs
<stavros> RubenKelevra, thanks for all your help!
<RubenKelevra> stavros: you're welcome!
dsiypl4 has joined #ipfs
galaxie has quit [Ping timeout: 240 seconds]
Taoki has quit [Ping timeout: 264 seconds]
galaxie has joined #ipfs
Taoki has joined #ipfs
RingtailedFox has joined #ipfs
stavros has quit [Quit: Leaving]
invra has quit [Quit: WeeChat 2.3]
Taoki has quit [Ping timeout: 265 seconds]
mowcat has joined #ipfs
<it-xp[m]> <it-xp[m] "I meen, that any data must can't"> and also I look at L2 cryptography as "data layer" security instead of "link layer". The open-data state is allowed only in DRAM or (our current R&D) only as the resulting state of homomorphic computations
<RubenKelevra> it-xp: that's over my head. :)
redfish has quit [Ping timeout: 246 seconds]
dethos has quit [Ping timeout: 250 seconds]
ragecryx has joined #ipfs
redfish has joined #ipfs
ZaZ has quit [Read error: Connection reset by peer]
<it-xp[m]> You look at the contact list, get public keys, compute for each the hash pk + ypk, and verify incoming messages; when you look up chat history, you have access to the chat ledger merkle branch and your messages in it, addressed counter: ypk + pk
Taoki has joined #ipfs
opal has quit [Ping timeout: 240 seconds]
<it-xp[m]> these are the consensuses I mentioned above: Proof of Security, Proof of Fact, Proof of Self Identity
opal has joined #ipfs
<it-xp[m]> over pBFT (Tendermint)
<it-xp[m]> the first consensus is a surprise ) will disclose it in the proposal for RFP 0
<it-xp[m]> track it here https://github.com/stels-community/IPFS-RFP0 (coming soon)
<it-xp[m]> ah, access to the chat ledger, yes, is also a field for research; one solution is called DC-network key validation
mowcat has quit [Remote host closed the connection]
<it-xp[m]> it's based on the dining cryptographers problem https://en.wikipedia.org/wiki/Dining_cryptographers_problem
dsiypl4_ has joined #ipfs
Taoki has quit [Remote host closed the connection]
cipres has quit [Ping timeout: 256 seconds]
dsiypl4 has quit [Ping timeout: 265 seconds]
warner has joined #ipfs
<it-xp[m]> this scheme is vulnerable to the Sybil attack, but another avenue for R&D is to find the right transaction structure, for which percentage resistance improves as the global tree grows
Taoki has joined #ipfs
invra has joined #ipfs
jcea has quit [Ping timeout: 260 seconds]
jcea has joined #ipfs
Ai9zO5AP has joined #ipfs