lgierth changed the topic of #ipfs to: go-ipfs v0.4.5: https://dist.ipfs.io/go-ipfs | Week 7+8: 1) Web browsers https://git.io/vDyDE 2) Private networks https://git.io/vDyDh 3) Cluster https://git.io/vDyyt | Roadmap: https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0
ebarch10 has quit [Remote host closed the connection]
ebarch10 has joined #ipfs
ebarch10 is now known as ebarch
pfrazee has joined #ipfs
cemerick has joined #ipfs
seagreen has quit [Quit: WeeChat 1.6]
jedahan has joined #ipfs
ylp has quit [Ping timeout: 255 seconds]
Mizzu has quit [Ping timeout: 255 seconds]
cemerick has quit [Ping timeout: 240 seconds]
sbruce has quit [Ping timeout: 256 seconds]
gully-foyle has joined #ipfs
jedahan has quit [Read error: Connection reset by peer]
jedahan_ has joined #ipfs
maxlath has quit [Quit: maxlath]
muvlon has joined #ipfs
pfrazee has quit [Ping timeout: 260 seconds]
DiCE1904 has quit [Read error: Connection reset by peer]
aquentson1 has quit [Ping timeout: 240 seconds]
fleeky__ has joined #ipfs
fleeky_ has quit [Ping timeout: 240 seconds]
jedahan_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
pfrazee has joined #ipfs
matoro has joined #ipfs
john1 has joined #ipfs
pfrazee has quit [Ping timeout: 240 seconds]
john1 has quit [Ping timeout: 260 seconds]
shott has joined #ipfs
shott has quit [Remote host closed the connection]
pfrazee has joined #ipfs
Oatmeal has joined #ipfs
realisation has joined #ipfs
pfrazee has quit [Ping timeout: 240 seconds]
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
realisation has joined #ipfs
zopsi has quit [Quit: Goodnight and Good Luck]
pfrazee has joined #ipfs
MDude has quit [Quit: MDude]
MDude has joined #ipfs
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
zopsi has joined #ipfs
lothar_m has quit [Quit: WeeChat 1.7-dev]
pfrazee has quit [Ping timeout: 255 seconds]
realisation has joined #ipfs
_whitelogger has joined #ipfs
john1 has joined #ipfs
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
realisation has joined #ipfs
pfrazee has joined #ipfs
john1 has quit [Ping timeout: 240 seconds]
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
john1 has joined #ipfs
pfrazee has quit [Ping timeout: 268 seconds]
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 255 seconds]
_whitelogger has joined #ipfs
pfrazee has joined #ipfs
Aranjedeath has quit [Ping timeout: 240 seconds]
tmg has quit [Ping timeout: 268 seconds]
Foxcool has joined #ipfs
mguentner has quit [Quit: WeeChat 1.7]
wallacoloo_____ has quit [Quit: wallacoloo_____]
pfrazee has quit [Remote host closed the connection]
mguentner has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
Foxcool has joined #ipfs
spread has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
Foxcool has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
Foxcool has joined #ipfs
IRCFrEAK has joined #ipfs
webdev007 has quit [Quit: Leaving]
Foxcool has quit [Read error: Connection reset by peer]
mguentner2 has joined #ipfs
mguentner has quit [Ping timeout: 240 seconds]
IRCFrEAK has left #ipfs [#ipfs]
Foxcool has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
suttonwilliamd has joined #ipfs
pfrazee has joined #ipfs
AkhILman has quit [Ping timeout: 240 seconds]
spread has left #ipfs [#ipfs]
Catriona has joined #ipfs
pfrazee has quit [Ping timeout: 240 seconds]
wallacoloo_____ has joined #ipfs
ubiquitous[m] has joined #ipfs
<ubiquitous[m]> Heya. I'm working on an idea to upgrade robots.txt / sitemap.xml with an RDF-based alternative using, for instance, schemaorg.
<ubiquitous[m]> Am wondering how it might make sense to use IPFS to create a decentralised search / discovery / graph append methodology
muvlon has quit [Quit: Leaving]
muvlon has joined #ipfs
AniSkywalker has joined #ipfs
DiCE1904 has joined #ipfs
chris613 has quit [Quit: Leaving.]
<ubiquitous[m]> Perhaps mod something like: https://github.com/linkeddata/webizen to be able to search for particular things, ie, sparql-endpoints, news, etc. Leveraging RDF.
AniSkywalker has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
muvlon has quit [Ping timeout: 240 seconds]
reit has joined #ipfs
muvlon has joined #ipfs
<MikeFair> oi! hello all
<MikeFair> ubiquitous[m]: you still out there?
<ubiquitous[m]> Yup
<MikeFair> ubiquitous[m]: Tell me more what you're thinking; it hit me the other day that the way to do distributed search was a small piece of code that each node executed separately then aggregated their results into a repository for the user
<MikeFair> "Indexing" would be an act similar to "pinning" in that a node would announce it provided indexing for a particular directory or set of content ids
AkhILman has joined #ipfs
Foxcool has joined #ipfs
<ubiquitous[m]> I wasn't sure whether to script based on @context info (pertaining to this method) or whether something else would be better suited.
<MikeFair> It occurred to me that trying to bring the content to the indexing agent the way someone like Google does it just can't work
<ubiquitous[m]> As I've noted in the doc, at the simplest level I just want to avoid manually updating a webpage about the various schema tools.
<MikeFair> TL;dr atm
<MikeFair> ubiquitous[m]: let me ask you this; with robots.txt that generally says "automated agents can't go here"
<MikeFair> ubiquitous[m]: let me ask you this; with robots.txt that generally says "automated agents shouldn't go here"
<ubiquitous[m]> Understood. I'm talking about an RDF file.
<MikeFair> but it's still 'sucking down the content' to the agent for processing
<ubiquitous[m]> If you search TimBL foaf RDF
<MikeFair> same kind of thing in the sense that the agent is bringing the RDF content to it
<ubiquitous[m]> You'll find a good example of one describing a person
<ubiquitous[m]> Imho this file simply gives some info about what sort of site it is, what's on offer. Is it a blog, personal website, media, sparql-endpoint, etc. Etc.
<MikeFair> ok; sure a technical "table of contents"/"index"
<ubiquitous[m]> Then the collection of RDF indexes can be used by apps to do more. But in the first instance, we need some simple and easy way to support discovery
<ubiquitous[m]> +1
<ubiquitous[m]> I think I wrote an example in the gdoc
<ubiquitous[m]> Noting, drafted it yesterday
wkennington has joined #ipfs
<MikeFair> If what you're asking is "does a p2p need this kind of thing"; the answer is yes; but with a caveat
<MikeFair> the indexing must be published by the network
<MikeFair> the concept of a central agent (or even distributed agents) downloading all the content to build a central index won't work overall
<MikeFair> It needs to be a decentralized index
<MikeFair> each node needs to announce the part it provides; and others need to be able to find their way to that node based on the content they are looking for
<ubiquitous[m]> Perhaps read that doc. The question is about how to make this work with IPFS
<MikeFair> Do you know IPLD?
<MikeFair> basically this is kind of already done using that; it's like a distributed JSON dictionary where the values are links to other documents
<MikeFair> not "done" as in already doing it; but "done" as can already be integrated
<ubiquitous[m]> K. I imagine link is on the web?
<MikeFair> The example you have: “@ledger” : “SparqlEndPoint”
<MikeFair> That whole structure has a place to be uploaded
<ubiquitous[m]> Json is different to RDF regardless of serialisation
<MikeFair> exactly as is and addressed
<MikeFair> it's not json; it's KVP
<MikeFair> keys are strings
<MikeFair> jsonesque serialized
<ubiquitous[m]> Link?
<MikeFair> {"/":"HASH"}
<MikeFair> is a link to another document
<MikeFair> looking
<ubiquitous[m]> Effectively, it would be good if someone can generate standard triples / quads, then have some minor thing they need to add to make it populate via IPFS.
<MikeFair> ipld.io / https://github.com/ipld/specs
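For reference, a minimal Go sketch of the link convention MikeFair describes above: an IPLD-style node is ordinary key/value data, and a map of the form {"/": "<hash>"} is treated as a link to another object. The field names and the hash below are placeholders for illustration, not anything defined by the spec.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // An IPLD-style node: plain key/value pairs, where a map of the form
        // {"/": "<hash>"} is interpreted as a link to another object.
        // The hash here is a placeholder, not a real content id.
        node := map[string]interface{}{
            "@ledger": "SparqlEndPoint",
            "index":   map[string]string{"/": "QmPlaceholderHashOfAnotherDocument"},
        }
        out, _ := json.MarshalIndent(node, "", "  ")
        fmt.Println(string(out))
    }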
tmg has joined #ipfs
<ubiquitous[m]> If you wanted to put notes in the gdoc, that would be appreciated.
<ubiquitous[m]> I hope you can see the intention is indeed a decentralised approach.
<MikeFair> not exactly; I can see a description for indexing data; but not how to actually do the search using a content-addressed mechanism
<MikeFair> for example; any time a file is updated; the address changes
<MikeFair> (technically it becomes two addresses)
<ubiquitous[m]> Herein some of the IPFS related questions I had.
<MikeFair> So which document did you index?
<ubiquitous[m]> They should have a pointer.
<ubiquitous[m]> Also, collaborative use of that pointer
<MikeFair> But not all blocks have such a pointer
<ubiquitous[m]> So. If you have mysparqlendpoint.mymedicalwebsite.org
<MikeFair> So the "fixed point" in the address space is currently provided by another mechanism called "IPNS" which gives you an address that can have its target redirected
<ubiquitous[m]> Then the RDF file would have RDF statements saying "here's a sparql endpoint", it knows sparql-mm, it had Biomedical imaging on it for tissue / fluid samples
<ubiquitous[m]> Then via IPFS some sorta statement needs to exist to say "add this to the sparql endpoint graph, sub classification, Biomedical imaging"
<ubiquitous[m]> The root file somewhere gives a list of all these contributors
<ubiquitous[m]> Important to ensure these things don't get too big at scale.
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
<MikeFair> That would be IPNS
<ubiquitous[m]> :)
<MikeFair> "mysparqlendpoint.mymedicalwebsite.org" <---- IPNS address
ruby32 has joined #ipfs
<MikeFair> well actually; DNSLINK to IPNS address
<MikeFair> ubiquitous[m]: So the way that works is you make a PKI keypair; the address of the hash of the public key is the IPNS address; that makes a "fixed address" that everyone can find
<ubiquitous[m]> The utility of that link doesn't need to be over IPFS. But the distribution of the RDF file that adds to a broader ledger of available sites (the index), I was hoping IPFS could solve that problem in a decentralised way.
<MikeFair> ubiquitous[m]: That address is then a redirection pointer to some IPFS content-based address
<MikeFair> that's easy too; each RDF section is its own document
<ubiquitous[m]> Understood. So we have a dictionary of IPFS content pointer addresses.
<MikeFair> this is also where IPLD comes in; it stores a distributed structured document
<ubiquitous[m]> We point RDF documents at those addresses
<MikeFair> ubiquitous[m]: right; and those content pointers are constantly changing as the content changes
<MikeFair> so the IPNS link at the top is what people follow to see "the current version"
<ubiquitous[m]> What is the equivalent of "latest"
<ubiquitous[m]> K. I think we're saying the same thing.
<MikeFair> ubiquitous[m]: whatever the signing key of the IPNS node published as the pointer to the top of the content
<ubiquitous[m]> So the node itself isn't decentralised as someone is managing the key for it.
<MikeFair> ubiquitous[m]: What's interesting is once a particular version of the content tree has been published; you can't get the same address to give you anything other than that exact version (it can't be changed -- it's immutable)
<MikeFair> the management of the "document content" is centralized; the "document" is stored in a completely decentralized way
<ubiquitous[m]> I understand.
<MikeFair> so that's the value of the IPNS content link
<ubiquitous[m]> I need the equivalent of "latest" in addition to version control stuff
<MikeFair> it's a fixed hash
<MikeFair> will never change
<ubiquitous[m]> Also means to boot bad actors (which is why I have the reputation thing)
<ubiquitous[m]> Cheers. BBL.
<MikeFair> but the hash content id it points to can be changed by the person with the signing key
<MikeFair> and that's how you create the "latest version": by updating the ipns link and telling the world the ipns address is the address of the doc
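A rough sketch of that update flow in Go, driving a local go-ipfs daemon through its CLI (the file name is a placeholder and a running daemon is assumed); re-running it after the content changes is what produces the "latest" version behind the fixed IPNS address:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Add the current version of the document and capture its content hash.
        out, err := exec.Command("ipfs", "add", "-Q", "mysite.rdf").Output()
        if err != nil {
            panic(err)
        }
        contentHash := strings.TrimSpace(string(out))

        // Re-point this node's IPNS name (derived from its public key) at the
        // new content; readers following /ipns/<peer-id> now see this version.
        if err := exec.Command("ipfs", "name", "publish", "/ipfs/"+contentHash).Run(); err != nil {
            panic(err)
        }
        fmt.Println("published", contentHash)
    }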
jedahan has joined #ipfs
ShalokShalom_ has joined #ipfs
anewuser has quit [Quit: anewuser]
ShalokShalom has quit [Ping timeout: 255 seconds]
rendar has joined #ipfs
arpu has quit [Ping timeout: 240 seconds]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
arpu has joined #ipfs
realisation has joined #ipfs
realisation has quit [Ping timeout: 240 seconds]
sushipantsu[m] has joined #ipfs
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 240 seconds]
dignifiedquire has quit [Quit: Connection closed for inactivity]
Mizzu has joined #ipfs
espadrine has joined #ipfs
m0ns00n_ has joined #ipfs
m0ns00n_ has quit [Client Quit]
ygrek has quit [Ping timeout: 240 seconds]
koalalorenzo has joined #ipfs
ygrek has joined #ipfs
chriscool has joined #ipfs
dignifiedquire has joined #ipfs
espadrine has quit [Ping timeout: 240 seconds]
maxlath has joined #ipfs
robattila256 has quit [Ping timeout: 260 seconds]
ylp has joined #ipfs
pfrazee has joined #ipfs
cemerick has joined #ipfs
pfrazee has quit [Ping timeout: 260 seconds]
cemerick has quit [Ping timeout: 240 seconds]
cemerick has joined #ipfs
Guest156168[m] has joined #ipfs
gmoro_ has joined #ipfs
gmoro_ has quit [Remote host closed the connection]
gmoro_ has joined #ipfs
Guest156174[m] has joined #ipfs
ygrek has quit [Ping timeout: 240 seconds]
lothar_m has joined #ipfs
AkhILman has quit [Read error: No route to host]
koalalorenzo has quit [Quit: Sto andando via]
Encrypt has joined #ipfs
Encrypt has quit [Quit: Quit]
wallacoloo_____ has quit [Quit: wallacoloo_____]
Oatmeal has joined #ipfs
pfrazee has joined #ipfs
ylp has quit [Ping timeout: 255 seconds]
wkennington has quit [Quit: Leaving]
pfrazee has quit [Ping timeout: 240 seconds]
aquentson has joined #ipfs
maxlath has quit [Ping timeout: 268 seconds]
cwahlers_ has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
cwahlers has quit [Ping timeout: 255 seconds]
Foxcool has joined #ipfs
cemerick has quit [Ping timeout: 240 seconds]
cemerick has joined #ipfs
espadrine_ has joined #ipfs
Foxcool has quit [Read error: Connection reset by peer]
Foxcool has joined #ipfs
aquentson1 has joined #ipfs
aquentson has quit [Ping timeout: 240 seconds]
aquentson has joined #ipfs
aquentson1 has quit [Ping timeout: 260 seconds]
tmg has quit [Ping timeout: 255 seconds]
Encrypt has joined #ipfs
pfrazee has joined #ipfs
AniSkywalker has joined #ipfs
<AniSkywalker> morning
maxlath has joined #ipfs
<SchrodingersScat> AniSkywalker: welcome back, we've been waiting
<AniSkywalker> Not to be creepy, but to be creepy.
pfrazee has quit [Ping timeout: 240 seconds]
cemerick has quit [Ping timeout: 240 seconds]
Guest156247[m] has joined #ipfs
ruby32 has quit [Remote host closed the connection]
ruby32 has joined #ipfs
kthnnlg has joined #ipfs
cxl000 has quit [Ping timeout: 240 seconds]
ylp has joined #ipfs
ianopolous has quit [Ping timeout: 260 seconds]
ruby32 has quit [Ping timeout: 260 seconds]
<kthnnlg> Hi All, I have a usb drive that I would like to have load at boot time in my nixos machine. I would like to use luks to encrypt the partition. Unfortunately, I'm having trouble getting nixos to recognize the drive at boot time. My suspicion is that the uuid entry that appears in /dev/disk/by-uuid keeps changing every time I reboot. Does anyone have a pointer to the proper method for adding an encrypted USB drive to an existi
<SchrodingersScat> this is the channel for ipfs
<kthnnlg> oops, apologies
Foxcool has quit [Ping timeout: 240 seconds]
<MikeFair> SchrodingersScat: How opposed would people get to installing an OTP library into ipfs daemon
<SchrodingersScat> idk what that is but I'm passionately opposed to it.
<MikeFair> hehe --
<MikeFair> it's a one time pad/passcode generator
mildred1 has quit [Ping timeout: 260 seconds]
<MikeFair> So you've got something on your machine/phone that generates a code based on a secret seed
<Mateon1> MikeFair: Any reason? I'm opposed to bloat, so that depends
<MikeFair> Mateon1: Authentication for IPNS might be a current use case
<MikeFair> Mateon1: I'm working toward this idea of a signing server
<Mateon1> OTP is usually not the answer with crypto, and it's hard/impossible to make a true OTP
<MikeFair> Mateon1: But with ipns you could validate the person had the private encryption key and knew the OTP
<MikeFair> Mateon1: it's not _the answer_
<MikeFair> but making people keep these private key files private is, well, equally ludicrous
<SchrodingersScat> you can't make me do anything
<MikeFair> SchrodingersScat: exactly
<MikeFair> but sticking with just ipns for a moment; I can't make a varying pass phrase, or a varying encryption algorithm
<MikeFair> but I can send a time varying encrypted message
<Mateon1> Wait, "something that generates a code based on a secret seed" then that's not a OTP
<MikeFair> So the idea is that the ipfs servers would also validate the OTP code that belongs to the IPFS update
<MikeFair> Mateon1: Pseudo-OTP; a cipher stream
<MikeFair> Mateon1: one-time passcode ; not a true one-time pad
<MikeFair> (because true one-time pads are nearly impossible to securely communicate anyway)
<MikeFair> So when I send in an IPNS update, I have to use this OTP code to encrypt the message
<SchrodingersScat> why not ipfs name publish
<MikeFair> IPFS will see what node I'm trying to update, generate the OTP code, decrypt the message; then validate that it's been encrypted with the private key
<MikeFair> SchrodingersScat: this would be "ipfs name publish -otp 4583573"
mildred1 has joined #ipfs
<MikeFair> It's just proving that the thing sending the message was in the presence of the correct OTP code when the message was generated
<MikeFair> not "just proving" ; but proving
<MikeFair> The ipfs server is already validating the message came from the right private key; this is giving it that extra factor of authentication
<MikeFair> I think I can use it to create a "user login" session
<MikeFair> though even if it kept to only ipns updates it'd still make sense and be useful
<betei2> Are there any plans to implement some kind of history into IPNS? Or is there already?
<MikeFair> betei2: AFAIK there's no history of ipns hash values; but obviously the ipfs hashes themselves persist
<MikeFair> betei2: You're thinking that perhaps if I looked at an IPNS node; I'd see a list of historical values; the top one being the "current" one
<MikeFair> Mateon1: I think it'd be something like this: https://github.com/hgfischer/go-otp
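The kind of code MikeFair is describing is standard HOTP/TOTP; a minimal sketch of the underlying RFC 4226 computation in Go follows. The seed is a placeholder, and nothing here is an existing go-ipfs feature or flag.

    package main

    import (
        "crypto/hmac"
        "crypto/sha1"
        "encoding/binary"
        "fmt"
        "time"
    )

    // hotp computes an RFC 4226 one-time code from a shared secret and a counter.
    func hotp(secret []byte, counter uint64, digits int) uint32 {
        var msg [8]byte
        binary.BigEndian.PutUint64(msg[:], counter)
        mac := hmac.New(sha1.New, secret)
        mac.Write(msg[:])
        sum := mac.Sum(nil)
        off := int(sum[len(sum)-1] & 0x0f) // dynamic truncation offset
        code := binary.BigEndian.Uint32(sum[off:off+4]) & 0x7fffffff
        mod := uint32(1)
        for i := 0; i < digits; i++ {
            mod *= 10
        }
        return code % mod
    }

    func main() {
        // TOTP-style usage: derive the counter from the current 30-second window.
        secret := []byte("seed-shared-between-node-and-user") // placeholder seed
        fmt.Printf("%06d\n", hotp(secret, uint64(time.Now().Unix()/30), 6))
    }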
<betei2> Exactly, that if i know an IPNS hash, but want to get the hash of an older version it previously pointed to
vapid has left #ipfs ["http://quassel-irc.org - Chat comfortably. Anywhere."]
<MikeFair> betei2: 1) I could see that being a useful thing but I'm not sure how to express accessing the various histories offhand (maybe something like an ipld array); 2) you can emulate it with a 2nd ipns address
pfrazee has joined #ipfs
<MikeFair> the second address would link to an IPLD database with links to the historical values; each time you published IPNS, you'd update this ipld database; then publish that to the other ipns address
<MikeFair> it'd be wrapped in a script
<MikeFair> betei2: The main challenge I see with updating the direct ipns node is how do you distinguish what you're requesting
<MikeFair> ipfs file --stat /ipns/ipnshash
dimitarvp has joined #ipfs
<MikeFair> it looks like maybe ipfs file --history /ipns/ipnshash
<MikeFair> it looks like maybe ipfs file --history /ipns/ipnshash/1/
<MikeFair> it looks like maybe ipfs file --history /ipns/ipnshash/0/ <--- gives you the current one (or would it be the first one?)
<Mateon1> Currently, IPNS doesn't support history, and `ipfs files` is reserved for the files api (aka MFS), which would get confusing
<dimitarvp> are there any plans for IPNS to support history, blockchain style?
<Mateon1> Once history is thought out, and maybe the git-like structure, maybe: `ipfs name resolve --revision <x> /ipns/hash/path/to/thing`
<dimitarvp> that'd be extremely important f.ex. in an event of a very strong attack on the network.
<SchrodingersScat> merkledags
<Mateon1> dimitarvp: I believe this is a long term plan, or at least was some time ago. Currently there are no short term efforts on that
<dimitarvp> Mateon1: thank you. ^_^ it's good to know and it's definitely not a judgement from my side
<dimitarvp> btw is there an automated tester for IPFS clients? I mean, if somebody is writing an IPFS client in an other language than Go and wants to check if their code is behaving properly, does there exist an agent that can "audit" an IPFS client implementation, automatically?
<MikeFair> dimitarvp: I don't know about that; but I've used ipfs.io/ipfs/hash to test that my commits "made it"
<dimitarvp> MikeFair: it's a start. I was wondering f.ex. if I try to create an IPFS client, could an automated testing procedure try to send me different IPFS protocol packets and inspect if the responses of my implementation are correct
<dimitarvp> like, if the implementation is a "good citizen", so to speak :D
john2 has joined #ipfs
<MikeFair> dimitarvp: You're thinking a server type thingy that can participate with other daemons
<dimitarvp> MikeFair: maybe I am missing something, but isn't every IPFS client sort of like a torrent client? Can both seed and leech.
<MikeFair> I think of a client that submits to a local ipfs daemon
<dimitarvp> ah, sorry for the confusion
<dimitarvp> I meant full IPFS daemon then.
<MikeFair> I'd have to leave that to others; but I'd think the go test should work
cwahlers_ has quit [Ping timeout: 240 seconds]
john1 has quit [Ping timeout: 240 seconds]
<MikeFair> but I can't say if they've written "network protocol bot tester"
<dimitarvp> MikeFair: thanks. I am thinking stuff like github.com/shadow/shadow, for IPFS, with extra functionality that validates a daemon's responses are protocol-compliant.
<dimitarvp> there's also github.com/whyrusleeping/iptb, I am yet to look at that though.
john3 has joined #ipfs
john2 has quit [Ping timeout: 260 seconds]
appa has quit [Ping timeout: 240 seconds]
pfrazee has quit [Ping timeout: 240 seconds]
Akaibu has quit [Quit: Connection closed for inactivity]
ruby32 has joined #ipfs
tr909 has quit [Ping timeout: 252 seconds]
suttonwilliamd_ has joined #ipfs
suttonwilliamd has quit [Ping timeout: 260 seconds]
cemerick has joined #ipfs
toXel has quit [Remote host closed the connection]
Guest156348[m] has joined #ipfs
toXel has joined #ipfs
gmoro_ has quit [Ping timeout: 268 seconds]
Caterpillar has joined #ipfs
cwahlers has joined #ipfs
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 240 seconds]
Aranjedeath has joined #ipfs
pawalls has quit [Remote host closed the connection]
<MikeFair> Does anyone know if the javascript api can publish to an ipns address?
pawalls has joined #ipfs
<AniSkywalker> Hey, I'm a bit confused about the proof of retrievability filecoin specifies--it has sk_i, pk_i, theta_i <-- PoR.Setup(piece_i); does this mean that for every given piece, there is a secret key, public key, and authenticator? Because I've also seen the theta referenced as a plurality, which would imply multiple authenticators.
Encrypt has quit [Quit: Quit]
vapid has joined #ipfs
corvinux has joined #ipfs
espadrine has joined #ipfs
rcat has joined #ipfs
rcat has quit [Remote host closed the connection]
ralphtheninja has quit [Remote host closed the connection]
ralphtheninja has joined #ipfs
rcat has joined #ipfs
<AniSkywalker> Anyone here very familiar with Shacham and Waters' Proofs of Retrievability and mind answering a few questions?
pfrazee has joined #ipfs
corvinux has quit [Quit: Leaving]
renat has joined #ipfs
* MikeFair listens to the crickets chirp.
<MikeFair> I suspect they're just away atm
vapid has quit [Remote host closed the connection]
aquentson has quit [Ping timeout: 260 seconds]
appa has joined #ipfs
<renat> hello everyone
<renat> is there an IPFS slack channel I could join
<renat> If anyone has an invited I'd appreciate it, thank you
<cblgh> this is it
<renat> oh no slack channel?
<cblgh> nope
<renat> alright thanks for letting me know
<renat> ipfs is pretty damn cool
<renat> permanent fabric of the web
<noffle> MikeFair: I don't think so: https://github.com/ipfs/js-ipfs/issues/209
<noffle> but the js api to go-ipfs certainly can
aquentson has joined #ipfs
<cblgh> agreed renat
<__uguu__> ipfs needs to start considering a native port tbh
[1]MikeFair has joined #ipfs
<__uguu__> one that is lean and can run on soho routers
<noffle> __uguu__: there's a desire for a lower level impl: https://github.com/ipfs/ipfs/issues/164
MikeFair has quit [Ping timeout: 268 seconds]
[1]MikeFair is now known as MikeFair
<__uguu__> yeah
<noffle> PRs welcome! :)
<__uguu__> imo it doesn't need to implement every single feature just enough to work
<__uguu__> i.e. doesn't need websockets
<__uguu__> ipfs seems to be lost in high level abstraction hell, a lower level impl would help ground it maybe
SuprDewd has quit [Read error: Connection reset by peer]
<__uguu__> looks rather active too
SuprDewd has joined #ipfs
chris613 has joined #ipfs
renat has quit [Ping timeout: 260 seconds]
cxl000 has joined #ipfs
ShalokShalom_ is now known as ShalokShalom
rcat has quit [Ping timeout: 240 seconds]
rcat has joined #ipfs
mguentner2 is now known as mguentner
<mguentner> __uguu__: imho a security nightmare
chungy has quit [Ping timeout: 255 seconds]
cwill has quit [Ping timeout: 240 seconds]
cwill has joined #ipfs
SuprDewd has quit [Read error: Connection reset by peer]
SuprDewd has joined #ipfs
chungy has joined #ipfs
<__uguu__> idk about that
<__uguu__> if you write low quality code nothing stops a security nightmare
matoro has quit [Remote host closed the connection]
cemerick has quit [Ping timeout: 240 seconds]
cemerick has joined #ipfs
absdog18[m] has joined #ipfs
<mguentner> __uguu__: it's 2017, there is no need to write stuff like ipfs in c, at least use modern c++ or something that does not give you enough rope to shoot yourself in the foot
<__uguu__> i disagree
<__uguu__> C++ is a bigger "security nightmare" than C if you go by code complexity
ygrek has joined #ipfs
Mossfeldt has joined #ipfs
ygrek has quit [Ping timeout: 255 seconds]
<AniSkywalker> I think I've got it!
<AniSkywalker> I'm working on a PoR implementation in Go for FileCoin
<__uguu__> awesome
<frood> AniSkywalker: are you this guy? https://github.com/CapacitorSet/por
<AniSkywalker> No but that's my reference.
maxlath has quit [Ping timeout: 240 seconds]
<AniSkywalker> I'm making mine much more pluggable.
<frood> ok. he uses the RSA construction, and last I talked to him it crashed his machine while encoding a small file.
<frood> there's also the old Storj implementation of the privately-verifiable construction in C++. https://github.com/storjold/heartbeat
<frood> re: go-por: he said "I eventually tested on a 1 MiB file, and it froze my computer for several minutes while signing - I'm currently waiting for an emergency shell to appear so I can kill the process."
<frood> so you should probably use the BLS construction instead of the RSA one.
<AniSkywalker> frood is that a fault of RSA or is it more about his implementation?
<AniSkywalker> Also, frood would I not use RSA anyways for the public/private key pairs?
<frood> I would guess that the RSA construction is very inefficient. try the BLS one instead
<AniSkywalker> Does Golang have an implementation?
<frood> No. afaik, nobody has implemented the publicly-verifiable BLS construction
<AniSkywalker> I meant the BLS part.
matoro has joined #ipfs
<frood> oh geez, I don't know.
<frood> BLS is not exactly a widely-used algorithm.
<frood> how's your linear algebra? :)
<AniSkywalker> Well, I am a high school student who is in geometry, so I'm in for some fun.
<AniSkywalker> However, I've got some great algebra professors who could help me understand that.
<AniSkywalker> It doesn't look too complicated, frood, from the Wikipedia page.
<AniSkywalker> frood How about I finish the RSA implementation and see how good/bad it is.
vapid has joined #ipfs
<AniSkywalker> Since that performance on 1MB seems almost comical.
<frood> I'd like to see the result
<frood> I know the Storj C++ implementation of privately verifiable BLS encoded files at ~4MiB/s
<frood> I'd hazard a guess that the publicly verifiable construction is less efficient. but I don't know of any hard numbers on that.
<frood> (guess is based on complexity)
<AniSkywalker> frood I think I just realized what his problem might have been
<AniSkywalker> He had the shard size hard-coded at 3 bytes.
<AniSkywalker> Which means he's generating a K/V pair for every three bytes and signing
<frood> ah hmm, lambda is 10 bytes in the paper, so he's not far off.
<AniSkywalker> frood Huh? Where are you looking? By shard I mean dividing the data in to pieces.
<AniSkywalker> Which would be the equivalent of a block in IPFS.
<frood> hold on, let me pull up the paper
<frood> so you're chunking the file, then applying erasure codes, then signing each chunk. these chunks are not equivalent to IPFS blocks
Encrypt has joined #ipfs
<frood> or rather, you're making tags for each chunk, then signing each tag
<frood> look at 6.1: PubRSA.St.
<AniSkywalker> frood You apply the erasure coding (which effectively chunks it since I set the default to 0 parity) and then sign each.
<frood> File M. Erasure code M -> M'. Split M' into n blocks with s sectors. generate τ_0, then sign it to make the tag
bsm117532 has quit [Ping timeout: 260 seconds]
<frood> then compute the authenticators σ_i
maxlath has joined #ipfs
<AniSkywalker> Right, but n blocks with s sectors is effectively erasure code, no? I.e. M [][]byte where first dimension is blocks and second is sectors?
<AniSkywalker> frood or am I misunderstanding it?
<AniSkywalker> I.e. I could apply reed-solomon, and then generate tau 0 and sign it?
rendar has quit [Ping timeout: 240 seconds]
<AniSkywalker> And then go through and for each block compute the authenticator?
<AniSkywalker> Or do you need to erasure code it first?
<frood> if I understand correctly, erasure coding is critical for the adversarial retrieval property
<frood> also, check the parameter selection section. it's assumed that n, the number of blocks, is much greater than lambda
<frood> lambda being the security parameter, which is 80 bits in the BLS construction
<frood> they don't seem to parameterize lambda in the RSA construction
M156463[m] has joined #ipfs
<frood> but given the assumption, I'd say that RS is an okay chunking method if you use many many blocks.
<AniSkywalker> Wait, security bits? Wouldn't that be the RSA 2048 part?
<AniSkywalker> Per the por package: "The current implementation tags the file example.txt with an RSA 2048-bit keypair (λ_1 - 1 = 2048), issues a verification request with l = 2 elements, and checks for its correctness."
M156463[m] has left #ipfs [#ipfs]
<frood> nah, λ_1 is explicitly defined in the RSA construction.
<frood> λ is defined earlier in the paper in the common terms section
matoro has quit [Ping timeout: 255 seconds]
<frood> see the discussion in the parameter selection section
<AniSkywalker> I see. That means the number of blocks needs to be greater than lambda, which is what in the case of RSA?
<AniSkywalker> That's what I can't find, the relation between lambda and lambda_1
<AniSkywalker> frood: section 1.1 suggests 80, but how does that translate to RSA?
ajsantos has joined #ipfs
robattila256 has joined #ipfs
<frood> one sec, reading
aquentson has quit [Ping timeout: 240 seconds]
<AniSkywalker> Also, frood would you be interested in helping implement BLS in Go? I can't find anything on it and barely anything in other languages.
<frood> I don't really have time for that kind of project. :(
<frood> I'm already spending my sunday afternoons discussing PoR on IRC. I need some breaks
aquentson has joined #ipfs
<AniSkywalker> Heh, that's fine.
<frood> the RSA construction seems to rely only on λ_1 and λ_2. It says that you should pick λ_1 appropriate to λ.
ajsantos has left #ipfs [#ipfs]
<frood> so basically, just pick an RSA bitlength that you think is hard to factor. :)
<AniSkywalker> 2048? :P
<frood> λ_2 loosely depends on λ, as it is derived from max B, which is a bitstring of length λ
<frood> (or less)
matoro has joined #ipfs
<frood> tbh, I would stick with λ=80, as I can't find anything else about it.
<AniSkywalker> frood to be clear, that's fine for RSA bitlength?
<AniSkywalker> As in rsa.GenerateKey(rand.Reader, 80)
<frood> params I'd use: λ=80, λ_1=2048, λ_2=88
<frood> so it'd be rsa.GenerateKey(rand.Reader, 2048)
suttonwilliamd_ has quit [Ping timeout: 268 seconds]
Mossfeldt has quit [Quit: Bye]
rendar has joined #ipfs
ZaZ has joined #ipfs
<AniSkywalker> frood one last thing, and thanks so much for helping: it says to choose s random elements from Z*_n, what does it mean?
<AniSkywalker> I.e. what's the U component of Tau_zero
<AniSkywalker> Or does it mean use the name as a source of random?
<frood> Z_n is the group of integers modulo n
<frood> so {0, 1, 2, ..., n-1}
Jean-PierreLauri has joined #ipfs
<frood> s is the number of sectors in a chunk (1 by default. see the section about tradeoff of storage vs communication)
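Under the parameters frood suggests (a 2048-bit modulus for lambda_1), the setup step being discussed might look roughly like this in Go; sampling from Z*_N is just drawing integers in [1, N) that are coprime to N. Variable names here are illustrative, not from the paper or the go-por code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "fmt"
        "math/big"
    )

    func main() {
        // Generate the RSA keypair used for tagging (lambda_1 = 2048).
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        N := key.N
        s := 1 // sectors per block (1 by default, per the tradeoff discussion)

        // Pick s random elements of Z*_N, i.e. integers in [1, N) coprime to N,
        // to serve as the u_k components of tau_0.
        u := make([]*big.Int, 0, s)
        one := big.NewInt(1)
        for len(u) < s {
            x, err := rand.Int(rand.Reader, N)
            if err != nil {
                panic(err)
            }
            if x.Sign() == 0 {
                continue
            }
            if new(big.Int).GCD(nil, nil, x, N).Cmp(one) == 0 {
                u = append(u, x)
            }
        }
        fmt.Println("picked", len(u), "element(s) of Z*_N for tau_0")
    }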
<AniSkywalker> I really do need to get into linear algebra. Anyways, so my understanding thus far is you make an authenticator for each piece
<AniSkywalker> And then those go into the higher-up "tau"
<AniSkywalker> Only confusing part, is it one tau per piece? frood
<AniSkywalker> Or am I missing something here?
<frood> There are i blocks, each with a tau, and a sigma
<frood> or rather, there are n blocks, and i is an index where 1 <= i <= n
anewuser has joined #ipfs
<frood> so there is block m_i, which has tag tau_i, and authenticator sigma_i
anewuser has quit [Remote host closed the connection]
<frood> scratch that, there is only 1 tau.
<frood> there is a sigma for each block.
<AniSkywalker> Ah I see. So the overarching path is input data -> erasure encoding -> generate sigma per piece -> generate tau?
anewuser has joined #ipfs
<frood> tau can be generated before the sigmas
<AniSkywalker> Where a piece is some part of the input data, so if we imagine the erasure encoding is just splitting it into sections ("abcd" => "ab", "cd") then it would generate the sigma for "ab" and "cd" and return the tau and sigmas?
<frood> seems right
<frood> the filetag, tau, seems to be mostly for identification
<frood> anyway, I gotta head out
<AniSkywalker> alright, thanks
<frood> tau may also function as a verification of the name, which is included in the authenticators (sigma)
<frood> good luck :)
<AniSkywalker> thanks :)
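For anyone following along later, a toy Go sketch of the pipeline just summarised (input data, split into blocks, one authenticator sigma_i per block) using the RSA construction with one sector per block. A plain fixed-size split stands in for real erasure coding, and hashing into Z*_N is simplified; none of this is what the paper or FileCoin prescribes verbatim.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/sha256"
        "fmt"
        "math/big"
    )

    func main() {
        // Errors and the coprimality check for u are skipped in this toy.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        N, d := key.N, key.D

        // u: a random element of Z*_N (part of tau_0).
        u, _ := rand.Int(rand.Reader, N)

        name := []byte("example-file")
        data := []byte("abcdefgh") // stand-in for the erasure-coded file
        blockSize := 2             // toy block size, s = 1 sector per block

        // sigma_i = (H(name || i) * u^{m_i})^d mod N, one per block.
        for i := 0; i*blockSize < len(data); i++ {
            end := (i + 1) * blockSize
            if end > len(data) {
                end = len(data)
            }
            m := new(big.Int).SetBytes(data[i*blockSize : end])

            h := sha256.Sum256(append(name, byte(i))) // toy index encoding
            hi := new(big.Int).Mod(new(big.Int).SetBytes(h[:]), N)

            sigma := new(big.Int).Mul(hi, new(big.Int).Exp(u, m, N))
            sigma.Mod(sigma, N)
            sigma.Exp(sigma, d, N)
            fmt.Printf("block %d: sigma has %d bits\n", i, sigma.BitLen())
        }
    }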
pfrazee has quit [Ping timeout: 240 seconds]
cemerick has quit [Ping timeout: 260 seconds]
pfrazee has joined #ipfs
redfish has quit [Ping timeout: 268 seconds]
Oatmeal has quit [Ping timeout: 240 seconds]
Oatmeal has joined #ipfs
tmg has joined #ipfs
rcat has quit [Ping timeout: 260 seconds]
reit has quit [Quit: Leaving]
Encrypt has quit [Quit: Quit]
mildred2 has joined #ipfs
ZaZ1 has joined #ipfs
Encrypt has joined #ipfs
john3 has quit [Ping timeout: 268 seconds]
ianopolous has joined #ipfs
ZaZ has quit [Ping timeout: 260 seconds]
Encrypt has quit [Client Quit]
mildred1 has quit [Ping timeout: 260 seconds]
kthnnlg has quit [Ping timeout: 268 seconds]
ZaZ1 has quit [Read error: Connection reset by peer]
espadrine_ has quit [Ping timeout: 240 seconds]
aquentson1 has joined #ipfs
ygrek has joined #ipfs
aquentson has quit [Ping timeout: 240 seconds]
rcat has joined #ipfs
maxlath1 has joined #ipfs
maxlath has quit [Ping timeout: 260 seconds]
maxlath1 is now known as maxlath
silotis has quit [Remote host closed the connection]
silotis has joined #ipfs
aquentson has joined #ipfs
aquentson1 has quit [Ping timeout: 260 seconds]
ylp has quit [Ping timeout: 255 seconds]
cxl000 has quit [Quit: Leaving]
pawalls has quit [Ping timeout: 240 seconds]
<AniSkywalker> frood So what I've figured out is that there's a fundamental disconnect between the PoR specification and FileCoin--I don't need to implement erasure encoding in the PoR library for FileCoin because that's a fundamental part of FileCoin itself--the notion of parts.
pawalls has joined #ipfs
<AniSkywalker> So all I really have to make is a Piece => Private key, public key, Authenticator function to get the authenticators for a piece.
<AniSkywalker> Which means I just need to implement https://hastebin.com/atikopigub.scala
ygrek has quit [Ping timeout: 268 seconds]
lothar_m_ has joined #ipfs
lothar_m has quit [Ping timeout: 240 seconds]
horrified has quit [Quit: quit]
webdev007 has joined #ipfs
horrified has joined #ipfs
ylp has joined #ipfs
ygrek has joined #ipfs
arkimedes has joined #ipfs
ruby32 has quit [Remote host closed the connection]
ruby32 has joined #ipfs
ruby32 has quit [Client Quit]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<AniSkywalker> Who wrote the filecoin spec?
[1]MikeFair has joined #ipfs
MikeFair has quit [Ping timeout: 260 seconds]
[1]MikeFair is now known as MikeFair
webdev007 has quit [Ping timeout: 268 seconds]
<AniSkywalker> So the filecoin spec has record_i = (H(p_i); H(theta_i); pk_i; reward_i), but it describes the theta without the hash around it.
<AniSkywalker> It describes the piece with the hash, so I'm wondering which it is.
<AniSkywalker> H(theta) or theta
<SchrodingersScat> he's asking too many questions, he's on to us
<AniSkywalker> SchrodingersScat I've observed it but it's not taking a state.
chriscool has quit [Ping timeout: 240 seconds]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
dimitarvp has quit [Ping timeout: 240 seconds]
webdev007 has joined #ipfs
webdev007 has quit [Max SendQ exceeded]
webdev007 has joined #ipfs
webdev007 has quit [Max SendQ exceeded]
webdev007 has joined #ipfs
webdev007 has quit [Max SendQ exceeded]
bwn has quit [Ping timeout: 276 seconds]
ylp has quit [Ping timeout: 255 seconds]
bwn has joined #ipfs
hexican[m] has joined #ipfs
JustinDrake has joined #ipfs
webdev007 has joined #ipfs
ylp has joined #ipfs
<JustinDrake> Is there IPFS example code to Cat from the DAG? I’m looking for minimal code in a similar vein to https://github.com/libp2p/go-libp2p/tree/master/examples/echo
<AniSkywalker> JustinDrake have you checked go-ipfs?
<AniSkywalker> Wait, cat from the DAG? Do you mean follow the DAG to the path?
<JustinDrake> I have a resolved path (e.g. /ipfs/QmPofcak7rgNQJNkQRNrBhPwvdqm3WU3sxGzVU2SqvhL4C) and want to traverse and download the associated blocks
<AniSkywalker> Oh, check the code in go-ipfs for "ipfs get"
webdev007 has quit [Quit: Leaving]
robattila256 has quit [Quit: WeeChat 1.7]
suttonwilliamd has joined #ipfs
rcat has quit [Quit: Lost terminal]
robattila256 has joined #ipfs
rcat has joined #ipfs
wallacoloo_____ has joined #ipfs
jkilpatr has quit [Ping timeout: 240 seconds]
andoma has quit [Remote host closed the connection]
ylp has quit [Ping timeout: 255 seconds]
andoma has joined #ipfs
matoro has quit [Ping timeout: 240 seconds]
tmg has quit [Ping timeout: 255 seconds]
hexican[m] has left #ipfs ["User left"]
<Kubuxu> JustinDrake: go-ipfs/core/core-api.Cat
<JustinDrake> @Kubuxu: The UnixfsAPI assumes a core.IpfsNode which is very high level. I was hoping for something maximally lightweight.
<Kubuxu> we have plans for more lightweight API
<Kubuxu> what is your setup, do you have an external node running?
tmg has joined #ipfs
<JustinDrake> I’m writing a custom IPFS scraper for OpenBazaar. I think the Bitswap API is more or less what I’m looking for.
<Kubuxu> Hmm, it will be hard to explain and I am off to sleep. Do you have an issue open with explanation? What I can tell you right away is that "minimal" will be hard.
matoro has joined #ipfs
<JustinDrake> I’ll look into it more deeply tomorrow. Good night.
JustinDrake has quit [Quit: JustinDrake]
reit has joined #ipfs
espadrine has quit [Ping timeout: 240 seconds]
rcat has quit [Ping timeout: 240 seconds]
maxlath has quit [Quit: maxlath]
andoma has quit [Ping timeout: 240 seconds]
andoma has joined #ipfs
rcat has joined #ipfs
rcat has quit [Ping timeout: 260 seconds]