weez17 has quit [Read error: Connection reset by peer]
weez17 has joined #ipfs
pcardune has quit [Remote host closed the connection]
mauz555 has joined #ipfs
xenial-user2 has joined #ipfs
mauz555 has quit [Ping timeout: 240 seconds]
JimmieD has joined #ipfs
mauz555 has joined #ipfs
b5 has quit [Ping timeout: 240 seconds]
infinity0_ has joined #ipfs
infinity0_ has quit [Changing host]
infinity0 has joined #ipfs
infinity0 is now known as Guest68726
xenial-user2 has quit [Quit: Leaving]
b5 has joined #ipfs
mauz555 has quit [Ping timeout: 265 seconds]
pcardune has joined #ipfs
pcardune has quit [Ping timeout: 265 seconds]
seizo has quit [Ping timeout: 260 seconds]
b5_ has joined #ipfs
b5 has quit [Ping timeout: 256 seconds]
jkrone has quit [Quit: Leaving]
jkrone has joined #ipfs
ONI_Ghost has quit [Remote host closed the connection]
ONI_Ghost has joined #ipfs
ONI_Ghost has quit [Changing host]
ONI_Ghost has joined #ipfs
coursetro has quit [Ping timeout: 260 seconds]
espadrine_ has quit [Ping timeout: 256 seconds]
mauz555 has joined #ipfs
sz0 has quit [Quit: Connection closed for inactivity]
daMaestro has joined #ipfs
pcardune has joined #ipfs
mauz555 has quit [Ping timeout: 256 seconds]
pcardune has quit [Ping timeout: 260 seconds]
mauz555 has joined #ipfs
chowie has joined #ipfs
colatkinson has joined #ipfs
jared4dataroads has quit [Ping timeout: 276 seconds]
clemo has quit [Ping timeout: 264 seconds]
mauz555 has quit [Ping timeout: 240 seconds]
dimitarvp has quit [Quit: Bye]
b5_ has quit [Ping timeout: 264 seconds]
chowie has quit [Ping timeout: 255 seconds]
mauz555 has joined #ipfs
Xiti` has quit [Quit: Xiti`]
mauz555 has quit [Ping timeout: 256 seconds]
Xiti has joined #ipfs
pcardune has joined #ipfs
mauz555 has joined #ipfs
pcardune has quit [Ping timeout: 264 seconds]
mauz555 has quit [Ping timeout: 256 seconds]
infinisil has quit [Ping timeout: 240 seconds]
misuto has quit [Quit: No Ping reply in 180 seconds.]
jpaa has quit [Ping timeout: 260 seconds]
misuto has joined #ipfs
fireglow has quit [Ping timeout: 255 seconds]
jared4dataroads has joined #ipfs
infinisil has joined #ipfs
fireglow has joined #ipfs
user51 has joined #ipfs
jpaa has joined #ipfs
user_51 has quit [Ping timeout: 255 seconds]
mauz555 has joined #ipfs
mauz555 has quit [Ping timeout: 240 seconds]
pcardune has joined #ipfs
Candida has joined #ipfs
pcardune has quit [Ping timeout: 260 seconds]
colatkinson has quit [Ping timeout: 240 seconds]
b5 has joined #ipfs
b5 has quit [Ping timeout: 264 seconds]
JimmieD has quit [Remote host closed the connection]
JimmieD has joined #ipfs
DJ-AN0N has quit [Quit: DJ-AN0N]
pcardune has joined #ipfs
}ls{ has quit [Ping timeout: 264 seconds]
zautomata1 has joined #ipfs
chriscool1 has joined #ipfs
}ls{ has joined #ipfs
zautomata has quit [Ping timeout: 264 seconds]
opal has left #ipfs ["i'm never coming back"]
daMaestro has quit [Quit: Leaving]
mauz555 has joined #ipfs
mrBen2k2k2k has joined #ipfs
mauz555 has quit [Ping timeout: 240 seconds]
MrSparkle has joined #ipfs
MrSparkle has quit [Client Quit]
MrSparkle has joined #ipfs
colatkinson has joined #ipfs
chriscool2 has joined #ipfs
chriscool1 has quit [Read error: Connection reset by peer]
Candida has quit [Remote host closed the connection]
pcardune has quit [Remote host closed the connection]
treethought has joined #ipfs
ulrichard has joined #ipfs
lemonpepper24 has quit [Quit: Leaving]
espadrine_ has joined #ipfs
jkrone has quit [Remote host closed the connection]
pcardune has joined #ipfs
pcardune has quit [Ping timeout: 264 seconds]
jared4dataroads has quit [Ping timeout: 260 seconds]
treethought has quit [Remote host closed the connection]
treethought has joined #ipfs
treethought has quit [Ping timeout: 264 seconds]
chowie has joined #ipfs
MikeFair has joined #ipfs
MikeFair has quit [Client Quit]
colatkinson has quit [Quit: colatkinson]
bht-ricardo[m] has joined #ipfs
brechtvb[m] has joined #ipfs
pcardune has joined #ipfs
colatkinson has joined #ipfs
cwahlers_ has quit [Quit: Gone fishing]
pcardune has quit [Ping timeout: 264 seconds]
espadrine_ has quit [Ping timeout: 265 seconds]
colatkinson has quit [Client Quit]
yank has joined #ipfs
yank has left #ipfs [#ipfs]
yank has joined #ipfs
<yank>
.
}ls{ has quit [Quit: real life interrupt]
xzha has joined #ipfs
movedx has left #ipfs [#ipfs]
mtodor has joined #ipfs
fazo96 has joined #ipfs
colatkinson has joined #ipfs
pcardune has joined #ipfs
cwahlers has joined #ipfs
xzha has quit [Ping timeout: 260 seconds]
yank has quit [Ping timeout: 260 seconds]
pcardune has quit [Ping timeout: 268 seconds]
trqx has quit [Ping timeout: 268 seconds]
mauz555 has joined #ipfs
mauz555 has quit [Read error: Connection reset by peer]
mauz555 has joined #ipfs
trqx has joined #ipfs
mauz555 has quit []
rendar has joined #ipfs
treethought has joined #ipfs
Miriamne has joined #ipfs
bht-ricardo[m] has quit [*.net *.split]
sushiogoto[m] has quit [*.net *.split]
Joakim[m] has quit [*.net *.split]
sppqd[m] has quit [*.net *.split]
afshin[m] has quit [*.net *.split]
SamLord[m] has quit [*.net *.split]
jaydenhawkes123[ has quit [*.net *.split]
nullc1pher[m] has quit [*.net *.split]
doubleorseven[m] has quit [*.net *.split]
Wikipedia[m] has quit [*.net *.split]
dyce[m] has quit [*.net *.split]
pubemail[m] has quit [*.net *.split]
renatocan[m] has quit [*.net *.split]
klara[m] has quit [*.net *.split]
lnxw37[m] has quit [*.net *.split]
superusercode has quit [*.net *.split]
cosmosinfo[m] has quit [*.net *.split]
yayota[m] has quit [*.net *.split]
billsam[m] has quit [*.net *.split]
mdrights[m] has quit [*.net *.split]
Polychrome[m] has quit [*.net *.split]
musicmatze[m] has left #ipfs ["User left"]
ylp has joined #ipfs
colatkinson has quit [Ping timeout: 264 seconds]
Trieste has quit [Ping timeout: 264 seconds]
Trieste has joined #ipfs
pcardune has joined #ipfs
pcardune has quit [Ping timeout: 240 seconds]
Miriamne has quit [Remote host closed the connection]
ygrek has joined #ipfs
upperdeck has quit [Ping timeout: 260 seconds]
upperdeck has joined #ipfs
MikeFair has joined #ipfs
akkumulator[m] has joined #ipfs
caveat has quit [Ping timeout: 255 seconds]
gmoro has joined #ipfs
gmoro has quit [Remote host closed the connection]
gmoro has joined #ipfs
bomb-on has quit [Quit: SO LONG, SUCKERS!]
plexigras has joined #ipfs
s1na has joined #ipfs
<s1na>
Hey guys, I was interested in working on semantic web (rdf, etc.) on top of ipfs (ipld). I saw there have been some discussions about this, have there been any efforts on implementing such a thing? can someone please point me in the right direction?
Augustus has joined #ipfs
vyzo has quit [Ping timeout: 276 seconds]
<MikeFair>
s1na, any specific questions you had in mind?
<MikeFair>
The way I understand semantic web is that it's much more of an addressing scheme than a storage/data routing scheme
vyzo has joined #ipfs
<MikeFair>
The closest thing I can think of is something I was looking at doing to create a new address "namespace"
<MikeFair>
Using this new address namespace you map "strings" to IPFS entries
<MikeFair>
and updating the mapping for these "strings" is access controlled via cryptoSignatures
<akkumulator[m]>
Semantic web is a machine interpretable representation of knowledge, not sure in which direction you're going and what should be the use of putting semantic web in ipfs
<MikeFair>
The "domain" portion of the path forms the "volume id" and is the "root" of the namespace and defines what cryptosignatures can update the records in the namespace under it
<MikeFair>
akkumulator[m], You could also see it as "alternative to something like IPLD"
bomb-on has joined #ipfs
<MikeFair>
akkumulator[m], and in that sense it'd be like a custom structured data crawler like Git, Ethereum, and BitCoin blockchain IPLD plugins
<akkumulator[m]>
Maybe, but I think the Semantic Web is something completely different.
<MikeFair>
the biggest disconnect I see is Semantic Web addresses identify mutable content
<akkumulator[m]>
As s1na referenced RDF etc.
<akkumulator[m]>
s1na: Is this for a bachelor thesis or something like that?
<s1na>
Thanks for the responses. Yes, perhaps I chose the wrong terms. What I mean to do is store rdf triples on ipfs, which i think has some benefits (and drawbacks). One would be to prevent data silos, as right now there are many vocabularies on individual servers. One would be immutability, and versioning
* MikeFair
googles "semantic web" to refresh his understanding.
<s1na>
yeah exactly, for a thesis
<MikeFair>
hmm: a proposed development of the World Wide Web in which data in web pages is structured and tagged in such a way that it can be read directly by computers.
<s1na>
and then, run sparql queries over the data on ipfs
<MikeFair>
I believe my "biggest disconnect" comment still stands
<akkumulator[m]>
RDF triples form knowledge and putting this on ipfs is not really necessary, as the URIs that are used as identifiers don't mean that the related ontologies are situated in the referenced URL
<MikeFair>
Semantic Web doesn't mesh directly with CAS
<MikeFair>
which goes back to creating a "Semantic Web Address Translation Layer"
<akkumulator[m]>
Actually, I believe that using another protocol wouldn't really matter, as the URIs would still be mainly used as identifiers.
<MikeFair>
Right, that's what I'm seeing; the real point is the URI is a "namespace" and you need to translate that address into the current CID
<akkumulator[m]>
Also, I studied semantic web about 10 years ago, and in practical research its relevance declines towards zero. =)
<MikeFair>
akkumulator[m], How different is IPLD really?
<MikeFair>
I see IPLD to Semantic Web as like XML to SGML
chriscool2 has quit [Quit: Leaving.]
<akkumulator[m]>
Just googled, indeed there are similarities
<s1na>
Hm, they are used as identifiers, you're right. I was wondering if that could be helped by storing the ontologies on ipfs, and using cid to refer to the objects instead of uri, so that in addition to namespace, the actual data also resides there, which would facilitate querying and logic verification
<s1na>
so think dbpedia on ipfs (but not limited to wikipedia content)
<akkumulator[m]>
I have learned to dislike static information representation
clemo has joined #ipfs
<akkumulator[m]>
Information changes so quickly that humans were never quick enough to update it, and so the machine interpretation through reasoning was always flawed
<akkumulator[m]>
best community driven ontology database was freebase, but I think that was bought by google and then shut down
<s1na>
I see, so you think this is a useless endeavor? thanks for the help anyhow :)
<akkumulator[m]>
I'm working in IT products and research, and in the research area it has become a pretty tiny field.
b5 has joined #ipfs
cxl000 has joined #ipfs
<akkumulator[m]>
The IPLD MikeFair talked about looks more interesting.
<MikeFair>
s1na, if I was doing what you're doing, and I kind of am, but for a different focus, here's what I'd do
<MikeFair>
Take a checksum of the address string and use that as an entry in a DAG
<MikeFair>
Get a CID from it
<MikeFair>
and pull the data from the IPFS CID
<MikeFair>
Now to update it, you need an "authority" test
b5 has quit [Ping timeout: 256 seconds]
<MikeFair>
How do you prove something has privileges to update the record of that checksum?
<MikeFair>
oh sorry, back up; the DAG entry is "Domain.dom/checksum"
b5 has joined #ipfs
<MikeFair>
You use a DNS TXT record to identify the root signing key (the authority that controls the checksum entries underneath the domain volume id)
<MikeFair>
that's about as much as I have atm; it's incomplete; but it's a start
<MikeFair>
You create a federated namespace, arbitrary string -> CID translation layer
<MikeFair>
the fact the path strings follow RDF is useful in the context of RDF, but not this 'address translation' layer
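A minimal sketch of the "string -> CID translation layer" MikeFair describes above, assuming a plain Python dict stands in for the DAG, Ed25519 via PyNaCl for the signatures, and that the domain's root verify key was obtained out of band (e.g. from the DNS TXT record he mentions); none of the names here are an existing IPFS API.

    import hashlib
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    namespace = {}  # stands in for the DAG: "domain/checksum" -> {"cid": ...}

    def record_key(domain, address):
        # the checksum of the arbitrary address string is the entry name under the domain "volume"
        return "{}/{}".format(domain, hashlib.sha256(address.encode()).hexdigest())

    def update_mapping(domain, address, cid, sig, root_verify_key):
        # only updates signed by the domain's root key (published e.g. in a DNS TXT record) are accepted
        key = record_key(domain, address)
        try:
            root_verify_key.verify("{}:{}".format(key, cid).encode(), sig)
        except BadSignatureError:
            raise PermissionError("update not signed by the domain's root key")
        namespace[key] = {"cid": cid}

    def resolve(domain, address):
        return namespace[record_key(domain, address)]["cid"]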
zautomata2 has joined #ipfs
Lieuwex has quit [Remote host closed the connection]
b5 has quit [Ping timeout: 256 seconds]
<MikeFair>
akkumulator[m], Like you I'm looking forward to the day when requesting a CID can work more like a web CGI call ;)
zautomata1 has quit [Ping timeout: 240 seconds]
Lieuwex has joined #ipfs
<MikeFair>
I even started picturing putting the full hard coded web server response data at the CID to statically serve it "as if" IPFS CID objects were being served over a REST API
JayFreemansaurik has joined #ipfs
zautomata3 has joined #ipfs
zautomata2 has quit [Ping timeout: 255 seconds]
<s1na>
MikeFair: that sounds really interesting. I'm glad there's someone else also working on something similar. Thanks for the tips.
<s1na>
I was thinking of moving authorization and name resolving to ethereum
<s1na>
do you think that could work?
<MikeFair>
I'm working on Stellar
<MikeFair>
same principle;
<MikeFair>
I would rather see IPFS get an upgraded concept of signer authority
<s1na>
oh, i didnt know stellar supports smart contracts
pcardune has joined #ipfs
<MikeFair>
It doesn't host the code, but it has a very robust "object metaphor" on its accounts
treethought has quit [Ping timeout: 240 seconds]
<MikeFair>
But more interesting is actually libp2p-consensus imho
Joakim[m] has joined #ipfs
bht-ricardo[m] has joined #ipfs
afshin[m] has joined #ipfs
pubemail[m] has joined #ipfs
jaydenhawkes123[ has joined #ipfs
sushiogoto[m] has joined #ipfs
Polychrome[m] has joined #ipfs
doubleorseven[m] has joined #ipfs
sppqd[m] has joined #ipfs
nullc1pher[m] has joined #ipfs
billsam[m] has joined #ipfs
renatocan[m] has joined #ipfs
klara[m] has joined #ipfs
lnxw37[m] has joined #ipfs
mdrights[m] has joined #ipfs
<MikeFair>
s1na, neither Stellar nor Ethereum are any good at storing large amounts of data
SamLord[m] has joined #ipfs
cosmosinfo[m] has joined #ipfs
dyce[m] has joined #ipfs
yayota[m] has joined #ipfs
<s1na>
ah, hadnt heard about lib2p-consensus
superusercode has joined #ipfs
<MikeFair>
So the answer is "put it in IPFS and store the CID on the ledger/chain"
<s1na>
exactly, this was my thought
Wikipedia[m] has joined #ipfs
<MikeFair>
In Stellar land this means you can associate a CID to an account, asset or account/asset combination
<s1na>
btw, ipfs nodes do have a private key, couldnt that be used for our purpose?
<MikeFair>
yes, and ipfs nodes can also use additional custom keys
<MikeFair>
the feature you are looking for is called IPNS
pcardune has quit [Ping timeout: 264 seconds]
<MikeFair>
In IPNS the CID of the public key data becomes a "fixed CID" in the IPNS namespace; however the data at that CID is an updateable pointer to another CID
<MikeFair>
It's a "symlink" type CID
<MikeFair>
to change the CID of the IPNS entry; you need to sign the update request
<MikeFair>
(which means you need the private key that belongs to the public key that created the IPNS CID)
<MikeFair>
Did I lose you there?
treethought has joined #ipfs
<MikeFair>
CID of PubKey == a pointer record that contains a reference to another CID; the CID of PubKey is the CID you get when you add PubKey to IPFS; to change the CID reference stored at CID of PubKey, the update request has to be signed by the PubKey's corresponding private key
<MikeFair>
You can create a long list of keypairs that can be used for different purposes; for example I created keypairs for two different domain names
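A rough Python illustration of that IPNS mechanic, with PyNaCl standing in for the node's keypair; the record is simplified (real IPNS records also carry validity information, and the name is derived from a multihash of the public key rather than the sha256 hex digest used here).

    import hashlib
    from nacl.signing import SigningKey

    signing_key = SigningKey.generate()          # the keypair behind an IPNS name
    verify_key = signing_key.verify_key

    # the IPNS "name" is derived from the public key (stand-in: sha256 hex, not a real multihash)
    name = hashlib.sha256(bytes(verify_key)).hexdigest()

    def publish(target_cid, seq):
        # the record points at a CID; only the holder of the private key can produce a valid update
        payload = "{}|{}".format(target_cid, seq).encode()
        return {"value": target_cid, "seq": seq, "sig": signing_key.sign(payload).signature}

    def resolve(record):
        # anyone can verify the record against the public key the name was derived from
        payload = "{}|{}".format(record["value"], record["seq"]).encode()
        verify_key.verify(payload, record["sig"])    # raises BadSignatureError if forged
        return record["value"]

    rec = publish("QmNewContentCid", seq=1)          # re-point the name at new content
    assert resolve(rec) == "QmNewContentCid"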
brabo has quit [Ping timeout: 255 seconds]
brabo has joined #ipfs
<MikeFair>
The challenge I ran into was I wanted to give different people authority over different subtrees of the namespace (like different departments having control over their own departmental web pages)
<s1na>
Aah, i think i understood, this could definitely be used
<MikeFair>
IPLD doesn't support that concept
<s1na>
i see, is that why you decided to go with stellar?
<MikeFair>
The root of the IPLD tree is immutable below that point
<MikeFair>
Well I started with Stellar because I find the SCP impressive; it does pretty much everything I'd want a distributed consensus algorithm to do
<MikeFair>
SCP = Stellar Consensus Protocol
<MikeFair>
What I think I'd ideally want is to have a way to use the SCP to transform an IPFS/IPLD data object
<s1na>
yeah, ive read a bit about scp. hm, how do you mean transform?
<s1na>
btw, is data persistence a concern for you?
<MikeFair>
Yes, but it's secondary to data "currency"
<MikeFair>
Okay, let's take the stellar/ethereum blockchain
<MikeFair>
You can describe an account as an "Object" with a structured data representation
<r0kk3rz>
what do you mean about scp transforming a ipld object? how would that work
<MikeFair>
Okay, let's take a script that is stored in IPFS, so it has a CID
<MikeFair>
Now take an object with that CID as a "Code" member of the object
<MikeFair>
that is an existing IPLD entry
<MikeFair>
simply a JSON object loaded into IPLD
<MikeFair>
with the 'code':CIDOFSCRIPT on it
<MikeFair>
Now there's a bunch of compute nodes listening on PubSub
<MikeFair>
And I submit an 'Invoke Function of Script on IPLD Object [CID of IPLD Object]'
<MikeFair>
with the right signatures so the compute nodes believe I am authorized to invoke such a thing
<MikeFair>
Many nodes now take the IPLD object as the input and run the script from CID on that object
ilyaigpetrov has joined #ipfs
<MikeFair>
The output is a new JSON object
<MikeFair>
That JSON object obviously has a new CID (assuming the script did work)
<MikeFair>
Those compute nodes then come to consensus on what the CID is
<MikeFair>
and that updates the link in the IPLD tree
<r0kk3rz>
yeah that last part will be tricky...
<MikeFair>
It's got an element of IPNS to it so the object ID itself is fixed
<r0kk3rz>
otherwise, sounds easy done
<MikeFair>
that's why I was looking at libp2p-consensus
<MikeFair>
And the SCP
<MikeFair>
The nodes simply announce their CID answers
<MikeFair>
via pubsub
<MikeFair>
using the object id as the channel id
<MikeFair>
Each node collects the results and accumulates the scores
<MikeFair>
and you get a weighted set of answers: NODES 1- 25: CID1; NODES 28, 30: CID2; NODES 26-27, 29: CID3
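A toy version of that tallying step: collect each node's announced CID from the pubsub topic, weight the votes, and accept a CID only once it clears a quorum. The quorum fraction and weight map are illustrative assumptions, not part of any libp2p API.

    from collections import Counter

    def tally(announcements, weights=None, quorum=0.66):
        # announcements: (node_id, cid) pairs gathered from the object's pubsub topic
        # weights: optional node_id -> weight map (every node counts as 1 by default)
        weights = weights or {}
        scores = Counter()
        for node_id, cid in announcements:
            scores[cid] += weights.get(node_id, 1)
        if not scores:
            return None
        cid, best = scores.most_common(1)[0]
        return cid if best / sum(scores.values()) >= quorum else None

    # the weighted answer set from the example above: nodes 1-25 say CID1, 28 and 30 say CID2, 26-27 and 29 say CID3
    votes = [(n, "CID1") for n in range(1, 26)] + [(28, "CID2"), (30, "CID2")] + [(26, "CID3"), (27, "CID3"), (29, "CID3")]
    print(tally(votes))  # -> CID1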
<r0kk3rz>
i think youd need a trusted member that looks at the scp output and updates the ipld
<r0kk3rz>
which isnt great
<MikeFair>
Well if each node signs its result; then you accumulate those signatures
<MikeFair>
It requires an N of M support on IPNS
<MikeFair>
Then all the nodes submit the update as soon as they've got sufficient weight to "succeed"
<r0kk3rz>
yeah not sure about multisig and ipns
<MikeFair>
not supported yet
<MikeFair>
and it might be a new object type
<r0kk3rz>
unless you do it downstream with whatever wants to read this data
<MikeFair>
but this is what a Stellar account does already
<MikeFair>
Stellar Accounts have custom data
<MikeFair>
So I'd be treating the account like the IPNS entry
<MikeFair>
and it has multisig support already
<r0kk3rz>
because you can have several IPNS pointing to the same CID just fine
zautomata4 has joined #ipfs
<MikeFair>
Well I'm thinking every node would be building an update record to a single IPNS record
<MikeFair>
but a single signature isn't enough
<MikeFair>
so 20 nodes all submit the same IPNS update with 20 signatures on it
<MikeFair>
(that they got via pubsub from their peers)
<MikeFair>
The IPNS updater validates that the 20 signatures have enough authority to update the IPNS entry
b5 has joined #ipfs
<MikeFair>
and since the 20 records are all identical; when peers talk to each other; there's a lot of "I already have that message" that goes on
ONI_Ghost has quit [Ping timeout: 240 seconds]
zautomata3 has quit [Ping timeout: 260 seconds]
<MikeFair>
The way I handle this in Stellar is the custom data on the account is the IPNS record
<MikeFair>
I don't use IPFS' IPNS records
<MikeFair>
the AccountId is the Object Id
<MikeFair>
it has IPFS entries on it
<MikeFair>
in its custom data section
anewuser has joined #ipfs
<r0kk3rz>
yeah, or you figure out a deterministic proof that you can store and just use its CID
<MikeFair>
It's not "as nice"
<MikeFair>
Can you expand on that a bit?
<MikeFair>
What I'm trying to validate is that the "Script" was used to transform the object
<r0kk3rz>
mmm, no that wont work either
<MikeFair>
And the only way I can see to do that is consensus on a bunch of nodes doing the same work
<MikeFair>
If 20 nodes run the same script and all produce the same resulting object; then odds that's the output that "script" produces are pretty good
<MikeFair>
and I'd rather it be 128 nodes or so, but 20 is the number of signatures stellar supports atm so that's what I'm using ;)
b5 has quit [Ping timeout: 264 seconds]
djellemah_ has joined #ipfs
<MikeFair>
I can see using PubSub, and creating a topic id based on the IPNS Id of the Object being updated
treethought has quit [Ping timeout: 256 seconds]
<MikeFair>
The "trick" is that all the nodes are building the same update request
vmx has joined #ipfs
<MikeFair>
It's an "alternate path" to updating an IPNS entry -- instead of the submitter signing the request; "execution nodes" can sign a request; but only if the request collects enough "signatures" to pass
<r0kk3rz>
ipns is quite simple
djellemah has quit [Ping timeout: 276 seconds]
<MikeFair>
I know, but inspiring ;)
<MikeFair>
the ipns record doesn't support the concept of "additional signers" at the moment
<MikeFair>
This is where I think stellar's model is quite useful/inspiring
<r0kk3rz>
its just a simple public key + signed packet in a dht
<r0kk3rz>
realistically, using a blockchain would be a better idea than ipns
treethought has joined #ipfs
<MikeFair>
Well from what I've read, the IPNS++ looks blockchain-esque in the data it stores
<MikeFair>
like history of revisions
<r0kk3rz>
thats IPRS
<MikeFair>
right
<MikeFair>
I'm not assuming that what I'm talking about definitely goes into IPNS; it could be something new. It's more that the inspiration came from the mutable IPNS entry + multisig concept on stellar accounts
<MikeFair>
instead of a single key on the IPNS record; you'd have an array of keys
<MikeFair>
(potentially with a weight)
<r0kk3rz>
so to read the record you'd have to parse the consensus result?
<MikeFair>
And ideally you can even weight the original private key to 0 (called the master key); so the master public key still identifies the record; but its private key no longer has update authority on the record
<MikeFair>
no
<MikeFair>
no to read the record, you'd just take the CID;
<MikeFair>
also on the record is a list of public keys and weights and an "update threshold"
<MikeFair>
To update the record; you need to submit an update record that's been signed by enough public keys on that list to pass the "update threshold" score
aliabdullah[m] has left #ipfs ["User left"]
<MikeFair>
By default, the original master key has a weight of 1 and a threshold of 1
<r0kk3rz>
but the major part of IPNS is that you can verify the record when you read it
<MikeFair>
its private key is sufficient to update the record
<r0kk3rz>
not just when you write it
<MikeFair>
Okay, then keep the signatures of the most recent update
<MikeFair>
then you can validate it too
<r0kk3rz>
yeah, so you have to parse the consensus
<MikeFair>
I see it more like you validate the list of signatures and sum weight for each sig that passes
<r0kk3rz>
which probably isnt too bad, but its more stuff to store
<MikeFair>
compare sum to threshold; which I guess is "parsing the consensus"
<MikeFair>
You can limit the number of keys so everything still fits in a single DAG entry
<MikeFair>
It'd be key + signature for each "row"
<MikeFair>
oh + weight
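A compact sketch of that check, assuming each row of the record is a (pubkey, weight, signature) triple over the proposed new CID and that Ed25519 via PyNaCl stands in for whatever key type the record actually uses.

    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    def passes_threshold(new_cid, rows, threshold):
        # rows: (pubkey_bytes, weight, signature) triples taken from the update record
        total = 0
        for pubkey_bytes, weight, sig in rows:
            try:
                VerifyKey(pubkey_bytes).verify(new_cid.encode(), sig)
                total += weight              # only signatures that verify contribute weight
            except BadSignatureError:
                continue
        return total >= threshold            # "parsing the consensus" is just this comparison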
<r0kk3rz>
yeah, it'll be real slow but doable
<MikeFair>
I don't know how many of those would fit in a single record
<MikeFair>
Really it shouldn't be "that much" slower than a current record
<MikeFair>
or is sig validation considered an expensive process, such that doing 20 instead of 1 is a big deal?
<r0kk3rz>
it will, because you have to get another cid before you can get the cid you actually want
<MikeFair>
That's what IPNS does already though? the list of signatures and keys is still on the same IPNS record; the record is simply a bit bigger now
<r0kk3rz>
iirc its all in the dht
<MikeFair>
right; and on a single entry
<MikeFair>
that doesn't change; I'm simply thinking it'd use more space in that entry
<r0kk3rz>
whether you can fit all that into a dht entry i dont know
<r0kk3rz>
but generally speaking this stuff wants to be fast, like resolve in a few milliseconds
* MikeFair
nods.
<MikeFair>
Well validating would be client side;
<MikeFair>
except for updates
<r0kk3rz>
its ipfs, everything is client side :)
<MikeFair>
hehe
<MikeFair>
well I figured updates would have to do some work to validate the update record
<MikeFair>
and I wouldn't want to break the "single entry" nature of it
<MikeFair>
the main contribution I saw was that multiple execution nodes could come up with an update record and share their individual records with each other; and assuming they each independently came up with the same resulting CID, then by appending the list of signatures to the update record, they can simultaneously submit the same request
xnbya has quit [Ping timeout: 265 seconds]
goiko has quit [Ping timeout: 265 seconds]
<MikeFair>
"Small values (<= 1k) are stored directly in the DHT
<MikeFair>
Assuming it takes 64 bytes to store a key + signature + 1 byte for "weight" and the referenced CID data itself
<MikeFair>
There's probably enough space for 12 Signers
xnbya has joined #ipfs
goiko has joined #ipfs
<MikeFair>
1k/64 = 16
ygrek has quit [Ping timeout: 256 seconds]
<MikeFair>
taking 4 64-byte entries for record data, weights, and the threshold means 256 bytes for all that
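Working that arithmetic out under the stated assumptions (1 KB DHT value limit, 64 bytes per signer row, 4 rows of overhead):

    DHT_VALUE_LIMIT = 1024      # "Small values (<= 1k) are stored directly in the DHT"
    ROW_SIZE = 64               # assumed: key + signature + 1-byte weight per signer row
    OVERHEAD_ROWS = 4           # record data, weights and threshold: 4 * 64 = 256 bytes

    total_rows = DHT_VALUE_LIMIT // ROW_SIZE     # 16 rows fit in one entry
    signer_rows = total_rows - OVERHEAD_ROWS     # leaving room for 12 signers
    print(total_rows, signer_rows)               # 16 12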
Steverman has joined #ipfs
<r0kk3rz>
what would the address be?
<MikeFair>
I have a couple thoughts on that front
<MikeFair>
Hopefully it'd stay as the address defined by the master public key, which is a permanent part of the record
<MikeFair>
But I'd like to add an "Object Identifier" number to it; so that "PubKey + OID" defines the address
<r0kk3rz>
what would that object id be?
<MikeFair>
a numeric id created by the user; 0 by default
<r0kk3rz>
presumably this is so you can reuse the master key?
<MikeFair>
Yes
<MikeFair>
The Master Key creates "Volume Root" of sorts
<r0kk3rz>
mmm yeah that might be tricky
<MikeFair>
And then the OID are "sub-roots" like "mountpoints"
<MikeFair>
but if it causes problems take it out
<r0kk3rz>
it needs to be verifiable
<r0kk3rz>
everything in ipfs has a concrete relationship with its identifier
<MikeFair>
except for IPNS records
<r0kk3rz>
public key -> data signed with that key
<r0kk3rz>
its quite concrete
Augustus has quit [Remote host closed the connection]
<MikeFair>
How does PubKey -> (OID + data) signed with that key change that?
<MikeFair>
ack
<MikeFair>
(PubKey, OID) -> sign(OID, data)
<MikeFair>
The tricky part I'm seeing is proving the list of signers
<MikeFair>
Okay, so the list of signers all combine into the address
<MikeFair>
change the signers, change the address
<MikeFair>
I'd love to find a way to avoid that
anewuser has quit [Quit: anewuser]
b5 has joined #ipfs
fazo96 has quit [Quit: Konversation terminated!]
xnbya has quit [Ping timeout: 246 seconds]
<MikeFair>
r0kk3rz, This is where your recommendation of a block chain comes into play I think
girlhood has joined #ipfs
<r0kk3rz>
mmm, this whole stuff is like smart contracts 101
<r0kk3rz>
then your method of updating can be whatever you can program
<MikeFair>
I'm trying to reconcile the following: "Node A and B are going to provide this new multisig entry to each other; one of the nodes is malicious and changed the list of signers to make the record look like it has integrity, the other is truthful; but we don't know which one is which"; what data is on the records that tell us the right answer
xnbya has joined #ipfs
<r0kk3rz>
yep thats the core of public distributed systems right there
<MikeFair>
In my initial happy world the execution engines were part of the trusted execution team
<MikeFair>
and those pubKeys were known
<MikeFair>
And all records pretty much had the same set
<MikeFair>
Okay does IPFS have anything like that? An approved list of nodes?
<MikeFair>
One exploit the attacker is taking advantage of is that they are free to make up their own signing keys
<r0kk3rz>
theres private network, where you share a single key with all approved nodes
<MikeFair>
Okay, then instead of the OID, perhaps it's a "network id", and that network id defines the list of acceptable PubKeys
<MikeFair>
I mean maybe that doesn't work exactly; but if the honest node can discern a list of "approved/accepted PubKeys" of which the record must use a subset; then the liar can't get away with it
<MikeFair>
The list of PubKeys would be a set of Nodes that are the accepted execution nodes for that network id
<r0kk3rz>
well, the honest node is you, you verify everything client side always
<MikeFair>
Right, but what I'm looking for now is a way to efficiently understand who the approved PubKeys for the Network this record belongs to are
<MikeFair>
DNS perhaps?
<MikeFair>
And the network is a domain name
<MikeFair>
I mean this works
<r0kk3rz>
dns? that doesnt sound very self contained
chriscool1 has joined #ipfs
<MikeFair>
it's not; but IPNS uses it now
<MikeFair>
there's a problem with defining namespace ownership in IPFS at the moment
raynold has quit [Quit: Connection closed for inactivity]
<r0kk3rz>
thats just dns link, ipns doesnt need that
<MikeFair>
If I ask the liar who the approved list of executors for the network is, it'll give me the list of nodes it used as PubKeys
<r0kk3rz>
it'd have to be a signed list with the master key
treethought has quit [Ping timeout: 260 seconds]
<MikeFair>
But the master key is the key of network id
Neomex has joined #ipfs
<MikeFair>
in the case of the approved node list
<MikeFair>
I'm thinking there's N nodes in the master list; and 12 of those are on this record
<MikeFair>
And this record belongs to "that network"
<r0kk3rz>
yeah they're trusted
<r0kk3rz>
if they arent trusted, you need to use something else
<MikeFair>
I just need a way to ensure that I am actually looking at the right master list; and I want to avoid the additional CID lookup but I don't think I can
<MikeFair>
At least I can reuse it for all objects in that network
<r0kk3rz>
its not too bad
<MikeFair>
Okay, a "Network Record" can only have 1 signer; the master signer is irreplaceable; which _really sucks_ because it's a hot target to attack
dimitarvp has joined #ipfs
<MikeFair>
But at least then I can use an IPNS record for the current master list for that network
girlhood has quit [Remote host closed the connection]
<MikeFair>
But if the security practice is that the really serious folks always sign their master network list offline then getting the secret key is harder
<r0kk3rz>
key management is important sure
<MikeFair>
OH!! A Network Record is identified by the Sum of all its keys
<MikeFair>
On that record, you change the signers and you change the network id
<MikeFair>
that way it's not all down to one key
<MikeFair>
it's down to N keys where N <= 12
<MikeFair>
And you only change that record when you add/remove trusted execution nodes
treethought has joined #ipfs
<MikeFair>
or more specifically trusted object signers (which are most likely execution nodes)
<r0kk3rz>
so each multisig is immutable, makes sense
fazo96 has joined #ipfs
<MikeFair>
Only the network ones are
<MikeFair>
the object multisigs have mutable signers
<MikeFair>
but those signers come from the CID pointed to by the network entry
<MikeFair>
_must come_
<MikeFair>
And the object entry must identify the network CID it belongs to
<r0kk3rz>
so its a multisig of multisigs?
<MikeFair>
And <NetworkId, PubKey>
<MikeFair>
is the object's identifier
<MikeFair>
maybe <networkId, PubKey, Oid>
<MikeFair>
it's two separate multisigs
<MikeFair>
The network multisig is a record that points to a list of PubKeys
<MikeFair>
All signers on objects within the network must be on that list
<MikeFair>
The Object multisig is a record that points to the data of the object; the signers on its record are confirmed as valid because the PubKeys appear on the list of valid PubKeys the network CID publishes
<MikeFair>
That way the attacker can't forge the Signers because it doesn't have the private keys of the trusted network signers
<MikeFair>
It can't invent new signers, because they won't be on the list
<MikeFair>
It can't forge the Network CID record because it doesn't have the private key of those signers either
<MikeFair>
It can't invent a new network id, because the network id is part of the object data
<MikeFair>
I think this works
<r0kk3rz>
that sounds like a shedload of signature verification
<MikeFair>
Well you only have to do the network object once
<MikeFair>
well once per purge/reload
<MikeFair>
Once you do that you have to look up each PubKey on the object in the Network list
<MikeFair>
Then you do each object once
<MikeFair>
You can probably even make the file system do some of it for you
Neomex has quit [Read error: Connection reset by peer]
<MikeFair>
You take the network data and make a dirtree out of it where you make the PubKeys filenames
<MikeFair>
then when you get an object; you read the pubKey filenames out of the dirtree
<MikeFair>
if you get a miss, the object is invalid
<MikeFair>
Then you simply validate and sum the weights of the object's signers to ensure it passes 'threshold'
<MikeFair>
If there isn't enough weight to meet or exceed threshold, the object is invalid
<MikeFair>
if any signature isn't valid, the object is invalid
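Pulling those validity rules into one sketch: signers must appear on the network's published PubKey list, every signature must verify over the object's CID, and the summed weights must meet the threshold. The record layout and PyNaCl key type are assumptions for illustration.

    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    def object_is_valid(obj_record, approved_pubkeys):
        # obj_record: {"cid": str, "threshold": int,
        #              "signers": [(pubkey_bytes, weight, signature), ...]}
        # approved_pubkeys: set of PubKeys published at the network CID (the "dirtree of filenames")
        total = 0
        for pubkey_bytes, weight, sig in obj_record["signers"]:
            if pubkey_bytes not in approved_pubkeys:
                return False                              # a signer not on the network list -> invalid
            try:
                VerifyKey(pubkey_bytes).verify(obj_record["cid"].encode(), sig)
            except BadSignatureError:
                return False                              # any bad signature invalidates the object
            total += weight
        return total >= obj_record["threshold"]           # must meet or exceed the update threshold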
<r0kk3rz>
i think we can get a few more layers in there, its definitely not deep enough yet :D
<MikeFair>
:D
<MikeFair>
Well I've been refraining from talking about using some kind of tree to roll together signers to get the number of signatures way up from 12 without violating the single entry rule
<MikeFair>
;)
<MikeFair>
The top level would still be 12 signers; but those would be aggregations of lower level signing buckets
brabo has quit [Remote host closed the connection]
<MikeFair>
r0kk3rz, all kidding aside, I really think it works; I think 12 is a bit limiting on the number of executors; but forcing those 12 to come from a validated list helps
<MikeFair>
I really like that it creates "Networks" which federate the object space
<MikeFair>
Those "Networks" define "executors" which run the code attached to the objects
<MikeFair>
(via the PubKeys)
<MikeFair>
I can validate with a fair degree of certainty that the CID identified in the object entry I'm looking at was properly updated/defined
<r0kk3rz>
yeah it would be interesting to see what the overhead is like in practise
<MikeFair>
(ignoring the final CID retrieval to actually get the data we wanted in the first place)
<r0kk3rz>
thats assuming you already have all the files
<MikeFair>
The bottleneck is fetching the network CID for the first time
<MikeFair>
yes, that's an IPNS retrieval
<MikeFair>
the files in this case is the mapping of PubKey to files from the NetworkCID
<MikeFair>
I guess this NetworkCID kind of defines a CA Root for the network
<MikeFair>
Assuming a network doesn't want to have a lot of dynamic execution nodes, I think this works
<MikeFair>
Updating the NetworkCID is "expensive"
<MikeFair>
I guess all the Objects would have to be revalidated
<r0kk3rz>
well you cant, you can only make a new one
<MikeFair>
I can update the list of PubKeys
<MikeFair>
Because the networkCID is an IPNS-like pointer record
<MikeFair>
It has to be signed by 1 - 12 signers; and those 1 - 12 signers define the network id
<MikeFair>
But it points to a CID that has the list of PubKeys
treethought has quit [Ping timeout: 276 seconds]
<MikeFair>
So you can redefine that list as long as you can get all the signers to sign the update
<MikeFair>
the CID pointer update that is
<MikeFair>
Just like an IPNS update, but supporting <= 12 signers instead of only 1
b5 has quit [Ping timeout: 264 seconds]
<MikeFair>
Yeah life sucks if any of those network id signers gets compromised
<MikeFair>
But I guess everything is okay as long as <= threshold aren't compromised
<MikeFair>
But I was only thinking that some nodes from the PubKey list were compromised
<MikeFair>
The network discovers this, and puts those PubKeys on the "reject" list, and then has to do something about all the objects signed with those keys
<ChrisMatthieu>
^ There's a spot in the automation where you can see we write the Computes task to the IPFS DAG and check the DAG for results.
<ChrisMatthieu>
cat split-task-auto.json | ipfs dag put > split-task.hash
droman has quit [Quit: WeeChat 2.1]
pawalls has joined #ipfs
clemo has quit [Ping timeout: 260 seconds]
xzha has quit [Remote host closed the connection]
anewuser has joined #ipfs
aananev has quit [Ping timeout: 240 seconds]
colatkinson has joined #ipfs
aananev has joined #ipfs
colatkinson has quit [Client Quit]
treethought has joined #ipfs
jesse22 has joined #ipfs
<Powersource>
why is pinning in the js api not recursive by default? It is in the CLI iirc
m3lt has quit [Ping timeout: 240 seconds]
pcardune has joined #ipfs
b5 has quit [Ping timeout: 240 seconds]
<MikeFair>
ChrisMatthieu, eh hey! :)
ericxtang has quit [Remote host closed the connection]
jkrone has joined #ipfs
<Powersource>
and is MFS implemented? it is really confusing that the documentation sometimes updates before the features are working.
<ChrisMatthieu>
Hey MikeFair !
<MikeFair>
demo.sh looks awesome!! :)
* ChrisMatthieu
bows
<MikeFair>
I know I keep coming to this idea; if I didn't care if others tried submitting work to my containers; can I run on public?
<ChrisMatthieu>
I have a new post scheduled to release tomorrow walking you through how to build and run your own algorithms on Computes. I use that demo.sh script now all the time
<ChrisMatthieu>
Yes :)
<MikeFair>
I was thinking it wouldn't be too hard to require job submissions to be signed
<ChrisMatthieu>
We've discussed that too. I agree.
<MikeFair>
The idea is something like an IPNS entry
<ChrisMatthieu>
I believe that iamruinous may be working on a autodeploy for Digital Ocean
<MikeFair>
Perhaps there is a set of standing IPNS CIDs that a node watches?
<ChrisMatthieu>
as well
<ChrisMatthieu>
Yea, either IPNS or IPLD perhaps
<MikeFair>
Well the difference is an IPNS update requires my SIG
<ChrisMatthieu>
true
<MikeFair>
it could point to an IPLD entry
<ChrisMatthieu>
that should be fairly simple
<MikeFair>
Then the set of jobs submitted to my nodes can be controlled by the set of approved IPNS updaters
<MikeFair>
In essence, publishing the new IPNS record is "signing the request"
<MikeFair>
I figured out a great use case that I think computes might be perfect for
<ChrisMatthieu>
do tell!
<MikeFair>
I discussed it with r0kk3rz last night
<MikeFair>
Well actually I've got several
<ChrisMatthieu>
i can scroll up - hold on...
<MikeFair>
(as you know)
<ChrisMatthieu>
haha - true
<MikeFair>
Well I can sum up too
<ChrisMatthieu>
ok
<MikeFair>
It took a while to narrow down the description
<MikeFair>
So the goal was consensus driven object updates over IPFS
<ChrisMatthieu>
interesting
<MikeFair>
You can attach a script as an entry in IPLD and the script can transform the object
<MikeFair>
Think of an IPNS CID representing the instance object of a class
dimitarvp` has joined #ipfs
<ChrisMatthieu>
what drives the consensus?
<MikeFair>
It points to an IPLD entry
<MikeFair>
computes ;)
<MikeFair>
Leaving out for a moment "how" I submit the command to run the script
<ChrisMatthieu>
what's the connection between the script in IPLD and the script in Computes Lattice?
<MikeFair>
The script is a property of the IPLD object
<MikeFair>
I submit `On IPNS CID; Execute "code"/"command"`
<MikeFair>
That becomes a computes job
treethought has quit [Ping timeout: 256 seconds]
<ChrisMatthieu>
ok
dimitarvp has quit [Ping timeout: 264 seconds]
<MikeFair>
The result of a successful execution is going to be an update to the IPNS entry
<MikeFair>
(Transform the IPLD entry, generating a new CID, and publishing the IPNS update to reflect the new "state")
<ChrisMatthieu>
I like where you are going...
<MikeFair>
The part to work out was how to run the consensus
<MikeFair>
This partly can't be done at the moment; the multisig IPNS record types I need are missing
<MikeFair>
They might be programmable as part of the computes framework; I'm not sure (the HOWTO is still new)
<ChrisMatthieu>
Yea, I was thinking that we could do something similar to IOTA by validating the previous two DAG entries but I don't like wasting computes
<MikeFair>
But here's the "top down"
<ChrisMatthieu>
I would love to come up with a more elegant IPFS related approach like you are thinking
<MikeFair>
There's an IPNS CID out there that represents a "Network"
<MikeFair>
The CID referenced by that IPNS entry is a list of PubKeys that represent compute nodes
<MikeFair>
Those Compute Nodes "Sign" their output
<MikeFair>
So the PubKey identifies the set of Valid, Trusted, Executors in the Network
<ChrisMatthieu>
This could also provide the basis of machine reputation using Jonny Crunch's IPID as well
<MikeFair>
A "Network" is a list of those things
<ChrisMatthieu>
roger that
<MikeFair>
Yeah machine reputation matters _a lot_ imho ; it's my solution for why "fee to prevent spam" is a bad idea imho
<ChrisMatthieu>
yep
<MikeFair>
Okay, so we each can create a network because any of us can make an IPNS entry
<MikeFair>
The "Network" is like a "Volume Root" for an object store
<ChrisMatthieu>
We could map the Lattice queues to a network too
<MikeFair>
The "NetworkCID" is also on every IPLD object as one of its properties
<MikeFair>
Perfect!
<ChrisMatthieu>
This could give you networks within networks
<MikeFair>
I kind of have a big blind spot on the part between "submit" and "collect results"
<ChrisMatthieu>
Today, Computes results are pushed back into the IPFS DAG
pcardune has quit [Remote host closed the connection]
<ChrisMatthieu>
You define a results hash in the tasks and you can access results at any time
<ChrisMatthieu>
tasks can also invoke other tasks
<ChrisMatthieu>
output of one could be the input of another
<MikeFair>
okay; there's a PubSub piece I need to stick in the middle here
Mateon2 has quit [Ping timeout: 240 seconds]
<MikeFair>
So each node runs "script"/"command" on IPLD Object input
Mateon1 has joined #ipfs
<ChrisMatthieu>
Lattice uses pubsub to sync lattice info across all nodes
<MikeFair>
When that happens, each executor node starts listening to `IPNSCID` as a topic
<MikeFair>
They execute their script and get an IPLD Entry CID as a result
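A self-contained toy of that execute-and-announce flow: an in-memory dict stands in for the IPFS DAG, the "script" is just a counter bump, and the point is that every executor running the same script on the same input arrives at the same CID, which is what the consensus/tally step compares. Nothing here is a real Computes or IPFS API.

    import hashlib, json

    store = {}                                   # in-memory stand-in for the IPFS DAG

    def put_object(obj):
        # content-address a JSON object (a toy CID, not a real multihash/CIDv1)
        data = json.dumps(obj, sort_keys=True).encode()
        cid = "toy-" + hashlib.sha256(data).hexdigest()[:16]
        store[cid] = obj
        return cid

    def executor(object_cid, command):
        # one executor's work for an "Invoke command on object" job:
        # fetch the object, apply its attached script, store the result, return the new CID
        obj = dict(store[object_cid])
        # here the "script" is just a counter bump; in the real design it's the code referenced by obj["code"]
        obj["state"] = obj.get("state", 0) + 1
        return put_object(obj)

    # every executor that runs the same deterministic script on the same input produces the same CID,
    # so announcing the signed result on the IPNS-CID topic lets peers accumulate matching votes
    start = put_object({"code": "CIDOFSCRIPT", "state": 0})
    print(executor(start, "bump"), executor(start, "bump"))   # identical CIDs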