shibacomputer has quit [Ping timeout: 240 seconds]
null_radix has joined #ipfs
<zippy314_>
yah, this looks like it's a problem. Prevents my code from compiling...
john__ has quit [Ping timeout: 260 seconds]
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 268 seconds]
Mateon3 is now known as Mateon1
ylp has joined #ipfs
<zippy314_>
I'm gonna guess this is one of those dependency issues, and that I have to start using GX in my projects too...
palkeo has joined #ipfs
Mateon3 has joined #ipfs
zopsi has quit [Ping timeout: 255 seconds]
jhulten[m] has joined #ipfs
Mateon1 has quit [Ping timeout: 260 seconds]
Mateon3 is now known as Mateon1
zopsi has joined #ipfs
leeola has quit [Quit: Connection closed for inactivity]
fjl_ has joined #ipfs
fjl has quit [Read error: Connection reset by peer]
special has quit [Ping timeout: 252 seconds]
special has joined #ipfs
special is now known as Guest95756
ebarch has quit [Remote host closed the connection]
<zippy314_>
Can someone help me figure out where I can find the current gx package id for a project (go-libp2p) for importing into my package?
ebarch has joined #ipfs
ylp has quit [Ping timeout: 255 seconds]
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 260 seconds]
Mateon3 is now known as Mateon1
matoro has joined #ipfs
jkilpatr has joined #ipfs
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 268 seconds]
Mateon3 is now known as Mateon1
mvollrath has quit [Ping timeout: 252 seconds]
<zippy314_>
For now I figured it out by just looking at the package.json of go-ipfs which had in it the hash I wanted, but in general how does one look this up?
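For anyone following along, the lookup zippy314_ describes can be scripted. This is a sketch only: the package.json below is an illustrative stand-in for go-ipfs's real one, and the hash value is a placeholder, not a real gx ID.

```shell
# Sketch: pulling a gx dependency hash out of a package.json.
# The JSON here is made up for illustration; real gx hashes are Qm... CIDs.
cat > /tmp/package.json <<'EOF'
{
  "gxDependencies": [
    {"name": "go-libp2p", "hash": "QmExampleGoLibp2pHash"}
  ]
}
EOF

# Grab the hash for the dependency we care about (jq would be
# cleaner if available; grep/sed keeps it dependency-free).
hash=$(grep '"name": "go-libp2p"' /tmp/package.json \
  | sed 's/.*"hash": "\([^"]*\)".*/\1/')
echo "$hash"
```

With the hash in hand, `gx import <hash>` pulls the package into your own project, assuming a standard gx setup.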
<lgierth>
this retains the original path, but still has origins
john__ has quit [Ping timeout: 240 seconds]
<lgierth>
for the price of duplicating the hash in there, but if it's just an internal mapping anyhow i think that's okay
ygrek has quit [Ping timeout: 260 seconds]
<lgierth>
the host/authority portion would be transparent except for its use as an origin, and would be defined as MUST be the base16 v1 cid of the thing in the path
<kythyria[m]>
The problem is, you still need redirects if the authority doesn't match the second path component, otherwise anything written for FUSE won't work... would it?
<kythyria[m]>
What does linux do if given an absolute path that starts with //
<lgierth>
this is about web browsers
<lgierth>
fitting ipfs paths into whatwg URLs and content security policy
<kythyria[m]>
Yes. And I'm not sure it's possible to make something work with the this-is-mounted-as-a-local-filesystem thing and URLs.
<lgierth>
it's not going through fuse
<lgierth>
:)
<lgierth>
it's gonna have a js-ipfs node running in the browser, or speaking with the api of an external ipfs node
<kythyria[m]>
I thought that was the entire point of IPFS paths being unix-shaped?
<kythyria[m]>
So that you can mount it as a fuse filesystem and it'll work perfectly
<lgierth>
not neccessarily fuse, but yes
<lgierth>
unifying everything in one path namespace
<lgierth>
but we want to integrate with browsers so i'm looking for a compatibility layer with gozala
<kythyria[m]>
You can't
<lgierth>
ok
<kythyria[m]>
So far as I know, it's literally impossible to have both the unixy paths and URLs and have the same content work in both.
<lgierth>
well watch the browser sprint mentioned in the topic :)
<lgierth>
ah do you mean file:///ipfs/?
<lgierth>
yeah that might not work, who knows. fuse is very low priority at the moment, since it's a very stable thing in general
<lgierth>
*it isn't
wallacoloo_____ has joined #ipfs
<kythyria[m]>
It can't work, because anything you do so that origins work correctly in browsers will use URLs that don't work with file:///
<AphelionZ>
how long does an ipns "lease" last
<AphelionZ>
is it per machine? per session? per time period?
<lgierth>
kythyria[m]: it won't be using fuse
<lgierth>
AphelionZ: time
<lgierth>
it's configurable on ipfs name publish
<AphelionZ>
oh ok I can just pass in the hash i want as an argument
<lgierth>
:)
<AphelionZ>
ty!
<kythyria[m]>
fuse is an implementation detail, the point is that the unixoid paths are simply not compatible with URLs if you want origins to work without changing what browsers consider an origin.
Akaibu has quit [Quit: Connection closed for inactivity]
<lgierth>
the access in browsers will just be for content-adressed data
<lgierth>
so we can always easily construct an origin
<lgierth>
no need for the rest of $everything path-ish for now
<kythyria[m]>
Why is this project so in love with unix path shaped representations for everything?
<lgierth>
:)
<kythyria[m]>
IPFS is not something it's really reasonable to mount as if it were a local disk
<lgierth>
#4 has a bit of background on the motivations
<lgierth>
(of the path scheme)
<kythyria[m]>
I'm not sure I agree with #4 all that much. Having a distinction between "local" and "remote" is important given that local things are much less likely to fail.
mguentner has quit [Quit: WeeChat 1.7]
<kythyria[m]>
At present, damn near everything is written with the assumption that file: accesses things that are basically fast and reliable and somewhat shaped like FAT-with-symlinks.
<MikeFair>
oi all! o/
<MikeFair>
kythyria[m]: Agreed; I think ipfs is better treated like the "PUBLISH" command and REST-style APIs; something async that you don't have control over the response conditions of
mguentner has joined #ipfs
chris613 has left #ipfs [#ipfs]
<lgierth>
yeah local vs. networked makes sense
<lgierth>
it would make everybody's life easier though if there was a nice unified way of addressing (and sharing) all the networked data there is ;)
<kythyria[m]>
URLs :P
<kythyria[m]>
They don't chain nicely, but the chaining is somewhat scheme-specific anyway
<lgierth>
oh yeah we're building a nice URL scheme
<lgierth>
great scheme, best scheme i've ever seen
<lgierth>
i'm typing about whatwg URL compatible ipfs paths right now
john__ has joined #ipfs
<lgierth>
it boils down to fs://$cidbase16/ipfs/$cid/path/with-in
<lgierth>
the host part is effectively transparent for app code, only the browser cares about it for origin. app code purely looks at the url's path
<lgierth>
the host doesn't appear in the browser UX either, this url form is purely internal to have an origin
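A rough sketch of assembling and splitting that URL form. Both CID strings here are placeholders, and the base16 conversion (needed because hostnames are case-insensitive) is assumed to have happened elsewhere; this only shows the assembly and the path extraction app code would do.

```shell
# Sketch of the fs://$cidbase16/ipfs/$cid/path form described above.
cid=zExampleCidV1            # path-form CID (placeholder value)
cid16=f0155examplebase16cid  # same CID as base16 (placeholder value)

# The host part exists purely so the browser can derive an origin.
url="fs://${cid16}/ipfs/${cid}/index.html"
echo "$url"

# App code is meant to look only at the path portion:
path="/${url#fs://*/}"
echo "$path"
```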
<kythyria[m]>
That sounds... complicated. And you'll have to do something... interesting... with redirects to make origins come out right
<lgierth>
there's experimental code for firefox that works
<lgierth>
don't ask me how
<lgierth>
it's magic to me :)
<lgierth>
i'm just kind of a spec editor on this one :)
pfrazee has quit [Remote host closed the connection]
mguentner has quit [Read error: Connection reset by peer]
<MikeFair>
lgierth: Or does that handle the "Description of" but not the "Addressing of"
<MikeFair>
lgierth: btw, I don't expect you to be an expert on everything anyone has ever invented; I'm pretty clear that someone has already invented a solution to this problem :)
<lgierth>
yeah rdf is purely description i think
<lgierth>
hah word :)
<lgierth>
it's hard to find the pearls
shizy has quit [Ping timeout: 255 seconds]
ylp has quit [Ping timeout: 255 seconds]
<lgierth>
95% of the important internet protocols have the assumption of some sort of authority in their dna
<MikeFair>
lgierth: fwiw; I agree with curated namespaces; I just want universal namespace roots
<lgierth>
btw i've heard people say about ipld, this is what the semantic web should have been :)
<lgierth>
yes! just some common ground so everybody can interoperate
<MikeFair>
lgierth: that way I can ask for a new domain name, and get back a new AS Hash
<MikeFair>
lgierth: I can then distribute the keys/rules for updating what goes on underneath that AS
<MikeFair>
lgierth: Some AS' are more "wild west" and some are more "body cavity search and DNA samples before you get in"
<MikeFair>
lgierth: but the fact anyone can get an AS is wide open
DiCE1904 has joined #ipfs
ylp has joined #ipfs
<lgierth>
the only practical reason for AS authorities are that you "need a number"
<lgierth>
that's from before cryptography was a widespread idea
<MikeFair>
lgierth: and CAS solves that :)
<lgierth>
or rather, a widespread practical thing
<lgierth>
point me to links
<lgierth>
i'm relatively new to practical bgp
<MikeFair>
lgierth: oh; it's just the concept that the address of the public key we randomly just made up functions well as the concept of an AS within ipfs
<MikeFair>
lgierth: similar to a virtual cluster id
<lgierth>
ok
<lgierth>
what does it stand for?
<MikeFair>
lgierth: AS Autonomous or CAS Content Addressed Storage
Guest46221 has joined #ipfs
<lgierth>
eh what does it mean
<MikeFair>
System
<lgierth>
ah got it
<lgierth>
thanks
<lgierth>
and the good thing is you can do the content addressing thing on the networking layers too
<lgierth>
network address = hash of pubkey
<lgierth>
the fc00::/8 space is wide open for that
<gozala>
@lgierth: I'll respond tomorrow when I'm by my computer
<lgierth>
gozala: awesome thanks :)
<lgierth>
i'll be off to bed
<gozala>
But you'd need to transcode to case insensitive encoding
<gozala>
As hostname is case insensitive
<lgierth>
yeah base16
<MikeFair>
lgierth: I was able to push it the ethernet MAC level via a pseudo nat concept (4, 24-bit active content identifiers to chain a link of up to 4 nodes in a single hop)
<MikeFair>
err push it to
<MikeFair>
lgierth: it's more for many local nodes talking on the same lan in a way compatible with ethernet switches
<MikeFair>
lgierth: so each node maps its nodeid (24-bit unique to the local environment); and each file it has in its repo (24-bit (16 million hashes))
<gozala>
lgierth: as of reversing id & protocol that sounds good
noderunner has quit [Ping timeout: 255 seconds]
Guest46221 has quit [Remote host closed the connection]
<MikeFair>
lgierth: Then to request a block; a node checks the "provides" for the long address of who it needs to ask; looks up the "short id" of that node and the "short id" they said that "provides entry" was; and gets the ethernet MAC address for that block
BobRotten has quit [Quit: Connection closed for inactivity]
<MikeFair>
lgierth: the requestor makes up its own 24-bit identifier to "know" the hash as; combines it with its own 24 bit id (preferably reusing the same 24-bit id as the other node); and that's the reply MAC
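MikeFair's scheme above boils down to plain bit-packing: two 24-bit ids concatenated into the six octets of a MAC address. The id values below are invented for illustration, and a real deployment would also need to respect the locally-administered bit in the first octet.

```shell
# Sketch of the 24-bit + 24-bit -> 48-bit MAC idea described above.
node_id=0x00A1B2    # requester's local 24-bit node id (made up)
block_id=0x00C3D4   # requester's 24-bit alias for the hash (made up)

# Concatenate the two 24-bit ids into six colon-separated MAC octets.
mac=$(printf '%06x%06x' $((node_id)) $((block_id)) \
  | sed 's/../&:/g; s/:$//')
echo "$mac"
```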
<gozala>
lgierth: and you can't retain fs:/ipfs/cid/path/...
<gozala>
As logic of computing origin can't be altered
<gozala>
Without significant changes on gecko side
<gozala>
Pretty sure that's also the case with chromium, given that electron is even less flexible than gecko in terms of what protocol handlers can do
<lgierth>
gozala: ah, so the url with host is not internal to the protocol handler eh?
<lgierth>
mh then the duplication of the hash in the url is unfortunate
ylp has quit [Ping timeout: 255 seconds]
Aranjedeath has quit [Quit: Three sheets to the wind]
<gozala>
Yeh that's a reason why complex redirects have been used
<lgierth>
would suborigins help at all?
<lgierth>
(i'll have to read your email closely again tomorrow)
Aranjedeath has joined #ipfs
<gozala>
Not necessarily, but juan had some thought there. I'm probably not qualified to answer this question
<gozala>
lgierth: ^
<gozala>
lgierth: sub-origin spec is based on http headers & in ipfs realm there's no headers as far as I understand
ylp has joined #ipfs
<gozala>
lgierth: So maybe there's an opportunity to get involved with the sub-origin spec to try and make it compatible with what IPFS would want to do
ylp has quit [Ping timeout: 255 seconds]
koshii has quit [Ping timeout: 258 seconds]
zippy314_ has quit [Ping timeout: 268 seconds]
Caterpillar has joined #ipfs
_whitelogger has joined #ipfs
Thomas has joined #ipfs
Thomas is now known as Guest64939
aquentson has quit [Ping timeout: 255 seconds]
Encrypt has joined #ipfs
Encrypt has quit [Client Quit]
tmg has quit [Ping timeout: 260 seconds]
<MikeFair>
Is there any documentation on direct p2p streaming using the IPFS network?
<alu>
i deleted ~/.ipfs/blocks/
<alu>
it was getting too big
<alu>
what do i do next lol
<alu>
nvm fixed
maxlath has joined #ipfs
Guest64939 has quit [Remote host closed the connection]
ylp has joined #ipfs
<whyrusleeping>
alu: lol
<whyrusleeping>
MikeFair: you mean like, live streaming?
chriscool has joined #ipfs
<MikeFair>
whyrusleeping: yeah
<MikeFair>
whyrusleeping: specifically a multiplayer game realtime channel
<whyrusleeping>
MikeFair: We need to do some writeup on pubsub, but thats what you'll want to look at
Kou[m] has joined #ipfs
<MikeFair>
whyrusleeping: I can pretty much see how IPFS can distribute the code library files and media assets and such; but doing game updates and audio/video feeds would be another aspect that'd be nice to do
<MikeFair>
whyrusleeping: Or should I look into using libp2p directly?
<MikeFair>
whyrusleeping: Using ipfs to get node ip addresses
<MikeFair>
whyrusleeping: fwiw, I can see that one way to emulate a direct peer link would be to make a topic name by concatenating the two node peer ids in numeric order
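The topic-naming trick MikeFair describes is easy to sketch. The peer IDs below are placeholders, lexical ordering stands in for the "numeric order" he mentions, and the `ipfs pubsub` commands in the comments were experimental at the time (the daemon needs the pubsub experiment enabled).

```shell
# Sketch: deriving a shared pubsub topic from two peer IDs by
# ordering them, so both sides compute the same topic name.
a="QmPeerB111"   # placeholder peer ID
b="QmPeerA222"   # placeholder peer ID

# Put the lexically smaller ID first so the name is order-independent.
if [[ "$a" < "$b" ]]; then topic="$a:$b"; else topic="$b:$a"; fi
echo "$topic"

# Both peers can then meet on it:
#   ipfs pubsub sub "$topic"
#   ipfs pubsub pub "$topic" "hello"
```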
aquentson has joined #ipfs
aquentson1 has joined #ipfs
<whyrusleeping>
MikeFair: if you want a direct peer to peer link, just use libp2p directly
<MikeFair>
whyrusleeping: I might start with just pubsub;
Charley has joined #ipfs
aquentson has quit [Ping timeout: 240 seconds]
<MikeFair>
whyrusleeping: I'd be curious to see if Mumble could use libp2p or be a test case for pubsub routing
ylp has quit [Ping timeout: 255 seconds]
<MikeFair>
whyrusleeping: Voice calls over IPFS would be fun :)
<whyrusleeping>
mumble uses a udp based protocol IIRC
<MikeFair>
whyrusleeping: The doc mentions IPNS pushing as "will be"; is that currently still a "will be" or is that a "does"?
<whyrusleeping>
theyre okay with losing packets
<whyrusleeping>
ipfs doesnt currently have an unreliable transport
<whyrusleeping>
its planned for the future
<whyrusleeping>
hopefully this year
<MikeFair>
whyrusleeping: oohhh, maybe something I could take on... a UDP routing service should be very straightforward given the "next hop" info available; users beware, you now have the privilege of dealing with both "packet loss" and "multiple delivery"!
<whyrusleeping>
you also don't need to put my handle in front of every message :P It *does* send a notification
<MikeFair>
heh
<MikeFair>
old habits die hard;
<MikeFair>
When using the DAG to link to file blocks, I noticed I'd get the DAG header for the block back instead of the file data; it's not 'wrong' and I get 'why', but it wasn't obvious if there was a way to tell the link to return the 'file contents' of the link instead of the 'DAG entry' the link points to
<MikeFair>
do you know?
<whyrusleeping>
what commands are you running?
<MikeFair>
I don't recall exactly but it would have been ipfs dag get hash/path/link/
<MikeFair>
and in the javascript api
<MikeFair>
actually not the jsapi
<dignifiedquire>
good morning ipfs world
<whyrusleeping>
dignifiedquire: goood morning
<whyrusleeping>
dignifiedquire: i still need to review things for you i think
<alu>
seize the day
<whyrusleeping>
MikeFair: `ipfs dag get` only deals in objects
<whyrusleeping>
if you wanted to cat a file, use ipfs cat
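The split whyrusleeping describes can be summarized like this. The CID is a placeholder, and the `ipfs` invocations are shown as comments since they need a running daemon.

```shell
# Sketch of the two access modes being discussed. CID is hypothetical.
CID=QmExampleUnixfsFileCid

# 'ipfs dag get' speaks in raw IPLD objects: for a chunked file it
# returns the root node (its data and links to chunks), not the bytes.
#   ipfs dag get $CID
#
# 'ipfs cat' is the unixfs-layer command: it reassembles the chunks
# and streams the file's actual bytes.
#   ipfs cat /ipfs/$CID

echo "/ipfs/$CID"
```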
<MikeFair>
whyrusleeping: but I don't know it's a file :)
<whyrusleeping>
but you found that out pretty quickly, no?
<MikeFair>
whyrusleeping: because I knew what I was looking at
<MikeFair>
whyrusleeping: don't have a "code" equivalent for that in my repertoire yet
<whyrusleeping>
hrm... so youre wanting it to be more obvious that a given object is a file?
<MikeFair>
whyrusleeping: I don't know; because I don't know the full structure of the DAG entry yet (that information could be part of the DAG entry and I just don't "understand" that yet); but yes; what I want is "The data at this path"
<MikeFair>
whyrusleeping: When that's a larger block of JSON; I wasn't expecting a DAG entry thing
<whyrusleeping>
MikeFair: if you just 'cat foo.exe', do you expect to see the program run?
<whyrusleeping>
or, 'cat blah.mp4', do you expect to get a view of the movie?
<MikeFair>
whyrusleeping: no, when I get some/long/path/foo/ I expect to see json come back like I do with /some/long/
<whyrusleeping>
i'm not sure what youre saying there
<MikeFair>
when I asked for higher level nodes in the tree there's a JSON object struture; some data, some links
<MikeFair>
but I get the whole object
<MikeFair>
I then uploaded a json data file and linked to it in the tree with its hash id
<MikeFair>
"The value at this entry is a file"
<MikeFair>
"The value at this entry is a link to a file"
<MikeFair>
is what I'd like to express
<MikeFair>
so if I put foo.exe there; then I expect to see all of foo.exe bytes come flying at me (not just the top level DAG entry for the file and its links)
<whyrusleeping>
Then you want to use 'cat'
<whyrusleeping>
because 'file' is an abstract concept
<whyrusleeping>
a 'file' is really made up of many different nodes
<MikeFair>
whyrusleeping: but I'd like to express in the link descriptor
<whyrusleeping>
the 'dag' is those many different nodes
<MikeFair>
ok, then I'd like ipfs cat /some/dag/path to work ;)
<MikeFair>
My code is walking/discovering the tree; it doesn't know what's dag and what's file it's just all JSON
<MikeFair>
(in this particular application)
<MikeFair>
But it makes sense to me that some dag paths will terminate at data
<dignifiedquire>
whyrusleeping: yes there is more review for you
<MikeFair>
data > 256k
<MikeFair>
like /dag/path/to/jpeg/image/cat.jpg
<whyrusleeping>
ipfs cat /some/dag/path where 'path' isnt a file?
<MikeFair>
whyrusleeping: yes; to return what ipfs dag get returns
<whyrusleeping>
but only in the case where the thing isnt a unixfs file?
<MikeFair>
which is why I was saying I was hoping to add the information to the link to say "treat the object at the other end of this link as an 'X' (unixfs file) instead of a DAG entry"
<whyrusleeping>
That feels like a weird mixing of abstraction layers
<whyrusleeping>
There should probably some different API that provides what you want
<MikeFair>
it's structured data
<MikeFair>
ok sure
<MikeFair>
that works too; ipfs get / ipfs cat seem like they should be able to "do the right thing"
<whyrusleeping>
thats kinda like how on BSD systems, you can cat a directory and get the raw dirent data
<whyrusleeping>
but on linux they don't let that fly
<MikeFair>
this is clearly more about the use csae than the specific api/mechanism
<whyrusleeping>
right, i'm on board with the usecase
<whyrusleeping>
i'm just trying to think through the 'how'
<whyrusleeping>
and making sure that it makes sense
<whyrusleeping>
and doesnt just feel like a few things cobbled together
<whyrusleeping>
MikeFair: do you agree that 'ipfs cat' on a directory should fail?
<MikeFair>
I've had to back off structuring my information in the DAG (which I was kind of excited about) because my code got more complicated than just loading an index and data files
<MikeFair>
I don't have a frame of reference for what ipfs cat is supposed to do; in my mind's eye; "cat" should return the raw "data" meant to be expressed at the entry to stdout ; 'get' should put it in a file
<MikeFair>
it's got nothing to do with unixfs semantics
<MikeFair>
what should 'cat' of a directory do? I ask what's described in the "data"
<whyrusleeping>
okay, so 'cat' is a unixfs command
Oatmeal has joined #ipfs
<MikeFair>
oh I know that
<MikeFair>
but the religious war about ;)
<MikeFair>
it
<whyrusleeping>
heh
<whyrusleeping>
So how should it 'ask' the 'data'?
arpu has quit [Ping timeout: 260 seconds]
<MikeFair>
I think I tend to fall more on the BSD side of the argument because in unix "everything is a file" so "cat file" should do what "cat" does
<MikeFair>
whyrusleeping: that's where I think a descriptor of "what this object is" is required
<MikeFair>
whyrusleeping: the current implementation of ipfs get ; if I understand it correctly; is more like "ipfs file get" (treat the hash like a file and download what you see)
<MikeFair>
cat the same; ipfs file cat
<MikeFair>
I've been trying to imagine what would it be like if I store a SQL database in the DAG?
<MikeFair>
not uploading its files, but describing rows and tables in the DAG
<MikeFair>
if I hit a node that's a "table" I should get back the whole table; if I hit a node that's a block of rows; I should get the rows
<whyrusleeping>
yeah, its weird
<MikeFair>
but this model requires some kind of data interpreter
<whyrusleeping>
on the bsd side, catting a directory gives you the raw direct data
<whyrusleeping>
which in the 'ipfs dag' case
<whyrusleeping>
would give you raw cbor
<whyrusleeping>
which is not super useful to anyone
<whyrusleeping>
(this becomes equivalent to just calling 'block get')
<MikeFair>
so rather than make the DAG itself smart; I wanted to make the links smart
<MikeFair>
so I could have two entries linking to the same hash, but depending on which path I take, I get different results
<whyrusleeping>
mm... you cant put things in the links like that
<whyrusleeping>
but i get what youre poking at
<MikeFair>
not currently ;)
<whyrusleeping>
no, you just cant
<whyrusleeping>
it breaks pathing
<MikeFair>
I read much of the discussions on this and I think I disagree with the definition of "breaks"; I fall on the side of "if the link object has the entry name, the link object wins; but if the whole linked-to entry is requested, the linked-to object is returned and not the entries on the link object"
ygrek has joined #ipfs
<whyrusleeping>
okay, so what if i have an object that is {"foo":"bar"}
<MikeFair>
it _can_ create inconsistencies in the results; so you tell users "Users! It will hurt if you do that"
<whyrusleeping>
and an object that links to it like: {"blah": {"/":"hashoffoobar", "foo": "what"}}
<whyrusleeping>
what does ThatObject/blah/foo reference?
<MikeFair>
what
<MikeFair>
"what"
<MikeFair>
ThatObject/blah/ -> {"foo":"bar"}
<whyrusleeping>
yeah, see, that feels like poor UX to me
<MikeFair>
No it's bad data entry
<MikeFair>
garbage in garbage out
<whyrusleeping>
its garbage because the link has some other value in it
bwn has quit [Ping timeout: 240 seconds]
<MikeFair>
right; BUT what if that same example had "bar" in both places
<whyrusleeping>
you cant tell the contents of something youre linking to
<MikeFair>
You avoid a link traversal for speed and opportunistic data loading on the application's part
JustinDrake has joined #ipfs
arpu has joined #ipfs
<MikeFair>
I can put summarized results of things beyond that link in the link entry itself
<MikeFair>
I was thinking an array to go nested levels but it could just be a direct object reference
<MikeFair>
oh
<MikeFair>
also
<MikeFair>
The path /ThatObject/blah/not_named would traverse the link
ylp has joined #ipfs
mildred1 has joined #ipfs
<MikeFair>
This is extremely useful for displaying lists of things; when you have no information at all (like "name"); you have to traverse all the links to get the "name" parameter
<MikeFair>
s/parameter/field value/
<MikeFair>
The user will likely not click on every name in that list; they might even just be paging through (so won't click any of them); but the system is still returning to the entire reference linked
mildred has quit [Ping timeout: 268 seconds]
rendar has joined #ipfs
Oatmeal has quit [Ping timeout: 240 seconds]
<MikeFair>
err returning the whole linked entry
<MikeFair>
So in this case one person's "breaks" is another person's "PEBKAC"
<MikeFair>
:)
JustinDrake has quit [Quit: JustinDrake]
<whyrusleeping>
You could definitely implement something like that on a layer above the dag
<whyrusleeping>
change the format of links for that system to be something like: {"foo": {"target":{"/":"QmBlah"}, "otherdata": "hello" } }
<whyrusleeping>
so when dealing with the raw graph, you don't get any ambiguity
<whyrusleeping>
but the abstraction youre talking about could provide the 'useful' interface
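A tiny sketch of the disambiguated link shape whyrusleeping suggests: the hash and the extra metadata live under separate keys, so plain dag traversal stays unambiguous. `QmBlah` is the placeholder hash from the discussion, and the sed extraction merely stands in for whatever the layer above the dag would do.

```shell
# Link format with the hash isolated under "target":
cat > /tmp/smartlink.json <<'EOF'
{"foo": {"target": {"/": "QmBlah"}, "otherdata": "hello"}}
EOF

# A layer above the dag can answer "foo/otherdata" locally...
sed -n 's/.*"otherdata": "\([^"]*\)".*/\1/p' /tmp/smartlink.json

# ...and resolve "foo/target" by following the hash under "/":
sed -n 's|.*"/": "\([^"]*\)".*|\1|p' /tmp/smartlink.json
```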
<MikeFair>
but it's hard for me to see how to get the ipfs api tools to understand foo/otherdata as "hello" unless you mean another layer up but still within ipfs
<MikeFair>
like this other "smart link" api we were talking about that knew how to fetch files
<MikeFair>
ipfs db instead of ipfs dag
<whyrusleeping>
Yeah, it could be another layer up within ipfs
<whyrusleeping>
It could be the case that we move the current 'cat' to 'ipfs file cat' or 'ipfs unixfs cat'
<whyrusleeping>
and extend the current 'cat' command to support such a layer
<whyrusleeping>
Or, alternatively
<whyrusleeping>
we will be at some point be moving unixfs to use ipld objects
<whyrusleeping>
which may well obviate the issue
<MikeFair>
That makes the most sense to me honestly; because ipfs cat should be more like ipfs dag at this point
<whyrusleeping>
at any rate, its late and i'm pretty tired
* MikeFair
nods.
<whyrusleeping>
lets continue this discussion tomorrow, or maybe you can push your holy war on Kubuxu and dignifiedquire
<whyrusleeping>
:P
<MikeFair>
I've never quite understood the unixfs centrism L(
<MikeFair>
err :)
<MikeFair>
:P
<MikeFair>
MikeFair the crusader!
<whyrusleeping>
(youre one of the few who doesnt think of "files" as the default mode, its not a bad position)
* whyrusleeping
zzzz
<dignifiedquire>
get some sleep whyrusleeping
<dignifiedquire>
I hear it's even better than fighting holy wars
* MikeFair
grins.
<MikeFair>
dignifiedquire: I'm excited to say I've given up the holy war crusades. :) I respect the decisions the ones who wrote the code made at the time; and offer comments; unless I'm able to offer them code :)
<MikeFair>
then I offer that
ylp1 has joined #ipfs
<dignifiedquire>
sounds like a great way of handling things :) there are enough wars on the internet
<dignifiedquire>
and I appreciate comments especially if they come from someone who understands things but isn't as deep in the code, as they offer a fresh perspective on it
<MikeFair>
I see being able to model the object tree of a distributed program execution directly via the DAG
<MikeFair>
glad to hear it; thanks!
<MikeFair>
I think of ipfs as two things; one hidden atm; it's a content addressing scheme and git-like repository -- but also and perhaps more importantly it's a peer connectivity routing system
<MikeFair>
and provides a way to describe "distributed computing" in ways that make sense (ways that IP could never properly express)
<MikeFair>
I see nothing that really prevents the concept of "provides" from extending to "services" in addition to files
<dignifiedquire>
I am not sure I fully understand what you mean with 'model distributed computing'
<MikeFair>
Imagine I've got a render farm
<MikeFair>
or I'd like a render farm
<MikeFair>
Here's my new procedure: ipfs key gen mynewrenderfarm;
<dignifiedquire>
I see, so in addition to files you would be able to distribute services over the network
<MikeFair>
addressed by the HASH
<MikeFair>
these hashes become "endpoints"
<dignifiedquire>
how would you get to the outputs in this scenario, ie where is the hash for that announced
<MikeFair>
ipfs file ls -k mynewrenderfarm /results/
<MikeFair>
The -k mynewrederfarm is an ipns root to a namespace
<MikeFair>
the services that were launched using that key have control over what's under it
<MikeFair>
or maybe it's: ipfs file ls /ipns/mynewrenderfarmhash/results
<MikeFair>
unless in the submission command I gave it a pubsub callback address
<MikeFair>
then it could notify my listener when the job was finished
<dignifiedquire>
I wonder if it's possible to reuse an orchestration framework to implement this
<MikeFair>
I haven't used any so I couldn't say;
<MikeFair>
I was thinking we need something similar for handling distributed search... Instead of everyone trying to suck all of ipfs through their nodes to run indexing scripts, nodes grant locked-down execution privileges to run small scripts that can report back results
<dignifiedquire>
my first idea would be to use docker containers for running the executable inside, and mount a data volume whose content gets written into ipfs; that way the executable wouldn't even have to know about ipfs, it just dumps the content into a folder
<MikeFair>
dignifiedquire: exactly;
<MikeFair>
dignifiedquire: I want these things locked down and kept on a leash!
<MikeFair>
An extremely painful spiky, electric shocking type leash! :)
<dignifiedquire>
and docker hub should use ipfs as distribution layer anyway
<dignifiedquire>
some knows their leashes;)
deetwelve has quit [Quit: foobar]
<dignifiedquire>
s/some/someone
maxlath has quit [Ping timeout: 240 seconds]
<MikeFair>
It's always boggled me that debian/redhat/gentoo installs don't automatically install a p2p daemon client to distribute packages
<MikeFair>
It makes sense that each member of the community have the opportunity to help support the infrastructure; it's also about the most legitimate and overwhelming use case that could dwarf media/music
<MikeFair>
but I could be wrong about that
<dignifiedquire>
yes, very much
wallacoloo_____ has quit [Quit: wallacoloo_____]
<MikeFair>
I'd also like to see a COW layer over IPFS
<dignifiedquire>
but I suspect that a lot of the people are still afraid to touch torrents, which was effectively the only scalable p2p system available for this for a long time
<dignifiedquire>
cow as in copy on write?
<MikeFair>
the challenge I have with some of the ipfs thinking/applications is there's a lot of what I consider useless "history" that gets generated; and I feel bad because of the disk space it churns
maxlath has joined #ipfs
<MikeFair>
copy on write; yeah; to help as an overlay that I can "batch up" a file system commit
<MikeFair>
or use as the base for a docker image/etc
reit has joined #ipfs
<dignifiedquire>
it might be interesting to have sth like explicit commits that trigger local data to be moved into ipfs
<dignifiedquire>
I think docker has sth similar where you can commit the container state when you are happy with it
<MikeFair>
dignifiedquire: I'd even be happy with just keeping the changes inside the container
ylp1 has quit [Ping timeout: 255 seconds]
<MikeFair>
dignifiedquire: I work with the OpenACH project that distributes a Docker image; and having it launch directly from a mounted ipfs directory would be awesome
<MikeFair>
dignifiedquire: Kind of related to that, I've been trying to envision how to effectively provide the concept of a "User's IPFS Home Directory"; obviously there's ipns attached to a user's key, but from there it just gets weird
<dignifiedquire>
it's also tricky, because the expectation is that a home directory is private, but if it's on ipfs it's fully public, though this might better map to a public github repository conceptually
<MikeFair>
I modelled it out like it was a USB attached storage device on the computer; with an encrypted FS on it
<MikeFair>
dignifiedquire: I then used the DAG to describe the block address paths, and links to point to the data blocks
<dignifiedquire>
that sounds pretty cool
<dignifiedquire>
where did you get stuck?
<MikeFair>
writing the virtual usb driver
<MikeFair>
not stuck
<MikeFair>
just haven't actually done it; and the feedback I got was kind of like "why use a USB device as a model?"
Guest158986[m] has joined #ipfs
<MikeFair>
And I was kind of dumbfounded because my answer of "having the ipfs daemon be able to present virtualized block devices that it already understands are lossy and need commits is an excellent mirror of what's happening" didn't seem to work
<Guest158986[m]>
hello
<cblgh>
whyrusleeping: [from the backlog] fish shell's what you're using in that ipfs demo you did?
<cblgh>
because i thought that ghost complete was rad
<MikeFair>
Hi Guest!
<dignifiedquire>
Hi Guest158986[m]
<dignifiedquire>
cblgh: do you mean the typeahead?
ianopolous has quit [Ping timeout: 260 seconds]
<cblgh>
dignifiedquire: yes probably!
<cblgh>
where you're typing and you can see future parts kind of greyed out
<MikeFair>
You can address every square meter of the planet, and if you could somehow make those ipns addresses, link data to them
JustinDrake has joined #ipfs
<MikeFair>
I guess what you'd do is make an ipns root and publish these addresses under that root hash
null_radix has quit [Excess Flood]
null_radix has joined #ipfs
s_kunk has joined #ipfs
<dignifiedquire>
MikeFair: re home directory, have you looked at the fuse stuff? maybe that would be an easier first step
<MikeFair>
dignifiedquire: I'm looking to manage it for "customers" of OpenACH
<MikeFair>
and a free software distributed donations system
Foxcool has joined #ipfs
<MikeFair>
dignifiedquire: The amount of data is relatively small a few MBs _maybe_
<MikeFair>
dignifiedquire: in the donations system; the data is required to be public; but it requires a lot of ipns nodes so people can find the latest version of the content
<MikeFair>
dignifiedquire: I'm also trying to scheme out a phone number / email address -> ipfs contact record (primarily giving the searcher the ipns hash)
<MikeFair>
dignifiedquire: At the moment it's got part Stellar, part IPFS, part OpenACH
<dignifiedquire>
nice combination :)
ygrek has quit [Ping timeout: 240 seconds]
<MikeFair>
dignifiedquire: part a whole lot of websites
<dignifiedquire>
websites, what are those? ;)
<dignifiedquire>
those are so 90s
<MikeFair>
locally hosted browser applications provided directly via a serverless p2p content distribution network ;)
DiCE1904 has quit [Read error: Connection reset by peer]
tmg has joined #ipfs
<dignifiedquire>
exactly
<MikeFair>
It's a pretty cool scheme overall ; you're able to upload your opinion of what you consider the right people/projects/organizations/industries for donating to; and if other people like the way you think in that area; they can subscribe to your opinions and suck that into part of their giving tree
<MikeFair>
The system takes a daily portion of the donors contributed funds and splits out according to the donor's current tree make up
<MikeFair>
These "portfolios" as I like to call them can be nested; so it's possible for you or me to say give to "Debian" as a thing; and that becomes a few thousand people we just donated to
bwn has joined #ipfs
gmoro has joined #ipfs
<MikeFair>
You also control how much you give; so if you give $10/year or $120/month, the algorithm is the same: divide the balance by the current number of remaining days and take that daily pro rata portion; suck down your current subscriptions from the ipfs network as described in your portfolio; divide across all the people at the leaves of that tree; execute the transaction
<MikeFair>
In practice there's a bit of grouping that happens so a number of people's txn are aggregated together; but the results are the same
rcat has joined #ipfs
rcat has quit [Client Quit]
rcat has joined #ipfs
funspectre[m] has joined #ipfs
wkennington has quit [Read error: Connection reset by peer]
deetwelve has joined #ipfs
parmenides has joined #ipfs
ekaforce[m] has joined #ipfs
<jbenet>
!befriend mikolalysenko pin
<pinbot>
Hey mikolalysenko, let's be friends! You can pin
ylp1 has joined #ipfs
kenshyx has joined #ipfs
espadrine has joined #ipfs
<cblgh>
MikeFair: makes me think of the brave browser
<mildred>
I want to use the protocol, but not necessarily stick on current file formats, cache locations. I'd like to offload this to the user (the program that implements the naming scheme)
tmg has quit [Ping timeout: 260 seconds]
caiogondim has joined #ipfs
Magik6k_ has quit [Ping timeout: 252 seconds]
mbrock has quit [Ping timeout: 252 seconds]
kyledrake has quit [Ping timeout: 252 seconds]
Magik6k has joined #ipfs
bigbluehat has quit [Ping timeout: 252 seconds]
c0dehero has quit [Ping timeout: 252 seconds]
xelra has quit [Remote host closed the connection]
bigbluehat has joined #ipfs
mbrock has joined #ipfs
c0dehero has joined #ipfs
xelra has joined #ipfs
gruu has joined #ipfs
lohkey_ has joined #ipfs
lohkey has quit [Ping timeout: 252 seconds]
rodarmor has quit [Ping timeout: 252 seconds]
lohkey_ is now known as lohkey
rodarmor has joined #ipfs
dimitarvp has joined #ipfs
robattila256 has quit [Ping timeout: 255 seconds]
apiarian has quit [Ping timeout: 255 seconds]
apiarian has joined #ipfs
zippy314_ has joined #ipfs
qgnox has joined #ipfs
_whitelogger has quit [Ping timeout: 240 seconds]
_whitelogger has joined #ipfs
mildred1 has joined #ipfs
minibar[m] has joined #ipfs
gruu has quit [Ping timeout: 260 seconds]
minibar[m] has quit [Changing host]
agumonkey[m] has joined #ipfs
Aranjedeath has joined #ipfs
Aranjedeath has quit [Changing host]
mildred has quit [Ping timeout: 255 seconds]
mildred2 has joined #ipfs
chriscool has quit [Ping timeout: 260 seconds]
mildred1 has quit [Ping timeout: 240 seconds]
aquentson has joined #ipfs
ecloud has quit [Ping timeout: 260 seconds]
ulrichard has quit [Remote host closed the connection]
chriscool has joined #ipfs
mildred3 has joined #ipfs
mildred2 has quit [Ping timeout: 260 seconds]
maxlath has joined #ipfs
maxlath has quit [Client Quit]
jkilpatr has joined #ipfs
maxlath has joined #ipfs
maxlath has quit [Remote host closed the connection]
ecloud has joined #ipfs
ecloud has quit [Client Quit]
ecloud has joined #ipfs
palkeo has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
maciejh has joined #ipfs
maciej_ has joined #ipfs
maciejh has quit [Ping timeout: 260 seconds]
koalalorenzo has joined #ipfs
koalalorenzo has joined #ipfs
kyledrake has joined #ipfs
palkeo has quit [Quit: Konversation terminated!]
qgnox has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
vapid is now known as \\\\
\\\\ is now known as \\\\\\\\\\\\
\\\\\\\\\\\\ is now known as \\\\\\\\\\\\\\\\
jkilpatr has quit [Quit: Leaving]
jkilpatr has joined #ipfs
ashark has joined #ipfs
hansnust[m]1 has quit [Ping timeout: 240 seconds]
hansnust[m]1 has joined #ipfs
chaosdav has quit [Ping timeout: 245 seconds]
qgnox has joined #ipfs
leeola has joined #ipfs
maciejh has joined #ipfs
maciej_ has quit [Ping timeout: 240 seconds]
<ebel>
thought: If I put gateway.ipfs.io as 127.0.0.1 in my hosts file, then when I access that site, I'll really be going to my machine and using my ipfs daemon, right?
n0z has joined #ipfs
<ebel>
and then I send people links that use gateway.ipfs.io, and if they don't have ipfs set up, then everything will Just Work (tm) for them. But I'll be using ipfs properly. right? :)
<cblgh>
depends on if the ipfs gateway requires a port to work properly?
n0z has quit [Quit: .]
<cblgh>
if you can configure it to serve stuff out at port 80 then it should work like you think, i'd say
<ebel>
ah yes. ipfs daemon by default listens on port 8080 (or so), not 80
<lgierth>
you'd need to run it as root (strongly recommended against), or put a reverse proxy like nginx in front
<ebel>
sure. or apache, or socat or something
<lgierth>
yeah socat will do too, doesn't have to be a full-fledged httpd
<ebel>
(heck could even do it with iptables probably, redirect all incoming port 80 to port 8080/etc)
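The reverse-proxy route can be as small as one nginx vhost. This is a hypothetical sketch, assuming the daemon's gateway listens on its default 127.0.0.1:8080 and `gateway.ipfs.io` is the hosts-file override discussed above:

```nginx
# Illustrative nginx vhost fronting a local ipfs gateway (adjust names/ports to your setup).
server {
    listen 80;
    server_name gateway.ipfs.io;          # the /etc/hosts override from above

    location / {
        proxy_pass http://127.0.0.1:8080; # go-ipfs gateway's default address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```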
<ebel>
what does gateway.ipfs.io do? reverse proxy or something?
grosscol has joined #ipfs
<AphelionZ>
daviddias whyrusleeping: either of you around today?
qgnox has quit [Read error: Connection reset by peer]
anco has joined #ipfs
<lgierth>
ebel: yeah nginx, for ssl and all that
<lgierth>
it's the same as ipfs.io
<lgierth>
same vhost
<lgierth>
we're just keeping gateway. around for historical reasons, and so that people can point their cname records at it
<lgierth>
anycast soon too :D
<ebel>
I noticed there were a few "ipfs over http" things on ipfs.io. Is there a preferred method/URL scheme/prefix?
<ebel>
like, so I use ipfs.io/ipfs/X rather than gateway.ipfs.io/ipfs/X ?
koalalorenzo has quit [Quit: This computer has gone to sleep]
infinity0 has quit [Ping timeout: 268 seconds]
koalalorenzo has joined #ipfs
koalalorenzo has joined #ipfs
koalalorenzo has quit [Changing host]
infinity0 has joined #ipfs
koalalorenzo has quit [Client Quit]
infinity0 has quit [Remote host closed the connection]
<lgierth>
yeah ipfs.io is preferred, but gateway.ipfs.io is promised to work as long as ipfs.io
infinity0 has joined #ipfs
<lgierth>
one major principle of ipfs is that we don't break links :)
reit has quit [Ping timeout: 240 seconds]
infinity0 has quit [Remote host closed the connection]
<lgierth>
right when i get home (coffee shop closing)
aquentson has joined #ipfs
<kpcyrd>
do you think we can get that into 4.6?
<lgierth>
nope, rc1 went out yesterday
<lgierth>
will have to be 0.4.7
reit has joined #ipfs
<kpcyrd>
kk
<lgierth>
no more docker changes that might break things in this one ;)
<lgierth>
ok gotta run!
s_kunk has quit [Ping timeout: 240 seconds]
kenshyx has quit [Quit: Leaving]
n0z has joined #ipfs
Encrypt has joined #ipfs
Guest159402[m] has joined #ipfs
matoro has quit [Ping timeout: 240 seconds]
Encrypt has quit [Quit: Quit]
JustinDrake has joined #ipfs
espadrine has quit [Ping timeout: 260 seconds]
ygrek has joined #ipfs
<AphelionZ>
what's to stop somebody from overwriting my ipns name?
<AphelionZ>
is it just leased for a certain amount of time anyway?
<lgierth>
they'd need your private key
<whyrusleeping>
AphelionZ: yeah, ipns names are validated with a signature
<whyrusleeping>
to overwrite it they would have to be able to forge a signature (requiring them having your private key)
<MikeFair>
I'm still boggled by how that black magic works; changing the thing the ipns name points to without changing its address; and the code to check the private key
<MikeFair>
I'd like to add a time-based one-time password or "passphrase" component to the updater; that way the private key can be distributed more freely (shared within a community of people who still need to know the passphrase or have their token-generated number)
<AphelionZ>
lgierth: whyrusleeping: i might need some guidance on how I can create an ipns for each of my app's users. Should I create a new private key for each of them?
<whyrusleeping>
MikeFair: we're going to add new validity schemes that will allow that sort of workflow without actually sharing a key (through key signing and web of trust type mechanisms)
<whyrusleeping>
AphelionZ: You could use `ipfs key` to create a new key for each user
<AphelionZ>
aha!
<AphelionZ>
ok
<whyrusleeping>
but using that comes with a caveat
<whyrusleeping>
you have to manually republish ipns records once a day for those
<whyrusleeping>
we don't have the republisher setup for keystore keys yet
<whyrusleeping>
its on our shortlist of things to do
<AphelionZ>
ok so I'll have to do some sort of cron-y type thing for that for now
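A sketch of that per-user workflow with the `ipfs key` command mentioned above, assuming a local go-ipfs daemon; the user name and the CID placeholder are made up, and the cron line is the manual daily-republish workaround being discussed:

```shell
# Hypothetical per-user ipns setup (needs a running daemon; <some-cid> is a placeholder).
ipfs key gen --type=rsa --size=2048 alice        # creates a named key, prints its ipns id
ipfs name publish --key=alice /ipfs/<some-cid>   # point alice's ipns name at her content

# Keystore keys aren't auto-republished yet, so re-publish at least daily, e.g. in cron:
#   0 3 * * * ipfs name publish --key=alice /ipfs/<some-cid>
```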
<AphelionZ>
I'd love to do EVERYTHING in the browser but I need server support for other stuff anyway
<MikeFair>
AphelionZ: I didn't see a way to use a provided key through the JS API yet
<MikeFair>
I couldn't see how to get the js function ipfs.name.publish to take the -key parameter (but I figured it was my own ignorance)
<MikeFair>
What's the longest TTL on an ipns entry; isn't that a parameter?
<MikeFair>
whyrusleeping: So that's why my domain records keep disappearing! (I saw the republish entries in the daemon console log and wasn't clear what was happening)
<MikeFair>
Is there any way we can provide the random data that key gen uses?
<whyrusleeping>
MikeFair: so theres a reason why the ttl parameter is labelled experimental
<whyrusleeping>
there are really two different TTLs going on
<whyrusleeping>
the ttl option on the command specifies how long the record itself is valid for
<MikeFair>
I haven't tried using the ttl yet, just saw it there; figured as long as my daemon was up it'd republish
<whyrusleeping>
the other ttl is one thats set by each node when they receive a record
<whyrusleeping>
and its 24 hours
<whyrusleeping>
basically, no matter what ttl you set on your record, a node will generally only hold it for 24 hours
<MikeFair>
hehe; only a problem for my local "node" which was the original source provider ; once it forgets; everything forgets ;)
<MikeFair>
to the "providing the random data" that way a key could be regenerated rather than stored
<MikeFair>
The key could be generated based on something like an email address
<MikeFair>
then that becomes the hash for sending content to that email address
mdom_ has joined #ipfs
vapid is now known as vapidEXECUTIONER
A124 has quit [Quit: '']
vapidEXECUTIONER is now known as edgemster13
mdom has quit [Ping timeout: 240 seconds]
JustinDrake has quit [Quit: JustinDrake]
A124 has joined #ipfs
chungy has joined #ipfs
matoro has joined #ipfs
infinity0_ has joined #ipfs
infinity0 is now known as Guest90591
infinity0_ has joined #ipfs
Guest90591 has quit [Killed (card.freenode.net (Nickname regained by services))]
infinity0_ is now known as infinity0
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
aquentson1 has joined #ipfs
aquentson has quit [Ping timeout: 260 seconds]
jkilpatr has joined #ipfs
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
ianopolous has joined #ipfs
infinity0 has joined #ipfs
ShalokShalom has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
aquentson has joined #ipfs
aquentson1 has quit [Ping timeout: 240 seconds]
Encrypt has joined #ipfs
<A124>
hey, why: what you use as syntax higligh color scheme for github? I think it was you with the dark.
ShalokShalom has quit [Remote host closed the connection]
<Mossfeldt>
Nice. Will try that tomorrow.
adrianovalle[m] has joined #ipfs
wallacoloo_____ has joined #ipfs
aquentson1 has joined #ipfs
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
aquentson has quit [Ping timeout: 255 seconds]
<MikeFair>
whyrusleeping / dignifiedquire: I think the answer to what happens when you "ipfs cat" a directory, in the "smartlink" scenario, is you get the output from "ipfs ls"
aquentson has joined #ipfs
<whyrusleeping>
MikeFair: Hrm... why?
<MikeFair>
Ideally something that code could iterate through, like an array of "file" entries
yoh has joined #ipfs
<MikeFair>
whyrusleeping: 'cat' is about dumping the contents to stdout; 'get' is about dumping the contents to a file; the equivalent to the DAG is something like being an inode entry
aquentson1 has quit [Ping timeout: 240 seconds]
<MikeFair>
'cat' file dumps contents of a file to stdout
<yoh>
Hi! could someone point to description of what ipfs's hash is? the main question -- is hash for the same file will be the same for multiple ipfs nodes? or if having a file, could I compute its hash and ask ipfs if anyone provides such a file?
<MikeFair>
Well the contents of a "directory" is a list of file entries
<whyrusleeping>
MikeFair: yeah, so should it just list the names?
<MikeFair>
whyrusleeping: ls --list? (the extended version that gives the hash ids; perhaps size too)
<whyrusleeping>
yoh: yes, the hash for a given file will be the same no matter who computes it, so long as the parameters of the 'add' are the same
<whyrusleeping>
MikeFair: in what format? json? protobuf? cbor?
<whyrusleeping>
plaintext?
<yoh>
whyrusleeping: lovely -- what is the algorithm?
<whyrusleeping>
yoh: the default uses sha2-256 as the hash function, and chunks files into blocks of 256k bytes, then builds a balanced merkledag out of them
<whyrusleeping>
the resultant hash is the hash of the root node of that dag
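That determinism can be illustrated without ipfs at all: chunk a file into 256 KiB blocks, hash each chunk, then hash the list of chunk hashes. This is only a toy of my own (real unixfs wraps chunks in protobuf dag nodes and multihashes them, so the digest below is not an ipfs CID), but anyone running it on the same bytes gets the same root:

```shell
# Toy chunk-and-hash, mimicking the shape of ipfs's default add (not its exact encoding).
head -c 1048576 /dev/zero > sample.bin     # a 1 MiB test file
split -b 262144 sample.bin chunk_          # 256 KiB chunks, go-ipfs's default chunk size
sha256sum chunk_* | awk '{print $1}' \
  | sha256sum | awk '{print $1}'           # "root" = sha2-256 over the child digests
```

Re-running it, or running it on another machine, prints the identical 64-hex-digit digest — which is the property that lets a node ask the network "who has this hash?".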
chris613 has joined #ipfs
<MikeFair>
whyrusleeping: I think you'd have options; where the default is plaintext; but the idea being you're really getting a "structured document", a DOM if you will
<yoh>
whyrusleeping: got it -- thanks a bunch!
<whyrusleeping>
MikeFair: where does the structure come from? is that hardcoded?
<MikeFair>
whyrusleeping: So for different APIs, ipfs.cat of a directory would return an iterable structure that the API handles
<MikeFair>
whyrusleeping: This is the "layer of above DAG but still within IPFS" idea
<whyrusleeping>
MikeFair: right, same question applies
<MikeFair>
what I called the "SmartLinks"
<whyrusleeping>
Also, i'm not so sure i like this, there's no easy way for me to tell if the output is because i cat'ed a directory, or because i cat'ed a file that contains the same data as catting a directory would produce
<MikeFair>
For the most part I think it'd be JSONesque ; I'd like XML, but that gets hard to describe in typical object classes
<MikeFair>
whyrusleeping: that's why you have the lower level apis still
<whyrusleeping>
but higher level apis shouldn't just cause confusion
<whyrusleeping>
They should be very clear still
<MikeFair>
whyrusleeping: For an application that is trying to describe its data, the distinction between those two is rather pedantic/moot
<MikeFair>
It depends on what you think the point of the data access is, I guess
<MikeFair>
I'm suspecting that code doing this is 'iterating' through a structured dataset of some kind
<MikeFair>
while (haventreachedaleaf) { follow links and do something; }
<whyrusleeping>
why not just have all links to files have a label next to them indicating that it's a file and that they should cat it?
<whyrusleeping>
make files the special case, and default to just using the dag
<lgierth>
(agree that cat should not ls)
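For what it's worth, the shipped CLI already keeps the two verbs apart; a sketch against a local daemon (placeholder CIDs, illustrative behavior):

```shell
# ipfs ls enumerates a directory node's links; ipfs cat streams a file's bytes.
ipfs ls /ipfs/<dir-cid>     # one "<child-cid> <size> <name>" line per link
ipfs cat /ipfs/<file-cid>   # raw file contents on stdout
ipfs cat /ipfs/<dir-cid>    # errors, since the node is a directory
```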
<MikeFair>
That works too
<MikeFair>
but there's nothing in the "link" format description that gives that info atm (and my attempts to add it are what prompted the question)
<MikeFair>
I consider files and directories two instances of types of structured data that can be linked to in the DAG
Encrypt has quit [Quit: Sleeping time!]
<MikeFair>
For example, it'd be like adding "mime-type" to the link; something that tells an application "how" it should handle the DAG entry it gets back
aquentson1 has joined #ipfs
cxl000 has quit [Quit: Leaving]
Mossfeldt has quit [Quit: Bye]
<MikeFair>
(DAG entry and Data) -- not sure if you were here for it; but two links pointing to the same "hash" could result in different responses (one could return the raw DAG entry itself; the other the "file" pointed to by that entry)
aquentson has quit [Ping timeout: 260 seconds]
<MikeFair>
I'm curious; in your mind's eye; given that everything in unixfs is a file; what does 'cat' do?
ylp has quit [Ping timeout: 255 seconds]
<lgierth>
directories are not files
<lgierth>
anyhow, cat returns a reader for the content read from the concatenated blocks
leeola has quit [Quit: Connection closed for inactivity]
<MikeFair>
hard drives are files; serial ports are files; sockets are files; everything is represented as a file
ylp has joined #ipfs
<MikeFair>
So what are the contents of a directory entry?
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
ylp has quit [Ping timeout: 255 seconds]
<MikeFair>
I just see a strong consistency that happens when everything is a file, and cat always returns something to represent the contents of that thing; it makes little sense to me that `cat /dev` has no contents while `cat /dev/hda1` does
<MikeFair>
or cat /proc/12345/mem
<MikeFair>
that's not a "file" either really; but you can cat it :)
<Kubuxu>
MikeFair: it is file , just a virtual one
<MikeFair>
then why wouldn't a directory also contain a virtual file representing the directory entry?
<Kubuxu>
real meaning of "everything is a file" is: "everything is in virtual filesystem"
<Kubuxu>
and directory is in virtual filesystem
<MikeFair>
I'm simply saying that I find "'cat' doesn't mean anything on a directory" inconsistent with that point of view
<lemmi>
MikeFair: ooooh yes. finally someone else sees the problem. i constantly try to cat a directory to view its contents
<Kubuxu>
point is, cat is a tool that calls `open` and then `read`
<Kubuxu>
on directory you have to call `open` and `readdir`
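That open/read vs. open/readdir split is easy to see on an ordinary filesystem, no ipfs involved; the directory and file names here are throwaway examples:

```shell
# cat does open()+read(), and read() on a directory fails with EISDIR;
# ls does open()+readdir(), which is the call that enumerates entries.
mkdir -p demo_dir
echo hello > demo_dir/file.txt
cat demo_dir 2>&1 || true   # fails: cat: demo_dir: Is a directory
ls demo_dir                 # lists file.txt
```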
<MikeFair>
So when I'm asked what should 'cat' on a directory do; I ask myself what are the logical contents of a directory thing? "some header data; and list of file entries"
aquentson has joined #ipfs
<MikeFair>
I can get behind that readdir is about reading the logical children of an entry
<MikeFair>
So then I'd expect the logical "read" of a directory to at least be "how many children; total size; (directory name perhaps)"
edgemster13 is now known as vapd
aquentson1 has quit [Ping timeout: 260 seconds]
<MikeFair>
I'd expect "read" data on a directory to also contain the addresses of its children (the readdir entry would be included in the larger "read" call; mostly because of the usability of that
<MikeFair>
it also seems to follow that 'read' on a directory is synonymous with 'readdir'
<MikeFair>
in using the read operation in that case; the virtual file represented by the directory's path name; what you are reading is a dirent
<MikeFair>
It's up to you to understand that though; just like it's up to you to understand reading a jpeg file structure when you read the cat.jpg path name
hoboprimate has quit [Quit: hoboprimate]
Encrypt has joined #ipfs
Encrypt has quit [Client Quit]
<MikeFair>
Relating it to graph terminology; everything is a "node"; some of these "nodes" contain named relationships to other nodes
ylp has joined #ipfs
<MikeFair>
It's consistent; no exceptions; using "readdir" on a "node" returns other nodes that are children of this node; if the node were a SQL table, its children would be rows; the children of those rows would be fields/columns
cemerick has quit [Ping timeout: 260 seconds]
<MikeFair>
the children of those fields/columns would be values; and if a value was a string, the children of that string would be letters
wallacoloo_____ has quit [Ping timeout: 255 seconds]
<MikeFair>
or maybe words, then letters ;)
<MikeFair>
One of the sad things I see about the way unixfs happens at the moment is that there isn't more opportunity for "readdir" behavior on other virtual files; for instance cat.jpg has a bunch of EXIF data in it that would be awesome to get access to directly
ylp has quit [Ping timeout: 255 seconds]
<MikeFair>
ls cat.jpg/*
aquentson1 has joined #ipfs
<MikeFair>
image_data, asegs/ ...
aquentson has quit [Ping timeout: 260 seconds]
krzysiekj has quit [Read error: Connection reset by peer]
ylp has joined #ipfs
tilgovi has quit [Ping timeout: 240 seconds]
Mizzu has quit [Ping timeout: 240 seconds]
matoro has quit [Ping timeout: 240 seconds]
ashark has quit [Ping timeout: 260 seconds]
galois_d_ has joined #ipfs
krzysiekj has joined #ipfs
ianopolous has joined #ipfs
tilgovi has joined #ipfs
matoro has joined #ipfs
galois_dmz has quit [Ping timeout: 240 seconds]
yar has quit [Remote host closed the connection]
yar has joined #ipfs
ylp has quit [Ping timeout: 255 seconds]
espadrine has quit [Ping timeout: 260 seconds]
mildred3 has quit [Ping timeout: 240 seconds]
ebel has quit [Ping timeout: 258 seconds]
ebel has joined #ipfs
galois_d_ has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
<appa>
Hi, is there any risk in running a gateway? I'm searching for tutorials on setting up a public gateway, but if anyone knows a good reference...