<MikeFair>
I like the "And the Internet can be so much more than that!"
<MikeFair>
intersperse some related stats followed by a feature ; then another stat
<MikeFair>
don't make me accumulate information from multiple slides to make your point
chris613 has quit [Quit: Leaving.]
<AniSky_>
Hm, this is supposed to be an "intro" to IPFS, answering three questions: what's the problem, how do you solve it, and why do I care.
<MikeFair>
So perhaps something like "Our data and usage is locked behind the closed doors of domain owners"
<MikeFair>
What's the opportunity; what difference will it make for me; how do I execute it -- is a different communication model
<deltab>
who are you aiming it at?
<MikeFair>
deltab: another important point
<MikeFair>
AniSky_: i've got a question directly for you that's kind of in that vein ; what made you type "ipfs daemon"
<MikeFair>
(motivations)
<AniSky_>
curiosity mainly
<MikeFair>
that's a good angle for the intro video
<MikeFair>
"Check it out"
<MikeFair>
I don't think we want people to believe they "need" to run ipfs ; we want people to "want" to run ipfs
<MikeFair>
Make it your personal pitch on what makes IPFS so great that we would all want to take an interest
<MikeFair>
My thinking is you also just might want to talk about the future opportunities to address desirable features by using p2p networks ; not specifically ipfs ; -- you could influence people's awareness of good use cases for p2p file systems beyond downloading music and movies
<MikeFair>
Why/How p2p systems make a difference to accessing our public records
<MikeFair>
Personal Storage that's actually Personal
<MikeFair>
Group Storage that doesn't involve an outside hoster
<AniSky_>
So the problem is that right now we don't have that as a product.
<AniSky_>
That's not a good idea for something advertising it as-is. Instead, I like the idea of "content-addressing"
<AniSky_>
I'll add in an animation for that
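For the animation's sake, content addressing is easy to demonstrate concretely: the address is derived from the bytes themselves, so the same bytes always get the same name. A minimal Go sketch of the principle (plain SHA-256 here is just an illustration; real IPFS CIDs wrap a multihash plus codec info):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// contentAddress names data by a digest of the data itself -- the core
// of content addressing. Identical bytes always yield identical names.
func contentAddress(data []byte) string {
	sum := sha256.Sum256(data)
	return fmt.Sprintf("%x", sum)
}

func main() {
	a := contentAddress([]byte("hello"))
	b := contentAddress([]byte("hello"))
	fmt.Println(a == b) // identical content, identical address
}
```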
<MikeFair>
Which is why I was suggesting the "p2p is good" angle -- because it's wanting the features that convinces people to install the software
<AniSky_>
I see. Yeah, p2p has a bad rep. Prolly need to address that.
<AniSky_>
I want to make it really simple, so I'll break it down--first slide after the "it could be so much more" part will be "introducing InterPlanetary File System" and then the uses for it
<MikeFair>
Oh! An animation on "how it works!" targeting potential techies would be awesome ; give these guys a break from repeating the info that's in all those whitepapers the rest of us can't seem to find/wade through/read
<MikeFair>
but as a separate "learn more" video
<MikeFair>
A video on "The gritty internal details" would be ideal ; it takes education for "centralized top down" thinking to be realigned with p2p bottom up
<MikeFair>
Learning how to accept "The content address of what you think of as a file will change with every change you make to that thing you think of as a file"
pfrazee has quit [Remote host closed the connection]
<MikeFair>
you "publish" to ipfs ; not "copy to"
<MikeFair>
(think of it like "uploading to" ipfs)
<MikeFair>
it's like your website ; your local dev changes don't make a difference until you post them
<MikeFair>
To be frank ; ipfs doesn't have many working features at all of interest to an ordinary person -- it's all still too cumbersome
<MikeFair>
at the moment I mean
arkimedes has quit [Ping timeout: 240 seconds]
onabreak has joined #ipfs
<MikeFair>
I'd like to see repositories like NPM / Debian / GenToo / RedHat use ipfs as a CDN
<MikeFair>
I think that sits in ipfs' current wheelhouse and gives people an actual immediate use case for ipfs daemon
Caterpillar2 has joined #ipfs
<MikeFair>
If there was a "help ipfs with a video" project; I'd like to suggest "leaving people wanting to execute 'ipfs daemon'" ought to be the goal
<MikeFair>
AniSky_?
<AniSky_>
Ah sorry. I believe we already have a nitty-gritty video right now.
<MikeFair>
Btw; who runs ipfs.io whyrusleeping; is that you?
<AniSky_>
I thought there was a repo for it :P
<MikeFair>
AniSky_: well there's a talk ; not a friendly animation with soothing music ;)
<AniSky_>
Well duh, however I only just learned after effects today :P
<AniSky_>
So I'm going to take it slow here and do a 60 second thing.
aquentson has quit [Ping timeout: 240 seconds]
<MikeFair>
I was just thinking it would be great to have a big old "XXX Nodes; hosting YYYY GB; and ZZZZ TB capacity. Add yours today!"
<AniSky_>
:P I think I asked if that was possible at some point.
<MikeFair>
AniSky_: A 60 second clip that talked up the group and the infrastructure would be a great help
<MikeFair>
AniSky_: as deltab asked ; who is the audience (what level communicates to them)? What is the consequence of them having seen the video?
<AniSky_>
Audience is anyone who goes to ipfs.io, consequence is they are intrigued or curious.
zippy314 has quit [Ping timeout: 260 seconds]
<AniSky_>
This ideally could be published with media outlets, etc.
<AniSky_>
So not too technical.
<MikeFair>
How does intrigued / curious help ipfs at this point in the state of affairs
<MikeFair>
or make it and pull it off the shelf a bit later
<MikeFair>
Imagine in your wildest success dreams it does exactly that; what happens next?
<MikeFair>
You've now got people intrigued and curious about ipfs
<MikeFair>
what do they do?
<MikeFair>
what impact does their activity have on the project
<MikeFair>
I think your video(s) could make a huge difference ; so go for it!
<MikeFair>
I think the right audience for the moment is reaching technical users ; people who download technical and public information
<MikeFair>
People who use things like NPM, RPM, APT-GET, YUM, etc
<MikeFair>
what do you think AniSky_?
<AniSky_>
So people intrigued visit ipfs.io and then see a button "Try IPFS"
<AniSky_>
From there it's not my problem :)
<MikeFair>
Those people will bring skillz and successfully deal with the current usage pains :)
<AniSky_>
Well, those people probably don't need the same type of video
<AphelionZ>
is there anything to run ipfs locally that comes with a nice UI, etc
<AphelionZ>
for the "lay person"
<AniSky_>
Not yet :(
<AniSky_>
I wanted to make one.
<MikeFair>
AniSky_: no it becomes a problem for the very people leading the charge on coding the project :)
<MikeFair>
AniSky_: there kind of is localhost:5001
chased1k_ has joined #ipfs
chased1k has quit [Ping timeout: 255 seconds]
asyncsrc[m] has joined #ipfs
<MikeFair>
AniSky_: but agreed a great UI would really help
arkimedes has joined #ipfs
<AniSky_>
When you have illustrator, premier, after effects, photoshop, and media encoder open on a laptop at the same time (explosions)
gts has joined #ipfs
gts has quit [Client Quit]
<AniSky_>
dignifiedquire did anything come of your updated logo?
<MikeFair>
Woot!!
* MikeFair
makes ipfs dag do something that it's intended for! \o/
gts has joined #ipfs
gts has quit [Client Quit]
anewuser has quit [Quit: anewuser]
<MikeFair>
AniSky_: Kind of like when I'm screen sharing with Skype, Unity, Visual Studio, Blender, and 100 browser tabs :)
<AniSky_>
Except with Adobe products
<AniSky_>
Like I could run all of the above and not be anywhere near the fan usage I'm at rn
<MikeFair>
hehe; I concede - you win ;)
<MikeFair>
or lose in this case
<MikeFair>
The laptop I'm on only has 3G so it's quite painful ; I've actually not tried doing all that at once anywhere
<MikeFair>
anymore
MDude has quit [Ping timeout: 276 seconds]
<whyrusleeping>
kevina: one last comment on the pin add and then LGTM
<whyrusleeping>
MikeFair: protocol labs runs the ipfs.io gateways, lgierth is the one in charge of them
<kevina>
whyrusleeping: all right, I'll look into using time.Ticker, I was hoping to avoid it though
<MikeFair>
I was just noticing that a "how large is the current network" piece would be cool ; I'll mention it to him
<whyrusleeping>
why? its the right tool for the job
<kevina>
whyrusleeping: I will push that then go to bed :)
dan0_0 has left #ipfs ["WeeChat 1.0.1"]
<whyrusleeping>
cool cool, it's like 1am over there
<MikeFair>
whyrusleeping: East Coast US?
<kevina>
whyrusleeping: just the time it will take to figure it out, that's all
<whyrusleeping>
I'm west coast
* MikeFair
is Los Angeles
<kevina>
yes its 1am, but I am good for another hour or so :)
<whyrusleeping>
lol
<MikeFair>
whyrusleeping: Has anyone produced good strategies on search indexing yet?
<whyrusleeping>
MikeFair: not really, ipfs-search.com is the only one i know of right now
<whyrusleeping>
the only real 'strategy' i've thought of so far is to have a bunch of dht nodes listening for 'provide' announcements
<whyrusleeping>
and collecting those all together
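That collection step might look something like the following sketch: aggregate "provide" announcements into a cid-to-providers map (Announce and collect are hypothetical names; the actual DHT wiring is omitted):

```go
package main

import "fmt"

// Announce models a DHT "provide" message: Peer says it can serve CID.
type Announce struct{ Peer, CID string }

// collect drains announcements and aggregates them into cid -> providers,
// the "listen for provides and collect them together" idea above.
func collect(in <-chan Announce) map[string][]string {
	providers := map[string][]string{}
	for a := range in {
		providers[a.CID] = append(providers[a.CID], a.Peer)
	}
	return providers
}

func main() {
	ch := make(chan Announce, 3)
	ch <- Announce{"peerA", "QmX"}
	ch <- Announce{"peerB", "QmX"}
	ch <- Announce{"peerA", "QmY"}
	close(ch)
	fmt.Println(len(collect(ch)["QmX"])) // two providers announced QmX
}
```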
<MikeFair>
I've got a different one; though it's _really different_
<MikeFair>
A script bot that "crawls" the files in a directory node
<MikeFair>
So here are the main things I've wanted to avoid: bringing all that data into one place; making the nodes store "everything"
<MikeFair>
the scary proposal here is I'm talking a small script that would move from node to node and get executed at each node tracing through the data at each local repo
<MikeFair>
and collecting results
* MikeFair
posting results to a DAG object the user is watching
mildred3 has joined #ipfs
muvlon has quit [Ping timeout: 245 seconds]
<MikeFair>
It's a WIP ; Freenet publishes tags on files and generates the tags when you first upload them (in addition to manually adding them)
<AniSky_>
MikeFair, don't the DAG links show you block sizes?
ygrek_ has joined #ipfs
<AniSky_>
So you wouldn't have to actually store the data?
mildred2 has quit [Ping timeout: 245 seconds]
<AniSky_>
Plus, isn't IPFS "one giant Merkle DAG" which would make it relatively easy to traverse?
<MikeFair>
AniSky_: I'm storing the "file address" of the block that got the hit
<MikeFair>
AniSky_: Yes; but think about what "traversal" means in terms of I/O; at your local machine you're pulling the data in; processing it; then requesting the next part
<AniSky_>
So you're talking about building an index?
<MikeFair>
assuming by traversal you mean; I have a script that runs on my machine and crawls it
<MikeFair>
AniSky_: Some form of Google for IPFS
<AniSky_>
Sure, so building an index.
ulrichard has joined #ipfs
<MikeFair>
AniSky_: Yes; but a distributed index where code moves from node to node through the index ; not the index data coming to the code
<MikeFair>
The code doesn't "lookup" the index
<AniSky_>
You're talking about a tiny implementation deal. The overall architecture is exactly the same.
<MikeFair>
(it doesn't execute ipfs get indexData)
<AniSky_>
You're tokenizing a file and then storing it in a database.
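The tokenize-and-store step AniSky_ describes is a plain inverted index; a minimal Go sketch (the Qm-style CIDs are placeholders, not real hashes):

```go
package main

import (
	"fmt"
	"strings"
)

// InvertedIndex maps a lowercased token to the content addresses of
// the blocks it appears in -- the "tokenize a file and store it in a
// database" step, reduced to an in-memory map.
type InvertedIndex map[string][]string

// Add tokenizes text and records cid under each distinct token.
func (ix InvertedIndex) Add(cid, text string) {
	seen := map[string]bool{}
	for _, tok := range strings.Fields(strings.ToLower(text)) {
		tok = strings.Trim(tok, ".,;:!?")
		if tok != "" && !seen[tok] {
			seen[tok] = true
			ix[tok] = append(ix[tok], cid)
		}
	}
}

// Lookup returns every cid indexed under the given term.
func (ix InvertedIndex) Lookup(term string) []string {
	return ix[strings.ToLower(term)]
}

func main() {
	ix := InvertedIndex{}
	ix.Add("QmAAA", "Pictures of cats")
	ix.Add("QmBBB", "Cats and dogs")
	fmt.Println(ix.Lookup("cats")) // both blocks mention "cats"
}
```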
<MikeFair>
AniSky_: but that implementation detail is the difference between your computer having to download/maintain 100TB of data to execute a search vs a 100k script flying around the web
<MikeFair>
AniSky_: Your search is a custom script
<AniSky_>
Wait... you realize your computer will need each file at some point or another to index it, right?
<MikeFair>
AniSky_: that's what I'm saying ; it won't
<AniSky_>
um how?
<MikeFair>
AniSky_: Nodes will only maintain indexes on directory nodes they host
<MikeFair>
AniSky_: correction "uploaded for indexing"
<AniSky_>
Ah. I think you'll have a fun time trying to coordinate that but sure.
<MikeFair>
(I'm thinking hosting in the "pinning" sense not the in my repo sense)
<MikeFair>
So it's a command for someone to say "publish this to the index"
<MikeFair>
that makes it readable
<AniSky_>
What I was going to suggest, if you really want to be like Google, was simply index files as you go and trash them.
<MikeFair>
AniSky_: That's completely untenable in an Interplanetary context ; you don't have the collective I/O for it
<AniSky_>
Search, by nature, should not be possible in the first place without a centralized index.
<AniSky_>
You could try what you described but I assure you it's going to be a lot of fun to coordinate indexing nodes.
<MikeFair>
AniSky_: but that's so far all we've mostly come up with ; either precache the index and stick the tags in the DAG ; or suck up bandwidth sucking down data and files I'll never use
<MikeFair>
AniSky_: That's what I'm saying this does ; it makes a distributed index
<MikeFair>
AniSky_: The script is the "centralizing agent" but it moves
<AniSky_>
Yes but _how_? What if two nodes have the same thing? Do they both have the same index then? How do you trust the index of a node? Hell, how do you know where to start when searching?
<MikeFair>
AniSky_: You don't have to coordinate them ; you just need "permission" and an "execution grant" from them
<MikeFair>
AniSky_: No the Nodes that had the command "Index this" executed on them have it
<MikeFair>
AniSky_: It's like pinning
<MikeFair>
AniSky_: But instead it's "indexing"
<MikeFair>
AniSky_: So a node hosts an Index for a Block
muvlon has joined #ipfs
<MikeFair>
AniSky_: it's a little boolean flag
<MikeFair>
AniSky_: and the node announces that it has this index in the network in its "provides" list
<AniSky_>
Ehm, again, trust, coordination, and entry point.
<MikeFair>
AniSky_: it's a feature of ipfs daemon to support javascript execution
<MikeFair>
AniSky_: Or at least to search its local index
<MikeFair>
AniSky_: it's asking for permission to execute/consume node resources to search its index ; the node can say no
<MikeFair>
AniSky_: but given that it was instructed/asked to host/provide this index ; it ought to say yes if it can
<MikeFair>
The script/execution carries with it an ipns address and key to post its results
<MikeFair>
the point is that effectively all indexing nodes practically simultaneously search their local index because they were asked to and if they got a hit ; publish the answer to a predetermined fixed address (a mailbox for that node)
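MikeFair's scatter-gather idea, reduced to a single-process sketch: each indexing node searches its own local index concurrently and posts hits to one shared "mailbox". All names here (Node, Search, the CIDs) are hypothetical; the real version would involve separate daemons and an IPNS results address:

```go
package main

import (
	"fmt"
	"sync"
)

// Node holds only its own local index: token -> local block CIDs.
type Node struct {
	ID    string
	Index map[string][]string
}

// Search fans the same query out to every indexing node; each node that
// gets a hit appends to the shared mailbox (standing in for the
// predetermined results address the script carries with it).
func Search(query string, nodes []Node) []string {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		mailbox []string
	)
	for _, n := range nodes {
		wg.Add(1)
		go func(n Node) {
			defer wg.Done()
			if hits, ok := n.Index[query]; ok {
				mu.Lock()
				mailbox = append(mailbox, hits...)
				mu.Unlock()
			}
		}(n)
	}
	wg.Wait()
	return mailbox
}

func main() {
	nodes := []Node{
		{"nodeA", map[string][]string{"cats": {"QmAAA"}}},
		{"nodeB", map[string][]string{"dogs": {"QmBBB"}}},
		{"nodeC", map[string][]string{"cats": {"QmCCC"}}},
	}
	fmt.Println(len(Search("cats", nodes))) // 2 hits
}
```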
<MikeFair>
What the nodes are announcing is "I provide a search index for HASH"
<MikeFair>
(which means I can read the data)
<MikeFair>
(The important thing there is that most HASH addresses are in the middle of files; so you can't really tell what the file is to index it ; you have to start at the root of the file; the top level HASH)
<MikeFair>
And that's another reason it's dynamic code ; the swarm doesn't understand file type
<MikeFair>
So if I want to do a search for "pictures of cats" ; that's obviously different than HTML
<MikeFair>
and in that case no need for the uppercase; 'cid' it is :)
<MikeFair>
thanks
Caterpillar has joined #ipfs
<chrono[m]>
:)
<MikeFair>
on a scale of fuzzy bunnies (0) to SkyNet(10) where do you think roaming scripts that run in an ipfs daemon execution zone fall?
<MikeFair>
The good news is the script only requires local I/O
<MikeFair>
and all of that is limited/mediated by the ipfs vm
<MikeFair>
The script itself is hosted in ipfs; so the ipfs daemon does something like "vm start cid"
Caterpillar2 has joined #ipfs
<chrono[m]>
Everything that has by design the potential to wreak uncontrollable havoc within and/or with the sum of nodes may lean towards SkyNet but on the other hand I can also see benefits from being able to have something like that.
<AniSky_>
FUSE version check fails on mac, even with latest version.
henriquev has quit [Quit: Connection closed for inactivity]
Mateon2 has joined #ipfs
<chrono[m]>
at least in terms of accessibility I'm convinced that something like convenient search must be in place for everything information related, in order to be quickly accepted and used by most people. In terms of ipfs it's a bit like the www itself in 1994, we already had content but no convenient search engine to actually let us find it.
<MikeFair>
chrono[m]: exactly; when I first started thinking of this ; all the powerful scary SkyNet hairs screamed on the back of my neck (but it was still sooo cool)
<MikeFair>
So I went ok ; "lock downs" :)
<MikeFair>
chrono[m]: and I like that someone else isn't necessarily in control of the search algorithm (aka all things optimized for Google's PageRank)
<MikeFair>
in practice there's just going to be a few major scripts we all reuse/share but the number will grow as content grows
<chrono[m]>
MikeFair: full ack to algo control
Mateon3 has joined #ipfs
<MikeFair>
I also needed a bot; the equivalent of an auto-responder to a node -- I thought it was "nicer" on the system as a whole if the script bot ran at the same nodes the update was posted (thinking in ipns terms) -- rather than having to notify my local machine (adding i/o and latency)
Mateon1 has quit [Ping timeout: 255 seconds]
Mateon3 is now known as Mateon1
<MikeFair>
what's it take to make a trustable script; I'm thinking a "nice 19"; no I/O except for the local ipfs node data; and an interpreted vm
<MikeFair>
Quiark_: This thing I'm talking through to "crawl" indexable content
<MikeFair>
Quiark_: I'm assuming there isn't already such an actual thing
<MikeFair>
Quiark_: You can ask a node to "execute me/this"
<MikeFair>
Quiark_: It then finds more nodes of interest and the code/script moves to the node; not the far node moving the data
<MikeFair>
script == 100k
<Quiark_>
oic. Sounds pretty bad from security POV
<MikeFair>
data == 100G
<MikeFair>
exactly ; so I'm saying "this is what I think it would take to do it"
arkimedes has quit [Ping timeout: 258 seconds]
<Quiark_>
feature #1: DDoS
<MikeFair>
it spends some serious bitSwap credit by the initiator
<MikeFair>
Quiark_: Security lockout feature #1 no I/O
<MikeFair>
Quiark_: the script can query the local repo of the Node it's running on ; that's it
<MikeFair>
it gets some RAM and CPU beyond that
Aranjedeath has quit [Quit: Three sheets to the wind]
<MikeFair>
again insanely restricted (think embedded ARM Arduino type envs)
<Quiark_>
yeah. Something akin to Ethereum VM
<MikeFair>
I haven't studied it; but it's likely the same thing
<MikeFair>
There are two use cases I have: 1) search IPFS
<MikeFair>
an end user would mark a cid for indexing
<MikeFair>
this is like pinning
<MikeFair>
it says that this node will accept index execution requests for those cids
<MikeFair>
it broadcasts that info as part of its "provides"
<MikeFair>
That's how nodes trade info / and scripts find other nodes of interest
<MikeFair>
the script has the command equivalent to "local node; would you please ask [othernode] to execute [cid] and tell me what it says"
<Quiark_>
but maybe search indexing should be built into IPFS rather than creating a generic platform for SkyNet
<MikeFair>
Quiark_: You can't do it right
<MikeFair>
Quiark_: what file types does it index?
<MikeFair>
Quiark_: how does it learn about new ones?
<MikeFair>
Quiark_: what's the criteria for results? every cid with that keyword? what order do they come back in?
<Quiark_>
all right
<MikeFair>
Quiark_: So yes there is indexing built-in with the DAG structure
<MikeFair>
my take on it is that it takes intelligent agents to populate meaningfully
<MikeFair>
And you need to discourage anyone from having a reason to try and suck every file through its local i/o
<MikeFair>
(a la Google webcrawler style indexing)
<MikeFair>
only someone with the resources of Google can do that right; which means you can only find things those algos will give you (and everyone clamors to game/pay to be part of their index)
<MikeFair>
I get the DDoS risk ; I can't think of a way around it ; if I can't ask the other nodes to do this for me, then I have to do it myself ; and sucking that i/o across the net is actually more expensive than publishing my 100k script and requesting the relevant nodes to execute it
<MikeFair>
so that's the background here
robattila256 has quit [Ping timeout: 258 seconds]
<MikeFair>
so my thinking is that for the most part an index node must host the entire file it claims to provide indexing for
<MikeFair>
the user requests to index the top level hash of the file data
<MikeFair>
and directories
<MikeFair>
Hosting indices gets you bitSwap :)
<MikeFair>
credit
<MikeFair>
The only thing the script can do is publish data results to an ipns address DAG structure
<MikeFair>
and read the locally indexed files on the node's repo
<MikeFair>
we'll say DAG results must be <= 1MB
<MikeFair>
maybe more ; but if you can generate data in a distributed way you can do bad things
<MikeFair>
So I'll just leave the concept there for the moment
<MikeFair>
I'm pretty sure it'll work
<MikeFair>
just need to work out the collection results part
<zandy[m]>
what are the active channels on orbit?
<kthnnlg>
Hi All, I'm having trouble with adding large files to the ipfs repository. Does anyone know if these issues are likely to be resolved soon? For example, I have a directory containing 100GB of binary data. Now, when I run `ipfs add -r datadir`, the process always fails about 5-10% through. The error messages differ. Sometimes I see "blockservice is closed client.go:247". Other times I see "unexpected EOF client.go:247". Finally, if I add the big
<AphelionZ>
its running on https but trying to call the localhost:5001 APIs
<AphelionZ>
however, those are http
<AphelionZ>
is there either: 1) an http gateway
<AphelionZ>
or 2) a way to run the 5001 api with a self signed cert?
<AphelionZ>
also any plans for gzip on the gateway? :)
<dspp[m]>
can you share a full repack of a video game on ipfs?
<lgierth>
you can share anything whose license permits redistribution
<cblgh>
well you can pretty much share anything that is bits tho right
<AphelionZ>
thats not enforced though ;)
sametsisartenep has quit [Quit: zzz]
<lgierth>
it's strongly encouraged though
<cblgh>
just that sharing pirate stuff makes ipfs seem bad
<AphelionZ>
and i think that once something is on ipfs it's hard to get off, no?
<cblgh>
only if it's popular AphelionZ
<cblgh>
people have to pin it, as that's not something that happens automatically iirc
<cblgh>
unless you access it through ipfs gateway?
<lgierth>
no they just have to access it
<SchrodingersScat>
requesting stuff through the gateway at least helps propagate it a little
<AphelionZ>
yeah the swarming makes stuff stick around in and of itself
<lgierth>
when they access it, that means their node fetches it, and thus is now a provider too
<AphelionZ>
lgierth: did you happen to see my questions above?
<whyrusleeping>
Note the code of conduct for discussion of copyrighted material on this channel
<whyrusleeping>
kthnnlg: What version of ipfs are you using?
<AphelionZ>
you have to assume people are going to use ipfs for illegal content
<dspp[m]>
ok thx i was just wondering cause i haven't seen it anywhere i just saw the usual mp4,mkv,mp3
<AphelionZ>
its like when they created diaspora and they were SO SURPRISED when ISIS was using it
<AphelionZ>
like, duh, of course ISIS is gonna use your distributed anonymous untraceable communication channel
<SchrodingersScat>
untraceable?
<AphelionZ>
well
<AphelionZ>
ok thats an exaggeration
<cblgh>
whyrusleeping: what's the coc?
<AphelionZ>
but my point remains. Diaspora should have seen that coming
<cblgh>
i mean discussing actual hypotheticals and boundaries should be encouraged right?
<cblgh>
rather than "WHAT IF I PUT UP <BLOCKBUSTER> ;) WINK"
ylp has quit [Quit: Leaving.]
<AphelionZ>
yeah i think it just means "here's a link to Frozen" is not cool
<whyrusleeping>
people will use ipfs for whatever they want, but we don't condone or support the discussion of its use for 'bad bits'
matoro has quit [Ping timeout: 255 seconds]
<AphelionZ>
i think simply being more aggressive in how you chunk stuff and split up and distribute the chunks across the swarm is your best bet to avoid the anti-use case of "oh crap I ran ipfs daemon and now I have all this pirated software on my computer"
<AphelionZ>
i certainly don't want to have the FBI raid me and look in my ipfs repo and discover anything unseemly
damongant has joined #ipfs
<whyrusleeping>
cblgh: yeah, discussion of boundaries and policy is just fine
<cblgh>
whyrusleeping: cool good to know, thanks for the link
wa7son has joined #ipfs
<cblgh>
hm i have some dumb questions that would probably be quick to answer
<cblgh>
i made a thing in go that uses ipfs, and i want to optimize it so that it can run on the earlier raspberry pis without running out of memory
<cblgh>
is there any way of bundling my go code with ipfs?
<cblgh>
right now all i do is just run ipfs daemon & then issue shell commands to do ipfs stuff
<cblgh>
so i guess i would like to know if it's possible to do at all, and in that case if there's any reference i could look at to get started with bundling my code + go-ipfs
<AphelionZ>
how do I do a 'hello world' with orbit?
<cblgh>
orbit is a chat app rather than a framework
<cblgh>
but basically your question is the same as i'm wondering, how i'd make a standalone ipfs browser app that could be reached & useful from one of the http gateways
<AphelionZ>
cblgh: the biggest thing for me right now is the API
<AphelionZ>
the gateway is https, my local api is http
<AphelionZ>
that's the only thing stopping it from being a fully operational battle station
galois_d_ has joined #ipfs
<whyrusleeping>
cblgh: the two different kinds of webapps youre seeing are really the same thing
<whyrusleeping>
orbit-web just embeds a javascript ipfs node in the page
<whyrusleeping>
(so its technically 'running a daemon' for you)
<lgierth>
i'd love to get relay done this sprint, but it's gonna be a bit much
<lgierth>
and revive utp
<lgierth>
that'd make 7 transports
<AphelionZ>
whyrusleeping: might that solve my issue?
<AphelionZ>
alternatively is there a way to run my local API under a self-signed cert?
<AphelionZ>
the remote app calling local ipfs apis thing seems to be a pretty good model, actually
<AphelionZ>
it gives you total data governance
<whyrusleeping>
Yeah, i do really like that model
<whyrusleeping>
we need a better permissions model though
<whyrusleeping>
don't want random webapps having access everything on your node
galois_dmz has quit [Ping timeout: 240 seconds]
skinkitten has quit [Ping timeout: 260 seconds]
skinkitten_ has quit [Ping timeout: 240 seconds]
<AphelionZ>
true true
<AphelionZ>
you could prrrrrobably pin permissions to an ipns key
<AphelionZ>
this key can read, write, etc... whatever ACL you want
<cblgh>
whyrusleeping: ohhhhhhh
<cblgh>
so it's doing the js version of what i want to do with my go service
cyanobacteria has quit [Ping timeout: 255 seconds]
<AphelionZ>
cblgh: yes the ipfs daemon itself opens up an HTTP api so you just need to call it from your app
<AphelionZ>
i think you were doing something kiiiiinda similar by invoking the shell commands via your go app
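The daemon's HTTP API described above can be called with plain curl. A minimal sketch, assuming the default API port (5001) and a placeholder hash; note that newer go-ipfs versions require POST for API calls, while older ones also accept GET:

```shell
# With `ipfs daemon` running, the API listens on 127.0.0.1:5001 by default.
curl -X POST "http://127.0.0.1:5001/api/v0/id"

# Add a file through the API (multipart upload):
curl -X POST -F file=@hello.txt "http://127.0.0.1:5001/api/v0/add"

# Read content back, substituting the hash returned by /add:
curl -X POST "http://127.0.0.1:5001/api/v0/cat?arg=<hash-from-add>"
```

This is what shelling out to the `ipfs` binary does under the hood, so a Go service can just speak HTTP to the daemon instead of spawning subprocesses.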
<cblgh>
well, what i'm saying is i want to embed the ipfs nodes in both js and go cases
<AphelionZ>
i don't think you're truly embedding the ipfs node in the case of javascript
<cblgh>
that's how i understood it from whyrusleeping's reply
<AphelionZ>
yeah whyrusleeping can you unpack that
<AphelionZ>
... embeds a javascript ipfs node in the page (so its technically 'running a daemon' for you)
anewuser has quit [Quit: anewuser]
Boomeran1 has quit [Quit: leaving]
Boomerang has joined #ipfs
jonnycrunch has quit [Quit: jonnycrunch]
<whyrusleeping>
ipfs is implemented in go and javascript
<whyrusleeping>
the javascript version can be run inside a webpage
<whyrusleeping>
the javascript stuff is still pretty early
<AphelionZ>
will that solve my issue with http/https?
<whyrusleeping>
and has trouble connecting to go clients
<whyrusleeping>
but it generally works and is getting better quickly
<whyrusleeping>
AphelionZ: likely, you wont be making any http requests to a local daemon
<AphelionZ>
i was hoping i could let my users decide to keep data local or to share it with the full ipfs system
ygrek_ has joined #ipfs
espadrine has quit [Ping timeout: 252 seconds]
phorse has joined #ipfs
s_kunk has quit [Ping timeout: 255 seconds]
atrapado_ has quit [Ping timeout: 276 seconds]
kthnnlg has joined #ipfs
cemerick has joined #ipfs
AniSky_ has joined #ipfs
anewuser has joined #ipfs
jkilpatr has joined #ipfs
jkilpatr_ has quit [Ping timeout: 255 seconds]
tilgovi has joined #ipfs
G-Ray_ has quit [Quit: G-Ray_]
lothar_m has joined #ipfs
atrapado_ has joined #ipfs
grosscol has quit [Ping timeout: 240 seconds]
Foxcool has quit [Ping timeout: 260 seconds]
<AniSky_>
hsanjuan I just had a thought about ipfs-cluster + operating on the block level. Would it not be possible, since theoretically the person adding something to the cluster should have a copy of it, to ask for a proof of retrievability as a part of the consensus?
pfrazee has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
<AniSky_>
Or something like that at least, where peers must prove retrievability for the leader to add them to the log.
<AniSky_>
That handles bad actors for the most part.
espadrine has joined #ipfs
cyanobacteria has joined #ipfs
<whyrusleeping>
proofs and consensus? sounds like a blockchain
<AniSky_>
Essentially.
<AniSky_>
I basically am suggesting Filecoin without the rewards mechanism and with private "adding" capabilities.
<AphelionZ>
cant wait until filecoin :D
matoro has quit [Ping timeout: 240 seconds]
<frood>
permissioned filecoin? why not use like tahoe or something?
<hsanjuan>
AniSky_ what is a bad actor for you?
<AniSky_>
hsanjuan Well, it's anything from a specifically malicious node to a faulty one that's not storing properly.
<AniSky_>
Just nodes that don't do their job in general.
<AniSky_>
Kubuxu ipfs update is failing for me (failed to acquire repo lock at /Users/meyerzinn/.ipfs/repo.lock) and the binaries are not updated on the website, I'll build from source.
<AniSky_>
hsanjuan sure
<lgierth>
you can grab the latest from dist.ipfs.io
<lgierth>
we'll see to updating the website
<lgierth>
would be great if the website just also grabbed the stuff from dist.ipfs.io
<AniSky_>
Had to downgrade FUSE :(
<AniSky_>
Well, that's great. Fuse 2.7.4 is not compatible with my OS lgierth
ShalokShalom_ has quit [Read error: Connection reset by peer]
pfrazee_ has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
agi78 has joined #ipfs
bwn has joined #ipfs
birarda has joined #ipfs
<birarda>
hi there!
<birarda>
I work at High Fidelity, a virtual reality startup
<AphelionZ>
whats up birarda
<birarda>
We're checking out IPFS to see if we might be able to integrate it with our distributed architecture
<birarda>
I'm doing some preliminary research before a meeting today and have some extra questions I don't have answers to yet
<birarda>
was hoping somebody might be able to answer
<AphelionZ>
I can try to help until one of the team members can jump in and correct my horribly wrong / incomplete answers :)
<birarda>
When somebody hits a gateway.ipfs.io link - I assume Protocol Labs is running a server that handles the DHT lookup and gets you a WebRTC connection to peers that have the blocks you need
<whyrusleeping>
birarda: hey! i was just reading about you guys the other day :)
<birarda>
Awesome!
<whyrusleeping>
if you use http to access a gateway.ipfs.io link, its using ipfs nodes that Protocol Labs runs to resolve and load the content
<birarda>
We started to build our own decentralized file storage and transfer system before stumbling upon IPFS, I'm now trying to figure out if we can save ourselves a ton of time by leveraging the great work already done here
<birarda>
Okay, so let's say I add a file to my local node
<whyrusleeping>
birarda: we would love to collaborate :)
<birarda>
and then I use the gateway to request it
matoro has quit [Ping timeout: 260 seconds]
<whyrusleeping>
okay
<birarda>
the ipfs nodes run by Protocol Labs load the content from my local node and, in a way, proxy it down to me?
<whyrusleeping>
yeap
<whyrusleeping>
you can also use the local gateway your node runs at localhost:8080
<whyrusleeping>
(or on any ipfs node)
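Putting the above together, the same content can be fetched three equivalent ways; a sketch with a placeholder hash (`QmHash`) and the default local gateway port:

```shell
# The same content, fetched three ways:
curl "https://gateway.ipfs.io/ipfs/QmHash"   # Protocol Labs' public gateway
curl "http://127.0.0.1:8080/ipfs/QmHash"     # your own node's local gateway
ipfs cat QmHash                              # directly over the ipfs protocol
```

The gateway nodes are ordinary ipfs nodes; the only difference is that they sit behind a public domain and IP.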
<birarda>
makes sense, thanks
<whyrusleeping>
our gateway nodes arent special at all, we just have a domain and public IP
<birarda>
gotcha, so it's effectively the same as the local node I'm running, just made into a helpful service by being publicly accessible
<whyrusleeping>
yeap, makes it real easy for people to play with and demo ipfs
<whyrusleeping>
plus, our website (ipfs.io) is actually hosted through ipfs
<birarda>
so it would be faster presumably to reach the node that has the file directly
<birarda>
if it's possible over HTTP
<birarda>
or using IPFS more directly
<birarda>
with one of your libraries or by following the protocol
<whyrusleeping>
right, http request direct to the node with data, or the preferred method of fetching via ipfs
<birarda>
okay, so one thing I'm specifically wondering about is the underlying protocol
<birarda>
it defaults to utp, right?
<whyrusleeping>
it works over a variety of transports, right now most ipfs nodes use tcp
<whyrusleeping>
we have utp support, but its disabled right now until i can verify a bug in the utp library we were using is gone
<birarda>
oh, interesting
<whyrusleeping>
ipfs also works over websockets, and webrtc
<whyrusleeping>
(in the browser)
<birarda>
so you're doing tcp hole punching?
<whyrusleeping>
yeap
<whyrusleeping>
we want to move away from it, but it hasnt been the biggest pain point yet
<whyrusleeping>
so pressure to do so hasnt been high
<birarda>
we use UDP hole punching throughout our applications but have always stayed away from TCP hole punching because I thought there were a variety of issues there
<whyrusleeping>
there *are* a variety of issues, lol
<birarda>
hah, okay
grosscol has quit [Quit: Leaving]
<whyrusleeping>
but we've worked around most of them rather effectively
<birarda>
I read the white paper yesterday - it suggests that you could use any transport protocol
<whyrusleeping>
yeap
<whyrusleeping>
the open bazaar guys got it working over tor
<__uguu__>
any docs on how to do that?
<__uguu__>
(ipfs over tor)
<birarda>
where does that hook in @whyrusleeping? I imagine they talk to the same bootstrap nodes using the existing protocol for that, and then get connected to peers and at that point assuming the peer is ready to use a different transport protocol they use that?
<birarda>
do they in effect create a private network or exist on the same network but only talk to nodes that can use tor?
<__uguu__>
if it's not on mainline ipfs network then why bother?
<whyrusleeping>
If i understand correctly they have their own private network
<whyrusleeping>
open bazaar is something like 10,000 nodes
<whyrusleeping>
They may choose to interop later (they havent shipped this yet)
<birarda>
to accomplish that I assume they just run their own bootstrap servers?
<whyrusleeping>
yeap
<whyrusleeping>
they do one or two other things to make the isolation a bit more stable, i cant quite remember
<birarda>
Interesting that you're using TCP hole punching between nodes - we were testing IPFS and were blocked in a hotel room apparently because the traffic looked like bittorrent
<birarda>
I chalked that up to it being UTP but maybe that's just a general P2P filter
<whyrusleeping>
yeah, thats probably a general P2P filter
<whyrusleeping>
all node to node communication is encrypted
<whyrusleeping>
so they probably just blocked on number of outbound connections
<whyrusleeping>
We're working on doing relaying
<birarda>
makes sense
<whyrusleeping>
so nodes can have other peers relay connections for them, avoiding weird NAT issues
<birarda>
Any sense for the total number of files/hashes in the DHT of the main IPFS network currently?
<whyrusleeping>
theres a lot... its hard to get an exact number as people bring their machines offline and online at different times
agi78 has left #ipfs [#ipfs]
<whyrusleeping>
but i know for a fact theres at least 400TB of data accessible that i've seen.
<whyrusleeping>
And lots more random hashes i havent seen
<AphelionZ>
whyrusleeping: 400TB inside of ipfs already?
<birarda>
Okay, so let's talk redundancy - are there any provisions for that currently or does that depend on something like Filecoin?
maciejh has quit [Ping timeout: 256 seconds]
<whyrusleeping>
AphelionZ: yeah, i've browsed through quite a few different large collections that were 5-6TB
<AphelionZ>
thats great
<birarda>
Basically, if I put a file on my local node, I know I can "pin" it
<birarda>
but if I shut that node down, I assume that file is unreachable
<whyrusleeping>
birarda: There are two (or three) different approaches to that now
matoro has joined #ipfs
<whyrusleeping>
first, is getting someone else to pin things for you. For which there are a couple groups making a paid service for
betei2 has joined #ipfs
<whyrusleeping>
Second, is Filecoin, as you said
<whyrusleeping>
Third, is running an ipfs cluster
<birarda>
filecoin would basically be a decentralized model of the first one, right
tilgovi has quit [Ping timeout: 256 seconds]
<whyrusleeping>
yeah
s_kunk has joined #ipfs
<whyrusleeping>
and ipfs cluster isn't quite ready yet, but it will basically be a co-op of ipfs nodes pinning a set of data
<AniSky_>
I'm also looking into using ipfs-cluster for a dropbox-like service, which I guess falls under the first category.
<AniSky_>
whyrusleeping what is the webui written in?
<whyrusleeping>
Yeah, thats the dht discovery and connection time
<whyrusleeping>
we're working on making it faster
<whyrusleeping>
and i'm quite confident we can get it down to a second in most cases
<AniSky_>
birarda whyrusleeping couldn't you theoretically make a private network to further reduce connection latency?
<birarda>
no worries, I think we could find a short circuit if need be
<birarda>
yup, exactly AniSky_
<birarda>
okay, I have a couple more questions if you don't mind? This has helped out a ton!
<whyrusleeping>
birarda: you should send us an email, juan and the others would love to chat
<AniSky_>
birarda shoot
<whyrusleeping>
I've got to head out, will be back in ~30min
<birarda>
thanks @whyrusleeping!
<birarda>
I'll shoot an email this afternoon once I know more about the direction we are heading
<whyrusleeping>
but go ahead and say hi at contact@protocol.ai
<AniSky_>
birarda I will warn you I'm not an expert here but I have been using IPFS for a bit and breaking things :)
<birarda>
I'm wondering about the inter planetary naming system
<AniSky_>
Ah, that's a fun part ;)
<AniSky_>
It's a DHT of peer ID -> DAG, which means you essentially get your own IPNS address that can resolve to a tree.
<AniSky_>
I can publish something to IPNS and then others can resolve my IPNS address to whatever I published.
<AniSky_>
That's how you accomplish dynamic data on IPFS.
<AniSky_>
birarda would you like to test it?
<birarda>
okay, not sure I totally follow yet
<birarda>
yes
<birarda>
that would help!
<AniSky_>
OK, if you use `ipfs get /ipns/QmPi8PL1jrsbk2qBC2xz8V1adrXL17c4LKftYKg5fk77XH` or use `ipfs name resolve QmPi8PL1jrsbk2qBC2xz8V1adrXL17c4LKftYKg5fk77XH` you'll see that it resolves to an IPFS logo.
<AniSky_>
Notice it's /ipns/
<AniSky_>
Oh wait forgot my daemon
<birarda>
and I assume you have the ability to re point that if you want? hence the mutability
<cblgh>
AniSky_: how do you create an ipns name?
<AniSky_>
It's your peer ID IIRC.
<AniSky_>
So you already have one. It will tell you when you publish
<cblgh>
ahh
<AniSky_>
There birarda should be published now
<cblgh>
how do you point it then?
<birarda>
okay ... if it's your peer ID does that mean you can only have one thing "mapped" at a given time?
<flyingzumwalt>
birarda that's exactly the idea -- you can update an ipns "name" hash to point to new values over time. That's what lets us publish ipfs.io over ipfs + ipns
<AniSky_>
Yes, but you map a Merkle DAG, which means you get hierarchical traversal like a normal web server.
<birarda>
right, okay
<flyingzumwalt>
When we push a new version of the website to ipfs, we just update the ipns entry. DNS for ipfs.io maps to that ipns hash
<birarda>
okay, so it'll let you do $hash/thing.jpg
<birarda>
and basically I just need to "re-point" the folder
<birarda>
and thing.jpg might be different
<flyingzumwalt>
yep.
<birarda>
but the hash that is my peer ID will not have changed so people's references will not need to change
<cblgh>
Now, there are a few things to note; first, right now, you can only publish a single entry per ipfs node. This will change fairly soon
<flyingzumwalt>
well, if you're using an IPNS hash then that will be true
<AniSky_>
cblgh "single entry" is misleading because it's a Merkle DAG, which makes it effectively a normal web server.
<flyingzumwalt>
birarda publishing $ipns-hash/thing.jpg would allow you to publish updates to thing.jpg, updating the $ipns-hash to point to the new dag containing the new version of thing.jpg
<birarda>
makes perfect sense, thank you
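The publish/update flow described above can be sketched with the `ipfs name` commands; the hashes here (`QmSiteRoot`, `QmNewSiteRoot`) are placeholders for whatever `ipfs add` actually prints:

```shell
# Publish a directory and point your peer ID's IPNS name at its root.
ipfs add -r site/              # suppose the printed root hash is QmSiteRoot
ipfs name publish QmSiteRoot   # publishes under your peer ID
ipfs name resolve              # -> /ipfs/QmSiteRoot

# To update: re-add the changed directory and publish the new root.
# The IPNS name (your peer ID) stays stable, so /ipns/<peer-id>/thing.jpg
# keeps working even when thing.jpg's content changes.
ipfs name publish QmNewSiteRoot
```

Only the changed files produce new blocks on re-add; unchanged subtrees keep their old hashes and need no re-upload.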
<AniSky_>
*that is, without re-uploading everything else, since other files' DAGs hadn't changed.
<cblgh>
AniSky_: i read it more as "in the future you can have multiple ipns names to point things as, not just your peer id"
ygrek_ has joined #ipfs
<flyingzumwalt>
yeah. Ani is right. the entry is a hash, which can be the root hash of any size dataset or filestore.
<cblgh>
point things at*
<AniSky_>
I get what you're saying--multiple root entry points?
<AniSky_>
Like "subdomains"?
<cblgh>
mm effectively i guess, if they are tied to the peer id
<cblgh>
maybe whyrusleeping can answer the above when they get back
<flyingzumwalt>
cblgh that final comment about "in the future" is about keeping a history of what the ipns name pointed to. Currently, ipns just tells you the current value. It doesn't store history.
<birarda>
I thought I read something else in the white paper about getting around the unfriendlyness of hashes
<birarda>
or was that about the IPNS or using the dag for paths
<flyingzumwalt>
birarda you can map any DNS name to an IPNS hash, thus hiding the hashes
<birarda>
right, I see it again now
<cblgh>
flyingzumwalt: ah that would definitely be cool
<birarda>
or proquint phrases
<birarda>
or we could run a shortener on our end possibly
AniSky_ has quit [Read error: Connection reset by peer]
<birarda>
okay - that helps
AniSky_ has joined #ipfs
<cblgh>
damn proquint phrases are cool
<cblgh>
thanks for that birarda :3
<AniSky_>
Well that just happened
<birarda>
I think I should read up more on DHTs
<AniSky_>
Macbook pro crashed >.<
<birarda>
IPFS uses a variant of Kademlia right?
<hsanjuan>
birarda: if you have a usecase for ipfs-cluster it would be great if you do a write up as an issue in https://github.com/ipfs/ipfs-cluster/
<birarda>
will do, thanks
<birarda>
does the resolution of IPNS -> hash require the daemon of that peer to always be running?
<AniSky_>
Good question. I feel like I tested that but i can't remember.
<AniSky_>
IIRC it's a DHT lookup that ends with the peer in question--I'd have to check.
<birarda>
There are provisions in IPFS for encrypting the contents of files right?
<flyingzumwalt>
birarda yes, ipfs uses kademlia
<flyingzumwalt>
birarda currently encrypting content is out of band -- you can encrypt the content before you pipe it into ipfs and decrypt it on the read-side.
<birarda>
okay, no problem
<AniSky_>
You have to know the hash of data in order to access it anyways.
<flyingzumwalt>
ipfs currently encrypts communications on the wire and will eventually support some native encryption of content, but we prioritized other features first, since you can already encrypt on your own
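The out-of-band approach flyingzumwalt describes can be as simple as an openssl round-trip around the add/cat calls. A sketch only: the inline passphrase is illustrative (use real key management), and the `ipfs` lines are commented out since they need a running node:

```shell
# Encrypt before adding; anyone can fetch the ciphertext by hash,
# but only key holders can read it.
echo 'secret contents' > note.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:example \
    -in note.txt -out note.txt.enc

# ipfs add note.txt.enc            # returns some QmCipherHash
# ipfs cat QmCipherHash > fetched.enc

# Decrypt on the read side:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example \
    -in note.txt.enc -out note.decrypted.txt
```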
<AniSky_>
It's not like someone can waltz through some FS listing and see, "oh, joe's got a.txt!"
<flyingzumwalt>
actually AniSky anyone can watch for hashes on the DHT and read them all. See projects like ipfs-search
<AniSky_>
Hm, that's true...
<flyingzumwalt>
it's a public DHT on a public network. If the content's not encrypted, anyone can troll for content.
<flyingzumwalt>
*trawl* for content
<AniSky_>
flyingzumwalt I was thinking of making a lucene/solr index that does ipfs-search except like a full search engine.
<AniSky_>
To do that, I could use ipfs's mount and some Go workers feeding data to solr.
<flyingzumwalt>
AniSky there are already a couple projects out there doing exactly that.
<flyingzumwalt>
You could use that code as a starting point.
<birarda>
I assume that often in IPFS currently the bitswap just has one node show up wanting a bunch of blocks and that there isn't really an exchange
<flyingzumwalt>
birarda you would have to ask lgierth about that one. he's most familiar with bitswap usage patterns on the network
<AniSky_>
flyingzumwalt the other idea I had was something like archive.org using IPFS.
<flyingzumwalt>
AniSky yeah. archive.org should use IPFS. ;-)
<AniSky_>
I mean think of how many copies of jQuery they have with minor differences.
<AniSky_>
Plus, site diffs likely only change a few lines per query.
<flyingzumwalt>
Yeah, the chunking/deduplication algorithms will need a lot more vetting before people consider them archivally safe.
<flyingzumwalt>
like *years* of vetting and experiments.
<AniSky_>
ipfs-search seems to only search file names.
<flyingzumwalt>
even then, archivists will give it the stink eye.
<flyingzumwalt>
there are a bunch of projects like ipfs-search. I haven't kept track of their names.
<flyingzumwalt>
At least one of them is doing full-text indexing of anything that's got a filename and looks indexable
<birarda>
are "blocks" in the bitswap protocol the full referenced data, or is data split into pieces being requested?
matoro has quit [Ping timeout: 260 seconds]
<birarda>
basically, is it possible to request the same file from multiple peers that have it pinned
<AniSky_>
Yes.
<birarda>
or do you find a single peer that has the file and then grab it from them
<flyingzumwalt>
birarda the content is chunked
<flyingzumwalt>
like bittorrent
<birarda>
oh, right - I read something about the chunking options
<AniSky_>
^ using the Rabin fingerprinting algorithm, you can have files with small differences have relatively few changes in terms of blocks. For example, prepending to a log.
<flyingzumwalt>
by default, ipfs breaks content into 256kb chunks, but there are other algorithms like rabin fingerprinting
<AniSky_>
Which is a big plus for lots of cases.
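The deduplication win from chunking can be seen with nothing but coreutils. This is an illustrative sketch, not ipfs itself: two files sharing an identical 256 KiB leading block are split into ipfs-default-sized chunks, and the shared chunk hashes identically, so a content-addressed store would keep it only once. (In go-ipfs the knob is `ipfs add --chunker=...`, with rabin fingerprinting as an alternative to fixed-size chunks.)

```shell
cd "$(mktemp -d)"
head -c 262144 /dev/zero > shared_block            # one 256 KiB block
cat shared_block > file_a; printf 'tail A' >> file_a
cat shared_block > file_b; printf 'other tail B' >> file_b

split -b 262144 file_a a_                          # chunk at 256 KiB, like ipfs's default
split -b 262144 file_b b_

sha256sum a_aa b_aa    # first chunks of both files hash identically
sha256sum a_ab b_ab    # the differing tails do not
```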
<AniSky_>
Could anyone help me figure out what's wrong with this Dockerfile?
<whyrusleeping>
I really don't know that much about docker
<whyrusleeping>
but i would guess that the host machine that the containers are running on must have fuse installed correctly
<AniSkywalker>
I don't see why if it's running a virtual machine?
<frood>
docker is not a VM
<AphelionZ>
yeah its more akin to a chroot jail than a vm
<frood>
think of containers as sandboxes
Tsutsukakushi has quit [*.net *.split]
<whyrusleeping>
frood: docker on OSX runs in a vm though
<AniSkywalker>
Well the error is coming from the UNIX mount file in IPFS.
<AniSkywalker>
So I assume it's recognizing the fact it's on UNIX.
<AniSkywalker>
Linux*
Tsutsukakushi has joined #ipfs
rendar has quit [Ping timeout: 260 seconds]
<betei2>
AniSkywalker the problem could be with how docker mounts stuff. could you try without mounting it to the host and take a look inside when its running? open up a bash inside the running container using docker exec -ti $image_name /bin/bash
<AniSkywalker>
OK. I'll take out the mount command from the CMD part. What am I looking for betei2 ?
<frood>
oh we're on Mac? docker's AUFS does really weird things on mac sometimes
<frood>
been a while since I played with it
<betei2>
just cd into /ipfs and look if fuse works
<AniSkywalker>
betei2 FUSE is the part that's failing.
<AniSkywalker>
fuse mount
bastianilso has joined #ipfs
<lgierth>
i'm pretty sure fuse doesn't work with docker
<lgierth>
neither on osx nor linux
<AniSkywalker>
> Error: fuse failed to access mountpoint /ipfs
<birarda>
So if I look up a given hash in the DHT, basically I should find that one or more peers have it, and I'll also have connection information for those peers?
<lgierth>
birarda: finding addresses for the providers is a second step
<lgierth>
content routing, then peer routing
<birarda>
gotcha
<lgierth>
they incidentally use the same underlying mechanism at the moment (a variation of kademlia)
<birarda>
so basically you say "who knows about this thing"
<betei2>
AniSkywalker does --cap-add SYS_ADMIN --device /dev/fuse when running container helps?
<lgierth>
but we found it's better going forward to have these two interfaces separate
pfrazee has quit [Remote host closed the connection]
<birarda>
and then you say "how do I find the peers that know about this thing"
<lgierth>
yeah
<AniSkywalker>
betei2 haven't tried
<lgierth>
check the ipfs dht findprovs and findpeer commands
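The two-step routing lgierth describes maps directly onto those two commands; a sketch with placeholder arguments, run against a live daemon:

```shell
# Content routing: who has this block?
ipfs dht findprovs QmHash      # prints peer IDs of providers

# Peer routing: how do I reach one of those peers?
ipfs dht findpeer QmPeerID     # prints that peer's multiaddrs
```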
<frood>
lgierth: sloppy kad, like BT, right?
cemerick has quit [Ping timeout: 260 seconds]
<AniSkywalker>
betei2 "docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"--cap-add\": executable file not found in $PATH"."
<AniSkywalker>
Not sure if that's something dumb I did.
<lgierth>
frood: not sure
<lgierth>
frood: not sure what sloppy kad is :)
cemerick has joined #ipfs
<frood>
in sloppy kad FIND_VALUE finds a provider list instead of the value
<lgierth>
ah
<betei2>
AniSkywalker docker run --cap-add SYS_ADMIN --device /dev/fuse ... and then this is popping up?
<lgierth>
word
<AniSkywalker>
Wait no it's working
<AniSkywalker>
Trying to mount now
<AniSkywalker>
It worked! Thanks so much betei2
<betei2>
glad i could help :)
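For reference, the working invocation from the exchange above (image name is a placeholder). Flag ordering matters: `--cap-add` and `--device` must come before the image name, otherwise docker interprets them as the container's command, which is the `executable file not found in $PATH` error seen earlier:

```shell
# FUSE inside a container needs extra privileges and the fuse device:
docker run --cap-add SYS_ADMIN --device /dev/fuse my-ipfs-image
```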
<frood>
afaict, ipfs uses signed sloppy kad, swarm is signed forwarding kad, and storj is signed kad + extensions
jedahan has joined #ipfs
<lgierth>
AniSkywalker: write it down -- i think there's a doc about fuse somewhere
<AphelionZ>
whyrusleeping: how does the javascript ipfs daemon run and fetch data? through the gateway?
<lgierth>
frood: same for you, wanna write that down in notes? :D
<lgierth>
people keep asking about the DHT
Encrypt_ is now known as Encrypt
<lgierth>
notes is kind of the playground for docs, and for new features
<lgierth>
eeh i mean the ipfs/notes repo
<AniSkywalker>
Well it works :D
<AniSkywalker>
I'll make a PR with the docker image that works with FUSE.
<AniSkywalker>
So is there a reason that the Dockerfile go-ipfs provides uses one giant RUN statement rather than a bunch?
<lgierth>
yes -- every new statement creates a new image layer
<lgierth>
we want to save space so we "compress" it as much as possible
<AniSkywalker>
Ah. That was good for me but I guess not once you've figured out the container :P
<lgierth>
i.e. if the installed packages aren't removed in the same statement, the image is suddenly like 300MB large
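The layer-size point can be made concrete with a hypothetical fragment: cleanup must happen in the same RUN as the install, or the package cache is already baked into an earlier layer and the removal saves nothing.

```dockerfile
# One statement = one layer; install and clean up together
# (package names are illustrative):
RUN apt-get update \
 && apt-get install -y --no-install-recommends fuse \
 && rm -rf /var/lib/apt/lists/*
```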
<whyrusleeping>
Yeah, i remember when paul started that last june
<whyrusleeping>
it had ipfs support at first
<cblgh>
did they remove it?
pfrazee has joined #ipfs
<whyrusleeping>
Yeah, i think that some api they were using changed, and instead of updating it they just dropped it
<whyrusleeping>
though i'm unsure of the details
<cblgh>
huh
<whyrusleeping>
i'm sure if someone sends them a PR with integration it will get merged :)
<AphelionZ>
i was trying to envision a gzip / DEFLATE style compression algo that would work in the browser but also reach across ipld links to get more content
<AphelionZ>
ipzip or some such thing
<lgierth>
filecoin is probably the most productive and useful rabbit hole ever
<frood>
birarda: storj is in production, so that's one thing. >.>
<frood>
architecturally speaking, though, very very different
<AniSkywalker>
I think IPFS actually solves reader privacy via chunking
<lgierth>
AniSkywalker: one answer is the /onion transport -- we'll be drilling more into libp2p and networking in the second half this year. i wanna build a properly packet-switched overlay network
john1 has quit [Ping timeout: 255 seconds]
<whyrusleeping>
Eh, reader privacy is *very* hard
<lgierth>
frood: ipfs is in production too ;) just not filecoin
<lgierth>
i hear you ;)
<AniSkywalker>
So doesn't chunking mostly solve that whyrusleeping ?
<AniSkywalker>
Because other peers only know which blocks you're looking for
<AniSkywalker>
Not the order or context
<frood>
IPFS and storj have completely different goals. They could actually run on the same overlay
<lgierth>
AniSkywalker: the possible existing contexts of each chunk are pretty easily discoverable though
<whyrusleeping>
AniSkywalker: no, because you can still get a map of the blocks a peer is requesting
Boomerang has joined #ipfs
cemerick has quit [Ping timeout: 260 seconds]
<birarda>
should I expect the Filecoin whitepaper to change heavily?
<whyrusleeping>
birarda: there will be some changes, yeah. But the main ideas and such are mostly correct still
<lgierth>
frood: if you know any deep technical specs of storj, i'd be thankful for links :)
wallacoloo_____ has quit [Ping timeout: 258 seconds]
<frood>
storj.io/storj.pdf
tilgovi has quit [Ping timeout: 240 seconds]
<lgierth>
each of the more integrated p2p projects has brilliant ideas in it that deserve to be proper sliced apart and protocol-ized
<whyrusleeping>
storj relies on 'gateways' for all data transfers
<lgierth>
frood: hah rtfm eh ;)
<whyrusleeping>
its decentralized, but not distributed
<whyrusleeping>
theres been a fair amount of talk about using ipfs to move around storj data
<whyrusleeping>
so that you could use storj to pay for people to host ipfs content for you
<lgierth>
and by that, host *any* content-addressed data
<lgierth>
git repos, dat datasets, etc.
<lgierth>
right?
<whyrusleeping>
blockchainsss
<whyrusleeping>
yeap
<whyrusleeping>
storjs model is for renting disk space on a one-to-one basis
<whyrusleeping>
roughly
<frood>
I'd be more inclined to host dapps on IPFS and use storj for SLA
<frood>
distributed systems are not good at performance guarantees
<frood>
lgierth: RTFM is the best way to learn anything. ;)
<whyrusleeping>
frood: the bitcoin blockchain is pretty good at guaranteeing that your transactions will be around ;)
<lgierth>
that's only one guarantee though (persistence), the time-to-submit a transaction is another
<frood>
within around 45 minutes, provided the mempool isn't full
<lgierth>
remember when the core team was deadlocked and the transaction times shot up?
<frood>
the future is likely hybridized low-server apps, rather than fully serverless dapps
<lgierth>
i wanna use graphic cards to accelerate crypto operations for network routing