<MikeFair>
(Sorry to break in with an aside: I wanted to give IPLD a whirl, and the examples on ipld.io show fetching data using an executable called "ipld". Where does that executable come from?)
<lgierth>
sorry about that, ipld.io is a bit outdated -- right now i'm thinking i should maybe take it down
<MikeFair>
Heck; I posted a suggestion issue that boils down to: when you roll up the changes from a linked database to fix the links, also include "summary data" (the output of a set of map/reduce functions on the child database) on the link object in the parent, so the parent isn't completely without information about what's behind the link
<lgierth>
you're free to build your data structures however you want with IPLD
<lgierth>
the (hash,name,size) links are from the past (merkledag)
<lgierth>
and the hashes in ipld links even include info about what type of data it is
<MikeFair>
lgierth: Well it would be this: { "contact": {"/": {"link": "someHash", "indexData": {"name": "@lgierth"}}} }
<MikeFair>
then when I did: /contact/name
<MikeFair>
Ordinarily it would need to cross the link
<MikeFair>
But the indexData provides the "name" field, so it doesn't have to do the full retrieval of the linked db
<lgierth>
first, i think it's just {"contact":{"/":"somehash"}} :)
<Kubuxu>
whyrusleeping: you wanted something?
<lgierth>
and yeah you can just add more data in that link object. {"/":"somehash","size":12345,"somethingelse":["foo","bar"]}
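A rough sketch of that shape via the experimental dag API (QmChildHash is a placeholder for a real hash; exact behavior may vary by go-ipfs version):

    # parent object whose "contact" link carries index data alongside the hash;
    # /contact/indexData/name is then readable without fetching the linked child
    echo '{"contact":{"/":"QmChildHash","indexData":{"name":"@lgierth"}}}' | ipfs dag put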
<whyrusleeping>
Kubuxu: yeah, just checking that the makefile thing was ready to go
<lgierth>
(nevermind that the object is called "link" in the second code example)
<lgierth>
(the "pseudo link object" example)
<MikeFair>
right; I knew that, thanks ;)
<Kubuxu>
whyrusleeping: wait
<MikeFair>
So then the only change I'm looking for is automated indexData management :)
<lgierth>
MikeFair: sorry if i sound scatterbrained about it, ipld hasn't really been my focus in the past few weeks :)
* MikeFair
nods. Totally get it; I'll let you guys focus on publishing while I look at ipfs dag
<lgierth>
that can just be managed by your code, when you build that data
<MikeFair>
Distributed database will be updated by distributed people and distributed code
arkimedes has quit [Ping timeout: 264 seconds]
<MikeFair>
I don't trust the other authors to "clean up" ;)
tabrath has quit [Ping timeout: 258 seconds]
* whyrusleeping
waits
<whyrusleeping>
Kubuxu: you wanna squash any of that down?
<Kubuxu>
not really, they are logical changes. I skipped one feature (commit number), left a note to reimplement it, and forgot about it
cemerick has joined #ipfs
<Kubuxu>
fixed
<Kubuxu>
whyrusleeping: ^^
<daviddias>
lgierth: sounds like what you want is a page that says 'new page coming soon'
cemerick has quit [Ping timeout: 255 seconds]
cemerick has joined #ipfs
<MikeFair>
It totally just hit me that what I'm wanting/needing next is plugin modules that get fired up with the ipfs daemon (an idea stolen from FreeNet)
<MikeFair>
Something that can watch for the changes and execute stuff
<lgierth>
daviddias: or an http redirect to ipld/specs/tree/master/ipld
<MikeFair>
(and that others can suck into running on their nodes too)
<lgierth>
mh -- you mean listening for IPNS changes?
<daviddias>
lgierth: I don't think that spec is better
<lgierth>
mmmh yeah
<lgierth>
yeeah it has the same imaginary commands
<MikeFair>
lgierth: Primarily; but ipld changes too (which might just be an ipns change)
<whyrusleeping>
Kubuxu: alright, is it really good to go now?
<Kubuxu>
yes
<whyrusleeping>
kay
<lgierth>
MikeFair: ipld changes? do you mean something like tail -f, but for objects that the node is fetching/creating?
<MikeFair>
lgierth: Objects of interest
<Kubuxu>
when you merge it, start re-basing PRs onto it, I hope to have the pipeline running on Monday
<MikeFair>
lgierth: Like my data structures that myself and others are allowed to update from their local nodes
<MikeFair>
lgierth: Publications of files under ipns nodes; I mean it might just boil down to ipns changes; but I'm not certain that's the only use case atm
<MikeFair>
lgierth: Using IPLD, mimic CouchDB REST API
<lgierth>
you can share an IPNS key (via the ipns-pub tool, not via go-ipfs), but IPNS hasn't really been made for publish-many cases yet
<MikeFair>
for example; so this would be an HTTP proxy that launches with ipfs daemon
<lgierth>
ah, couchdb <3
<lgierth>
_changes
<MikeFair>
_changes; map/reduce indexes
<lgierth>
yeah
<MikeFair>
validation scripts
<MikeFair>
(in that model there'd be a place where anyone could "post/write" their changes; these agents would then act as servers to suck in those postings, validate them, then publish to the official "read" ipns location)
<MikeFair>
Something like /ipns/[clusterpeerhash]/incoming
<MikeFair>
But the key part is that I can "ipfs add module [somehash]" which then gets wired into the event architecture
<lgierth>
mh, word, it'd be really interesting to take couchdb's api and have it based off ipfs
<MikeFair>
I've already written my own code to compute the Couch _rev key
<lgierth>
i wrote a soundcloud<->couchdb replicator once :)
<lgierth>
including replication checkpoints and all that
<MikeFair>
in fact; you could just take the IPLD hash and use that as the _rev key
<lgierth>
yeah i think _rev and _id are opaque
<lgierth>
just needs to be "sortable" for mvcc
<MikeFair>
exactly
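A sketch of that idea (doc.json is a hypothetical document body; a numeric generation prefix keeps revisions sortable for MVCC, as in CouchDB's N-<md5> scheme):

    CID=$(ipfs dag put < doc.json)   # content-derived IPLD hash
    REV="2-$CID"                     # rev = <generation>-<hash>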
<MikeFair>
There is some confusion when replicating between nodes on _rev hash disagreement for the same sequence
<MikeFair>
(it'll generate a conflict object)
<MikeFair>
I put out a recommendation that in that case; have the local system recompute the hash of the incoming object before generating the conflict object
<MikeFair>
because of exactly this "intersystem opaque hash" case
<MikeFair>
(when everyone computes their _rev hash the same way it isn't an issue; but the truth is it doesn't work that way, not even within the official CouchDB line of servers)
<MikeFair>
I also wrote up a workflow for safely merging conflicting objects when the user tried to fix the conflict object by updating the data on the two documents to be the same
<MikeFair>
It's mentally easier for a human being to "copy/paste/edit" to make the N conflicting documents be the same and save them, than to fix the right one and delete the wrong ones
arpu has quit [Ping timeout: 240 seconds]
<MikeFair>
To make that work; you'd need an online automated agent that's constantly looking for these things and executing
<MikeFair>
It also begins heading toward something really experimental I'd like to play with; it's less expensive on the system to move my code and send me the results than it is to move the data
<MikeFair>
but that's freaky scary
<MikeFair>
But it would literally be more like a "web crawler"; an agent that could run around the system; burning up bitSwap credits following a path to the peers that have the blocks of interest
<MikeFair>
for now I'd just take a CouchDB proxy server ;)
<AphelionZ>
hi again, how do I set the CORS headers on the daemon? The ipfs config commands google led me to don't seem to be doing the trick, even after restarting the daemon
dryajov1 has quit [Client Quit]
* MikeFair
listens to the crickets chirp. :)
<MikeFair>
Wish I knew :)
<AphelionZ>
womp womp
dryajov1 has joined #ipfs
<AphelionZ>
its ok, they were busy with mission critical stuff earlier in here
<AphelionZ>
it looks like my changes were reflected, checking via ipfs config show...
<whyrusleeping>
AphelionZ: youll need to restart your daemon
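For reference, the config keys usually involved (set them, then restart the daemon):

    ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
    ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'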
dryajov1 has joined #ipfs
dryajov1 has quit [Client Quit]
aquentson has joined #ipfs
anewuser has joined #ipfs
<MikeFair>
whyrusleeping: So I'm really digging on the whole daemon plugins concept; I haven't looked at the code; but do you think it'd be moderately feasible to hook some 3rd party modules into the ipfs daemon processing loop?
<MikeFair>
It'd be really helpful for this theoretical distributed database API concept I have;
<MikeFair>
nm; I'll work with haad and the IPLD stuff
MikeFair has quit [Read error: Connection reset by peer]
MikeFair has joined #ipfs
<AphelionZ>
whyrusleeping: i did restart it. Im just trying to call the http api from a browser
<MikeFair>
It's finally just hitting me that an ipfs daemon might have to present an immutable namespace and respond with redirects to the mutable ipfs nodes
<MikeFair>
using http
<MikeFair>
So http://localhost:[ipfsport]/some/object/path -> HTTP 302 -> ipfs://someHash
<MikeFair>
Or http://localhost:[ipfsport]/some/object/path -> HTTP 302 -> http:[ipfsport]/some/object/path?ipfs=someHash
* MikeFair
shrugs "WIP"
<NeoTeo>
congrats on 0.4.5 \o/ great update.
<Stskeeps>
/g izh
<Stskeeps>
.. ignroe me
apiarian has quit [Read error: Connection reset by peer]
apiarian has joined #ipfs
<AphelionZ>
congrats everybody!
<MikeFair>
hey did someone actually do that data.gov ipfs dump?
<MikeFair>
I forgot where I saw that some team was going to test/attempt to load some decently large portion of the site publications into ipfs
<AphelionZ>
thats good. they / we should do that with the climatemirror.org stuff before it's destroyed
<MikeFair>
Question on "pinning"; if I pin a Hash; does my node attempt to pin the entire tree of hashes in the DAG starting at that point; or can it somehow walk the tree finding the least held hashes in the swarm and request/push those; then repeat
<MikeFair>
(having to copy/paste/deal with these hash strings makes me think of trying to use a file system by only referencing iNode numbers.)
aquentson has joined #ipfs
ShalokShalom has quit [Remote host closed the connection]
JPLaurin has joined #ipfs
ZaZ has quit [Read error: Connection reset by peer]
ianopolous has quit [Ping timeout: 276 seconds]
<AphelionZ>
if I have a curl call like curl -F file=@myfile "http://localhost:5001/api/" how do I do the file=@myfile bit with the fetch api?
screensaver has joined #ipfs
<AphelionZ>
better question - whats the best way to call the /dag/put http api from the browser?
<AphelionZ>
im so close to finishing my thing
<AphelionZ>
im just getting File argument 'object data' is required
dryajov1 has joined #ipfs
dryajov1 has quit [Client Quit]
tmg has quit [Ping timeout: 252 seconds]
<AphelionZ>
is there any documentation at least for /dag/put?
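Not much was documented at the time; something along these lines usually worked (doc.json is hypothetical; check ipfs dag put --help for the exact parameter names):

    # the HTTP API expects the object as a multipart file field
    curl -F "file=@doc.json" "http://localhost:5001/api/v0/dag/put?format=cbor&input-enc=json"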
<MikeFair>
kythyria[m], there are ways to get a HashCode that represents just those bytes; but the existing tools aren't designed for that atm
<kythyria[m]>
Hmm
<MikeFair>
kythyria[m]: Easiest way would be [tool to cat/dump that section] > file_section; ipfs add file_section
<kythyria[m]>
That'd duplicate the section unless you used rabin fingerprints, though?
<MikeFair>
kythyria[m]: My command line foo is low; but I'm pretty confident you can pipe the bytes directly to ipfs add --
<Kubuxu>
MikeFair: if you are interested just in HTTP api
<Kubuxu>
then the gateway supports ranges
<MikeFair>
kythyria[m]: The bytes are deduplicated; you can add them as many times as you want and it won't increase the storage
<Kubuxu>
MikeFair: this depends, but generally true
<MikeFair>
Kubuxu: (yes I'm overgeneralizing ;) )
<MikeFair>
As long as the beginning and ending of the block breakdown is on the same alignment, right?
<kythyria[m]>
If the section isn't aligned on a chunk boundary (and you aren't using rabin fingerprints) you'll get duplicate chunks for the exact reason that rabin fingerprints are implemented.
<MikeFair>
kythyria[m]: So why/how do you think this would be any different if you made a file versus cat the blocks from the middle?
* kythyria[m]
remembers thinking about a similar question for bittorrent, and IIRC Steam is documented as being most efficient at patching when the patch is mostly file appends.
<kythyria[m]>
It's more a question of being able to request efficiently.
<kythyria[m]>
If you have a format that has huge files, but it's reasonable to just read a bit out the middle.
<MikeFair>
kythyria[m]: Oh yeah; you can totally just request those blocks
<MikeFair>
kythyria[m]: You would use the merkle DAG info to figure out what they were first; then just get those hashes
<kythyria[m]>
Can you start from the hash that will get you the entire file and a byte range, and end up with which blocks to request, though?
cemerick has quit [Ping timeout: 255 seconds]
<kythyria[m]>
Ah, so it has ranges included?
<MikeFair>
kythyria[m]: By using the sizing information in the metadata on the blocks; yes
<kythyria[m]>
Ahhh.
<MikeFair>
kythyria[m]: Each block has two sections; (1) The raw data; and (2) a list of links
bastianilso has joined #ipfs
<MikeFair>
I don't totally have the tree hierarchy mapped out in my head yet; it's a directed acyclic graph (aka not just linear list of links); but it's completely navigable
<MikeFair>
Kubuxu: What were you thinking on the HTTP REST API?
<MikeFair>
Kubuxu: looks like some of your thought didn't come through
<Kubuxu>
the `localhost:8080/ipfs/Qm...AAA` supports range and seek requests
<MikeFair>
I guess since it's acyclic; it's easier to think of it as an unbalanced tree
<Kubuxu>
allowing you to download only parts of the file
<MikeFair>
oh!
* MikeFair
points kythyria[m] at what Kubuxu just said. "You can just do that!" :)
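Concretely (QmExampleHash is a placeholder):

    # ask the local gateway for just the first kilobyte of the file
    curl -H "Range: bytes=0-1023" "http://localhost:8080/ipfs/QmExampleHash"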
<AphelionZ>
Kubuxu: what did you use for filws paths in your pastebin
<AphelionZ>
files*
<Kubuxu>
window.fragment holds the IPFS path of a file
<MikeFair>
Kubuxu: So if I'm understanding the DAG properly; you can create whatever kind of "raw_data" section you want (up to 256 bytes, or 1 MB; the docs aren't clear/a bit confusing) and give it whatever address you want
<Kubuxu>
yes but you will get generated HASH of it
<Kubuxu>
then you can combine that hash and the address you gave it
<Kubuxu>
to get your raw section back
<AphelionZ>
also whats the difference between a dag put, an object put, and a file write :D
chris613 has joined #ipfs
<Kubuxu>
file write is for files, as its name suggests; object is for the internal layer used for files and directories; dag is the new API for IPLD that we will be shifting to
<MikeFair>
Hmm not sure how that combining part works; sounds cool
<Kubuxu>
you just use the HASH to resolve linking block that contains map of string keys to different hashes
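A minimal sketch of that combining step, assuming the dag API of this era resolves paths through links:

    CHILD=$(echo '{"greeting":"hello"}' | ipfs dag put)             # raw section, addressed by its hash
    PARENT=$(echo "{\"child\":{\"/\":\"$CHILD\"}}" | ipfs dag put)  # linking block: string key -> hash
    ipfs dag get "$PARENT/child/greeting"                           # resolves across the link by path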
* MikeFair
played with IPLD *a little* earlier.
Mateon1 has quit [Ping timeout: 240 seconds]
<AphelionZ>
objects will be deprecated for dag?
<MikeFair>
So the address I made up is a null raw_data section with a link to the content-containing blocks
<Kubuxu>
yeah, object supports only unixfs format
robattila256 has joined #ipfs
<AphelionZ>
gotcha, cool
<AphelionZ>
thanks
mildred1 has joined #ipfs
<MikeFair>
Kubuxu: Or is it more the case that I can't make up an address; what I can do is make a head block with content that doesn't change (like the hash of a public key ;) ), and use that address to store links to downstream content
<MikeFair>
?
<Kubuxu>
ipfs name publish
<Kubuxu>
ipfs name resolve
<Kubuxu>
/ipns/Qm...AAA
<MikeFair>
Kubuxu: Right; I was just connecting how those worked to our discussion
<MikeFair>
But if the content of the block isn't changing; just the links; why the need for ipns versus ipfs
<Kubuxu>
because links are the content
<MikeFair>
ok I get that; but I'm still not seeing how you get that consistently linked to a new downstream thing if you can't make up your own address
<MikeFair>
unless the ipns graph "works differently"
AkhILman has quit [Ping timeout: 255 seconds]
<MikeFair>
and excludes links as content
<Kubuxu>
no, IPNS is just one link
<Kubuxu>
from hash of a public key
<Kubuxu>
to IPFS content hash
<Kubuxu>
which can be updated
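Concretely (QmContentHash stands for real content):

    ipfs name publish /ipfs/QmContentHash   # signs a record: hash of your public key -> this content
    ipfs name resolve                       # prints the current target of your own key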
<MikeFair>
Right; I'm looking forward to seeing/learning why/how that update isn't moving its address :)
<MikeFair>
I totally get that "hash this key" always gets the same hash; I'm not seeing/clear on how "publishing", which I'm assuming does something to the links on that block, doesn't change the address just as much as changing the public key itself would
<AphelionZ>
what did I do wrong there ^^
<AphelionZ>
in node.js it would be a new Buffer but I don't have that in the browser
<__uguu__>
when will ipfs's unixfs expose something like epoll?
<MikeFair>
I think you have localStorage and something like a browser.files object which prompts the user to pick a spot
<MikeFair>
iain: and add it to ipfs to get the hash of that document
mguentner2 is now known as mguentner
<MikeFair>
iain: They'd then update the master database with the link to the document they just created
<MikeFair>
using their peer node id as the key; or just appending it to an array
<iain>
okay... and what's stopping a peer from ruining that shared db?
<iain>
like permissions wise would it not create problems?
<muvlon>
my question exactly
<MikeFair>
iain: Given that you gave all the peers the same priv/pub key; nothing
pfrazee has quit [Ping timeout: 255 seconds]
<MikeFair>
but they are trying to collaborate
<AphelionZ>
i dunno, the fact that the content in ipfs is write once, and write ONLY once for the same content hash has weird implications
aquentson has quit [Ping timeout: 240 seconds]
<AphelionZ>
hmmm ok so I wrote a file using the js api...
<AphelionZ>
what exactly did that do? I was expecting to get a key back. I still dont get this haha
<MikeFair>
AphelionZ: Woot!!! \o/
<AphelionZ>
MikeFair: yes! very good news!
<AphelionZ>
but im still confused :(
<MikeFair>
iain: tell me more specifically what you're thinking
aquentson has joined #ipfs
<MikeFair>
and muvlon: you too for security
<MikeFair>
how do you think it should it work?
<MikeFair>
Err; what is the behavior you think should be accomplishable
<iain>
yeah. Well right now I'm just looking for a good/easy enough way to find other peers. Do I understand correctly from you that there are multiple ways?
<MikeFair>
Anyone who has a pub/priv keypair can get a fixed hashAddress that changeable content can be associated to
<iain>
and that using the dht networking one would not be the best
<AphelionZ>
iain: are you just looking to kinda explore ipfs and look around?
<iain>
sort of yeah. I wanted to make a simple proof of concept
<iain>
see if I could get two peers to talk to each other without me giving their exact address
<AphelionZ>
oh cool
<iain>
in go that is. no js
<MikeFair>
iain: It seems like you have app concept; and your thought was that all the users would connect via IPFS to swarm and find each other
<AphelionZ>
yeah you're not as brave (or dumb) as me
aquentson1 has quit [Ping timeout: 264 seconds]
<iain>
MikeFair: true
<MikeFair>
iain: is that what you were going for; and the thing they knew was they were all using the same app
<AphelionZ>
but so... base level question here. if I do a `files write` and then I see the file under its path in `ipfs ls`
<AphelionZ>
what exactly have I accomplished
<MikeFair>
AphelionZ: Anyone within IPFS can get that HASH key / data out
<AphelionZ>
but I didn't get a hash
<MikeFair>
stat the file
<MikeFair>
ipfs files stat /file/path
<MikeFair>
ipfs files stat /file/path --hash
<MikeFair>
err ipfs files stat --hash /file/path
<MikeFair>
to get just the hash id
<MikeFair>
ipfs files is a link to the stuff on your local peer in a root filesystem like format
<AphelionZ>
well I'll be
<AphelionZ>
alright... hmmm
pfrazee has joined #ipfs
<MikeFair>
Now you need to communicate that to the other party for them to retrieve the file
* MikeFair
cancels the get and reissues it as an ipfs ls
<iain>
thx :$
<AphelionZ>
I run a website for hydrologists
<MikeFair>
AphelionZ: now you don't need to pin it; pinning it just means your local node won't purge it while its pinned
<AphelionZ>
that would be really cool
<AphelionZ>
how often does a local node purge?
<AphelionZ>
does it purge even though the files are created locally
mildred1 has quit [Ping timeout: 240 seconds]
<muvlon>
MikeFair, I think the bare minimum would be for orbitdb to not allow *every* user to basically do DELETE *
<MikeFair>
AphelionZ: I haven't gotten that deep into it to say concretely; my understanding is that under certain conditions it wipes its slate clean to get back drive space or at user request to garbage collect
<AphelionZ>
ah ok
<MikeFair>
muvlon: Well; they kind of can't
<MikeFair>
muvlon: If they do; then a responsible party just rolls it back
<MikeFair>
muvlon: All versions of every change are always still in the system
<MikeFair>
accessible by their hash id
<AphelionZ>
it seems like another use case is... as an ipfs supporter, i want to set up a long-standing server and allocate a certain amount of disk space, allowing the system to autonomously fill it with crucial objects so that the whole network runs faster / smoother
<muvlon>
yes, but who gets to decide what's the "real" status of the db?
mildred1 has joined #ipfs
<AphelionZ>
maybe it could store blocks instead of files so you never have true files, just chunks
<muvlon>
there needs to be either an authentication system or a proof-of-work/stake/etc
<AphelionZ>
so you dont get duped into storing illegal stuff
<MikeFair>
muvlon: The owner of the private key that goes with the public key that the hash is made from
<muvlon>
ah, alright
<MikeFair>
muvlon: And that's only for the /ipns/ space
<muvlon>
AphelionZ, you can't get duped into storing _anything_
<MikeFair>
muvlon: in the /ipfs/ space everyone does, by just uploading the content
<muvlon>
ipfs wont put stuff on your computer that you didn't request
<AphelionZ>
muvlon: im kinda saying it should haha
<MikeFair>
muvlon: It will because it's sharing blocks to get credit in the system
<AphelionZ>
optionally
<muvlon>
oh that, sure
<AphelionZ>
MikeFair: ok so it does do that
<muvlon>
but having chunks of illegal stuff is no better than having files of illegal stuff
Foxcool_ has quit [Ping timeout: 240 seconds]
<AphelionZ>
i mean clearly people are gonna put obnoxious stuff on here if they havent yet
<MikeFair>
AphelionZ: might not be exactly working that way atm; but that's the intent; and by querying the state of the network you can find "weakly stored" hash ids and ask for them
<AphelionZ>
MikeFair: thats's what ipfs refs local is?
<AphelionZ>
like, i have 614 local refs but i've maybe written like 4 files
<muvlon>
have you received any files?
<muvlon>
also, 1 file can get turned into a lot of hashes, due to chunking
<MikeFair>
AphelionZ: Those are blocks; chunks of files
<AphelionZ>
are those the weakly stored ones you talked about?
<MikeFair>
AphelionZ: If they only exist on your node; then yeah
espadrine has joined #ipfs
<AphelionZ>
ok but the general question... "by simply running ipfs daemon am I helping the global ipfs supersystem?"
<MikeFair>
Your node publishes a "provides" list
<AphelionZ>
the answer is "yes, probably" right?
<MikeFair>
exactly; "yes, probably"
<muvlon>
to see everything you're actually pinning, do "ipfs pin ls"
<muvlon>
those are the ones you're actually holding on to
<AphelionZ>
muvlon: ok cool I have 13 of those, some indirect and some recursive
<muvlon>
should be 4 recursive
<muvlon>
those are the ones you explicitly added
<AphelionZ>
ok 5 :)
<AphelionZ>
but yes
<muvlon>
indirect means they're pinned as a consequence of pinning something else
<AphelionZ>
because... they're... blocks? not files?
<muvlon>
if you add a file and it gets turned into 3 chunks, those 3 chunks get pinned indirectly
<muvlon>
AphelionZ, they can be both
<muvlon>
they're files in case you're pushing a directory
<AphelionZ>
chunks is what i meant I think
<muvlon>
but if you're only adding simple files, they'll be chunks
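To see the split muvlon describes:

    ipfs pin ls --type=recursive   # hashes you pinned explicitly
    ipfs pin ls --type=indirect    # chunks pinned only because a pinned parent references them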
<MikeFair>
muvlon: The bitSwap proposal (work in progress) handles the spamming concerns you were talking about; A peer keeps a credit score on other peers; it has no obligation to negotiate or share data or info with another peer
<muvlon>
AphelionZ, are you adding with -w ?
<AphelionZ>
no..
<MikeFair>
That 5th might be something related to his peer node id root
<AphelionZ>
or i just added 5 files and miscounted
mildred1 has quit [Ping timeout: 255 seconds]
skinkitten has joined #ipfs
skinkitten_ has joined #ipfs
<muvlon>
you can look at the content of each hash and figure that out
<AphelionZ>
just via ipfs get?
<MikeFair>
ipfs cat is likely easier to just look
<MikeFair>
I think ls -v
pfrazee has quit [Ping timeout: 260 seconds]
iain_ has joined #ipfs
<MikeFair>
iain (if you're still watching); Here's an interesting thought experiment; imagine you show up at a conference hall and everyone takes a seat; now everyone picks a number between 1 and 100
iain__ has joined #ipfs
<MikeFair>
Each participant is then tasked with getting the seat numbers of everyone else who picked the same number as they did
<muvlon>
just stay seated, people will come up and ask for your number
<MikeFair>
There are lots of empty seats; and you can really only get their seat number by asking them
<iain__>
or is it better to use a swarm on a ipns hash?
<MikeFair>
muvlon: If everyone simultaneously executed that strategy; it'd be oddly quiet
<MikeFair>
:)
<muvlon>
I'm banking on that not happening
iain has quit [Ping timeout: 260 seconds]
<muvlon>
of course I realize it's no solution
<MikeFair>
muvlon: Well your answer is coding the bots on the swarm ;)
iain_ has quit [Ping timeout: 260 seconds]
<MikeFair>
iain: I expect it to
<AphelionZ>
ok here's another weird question
<iain__>
okay
<AphelionZ>
on my local filesystem, i can have two files, with different names
<AphelionZ>
with the same content
<AphelionZ>
those will have the same content hash, right?
<MikeFair>
muvlon: So you might do something like: odd number seats stay still; even number seats go ask :)
<MikeFair>
AphelionZ: if they are actually the same file; yes
<muvlon>
what's the objective though?
<muvlon>
doing it as fast as possible? or with the fewest asks?
<AphelionZ>
ok cool
<MikeFair>
muvlon: The random number you picked is a "Topic of Interest"
<AphelionZ>
yeah they should be identical
<MikeFair>
muvlon: So you're trying to get into communication with the other members of your topic
<muvlon>
what is the communication model though?
<muvlon>
can I broadcast my topic? or is it only point-to-point?
<AphelionZ>
I suppose I could use multihash and just generate my own keys to match
<MikeFair>
muvlon: As fast as possible and with a way to prevent anyone from drowning everyone else out for any meaningful period of time
<AphelionZ>
and then store the files under those names
<muvlon>
so I can't expect people to cooperate?
<MikeFair>
muvlon: It's whatever you can do hardware wise; it's best if you make a strategy for each model (as ipfs has)
<muvlon>
not even those from the same topic?
<muvlon>
hmm ok
<MikeFair>
muvlon: Assume there are malicious actors
<MikeFair>
muvlon: In general most people "want" to succeed at this
<MikeFair>
so they'll cooperate; and you've communicated a strategy to the whole room beforehand
<muvlon>
alright
<MikeFair>
So maybe instead of being a butt in a seat; its better you're the MC trying to give people instructions for how to find each other
<MikeFair>
For the most part; ipfs uses the model that each person uses a post-it note with what information they have about others; and what they are interested in
<AphelionZ>
yeah i think by and large the number of bad actors vs the number of rational actors is the same as anywhere else in society
<AphelionZ>
i.e. very low
<MikeFair>
But enough to cause DOS
<AphelionZ>
low and loud :)
<AphelionZ>
especially if they are on twitter
<MikeFair>
"The vocal minority"
anewuser has joined #ipfs
<MikeFair>
but hey; all in all; I think it collectively works towards the general betterment of our condition on the planet
<muvlon>
my first instinct would be to use MPI-style AllGather
<muvlon>
but MPI is designed for 100% cooperative distributed systems
<muvlon>
it does have some tolerance to faulty nodes, but not to malicious nodes
<MikeFair>
muvlon: We all then pass these post-it notes around (and make more of them as we learn more information)
<MikeFair>
muvlon: There's a specific DHT algorithm for exactly which people you hand your post-its off to next
JPLaurin has quit [Quit: Leaving]
<MikeFair>
muvlon: In general it'd be safe to say that ipfs uses a scheme where if one of the digits in the information you receive, is the same as your seat number, then you should keep the information
<MikeFair>
You keep all the information you see anyway; but you expect other seat numbers to take care of saving the things that don't share numbers close to your own
<muvlon>
do you have a list of all occupied seats?
<muvlon>
otherwise you're poking around in the dark, no?
<MikeFair>
muvlon: no; but you can ask for all the seats your neighbors have seen; and so forth
<muvlon>
oh ok
<MikeFair>
and people move seats
<MikeFair>
So you can get new neighbors
<muvlon>
yeah then it'd be best to flip each bit in your seat number once and see who's there
<MikeFair>
a "Seat Number" in this scenario is your "Peer Node Address"
<muvlon>
i know
<MikeFair>
So when you change seats; you have to start hunting again; (you can't change your physical seat location; aka your ip address; but you can change your logical position in the neighborhood)
<MikeFair>
You can also occupy multiple logical seats
<MikeFair>
But you have to be prepared to do the work for both
<muvlon>
now what's this about logical seats?
<MikeFair>
muvlon: Oh the actual location of your seat can't be changed (your networked location (ip address) is fixed in space)
<MikeFair>
So instead of logical seat; let's say it's your name
<MikeFair>
you can change your name even though you can't change your seat number
<MikeFair>
that's a better way of describing it
<muvlon>
if I change my name, would I not at least notify my neighbors?
<muvlon>
otherwise that behavior seems outright malicious
<MikeFair>
So you exchange names and topic ids with neighbors that have seat numbers near you
<muvlon>
"near me" meaning hamming distance, yes?
<iain__>
MikeFair: Weirdly enough. i'm only finding 'QmSdBtmNCzuX5ajNBA25CBDzTy3k3Jns4pB4pAV1Brq3AV'
<MikeFair>
muvlon: near you meaning physical datalink
<MikeFair>
it's about all these things; they're taken in layers; so when you see them talk about "multiaddress", that's the ipfs team's way of dealing with the various network topology options
<MikeFair>
radio; serial link; ethernet switching; internet ip addresses
<MikeFair>
carrier pigeon
<MikeFair>
But you only have the actual bandwidth on the links you have
<muvlon>
that's link layer, all of those are point-to-point
<muvlon>
network topology is one layer up
<MikeFair>
ethernet and ip can be point to multipoint
<muvlon>
fair enough
<muvlon>
either way, I think it's best to find a communication model first and *then* implement pubsub on top of it
<MikeFair>
the point is that your network has particular characteristics that impact how you do this whole distributed comms thing
<MikeFair>
the same strategy can't work in all cases
<MikeFair>
Right that's the whole "names" thing
<MikeFair>
Kademlia DHT is the core of it
<MikeFair>
Based on your name; you communicate with other people close to your own name; by asking the people that can actually hear you to pass the note
<muvlon>
ah, alright
<MikeFair>
you specifically choose the person with the name closest towards the name of the person you're trying to reach
<MikeFair>
That's the XOR routing techniques you may see mentioned
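A toy illustration of that XOR metric on 8-bit IDs (real peer IDs are much longer; a smaller result means a "closer" peer):

    printf 'distance(0x5a, 0x5b) = %d\n' $(( 0x5a ^ 0x5b ))   # 1: near neighbor
    printf 'distance(0x5a, 0xa5) = %d\n' $(( 0x5a ^ 0xa5 ))   # 255: far side of the ID space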
cemerick has joined #ipfs
<MikeFair>
So as the information builds up; the information you're looking for is going to be stored with people that have names similar to that request
<MikeFair>
and as the notes are being passed around; everyone is copying down the info
<muvlon>
so far so good
<MikeFair>
every so often someone takes a bio-break; dies; or gets new papers; or otherwise comes in newly to join the fray
<muvlon>
so are you looking for a better way to decide which notes to pass to whom?
<MikeFair>
As a class of problem; yes; but not something I'm actively working on
<MikeFair>
I'm working on how we use the notes more effectively: keeping note passing to a minimum, and successfully using them in applications
<MikeFair>
and a big problem imho is the whole bootstrapping a peer group thing
<MikeFair>
(the thing iain asked about)
<MikeFair>
well not so much a "problem" but an interesting challenge with no identified "great" solution yet
<muvlon>
so the problem is to find someone to talk to in the first place?
<SchrodingersScat>
^ the human condition
<MikeFair>
find the right person you're supposed to talk to
<MikeFair>
SchrodingersScat: Haha! :)
<muvlon>
which peers are the "right ones"?
<MikeFair>
muvlon: application defined
<muvlon>
alright
ianopolous has joined #ipfs
<MikeFair>
muvlon: and everyone has a different opinion on that ;)
<MikeFair>
Which is why it's so good to optimize for certain classes of use cases; like file sharing; streaming channels; hierarchical structured data
<MikeFair>
generally these things can be used to meet most application's requirements
jedahan has joined #ipfs
<MikeFair>
So for example the model I'm advocating is one where the application doesn't see a network; it just sees a document tree
<MikeFair>
and it looks local
<MikeFair>
it changes this document to share it with others, and gets an event when the document is updated from remote sources
<MikeFair>
it's like getting a mouse click or a keyboard event
<MikeFair>
Coders generally know how to update a local object -- and that will trigger a side effect of sharing that info
<MikeFair>
peers will magically appear in one section of the document
<MikeFair>
that kind of thing
superluser[m] has quit [Ping timeout: 252 seconds]
patrickr[m] has quit [Ping timeout: 252 seconds]
<iain__>
can we set any restrictions on the pub sub flood thing?
<iain__>
like size limitations, or the number of messages per second
<Kubuxu>
not currently, but you will only relay messages on channels you are subscribed to
maxlath has quit [Ping timeout: 252 seconds]
<iain__>
true yeah
<MikeFair>
iain: BTW congrats on getting that working!!!!
<iain__>
but lets say there is someone on the network, wanting to do harm. he could just spam the network with crap
<iain__>
thx :)
<MikeFair>
iain: No different than any other network :)
<iain__>
true.
<iain__>
and the alternative, @MikeFair, was to have a shared resource
<MikeFair>
Basically you need to either shut them out of the conversation or move to a new channel
<iain__>
yeah
<iain__>
is there a way of blocking certain peer ids?
<iain__>
cuz that would fix it
<MikeFair>
moving to a new channel is generally more successful but you have to share the new channel info
[netsplit: dozens of matrix.org-bridged users quit with ping timeouts]
<iain__>
yep. which a smart attacker at that point could figure out and follow us, hmm
<MikeFair>
that's bitswap's goal/purpose
<iain__>
am I the only one seeing all these ping timeouts?
<MikeFair>
iain: depends on the use case; if your channel is supposed to be openly discoverable; then yes
<MikeFair>
iain: Nope. Netsplit! Wee!!!
<iain__>
:P
<Kubuxu>
iain__: that is the reason 1. it is called floodsub 2. it is behind experimental flag
<MikeFair>
or someone on the IRC network cleaning house
<iain__>
yeah I figured so
<Kubuxu>
it isn't resistant yet
<iain__>
yeah. just wondering if there is something I could do to make it so
[the flood of ping-timeout quits continues]
<iain__>
oh my...
<AphelionZ>
whoah
<SchrodingersScat>
whatever this is, it's happening
<iain__>
xD
<MikeFair>
iain: Depends on how many people you're trying to communicate with, and whether it's an open group or a closed group; or open for a period and then closed
<MikeFair>
Public Key hashes make great, shareable channel names
pfrazee has joined #ipfs
kragniz has quit [Quit: WeeChat 1.5]
<iain__>
well. it would be an open group. Just a few hundred people that would get connected and find each other's peer ids around a common sentence or name
<MikeFair>
And they are easy to change/communicate securely
<iain__>
and then talk, send messages to each other. much like a chat box
<MikeFair>
Hey; we just experienced a perfect example of the problem you're looking to solve
kragniz has joined #ipfs
<iain__>
so I understand that the Orbit project has the same weakness, due to it using the Floodsub method of finding peers
<MikeFair>
there was nothing we could do about the ping flood messages
<iain__>
really?
<MikeFair>
Who did they come from?
<MikeFair>
They made the channel hard to use
<iain__>
Sorry? Is that question address to me?
<iain__>
addressed*
<Kubuxu>
iain__: yes, point is, pubsub is an interface, floodsub is one naive implementation of it
<SchrodingersScat>
MikeFair: seems like different accounts from gateway/shell/matrix.org
<iain__>
ah k
<MikeFair>
but it was more that we were seeing the messages
<SchrodingersScat>
that's between you and your client
<MikeFair>
SchrodingersScat: true; I have a client view that filtered them out
<SchrodingersScat>
done and done
<MikeFair>
iain__: Actually that's probably how you do it
<MikeFair>
SchrodingersScat: The main channel window still saw them; but the aggregated "speaking messages from all channels" view left them out
maxlath has joined #ipfs
<MikeFair>
iain__: implement scoring in your app on the speakers and mod them out/score them down
<iain__>
scoring?
<MikeFair>
iain__: you can't stop the messages from being sent; but you can filter them from being seen
<iain__>
true
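A minimal sketch of that client-side scoring in JavaScript; the score map, the threshold, and the msg.from field are illustrative assumptions here, not part of any IPFS API:

    // Keep a local reputation score per sender; nothing is blocked on the
    // network, messages are only hidden by this client.
    const peerScores = new Map()

    function scorePeer (peerId, delta) {
      peerScores.set(peerId, (peerScores.get(peerId) || 0) + delta)
    }

    function shouldDisplay (msg) {
      // msg.from is assumed to carry the sender's peer ID
      return (peerScores.get(msg.from) || 0) > -10
    }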
<MikeFair>
iain__: You have to make a judgement call on what a "Malicious User" is
<MikeFair>
if you want to try and do something about it
<iain__>
oh that's quite easy. It's just what do you do after that
<iain__>
I mean. Best case would be informing some firewall about this and blocking the ip
<iain__>
but that's hard in production. it would run on too many different environments
<MikeFair>
If each client has a public key and has to sign every message (the software does this behind the scenes no user interaction involved)
<iain__>
hmm
<MikeFair>
Then the public key is the thing you're scoring; you first throw out any messages that aren't properly signed
<iain__>
It's not really the amount of messages I'm worried about. More the size of them
<MikeFair>
that generally proves the message came from your app
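Something like the signing scheme MikeFair sketches could look as follows in Node (recent Node's one-shot crypto.sign/crypto.verify assumed; the message shape is made up for illustration):

    const crypto = require('crypto')

    // Each client carries a keypair; the app signs every outgoing message
    // behind the scenes, no user interaction involved.
    const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
      modulusLength: 2048,
      publicKeyEncoding: { type: 'spki', format: 'pem' },
      privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
    })

    function signMessage (body) {
      const signature = crypto.sign('sha256', Buffer.from(body), privateKey)
      return { body, signature: signature.toString('base64'), publicKey }
    }

    function verifyMessage (msg) {
      // Anything that fails verification is dropped before scoring.
      return crypto.verify('sha256', Buffer.from(msg.body), msg.publicKey,
        Buffer.from(msg.signature, 'base64'))
    }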
<iain__>
If there isn't a limit on it, they could just upload an arbitrarily large message and everyone's bandwidth would just get screwed being subscribed to it
<MikeFair>
iain__: you have to request the message data
<iain__>
oh.
<MikeFair>
iain__: you get an inexpensive announcement that says there is data
<MikeFair>
(at this address)
<iain__>
that is true. yeah
<MikeFair>
And for the most part; the messages are small enough to fit in the payload of the announcement so it's not generally thought of as a thing
<iain__>
But how do I tell what the message is like before I download it
cemerick has quit [Ping timeout: 255 seconds]
<MikeFair>
Perhaps you could request the size first before you fetch it
<MikeFair>
ipfs ls
<iain__>
ah
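With js-ipfs-api that size-first check might look roughly like this (object.stat's CumulativeSize is the closest thing to "how big is this before I fetch it"; the 1 MiB cap is an arbitrary app policy, not an API feature):

    const ipfsAPI = require('ipfs-api')
    const ipfs = ipfsAPI('localhost', '5001')

    const MAX_BYTES = 1024 * 1024

    function fetchIfSmall (hash, cb) {
      // stat is cheap compared to pulling the whole object
      ipfs.object.stat(hash, (err, stats) => {
        if (err) return cb(err)
        if (stats.CumulativeSize > MAX_BYTES) {
          return cb(new Error('announced object too large, skipping'))
        }
        ipfs.get(hash, cb)
      })
    }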
<MikeFair>
I mean I haven't used the FloodSub thing yet; so I'm not sure how it works
<iain__>
true
<iain__>
no you're right
<iain__>
before I go down this rabbit hole. There isn't a better way of finding peers right?
<MikeFair>
iain__: not in a distributed system, short of registering with a central website
<MikeFair>
But you can layer it
<MikeFair>
it doesn't all have to hang off one channel
<AphelionZ>
is there a way to notify the dht that new content is available
<Kubuxu>
lgierth: are you around?
<iain__>
well that's where I started. with dht
<iain__>
but eh.
<MikeFair>
AphelionZ: ipfs name publish ? or ipfs add
<AphelionZ>
ipfs add probably
<AphelionZ>
well, hmm
<MikeFair>
AphelionZ: ipfs add puts blocks into the system; but it doesn't announce it to anyone
<MikeFair>
except your local peers
<MikeFair>
Someone has to figure out how to ask for the block before it really gets out there
<MikeFair>
That's why the whole ipns thing is so useful
<AphelionZ>
i'm just trying to shorten the gap between me making a new paste in my pastebin and others being able to get it from ipfs.io/ipfs
<MikeFair>
It's a fixed address where the content can change
<AphelionZ>
ok cool, yeah i get what ipns is
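A hedged sketch of that flow with js-ipfs-api: add the new content, then repoint the node's IPNS name at it (response field names vary between versions, so the result is logged as-is):

    const ipfsAPI = require('ipfs-api')
    const ipfs = ipfsAPI('localhost', '5001')

    ipfs.add(Buffer.from('new paste contents'), (err, res) => {
      if (err) throw err
      const hash = res[0].hash
      // /ipns/<peer-id> now resolves to /ipfs/<hash>
      ipfs.name.publish('/ipfs/' + hash, (err2, result) => {
        if (err2) throw err2
        console.log('published:', result)
      })
    })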
<MikeFair>
AphelionZ: hehe; have pinbot on this channel pin it; then unpin it
<AphelionZ>
im worried there's some NAT traversal issues
<MikeFair>
AphelionZ: it might not listen to you ;)
<AphelionZ>
i wouldnt listen to me
<MikeFair>
AphelionZ: have your app submit the request to ipfs.io right away
<AphelionZ>
but you couldn't ipfs get my stuff, and now i'm trying myself and it's struggling
anewuser has quit [Ping timeout: 255 seconds]
<MikeFair>
iain__: Not putting you on the spot; but I'd love to hear how you'd like it to work
<iain__>
what exactly? Which part?
<iain__>
oh I don't mind btw :)
<MikeFair>
iain__: The peer group bootstrapping
<MikeFair>
iain__: and is this more like real time chat or message boards?
<MikeFair>
(async)
<iain__>
Well. Before I started playing with ipfs I was just using dht directly. Using github.com/anacrolix/dht to just find the ip addresses.
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<iain__>
I was intrigued on how I didn't need to take care of NAT and so on with ipfs
<MikeFair>
iain__: Which is great as long as they aren't behind a NAT
<iain__>
true. exactly
<iain__>
I can't talk to clients. just know what their addresses might be
anewuser has joined #ipfs
<AphelionZ>
how long does discovery typically take?
<MikeFair>
AphelionZ: I think something is wrong
<iain__>
But it did work up to a point. I think there should be an easy way of finding common peers which are looking for the same info hash
<AphelionZ>
yeah i agree
<iain__>
just a few seconds
<apiarian>
is there something weird between 0.4.5 and the IPFS Station chrome extension? just updated and the daemon seems to be working fine, but the extension can't find it
<Kubuxu>
apiarian: we have changed one API endpoint
<Kubuxu>
it might be the problem
<iain__>
But that's how torrents for example find each other. If I'm asked on how I'd like it to work. I'd assume this part of finding peers would be relatively the same
<Kubuxu>
new version of js-ipfs-api supports both formats now
<MikeFair>
iain__: I'm just thinking that through; that'd mean potentially sending information in the opposite direction of the data flow
<MikeFair>
iain__: So you'd be able to ask "Who are all the peers who have asked for hash XYZ"
<iain__>
exactly
<iain__>
Before I started talking to you, I was planning to do just that
<MikeFair>
iain__: The problem is you'll get a bunch of non-participants
<iain__>
true
aquentson1 has joined #ipfs
<MikeFair>
iain__: because it's a rumormill filled with nodes passing notes; everyone in the neighborhood is asking for that hash
<MikeFair>
iain__: The way it's set up at the moment is you can announce to everyone listening to hash xyz that you've arrived
<MikeFair>
and they can get back to you
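That announce-and-reply pattern, roughly, with js-ipfs-api (the daemon needs --enable-pubsub-experiment at this point; the topic name and payload are placeholders):

    const ipfsAPI = require('ipfs-api')
    const ipfs = ipfsAPI('localhost', '5001')

    const topic = 'xyz' // the hash the group rendezvouses on

    // Everyone already listening sees the announcement and can reply in kind.
    ipfs.pubsub.subscribe(topic, (msg) => {
      console.log('from', msg.from, ':', msg.data.toString())
    })
    ipfs.pubsub.publish(topic, Buffer.from('hello, I just arrived'))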
<iain__>
true again. yeah. but you would just add a layer of verification, pinging them and if they don't respond properly, dismiss that address.
<iain__>
yeah...
<MikeFair>
iain__: You don't really have to worry about that in this scenario
<MikeFair>
iain__: If they are going to take the time to get back to you; it'll be a well-formed message or a bug
aquentson has quit [Ping timeout: 245 seconds]
<MikeFair>
What you might want is authentication for future scoring though
<MikeFair>
and permissions/authorizations access
<iain__>
to send messages on certain topics?
<MikeFair>
yes; and to represent the topics themselves
<MikeFair>
each topic is its own address
<MikeFair>
its own PubSub channel
<iain__>
yeah
<MikeFair>
Oh here's a way to kind of lean toward what you want
<MikeFair>
You're PubSub listening in the lobby
<MikeFair>
actually; the foyer
<MikeFair>
someone shows up and you issue them the current key to get to the lobby
mildred1 has joined #ipfs
<iain__>
ah
ShalokShalom has quit [Ping timeout: 240 seconds]
ygrek has joined #ipfs
<MikeFair>
and the public key for the agent of the lobby
cemerick has joined #ipfs
<iain__>
Even though it's very interesting, I have to leave right now. Dinner is here. I'll be back in half an hour or so
ShalokShalom has joined #ipfs
<iain__>
But yeah. maybe a lobby first and then switching topics
<MikeFair>
People then set up a private channel with the lobby to get the addresses of topics
<MikeFair>
In most cases you can model IRC
<MikeFair>
we issue commands to the server individually before we can group chat
<MikeFair>
that makes the server the gatekeeper
<MikeFair>
in this case; you use crypto keys to represent "Server messages"
<MikeFair>
only the person who has the permission to make the update can be the server
<MikeFair>
anyway; it's an interesting challenge; you give up one limitation (can't get past NAT) and take on another :)
<MikeFair>
enjoy dinner!
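Purely illustrative "foyer" logic for the handoff MikeFair describes; key rotation, message shapes, and the agent's role are all assumptions here, not anything the channel agreed on:

    const crypto = require('crypto')

    // The lobby agent seals the current lobby key to each arrival's
    // own public key, so only that arrival can read it.
    function admit (arrivalPublicKeyPem, lobbyKey) {
      const sealed = crypto.publicEncrypt(arrivalPublicKeyPem, lobbyKey)
      return {
        type: 'lobby-invite',
        key: sealed.toString('base64')
        // the agent would also attach its own public key so arrivals
        // can authenticate future "server messages"
      }
    }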
<MikeFair>
Kubuxu / SchrodingersScat: You guys still here?
mildred2 has joined #ipfs
<SchrodingersScat>
yes
<Kubuxu>
I am here, partially
<MikeFair>
That conversation reminded me of another "desirable" feature I was wishing for over on Stellar
<MikeFair>
private sub-swarms
ygrek has quit [Ping timeout: 260 seconds]
<MikeFair>
clusters I guess
DiCE1904 has joined #ipfs
<MikeFair>
I'd have to add a key to the node so it could talk; another layer of encryption for the whole set
mildred1 has quit [Ping timeout: 240 seconds]
<MikeFair>
is that something ipfs cluster does
<MikeFair>
(can do)
apiarian_ has joined #ipfs
apiarian has quit [Read error: Connection reset by peer]
apiarian has joined #ipfs
<Kubuxu>
no, we know it as private networks
<Kubuxu>
I will be finishing it up for next two weeks
<Kubuxu>
depending on what you want to do: ipfs swarm peers
<Kubuxu>
to check if you have connectivity
<Kubuxu>
or ipfs cat QmejvEPop4D7YUadeGqYWmZxHhLc4JBUCzJJHWMzdcMe2y
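The API equivalent of that connectivity check, via js-ipfs-api:

    const ipfsAPI = require('ipfs-api')
    const ipfs = ipfsAPI('localhost', '5001')

    // Roughly `ipfs swarm peers` over the HTTP API
    ipfs.swarm.peers((err, peers) => {
      if (err) throw err
      console.log('connected to', peers.length, 'peers')
    })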
<kpcyrd>
Kubuxu: I'm still fighting with "run an ipfs service on a 512M droplet", the go-ipfs docker container seems to be stuck in an undead state every couple of days
<Kubuxu>
yeah, there were a few bugs (and still are a few)
<Kubuxu>
update to 0.4.5
<Kubuxu>
it is more stable
<Kubuxu>
we still know of one bug that might lock up go-ipfs
<kpcyrd>
let me check which version I'm on
<Kubuxu>
daviddias: have you killed greenkeeper?
<daviddias>
no, why?
skinkitten_ has quit [Ping timeout: 240 seconds]
skinkitten has quit [Ping timeout: 240 seconds]
<Kubuxu>
I just saw lots of GK PRs and I remember reading something about killing it.
<kpcyrd>
Kubuxu: I'm building a new image, let's see if that fixes the issue :)
<Kubuxu>
if you see lockup again or weird behaviour
<whyrusleeping>
Kubuxu: do we want to do that? or just reset --hard the release branch?
ZaZ has quit [Read error: Connection reset by peer]
<Kubuxu>
This is almost reset --hard but with preserving history and not breaking people's setups (git pull will break if you do a force push and they have this branch checked out).
<Kubuxu>
I will create a PR merging it back to master after it is merged to release
<Kubuxu>
this way it will be possible to do a FF merge next time
kzimmermann[m] has joined #ipfs
<whyrusleeping>
alright, SGTM
bastianilso has quit [Quit: bastianilso]
<whyrusleeping>
i like that we can see everyone who contributed to this release
<AphelionZ>
I need to make the promises more "functional programming"ish
<AphelionZ>
but at first glance it works
<whyrusleeping>
AphelionZ: looking at it now :)
* whyrusleeping
realizes he can't read javascript
<AphelionZ>
to be fair that particular JS of mine isn't the easiest to read
jeb_bush[m] has left #ipfs ["User left"]
<whyrusleeping>
be careful about exposing the 5001 api to anything public
<AphelionZ>
it uses swarm peers to get the number at the bottom, and uses ipfs.files.write to write a file named as the timestamp, and then uses ipfs.get to get the contents of the file
<AphelionZ>
I'm using the microsecond timestamp for the name of the file
<whyrusleeping>
Do you want to keep them named? You can skip that entirely by just using add
<AphelionZ>
so i might end up with multiple duplicate files with different names, with the same content
<whyrusleeping>
content gets deduped, no worries
<AphelionZ>
yeah I don't need to keep them named at all
<AphelionZ>
right, they'll have the same content hash
<AphelionZ>
its a minor thing
<whyrusleeping>
Yeah, just use ipfs add
<AphelionZ>
so just ipfs.add?
<AphelionZ>
ok cool
<whyrusleeping>
yeap, it will give you back a hash
<AphelionZ>
tight
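A minimal version of what whyrusleeping suggests: identical bytes always hash to the same address, which is why the duplicate-files worry evaporates:

    const ipfsAPI = require('ipfs-api')
    const ipfs = ipfsAPI('localhost', '5001')

    ipfs.add(Buffer.from('paste contents'), (err, res) => {
      if (err) throw err
      // the same content added twice returns the same hash
      console.log(res[0].hash)
    })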
ylp has quit [Ping timeout: 252 seconds]
<kpcyrd>
Kubuxu: Feb 12 20:02:03 ipfs-ink docker[8638]: Run migrations now? [y/N] Not running migrations of fs-repo now.
<kpcyrd>
it appears that the docker image can't upgrade without manual plumbing
<kpcyrd>
cc: lgierth
<Kubuxu>
it can
<Kubuxu>
gimme a sec
<whyrusleeping>
lgierth was able to do this somehow
<Kubuxu>
you can just pass `--migrate=true` to the run command
matoro has quit [Ping timeout: 255 seconds]
<kpcyrd>
Feb 12 20:11:53 ipfs-ink docker[12748]: no fs-repo-migration binary found for version 5: failed to check migrations version: fork/exec /tmp/go-ipfs-migrate272553187/fs-repo-migrations: no such file or directory
<Kubuxu>
interesting
<Kubuxu>
I think it worked over here.
<Kubuxu>
lgierth: ^^
<iain__>
how must I do for pub sub to work over the internet
ylp has joined #ipfs
<iain__>
what*
<Kubuxu>
kevina: can I rebase both of those branches to master?
<Kubuxu>
If you have any local changes, push them.
<kpcyrd>
Kubuxu: not sure if there's anything going on in the docker containers
<Kubuxu>
yeah, it is interesting
<Kubuxu>
I might have an idea
aquentson has joined #ipfs
<Kubuxu>
what architecture are you running?
<kpcyrd>
amd64
<Kubuxu>
hmm
<kpcyrd>
not sure what the flags on /tmp are inside docker
<kpcyrd>
I could imagine it's has noexec or something set
<whyrusleeping>
Kubuxu: once we get jenkins running nicely we can run CI on raspberry pis :D
iain__ has quit [Quit: Page closed]
<kpcyrd>
*it has
<whyrusleeping>
kpcyrd: ooooh... that's a good point
<Kubuxu>
that might be quite possible
aquentson1 has quit [Ping timeout: 276 seconds]
<Kubuxu>
but how did it work in the past then?
<kevina>
Kubuxu: is there a reason you want to rebase?
<whyrusleeping>
you can fix this manually by getting a shell in the docker container and downloading/running the fs-repo-migrations
matoro has joined #ipfs
<whyrusleeping>
kevina: so we can get CI on jenkins running
<Kubuxu>
kevina: yes, Jenkins CI compat (tomorrow) and sharness test coverage
<whyrusleeping>
i'll rebase my filestore branch
<Kubuxu>
whyrusleeping: kk
arkimedes has joined #ipfs
<kevina>
okay, I will rebase my code, it will require some care...
<kevina>
I do have an unpushed commit
<Kubuxu>
k, yeah, that is what I was worried about
<Kubuxu>
don't ask
<kpcyrd>
/tmp/ is not a mountpoint
<kpcyrd>
so that doesn't seem to be the issue
<Kubuxu>
kpcyrd: is this file still in the tmp dir after it is downloaded? (you might want to run a shell in the container and run ipfs from the shell)
<Kubuxu>
if it is
<Kubuxu>
check its perms
<Kubuxu>
ldd it
<lgierth>
hey hey
<lgierth>
so we have a migrations problem?
<lgierth>
eeh two migrations problems?
aquentson1 has joined #ipfs
<kpcyrd>
-rwxr-xr-x 1 ipfs ipfs 7772873 Feb 12 20:27 /tmp/go-ipfs-migrate358426158/fs-repo-migrations
<AphelionZ>
...can we talk about the ipfs.get API?
<whyrusleeping>
what about it?
<AphelionZ>
if i'm not mistaken it returns an object stream, which then returns multiple objects, each of which has a content property, which is another stream
<whyrusleeping>
kpcyrd: looks like bad perms to me. That should be 0755
<AphelionZ>
could we potentially move that complexity one level up and just have that API return a single stream?
dryajov1 has joined #ipfs
<whyrusleeping>
AphelionZ: ipfs get should be returning a tar stream
<AphelionZ>
not in the JS API, I don't think
<AphelionZ>
in my code the first call returns an ObjectsStream, which in turn returns objects like:
<AphelionZ>
{ Hash: "hashKey", content: Source }
<whyrusleeping>
daviddias: why doesnt the js-ipfs api just return a tar stream for get?
<AphelionZ>
where Source is another event source
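For reference, consuming that nested shape looks something like this (the Node readable-stream flavour of js-ipfs-api is assumed, and the hash is a placeholder):

    const ipfsAPI = require('ipfs-api')
    const ipfs = ipfsAPI('localhost', '5001')

    ipfs.get('Qm...someHash', (err, stream) => {
      if (err) throw err
      // the outer stream yields one object per file...
      stream.on('data', (file) => {
        const chunks = []
        // ...and each object carries its own inner content stream
        file.content.on('data', (c) => chunks.push(c))
        file.content.on('end', () => {
          console.log(file.path, Buffer.concat(chunks).length, 'bytes')
        })
      })
    })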
<kpcyrd>
whyrusleeping: yup. I tried to verify I have a 0777 folder, but I can't find it. Are you working on a fix? I'd file a PR otherwise
<kpcyrd>
:)
<whyrusleeping>
kpcyrd: go ahead and file a PR
<whyrusleeping>
i'm going to be away for a few hours