inconshreveable has quit [Remote host closed the connection]
inconshreveable has joined #ipfs
<jbenet>
zignig! o/
domanic has joined #ipfs
<zignig>
jbenet: how goes ipfsing ?
<zignig>
I have been reworking astral boot, for more awesome.
<zignig>
boots a working coreos cluster now. and I should be able to stuff ipfs into a rkt and then you can launch as many as you want.
<zignig>
If the offer still stands I will take you up on that digital ocean boxen for some testing in a few weeks.
gatesvp_ has joined #ipfs
<gatesvp_>
@jbenet @whyrusleeping been attempting to run some `sharness` tests and getting the following: "rm: cannot remove 'trash directory.t0040-add-and-cat.sh/ipfs': Is a directory" unclear what's going on here, have we seen this before?
<jbenet>
zignig: absolutely
<jbenet>
zignig: that would be awesome
<zignig>
I have some ideas (and a bit of code) for preprocessing files through a spool and then inserting them into ipfs.
<jbenet>
gatesvp_ looks like a fuse unmount fail. we should have an "unmount fuse" script that ... really really unmounts fuse, and make sharness use it.
<jbenet>
zignig: we have some boxes to play with-- they're beefy, what sort of resources do you want/need?
<jbenet>
depending can give you new boxes or use those.
<zignig>
not sure yet, I am staging on some virtual boxes at home. Will have some structure in a week or two.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<gatesvp_>
any guesses on where I could find the "really-really unmount fuse" script?
<jbenet>
zignig: ok just lmk whenever
<jbenet>
gatesvp_ oh i mean we should make one.
<jbenet>
gatesvp_ this is linux right?
<gatesvp_>
yep... working on an Ubuntu Linux in Digital Ocean
inconshreveable has quit [Ping timeout: 265 seconds]
<Tv`>
i don't expect that sudo-ness to change much
<Tv`>
and linux umount -f pretty much does nothing except for some very narrow special cases
<Tv`>
(there was talk about making it more generic; code never materialized)
<jbenet>
rht: btw, if you look at the crypto/rand stuff, note that secio is a crappy AEAD attempt. it's not known to be correct. I hope that AGL's call for a standard AEAD protocol (and perhaps one of the CAESAR entrants) will yield a good protocol soon. (see https://www.imperialviolet.org/2015/05/16/aeads.html)
gatesvp_ has quit [Ping timeout: 246 seconds]
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
ir2ivps9 has quit [K-Lined]
rht________ has quit [Ping timeout: 246 seconds]
rht__ has joined #ipfs
domanic has quit [Ping timeout: 258 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
hellertime has joined #ipfs
Wallacoloo has joined #ipfs
anshukla has quit [Ping timeout: 264 seconds]
rht__ has quit [Quit: Page closed]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
inconshreveable has joined #ipfs
pfraze_ has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
anshukla has joined #ipfs
ehmry has quit [Ping timeout: 276 seconds]
anshukla has quit [Remote host closed the connection]
ehmry has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
ehmry has quit [Remote host closed the connection]
anshukla has joined #ipfs
anshukla has quit [Remote host closed the connection]
anshukla has joined #ipfs
anshukla has quit [Remote host closed the connection]
Wallacoloo has quit [Quit: Leaving.]
pfraze_ has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
anshukla has joined #ipfs
EricJ2190 has quit [Ping timeout: 258 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
hellertime has quit [Quit: Leaving.]
lgierth has quit [Quit: Ex-Chat]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Wallacoloo has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nell has quit [Quit: WeeChat 0.4.2]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<tperson>
Anyone know what happens when a peers ip address changes?
<tperson>
(While a daemon is running)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
it should manage just fine
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
nell has joined #ipfs
<whyrusleeping>
jbenet: you around?
<whyrusleeping>
i have a crazy idea
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
also, bitswap cr pls
anshukla has quit [Remote host closed the connection]
anshukla has joined #ipfs
pfraze has joined #ipfs
anshukla has quit [Ping timeout: 246 seconds]
sharky has quit [Ping timeout: 272 seconds]
<jbenet>
I'm around
<whyrusleeping>
so, the importer calls
<whyrusleeping>
currently take a pinner
<whyrusleeping>
but we dont use that because its not simple for what we want to do
<whyrusleeping>
so my proposal is to remove that parameter and replace it with a callback function
<whyrusleeping>
that will be called with every block added through the importer
<whyrusleeping>
and a flag for whether or not the given block is the root of the dag
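A minimal Go sketch of the callback-based importer being proposed here might look like the following; the names (Block, AddedBlockFunc, addBlocks) are illustrative stand-ins, not the actual go-ipfs API:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Sketch of the proposal: instead of passing a Pinner into the importer,
// callers pass a callback that sees every block as it is created, along
// with a flag marking the DAG root.

type Block struct {
	Key  string // content hash of the block
	Data []byte
}

// Called once per block added through the importer.
type AddedBlockFunc func(b Block, isRoot bool) error

// addBlocks stands in for the importer's internal loop: it hands each
// chunk to the callback, marking the last one as the root.
func addBlocks(chunks [][]byte, cb AddedBlockFunc) error {
	for i, c := range chunks {
		b := Block{Key: fmt.Sprintf("%x", sha256.Sum256(c)), Data: c}
		if err := cb(b, i == len(chunks)-1); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	var pinned []string
	cb := func(b Block, isRoot bool) error {
		if isRoot {
			pinned = append(pinned, b.Key) // e.g. recursively pin only the root
		}
		return nil
	}
	_ = addBlocks([][]byte{[]byte("hello"), []byte("world")}, cb)
	fmt.Println("roots pinned:", pinned)
}
```

The caller decides what to do per block (for example, pin only the root), so the importer no longer needs to know anything about the pinner.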
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
sharky has joined #ipfs
<jbenet>
that sgtm
<jbenet>
why is it crazy?
<jbenet>
oh woah
<jbenet>
hang on
<jbenet>
i've an idea
<jbenet>
the chunker-- do this in the chunking API.
<jbenet>
pass a function to call
<jbenet>
don't allocate
<jbenet>
sync function, that function stores, and can call another callback itself.
<jbenet>
hmmmmm not sure how we can get by without allocating more-- you'd think we could.
<jbenet>
ah yeah with sub-readers
<jbenet>
derive a reader that's only reading a subset.
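One way to read the chunker idea above is roughly this hedged sketch: the chunker never allocates chunk buffers itself, it derives an io.SectionReader over each chunk's byte range and hands it to a synchronous callback, which can store the data (and call further callbacks) however it likes. chunkBySize and chunkFunc are hypothetical names:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// chunkFunc is the synchronous per-chunk callback.
type chunkFunc func(chunk *io.SectionReader, offset int64) error

// chunkBySize walks fixed-size ranges of r and hands a sub-reader for
// each range to fn, without buffering the chunk itself.
func chunkBySize(r io.ReaderAt, size, chunkSize int64, fn chunkFunc) error {
	for off := int64(0); off < size; off += chunkSize {
		n := chunkSize
		if off+n > size {
			n = size - off
		}
		// io.NewSectionReader gives a reader restricted to [off, off+n).
		if err := fn(io.NewSectionReader(r, off, n), off); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	data := "the quick brown fox jumps over the lazy dog"
	r := strings.NewReader(data) // implements io.ReaderAt
	_ = chunkBySize(r, int64(len(data)), 16, func(sr *io.SectionReader, off int64) error {
		buf, _ := io.ReadAll(sr) // the callback decides when/whether to buffer
		fmt.Printf("chunk @%d: %q\n", off, buf)
		return nil
	})
}
```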
<cryptix>
all i can say is, that go doesnt magically buffer things if you dont instruct it to
<cryptix>
io.Pipe also has no internal buffers
<whyrusleeping>
i didnt think it did
<whyrusleeping>
i thought it blocked until read was called elsewhere
<cryptix>
i want that wantmanager btw looks rad
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
<whyrusleeping>
yeah?
<whyrusleeping>
:D
<whyrusleeping>
it performs a good deal better too
<whyrusleeping>
ugh, godeps cant handle two different versions of the same code
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
ur5edgb has joined #ipfs
okket_ has quit [Quit: Have a nice day.]
u7654dec has quit [Ping timeout: 256 seconds]
okket has joined #ipfs
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
tilgovi has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
pfraze_ has joined #ipfs
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
pfraze_ has quit [Remote host closed the connection]
<bigbluehat>
Blame: great to hear :) However, I think what they mean by WebDHT is not necessarily BrowserDHT
<bigbluehat>
in the same way WebRTC should really be called BrowserRTC
<Blame>
Essentially, unless browsers are full dht peers (which will cause horrible churn) then they will always be somebody's client.
<whyrusleeping>
Blame: you can work around churn though
<whyrusleeping>
do you really think the added churn will outweigh the benefit of that many more users?
<Blame>
Define users?
<Blame>
I think we're running on different terms
notduncansmith has quit [Remote host closed the connection]
<Blame>
essentially: we can hook browsers up as "clients" to a distributed system, where they have no responsibilities, high performance, and can pick who they talk to (websockets let me connect to any server I want without stun)
notduncansmith has joined #ipfs
<Blame>
they just wont be able to set up webrtc without a meet-in-the-middle server to set up the connection
<Blame>
this way we could let the users pick their meet-in-the-middle server
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
notduncansmith_ has quit [Read error: Connection reset by peer]
vijayee_ has joined #ipfs
octalberry has joined #ipfs
<whyrusleeping>
vijayee_: jbenet knows what he wants for the tour
<whyrusleeping>
its been a whiiiilie since we touched it
chriscool has joined #ipfs
chriscool has quit [Client Quit]
chriscool has joined #ipfs
<vijayee_>
thanks whyrusleeping
<jbenet>
morning o/
<jbenet>
vijayee_ it may be good to make all the content first.
<jbenet>
vijayee_ have you seen nodeschool.io ?
<whyrusleeping>
jbenet: good afternoon :P
octalberry has quit [Ping timeout: 245 seconds]
<vijayee_>
yeah I've used the nodeschool tool
<vijayee_>
they have it abstracted out from the lessons
<vijayee_>
workshopper I think its called
<vijayee_>
should it just be modeled after that?
notduncansmith_ has joined #ipfs
<jbenet>
that's the sort of thing i want.
<jbenet>
vijayee_ it doesn't have to be a tool inside go-ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
<jbenet>
vijayee_ we can bundle it
<vijayee_>
I believe they use a test suite to evaluate the completion of each lesson
<vijayee_>
but it should be command line not browser based
tilgovi has quit [Remote host closed the connection]
<jbenet>
vijayee_ doing it as a separate tool-- the tricky bit will be using the config file for the tour state.
<jbenet>
vijayee_ but maybe we can just store a ~/.ipfs/tour-state file.
<jbenet>
instead of using the config.
<vijayee_>
hmmmm...
<vijayee_>
the config file lives where now? or do you mean a config that is specific to the tour
<whyrusleeping>
the config file lives in $IPFS_PATH/config
<jbenet>
vijayee_ what i'm saying is that if we do it as a separate tool it will be tricky to coordinate access to the config file because a daemon may be running. thus-- it is perhaps simpler to use a separate file altogether.
<jbenet>
vijayee_ ($IPFS_PATH defaults to ~/.ipfs)
<vijayee_>
I see. Ok I'll give it a shot emulating how workshopper does it
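A hedged sketch of the separate-state-file approach discussed above, assuming a plain text file named tour-state under $IPFS_PATH (defaulting to ~/.ipfs); the helper names are made up for illustration:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Keeping tour progress in its own file means the tour tool never has to
// coordinate config-file access with a running daemon.

func ipfsPath() string {
	if p := os.Getenv("IPFS_PATH"); p != "" {
		return p
	}
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".ipfs")
}

func loadTourState() (string, error) {
	b, err := os.ReadFile(filepath.Join(ipfsPath(), "tour-state"))
	if os.IsNotExist(err) {
		return "0.0", nil // tour not started yet
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func saveTourState(section string) error {
	return os.WriteFile(filepath.Join(ipfsPath(), "tour-state"), []byte(section+"\n"), 0o600)
}

func main() {
	cur, _ := loadTourState()
	fmt.Println("current tour section:", cur)
	_ = saveTourState("1.2")
}
```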
notduncansmith_ has quit [Read error: Connection reset by peer]
RzR has quit [Excess Flood]
<Blame>
I just setup a cron job to snapshot ipfs nodes daily
<whyrusleeping>
random whyrusleeping idea #9152: lets have a 'fuse-setup.sh' script to help set up/troubleshoot fuse for new users
RzR has joined #ipfs
<jbenet>
anything else on 1218?
<jbenet>
i know we want to get that out asap-- i just worry about the difficulty in editing wantmanager safely.
<jbenet>
whyrusleeping o
<jbenet>
o/
<whyrusleeping>
i think its fine, unless you have some specific concerns on the editing it front
u7654dec has quit [Ping timeout: 272 seconds]
<whyrusleeping>
the main idea behind it is that everything is synchronized in the run loop
<Evermore>
The FUSE mount can't add files, right? Because it has to name everything by hash?
<whyrusleeping>
Evermore: the /ipns/ fuse mount can add files
<Evermore>
cool
<whyrusleeping>
anything you add to /ipns/local will be available through /ipns/<your peer id>
<whyrusleeping>
if you do end up messing around with the fuse stuff please let me know how it goes
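For reference, the workflow whyrusleeping describes could look roughly like this sketch, assuming the daemon is running and `ipfs mount` has mounted /ipns; the peer ID and file paths below are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// Writing a file under /ipns/local should make it readable under
// /ipns/<your peer id> once the mount republishes.
func main() {
	peerID := "<your-peer-id>" // e.g. taken from `ipfs id`

	src := []byte("hello from the ipns fuse mount\n")
	if err := os.WriteFile(filepath.Join("/ipns/local", "hello.txt"), src, 0o644); err != nil {
		log.Fatal(err)
	}

	back, err := os.ReadFile(filepath.Join("/ipns", peerID, "hello.txt"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read back: %s", back)
}
```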
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
<Evermore>
I'm thinking of putting an IPFS node on my web server because I've had a couple cases where I rename a file and it breaks links that I forgot I had made
<jbenet>
whyrusleeping: ping me here if input needed. bb in a while.
<Evermore>
Other than that haven't used it for a month
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to refactor/bitswap: http://git.io/vTlei
<ipfsbot>
go-ipfs/refactor/bitswap a1d6265 Jeromy: address comments from CR
<Evermore>
plus non-technical people will think all my links are viruses
<whyrusleeping>
Evermore: lol, if non-technical people looked at the links they clicked on a daily basis, i'm sure they would be far more scared than looking at ipfs links
<Blame>
there was a "ping the entire network" command. Did that get removed?
<whyrusleeping>
ipfs diag net
<whyrusleeping>
dont rely on it, no guarantees about its lifespan
<whyrusleeping>
i might remove it tomorrow
<whyrusleeping>
i might remove it in ten years
<whyrusleeping>
(probably wont remove it tomorrow though)
<Blame>
thats what I figured
<Evermore>
Any estimates how big the network is?
<Blame>
would you be upset if I ran that once an hour for a few weeks?
<Blame>
1 sec
<Blame>
I'll tell you
<Blame>
i just ran: `ipfs diag net | wc` and I got 3492
<Evermore>
wow cool
<Blame>
im still making sure that is right (it could be an integer multiple of the number of servers)
<Blame>
whyrusleeping: what does it mean if a node does not have an associated latency?
<jbenet>
Blame yeah that's not right-- also there are many different networks. I've noticed clusters merging and separating
<jbenet>
(People use different bootstrap nodes)
<Blame>
as long as the bootstraps are on the same network, you should be fine. We would notice a true net-split
<Blame>
people would not be able to see freshly posted data on their nodes
<Evermore>
I had trouble with not seeing freshly-posted data on my game project back in April but that could have also been the VM I was using
<Evermore>
I'm sure it worked once, then I went to show the artist and it suddenly didn't work
<whyrusleeping>
it prints out each node along with each of its connections
<jbenet>
Evermore were you relying on the gateways?
<Evermore>
jbenet: Each game ran an IPFS daemon as a subprocess so no I don't think so?
aluchan has quit [Changing host]
aluchan has joined #ipfs
<whyrusleeping>
Blame: you can run that once an hour, no worries
<Evermore>
jbenet: I used the gateways to double-check but I don't remember what the results were
<jbenet>
From my observations there are around 60-140 dedicated ipfs nodes (up lots of the time) these days, in the main network.
notduncansmith_ has joined #ipfs
<jbenet>
Evermore id love to get that working well for you
<jbenet>
Like that's a perfect use case
notduncansmith_ has quit [Read error: Connection reset by peer]
<whyrusleeping>
Evermore: as much info about your setup and work flows as you can give us would be awesome :)
<whyrusleeping>
maybe file an issue somewhere for us to explain what youre up to?
<Evermore>
jbenet: It was such a fun idea too, it was like pictionary. I made a little drawing widget and you would draw a picture, upload it to IPFS, paste the link in IRC (Didn't have time for pure ipfs messaging) and then other people would pin it from you
<Evermore>
whyrusleeping: I'll take better notes next time I work on it
<Blame>
I just got 66 nods
<Blame>
*nodes
<Blame>
smaller than I expected
<whyrusleeping>
Blame: it varies a lot
<Blame>
good! I want that data
dPow has quit [Ping timeout: 245 seconds]
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
Wallacoloo has joined #ipfs
notduncansmith has quit [Remote host closed the connection]
dPow has joined #ipfs
notduncansmith has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to refactor/bitswap: http://git.io/vTlGh
notduncansmith has quit [Read error: Connection reset by peer]
<krl>
whyrusleeping: interesting
<krl>
will have to look at this more closely later
<whyrusleeping>
cool cool :)
<whyrusleeping>
let me know if you use it, or if anything is wonky
patcon has quit [Ping timeout: 246 seconds]
Wallacoloo has quit [Quit: Leaving.]
lgierth has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
u7654dec has joined #ipfs
chriscool has quit [Read error: No route to host]
<ipfsbot>
[go-ipfs] chriscool created test-cat-with-stdin (+1 new commit): http://git.io/vTlSH
<ipfsbot>
go-ipfs/test-cat-with-stdin 863f386 Christian Couder: t0040: add tests for ipfs cat with stdin...
chriscool has joined #ipfs
<ipfsbot>
[go-ipfs] chriscool opened pull request #1250: t0040: add tests for ipfs cat with stdin (master...test-cat-with-stdin) http://git.io/vTlQZ
<jbenet>
whyrusleeping: will people see warnings? they shouldn't be seeing these errors i don't think.
<jbenet>
whyrusleeping: like, if we did nice logs like an HTTP server where all individual actions are logged once, that'd be nice.
<jbenet>
whyrusleeping: but otherwise the UX is "this thing is silent, and suddenly yelling at me"
<jbenet>
whyrusleeping: (maybe warnings are silenced, i dont recall)
<jbenet>
whyrusleeping: i understand the desire to be clear about errors, but this isn't an error to tell the end users about. it's an error that -- if truly an error to handle -- we should handle ourselves silently.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<wking>
it is. Just running through it's Travis tests now
<jbenet>
yay
<wking>
per-commit #1208 review needs to go back to "namesys: Add recursive resolution", since I squashed the ResolveN API back into that commit. No need to clutter history with my abandoned Resolve-with-depth approach ;)
<jbenet>
yeah sounds good to me.
<wking>
Looks like the Travis tests for #1208 are also hitting the "No output has been received in the last 10 minutes" issue
<whyrusleeping>
wking: throw things at it
<jbenet>
is this bitswap getting stuck again?
<jbenet>
whyrusleeping: i think parallel is disabled on bitswap on travis already, no?
<whyrusleeping>
jbenet: no, its not disabled
<whyrusleeping>
jbenet: so we shouldnt use warnings either?
<jbenet>
ah. maybe want to push that up first? -- wonder why only now it's been an issue.
<whyrusleeping>
jbenet: could be kvm exacerbating the problem
<wking>
It's also popping up in #1250
<jbenet>
-- on warnings: think about it this way-- warnings are things people need to worry about and perhaps do something about.
<wking>
The OS X tests went through fine for #1208
<whyrusleeping>
okay, notice?
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
<jbenet>
-- this failure to send is not a warning-- if it failed every single time, that's a different thing-- it suggests the network is failing drastically, but that's not a one-off failure. think about it like TCP-- a packet drop isn't a warning apps hear about, but apps do hear about the network totally failing (timeouts, etc)
<jbenet>
yeah sure notice
<whyrusleeping>
well, if a message fails to send, thats not a packet dropping
<whyrusleeping>
thats a connection loss
<jbenet>
that's one connection being lost
<jbenet>
that happens all the time
<jbenet>
mid-streams.
<whyrusleeping>
hrmm, alright
<jbenet>
think about nodes going up and down.
<jbenet>
laptops closing
<jbenet>
phones going in and out of range
<jbenet>
network interfaces changing
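The policy being argued for here could be sketched roughly as follows; the sender, logger, and retry queue are stand-ins (not the actual bitswap/wantmanager code), and the point is simply that a single failed send is requeued and logged at a low "notice" severity rather than surfaced as a user-facing warning:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

var errConnClosed = errors.New("connection closed")

type message struct{ peer, payload string }

// send stands in for a network send that can fail mid-stream when a peer
// restarts, a laptop sleeps, or an interface changes.
func send(m message) error {
	if m.peer == "flaky-peer" {
		return errConnClosed
	}
	return nil
}

// trySend treats a single failed send as expected churn: requeue and log
// at notice level, rather than warning the end user.
func trySend(m message, retry chan<- message) {
	if err := send(m); err != nil {
		log.Printf("notice: send to %s failed (%v), requeueing", m.peer, err)
		retry <- m
		return
	}
	fmt.Printf("sent to %s\n", m.peer)
}

func main() {
	retry := make(chan message, 1)
	trySend(message{peer: "flaky-peer", payload: "want-block"}, retry)
	trySend(message{peer: "good-peer", payload: "want-block"}, retry)
	fmt.Println("queued for retry:", len(retry))
}
```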
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to refactor/bitswap: http://git.io/vT8e0
<chriscool>
wking: yeah there are bitswap go test failures in #1250 too
<jbenet>
chriscool -- i personally don't want to have anything to do with jenkins. if you guys want to use it, go for it, but I've wasted too much time on it already
<chriscool>
yeah ok I should be able to handle Jenkins by myself
hellertime has quit [Quit: Leaving.]
<chriscool>
jbenet ok I can send an email to the google guys about golang/build
anshukla has joined #ipfs
<chriscool>
before trying Jenkins
hellertime has joined #ipfs
patcon has joined #ipfs
<chriscool>
could someone merge #1250 anyway?
<whyrusleeping>
jbenet: i dont know if one single thing fixed the hanging, but removing parallelization helped
<jbenet>
yeah i'll merge it
<jbenet>
chriscool, i'll try to put my concerns about jenkins into an issue somewhere-- i don't think it's a good fit for github workflow, and when we were using it many people either found it difficult, or didn't even encounter it (jenkins was not testing their PRs automatically)
<whyrusleeping>
lets us choose to whitelist users for jenkins to build their PRs
<whyrusleeping>
or have jenkins test them on a case by case basis
<whyrusleeping>
i'm not sold on jenkins, but i do think we could have been using it better than we were
<jbenet>
yes, and then we have to whitelist users individually, so people who submit PRs on their own don't see those tests.
<jbenet>
aaand we have to do it for every single repository
<jbenet>
adding a ton of complexity and making it way harder to collaborate on modules
anshukla has quit [Read error: Connection reset by peer]
anshukla has joined #ipfs
ei-slackbot-ipfs has quit [Remote host closed the connection]
<jbenet>
instead of throwing away travis, maybe we could be focusing on making the tests better
ei-slackbot-ipfs has joined #ipfs
<jbenet>
the reason travis is hanging is not because of travis-- the bitswap test is hanging.
<jbenet>
the machine may be low on resources, that test is huge
<whyrusleeping>
i'm not saying we throw away travis
<whyrusleeping>
i'm saying jenkins is nice because we can run more tests than travis lets us
<whyrusleeping>
on our own reliable hardware
<whyrusleeping>
so we can be more sure about the code we're shipping
<jbenet>
and it wastes a LOT of development time to manage. travis is one file in every new repository. it is enabled with one command. it takes literally less than 10s per repository and works for every user.
aluchan has quit [Quit: WeeChat 0.3.8]
<whyrusleeping>
and modules problem isnt solved at all by golang/build.
<whyrusleeping>
travis just cant do some of the things we need
<chriscool>
so that even if the go tests fail the sharness tests still run
<jbenet>
we should also be breaking apart the repository-- there are sub-things like the network and bitswap, that can be developed and tested in isolation (and used by other people for other things) -- we'll still need strong integration tests, but there's no need to run the massive bitswap tests on a tiny little readme change.
<Tv`>
would a thing that supports the last two config file snippets from the issue be a "S3 MVP" you agree with?
<Tv`>
config file still being a local thing, ~/.ipfs still being as it is, etc
<Tv`>
that approach would put the /pk/<binary> and /ipns/<binary> keys into S3 too; i *still* don't really know what those keys are, and whether they're actually used
notduncansmith has joined #ipfs
<Tv`>
renamed the cache dir in last example to not be the same as "blocks", it's not just the /blocks namespace anymore
chrisr_ has left #ipfs [#ipfs]
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
saebekassebil has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<jbenet>
Tv`: I can take a closer look in a bit (walking around atm) but that looks great to me.
<jbenet>
The cache part being great to have.
<Tv`>
no rush, just would like to align better with your plans
chriscool has quit [Read error: Connection reset by peer]
<Tv`>
the gist is like all-the-parallel-worlds view of what could be donwe
<Tv`>
*done
chriscool has joined #ipfs
flugsio has quit [Quit: WeeChat 1.2]
<jbenet>
For the "s3 as blocks store" part, one use case to consider is what happens if multiple nodes (with their own local configs) were to share it? I'm thinking of our gateways here which could all be storing immutable objects/pulling them out of s3 and caching locally.
<jbenet>
This may be beyond mvp but something to keep in mind
<jbenet>
The /pk keys are cached public keys. They're content addressed
<jbenet>
The /ipns entries are records. They're not content addressed atm but they will be, records will be ipfs objects too
tilgovi has joined #ipfs
<jbenet>
(I can put this all in the note and expand once I stop trying to evade cars)
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
Wallacoloo has joined #ipfs
<Tv`>
jbenet: as long as it's a content-addressed store, everything works out; the only exception to that, that i can see in the code right now, is pinning root object pointer
<Tv`>
S3 has eventual consistency for mutating existing objects
<Tv`>
so it may make the system "sloppy" about pins, or the ipns records
<Tv`>
there is *nothing* in S3 that would help you get past that
<Tv`>
if you need more than that, you need dynamodb etc
<Tv`>
but considering a running ipfs daemon would probably hold both of those in memory too, it shouldn't really hurt very often
<jbenet>
Yeah the records will be content addressed so -- like pins-- hanging on to the root in the process/local config/copy-paste handles mutability
<Tv`>
S3, when choosing region carefully, is read-after-write for creates
<Tv`>
so as long as the node gets the root object right, the rest of the data is good to go
<jbenet>
Yep perfect
<jbenet>
We're in agreement :)
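A small sketch of why content addressing makes this work: block keys are derived from the data, so every write to the shared store is a create (which S3 makes read-after-write consistent) and nothing is ever mutated in place; only the root reference held locally is mutable. The objectStore interface and memStore below are illustrative stand-ins, not the real S3 client or go-ipfs datastore API:

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

type objectStore interface {
	Put(key string, data []byte) error // create-only in this model
	Get(key string) ([]byte, error)
}

type memStore map[string][]byte

func (m memStore) Put(k string, d []byte) error { m[k] = d; return nil }
func (m memStore) Get(k string) ([]byte, error) {
	d, ok := m[k]
	if !ok {
		return nil, errors.New("not found")
	}
	return d, nil
}

// putBlock derives the key from the content, so the same data always lands
// at the same key and "overwrites" are no-ops rather than mutations.
func putBlock(s objectStore, data []byte) (string, error) {
	key := fmt.Sprintf("blocks/%x", sha256.Sum256(data))
	return key, s.Put(key, data)
}

func main() {
	s := memStore{}
	key, _ := putBlock(s, []byte("immutable block"))
	got, _ := s.Get(key)
	fmt.Println(key, string(got))
}
```

With that shape, several gateway nodes can share one bucket and cache blocks locally without coordinating writes.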
nsh is now known as EmmyNoether
EmmyNoether is now known as nsh
<whyrusleeping>
Tv`: /pk/<hash> is another nodes public key stored in the dht
<whyrusleeping>
/ipns/<hash> is an ipns record stored in the dht
<whyrusleeping>
both are used
<Tv`>
whyrusleeping: the funny thing is i've only seen one /pk/foo entry ever ;)
<whyrusleeping>
people must not be using ipns all that much yet
<whyrusleeping>
lol
<whyrusleeping>
theyre only stored when someone publishes an ipns entry
<Tv`>
and it happens to hit my dht key, right
<whyrusleeping>
yeap
<Tv`>
and when i said "*nothing*" above, here's the exception: if you use S3 as a journal, and write objects like foo/1 foo/2 etc, and probe, not list, to find the latest, then you can "mutate" data almost safely (documented safe, observed 1/100k non-consistency)
<Tv`>
oh actually nevermind even that 1/100k, that's not for read-after-create
<Tv`>
so S3 "mutate" = write new object with predictable name, delete old one
<Tv`>
init time could list bucket to find claimed latest entry, then probe next until it gets a real 404
<Tv`>
kind of a mess ;)
<Tv`>
(and don't even think of using the "S3 compatibility mode" of any non-Ceph product)
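For what it's worth, the journal workaround Tv` describes might be sketched like this, using an in-memory stand-in rather than the AWS SDK; update writes foo/N+1 and deletes foo/N, and latest starts from a possibly-stale listed index and probes forward until a genuine 404:

```go
package main

import "fmt"

type journalStore interface {
	Put(key string, data []byte) error
	Get(key string) ([]byte, bool) // bool=false plays the role of a 404
	Delete(key string)
}

type mem map[string][]byte

func (m mem) Put(k string, d []byte) error { m[k] = d; return nil }
func (m mem) Get(k string) ([]byte, bool)  { d, ok := m[k]; return d, ok }
func (m mem) Delete(k string)              { delete(m, k) }

func key(prefix string, n int) string { return fmt.Sprintf("%s/%d", prefix, n) }

// update performs "mutation" as create-then-delete: write entry n+1,
// remove entry n.
func update(s journalStore, prefix string, n int, data []byte) (int, error) {
	if err := s.Put(key(prefix, n+1), data); err != nil {
		return n, err
	}
	s.Delete(key(prefix, n))
	return n + 1, nil
}

// latest starts from a (possibly stale) index reported by a listing and
// probes forward until the next key is genuinely missing.
func latest(s journalStore, prefix string, listed int) (int, []byte) {
	n, data := listed, []byte(nil)
	if d, ok := s.Get(key(prefix, n)); ok {
		data = d
	}
	for {
		d, ok := s.Get(key(prefix, n+1))
		if !ok {
			return n, data
		}
		n, data = n+1, d
	}
}

func main() {
	s := mem{}
	n := 0
	n, _ = update(s, "foo", n, []byte("v1"))
	n, _ = update(s, "foo", n, []byte("v2"))
	idx, data := latest(s, "foo", 1) // pretend the listing only saw foo/1
	fmt.Println(idx, string(data))   // 2 v2
}
```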
<whyrusleeping>
>.>
<whyrusleeping>
that seems... messy
<whyrusleeping>
but alright
<Tv`>
yeah, S3 is not the right tool for that problem
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
chriscool has quit [Ping timeout: 258 seconds]
<whyrusleeping>
jbenet: re the readme, the parts removed are just restated differently
patcon has quit [Ping timeout: 264 seconds]
<Tv`>
wow the amount of rotten code surrounding the config file is almost funny
<Tv`>
almost
<whyrusleeping>
lol
<whyrusleeping>
more sad than funny, but i laughed
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
<Tv`>
this really is a state file not a config file
notduncansmith_ has joined #ipfs
notduncansmith_ has quit [Read error: Connection reset by peer]
<Tv`>
the parts in it that looked for a moment like configuration, like things with pathnames, *aren't used*
lgierth has quit [Read error: Connection reset by peer]
lgierth has joined #ipfs
<whyrusleeping>
yeah... i want them to be though..
<whyrusleeping>
it would be nice to be able to specify the datastore location
<whyrusleeping>
and we used to be able to
<whyrusleeping>
something happened while i had my back turned on that code
mitchty has quit [Ping timeout: 265 seconds]
<Tv`>
sure sure just saying it's a wild west
<whyrusleeping>
yeap
<whyrusleeping>
sorry :/
lgierth has quit [Read error: Connection reset by peer]
lgierth has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to refactor/bitswap: http://git.io/vT8xc
<ipfsbot>
[go-ipfs] jbenet force-pushed travis-split-go-sharness from 9a02b78 to 7bf18ca: http://git.io/vT8cU
<ipfsbot>
go-ipfs/travis-split-go-sharness 7bf18ca Christian Couder: .travis: split go and sharness tests...
anshukla has joined #ipfs
<aatkin>
Had a question about designing a distributed dropbox app using ipfs. If I want to use android/ios could I have it running a browser accessing webui running on a DO droplet with ipfs daemon running? If I wanted to make it multiuser but with no permissions could I run the daemon and serve webui on DO as sandstorm grains?
gatesvp has quit [Ping timeout: 246 seconds]
<whyrusleeping>
aatkin: you could probably do it that way
<whyrusleeping>
i know not enough about sandstorm, i probably should read up more
<aatkin>
sort of a personal cloud thing. I'd love to run it on the phone, but not currently possible right?
<aatkin>
(ipfs0
pfraze_ has quit [Remote host closed the connection]
<aatkin>
*ipfs
<jbenet>
aatkin: people have put ipfs on phones -- very resource intensive though atm.
<jbenet>
will get better with time.
<aatkin>
wow, ok.
<jbenet>
aatkin: actually could do a special build that delegates all routing to another node
<jbenet>
that's the majority of the resource consumption
<whyrusleeping>
that would be a lot of fun
<jbenet>
so you could have one (or a cluster of) ipfs node out there
<whyrusleeping>
i cant remember who had ipfs on android...
<jbenet>
whyrusleeping: yeah, "delegated routing" makes more sense than "supernode routing" for the security guarantees
<whyrusleeping>
yeah
<whyrusleeping>
agreed
<jbenet>
whyrusleeping: one is delegating routing security to the delegates
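A rough sketch of what such a delegated-routing shim could look like: the constrained node satisfies a routing interface by forwarding lookups to a trusted delegate instead of joining the DHT itself. The Routing interface and the call transport below are hypothetical, not the go-ipfs routing API:

```go
package main

import "fmt"

type Routing interface {
	FindProviders(key string) ([]string, error) // peers that have the content
	Provide(key string) error                   // announce that we have it
}

// delegatedRouting forwards both operations to a remote delegate, reached
// over some RPC/HTTP transport abstracted here as `call`.
type delegatedRouting struct {
	call func(method, key string) ([]string, error)
}

var _ Routing = (*delegatedRouting)(nil)

func (d *delegatedRouting) FindProviders(key string) ([]string, error) {
	return d.call("find-providers", key)
}

func (d *delegatedRouting) Provide(key string) error {
	_, err := d.call("provide", key)
	return err
}

func main() {
	// Fake delegate standing in for a beefy always-on node.
	r := &delegatedRouting{call: func(method, key string) ([]string, error) {
		fmt.Println("delegate handling", method, "for", key)
		return []string{"QmDelegatePeer"}, nil
	}}
	provs, _ := r.FindProviders("QmSomeContentHash")
	fmt.Println("providers:", provs)
}
```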
<jbenet>
we just need to get travis to take our money and give us more builds.
<jbenet>
builders*
<whyrusleeping>
lol, its getting there!
<jbenet>
we should fix all the stupid failures.
<aatkin>
If I have 1 daemon running in the background and spin up and autokill webui sessions as sandstorm grains, do the files in webui stay separate, as there's a separate chroot for the leveldb?
<aatkin>
i.e. different users will come in and spawn webui against a single ipfs daemon
<aatkin>
I should just try it. :)
notduncansmith has joined #ipfs
notduncansmith has quit [Remote host closed the connection]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]