<jbenet>
**shrug** not a big deal there. i'd go for whatever makes the calling code easier.
<jbenet>
try writing out a few examples
<whyrusleeping>
good idea
jibber11 has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<Tv`>
whyrusleeping: why must transaction get be async?
therealplato has quit [Ping timeout: 255 seconds]
therealplato has joined #ipfs
<whyrusleeping>
Tv`: the workflow with the transaction is:
<whyrusleeping>
t := dstore.NewTransaction()
inconshreveable has joined #ipfs
<whyrusleeping>
t.Get(k, cb)
<whyrusleeping>
t.Commit()
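(To make the proposed workflow concrete: a minimal sketch of what such a batched-get transaction could look like in Go. The Key, Value, and Transaction types here are illustrative stand-ins, not the real go-datastore API; the point is that Get only queues work and Commit is where a backend could coalesce the fetches.)

```go
package main

import "fmt"

// Illustrative stand-ins, not the real go-datastore API.
type Key string
type Value []byte

// Transaction queues gets; nothing is fetched until Commit.
type Transaction struct {
	pending []func()
}

func NewTransaction() *Transaction { return &Transaction{} }

// Get registers a key and a callback to run when the value arrives.
func (t *Transaction) Get(k Key, cb func(Value, error)) {
	t.pending = append(t.pending, func() {
		// A real backend would coalesce or parallelize the fetches here.
		cb(Value("stub value for "+string(k)), nil)
	})
}

// Commit executes all queued gets in one go.
func (t *Transaction) Commit() {
	for _, f := range t.pending {
		f()
	}
	t.pending = nil
}

func main() {
	t := NewTransaction()
	t.Get("a", func(v Value, err error) { fmt.Println(string(v), err) })
	t.Get("b", func(v Value, err error) { fmt.Println(string(v), err) })
	t.Commit()
}
```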
inconshr_ has joined #ipfs
<Tv`>
notsurewhyfry.gif
<whyrusleeping>
Tv`: alternative being...?
<whyrusleeping>
the point is you do multiple gets at a time
<whyrusleeping>
like, if youre getting values from s3
<Tv`>
...why?
<Tv`>
the underlying api doesn't implement that
<Tv`>
neither s3 nor flatfs have "multiple get" in any useful sense
<whyrusleeping>
well then i have no idea why i'm doing this
<Tv`>
batching makes sense if/when it can combine the write syncs
<whyrusleeping>
my only real usecase was s3 latency avoidance
<Tv`>
that's solved with just concurrent http gets
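(Tv`'s alternative, sketched: hide the S3 round-trip latency with plain concurrent GETs rather than a batch API. The URLs below are placeholders.)

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	urls := []string{ // placeholder object URLs
		"https://example.com/block/a",
		"https://example.com/block/b",
		"https://example.com/block/c",
	}
	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) { // one in-flight GET per key
			defer wg.Done()
			resp, err := http.Get(u)
			if err != nil {
				fmt.Println(u, err)
				return
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			fmt.Println(u, len(body), "bytes")
		}(u)
	}
	wg.Wait()
}
```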
inconshreveable has quit [Ping timeout: 248 seconds]
<jbenet>
yeah i dont think we need to implement anything special for batch gets anywhere yet. if the get UX is annoying, then maybe punt entirely.
therealplato has quit [Ping timeout: 246 seconds]
therealplato has joined #ipfs
hellertime has joined #ipfs
jibber11 has joined #ipfs
jibber11 has quit [Client Quit]
<mafintosh>
jbenet: man multiplexing is *hard* :)
Wallacoloo has joined #ipfs
<jbenet>
mafintosh lol, what did you run into now?
dread-alexandria has quit [Quit: dread-alexandria]
therealplato has quit [Ping timeout: 250 seconds]
pfraze has joined #ipfs
nessence_ has joined #ipfs
www1 has quit [Ping timeout: 272 seconds]
hpk has quit [Ping timeout: 264 seconds]
tilgovi has joined #ipfs
hpk has joined #ipfs
tilgovi has quit [Ping timeout: 248 seconds]
therealplato has joined #ipfs
inconshreveable has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
inconshreveable has joined #ipfs
inconshr_ has quit [Ping timeout: 248 seconds]
tilgovi has joined #ipfs
hellertime has quit [Quit: Leaving.]
tilgovi has quit [Remote host closed the connection]
tilgovi has joined #ipfs
dandroid has joined #ipfs
Wallacoloo has quit [Ping timeout: 248 seconds]
dandroid has quit [Ping timeout: 246 seconds]
dandroid has joined #ipfs
<dandroid>
hey guys, dan here. if i want to use a custom hash function for ipfs objects (sha256(sha256(obj)), which is what bitcoin uses), what do i need to change/configure?
<dandroid>
it's ok actually i found it
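(For reference, the double hash dandroid is asking about, sha256(sha256(obj)) as bitcoin uses, is a few lines of Go:)

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// doubleSHA256 hashes the input twice, Bitcoin-style.
func doubleSHA256(obj []byte) [32]byte {
	first := sha256.Sum256(obj)
	return sha256.Sum256(first[:])
}

func main() {
	fmt.Printf("%x\n", doubleSHA256([]byte("hello")))
}
```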
reit has quit [Read error: Connection reset by peer]
jibber11 has joined #ipfs
<wking>
so you don't have racy stuff like (using a filesystem example): open(path) fails because it's a symlink, so you lstat(path) to see what's there. In the filesystem case the object at path could change between the open() and lstat() calls, but with a transactional store you don't have to worry about that
<wking>
not a big deal for content-addressable data ;)
<dandroid>
yeap ty
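(The race wking describes, sketched in Go: a check-then-use sequence on a mutable filesystem, here lstat then open, the mirror of the example above, leaves a window where the path can be swapped for a symlink. The helper is illustrative only.)

```go
package main

import (
	"fmt"
	"os"
)

// openNoSymlink tries to reject symlinks, but is racy:
// the path can be replaced between the Lstat and the Open.
func openNoSymlink(path string) (*os.File, error) {
	fi, err := os.Lstat(path)
	if err != nil {
		return nil, err
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		return nil, fmt.Errorf("%s is a symlink", path)
	}
	// TOCTOU window: an attacker can swap path for a symlink here.
	return os.Open(path)
}

func main() {
	f, err := openNoSymlink("/tmp/example")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
}
```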
tilgovi has quit [Ping timeout: 248 seconds]
sharky has quit [Ping timeout: 256 seconds]
williamcotton has joined #ipfs
sharky has joined #ipfs
jibber11 has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
dread-alexandria has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
Tv` has quit [Quit: Connection closed for inactivity]
pfraze has quit [Remote host closed the connection]
slothbag has joined #ipfs
<kyledrake>
Noticed it taking a while to complete the directory object for a site that has a lot of changes - dumped the stack in offline mode just to see if there's anything interesting going on https://gist.github.com/kyledrake/c1dcf90a35cfaf86b95a
<kyledrake>
This is a few weeks old, so I was going to pull to the new build and try that as soon as it's got the ip filter changes
<kyledrake>
When I check the queuer it's hanging on a lot of them (ipfs add -r sitename)
<kyledrake>
But it's all the same site, which is a bit odd.
<kyledrake>
The site is one that gets updated every 2-5 ish minutes and it's displaying some machine data for stats
williamcotton has quit [Ping timeout: 256 seconds]
lidel has joined #ipfs
reit has joined #ipfs
pfraze has joined #ipfs
m0ns00n has quit [Quit: Leaving]
nessence_ has quit [Remote host closed the connection]
nessence has joined #ipfs
mildred has quit [Ping timeout: 255 seconds]
mildred has joined #ipfs
williamcotton has joined #ipfs
Tv` has joined #ipfs
mildred has quit [Ping timeout: 272 seconds]
MatrixBridge has quit [Remote host closed the connection]
MatrixBridge has joined #ipfs
<ipfsbot>
[go-ipfs] lgierth opened pull request #1422: Expose metrics via Prometheus (master...metrics) http://git.io/vtOvX
* lgierth
out for a bit
<whyrusleeping>
lgierth: youre not allowed to be out for a bit!
<lgierth>
oh right office hours!
<whyrusleeping>
lol
<whyrusleeping>
i was kidding
<lgierth>
but i need to pee so badly!
<lgierth>
any time yet for the sync?
<whyrusleeping>
sync?
* whyrusleeping
tries to recall what sync hes supposed to have time for
<lgierth>
ppl
<whyrusleeping>
lgierth: jbenet isnt around anyways, go do whatever it is you need to do, lol
<lgierth>
anyhow, i'll be back by 18:00Z
<whyrusleeping>
SGTM
<lgierth>
good good o/
<whyrusleeping>
o/
<daviddias>
I'm around, if needed :)
patcon has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping created test/only-hash (+1 new commit): http://git.io/vtOED
<ipfsbot>
go-ipfs/test/only-hash 53dee3c Jeromy: add test for only-hash to ensure no blocks are added to datastore...
<ipfsbot>
[go-ipfs] whyrusleeping opened pull request #1423: add test for only-hash to ensure no blocks are added to datastore (master...test/only-hash) http://git.io/vtOuY
<whyrusleeping>
wking: ping!
<wking>
hi?
<whyrusleeping>
just wanted to ask how the docker stuff was going
<whyrusleeping>
and if you needed anything from me on that end
<whyrusleeping>
(or if you need anything from me in general)
<wking>
I haven't re-rolled out the baseEmbed stuff yet, so feel free to take a stab at that if you're feeling especially energetic ;)
<whyrusleeping>
haha, the coffee hasnt quite kicked in yet
<whyrusleeping>
i'm really excited to have that working
<wking>
Having the records spec and proper validity comparison would be nice :)
<whyrusleeping>
YES
<whyrusleeping>
jbenet: i'm looking at you right now
<whyrusleeping>
directly
<whyrusleeping>
staring out the window, south
<wking>
Looks like 'ipfs file ls ...' (go-ipfs#1348) is still unmerged...
<wking>
and go-ipfs#1413 is probably ready for initial review
<whyrusleeping>
the git push each thing seems to have passed completely
<whyrusleeping>
i think its good to merge
<whyrusleeping>
wking: so its RFM, yeah?
<whyrusleeping>
(1348 that is)
<wking>
I like how "passed completely" is "circleci failures for each commit and occasional travis-ci/push failures with stuff like p2p/net/mock breakage" ;)
<whyrusleeping>
lol
<whyrusleeping>
we need to get better. but thats as good as we are reasonably going to get right now
<wking>
but yeah, I don't know about any concerns with 1348, and I certainly don't have any about it myself
<whyrusleeping>
SGTM, SHIPPING!
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to master: http://git.io/vtOrV
<wking>
and now the Docker-registry storage backend should run on the respective masters :)
<whyrusleeping>
sweet!
<whyrusleeping>
we should start using that for our docker images on the gateways
<whyrusleeping>
lgierth: o/
<wking>
it would be nice to have a read-only config for the repository, so you could run one read/write repository that published to IPNS, and then as many local, read-only repositories as you have hosts
<wking>
that would put the public-network communication in IPFS, so you wouldn't have to worry about securing it (beyond the usual IPFS stuff, signed records would be nice :)
<wking>
and you wouldn't have to worry about re-fetching shared Merkle objects
<whyrusleeping>
that would be awesome
<whyrusleeping>
what would it take to do that?
<wking>
adding the read-only config to the registry (which would be independent of the IPFS-backend PR)
<wking>
you can do that now if you just have local users promise not to push anything to their local registry
<wking>
and I guess the storage driver would pass back the "I tried to publish to <not-my-key> but failed" error to the pusher if you did try?
<wking>
I haven't looked into it in much detail
<whyrusleeping>
hrm, okay
<whyrusleeping>
it would be nice if the registry code supported readonly, maybe we should file an issue?
<wking>
that would probably be fine, but I expect the "our storage distributes so easily that we want local-only, read-only registries" may be enough of a special-case that we'll be implementing it ourselves ;)
<wking>
No harm in filing an issue though, in case the Docker devs or someone else wants to implement it for us
<whyrusleeping>
yeah, thats fine
<whyrusleeping>
but having their input on the design would be good
<wking>
I imagine they'd merge it if someone else wrote it though
<prosodyContext>
Maybe have Aether automatically synced to IPFS?
husanu3 has joined #ipfs
<prosodyContext>
That would make searching for content easier and better logged.
<whyrusleeping>
prosodyContext: hrm... that might be interesting
<prosodyContext>
id donate immediately, just let me know how we can help
<prosodyContext>
It's urgent imho.
<whyrusleeping>
hrm, aether doesnt appear to scale well though
<whyrusleeping>
it has very good anonymity guarantees
<prosodyContext>
How so? And it might help to have more nodes, but it's unknown.
<whyrusleeping>
aether sends every message to every other node in the network
<prosodyContext>
Ah. I guess that's what Burak is talking about taking time.
<prosodyContext>
If someone requests info be removed from IPFS, other nodes can still have it, correct? I've been meaning to ask without interrupting.
inconshreveable has joined #ipfs
<whyrusleeping>
you cant actually remove info from ipfs
<whyrusleeping>
the best you can do is block it from being on your machines
<prosodyContext>
I thought I've read a github page about conduct or something reporting content?
<prosodyContext>
I'll try to find it.
<whyrusleeping>
yeah, thats referring to us blocking content from the gateway nodes
<prosodyContext>
Ah, okay I guess, ty.
M-_mis has joined #ipfs
<M-_mis>
p.s. ftr, some Aether users have talked about making website hosts/mirrors for Aether, I do not know technically how, but as a way to balance the scale issue I figure?
<prosodyContext>
I'd love IPFS to be that suggestion.
<prosodyContext>
I'll ping the thread. Right now there are no aether://links yet unfortunately, another problem even a simple IPFS upload might solve?
<whyrusleeping>
yeah, i'd love for ipfs to be able to help out
mildred has joined #ipfs
husanu3 has quit [Remote host closed the connection]
<prosodyContext>
(ftr too, Matrix HQ https://matrix.org/beta/#/room/#matrix:matrix.org has talked publicly about using IPFS for their protocol at times, but it lacks a knowledgeable lead, if you have time to idle there. Or @matrix:freenode:net)
<prosodyContext>
It would solve a lot of problems but nobody knows.
<whyrusleeping>
okay, i'll idle over there
<prosodyContext>
MatrixBridge went live here just today with a simple /join #freenode_IPFS:matrix.org command. So you now have persistent logging (maybe backed up to IPFS?:). ttyl =))
<whyrusleeping>
prosodyContext: is it not the same as just /join #matric?
<whyrusleeping>
#matrix*
<prosodyContext>
They are mirrors iirc. It's imperfect, but only can get better.
<prosodyContext>
Oh but you cannot see backlog from freenode? Only through matrix..
<prosodyContext>
And no live activity indicators, if you like knowing.
<whyrusleeping>
ah, interesting
<prosodyContext>
:)
reit has quit [Quit: Leaving]
bengl has quit [Ping timeout: 256 seconds]
inconshreveable has quit [Remote host closed the connection]
<wking>
checkin: Chipping in on github.com/opencontainers (not strictly IPFS-related, but I'm trying to keep them focused on runtime so we can slot in IPFS for the auth/tooling ;)
<whyrusleeping>
checkin! got a basic mark and sweep GC implementation pushed yesterday, still a little WIP, but good enough for some CR. also have a batch datastore interface written up, thats looking for a little CR on the interface before I commit to writing the batch stuff for *all* our datastore implementations
<whyrusleeping>
i pushed the only-hash command, and a test for it this morning
<whyrusleeping>
er, not command, but an option to add
<daviddias>
sprintbot:
<daviddias>
+ Managed to get spdy-transport to parse go-spdystream frames. Found that there is still some http layering over spdy-transport, for example it expects a SYN_STREAM to always have a :path and a :method. Working on spdy parser.js tests, waiting on Indutny to get some feedback on some impl details.
<daviddias>
+ Started writing what is in my head as DHT spec
<whyrusleeping>
daviddias: good stuff! i was gonna take a look at your interop tests in a bit
<whyrusleeping>
just to help me understand the protocol a bit more
<daviddias>
Cool, let me know if you have any ideas/questions, the test I made is pretty simple (just popping a stream and writing something to it). I love how Indutny built the parser: instead of calling some function with a buffer and getting the frame back, you feed a stream that pops frames as soon as it finishes parsing one.
<daviddias>
Makes the testing of it super sweet
<whyrusleeping>
neat. i'll probably ping you at some point :)
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
dread-alexandr-1 has joined #ipfs
dread-alexandria has quit [Ping timeout: 246 seconds]
mildred has quit [Ping timeout: 272 seconds]
patcon has joined #ipfs
m0ns00n has joined #ipfs
mildred has joined #ipfs
domanic has quit [Ping timeout: 250 seconds]
Encrypt_ has joined #ipfs
<whyrusleeping>
Tv`: random question that i think you'll have good input on: what would a filesystem optimized for SSDs look like?
<ipfsbot>
[go-ipfs] chriscool created test-circleci (+2 new commits): http://git.io/vt3xh
<ipfsbot>
go-ipfs/test-circleci 2fafb74 Christian Couder: circleci: temporarily disable go tests...
<ipfsbot>
go-ipfs/test-circleci 9bb95f5 Christian Couder: add debug infos...
<cjb>
whyrusleeping: flash filesystems have existed for decades :) e.g. jffs2
<cjb>
but it's complicated because modern SSDs have a flash controller that sits between you and the NAND, and makes it look and address like a hard disk does
<cjb>
and the firmware on the flash controller is a closed source black box that does whatever it wants to, usually badly
<whyrusleeping>
yeah, i was gonna say that todays SSDs are very different than old flash
<Tv`>
just about everything predating f2fs was designed for old school compact flash
<cjb>
so the question of "what would an ideal flash filesystem look like?" is totally different to "what would an ideal filesystem for a modern SSD running controller type <x> look like?"
<Tv`>
cjb: the f2fs paper goes into detail on that
<Tv`>
i remember reading results of probing the ssd to discover its behavior, and categorizing based on that
<Tv`>
there weren't very many categories -> can pretty safely assume certain base facts
<cjb>
cool
<cjb>
there's nilfs, too
<cjb>
which I think has modern btrfs-y features like cheap snapshots
<cjb>
but the general idea is shared between jffs2/f2fs/nilfs, I think -- try to split the flash into eraseblocks (because you have to erase/write a whole eraseblock at a time) and then append logs into the blocks.
<cjb>
yeah, I remember seeing a tool from arndb called "flashbench" that does benchmarking in order to empirically determine eraseblock size and alignment, for example
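(A toy illustration of the shared log-structured idea cjb describes: buffer writes and flush them only in whole eraseblock-sized units, never rewriting in place. The 128 KiB eraseblock size is an assumption; flashbench exists precisely to measure the real value.)

```go
package main

import "fmt"

const eraseBlockSize = 128 * 1024 // assumed; measure with flashbench

// logWriter appends records into eraseblock-aligned segments,
// only ever writing whole blocks, log-style.
type logWriter struct {
	buf     []byte
	flushed int
}

func (w *logWriter) Append(rec []byte) {
	w.buf = append(w.buf, rec...)
	for len(w.buf) >= eraseBlockSize {
		w.flushBlock(w.buf[:eraseBlockSize])
		w.buf = w.buf[eraseBlockSize:]
	}
}

func (w *logWriter) flushBlock(block []byte) {
	// A real filesystem would program one NAND eraseblock here.
	w.flushed++
	fmt.Printf("flushed eraseblock %d (%d bytes)\n", w.flushed, len(block))
}

func main() {
	w := &logWriter{}
	for i := 0; i < 300; i++ {
		w.Append(make([]byte, 1024)) // 1 KiB records
	}
	fmt.Printf("%d bytes still buffered\n", len(w.buf))
}
```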
Encrypt_ is now known as Encrypt
<ipfsbot>
[go-ipfs] chriscool opened pull request #1424: Test circleci (master...test-circleci) http://git.io/vtsvZ
patcon has quit [Ping timeout: 256 seconds]
therealplato has quit [Quit: Leaving.]
therealplato has joined #ipfs
chriscool has quit [Quit: Leaving.]
therealplato has quit [Changing host]
therealplato has joined #ipfs
hellertime has quit [Ping timeout: 256 seconds]
chriscool has joined #ipfs
<ipfsbot>
[go-ipfs] chriscool force-pushed test-circleci from 9bb95f5 to 775a976: http://git.io/vtsmX
<ipfsbot>
go-ipfs/test-circleci 775a976 Christian Couder: add debug infos...
chriscool has quit [Ping timeout: 246 seconds]
<ipfsbot>
[go-ipfs] chriscool pushed 1 new commit to add-circleci: http://git.io/vtsZY
<ipfsbot>
go-ipfs/add-circleci 9fcf6f9 Christian Couder: t0060: export IPFS_PATH...
inconshreveable has joined #ipfs
chriscool has quit [Ping timeout: 252 seconds]
* spikebike
catches up on flash/filesystem discussion
<ipfsbot>
[go-ipfs] jbenet deleted tk/unixfs-ls at 4acab79: http://git.io/vtsEU
kbala has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
<whyrusleeping>
lgierth: you around?
<whyrusleeping>
well, jbenet lgierth, what do you both think of using wkings docker registry fork to ship images for our gateways?
infinity0 has quit [Ping timeout: 276 seconds]
Encrypt has quit [Quit: Sleeping time!]
<jbenet>
whyrusleeping: yeah im for this
infinity0 has joined #ipfs
<whyrusleeping>
sweet
domanic has joined #ipfs
<whyrusleeping>
although, the problem will be that in order to update the image, we will need a running ipfs daemon...
<whyrusleeping>
hrmmm
<ipfsbot>
[go-ipfs] jbenet pushed 2 new commits to master: http://git.io/vtsa8
<ipfsbot>
go-ipfs/master ac7d25c Juan Batiz-Benet: Merge pull request #1423 from ipfs/test/only-hash...
<ipfsbot>
go-ipfs/master 53dee3c Jeromy: add test for only-hash to ensure no blocks are added to datastore...
tilgovi has joined #ipfs
<jbenet>
haha
<jbenet>
whyrusleeping: i want to make simple, ephemeral-node based `ipget` tooling
<whyrusleeping>
okay, that shouldnt be too hard
<jbenet>
random ports, no api/gateway, just starts an online node at a random port
<whyrusleeping>
want me to write that?
<jbenet>
and it always does this, does not defer to local node. It can _connect_ to a local node and get stuff from there if available.
<jbenet>
(hmmm, well, i guess it could try and use the flatfs, not sure what UX we want here, i can see use cases for both)
patcon has joined #ipfs
<whyrusleeping>
should just use a size limited LRU datastore
<whyrusleeping>
that way it *can* take advantage of deduped blocks
<whyrusleeping>
but doesnt use up too much RAM
<whyrusleeping>
and avoids touching the disk
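(A rough sketch of that size-limited LRU datastore idea: a byte-bounded LRU block store, so an ephemeral node caches deduped blocks in RAM and evicts old ones past a cap. Hand-rolled here for illustration; this is not the real go-datastore interface.)

```go
package main

import (
	"container/list"
	"fmt"
)

// blockLRU is a byte-bounded LRU block store: old blocks are evicted
// once the cap is exceeded, so the node never holds more than maxBytes.
type blockLRU struct {
	maxBytes int
	curBytes int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> element holding *entry
}

type entry struct {
	key   string
	block []byte
}

func newBlockLRU(maxBytes int) *blockLRU {
	return &blockLRU{maxBytes: maxBytes, order: list.New(), items: map[string]*list.Element{}}
}

func (c *blockLRU) Put(key string, block []byte) {
	if el, ok := c.items[key]; ok { // replace existing entry
		c.curBytes -= len(el.Value.(*entry).block)
		c.order.Remove(el)
		delete(c.items, key)
	}
	c.items[key] = c.order.PushFront(&entry{key, block})
	c.curBytes += len(block)
	for c.curBytes > c.maxBytes && c.order.Len() > 0 { // evict oldest
		oldest := c.order.Back()
		e := oldest.Value.(*entry)
		c.order.Remove(oldest)
		delete(c.items, e.key)
		c.curBytes -= len(e.block)
	}
}

func (c *blockLRU) Get(key string) ([]byte, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).block, true
}

func main() {
	c := newBlockLRU(256 * 1024) // e.g. cap the ephemeral node at 256 KiB
	c.Put("QmExample", make([]byte, 1024))
	b, ok := c.Get("QmExample")
	fmt.Println(ok, len(b))
}
```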
<wking>
whyrusleeping: even if you aren't updating the images, the current (12hr?) IPNS record expiration is going to mean the IPFS daemon publishing the namespace used by read-only registries will need to stay running
<whyrusleeping>
wking: true...
<Blame>
would we be able to integrate filecoin at the dht level? generate/pay filecoin to store records?
<jbenet>
Blame: yes, but that's not ideal. filecoin will be too slow (global consensus). for a dht i have a different incentivized protocol in mind.
patcon has quit [Ping timeout: 264 seconds]
inconshreveable has joined #ipfs
<Blame>
id love to hear about that when you get the chance. I'm mulling on ways to do aggressive dht backups (nodes pushing backups rather than clients) but I want ways to do it that are harder to abuse.
<whyrusleeping>
Blame: one way is with adding a price to bandwidth, and having every node keep ledgers of its bandwidth with other nodes
<whyrusleeping>
depending on how much you trust a node, you can settle your ledger with them in short periods of time, or you can wait longer to be more efficient
<whyrusleeping>
so, if you play fairly, theres no total cost to you
<Blame>
what choices do you make based on those ledgers? How does one establish/quantify trust?
<Blame>
the easy way to do backups is to send them to the nodes that will take over that location when you fail.
<whyrusleeping>
trust in that situation would be based on a peers reputation to pay up
<Blame>
You also need to dump backup records onto new nodes when they join, otherwise the space they occupied was essentially erased.
<whyrusleeping>
ah, i see
<whyrusleeping>
hrm, so you'd have to do some different accounting, maybe not charge based on bandwidth
<whyrusleeping>
but i still think having trust defined as 'a given nodes history of paying on time' is a good one
<whyrusleeping>
its basically a credit score
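(The shape of such a per-peer ledger, sketched. All field names and the trust formula are made up for illustration; this is not go-ipfs's actual bitswap ledger.)

```go
package main

import "fmt"

// Ledger tracks bytes exchanged with one peer plus their payment history.
type Ledger struct {
	Peer        string
	BytesSent   uint64 // what they owe us for
	BytesRecv   uint64 // what we owe them for
	SettledOK   int    // times they paid up on time
	SettledLate int    // times they didn't
}

// Balance is the net debt the peer owes us, in bytes.
func (l *Ledger) Balance() int64 {
	return int64(l.BytesSent) - int64(l.BytesRecv)
}

// Trust is a crude credit score: the fraction of on-time settlements.
func (l *Ledger) Trust() float64 {
	total := l.SettledOK + l.SettledLate
	if total == 0 {
		return 0.5 // no history: neutral
	}
	return float64(l.SettledOK) / float64(total)
}

// SettleThreshold: how much unsettled debt we tolerate before demanding
// settlement; trusted peers get longer, more efficient settlement periods.
func (l *Ledger) SettleThreshold() int64 {
	return int64(float64(1<<20) * (1 + 9*l.Trust())) // 1 MiB for strangers, up to 10 MiB
}

func main() {
	l := &Ledger{Peer: "QmPeer", BytesSent: 4 << 20, SettledOK: 8, SettledLate: 2}
	fmt.Println(l.Balance(), l.Trust(), l.SettleThreshold())
}
```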
<richardlitt>
@jbenet Any idea why I would get a different response for the same code, except one is local and one is on the server? Getting a 405 on server, no issue on local.
<Blame>
interestingly, the ipfs kademlia dht has that issue, it just hopes that not enough nodes join to obscure a record before it is re-published.
<whyrusleeping>
Blame: for now, yeah
* whyrusleeping
thinks having pipes in unixfs would be super awesome
<wking>
whyrusleeping: Path to that, building on go-ipfs#1413: add a mode field to our metadata object, and add a flag to 'ipfs add' that inserts that metadata in front of files and/or directories?
<whyrusleeping>
wking: it would have to be something like 'ipfs file mkfifo'
<whyrusleeping>
but yeah, having the metadata would help a lot
<whyrusleeping>
i'm trying to decide between two different strategies
<whyrusleeping>
one is currently easily possible
<whyrusleeping>
the other, is slightly more difficult
<whyrusleeping>
basically, you create this pipe object with some identifier in it
<whyrusleeping>
and then if you try to read from the pipe, you use corenet to listen on `/pipe/<hash of pipe object>`
<whyrusleeping>
and if you try to write to the pipe, you either dial the creator (option 1, possible) and write through the network
<whyrusleeping>
or you find providers of the pipe object and dial a random one of them (or all of them?) and write to it that way
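(The two options, sketched as hypothetical Go interfaces. None of these calls are the real corenet API; they just make the read path and the two write paths concrete.)

```go
package main

import "fmt"

// Host is a hypothetical stand-in for the p2p/corenet layer.
type Host interface {
	Listen(protocol string, handler func(data []byte)) error
	Dial(peer, protocol string, data []byte) error
	FindProviders(hash string) ([]string, error)
}

// readPipe: the reader listens on /pipe/<hash of pipe object>.
func readPipe(h Host, pipeHash string) error {
	return h.Listen("/pipe/"+pipeHash, func(data []byte) {
		fmt.Printf("pipe %s: %d bytes\n", pipeHash, len(data))
	})
}

// writePipeOption1: dial the pipe's creator directly (possible today).
func writePipeOption1(h Host, creator, pipeHash string, data []byte) error {
	return h.Dial(creator, "/pipe/"+pipeHash, data)
}

// writePipeOption2: find providers of the pipe object and dial one of them.
func writePipeOption2(h Host, pipeHash string, data []byte) error {
	provs, err := h.FindProviders(pipeHash)
	if err != nil || len(provs) == 0 {
		return fmt.Errorf("no providers for pipe %s: %v", pipeHash, err)
	}
	return h.Dial(provs[0], "/pipe/"+pipeHash, data)
}

func main() {} // interfaces only; wiring up a real Host is left out
```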
<wking>
ah, you don't just want to store named pipes in IPFS, you want named pipes in IPFS that connect you to other nodes?
<whyrusleeping>
yes!
<whyrusleeping>
i want magic
<wking>
would the metadata include a list of nodes authorized to write to that IP-pipe?
<whyrusleeping>
hrm...
williamcotton has quit [Ping timeout: 264 seconds]
<wking>
I think I like this idea, but it seems pretty shiny, so I think it makes more sense to figure it out without bringing Unix-FS into the picture from the start ;)
* whyrusleeping
sighs
* Blame
is considering suggesting you could use the pubsub feature he just added to UrDHT to do that job...
<wking>
'ipfs fifo new <node-IDs-that-can-publish>...' and 'ipfs fifo cat <fifo-ID>' would let you play around with it more easily
<whyrusleeping>
wking: true
<whyrusleeping>
Blame: hows that?
<whyrusleeping>
i was planning on using our p2p transport layer so you could get some really good throughput
<Blame>
aha, yeah that would not work
<Blame>
at best it would allow for easy discovery for direct connections
williamcotton has joined #ipfs
<whyrusleeping>
yeah, and we have that baked in already
<spikebike>
interesting, I've been pondering how to ask another peer to pin an object the local peer has
sophyn has joined #ipfs
<lgierth>
whyrusleeping wking: yeah we should use the registry on the gateways!
<lgierth>
what's involved in making that happen?
sophyn has left #ipfs ["Ex-Chat"]
<wking>
lgierth: someone needs to run the writable registry, push required images to it, and keep that node online so the IPNS record stays live
<jbenet>
wking we should be able to grab things by /ipfs/<hash> no?
<wking>
then the individual hosts need to run a local IPFS daemon and read-only registry, and the host can pull/launch containers from their local registry
<lgierth>
ok
<lgierth>
i was thinking we push a locally-built ipfs binary to gateways and use it to bootstrap all kinds of stuff
<lgierth>
docker images, apt repos, git repos
<lgierth>
etc.
<wking>
jbenet: yeah, if you'd rather not use IPNS for the read-only nodes. Just use /ipfs/<hash> in your registry-storage ipfs root
inconshreveable has quit [Remote host closed the connection]
<wking>
lgierth: yeah, you would have to do something like that to bootstrap this. Although if you have a local Docker, you could use Docker's public registry to bootstrap the IPFS-backed local registry, and then use *that* to pull/launch your other containers
<wking>
but "host has a local Docker" for that route and "host has a local IPFS" (if you just wanted to run the IPFS-backed registry natively on the host) don't seem too far apart to me
<wking>
so whichever seems easier...
<jbenet>
lgierth: yeah see my comment above about an "ephemeral node-based" `ipget`
<wking>
jbenet: the IPFS-root approach is nice because you don't need to worry about security or expiration for IPNS records, but hopefully we'll have those holes closed soon anyway
<wking>
the drawback is that you need to restart your registry whenever you need access to a new image
<lgierth>
it's just to get around the chicken/egg problem
<lgierth>
once you have a proper install, you don't need it anymore
<lgierth>
that's what i thought for apt repos and the like, at least
<jbenet>
wking wait why?
<jbenet>
wking why restart the registry?
inconshreveable has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
inconshreveable has joined #ipfs
dread-alexandr-1 has quit [Quit: dread-alexandr-1]
<lgierth>
jbenet: to point it to another hash i guess
inconshreveable has quit [Ping timeout: 244 seconds]
inconshreveable has joined #ipfs
<wking>
yeah. If you push a new image to the writable registry, the root Merkle object for its storage tree is going to be different
<wking>
if your read-only registries are serving from an immutable IPFS root, the only way to tell them about new images is to restart them pointing at the new IPFS root object
therealplato has quit [Ping timeout: 264 seconds]
<jbenet>
wking: why can't the registry _look for_ new roots on IPFS?
<jbenet>
If i said something close to "docker run /ipfs/<hash>/foo/bar" i would expect it to be able to find `/ipfs/<hash>`
<wking>
jbenet: that sounds technically possible, but I think you'd have to check through how the Docker client and registry handle parsing container IDs
nessence_ has joined #ipfs
nessence has quit [Read error: Connection reset by peer]
<wking>
They may have lifted a slash limit recently. Maybe we can rework the IPFS storage backend to understand that as a name...
therealplato has joined #ipfs
williamcotton has quit [Quit: quit]
nessence_ has quit [Remote host closed the connection]
ruby32 has quit [Quit: ruby32]
<whyrusleeping>
jbenet: ping, got a question
<jbenet>
whyrusleeping sup
<jbenet>
wking worst case we could have a transform like ipfs--<hash>--foo--bar which is of course not ideal but may work fine.
m0ns00n has quit [Ping timeout: 264 seconds]
<jbenet>
wking: and certainly can do just: `docker run <hash>`
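(That transform as a throwaway helper; the `--` separator is jbenet's suggestion and the helper name is made up, this just shows the flattening.)

```go
package main

import (
	"fmt"
	"strings"
)

// toDockerName flattens an IPFS path into a slash-free image name,
// e.g. /ipfs/<hash>/foo/bar -> ipfs--<hash>--foo--bar.
func toDockerName(ipfsPath string) string {
	return strings.ReplaceAll(strings.Trim(ipfsPath, "/"), "/", "--")
}

func main() {
	fmt.Println(toDockerName("/ipfs/QmExampleHash/foo/bar"))
}
```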
therealplato1 has joined #ipfs
<whyrusleeping>
jbenet: so, i'm working the batching stuff up into the merkledag in go-ipfs
<whyrusleeping>
and i'm wondering what semantics we should use
<whyrusleeping>
if i have a 'transaction' on the dagservice, should each add on the transaction announce that block right as it's added, even though it hasnt reached the datastore yet?
<lgierth>
jbenet: i'll git mv {protocol,ipfs}/infrastructure.git/solarnet before merging the cjdns + secrets.yml PR ok?
<lgierth>
will just plain copy the fs, we can keep the history over in the old repo
<whyrusleeping>
or should i just opt for the simpler route of announcing all at once with the stores
<whyrusleeping>
i guess its easy to change later... probably should just take the easy route for now
therealplato has quit [Ping timeout: 256 seconds]
<jbenet>
lgierth: sgtm
<lgierth>
jbenet: oh, and MIT?
<lgierth>
(i'm fine with MIT)
<jbenet>
lgierth: yeah been using MIT for everything.
notduncansmith has joined #ipfs
<lgierth>
ok that's what i remembered from the weekend in berlin, just wanted to doublecheck :)
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet>
whyrusleeping: easy sgtm? not sure it makes sense to bring the transaction thing all the way up -- maybe there are ways to coalesce under the hood? (like making a Put block for a bit, so that many simultaneous writes within ~500ms get automatically batched)
therealplato has joined #ipfs
<jbenet>
whyrusleeping: makes _sequential_ puts hard, but most of that stuff (like adds of many files) should be parallelized
therealplato1 has quit [Ping timeout: 246 seconds]
<jbenet>
lgierth: yep sgtm
<whyrusleeping>
jbenet: i could do the coalescing...
<whyrusleeping>
that definitely requires less code change in go-ipfs
<jbenet>
that seems like the lowest friction way to do it.
<whyrusleeping>
yeah
<jbenet>
not the highest benefit but the hot paths will take advantage of it
m0ns00n has joined #ipfs
<whyrusleeping>
although, i've looked at doing that before and been shot down because 'what if your computer explodes in those 500ms and you lose data oh god no you cant lose data thats the end of the world'
<whyrusleeping>
or something
<jbenet>
oh
<jbenet>
no make it wait
<jbenet>
like dont externalize the put
<jbenet>
until it commits.
<whyrusleeping>
oh
<whyrusleeping>
no, that wont work at all
<jbenet>
(thats what i meant by sequential things wont work for it)
<whyrusleeping>
i'm trying to make adds faster
<jbenet>
"parallelized" o/
<whyrusleeping>
to make that work nicely we're going to implement the same stuff that would make the batch transactions work inside go-ipfs
<jbenet>
this will suck for anything with a bunch of sequential puts
<jbenet>
mmm i dont think it's as much work / change to the interfaces
<whyrusleeping>
eeeehhhhhhh
<whyrusleeping>
maybe i'm misinterpreting what youre thinking should be parallelized
<whyrusleeping>
separate files? or consecutive blocks within the same file?
<jbenet>
so the add stuff needs to be parallelized anyway because it needs to do hashing and io simultaneously.
m0ns00n has quit [Ping timeout: 256 seconds]
<jbenet>
i think down to the block, because adding one huge file should be as fast as we can get it
<jbenet>
that's different from batches though
<jbenet>
you also dont want to do one batch for the whole thing, that would be really bad. put everything in memory and so on.
<whyrusleeping>
no, i would do the batching per layer with an upper bound on the number of bytes to cache before committing
<whyrusleeping>
so if a transaction add would put the held size over the limit, it would do a flush inside that put call
<jbenet>
so it would barely help / suck for binary trees.
<whyrusleeping>
no, it would help quite a bit
<jbenet>
i think we want to allow whatever the next writes are, no matter where in the tree, to go in the same batch if they can.
<jbenet>
based on when they arrive.
<whyrusleeping>
why?
<whyrusleeping>
(also, thats totally doable)
<whyrusleeping>
and would actually make one of the other changes i need to make easier...
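(One way to implement the coalescing being discussed: a Put that buffers blocks in arrival order, regardless of where they sit in the tree, and flushes them as a single batch when either a byte bound is hit, as whyrusleeping describes, or a short timer fires, per jbenet's ~500ms idea. A sketch with a made-up flush callback rather than any real datastore.)

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type coalescer struct {
	mu       sync.Mutex
	pending  map[string][]byte
	held     int           // bytes currently buffered
	maxBytes int           // flush when a put would exceed this
	maxWait  time.Duration // ...or when the oldest buffered put is this old
	timer    *time.Timer
	flush    func(map[string][]byte) // the real batch write goes here
}

func newCoalescer(maxBytes int, maxWait time.Duration, flush func(map[string][]byte)) *coalescer {
	return &coalescer{pending: map[string][]byte{}, maxBytes: maxBytes, maxWait: maxWait, flush: flush}
}

func (c *coalescer) Put(key string, block []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.held+len(block) > c.maxBytes {
		c.flushLocked() // size bound hit: flush inside the put call
	}
	c.pending[key] = block
	c.held += len(block)
	if c.timer == nil { // first put in this batch starts the clock
		c.timer = time.AfterFunc(c.maxWait, func() {
			c.mu.Lock()
			defer c.mu.Unlock()
			c.flushLocked()
		})
	}
}

func (c *coalescer) flushLocked() {
	if len(c.pending) == 0 {
		return
	}
	c.flush(c.pending)
	c.pending = map[string][]byte{}
	c.held = 0
	if c.timer != nil {
		c.timer.Stop()
		c.timer = nil
	}
}

func main() {
	c := newCoalescer(1<<20, 500*time.Millisecond, func(batch map[string][]byte) {
		fmt.Printf("flushing %d blocks in one batch\n", len(batch))
	})
	for i := 0; i < 100; i++ {
		c.Put(fmt.Sprintf("block-%d", i), make([]byte, 1024))
	}
	time.Sleep(time.Second) // let the timer flush
}
```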
dread-alexandria has joined #ipfs
www1 has joined #ipfs
www has quit [Ping timeout: 246 seconds]