<alu>
kyledrake DUDE you met my friend darkengine at the same conference
<lgierth>
and jupiter's using its own gateway upstream again
<alu>
this is crazy lol
<alu>
Cuz we were just talking about you and neocities / bitcoinjs for the metaverse
<alu>
and then darkengine is like "Yo I just met the neocities guy because of that lain site fauxx made"
<alu>
and I'm LIKE WHAT
<donpdonp>
i started the daemon. visiting localhost:8080 says nothing but "404 page not found"
<donpdonp>
(v0.3.5)
<donpdonp>
hmm the webui is on 5001. odd
<lgierth>
donpdonp: :8080 is the gateway
<donpdonp>
thats confusing as hell :) 8080 always means 'alternative port for web interface'
tilgovi has quit [Ping timeout: 256 seconds]
<donpdonp>
instead the 'API' port is the web interface. seems backwards but at least I know now :)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
domanic has joined #ipfs
donpdonp has left #ipfs ["WeeChat 0.4.1"]
<jbenet>
lgierth: lol I knew that would cause a problem eventually.
<jbenet>
:(
<lgierth>
oh it's fine :)
<lgierth>
i confused ansible_hostname and inventory_hostname
<lgierth>
the pieces are coming together
dread-alexandr-1 has joined #ipfs
<kyledrake>
alu lol
<kyledrake>
alu yeah his Docker talk was great.
<alu>
the internet felt a lil smaller tonight
<alu>
darkengine is awesome :D
dread-alexandria has quit [Ping timeout: 265 seconds]
<alu>
I didn't know that lain page was the most popular
<alu>
I've always loved it <3
<alu>
enjoy tho, we were talkin about you in VR when he posted about meeting the neocities guy
<alu>
cuz neocities is sorta similar to vrsites, which is like neocities for 3D webpages in Janus
<alu>
but i could do just the same on neocities which ill demonstrate later lol, gotta finish scraping a site
<alu>
oh yeah was also discussing bitcoin integration on server side for making transactions in virtual reality
<alu>
fuck thats so cool
MatrixBridge has quit [Ping timeout: 265 seconds]
inconshreveable has quit [Remote host closed the connection]
chriscool has joined #ipfs
Blame has quit [Remote host closed the connection]
inconshreveable has joined #ipfs
Blame has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
donpdonp has joined #ipfs
pfraze has joined #ipfs
www1 has quit [Ping timeout: 265 seconds]
nessence has quit [Remote host closed the connection]
<wking>
jbenet: runc example in /ipfs/QmRR3uDfh3QPxxhngH9FPynfPXyAyVoShimheojrhxHaCe
<wking>
I've put a runc binary under bin/, but it's just a build of the current runc master, so feel free to compile your own or replace any of the other binary bits with stuff you've compiled yourself ;)
domanic has quit [Ping timeout: 264 seconds]
<wking>
just unpack /ipfs/QmRR3uDfh3QPxxhngH9FPynfPXyAyVoShimheojrhxHaCe to a filesystem that supports modes, 'chmod +x rootfs/bin/* rootfs/lib/*', and ./rootfs/bin/runc
<wking>
much nicer than messing with the Docker registry ;)
nessence has joined #ipfs
hellertime has quit [Quit: Leaving.]
zabirauf has joined #ipfs
reit has joined #ipfs
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
chriscool has quit [Ping timeout: 256 seconds]
reit has quit [Ping timeout: 256 seconds]
sharky has quit [Ping timeout: 264 seconds]
dandroid has joined #ipfs
sharky has joined #ipfs
pfraze has quit [Remote host closed the connection]
zabirauf has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
Tv` has quit [Quit: Connection closed for inactivity]
zabirauf has joined #ipfs
Wallacoloo has joined #ipfs
dandroid has quit [Ping timeout: 246 seconds]
therealplato has quit [Ping timeout: 276 seconds]
Wallacoloo has quit [Quit: Leaving.]
zabirauf has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
zabirauf has joined #ipfs
<kyledrake>
alu You're getting into snow crash territory there. Like VRML style sites?
dread-alexandr-1 has quit [Quit: dread-alexandr-1]
domanic has joined #ipfs
<pguth2>
donpdonp Yeah, that port usage confused me a bit too in the beginning.
dread-alexandria has joined #ipfs
mildred has joined #ipfs
<domanic>
jbenet, hey
kbala has quit [Quit: Connection closed for inactivity]
therealplato has joined #ipfs
<jbenet>
domanic: heyo
<domanic>
oh, hey I forgot what I wanted to talk to you about... reading your paper though
<kyledrake>
alu if you have a URL it would be neat to read more about this
* jbenet
checkin: lots of CR
domanic has joined #ipfs
ed_ is now known as ed
ed is now known as Guest66137
<whyrusleeping>
sprint checkin pushing through PRs for batching and GC changes.
<whyrusleeping>
I'd like some review from Tv` on the putMany method i wrote on flatfs
<ipfsbot>
[webui] jbenet pushed 2 new commits to master: http://git.io/vt8eq
<ipfsbot>
webui/master c866c52 Sean Lang: fix git clone link
<ipfsbot>
webui/master e6b02e9 Juan Batiz-Benet: Merge pull request #66 from slang800/patch-1...
<daviddias>
checkin: On the DHT Spec front, submitted first PR, it's been a good process. As for spdy-transport tests, working on it; also got some feedback from Indutny, but need to talk more: spdy-transport shouldn't have HTTP semantics (request/reply with mandatory method and path headers)
* daviddias
goes grab some food, bbiab
<jbenet>
daviddias: https://github.com/hashicorp/yamux/issues/7#issuecomment-115004649 <--- another muxing library (go) based on SPDY, but which departed from spdy. lib author points out SPDY and HTTP2 had lots of core HTTP semantics baked in very close to the muxer
<jbenet>
not clear if this is still the case
* whyrusleeping
is nervous about that
nicknikolov has joined #ipfs
<daviddias>
what's your feeling after working with spdystream? Did it feel like it was missing something? SPDY framing layer supposedly just talks about headers as a generic thing, could be really any KeyValue pairs and the behaviour of the stream wouldn't change
<whyrusleeping>
yeah, i guess... if we just have the generic framing layer its probably fine
<whyrusleeping>
but theres so much in spdy thats 'made for http'
<whyrusleeping>
feels like there needs to be a standard for muxing or something
nsh has quit [Excess Flood]
nsh has joined #ipfs
Encrypt has joined #ipfs
<jbenet>
daviddias: realistically, how soon do you think we could have a spdy-transport that interops with go-spdy?
<jbenet>
go-spdystream* ?
<jbenet>
(like are these problems we're running into one-offs and it feels very close, or do you feel like there will be a bunch of other problems?)
<daviddias>
Can I tell you that tomorrow or Sunday? Indutny's code is really close to having it all implemented, of course with his own interpretation of some details. Right now if I set the headers from the go side, the streams open and the frames are parsed, however it says data is "undefined"; if this happens to be a quick fix, it probably means we are close (therefore asking for an extra day :))
<jbenet>
yeah fair enough :)
<whyrusleeping>
jbenet: so, the coalescing approach to batching blockstore writes works better for bitswap, but working that into the add process was looking to be really obnoxious
<whyrusleeping>
i'm interested in what you think of the approach
<whyrusleeping>
can always undo it and write the timing coalescer thing, but i'm always very worried about making things rely on timing
tilgovi has quit [Ping timeout: 264 seconds]
<jbenet>
whyrusleeping: understood about timing being a concern. i think depends on the calling code + perf. my guess is the time coalescer thing will give much better perf for tons of writes. what did the different calling codes look like?
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed unused-cleanup from e1e5a46 to 719b3ac: http://git.io/vt8On
<whyrusleeping>
jbenet: i ended up having some *really* weird synchronization stuff to make the adds happen concurrently and still keep them ordered to be added into their parent node
<whyrusleeping>
the perf improvement for that PR is very nearly the same perf improvement i got in benchmarking just the datastore itself
<whyrusleeping>
jbenet: also what was the flag to disable reuseport?
<jbenet>
"benchmarking just the datastore itself" that depends on how that benchmark was designed
<jbenet>
"i ended up having some *really* weird synchronization stuff to make the adds happen concurrently and still keep them ordered to be added into their parent node" what do you mean? have an example? why can't you just open (limited # of) goroutines? they dont need to be ordered necessarily. the parents will just block until the children are done.
<whyrusleeping>
yeah, but each of the nodes we are adding in parallel needs to be inserted into its parents link array after its been added
<whyrusleeping>
its doable, but we wont see the same perf improvements as the approach i pushed
<whyrusleeping>
the benchmark i did for the datastore was designed to mimic datastore usage during an add operation, i.e. a bunch of 256k blocks
<jbenet>
whyrusleeping: maybe add the benchmark?
<whyrusleeping>
to go-datastore?
<jbenet>
whyrusleeping: my concern is that the timing approach allows a large number of consecutive writes to all come in under the same batch, possibly hundreds, where your approach forces the user to try and batch things manually and has to deal with coming up with a good batching design
<whyrusleeping>
no, mine doesnt
<whyrusleeping>
and the timing approach specifically hampers 'a large number of consecutive writes' thats what its slowest at
chriscool has quit [Ping timeout: 265 seconds]
<spikebike>
linux has various ways to set algorithm/scheduler and high/low watermarks to allow tuning for various workloads
<spikebike>
databases often allow batched inserts to delay indexing until done
<jbenet>
i want to make that reproducible, and nc _should_ work :/
<jbenet>
oh i think i know what's up. it may be getting crap stdin
<lgierth>
jbenet: regarding the test panic about the duplicate metric collector, there's another way of registering them, which is basically register-unless-exists
<lgierth>
that might bite us if we want to test individual metrics
<lgierth>
one could unregister the collector though, to have a clean slate for the test
<lgierth>
yeah that's fine
<whyrusleeping>
jbenet: re 1399, no i hacked at it for a little bit but it passes locally
<whyrusleeping>
i couldnt repro the failure
<sugarpuff>
jbenet or anyone else who knows: where can i find docs on the pub/sub mechanism? am interested in real-time message delivery applications
<jbenet>
sugarpuff: pub/sub's not built yet, sorry. we need to get IPNS really robust first
<sugarpuff>
jbenet: thanks for the link! and yeah, even if pub/sub doesn't exist yet, is there any info about it anywhere i can read?
<jbenet>
and bundle nsq in it or something
<jbenet>
sugarpuff not at the moment sorry, it's been discussed a few times in the irc channel, github, and in person
<jbenet>
so we have a set of designs in mind, but nothing hyper concrete. mostly because pub/sub isn't that hard, it's just about the set of properties you want to provide
<sugarpuff>
jbenet: gotcha (sorta), well, i look forward to learning more about it whenever there's something out there to read. ^_^
<sugarpuff>
jbenet: thanks for your help!
kbala has joined #ipfs
<jbenet>
sugarpuff: also thanks for the notes on twitter, i should probably respond in there. that guy seems to not get it at all / didn't even look into either protocol
<jbenet>
notes/help*
<sugarpuff>
jbenet: it is a bit surprising. Steve's a very smart guy
<jbenet>
i think most times very smart + knowledgable people tend to look at things a certain way and can be surprisingly less open than people who know less.
<sugarpuff>
s/he'd asked me/I saw his tweet and volunteered :P/
<sugarpuff>
jbenet: definitely agree there. that's the argument used by some Zen folk for cultivating "Beginner's Mind"
<jbenet>
whyrusleeping: i think something's happening with stdin to nc. nc works manually but not when i pass in anything (the other side closes the connection immediately).
<jbenet>
whyrusleeping: how do we pass _nothing_ to stdin? -- not /dev/null (that's EOF). _nothing_
<prosodyContext>
sugarpuff: i just x-posted that comment to #matrix
<jbenet>
hey prosodyContext: are you one of the matrix people?
<prosodyContext>
nah i wish =))
<sugarpuff>
prosodyContext: cool (which comment?). and yeah, #matrix is relevant here
<lgierth>
jbenet: maybe echo -n | nc ?
<lgierth>
not sure -n is portable, that's gnu coreutils
<prosodyContext>
sugarpuff: load https://matrix.org/beta/#/room/#matrix:matrix.org for th persistent loggable enmfo so i dunnot need 2 repeat ( <3 ) )) ) (/join matrix.org/beta/#/room/#freenode_IPFS:matrix.org while ur at it :)
<jbenet>
lgierth: thanks-- that still gives an EOF. maybe `nc -d`
<sugarpuff>
prosodyContext: +1 thanks, and yeah, i just joined #matrix
<kyledrake>
alu if there's any way I can help w/Neocities, just let me know.
<sugarpuff>
jbenet: "IPFS is a p2p, well-layered _transport_ {protocol, toolchain} for arbitrary merkle trees." <— can't it also be non-merkle trees, i.e. just hash -> data mapping?
stevedekorte has joined #ipfs
<jbenet>
sugarpuff: "hash -> data mapping" is a merkle tree of two nodes.
<jbenet>
"merkle dag"
<sugarpuff>
jbenet: i don't think so... isn't a merkle tree composed of hashes *only*?
<sugarpuff>
i.e. i'm wondering whether it's accurate to describe IPFS as a "hash -> data mapping"
<sugarpuff>
jbenet: whatever happened to twitter being bad for convo? stevedekorte is here too :)
<jbenet>
ah great! -- i was answering open threads
<jbenet>
hey stevedekorte o/
<sugarpuff>
jbenet: (did twitter not show you my replies to those tweets?)
<jbenet>
sugarpuff: the version control community uses "merkle tree" to mean any merkelized tree/dag. which i agree causes confusion.
<jbenet>
kyledrake: great thank you!
<jbenet>
stevedekorte: if you have any questions, please shoot. otherwise, you may want to read more. sorry, there's _a lot_ of discussion because there is a lot to address.
<sugarpuff>
jbenet: thanks for that link, addresses my question
<jbenet>
IPFS DRAFT 3 is at http://static.benet.ai/t/ipfs.pdf -- this is outdated, and i will spend some time writing DRAFT 4 in the coming months. (between code vs writing, i think code is more important right now)
<jbenet>
stevedekorte: if the dtn.is talks were out, i'd show you the video from that one. i think it does the best job at explaining the goals of the protocol, the approaches we take, and the problems we're solving.
<jbenet>
hij1nx: when will dtn.is videos be out?
<stevedekorte>
jbenet: did you catch the problem I’d like to solve?
<jbenet>
stevedekorte: yes, I did. You can build this very easily on top of IPFS, and we already plan to do something like it.
<jbenet>
stevedekorte: I'll be in and out for next hour(s), in a conversation
<stevedekorte>
what are the incentives for servers to host 30 billion messages (typical IM traffic) a day?
<stevedekorte>
that is, how do they pay their EC2 bills, etc
<jbenet>
stevedekorte: ipfs is a transport. If you want to _hire_ peers to back things up that's what Filecoin is for. Direct 1:1 incentives tend not to work when production/consumption isn't equal between all peers. (Hence, a currency as a medium of exchange for providing and using backup service)
<jbenet>
(In the simplest sense, "filecoin is a pinning service" for ipfs. But it is its own protocol because it doesn't require ipfs)
<stevedekorte>
I guess I don’t see a relay host’s motivation for using IPFS. How would it be cheaper and/or faster than using their own, say, EC2 resources?
<sugarpuff>
stevedekorte: you could use IPFS with an EC2 instance, but the point is that you wouldn't be married to it if you used IPFS
stevedekorte_ has joined #ipfs
<stevedekorte_>
there are standards for containers now, right?
stevedekorte has quit [Ping timeout: 276 seconds]
stevedekorte_ is now known as stevedekorte
<sugarpuff>
stevedekorte: yeah. (did you see my reply btw?)
<sugarpuff>
(i saw that you had dropped out)
<stevedekorte>
which one?
<sugarpuff>
"stevedekorte: you could use IPFS with an EC2 instance, but the point is that you wouldn't be married to it if you used IPFS"
<stevedekorte>
yes, that is what I was replying to
<sugarpuff>
ah ok
<stevedekorte>
trying to understand how it offers net value for this use case, not clear so far
<kyledrake>
whyrusleeping got a netscan warning. Does this look right to you? http://pastie.org/10261010
<kyledrake>
I'm not sure it's working
<sugarpuff>
stevedekorte: it's valuable to be able to get your data given just a small pointer to it
<stevedekorte>
sugarpuff: you mean a namespace?
<sugarpuff>
stevedekorte: i mean a hash
nessence has quit [Remote host closed the connection]
<stevedekorte>
I guess I don’t understand the incentive system for maintaining the DHT. What if someone wrote a script to dump terabytes of garbage into it?
<stevedekorte>
(per day)
<stevedekorte>
how does the system distinguish what’s worth keeping from garbage? what prevents the “good” stuff from being wiped out with the bad if there is some garbage collection mechanism?
<stevedekorte>
I apologize if these are naive questions
<spikebike>
I thought the DHT was just for finding peer info
<sugarpuff>
stevedekorte: there is no DHT. There is EC2.
<sugarpuff>
forget about the DHT
nessence has joined #ipfs
<sugarpuff>
stevedekorte: you take whatever data you have, put it on the EC2 in whatever way you want and just make sure you can return it when given its hash, and boom, you've created an IPFS router
nessence has quit [Remote host closed the connection]
<spikebike>
stevedekorte: read your post, my 2 main concerns are that people won't want to connect their p2p client to a bitcoin wallet (thus hindering adoption), and that bitcoin transactions are cpu-intensive/slow
<spikebike>
seems kinda overkill for things like chat, im, email, and related messaging which are so cpu/bandwidth efficient that it seems silly to track
<spikebike>
imagine having to connect bitcoin to your irc client for it to work.