<Xe>
I need to find the id's for it, but I currently have 2 gigabytes of gangnam style remixes in ipfs
<Xe>
it is so much more convenient than figuring out a FTP server
<ReactorScram>
nice
<Xe>
is there a way to run a webapp inside the context of IPFS?
<ReactorScram>
Like with server-side scripting?
<krl>
Xe: this is in the making atm
<ReactorScram>
krl: how's that going to work?
<Xe>
ah cool
<jbenet>
Luzifer: i would really like to be able to run "gobuilder get-all <import-path> <version> output/dir" which would download _all_ archs and put them into output/dir (for local caches)
<jbenet>
Xe you can already do it, see how the webui works
<jbenet>
whyrusleeping if we use a makefile, we can patch in the current git commit into the build process. basically, put in a magic value like var currCommit = "hey-you-replace-me-with-the-version", and then "sed" the binary.
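The magic-value substitution jbenet describes can be sketched as a tiny helper (hypothetical, shown in Node rather than a Makefile + `sed`; the `stampCommit` name and `version.go` filename are illustrative, not from the thread):

```javascript
// Sketch of the magic-value substitution described above: replace the
// placeholder string in the source with the real commit before building.
const MAGIC = 'hey-you-replace-me-with-the-version';

function stampCommit(source, commit) {
  // substitute the first occurrence of the placeholder with the commit hash
  return source.replace(MAGIC, commit);
}

// In a real build you would read version.go, get the commit from something
// like `git rev-parse --short HEAD`, and write the stamped source back out.
```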
<bret>
Oh man running late. Getting on the max in a few
<reit>
is there any way to map files directly to ipfs without caching a blockstore version like bittorrent does?
<reit>
i have over a TB of stuff I'd like to add but don't really want to expend double the disk space storing multiple copies
<ogd>
jbenet: yea if i wrote it today i would just require('through2') and return a through instance
<jbenet>
ogd: take a look at this and lmk what you think. we're trying to pass a dataurl with an image and turn it into a legit stream (or a vinyl file)
<jbenet>
ogd: we found your filereader-stream and seemed to be what we wanted, is there another thing?
<ogd>
jbenet: not that i know of
<ogd>
jbenet: but if its a dataurl you have it in memory right
<ogd>
jbenet: so you can just use like require('from2') and emit a single buffer
<jbenet>
ogd yeah. so given a FileReader()-- how do i grab the buffer?
<jbenet>
maybe readAsBinaryString()
<ogd>
i dont think you even need to do a FileReader you can just create a Blob from the dataUrl
<ogd>
but i could be wrong
<ogd>
jbenet: but you want readAsArrayBuffer
<ogd>
jbenet: and then you can do new Buffer(new Uint8Array(arraybuffer))
<jbenet>
ogd: ahh and from there, how do i get a proper Buffer
<ogd>
because Buffer browserify is actually a monkeypatched Uint8Array
<ogd>
so their instanceof fails
void_ has quit [Quit: ChatZilla 0.9.91.1 [Firefox 39.0/20150629114848]]
<ogd>
i actually have no idea why that file exists
dignifiedquire has joined #ipfs
eternaleye has joined #ipfs
<jbenet>
ogd thanks! contents: streamifier.createReadStream(buffer) fixed it in the end
<eternaleye>
Huh, I just found IPFS, and I find myself wondering if there might be a neat way of integrating it with HIPv2(RFC 7401) at the HI/NodeID/peer-address level.
<eternaleye>
Specifically, so long as the IPFS NodeID public key is of a type that is supported in HIP, then a HIT ORCHID can also be generated from it, and a /hit/ peer address namespace could Just Work
<eternaleye>
And that'd provide some neat multihoming/mobility/NAT-bypass stuff "for free", along with a nice keying protocol for an IPsec ESP transport
Tv` has quit [Quit: Connection closed for inactivity]
atomotic has joined #ipfs
<cryptix>
hej everybody
semidreamless has quit [Quit: Connection closed for inactivity]
<noffle>
I can do a quick PR, but it'll have to be post-work-hours for me
<whyrusleeping>
noffle: mind just filing an issue for me? I can get to it :)
<whyrusleeping>
(unless you really want to PR it, then by all means go ahead and do so)
cdata has quit [Ping timeout: 244 seconds]
<noffle>
yup, I can file something
<noffle>
whyrusleeping: hm. I'm getting "panic: multihash length inconsistent: &{4 180 [242 122 13]}" when I try to run the client. The server is outputting "<peer.ID XoPY8C>" -- are peer ids that short?
<daviddias>
whyrusleeping: found an article explaining all the craziness about ECONNRESET in the different node versions
<daviddias>
I know your issue was in Go, but it is a very good reference
<jbenet>
sprintbot: I'm fixing a ton of bugs i found hacking with ward.
therealplato1 has joined #ipfs
therealplato has quit [Read error: Connection reset by peer]
<ei-slackbot-ipfs>
<zramsay> what's the deal with the error "context deadline exceeded"
<ei-slackbot-ipfs>
<zramsay> seems to come and go as it pleases
<jbenet>
zramsay: it means a timeout. the --timeout command whyrusleeping added should allow you to try resolving for a long time. it may mean the content is not available or is hard to reach.
<alu>
I think a decentralized bitcoin marketplace would want to be ephemeral.
<daviddias>
If attending.io page for the IPFS meetup tonight happens to be down for some reason, we also have a Meetup.com - http://www.meetup.com/Seattle-IPFS-Meetup/events/224077819/ - as backup. Make sure to RSVP so we can figure out resources :)
<jbenet>
hey thefinn93 d6e tperson whyrusleeping -- could you invite other people in the area that would be interested to come? (I dont know people around here)
<thefinn93>
sure
<thefinn93>
i guess i could mail the seattle meshnet list
<jbenet>
+1
<whyrusleeping>
jbenet: yeap, i've been inviting people i know
rschulman__ has joined #ipfs
<whyrusleeping>
although a good chunk of my tech friends are vacationing right now
<whyrusleeping>
apparently caves in vietnam are more interesting >.>
<thefinn93>
jbenet: eh, its a bit late to put it on the mailing list
<thefinn93>
if you want to tho
<thefinn93>
seattle@lists.projectmesh.net
<jbenet>
daviddias want to send an email (since you wrote the other ones) \o
<daviddias>
sure :)
<daviddias>
on it
<whyrusleeping>
daviddias: <3
rschulman___ has joined #ipfs
rschulman__ has quit [Ping timeout: 265 seconds]
rschulman___ is now known as rschulman__
domanic has joined #ipfs
<d6e>
yeah, sure
Encrypt has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<daviddias>
email sent out :)
whidgle has joined #ipfs
tilgovi has quit [Ping timeout: 244 seconds]
kalmi has joined #ipfs
<thefinn93>
put through
hellertime has quit [Ping timeout: 265 seconds]
hellertime has joined #ipfs
<whyrusleeping>
jbenet: i think 'ipfs get' is broken...
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
dignifiedquire has quit [Quit: dignifiedquire]
emery has quit [Disconnected by services]
<jbenet>
whyrusleeping: oh uh? how come?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping>
it will randomly exit without error or getting all the data
ehmry has joined #ipfs
tilgovi has joined #ipfs
hellertime1 has joined #ipfs
hellertime has quit [Read error: Connection reset by peer]
<jbenet>
whyrusleeping: what! that's really, really bad. can you reproduce it? what will cause it to exit? is it silenced errors somewhere?
<whyrusleeping>
i can reproduce it, but not super reliably
<whyrusleeping>
like, i was able to reproduce it three times, but the fourth time its just hanging waiting for data
<whyrusleeping>
elon, save us from mundane travel! build us your hyperloop!
<whyrusleeping>
jbenet: that go struct isnt really that useful at all...
<jbenet>
given how json, cbor, protobuf, and other encoders work (with reflect) we should be able to do a "dag.Unmarshal(buf, &commit)" that just works.
<whyrusleeping>
i would want real types
<jbenet>
yeah take that up with rob pike, go doesnt do generics
<whyrusleeping>
one sec, let me write something for you
<jbenet>
and, those are links.
<jbenet>
they're not the object. you should be able to get the raw node with: commit.Author.Node(), and unmarshal with either: commit.Author.Node().Unmarshal(&author) or commit.Author.Unmarshal(&author) as a shortcut.
<jbenet>
whyrusleeping: note that this is like a pb struct, remember that right now we hide all our pb structs and copy over everything. this wouldnt do that.
<jbenet>
we'd unmarshal directly into the proper struct.
<jbenet>
(also, you dont want to do that necessarily, because unmarshalling one node would unmarshal the entire dag in memory. imagine doing that to a root directory.)
<jbenet>
google's python datastore api did this dynamically. grabbing the object would lazily load it. so we could hide that behind functions, but it gets really really annoying to do in go without codegen.
<whyrusleeping>
we're going to need codegen anyways
<jbenet>
not necessarily, this can work with a generic cbor decoder.
<jbenet>
i guess protobuf does that too
<jbenet>
if you pass in the schema.
<jbenet>
whyrusleeping: how would you deal with unmarshalling the root of an enormous dag in your case
<whyrusleeping>
i see the value in using the links
<whyrusleeping>
we would just have getters for accessing everything
bedeho has joined #ipfs
www1 has joined #ipfs
www has quit [Ping timeout: 240 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
cdata has joined #ipfs
<lgierth>
anybody know a good linux tool for ascii graphs?
<whyrusleeping>
in #3, youre dropping the size from links?
<whyrusleeping>
i guess thats okay, we didnt really use it
<jbenet>
whyrusleeping: in this example yes. we dont have to, but i think it was krl that pointed out we should either have more things like size (tree height, node count, etc) or not.
<jbenet>
i do like having the size, i'm not sure.
<jbenet>
so one "not-breaking-anything" way to do things is to wrap this whole cbor thing into the Data segment of a protobuf object, pull out the links and construct our protobuf links table
<jbenet>
that duplicates data
<whyrusleeping>
yeap
<whyrusleeping>
it doesnt really duplicate data
<jbenet>
blowing up the storage requirement, unless we do clever things