<cryptix>
i see, agreed. maybe we shouldn't use it in the tests, though. it gets chicken-and-egg-ish: 'we don't know what daemon we called' 'let's request the id from http' 'but we just started the daemon with --init and don't know the id'...
<ipfsbot>
[go-ipfs] jbenet deleted refactor/importer at a0aa07e: http://git.io/vkztN
<jbenet>
we need to land all these PRs
<jbenet>
cryptix: which test uses it? we should test that _it_ works, but then use the normal thing. (then again, it technically shouldnt be a problem)
<jbenet>
i think the sharness instrumentation of the config is clunky, mostly because init is a pain. if `ipfs daemon --init` took a config on stdin maybe we could do this easier.
<cryptix>
jbenet: t0060 comes to mind
<cryptix>
init first and daemon then would at least give us the daemon id we are waiting for in pollEndpoint
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
pfraze has joined #ipfs
<cryptix>
jbenet: and I just had the case of daemons sticking around after a sharness suite, causing pollEndpoint to advance too early... :-/ i think whyrusleeping is right but we need to init first then
<jbenet>
cryptix: sharness should be killing them -9 if they dont exit.
<jbenet>
cryptix: if they stick around _after_ there's something very wrong going on.
<jbenet>
cryptix: recommend the following: add the nodeid to /version, and print it out when things error out or whatever.
<jbenet>
cryptix: it definitely sounds like things are talking to the wrong thing.
<cryptix>
its def very annoying for local dev work too to not get told that
<cryptix>
i'll make double sure about my sharness claim above, but even in the case of a failing test leaving a daemon alive.. when i restart tests, i assume the suite gets everything back to a zero state and tells me if something is wrong
kyledrake has joined #ipfs
<ipfsbot>
[go-ipfs] cryptix force-pushed feat/httpApiTcpPort0 from 599cc2a to 9a2bc59: http://git.io/vkWm3
<ipfsbot>
go-ipfs/feat/httpApiTcpPort0 9a2bc59 Henry: http endpoints: dont print before listen
<ipfsbot>
[go-ipfs] cryptix force-pushed feat/httpApiTcpPort0 from 9a2bc59 to d8c11a5: http://git.io/vkWm3
<ipfsbot>
go-ipfs/feat/httpApiTcpPort0 d8c11a5 Henry: http endpoints: dont print before listen
<jbenet>
whyrusleeping cryptix: lmk if any of your PRs are waiting on me atm.
<jbenet>
i'm going to CR the pinning, migration, s3 stuff next (closer to eve)
<ipfsbot>
[go-ipfs] cryptix force-pushed maint/godeps2master from 683aaf4 to 8a00395: http://git.io/vkzGd
<ipfsbot>
go-ipfs/maint/godeps2master 8a00395 Henry: godeps: update everything to master...
<ipfsbot>
[go-ipfs] cryptix created maint/dropUUID (+1 new commit): http://git.io/vkznz
<ipfsbot>
go-ipfs/maint/dropUUID 4634285 Henry: godeps: drop uuid from code.google.com
<ipfsbot>
[go-ipfs] cryptix deleted maint/dropUUID at 4634285: http://git.io/vkznw
<ipfsbot>
[go-ipfs] cryptix created maint/dropUUID (+1 new commit): http://git.io/vkzn1
<ipfsbot>
go-ipfs/maint/dropUUID 46db9de Henry: godeps: drop uuid from code.google.com
<ipfsbot>
[go-ipfs] cryptix opened pull request #1305: godeps: drop uuid from code.google.com (master...maint/dropUUID) http://git.io/vkznS
<cryptix>
this is very annoying if i want to construct gateways with script (or auto start on boot) and just misconfigured my config
<cryptix>
if i dont want gateway, set it to empty string in config
<jbenet>
cryptix: agreed
<ipfsbot>
[go-ipfs] jbenet closed pull request #1305: godeps: drop uuid from code.google.com (master...maint/dropUUID) http://git.io/vkznS
<Confis>
I've been reading the IPFS paper (draft 3), and I was wondering if the "Object-level crypto" of section 3.5.5. is implemented somewhere in go-ipfs. I've been trying to find it, in between trying to grok the structure of your enormous(ly awesome) source tree.
<whyrusleeping>
jbenet: 1299 is RFM i think.
<whyrusleeping>
Confis: thats coming pretty soon
hellertime has joined #ipfs
hellertime has quit [Ping timeout: 244 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed feat/bitswap-speed from db8af56 to 2ec4c9a: http://git.io/vk0E0
<ipfsbot>
go-ipfs/feat/bitswap-speed f574cd4 Jeromy: Move findproviders out of main block request path...
<ipfsbot>
go-ipfs/feat/bitswap-speed ab161cf Jeromy: clean up organization of receivemessage and fix race
<ipfsbot>
[go-ipfs] whyrusleeping created gpe-f574cd4b80e13870f6f31ce8535e568f67847f16 at f574cd4 (+0 new commits): http://git.io/vkz4t
<ipfsbot>
[go-ipfs] whyrusleeping created gpe-2ec4c9ac455dc4781e937d463b97aa6162d26c5c from feat/bitswap-speed (+0 new commits): http://git.io/vkz4O
<kyledrake>
jbenet whyrusleeping I wanted to throw this out there: It would be really handy if there was a way to synchronize all of the stored content between IPFS nodes.
<kyledrake>
jbenet whyrusleeping I can do this with external code, but if it was relatively easy to do, it would be nice.
<jbenet>
kyledrake: very much agreed! Will get easier once current pin changes make it
luca has quit [Ping timeout: 258 seconds]
<headbite>
couldn't you `ipfs refs local | ipfs pin add` or something like that?
<kyledrake>
That opens up a lot of new use cases. Replication for redundancy is a big deal in the file storage space.
<kyledrake>
And most of the tooling for doing it is.. not very good.
<whyrusleeping>
hrmmmmmmmmmm
<whyrusleeping>
could be implemented as a bitswap agent
<kyledrake>
It's possible a lot of this stuff is out of scope, but if I wanted to use IPFS as a primary datastore for my files, it would be great to be able to do a backup of that. And of course the more realtime it is the better.
<whyrusleeping>
kyledrake: i think that might be something we could do
<ipfsbot>
[go-ipfs] cryptix pushed 1 new commit to feat/httpApiTcpPort0: http://git.io/vkzBi
<ipfsbot>
go-ipfs/feat/httpApiTcpPort0 b175e49 Henry: daemon: split api, gw and fuse bring up into functions
<cryptix>
jbenet: got a bit longer but each of them only needs the cmds.Request - feared they would need a gazillion arguments
<jbenet>
whyrusleeping: maybe it's not the full repo but part of it. Also all secret keys should be stored encrypted anyway. Though yes the config file can leak other things.
<jbenet>
Can store the top level repo object encrypted too
<jbenet>
(Or all repo defining objects)
<jbenet>
And not provide any of them to the whole network
<ipfsbot>
[go-ipfs] cryptix force-pushed feat/httpApiTcpPort0 from b175e49 to fef207b: http://git.io/vkWm3
<ipfsbot>
go-ipfs/feat/httpApiTcpPort0 fef207b Henry: daemon: split api, gw and fuse bring up into functions
<whyrusleeping>
a list of what blocks i have isnt something i always want to broadcast
inconshreveable has quit [Ping timeout: 272 seconds]
* cryptix
will now check how bad kung fury is
<cryptix>
for science, of course
<whyrusleeping>
dude
<whyrusleeping>
kung fury was amazing
<cryptix>
i got tortured with the theme song at a party
<whyrusleeping>
lol
<whyrusleeping>
is anyone able to repro those iptb failures locally?
<kyledrake>
I guess the personal use case I'm thinking of is, store file changes directly into an ipfs node, and then that node replicates real-time to a few others. Aside from backups, you then get more nodes to load balance from, as long as the chances synchronize fairly quickly.
<cryptix>
no.. not at will
<kyledrake>
s/chances/changes
<kyledrake>
Instant CDN
<Confis>
kyledrake: I'm very fresh on the subject, but it sounds like if you keep your files in a path under your NodeID (/ipns/...), and you recursively pin that NodeID on another host, you effectively backup.
<kyledrake>
confis: ponder the use case of having 100 million files, and needing to change it 500 times every second.
<kyledrake>
That's contrived, but gives a flavor of the problem.
<kyledrake>
That's why I was wondering if it may be a bit out of scope :)
<cryptix>
Confis: be careful with the B word. if you accidentally delete on host #1 and those changes propagate.....
<kyledrake>
true.
<Confis>
cryptix: So it's possible to pin a mutable name? How often does it pull changes?
<whyrusleeping>
the B word!
<whyrusleeping>
Confis: its not possible yet
<whyrusleeping>
precisely because answering your second question is hard
<Confis>
Right :)
<kyledrake>
I would probably have an off-IPFS backup of some sort in this theoretical system too.
<whyrusleeping>
it *will* be possible at some point
<Confis>
whyrusleeping: But I would be able to emulate it by manually looking up the object under a mutable name, and recursively pinning that object, right?
<Confis>
This seems a very application level thing.
<kyledrake>
rsync is kindof accidentally useful like that. It doesn't (by default, anyways) delete files if they go missing on the source.
<whyrusleeping>
Confis: yeap
hellertime has joined #ipfs
<kyledrake>
Use case: Jason Scott throws all the Textfiles.com BBS data on an IPFS node. You can pin it. But instead of just pinning the data at the point in time you pinned it, if he adds new data it "pins" the new information. It's a fundamentally different contract. You didn't agree to host that new information. That example gives me a feeling for why I thought it was out of scope.
<kyledrake>
OK I'll stop now :)
<Confis>
It's well in scope of IPFS. It's just not an issue: you can't insert new data without changing the merkle hash of objects up the path.
<Confis>
Objects are immutable.
<jbenet>
kyledrake: yep like a feed of sorts.
<jbenet>
+1
<kyledrake>
Yeah, a feed.
<kyledrake>
I suppose, curiously, there could be security implications as well. You could do some pretty fun damage if you hacked Jason's private key.
<jbenet>
Yep
<kyledrake>
I was more thinking of this for private datastore usage, but that would def. be an issue for public usage.
<kyledrake>
The Internet Archive obviously would love this (or some flavor of it). They'll need a way to catch changes to IPNS pubkeyhashes they're tracking
<kyledrake>
They could just hit them all the same way they do now though. It's just that event-driven is more efficient.
kyledrake has quit [Quit: Leaving]
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to rm-testing: http://git.io/vkzw6
<ipfsbot>
go-ipfs/rm-testing e43106a Jeromy: swap out testing.T for an interface
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed rm-testing from e43106a to d8bf35f: http://git.io/vT2Pa
<ipfsbot>
go-ipfs/rm-testing d8bf35f Jeromy: remove testing imports from non testing code...
anshukla has quit [Remote host closed the connection]
anshukla has joined #ipfs
<Confis>
I was thinking about service/application discovery over IPFS. In other words: suppose I want to use application X together with some other (random) users, how would we be able to find each other, without exchanging NodeIds out of band?
<Confis>
Could I for example listen to certain key (prefixes) on the DHT, or?
<whyrusleeping>
Confis: yeap! you could use the dht for this :)
<whyrusleeping>
i had sketched out an idea a few different times...
<whyrusleeping>
oh yeah, i remember
<whyrusleeping>
so, what you would do, is have each peer with the service 'add' a block containing the ID of the service
<whyrusleeping>
that will register them as a 'provider' of that 'block' on the dht
<whyrusleeping>
and a call to 'findproviders' for that block will return peers who are running that service
<Confis>
I don't think I understand it correctly. Would that node then do something along the lines of `dht put [serviceId] [NodeId]`? Then would a searching node do `dht findprovs [serviceId]`?
<Confis>
I'm also a bit confused that I can choose arbitrary keys to put in the DHT. I just did a `dht put omg wtf`. What happens when two clients insert different values on the same key?
nessence has quit [Remote host closed the connection]
hellertime has quit [Ping timeout: 264 seconds]
PayasR has quit [Ping timeout: 245 seconds]
nessence has joined #ipfs
EricJ2190 has quit [Ping timeout: 276 seconds]
nessence has quit [Ping timeout: 276 seconds]
pfraze has quit [Remote host closed the connection]
wedowmaker has joined #ipfs
sharky has quit [Ping timeout: 245 seconds]
<ipfsbot>
[go-ipfs] jbenet force-pushed add-circleci from 1fcde90 to 9e78883: http://git.io/vT2zm
<ipfsbot>
go-ipfs/add-circleci 9e78883 Juan Batiz-Benet: add circleci support
alexandria-devon has joined #ipfs
<alexandria-devon>
hey juan, i was just trying out the "?dl=1" flag on a localhost:8080/ipfs/ link and it doesn't seem to change the behavior from not having it - is that an incomplete feature, or am i using it wrong?
Tv` has quit [Quit: Connection closed for inactivity]
nessence has joined #ipfs
nessence has quit [Ping timeout: 256 seconds]
gwillen has joined #ipfs
gwillen is now known as Guest53442
mildred has joined #ipfs
mildred has quit [Ping timeout: 246 seconds]
mildred has joined #ipfs
nessence has joined #ipfs
nessence has quit [Ping timeout: 264 seconds]
alexandria-dev-1 has joined #ipfs
alexandria-devon has quit [Quit: Page closed]
alexandria-dev-1 has left #ipfs [#ipfs]
zabirauf has joined #ipfs
<cryptix>
hello again
<tperson>
Heyi
<cryptix>
hey tperson :)
zabirauf has quit [Remote host closed the connection]
<cryptix>
jbenet: lol are you starting a ff cluster over there? :)
zabirauf has joined #ipfs
<tperson>
So I really want to add a flag to the ipfs command to allow you to specify a host to talk to.
<cryptix>
which api host you mean?
<tperson>
Right now the ipfs command is hard coded to the config file, so you can't talk to remote daemons without changing the config under .ipfs.
<tperson>
ya
mildred has quit [Ping timeout: 256 seconds]
<cryptix>
it's icky but you could have multiple .ipfs dirs for the remote ones and use IPFS_PATH to trick it into using that... problem is, i'm not sure how to trick it into thinking it's in online mode
<tperson>
ah crap, thats a bigger issue lol
<tperson>
I mean it could be in presence of a flag.
<tperson>
ipfs --remote /ip4/host/tcp/port
mildred has joined #ipfs
alexandria-dev-1 has joined #ipfs
void has joined #ipfs
Guest53442 is now known as gwillen
gwillen has joined #ipfs
gwillen has quit [Changing host]
<void>
whyrusleeping: Is there any documentation about the trickledag data structure, except the code itself?
zabirauf has quit [Remote host closed the connection]
domanic has joined #ipfs
nessence has joined #ipfs
nessence has quit [Ping timeout: 250 seconds]
domanic has quit [Ping timeout: 264 seconds]
void has quit [Ping timeout: 252 seconds]
chriscool has joined #ipfs
zabirauf has joined #ipfs
zabirauf has quit [Remote host closed the connection]
nessence has joined #ipfs
prosodyContext has joined #ipfs
nessence has quit [Ping timeout: 258 seconds]
alexandria-dev-1 has quit [Quit: alexandria-dev-1]
mildred has quit [Ping timeout: 256 seconds]
zabirauf has joined #ipfs
daviddias has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
chriscool has quit [Quit: Leaving.]
chriscool1 has joined #ipfs
zabirauf has quit [Remote host closed the connection]
nessence has joined #ipfs
nessence has quit [Ping timeout: 256 seconds]
<cryptix>
i love it when i cant reproduce ci test failure locally...
infinity0 has quit [Ping timeout: 256 seconds]
daviddias has joined #ipfs
zabirauf has joined #ipfs
chriscool1 has quit [Quit: Leaving.]
infinity0 has joined #ipfs
chriscool has joined #ipfs
rk[1] has quit [Ping timeout: 265 seconds]
nessence has joined #ipfs
domanic has joined #ipfs
nessence has quit [Ping timeout: 245 seconds]
chriscool has quit [Ping timeout: 264 seconds]
chriscool has joined #ipfs
chriscool has quit [Ping timeout: 256 seconds]
chriscool has joined #ipfs
domanic has quit [Ping timeout: 258 seconds]
vaelys has quit [Ping timeout: 255 seconds]
nessence has joined #ipfs
nessence has quit [Ping timeout: 246 seconds]
chriscool has quit [Ping timeout: 272 seconds]
EricJ2190 has joined #ipfs
vaelys has joined #ipfs
vaelys has quit [Ping timeout: 272 seconds]
lgierth has joined #ipfs
nessence has joined #ipfs
nessence has quit [Ping timeout: 272 seconds]
chriscool has joined #ipfs
vaelys has joined #ipfs
nessence has joined #ipfs
vaelys has quit [Ping timeout: 250 seconds]
nessence has quit [Ping timeout: 256 seconds]
tilgovi has joined #ipfs
nessence has joined #ipfs
nessence has quit [Remote host closed the connection]
pfraze has joined #ipfs
zabirauf has quit [Remote host closed the connection]
lgierth has quit [Quit: Ex-Chat]
ashleyis has joined #ipfs
vaelys has joined #ipfs
m3s has joined #ipfs
Blame has quit [Quit: Connection closed for inactivity]
vaelys has quit [Ping timeout: 246 seconds]
tilgovi has quit [Ping timeout: 256 seconds]
vaelys has joined #ipfs
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
vaelys has quit [Quit: leaving]
inconshreveable has joined #ipfs
<jbenet>
tperson: yeah we should head to this
foobar__ has joined #ipfs
m3s has quit [Remote host closed the connection]
ralphtheninja has joined #ipfs
<foobar__>
hi, I just did this:
<foobar__>
C:\>go version
go version go1.4.2 windows/386
C:\>go get -u github.com/ipfs/go-ipfs/cmd/ipfs
# github.com/ipfs/go-ipfs/Godeps/_workspace/src/bazil.org/fuse
d:\go\src\github.com\ipfs\go-ipfs\Godeps\_workspace\src\bazil.org\fuse\error_std.go:27: undefined: errNoXattr
d:\go\src\github.com\ipfs\go-ipfs\Godeps\_workspace\src\bazil.org\fuse\fuse.go:1092: undefined: attr
d:\go\src\github.com\ipfs\go-ipfs\Godeps\_workspace\src\bazil.org\fuse\f
<jbenet>
cryptix: o/
<jbenet>
i think this comes from the vendor changes
<foobar__>
what do I need to change to compile ipfs from source?
<jbenet>
foobar__ i think you want the windows build
<jbenet>
go get will not work because of fuse, i believe.
<jbenet>
foobar__ try `make nofuse`
<jbenet>
or: cd cmd/ipfs && go build -tags nofuse
<foobar__>
ahh!
<foobar__>
thanks, that what I needed!
<foobar__>
that->that's
<cryptix>
yea -tags nofuse :)
<grawity>
what's the correct way of building this on Linux by the way
<grawity>
I used to run `go get -u -v github.com/jbenet/go-ipfs/cmd/ipfs` but not sure if that's right
Blame has joined #ipfs
<jbenet>
grawity: that should work but may break in the future. I use: make install
<grawity>
well it does work, usually, but sometimes it spews out things like "cannot find package go-uuid" and only works on the 2nd try
compleatang has quit [Ping timeout: 272 seconds]
<foobar__>
I can't find what I'm doing wrong, hence next question:
<foobar__>
D:\>ipfs daemon
Initializing daemon...
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
<foobar__>
(sorry about linebreaks, please tell me, if I should paste line-by-line)
<foobar__>
D:\>ipfs diag net
Error: can't Lock file "C:\\Users\\foobar\\.ipfs/repo.lock": has non-zero size
<whyrusleeping>
foobar__: that might be an issue with our lock checking code. although i dont know that i've seen that one before
m3s has joined #ipfs
Tv` has joined #ipfs
inconshreveable has quit [Remote host closed the connection]
<foobar__>
(build seems to be taking quite a long time on an rpi B+)
<whyrusleeping>
foobar__: lol, yeah, it will
<foobar__>
(though not _that_ long, it's
<foobar__>
(though not _that_ long, it has just finished)
compleatang has joined #ipfs
<foobar__>
rpi build seems to be ok. any pointers on where I should start to diagnose the windows lock issue?
<cryptix>
whyrusleeping: thats the lock error i also got on linux
<whyrusleeping>
cryptix: hrm... okay
alexandria-devon has joined #ipfs
<foobar__>
whyrusleeping: rpi: stock Raspbian7, go version is 1.4; windows 7: go version is 1.4.2 (though I don't know if that's relevant or not)
<jbenet>
whyrusleeping: locking's broken. We should fix or revert it.
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
headbite has quit [Quit: Leaving.]
headbite has joined #ipfs
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
<tperson>
grawity: I've run into this before, I don't remember what the problem was but it's due to things moving around. They aren't really errors, go just spits them out.
<foobar__>
linux on amd64 seems to be ok as well
inconshreveable has joined #ipfs
tilgovi has joined #ipfs
kyledrake has joined #ipfs
<ralphtheninja>
heya
inconshreveable has quit [Read error: Connection reset by peer]
<ralphtheninja>
when starting up go-ipfs via docker .. I guess you need to do something like 'docker exec ipfs_host ipfs init' if that hasn't been done before?
<ralphtheninja>
oh I see .. sorry just needed some rubber ducking :)
<ralphtheninja>
kyledrake: heya
tilgovi has quit [Ping timeout: 258 seconds]
<ralphtheninja>
/export is essentially used to put data into the container
<ralphtheninja>
or get out for that matter
<kyledrake>
ralphtheninja oh neat I want to run ipfs in docker too. what's the command right now?
<kyledrake>
Aha. not too bad.
chriscool has quit [Ping timeout: 250 seconds]
<whyrusleeping>
jbenet: re locking: https://github.com/ipfs/go-ipfs/pull/1295 patches the issue. But the larger issue is that the camlistore lib doesnt use named errors
<whyrusleeping>
so error checking is impossible
nessence has joined #ipfs
lgierth has joined #ipfs
domanic has joined #ipfs
lgierth has quit [Quit: Ex-Chat]
nessence has quit [Remote host closed the connection]
<barnacs>
so i've been fiddling with ipfs add performance and got some interesting results but i don't have a variety of machines to test with
<barnacs>
would anyone be interested to help with a little testing?
<barnacs>
preferably with ssd, raid, or pretty much any setup that's different from mine
<tperson>
What are you trying to do?
<barnacs>
just playing with making it more concurrent
<barnacs>
just add something, preferably not a lot of small files, and time it against master
<barnacs>
i'm getting promising results with a single hdd but i have no idea how it would hold up in different environments
<tperson>
Is a single file better?
<barnacs>
it should be
<whyrusleeping>
barnacs: i did something similar a little while back (though not as involved as yours) and the issue i ran into was memory pressure from the chunkers
<whyrusleeping>
the conclusion i came to was that the chunkers need to be reworked a bit to use a buffer pool
foobar__ has quit [Ping timeout: 246 seconds]
<barnacs>
whyrusleeping: i'm just trying to decouple the cpu intensive steps from the io heavy part of building the dag at this point
<barnacs>
it just didn't seem to be either io or cpu bound so i assumed some pipelining would help
<barnacs>
if nothing else it should make it easier to profile what's actually holding it up
<barnacs>
but with my setup i found this to already help considerably
<whyrusleeping>
barnacs: awesome :)
mildred has joined #ipfs
<kyledrake>
whyrusleeping what's the endian for multihash?
<kyledrake>
Oh wait, maybe that doesn't matter. n/m
tilgovi has joined #ipfs
<ipfsbot>
[go-ipfs] jbenet pushed 3 new commits to master: http://git.io/vkVkS
<ipfsbot>
go-ipfs/master 17d71c4 rht: Add test for no repo error message
<ipfsbot>
go-ipfs/master ee10b41 Juan Batiz-Benet: Merge pull request #1293 from rht/cleanup-cat-error...
<foobar__>
I am really amazed. That stuff looks like something from the future.
Confis has quit [Ping timeout: 244 seconds]
Confis has joined #ipfs
<jbenet>
foobar__ what stuff?
<whyrusleeping>
jbenet: oh, the daemon on pinbots machine crashed
<jbenet>
whyrusleeping yeah. running a service that _always works_ with close to zero mgmt overhead is not trivial. it's why i'm hesitant to depend on pinbot.
<jbenet>
(usually includes all sorts of watchers that test + ping you if something goes wrong)
<foobar__>
ipfs
<jbenet>
foobar__ ohhh thanks! :D
<lgierth>
is dependency mgmt for ipfs just regular godep, or any special cases? i don't have any experience with go packaging or dependency mgmt
<lgierth>
right now i'm developing within gopath/src/github.com/ipfs/go-ipfs/cmd/ipfs/, i think i had issues just cloning the repo and working there
<jbenet>
lgierth: _future proof_ go packaging and dependency management is not good at the moment. we use godeps and we'll be moving to using ipfs itself soon.