<achin>
i wouldn't expect a NAT issue to produce a multihash length error, though
devbug has quit [Ping timeout: 250 seconds]
<achin>
anyway, i'm glad it seems to work now! i was worried for a sec, because i never actually tried to start a node with one of these keys (i only did the "ipfs id" test)
<davidar>
oh, duh, i was using the wrong command :p
Pasquino has quit [Killed (Sigyn (Spam is off topic on freenode.))]
<achin>
ah hah!
<davidar>
ipfs peer ping instead of just ipfs ping :/
<daviddias>
ipfs cli will fmt.println('sparkles') when you hit it :D
callowness has joined #ipfs
rombou has joined #ipfs
<moep>
Is there an article about evaluating the possible personal liability for illegal stuff which is routed through you? (Maybe I didn't get the concept completely, and this question is bullshit)
border0464 has quit [Ping timeout: 246 seconds]
OutBackDingo has quit [Quit: Leaving]
border0464 has joined #ipfs
<M-davidar>
moep (IRC): nothing gets routed through your node automatically
<moep>
M-davidar hm, I was under the impression that it worked like a torrent swarm, where everybody would have some amount of other user's files; so stuff only gets routed through you if you access it / create it
<moep>
?
<davidar>
moep (IRC): yeah, so only if you request illegal stuff will your node start serving it
<ipfsbot>
[go-ipfs] jbenet closed pull request #1940: add option to version command to print repo version (master...feat/version-repo) http://git.io/vl9c2
hellertime has joined #ipfs
Eudaimonstro has quit [Read error: Connection reset by peer]
<victorbjelkholm>
anyone have any slides that I can share and re-use to quickly explain IPFS? cc jbenet
dignifiedquire has joined #ipfs
<dignifiedquire>
I’m back :)
rombou has quit [Ping timeout: 250 seconds]
Encrypt has joined #ipfs
guest234234 has joined #ipfs
uhhyeahbret has quit [Ping timeout: 252 seconds]
devbug has quit [Ping timeout: 252 seconds]
chriscool has quit [Ping timeout: 272 seconds]
anshukla has quit [Remote host closed the connection]
<xelra>
This might also be a good question regarding ipfs-cluster. Access rights. Who can manage the cluster and to what degree. Also ratio and quota. Doesn't the cluster need to be partly centralized? The managing part.
<xelra>
The cluster manager would need to know how fast each cluster node is and how much storage is available there.
<zignig>
ipfs does not do this (yet), but it can because of the merkledag.
jabberwocky has quit [Remote host closed the connection]
jhulten has quit [Ping timeout: 252 seconds]
<ion>
Doesn't that one talk about a single authority which the network keeps honest and prevents someone else from faking?
cemerick has joined #ipfs
<cryptix>
a friend of mine poked me with that paper, too
<NeoTeo>
Hi guys. How do I interrupt the ipfs log tail command without breaking the node? A ^C stops the output but also stops the node from responding...
callowness has quit [Killed (Sigyn (Spam is off topic on freenode.))]
<ion>
NeoTeo: That sounds like the bug that is being fixed. I think there may be a PR.
<NeoTeo>
Ok, cool. Thx.
<NeoTeo>
So how would the api interrupt a log tail request?
<ion>
By closing the connection, I think.
chriscool has joined #ipfs
JohnClare has joined #ipfs
Carraway has joined #ipfs
<Carraway>
Is there a list of publicly hosted gateways (other than the ipfs.io ones) anywhere?
<NeoTeo>
Right. I'll see when the fix is available :)
pfraze has joined #ipfs
HostFat has joined #ipfs
JohnClare has quit [Ping timeout: 252 seconds]
ashark has joined #ipfs
jhulten has joined #ipfs
e-lima has quit [Ping timeout: 260 seconds]
anshukla has joined #ipfs
jhulten has quit [Ping timeout: 260 seconds]
anshukla has quit [Ping timeout: 246 seconds]
e-lima has joined #ipfs
<ion>
The Duration is the maximum cache time and applies to both *ResolveResult and error. Watershed time: what is the best order for the result tuple?
<ion>
func ResolveDetailed(ctx context.Context, n *IpfsNode, p path.Path) (time.Duration, *ResolveResult, error) {
<ion>
It would be nice to have the result before the TTL, but then the error seems to be canonically the last value in Go. Having the TTL in the middle would feel weird because it affects the values on both sides.
tinybike has quit [Quit: Leaving]
sseagull has joined #ipfs
jhulten has joined #ipfs
voxelot has joined #ipfs
voxelot has joined #ipfs
taillie has joined #ipfs
water_resistant has joined #ipfs
warner has quit [Quit: ERC (IRC client for Emacs 24.5.1)]
rombou has joined #ipfs
dignifiedquire has quit [Ping timeout: 246 seconds]
Tv` has joined #ipfs
dignifiedquire has joined #ipfs
amade has joined #ipfs
<rschulman>
whyrusleeping: you around yet?
<achin>
Carraway: as far as i know, the ipfs.io gateways are the only public gateways
anshukla has joined #ipfs
cleichner has quit [Ping timeout: 264 seconds]
mildred has quit [Ping timeout: 244 seconds]
cleichner has joined #ipfs
<cryptix>
some people expose their :8080 but i dont think i've seen a maintained list
demize is now known as man
man is now known as demize
<voxelot>
this is the only list i know of aside from the federated ipfs servers
<Carraway>
is ipfs trademarked or plan to be trademarked in anyway? I'm thinking of either starting or contributing to something like openipfs and I'm wondering if one day we'll all have to rename our services
HoboPrimate has joined #ipfs
<Carraway>
This might seem like a dumb question since ipfs is an open protocol, but just want to play it safe
<Carraway>
Interesting, thanks cryptix. It seems his issue there is the vagueness of 'open' rather than including 'ipfs', but at the same time his suggestions don't include ipfs except for ipfscoop.
jhulten has quit [Ping timeout: 244 seconds]
<ansuz>
more like ipfs-coup amirite
<ansuz>
hon hon hon
devbug has joined #ipfs
devbug has quit [Remote host closed the connection]
devbug has joined #ipfs
jhulten has joined #ipfs
moep has quit [Ping timeout: 265 seconds]
devbug has quit [Read error: No route to host]
devbug has joined #ipfs
rombou has quit [Ping timeout: 240 seconds]
HostFat_ has joined #ipfs
rombou has joined #ipfs
HostFat has quit [Ping timeout: 250 seconds]
<noffle>
trying to decide what a common interface that sits in front of either an IpfsNode or an HTTP API client would expose, so that we can abstract away whether it's an ephemeral node or some local/remote node via http. (https://github.com/ipfs/go-ipfs/issues/1945, fyi)
<whyrusleeping>
and then we can have something called mantle
<whyrusleeping>
and then we have 'the core'
JohnClare has joined #ipfs
<noffle>
ಠ_ಠ
<noffle>
NodeShell? to suggest it's a shell wrapper around a real node *somewhere*?
<noffle>
IpfsNodeShell? too verbose?
JohnClare has quit [Remote host closed the connection]
JohnClare has joined #ipfs
<ion>
whyrusleeping: Because the TTL value affects all results (whether a success or an error), I better take the TTL field out of the Result data structure and make it an additional return value. This means ResolveResult will end up only containing the path.Path. I might as well get rid of the struct. This would result in ResolveDetailed(…) (value path.Path, ttl time.Duration, err error). How about your comment in
<whyrusleeping>
ion: you can return "" for an empty path
<ion>
whyrusleeping: Yeah, that’s what the code does now. So I’ll retain that behavior.
<whyrusleeping>
cool, and yeah. i really think that the three argument return is right
<whyrusleeping>
after thinking about it some more, it feels better
<ion>
whyrusleeping: Do you have thoughts about whether the middle position is the right one for the TTL in the return type?
JohnClare has quit [Ping timeout: 260 seconds]
<whyrusleeping>
ion: no preference there
<whyrusleeping>
do whatever you think is best (as long as the error is last)
<ion>
People on #go-nuts seemed to think so, so I guess I’ll go with that.
<alterego>
Hrm, can I build a static ipfs executable?
<achin>
by default all go binaries are static
<whyrusleeping>
alterego: yeap, thats why you can download the binaries from gobuilder and just run them
<alterego>
They link against libc and ld-linux
<alterego>
Well, the ones I downloaded
<ipfsbot>
[js-ipfs] diasdavid pushed 4 new commits to jsipfs: http://git.io/v83FQ
<ipfsbot>
js-ipfs/jsipfs fef5f17 David Dias: base for commands
<ipfsbot>
js-ipfs/jsipfs 7880218 David Dias: help menu formmater
<ipfsbot>
js-ipfs/jsipfs 43712e7 David Dias: tests base with disposable daemon
* whyrusleeping
standing on the corner shaking coins in a cup "could anyone spare some code review?"
* whyrusleeping
"CR for the poor"
<alterego>
:)
<ion>
Will work for CR
<whyrusleeping>
i mean, probably not
<whyrusleeping>
but i'll say that too
<alterego>
Does ipfs use cgo?
fingertoe has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
<whyrusleeping>
alterego: a little bit, but in a way that lets us still mostly build statically
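For reference, the usual way to force a fully static Go build is to disable cgo. This is a sketch only; the package path is the standard go-ipfs one, but the exact flags needed may differ between versions.

```shell
# Disable cgo so the linker has no libc/ld-linux dependency to pull in.
# (With cgo enabled, some builds end up dynamically linked, which is
# what alterego observed with the downloaded binaries.)
CGO_ENABLED=0 go build -o ipfs ./cmd/ipfs

# Sanity checks after the build:
#   file ipfs   -> should report "statically linked"
#   ldd ipfs    -> should report "not a dynamic executable"
```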
<voxelot>
anyone going to london tomorrow? I'll be flying in sunday for the ether conference if anyone wants to grab a drink next week
<voxelot>
i know JB was trying to duck out being all like "ohh i've been traveling so much and i was jsut in london" pfff ;)
<noffle>
whyrusleeping: I can look at it for sanity/basics, but my codebase of go-ipfs is still shallow :)
<noffle>
codebase knowledge*
ashark has quit [Ping timeout: 240 seconds]
NightRa has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
voxelot: if you buy me a ticket i'll head over :P
<whyrusleeping>
otherwise i'm gonna sit in my chair right here and look at pictures of london for a minute
<whyrusleeping>
and pretend i was there
<voxelot>
you know i wish i could man haha. missed you at the SF meetup
<voxelot>
but im in LA so we can just kick it here in the states sometime
<voxelot>
or in germany for ccc.. ill be there too
<noffle>
I only just found out that folks were in the bay area recently :P
<noffle>
*sigh!*
pfraze has quit [Remote host closed the connection]
f[x] has joined #ipfs
domanic has joined #ipfs
mvr_ has quit [Quit: Connection closed for inactivity]
Matoro has joined #ipfs
devbug has quit [Ping timeout: 250 seconds]
ashark has joined #ipfs
<whyrusleeping>
yessss ccc
<whyrusleeping>
i'm planning on being there
ashark has quit [Ping timeout: 260 seconds]
<noffle>
I invoke the sagely wisdom of #ipfs to help me name something
<noffle>
because I'm still getting the hang of go-style naming
<noffle>
and because naming is inherently hard
pfraze has joined #ipfs
dignifiedquire has joined #ipfs
<noffle>
the package exposes a single method, MakeLocalShellWithEmbeddedFallback(), that grabs the local ipfs node (if running), and falls back to creating an ephemeral node otherwise. right now it's ipfs-shell-with-fallback
<noffle>
returning a common interface
Matoro has quit [Ping timeout: 240 seconds]
Matoro_ has joined #ipfs
dignifiedquire has quit [Remote host closed the connection]
<cojy>
how do i get a list of which things i currently have pinned?
<voxelot>
cojy: ipfs pin ls
<cojy>
thanks
dignifiedquire has joined #ipfs
<whyrusleeping>
noffle: whats the interface called?
ashark has joined #ipfs
<dignifiedquire>
daviddias: have you seen https://npmcdn.com/ ? sounds like an interesting concept that might be great combined with ipfs
<noffle>
cryptix: maybe rename go-ipfs-shell to go-ipfs-http-shell or similar? I think the http shell, the embedded shell, and the easy shell wrapper should be separate packages
<cryptix>
yea but they can be subpkgs imho
<cryptix>
go-ipfs-shell/jsonApi /embedded maybe even /http (as in :8080)
<whyrusleeping>
noffle: its currently called go-ipfs-api
<cryptix>
i think its supposed to support all of :5001
<cryptix>
i'm not sure how much of that we want in Shell
<cryptix>
ie ideally ipfs-api would also support 'swarm (dis)connect', all of dht etc
<noffle>
yeah, I think the common shell interface will only have useful common bits. as long as we keep signatures consistent it should be fine if your api shell exposes more
<cryptix>
yup exactly
<noffle>
I'm not sure how shutdown will work. api is a noop, but ephemeral needs proper shutdown
<noffle>
you okay with api shell having a stub Shutdown method?
<cryptix>
whyrusleeping: i meant to ask you: how bad is killing a node w/o shudown for the network?
<whyrusleeping>
not
<cryptix>
well then.. :)
<whyrusleeping>
we dont do anything fancy on shutdown
<cryptix>
i guess its just better since the fsrepo is still being worked on
<noffle>
still, we don't want to leave it alive forever if the process using it is long-lived
<cryptix>
sure - i don't think a no-op shutdown() would be a problem
<ipfsbot>
[go-ipfs] cryptix deleted feat/closeNotifier at 7fd6247: http://git.io/v8G6m
<ion>
Srsly, the CCC isn’t on the first page of Google search results for “CCC” 0) in general and 1) even despite me being logged in and thus in my search bubble.
<ion>
I was curious: turns out it is on page 19. :-D
<ion>
Pro tip: RED="$(tput setaf 1 2>/dev/null)" etc.
<cryptix>
victorbjelkholm: funky!
<victorbjelkholm>
thanks!
<victorbjelkholm>
ion, what's that for?
<cryptix>
to not have the gibberish codes (\033[0;31m)
<ion>
victorbjelkholm: Getting the code to set the color, but tput takes your terminal or lack thereof into account.
<victorbjelkholm>
oh, I see. I'll try it out
<ion>
Actually, safer if you use set -e and want to support weird systems without tput: RED="$(tput setaf 1 2>/dev/null || :)"
<ion>
tput sgr0 resets the formatting.
ianopolous2 is now known as ianopolous
<victorbjelkholm>
ion, right, was just gonna ask about nc
<ion>
tput bold is equivalent to \e[1m
Oatmeal has joined #ipfs
pfraze has joined #ipfs
<ion>
terminfo(5) holds a comprehensive list.
f[x] has quit [Ping timeout: 250 seconds]
<victorbjelkholm>
hm, the reset of the color doesn't seem to work
<victorbjelkholm>
oh
<victorbjelkholm>
I'm the one misunderstanding
<victorbjelkholm>
ion, want to take a look again?
<cryptix>
image claims "No root password has been set", booting it, trying to login on serial, 'Password: #'
<ion>
victorbjelkholm: Looks good, but you’ll want to add 2>/dev/null to the sgr0 one, too.
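Putting ion's tips together, a minimal sketch of the tput pattern; the message text is invented for illustration.

```shell
#!/bin/sh
set -e
# Query the terminal for its escape codes instead of hardcoding
# \033[...m sequences. "|| :" keeps this safe under `set -e` on
# weird systems without tput, and 2>/dev/null silences the error
# when there is no terminal at all.
RED="$(tput setaf 1 2>/dev/null || :)"
BOLD="$(tput bold 2>/dev/null || :)"
RESET="$(tput sgr0 2>/dev/null || :)" # sgr0 resets all formatting
printf '%s%swarning:%s colors degrade gracefully\n' "$BOLD" "$RED" "$RESET"
```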
<ianopolous>
whyrusleeping et al: In light of the ongoing DDOS on protonmail, how resilient do we think IPFS is currently to various DDOSes?
<whyrusleeping>
ianopolous: uhm... what kind of ddos?
<ianopolous>
any
<ianopolous>
api floods
<ianopolous>
malicious nodes
<victorbjelkholm>
ion, oh, crap, for sure. Thanks!
<whyrusleeping>
malcious nodes might be annoying, but kademlia can work through them
<whyrusleeping>
to a certain extent
<whyrusleeping>
api floods, you mean like pounding the gateways?
<whyrusleeping>
because the gateways are pretty centralized and not really all that resilient (at least, no more so than a normal http server)
<ion>
ianopolous: If someone wants to target *you*, they can take up all your bandwidth. If someone wants to target the availability of a file, that will be more difficult if the file is served by a large number of nodes.
<ianopolous>
pounding the gateways, or pounding the network directly though a local node. or e.g. a botnet of them
<whyrusleeping>
yeah, they could probably start pounding the network and it would slow things down
<whyrusleeping>
but their effect is limited to requesting data that people have
<whyrusleeping>
they would have to mod their daemon to be evil
<ion>
Targeting the bootstrap nodes might cause issues. go-ipfs could save the list of previously connected peers on disk for bootstrapping through them.
<ianopolous>
okay, so if they figure out some data that is on my node, they could flood me.
<whyrusleeping>
yeah
<ianopolous>
maybe we should rate limit requests from a given source?
nessence has joined #ipfs
<whyrusleeping>
yeah, that gets tricky
<ianopolous>
yeah i know, but what alternative is there?
Vyl has joined #ipfs
JohnClare has joined #ipfs
<ion>
They could just take up all your bandwidth if they wanted to. There’s nothing your node could do about that.
<Vyl>
So can ICMP pings.
<whyrusleeping>
DDoS mitigation is something that should be done at the router and firewall layer
<whyrusleeping>
not in the application layer
<ianopolous>
I'm just thinking about the case when people are storing their encrypted files, and thus noone else is likely to have them. Someone could deny you remote access to your own files.
<achin>
just like my ISP can deny me access to my files stored on dropbox when my ISP craps out
<Vyl>
IPFS is for publishing. If you're not expecting anyone else to retrieve them, it's not publishing.
<ianopolous>
vyl: What about you retrieving them from somewhere else?
<Vyl>
It's in the nature of protocols like IPFS: Storage is not infinite. If a file isn't being pinned by someone, eventually there will come a time it has to be discarded to make space. If you are using it for backup purposes, you can't be pinning it - if you could do that, you wouldn't need it on IPFS too. It's the wrong protocol for the job. Backups would be better handled by more conventional means.
<Vyl>
Now, if you're just storing an encrypted file so that only a select few can get it, no problem. Just like any other files.
<ianopolous>
vyl: It's the latter I'm thinking of
<Vyl>
Dropbox is used like that a lot already, along with other file hosting services.
<Vyl>
For confidential documents, as well as to avoid the anti-piracy blacklists.
<Vyl>
I see no reason IPFS would be any different in that usage scenario.
<ianopolous>
In IPFS availability of the encrypted file would be limited by the bandwidth of the machine(s) storing it, which is much easier to DDOS than dropbox.
Gaming4JC has joined #ipfs
<Vyl>
Briefly.
<achin>
what if that encrypted file was hosted by 1000 nodes?
<Vyl>
As soon as people start fetching it, it'll be mirrored around the network. Any DDOSer would have to be very fast to strike.
<achin>
in that case, it might be harder to "DOS" this file (compared to dropbox)
<ianopolous>
achin: sure, but then the owner has to pay for 1000X the storage space
<achin>
not necessarily
<Gaming4JC>
Hey! First time user. 1) Is there a way to limit the size of the storage on the local disk? 2) Is it possible to control the throughput. It's filling up my drive and taking half the network down (DSL speed here)
<achin>
they might not have to pay anything, if 1000 different people have fetched the file
<ianopolous>
space has a non zero cost, whether it is filecoin or something else.
<achin>
it might not cost *you* anything though
JohnClare has quit [Remote host closed the connection]
<achin>
Gaming4JC: hi! at the moment, there are no built-in controls for either local disk space or bandwidth. for the disk space, you'll just have to manually be careful about what you download (and/or gc often)
<ianopolous>
in the process of someone requesting a file, do any intermediate nodes cache the file on the way back?
<Gaming4JC>
achin: Hmm ok. Perhaps I can put it in some sort of container.
JohnClare has joined #ipfs
<achin>
Gaming4JC: remember that your node doesn't automatically download data. so there's no risk that an unattended (unused) node will fill up all your disks
<achin>
ianopolous: no, there's no relaying yet
<Gaming4JC>
achin, ah - that is good to know! So it only downloads if I query an IPFS hash?
<achin>
Gaming4JC: yep
<achin>
ipfs get, ipfs cat, ipfs pin, and requesting data via your gateway are all ways to get your ipfs node to download (and store) something
<achin>
(there are others, too, but those are the common ones)
<Gaming4JC>
achin, the other issue I have - my current data plan does have a limit. IPFS seems to be constantly downloading data from the Swarm.
<Gaming4JC>
I can probably control it with iptables I guess.
<Gaming4JC>
just throttle it down some.
JohnClare has quit [Ping timeout: 244 seconds]
cemerick has joined #ipfs
<achin>
what do you mean by "constantly" ?
<achin>
there is always a little trickle of data flowing (as the node communicates with others in the network)
<achin>
but if you're not downloading (or uploading) a file, there shouldn't be much data being transferred
<daviddias>
dignifiedquire: it breaks if a node is already running?
<dignifiedquire>
daviddias: no node running
<daviddias>
and it started doing that now?
<dignifiedquire>
it doesn’t start at all
<daviddias>
but was it working before you upgraded ipfsd-ctl?
<dignifiedquire>
looking at the tests you only check disposable, so my guess it has something to do with the local version
<dignifiedquire>
yes
cemerick has quit [Ping timeout: 265 seconds]
<dignifiedquire>
(before was ipfs 0.3.7)
OutBackDingo has joined #ipfs
OutBackDingo has quit [Client Quit]
<daviddias>
I see
OutBackDingo has joined #ipfs
OutBackDingo has quit [Remote host closed the connection]
OutBackDingo has joined #ipfs
<dignifiedquire>
daviddias: you are doing lots of regex matching magic maybe sth changed there in the upgrade?
ashark has quit [Ping timeout: 255 seconds]
<daviddias>
dignifiedquire: tbh, the majority of that code was not done by me, so I need to dial in and understand how it is all done and which are design decisions
<Gaming4JC>
achin, the trickle is enough to conflict with some services on my network :/
<Gaming4JC>
e.g. kick me out of XMPP
<Gaming4JC>
ping timeouts
<dignifiedquire>
daviddias: sure, I’ll try to do some more debugging myself, let me know if you find anything
<daviddias>
for sure, and thank you for catching this :)
<ipfsbot>
[js-ipfs-api] VictorBjelkholm created better-readme-and-api-docs (+1 new commit): http://git.io/v8Zfg
<ipfsbot>
js-ipfs-api/better-readme-and-api-docs 79892a6 Victor Bjelkholm: Add current work in progress of readme.md and api.md
f[x] has joined #ipfs
<ipfsbot>
[js-ipfs-api] VictorBjelkholm opened pull request #104: Add current work in progress of readme.md and api.md (master...better-readme-and-api-docs) http://git.io/v8Zf1
OutBackDingo has quit [Quit: Leaving]
OutBackDingo has joined #ipfs
<ipfsbot>
[js-ipfs-api] VictorBjelkholm closed pull request #102: Add current work in progress of readme.md and api.md (master...better-readme-and-api-docs) http://git.io/vl9Bt
<ion>
Gotta get some sleep. I’ll continue working on the Cache-Control/ETag PR tomorrow. Good night.
amade has quit [Quit: leaving]
<Vyl>
Did anyone give thought to the problem of where best to split files on insert?
<ion>
Vyl: go-ipfs splits by a rolling hash when you use --chunker=rabin and that will become the default in the future.
<dignifiedquire>
daviddias: found the issue I think
<ion>
Vyl: Fancier splitting can be implemented for specific file formats.
<daviddias>
what was it?
<dignifiedquire>
just a sec verifying it fixes it
<whyrusleeping>
i cant focus on anything today
<whyrusleeping>
i'm just gonna call it a day off
<whyrusleeping>
see y'all tomorrow
<ion>
See you
<dignifiedquire>
whyrusleeping: cheers
jhulten has quit [Ping timeout: 252 seconds]
jhulten has joined #ipfs
<dignifiedquire>
daviddias: looks like it has something to do with the changes I made to subcomandante to fix other issues
<Vyl>
Ion: I spent some time thinking about it at work today while bored out of my skull.
<jbenet>
hey whyrusleeping -- what are the things that make add slow right now? is there a build tag for no-sync? is there a PR open for the mem leak fix that i can just try out?
<daviddias>
whyrusleeping: enjoy the rest of your friday :)
<Vyl>
But I came to pretty much the same conclusion as my ideal solution.
<jbenet>
oh damn i just missed him
<Vyl>
Some sort of rolling hash is the obvious method, with a restriction on minimum chunk size.
<Vyl>
Otherwise some nasty person could insert a hundred-meg file split into one-byte chunks and cripple nodes with the overhead.
<Vyl>
Or craft a file for which the rolling hash would inevitably do that.
<M-hash>
i have some questions about the idea of custom splitting for specific file formats, as well
OutBackDingo has quit [Quit: Leaving]
<M-hash>
like, how does custom splitting jive with determinism? i want `ipfs add thefile` to always give me the same hash, regardless of e.g. mimetype detection, right?
<Vyl>
I can only see that as being worthwhile perhaps for disc images. Remember that archive files are compressed, so you're not going to see any benefit from splitting on a file boundary. And I agree with M-hash, the splitting really should be deterministic.
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed temp-nosync from 0a938c1 to c26a0b7: http://git.io/v8Zqj
<ipfsbot>
go-ipfs/temp-nosync c26a0b7 Jeromy: temporarily disable syncing in flatfs...
<whyrusleeping>
okay, NOW im done
<jbenet>
thanks whyrusleeping <3
<Vyl>
If you're using a rolling hash anyway, then the benefits from custom format splitting really are minimal.
<jbenet>
Ohhh wait, maybe not because we may be on the "other side"
<jbenet>
like the _client_ sees stdin as readable, but _we_ are writing to it?
<M-hash>
+1 to Vyl's comment "If you're using a rolling hash anyway, then the benefits from custom format splitting really are minimal"... I've done some research on this before (which I can't share, sorry), and... yeah, a well tuned rabin really is asymptotically close to win. Determinism and avoidance of perverse cases from *bad* special casing attempts are probably more useful.
<pinbot>
now pinning /ipfs/QmPuUk5ACcWGLUFhyJkhPkx372DEKdim4gWrQqkzz1Eqv7
<cryptix>
ohi jbenet :)
<jbenet>
cryptix: heya
<jbenet>
dignifiedquire: interesting, sec. mid something
<jbenet>
also i think the progress bar disappeared from both cat and add :/
<dignifiedquire>
progress bar?
<daviddias>
ok, so one thing that is different than what I was expecting, is that `local` simply uses the local repo and not the local binary
<daviddias>
I believe that the jump to 0.3.9 didn't require any repo migrations, but just to take that out of the list, have you refreshed your repo recently?
<dignifiedquire>
repo?
<daviddias>
~/.ipfs
<dignifiedquire>
yes I just cleared it
pinbot has quit [Ping timeout: 252 seconds]
jhulten has quit [Ping timeout: 265 seconds]
<dignifiedquire>
after it failed to start
<dignifiedquire>
but that didn’t help
<daviddias>
and then ipfs init?
<dignifiedquire>
yes, through electron, which worked
* cryptix
admits, he also wants 1.4 build times back
<dignifiedquire>
jbenet: had the same thought, but I tried adding simple logs to the callbacks and they don’t print anything, which is why I thought the issue at hand might be with subcomandante
<jbenet>
dignifiedquire: oh quite possibly. it may not be calling the callback?
<dignifiedquire>
jbenet: it seems that neither ‘error’ nor ‘data’ are emitted
<dignifiedquire>
daviddias: I think the syntax error does not stem from a file, but rather from a JSON.parse gone wrong somewhere, otherwise there would be some line number
<dignifiedquire>
daviddias: standard is happy
<daviddias>
got it, thanks for checking :)
<daviddias>
one thing I noticed now too is that the shutdown handler is only added on init time
<daviddias>
for a 'local', already init'ed node, that is never added
s_kunk has joined #ipfs
<dignifiedquire>
which it wasn’t before as far as I understood
<dignifiedquire>
or did I miss that?
cemerick has joined #ipfs
trochosphere has joined #ipfs
<daviddias>
before of your PR?
hellertime has quit [Quit: Leaving.]
* daviddias
is confused with timespace, relativity and logic clocks, we need ancestry chains for everything
<jbenet>
__mek__ this should work: go get -u github.com/ipfs/go-ipfs/cmd/ipfs
<jbenet>
daviddias: yeah there's a lovely parallel between hashchains as accumulators of information propagation in physics.
<jbenet>
daviddias: might make CATOCS finally work.
<jbenet>
probably can write some trivial proof like, _CATOCS only works when you feed all inputs to an accumulator and you can verify an input is there_. i would really like to disprove cheriton for lamport's sake.
<daviddias>
oh, that is very interesting!
<dignifiedquire>
daviddias: before my PR, I thought shutdown was only used for disposable nodes
<dignifiedquire>
but need to recheck
<ipfsbot>
[js-ipfs-api] Dignifiedquire opened pull request #105: Auto generate API.md using mocha (master...auto-docs) http://git.io/v8Z8N
<daviddias>
jbenet: I'm thinking that either we conclude that it is doable, or we create a proof of why all hashing functions are fundamentally broken. If I'm understanding what you are proposing correctly.
NeoTeo has quit [Quit: ZZZzzz…]
pinbot has joined #ipfs
cemerick has quit [Ping timeout: 255 seconds]
<daviddias>
will look with more depth into these papers :)
<daviddias>
dignifiedquire: I feel that we should have had better names
<daviddias>
disposable node is temporary repo+binary running
<daviddias>
local node is local repo+binary running
<daviddias>
in both, the binary will have to be shutdown
<dignifiedquire>
makes sense
<dignifiedquire>
but the names are fine I think, though we should add some notes to the readme I guess
<dignifiedquire>
so people know what they are supposed to do
<daviddias>
sounds good
<dignifiedquire>
I gotta go though now, my eyes are closing themselves
<daviddias>
I'm feeling exhausted too, I'll try to check it again tomorrow, although I'll only be able to do it later at night
<daviddias>
have a great weekend!
<dignifiedquire>
thanks you too :)
<dignifiedquire>
btw jbenet did you get my email? are you free over the weekend or do you want to postpone it until next week?
Matoro has quit [Ping timeout: 260 seconds]
<jbenet>
dignifiedquire: sorry, weekend is possible but hard. lets schedule over email, sorry a bit behind
devbug has quit [Quit: No Ping reply in 180 seconds.]
<dignifiedquire>
jbenet: no worries, just ping me via email, cheers