<clownpriest>
are there any examples of go-libp2p usage apart from the echo server in the github repo?
matoro has joined #ipfs
se3000 has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<ani>
(This is why I dislike PDD. One protocol is holding up everything. It seems like Protocol Labs has a thousand simultaneous projects, rather than focusing on one at a time.)
<ani>
Just my two cents.
<ani>
hsanjuan
screensaver has quit [Ping timeout: 264 seconds]
wmoh1 has quit [Ping timeout: 240 seconds]
realisation has joined #ipfs
tilgovi has quit [Ping timeout: 255 seconds]
<ani>
So what, exactly, is the purpose of libp2p? Does it handle networking protocols etc. or is it an RPC framework? Both?
<jbenet>
later in the quarter-- would love to have your input and help then
<jbenet>
ani: yeah, feel it on the "many simultaneous things" -- it definitely hurts us. part of it is that we've excavated a lot of problems and replacing things with nicer solutions takes time. also takes a lot more time to make them modular, usable by others, etc. our docs and examples and dev UX _definitely_ need a ton of help, we're planning a sprint for it
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
john1 has joined #ipfs
<jbenet>
(more on "many things" -- the balance to strike is to not take on too much more before landing and shipping something.)
<ani>
jbenet: the main problem is that right now you have a person (me) who wants to help, but any time I find something to be done I'm told to make an issue. It's added to a board and then nothing apparently happens for a while, then finally something happens and I've moved on to something else.
<lgierth>
we should add "free for all" tasks that block nobody to each sprint, wherever possible
<lgierth>
mh yeah i hear you
<ani>
For example, I'd love to help with ipfs-cluster right now, and it's extremely applicable to something I'm working on, but apparently it's being held up by what appears to be a completely unrelated issue elsewhere.
<ani>
To me, it seems like it'd be more productive to make an interface for whatever we are going to replace, then use the dirty hacks for things until our replacements are done.
<ani>
For example, it's great that Protocol Labs wants to use the new version of ipld or whatnot (still can't figure out what the deal is there) but for now we could get a working cluster going and then switch over when the time comes.
<ani>
Not to mention that everywhere I go there's a new repo :P it feels like Protocol Labs is making repos for things they (y'all) don't have the time/people to maintain in the short term. Case in point: https://github.com/ipfs/papers
<ani>
lgierth / jbenet whenever the team meets next, have an honest discussion about how much you guys can get done at once and then do it :P
<ani>
I see so many awesome things that can come from IPFS, but until it's solid/stable things like ipfs-cluster are inevitably going to be blocked by the changing core.
<ani>
Again just my two cents :3
fleeky_ has joined #ipfs
<lgierth>
thanks, they're appreciated!
<lgierth>
i'm too close to sleep to come up with a good answer right now :)
<lgierth>
in CET too
<lgierth>
we're starting to practice scrum sprints this quarter to get better overall focus
<ani>
Sounds good. Can't wait to see what happens!
dignifiedquire has quit [Quit: Connection closed for inactivity]
<kevina>
it could go into a marshaler, just won't be able to separate out stdout from stderr...
<whyrusleeping>
kevina: we agreed that it was okay to write to stderr and stdout, but you can write to stderr and stdout in the marshaler in exactly the same way you're doing
<whyrusleeping>
using the marshaler allows the commands lib to handle errors better
<kevina>
how so?
* kevina
looking into exactly how the marshaler works
<jbenet>
ani: what are you trying to do in cluster that's getting stuck?
<jbenet>
ani: one thing i'd really like to figure out is dependencies between issues, so that you can walk up the dependency graph, understand it, question it + adjust it if it's wrong, and land things
<whyrusleeping>
kevina: the marshaler is given a commands.Result
<whyrusleeping>
which has access to the same Stdout and Stderr things you're writing to in the PostRun
cemerick has joined #ipfs
<ani>
jbenet: I am looking at building something similar to Dropbox (for a wildly different use-case but similar nonetheless).
<jbenet>
ani: one good thing to practice more though may be the "shim with an interface, hack it up and fix it better later". we do that a number of places, but perhaps not as much as would be ideal
<ani>
jbenet: honestly I think you guys focus way too much on protocols and specifications for products that aren't even alpha.
<kevina>
whyrusleeping: I'm fine with that, just curious how the marshaler allows the commands lib to handle errors better
<jbenet>
ani: though at the same time, help us solve the problems correctly-- you shouldn't be blocked by anything-- meaning anything should be doable. if the dependency graph strays too far, it's probably just not necessary, or adjustable, and that's something useful. I do not at all want you to be in a position where "i want to do this" turns into "wait for someone
<jbenet>
else to do this"
<kevina>
whyrusleeping: in any case will change
<ani>
The main thing I notice is you guys are big on specifications and protocols. You have a specification for that even.
<jbenet>
ani: let's make it very concrete-- what's a protocol we need to spec, that we are blocked by, that isn't even alpha, that we consider in our "active set" (the things being worked on now)
<dyce[m]>
how long should it take to pin a 5gb file?
<dyce[m]>
ipfs pin add
<ani>
IPFS-Cluster is evidently blocked by ipls
<jbenet>
ipls?
<jbenet>
or ipld?
<ani>
Sorry ipld
<jbenet>
have an issue? (i'm not at this moment familiar with the blockage there)
<ani>
I badly hurt my ankle today and am on pain medicine, so I apologize in advance for any screw ups in my speech.
<ani>
Well, right now it's file-oriented as opposed to block-oriented.
<ani>
Which essentially nullifies the heterogeneous properties a cluster provides.
<jbenet>
(but bear in mind that ipld is much more than alpha, and is getting shipped into 0.4.5. arguably, it's one of the most important things we've done-- it's not well communicated yet. but the protocol and interfaces are there.)
<whyrusleeping>
kevina: errors in the http pipeline will circumvent going through the marshaler correctly
<whyrusleeping>
but going through the postrun makes handling that not work right
realisation has joined #ipfs
<ani>
I made a quick thing about how easy it could be to implement replication.
<jbenet>
-- ani: aside: no worries at all! :) it's _great_ to get this pushback, no pushback would be bad. we should clearly justify what we can, and change what we can't :) --
ralphtheninja has quit [Ping timeout: 240 seconds]
<ani>
But apparently, IPFS-cluster operates on the file level for now since IPLD has some changes coming.
onabreak has quit [Ping timeout: 260 seconds]
<ani>
I can't find any mention of these changes but because of them, IPFS-cluster is making serious sacrifices.
<whyrusleeping>
such as?
* whyrusleeping
really needs to go pay attention to ipfs cluster
<lgierth>
i'm pretty sure ipfs-cluster can deal with any thing that has a hash
<ani>
Reducing the heterogeneous property of the cluster (where nodes of any size can share limited space)
<jbenet>
ani: i think that's merely a side-effect of using `ipfs pin add -r <root-hash>` directly. i believe you can ask the cluster to do `ipfs pin add <single-hash>` and it will work just fine?
<kevina>
whyrusleeping: okay, i just left a comment, respond there and I will push a fix
<ani>
Sure, anything with a hash.
<jbenet>
ani: any block hash is a hash.
<ani>
But my suggestion was that the cluster operate on the block level.
<ani>
So open up any DAG and get the hashes.
<jbenet>
have you tried it?
cyanobacteria has joined #ipfs
<ani>
I don't mean just acting on hashes, I mean ipfs-cluster-cli add [file hash] should break down that hash into its base blocks before feeding it in
<jbenet>
ipfs add -r <file> -- then `ipfs refs -r <root-hash>` to list all hashes, take one of those hashes, then ask the cluster to `ipfs pin add <that-block-hash>`. (clean test that should be there anyway)
matoro has joined #ipfs
<ani>
Anyways, the next component was "claiming" blocks
<ani>
And then deciding what to replicate based on how much is claimed.
<jbenet>
Why should cluster do the chunking of files when go-ipfs can do that? (recall that ipfs-cluster is designed to operate on top of another ipfs node)
<ani>
Sure, but someone should do that.
<ani>
I.e. the cluster should never have to think about DAGs.
<ani>
Since by nature its responsibility is to replicate and serve blocks, not files.
<ani>
My use-case here is trying to replicate the architecture that backs Dropbox: store blocks in the cluster and indices of file systems in a database.
<jbenet>
if what you're saying is "i want the cluster agreement to distribute the dags across all shards, such that the content is well replicated and any shard can fail without taking out anything, and the total storage space of the cluster is aggregated" -- we are totally in agreement
<jbenet>
ani: sounds like that's a design decision you came up with, which is valid, but not necessarily the best way.
<lgierth>
also note that ipfs-cluster is *not* the final user-ready product with a UI
<ani>
That's basically it. To do that, it should operate on the block level and try to make block sizes uniform so that peers can look at a block and say: "I'd like to store 8MB of data. What is the least replicated stuff the cluster is responsible for"
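A minimal Go sketch of the claiming idea ani describes here — a peer with a fixed byte budget scanning the cluster's shared state and volunteering for the least-replicated blocks first. All names and types are illustrative, not the real ipfs-cluster API:

```go
package main

import (
	"fmt"
	"sort"
)

type blockInfo struct {
	Hash     string
	Size     int64 // bytes
	Replicas int   // replica count as seen in the shared log
}

// claimLeastReplicated picks blocks in ascending replica order,
// greedily, until the peer's byte budget is exhausted.
func claimLeastReplicated(state []blockInfo, budget int64) []string {
	sorted := make([]blockInfo, len(state))
	copy(sorted, state)
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].Replicas < sorted[j].Replicas
	})
	var claimed []string
	for _, b := range sorted {
		if b.Size > budget {
			continue
		}
		budget -= b.Size
		claimed = append(claimed, b.Hash)
	}
	return claimed
}

func main() {
	state := []blockInfo{
		{"Qm...a", 4 << 20, 3},
		{"Qm...b", 4 << 20, 1},
		{"Qm...c", 4 << 20, 2},
	}
	// "I'd like to store 8MB" -> claim the two least-replicated blocks.
	fmt.Println(claimLeastReplicated(state, 8<<20)) // [Qm...b Qm...c]
}
```

Uniform block sizes, as ani suggests, make the budget arithmetic trivial; with variable sizes this greedy pass is only an approximation.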
<ani>
Of course not :P
<jbenet>
ani: agreed, that's definitely _one_ use case.
<lgierth>
:)
<lgierth>
i'll be off to bed
<jbenet>
ani: a very important one, but not all use cases.
<ani>
Why would that not apply to any use-case?
<lgierth>
jbenet: let me know if you find anything about that account ;)
* lgierth
zzz
<ani>
Replicating whatever blocks are least found in the cluster?
<jbenet>
ani: consider a case where you want a cluster of full mirrors. a small cluster of k nodes that have full replicas. (another, different use case)
<ani>
Hm, so you mean that each one should be able to open the file.
<ani>
Is there a reason a simple pin ring can't do that?
<jbenet>
ani: consider another case where the cluster should shuffle blocks and use erasure coding to recover better from shard failures. that's different still (though in principle similar to the first option, just slower to pull data from)
<ani>
That's exactly the same case I'm giving.
<jbenet>
ani: lots of reasons, consider 10 different mutually distrusting entities, that want to back up an agreed-upon dataset.
<ani>
Peers look at the log, figure out what has the least redundancy, and get it.
ralphtheninja has joined #ipfs
<ani>
Right, that's possible in my model also.
<jbenet>
ani: they specifically want to replicate the entire dataset in their own hardware.
<dyce[m]>
yay for ipfs cluster
<ani>
Yeah, by nature of how mine works, peers decide what they will and will not replicate.
<dyce[m]>
will you be able to use the nodejs backend for some nodes?
<jbenet>
ani: but are forming part of a cluster to distribute bandwidth usage and serve a larger, broader community.
<ani>
Sure. So in my model, ipfs-cluster has a "Replicates" interface that defines the replication strategy--that is, it defines how it decides which blocks to accept.
<dyce[m]>
does 0.4.5 rc2 have faster pinning than rc1
<jbenet>
ani: i'm teasing apart similar, but substantially different use cases to convey why cluster should track "pins" the same way go-ipfs does, as graphs. (i've got more, too)
<ani>
You could have a "Complete" strategy which decides to take them all.
<ani>
You could have a "Contribute" strategy that makes yours a part of the whole with the shuffling.
<noffle>
jbenet: o/ still in town?
<jbenet>
consider a case, for example, where you want all nodes in the cluster to keep some sub-graph g1, and then devote the rest of storage to back up the rest together.
<ani>
The premise is that each peer decides which blocks it wants to take and then tells the cluster via the log.
<ani>
Sure, just add another strategy. It's pluggable :)
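The pluggable strategy idea from this exchange could be sketched roughly as below — a hypothetical interface and two of the strategies named above ("Complete" and "Contribute"). None of this is the real ipfs-cluster API, just an illustration of the shape:

```go
package main

import "fmt"

type Block struct {
	Hash     string
	Replicas int
}

// Strategy decides, per block, whether this peer volunteers to store it.
type Strategy interface {
	Accept(b Block) bool
}

// Complete mirrors everything (the "full replica" cluster case).
type Complete struct{}

func (Complete) Accept(Block) bool { return true }

// Contribute only takes blocks below a target replication factor,
// so small peers can contribute partial storage to the whole.
type Contribute struct{ Target int }

func (c Contribute) Accept(b Block) bool { return b.Replicas < c.Target }

func main() {
	blocks := []Block{{"Qm...a", 3}, {"Qm...b", 1}}
	var s Strategy = Contribute{Target: 2}
	for _, b := range blocks {
		fmt.Println(b.Hash, s.Accept(b)) // Qm...a false, Qm...b true
	}
}
```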
<ani>
When I say "peer" I mean client implementation.
<jbenet>
ani: sure, but other algorithms want to place peers s.t. the consensus dictates to them what they should store
dryajov1 has joined #ipfs
<jbenet>
ani: anyway i think we're naturally in agreement, it's just that you haven't seen or found the listing of use cases, or reasoning why graphs matter to end users.
<ani>
That's what I'm arguing against.
<jbenet>
hey noffle! o/ -- yes i am
<ani>
I think that the consensus shouldn't dictate who takes what.
<jbenet>
ani: for your use case that's fine.
<jbenet>
not for several of mine.
<ani>
Which use-case do you have that you don't think works with my peer-claiming strategy?
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dryajov1 has quit [Client Quit]
<noffle>
jbenet: I'm in oakland saturday for nodeschool @ npm
leeola has quit [Quit: Connection closed for inactivity]
kulelu88 has joined #ipfs
<jbenet>
a strawman example: a use case where a single org spins up 1000 machines to replicate data, expects high failure rates and network partitions, and wants the cluster's geographically distributed shards to rebalance themselves over time to best serve the demand in their regions
<jbenet>
in this case, the consensus process should figure out who should store what.
<jbenet>
based on a global perspective of what needs replicating, where demand is coming from, what partitions look like (when they come), etc.
<ani>
So why couldn't the log accomplish the global perspective?
<jbenet>
?? the consensus is the log
<ani>
yeah, my model deals with the consensus as well :P
<jbenet>
a shared log is a consensus--
<ani>
Right, I got that. I read the Raft spec earlier
<ani>
Basically, the difference is I have peers tell the log they want to replicate data, rather than all nodes deciding who to allocate to.
<ani>
That is my understanding at least.
<jbenet>
ani: either the consensus (or the log if you want to call it that) _determines_ who stores what, or peers (without _agreement_) decide for themselves
<jbenet>
you can't say peers don't use consensus to agree on what to store, if you also claim they share a log that tells them what to store
<ani>
The consensus doesn't tell them what to store. Hold on one sec
<noffle>
jbenet: you ought to drop by; maybe we can grab some food after
<jbenet>
you're saying peers volunteer to store certain things ("i will store x1 ... xk", "i will store xj ... xy", ...)
<ani>
Basically. Not volunteer but inform.
<jbenet>
noffle: awesome! i'd like to-- will see if i can make it.
<ani>
I.e. "I am storing x1"
<ani>
Which can inform other peers
<jbenet>
ani: sure, inform. volunteer. claim.
<ani>
Well volunteer implies there's another step to approval.
<jbenet>
ok, so next step. other peers look at that and calculate what is NOT stored, and try to store that. either by issuing claims, or sampling from a uniform distribution.
<jbenet>
(you can actually do this without informing, with good guarantees)
<ani>
Sure. They have a state so they good-faith believe it is accurate and decide which blocks have the fewest nodes on it.
tmg has joined #ipfs
<ani>
But different nodes can have different strategies.
<jbenet>
add to shared vocabulary: "volunteer" does not need acceptance by a leader or consensus. it's a "voluntary" action.
<jbenet>
(unilateral)
<jhulten>
If the originator of the block list provides a list of blocks to manage, then nodes could '
<ani>
The list of blocks is in the consensus.
dryajov2 has joined #ipfs
<ani>
Which gets added to by someone telling the leader to add it.
<ani>
(Which means ipfs-cluster probably wants auth)
<jhulten>
Okay and there is a replication count?
<jbenet>
ani: great, so that is _one_ form of consensus-driven sharding. more formally: the complete set of allocations of blocks to each peer is ultimately determined by distributed agreement.
<ani>
Yeah, that's the part about claiming.
<ani>
jbenet: The allocation to each peer is determined by the peer is what my model means.
<ani>
Oh my bad.
<jhulten>
Is the 'consensus' a hash at its core? With log indexes etc.
<ani>
Sorry, pain meds. Yeah, that's it.
<jbenet>
ani: no, that's incorrect. your description requires a log to inform everyone else
<jbenet>
jhulten: well it's actually a blockchain.
<jbenet>
jhulten with a head, yes.
<jbenet>
(head = a hash or block, depending on your vocab)
<jbenet>
consensus + a log that happens to be a hash-chain (good idea) = a blockchain.
<jbenet>
(not all blockchains use economic consensus)
<jbenet>
(or "rational" consensus)
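jbenet's "consensus + a log that happens to be a hash-chain = a blockchain" can be shown with a minimal hash-chained log: each entry commits to the previous head, so a single head hash authenticates the whole history. A toy sketch, not any real IPFS structure:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

type entry struct {
	Prev [32]byte // hash of the previous entry (zero for genesis)
	Data string   // e.g. "peerA pins Qm...1"
}

func hash(e entry) [32]byte {
	return sha256.Sum256(append(e.Prev[:], []byte(e.Data)...))
}

type chain struct {
	head    [32]byte // the advertised head hash
	entries []entry
}

// append links a new entry to the current head and advances the head.
func (c *chain) append(data string) {
	e := entry{Prev: c.head, Data: data}
	c.entries = append(c.entries, e)
	c.head = hash(e)
}

// verify recomputes the chain and checks it ends at the advertised head,
// so any tampering with history changes the head and is detected.
func (c *chain) verify() bool {
	var prev [32]byte
	for _, e := range c.entries {
		if e.Prev != prev {
			return false
		}
		prev = hash(e)
	}
	return prev == c.head
}

func main() {
	var c chain
	c.append("peerA pins Qm...1")
	c.append("peerB pins Qm...2")
	fmt.Println(c.verify()) // true
	c.entries[0].Data = "tampered"
	fmt.Println(c.verify()) // false
}
```

As jbenet notes, the consensus mechanism that orders appends is a separate concern; nothing here requires economic ("rational") consensus.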
<jhulten>
jbenet: dht
<jhulten>
?
<ani>
So which type of cluster does ipfs-cluster want to target? Because this seems to be a classic case of abstraction, trying to fit every use-case in the same product.
<jbenet>
jhulten: no, a dht is not consensus, nor a hash-chain. (though there are some dht's that _use_ consensus or hash-chains)
dryajov has quit [Ping timeout: 255 seconds]
<ani>
Or are you going to provide swappable "strategies"
<jbenet>
ani: not really. modularize it, and as you claimed, different strategies can be plugged in
<jbenet>
we only write a few, but let others write more.
dryajov2 has quit [Ping timeout: 240 seconds]
<jbenet>
the hard part, by the way, is getting all the rebalancing to work well, under byzantine fault tolerance assumptions, and quickly.
<ani>
Alright, so my understanding now is that it's basically the same model as mine. So is there anything I can do to contribute? hsanjuan
<jbenet>
ani: to be clear, my main interest is in targeting "a set of use cases of growing scale", meaning, right now i want to target small clusters and get the model right, then work to scale it up to thousands, then millions of nodes.
<ani>
Alright, I've got to get some shuteye but let's continue this discussion.
ani has quit [Quit: Page closed]
<jhulten>
elastic search has some interesting practices for shard management that may give ideas, especially around constraints (don't load all copies of this block in the same datacenter/with the same user/etc)
<jbenet>
hsanjuan: maybe let's talk about the roadmap and see how we can enable others, like ani, to contribute more effectively.
dryajov has joined #ipfs
<jbenet>
jhulten: great, do you have a pointer to their algorithms?
infinity0 has quit [Ping timeout: 240 seconds]
infinity0_ has joined #ipfs
infinity0 has joined #ipfs
infinity0_ is now known as infinity0
infinity0 has quit [Changing host]
tmg has quit [Ping timeout: 252 seconds]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
clownpriest has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
infinity0 has joined #ipfs
<jhulten>
done:
infinity0 has quit [Remote host closed the connection]
<jbenet>
thanks!
dryajov has quit [Ping timeout: 255 seconds]
<jhulten>
Having objects and blocks in a CAS at least means the individual files won't get borked in a split brain scenario, but the pointers from human readable names is another matter. Thoughts?
infinity0 has joined #ipfs
<jbenet>
jhulten: the concise answer is: distribute secure logs of name records
<jbenet>
(lots of variability as to how to do that depending on your model)
infinity0 has quit [Remote host closed the connection]
<jhulten>
So in a blockchain, a split brain would lead to an eventually approved chain and a chain of uncles. The work on the uncle chain would be lost, yes?
onabreak has joined #ipfs
<jhulten>
Much to wrap my brain around.
<jbenet>
jhulten: in the first blockchains yes. not in future ones.
dryajov2 has quit [Read error: Connection reset by peer]
<jhulten>
Anything I can add to MY reading list on that one?
reit has quit [Ping timeout: 240 seconds]
<jhulten>
What would the expected behavior of the minority side of the split be?
dryajov has joined #ipfs
<jbenet>
jhulten: not yet ;)
maciejh has quit [Ping timeout: 276 seconds]
stevenaleach has joined #ipfs
Foxcool has joined #ipfs
<jhulten>
Each block in the chain only contains changes since the last block, yes? Or is there plan to have keyframe type blocks that contain the whole log/hash state so you can truncate the chain?
<jhulten>
Or is there one chain for all clusters?
AkhILman has joined #ipfs
matoro has quit [Ping timeout: 255 seconds]
se3000 has joined #ipfs
chased1k has quit [Remote host closed the connection]
aquentson1 has quit [Ping timeout: 240 seconds]
Foxcool has quit [Ping timeout: 258 seconds]
cemerick has quit [Ping timeout: 255 seconds]
Foxcool has joined #ipfs
chased1k has joined #ipfs
fleeky__ has joined #ipfs
HostFat_ has joined #ipfs
fleeky_ has quit [Ping timeout: 258 seconds]
HostFat__ has quit [Ping timeout: 252 seconds]
tilgovi has joined #ipfs
amiller has quit [Ping timeout: 252 seconds]
brendyyn has quit [Ping timeout: 252 seconds]
mmuller_ has quit [Ping timeout: 252 seconds]
cehteh has quit [Ping timeout: 252 seconds]
mmuller has joined #ipfs
robogoat_ is now known as robogoat
aeternity has quit [Ping timeout: 240 seconds]
arkimedes has joined #ipfs
Guest94103 has joined #ipfs
matoro has joined #ipfs
cehteh has joined #ipfs
wallacoloo____ has joined #ipfs
tilgovi has quit [Ping timeout: 255 seconds]
brendyn has joined #ipfs
tilgovi has joined #ipfs
dryajov1 has joined #ipfs
dryajov1 has quit [Client Quit]
ygrek has quit [Ping timeout: 255 seconds]
brendyn has quit [Ping timeout: 252 seconds]
matoro has quit [Ping timeout: 255 seconds]
brendyn has joined #ipfs
dbri has quit [Remote host closed the connection]
dbri2 has joined #ipfs
se3000 has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
realisation has joined #ipfs
matoro has joined #ipfs
realisation has quit [Client Quit]
mguentner has quit [Quit: WeeChat 1.6]
mguentner has joined #ipfs
shamb0t has joined #ipfs
shamb0t has quit [Remote host closed the connection]
shamb0t has joined #ipfs
shamb0t has quit [Remote host closed the connection]
shamb0t has joined #ipfs
tilgovi has quit [Quit: No Ping reply in 180 seconds.]
kulelu88 has quit [Quit: Leaving]
tilgovi has joined #ipfs
shamb0t has quit [Ping timeout: 258 seconds]
arkimedes has quit [Ping timeout: 240 seconds]
mguentner2 has joined #ipfs
pfrazee has quit [Remote host closed the connection]
mguentner has quit [Ping timeout: 255 seconds]
chased1k has quit [Remote host closed the connection]
bolton has joined #ipfs
deetwelve has quit [Ping timeout: 240 seconds]
robattila256 has quit [Ping timeout: 260 seconds]
aquentson has joined #ipfs
reit has joined #ipfs
dryajov1 has joined #ipfs
dryajov1 has quit [Client Quit]
aquentson has quit [Ping timeout: 240 seconds]
aquentson has joined #ipfs
shamb0t has joined #ipfs
chris613 has quit [Quit: Leaving.]
dryajov1 has joined #ipfs
dryajov1 has quit [Client Quit]
infinity0 has joined #ipfs
Charley has quit [Read error: Connection reset by peer]
Catriona has joined #ipfs
ecloud is now known as ecloud_wfh
tilgovi has quit [Remote host closed the connection]
MikeFair has joined #ipfs
tilgovi has joined #ipfs
tilgovi has quit [Ping timeout: 255 seconds]
anewuser_ has joined #ipfs
aquentson1 has joined #ipfs
iinaj_ has joined #ipfs
sneak_ has joined #ipfs
wallacoloo_____ has joined #ipfs
jnagro_ has joined #ipfs
Akaibu_ has joined #ipfs
dvim_ has joined #ipfs
bengl_ has joined #ipfs
risk__ has joined #ipfs
stoopkid_ has joined #ipfs
Flyingmana__ has joined #ipfs
tilgovi has joined #ipfs
Captain_Beezay_ has joined #ipfs
m3s_ has joined #ipfs
mvollra7h has joined #ipfs
chungy_ has joined #ipfs
bren2010 has joined #ipfs
Foxcool_ has joined #ipfs
stoopkid has quit [Ping timeout: 240 seconds]
jmelis has quit [Ping timeout: 240 seconds]
Captain_Beezay has quit [Ping timeout: 240 seconds]
Akaibu has quit [Ping timeout: 240 seconds]
sneak has quit [Ping timeout: 240 seconds]
iinaj has quit [Ping timeout: 240 seconds]
Flyingmana_ has quit [Ping timeout: 240 seconds]
risk has quit [Ping timeout: 240 seconds]
bengl has quit [Ping timeout: 240 seconds]
dvim has quit [Ping timeout: 240 seconds]
m3s has quit [Ping timeout: 240 seconds]
gde33 has quit [Ping timeout: 240 seconds]
aquentson has quit [Ping timeout: 240 seconds]
Foxcool has quit [Ping timeout: 240 seconds]
captain_morgan has quit [Ping timeout: 240 seconds]
chungy has quit [Ping timeout: 240 seconds]
wallacoloo____ has quit [Ping timeout: 240 seconds]
jnagro has quit [Ping timeout: 240 seconds]
anewuser has quit [Ping timeout: 240 seconds]
s_kunk has quit [Ping timeout: 240 seconds]
mvollrath has quit [Ping timeout: 240 seconds]
bren2010_ has quit [Ping timeout: 240 seconds]
mvollra7h is now known as mvollrath
jmelis has joined #ipfs
Akaibu_ is now known as Akaibu
gde33 has joined #ipfs
m3s_ is now known as m3s
m3s has joined #ipfs
m3s has quit [Changing host]
sneak_ is now known as sneak
jnagro_ is now known as jnagro
iinaj_ is now known as iinaj
stoopkid_ is now known as stoopkid
bengl_ is now known as bengl
risk__ is now known as risk
Flyingmana__ is now known as Flyingmana_
dvim_ is now known as dvim
s_kunk has joined #ipfs
captain_morgan has joined #ipfs
luizirber has quit [Ping timeout: 255 seconds]
arkimedes has joined #ipfs
ygrek has joined #ipfs
chriscool has joined #ipfs
shamb0t_ has joined #ipfs
shamb0t has quit [Read error: Connection reset by peer]
luizirber has joined #ipfs
ShalokShalom has joined #ipfs
john1 has quit [Ping timeout: 245 seconds]
chriscool has quit [Ping timeout: 258 seconds]
muvlon has quit [Ping timeout: 245 seconds]
chriscool has joined #ipfs
tilgovi has quit [Ping timeout: 255 seconds]
arkimedes has quit [Ping timeout: 258 seconds]
<MikeFair>
Hey all; I've registered my DNS server with a TXT entry to do ipns; is there any way for me to publish to a Directory label "under" that namespace instead of overwriting it entirely?
<MikeFair>
For instance, I want something like "Published to <MYIPNSADDRESS>/MySubdir: /ipfs/SomePublishedFiles"
<MikeFair>
Where SomePublishedFiles is an ipfs address
muvlon has joined #ipfs
<H3ndr1k[m]>
I am not 100% sure, but I think it was the -p param of ipfs publish.
<H3ndr1k[m]>
But it could only be a pull request.
<H3ndr1k[m]>
I asked the same a few weeks ago. ;)
<MikeFair>
Yeah; docs don't reference a -p
<MikeFair>
Just the -k :)
<MikeFair>
(doesn't mean it won't work)
<MikeFair>
unrecognized option ;)
<MikeFair>
I'm running the latest release candidate; so I guess there it is
dignifiedquire has joined #ipfs
bastianilso has joined #ipfs
stevenaleach has quit [Quit: Leaving]
maciejh has joined #ipfs
maciejh has quit [Remote host closed the connection]
Catriona has quit [Remote host closed the connection]
chungy has joined #ipfs
ylp has joined #ipfs
ulrichard has joined #ipfs
rendar has joined #ipfs
wallacoloo_____ has quit [Quit: wallacoloo_____]
atrapado_ has joined #ipfs
shamb0t has joined #ipfs
shamb0t_ has quit [Read error: Connection reset by peer]
chriscool has quit [Ping timeout: 264 seconds]
ShalokShalom_ has joined #ipfs
ShalokShalom has quit [Ping timeout: 245 seconds]
ygrek has quit [Ping timeout: 252 seconds]
gde33 has quit [Remote host closed the connection]
shamb0t has quit [Remote host closed the connection]
shamb0t has joined #ipfs
gde33 has joined #ipfs
shamb0t has quit [Ping timeout: 240 seconds]
G-Ray_ has joined #ipfs
ShalokShalom_ is now known as ShalokShalom
chriscool has joined #ipfs
chriscool has left #ipfs [#ipfs]
s_kunk has quit [Ping timeout: 255 seconds]
aquentson has joined #ipfs
aquentson1 has quit [Ping timeout: 255 seconds]
chriscool has joined #ipfs
chriscool has quit [Client Quit]
Qwazerty has quit [Quit: WeeChat 1.6]
Qwazerty has joined #ipfs
A124 has quit [Ping timeout: 240 seconds]
tmg has joined #ipfs
ygrek has joined #ipfs
A124 has joined #ipfs
tmg has quit [Ping timeout: 276 seconds]
s_kunk has joined #ipfs
s_kunk has joined #ipfs
s_kunk has quit [Changing host]
s_kunk has quit [Read error: Connection reset by peer]
rickygee has joined #ipfs
tmg has joined #ipfs
gts has joined #ipfs
<gts>
are the ipfs gateways down?
<gts>
they seem to be too slow and sometimes not reachable
<gts>
ah forgive me looks like they are outdated files
gts has quit []
tclass has joined #ipfs
Mizzu has joined #ipfs
dryajov has quit [Read error: Connection reset by peer]
dryajov has joined #ipfs
Akaibu has quit [Quit: Connection closed for inactivity]
locusf_ has joined #ipfs
cemerick has joined #ipfs
locusf has quit [Ping timeout: 240 seconds]
rickygee has quit [Quit: see ya latahz]
PurgingPanda_[m] has joined #ipfs
espadrine has joined #ipfs
maxlath has joined #ipfs
ylp has quit [Quit: Leaving.]
ylp has joined #ipfs
ebarch has quit [Remote host closed the connection]
ebarch has joined #ipfs
shamb0t has joined #ipfs
chriscool has joined #ipfs
jkilpatr has quit [Ping timeout: 255 seconds]
john1 has joined #ipfs
shamb0t has quit [Remote host closed the connection]
shamb0t has joined #ipfs
maciejh has joined #ipfs
shamb0t has quit [Ping timeout: 258 seconds]
locusf_ is now known as locusf
kenshyx has joined #ipfs
ShalokShalom has quit [Read error: No route to host]
ShalokShalom has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
jkilpatr has joined #ipfs
Foxcool_ has quit [Ping timeout: 256 seconds]
Foxcool_ has joined #ipfs
ShalokShalom has quit [Remote host closed the connection]
Neiman[m] has left #ipfs ["User left"]
shamb0t has joined #ipfs
ensrettet has joined #ipfs
ensrettet has quit [Client Quit]
ShalokShalom has joined #ipfs
realisation has joined #ipfs
cyanobacteria has quit [Ping timeout: 255 seconds]
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
maxlath has joined #ipfs
cehteh has quit [Ping timeout: 258 seconds]
Akaibu has joined #ipfs
tmg has quit [Ping timeout: 276 seconds]
zopsi has quit [Ping timeout: 252 seconds]
aquentson1 has joined #ipfs
Foxcool_ has quit [Ping timeout: 240 seconds]
aquentson has quit [Ping timeout: 240 seconds]
Foxcool_ has joined #ipfs
kenshyx has quit [Remote host closed the connection]
aquentson has joined #ipfs
cehteh has joined #ipfs
aquentson1 has quit [Ping timeout: 256 seconds]
chriscool has quit [Ping timeout: 256 seconds]
maxlath has quit [Ping timeout: 255 seconds]
realisation has joined #ipfs
spbike[m] has joined #ipfs
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
bastianilso has quit [Quit: bastianilso]
suttonwilliamd has joined #ipfs
maxlath has joined #ipfs
Adbray has quit [Quit: Quit]
robattila256 has joined #ipfs
iovoid has quit [Remote host closed the connection]
iovoid has joined #ipfs
gillisig has left #ipfs ["User left"]
bastianilso has joined #ipfs
pfrazee has joined #ipfs
matoro has quit [Ping timeout: 240 seconds]
shizy has joined #ipfs
ensrettet has joined #ipfs
ensrettet has quit [Client Quit]
leeola has joined #ipfs
tabrath has quit [Ping timeout: 258 seconds]
ashark has joined #ipfs
muvlon has quit [Ping timeout: 245 seconds]
cemerick has quit [Ping timeout: 255 seconds]
cemerick has joined #ipfs
<lgierth>
kyledrake: got suggestions for a dns hoster which supports ANAME/ALIAS records?
<lgierth>
there's a couple of them, just thought you might have experienced a few of them
<kythyria[m]>
Those are a thing?
<lgierth>
yeah, because you can't have CNAME on the root of a zone
<lgierth>
so you can't CNAME example.com to gateway.ipfs.io, which is a biiiiit annoying
maciejh has quit [Ping timeout: 258 seconds]
<lgierth>
so people came up with a mechanism where the authoritative nameserver of the zone does the lookup, but transparently, instead of passing the CNAME record as the response and letting the resolver down the road do the lookup
muvlon has joined #ipfs
<kythyria[m]>
And 90% of that is because it's easier than getting browser developers to stop adding shiny mud to the ball for five minutes and implement SRV records D:
<Kubuxu>
kythyria[m]: the SRV RFC explicitly says that it should be added to a protocol only when a new breaking release is done
<Kubuxu>
this is why HTTP doesn't have it so far
<kythyria[m]>
D:
<kythyria[m]>
So we'll never have it for HTTP because browser devs won't ever break compatibility with garbage from 1995.
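For context on the SRV aside, this is roughly what such a record looks like in a zone file (a hypothetical `_http._tcp` usage; browsers do not actually consult SRV for HTTP):

```zone
; SRV fields: priority weight port target (RFC 2782).
; If browsers honored SRV for HTTP, example.com could steer
; clients across hosts and ports without records at the apex:
_http._tcp.example.com. 3600 IN SRV 10 60 8080 a.example.com.
_http._tcp.example.com. 3600 IN SRV 10 40 8080 b.example.com.
```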
<lgierth>
hosts=(earth pluto venus mercury neptune uranus jupiter); for h in ${hosts[@]}; do ../../misc/doctl compute domain records create orbit.chat --record-name '@' --record-type AAAA --record-data $(dig +short AAAA $h.i.ipfs.io); done
Akaibu has quit [Quit: Connection closed for inactivity]
<kythyria[m]>
You'd think they'd have allowed it with HTTP/2 though or something. Incompatible wire format!
ulrichard has quit [Remote host closed the connection]
<Kubuxu>
No, as HTTP/2 is HTTP 1.2, not HTTP 2.0
<Kubuxu>
the formats are compatible
<Kubuxu>
HTTP/2 just extends it
ckwaldon has joined #ipfs
wmoh has joined #ipfs
<kythyria[m]>
They're not really compatible wire formats. At most they start with handshakes that will allow implementations of either to nope instead of being confused.
Foxcool_ has quit [Ping timeout: 240 seconds]
ylp has quit [Quit: Leaving.]
gde33 has quit [Remote host closed the connection]
maciejh has joined #ipfs
cemerick has quit [Ping timeout: 255 seconds]
cemerick has joined #ipfs
gde33 has joined #ipfs
shamb0t_ has joined #ipfs
shamb0t has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
aquentson has quit [Read error: Connection reset by peer]
aquentson has joined #ipfs
<seharder>
@dignifiedquire: do you have time for another question
<lgierth>
cool do you wanna be the DRI for our dns? :)
<kyledrake>
Sure. What's a DRI?
<Kubuxu>
if we want to do it, I can, but I would have to know exactly what we want to do with them and re-allocate time.
<Kubuxu>
I think it was to me.
<Kubuxu>
DRI = directly responsible individual
Encrypt has joined #ipfs
ShalokShalom has quit [Read error: Connection reset by peer]
<lgierth>
ideally we have a bunch of zone files in git and a tool to upload them to various providers, and one of these could be our own dns if we want to run it ourselves
ShalokShalom has joined #ipfs
<lgierth>
ALIAS/ANAME is just useful to have CNAME-like semantics on the root of a zone
<lgierth>
today for example i created 14 A/AAAA records for orbit.chat
<lgierth>
scripted, but still it'd be awesome if we could just ALIAS orbit.chat => gateway.ipfs.io
<kyledrake>
DNSimple is fine. No complaints. I'm more comfortable paying them than using Cloudflare's plan since it's designed for what you're using it for
<kyledrake>
Yeesh, their business plan is expensive though.
<kyledrake>
(you won't get anything cheaper elsewhere though)
<Kubuxu>
This quote on PowerDNS site: "It is a book about a Spanish guy called Manual. You should read it."
<lgierth>
don't need vanity nameserver and concierge, so pro plan would work too
<lgierth>
haha
<Kubuxu>
whyrusleeping: release 0.4.5 if you want fast CI
<kyledrake>
I'm using the pro for Neocities. Had no issues yet.
<kyledrake>
I've pondered running my own DNS on the anycast network.. just havent had a really compelling reason to do it.
<lgierth>
yeah that'd be the only interesting thing about it
<whyrusleeping>
Kubuxu: <3
<musicmatze>
hey ipfs people! ... I guess it was asked before, but did you hear about keybase? And if yes, can someone explain the difference from IPFS?
<whyrusleeping>
Kubuxu: one last PR since you pointed it out
<fil_redpill>
so I understand ipfs-pack is like bittorrent
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<fil_redpill>
there's an element in the tutorial that I'm not sure is right
<fil_redpill>
when it says that changing a pack's content removes the old data from the network, I think that's not totally correct: it just removes your "original" copy, but if another computer has it, it's still available to everyone
<whyrusleeping>
^ correct
<ShalokShalom>
oh fine
<ShalokShalom>
thanks a lot
<ShalokShalom>
great
Lostfile has joined #ipfs
<fil_redpill>
hmmm, re-reading them, the sentences are correct
<fil_redpill>
I just misread them
maxlath has quit [Ping timeout: 256 seconds]
<fil_redpill>
if you remove or modify any of the dataset contents the old information will no longer be available from your ipfs-pack node.
<fil_redpill>
I just slipped over the last part
<Lostfile>
hi
<fil_redpill>
maybe add "but other nodes might already have a copy and will be able to serve it" or somthing like that
<fil_redpill>
anyway it's good — was one of the main barriers for me especially on a tiny HD
<whyrusleeping>
fil_redpill: yeah, thats a clearer phrasing
<fil_redpill>
the other thing I'm still debating with myself is the lack of mime-type and filenames
<whyrusleeping>
you can get filenames, just add with -w
<fil_redpill>
ok I'll check again^^
<whyrusleeping>
the mime-types stuff is still being discussed
shamb0t has joined #ipfs
<Lostfile>
i wonder if we will be able to use namecoin names for our sites or some thing like that
<Lostfile>
well theres ipns but thats still sort of uses a hash
<lgierth>
and namecoin name will point to a hash
<lgierth>
you'll always point to a hash, no matter what naming system you use :)
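The "name points to a hash" pattern lgierth describes is what DNSLink already does in plain DNS; a sketch (the name and path are placeholders, not a real zone or CID):

```zone
; DNSLink: a TXT record on the _dnslink subdomain maps a DNS name
; to an IPFS path; gateways and name resolution follow it.
_dnslink.example.com. 3600 IN TXT "dnslink=/ipfs/<content-hash>"
```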
pfrazee has quit [Read error: Connection reset by peer]
pfrazee has joined #ipfs
galois_d_ has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
<iso>
basically the disk partition gets filled up and then I get: 18:44:06.895 ERROR commands/h: err: mkdir /ip/blocks/122075c0: no space left on device handler.go:287
<iso>
ok log file not lof file :-)
<lgierth>
please only add stuff to ipfs when you're sure you have the permission to do so
<lgierth>
not sure about your logs
<lgierth>
it's hard to tell what's going on
<lgierth>
the "could not resolve" messages are okay
<iso>
try grep for 'no space'
<lgierth>
well that just means you're out of diskspace
<iso>
yes but I found no way of saving my system, I had to do a reinstall.
espadrine has quit [Ping timeout: 240 seconds]
aquentson1 has joined #ipfs
aquentson has quit [Ping timeout: 255 seconds]
draynium has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
<iso>
Ok going to bed.
iso has quit [Quit: Page closed]
ashark has quit [Ping timeout: 255 seconds]
wmoh has quit [Ping timeout: 240 seconds]
slothbag has joined #ipfs
pfrazee has joined #ipfs
palkeo has quit [Quit: Konversation terminated!]
AtnNn has quit [Ping timeout: 240 seconds]
matoro has quit [Ping timeout: 240 seconds]
AtnNn has joined #ipfs
<slothbag>
hey guys, is there an easy way to calculate a DAG multihash client side in the browser?
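slothbag's question goes unanswered in this log; for background, a multihash is just a digest prefixed with a hash-function code and a digest length. A minimal sketch in Python (illustrative only: this hashes raw bytes, not the UnixFS/dag-pb encoding that `ipfs add` actually hashes, so it will not match a CID printed by the daemon):

```python
import hashlib

# base58btc alphabet used by IPFS for v0 CIDs
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(raw: bytes) -> str:
    n = int.from_bytes(raw, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # one leading '1' per leading zero byte
    return "1" * (len(raw) - len(raw.lstrip(b"\x00"))) + out

def multihash_sha256(data: bytes) -> str:
    # multihash = <hash fn code><digest length><digest>;
    # sha2-256 is code 0x12 with a 0x20-byte digest, which is why
    # every base58-encoded v0 hash starts with "Qm".
    digest = hashlib.sha256(data).digest()
    return b58encode(bytes([0x12, len(digest)]) + digest)

print(multihash_sha256(b"hello"))  # a 46-character string starting with "Qm"
```

In a browser the same structure applies; the js-multihash family of modules packages it for JavaScript.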
matoro has joined #ipfs
muvlon has quit [Quit: Leaving]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
leeola has quit [Quit: Connection closed for inactivity]
Oatmeal has joined #ipfs
<dyce[m]>
lgierth: do you say that because unauthorized distribution would hurt ipfs's reputation?