ZaZ has quit [Read error: Connection reset by peer]
ckwaldon1 has joined #ipfs
ckwaldon has quit [Ping timeout: 260 seconds]
ckwaldon1 is now known as ckwaldon
fleeky has quit [Ping timeout: 245 seconds]
ianopolous has quit [Ping timeout: 268 seconds]
<flyingzumwalt>
Ani I'm sorry I missed you earlier. I've been traveling today. It's late here in Europe now. Can I get back to you tomorrow with info/answers?
<flyingzumwalt>
You pose a good question about the advantages of IPFS over bittorrent for your use case. I've asked people to weigh in on the best answers for you here: https://github.com/ipfs/notes/issues/208
mrpoopyb1 has quit [Ping timeout: 245 seconds]
<flyingzumwalt>
Off the top of my head, the primary advantage is that you're not limited to building torrent files. Instead, you can build DAGs that contain a dynamic, growing amount of data. This lets you do things like store multiple versions of a dataset without storing duplicate blocks, store metadata about the datasets along with the datasets themselves, etc.
baffo32 has joined #ipfs
mrpoopyb1 has joined #ipfs
ivo_ has quit [Ping timeout: 245 seconds]
anewuser has joined #ipfs
pfrazee has quit [Read error: Connection reset by peer]
SuperPhly has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
mguentner2 has joined #ipfs
mguentner has quit [Ping timeout: 248 seconds]
_whitelogger has joined #ipfs
gully-foyle has quit [Ping timeout: 264 seconds]
gully-foyle has joined #ipfs
john4 has joined #ipfs
john has joined #ipfs
john is now known as Guest36523
john3 has quit [Ping timeout: 260 seconds]
john4 has quit [Ping timeout: 260 seconds]
pfrazee has quit [Remote host closed the connection]
wallacoloo_____ has joined #ipfs
chris613 has quit [Quit: Leaving.]
Furao has quit [Ping timeout: 258 seconds]
Encrypt has joined #ipfs
Encrypt has quit [Client Quit]
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
vtomole has quit [Ping timeout: 260 seconds]
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 268 seconds]
Furao has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
cyanobacteria has joined #ipfs
lepisma[m] has left #ipfs ["User left"]
ulrichard has joined #ipfs
Kane` has quit [Ping timeout: 245 seconds]
Kane` has joined #ipfs
gpestana has joined #ipfs
rendar has joined #ipfs
cyanobacteria has quit [Ping timeout: 245 seconds]
cyanobacteria has joined #ipfs
tabrath has joined #ipfs
muvlon has quit [Ping timeout: 240 seconds]
uzyn has joined #ipfs
ygrek_ has joined #ipfs
muvlon has joined #ipfs
lkcl has joined #ipfs
Kane` has quit [Ping timeout: 240 seconds]
Kane` has joined #ipfs
Kane` has quit [Client Quit]
wallacoloo_____ has quit [Quit: wallacoloo_____]
Guest41990[m] has joined #ipfs
Mizzu has joined #ipfs
cyanobacteria has quit [Ping timeout: 258 seconds]
maxlath has joined #ipfs
infinity0 has quit [Ping timeout: 252 seconds]
infinity0_ has joined #ipfs
infinity0 has joined #ipfs
infinity0_ is now known as infinity0
uzyn has quit [Quit: uzyn]
cyanobacteria has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
uzyn has joined #ipfs
uzyn has quit [Client Quit]
gpestana has quit [Ping timeout: 245 seconds]
infinity0 has quit [Remote host closed the connection]
cyanobacteria has quit [Ping timeout: 256 seconds]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
Sophrosyne has joined #ipfs
infinity0 has joined #ipfs
robattila256 has joined #ipfs
test has joined #ipfs
test has quit [Client Quit]
gpestana has joined #ipfs
PseudoNoob has joined #ipfs
espadrine has joined #ipfs
frmendes has joined #ipfs
maxlath has quit [Ping timeout: 252 seconds]
pfrazee has joined #ipfs
pfrazee has quit [Ping timeout: 265 seconds]
maxlath has joined #ipfs
Guest36523 has quit [Ping timeout: 260 seconds]
ebarch has quit [Remote host closed the connection]
ebarch has joined #ipfs
infinity0 has quit [Ping timeout: 265 seconds]
infinity0 has joined #ipfs
Furao has quit [Ping timeout: 248 seconds]
frmendes has quit [Ping timeout: 258 seconds]
ygrek_ has quit [Ping timeout: 256 seconds]
Kingsquee has quit [Quit: Konversation terminated!]
infinity0 has quit [Ping timeout: 246 seconds]
infinity0 has joined #ipfs
ianopolous has joined #ipfs
<DokterBob>
Hey whyrusleeping Kubuxu_ , want to meet at the 33c3?
<DokterBob>
If so, where might I find you? :)
<Kubuxu>
DokterBob: I am not at CCC :|, but whyrusleeping, lgierth and victorbjelkholm are ^^
<DokterBob>
Ah, sorry. ;)
<lgierth>
i'm not
<lgierth>
but whyrusleeping hsanjuan and victorbjelkholm[ are
<lgierth>
and dignifiedquire
<victorbjelkholm>
dignifiedquire is here as well?
<victorbjelkholm>
hsanjuan went home yesterday
<dignifiedquire>
lgierth: I'm still not there oO
<dignifiedquire>
why do you keep saying that I am :D
<victorbjelkholm>
dignifiedquire if you say something enough times, eventually it becomes true? :D
<victorbjelkholm>
wishful thinking
<dignifiedquire>
:D
<dignifiedquire>
that might be
<lgierth>
haha
<lgierth>
sorry
<lgierth>
i was really sure
<ansuz>
:D
cemerick has joined #ipfs
jedahan has joined #ipfs
<jedahan>
hallo from 33c3 :D
Caterpillar has joined #ipfs
<musicmatze[m]>
Hi
<victorbjelkholm>
Hello!
<victorbjelkholm>
where are all the ipfsers at?
<victorbjelkholm>
I'm sitting at one of the food-courts right now
* ansuz
in paris
* brimstone
in berlin
cemerick has quit [Ping timeout: 258 seconds]
<jedahan>
I am having trouble starting orbit, missing config file
mguentner2 has quit [Ping timeout: 258 seconds]
<jedahan>
should I just copy my ~/.ipfs/config to where it is looking?
tabrath has quit [Remote host closed the connection]
<jedahan>
or point orbit to ~/.ipfs instead of ~jedahan/Application\ Support/orbit/ipfs ?
tabrath has joined #ipfs
<DokterBob>
victorbjelkholm: also in the foodcourt, dinner room
cemerick has joined #ipfs
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jedahan has joined #ipfs
<Caterpillar>
Hi, I just started playing with ipfs. I pinned a 1.2 GB file using the "ipfs add Fedora-KDE-Live-x86_64-25-1.3.iso" command. Now from another machine I am trying to do "ipfs get /ipfs/QmSK....(foo)/Fedora-KDE-Live-x86_64-25-1.3.iso" but the download is in the range of 0-100kb/s while the "server" has a 100 Mbit/s bandwidth. Firewalls are fine. What can be the problem? Thank you
victorbjelkholm[ is now known as victor_mobile
cemerick has quit [Ping timeout: 248 seconds]
<victorbjelkholm>
yeah, had the same issue yesterday when sending a file to brodo, peaked around 120kb/s on a 120mb file
<victorbjelkholm>
whyrusleeping, we can try to reproduce it if you're around
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
muvlon has quit [Ping timeout: 240 seconds]
<jedahan>
got orbit running, anyone in any channels here?
<jedahan>
only shows 1 peer...
<jedahan>
oh i lied, i see 2 now :D
<Caterpillar>
victorbjelkholm: it's very strange
gpestana has quit [Ping timeout: 268 seconds]
<victorbjelkholm>
maybe Kubuxu have some ideas
<victorbjelkholm>
very strange, we were directly connected on the same wifi as well
<Kubuxu>
interesting
<Kubuxu>
no idea right now
<Kubuxu>
I might try replicating it latter
<victorbjelkholm>
yeah, should be fairly simple to reproduce, just try a transfer between two computers on the same network
<Caterpillar>
if you want to try I can give you the full hash
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
infinity0 has quit [Remote host closed the connection]
tabrath has quit [Ping timeout: 258 seconds]
infinity0 has joined #ipfs
mib_kd743naq has joined #ipfs
<mib_kd743naq>
Kubuxu: btw, most of the matrix is generated entirely procedurally, I just have to manually change V to X here and there
<mib_kd743naq>
Kubuxu: so if you want a dataset with the multihash id being > 127 (e.g. blake) - it's relatively easy to produce
<Kubuxu>
awesome, it will be useful in future
<Kubuxu>
we are delaying blake for now
<mib_kd743naq>
nod, ping me on the github issue whenever the need arises ( even after the issue is long closed )
<mib_kd743naq>
I tried to build js-ipfs, but on cat I keep getting things like Error: ENOENT: no such file or directory, open '/home/ipfs/.jsipfs/blocks/CIQIO/CIQIOUUR4OVHD53VXXJ7CPTERB7DZ4ZDOGY5HDWUN2MIKIVZNQX5BQQ.data'
<mib_kd743naq>
what am I missing?
<victorbjelkholm>
mib_kd743naq, could you try with a new repo? I think you get that error when it's a mixed go-ipfs/js-ipfs repo
<victorbjelkholm>
and make sure daemon is actually running (js-ipfs daemon)
<mib_kd743naq>
ah, I don't have a daemon running
<victorbjelkholm>
daviddias, we don't have any integration tests between js-ipfs running in node and in the browser, passing data in between, no?
<mib_kd743naq>
is there actually a browser implementation that I can just... try?
<daviddias>
and then when running tests either in the browser or in node.js, we contact those node.js nodes
<daviddias>
so node.js <-> node.js and browser <-> node.js
<koalalorenzo>
Hey guys, what is the status of the Filecoin project? Any roadmap planned?
<victorbjelkholm>
daviddias, hm, I was having some issues, maybe related to the webrtc stuff then, but I couldn't pass data from nodejs > browser or vice versa. I'll try to make a small reproducible case and create an issue instead. Thanks!
<victorbjelkholm>
got some "invalid input" error...
<daviddias>
weird, that demo I did before christmas was exactly that working
<daviddias>
if you can get a test case, that would be stellar :)
<victorbjelkholm>
roger! :)
DiCE1904 has joined #ipfs
Sophrosyne_ has joined #ipfs
ulrichard has quit [Read error: Connection reset by peer]
<daviddias>
kumavis[m]1: on the dag-api, we need first to finish the core one, and also to settle on go-ipfs, so that js-ipfs-api has it and so that js-ipfs can use it from the CLI
<kumavis[m]1>
daviddias: ok - should i just leave that PR as is for now?
<kumavis[m]1>
note: merge target is `feat/dag-api`
M-G-Ray has joined #ipfs
<daviddias>
kumavis[m]1: for now, yes, as it won't be possible to complete until the above is done
<kumavis[m]1>
i made that PR bc it was useful locally as a sanity check that my ethereum stat dumps were working
<kumavis[m]1>
daviddias: personally I would just merge it into `feat/dag-api` as all it really does is call ipfs.dag/resolve. can be tweaked if the dag api changes
<kumavis[m]1>
but you have a better birds eye view of how all the pieces are coming together
Boomerang has joined #ipfs
m0ns00n_ has joined #ipfs
m0ns00n_ has quit [Client Quit]
chriscool has quit [Ping timeout: 260 seconds]
tabrath has quit [Ping timeout: 258 seconds]
tabrath has joined #ipfs
Encrypt has quit [Quit: Dinner time!]
espadrine has quit [Ping timeout: 252 seconds]
maxlath has quit [Ping timeout: 248 seconds]
chris613 has joined #ipfs
ianopolous has quit [Ping timeout: 258 seconds]
rendar has quit [Ping timeout: 260 seconds]
Guest36523 has joined #ipfs
SuperPhly has joined #ipfs
maxlath has joined #ipfs
espadrine has joined #ipfs
gpestana has joined #ipfs
Encrypt has joined #ipfs
gpestana has quit [Ping timeout: 260 seconds]
cyanobacteria has joined #ipfs
cyanobacteria has quit [Ping timeout: 258 seconds]
arpu has quit [Ping timeout: 265 seconds]
gpestana has joined #ipfs
SuperPhly has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
arpu has joined #ipfs
ianopolous has joined #ipfs
<kumavis[m]1>
daviddias: i realize my resolver code would be a lot simpler if I could somehow specify some locally available data blob as a typed value that gets handled by its relevant resolver if there is remaining path
<kumavis[m]1>
we do this with remote data { "/": data type, remainingPath: '...' }
<kumavis[m]1>
but with local data it would be easier to say { value: accountData, type: 'eth-account', remainingPath: '...' }
<kumavis[m]1>
than embedding the eth-account resolver inside the eth-state-trie resolver
<kumavis[m]1>
to clarify, the `eth-account` data is stored directly inside the `eth-state-trie` leaf node
<kumavis[m]1>
so currently when resolving in the trie, if I reach a leaf node and if there is remaining path, I need to then run the account resolver in order to properly resolve inside that object
<kumavis[m]1>
theres a similar issue with `.tree`
<kumavis[m]1>
if I could just do a type-specific resolution and say "heres this, you may know how to dig deeper into it" and then ipfs-resolver would dig again
<kumavis[m]1>
to be clear, im talking about a single data blob, not hitting the network or resolving anything external
tabrath has quit [Ping timeout: 260 seconds]
<kumavis[m]1>
this might allow json-like resolvers to be defined recursively. it would only return the top level children, and if its an obj, say "this is a cbor result, expand it if you wish"
<kumavis[m]1>
jbenet let me know what you think ^ (from "daviddias: i realize my resolver...")
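The interface kumavis[m]1 is describing can be sketched in plain JavaScript. All names here are hypothetical toy stand-ins, not the actual IPLD resolver API: a type-specific resolver that reaches embedded data of another type returns `{ value, type, remainingPath }`, and a generic loop re-dispatches to the named resolver instead of the trie resolver embedding the account resolver.

```javascript
// Toy resolvers keyed by type name. A resolver that cannot finish a path
// hands the embedded blob back, tagged with the type that should continue.
const resolvers = {
  'eth-state-trie': (node, path) => {
    const [key, ...rest] = path;
    const leaf = node.leaves[key]; // leaf embeds raw account data in the same blob
    return { value: leaf, type: 'eth-account', remainingPath: rest };
  },
  'eth-account': (node, path) => {
    // Accounts are plain objects here; resolve one field at a time.
    return { value: node[path[0]], remainingPath: path.slice(1) };
  },
};

// Generic loop: keep dispatching while a resolver reports a typed remainder.
function resolve(type, node, path) {
  let result = { value: node, type, remainingPath: path };
  while (result.remainingPath.length > 0 && result.type) {
    result = resolvers[result.type](result.value, result.remainingPath);
  }
  return result.value;
}

const trie = { leaves: { '0xabc': { balance: 42, nonce: 7 } } };
console.log(resolve('eth-state-trie', trie, ['0xabc', 'balance'])); // 42
```

The trie resolver no longer imports the account resolver; it only names the type, and the generic loop does the hand-off.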
cyanobacteria has joined #ipfs
Boomerang has quit [Ping timeout: 264 seconds]
Boomerang has joined #ipfs
mguentner2 is now known as mguentner
Oatmeal has quit [Quit: Suzie says, "TTFNs!"]
Boomerang has quit [Ping timeout: 258 seconds]
<Caterpillar>
IPFS could be very cool to be applied to Linux distro repositories
<Caterpillar>
so you can index the repositories packages with IPFS
SuperPhly has joined #ipfs
<Mateon1>
True, it seems that fetching packages from multiple sources using IPFS would make downloads so much faster.
<Caterpillar>
Mateon1: has anybody ever written a paper about that?
<jbenet>
hey kumavis[m]1 -- not sure i understand correctly-- the idea is you have a big graph of one type (the trie), and reach a link which points to more graph but nodes of different type (account data)?
<jbenet>
(there's something more i'm missing, else what i described could get handled by the multicodec -- if it's a completely different serialization type).
<kumavis[m]1>
that is correct, except it is not an external link
<jbenet>
or perhaps you mean different logical type (but same serialization format)?
<Mateon1>
Caterpillar: No paper as far as I know, I actually searched arxiv for IPFS related papers, and only found Juan's paper on IPFS itself
<kumavis[m]1>
the `eth-state-trie` leaf nodes contain the `eth-account` data locally
<jbenet>
"not an external link" == means what exactly?
<jbenet>
Ahhh, in the same blob?
<kumavis[m]1>
right
<jbenet>
like, there's a blob/block of data, and it contains both "trie data struct" data, and embedded within it some "account data struct" data
infinity0 has quit [Remote host closed the connection]
<kumavis[m]1>
correct
<jbenet>
so traversing within the same serialized block of data requires two different data structure resolvers (two different tree/resolve resolvers)
<kumavis[m]1>
so currently i have to resolve the "outer layer" `eth-state-trie` and then resolve the inner `eth-account`
<kumavis[m]1>
correct
<jbenet>
yeah by returning back up the partial path and partially consumed block/graph
<jbenet>
so-- is what you're proposing a change to the interface to return the remaining data too? (the partially consumed block/graph)
infinity0 has joined #ipfs
<kumavis[m]1>
yes sort of
<kumavis[m]1>
return the data with type information
<kumavis[m]1>
to resume resolving with a different resolver
<jbenet>
(IIRC the interface of the type-specific resolver returns the last value reached, and the partial path remaining,
<jbenet>
so the idea here is, the underlying type-specific resolver (trie resolver) knows the next thing is account data, and it has no way to communicate that back up to the general ipld resolver, it can only return the partial data, which is just data, not a link (hence no multicodec)
<jbenet>
therefore right now (as you were saying) the trie resolver has to embed the account data resolver
<kumavis[m]1>
right -- well instead of just aborting, im currently importing the eth-account resolver and continuing the resolution myself
<kumavis[m]1>
inside the eth-state-trie resolver
<kumavis[m]1>
so this is not a showstopper, not a requirement at this point
<kumavis[m]1>
but it would make things a lot cleaner
<jbenet>
so two thoughts:
<jbenet>
(a) how many such "sets of datastructures" exist where data of different types is embedded in the same serialized block/blob.
<jbenet>
the questions i have to figure out which of these it should be are:
<jbenet>
(1), it's possible that this kind of issue should be solved with an intermediate "ethereum resolver" that understands all the eth types, it embeds/mounts/has both the trie and account data resolvers. (ipld -> eth -> {eth-state-trie, eth-account})
<jbenet>
(2), it's possible that -- as you say -- a tweak to the interface could let the ipld resolver know the next type, so it can call the correct "next resolver" even when there is no multicodec.
<jbenet>
(b) whether the "type set" resolver (eth in this case) is a more or less appealing way for people to combine these. if they are "islands" (i.e. non-overlapping type sets) it would make sense to do (1). if they combine a lot (lots of overlapping data structs among "type sets" with embedded data), then it would make sense to do (2).
SuperPhly has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
<jbenet>
i do think it would be nice if it's (2). i dont know what the interface change would be atm, so we'd have to consider that carefully. ipld is already abstraction heavy and we should strive for simplicity. (2) may both simplify and complicate different parts
<jbenet>
same with (1)
<jbenet>
kumavis[m]1: we should probably dump this into a note/issue somewhere
ygrek has joined #ipfs
<kumavis[m]1>
yes i agree
<kumavis[m]1>
i say we put it off until its really desirable : P
<kumavis[m]1>
its not blocking for me
<kumavis[m]1>
but it does relate to the idea to prefix a data blob with its type, which may pop up elsewhere
tabrath has joined #ipfs
<jbenet>
yeah indeed. it does add a nice symmetry that was lacking: intra-block/blob "links" can be of different types. the assumption that they're not may be flawed.
<jbenet>
this is less true for general serialization stuff (like json/protobuf/cbor) because in that case the abstractions layer. dag-cbor can house different types fine. cleanly solving _that_ requires solving the "graph wrapping" or "graph transformations" problem. (if unfamiliar, this is where one graph represents/implements another, like sharding a unix-fs directory)
SuperPhly has joined #ipfs
arpu has quit [Read error: Connection reset by peer]
SuperPhly has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
ygrek has quit [Ping timeout: 265 seconds]
slothbag has joined #ipfs
kulelu88 has joined #ipfs
mildred has joined #ipfs
fleeky has quit [Quit: Ex-Chat]
Boomerang has joined #ipfs
SuperPhly has joined #ipfs
Encrypt has quit [Quit: Sleeping time!]
greenej has joined #ipfs
greenej has quit [Remote host closed the connection]
greenej has joined #ipfs
gpestana has quit [Ping timeout: 268 seconds]
fleeky has joined #ipfs
nixze has quit [Remote host closed the connection]
Boomerang has quit [Ping timeout: 264 seconds]
<lgierth>
jbenet kumavis[m]1: i agree that (2) sounds better. the interface could be something like, the "main" resolver passes not just <cid,blob> to the specific resolver picked per multicodec, but also passes itself, so that the specific resolver can hand off stuff. same for tree() of course
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
<lgierth>
another interface would be resolve() returning the "remaining paths and blobs", but i feel like it'd get messy
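lgierth's first suggestion, where the main resolver passes itself down, might look roughly like this toy sketch. Again, every name is hypothetical and this is not the real js-ipfs/IPLD interface; it only illustrates the hand-off shape.

```javascript
// The main resolver hands itself to each type-specific resolver, so a
// resolver that hits embedded data of another type can delegate directly.
const specific = {
  // Trie leaves embed raw account data in the same blob.
  'eth-state-trie': (main, node, path) => {
    const [key, ...rest] = path;
    const leaf = node.leaves[key];
    // Hand the embedded blob straight back to the main resolver.
    return rest.length ? main('eth-account', leaf, rest) : leaf;
  },
  // Accounts are plain objects; walk one field at a time.
  'eth-account': (main, node, path) =>
    path.length ? main('eth-account', node[path[0]], path.slice(1)) : node,
};

function resolve(type, node, path) {
  if (path.length === 0) return node;
  return specific[type](resolve, node, path);
}

const trie = { leaves: { '0xabc': { balance: 42, nonce: 7 } } };
console.log(resolve('eth-state-trie', trie, ['0xabc', 'balance'])); // 42
```

Compared with returning `{ value, type, remainingPath }` up to a generic loop, this keeps the control flow inside the specific resolvers, which is what the "it'd get messy" worry about the return-based variant is about.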