whyrusleeping changed the topic of #ipfs to: go-ipfs v0.4.9 is out! https://dist.ipfs.io/#go-ipfs | Week 13: Web browsers, IPFS Cluster, Orbit -- https://waffle.io/ipfs/roadmaps | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | FAQ: https://git.io/voEh8 | Logs: https://botbot.me/freenode/ipfs/ | Code of Conduct: https://git.io/vVBS0
Akaibu has joined #ipfs
<deltab> baggatea_: files are split into chunks which are kept in a store, typically under ~/.ipfs
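
A rough Go sketch of that chunk-and-hash step. Fixed-size splitting with SHA-256 stands in for go-ipfs's real chunker and multihash format, and the 256 KiB size is only an assumption for illustration:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "os"
    )

    const chunkSize = 256 * 1024 // assumed chunk size, for illustration only

    func main() {
        f, err := os.Open(os.Args[1])
        if err != nil {
            panic(err)
        }
        defer f.Close()

        buf := make([]byte, chunkSize)
        for i := 0; ; i++ {
            n, err := io.ReadFull(f, buf)
            if n > 0 {
                // Each chunk would be stored under its own content hash,
                // roughly what ends up in the blockstore under ~/.ipfs.
                fmt.Printf("chunk %d: %x (%d bytes)\n", i, sha256.Sum256(buf[:n]), n)
            }
            if err == io.EOF || err == io.ErrUnexpectedEOF {
                break
            }
            if err != nil {
                panic(err)
            }
        }
    }
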
<baggatea_> nice, I'll look into snapcraft
infinity0_ has joined #ipfs
infinity0_ has quit [Changing host]
infinity0 has joined #ipfs
infinity0 has quit [Killed (card.freenode.net (Nickname regained by services))]
<baggatea_> thanks
infinity0 has quit [Remote host closed the connection]
<baggatea_> pity about the chunks thing though, I was hoping to be able to use BTRFS dedup to keep IPFS files and my own library using the same space
<baggatea_> is there any linux filesystem integration that works with de-duplication?
<deltab> there is a new file-backed store, but I don't know if or how it deals with content from elsewhere, not added by you
<libman> I think it's perfectly valid to store "like normal files on disk". Can't let go of hierarchical file system paradigm completely. Maybe IPFS is best when it's not used as a stand-alone system, but as a transfer protocol.
<deltab> you can use --nocopy when adding content
infinity0 has joined #ipfs
<deltab> libman: sure; I guess the issue is that the files could get moved or modified
infinity0 has quit [Remote host closed the connection]
<baggatea_> deltab: I'm using BTRFS deduplication, I was thinking I say "pin all this stuff" then I let the dedupe process find full file hash matches in my own library and deduplicate them
<libman> So you would download over IPFS to normal files, and re-run `ipfs add -r --nocopy` when you change something.
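
A sketch of that round trip in Go, shelling out to the CLI commands mentioned above. It assumes the experimental filestore behind --nocopy is enabled in the node's config; the hash and directory names are placeholders:

    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        out, err := exec.Command("ipfs", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("ipfs %v: %v\n%s", args, err, out)
        }
        log.Printf("%s", out)
    }

    func main() {
        // Download the content into a normal directory on disk.
        run("get", "<hash>", "-o", "library/somedir")
        // After editing, re-add by reference so blocks point at the files
        // on disk instead of being copied into ~/.ipfs.
        run("add", "-r", "--nocopy", "library/somedir")
    }
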
chris613 has quit [Quit: Leaving.]
<baggatea_> ooh cool
<baggatea_> that would work
infinity0 has joined #ipfs
<libman> Before --nocopy, IPFS was for all economically practical purposes married to FSen that support deduplication, which is a problem. OpenBSD doesn't have one.
infinity0 has quit [Remote host closed the connection]
<deltab> I don't think it works with filesystem dedup: ipfs adds its own wrappers
<libman> I'm trying to come up with a pure-copyfree software stack, which would exclude BtrFs, ZFS, etc - leaving HAMMER FS as the only option, which makes it married to DragonFlyBSD...
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
<libman> Last time I checked, directory links need to be rehashed from scratch if just one file changes. Logically this shouldn't be necessary with something like a delta mode: here are the hashes of the modified files, and a link to the previous version for the rest of them.
<deltab> libman: possible, but then you have to fetch both versions
<deltab> sharded directories were added fairly recently
infinity0 has joined #ipfs
jaboja has quit [Remote host closed the connection]
<deltab> so for big directories, only parts would need to be updated
infinity0 has quit [Remote host closed the connection]
<deltab> (big directories like the wikipedia snapshots)
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
akt has joined #ipfs
<libman> For a distributed (not decentralised mutable) Wikipedia, I would store articles in a local SQLite file and use IPFS only for images and compressed article update dumps (full download and compressed daily / hourly `sqldiff`s for catching up).
<libman> ZIM created their own format that embeds images, but it uses SQLite for indexes and isn't updatable. Bad design.
<lemmi> libman: i'm contemplating something similar, ipfs for static assets as well
<libman> Getting every article file over IPFS: huge download overhead compared to compressed bulk diffs, and you need to download all articles and index them to have a functional local copy. Bad design.
jaboja has joined #ipfs
<kythyria[m]> Diffs have the disadvantage of splitting the swarm
<lemmi> at what point does saving diffs gain you anything compared to a clean full download?
<lemmi> atm the overhead is in the protocol rather than in the download size
<kythyria[m]> When the cumulative size of the diffs is a lot smaller than the size of redownloading the whole thing.
<lemmi> when does that even happen. what sizes of articles are we talking about?
<kythyria[m]> It's like full versus incremental versus differential backups, and I suspect format-aware chunking is needed to really minimise the amount of data transferred when you miss a few patches.
<libman> This isn't a "fully-decentralized blockchain Wikipedia project", just "offline wikipedia downloaded via P2P". You have a centralized authority telling everyone what the valid file list is.
<lemmi> that depends on how the consensus algorithm is designed
<lemmi> that title alone says nothing about that
<kythyria[m]> lemmi: The problem exists even when the consensus algorithm is "trust that central authority"
<libman> Server storage is cheap, so maintain hourly and daily diffs for the last few months. Somebody might be living in a cave waiting for a sneakernet delivery or something. Then the diffs are applied in order.
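
A sketch of that catch-up step, assuming the article snapshot lives in SQLite as proposed above, that each update is the SQL emitted by sqldiff, and that the github.com/mattn/go-sqlite3 driver is available; the file names are hypothetical:

    package main

    import (
        "database/sql"
        "log"
        "os"
        "path/filepath"
        "sort"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "wikipedia.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Diffs are named so lexical order equals chronological order,
        // e.g. 2017-06-06T00.sql, 2017-06-06T01.sql, ...
        diffs, _ := filepath.Glob("diffs/*.sql")
        sort.Strings(diffs)

        for _, path := range diffs {
            stmts, err := os.ReadFile(path)
            if err != nil {
                log.Fatal(err)
            }
            if _, err := db.Exec(string(stmts)); err != nil {
                log.Fatalf("applying %s: %v", path, err)
            }
            log.Printf("applied %s", path)
        }
    }
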
<lemmi> no you guys lost me
dominic_ has quit [Ping timeout: 255 seconds]
<libman> Again, I am NOT talking about consensus algorithms, just P2P distribution of centrally controlled content (like offline Wikipedia snapshots).
<kythyria[m]> Or game updates.
<kythyria[m]> (which are, admittedly, bigger but much less frequent than changes to wikipedia)
<lemmi> yeah, i'm still lost.
<libman> Wikipedia is a better example because it's huge and covers both scenarios: lots of little text files that you would want to bind together, and also larger, less mutable image files you may want to download separately.
<kythyria[m]> Yeah. It's kind of a nasty case.
<lemmi> what problem is there? that blocks get too small and introduce overhead?
talonz has quit [Remote host closed the connection]
talonz has joined #ipfs
<kythyria[m]> That's part of it, though you can trade granularity for overhead if you like. The bigger problem is that it's constantly changing.
<libman> Wikipedia ZIM files are 59GB (split into 2GB chunks) plus there's 21GB of indexes that you can generate yourself.
chris613 has joined #ipfs
<libman> But I'm saying that was a bad approach to doing this. Images (very low quality) are embedded in the ZIMs. And there's no mechanism for partial updates.
<kythyria[m]> A format which accommodates quite granular partial updates is critical.
<libman> We need a middle way between the git / ipfs "pile of files" approach and something like SQLite.
<libman> (The term "pile of files" comes from https://sqlite.org/appfileformat.html )
<lemmi> ok, embedded images suck.
jaboja has quit [Remote host closed the connection]
<lemmi> although... you can just concatenate files in ipfs however you like
<libman> You can have a script fetch all images sequentially without much efficiency loss compared to downloading all images in one big tar file.
<libman> But for the article text, the benefits of batching together updates via solid compression are HUGE.
<libman> ("Solid compression" is WinRAR terminology for differentiating zip-style per-file compression and tar-style compression over all files in archive concatenated together.)
<libman> Plus if you download every little thing as a separate file every time it updates, the FS will end up with billions of inodes, which will break some systems and slow down some operations.
apiarian has quit [Quit: Textual IRC Client: www.textualapp.com]
palkeo has quit [Ping timeout: 255 seconds]
<lemmi> for that you might want a proper key-value store rather than the current fs-repo.
thekyriarchy has left #ipfs ["User left"]
<kythyria[m]> Yeah
apiarian has joined #ipfs
<lemmi> you are going to end up with huge amounts of blocks anyway. they might as well be as self-contained as possible
<kythyria[m]> I suspect there's no good way of making it so that even once you're up to date you can still serve diffs.
<lemmi> so an article is just a single transmission and not 2 or 3 because you need some baseline and then the diffs
<kythyria[m]> You wouldn't necessarily diff individual articles either.
seharder has joined #ipfs
seharder has left #ipfs [#ipfs]
infinitesum has joined #ipfs
skeuomorf has quit [Ping timeout: 255 seconds]
baggatea_ has quit [Quit: Page closed]
<libman> SQLite provides everything you need: indexing, FTS, sqldiff, etc. It runs on billions of devices. It's public domain. Everyone knows how to use it.
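
A sketch of the kind of local index being described, assuming SQLite's FTS5 extension (the go-sqlite3 driver may need its FTS5 build tag); the table and contents are made up:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "wikipedia.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec(`CREATE VIRTUAL TABLE IF NOT EXISTS articles USING fts5(title, body)`); err != nil {
            log.Fatal(err)
        }
        if _, err := db.Exec(`INSERT INTO articles VALUES (?, ?)`,
            "IPFS", "A peer-to-peer hypermedia protocol."); err != nil {
            log.Fatal(err)
        }

        // Full-text search over the article bodies.
        rows, err := db.Query(`SELECT title FROM articles WHERE articles MATCH ?`, "hypermedia")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var title string
            if err := rows.Scan(&title); err != nil {
                log.Fatal(err)
            }
            fmt.Println(title)
        }
    }
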
shizy has joined #ipfs
shizy is now known as _shizy
<libman> I don't see the reason to use a simpler, less established key-value store instead.
domanic has joined #ipfs
chungy has quit [Ping timeout: 268 seconds]
<pawn> I'm still working on a web browser with the single purpose of browsing an IPFS web. My main sticking point right now is understanding the intended way multiaddr should fit into the IPFS-web model. I published this issue to start a discussion on the subject: https://github.com/multiformats/multiformats/issues/40
chungy has joined #ipfs
<deltab> pawn: hi! I don't believe there is any use of multiaddr in the IPFS-web model
<deltab> it's used at a lower level, for establishing connections in libp2p
ckwaldon has quit [Remote host closed the connection]
<deltab> admittedly it's confusing that both use /ipfs/ followed by a hash, but in the case of libp2p it's a node id (the hash of its public key), not content
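
Two strings that look alike but mean different things, per the explanation above; the identifiers below are placeholders, not real hashes:

    package main

    import "fmt"

    func main() {
        // libp2p multiaddr: how to reach a node; the trailing /ipfs/<peer-id>
        // is the hash of that node's public key, not of any content.
        nodeAddr := "/ip4/1.2.3.4/tcp/4001/ipfs/<peer-id>"

        // IPFS path: what content to fetch, addressed by the hash of the data.
        contentPath := "/ipfs/<content-hash>/index.html"

        fmt.Println(nodeAddr)
        fmt.Println(contentPath)
    }
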
<libman> What ever happened to the idea of having namecoin domains point to ipns? http://dns.dot-bit.org/
eliXeratoR has joined #ipfs
zeroish has joined #ipfs
<pawn> deltab: both what use /ipfs/?
<eliXeratoR> I don't know if this is too basic of a question to ask, but if anyone would be willing to explain a couple of basic things to me, I would really appreciate it. I just need some clarification on what a node is vs what a gateway is and how they interact. after reading through the documentation I still feel a little lost. Thanks in advance and sorry for noob questions.
<deltab> pawn: multiaddrs and ipfs paths
<deltab> pawn: I'm guessing that was the source of your confusion
infinitesum has quit [Quit: infinitesum]
<pawn> deltab: Where are IPFS web addresses specified; where's the spec for them?
<deltab> libman: if you can make TXT records then you can use dnslink
infinitesum has joined #ipfs
<pawn> deltab: Yeah, it is a bit confusing as to why a web address would be prefixed with /ipfs/. I was almost certain that an IPFS web address was a multiaddr.
<deltab> do you know how you got that idea?
<pawn> deltab: Similarity in their appearance.
<pawn> So, if an IPFS web address is not a multiaddr, then what is it?
<deltab> it's a pathname in the ipfs filesystem (see sections 3.6, 3.8)
<pawn> Ah, I see
<deltab> web gateways simply make those files accessible through URLs
dimitarvp has quit [Quit: Bye]
<deltab> (multiaddrs are described in section 3.2)
zeroish has quit [Ping timeout: 240 seconds]
Lymkwi has quit [Ping timeout: 260 seconds]
<kythyria[m]> Heh, by the logic that leads to `/ipfs/` it would be reasonable to write `/http/github.com/...` but it still wouldn't act much like a normal filesystem.
<deltab> true
Lymkwi has joined #ipfs
<deltab> eliXeratoR: welcome! A node is a program you run on your machine that talks the IPFS protocol to other nodes, stores data locally, fetches data from other nodes when you request it, and makes data you publish available to others
<deltab> (most) web browsers don't know anything about ipfs though, so to load web pages from ipfs you need something to translate ipfs to http, and that's what a gateway does
<deltab> if you run your own node, it includes a gateway; otherwise you can use the public one at https://ipfs.io
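
From the browser's side that translation is just HTTP. A minimal Go sketch, assuming a local go-ipfs daemon with its default gateway on 127.0.0.1:8080 and a placeholder hash; the same path works against the public gateway at https://ipfs.io:

    package main

    import (
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // The gateway maps the ipfs path onto a URL and serves the content
        // over plain HTTP, so any browser or HTTP client can fetch it.
        resp, err := http.Get("http://127.0.0.1:8080/ipfs/<hash>")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
            log.Fatal(err)
        }
    }
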
owlet has quit [Ping timeout: 240 seconds]
<pawn> deltab: Technically, a path doesn't need to be prefixed with /ipfs/, so why is this a convention?
<pawn> s/convention/standard
<pawn> “The /ipfs prefix allows mounting into existing systems at a standard mount point without conflict (mount point names are of course configurable).”
<pawn> ... stupid limewire.
<pawn> Never mind that question though.
<pawn> How is an object guaranteed to be a DAG?
owlet has joined #ipfs
<pawn> What encoding are content addresses typically in?
<pawn> base62?
chris613 has quit [Quit: Leaving.]
<eliXeratoR> deltab: Thank you very much! That was perfectly clear. Very much appreciated!
<deltab> the /ipfs/ prefix establishes that what follows is an ipfs content hash; but you could have /ipns/ instead, which is followed by the hash of a key that signs a content hash
<deltab> pawn: links between objects form a DAG
domanic has quit [Ping timeout: 240 seconds]
<deltab> pawn: base58btc, I think
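
For reference, a tiny base58btc encoder in Go showing how those Qm... strings are built from raw bytes; real content addresses are multihashes (a hash-function and length prefix plus the digest), which this sketch glosses over:

    package main

    import (
        "fmt"
        "math/big"
    )

    // The base58btc alphabet omits 0, O, I and l to avoid ambiguity.
    const alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    func base58Encode(b []byte) string {
        n := new(big.Int).SetBytes(b)
        radix := big.NewInt(58)
        mod := new(big.Int)
        var out []byte
        for n.Sign() > 0 {
            n.DivMod(n, radix, mod)
            out = append(out, alphabet[mod.Int64()])
        }
        // Leading zero bytes are encoded as leading '1' characters.
        for _, v := range b {
            if v != 0 {
                break
            }
            out = append(out, '1')
        }
        // Digits were produced least-significant first, so reverse them.
        for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
            out[i], out[j] = out[j], out[i]
        }
        return string(out)
    }

    func main() {
        fmt.Println(base58Encode([]byte("hello ipfs")))
    }
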
domanic has joined #ipfs
<pawn> Say object A links to object B and object B links back to object A. Wouldn't this be inconsistent with the definition of a DAG?
<pawn> If so, then how is this prevented?
<deltab> depends on what kind of links you're referring to
<deltab> that's not possible with merkle links, because of hashing
<deltab> but html links aren't merkle links so they're not constrained to form a DAG
<deltab> so if you have a directory that says "file A.html has hash Qmfoo and file B.html has hash Qmbar" then A.html can contain <a href="B.html"> and vice versa
<deltab> since the hashes are in the directory, not in the files
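
A toy model of that point: the merkle links (hashes) live in the directory object, while the HTML files refer to each other only by relative name, so no hash cycle is needed. The hashes are the placeholder ones from the example above:

    package main

    import "fmt"

    func main() {
        // File contents: relative links only, no hashes inside the files.
        files := map[string]string{
            "A.html": `<a href="B.html">to B</a>`,
            "B.html": `<a href="A.html">to A</a>`,
        }
        // Directory object: name -> content hash (the actual merkle links).
        directory := map[string]string{
            "A.html": "Qmfoo",
            "B.html": "Qmbar",
        }
        fmt.Println(files)
        fmt.Println(directory)
    }
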
justache is now known as nihilstache
zeroish has joined #ipfs
nihilstache is now known as justache
engdesart has quit [Ping timeout: 240 seconds]
<eliXeratoR> Is there any security advantage in using a private instead of public gateway?
zeroish has quit [Ping timeout: 255 seconds]
palkeo has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
Lymkwi has quit [Ping timeout: 240 seconds]
Lymkwi has joined #ipfs
arkimedes has joined #ipfs
libman has quit [Quit: Connection closed for inactivity]
<pawn> deltab: I meant the links in the IPFSObject
<deltab> those are created by hashing, so as long as the hash function is secure, they can only be made one way
<pawn> deltab: I'll have to read through the spec more to understand what you mean. I must be making a false assumption somewhere.
infinity0 has joined #ipfs
<deltab> from a block of data, you can calculate a hash; change the data, and the hash changes, unpredictably
pawn has quit [Remote host closed the connection]
<deltab> say you start with a block A, and hash it to get hA; you can then include hA in another block, B
<deltab> but you can't include hB in block A, because you can't tell what it'll be ahead of time
Kubuxu has quit [Ping timeout: 255 seconds]
<deltab> and changing A to include hB would change hA, which would change B, which would change hB
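
A concrete Go version of that argument, with SHA-256 standing in for IPFS's hashing: B can embed hA because hA is known when B is built, but embedding hB back into A produces a different block whose hash no longer matches what B points at:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        blockA := []byte("block A contents")
        hA := sha256.Sum256(blockA)

        // Block B can include hA, because hA already exists.
        blockB := []byte(fmt.Sprintf("block B links to %x", hA))
        hB := sha256.Sum256(blockB)

        // Trying to close the loop changes A, so its hash no longer matches
        // the hA that block B links to; the cycle can never be consistent.
        blockAPrime := []byte(fmt.Sprintf("block A links to %x", hB))
        fmt.Printf("hA  = %x\n", hA)
        fmt.Printf("hB  = %x\n", hB)
        fmt.Printf("hA' = %x\n", sha256.Sum256(blockAPrime))
    }
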
Matrix90 has quit [Ping timeout: 255 seconds]
Magik6k has quit [Ping timeout: 260 seconds]
infinitesum has quit [Quit: infinitesum]
Magik6k has joined #ipfs
Kubuxu has joined #ipfs
infinitesum has joined #ipfs
zeroish has joined #ipfs
zeroish has quit [Ping timeout: 255 seconds]
owlet has quit [Ping timeout: 255 seconds]
appa has quit [Ping timeout: 255 seconds]
eliXeratoR has quit [Ping timeout: 240 seconds]
palkeo has quit [Ping timeout: 240 seconds]
appa has joined #ipfs
infinitesum has quit [Quit: infinitesum]
zeroish has joined #ipfs
appa has quit [Ping timeout: 260 seconds]
Pessimist has joined #ipfs
appa has joined #ipfs
Caterpillar has joined #ipfs
ilyaigpetrov has joined #ipfs
JayCarpenter has quit [Quit: Page closed]
zeroish has quit [Ping timeout: 268 seconds]
rendar has joined #ipfs
espadrine has joined #ipfs
maxlath has joined #ipfs
arkimedes has quit [Ping timeout: 260 seconds]
dconroy has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<pinkieval> Are the allocation strategies of ipfs-cluster documented somewhere? I found a bunch of issues and commits related to them, but they are not very helpful
zeroish has joined #ipfs
m0ns00n_ has joined #ipfs
jungly has joined #ipfs
domanic has quit [Ping timeout: 255 seconds]
zeroish has quit [Ping timeout: 260 seconds]
espadrine has quit [Ping timeout: 240 seconds]
mahloun has joined #ipfs
Lymkwi has quit [Ping timeout: 260 seconds]
Lymkwi has joined #ipfs
espadrine has joined #ipfs
akt has quit [Ping timeout: 240 seconds]
Magik6k has quit [Quit: Bye!]
Kubuxu has quit [Quit: WeeChat 1.8]
Kubuxu has joined #ipfs
zeroish has joined #ipfs
kantokomi has joined #ipfs
Magik6k has joined #ipfs
m0ns00n_ has quit [Remote host closed the connection]
kantokom1 has quit [Ping timeout: 255 seconds]
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 240 seconds]
Mateon3 is now known as Mateon1
zeroish has quit [Ping timeout: 240 seconds]
<DokterBob> Hey all, sadly enough, today is the day that ipfs-search is going to die.
<DokterBob> If anyone feels like taking over, this would be a good time to say so.
domanic has joined #ipfs
rcat has joined #ipfs
<DokterBob> Reason: not able to maintain it and hence: not willing to pay for hosting.
<DokterBob> However, it is open source and the index is available on IPFS at QmXA1Wiy3Ko29Q54Sq7pkyu8yTa7JmPokvgx4CXtQBWirt (please, do pin!)
tkorrison[m] has joined #ipfs
m0ns00n_ has joined #ipfs
pcre has joined #ipfs
m0ns00n_ has quit [Quit: quit]
ygrek has quit [Ping timeout: 240 seconds]
pcre_ has joined #ipfs
tglman has joined #ipfs
appa has quit [Ping timeout: 240 seconds]
appa has joined #ipfs
zeroish has joined #ipfs
mahloun has quit [Ping timeout: 240 seconds]
mahloun has joined #ipfs
tglman has quit [Quit: WeeChat 1.8]
<Bat`O> DokterBob: :-|
<Bat`O> i feel the pain
<Atrus[m]> I'm curious, how much is hosting?
<DokterBob> So far, I was 'lucky' enough to have a sponsored hosting account at OVH for 6 months, which have just passed
<victorbjelkholm> Today we have the All-Hands call at 16:00 UTC, if you'd like to participate, feel free to join us and if you have some points you'd like to bring up, please add it to our agenda: https://hackmd.io/IYNgjATAJlAsDsBaG8CsjYFMpkQDgCMBjATkRPgGZMQRhKAzbYIA?edit#
<DokterBob> I might have been able to extend it but I haven't really had time to work on the project
jkilpatr has quit [Ping timeout: 240 seconds]
<DokterBob> And I would prefer a professional hoster
<DokterBob> @Atrus:matrix.org: I think at OVH the hosting is effectively around € 200 / mo
<DokterBob> Before, I made a budget for a more scalable setup that would amount to about € 5.500 per year
<Atrus[m]> Ahh okay. I'll have to pin it later today when I get home.
<DokterBob> Bottom line is: running a search engine through a lot of content is expensive
<DokterBob> If I were to include (reasonably moderate) labour costs the total costs for keeping this baby afloat for about a year would amount to 24k
<Atrus[m]> Yeah, and it's not easy to monetize either.
<DokterBob> (Which includes necessary developments)
<DokterBob> victorbjelkholm: I could just pop in and explain why and how ipfs-search is down, as a last resort to try and find someone to adopt the project
<victorbjelkholm> DokterBob: that sounds good to me
<victorbjelkholm> thanks for that :)
btmsn has joined #ipfs
<DokterBob> I'm pretty sure it'll be really good to monetize, provided you sit through the first year or so until enough interesting content is available on IPFS. If I were to find a good enough developer to take this project on, I would even consider getting funding and hiring that person as a proper startup.
<DokterBob> It's just I have enough money and would prefer having more time ^^
m0ns00n_ has joined #ipfs
domanic has quit [Read error: Connection reset by peer]
jkilpatr has joined #ipfs
m0ns00n_ has quit [Quit: quit]
dimitarvp has joined #ipfs
zeroish has quit [Ping timeout: 245 seconds]
zeroish has joined #ipfs
krzysiekj has quit [Ping timeout: 245 seconds]
krzysiekj has joined #ipfs
Foxcool has quit [Ping timeout: 268 seconds]
appa has quit [Ping timeout: 240 seconds]
appa has joined #ipfs
m0ns00n_ has joined #ipfs
jamiew has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
jamiew has joined #ipfs
jamiew has quit [Client Quit]
Vladislav has joined #ipfs
Vladislav is now known as persecutrix
m0ns00n_ has quit [Quit: quit]
gmoro has joined #ipfs
persecutrix has quit [Ping timeout: 240 seconds]
jamiew has joined #ipfs
jamiew has quit [Client Quit]
Mateon1 has quit [Remote host closed the connection]
Mateon1 has joined #ipfs
reit has quit [Quit: Leaving]
tglman has joined #ipfs
zeroish has quit [Ping timeout: 246 seconds]
owlet has joined #ipfs
owlet has quit [Ping timeout: 255 seconds]
zeroish has joined #ipfs
owlet has joined #ipfs
zeroish has quit [Ping timeout: 258 seconds]
maxlath has quit [Ping timeout: 240 seconds]
maxlath has joined #ipfs
zeroish has joined #ipfs
zeroish has quit [Ping timeout: 240 seconds]
maxlath has quit [Ping timeout: 246 seconds]
ulrichard has joined #ipfs
jaboja has joined #ipfs
dconroy has joined #ipfs
Falconix has quit [Ping timeout: 240 seconds]
Falconix has joined #ipfs
dconroy has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
reit has joined #ipfs
dconroy has joined #ipfs
jamiew has joined #ipfs
leeola has joined #ipfs
jamiew has quit [Ping timeout: 240 seconds]
maxlath has joined #ipfs
screensaver has quit [Ping timeout: 260 seconds]
screensaver has joined #ipfs
bigbluehat_ is now known as bigbluehat
screensaver has quit [Ping timeout: 268 seconds]
ashark has joined #ipfs
screensaver has joined #ipfs
m0ns00n_ has joined #ipfs
shizy has joined #ipfs
zeroish has joined #ipfs
ulrichard has quit [Remote host closed the connection]
dconroy has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
infinitesum has joined #ipfs
m0ns00n_ has quit [Quit: quit]
dconroy has joined #ipfs
jmill_ has joined #ipfs
jmill has quit [Ping timeout: 246 seconds]
dconroy has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
cjd[m] has joined #ipfs
pcre has quit [Remote host closed the connection]
pcre_ has quit [Remote host closed the connection]
appa has quit [Ping timeout: 240 seconds]
Oatmeal has joined #ipfs
appa has joined #ipfs
chungy has quit [Quit: ZNC - http://znc.in]
jmill_ has quit [Read error: Connection reset by peer]
gmoro_ has joined #ipfs
Foxcool has joined #ipfs
gmoro has quit [Ping timeout: 246 seconds]
dconroy has joined #ipfs
jmill has joined #ipfs
<sprint-helper> The next event "IPFS All Hands Call" is in 15 minutes.
JayCarpenter has joined #ipfs
zeroish` has joined #ipfs
tantalum has joined #ipfs
* whyrusleeping coming
dconroy has quit [Ping timeout: 240 seconds]
alwillis has joined #ipfs
jmill has quit [Read error: Connection reset by peer]
jmill has joined #ipfs
<pinkieval> Is it possible to use ipfs get without making a copy, symmetrically to what ipfs add --nocopy does?
<emunand[m]> ipfs cat
<pinkieval> ipfs cat makes a copy to stdout
girlhood has joined #ipfs
<victorbjelkholm> need to run and will upload the recording as soon as I'm back!
JayCarpenter has quit [Quit: Page closed]
m0ns00n_ has joined #ipfs
jaboja has quit [Ping timeout: 255 seconds]
zeroish has quit [Ping timeout: 260 seconds]
zeroish has joined #ipfs
antfoo has joined #ipfs
eliXeratoR has joined #ipfs
mahloun has quit [Read error: Connection reset by peer]
mahloun has joined #ipfs
maxlath has quit [Quit: maxlath]
fzzzr has joined #ipfs
antfoo has quit [Ping timeout: 240 seconds]
antfoo has joined #ipfs
fzzzr has quit [Ping timeout: 255 seconds]
zuck05 has quit [Ping timeout: 260 seconds]
Foxcool has quit [Ping timeout: 260 seconds]
eliXeratoR has quit [Ping timeout: 240 seconds]
alwillis has quit [Quit: alwillis]
Encrypt has joined #ipfs
zuck05 has joined #ipfs
m0ns00n is now known as Guest21773
m0ns00n_ is now known as m0ns00n
jmill has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jungly has quit [Remote host closed the connection]
btmsn has quit [Ping timeout: 260 seconds]
btmsn1 has joined #ipfs
vapid has quit [Remote host closed the connection]
vapid has joined #ipfs
jaboja has joined #ipfs
btmsn1 is now known as btmsn
eliXeratoR has joined #ipfs
rcat has quit [Ping timeout: 240 seconds]
jmill has joined #ipfs
espadrine has quit [Ping timeout: 240 seconds]
jamiew has joined #ipfs
Encrypt has quit [Quit: Quit]
jaboja has quit [Ping timeout: 240 seconds]
atrapado_ has joined #ipfs
droman has joined #ipfs
infinitesum has quit [Quit: infinitesum]
rendar has quit [Ping timeout: 240 seconds]
Falconix has quit [Ping timeout: 255 seconds]
infinitesum has joined #ipfs
antfoo has quit [Remote host closed the connection]
Foxcool has joined #ipfs
antfoo has joined #ipfs
galois_dmz has joined #ipfs
jamiew_ has joined #ipfs
pcre has joined #ipfs
jamiew has quit [Ping timeout: 246 seconds]
galois_d_ has quit [Ping timeout: 245 seconds]
rendar has joined #ipfs
sirdancealot has joined #ipfs
hoboprimate has joined #ipfs
ShalokShalom has joined #ipfs
maxlath has joined #ipfs
infinitesum has quit [Quit: infinitesum]
infinitesum has joined #ipfs
maxlath has quit [Client Quit]
galois_dmz has quit [Remote host closed the connection]
mahloun has quit [Ping timeout: 255 seconds]
galois_dmz has joined #ipfs
Encrypt has joined #ipfs
tglman has quit [Quit: WeeChat 1.8]
zeroish has quit [Ping timeout: 246 seconds]
tglman has joined #ipfs
zeroish has joined #ipfs
rcat has joined #ipfs
hoboprimate has quit [Quit: hoboprimate]
zeroish has quit [Ping timeout: 260 seconds]
DrWhax has joined #ipfs
crest has joined #ipfs
<crest> how stable/usable is ipfs-cluster(-ctl)?
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
eliXeratoR has quit [Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/ Peace, bitches]
tantalum has left #ipfs [#ipfs]
<null_radix[m]> hey does anyone know if ipfs.dag operations are sequentially consistent?
jaboja has joined #ipfs
zeroish has joined #ipfs
girlhood has quit [Ping timeout: 255 seconds]
mahloun has joined #ipfs
zeroish has quit [Ping timeout: 240 seconds]
espadrine has joined #ipfs
DiCE1904 has quit [Ping timeout: 246 seconds]
zeroish has joined #ipfs
Lymkwi has quit [Ping timeout: 258 seconds]
pcre has quit [Remote host closed the connection]
jamiew has joined #ipfs
jamiew_ has quit [Ping timeout: 240 seconds]
<whyrusleeping> crest: it's still pretty early, needs people to use it and report issues
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<whyrusleeping> null_radix[m]: hrm?
<whyrusleeping> null_radix[m]: i'm not exactly sure what you mean
Foxcool has quit [Ping timeout: 246 seconds]
<null_radix[m]> whyrusleeping: if a do an update and then a read, will the callbacks always resolve in the order that the operations were made?
<null_radix[m]> ^if I
infinitesum has quit [Quit: infinitesum]
<whyrusleeping> hrm... that sounds like details specific to how js-ipfs-api is implemented. best to ask daviddias
<null_radix[m]> i think it might be nice to have in the spec so that the behavior is consistent across different backend implementations. Also, other things like JS Atomics / WebAssembly shared memory specify this
<whyrusleeping> well, there are only dag get and put officially
<whyrusleeping> not exactly sure what you mean by 'update'
<null_radix[m]> yeah i mean put instead of update
* whyrusleeping summons daviddias
* daviddias a wild daviddias appears
atrapado_ has quit [Quit: Leaving]
<daviddias> null_radix[m]: there is a double answer to that
<daviddias> There are no checks being made to ensure that results are yielded in the same order as the calls that were issued
<daviddias> If we switched that on
<daviddias> then a dag.get on a non-existing node would have to time out before all the other dag.gets could finish
<daviddias> however, if you are the only user of your js-ipfs node, the only things that might reorder them are the browser's WebCrypto and/or IndexedDB API calls themselves
<daviddias> It's an event loop, whatever is ready first, gets called
<null_radix[m]> ok thanks
rcat_ has joined #ipfs
rcat has quit [Ping timeout: 246 seconds]
jkilpatr has quit [Ping timeout: 240 seconds]
jamiew has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
jamiew has joined #ipfs
AnarchyAo has joined #ipfs
jamiew has quit [Client Quit]
AnarchyAo has quit []
m0ns00n has quit [Quit: quit]
frio[m] has joined #ipfs
AnarchyAo has joined #ipfs
<daviddias> :)
infinitesum has joined #ipfs
Encrypt has quit [Quit: Quit]
ShalokShalom has quit [Ping timeout: 255 seconds]
espadrine has quit [Ping timeout: 240 seconds]
ashark has quit [Ping timeout: 246 seconds]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
jmill has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
AnarchyAo has quit []
AnarchyAo has joined #ipfs
alanz has quit [Ping timeout: 255 seconds]
AnarchyAo has quit [Client Quit]
alanz has joined #ipfs
rcat_ has quit [Ping timeout: 240 seconds]
AnarchyAo has joined #ipfs
AnarchyAo has quit [Client Quit]
bauruine has quit [Quit: ZNC - http://znc.in]
libman has joined #ipfs
bauruine has joined #ipfs
infinitesum has quit [Quit: infinitesum]
infinitesum has joined #ipfs
chris613 has joined #ipfs
jaboja has quit [Ping timeout: 240 seconds]
zeroish has quit [Ping timeout: 240 seconds]
hoboprimate has joined #ipfs
cmbrnt has quit [Ping timeout: 245 seconds]
btmsn has quit [Ping timeout: 240 seconds]
jborak has joined #ipfs
infinitesum has quit [Quit: infinitesum]
shizy has quit [Ping timeout: 255 seconds]
cmbrnt has joined #ipfs
infinitesum has joined #ipfs
gmoro_ has quit [Ping timeout: 240 seconds]
jborak has quit [Ping timeout: 255 seconds]
infinitesum has quit [Quit: infinitesum]
infinitesum has joined #ipfs
gmoro has joined #ipfs
<xelra> Can I theoretically use the IPFS Docker image to get mount functionality on Windows?
zeroish has joined #ipfs
chungy has joined #ipfs
<xelra> I want to be able to open and edit e.g. Word files in my private network.
zeroish has quit [Ping timeout: 255 seconds]
asyncsec has joined #ipfs
<spikebike> a private network that has internet access?
<spikebike> doesn't a linux docker image on windows run inside a virtual machine?
hoboprimate has quit [Quit: hoboprimate]
jmill has joined #ipfs