<brimstone>
when you add a file to ipfs on your laptop, it only exists there until someone pulls it
<Rotwang>
brimstone: when the old data gets removed then?
<brimstone>
data is removed from the network when the node stops serving it
<brimstone>
by going offline or removing it locally
<Rotwang>
brimstone: so if I get a file from ipfs via the ipfs utility it gets stored locally and then others can fetch it directly from me if I have a gateway set up?
<brimstone>
yup
<brimstone>
but not only if you have a gateway setup
<brimstone>
as soon as you fetch it, people will be able to pull it from you as well
<brimstone>
or parts from you
<brimstone>
that's default, out of the box behaviour
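(A minimal sketch of the behaviour brimstone describes, written against the modern js-ipfs `ipfs-core` API purely for illustration; the 2016 API differed.)

```js
const { create } = require('ipfs-core')

async function main () {
  const ipfs = await create()

  // "Adding" a file only stores it locally and advertises its hash;
  // nothing is pushed anywhere until another node asks for it.
  const { cid } = await ipfs.add('hello from my laptop')
  console.log('serving', cid.toString())

  // Whoever fetches the content keeps a local copy and re-serves it,
  // gateway or not, until they go offline or garbage-collect it.
  const chunks = []
  for await (const chunk of ipfs.cat(cid)) chunks.push(chunk)
  console.log(Buffer.concat(chunks).toString())

  // Pinning protects the local copy from garbage collection.
  await ipfs.pin.add(cid)
}

main().catch(console.error)
```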
jaboja has joined #ipfs
<Rotwang>
ok then, so ipfs is a ~torrent with a "search" service
lothar_m has quit [Quit: WeeChat 1.5-dev]
<brimstone>
kinda
<brimstone>
torrents have a very similar search service, using magnet links
<Rotwang>
I've watched both youtube videos on ipfs.io and couldn't get a feel for what ipfs actually is
<Rotwang>
no offence to anyone but those are bad introductory videos
<Rotwang>
especially the long one, all the important stuff could be summed up in 5 minutes
<Rotwang>
but it dragged on for 70 minutes -__-
<Rotwang>
brimstone: thanks for answers
lothar_m has joined #ipfs
rombou has left #ipfs [#ipfs]
<deltab>
it's approximately git's merkle file system (extended to chunk large files), distributed by bittorrent with lookup by hash using a dht
<Rotwang>
deltab: yep, that's the image that was slowly building in my head
<Rotwang>
thanks for the summary
<Rotwang>
(I need to write it down somewhere)
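(A toy sketch of deltab's summary: as in git, a block's address is just a hash of its bytes; IPFS adds chunking, a merkle DAG for large files and directories, and a DHT to find which peers hold each hash. Plain Node, no IPFS libraries.)

```js
const crypto = require('crypto')

const store = new Map()

// put: the key is derived from the content itself, so identical content
// always gets the same address, no matter who adds it.
function put (bytes) {
  const key = crypto.createHash('sha256').update(bytes).digest('hex')
  store.set(key, bytes)
  return key
}

// get: any node holding this key can serve the bytes, and the caller can
// re-hash them to verify it got exactly what it asked for.
function get (key) {
  return store.get(key)
}

const addr = put(Buffer.from('hello world'))
console.log(addr, '->', get(addr).toString())
```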
jaboja has quit [Ping timeout: 244 seconds]
Combined2857 has quit [Quit: Leaving]
nycoliver has joined #ipfs
fmope_ has quit [Remote host closed the connection]
tmg has quit [Ping timeout: 240 seconds]
fmope_ has joined #ipfs
reit has quit [Ping timeout: 250 seconds]
disgusting_wall has joined #ipfs
<daviddias>
nicolagreco: I am now
<dignifiedquire>
daviddias: pretty close to fixing the swarm bug :)
<dignifiedquire>
it's us not them
<daviddias>
nicolagreco: if you enabled swarm.connection.reuse (with a stream multiplexer like spdy added), you can keep calling swarm.dial and all you get is multiplexed streams
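(Roughly what that wiring looked like in the js-libp2p-swarm of the time, reconstructed from memory; the module names and exact call signatures are assumptions, not a verified API reference.)

```js
const Swarm = require('libp2p-swarm')   // module names assumed from that era
const TCP = require('libp2p-tcp')
const spdy = require('libp2p-spdy')

// peerInfoA and peerInfoB are placeholder PeerInfo objects created elsewhere.
const sw = new Swarm(peerInfoA)
sw.transport.add('tcp', new TCP())

// Add a stream multiplexer and turn on connection reuse: after the first
// dial, further swarm.dial calls open new streams over the same muxed
// connection instead of new transport connections.
sw.connection.addStreamMuxer(spdy)
sw.connection.reuse()

sw.dial(peerInfoB, '/my-protocol/1.0.0', (err, conn) => {
  if (err) throw err
  // `conn` is one multiplexed stream over the shared connection
})
```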
<daviddias>
"Send text/binary data to the WebSocket server. data can be any of several types: String, Buffer (see buffer), TypedArrayView (Uint8Array, etc.), ArrayBuffer, or Blob (in browsers that support it)."
<daviddias>
I know it accepts both, not sure if it is mangling one for another
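(A quick way to check the text/binary question with the `ws` module, assuming a server is listening at the placeholder address: a string goes out as a text frame and a Buffer as a binary frame, without one being coerced into the other.)

```js
const WebSocket = require('ws')

const ws = new WebSocket('ws://127.0.0.1:9090') // placeholder server address

ws.on('open', () => {
  ws.send('plain text')            // sent as a text frame
  ws.send(Buffer.from([1, 2, 3]))  // sent as a binary frame, not mangled into text
})
```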
<daviddias>
dignifiedquire: could you really quick do a release without the `const` https://github.com/dignifiedquire/idb-plus-blob-store, so that I can release all of the other modules on top and have tests running on unixfs-engine
<dignifiedquire>
yes will do
<daviddias>
thank you :)
Guest22134 has joined #ipfs
patcon has joined #ipfs
bearbin has quit [Remote host closed the connection]
ianopolous has quit [Ping timeout: 252 seconds]
jbold has joined #ipfs
jaboja has joined #ipfs
pfraze_ has joined #ipfs
pfraze has quit [Ping timeout: 260 seconds]
<dignifiedquire>
daviddias: done
<daviddias>
dignifiedquire: awesome! :)
<daviddias>
npm still shows 1.1.0 only though
<daviddias>
1.1.1, awesome
jager has quit [Ping timeout: 246 seconds]
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-idb-plus-blob-store-1.1.1 (+1 new commit): https://git.io/vw0w6
<ipfsbot>
js-ipfs/greenkeeper-idb-plus-blob-store-1.1.1 c8c0b64 greenkeeperio-bot: chore(package): update idb-plus-blob-store to version 1.1.1...
dignifiedquire has quit [Ping timeout: 276 seconds]
dignifiedquire has joined #ipfs
ianopolous has joined #ipfs
<lachenmayer>
hey there, i just (re-)installed ipfs (v0.4.1-dev) and tried to browse the webui. i get tons of errors of the form "flatfs: too many open files, retrying in %dms0" and some images etc. aren't retrieved.
<dignifiedquire>
daviddias: oh boy type systems sure look nice right about now
<lachenmayer>
ahh hmmmm. i installed ipfs with go 1.5.3, but i since upgraded to go 1.6.2 - do i have to rebuild/reinstall? if so, what's the best way to do that?
_Vi has joined #ipfs
<daviddias>
dignifiedquire: ahaha what happened?
<dignifiedquire>
new Buffer('something').buffer === ArrayBuffer {}
<dignifiedquire>
and multiaddr(new Buffer('..')).buffer === Buffer('..')
<dignifiedquire>
there were addresses added as Buffers instead of as multiaddrs
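(The distinction being tripped over, in a few lines: `.buffer` on a Node Buffer is the backing ArrayBuffer, while the multiaddr versions of this era exposed a `.buffer` property that was itself a Node Buffer. The multiaddr lines assume that older API.)

```js
const multiaddr = require('multiaddr') // older multiaddr API assumed here

const buf = Buffer.from('something')
// A Buffer's .buffer is the backing ArrayBuffer (possibly a shared pool slab),
// which is a different object from the Buffer itself.
console.log(buf.buffer instanceof ArrayBuffer) // true

const addr = multiaddr('/ip4/127.0.0.1/tcp/4001')
// In old multiaddr versions, .buffer was a plain Node Buffer.
console.log(Buffer.isBuffer(addr.buffer)) // true
```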
rombou has joined #ipfs
<Guest22134>
Why does Block.addNodeLink sort the links by name? What if I want to maintain the insertion order?
<daviddias>
Trying to figure out some new ipfs-block errors that resulted from the update to the latest repo
<dignifiedquire>
hm?
<daviddias>
getBlock doesn't find a Block after a addBlock
<dignifiedquire>
oO
<dignifiedquire>
does it have sth to do with the locks I introduced?
<daviddias>
some race condition
zdm has joined #ipfs
<dignifiedquire>
those are always loads of fun ;)
<daviddias>
I don't think so, maybe the locks just made it happen more often
<dignifiedquire>
I see
<dignifiedquire>
well good luck, family calls
<daviddias>
have a good Sunday :)
<dignifiedquire>
I'm going to start work on bitswap in the morning, already familiarized myself with the go code more and started making plans in my head
zdm has quit [Remote host closed the connection]
<daviddias>
hm.. ok, we might be working in parallel then
<daviddias>
dignifiedquire: the blocks problem is that `finish` is emitted when everything is flushed to the through stream of the repo
<dignifiedquire>
:D:D:D
<daviddias>
and it seems that now with the locks, the actual write happens a few events later on the loop
<daviddias>
so, when we do the get, because the lock postponed the write a bit
<daviddias>
it is not there
<dignifiedquire>
on all datastores?
<daviddias>
there is just one datastore
<daviddias>
there is no event that gets propagated back
<dignifiedquire>
I mean which blobstore type
<dignifiedquire>
fs, or idb
<daviddias>
both
<daviddias>
like, we write the block, and the through buffers it all
<daviddias>
and as soon as the .pipe happens
jager has joined #ipfs
<daviddias>
writeStream will be the next one to buffer, and the through calls the finish event, while writeStream is still writing
<daviddias>
unlocking the next call on the callback sequence -> getBlock
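(The race in isolation: a through/PassThrough stream emits 'finish' as soon as it has accepted all writes, which can be before the writable it pipes into has flushed anything, so listening on the destination instead closes the window. Plain Node streams, no IPFS code.)

```js
const { PassThrough } = require('stream')
const fs = require('fs')

const through = new PassThrough()
const dest = fs.createWriteStream('/tmp/block.bin')

through.pipe(dest)

through.on('finish', () => {
  // Fires once the through stream has buffered everything;
  // the block may not be on disk yet, so a get right now can miss it.
  console.log('through finished')
})

dest.on('finish', () => {
  // Only now has the destination written everything it was given.
  console.log('destination finished, safe to read the block back')
})

through.end(Buffer.from('some block data'))
```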
<dignifiedquire>
ahh how I leave them streams
<dignifiedquire>
*love
<dignifiedquire>
well have fun, if you can't fix it write a test/pr/issue so I can take a look at it
s_kunk has joined #ipfs
<dignifiedquire>
I go and color pictures of knights now :)
<daviddias>
ahaha that sounds fun! :)
afisk has quit [Remote host closed the connection]
<dignifiedquire>
daviddias: we need to talk about your bad hack ;)
Ronsor` has joined #ipfs
<daviddias>
I've opened an issue and added a really long comment to explain that it is just temporary, even if just for today. I wanted to make sure that polling would be the right way to do it, or if there is another way
jaboja has quit [Ping timeout: 276 seconds]
<daviddias>
Dignifiedquire I know well it is wrong :) but it unblocks voxelot to run the files tests in the browser today :)
<dignifiedquire>
polling is not the right way to go
<dignifiedquire>
is my short answer, longer later
<daviddias>
Ok
<daviddias>
Good that I waited for feedback then :)
<dignifiedquire>
(where right is defined as most efficient)
<daviddias>
We have a hard boundary at the blob-store interface, which we do not own; that makes it hard to patch the blob stores with extra events that -blocks could use
<dignifiedquire>
we need to change or drop the blob-store interface
<dignifiedquire>
it's not good enough for the guarantees that we need
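(For context, the abstract-blob-store surface being discussed looks roughly like the following; this is a from-memory sketch of that spec, and the key point is that the only completion signal is the write stream's callback, with no separate "persisted and readable" event for a caller like ipfs-blocks to wait on.)

```js
// Shape of an abstract-blob-store backend (fs-blob-store, idb-plus-blob-store, ...),
// names approximate.
const store = require('fs-blob-store')('/tmp/blobs')

const ws = store.createWriteStream({ key: 'block-QmFoo' }, (err, metadata) => {
  // 'block-QmFoo' is a placeholder key. The callback fires when the write
  // stream is done; whether the bytes are already visible to a concurrent
  // createReadStream is up to the backend.
  if (err) throw err
  console.log('stored as', metadata.key)
})
ws.end(Buffer.from('block data'))

store.exists({ key: 'block-QmFoo' }, (err, exists) => {
  console.log('exists?', exists)
})
```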
<A124>
ACTION points whyrusleeping at http://dave.cheney.net/2015/08/08/performance-without-the-event-loop ... I read that as C is nice, can be fast, but go is way too simple to write. There are likely more performant languages, but in my case I do not need 20k connections. ... so the conclusion is that go should keep just enough threads as needed to keep the cpus busy and do that transparently and efficiently, which obviously was not what was happening. So I wonder w
<dignifiedquire>
we might not own it, but there is nothing forcing us to use sth that is not working as it should
disgusting_wall has quit [Quit: Connection closed for inactivity]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
cjb has joined #ipfs
Ronsor` has quit [Ping timeout: 250 seconds]
rombou has quit [Read error: No route to host]
Ronsor` has joined #ipfs
rombou has joined #ipfs
rendar has quit [Ping timeout: 250 seconds]
Ronsor` has quit [Remote host closed the connection]
Ronsor` has joined #ipfs
rendar has joined #ipfs
<dignifiedquire>
daviddias: which tests exactly were failing ?
Ronsor` has quit [Ping timeout: 268 seconds]
patcon has quit [Ping timeout: 260 seconds]
dhiru26 has quit [Ping timeout: 240 seconds]
insanity54 has joined #ipfs
jaboja has joined #ipfs
Ronsor` has joined #ipfs
insanity54 has quit [Ping timeout: 246 seconds]
<libman>
I already do pre-generated static files with Web development whenever possible, and I've been thinking a lot about transitioning to IPFS... One thing I'm thinking about is IPLD vs WebComponents.
<libman>
With WebComponents finally becoming viable without polyfills, it's tempting to use them as the canonical data storage for each record. XML may be ugly, but WebComponents allow every device to access an XML(ish) record and render it for viewing.
Encrypt has joined #ipfs
Ronsor` has quit [Ping timeout: 260 seconds]
zorglub27 has joined #ipfs
rombou has quit [Ping timeout: 260 seconds]
<cdata>
libman: I'm working on a body of data storage elements for the Polymer project. I'm very interested to hear more about how you imagined using Web Components as a data exchange format. There are limitations worth considering.
nycoliver has quit [Ping timeout: 250 seconds]
<libman>
It's just a thought experiment for me at this point, trying to understand "the new way of doing things". The first step is letting go of my C optimization / RDBMS normalization mentality from the 90s - it no longer makes economic sense, and SSD's & RAM will keep getting faster and cheaper...
<cdata>
This is true, although there is always a desire at scale to squeeze every last ounce of performance out of a client. There is non-trivial cost associated with registering and creating a Web Component, compared to serializing / deserializing equivalent JSON anyway.
<libman>
I'm imagining that the canonical version of the data (to be spread via P2P, and even preemptively cached by IPFS-based NAT devices everywhere) should be optimized for individual record access. You can have a standard for adding additional metadata over XML to automatically set up a PostgreSQL "search server" when needed, but starting with the data in RDBMS is now "premature optimization".
<dignifiedquire>
daviddias: ping
Ronsor` has joined #ipfs
Akaibu has joined #ipfs
<libman>
For example, I'm contemplating a set of WebComponents for scraping and exporting data from message forums, mailing lists, social networks, etc all into one XML-like format, OpenArticle, OpenPost, OpenThread, etc.
<cdata>
I see, so sort of a microformats approach?
<libman>
One post, one post file. One message forum thread / Facebook Page / etc, one file that includes individual posts.
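(A hypothetical sketch of what a self-rendering record element in that vein could look like; `<open-post>` borrows libman's naming and is not a real library, and the attribute names are invented.)

```js
// Hypothetical element: the markup is the record, and the element knows how
// to render itself, while scrapers still see structured XML-ish data.
class OpenPost extends HTMLElement {
  connectedCallback () {
    const author = this.getAttribute('author') || 'anonymous'
    const date = this.getAttribute('date') || ''
    const body = this.textContent
    this.innerHTML = `<article><header>${author} ${date}</header><p>${body}</p></article>`
  }
}
customElements.define('open-post', OpenPost)

// Usage in a page served from IPFS:
// <open-post author="libman" date="2016-04-24">Hello from a content-addressed forum.</open-post>
```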
<libman>
I'm really very ignorant of the theory behind it. I'm returning to programming after a long absence...
<libman>
I really love IPFS and the goals behind it. I come from a libertarian / data proliferation / wikileaks / prepper / seasteading mentality. I've been writing about this for years, but never did any programming. I'd like to market IPFS-based NAT solutions (with preemptive cashing) to the libertarian / prepper market of people worried about government harming the Internet. Also, check out seasteading.org - these projects are definitely a perfect fit for IPF
zabirauf has joined #ipfs
<A124>
libman optimization always makes sense, but those rules: 1) Do not optimize. 2) Do not optimize, yet.
<A124>
And my personal view on IPFS: it's not a replacement for http, but an enabler that makes it possible to do what (almost) everyone dreamed the web could be.
<libman>
The idea for serverless P2P architecture is to be read-optimized, right?
<libman>
I'm thinking: IPFS for existing data, expendable HTTP APIs for pushing updates.
<A124>
Not sure what that exactly represents in this case, but I guess so.
<A124>
Well, when the quirks are figured out, it should be able to do everything, but as of now using cloudflare for geo locality is nice too. Fetching stuff from IPFS still takes a lot of time in some cases.
<A124>
I had this idea of having static page with JS, and CORS, that can autoselect between ipfs and http. Getting best of both worlds.
ggoZ has quit [Ping timeout: 276 seconds]
<A124>
And of course, that page could be index for both http server and the ipns resolved path. Http server may maintain a preference of protocol, while when using ipfs, the preference would automatically be on ipfs, unless the requests time out. (Who wants to wait 10s to get a gif image ;))
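(A browser-side sketch of that autoselect idea: try the local IPFS gateway with a timeout and fall back to a conventional HTTP mirror if it is too slow. The URLs are placeholders, and CORS on the mirror is assumed to be set up.)

```js
async function fetchWithFallback (cid, httpMirrorUrl, timeoutMs = 10000) {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    // Prefer the local gateway (placeholder URL): content-addressed, served from the swarm.
    const res = await fetch(`http://127.0.0.1:8080/ipfs/${cid}`, { signal: controller.signal })
    if (res.ok) return res
    throw new Error(`gateway answered ${res.status}`)
  } catch (_) {
    // Too slow or unavailable: fall back to the plain HTTP copy.
    return fetch(httpMirrorUrl)
  } finally {
    clearTimeout(timer)
  }
}
```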
<libman>
s/cashing/caching/
mildred has quit [Ping timeout: 276 seconds]
<libman>
One issue with P2P is to have a canonical format for everyone to syndicate. I'm glad that IPLD is coming up with a standard JSON-based format, but I'm just trying to reconcile this with the possibilities offered by WebComponents.
<daviddias>
dignifiedquire: pong
<daviddias>
ready to chat
<dignifiedquire>
daviddias: look at my PR :)
Boomerang has joined #ipfs
<cdata>
libman: +1 to extendable HTTP APIs for updates. Give me a Web Component that points to an IPFS address, lets me easily read / write the data from that address and then provides the new address (if needed) without doing any special effort!
disgusting_wall has joined #ipfs
<daviddias>
ah, so, since we have access to the last stream we control
<daviddias>
nice :)
<daviddias>
good use of the whole abstract-blob-store interface
<daviddias>
thank you :)
<libman>
The system should be usable without HTTP2, which is used for things like "are there newer versions of X" (including push notifications of updated versions) as well as publishing new objects and search.
<libman>
IPFS for P2P, and HTTP2 for O2O (op to op, operators being special and trusted peers volunteering more resources)
<libman>
So, like a message forum: all post and post-list (all posts by thread, hashtag, user, etc) data is stored as individual IPFS objects - the question is WebComponents or JSON?
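(Whatever the record format ends up being, the storage side of that forum sketch could look like this: each post is one immutable object, and a thread is just a list of post hashes. A hedged example against the modern `ipfs-core` API; the JSON field names are invented.)

```js
const { create } = require('ipfs-core')

async function publishPost (ipfs, author, body) {
  // One post, one content-addressed object (invented field names).
  const post = { author, body, date: new Date().toISOString() }
  const { cid } = await ipfs.add(JSON.stringify(post))
  return cid
}

async function publishThread (ipfs, title, postCids) {
  // A thread only references its posts by hash, so it stays small and can be
  // re-published whenever a post is appended.
  const thread = { title, posts: postCids.map(String) }
  const { cid } = await ipfs.add(JSON.stringify(thread))
  return cid
}
```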
<dignifiedquire>
daviddias: which tests were failing? I tested this with unixfs-engine, anything else?
<daviddias>
unixfs-engine browser tests were the ones (and also ipfs-blocks before)
<dignifiedquire>
okay, they are passing fine with those for me
<dignifiedquire>
also added the "stress" test with 1000 blocks
<dignifiedquire>
for the js-ipfs-blocks
<libman>
An advantage of using Web Components (which are pretty much XML) is that they bring sane OO development to the Web. They're also more compatible with search engines: Google will find a message post match in XML a lot better than in JSON.
<daviddias>
"lots of blocks"
<dignifiedquire>
daviddias: funny thing I first tried with 100 000 blocks
<daviddias>
:D
<dignifiedquire>
but my os told me that it couldn't handle it :D
<daviddias>
ahah
<daviddias>
that won't be good 'soon'
<dignifiedquire>
well we know how to solve the issue, sharding
<dignifiedquire>
we can't write 100k files into one single folder
<daviddias>
we don't write 100k files to a single folder right now
computerfreak has joined #ipfs
<daviddias>
we have a poor man's sharding
<daviddias>
where we split blocks by the first 6 chars as a prefix
<dignifiedquire>
hmm
<dignifiedquire>
well it fails the os on 100k blocks all starting with "hello-"
* libman
imagines the future: trillions of IPFS files on your network with a NAT device that fits in your pocket.
<daviddias>
it means the sharding isn't enough
<dignifiedquire>
it might be another issue, let me try sth
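(The failure mode in miniature: if the shard directory is taken from the first characters of the key, 100k test keys that all start with "hello-" land in one directory, while sharding on a hash of the key, or on characters that actually vary, spreads them out. Pure illustration, not the actual repo layout code.)

```js
const crypto = require('crypto')
const path = require('path')

// Prefix sharding: every "hello-*" key collapses into the same bucket.
function shardByPrefix (key) {
  return path.join(key.slice(0, 6), key)
}

// Hash sharding: buckets fill evenly regardless of how keys are named.
function shardByHash (key) {
  const h = crypto.createHash('sha256').update(key).digest('hex')
  return path.join(h.slice(0, 2), h.slice(2, 4), key)
}

console.log(shardByPrefix('hello-1'), shardByPrefix('hello-2')) // same directory
console.log(shardByHash('hello-1'), shardByHash('hello-2'))     // almost certainly different
```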
insanity54 has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-ipfs-blocks-0.2.2 (+1 new commit): https://git.io/vw0F8
<ipfsbot>
js-ipfs/greenkeeper-ipfs-blocks-0.2.2 ae32251 greenkeeperio-bot: chore(package): update ipfs-blocks to version 0.2.2...
mildred has joined #ipfs
<libman>
(Currently, if the average file size is on the order of magnitude of 100 KB, and the average pocket size drive is on the order of magnitude 1 TB, we're still at "hundreds of millions", not "billions" or "trillions", but we'll get there...)
zabirauf has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
rombou has joined #ipfs
<mythmon>
16^6 is still a lot of inodes to have in one directory.
<dignifiedquire>
daviddias: I think we just need to not use async.each, but rather async.eachLimit
<dignifiedquire>
with a reasonable limit
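(The difference in one snippet: `async.each` starts every write at once and can blow the file-descriptor limit, while `async.eachLimit` caps how many are in flight. `blocks` and `pathForBlock` are placeholders for the real repo wiring.)

```js
const async = require('async')
const fs = require('fs')

// Instead of async.each(blocks, writeBlock, done), bound the concurrency:
async.eachLimit(blocks, 50, function writeBlock (block, cb) {
  // blocks and pathForBlock are placeholders, not real module exports.
  fs.writeFile(pathForBlock(block), block.data, cb)
}, function done (err) {
  if (err) throw err
  console.log('all blocks written without exhausting file descriptors')
})
```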
libman has quit [Remote host closed the connection]
patcon has quit [Read error: Connection reset by peer]
<daviddias>
Now that we have coverage reports in our JS repos, I'm really impressed by how well we are testing our modules naturally, always above 80% and most above 90%
jaboja has joined #ipfs
<dignifiedquire>
daviddias:
Guest33214 has joined #ipfs
corvinux has quit [Ping timeout: 260 seconds]
Boomerang has joined #ipfs
a1uz10nn has joined #ipfs
<daviddias>
dignifiedquire: are you still around? quick question gulp release --minor with aegir doesn't bump the minor version
<dignifiedquire>
yes should be `gulp relese minor`
<dignifiedquire>
*release
pfraze_ is now known as pfraze
<daviddias>
`gulp release minor` -> Task 'minor' is not in your gulpfile
<dignifiedquire>
hmm sounds like a bug
<daviddias>
aegir release minor works
<dignifiedquire>
yeah that works
<daviddias>
but since in swarm we have to have the custom gulp file for tests
<dignifiedquire>
hmm
<daviddias>
gulp release minor fails and --minor is ignored
palkeo has quit [Quit: Konversation terminated!]
<dignifiedquire>
gulp release -- minor
<dignifiedquire>
try that
<daviddias>
also doesn't work, same error
<dignifiedquire>
then it doesn't work atm I'm afraid
TheNain38 has quit [Quit: I'm going away]
<daviddias>
got it, will do manually then :)
patcon has joined #ipfs
M-fs_IXlocWlFZHF has joined #ipfs
apiarian has quit [Quit: zoom]
M-fs_IXlocWlFZHF has left #ipfs [#ipfs]
Encrypt has quit [Quit: Quitte]
<M-rongladney1>
Sorry everyone, I was testing the WebRTC function....
<M-rongladney1>
Apparently, it works great on a Google Chrome Book PC.
<cdata>
libman: Web Components are a lot more than a data format, though. If all you need is a structured format for your data, you do not need to use Web Components. XML still works great for that kind of thing. So does JSON.
jaboja has quit [Ping timeout: 250 seconds]
corvinux has joined #ipfs
zorglub27 has quit [Quit: zorglub27]
r04r is now known as zz_r04r
Boomerang has quit [Quit: Leaving]
lothar_m has quit [Quit: WeeChat 1.5-dev]
leer10 has quit [Read error: No route to host]
rombou has quit [Ping timeout: 244 seconds]
ruby32 has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-libp2p-ipfs-0.3.2 (+1 new commit): https://git.io/vwEek
<ipfsbot>
js-ipfs/greenkeeper-libp2p-ipfs-0.3.2 925e181 greenkeeperio-bot: chore(package): update libp2p-ipfs to version 0.3.2...
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-libp2p-ipfs-0.3.2 at 925e181: https://git.io/vwEem
<ipfsbot>
[js-ipfs] diasdavid created update/libp2p-ipfs (+1 new commit): https://git.io/vwEec
<ipfsbot>
js-ipfs/update/libp2p-ipfs 45c67e4 David Dias: update libp2p-ipfs
matoro has quit [Ping timeout: 246 seconds]
taw00 has quit [Read error: Connection reset by peer]
taw00 has joined #ipfs
sega01_ has joined #ipfs
<sega01_>
hey
<sega01_>
i'm pretty new to ipfs. just trying to fetch an object and getting timeouts. if i kill the "ipfs client", the daemon says: ERROR core/serve: Path Resolve error
<sega01_>
and my connection seems to be hammered right now over udp. i guess IPFS uses udp?