Tv` has quit [Quit: Connection closed for inactivity]
afisk has quit [Ping timeout: 250 seconds]
rendar has joined #ipfs
<ianopolous>
good morning!
tmg has joined #ipfs
dguttman has quit [Quit: dguttman]
the193rd has left #ipfs [#ipfs]
zz_r04r is now known as r04r
devbug has quit [Quit: ZZZzzz…]
<daviddias>
dignifiedquire: wanna hear a new joke? "phantomjs(81307,0x7fff79adc000) malloc: *** error for object 0x7f915d836400: pointer being freed was not allocated
<daviddias>
"
kvda has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<daviddias>
:|
<dignifiedquire>
daviddias: that's not new :P
<daviddias>
ah, you heard about it before? Awesome!
<dignifiedquire>
that's just phantom being the super stable thing that it is :D
<dignifiedquire>
make sure to kill all instances and retry
<dignifiedquire>
it's not you it's phantom
_Vi has quit [Ping timeout: 276 seconds]
<daviddias>
killed all of them, but it still happens, will push to travis and see if it is only in my env
_Vi has joined #ipfs
<daviddias>
seems that CI is ok
<daviddias>
I might have more phantom stuff that I don't know about
__konrad_ has quit [Remote host closed the connection]
<daviddias>
dignifiedquire: thank you for the phantom tip, I might have spent hours looking for what the heck was going on
<dignifiedquire>
:)
afisk has joined #ipfs
<daviddias>
going to try the aegir update
__konrad_ has joined #ipfs
afisk has quit [Ping timeout: 276 seconds]
s_kunk has joined #ipfs
M-MichaelBI has joined #ipfs
ggoZ has joined #ipfs
M-MichaelBI has left #ipfs [#ipfs]
s_kunk has quit [Quit: Read error: Connection reset by beer]
s_kunk has joined #ipfs
s_kunk has quit [Changing host]
s_kunk has joined #ipfs
s_kunk has quit [Max SendQ exceeded]
<rendar>
does ipfs save file hashes in text form like "Qm..." or in binary form, internally?
s_kunk has joined #ipfs
<daviddias>
dignifiedquire: some time to chat about bitswap?
<dignifiedquire>
daviddias: sure
Encrypt has joined #ipfs
<daviddias>
rendar: in memory it saves as binary
<daviddias>
Qm is just the b58 encoded version for human readability
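A minimal sketch of the distinction daviddias describes, assuming the `bs58` npm module (the module choice is an illustration, not from the log):

```js
const bs58 = require('bs58')

// Internally a hash is just bytes: a sha2-256 multihash is
// <0x12 = sha2-256><0x20 = 32-byte length><digest>
const binary = Buffer.concat([
  new Buffer([0x12, 0x20]),
  new Buffer('ab'.repeat(32), 'hex') // stand-in digest for illustration
])

// "Qm..." is only the base58 encoding of those bytes for human readability;
// the 0x12 0x20 prefix is what always encodes to the leading "Qm"
console.log(bs58.encode(binary))
```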
<dignifiedquire>
daviddias: I always have time for bitswap :)
<daviddias>
bitswap relies on identify and stream muxing existing to get the peerId of an incoming connection
<daviddias>
This makes it a tad more complex to patch that context up, plus when we get relay, we can't get the peerId from the conn anymore, because there will be multiple hops
<daviddias>
it would be great if the wantlist message carried with it the peerId that the wantlist belongs to
<dignifiedquire>
daviddias: not sure I understand
<daviddias>
the _receiveMessage signature expects (peerId, message)
<dignifiedquire>
if we send a bitswap message the call is send(to, msg) so we know where to send it to
<daviddias>
sending is ok
<dignifiedquire>
yes and that peerId is where the message is coming from
rhalff has quit [Ping timeout: 260 seconds]
<daviddias>
yep, but to know that
Boomerang has quit [Quit: Leaving]
Boomerang has joined #ipfs
<daviddias>
either I ensure I know which peerId is tied to the underlying connection (meaning that Identify always has to be on, and because of that, Stream Muxing as well)
<daviddias>
or (the more flexible way) we send, in the protobuf, the peerId of the peer that is sending the new wantlist
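A sketch of what that second option could look like, using the `protocol-buffers` npm module; the Wantlist shape mirrors the existing bitswap message, while the `from` field is the hypothetical addition being proposed:

```js
const protobuf = require('protocol-buffers')

const pb = protobuf(`
  message Message {
    message Wantlist {
      message Entry {
        optional string block = 1;   // block multihash
        optional int32 priority = 2;
        optional bool cancel = 3;
      }
      repeated Entry entries = 1;
      optional bool full = 2;
    }
    optional Wantlist wantlist = 1;
    repeated bytes blocks = 2;
    optional bytes from = 3;         // hypothetical: peerId of the sender
  }
`)

const msg = pb.Message.encode({
  wantlist: { entries: [], full: true },
  from: new Buffer('...sender peerId bytes...')
})
```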
reit has joined #ipfs
<dignifiedquire>
I see, but that means changing the protobuf which go-ipfs doesn't know about
<daviddias>
btw dignifiedquire your new aegir PR is working :)
<dignifiedquire>
daviddias: cool, going to merge it then soon
<daviddias>
dignifiedquire: exactly
<dignifiedquire>
so let's find a way without changing the protobuf, I would suggest
<dignifiedquire>
so best to call those that are browser-specific .browser.js, the node ones .node.js, and those that are for both .spec.js
<dignifiedquire>
and then load all .browser.js in browser.js and all .node.js in node.js
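Roughly, the layout being described; the test file names here are hypothetical:

```js
// test/browser.js — entry point for the browser test run
require('./add.browser.js') // browser-only tests
require('./add.spec.js')    // tests shared with node

// test/node.js — entry point for the node test run
require('./add.node.js')    // node-only tests
require('./add.spec.js')    // the same shared tests
```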
<daviddias>
Is that already implemented?
<daviddias>
Got it
afisk has joined #ipfs
<ipfsbot>
[js-ipfs] nginnever pushed 1 new commit to files: https://git.io/vw2kV
<ipfsbot>
js-ipfs/files 318fa2d nginnever: add cli does not match go
Boomerang has joined #ipfs
computerfreak has quit [Quit: Leaving.]
the193rd has joined #ipfs
afisk has quit [Remote host closed the connection]
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
<steefmin>
ipfs name publish has a default lifetime of 24h, does that mean I need to refresh it every day? or is it the lifetime of a record at another node?
the193rd has left #ipfs ["WeeChat 1.4"]
<Boomerang>
You can set your node to republish by itself
<Boomerang>
ipfs config Ipns.RepublishPeriod 12h
<steefmin>
cool, thanks
<Boomerang>
(But the node has to be left on for the republish to actually happen...)
<steefmin>
yeah, the command won't work without the daemon running. got it
<Boomerang>
you can see the current config with: ipfs config show
computerfreak has joined #ipfs
<steefmin>
yeah, was still default (empty)
tmg has quit [Ping timeout: 268 seconds]
<steefmin>
i updated to 0.4.0, but Version.Current still displays "0.3.10", is that correct?
computerfreak has quit [Client Quit]
Zapadlo has joined #ipfs
computerfreak has joined #ipfs
computerfreak has quit [Remote host closed the connection]
<Zapadlo>
Hello all, I'm running a daemon locally, with peer id: `QmdB5DpP....` I can look it up with: `ipfs dht findpeer QmdB5D....` and I get the list of addresses (wlan0, lo ipv4/ipv6). However when I try to connect: `ipfs swarm connect /ip4/127.0.0.1/tcp/4001/QmdB5Dp` I get an error: `Error: invalid peer address: no protocol with name QmdB5Dp...`
<Zapadlo>
(I'm running another daemon with a different init location)
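The likely cause of the error above: in a multiaddr, the peer id must be introduced by the `ipfs` protocol name. A sketch with the `multiaddr` npm module; the peer id here is a hypothetical stand-in, since the real one is elided in the log:

```js
const multiaddr = require('multiaddr')
const id = 'QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG' // hypothetical peer id

try {
  multiaddr('/ip4/127.0.0.1/tcp/4001/' + id)
} catch (err) {
  // throws, because the bare peer id isn't a protocol name
  console.log(err.message)
}

// with /ipfs/ before the peer id it parses, the form `ipfs swarm connect` expects
console.log(multiaddr('/ip4/127.0.0.1/tcp/4001/ipfs/' + id).toString())
```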
<Boomerang>
steefmin: It looks like you're still running the older version, how did you try to update it?
<steefmin>
i downloaded the ipfs-update tool
<steefmin>
if i run 'ipfs version' it shows 0.4.0
afisk has joined #ipfs
<Boomerang>
It's possible Version.Current shows the version your repo was created with... Not too sure about that though. If ipfs version says 0.4.0, it's probably fine :)
<steefmin>
true. thanks for the help
Zapadlo has quit [Ping timeout: 250 seconds]
afisk has quit [Ping timeout: 276 seconds]
Tv` has joined #ipfs
cblgh has quit [Changing host]
cblgh has joined #ipfs
<dignifiedquire>
daviddias: how are you coming along?
afisk has joined #ipfs
afisk has quit [Remote host closed the connection]
<daviddias>
dignifiedquire: helping voxelot debugging a files situation
<daviddias>
we've found the issue (merkle-dag module is not sorting the links properly)
<daviddias>
fixing now
<dignifiedquire>
oO good you found that
<daviddias>
on the bitswap, starting to add the events for 'peer-connected' and 'peer-disconnected' on swarm, so that bitswap can use them
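A rough sketch of that wiring; the handler names on the bitswap side are hypothetical, and `swarm` and `bitswap` are assumed to be existing instances:

```js
// swarm emits the new events; bitswap subscribes, so it no longer has to
// recover the peerId from the raw connection itself
swarm.on('peer-connected', (peerInfo) => {
  bitswap.peerConnected(peerInfo.id)    // hypothetical handler name
})
swarm.on('peer-disconnected', (peerInfo) => {
  bitswap.peerDisconnected(peerInfo.id) // hypothetical handler name
})
```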
herzmeister has quit [Quit: Leaving]
<dignifiedquire>
cool
<daviddias>
dignifiedquire: I was :| when I saw that a folder with the same data and the same links had two different multihashes
herzmeister has joined #ipfs
<dignifiedquire>
I can imagine
<daviddias>
"oh did we found a collision?" ahaha
Boomerang has quit [Quit: Leaving]
<dignifiedquire>
daviddias: need to pick your brain when you have a second
ashark_ has joined #ipfs
patcon has joined #ipfs
<daviddias>
just 5 more mins
disgusting_wall has joined #ipfs
afisk has joined #ipfs
bsm117532 has quit [Remote host closed the connection]
<daviddias>
I'm not finding where merkledag links are sorted in go-ipfs
<daviddias>
seems that they aren't
<daviddias>
is anyone familiar with that part?
matoro has quit [Ping timeout: 260 seconds]
M-alphakamp has joined #ipfs
jedahan has joined #ipfs
M-keverets has joined #ipfs
matoro has joined #ipfs
<richardlitt>
daviddias: nope, sorry.
_Vi has quit [Ping timeout: 260 seconds]
<conway>
richardlitt: With my experience using EC2 and S3, I don't think S3 buckets would be a good fit for ipfs. Mainly because storing a file to be served goes from file (on S3) -> S3 API -> EC2 -> served, versus blocks (on S3) -> S3 API (one call per block) -> EC2 -> served
<conway>
It amounts to turning a single file call into a whole bunch of calls, for possibly unknown gain. Now, what would be interesting is a way to precompute without sending to the blockcache. Store it in a text file in .ipfs and convert the files to an ipfs link in real time. Possible?
<richardlitt>
huh, cool.
<richardlitt>
I'm not sure. Would be fun to try!
<richardlitt>
conway: want to put that into the IPFS issue? Might be useful for the guy who posted it.
M-keverets has left #ipfs ["User left"]
<conway>
I was thinking of an orthogonal issue. I have 2TB of data I'd like to convert over. The problem is, I need 2TB of cache during the conversion. If there were guarantees that files would not be manipulated (say, atime turned on), could precompute be accomplished? All it's doing is hashing the blocks, and hashing the hashes, right? Not to trivialize what ipfs is doing...
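conway's "hash the blocks, hash the hashes" intuition, as a toy sketch; this deliberately ignores the real unixfs/merkledag encoding (chunk framing, protobuf wrapping), so the resulting hashes would not match what `ipfs add` produces:

```js
const crypto = require('crypto')

function sha256 (buf) {
  return crypto.createHash('sha256').update(buf).digest()
}

// chunk a file buffer, hash each chunk, then hash the concatenated hashes
function precompute (file, chunkSize) {
  chunkSize = chunkSize || 256 * 1024 // ipfs's default chunk size
  const leaves = []
  for (let off = 0; off < file.length; off += chunkSize) {
    leaves.push(sha256(file.slice(off, off + chunkSize)))
  }
  return { leaves: leaves, root: sha256(Buffer.concat(leaves)) }
}

console.log(precompute(new Buffer(1024 * 1024)).root.toString('hex'))
```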
Ronsor` has quit [Ping timeout: 250 seconds]
<daviddias>
richardlitt: thank you anyway :)
<daviddias>
dignifiedquire: ping
<dignifiedquire>
daviddias: 2min?
<daviddias>
do you know a proper way to sort strings by their ascii values?
<conway>
richardlitt: just responded to the GH post about s3.
<daviddias>
We've tried a dozen different ways
<dignifiedquire>
ascii values? what about utf8 characters?
<daviddias>
including its buffer representation with ascii encoding
<daviddias>
or utf8
<dignifiedquire>
there is no canonical way for that, what sort do you want?
<daviddias>
thing is that JavaScript is saying that lowercase "n" is after an uppercase "I"
<dignifiedquire>
yeah don't use the built in sort
<daviddias>
we are using stable
<daviddias>
with localeCompare
<dignifiedquire>
hmm
<dignifiedquire>
you can downcase everything if you want to
<daviddias>
and already transformed the strings to Buffer(str, 'ascii') and Buffer(str, 'utf-8')
<dignifiedquire>
but again, there is no canonical way, so not sure which sort exactly you want
<daviddias>
if we downcase, we can have a collision
<daviddias>
I want to sort strings like in C with strcmp
<daviddias>
(which is what Go does)
<dignifiedquire>
then you probably have to implement that yourself
<dignifiedquire>
or find an npm module ;)
<conway>
the IPFS hashes are base62, right? (26*2 +10)
<richardlitt>
conway: thanks! :)
<conway>
just convert the IPFS(base)->number. Then order by number. Then convert number->IPFS(base)
<conway>
richardlitt: gladly :)
<conway>
daviddias: the only problem with that method is that javascript doesn't understand large bases. You'd have to write a function that takes in the "string", looks at the first char (least significant), uses a switch statement to multiply by its appropriate number, then moves on to the next character to the left. But this could do any base that you have the lookup chart for.
ashark_ has quit [Ping timeout: 276 seconds]
ylp1 has quit [Quit: Leaving.]
Looking has joined #ipfs
<daviddias>
dignifiedquire: ok, I'm available
<dignifiedquire>
daviddias: got your strings sorted? ;)
<daviddias>
conway: we are sorting file paths
<daviddias>
we store the hashes in base58 for now though
libman has joined #ipfs
<daviddias>
dignifiedquire: not yet, but also don't want to keep you waiting
_rht has quit [Quit: Connection closed for inactivity]
Boomerang has joined #ipfs
ggoZ has quit [Ping timeout: 260 seconds]
mildred has quit [Quit: Leaving.]
<daviddias>
> var a = new Buffer('nested', 'utf8')
<daviddias>
undefined
<daviddias>
> var b = new Buffer('Ing', 'utf8')
<daviddias>
undefined
<daviddias>
> a.compare(b)
<daviddias>
1
<daviddias>
>
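What the REPL session above demonstrates, wrapped as a comparator; a sketch, and the same function should also fit the `stable` sort mentioned earlier in the discussion:

```js
// compare raw bytes, strcmp-style, instead of locale order;
// Buffer#compare returns -1/0/1, which is exactly a comparator's contract
function compareBytes (a, b) {
  return new Buffer(a, 'utf8').compare(new Buffer(b, 'utf8'))
}

console.log(['nested', 'Ing'].sort(compareBytes))
// => [ 'Ing', 'nested' ] — uppercase 'I' (0x49) sorts before lowercase 'n' (0x6e)
```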
<area>
I have two nodes that both have peers, and are both behind firewalls. Files created on either can be seen almost immediately by e.g. the ipfs.io gateway, but not by each other. Is this expected behaviour?
Encrypt has quit [Quit: Quitte]
zeroish has quit [Remote host closed the connection]
<dignifiedquire>
daviddias: is it important that the solution handles non-ascii characters well?
<conway>
area: are they both behind the same firewall (on the same private network)?
<voxelot>
shouldn't push a pr on merkle-dag just yet, right
<voxelot>
ohh sure
<daviddias>
if it is done, if not, later
<daviddias>
well, it seems we have the right thing for js-ipfs-merkle-dag
<voxelot>
well i was doing the tests and came to the conclusion that buffer wasn't failing
<daviddias>
buffer (or what dignifiedquire showed) is the right way
<voxelot>
we just need to know if we need to reverse sort or not
<nicolagreco>
daviddias: is `conn` of the .dial callback a readable-stream?
<daviddias>
reverse sort is granted
<daviddias>
nicolagreco: it is a duplex stream
<voxelot>
k then i'll push buffer with reverse sort
<daviddias>
you can read and write
<conway>
hmm.. I assume that /dns/example.com/tcp/4001/ipfs/~~~~~ isn't supported as a multiaddr? If it were, I'd have a workaround for Tor support.
<nicolagreco>
daviddias: perfect, thanks!
<daviddias>
conway: not yet
<conway>
rats :P
<daviddias>
conway: or better, it is a valid multiaddr, but the ipfs code doesn't know what to do with it
<daviddias>
nicolagreco: no problem :)
<nicolagreco>
daviddias: is there any package you would suggest for writing json to a stream?
<conway>
gotcha. I'm doing funky stuff on my system resolver daemon, so my linux systems can natively resolve a Tor hidden service without proxy goofiness :)
insanity54 has quit [Ping timeout: 250 seconds]
<Moose>
quick question: is this the right place to ask questions about js-ipfs-api?
<daviddias>
depends
<daviddias>
you want to write just 1 JSON object
<daviddias>
or several?
<daviddias>
if just one, JSON.stringify should suffice
<daviddias>
if several, check ndjson
<nicolagreco>
daviddias: both sides will keep on writing each other's json
ashark_ has joined #ipfs
<nicolagreco>
or cbor
Moose is now known as MooseFO
<dignifiedquire>
MooseFO: yes
<conway>
area: it looks like I have no good solutions. The only one I can think of is, if you have a public VPS, you can do an 'ipfs swarm connect' from each firewalled machine to the VPS. It's not ideal, but it does work.
<daviddias>
check ndjson then
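A minimal sketch of the ndjson suggestion; the `PassThrough` stands in for the duplex `conn` the two sides would actually share:

```js
const ndjson = require('ndjson')
const PassThrough = require('stream').PassThrough

const conn = new PassThrough() // stand-in for the duplex conn from .dial

const out = ndjson.serialize() // objects in -> newline-delimited JSON out
const input = ndjson.parse()   // newline-delimited JSON in -> objects out

out.pipe(conn).pipe(input)     // both sides can keep writing json this way

input.on('data', (obj) => console.log('received', obj))
out.write({ hello: 'world' })
```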
matoro has quit [Ping timeout: 260 seconds]
rhalff has joined #ipfs
<area>
conway: That's just to give them common nodes they're connected to? They already do
<area>
(according to ipfs swarm peers)
Iiterature has joined #ipfs
<MooseFO>
nice i'm a newb, what's the most common mistake when you get this error: Payload stream closed prematurely - I'm using a meteor.js setup with js-ipfs-api
<conway>
it seems that their TTL is really high. I got an IP address of ***.***.118.42. Is that correct?
<conway>
high propagation == really laggy updates to records.
Ronsor` has joined #ipfs
pfraze has quit [Remote host closed the connection]
<whyrusleeping>
yeah
MooseFO has quit [Quit: Leaving]
<whyrusleeping>
okay, so i should just wait it out?
Ronsor` has quit [Ping timeout: 246 seconds]
<conway>
Possibly. I've not worked with the .xyz TLD before. I do know that when I did a "StartupWeekend", they gave away .co domains. And evidently most people rage-quit on the "free domains" because of the jankiness and weirdness they did. In effect, it may not be you, but your TLD.
<whyrusleeping>
mmm, okay
<conway>
I'd do 2 things: 1. get a .com/.net/.org tld (I use google domains, and they don't do weirdness), and 2. wait 12h...
<whyrusleeping>
but .net domains cost me real people money
<whyrusleeping>
.xyz is just imaginary money
rhalff has quit [Ping timeout: 250 seconds]
reit has quit [Quit: Leaving]
<conway>
Weird. they're charging me $10/yr for a .xyz
<mythmon>
whyrusleeping: when I do dig +trace camlistore.xyz, I see "couldn't get address for 'ns2.git.sexy': not found"
<mythmon>
where ns2.git.sexy is listed as one of the NS records for camlistore.xyz. That's probably a problem.
<conway>
ns1.git.sexy provides the response though. That's what clued me in that it was *probably* a propagation delay.
<whyrusleeping>
mythmon: i can fix the ns2 thing real quick and see if that helps
Ronsor` has joined #ipfs
<whyrusleeping>
conway: that's odd... i've been paying $0.88 for .xyz domains
Akaibu has joined #ipfs
<richardlitt>
dignifiedquire: you should put "Automated JavaScript project management." as the GitHub repository descriptor for ægir
<mythmon>
whyrusleeping: well, i don't see the problem with ns2 any more, but it still isn't working. how long has it been?
<whyrusleeping>
mythmon: at least 12 hours
Ronsor` has quit [Ping timeout: 252 seconds]
<whyrusleeping>
i bought the domain around 7pm last night
<mythmon>
and you set your registrar to point the NS records at your custom dns server?
<mythmon>
i'd imagine `dig NS camlistore.xyz` would work fine, but I don't see that working either.
<sivachandran>
whyrusleeping: can i increase the object size limit past 512kb?
pfraze has joined #ipfs
rhalff has joined #ipfs
<whyrusleeping>
sivachandran: the object size limit is 1MB i believe
<whyrusleeping>
mythmon: yeah, my registrar points the NS records to ns1 and ns2.git.sexy
<whyrusleeping>
which is my dns server
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-ipfs-merkle-dag-0.4.1 (+1 new commit): https://git.io/vw2A7
<ipfsbot>
js-ipfs/greenkeeper-ipfs-merkle-dag-0.4.1 03dcd04 greenkeeperio-bot: chore(package): update ipfs-merkle-dag to version 0.4.1...
<sivachandran>
whyrusleeping: object put doesn't allow adding more than 512kb. i can see the 512kb check in the object put implementation
<whyrusleeping>
sivachandran: ah, you want to increase it past 512kb
<whyrusleeping>
what's your use case?
<mythmon>
whyrusleeping: weird. I'm at the end of my knowledge. good luck!
<daviddias>
whyrusleeping: need to confirm: do you order links in go-ipfs merkledag by [a, b, c] or [c, b, a]? I believe it is the latter, but want to double check
<sivachandran>
whyrusleeping: i am adding huge files (~300GB) to IPFS. by keeping the object size at 1MB i can minimize the number of objects and the overhead associated with the intermediate DAG nodes
<whyrusleeping>
mythmon: yeah, i've been googling hopelessly
<whyrusleeping>
sivachandran: are you seeing that much overhead?
<whyrusleeping>
even with a 300GB file you shouldn't see more than a two-layer-deep merkledag
<daviddias>
asc or desc?
<whyrusleeping>
daviddias: ascending
<ipfsbot>
[js-ipfs] greenkeeperio-bot opened pull request #168: ipfs-merkle-dag@0.4.1 breaks build
<whyrusleeping>
so [a,b,c]
<sivachandran>
there will be around 300MB of overhead with 1MB objects. with a 256KB object size there will be more.
<mythmon>
whyrusleeping: i'd imagine there is something wrong with the setup on your registrar, because `dig NS camlistore.xyz` doesn't work, and that should be provided by your registrar
<mythmon>
might be worth talking to their support
<whyrusleeping>
mythmon: mmm, good point
<whyrusleeping>
i'll email them
<sivachandran>
whyrusleeping: keeping the number of objects low allows me to get better performance when I use S3 as the datastore
<whyrusleeping>
sivachandran: hrm... okay
<whyrusleeping>
you can bump that limit up to 768k
<whyrusleeping>
the hard network limit is 1MB
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<whyrusleeping>
anything even a byte over 1MB won't be transferred through the network
<whyrusleeping>
one other option for you is to try increasing the size of the intermediate blocks
Boomerang has quit [Quit: Leaving]
<whyrusleeping>
you can change that in importer/helpers/helpers.go
<whyrusleeping>
'roughLinkBlockSize'
<sivachandran>
whyrusleeping: but i am able to use 1MB as the object size if i add the file through 'ipfs add' along with the rabin chunker
<sivachandran>
basically i specified 1MB as min, avg and max size
<daviddias>
whyrusleeping: thank you :)
<daviddias>
case closed!
<whyrusleeping>
sivachandran: hrm... okay fine.
<whyrusleeping>
go ahead and set the limit on object put to 1MB
<whyrusleeping>
but do also try changing the linkblocksize
<sivachandran>
okay
<sivachandran>
another question, related to gx: how do we handle go subpackages with gx?
<sivachandran>
assume a subpackage at "github.com/X/Y/Z" wants to refer to its parent package at "github.com/X/Y"
<sivachandran>
as gx uses a multihash to identify the package, how can the subpackage refer to the parent package without changing the hash?
<mythmon>
sivachandran: that's something you can't do.
<mythmon>
a system where all the links are content addressed hashes (like gx or ipfs) means that you simply can't create a loop (unless you can somehow break the hashing algorithm)
<sivachandran>
but this pattern is something widely used in Golang packages
<mythmon>
this pattern isn't compatible with gx.
<sivachandran>
as the go-ipfs implementation is moving away from Godeps to gx, i am wondering how go-ipfs can handle these packages
<sivachandran>
i already got stuck on this limitation when i tried to add S3 datastore support to go-ipfs
<sivachandran>
initially i tried to include the dependency packages through gx but realized this limitation
<sivachandran>
now trying to add them through Godeps
herzmeister has quit [Quit: Leaving]
jedahan has joined #ipfs
jedahan has quit [Remote host closed the connection]
herzmeister has joined #ipfs
<mythmon>
as i understand it, go-ipfs has already switched to gx. I'm not sure how they dealt with this problem (if at all)
<mythmon>
hmm. maybe i'm wrong about that. I see Godeps in the go-ipfs repo.
Not_ has joined #ipfs
lispmeister has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
Not_ is now known as Guest89323
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-ipfs-merkle-dag-0.4.2 (+1 new commit): https://git.io/vwavM
<ipfsbot>
js-ipfs/greenkeeper-ipfs-merkle-dag-0.4.2 9cf714d greenkeeperio-bot: chore(package): update ipfs-merkle-dag to version 0.4.2...
Guest89323 has quit [Client Quit]
Not_Jesus has joined #ipfs
s_kunk has quit [Ping timeout: 252 seconds]
Not_Jesus has quit [Client Quit]
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-ipfs-merkle-dag-0.4.1 at 03dcd04: https://git.io/vwafa
navi__ has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid deleted greenkeeper-ipfs-merkle-dag-0.4.2 at 9cf714d: https://git.io/vwaJn
<dignifiedquire>
daviddias: you are an owner on aegir, and I just merged, could you do the release please? (major bump)
<daviddias>
Ok :D
noffle has quit [Quit: WeeChat 0.3.8]
Trieste has quit [Ping timeout: 264 seconds]
Trieste has joined #ipfs
ggoZ has joined #ipfs
Encrypt has joined #ipfs
noffle has joined #ipfs
s_kunk has joined #ipfs
matoro has quit [Ping timeout: 260 seconds]
ygrek has joined #ipfs
<ashark_>
Anyone know why the container_daemon script is telling me the repo dir isn't writable even if I `chmod -R 777` the whole directory? :-/ It's printing the correct path.
afisk has quit [Remote host closed the connection]
Iiterature has quit [Quit: Connection closed for inactivity]
<ashark_>
Nevermind, it's because I'm a moron. Disregard.
afisk has joined #ipfs
afisk has quit [Remote host closed the connection]
afisk has joined #ipfs
matoro has joined #ipfs
_rht has quit [Quit: Connection closed for inactivity]
pfraze has quit [Remote host closed the connection]
_rht has joined #ipfs
computerfreak has joined #ipfs
jedahan has joined #ipfs
pfraze has joined #ipfs
Ronsor` has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
palkeo has joined #ipfs
jreighley has joined #ipfs
Ronsor` has quit [Remote host closed the connection]
Akaibu has quit [Quit: Connection closed for inactivity]
jaboja has joined #ipfs
rhalff has quit [Ping timeout: 268 seconds]
Ronsor` has joined #ipfs
<nicolagreco>
daviddias: is there a way to check if a `conn` duplex stream is still on? (or I could just use .on('end'))
Ronsor` has quit [Read error: Connection reset by peer]
Ronsor` has joined #ipfs
<ipfsbot>
[js-ipfs] nginnever pushed 1 new commit to files: https://git.io/vwaW4
<daviddias>
nicolagreco: if you haven't received a 'end' event
<daviddias>
you can still write to it
<daviddias>
if the connection breaks
<daviddias>
it will 'error'
<nicolagreco>
ok this is what I was looking for, thanks!
<daviddias>
:)
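Putting daviddias's answers together, a minimal sketch, assuming `conn` is the duplex stream from the `.dial` callback:

```js
// track liveness via the stream's own lifecycle events
let alive = true
conn.on('end', function () { alive = false })   // remote side finished cleanly
conn.on('error', function () { alive = false }) // the connection broke

// until one of those fires, writing is fine
if (alive) conn.write(new Buffer('still here'))
```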
afisk has quit [Remote host closed the connection]
devbug has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-ipfs-repo-0.7.0 (+1 new commit): https://git.io/vwalW
<ipfsbot>
js-ipfs/greenkeeper-ipfs-repo-0.7.0 d9f87c8 greenkeeperio-bot: chore(package): update ipfs-repo to version 0.7.0...
_Vi has quit [Ping timeout: 260 seconds]
Akaibu has joined #ipfs
<ipfsbot>
[js-ipfs] nginnever pushed 1 new commit to files: https://git.io/vwaBI
<ipfsbot>
js-ipfs/files 72e193c nginnever: add core test
insanity54 has joined #ipfs
infinity0 has quit [Ping timeout: 260 seconds]
anshukla has joined #ipfs
rendar has quit [Ping timeout: 260 seconds]
jaboja has quit [Ping timeout: 240 seconds]
sivachandran has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
sivachandran, mythmon: the way we deal with the relative imports is to wait until the last second to rewrite paths
<whyrusleeping>
so when gx publishes the package, all the import paths are the github.com/...
<whyrusleeping>
and *after* we download it, we rewrite everything.
<whyrusleeping>
since at that point we know the correct hash
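A toy illustration of that ordering, reusing the package names from sivachandran's question; the hash is hypothetical, and the rewritten shape follows gx's `gx/ipfs/<hash>/<name>` import convention:

```js
// as published: import paths still point at github
const published = 'import "github.com/X/Y/Z"'

// after download the package's hash is finally known, so paths get rewritten
const hash = 'QmSomePackageHash' // hypothetical, known only after publishing
const rewritten = published.replace('github.com/X/Y', 'gx/ipfs/' + hash + '/Y')

console.log(rewritten) // import "gx/ipfs/QmSomePackageHash/Y/Z"
```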
infinity0 has joined #ipfs
patcon has quit [Ping timeout: 250 seconds]
rendar has joined #ipfs
pfraze has quit [Remote host closed the connection]
mildred has joined #ipfs
jager_ has joined #ipfs
ashark__ has joined #ipfs
<Akaibu>
whyrusleeping: have you tried to seriously design an OS with ipfs integrated yet?
infinity0 has quit [Ping timeout: 240 seconds]
<whyrusleeping>
Akaibu: i wish i had time for such a thing
<whyrusleeping>
i would absolutely love to do that
ggp0647 has quit [Ping timeout: 264 seconds]
jager has quit [Ping timeout: 260 seconds]
<Akaibu>
i'm thinking that sites or stuff you have downloaded would automatically be in the ipfs swarms of that same item
ashark_ has quit [Ping timeout: 276 seconds]
<whyrusleeping>
there is but one ipfs swarm ;)
<Akaibu>
yea, but i think you know what i mean
_Vi has joined #ipfs
<Akaibu>
but the question is, would you have a program that would convert the shards into a usable format, or would it be like it is now, where if you want to use it and have it be in ipfs, you have to use twice the space
<Akaibu>
?
<Akaibu>
sorry if the question is confusing
<Akaibu>
whyrusleeping: ping ^
navi__ has quit [Remote host closed the connection]
<whyrusleeping>
The entire filesystem of such an OS would be backed by ipfs
infinity0 has joined #ipfs
<whyrusleeping>
essentially what you propose when you talk about an 'ipfs OS' is a kernel driver for ipfs