infinity0 has quit [Remote host closed the connection]
john1 has quit [Ping timeout: 260 seconds]
infinity0_ has joined #ipfs
infinity0_ has quit [Changing host]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
sirdancealot has quit [Ping timeout: 258 seconds]
Akaibu has quit [Quit: Connection closed for inactivity]
dimitarvp` has joined #ipfs
dimitarvp has quit [Disconnected by services]
dimitarvp` is now known as dimitarvp
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinitesum has joined #ipfs
infinity0 has joined #ipfs
Jesin has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
ckwaldon has joined #ipfs
neurrowcat has quit [Ping timeout: 260 seconds]
jamiew has joined #ipfs
infinitesum has quit [Quit: infinitesum]
infinitesum has joined #ipfs
neurrowcat has joined #ipfs
infinitesum has quit [Quit: infinitesum]
ipfsq has quit [Quit: Lost terminal]
infinitesum has joined #ipfs
spikebike has joined #ipfs
jaboja has quit [Ping timeout: 240 seconds]
<daviddias>
ipfsq could you open an issue with your question, not sure if I get it
<deltab>
oh right, just src="/ipfs/Qmfoo" would probably do
owlet has quit [Quit: leaving]
<deltab>
hmm, no, that'd rely on having a gateway
<deltab>
daviddias: as I understood it: if you're using js-ipfs in a browser, you can connect to peers and receive data, but how do you then use that data in <img> etc.? you'd have to convert it to a data: url, blob: url, or the like
<deltab>
oh, for video and audio, you could use MSE (media source extensions), I expect
shizy has joined #ipfs
infinitesum has quit [Quit: infinitesum]
infinitesum has joined #ipfs
[X-Scale] has joined #ipfs
X-Scale has quit [Ping timeout: 272 seconds]
[X-Scale] is now known as X-Scale
infinitesum has quit [Quit: infinitesum]
ygrek has quit [Ping timeout: 260 seconds]
skeuomorf has quit [Remote host closed the connection]
infinitesum has joined #ipfs
shizy has quit [Ping timeout: 246 seconds]
sirdancealot has joined #ipfs
Akaibu has joined #ipfs
infinitesum has quit [Quit: infinitesum]
fleeky has joined #ipfs
jamiew has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
shizy has joined #ipfs
shizy has quit [Client Quit]
shizy has joined #ipfs
webdev007 has quit [Remote host closed the connection]
jamiew has joined #ipfs
jamiew has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
shizy has quit [Ping timeout: 260 seconds]
Quiark_ is now known as Quiark
Jesin has quit [Quit: Leaving]
dimitarvp has quit [Read error: Connection reset by peer]
koshii has quit [Ping timeout: 240 seconds]
koshii has joined #ipfs
john2 has quit [Ping timeout: 240 seconds]
Foxcool has joined #ipfs
reit has quit [Ping timeout: 240 seconds]
tilgovi has joined #ipfs
alpinekid_ has joined #ipfs
Akaibu has quit [Quit: Connection closed for inactivity]
<alpinekid_>
I'm trying out ipfs for the first time. I started the daemon and have done nothing, why is there so much network traffic?
<alpinekid_>
Megabytes of data are flowing. Does it wind down after a few hours? I've waited almost 2 hrs.
<spikebike>
MBs doesn't sound like much
<spikebike>
there is some overhead for maintaining the DHT
<alpinekid_>
TotalIn: 85 MB TotalOut: 8.5 MB RateIn: 22 kB/s RateOut: 937 B/s, over about 1.5 hours
<alpinekid_>
I only get about 100 kB/s from my ISP, so it's almost 25% of the available bandwidth. I'm in the USA with sucky internet corps, but still: how much maintenance traffic should there be when no activity is going on?
<spikebike>
I had some numbers from awhile back, but not sure if I can find them
<spikebike>
but yeah, sounds like you might not want to run IPFS from home. You could skip IPFS, or run it in a Digital Ocean droplet for $5 a month or something
<spikebike>
I don't believe the DHT overhead drops over time
<alpinekid_>
I'm not providing anything or asking for anything, plus the inbound is almost 10x the out. I can see it is setting up some kind of table, maybe?
<spikebike>
the T in DHT is table
<alpinekid_>
Will it die down if I leave it up all night
<spikebike>
but it's always tracking other IPFS users
<Scio[m]>
that's similar to the overhead I have on my pretty much inactive node too
<spikebike>
no content is downloaded without you asking
<spikebike>
but the DHT tracks other IPFS peers, updates the routing table, etc.
<alpinekid_>
Thanks for your input. Is there a way to track fewer peers? I know not what I speak of. This is a new project for me.
<spikebike>
could be, there might be some way to tune it for less bandwidth
<spikebike>
No idea off hand, I'm just a casual/occasional user, not a dev or anything
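For reference, the node reports its own bandwidth counters, and there is one knob that may reduce idle DHT traffic; a hedged sketch, assuming a go-ipfs version whose --routing option accepts dhtclient (check ipfs daemon --help on your version):

    # show live totals and rates, the same numbers quoted above
    ipfs stats bw

    # run as a DHT client only: the node stops answering other peers' DHT
    # queries, which usually lowers the idle traffic of a mostly passive node
    ipfs daemon --routing=dhtclient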
<alpinekid_>
Thanks I'll read up on it tomorrow.
<alpinekid_>
It's good to know that someone else is seeing the same thing.
<Bat`O>
so, I have a linux box and a windows box on the same LAN running ipfs 0.4.9, and I can't resolve IPNS addresses published from the other side, even after several minutes
<Bat`O>
any idea how I could debug that ?
neurrowcat has quit [Ping timeout: 255 seconds]
<Bat`O>
i have 400 peers connected on one side, 250 on the other
<spikebike>
no idea on debugging ipns
<spikebike>
but 250 vs 400 shouldn't be any issue at all
<Bat`O>
no, it's just that one node was started later
<Bat`O>
but they both have good connectivity
zuck05 has joined #ipfs
cxl000 has joined #ipfs
espadrine has joined #ipfs
rcat has joined #ipfs
<Mateon1>
Well, IPNS is handled by the DHT. Your node has to find the ipns record in one of many peers. 400 vs 250 might slightly affect how the DHT works, but I don't think it should outright break
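One way to narrow that down is to make sure the two boxes can reach each other directly and then resolve by peer ID; a hedged sketch (the LAN address and peer ID are placeholders):

    # on the publishing box: note its peer ID
    ipfs id

    # on the other box: dial the publisher directly over the LAN
    ipfs swarm connect /ip4/192.168.1.10/tcp/4001/ipfs/<publisher-peer-id>

    # then retry the lookup against that peer ID
    ipfs name resolve <publisher-peer-id>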
maxlath has quit [Ping timeout: 246 seconds]
<chpio[m]>
Is there a proposal to integrate cjdns into ipfs as the overlay routing layer?
Guest69 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
subtraktion has joined #ipfs
anewuser has quit [Ping timeout: 240 seconds]
infinitesum has quit [Quit: infinitesum]
infinitesum has joined #ipfs
elkalamar_ has quit [Quit: Leaving]
infinitesum has quit [Client Quit]
zuck05 has quit [Ping timeout: 246 seconds]
john has joined #ipfs
john is now known as Guest59175
jungly has quit [Remote host closed the connection]
Guest51139 has quit [Ping timeout: 260 seconds]
maxlath has quit [Ping timeout: 240 seconds]
SaltyVagabond[m] has joined #ipfs
pat36 has joined #ipfs
dimitarvp has joined #ipfs
chungy has quit [Ping timeout: 240 seconds]
Guest244556[m] has joined #ipfs
Guest244556[m] has left #ipfs [#ipfs]
zuck05 has joined #ipfs
alextes has joined #ipfs
subtraktion has quit []
zuck05 has quit [Remote host closed the connection]
stavros has joined #ipfs
stavros has quit [Client Quit]
stavros has joined #ipfs
<stavros>
hello
borkdoy has joined #ipfs
<stavros>
sorry about the ignorant questions, but the "faq" link took me to the forum, which is too noisy to find my questions
<stavros>
i have a static website which i publish to ipfs (ipfs add -r site/). if i change a single file and republish, will the IDs of *all* the files change?
<chpio[m]>
nope, just the ids of the dirs containing changed files
<kythyria[m]>
Changing a file counts as changing the directory it's contained in.
zuck05 has joined #ipfs
<stavros>
ah, so the whole directory is a single ID, and the files don't get their own IDs, correct? So every deploy would require repinning the entire dir
<kythyria[m]>
The files have their own IDs, but you don't see that if you're referring to them via a directory
reit has quit [Ping timeout: 240 seconds]
<stavros>
and those IDs will be different if i re-add another directory, even though the files are the same as the first one?
sojka has joined #ipfs
<MaybeDragon>
If I understood correctly then directories are represented in ipfs as files. So two identical directories -> identical files -> identical hashes ?
kn0rki has joined #ipfs
<kythyria[m]>
Yes
<chpio[m]>
if you have a change in a dir (added, removed, renamed files/dirs) the id of that dir changes and also of all parent dirs
<kythyria[m]>
It's like git
<stavros>
ah, so files are by reference?
<chpio[m]>
they are referenced by the id
<MaybeDragon>
referenced by hashes
<chpio[m]>
yes, id = hash
<stavros>
so if i have site/a.txt and site/b.bin, and i change a.txt and redeploy, will the id of b.bin remain the same (and won't need to be repinned)?
<chpio[m]>
yes, but you have to repin the root dir "site"
<chpio[m]>
and also all changed files/dirs, or you can just use recursive pinning
<SchrodingersScat>
stavros: correct, afaik it's efficient like that
<stavros>
great, thank you
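The redeploy flow that falls out of this, as a hedged sketch (hashes are placeholders): only the changed file and the directories above it get new hashes, so a recursive pin of the new root reuses everything already stored.

    ipfs add -r site/                  # a.txt and its parent dirs get new hashes; b.bin keeps its old one
    ipfs pin add -r <new-root-hash>    # on another node (ipfs add already pins locally); only new blocks are fetched
    ipfs pin rm -r <old-root-hash>     # optionally drop the previous root once the new one is pinned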
Jesin has joined #ipfs
ashark has joined #ipfs
pedrovian has quit [Read error: Connection reset by peer]
pedrovian has joined #ipfs
pedrovian has quit [Max SendQ exceeded]
jamiew has joined #ipfs
Caterpillar has joined #ipfs
Guest59175 has quit [Ping timeout: 268 seconds]
mahloun has joined #ipfs
stavros has quit [Quit: Leaving]
Guest59175 has joined #ipfs
owlet has joined #ipfs
Soft has quit [Quit: WeeChat 1.9-dev]
robattila256 has joined #ipfs
Foxcool has quit [Read error: No route to host]
borkdoy has quit [Ping timeout: 260 seconds]
maxlath has joined #ipfs
borkdoy has joined #ipfs
owlet has quit [Ping timeout: 260 seconds]
Foxcool has joined #ipfs
X-Scale has joined #ipfs
cdhagmann has joined #ipfs
<cdhagmann>
Hello!
droman has joined #ipfs
maxlath has quit [Ping timeout: 246 seconds]
<SchrodingersScat>
cdhagmann: oh hey
<cdhagmann>
I had a thought about Libp2p
<SchrodingersScat>
thoughts about Libp2p?
<cdhagmann>
Yes. I was thinking that there may be a way to use GraphQL as a sort of torrent of REST API calls to various computers that have the distributed platform.
esph has quit [Read error: Connection reset by peer]
esph has joined #ipfs
sojka has quit [Ping timeout: 268 seconds]
borkdoy has quit [Remote host closed the connection]
<voker57>
is there an easy way to run an ipfs node which only downloads chunks if they were pinned?
borkdoy has joined #ipfs
<whyrusleeping>
voker57: hrm? not sure what you're asking
<MaybeDragon>
Do you want to use that node as a gateway? But it shouldn't download anything automatically?
jamiew has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
sirdancealot has joined #ipfs
borkdoy has quit [Remote host closed the connection]
sojka has joined #ipfs
<lemmi>
so i was just toying around. i added a good amount of files at /ipns/void.lpm.pw. this all works fine. on another machine on the same network i wanted to test ipfs ls /ipns/void.lpm.pw. this took quite a while, used around 300 MB of traffic and 70 MB of space in the (fresh) repo, and loaded 2k objects
<whyrusleeping>
oh wait, someone already submitted an ipfs driver it seems
<lemmi>
no.. same game with the experimental flags enabled
dimitarvp has joined #ipfs
<whyrusleeping>
lemmi: are you using sharding?
<whyrusleeping>
also, `ipfs ls` pulls a lot more blocks than you would expect
<whyrusleeping>
it pulls the root node of each item in a directory
<whyrusleeping>
use `ipfs ls --resolve-type=false` to avoid that
<lemmi>
whyrusleeping: that's probably it
<voker57>
whyrusleeping: MaybeDragon: I want to use it as a gateway, but distribute only files I pin there
cdhagmann has quit [Quit: Page closed]
<lemmi>
whyrusleeping: still takes a while. my node is online, if you want to take a look.
Akaibu has joined #ipfs
Guest59175 has quit [Ping timeout: 240 seconds]
Guest59175 has joined #ipfs
zuck05 has quit [Ping timeout: 255 seconds]
zuck05 has joined #ipfs
jamiew has joined #ipfs
ShalokShalom_ has quit [Remote host closed the connection]
btmsn has quit [Ping timeout: 240 seconds]
espadrine has quit [Ping timeout: 246 seconds]
Encrypt has joined #ipfs
jamiew has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
stavros has joined #ipfs
<stavros>
hello
jamiew has joined #ipfs
<stavros>
i run an ipfs node at home, is there a way for me to jail it for more security?
<stavros>
maybe a docker container or something
jamiew has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
<mahloun>
yeah or maybe create another user
Encrypt has quit [Quit: Quit]
<stavros>
hmm, that works, thank you
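Two hedged sketches of those suggestions; the ipfs/go-ipfs image name, paths, and ports are assumptions to check against the project's README.

    # run under a dedicated unprivileged user
    sudo useradd -r -m ipfs
    sudo -u ipfs -H ipfs init
    sudo -u ipfs -H ipfs daemon

    # or run in a container, exposing the API and gateway on localhost only
    docker run -d --name ipfs-node \
      -v /srv/ipfs:/data/ipfs \
      -p 4001:4001 -p 127.0.0.1:5001:5001 -p 127.0.0.1:8080:8080 \
      ipfs/go-ipfs:latest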
<emunand[m]>
are pubsub topics universal?
<emunand[m]>
i.e. if i publish "foo" and another node that isn't connected to me publishes "foo" as well, would that cause any problems?
<Mateon1>
emunand[m]: It shouldn't - the topic you're both publishing to will get the message "foo" twice
<emunand[m]>
so it is node-specific?
<Mateon1>
There is no deduplication, if that's what you're asking. If you design an app on top of pubsub, you need to gracefully handle duplicates, unpropagated messages, and invalid data (i.e. from nodes publishing to the topic from outside your app)
<whyrusleeping>
emunand[m]: pubsub topics are currently global, yes
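The global-topic behaviour is easy to see from the CLI; a hedged sketch (pubsub is experimental, so each daemon needs the experiment flag, and the topic name is just an example):

    ipfs daemon --enable-pubsub-experiment   # on every participating node

    ipfs pubsub sub mytopic                  # node A: print whatever arrives on the topic
    ipfs pubsub pub mytopic "foo"            # node B: any publisher on "mytopic" reaches every subscriber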
<emunand[m]>
i was wondering how "The Index" solved this
<Mateon1>
It uses a CRDT structure - Conflict-free Replicated Data Type
<Mateon1>
It is a structure that is "eventually consistent" without conflicts
<Mateon1>
It allows for events to happen in any order, like "Add X to database" (append-only) giving the exact same result, and without any conflicts
<stavros>
there's no way to tell IPFS "use this much disk space, this much upload/download" and just leave it there, is there
spilotro has joined #ipfs
<Mateon1>
Upload/download limiting is planned
<stavros>
ah, thanks
<stavros>
and space?
<stavros>
some sort of LRU for the GC?
<Mateon1>
For keeping a set amount of diskspace, look at ipfs config Datastore.StorageMax
<Mateon1>
and start the daemon with ipfs daemon --enable-gc
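Putting the two together, a hedged sketch (the 10GB figure is only an example):

    ipfs config Datastore.StorageMax 10GB    # repo size the collector aims to stay under
    ipfs daemon --enable-gc                  # run GC periodically; unpinned blocks may be reclaimed
    ipfs repo gc                             # or trigger a collection by hand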
caiogondim has left #ipfs [#ipfs]
<detran`>
is the CRDT ipfs uses operation-based or state-based, do you know?
<detran`>
I'm just reading the wiki article and am curious
<stavros>
Mateon1, thank you. is the GC not on by default?
<Mateon1>
stavros: No, it is not on by default
<stavros>
oh hum, is there any reason for that? it seems like it would be a good default
<Mateon1>
There is no LRU policy for GC, only pinned things will not be collected
<Mateon1>
Hm, I'm not too sure about it now, but at least in 0.4.3 the garbage collector was really slow, and had some bugs, not sure how it has improved since then
<stavros>
oh i see, thanks
<Mateon1>
detran`: IPFS doesn't use CRDTs, apps that use pubsub have to implement them themselves
<Mateon1>
It's a really convenient abstraction
<Mateon1>
Look at orbit-db
<detran`>
Mateon1: oh, cool, that makes sense
<Mateon1>
Also, scuttlebutt, it's a distributed append-only log
<Mateon1>
Hm, actually, could you call ipns records a form of CRDTs?
<Mateon1>
Well, they are monotonic, conflict-free, I think so
vith has joined #ipfs
<whyrusleeping>
Mateon1: I think they might qualify
<whyrusleeping>
even two ipns records with the same sequence number can be deterministically chosen between
<whyrusleeping>
we compare them bytewise
stavros has quit [Quit: Leaving]
Encrypt has joined #ipfs
AgenttiX has joined #ipfs
elkalamar has joined #ipfs
ygrek has joined #ipfs
robattila256 has quit [Quit: WeeChat 1.8]
bwerthmann has quit [Quit: leaving]
jkilpatr has quit [Ping timeout: 240 seconds]
reit has joined #ipfs
rendar has quit [Ping timeout: 268 seconds]
rendar has joined #ipfs
archpc has quit [Ping timeout: 240 seconds]
jkilpatr has joined #ipfs
shizy has joined #ipfs
<fleeky>
anyone made a 'hashtag' system for ipfs?
<fleeky>
i was just talking with a friend and we were daydreaming about if instead of websites we just had hashtags
<fleeky>
so content would cluster around a #blah or whatever, and we thought ipfs would actually make sense for something like that
<fleeky>
crankylinuxuser1: like twitter, except not in a walled garden
<fleeky>
hashtags would be an interesting alternative to websites that makes more sense with an ipfs style network
<achin>
swag!
<whyrusleeping>
managed to get the cost down to $10 per shirt, if there ends up being a lot of interest we can order in more bulk and make things even cheaper :)
<markedfinesse>
if i `ipfs name publish ...` some address, is this publishing permanent even if my node stops running? if i restart my node, should i expect to be able to resolve this ipns address?
<crankylinuxuser1>
It's only permanent for the duration of the IPNS -> IPFS record. That's 12 hours right now, according to the defaults in the config.
<markedfinesse>
crankylinuxuser1 oh, wow, ok. so would you have to keep publishing if you wanted a more "permanent" name?
<crankylinuxuser1>
However, the IPFS hash is always going to point at the data. As long as you're within the 12h limit, any requests to the network should provide the pointed-at IPFS hash
<markedfinesse>
what's the idea behind the 12 hour limit?
<crankylinuxuser1>
Yeah. I just do a cronjob.
<markedfinesse>
interesting
<crankylinuxuser1>
Not sure. That's a dev question :)
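The cron approach above, as a hedged sketch (the interval and the --lifetime value are examples; check ipfs name publish --help on your version):

    # crontab entry: re-publish the current site root every 6 hours
    0 */6 * * * /usr/local/bin/ipfs name publish --lifetime 24h /ipfs/<site-root-hash>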
owlet has joined #ipfs
<voker57>
why does "distributed" have (c) at bottom on the shirt?
<whyrusleeping>
markedfinesse: the 12 hour limit is to avoid spam in the network
<whyrusleeping>
when you publish an ipns record youre asking other nodes to hold onto data for you
<markedfinesse>
whyrusleeping do the other nodes just hold the mapping of the ipns address to the ipfs address?
<whyrusleeping>
you cant expect other people to altruistically hold onto an unlimited amount of data for you forever
<whyrusleeping>
markedfinesse: correct
detran` is now known as detran
mahloun has quit [Ping timeout: 260 seconds]
<markedfinesse>
another question -- suppose i host HTML in IPFS and serve it through the gateway. how does relative linking work? like how does the webpage know that the favicon is available? is the answer to mount on the file system?
<detran>
whyrusleeping: what other data are you asking other nodes to hold? I thought ipns was just a pointer?
<whyrusleeping>
markedfinesse: the hash you load to load html is the hash of the directory containing your website
<crankylinuxuser1>
yeah, but the pointer is a piece of memory that says "computer public key ### --> IPFS hash". And others have to provide it if you're not on.
<markedfinesse>
whyrusleeping right now i'm seeing a case where adding a simple `index.html` with a `favicon` doesn't actually load the favicon in the browser
<markedfinesse>
if accessing it via the gateway
<whyrusleeping>
detran: you ask other nodes to store the pointers (which are a struct containing some information) and public keys for verifying records
<whyrusleeping>
markedfinesse: you don't want to add the index.html directly, you want to add the folder that contains it
<detran>
crankylinuxuser: do the others have to provide it if you're not on? I thought it was up to the pinners to provide the data
<markedfinesse>
whyrusleeping: yes, that's what i did
<markedfinesse>
here, let me make an example, one sec
<crankylinuxuser1>
ipfs content is just files and dirs that have been hashed. They can be wherever. IPNS is the public key of your peer, which is mutable and can point at IPFS content.
<detran>
alright, with you so far
<detran>
wait, I think I see the gap in my thinking
<crankylinuxuser1>
The message is emitted by your peer and states "MY IPNS POINTS TO IPFS ####". That message is passed through the IPFS network for the specified time.
<detran>
so when I'm not on, other nodes can provide the ipns->ipfs lookup if a node asks for it
<markedfinesse>
whyrusleeping: wow, ok, i didn't realize things like this worked if the ipfs address is a directory `ipfs resolve /ipfs/QmTHKeV6LXjpwC3MRwhouw4jA8UKFcbuviVF4gjGgcYMG3/favicon.ico`
<crankylinuxuser1>
yep. because *a* computer answers it, since they have your message that IPNS ### -> IPFS ###
<markedfinesse>
that's really cool, and it makes sense then how relative paths work
<markedfinesse>
whyrusleeping: regarding ipns, does _every_ node in the network store _every_ ipns entry?
<whyrusleeping>
markedfinesse: hrm... i think we add a <link rel="icon" href="favicon.ico">
<whyrusleeping>
markedfinesse: no, it uses kademlia, K peers store each record
<markedfinesse>
whyrusleeping: yes, ok, that makes sense
<detran>
thanks, that makes sense
<whyrusleeping>
markedfinesse: oh yeah, its a full filesystem
<crankylinuxuser1>
whyrusleeping: normally you don't need that declaration *if* the webroot has favicon.ico. The problem here is that 127.0.0.1:8080/ipfs/hash isn't seen as the webroot by any web browser.
<whyrusleeping>
crankylinuxuser1: ah, that makes sense
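So a site served from a gateway path needs its icon reachable relative to the page, not at the host root; a hedged sketch (the directory hash is a placeholder):

    ipfs add -r mysite/    # mysite/ contains index.html and favicon.ico side by side
    # browse http://127.0.0.1:8080/ipfs/<mysite-root-hash>/ and reference the icon
    # relatively from index.html:  <link rel="icon" href="favicon.ico">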
<whyrusleeping>
i've been meaning to make more tutorials, but i'm bad at writing, and it takes me a long time, and i'd much rather write more code
<markedfinesse>
whyrusleeping: ok, this is very interesting. can a value in the dag be another dag? will it resolve recursively?
<whyrusleeping>
yeap!
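A hedged sketch of that nesting with the ipfs dag commands (the JSON fields are made up for illustration, and the exact flags can vary by version):

    CHILD=$(echo '{"name":"child"}' | ipfs dag put)                 # store a small object, capture its hash
    PARENT=$(echo "{\"child\":{\"/\":\"$CHILD\"}}" | ipfs dag put)  # link to it from a second object
    ipfs dag get $PARENT/child/name                                 # the path resolves through the nested dag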
<markedfinesse>
very interesting. can a value be an ipns entry?
<whyrusleeping>
currently no
<whyrusleeping>
the semantics around that are very difficult to get right
<markedfinesse>
yes, i'd imagine that would be challenging.
<markedfinesse>
immutability is better :)
<whyrusleeping>
people expecting things to be immutable would get rather confused
mayel[m] has joined #ipfs
<whyrusleeping>
markedfinesse: also worth noting that ipfs.git.sexy is entirely served through ipfs
<whyrusleeping>
(as is ipfs.io, filecoin.io, protocol.ai and the very relevant ipld.io)
<tidux>
ipfs resolve -r /ipns/ipfs.git.sexy/ipfs/QmaCQ5rcvfwhVNmLWdWydKBFDwnVZzexKePMxiyjh1Yh31: no link named "ipfs" under Qmazvovg6Sic3m9igZMKoAPjkiVZsvbWWc8ZvgjjK1qMss
<voker57>
/ipns/ipfs.git.sexy is redundant here
<voker57>
did you mean ipfs resolve -r /ipns/ipfs.git.sexy ?
Mateon3 has joined #ipfs
robattila256 has joined #ipfs
Guest59175 has quit [Ping timeout: 246 seconds]
Mateon1 has quit [Ping timeout: 240 seconds]
Mateon3 is now known as Mateon1
Guest59175 has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<whyrusleeping>
Kubuxu: you wanna publish libp2p and bubble up updates?
<whyrusleeping>
i'd love to get your fix out there
<Kubuxu>
not right now, I can try doing it tomorrow
<whyrusleeping>
cool, that works for me
<tidux>
voker57: I clicked on a link on the webpage http://ipfs.git.sexy/ and it threw that error
<tidux>
¯\_(ツ)_/¯
<whyrusleeping>
tidux: which link?
<whyrusleeping>
oh, the bottom one
<whyrusleeping>
hrm
Aranjedeath has joined #ipfs
<whyrusleeping>
lgierth: how do i properly do links to other ipfs things in my dnslink pages?
<voker57>
with konqueror it would just be a matter of implementing an appropriate KIO slave
reit has quit [Ping timeout: 240 seconds]
<whyrusleeping>
that sounds like youre volunteering to make ipfs-konqueror ;)
<whyrusleeping>
(which would be really cool, btw)
<voker57>
I could but konq is buried and forgotten by everybody, so no real point
<voker57>
it would apply to the whole of KDE though, not just konqueror
<whyrusleeping>
voker57: but really, if you made that, i know a lot of people who would install konqueror just to use that
<whyrusleeping>
(me included)
<whyrusleeping>
if you don't feel like doing it, could you at least provide some info on how to do that in an issue here? https://github.com/ipfs/in-web-browsers
Encrypt has quit [Quit: Quit]
<voker57>
ok I'll take a stab at it :)
MetaQED has joined #ipfs
MetaQED has left #ipfs ["Be back later..."]
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
ashark has quit [Ping timeout: 240 seconds]
chungy has joined #ipfs
neurrowcat has joined #ipfs
arpu_ has joined #ipfs
arpu_ has quit [Client Quit]
arpu_ has joined #ipfs
Akaibu has quit [Quit: Connection closed for inactivity]
jaboja has joined #ipfs
kn0rki has quit [Quit: Verlassend]
chwy has quit [Ping timeout: 240 seconds]
Guest69 has joined #ipfs
ipfsrocks has joined #ipfs
<ipfsrocks>
Hey any suggestions for adding large files with js-ipfs?
<ipfsrocks>
I found anything above ~20MB starts to slow down the browser
<Atrus[m]>
My only "real" issue is the generic, naked exceptions, it's generally a good idea to catch specific errors.
<whyrusleeping>
mmm, thats a good point
<Atrus[m]>
That, and the spaces between parameters and parentheses annoy me.
* whyrusleeping
enjoys living in a world where 'go fmt' exists
<tidux>
go fmt yourself :^)
<Atrus[m]>
Yeah go fmt is amazing.
<deltab>
is this for python 2, 3, or both?
<Atrus[m]>
The commit is written for python 2
<deltab>
what is delete_chunk meant to do? does pin_rm do it?
<Atrus[m]>
Pin rm only removes the pin. You'd have to also call repo_gc to actually delete it.
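On the CLI the same two steps look like this; a hedged sketch with a placeholder hash:

    ipfs pin rm <hash>   # removes the pin; the blocks stay in the repo for now
    ipfs repo gc         # unpinned blocks are only reclaimed once GC runs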
<deltab>
my question was more about what the caller expects to happen
emunand[m] has left #ipfs ["User left"]
emunand has joined #ipfs
<Atrus[m]>
Well, if I were reading the doc string myself, I would assume that the data would be deleted, so I think you're right in bringing up the point that pin_rm might not be sufficient.
<deltab>
I take it the interface is defined around True and False as results, instead of raising exceptions
<deltab>
in line 88 there's % with no format code
<whyrusleeping>
Yeah, i'm not sure what the expectations around the delete call are
niao has joined #ipfs
infinity0_ has joined #ipfs
infinity0_ has quit [Remote host closed the connection]