whyrusleeping changed the topic of #ipfs to: go-ipfs 0.4.16 is out! Try out all the new features: https://dist.ipfs.io/go-ipfs/v0.4.16 | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://botbot.me/freenode/ipfs/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
kaminishi has quit [Quit: Leaving]
<AphelionZ> shoku: thank you - very interesting
<AphelionZ> a browser extension could do that, i imagine
<AphelionZ> somebody would have to run the gateway eventually
bomb-on has quit [Quit: SO LONG, SUCKERS!]
_whitelogger has joined #ipfs
appa_ has quit [Read error: Connection reset by peer]
appa_ has joined #ipfs
IRCsum has quit [Remote host closed the connection]
IRCsum has joined #ipfs
mazeto has joined #ipfs
abueide has joined #ipfs
lanlink has joined #ipfs
ericwooley has quit [Ping timeout: 240 seconds]
lanlink has quit [Ping timeout: 240 seconds]
mrhavercamp has joined #ipfs
ericwooley has joined #ipfs
guideline has joined #ipfs
astrofog has quit [Quit: Quite]
pcardune has joined #ipfs
mrhavercamp has quit [Ping timeout: 244 seconds]
pcardune has quit [Ping timeout: 244 seconds]
mazeto has quit [Ping timeout: 240 seconds]
lassulus_ has joined #ipfs
user_51 has joined #ipfs
lassulus has quit [Ping timeout: 268 seconds]
lassulus_ is now known as lassulus
warner has joined #ipfs
user51 has quit [Ping timeout: 240 seconds]
gh1dra has joined #ipfs
kaotisk has quit [Ping timeout: 260 seconds]
kaotisk has joined #ipfs
jesse22_ has quit [Ping timeout: 276 seconds]
jesse22 has joined #ipfs
abueide has quit [Ping timeout: 260 seconds]
gh1dra has quit [Quit: Page closed]
xcm has quit [Ping timeout: 240 seconds]
xcm has joined #ipfs
TUSF has quit [Read error: Connection reset by peer]
TUSF has joined #ipfs
BenG[m] has joined #ipfs
BeerHall has joined #ipfs
sanderp__ has quit [Ping timeout: 260 seconds]
TUSF has quit [Quit: Leaving]
esph has quit [Remote host closed the connection]
esph has joined #ipfs
IRCsum has quit [Remote host closed the connection]
IRCsum has joined #ipfs
ygrek has joined #ipfs
Have-Quick has joined #ipfs
sanderp has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
Have-Quick has joined #ipfs
IRCsum has quit [Remote host closed the connection]
IRCsum has joined #ipfs
mrhavercamp has joined #ipfs
Fyr_ has joined #ipfs
reit has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
Have-Quick has joined #ipfs
sanderp has quit [Ping timeout: 244 seconds]
Have-Quick has quit [Quit: Have-Quick]
mrhavercamp has quit [Ping timeout: 264 seconds]
mismatch has joined #ipfs
sanderp has joined #ipfs
Fyr_ has quit [Ping timeout: 252 seconds]
BeerHall has quit [Quit: BeerHall]
ericwooley has quit [Ping timeout: 240 seconds]
SAuditore has joined #ipfs
[itchyjunk] has quit [Ping timeout: 256 seconds]
ericwooley has joined #ipfs
SAuditore has quit [Ping timeout: 248 seconds]
SAuditore has joined #ipfs
Have-Quick has joined #ipfs
sanderp has quit [Ping timeout: 240 seconds]
tcfhhrw has joined #ipfs
saki has joined #ipfs
pcardune has joined #ipfs
pcardune has quit [Ping timeout: 264 seconds]
SAuditore has quit []
Oatmeal has quit [Ping timeout: 240 seconds]
BeerHall has joined #ipfs
BeerHall1 has joined #ipfs
BeerHall has quit [Read error: Connection timed out]
BeerHall1 is now known as BeerHall
Oatmeal has joined #ipfs
achin has quit [Ping timeout: 268 seconds]
achin has joined #ipfs
whyrusleeping has quit [Changing host]
whyrusleeping has joined #ipfs
whyrusleeping changed the topic of #ipfs to: go-ipfs 0.4.17 is out! Try out all the new features: https://dist.ipfs.io/go-ipfs/v0.4.17 | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://botbot.me/freenode/ipfs/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
<whyrusleeping> Hey everyone! We released a new version of ipfs, v0.4.17!
<whyrusleeping> This one was a smaller release, but adds a few important things that we're excited to show at the decentralized web summit next week
<whyrusleeping> in particular, a feature called the urlstore, which allows ipfs to track data stored remotely via http
<whyrusleeping> This will be used by the Internet Archive to load data from the archive through ipfs on demand!
<whyrusleeping> We also fixed an issue with transfer performance from go to javascript nodes
<JCaesar> internet archive… as in the wayback machine? Or something different?
<whyrusleeping> Yeah, the wayback machine
<whyrusleeping> There's a ton more data in there than just the wayback machine
rendar has joined #ipfs
mauz555 has joined #ipfs
goiko has joined #ipfs
goiko has quit [Changing host]
goiko has joined #ipfs
tcfhhrw has quit [Read error: Connection reset by peer]
tcfhhrw has joined #ipfs
joocain2 has quit [Remote host closed the connection]
joocain2 has joined #ipfs
pecastro has joined #ipfs
BeerHall has quit [Quit: BeerHall]
mauz555 has quit [Remote host closed the connection]
<r0kk3rz> whyrusleeping: so how does that work?
bomb-on has joined #ipfs
Alpha64 has quit [Read error: Connection reset by peer]
<JCaesar> Does the internet archive give out ipfs hashes which you can then use instead of the usual http links?
bomb-on has quit [Quit: SO LONG, SUCKERS!]
mauz555 has joined #ipfs
mauz555 has quit [Remote host closed the connection]
ygrek has quit [Ping timeout: 260 seconds]
<flowpoint[m]> How can I init, start and stop go-ipfs with go?
<flowpoint[m]> I found issue 3060 and the example from http://ipfs.git.sexy/sketches/minimal_ipfs_node.html, but it threw some errors / is maybe outdated
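A minimal sketch of that init/start/stop lifecycle against the go-ipfs core API of this era; package paths and exact signatures have moved between releases, so treat the names here as assumptions rather than a definitive recipe:

    package main

    import (
        "context"
        "fmt"
        "os"

        "github.com/ipfs/go-ipfs/core"
        "github.com/ipfs/go-ipfs/repo/config"
        "github.com/ipfs/go-ipfs/repo/fsrepo"
    )

    func main() {
        repoPath := "/tmp/ipfs-demo"

        // "init": write a fresh repo to disk, once
        if !fsrepo.IsInitialized(repoPath) {
            cfg, err := config.Init(os.Stdout, 2048) // 2048-bit RSA identity key
            if err != nil {
                panic(err)
            }
            if err := fsrepo.Init(repoPath, cfg); err != nil {
                panic(err)
            }
        }

        // "start": open the repo and bring up an online node
        repo, err := fsrepo.Open(repoPath)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()
        node, err := core.NewNode(ctx, &core.BuildCfg{Online: true, Repo: repo})
        if err != nil {
            panic(err)
        }
        fmt.Println("node up, peer ID:", node.Identity.Pretty())

        // "stop": Close tears the node (and the repo) down
        if err := node.Close(); err != nil {
            panic(err)
        }
    }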
bomb-on has joined #ipfs
mrhavercamp has joined #ipfs
lnostdal has joined #ipfs
lnostdal has quit [Quit: https://www.Quanto.ga/]
sanderp has joined #ipfs
lnostdal has joined #ipfs
saki has quit [Ping timeout: 260 seconds]
saki has joined #ipfs
encrypt3dbr0k3r has quit [Ping timeout: 264 seconds]
encrypt3dbr0k3r has joined #ipfs
sbani has quit [Quit: Good bye]
sbani has joined #ipfs
tcfhhrw has quit [Read error: Connection reset by peer]
<Powersource[m]> i'm having a weird issue. i'm making an electron app using js-ipfsd-ctl. before ipfs 0.4.16 the app's daemon could only connect to other local daemons, which was weird but worked ok. on 0.4.16&17 the daemon looks like it's connecting to the rest of the internet which is great, but now discovery of the other local daemon is super slow
<Powersource[m]> this is mostly an issue since i'm using a disposable repo in js for testing purposes and every time i'm launching the app i'm refetching large test files from the local daemon
<Powersource[m]> might be a sign that I should just start using non-disposable repos instead :P
saki has quit [Quit: saki]
mrhavercamp has quit [Ping timeout: 240 seconds]
dethos has joined #ipfs
kaminishi has joined #ipfs
dethos has quit [Ping timeout: 260 seconds]
xcm has quit [Remote host closed the connection]
reit has quit [Quit: Leaving]
xcm has joined #ipfs
<fiatjaf> is the internet archive using ipfs under the hood now?
<fiatjaf> I'm also interested in seeing their hashes
<fiatjaf> ok, so urlstore is just like a new datastore that instead of storing content on disk delegates that to an http url?
<Powersource[m]> fiatjaf: pretty much. some explanation here https://github.com/ipfs/go-ipfs/pull/4896
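A sketch of that urlstore flow, driving the CLI from Go; the Experimental.UrlstoreEnabled flag and the `ipfs urlstore add` command are as described in the 0.4.17 release notes, and the URL is a placeholder:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) string {
        out, err := exec.Command("ipfs", args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        return string(out)
    }

    func main() {
        // opt in to the experimental feature
        run("config", "--json", "Experimental.UrlstoreEnabled", "true")
        // prints the hash of a dag whose leaf bytes stay at the URL and
        // are fetched over http on demand
        fmt.Print(run("urlstore", "add", "https://example.com/some-large-file.bin"))
    }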
malaclyps has quit [Ping timeout: 264 seconds]
malaclyps has joined #ipfs
f0i has joined #ipfs
Tiez has joined #ipfs
BeerHall has joined #ipfs
f0i has quit [Read error: Connection reset by peer]
Steverman has joined #ipfs
shguwu has joined #ipfs
[itchyjunk] has joined #ipfs
Tiez has quit [Ping timeout: 240 seconds]
goiko has quit [Quit: ﴾͡๏̯͡๏﴿ O'RLY? Bye!]
goiko has joined #ipfs
goiko has quit [Changing host]
goiko has joined #ipfs
Taoki has joined #ipfs
pcardune has joined #ipfs
Encrypt has joined #ipfs
pcardune has quit [Remote host closed the connection]
pcardune has joined #ipfs
pcardune has quit [Ping timeout: 240 seconds]
<Powersource[m]> continuing my above comment: I'm guessing it's because the client has a lot more nodes to look through to find the hashes (goes a lot faster when you only have to look in 1 place). Could ipfs/the dht prioritize communication with local (mdns/same machine) daemons? I feel that would be a really helpful heuristic.
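One workaround while local discovery is slow might be to dial the known local daemon directly on startup instead of waiting for the DHT; a sketch, with a placeholder peer ID:

    package main

    import "os/exec"

    func main() {
        // QmLocalPeerID is hypothetical; in practice read it from the
        // other daemon's `ipfs id` output. A direct dial sidesteps
        // discovery entirely.
        exec.Command("ipfs", "swarm", "connect",
            "/ip4/127.0.0.1/tcp/4001/ipfs/QmLocalPeerID").Run()
    }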
pcardune has joined #ipfs
Taoki has joined #ipfs
pcardune has quit [Remote host closed the connection]
pcardune has joined #ipfs
pcardune has quit [Ping timeout: 240 seconds]
BeerHall has quit [Ping timeout: 260 seconds]
BeerHall has joined #ipfs
<shguwu> Powersource[m], I thought it already prioritizes local connections. That was the promise in many talks
void9 has joined #ipfs
fleeky has joined #ipfs
<void9> https://github.com/ipfs/go-ipfs/blob/master/docs/fuse.md - I followed this guide, but it doesn't say how to actually access files on ipfs through it
<void9> /ipfs/hash ?
<makeworld[m]> I believe so
<void9> do i need to run the daemon as root if I use /ipfs /ipns ?
<swedneck[m]> lol, i'm trying to make a guy on reddit understand that ipfs doesn't use blockchain nor torrents
lnostdal has quit [Quit: https://www.Quanto.ga/]
The_8472 has quit [Ping timeout: 256 seconds]
The_8472 has joined #ipfs
<Powersource[m]> void9: if those dirs are owned by root. but you should probably make yourself the owner
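Once `ipfs mount` (or `ipfs daemon --mount`) has the mountpoints up, content under /ipfs/<hash> is just files; a tiny sketch with a placeholder hash:

    package main

    import (
        "io"
        "os"
    )

    func main() {
        f, err := os.Open("/ipfs/QmExampleHash") // placeholder hash
        if err != nil {
            panic(err)
        }
        defer f.Close()
        io.Copy(os.Stdout, f) // ordinary file I/O streams the content
    }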
itaipu has joined #ipfs
<void9> alright, I just ran everything as root to test.. I could do cp /ipfs/hash ~ for a 2 MB file. Now I am trying to download a 10GB one from the same place and it doesn't work well.. the transfer stops at 65540KB
<makeworld[m]> swedneck: lol, can you link?
s4y has quit [Quit: ZNC - http://znc.in]
<void9> any idea where this limit might come from ? 65540k ?
<void9> I can download the whole file from http, at 100MB/s (local virtual machine)
<makeworld[m]> swedneck: lol. He's probably thinking of filecoin
<swedneck[m]> yeah that makes sense for the blockchain bit
s4y has joined #ipfs
clemo has joined #ipfs
<Powersource[m]> void9: well 2^16 is 65536 but idk how that affects anything :P
<void9> Powersource[m]: nevermind, eventually the size increased, it was still going. but much much slower through fuse than http. is this normal?
<Powersource[m]> swedneck: and tbf about the torrent thing, basic ipfs is pretty close to bittorrent
<swedneck[m]> i mean yeah it functions like a much much better version of bittorrent, but it doesn't use it in any way
The_8472 has quit [Ping timeout: 240 seconds]
itaipu has quit [Ping timeout: 256 seconds]
The_8472 has joined #ipfs
<void9> tried adding a movie to ipfs and streaming it from the vm. ipfs is laggy and stops to buffer all the time; https works ok
saki has joined #ipfs
<void9> is this normal ? does anyone have ipfs fuse mounted ?
<void9> seems it is
Tiez has joined #ipfs
PyHedgehog has joined #ipfs
<void9> is there a way to make the performance on fuse mounts close to the 'native' one ?
ericwooley has quit [Ping timeout: 240 seconds]
The_8472 has quit [Ping timeout: 256 seconds]
The_8472 has joined #ipfs
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 260 seconds]
Mateon3 is now known as Mateon1
s4y has quit [Quit: ZNC - http://znc.in]
Guanin_ has joined #ipfs
Guanin has quit [Ping timeout: 240 seconds]
Guanin_ has quit [Quit: Leaving]
ericwooley has joined #ipfs
reit has joined #ipfs
ericwooley has quit [Client Quit]
ericwooley has joined #ipfs
[itchyjunk] has quit [Ping timeout: 240 seconds]
<makeworld[m]> Can someone answer my question about the linked paragraph? Why is it being changed from /ipfs/ to /p2p/ for multiaddrs? Doesn't that section of the multiaddr denote the protocol being used, such as http or, in this case, ipfs?
<makeworld[m]> Obviously this doesn't make sense for libp2p in general, but shouldn't it remain for ipfs
s4y has joined #ipfs
s4y has quit [Client Quit]
s4y has joined #ipfs
s4y has quit [Quit: ZNC - http://znc.in]
s4y has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
the_merlin has joined #ipfs
astrofog has joined #ipfs
Encrypt has quit [Quit: Quit]
ericwooley_ has joined #ipfs
clemo has quit [Ping timeout: 260 seconds]
ericwooley has quit [Ping timeout: 256 seconds]
Have-Quick has joined #ipfs
Have-Quick has quit [Client Quit]
s4y has left #ipfs [#ipfs]
astrofog has quit [Quit: Quite]
lnostdal has joined #ipfs
rendar has quit []
Alpha64 has joined #ipfs
<shguwu> makeworld[m], I think the purpose of it was to return to the original scheme rather than perpetuate incorrect usage
Jesin has joined #ipfs
Have-Quick has joined #ipfs
cris has quit [Ping timeout: 260 seconds]
cris has joined #ipfs
tcfhhrw has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
<makeworld[m]> shguwu: what do you mean? what original scheme? and wouldn't using ipfs be correct outside of libp2p?
ericwooley_ has quit [Ping timeout: 240 seconds]
mauz555 has joined #ipfs
gmoro_ has quit [Ping timeout: 244 seconds]
Have-Quick has joined #ipfs
<shguwu> "using ipfs would be correct outside of libp2p"? you mean using ipfs with other methods to talk to peers?
<makeworld[m]> I mean that ipfs is the protocol in use when talking to peers, so why replace it with p2p, which is not a protocol?
<shguwu> ipfs is a protocol for talking to other peers only at a certain level of abstraction. That multiaddr is used at a lower level of abstraction
<shguwu> and the p2p part is to specify the means of communication. could be something else in the future
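An illustration of shguwu's point, assuming a go-multiaddr build where the /p2p/ alias is registered alongside /ipfs/: both names map to the same protocol code, so the two spellings parse to the same address.

    package main

    import (
        "fmt"

        ma "github.com/multiformats/go-multiaddr"
    )

    func main() {
        // a well-known bootstrap address, written both ways
        a := ma.StringCast("/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ")
        b := ma.StringCast("/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ")
        // the trailing component identifies which peer to dial, not
        // IPFS-the-application, which is the argument for the /p2p/ name
        fmt.Println(a.Equal(b)) // true
    }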
jesse22 has quit [Read error: Connection reset by peer]
jesse22 has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
Have-Quick has joined #ipfs
Have-Quick has quit [Quit: Have-Quick]
Have-Quick has joined #ipfs
Steverman has quit [Ping timeout: 244 seconds]
Have-Quick has quit [Quit: Have-Quick]
Have-Quick has joined #ipfs
<makeworld[m]> I'm not sure what you mean, talking about abstraction and things. Isn't multiaddr supposed to specify protocols as its intent is to get rid of ambiguity?
<makeworld[m]> Not the architecture of the connection (p2p, client-server, etc.)
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jesse22 has joined #ipfs
jesse22 has quit [Client Quit]
mauz555 has quit [Remote host closed the connection]
Steverman has joined #ipfs
abueide has joined #ipfs
jesse22 has joined #ipfs
clemo has joined #ipfs
noresult has quit [Quit: leaving]
noresult has joined #ipfs
BeardRadiation has joined #ipfs
[itchyjunk] has joined #ipfs
abueide has quit [Ping timeout: 240 seconds]
Tiez has quit [Quit: WeeChat 2.2]
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Have-Quick has quit [Quit: Have-Quick]
bae9Oosi has joined #ipfs
<bae9Oosi> Hey guys, I want to write a simple p2p sharing service that allows peers to exchange textual notes and small files. I want it to be p2p so a user can save a note to their machine if it's important for them, and then sync it back to the other peers. Is ipfs a good fit for that or are there better alternatives for this use-case?
<bae9Oosi> Note: I want installation to be as easy as possible. I'm not sure if ipfs can be embedded as a library.
Have-Quick has joined #ipfs
pcardune has joined #ipfs
Have-Quick has quit [Remote host closed the connection]
<voker57_> bae9Oosi: do you want to implement "shared folders"-like functionality?
Have-Quick has joined #ipfs
pcardune has quit [Ping timeout: 240 seconds]
<voker57_> or do you only need something to generate links to content and link is shared outside the app?
<voker57_> IPFS definitely can be embedded as a library https://ipfs.io/docs/examples/example-viewer/example#../api/service/readme.md
<bae9Oosi> voker57_: really it's more like a blog where each node can save (locally) any post to make sure others will be able to read/duplicate it when the node is online. I need to generate links to be shared outside of the app, yes (http gateway)
<voker57_> bae9Oosi: yes, IPFS is a good fit for that
<bae9Oosi> voker57_: ok, cool! I need to do more research about embedding it as a lib then. I've seen an open issue about that on github, so I figured it's still WIP
Guanin has joined #ipfs
mauz555 has joined #ipfs
<voker57_> bae9Oosi: it's better practice to launch the node separately and talk to it / let users reuse an existing node
<voker57_> generally you want the library thing if you want to run a node in very specific way
<voker57_> and in your case using standard interface should be ok
Have-Quick has quit [Remote host closed the connection]
<bae9Oosi> voker57_: well, my main motivation here is ease of installation and use, but I guess I can just bundle everything together and control the node process from my app... dunno, we'll see
<voker57_> yeah that would be best
<voker57_> IPFS is just one standalone executable
<bae9Oosi> Yeah, I guess that makes sense
<bae9Oosi> Thanks for the help!
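A sketch of the "bundle the binary and control the node process" approach discussed above; `ipfs daemon --init` and the IPFS_PATH variable are real knobs, the paths are made up:

    package main

    import (
        "os"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        // --init creates the repo on first run; IPFS_PATH keeps it private to the app
        cmd := exec.Command("./bundled/ipfs", "daemon", "--init")
        cmd.Env = append(os.Environ(), "IPFS_PATH=./app-data/ipfs")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        time.Sleep(30 * time.Second) // stand-in for the app's lifetime; talk to the daemon's HTTP API here

        // the daemon catches SIGINT and shuts down cleanly
        cmd.Process.Signal(syscall.SIGINT)
        cmd.Wait()
    }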
matthiaskrgr has quit [Quit: PanicBNC - https://PanicBNC.net - currently sucks]
Have-Quick has joined #ipfs
Have-Quick has quit [Read error: Connection reset by peer]
bae9Oosi has left #ipfs ["ERC (IRC client for Emacs 25.3.1)"]
mauz555 has quit [Remote host closed the connection]
matthiaskrgr has joined #ipfs
mauz555 has joined #ipfs
matthiaskrgr is now known as Guest37760
mauz555 has quit [Ping timeout: 256 seconds]
The_8472 has quit [Ping timeout: 240 seconds]
ericwooley_ has joined #ipfs
The_8472 has joined #ipfs
Guest37760 has quit [Changing host]
Guest37760 has joined #ipfs
Guest37760 has joined #ipfs
DJ-AN0N has joined #ipfs
rtjure has quit [Ping timeout: 260 seconds]
bomb-on has quit [Quit: SO LONG, SUCKERS!]
sanderp has quit [Ping timeout: 256 seconds]
drrty has quit [Ping timeout: 264 seconds]
<void9> does anyone know how to get near native performance when fuse mounting ipfs?
<nixze> void9: there's an ipfs mount project that gives much better performance than the native go-ipfs mount: https://github.com/piedar/js-ipfs-mount - I've only tested it very lightly; maybe it can be ported into go-ipfs
<void9> just tested it.. maybe I am doing something wrong, but it does not work right
<void9> did you get decent performance from it?
<nixze> void9: when comparing the native go-ipfs mount with js-ipfs-mount mounts I saw a huge difference ... but that probably was with already locally cached data
sanderp has joined #ipfs
<void9> can you try this file ? QmdUuVHsr38g3ZdkZmUTx5wLY7eRoo5PP626AhfixCBZTp
<void9> maybe my windows daemon is too slow ? hmm
<void9> I am testing windows daemon / linux vm for fuse, on the same computer
<void9> or where can i find very fast files available at gbit speeds on ipfs ?
<nixze> void9: test by adding local nodes to the swarm ... if you only want to test mount performance, pin the objects first
<void9> I want to test retrieval/mount performance. but retrieval should be fast since the node and client share the same local LAN?
<nixze> void9: test that separately
<nixze> if you want to verify retrival then test that with ipfs cat or similar first
<void9> nixze actually I did, it was 30-100MB/s locally, via http retrieval
<lemmi> another possible cause of performance differences between js and go is that js and go nodes can't really talk to each other at the moment. so while a file might be available in the js net, it may be hard to get via go-ipfs
<nixze> there was some fixes in 0.4.17, are you using that, or are you using something earlier of go-ipfs?
<void9> I am using 0.4.17 as the windows daemon, and ipfs-git on linux for testing mount
<nixze> void9: ipfs ls on the above hash: no response after waiting for 2 minutes
<nixze> and then I did ...
<void9> nixze it's a file, not a folder
<nixze> void9: ls works fine on files as well
<void9> oh :( then there is something wrong with my node
<void9> another question that has me confused. If I add files from a folder to ipfs individually, and they each get their own hash.. will they source-match with the identical files I add as part of a folder?
<nixze> it did take ~2 minutes for it to become available, but access is now quick on a few local nodes
IRCsum has quit [Remote host closed the connection]
<void9> I mean, if I add a whole folder vs adding just separate files from that folder, will the identical files have the same hash and be sourced together on ipfs ?
IRCsum has joined #ipfs
<nixze> If you add identical files with the same algorithm then they should have the same file hash
<lemmi> void9 maybe your vm setup screws something up. maybe this causes the vm to sit behind another nat
<void9> lemmi: I set it to bridged networking, it has a local ip from the same subnet as the physical network
<lemmi> k
<nixze> hmm getting to late here
* nixze needs sleep
<void9> nixze, is adding as a file vs a folder with that file considered the same algorithm ?
<nixze> void9: should be, AFAIK the only difference is if you are using --nocopy - in which case --raw-leaves is forced and will create different hashes
<nixze> the best way to avoid that is to always add files with --raw-leaves (it seems it will be the default in the future)
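A quick way to see the effect nixze describes (a sketch; dir/file.bin is a stand-in, and -q makes add print only hashes):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func add(args ...string) string {
        out, err := exec.Command("ipfs", append([]string{"add", "-q"}, args...)...).Output()
        if err != nil {
            panic(err)
        }
        return string(out)
    }

    func main() {
        fmt.Print(add("dir/file.bin"))                 // protobuf-wrapped leaf blocks
        fmt.Print(add("--raw-leaves", "dir/file.bin")) // raw leaf blocks: a different hash
        // --nocopy implies --raw-leaves, so a --nocopy add matches the
        // second hash, not the first
    }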
<void9> omg, this looks like such a horrible design
<void9> oh well, good thing it's not popular yet and only v0.4
<nixze> important to remember that this is all still experimental
<void9> yeah
ericwooley_ has quit [Remote host closed the connection]
<lemmi> what's horrible about it?
ericwooley_ has joined #ipfs
<nixze> on the raw-leaves thing, what I have understood from the history is that at first not allowing raw leaves at all was a good design choice at the time, but then things changed.
<void9> well, they should never have started with file duplication when adding stuff from local drives
<void9> it's so obvious that people will not like that; it should never have been otherwise
<lemmi> there is no other way to ensure things don't just magically disappear
<lemmi> and you have --nocopy if you are sure you can handle the responsibility yourself
<void9> what can I say, huge responsibility :P
<nixze> --nocopy has several (major) issues of its own (which I'm currently trying to work around), but again this is all still experimental, so I'm not complaining but rather documenting the issues that I see and the use case.
<lemmi> void9 well it is. so easy to accidentally mess something up with filestore
<void9> ipfs could just monitor filesystem changes, and if the file size changes or the file disappears, remove it from storage?
<lemmi> which also comes at a cost
<nixze> void9: what if filesize stays the same but contents changes (like with a BTRFS filesystem image)
<lemmi> inotify isn't perfect, it can miss events, so you need to constantly rescan to make sure you didn't miss anything
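The watch-plus-rescan pattern lemmi is pointing at might look like this with the fsnotify library; a sketch where the watched path and interval are made up and the actual re-add/unpin plumbing is elided:

    package main

    import (
        "log"
        "time"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        // watches are non-recursive; each subdirectory needs its own Add
        if err := w.Add("/data/mirror"); err != nil { // made-up synced dir
            log.Fatal(err)
        }

        rescan := time.NewTicker(10 * time.Minute)
        for {
            select {
            case ev := <-w.Events:
                // react per event: re-add on Create/Write, unpin on Remove/Rename
                log.Println("event:", ev.Op, ev.Name)
            case err := <-w.Errors:
                log.Println("watch error:", err)
            case <-rescan.C:
                // inotify can drop events under load, so periodically walk
                // the tree and compare mtimes/sizes as a safety net
                log.Println("full rescan due")
            }
        }
    }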
<void9> nixze that was exactly what was in that file hash i pasted :P
<nixze> void9: yep I know, that's why I took it as an example for when monitoring filesize does not help
Shnaw7 has joined #ipfs
<void9> haha ok
<void9> it's actually set to be a seed, so it's read only
<lemmi> it's not at all trivial to get this right
<void9> it will mount as read only if you try to mount it
<void9> does not have to be perfect, but it has to be able to scale. and 2x storage requirements is not good scaling
<nixze> what would be good, however, is to have rm and clean commands added to filestore ... and verify extended so that it can look at file modification times to say how likely a rescan is to be needed
<void9> and then if you have different hashes when using --nocopy, that's also not cool, it divides the network's resources for a file into two fractions
<lemmi> void9: you either pack it into ipfs and remove the source file, or use filestore. no 2x storage cost
<nixze> lemmi: in reallity tho, that is not how it works
<nixze> not right now at least
Shnaw7 has quit [Remote host closed the connection]
<void9> and this seems like a project with quite a lot of attention .. why is the development so slow ? I mean it's been going on for 3 years at least
<lemmi> because it's a massive undertaking
<nixze> I would say that it isn't slow at all if you look at what is going on.
<lemmi> nixze: filestore uses the filesize + the usual overhead. if i add the file to ipfs and then remove the file, the same is true
<void9> all I know is that I tried it now and I couldn't get a 1GB image file to mount and use reasonably fast
shguwu has quit [Quit: Leaving]
<nixze> lemmi: talking about an actual use case here: trying to add a 400GB dataset to IPFS which I obtain (and need to keep updated) via rsync; files are added, removed, and some files changed (timestamp files) ...
<nixze> for rsync to work it needs mtime set on the files, so adding to ipfs and remove the sourcefiles is not an option ...
<lemmi> then filestore
<void9> nixze: that's exactly the use case I was thinking of, keeping a large collection of files in sync. is it doable yet ?
<nixze> and right now there is no clean or rm in filestore, which breaks things when files are removed by rsync, and also when files are modified
<lemmi> that's more an issue that your sync can't tell ipfs what's happening
<nixze> there are several outstanding issues on github about it, and I feel I have been spamming those in the last few days (sorry about that)
<nixze> lemmi: well it is an actual use case - and "rewriting rsync" is _not_ an option
<void9> how hard would it be to set a custom time interval at which the filestore is checked for differing timestamps/filesize, and for new/missing files?
<void9> I mean to code that into ipfs
<nixze> However that is what I'm actually doing, adding a bunch of glue to get this workable ... but again there is no filestore rm, so it is (right now) impossible to remove files from filestore once they have been added
<void9> haha really?
<lemmi> i built something similar to host a distribution on ipfs, but it's too slow to build large directories from the shell. i haven't gotten around to building this in go directly
<lemmi> nixze: are you sure? aren't these just pins?
pecastro has quit [Ping timeout: 260 seconds]
<lemmi> void9: what you basically need to do is copy what syncthing does. and they put a lot of work in it to get this right and with actually ok performance
<nixze> lemmi: so I have file x/y/z.tar.gz ... that file gets removed or modified by rsync ... how do I get original hash so that I can unpin it? https://github.com/ipfs/go-ipfs/issues/5293 https://github.com/ipfs/go-ipfs/pull/5286#issuecomment-408250115 ... the "fix" for now is to run ipfs filestore verify, but running add and verify is just double work, and full add already takes an hour.
<nixze> my workaround for this will be to add to filestore, and then add it to mfs, and use mfs to track old hashes, and compare to new ones based on mtime and dtime
<lemmi> nixze: ipfs filestore ls before rsync. then rsync and track what gets removed
<lemmi> if you don't use the output of rsync, you'll have no choice but to rescan
<nixze> ipfs filestore ls does not give file hash - only block hashes
<nixze> lemmi: I will use mfs to get those hashes and handle this - and that's fine
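nixze's mfs workaround, sketched; `ipfs files cp` and `ipfs files stat --hash` are the real subcommands, while the /mirror/ prefix and helper names are invented:

    package mfstrack

    import (
        "os/exec"
        "strings"
    )

    // record the current file hash under an mfs mirror of the real path
    // (parent dirs must already exist; `ipfs files mkdir -p` can create them)
    func track(relPath, hash string) error {
        return exec.Command("ipfs", "files", "cp", "/ipfs/"+hash, "/mirror/"+relPath).Run()
    }

    // later, recover the hash recorded before rsync changed or removed the
    // source file, so the old blocks can be unpinned / cleaned up
    func previousHash(relPath string) (string, error) {
        out, err := exec.Command("ipfs", "files", "stat", "--hash", "/mirror/"+relPath).Output()
        return strings.TrimSpace(string(out)), err
    }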
<voker57_> [02:25:11] <void9> and this seems like a project with quite a lot of attention .. why is the development so slow ? I mean it's been going on for 3 years at least
<voker57_> I reckon the team works on filecoin since that's what they have been paid to do
<void9> I find the filecoin concept to be orders of magnitude more difficult to accomplish than ipfs
<voker57_> ipfs certainly would need improvements if it were to work as filecoin storage backend, so hopefully it'll get some attention (esp. performance-wise) too
jesse22 has joined #ipfs
<lemmi> ah, haven't had large enough files to notice this. but then mfs is the way to go right now, yes. i do that as well.
jesse22 has quit [Client Quit]
<void9> ok this is sad, https://github.com/piedar/js-ipfs-mount freezes while copying after a few KB, and ipfs mount copies 24K in 30 seconds