aschmahmann changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.7.0 and js-ipfs 0.52.3 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
LiftLeft has joined #ipfs
alexgr has quit [Ping timeout: 240 seconds]
<Sheogorath[m]>
Uhm, am I correct that it's currently impossible to connect a local go-ipfs client with a remote go-ipfs daemon which has its API exposed through an HTTP basic-auth protected endpoint?
alexgr has joined #ipfs
LiftLeft has quit [Ping timeout: 240 seconds]
<Sheogorath[m]>
(I'm aware that this works with the IPFS-cluster tooling, but I thought I could omit the cluster setup since I only need one node, but apparently not 😬)
spectie has quit [Ping timeout: 276 seconds]
royal_screwup213 has quit [Quit: Connection closed]
royal_screwup213 has joined #ipfs
alexgr has quit [Ping timeout: 260 seconds]
LiftLeft has joined #ipfs
alexgr has joined #ipfs
mohsenmo[m] has joined #ipfs
royal_screwup213 has quit [Ping timeout: 260 seconds]
mohsenmo[m] is now known as Mohsen[m]
}ls{ has quit [Ping timeout: 260 seconds]
crookisj0kis has joined #ipfs
alexgr has quit [Ping timeout: 240 seconds]
}ls{ has joined #ipfs
alexgr has joined #ipfs
D_ has quit [Ping timeout: 250 seconds]
clemo has quit [Ping timeout: 240 seconds]
Nact has quit [Quit: Konversation terminated!]
spectie has joined #ipfs
spectie has quit [Changing host]
spectie has joined #ipfs
arcatech has quit [Quit: Be back later.]
crookisj0kis has quit [Ping timeout: 252 seconds]
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
jrt has quit [Ping timeout: 246 seconds]
alexgr has quit [Ping timeout: 252 seconds]
alexgr has joined #ipfs
crookisj0kis has joined #ipfs
LiftLeft has quit [Ping timeout: 240 seconds]
crookisj0kis has left #ipfs [#ipfs]
<Discordian[m]>
You could tunnel whatever port you want using SSH
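A minimal sketch of that approach, assuming the remote daemon's API listens on its default 127.0.0.1:5001 and `user@remote-host` is a placeholder:
```
# Forward the remote API to a local port over SSH; the basic-auth proxy is bypassed
# because we talk to the API on the remote machine's loopback interface.
ssh -N -L 5001:127.0.0.1:5001 user@remote-host

# Point the local go-ipfs CLI at the tunnelled API for a single command:
ipfs --api /ip4/127.0.0.1/tcp/5001 id
```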
LiftLeft has joined #ipfs
KempfCreative1 has joined #ipfs
KempfCreative has quit [Ping timeout: 246 seconds]
KempfCreative1 is now known as KempfCreative
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
arcatech has joined #ipfs
ipfs-stackbot has quit [Remote host closed the connection]
ipfs-stackbot has joined #ipfs
royal_screwup213 has joined #ipfs
alexgr has quit [Ping timeout: 252 seconds]
arcatech has quit [Ping timeout: 250 seconds]
alexgr has joined #ipfs
royal_screwup213 has quit [Ping timeout: 246 seconds]
arcatech has joined #ipfs
D_ has joined #ipfs
jcea has quit [Ping timeout: 250 seconds]
alexgr has quit [Ping timeout: 265 seconds]
alexgr has joined #ipfs
M0u0[m] has joined #ipfs
Arwalk has quit [Read error: Connection reset by peer]
<proletarius101>
the input/output error could occur at other locations too
<proletarius101>
Other ipfs commands also trigger it
<Discordian[m]>
I've been really curious how the daemon handles networked drives like that..
<proletarius101>
To be clear, some files are created in the bucket. And I have enabled the "storage" permission under "Cloud API permissions" for that VM
<proletarius101>
<Discordian[m] "I've been really curious how the"> It runs into the input/output error too
<Discordian[m]>
I wonder if you could use filestore, and throw the files in the bucket like you are now, but have the daemon's blockstore on the local storage.
<proletarius101>
<Discordian[m] "I wonder if you could use files"> For that, I run into an even more strange problem: I attempt to add a lot of videos (which are large). And the block storage becomes as large as the videos even if I add the files via `ipfs add --pin --nocopy -r the_location_of_the_dir`
<Discordian[m]>
That wouldn't play nice with pins though if you plan to pin a lot : /
<proletarius101>
Filestore is enabled
<Discordian[m]>
Yeah
<Discordian[m]>
aschmahmann: you ever see someone try this before?
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
<Discordian[m]>
This really should work IMO. Is it possible to avoid FUSE? Does Google offer a kernel module?
<Discordian[m]>
Might offer more features and play nicer..
<Discordian[m]>
I don't have a lot of experience with GCP; I know AWS has several options, a kernel module and a FUSE one
<proletarius101>
So maybe it's Google not implementing all file operations?
<Discordian[m]>
That's what I'm wondering, like FUSE has limitations, and I suspect they don't play nice with how IPFS accesses the blockstore
<Discordian[m]>
I have another software I've been trying to integrate into AWS's buckets, and I run into annoying timeout errors. Networked storage systems can get complex
<proletarius101>
<Discordian[m] "I have another software I've bee"> Hmm, but it's hard to believe nobody uses google cloud or something to host an ipfs node?
<proletarius101>
Or that's why there's a separate S3 datastore backend
<Discordian[m]>
Oh right there is that backend!
<Discordian[m]>
A proper S3 backend avoids dealing with unnecessary abstraction
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
ninekeysdown8 has joined #ipfs
stoopkid has joined #ipfs
distributedGuru has quit [Quit: WeeChat 3.1]
dpl has quit [Ping timeout: 268 seconds]
LiftLeft has quit [Ping timeout: 240 seconds]
royal_screwup213 has joined #ipfs
maggotbrain has quit [Read error: Connection reset by peer]
maggotbrain has joined #ipfs
LiftLeft has joined #ipfs
maggotbrain has quit [Remote host closed the connection]
maggotbrain has joined #ipfs
royal_screwup213 has quit [Ping timeout: 252 seconds]
dqx has quit [Read error: Connection reset by peer]
Arwalk has quit [Read error: Connection reset by peer]
<proletarius101>
<Discordian[m] "https://cloud.google.com/storage"> It works so greatly. Just another question: why the block store needs such large storage for nocopy? Does it actually copies?
<Discordian[m]>
If you have Filestore enabled and you're using `--nocopy`, it should only be storing the hashes of the blocks and the file locations; no actual data gets stored. Unless you pin other data, in which case that data will be stored in regular blocks in full.
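A hedged sketch of that setup, with the bucket mount path as a placeholder:
```
# Enable the experimental filestore, then add by reference instead of copying
# the data into the blockstore.
ipfs config --json Experimental.FilestoreEnabled true
ipfs add --nocopy --pin -r /mnt/bucket/videos

# Inspect what the filestore is tracking and check the referenced files still match.
ipfs filestore ls
ipfs filestore verify
```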
Arwalk has joined #ipfs
koo555 has joined #ipfs
<proletarius101>
<Discordian[m] "If you have Filestore enabled, a"> If I need to publish contents and avoid gc, I have to pin them, right?
dpl has joined #ipfs
kn0rki has quit [Quit: Leaving]
le0taku has quit [Ping timeout: 252 seconds]
theseb has joined #ipfs
theseb has left #ipfs [#ipfs]
<kallisti5[m]>
so... how do you speed up accessing data through gateways?
<kallisti5[m]>
We have our data pinned in two (or more) geographically diverse areas... but navigating the data on a nearby gateway is really slow
<Discordian[m]>
<proletarius101 "If I need to publish contents an"> Yes, or you can simply add them to MFS
<Discordian[m]>
<kallisti5[m] "so... how do you speed up access"> Hmm not entirely sure, I tested out using the ipfs.io gateway the other day and I hit 1Gbit instantly (all my bandwidth).
<Discordian[m]>
(Unfortunately not hosting my own yet)
<kallisti5[m]>
I think it's more that it's slow at "locating chunks"
<kallisti5[m]>
If you run your own node, try navigating /ipns/hpkg.haiku-os.org
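One way to check from your own node whether lookup rather than transfer is the slow part (this is just a rough diagnosis, and the CID is a placeholder):
```
# Listing a directory forces IPNS resolution plus block lookups:
ipfs ls /ipns/hpkg.haiku-os.org

# See how quickly providers for a given CID turn up on the DHT:
ipfs dht findprovs <cid>
```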
<Discordian[m]>
Opened instantly
<kallisti5[m]>
click through the directories
<kallisti5[m]>
could be the public gateways are just overloaded 🤔
<Discordian[m]>
Could be, dir nav is quick, dling at >4MB/s
<Discordian[m]>
Steadily increasing in speed too
<kallisti5[m]>
yeah.. we have at least two full pins, likely more
<Discordian[m]>
Peaked at ~7.1MB/s, dipped back to 4, seems to be working well.
<kallisti5[m]>
ok. that kind of confirms it
<kallisti5[m]>
Discordian: thanks :-) My tests are: my local gateway (has everything pinned locally, so yeah, fast), or misc public gateways
<Discordian[m]>
Haha I set up a node on a server I spin up specifically for testing my home node sometimes lol
<Discordian[m]>
No problem ^-^
leotaku has joined #ipfs
<kallisti5[m]>
hm.. so the IPFS desktop application runs with ```--enable-gc``` by default
<kallisti5[m]>
Which means, anyone who tries to pin our 20GiB repo will have to re-download all the chunks if pinning fails
<kallisti5[m]>
Is there a way to disable gc? I see Datastore -> StorageGCWatermark: 90, Datastore -> StorageMax: 10GB. Does --enable-gc "enable" those thresholds?
<Discordian[m]>
I believe GC is actually disabled by default and needs to be manually enabled. Not 100% sure about IPFS-Desktop, but I know in default go-ipfs it definitely is.
<Discordian[m]>
<kallisti5[m] "Is there a way to disable gc? I"> Yes
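For reference, automatic GC in go-ipfs only runs when the daemon is started with the flag; a collection can also be triggered by hand:
```
# Automatic GC (uses the Datastore.StorageMax / StorageGCWatermark thresholds):
ipfs daemon --enable-gc

# One-off manual collection of unpinned blocks:
ipfs repo gc
```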
<kallisti5[m]>
Looking at the mac IPFS-Desktop, ```ps aux | grep ipfs``` shows --enable-gc :-(
<Discordian[m]>
FWIW the GC won't touch pinned objects
<kallisti5[m]>
yeah, my issue is the window before pinning completes
<kallisti5[m]>
Pin 20GiB; if it fails, GC "erases everything" and you have to start over
<kallisti5[m]>
which is a bit painful... that's bitten me multiple times
<kallisti5[m]>
I guess I'm going to document changing "StorageMax: 10GB" to 200GB or something
<Discordian[m]>
Probably a good idea; you could also present your use-case in an issue on IPFS Desktop if you'd like. I don't know how much discussion there has actually been around those defaults, so the input might be valued
<Discordian[m]>
10GB as a default does seem quite low
<kallisti5[m]>
I feel like the thought process was for caching.. but pinning > 10GB is pretty easy to do nowadays
<kallisti5[m]>
so if your pin fails, restart downloading everything
lawid has quit [Quit: lawid]
<Discordian[m]>
Yeah maybe it's to avoid people being alarmed by IPFS' storage consumption.
<Jassu>
is there a way to absolutely prevent a node from losing data?
<Discordian[m]>
<Jassu "is there a way to absolutely pre"> You could pin the data you want to keep, or add it to MFS
<Jassu>
Yeah, but can the pinned data get flushed?
<Discordian[m]>
It doesn't; they're talking about the theoretical case where pinning fails or the download gets interrupted, and the GC wipes out the unpinned blocks.
lawid has joined #ipfs
tech_exorcist has quit [Ping timeout: 268 seconds]
<kallisti5[m]>
Jassu: example: I pin 20GB of data. IPFS downloads 15GB of chunks and the pin fails.
<Jassu>
Ah, ok. Yeah, I was thinking of a use case where, instead of using any kind of RAID, I'd rather distribute stuff over multiple nodes with single drives etc... So any node failing would be fine.
<kallisti5[m]>
IPFS will then delete all the downloaded chunks and I have to download 20GB again
<kallisti5[m]>
the solution is to disable garbage collection... or (maybe) raise the StorageMax to 200GiB or something.
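A sketch of that config change (200GB is just the example value from above):
```
# Raise the repo size limit so the GC watermark isn't hit mid-pin.
ipfs config Datastore.StorageMax 200GB

# Confirm the current values:
ipfs config Datastore.StorageMax
ipfs config Datastore.StorageGCWatermark
```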
<Discordian[m]>
<Jassu "Ah, ok. Yeah, I was thinking of "> Oh yeah, some people use cron jobs to keep IPNS in sync over multiple nodes, works like redundant mirrors like that
jcea1 has joined #ipfs
jcea has quit [Quit: jcea]
jcea1 is now known as jcea
jcea has quit [Quit: jcea]
royal_screwup213 has joined #ipfs
jcea has joined #ipfs
royal_screwup213 has quit [Ping timeout: 260 seconds]
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
clemo has joined #ipfs
mowcat has quit [Remote host closed the connection]
royal_screwup213 has joined #ipfs
tenore_di_grazia has joined #ipfs
tenore_di_grazia is now known as yosoylibre[m]
<yosoylibre[m]>
Good afternoon. One question, do you know how to see the link for a correctly uploaded and pinned archive?
ctOS has quit [Quit: Connection closed for inactivity]
<yosoylibre[m]>
<yosoylibre[m] "Good afternoon. One question, do"> i mean, I've been working whith go.ipfs (commandline)
MetaVinci[m] is now known as MetaPromoter[m]
<Discordian[m]>
What are you trying to retrieve exactly?
<Discordian[m]>
Like the CID?
<yosoylibre[m]>
this address is "like" a link to download anywhere?
<yosoylibre[m]>
If yes, yes.
<yosoylibre[m]>
😂😂😃
joey has quit [Remote host closed the connection]
joey has joined #ipfs
koo555 has quit [Ping timeout: 246 seconds]
<Discordian[m]>
If you've added and pinned an archive via the CLI, the CID would have been output when you did so. If you want to list all your pins, you can do `ipfs pin ls`, or `ipfs pin ls --help` to see the usage.
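For example (the CID shown is a placeholder):
```
# List only the top-level (recursive) pins rather than every child block:
ipfs pin ls --type=recursive

# Any pinned CID can then be shared as a gateway link, e.g.
#   https://ipfs.io/ipfs/<cid>
# or fetched directly with:
ipfs get <cid>
```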
gulpsaba[m] has left #ipfs ["User left"]
unnamed55355_ has joined #ipfs
unnamed55355 has quit [Ping timeout: 260 seconds]
koo555 has joined #ipfs
royal_screwup213 has quit [Quit: Connection closed]
royal_screwup213 has joined #ipfs
pecastro has quit [Ping timeout: 252 seconds]
royal_screwup213 has quit [Ping timeout: 265 seconds]
Arwalk has quit [Read error: Connection reset by peer]
Arwalk has joined #ipfs
dqx has joined #ipfs
jesse22_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]