aschmahmann changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.7.0 and js-ipfs 0.52.3 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Conduct: https://git.io/vVBS0
opa73311 has left #ipfs [#ipfs]
Newami has quit [Max SendQ exceeded]
Newami has joined #ipfs
drathir_tor has quit [Remote host closed the connection]
zeden has joined #ipfs
hedgeho9 has joined #ipfs
royal_screwup214 has quit [Quit: Connection closed]
royal_screwup214 has joined #ipfs
hedgeho9 has quit [Ping timeout: 260 seconds]
royal_screwup214 has quit [Ping timeout: 245 seconds]
drathir_tor has joined #ipfs
Newami has quit [Quit: Leaving]
pecastro has quit [Ping timeout: 264 seconds]
jesse22 has quit [Ping timeout: 264 seconds]
jackson9218[m] has joined #ipfs
royal_screwup214 has joined #ipfs
zeden has quit [Quit: WeeChat 3.0.1]
royal_screwup214 has quit [Ping timeout: 246 seconds]
zeden has joined #ipfs
Mikaela has quit [Remote host closed the connection]
<JibranShaikh[m]>
Hi guys, I am hosting an IPFS server on a VPS. I have set it up correctly: no errors in the logs and it is publicly accessible. I have allowed about 10GB for hosting, but after 2 days it is still hosting only about 25MB while eating a lot of CPU. Is something wrong?
}ls{ has quit [Quit: real life interrupt]
<JibranShaikh[m]>
I had set up an IPFS server on my laptop and it consumed 10GB in a day :-p
kn0rki has joined #ipfs
royal_screwup214 has joined #ipfs
royal_screwup214 has quit [Max SendQ exceeded]
treora has quit [Ping timeout: 264 seconds]
treora has joined #ipfs
rvalle has joined #ipfs
Guest79 has joined #ipfs
<rvalle>
Hi! I am trying to set up IPFS for the first time. IPFS is running, the UI status says I have 800 peers, and I see traffic up and down. But if I share a link to a hello-world file it never opens. What could be wrong?
<rvalle>
port 4001 is NATed through to IPFS, and I allowed outbound connections from TCP/UDP ports 1024 to 65000+
<rvalle>
no clue what could be wrong....
<rvalle>
Maybe it is because I posted here... but it started working just now.
<Evanito[m]>
IPFS takes a bit to initially distribute a file, as you have discovered
<rvalle>
yes... now I am trying to download a 350MB file that I also have there... and I am surprised it is coming from ipfs.io at 2.5 Mb/s
<rvalle>
Next I want to serve content to the traditional web with a gateway reachable via DNS
<rvalle>
is there any way I can avoid having to update DNS records every time the content changes?
Mikaela has quit [Ping timeout: 268 seconds]
Mikaela has joined #ipfs
hedgeho9 has joined #ipfs
<Guest92766>
I'm running a go-ipfs daemon with the S3 backend via a Docker container on a server, but other IPFS instances don't seem to be able to connect to it
<Guest92766>
addresses and protocols being null doesn't sound good
Guest79 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
koalajoe[m] has left #ipfs ["User left"]
eyenx has quit [Read error: Connection reset by peer]
Nact has joined #ipfs
pecastro has joined #ipfs
<Guest92766>
the Python IPFS HTTP API doesn't support daemons newer than 0.7 :(
fling has quit [Ping timeout: 246 seconds]
wiss has joined #ipfs
ylp has joined #ipfs
fling has joined #ipfs
<Evanito[m]>
<rvalle "is there any way I can avoid hav"> Either make it easier to update your DNS (such as using ipfs-deploy) or use an IPNS record instead of IPFS, and update that instead.
<Evanito[m]>
<Guest92766 "addresses and protocols being nu"> To my understanding, Docker doesn't open any ports by default. Have you opened the IPFS port 4001 manually?
<Guest92766>
yeah, 4001 (tcp & udp) and 5001 as well
<rvalle>
@Evanito: can I put IPNS record on my DNS?
<Guest92766>
I think what's happening is that ipfs commands like cat and add are supposed to talk to the daemon by default, but they don't
hedgeho9 has quit [Quit: WeeChat 3.1]
<Evanito[m]>
<rvalle "@Evanito: can I put IPNS record"> Yep, will just need to call it /ipns/ instead of /ipfs/ iirc
supercoven has joined #ipfs
<rvalle>
OK, because the docs present IPNS and DNSLink as alternatives... not as things that can be combined
<Evanito[m]>
yuvipanda: Good call, you may need to connect over the HTTP API since the daemon is not running "locally"
<Guest92766>
I'm calling it from the same machine the ports are exposed on. is there a way for me to tell ipfs cat to connect to the daemon?
<Evanito[m]>
<rvalle "OK, because the docs present IPN"> Mentions here that it can support `/ipns/` for DNSLink 👍️
bengates has joined #ipfs
bengates has quit [Read error: Connection reset by peer]
<rvalle>
Evanito: I can see it now, I totally missed it. I will go with that for now, my DNS is not so easy to update....
<rvalle>
thanks
ipfs-stackbot has quit [Remote host closed the connection]
<Evanito[m]>
To test if you have found the right host+port, you can check the connection with `curl -X POST http://127.0.0.1:5001/api/v0/id`, with your own host+port substituted in
<Evanito[m]>
Cool, you *can* change the host+port that the `ipfs` cli uses, but I've forgotten the syntax
<Evanito[m]>
Looks like it's `--api`, with a default of `/ip4/127.0.0.1/tcp/5001` (i.e. `ipfs --api /ip4/127.0.0.1/tcp/5001 id`), which is odd because then it should be seeing your node.
<Evanito[m]>
Seems like the work of ghosts, have you tried restarting?
<Guest92766>
yeah
<Guest92766>
at least the docker container
<Guest92766>
don't wanna restart the server :D
<Evanito[m]>
Understandable, but I'd try it since now I suspect the commandline is at fault here.
<Evanito[m]>
That or set an alias `alias ipfsc="ipfs --api /ip4/127.0.0.1/tcp/5001"` as a hack around it 😂
bengates has quit [Remote host closed the connection]
<Guest92766>
ya, I'm just talking to the HTTP API directly from python instead
<Guest92766>
which works
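A minimal sketch of that approach, assuming the `requests` package and the default API address (go-ipfs requires POST for API calls):

```python
import requests

API = "http://127.0.0.1:5001/api/v0"  # adjust to wherever the daemon's API listens

# Equivalent of `curl -X POST .../api/v0/id`: confirms the API is reachable.
print(requests.post(f"{API}/id").json()["ID"])

# Equivalent of `ipfs cat <cid>`: fetch a file's raw bytes.
cid = "bafkreibvmyjpuyteunlyce4mpps32ccm4greo7uizllh5f7msra4cigl6y"
resp = requests.post(f"{API}/cat", params={"arg": cid})
resp.raise_for_status()
data = resp.content
```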
bengates has joined #ipfs
<Guest92766>
now sometimes my JSON has in it `{'Message': 'invalid path "comparisons.png": illegal base32 data at input byte 10', 'Code': 0, 'Type': 'error'}`
<Guest92766>
which I can only guess is coming from IPFS
<Evanito[m]>
Would need to know more about your operation to figure that out, could be as simple as an uncleaned string
<Evanito[m]>
Or improperly encoded
<Guest92766>
yeah, most likely since I'm writing to the HTTP API directly
<Guest92766>
it's bafkreibvmyjpuyteunlyce4mpps32ccm4greo7uizllh5f7msra4cigl6y on IPFS
<Guest92766>
I don't even see a 'comparisons.png' tho
<Guest92766>
there's an <img src='comparisons.png'>
<Guest92766>
are these being automatically included somehow?! I doubt that.
<Evanito[m]>
🤔 This error just started popping up? Very strange indeed. I don't see how IPFS could have affected the loading of an image in a jupyter notebook
bengates_ has joined #ipfs
<Guest92766>
yeah, but the image isn't even in the notebook. it's just a link
<Evanito[m]>
Oh so you mean you took this source file and imported it to IPFS then tried to run it from there?
<Guest92766>
even more confusing, since `curl -X POST http://localhost:5001/api/v0/cat\?arg\=bafkreibvmyjpuyteunlyce4mpps32ccm4greo7uizllh5f7msra4cigl6y` doesn't have anything about comparisons.png other than the <img>
<Guest92766>
I took the source file, put it on ipfs, and am just trying to parse it
bengates has quit [Ping timeout: 246 seconds]
<Guest92766>
the thing is, notebook error messages never have keys with first letter caps
<Guest92766>
so not sure where this is coming from
<Evanito[m]>
Well if you don't know then I *definitely* don't know. I'm sticking with my first guess that this got encoded wrong somewhere.
<Evanito[m]>
Maybe try cleaning it before parsing?
<Guest92766>
yeah, am looking at the JSON source it's parsing and coming up empty
<Guest92766>
am gonna keep looking around
<Guest92766>
aaaaaaaaa, I got it! the rendered notebook was trying to load comparisons.png, and my server was interpreting that as an IPFS CID and failing
<Evanito[m]>
Are you using the output from cat to create a string and then parse it into JSON?
<Guest92766>
which explains why the error message was like that :D
<Evanito[m]>
I never would have guessed that your server was trying to cat a random filename, glad you got it figured out
<Guest92766>
:D :D
<Guest92766>
DE BUG GING
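For anyone hitting the same thing: the daemon rejected the relative path `comparisons.png` because it isn't a CID, hence the "illegal base32 data" error. A rough guard one could put in front of cat calls, sketched as a heuristic rather than a real CID parser (a proper CID library would be more robust):

```python
import re

# Heuristic only, not a full CID parser: CIDv0 is "Qm" + 44 base58 chars;
# CIDv1 as served here is lowercase base32 starting with "b".
CIDV0_RE = re.compile(r"^Qm[1-9A-HJ-NP-Za-km-z]{44}$")
CIDV1_RE = re.compile(r"^b[a-z2-7]{20,}$")

def looks_like_cid(s: str) -> bool:
    return bool(CIDV0_RE.match(s) or CIDV1_RE.match(s))

assert looks_like_cid("bafkreibvmyjpuyteunlyce4mpps32ccm4greo7uizllh5f7msra4cigl6y")
assert not looks_like_cid("comparisons.png")  # the path that triggered the error
```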
john2gb has quit [Read error: Connection reset by peer]
john2gb has joined #ipfs
Caterpillar has joined #ipfs
royal_screwup214 has joined #ipfs
royal_screwup214 has quit [Client Quit]
royal_screwup214 has joined #ipfs
mascherone108[m] has quit [Quit: Idle for 30+ days]
dpatterbee[m] has quit [Quit: Idle for 30+ days]
rfoxxy[m] has quit [Quit: Idle for 30+ days]
swedneck2 has quit [Quit: Idle for 30+ days]
vaultec81[m] has quit [Quit: Idle for 30+ days]
igel[m] has quit [Quit: Idle for 30+ days]
IanPreston[m] has quit [Quit: Idle for 30+ days]
sponge[m] has quit [Quit: Idle for 30+ days]
ZerataX has quit [Quit: Idle for 30+ days]
l0ll15[m] has quit [Quit: Idle for 30+ days]
snoopie[m] has quit [Quit: Idle for 30+ days]
cparker[m] has quit [Quit: Idle for 30+ days]
haint_zer0[m] has quit [Quit: Idle for 30+ days]
liviaamara[m] has quit [Quit: Idle for 30+ days]
royal_screwup214 has quit [Ping timeout: 246 seconds]
<Guest92766>
Evanito: awesome! I also have hypothes.is loading there for annotations (select any text and annotate). Now the hope is to set the canonical URL to the IPFS CID, so you can see the same annotations wherever you are
royal_screwup214 has joined #ipfs
<Evanito[m]>
Best of luck!
<Guest92766>
I also wonder if badger instead of the default leveldb will help?
<Guest92766>
probably a bunch of CPU usage is just calls to s3
<Guest92766>
also tempted to try the js daemon instead of the go one, since I can actually write JS
fling has quit [Ping timeout: 246 seconds]
royal_screwup214 has quit [Ping timeout: 265 seconds]
fling has joined #ipfs
[Seldon] has joined #ipfs
<Guest92766>
Evanito: should I also pin all these notebooks manually? Or would that be done 'automatically' since I'm adding them myself?
<Guest92766>
I guess I don't want ipfs to just delete them for GC purposes :D
<Evanito[m]>
Yeah you probably should. `add` does **not** automatically pin
<Guest92766>
> For example, if you add a file using the CLI command ipfs add (opens new window), the IPFS node will automatically pin that file.
<Guest92766>
makes sense - that's probably something the commandline does
<Evanito[m]>
Huh, always happy to be corrected! 😅
<Evanito[m]>
Apparently the default operation is yes to pin it.
<Evanito[m]>
* You don't need to, `add` does automatically pin
<Guest92766>
oh, I thought that was for just the commandline - not the API
<Evanito[m]>
> pin [bool]: Pin this object when adding. Default: true. Required: no.
<Guest92766>
makes sense. I'll revert my commit lol
<Evanito[m]>
And to my understanding the CLI is at parity with the HTTP API
<Guest92766>
right
<Guest92766>
that makes sense
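To make the quoted doc concrete, a sketch of an add over the HTTP API with the `pin` parameter spelled out (it defaults to true, so passing it is only needed to opt out); the filename is hypothetical:

```python
import requests

API = "http://127.0.0.1:5001/api/v0"

with open("notebook.ipynb", "rb") as f:  # hypothetical file
    resp = requests.post(
        f"{API}/add",
        params={"pin": "true"},  # the default; pass "false" to add without pinning
        files={"file": f},
    )
resp.raise_for_status()
print(resp.json()["Hash"])  # the CID of the added file
```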
RingtailedFox has quit [Read error: Connection reset by peer]
RingtailedFox has joined #ipfs
sz0 has joined #ipfs
sz0 has quit [Max SendQ exceeded]
sz0 has joined #ipfs
sz0 has quit [Max SendQ exceeded]
sz0 has joined #ipfs
<Guest92766>
so I've got S3 'mounted' at /blocks, and leveldb at /. So this means I have to somehow back up the leveldb data as well - without the metadata in it, the S3 blocks aren't very useful.
<Guest92766>
is that correct?
sz0 has quit [Max SendQ exceeded]
<Guest92766>
so if I want to horizontally scale out, I'll need to use ipfs cluster
<Guest92766>
even if I use something like the database datastore, I can't just point multiple nodes to the same database?
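On the backup question above, a sketch of archiving the repo metadata while skipping blocks/ (which lives in S3 in this setup). It assumes the default ~/.ipfs repo path and that the daemon is stopped so leveldb is quiescent; whether multiple nodes can share one database is a separate question this doesn't answer:

```python
import tarfile
from pathlib import Path

# Archive repo metadata (leveldb, config, keystore) but skip blocks/, which in
# this setup lives in S3 anyway. Assumes the default repo path and a stopped
# daemon, so the leveldb files are quiescent.
repo = Path.home() / ".ipfs"

with tarfile.open("ipfs-repo-metadata.tar.gz", "w:gz") as tar:
    for path in repo.iterdir():
        if path.name != "blocks":
            tar.add(path, arcname=path.name)
```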
cp- has quit [Quit: Disappeared in a puff of smoke]
cp- has joined #ipfs
cp- has quit [Client Quit]
cp- has joined #ipfs
cp- has quit [Client Quit]
chiui has joined #ipfs
cp- has joined #ipfs
fling has joined #ipfs
tech_exorcist has joined #ipfs
tech_exorcist has quit [Max SendQ exceeded]
eyenx has joined #ipfs
tech_exorcist has joined #ipfs
royal_screwup214 has quit [Quit: Connection closed]
royal_screwup214 has joined #ipfs
eyenx has quit [Remote host closed the connection]
royal_screwup214 has quit [Ping timeout: 246 seconds]
eyenx has joined #ipfs
jcea has joined #ipfs
<voker57>
Guest92766: "NoFetch": true in config
royal_screwup214 has joined #ipfs
eyenx has quit [Remote host closed the connection]
eyenx has joined #ipfs
eyenx has quit [Quit: Bridge terminating on SIGTERM]
eyenx has joined #ipfs
eyenx has quit [Client Quit]
fling has quit [Ping timeout: 245 seconds]
`Alison has joined #ipfs
natinso- has joined #ipfs
plntyk2 has joined #ipfs
PendulumSwinger9 has joined #ipfs
plntyk2 has quit [Max SendQ exceeded]
Caterpillar2 has joined #ipfs
plntyk2 has joined #ipfs
mrus has joined #ipfs
natinso^ has joined #ipfs
jadedctrl has joined #ipfs
plntyk2 has quit [Max SendQ exceeded]
endoffile_ has joined #ipfs
plntyk2 has joined #ipfs
plntyk2 has quit [Max SendQ exceeded]
plntyk2 has joined #ipfs
Nebraskka_ has joined #ipfs
madnight has joined #ipfs
knix_ has joined #ipfs
knix_ has joined #ipfs
plntyk2 has quit [Max SendQ exceeded]
rektide has joined #ipfs
Trieste has joined #ipfs
plntyk2 has joined #ipfs
matthewcroughan_ has joined #ipfs
coniptor_ has joined #ipfs
sknebel_ has joined #ipfs
jadedctrl_ has quit [Ping timeout: 256 seconds]
Alison` has quit [Ping timeout: 256 seconds]
natinso has quit [Ping timeout: 256 seconds]
madnight_ has quit [Ping timeout: 256 seconds]
sammacbeth has quit [Ping timeout: 256 seconds]
mrusme has quit [Ping timeout: 256 seconds]
PendulumSwinger has quit [Ping timeout: 256 seconds]
pfista has quit [Ping timeout: 256 seconds]
natinso| has quit [Ping timeout: 256 seconds]
Caterpillar has quit [Ping timeout: 256 seconds]
save-lisp-or-die has quit [Ping timeout: 256 seconds]
plntyk has quit [Ping timeout: 256 seconds]
Nebraskka has quit [Ping timeout: 256 seconds]
Trieste_ has quit [Ping timeout: 256 seconds]
rektide_ has quit [Ping timeout: 256 seconds]
coniptor has quit [Ping timeout: 256 seconds]
endoffile has quit [Ping timeout: 256 seconds]
hyperfekt has quit [Ping timeout: 256 seconds]
Sigma has quit [Ping timeout: 256 seconds]
sknebel has quit [Ping timeout: 256 seconds]
jmsx has quit [Ping timeout: 256 seconds]
knix has quit [Ping timeout: 256 seconds]
matthewcroughan has quit [Ping timeout: 256 seconds]
pepesza has quit [Ping timeout: 256 seconds]
faenil_ has quit [Ping timeout: 256 seconds]
jmsx has joined #ipfs
pepesza has joined #ipfs
cp- has quit [Quit: Disappeared in a puff of smoke]
PendulumSwinger9 is now known as PendulumSwinger
hyperfekt_ has joined #ipfs
coniptor_ is now known as coniptor
sammacbeth has joined #ipfs
cp- has joined #ipfs
Sigma has joined #ipfs
BeatRupp[m] has left #ipfs ["User left"]
pfista has joined #ipfs
save-lisp-or-die has joined #ipfs
Trieste has quit [Max SendQ exceeded]
hyperfekt_ has quit [Max SendQ exceeded]
hyperfekt has joined #ipfs
faenil has joined #ipfs
Trieste has joined #ipfs
}ls{ has joined #ipfs
fling has joined #ipfs
Trieste has quit [Max SendQ exceeded]
Trieste has joined #ipfs
sknebel_ is now known as sknebel
Mikaela has quit [Remote host closed the connection]
Mikaela has joined #ipfs
dsrt^ has quit []
royal_screwup214 has quit [Quit: Connection closed]
royal_screwup214 has joined #ipfs
cfvnhtsp^ has joined #ipfs
sknebel has quit [Max SendQ exceeded]
Caterpillar2 is now known as Caterpillar
sknebel has joined #ipfs
FettesBrot has joined #ipfs
royal_screwup214 has quit [Ping timeout: 245 seconds]
eyenx has joined #ipfs
royal_screwup214 has joined #ipfs
eyenx has quit [Quit: Bridge terminating on SIGTERM]
Guest79 has joined #ipfs
KempfCreative has joined #ipfs
tj11 has quit [Remote host closed the connection]
cp- has quit [Quit: Disappeared in a puff of smoke]
cp- has joined #ipfs
royal_screwup214 has quit [Quit: Connection closed]
LiftLeft has joined #ipfs
LiftLeft has quit [Max SendQ exceeded]
LiftLeft has joined #ipfs
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Max SendQ exceeded]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Max SendQ exceeded]
royal_screwup21 has joined #ipfs
Newami has joined #ipfs
Newami has quit [Remote host closed the connection]
RoseBus has quit [Max SendQ exceeded]
jokoon has joined #ipfs
thuggest has joined #ipfs
thuggest has quit [Max SendQ exceeded]
fling has quit [Ping timeout: 276 seconds]
FettesBrot has quit [Quit: Connection closed]
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Max SendQ exceeded]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Max SendQ exceeded]
royal_screwup21 has joined #ipfs
supercoven_ has quit [Read error: Connection reset by peer]
supercoven has joined #ipfs
royal_screwup21 has quit [Quit: Connection closed]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Ping timeout: 260 seconds]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Max SendQ exceeded]
royal_screwup21 has joined #ipfs
arcatech has joined #ipfs
theseb has joined #ipfs
royal_screwup21 has quit [Quit: Connection closed]
<theseb>
I'm a newbie at RAIDs, but they don't seem to have any magic to increase reliability beyond redundancy? i.e. mirroring is all you can do to increase reliability... there are no other tricks?
royal_screwup21 has joined #ipfs
<theseb>
(I'm wondering how an ipfs app could increase reliability of storage)
<Discordian[m]>
Run 3 nodes, sync the same CID on all 3
<Discordian[m]>
* Run 3 nodes, pin the same CID on all 3
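A sketch of that three-node idea over the HTTP API, with hypothetical node addresses:

```python
import requests

# Hypothetical API endpoints for three nodes you control.
NODES = [
    "http://10.0.0.1:5001",
    "http://10.0.0.2:5001",
    "http://10.0.0.3:5001",
]

def pin_everywhere(cid: str) -> None:
    """Pin the same CID on every node, so the data survives any single failure."""
    for node in NODES:
        resp = requests.post(f"{node}/api/v0/pin/add", params={"arg": cid})
        resp.raise_for_status()

pin_everywhere("QmcXsq8eWVLuYmBhJpYq9A1ytWMocftsdchsgZqiqsBh91")
```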
royal_screwup21 has quit [Ping timeout: 264 seconds]
Imperial has joined #ipfs
Imperial has left #ipfs [#ipfs]
svdmerwe has quit [Quit: Idle for 30+ days]
aironmcphee[m] has quit [Quit: Idle for 30+ days]
kavin[m] has quit [Quit: Idle for 30+ days]
ninesigns[m] has quit [Quit: Idle for 30+ days]
Ignacio[m]2 has quit [Quit: Idle for 30+ days]
andre[m] has quit [Quit: Idle for 30+ days]
JakWolf[m] has quit [Quit: Idle for 30+ days]
test[m]2 has quit [Quit: Idle for 30+ days]
GonZo2k has quit [Quit: Idle for 30+ days]
tm--st[m] has quit [Quit: Idle for 30+ days]
jesuskelp[m] has quit [Quit: Idle for 30+ days]
Nebraskka_ has quit [Quit: Good day old chaps]
Nebraskka has joined #ipfs
<theseb>
Discordian[m]: even ipfs still doesn't let you throw your data on random nodes in the network and expect it to be available
<theseb>
the next step seems to be somehow making wise use of all the unused capacity in the world
<theseb>
that's what i'm driving at
<Discordian[m]>
If you pin it, how would it ever not be available on the node?
<theseb>
Discordian[m]: well then YOU are responsible...i'm trying to imagine IPFS replacing AWS
<Discordian[m]>
I'm not against working on features that cache data longer
<theseb>
Discordian[m]: i.e. a good system you can pay to just throw your data onto
joey has quit [Remote host closed the connection]
joey has joined #ipfs
<Discordian[m]>
Forgive me for the question, but isn't that similar to what FileCoin is trying to accomplish?
<Discordian[m]>
Or a pinning service on IPFS?
<Discordian[m]>
AWS is quite broad too, I assume you're talking about something closer to S3 or EFS or something.
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Max SendQ exceeded]
<theseb>
yes S3
royal_screwup21 has joined #ipfs
<theseb>
Discordian[m]: i don't know much about filecoin
royal_screwup21 has quit [Max SendQ exceeded]
royal_screwup21 has joined #ipfs
royal_screwup21 has quit [Client Quit]
royal_screwup21 has joined #ipfs
<Discordian[m]>
I believe with filecoin you pay miners to store your data. I'm not 100% sure if I follow exactly what it is as a whole
Nebraskka has quit [Quit: Good day old chaps]
<Discordian[m]>
However with IPFS v0.8.0, I believe the daemon can integrate with pinning services
Nebraskka has joined #ipfs
<theseb>
Discordian[m]: yes but with filecoin and your pinning services....people still have to step up and declare themselves to be storage providers
<theseb>
I mean that is wonderful...but isn't the next phase where EVERYONE does that automatically by sharing the unused space on their drives?
<Discordian[m]>
Ahh, I see what you're saying
<theseb>
think of all that wasted space
<theseb>
good thanks
<Discordian[m]>
TBH it'd be nice if the daemon automatically cached whatever it encountered until the gc kicks in or something
<theseb>
yea
<Discordian[m]>
Cache the last known CID an IPNS address points to too, if it's unavailable
royal_screwup21 has quit [Ping timeout: 256 seconds]
<rvalle>
Quick beginner question: I uploaded a big folder to my IPFS node, named it with IPNS, and mounted it on a domain with DNSLink, and it is working. Now my question is: this does not "pin" my files on my own IPFS server, does it? Do I have to explicitly pin?
<swedneck>
by default any data you add will be pinned
jokoon has quit [Quit: Leaving]
<rvalle>
jan Swedneck: So the content in "Files" is pinned, right? I cannot recognize it when I list the "3 pins" that come there; none of the CIDs show in my files....
<swedneck>
you're talking about the webui/ipfs-desktop?
<rvalle>
yes
<swedneck>
hmm, i'm actually not sure whether adding things to the webui pins them automatically
<Discordian[m]>
I think it does
<Discordian[m]>
However, from the CLI, writing directly into MFS doesn't pin
<rvalle>
Actually, I used pin on my main folder and it processed it; now it shows.
<Discordian[m]>
\o/
<rvalle>
I guess it is good to have the certainty. If it pinned automatically, it would be nice to be able to see that....
<swedneck>
well there's a pin icon for pinned things
<rvalle>
Really? I don't see a pin icon on my folder or on the files inside.
<rvalle>
I can see that one of the pins have the CID of the folder and clicking it takes me to the folder....
<rvalle>
is it possible to PIN by IPNS? or always by CID?
<Discordian[m]>
Not possible to pin by IPNS, I'm not sure if it's a planned feature or not, I hope it is. In the meantime I was thinking of enabling ipfs-sync to pin by IPNS, but if the main daemon is getting the feature, I should just PR it.
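In the meantime, a workaround sketch: resolve the IPNS name to its current CID and pin that, re-running periodically to follow updates (default API address assumed, key truncated and hypothetical):

```python
import requests

API = "http://127.0.0.1:5001/api/v0"

def pin_ipns(name: str) -> str:
    # Resolve /ipns/<name> to its current /ipfs/<cid> path...
    resolved = requests.post(f"{API}/name/resolve", params={"arg": name}).json()["Path"]
    # ...then pin whatever it currently points at.
    requests.post(f"{API}/pin/add", params={"arg": resolved}).raise_for_status()
    return resolved

pin_ipns("/ipns/k51qzi5uqu5dh...")  # hypothetical key, truncated
```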
<rvalle>
I was wondering if perhaps I am meant to use the cluster feature... to manage multiple IPFS servers...
<Discordian[m]>
For multiple nodes, the cluster features are for exactly that AFAIK
<rvalle>
I guess I have to experiment more... lots of things still not clear to me... but this looks cool!
pederdm000[m] has left #ipfs ["User left"]
<Discordian[m]>
Glad you're excited, I found IPFS a few weeks ago, and I love it!
<Discordian[m]>
A true love story, love at first sight.
<rvalle>
I found out about it long ago, but only now decided to jump on the wagon! It is certainly cool.
<rvalle>
jan Swedneck: now the pin icon showed up!!
koo5 has quit [Ping timeout: 245 seconds]
<Discordian[m]>
I suppose adding it to MFS via the app is the same as via the CLI or HTTP API
<Discordian[m]>
All the same, things in MFS aren't removed by GC anyway
koo5 has joined #ipfs
bengates_ has quit [Remote host closed the connection]
sz0 has quit [Quit: Connection closed for inactivity]
mshep999 has joined #ipfs
mshep999 has quit [Max SendQ exceeded]
mshep999 has joined #ipfs
mshep999 has quit [Max SendQ exceeded]
mshep999 has joined #ipfs
mshep999 has quit [Max SendQ exceeded]
mshep999 has joined #ipfs
mshep999 has left #ipfs [#ipfs]
snamber has joined #ipfs
theseb has quit [Quit: Leaving]
limbo_ has quit [Ping timeout: 260 seconds]
ctOS has quit [Quit: Connection closed for inactivity]
<RubenKelevra[m]>
<Discordian[m] "I suppose adding it to MFS via t"> yes
<RubenKelevra[m]>
<rvalle "is it possible to PIN by IPNS? o"> While this feature has been planned for a long time, it's not possible.
<RubenKelevra[m]>
<rvalle "I was wondering if perhaps I am "> This makes the most sense. If you need help, I wrote some scripts around the cluster use for my cluster setup. Feel free to ask :)
[Seldon] has quit [Quit: This computer has gone to sleep]
chiui has joined #ipfs
chiui has quit [Ping timeout: 260 seconds]
<Discordian[m]>
<RubenKelevra[m] "While this feature has been plan"> Do you know if there's an issue open for it? Any blockers?
Ringtailed-Fox has quit [Read error: Connection reset by peer]
Ringtailed-Fox has joined #ipfs
plntyk has quit [Ping timeout: 245 seconds]
plntyk has joined #ipfs
<vaultec81[m]1>
Has anyone noticed major speed reduction with using QUIC transport in IPFS?
<Evanito[m]>
If anything you should be getting a speed increase; are you sure it's not being conflated with some other change?
<vaultec81[m]1>
I've been doing some in-the-wild testing and with my local machine. QUIC connections have extremely high latency and sometimes don't return pings at all using the `ipfs ping` command; when using TCP this isn't an issue whatsoever, but it does take many minutes for the automatic switch-over to occur.
<vaultec81[m]1>
Transfer times are extremely slow as well
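A hedged note for isolating that: go-ipfs 0.7+ lets you toggle transports in config, so one test (assuming that config key exists in your version) is `ipfs config --json Swarm.Transports.Network.QUIC false`, then restart the daemon and compare latency over TCP only.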
<octav1a>
Can anyone help me figure out what is wrong with my ipfs client? I'm able to view a small file over multiple public ipfs gateways, but my client just hangs _forever_ on "ipfs cat" (left it running in a terminal for a few days just for the heck of it, never even times out) . What should I check first?
tfl^ has joined #ipfs
dta2021[m] has left #ipfs ["User left"]
<Evanito[m]>
octav1a: did you give it anything to cat? what is your output of `ipfs cat QmcXsq8eWVLuYmBhJpYq9A1ytWMocftsdchsgZqiqsBh91`
<octav1a>
That one works
<Evanito[m]>
It was probably waiting for you to tell it what to cat then, if you leave it blank it expects you to type the CID manually
tech_exorcist has quit [Quit: tech_exorcist]
<octav1a>
Evanito[m]: I'm a newbie at some things, but not quite that bad :p , here is an example command I try: ipfs cat QmSNifatPiB1kZswsYcJmQ7k3ukVb47abax1UqK5fZpYup > something.png
<Evanito[m]>
which is to say `ipfs cat QmSNifatPiB1kZswsYcJmQ7k3ukVb47abax1UqK5fZpYup > something.png` worked for me
<octav1a>
Okay, it seems that if you do ipfs cat /ipfs/<hash> it fails, but ipfs cat <hash> works... I guess some difference between versions. Nevermind....
<Evanito[m]>
also as a note you can do `ipfs get QmSNifatPiB1kZswsYcJmQ7k3ukVb47abax1UqK5fZpYup -o something.png` as an equivalent command if that's your preference
<Evanito[m]>
And yes, for IPFS you just do the CID; for IPNS I believe you preface it with `/ipns/ipnshash_or_url`
<octav1a>
Thank you. I was picking up a shell script that used the /ipfs/<hash> format, which must have worked with some version, but in the last year or so it no longer works. The delights of cutting-edge software :p
<Evanito[m]>
Sounds about right!
<Evanito[m]>
Also I would like to say `ipfs cat /ipfs/QmSNifatPiB1kZswsYcJmQ7k3ukVb47abax1UqK5fZpYup > something.png` *also* works for me
<octav1a>
I'm on version 0.8.0, is that the same?
<Evanito[m]>
Yup, go-ipfs v0.8.0
<Evanito[m]>
Does that command work/not work for anyone else lurking around?
* octav1a
has no explainable reason, then o.o
<octav1a>
(ubuntu 20.04, if that matters)
limbo has joined #ipfs
[Seldon] has quit [Quit: This computer has gone to sleep]
Mitar has quit [Ping timeout: 260 seconds]
bengates has joined #ipfs
misuto1 has joined #ipfs
misuto has quit [Read error: Connection reset by peer]
misuto1 is now known as misuto
veegee has quit [Read error: Connection reset by peer]
bengates has quit [Ping timeout: 246 seconds]
veegee_ has joined #ipfs
Jad has quit [Quit: Benefits I derive from freedom are largely the result of the uses of freedom by others, and mostly of those uses of freedom that I could never avail myself of.]
Adbray has quit [Quit: Ah! By Brain!]
Newami has joined #ipfs
mRX_ has joined #ipfs
mRX_ is now known as Samsepi0l
Samsepi0l is now known as whiterose
Mitar has joined #ipfs
pecastro has quit [Ping timeout: 260 seconds]
bengates has joined #ipfs
whiterose has quit [Remote host closed the connection]