<Rogastor>
I'm having some trouble installing ipfs
warner has joined #ipfs
taaem has quit [Ping timeout: 248 seconds]
Oatmeal has joined #ipfs
fleeky has joined #ipfs
fleeky_ has quit [Ping timeout: 240 seconds]
<Rogastor>
I'm having some trouble installing ipfs. When I enter "./install.sh" into terminal, I get "mv: rename ipfs to /usr/local/bin/ipfs: Permission denied". What should I do?
<Rogastor>
hello? anyone here?
<kpcyrd>
Rogastor: try sudo
gigq has quit [Quit: leaving]
<kpcyrd>
/usr/local/bin/ is only writable by root
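The fix being suggested, as a minimal sketch: the error comes from `mv` lacking write access to /usr/local/bin, so the install step has to run as root (this assumes the extracted `ipfs` binary sits in the current directory).

```shell
# install.sh fails because /usr/local/bin is root-owned;
# re-run the script with sudo so the final mv succeeds.
sudo ./install.sh

# Equivalently, do the move step yourself:
sudo mv ipfs /usr/local/bin/ipfs
```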
<Rogastor>
it asks for a password.
<Rogastor>
do I give mine?
<kpcyrd>
Rogastor: yes
warner has quit [Ping timeout: 252 seconds]
tclass has joined #ipfs
<Rogastor>
thanks.
bastianilso has joined #ipfs
john4 has joined #ipfs
tclass has quit [Ping timeout: 240 seconds]
john3 has quit [Ping timeout: 260 seconds]
fleeky has quit [Quit: Ex-Chat]
fleeky has joined #ipfs
koalalorenzo has joined #ipfs
koalalorenzo has quit [Client Quit]
koalalorenzo has joined #ipfs
<Mateon1>
I think I just found another bug
<Mateon1>
Ugh, and the DHT got stuck
<achin>
note that you don't actually need to "install" the ipfs binary anywhere. you can put it anywhere you want
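achin's point sketched out: skip the system-wide install entirely and keep the binary in a user-writable directory (the `$HOME/bin` path here is an arbitrary choice, and the binary is assumed to be in the current directory).

```shell
# No root needed: the ipfs binary is self-contained, so "installing"
# is just putting it somewhere on your PATH.
mkdir -p "$HOME/bin"
mv ipfs "$HOME/bin/"
export PATH="$HOME/bin:$PATH"   # add this line to ~/.profile to persist it
ipfs version                    # resolves without touching /usr/local/bin
```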
ulrichard has quit [Remote host closed the connection]
<Mateon1>
/ipfs/QmbHyoWRYXm29i2sVwZ5VQ6LzBNDcssKWnjyCZAHhMwqjH/ipfsrefsbug.js - run this inside a JS context capable of accessing localhost:5001 (e.g. open http://localhost:5001/webui and paste this into the console). Wait 30 seconds (on large repos), and note that the count of refs doesn't match for the two results.
<Mateon1>
Yes, I'm calling /api/v0/refs/local twice, and get differing output
<Mateon1>
Not only that, but there are about 16 duplicates for each reported hash
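A rough way to reproduce this check from the shell rather than the webui console, assuming a local daemon on port 5001 and `jq` installed (the endpoint streams newline-delimited JSON objects like `{"Ref":"Qm..."}`):

```shell
# Fetch the local refs twice over the HTTP API and compare.
curl -s "http://localhost:5001/api/v0/refs/local" | jq -r .Ref | sort > refs1.txt
curl -s "http://localhost:5001/api/v0/refs/local" | jq -r .Ref | sort > refs2.txt
wc -l refs1.txt refs2.txt    # differing counts would match the report above
uniq -d refs1.txt | head     # any output here means duplicated hashes
diff refs1.txt refs2.txt
```

(As it turns out below, the original discrepancy was a client-side stream-parsing bug, so a clean daemon should show identical output here.)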
<whyrusleeping>
Do you have the same problem through the cli?
<Mateon1>
Testing now
<Mateon1>
No, ipfs refs local from the CLI is consistent
JustinDrake has quit [Quit: JustinDrake]
<Mateon1>
Uh
<Mateon1>
Sorry, but that might be my code
<Mateon1>
Oh, whoops. That was a bug in the stream parser
<Mateon1>
Ugh, sorry about that
<Mateon1>
I forgot to set the current position to the last parsed line in the XHR readyState handler
<whyrusleeping>
lol, no worries
cyanobacteria has joined #ipfs
palkeo has quit [Ping timeout: 255 seconds]
gigq has joined #ipfs
bastianilso has quit [Quit: bastianilso]
Foxcool has quit [Ping timeout: 255 seconds]
palkeo has joined #ipfs
palkeo has joined #ipfs
palkeo has quit [Changing host]
bastianilso has joined #ipfs
<richardlitt>
Well
<richardlitt>
I have now made a way of automatically creating the ipfs monday call sprint issues
<richardlitt>
That took me too long.
Aranjedeath has joined #ipfs
<voker57_>
is there a way to do permanent ipns publishing? I know I can set the TTL to 20 years, but it feels wrong that a network oriented at persistent data storage requires republishing every N seconds.
john has joined #ipfs
<whyrusleeping>
voker57_: you cant expect everyone in the network to hold onto that record for you forever, the dht would fill up and explode
john is now known as Guest89400
<whyrusleeping>
you can craft a record that you can give to another peer to republish on your behalf
<whyrusleeping>
but we don't have the api for that nicely exposed yet
john4 has quit [Ping timeout: 252 seconds]
<voker57_>
how is the DHT not exploding from all the data chunks in the network? I just want some way to persistently reference an updateable directory
voker57_ is now known as Voker57
Voker57 has quit [Changing host]
Voker57 has joined #ipfs
<whyrusleeping>
the data chunks arent stored in the dht
<Voker57>
but reference to them is stored? or ipfs just tries all the known nodes at random?
<whyrusleeping>
the reference is stored yes, and must be rebroadcast periodically as well
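The records being described here can be inspected directly. A sketch, assuming a running daemon; the test object is hypothetical and only added so your own node shows up as a provider:

```shell
# Add a small object, then ask the DHT which peers have announced
# ("provided") it. These provider records are exactly what expires
# and must be rebroadcast periodically.
HASH=$(echo "hello dht" | ipfs add -q)
ipfs dht findprovs "$HASH"    # prints peer IDs that announced this block
```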
<whyrusleeping>
this isnt ipfs, its just how dhts work
<whyrusleeping>
we're looking at alternatives to using a dht for this since they apparently don't scale very well
Mizzu has joined #ipfs
shamal has joined #ipfs
<Voker57>
so on protocol level it's possible to 'pin' some ipns entry and announce it?
john1 has joined #ipfs
Guest89400 has quit [Ping timeout: 255 seconds]
<whyrusleeping>
you can take someone elses record and rebroadcast it for them up until its TTL
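For the original question, the closest knob that is exposed today is the record validity window on `ipfs name publish` (check `ipfs name publish --help` for the exact flags in your version; the 20-year figure and the hash below are illustrative, not a recommendation):

```shell
# Publish an IPNS record that stays valid for ~20 years once a peer
# holds it. The record still has to be rebroadcast into the DHT
# periodically, as discussed above; the lifetime only controls how
# long a held copy remains valid.
ipfs name publish --lifetime 175200h /ipfs/QmYourDirectoryHashHere
```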
<koalalorenzo>
If I have the hash of a file, is it possible to get the original file name, somehow?
chungy has quit [Ping timeout: 240 seconds]
<whyrusleeping>
not unless you also added the directory containing the file
<koalalorenzo>
I guess it is not possible, but ideally I can store an object with the hash and the original file name, right?
<whyrusleeping>
'ipfs add -w' wraps the file youre adding in a directory in order to preserve the filename
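A sketch of the `-w` behaviour (the file name is hypothetical, and the hashes are shown as placeholders since they depend on content and chunking):

```shell
echo "some content" > notes.txt
ipfs add notes.txt      # added <hash-of-file> notes.txt
ipfs add -w notes.txt   # also prints: added <hash-of-dir> (the wrapping
                        # directory, whose single entry is "notes.txt")
```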
warner has joined #ipfs
<koalalorenzo>
What about when I have to get the file?
bielewelt has joined #ipfs
bielewelt has quit [Client Quit]
<Kubuxu>
you reference it by `THEHASH/filename`
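Concretely, with the wrapping-directory hash from `ipfs add -w` (shown here as the placeholder `<wrapdir>`, and a hypothetical file name), retrieval keeps the name:

```shell
# <wrapdir> stands for the hash of the wrapping directory.
ipfs ls <wrapdir>                 # lists the original filename
ipfs cat <wrapdir>/notes.txt      # addresses the file by its name
ipfs get <wrapdir>/notes.txt      # saves it locally as notes.txt
```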
chungy has joined #ipfs
warner has quit [Ping timeout: 256 seconds]
mguentner2 is now known as mguentner
<koalalorenzo>
Thanks! Got it :)
Encrypt has joined #ipfs
s_kunk has quit [Ping timeout: 255 seconds]
espadrine_ has joined #ipfs
espadrine has quit [Ping timeout: 252 seconds]
__uguu__ has quit [Ping timeout: 240 seconds]
maxlath has quit [Ping timeout: 255 seconds]
Aranjedeath has quit [Quit: Three sheets to the wind]
<cpacia>
Basically no issues at all. Just use the transport at github.com/OpenBazaar/go-onion-transport, set the supportedTransportStrings and supportedTransportProtocols in addrutil and then add the transport to the swarm and it works like a charm.
draynium has quit [Ping timeout: 258 seconds]
warner` has joined #ipfs
koalalorenzo has quit [Quit: This computer has gone to sleep]
warner has quit [Read error: Connection reset by peer]
pfrazee has quit [Remote host closed the connection]
wallacoloo____ has quit [Quit: wallacoloo____]
galois_dmz has quit []
warner` is now known as warner
galois_dmz has joined #ipfs
wallacoloo____ has joined #ipfs
wallacoloo____ has quit [Client Quit]
john2 has joined #ipfs
MDude has quit [Quit: Going offline, see ya! (www.adiirc.com)]
draynium has quit [Ping timeout: 240 seconds]
john1 has quit [Ping timeout: 260 seconds]
draynium has joined #ipfs
john3 has joined #ipfs
john2 has quit [Ping timeout: 260 seconds]
tilgovi has joined #ipfs
ianopolous has quit [Read error: Connection reset by peer]
ianopolous_ has joined #ipfs
arpu has quit [Quit: Ex-Chat]
tmg has joined #ipfs
wallacoloo____ has joined #ipfs
[0__0] has quit [Ping timeout: 258 seconds]
[0__0] has joined #ipfs
ianopolous_ has quit [Ping timeout: 240 seconds]
bastianilso has quit [Quit: bastianilso]
<jbenet>
cpacia: that's fantastic
danohu has joined #ipfs
<achin>
sadly they left already
<mguentner>
ipfs object patch add-link also consumes _a lot_ of disk space when not garbage collecting in between: QmfTrExMUnHcJFXufg6PgqmcXHPqo9zjqbC7k9S8RCFgrF
<mguentner>
jbenet whyrusleeping ^
john4 has joined #ipfs
john3 has quit [Ping timeout: 258 seconds]
ianopolous has joined #ipfs
<mguentner>
this is the script that produces the data: QmNfPDt3Kx4v8ab3YnQpXV3PuhkPPZL3i9m51ibxiGdsWC
<whyrusleeping>
mguentner: what does your workload look like?
realisation has joined #ipfs
ianopolous has quit [Read error: Connection reset by peer]
<whyrusleeping>
i'm not certain what exactly that script is doing...
<mguentner>
this adds links to the empty dir as produced by ipfs add
<whyrusleeping>
Are you manually constructing the directory?
<mguentner>
yes
<mguentner>
basically the same workload as in #3621 just making a directory out of it
koalalorenzo has joined #ipfs
Encrypt has quit [Quit: Quit]
danohu has left #ipfs ["Leaving"]
<Kubuxu>
mguentner: you can use ipfs files --flush=false and then ipfs files flush
<Kubuxu>
this way it won't create N objects for N operations
mmc1800[m] has left #ipfs ["User left"]
<Kubuxu>
but then you have to use `ipfs files cp /ipfs/Qmm mydir`
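Kubuxu's suggestion as a sketch (the directory name and child hashes are hypothetical; `--flush=false` defers writing the intermediate directory objects until the explicit flush, so N copy operations don't leave N stale directory versions behind):

```shell
ipfs files mkdir /mydir
for h in QmChildHash1 QmChildHash2 QmChildHash3; do   # hypothetical hashes
  ipfs files --flush=false cp "/ipfs/$h" "/mydir/$h"
done
ipfs files flush /mydir    # write the final directory object once
ipfs files stat /mydir
```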
koalalorenzo has quit [Quit: Sto andando via]
<mguentner>
Kubuxu: I just wonder where this huge number is coming from. To put n files into one directory, n+1 operations should be necessary, so the result would be one directory and n "garbage" directories. However, a lot more objects are created: patching a directory 12000 times using 'patch add-link' produces ~5 GB of data
<Kubuxu>
can you do `ipfs block get FINALHASH | wc -c`
ajp has quit [Quit: No Ping reply in 180 seconds.]
ajp has joined #ipfs
john has joined #ipfs
<Kubuxu>
and give me the result
john is now known as Guest92124
bronger has quit [Ping timeout: 272 seconds]
<mguentner>
Kubuxu: sure, just one moment
john4 has quit [Ping timeout: 255 seconds]
taaem has joined #ipfs
bronger has joined #ipfs
<A124>
Umm.. can someone hint me again how one sets the gateway timeout for an HTTP request?