<Bat`O>
let's say I have a file stored in IPFS, what would be the way to get its size from its hash? (not the size + overhead)
<Mateon1>
Bat`O: For unixfs objects you can do `ipfs object stat <hash>`
<Mateon1>
The CumulativeSize works for files and directories
<Bat`O>
Mateon1: that's not what I see
<Mateon1>
Bat`O: What do you see?
<Bat`O>
CumulativeSize: 10488250 instead of 10485760
<Bat`O>
that's the size of the dag, not the size of the file
<Bat`O>
another solution to my problem would be to be able to create a unixfs directory and add arbitrary unixfs files as children
<Bat`O>
but my test with `ipfs object patch` seems to show that even if you start with a unixfs-dir template, when you add links with `ipfs object patch add-link` you end up with something that is not a proper unixfs directory
<Bat`O>
or maybe i'm just confused about this one
<Bat`O>
ah no, it's ipfs ls that reports the cumulative size instead of the file size
<Bat`O>
well ...
<Bat`O>
i guess I can use the dag size everywhere ..
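A minimal sketch of the two points above, assuming the go-ipfs CLI; QmFileHash is a hypothetical placeholder for a real unixfs file hash. `ipfs object patch add-link` does yield a valid unixfs directory, and `ipfs files stat` reports the plain file size where `ipfs object stat` reports the cumulative DAG size:

    # start from the empty unixfs directory template
    EMPTY_DIR=$(ipfs object new unixfs-dir)

    # add an existing unixfs file as a child; prints the new directory hash
    DIR=$(ipfs object patch "$EMPTY_DIR" add-link file.bin QmFileHash)
    ipfs ls "$DIR"

    # size of the whole DAG: file data plus protobuf overhead
    ipfs object stat QmFileHash | grep CumulativeSize

    # the unixfs file size itself, without the overhead
    ipfs files stat /ipfs/QmFileHash | grep '^Size'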
<francis[m]>
how can i best read up on how files that i publish to ipfs are then disseminated to other nodes? is that automatic? do i or others have to do something more (do others subscribe?)
<SchrodingersScat>
francis[m]: as far as I know it's on a request basis. So my node should only pick up your things if I'm browsing something you published.
<francis[m]>
ok. so, that would mean nobody is sure that there are other distributed copies out there, right SchrodingersScat?
<SchrodingersScat>
francis[m]: there is at least one way to make a 'cluster' of ipfs nodes if you want to mirror things and make your files more robust: the ipfs-cluster project, or you can pin via ssh, etc.
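A minimal sketch of the "pin via ssh" approach, assuming a hypothetical remote host mirror1 that runs an ipfs daemon, with QmSomeHash standing in for the hash to mirror:

    # ask the remote node to pin the hash, so it fetches and keeps a full copy
    ssh mirror1 "ipfs pin add QmSomeHash"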
<SchrodingersScat>
francis[m]: you can check to see if other machines have the hash, so you can be pretty certain if it's out there
<SchrodingersScat>
francis[m]: for example, ipfs dht findprovs QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T
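As a rough check (sketch, reusing the example hash above), each output line of findprovs is the peer ID of a node advertising the block, so counting the lines counts the providers:

    ipfs dht findprovs QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T | wc -l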
<francis[m]>
in this model, why are folks then worried about what others might store on their nodes (black/whitelisting)?
<SchrodingersScat>
francis[m]: if you act as a gateway then you may have a DMCA responsibility to not serve up certain things. I believe the public ipfs.io gateways blacklist certain hashes that have been reported to them.
<francis[m]>
so, if you act as a gateway, others might publish objects on your node?
<SchrodingersScat>
not publish, but if someone requests the file through a gateway then it will cache the file based on the garbage collection rules
<SchrodingersScat>
and serve it
<francis[m]>
i see, so if i'm set up as a gateway, people may request objects that are not on my node, but i'm requesting and caching those on their behalf, right?
<francis[m]>
would someone actually ask "my" gateway, or would it ask the "network", and then some overlying logic could pass the request on to my node (as it could to others)?
<r0kk3rz>
francis[m]: only if someone asks your gateway, nodes don't do routing at the moment
<francis[m]>
thanks r0kk3rz and SchrodingersScat. i'll keep on experimenting.. :-)
<SchrodingersScat>
francis[m]: ideally it would be nodes requesting it. If you publish something on your ipfs node and I request it then mine will negotiate where the hash is and download it from you. Then for the time being someone could simultaneously download it from both of us.
<SchrodingersScat>
francis[m]: gateways just seem like the only reason to white/blacklist. Or say if your ISP ran an ipfs node and didn't want to be liable for people downloading movies.
<francis[m]>
can the web ui be accessed from a remote host (but on the same network)?
<r0kk3rz>
francis[m]: not unless you configure it that way, but the general recommendation is: don't do that
<francis[m]>
r0kk3rz: i want to manage it from within my network, but from a different machine than the one where i have ipfs installed
<francis[m]>
the ipfs installation is behind a firewall, so the outside world has no access to the webui
<francis[m]>
r0kk3rz: port forwarding via ssh is the preferred solution?
<r0kk3rz>
to what exactly?
<jokke>
any ideas why i can't find any peers in the dht?
<jokke>
are there any built-in mechanisms to break free from behind a firewall? (without upnp)
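Two quick checks when a node seems unable to find peers (sketch, assuming the go-ipfs CLI):

    # are we connected to anyone at all?
    ipfs swarm peers

    # is the default bootstrap list still intact?
    ipfs bootstrap list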
<francis[m]>
to access the webui on another machine on my network (where ipfs daemon is running)
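A sketch of the ssh port-forwarding approach, assuming the daemon host is reachable under the hypothetical name ipfs-host and the API keeps its default listen address of 127.0.0.1:5001:

    # tunnel the local API port to the machine running the daemon
    ssh -N -L 5001:127.0.0.1:5001 user@ipfs-host

    # then open http://127.0.0.1:5001/webui in a local browser

This keeps the API bound to localhost on the daemon's machine, which avoids the exposure r0kk3rz warned about.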
<francis[m]>
ipfs resolve -r /ipfs/QmPhnvn747LqwPYMJmQVorMaGbMSgA7mRRoyyZYz3DoZRQ/: Failed to get block for QmPhnvn747LqwPYMJmQVorMaGbMSgA7mRRoyyZYz3DoZRQ: context canceled
<francis[m]>
what would give rise to that error....
<francis[m]>
Error: failure writing to dagstore: A device attached to the system is not functioning. :-(
<apiarian_mobile>
Anybody at GopherCon this year?
<jokke>
anybody at 34C3?
<jokke>
:)
<sarenord>
is there a way i can list the files someone has in their node?
<absullivan[m]>
whyrusleeping: I just tried ipfs last night, and it is absolutely nuts!!! I wasn't able to run it a second time, so I'm not sure what went wrong (Ubuntu 16.04, known bugs?). Are there examples of Jekyll or Hugo websites that I can clone and host with ipfs? Could I use Beaker Browser to clone a site? Thanks!!!
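A minimal sketch of hosting a static-site build, assuming a Hugo project whose generated files land in public/; the last step is optional, uses the node's default IPNS key, and <root-hash> is whatever hash `ipfs add` prints for the public/ directory:

    hugo                                 # or: jekyll build -d public
    ipfs add -r public                   # the last line printed is the site's root hash
    ipfs name publish /ipfs/<root-hash>  # optional: publish it under the node's IPNS name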
<voker57>
sarenord: I think you'll have to listen to their announcements
<jokke>
voker57: even then it's not possible without fetching the file(s)
<jokke>
(i think, correct me if i'm wrong)
<sarenord>
darn, i got bored and thought i'd look through whatever miscellaneous files people were hosting
<jokke>
sarenord: well, you can.
<jokke>
you'll just have to fetch them all
<jokke>
and you'll only see those that get announced by your peers
<voker57>
jokke: you can fetch only the DAG nodes, which are much smaller and give you the file names
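For instance (sketch, with QmDirHash as a placeholder for a directory hash), listing a directory only fetches its DAG node, not the file contents:

    # names, sizes and child hashes, read from the directory node alone
    ipfs ls QmDirHash

    # or the raw merkledag links
    ipfs object links QmDirHash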
<sarenord>
anyone using The Index?
<voker57>
what's that?
<sarenord>
voker57: The Index is a js application someone made to work with ipfs to cleanly index the files you have hashes for, and supposedly it works with everyone else who is running it to build a collective ipfs index
<voker57>
link?
<Mateon1>
sarenord: I have it linked, but you have to keep it online in a browser to actually share hashes
<Mateon1>
voker57: Hold on
<sarenord>
Mateon1: does it automatically collect hashes while it's open?