dimitarvp has quit [Read error: Connection reset by peer]
upperdeck has quit [Ping timeout: 246 seconds]
reit has quit [Quit: Leaving]
reit has joined #ipfs
K0HAX has joined #ipfs
edubai____ has quit [Quit: Connection closed for inactivity]
upperdeck has joined #ipfs
bwerthmann has joined #ipfs
<kpcyrd>
or rust
cellvia has left #ipfs ["WeeChat 1.4"]
vivus has joined #ipfs
}ls{ has quit [Quit: real life interrupt]
anshukla has joined #ipfs
jkilpatr has quit [Remote host closed the connection]
anshukla has quit [Ping timeout: 246 seconds]
chris613 has quit [Quit: Leaving.]
slaejae has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
epg has quit [Ping timeout: 260 seconds]
<Kythyria[m]>
> When I started using ipfs, I had the very same idea of keeping a gateway on my home router, but I soon realized that the power of ipfs relies on running it on every station capable of doing it.
<Kythyria[m]>
Eh, if they're all hidden behind the same firewall it's not as valuable.
taravancil has quit [Ping timeout: 276 seconds]
taravancil has joined #ipfs
aceluck has quit []
maroussil has joined #ipfs
maroussil has quit [Client Quit]
maroussil has joined #ipfs
maroussil has quit [Client Quit]
bwerthmann has quit [Ping timeout: 240 seconds]
WinterFox[m] has quit [Ping timeout: 276 seconds]
dion_97[m] has quit [Ping timeout: 276 seconds]
Guest191936[m] has quit [Ping timeout: 276 seconds]
Lorm[m] has quit [Ping timeout: 276 seconds]
bumi[m] has quit [Ping timeout: 276 seconds]
Guest215078[m] has quit [Ping timeout: 276 seconds]
zebburkeconte[m] has quit [Ping timeout: 276 seconds]
PoeBoy[m] has quit [Ping timeout: 276 seconds]
Guest192511[m] has quit [Ping timeout: 276 seconds]
msmart[m] has quit [Ping timeout: 276 seconds]
M1trace[m] has quit [Ping timeout: 276 seconds]
litebit[m] has quit [Ping timeout: 276 seconds]
Leer10[m] has quit [Ping timeout: 276 seconds]
Markus72[m] has quit [Ping timeout: 276 seconds]
kewde[m] has quit [Ping timeout: 276 seconds]
albuic has quit [Ping timeout: 276 seconds]
cornu[m] has quit [Ping timeout: 276 seconds]
taravancil has quit [Ping timeout: 276 seconds]
WinterFox[m] has joined #ipfs
Guest191936[m] has joined #ipfs
Guest215078[m] has joined #ipfs
msmart[m] has joined #ipfs
dion_97[m] has joined #ipfs
Guest192511[m] has joined #ipfs
Leer10[m] has joined #ipfs
zebburkeconte[m] has joined #ipfs
Lorm[m] has joined #ipfs
cornu[m] has joined #ipfs
taravancil has joined #ipfs
GuilleV[m] has quit [Ping timeout: 276 seconds]
GuilleV[m] has joined #ipfs
bwerthmann has joined #ipfs
Guest211395[m] has quit [Ping timeout: 276 seconds]
cyberpepe[m] has quit [Ping timeout: 276 seconds]
bananabread[m] has quit [Ping timeout: 276 seconds]
mildred has quit [Read error: Connection reset by peer]
mildred3 has joined #ipfs
mildred1 has quit [Read error: Connection reset by peer]
mildred4 has joined #ipfs
mildred3 has quit [Ping timeout: 255 seconds]
<xelra>
harlock[m]: If your host is Windows, keep in mind that mounting is not yet available there. You might be able to circumvent that with some kind of VM/Docker.
<xelra>
I've been wanting to use the Docker image myself and try out how well things work, but unfortunately my CPU doesn't run Docker. :(
<xelra>
... or Hyper-V.
Boomerang has joined #ipfs
anewuser has quit [Read error: Connection reset by peer]
anewuser has joined #ipfs
Caterpillar has joined #ipfs
ethanstokes[m] has joined #ipfs
erictapen has joined #ipfs
}ls{ has joined #ipfs
Bhootrk_ has quit [Ping timeout: 240 seconds]
anshukla has joined #ipfs
mildred has joined #ipfs
mildred4 has quit [Ping timeout: 248 seconds]
anshukla has quit [Ping timeout: 276 seconds]
robattila256 has quit [Ping timeout: 240 seconds]
joocain2_ has joined #ipfs
joocain2 has quit [Ping timeout: 248 seconds]
anewuser has quit [Quit: anewuser]
jkilpatr has joined #ipfs
jamiew has joined #ipfs
jamiew has quit [Client Quit]
jaboja has joined #ipfs
jamiew has joined #ipfs
jaboja has quit [Ping timeout: 268 seconds]
jaboja has joined #ipfs
dignifiedquire has quit [Quit: Connection closed for inactivity]
rafajafar has quit [Quit: Leaving]
jamiew has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
mildred1 has joined #ipfs
Bhootrk_ has joined #ipfs
Bhootrk_ has quit [Max SendQ exceeded]
mildred has quit [Ping timeout: 260 seconds]
anshukla has joined #ipfs
anshukla has quit [Ping timeout: 255 seconds]
espadrine has quit [Ping timeout: 276 seconds]
jaboja has quit [Ping timeout: 240 seconds]
jaboja has joined #ipfs
M3v[m] has joined #ipfs
jaboja has quit [Read error: No route to host]
nekomune has quit [Ping timeout: 246 seconds]
jaboja has joined #ipfs
Rug has joined #ipfs
espadrine has joined #ipfs
<Rug>
Is there any way to view the files (filenames) that you have uploaded if you don't know the hash? I.e. yesterday I uploaded 2 files. Without using the web browser, how do I "find" them?
Guest19861 has quit [Ping timeout: 268 seconds]
<SchrodingersScat>
Rug: if you uploaded them then you likely pinned them as well, so you could check your pins
<Rug>
ipfs add filename.123 pins it too? Wow I didn't know that. thanks
nekomune has joined #ipfs
Lymkwi has joined #ipfs
Lymkwi is now known as Guest69980
<SchrodingersScat>
Rug: ipfs pin ls -t recursive -q is what I use
<Rug>
excellent, thanks!
<Rug>
Is there any way to see the 'real' filename?
<SchrodingersScat>
Rug: if you wrapped it then it should show when you ls it
<SchrodingersScat>
Rug: otherwise I think ipfs ls displays Error: merkledag node was not a directory or shard
<SchrodingersScat>
Rug: I could be wrong though, hang around a bit.
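A minimal sketch of the "wrap" behavior described above, assuming the ipfs CLI is installed and a repo has been initialized with `ipfs init`; the file name `example.txt` is purely illustrative:

```shell
#!/bin/sh
# Sketch of the wrap-with-directory workflow discussed above.
# Assumes the ipfs CLI is installed and a repo exists (ipfs init);
# example.txt is purely illustrative.

add_wrapped() {
    # -w wraps the file in a directory object so the original file
    # name is preserved; -Q prints only the final (wrapper) hash.
    ipfs add -w -Q "$1"
}

if command -v ipfs >/dev/null 2>&1; then
    echo "hello" > example.txt
    dir_hash=$(add_wrapped example.txt)
    # Listing the wrapper directory shows the original file name.
    ipfs ls "$dir_hash"
else
    echo "ipfs CLI not available; skipping live demo"
fi
```

Without `-w`, `ipfs ls` on a plain file hash fails with the "merkledag node was not a directory or shard" error mentioned above, because there is no directory object carrying the name.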
<Rug>
I am really trying to wrap my head around this. It just seems like I am missing something.
koalalorenzo has joined #ipfs
<Rug>
What do you get when you run this command: % ipfs pin ls --type=all |wc -l
<Rug>
3634
<Rug>
Does that mean that my node can see 3634 other 'gateways', or gateways+files, or files?
<koalalorenzo>
that means that your node is "hosting" 3634 objects. Those could be files, directories... not nodes
<Rug>
ok, so I run ipfs get <hash>. My computer responds with "Saving file(s) to <hash>". Where are they?
<Rug>
Or even better, is there a wiki/how-to that explains all this? I've been all over the ipfs.io guide and it just isn't helping me.
<Rug>
=(
<Rug>
I'd hate to be too annoying.
<r0kk3rz>
Rug: its got its own store
echoSMILE has quit [Ping timeout: 248 seconds]
<koalalorenzo>
@Rug it is saved into your store, inside the IPFS store. Usually that is `~/.ipfs` :)
<koalalorenzo>
but you should be able to configure that
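To make the `ipfs get` answer above concrete, a sketch; `QmSomeHash` and the output path are placeholders, and the repo location shown is the default, which can be overridden via `IPFS_PATH`:

```shell
#!/bin/sh
# Where "ipfs get" puts files, per the answers above: by default it
# saves into the current working directory under the hash name;
# -o overrides the target. The raw blocks themselves live in the repo.
# QmSomeHash below is a placeholder, not a real hash.

fetch_to() {
    # $1 = hash, $2 = output path
    ipfs get "$1" -o "$2"
}

# Usage (illustrative): fetch_to QmSomeHash ./restored-file
echo "repo location: ${IPFS_PATH:-$HOME/.ipfs}"
```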
<koalalorenzo>
@Rug (you are not annoying! ;-) ) You can also use `ipfs swarm peers` to get the list of peers you are connected to
<Rug>
koalalorenzo: % ipfs swarm peers |wc -l
<Rug>
324
<koalalorenzo>
Exactly! :) You are connected to 324 peers! 😃
<Rug>
I know I am connected, but I need to be able to interact with this via the CLI
<koalalorenzo>
What are you trying to achieve, if I may ask?
<Rug>
How do i find out the 'real filenames' that some of these hashes represent?
<Rug>
I need to be able to upload a file. Then at a later date retrieve it without knowing what that hash was/is?
<koalalorenzo>
The filenames are probably something coming up with IPLD (plz somebody correct me if I am wrong). But you can add the file with a "wrapper object" that will work as a directory, so you can actually have the file name as well.
<koalalorenzo>
but you always need the hash to retrieve a file :-)
<Rug>
ok understood
<r0kk3rz>
Rug: id store an index of hashes somewhere
<r0kk3rz>
there's no guarantee that two files with the same name hash the same
<Rug>
r0kk3rz: yeah, I'll just script it to insert into a DB
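The name-to-hash index idea can be sketched as below; `index.tsv` and the helper names are illustrative, not part of ipfs itself, and a real deployment would use a proper database as discussed:

```shell
#!/bin/sh
# Sketch of the name->hash index from the discussion: record the hash
# returned by each add next to the original file name, so files can be
# found later without remembering hashes. index.tsv and the helper
# names are illustrative.

add_and_index() {
    # $1 = file to add, $2 = index file (default index.tsv)
    file=$1
    index=${2:-index.tsv}
    hash=$(ipfs add -Q "$file") || return 1
    printf '%s\t%s\n' "$hash" "$file" >> "$index"
    echo "$hash"
}

lookup() {
    # $1 = file name, $2 = index file; prints the recorded hash.
    awk -F '\t' -v f="$1" '$2 == f { print $1 }' "${2:-index.tsv}"
}
```

Note the caveat raised above: the index is the only mapping, so it should itself be backed up (or pinned) somewhere recoverable.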
jaboja has quit [Read error: Connection reset by peer]
<r0kk3rz>
Rug: whats the purpose of using IPFS?
<Rug>
I am looking at large-scale site backups. So to submit the file, and retrieve it later. I was _hoping_ that in the event of a DB crash, I could just do a "ipfs ls <hash>" and it would show me the filename
<r0kk3rz>
Rug: but without a separate system downloading the hash, it won't go anywhere
jaboja has joined #ipfs
<Rug>
r0kk3rz: as it stands right now, the plan was to have all of our data-centers participate. Think of it as a disconnected file-server cluster
<r0kk3rz>
sure, you'd be interested in ipfs-cluster
<Rug>
That way it also acts as a CDN
<r0kk3rz>
which is more or less what you're talking about
<Rug>
ok, one last question (for now). Is the future plan/hope to make ipfs into a full internet protocol? Scenario: a Youtube-like site. "Becky" wants to watch one of our videos. So she clicks play. That URL points to ipfs://<hash> for the video, so she is now downloading/streaming from our swarm.
<Rug>
It's my understanding that this is how IPFS is designed to work.
appa_ has quit [Ping timeout: 240 seconds]
<r0kk3rz>
basically yes
<r0kk3rz>
js-ipfs is the major step towards that goal
<r0kk3rz>
it allows an ipfs node to be bootstrapped in browsers, today
<Rug>
r0kk3rz: Thanks for all of your help.
appa_ has joined #ipfs
Nyx____ has joined #ipfs
shizy has joined #ipfs
sirdancealot has quit [Read error: Connection reset by peer]
sirdancealot has joined #ipfs
jamiew has joined #ipfs
anshukla has joined #ipfs
gmoro has quit [Remote host closed the connection]
<Rug>
Does anybody know how I can resolve these errors: too many open files client.go:247
gmoro has joined #ipfs
anshukla has quit [Ping timeout: 258 seconds]
<SchrodingersScat>
Rug: what version are you on? I thought they took care of that? or there's probably a setting
<Rug>
ipfs version 0.4.10
<SchrodingersScat>
k :(
rodolf0 has joined #ipfs
<Rug>
go version go1.8.3 linux/amd64
<SchrodingersScat>
whyrusleeping: would know what to do :(
<Rug>
ok thanks.
bingus has quit [Ping timeout: 255 seconds]
<Rug>
When filecoin debuts, do you know if it will use the existing IPFS setup, or will I need to start from scratch rebuilding? =)
sevcsik has quit [Quit: WeeChat 1.8]
bingus has joined #ipfs
jaboja has quit [Ping timeout: 255 seconds]
Aranjedeath has joined #ipfs
mildred2 has quit [Read error: Connection reset by peer]
mildred2 has joined #ipfs
bwerthmann has joined #ipfs
<r0kk3rz>
Rug: I'm not sure filecoin is any further along than the whitepaper stage
<Rug>
ok.
<Rug>
We are looking at either continuing to host ourselves (600TB-800TB) or "spreading it around"
<Rug>
So I am tasked with 'finding a good solution' aka "see if this will work" =)
bwerthmann has quit [Ping timeout: 240 seconds]
<Rug>
I really want to thank the group for the suggestion to use 'wrap'. I think that will solve 80% of my problems.
<lemmi>
Rug: too many open files: you might be running up against the file descriptor limit. Check ulimit -a (or -n)
<lemmi>
there are several ways to raise the limit, but usually depends on your taste and distribution
cwahlers_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
X-Scale has joined #ipfs
<Kubuxu>
Rug: when are you hitting the too many files?
<Kubuxu>
on add?
<Kubuxu>
can you check in daemon logs if the ulimit is raised?
<Rug>
ok, one sec.
<Kubuxu>
and if it is, just raise it a bit more. We are working on connection closing right now, as the network grew fast and we keep too many connections == too many used fds.
<Kubuxu>
Rug: filecoin will base off IPFS but the storage of miners will be separate.
<Rug>
ummm, why doesn't: log ls produce output in alphabetical order?!?!
tilgovi has joined #ipfs
<Kubuxu>
Rug: just start a daemon, it should be 2 and 3 line of the output
<Rug>
ahhh
<Rug>
Kubuxu: Adjusting current ulimit to 2048...
<Rug>
Successfully raised file descriptor limit to 2048.
tilgovi has quit [Ping timeout: 276 seconds]
<Rug>
Kubuxu: Is there a special option for ipfs ulimit, or are we talking about the global one?
appa__ has quit [Ping timeout: 255 seconds]
<Kubuxu>
IPFS_FD_MAX=4096 ipfs daemon
Guest33428 has quit [Quit: ZNC 1.6.3+deb1 - http://znc.in]
<Kubuxu>
it should stop pegging so many FDs after it connects to the network
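The descriptor-limit advice from lemmi and Kubuxu, combined into one sketch; 4096 is just the value suggested in the discussion and should be tuned for the workload:

```shell
#!/bin/sh
# Sketch of the fd-limit advice above: inspect the current descriptor
# limit, then start the daemon with a larger IPFS_FD_MAX.
# 4096 is the value suggested in the discussion, not a recommendation.

echo "current fd limit: $(ulimit -n)"

start_daemon() {
    # go-ipfs reads IPFS_FD_MAX at startup and tries to raise its own
    # descriptor limit to that value (it logs the result near the top
    # of the daemon output, as seen in the chat above).
    IPFS_FD_MAX=4096 ipfs daemon
}
```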
jamiew_ has joined #ipfs
jamiew_ is now known as Guest33335
Guest33335 has quit [Client Quit]
<Rug>
Kubuxu: that appears to have resolved it
<Kubuxu>
yeah, we are working on more normal solutions for it
chungy has joined #ipfs
tilgovi has joined #ipfs
Guest69980 has quit [Quit: No Ping reply in 180 seconds.]
atz__ has joined #ipfs
Lymkwi has joined #ipfs
Lymkwi is now known as Guest91777
jamiew_ has joined #ipfs
jamiew_ is now known as Guest87766
anshukla has joined #ipfs
shizy has quit [Ping timeout: 240 seconds]
anshukla has quit [Ping timeout: 255 seconds]
jhand has joined #ipfs
tilgovi has quit [Ping timeout: 258 seconds]
atz__ has quit []
<ehd>
Hey, are writeable HTTP gateways available?
<ehd>
*public, that is
<SchrodingersScat>
ehd: not afaik
Guest87766 has quit [Quit: ZNC 1.6.3+deb1 - http://znc.in]
jamiew2 has joined #ipfs
<ehd>
SchrodingersScat: Thanks! js-ipfs should work for my use case, too
rozie has quit [Quit: Lost terminal]
ylp has quit [Quit: Leaving.]
obensource has quit [Disconnected by services]
obensour1 has joined #ipfs
shizy has joined #ipfs
rozie has joined #ipfs
obensour1 has quit [Disconnected by services]
obensour1 has joined #ipfs
kvakes[m] has joined #ipfs
jafow has joined #ipfs
obensour1 has quit [Disconnected by services]
obensour1 has joined #ipfs
droman has joined #ipfs
obensource has joined #ipfs
Boomerang has quit [Quit: Lost terminal]
<Rug>
Does running ipfs init completely wipe out all local data?
rodolf0 has quit [Ping timeout: 255 seconds]
<lemmi>
ipfs init won't do anything to a existing repo
<Rug>
lemmi: thanks
cwahlers has joined #ipfs
obensource has quit [Ping timeout: 255 seconds]
obensource has joined #ipfs
bwerthmann has joined #ipfs
mildred1 has joined #ipfs
<silur[m]>
is the S/Kademlia implementation in the ipfs or libp2p repo?
<silur[m]>
can't find it in either
bwerthmann has quit [Ping timeout: 255 seconds]
anshukla has joined #ipfs
<Kubuxu>
go-libp2p-kad-dht
mildred1 has quit [Ping timeout: 240 seconds]
anshukla has quit [Ping timeout: 246 seconds]
bwerthmann has joined #ipfs
mildred2 has quit [Read error: Connection reset by peer]
mildred2 has joined #ipfs
shizy has quit [Ping timeout: 260 seconds]
charley has joined #ipfs
rodolf0 has joined #ipfs
charley has quit []
erictapen has quit [Ping timeout: 240 seconds]
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
vindelschtuffen has joined #ipfs
shizy has joined #ipfs
<_mak>
what can be done when 2 nodes running pubsub (one pub and one sub) are not communicating?
<_mak>
is there a way to 'force' them to find each other?
<Rug>
How is the hash for a file generated? Can you 'pre-calculate' what a hash will be? i.e. I want to store cat.jpg on my webpage. Can I just take (my ipfs id HASH) * (cat.jpg |sha256) = cat.jpg<hash>
<Rug>
That's a dumb simplification, of course
warner has quit [Quit: ERC (IRC client for Emacs 25.1.2)]
cwahlers_ has joined #ipfs
Monokles has quit [Remote host closed the connection]
cwahlers has quit [Ping timeout: 255 seconds]
Monokles has joined #ipfs
<r0kk3rz>
there is an ipfs command for it iirc
<r0kk3rz>
its a bit more complicated than a straight sha hash
<Rug>
Yeah, sha256 and then Base58 (with 1220 added to the front)
<Rug>
I'm just trying to figure out if we can pre-compute the ipfs hash, or whether we need the file first.
<r0kk3rz>
you need the file
<r0kk3rz>
and you need to use ipfs, because the file gets blockified and merkledagd
<r0kk3rz>
and then hashed
<Rug>
ok thanks
<r0kk3rz>
yeah -n flag on the add command
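The `-n` flag mentioned above, sketched out; the temporary file and its contents are placeholders, and note that a plain sha256 of the bytes will not match the result because of the chunking/merkledag step:

```shell
#!/bin/sh
# Sketch of "pre-computing" a hash as discussed: ipfs add -n runs the
# normal chunking + merkledag pipeline but only prints the hash,
# without writing blocks to the local repo. You still need the file.

hash_only() {
    # -n / --only-hash: compute the hash without adding
    # -Q: print just the final hash
    ipfs add -n -Q "$1"
}

if command -v ipfs >/dev/null 2>&1; then
    printf 'placeholder bytes' > /tmp/cat.jpg   # file name from the chat
    hash_only /tmp/cat.jpg
else
    echo "ipfs CLI not available; skipping live demo"
fi
```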
grimtech has joined #ipfs
<_mak>
how can I try to debug why a message being published by one peer is not being received by another one?
<_mak>
both have pubsub-experiment enabled of course
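One way to debug the pubsub question above is to take peer discovery out of the equation by connecting the two nodes directly; the multiaddr and topic name below are placeholders, and both daemons are assumed to be running with --enable-pubsub-experiment (go-ipfs 0.4.x):

```shell
#!/bin/sh
# Sketch of forcing two pubsub peers to find each other: connect them
# directly, then publish/subscribe. Multiaddr and topic are placeholders;
# both daemons need --enable-pubsub-experiment (go-ipfs 0.4.x).

connect_peer() {
    # $1 = full multiaddr of the other node, e.g.
    # /ip4/203.0.113.5/tcp/4001/ipfs/QmPeerID
    ipfs swarm connect "$1"
}

# Subscriber:        ipfs pubsub sub mytopic
# Publisher:         ipfs pubsub pub mytopic "hello"
# Check membership:  ipfs pubsub peers mytopic
echo "pubsub helpers loaded"
```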