jbenet changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- Code of Conduct: https://github.com/ipfs/community/blob/master/code-of-conduct.md -- Sprints: https://github.com/ipfs/pm/ -- Community Info: https://github.com/ipfs/community/ -- FAQ: https://github.com/ipfs/faq -- Support: https://github.com/ipfs/support
<shadeheim> true, but they might know the damage is already done and want to make amends. if they have no way to let others know they are sorry, then you could say it is tough shit on them I guess, but something feels off about that situation to me
<sonatagreen> i mean, if they have an ipns site they can put updates to that
<sonatagreen> but ipfs is fundamentally About static files.
<sonatagreen> if you want updateability, you have to use something else, such as ipns
jamie_k__ has quit [Quit: jamie_k__]
<shadeheim> to use a current example. you get people uploading torrents of fake movies in passworded zip files. then to get the password you need to unlock the zip by viewing ads or filling out question forms
<shadeheim> now say I upload the latest superman movie and it's going great, I'm earning $100 and feeding my starving kids
<shadeheim> but then I see all the comments of people pissed the movie is fake, and I feel bad for how I've upset people
<shadeheim> too late, the torrent is already seeded and being shared
<shadeheim> the least I can do is comment on the torrent and say I'm sorry and not to download the movie cos it is fake. at least I have some platform to do that
<sonatagreen> I think discussion forums are in the works
<sonatagreen> and, more generally, whatever you did to tell people about the hash of the file in the first place, you can probably use the same channel to tell them about your change of mind
<sonatagreen> I think of ipfs as a cdn, usually used to support something more dynamic
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<sonatagreen> quick change of topic
<sonatagreen> is it possible to use ipfs object add through the api? (/api/v0/object/add)
<sonatagreen> it seems to want a file or a stdin and I don't see how to provide either of those things
<sonatagreen> errr, ipfs object put
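(A sketch of driving /api/v0/object/put over the HTTP API, for anyone who hits the same wall: the daemon address, the "file" field name and the node.json contents below are assumptions rather than confirmed API details -- object/put expects the node description as a multipart file upload, much like ipfs add sends files.)

    # hypothetical node description; object put's JSON format is {"Data": ..., "Links": [...]}
    echo '{"Data":"hello dag","Links":[]}' > node.json
    # send it as a multipart file to the local daemon's API (default port 5001)
    curl -F "file=@node.json" "http://localhost:5001/api/v0/object/put"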
lazyUL has joined #ipfs
Not_ has quit [Ping timeout: 240 seconds]
reit has quit [Quit: Bye]
cemerick has joined #ipfs
vanila has quit [Quit: Leaving]
reit has joined #ipfs
anticore has joined #ipfs
pfraze has quit [Remote host closed the connection]
<victorbjelkholm> what I'm currently working on: http://imgur.com/a/RidXL
<victorbjelkholm> open network of nodes that share the same content :)
<sonatagreen> How're you going to deal with spam/DoS?
<mappum> victorbjelkholm: how do you prevent abuse? (illegal content, or uploading large amounts of data)?
<sonatagreen> Looks cool, though.
<victorbjelkholm> mappum, there are limits on large amounts of data. Illegal content is a good question though
<victorbjelkholm> hopefully the rate limiting is good enough to take care of dos, depends on how the daemon takes it though...
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/ndjson from 7874a68 to 08ec178: http://git.io/vWgKZ
<ipfsbot> go-ipfs/fix/ndjson 08ec178 Jeromy: add test for ndjson output...
economizer has joined #ipfs
<mappum> indeed it does look cool
<victorbjelkholm> mappum, basically, illegal content depends on country, depending on how ipfs can help, everything will be formed around that. If there is a public list of "banned" files, it can use that before sending pinning requests
<victorbjelkholm> but yeah, tough problem
jo_mo has quit [Quit: jo_mo]
jo_mo has joined #ipfs
suprarenalin has quit [Ping timeout: 268 seconds]
<anticore> how can i start the daemon forcing ipv4?
frabrunelle has joined #ipfs
aeronautic has joined #ipfs
economizer has quit [Ping timeout: 264 seconds]
<anticore> if i delete the ip6 line on the conf will it force ipv4?
<sonatagreen> try it and see
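(For reference, a sketch of the config edit anticore is describing, assuming the default ~/.ipfs/config layout: the daemon only listens on the multiaddrs listed under Addresses, so leaving out the /ip6 swarm entry keeps it on IPv4. Back the file up before editing.)

    "Addresses": {
      "Swarm": [
        "/ip4/0.0.0.0/tcp/4001"
      ],
      "API": "/ip4/127.0.0.1/tcp/5001",
      "Gateway": "/ip4/127.0.0.1/tcp/8080"
    }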
<M-davidar> victorbjelkholm (IRC): sweet, would really like to have something like this for ipfs/archives
legobanana has quit [Ping timeout: 268 seconds]
<victorbjelkholm> M-davidar, take inspiration: github.com/victorbjelkholm/openipfs < or clone the whole thing
<victorbjelkholm> kind of undocumented right now, I'll have to add some more things
<M-davidar> victorbjelkholm (IRC): would it be possible to split large objects over multiple nodes?
<victorbjelkholm> M-davidar, Hm, not sure. I'm just communicating over the API and telling the daemons to pin *this*
<victorbjelkholm> don't think I can have the sort of control via the API
<M-davidar> Yeah, I really need to write a tool to handle that...
<M-davidar> victorbjelkholm (IRC): also see https://github.com/ipfs/notes/issues/58
cemerick has quit [Ping timeout: 244 seconds]
<victorbjelkholm> M-davidar, thanks
<M-davidar> kpcyrd (IRC): do you have a ssd by chance?
frabrunelle has quit [Quit: Textual IRC Client: www.textualapp.com]
<victorbjelkholm> ion, well, I don't see why that matters, because the service itself is not hosting any content. The content is hosted by the nodes connected to the service. Currently, I have three nodes. Amsterdam, Toronto and localhost :)
<victorbjelkholm> but thanks for the pointers
frabrunelle has joined #ipfs
<ion> victorbjelkholm: I meant the content hosters.
<victorbjelkholm> ion, the content hosters can be anywhere, not limited to any physical place
<M-davidar> domsch (IRC): cool stuff! Seen https://github.com/ipfs/archives/issues/20 ?
<M-davidar> victorbjelkholm (IRC): I agree with ion though, would be good to (optionally) check hashes against the DMCA list before pinning
<M-davidar> Otherwise people might be wary of donating a node
<victorbjelkholm> M-davidar, well, DMCA is US only no? So in that case I'll check it for daemons that are in the US (because there is a multiaddr to geolocation library, right?)
<victorbjelkholm> don't want to restrict more than necessary
<M-davidar> victorbjelkholm (IRC): well, after the tpp/ttip...
<sonatagreen> maybe have a --dmca option that's off by default?
<victorbjelkholm> M-davidar, well, I'll do it for Asia-Americas then! :)
<sonatagreen> so people can comply if they want
<victorbjelkholm> sonatagreen, yeah, sounds like something that should be part of the daemon, rather than the openipfs (the service)
<sonatagreen> the ipfs daemon? uh, I wouldn't
<victorbjelkholm> but feels good to have it in the service as well... As long as people don't get affected by laws that don't have anything to do with them
<sonatagreen> I don't want censorship to be built into the wires
<sonatagreen> things that talk about content should operate on the content level, not the transport level
<M-davidar> victorbjelkholm (IRC): I'd just make it opt-in
<victorbjelkholm> sonatagreen, why would you like censorship anywhere?
<victorbjelkholm> M-davidar, yeah, that sounds sensible
Guest73396 has joined #ipfs
<sonatagreen> I don't /like/ it, but I accept that I have insufficient clout to destroy it entirely
<ion> That pull request *is* to the daemon.
<M-davidar> sonatagreen (IRC): I think it would be reasonable to have an opt-in in the daemon too
<M-davidar> You don't have to use it, but a lot of people won't want to use ipfs if they think they might be sued
<M-davidar> Especially businesses
<sonatagreen> I guess
<sonatagreen> but I'd like there to be enough of a minor inconvenience that it's clearly not the default expectation
<kpcyrd> M-davidar: no
<kpcyrd> M-davidar: hdd with btrfs
border0464 has quit [Quit: sinked]
<M-davidar> kpcyrd (IRC): what read speed were you getting?
aeronautic has quit [Killed (Sigyn (Spam is off topic on freenode.))]
<kpcyrd> M-davidar: haven't measured that
<kpcyrd> M-davidar: speed isn't the issue, but the oom killer kicking in after a while, so ipfs add -r has never finished successfully yet
<kpcyrd> M-davidar: even if nothing changed on disk
<M-davidar> kpcyrd (IRC): yeah, but if it's too fast, the golang gc can't catch up (it's a known issue)
Matthieu has joined #ipfs
<kpcyrd> M-davidar: lol. are there options to change the gc behaviour?
<kpcyrd> java style?
<M-davidar> Ping whyrusleeping
<M-davidar> kpcyrd (IRC): I've tried, doesn't seem to be :(
Guest73396 has quit [Ping timeout: 250 seconds]
<M-davidar> Needs to be fixed in ipfs itself
* whyrusleeping grumbles about irc notifications
<M-davidar> whyrusleeping (IRC): golang memory management sucks :p
<ion> Could mmap be used?
<whyrusleeping> meh
<whyrusleeping> yeah
<whyrusleeping> it's mostly my fault
<whyrusleeping> i'll go fix that
<ion> Although not reading from stdin
<whyrusleeping> after a beer
<M-davidar> Can't even ulimit it :(
<ion> or on Windows™ (but it has an equivalent mechanism)
<ion> -ish
<M-davidar> ion (IRC): windows magnetic platter to solid state memory synergy engine enterprise edition 7 tm
<M-rschulman2> whyrusleeping: Thanks for adding me into that thread, I hope we can get a DC meetup to happen.
fingertoe has quit [Ping timeout: 246 seconds]
<M-davidar> And then an AC meetup!
<whyrusleeping> M-rschulman2: yeah! i certainly hope so!
uhhyeahbret has quit [Quit: WeeChat 1.3]
<M-rschulman2> davidar: AC = Atlantic City?
<M-davidar> Alternating current, duh :p
<M-rschulman2> lol, gotcha.
<M-rschulman2> that's funny, I've lived here like 14 years and never really thought of it as direct current.
<kpcyrd> ok, my server just died. I think I have to abort this project for now.
<ion> DC is a neat little calculator.
<M-davidar> kpcyrd (IRC): what are you trying to add?
<ion> which can render the Mandelbrot set in ASCII
<ion> dc -e '[lolssdsl0lqx]sx[1+lddd*lld*-ls+dsdrll2**lo+dsld*rd*+4<kd15>q]sq[q]9ksk[d77/3*2-ss47lxx-P1+d78>0]s00[d23/.5-3*so0l0xr10P1+d24>u]dsux'
<M-davidar> Ross Schulman(@rschulman:matrix.org): you live in the energy grid? Say hi to tron for me ;)
<M-davidar> ion (IRC): worst one liner ever
<sonatagreen> is that a challenge
<M-rschulman2> davidar: I do. Tron is busy fighting for the users right now, though.
<M-davidar> Makes Perl look readable
<M-davidar> sonatagreen (IRC): yes
<sonatagreen> Is it possible to use 'object put' with the API? I can't figure out how to give it the data param it wants.
<M-davidar> sonatagreen (IRC): dunno, whatever the node API does I guess
<anticore> how can i fix this error?: Error: serveHTTPApi: manet.Listen(/ip4/127.0.0.1/tcp/6060) failed: listen tcp4 127.0.0.1:6060: bind: cannot assign requested address
frabrunelle has quit [Quit: Textual IRC Client: www.textualapp.com]
<M-davidar> anticore (IRC): is something else already using that port?
<anticore> nope
captain_morgan has joined #ipfs
acidhax has joined #ipfs
<kpcyrd> M-davidar: mirror of arch repos
<anticore> any attempt to change the port in the config file returns the same error
<M-davidar> kpcyrd (IRC): ooh, cool
<M-davidar> whyrusleeping: arch! ^
<M-davidar> anticore (IRC): not sure, works for me :/
<anticore> :(
<anticore> any ideas on how to troubleshoot it?
<M-davidar> anticore (IRC): so, you don't get the error on the default port?
<anticore> i do
<anticore> i changed to try and see if that fixed it
<M-davidar> anticore (IRC): hrm. Does something like "python -m SimpleHTTPServer" work?
<whyrusleeping> anticore: what system?
<whyrusleeping> also, does 'nc -l 6060' work?
captain_morgan has quit [Ping timeout: 246 seconds]
<anticore> M-davidar: it does
<anticore> whyrusleeping: raspi b+ with linux raspbian
acidhax has quit [Remote host closed the connection]
shadeheim has quit [Ping timeout: 264 seconds]
<anticore> nc -l 6060 hangs
acidhax has joined #ipfs
acidhax_ has joined #ipfs
<whyrusleeping> anticore: okay, so it doesnt fail
<whyrusleeping> thats good at least
<victorbjelkholm> anticore, the ui of nc can be a bit tricky. Try typing something
<victorbjelkholm> if I'm not wrong, nc, even when successful, doesn't show any output
<victorbjelkholm> think of it as a shell
<anticore> victorbjelkholm: no output after typing
<victorbjelkholm> anticore, what you type is the output
<anticore> yeah, that
<anticore> lol
acidhax has quit [Ping timeout: 256 seconds]
<victorbjelkholm> ah, you don't even see what you're typing?
<anticore> i do see
<victorbjelkholm> ala http://i.imgur.com/kjVO8eu.gif
<anticore> oh
<anticore> let me try that
<kpcyrd> M-davidar: added 50GB of swap, let's see if that helps
<kpcyrd> ipfs add is also way slower when ipfs daemon is running
<anticore> victorbjelkholm: nope that doesnt work for me
<anticore> might be my iptables rules?
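(The two-terminal test being walked through above, as a sketch run on the raspi itself; some netcat builds want -p for the listen port.)

    # terminal 1: listen on the port ipfs fails to bind
    nc -l 6060        # or: nc -l -p 6060, depending on the netcat variant
    # terminal 2: connect and type; whatever you type should appear in terminal 1
    nc 127.0.0.1 6060
    # if the connection is refused, look for an iptables rule in the way
    sudo iptables -L -n | grep 6060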
ygrek has joined #ipfs
anticore has quit [Quit: bye]
r04r is now known as zz_r04r
<victorbjelkholm> not sure then. Well, 3 oclock here, time to go to bed. Goodnight everyone
ygrek has quit [Ping timeout: 272 seconds]
<M-davidar> kpcyrd (IRC): cool. Yeah, also a known issue :/
acidhax_ has quit [Remote host closed the connection]
zugz has quit [Ping timeout: 240 seconds]
<whyrusleeping> aw, anticore left
<whyrusleeping> was going to tell him to use 0.0.0.0 instead of 127.0.0.1
<whyrusleeping> his original error clearly said 'cannot bind to requested address'
<whyrusleeping> instead of 'cannot bind to requested port'
<whyrusleeping> .tell anticore use 0.0.0.0 instead of 127.0.0.1 as the ip
<multivac> whyrusleeping: I'll pass that on when anticore is around.
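(The change whyrusleeping is suggesting, as a sketch; 6060 is the port from anticore's error above. Note that binding the API to 0.0.0.0 exposes it to the whole local network, so keep it firewalled.)

    ipfs config Addresses.API /ip4/0.0.0.0/tcp/6060
    ipfs daemon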
elgruntox has joined #ipfs
Guest73396 has joined #ipfs
<elgruntox> Hey, I was looking at this project the other day and am interested in using it for a project. I noticed there's an s3-datastore in "thirdparty"; does this work, and is there an example somewhere of using s3 for storing data?
Not_ has joined #ipfs
border0464 has joined #ipfs
<whyrusleeping> elgruntox: it's not quite supported yet
<whyrusleeping> it's planned in the 0.4.0 release
<elgruntox> ah, is there any place to track development on that?
<elgruntox> like a github issue
tinybike has quit [Ping timeout: 246 seconds]
captain_morgan has joined #ipfs
O47m341 has joined #ipfs
<whyrusleeping> elgruntox: uhm... i don't believe so
<whyrusleeping> you can open one for tracking if you'd like
SuzieQueue has quit [Ping timeout: 250 seconds]
Matthieu has quit [Remote host closed the connection]
zugz has joined #ipfs
Not_ has quit [Ping timeout: 240 seconds]
Guest73396 has quit []
captain_morgan has quit [Ping timeout: 246 seconds]
ygrek has joined #ipfs
Guest73396 has joined #ipfs
intertrochanteri has joined #ipfs
<kpcyrd> hmm.. ipfs daemon has 31GB now, half of it is in swap and ipfs add got really slow
<kpcyrd> if it's really gc not catching up then there must be a huge load of allocations
sonatagreen has quit [Ping timeout: 244 seconds]
Guest73396 has quit [Ping timeout: 260 seconds]
<whyrusleeping> kpcyrd: you have it using that amount of ram live?
<whyrusleeping> like, right now?
<kpcyrd> whyrusleeping: yes
<whyrusleeping> kpcyrd: do you have your stderr redirected to a file?
<kpcyrd> stdout and stderr are connected to the terminal
<whyrusleeping> hrm okay
<whyrusleeping> could you be awesome and try something for me?
<kpcyrd> sure
<whyrusleeping> okay, kill the daemon
<whyrusleeping> set the environment variable IPFS_PROF to true
<whyrusleeping> and then run the daemon again with 'ipfs daemon 2> somefile.log'
<kpcyrd> 1sec
<ion> IPFS_PROF=true ipfs daemon 2> somefile.log
<whyrusleeping> then reproduce the memory issue
<whyrusleeping> get that 31GB of ram usage again
<whyrusleeping> and once its there, copy the file 'ipfs.memprof' somewhere
<demize> 31G RES is rather impressive.
<whyrusleeping> then kill the daemon with ctrl+\
<whyrusleeping> (sends it SIGQUIT i believe)
<whyrusleeping> then send me the ipfs binary you are using, the memprof file you copied out, and the somefile.log
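(whyrusleeping's steps collected into one sketch; IPFS_PROF and the ipfs.memprof output are as he describes them, the log filename is arbitrary.)

    # with the old daemon stopped:
    export IPFS_PROF=true
    ipfs daemon 2> somefile.log &
    # ...reproduce the ~31GB memory usage, then save the profile...
    cp ipfs.memprof ipfs.memprof.saved
    # stop the daemon with ctrl+\ in its terminal, or equivalently:
    kill -QUIT $(pgrep -f 'ipfs daemon')
    # then hand over the ipfs binary, ipfs.memprof.saved and somefile.log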
chriscool has joined #ipfs
Not_ has joined #ipfs
<kpcyrd> whyrusleeping: everything running again. I think I'll let it run through the night, 5AM here
<whyrusleeping> kpcyrd: sounds good to me!
tinybike has joined #ipfs
uhhyeahbret has joined #ipfs
<multivac> [REDDIT] Glockamole (http://ipfs.pics/QmQMAnDrRyRwYhPaVhYaxAwCgdZBa3zqKRY5HGXoc5LAe6) to r/puns | 21 points (84.0%) | 2 comments | Posted by dachewie | Created at 2015-10-25 - 01:04:59
<codehero> sweet. found an ipfs.pics link in the wild :P
<codehero> not on /r/ipfs
jo_mo has quit [Quit: jo_mo]
<codehero> ipfs did it. it's ready to conquer the world
<kpcyrd> ipfs.pics needs https
<codehero> it does... well, no
<codehero> what we need is ipfs support in browsers
fingertoe has joined #ipfs
<codehero> then we don't need https
<codehero> well. we still need encryption in ipfs
<ansuz> is there anything in ipfs.pics that makes it different from a regular gateway other than branding?
Guest73396 has joined #ipfs
<codehero> yes
<ansuz> like, if I ask for a hash, is it going to only deliver it if it's an image?
<codehero> wait, what?
<codehero> oh
<codehero> good question
<ansuz> > yes
<ansuz> heh
<codehero> what if i ask for a video on ipfs.pics
<codehero> or javascript....
<ansuz> good question
<ansuz> I don't have the answer :(
<codehero> well. let's try
<ansuz> alas, we'll never know
* ansuz despairs
<codehero> :D
<codehero> well. it's doing *something*
<codehero> i'll laugh if the whole server breaks
<ansuz> just cite me as a co-author on your paper
<codehero> heh
<codehero> i will
<codehero> oh well
<codehero> it's just a broken picture
<codehero> but it caches. you could technically fill up the server's ram and or disk
<codehero> so that should be fixed
<ansuz> looks like that's just cause of bad headers
<codehero> oh. i see
<codehero> seems like they did anticipate this
<codehero> cool cool
tymat has quit [Ping timeout: 265 seconds]
tymat has joined #ipfs
<codehero> good night everyone!
<ansuz> gn
border0464 has quit [Ping timeout: 256 seconds]
border0464 has joined #ipfs
qqueue has quit [Ping timeout: 250 seconds]
qqueue has joined #ipfs
<Guest73396> beginner question. just tried adding something, a simple text file, got confirmation but cannot open file in browser. Am I missing something?
<Guest73396> confirmation: added QmdxiWgr58U9S8mfv87WsTBN9Z9UaF7Ywxk3M76e5pVBTE side1.txt
<ion> Are you running the ipfs daemon?
hoboprimate has quit [Ping timeout: 260 seconds]
Guest73396 has quit [Ping timeout: 250 seconds]
nicolagreco has joined #ipfs
<nicolagreco> I have got some questions about ipns
<nicolagreco> when I load https://ipfs.io/ipns/em32.net/archives/ how do I know who has em32.net?
<nicolagreco> meaning how does ipfs know how to route that?
<M-davidar> nicolagreco (IRC): DNS records
<achin> dig -t TXT em32.net.
<nicolagreco> this is great
<achin> in theory "ipfs dns em32.net" should also tell you, but this seems broken at the moment
<nicolagreco> but is this temporary or is here to stay?
<achin> i happen to run em32.net, and so i am going to try to make that above URL work forever
<nicolagreco> ahah sure
<nicolagreco> I meant something else
<nicolagreco> I meant is this attachment to the dns something that will be in ipfs
<nicolagreco> or is a nice little feature of ipfs.io ?
<nicolagreco> what I mean is if I can use em32.net as hash to find content
<nicolagreco> because if that was the case, then ipfs depends on the domain name system
<nicolagreco> to resolve hashes
<achin> the ability to attach IPFS or IPNS hashes to domain names is a built-in part of IPFS
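(What that DNS binding looks like in practice, as a sketch: a dnslink TXT record on the domain points at an /ipfs/ or /ipns/ path. em32.net is achin's real domain; the hash below is a placeholder, not its actual record.)

    $ dig -t TXT em32.net. +short
    "dnslink=/ipfs/QmPlaceholderHashForIllustration"
    # a gateway or local node then resolves /ipns/em32.net to that path
    $ ipfs dns em32.net    # achin notes above this is broken at the moment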
<nicolagreco> uhm so imagine we are on an island with a local network
<nicolagreco> and I have the /ipns/QmbuG3dYjX5KjfAMaFQEPrRmTRkJupNUGRn1DXCgKK5ogD
<nicolagreco> that you need
<nicolagreco> you look for em32.net
<nicolagreco> and you find nothing
<nicolagreco> is this a plausible ipfs scenario?
<achin> i'm not sure i understand
<achin> why am i looking at em32.net for a hash that you have?
<nicolagreco> sorry let me rephrase
<M-davidar> nicolagreco (IRC): attaching DNS names to ipfs relies on the DNS system, yes
<M-davidar> you'd need something like namecoin for the desert island scenario
<achin> we have IPFS on a desert island, but no DNS?
<achin> do we at least ahve dessert?
<M-davidar> achin: sure, if you're cut off from the rest of the world
<nicolagreco> we are on an island and me and you are connected on the same network. You have a file that I want. If you gave me the hash of the file, I could retrieve that file. However for some reasons you give me em32.net and when I connect I can't get anything
<achin> ok, then M-davidar has your answer -- if i want to give you a domain name (instead of a hash), we need to have working DNS
<M-davidar> nicolagreco (IRC): yeah
M-davidar is now known as davidar
<nicolagreco> ok, I think this breaks ipfs then
<davidar> nicolagreco (IRC): well, dns names isn't the recommended way to link to things
<nicolagreco> I think we should either not have this built-in or not teach people to use these hashes
<davidar> nicolagreco (IRC): i don't think jbenet likes dns either, not sure what the long-term plans are though
<nicolagreco> davidar: well, I think that usable names will break "recommendation" and all of a sudden these peeps on an island are cut off
<nicolagreco> especially if a new dns system comes up, all of a sudden you don't support that anymore
<nicolagreco> can't*
<davidar> nicolagreco (IRC): like I said, if you're using something like namecoin, then there's not an issue
<davidar> because you can store all the records within ipfs itself
<nicolagreco> (putting on the side the fact that this feature breaks ipfs to me)
<nicolagreco> davidar: can you give me an example of how that would work?
<achin> i know that most people are not on a desert island, so i'm fairly comfortable using DNS for right now
<davidar> nicolagreco (IRC): dns support is just a transitionary mechanism (like the http gateway), and I'm pretty sure it's not part of the long-term plan
<nicolagreco> achin: I think we will all be "on an island" when we decide what to trust and what dns system to rely upon - and as soon as jbenet removes this feature, there will be plenty of pages that won't work anymore
<davidar> so, namecoin stores "dns records" in a blockchain
<nicolagreco> davidar: that's reassuring
<davidar> which you can stick into ipfs and download locally
<davidar> and then lookup names from there
<nicolagreco> davidar: yes! my question was, right now, how can I look up a name on namecoin using ipfs?
<davidar> ah! not yet, I don't think so
<davidar> but I believe it's a planned feature
<nicolagreco> /ipfs/whateve?
<nicolagreco> since that would map to a domain
<davidar> /ipns/example.bit/foo
<davidar> .bit is the namecoin namespace
<nicolagreco> ok, I see where you are heading
<nicolagreco> I guess we could make that a standard
<nicolagreco> uhm, I wonder how we can do this more.. gently
<nicolagreco> since I could come up with a .nicola, still using namecoin
<nicolagreco> but then I have to convince ipfs to map .nicola in a specific way
phuzion has joined #ipfs
<nicolagreco> at that stage, ipfs is the new icann
phuzion has left #ipfs ["See ya"]
<nicolagreco> don't get me wrong
<davidar> nicolagreco (IRC): afaik namecoin only has a single namespace
<nicolagreco> I like the mapping IN TXT from dns to ipfs
<nicolagreco> I am not sure that ipfs to dns is a good idea
<nicolagreco> or at least is very premature to me
<davidar> sure, and the http gateway isn't a very good idea in the long term either
<achin> it's useful for everyone who doesn't use namecoin
<nicolagreco> but gateway is external to the protocol
<davidar> these are just mechanisms to make adoption easier
<nicolagreco> davidar: I agree for dns->ipfs, but not the other way around
<achin> does anyone run a namecoin-to-dns gateway, where you can query namecoin over dns?
<davidar> nicolagreco (IRC): btw, I think /ipns/example.com is going to be renamed to /dns/example.com at some point, to make the dependence clear
<nicolagreco> since you basically break ipfs and all the links as soon as you remove that
<nicolagreco> ok that is a good idea
<davidar> achin: not the last time I checked, since it kind of goes against the point of namecoin
<achin> maybe there will be another solution that will let us come up with pronounceable URLs, so we don't have to rely on DNS
<nicolagreco> maybe
<davidar> well, it can handle local dns queries
<davidar> but you need a copy of the blockchain
<nicolagreco> the blockchain is very cool, but scares me
<davidar> achin: I think we'll need some kind of naming system
<nicolagreco> all of my friends are in quantum crypto
<achin> requiring a full copy of the blockchain can be annoying for some people
<davidar> achin: yeah, I know
<davidar> achin: maybe we can do something clever with hosting it on ipfs? ;)
<achin> i have a [mostly] full copy on my server, but none of my phones, tables, or chromebooks have a copy
<davidar> nicolagreco (IRC): really?
<nicolagreco> achin: holding a blockchain in your phone in a couple of years will be nothing
<achin> hopefully. but it's pretty annoying to do at the moment :)
<nicolagreco> yes, I totally agree
<nicolagreco> well, you can have eventual consistency by loading only part of it
<achin> does namecoin use the bitcoin blockchain? or does it run its own blockchain?
<nicolagreco> its own
<nicolagreco> _but_ you should have a look at blockstack/blockstore
<achin> so that will likely be much much smaller than the bitcoin blockchain
<nicolagreco> they run a dht on top of bitcoin
<nicolagreco> and they do dns
<achin> (i assume a transaction in namecoin is a "dns update", which should not have the same frequency as currency trading)
<nicolagreco> achin: have a look at what I mentioned ^
<nicolagreco> btw last question
<nicolagreco> can someone teach me about ipns
<nicolagreco> I am very confused about it
zeroish has quit [Read error: Connection reset by peer]
<davidar> nicolagreco (IRC): which part of it? :)
<nicolagreco> so IPNS gives me a hash
<nicolagreco> that points to another hash
<nicolagreco> that points to a file
<achin> a lot of things these days are storing data in the bitcoin blockchain. i'm not sure how i feel about this
<davidar> achin: that's only because they don't realise what they really want is ipfs ;?
<davidar> s/;?/;)
<multivac> davidar meant to say: achin: that's only because they don't realise what they really want is ipfs ;)
<akkad> or tahoe-lafs
<achin> davidar: perhaps!
<nicolagreco> well blockchain != ipfs
<davidar> nicolagreco (IRC): but blockchains are a subset of the ipfs merkledag
<achin> it sometimes feels like bitcoin is turning into a very large, very unwieldy database. if i want only a few bits of information, i have to haul around gigabytes of data that i don't care about at all
<davidar> achin: indeed
<nicolagreco> davidar: one thing is data structure one thing is application
<nicolagreco> so can someone help me out on understanding how ipns work?
<achin> how it works in practice? or how the implementation works?
<nicolagreco> achin: that's the whole point of the blockchain
<davidar> nicolagreco (IRC): are you familiar with public key crypto?
<nicolagreco> davidar: yes
<achin> i think you already have a good idea how it works in practice -- IPNS maps peerID hashes to ipfs content hashes
<nicolagreco> uhm
<davidar> nicolagreco (IRC): so you have /ipns/QmHashOfYourPublicKey
<nicolagreco> so how do I update a hash?
<nicolagreco> the pointer to *
<davidar> you sign the hash with your private key
<nicolagreco> uhm right
<nicolagreco> and how does this get propagated?
<davidar> to say "this is what my ipns name should resolve to"
<davidar> it gets stored in the dht iirc
<nicolagreco> I am not sure I get this part
<nicolagreco> so there is a chain of changes somewhere? or it is a full replace?
<akkad> if you don't share the sha/hash is the asset considered secure from other nodes listing it?
sbruce has quit [Read error: Connection reset by peer]
<davidar> nicolagreco (IRC): not yet, but i think ancestry chains are planned
<achin> i'm pretty sure it's stored directly in the dht, since an ipns publication will disappear after 24 hours
sbruce has joined #ipfs
<achin> akkad: probably not
<nicolagreco> davidar: as mandatory or optional?
<davidar> akkad (IRC): no, your node still announces blocks you're storing/providing
<nicolagreco> davidar: what do you mean by that?
<davidar> nicolagreco (IRC): afaik there will be a few different options to choose from
<davidar> that = ?
<nicolagreco> sorry I meant achin
<nicolagreco> achin: "i'm pretty sure it's stored directly in the dht, since an ipns publication will disappear after 24 hours"
nicolagreco has quit [Quit: nicolagreco]
<davidar> nicolagreco (IRC): you need to periodically republish (ipfs does this automatically now) to stop it from churning into oblivion
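(The CLI side of what davidar is describing, as a sketch; the record is signed with your node's private key and stored in the DHT under your peer ID. The hashes below are placeholders.)

    # publish: point your peer ID (your ipns name) at an /ipfs/ path
    ipfs name publish /ipfs/QmSomeContentHash
    # resolve: anyone can look up the current value of that name
    ipfs name resolve QmYourPeerID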
sharky has quit [Ping timeout: 260 seconds]
<achin> they left
<achin> and i'm getting super sleepy
* achin -> zZz
<davidar> night
nicolagreco has joined #ipfs
<nicolagreco> sorry davidar, achin
sharky has joined #ipfs
<davidar> achin gone
<nicolagreco> alright
<nicolagreco> I call it a night too
<nicolagreco> I have allocated research time for natural language processing (anything related to it)
<nicolagreco> I wonder if I could apply something to ipfs/distributed systems
<akkad> go/src/github.com/ipfs/go-ipfs/Godeps/_workspace/src/github.com/anacrolix/missinggo/atime.go:10: undefined: fileInfoAccessTime
<davidar> nicolagreco (IRC): hmm, not sure
<akkad> pwd
<davidar> nicolagreco (IRC): if there's nlp-related open data you'd like to store on ipfs, I'd be interested ;)
<davidar> can't think of anything nlp proper atm though :/
<nicolagreco> so don't get me wrong but every now and then I get confused on the difference between ipfs and bittorrent
<davidar> ipfs borrows a lot of ideas from bittorrent, as well as a number of other distributed systems
<davidar> ipfs isn't a "new" system, it's a combination of a bunch of existing ones
<nicolagreco> but what it is really is bittorrent :/
<nicolagreco> or am I super wrong?
<davidar> the general slogan is "ipfs = bittorrent + git"
<davidar> so the file distribution part is a lot like bittorrent, and ipfs objects are a lot like git
chriscool has quit [Ping timeout: 264 seconds]
<nicolagreco> uhm
<tinybike> thought i'd share my tiny JS multi-hash encoder/decoder, in case it's useful to anyone else: https://github.com/tinybike/multi-hash
<tinybike> (actually just sha256 for now, since that's the IPFS default, heh)
<davidar> nicolagreco (IRC): mmm?
<davidar> tinybike (IRC): cool :)
<nicolagreco> I wonder why we don't have /torrent/<torrent hash>
<tinybike> nicola, you could store your .torrent files in IPFS -- then it'd just be /ipfs/<torrent hash> :)
<davidar> tinybike (IRC): is it different to https://github.com/jbenet/node-multihash though?
<nicolagreco> in bittorrent the magnet uri is the hash of the content right?
<tinybike> ...i didn't realize that repo existed, lol
<tinybike> ::reinvents the wheel::
<tinybike> guys check out this cool thing i invented, i'm thinking of calling it a "wheel"
ilyaigpetrov has joined #ipfs
<davidar> tinybike (IRC): lol
<davidar> nicolagreco (IRC): ipfs is about providing a common layer to glue everything together, "/torrent/" seems like it would case fragmentation :/
<davidar> *cause
<davidar> nicolagreco (IRC): however, having said that, see https://github.com/ipfs/go-ipfs/issues/1678#issuecomment-140463345
<tinybike> davidar, on that note, do you know what udp is used for (if anything) in ipfs?
<tinybike> was thinking of filling in the node-libp2p-udp module
<tinybike> and want to make sure i'm doing it right / not duplicating someone else's work
<davidar> tinybike: something to do with nat traversal i think
<davidar> tinybike: talk to daviddias first if you're concerned about stepping on toes
<tinybike> will do!
<akkad> are #+ foo bar other, comments? or actual build directives?
<davidar> ?
<akkad> oops this is more go-ipfs related
<whyrusleeping> udp will be used as a communication transport
<tinybike> right -- should there be application code providing reliability, ordering, etc. in that case? or is it gonna be used as-is (i.e. just a regular udp socket)?
<ilyaigpetrov> achin: currently ipns resolves this way: ipns_addr >> ipfs_obj. But what about: ipns_addr >> HEAD -> [ipfs_obj, prev_head] -> [prev_ipfs_obj, prev_head] -> ... -> [1st_ipfs_obj, null]
<ilyaigpetrov> achin: so, ipns would be treated like git's HEAD
<davidar> whyrusleeping (IRC): ^ ancestry chains are planned, no?
<fingertoe> just curious - is there a copy of the bitcoin blockchain on IPFS? Seems like a silly thing for people to store millions of times when you could just store the hashes and access at will.. ??
<ilyaigpetrov> fingertoe: in bitcoin the ledger is owned by the network; who will own such a blockchain in ipfs?
<ilyaigpetrov> achin: having a history of all ipns heads is good, because the ipns owner can't completely delete links from the network
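(A hypothetical sketch of the record shape ilyaigpetrov is proposing -- not an existing ipfs structure: each published HEAD links to the current object and to the previous head, git-style.)

    {
      "Value": "/ipfs/<current object hash>",
      "Prev":  "/ipfs/<hash of the previous HEAD record>",
      "Seq":   42,
      "Sig":   "<signature by the ipns key>"
    }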
<davidar> fingertoe: yeah, it's possible, the bitcoin network only has to agree on the root hash
<davidar> although you'd still have to download the whole thing to verify
<fingertoe> ilyaigpetrov: Not sure I understand the question. Does it need an owner? Seems like that network already has millions of copies of those block files..
<davidar> well, maybe not the whole thing
<ilyaigpetrov> fingertoe: well, you may keep a copy in ipfs, but this will be your own copy, to which only you may add new blocks. But, yes, the blocks that are kept will be shared by everyone.
<davidar> ilyaigpetrov (IRC): but if you know what the current root hash is, you can retrieve the whole blockchain from ipfs
<ilyaigpetrov> yes, if adding new blocks is not a concern here
<davidar> there could be a bridge to add new blocks
<davidar> how far back in the blockchain do bitcoin nodes have to look to do stuff?
<davidar> i dunno, ask jbenet, I'm pretty sure he's thought about this stuff :)
<davidar> and watch that talk he gave at stanford last week ;)
nicolagreco has quit [Quit: nicolagreco]
tinybike has quit [Ping timeout: 260 seconds]
<fingertoe> Depends. If Satoshi spent his coins you would have to go all the way back to the beginning. Basically the unspent stuff is important, the spent stuff, not so much.
<whyrusleeping> you could fairly easily store each block in the blockchain in ipfs
<whyrusleeping> but the problem would be that the blocks store their hashes in a different format
<whyrusleeping> the links wouldnt work
Guest73396 has joined #ipfs
<ilyaigpetrov> Can I open an issue about ipns like git's HEAD?
<ilyaigpetrov> would be a proposal
<whyrusleeping> that's already planned, search around first
fingertoe has quit [Ping timeout: 246 seconds]
<ilyaigpetrov> sorry, I can't find it
<Guest73396> beginner question: So when I add something it will be there forever and ever?
<whyrusleeping> ilyaigpetrov: something about it is here: https://github.com/ipfs/specs/tree/master/records
<whyrusleeping> Guest73396: no
<Guest73396> Aha, so if I pay filecoin, will it be there forever and ever or not?
<davidar> if you pay someone to pin it forever (through filecoin or other means), yes
<davidar> filecoin will let you verify they're actually doing what you're paying them to
<Guest73396> Ok : )
<Guest73396> beginner question: is there a way to list all files that are currently available in the repo, like a search engine or file browser?
<haadcode> ipfs refs local
tinybike has joined #ipfs
<Guest73396> I mean what's globally available on the IPFS P2P network
<Guest73396> It's supposed to be like a big git repo, right?
astrocyte has quit [Remote host closed the connection]
<ilyaigpetrov> I don't get it, how does IPRS work? Does it give you a DHT with a variable string -> value mapping?
<ilyaigpetrov> Guest73396: no, if you want to keep your file private, nobody can "list" it
voxelot has quit [Ping timeout: 250 seconds]
<Guest73396> Aha, so it's like a big git repo, except you don't know what's inside apart from your own files?
chriscool has joined #ipfs
reit has quit [Ping timeout: 255 seconds]
<ilyaigpetrov> Guest73396: instead of addresses you use file hashes; you can get such hashes if you have the files, or get them from others the same way you get links on the net
Myagui has quit [Ping timeout: 255 seconds]
astrocyte has joined #ipfs
<davidar> Guest73396 (IRC): you can listen on the network for what people are providing
<davidar> there's not yet a search engine though
Myagui has joined #ipfs
chriscool has quit [Ping timeout: 255 seconds]
<Stskeeps> are ipfs content pin-able too?
<Stskeeps> er, ipns
edcryptickiller has joined #ipfs
edcryptickiller has left #ipfs [#ipfs]
<Guest73396> A search engine for those who want their files to be publicly available would be nice. It would also be nice to destroy Google's "business" model, i.e. spying on customers' searches and selling their information to third-party advertisers such as corporations + handing it over to whatever political department is asking for it.
<Guest73396> search requests
<Guest73396> @+davidar how can I listen on the network for what people are providing? You mean just what links they happen to be posting? Or some command way of doing it?
cemerick has joined #ipfs
M-matthew1 has joined #ipfs
M-matthew has quit [Ping timeout: 244 seconds]
gamemanj has joined #ipfs
harlan_ has joined #ipfs
NeoTeo has joined #ipfs
anticore has joined #ipfs
dignifiedquire has joined #ipfs
Not_ has quit [Ping timeout: 272 seconds]
<alu> https://www.youtube.com/watch?v=b5g1xubyuVs new ghost in the shell movie coming out
jamie_k_ has joined #ipfs
jamie_k_ has quit [Ping timeout: 244 seconds]
jamescarlyle has joined #ipfs
go111111111 has quit [Ping timeout: 250 seconds]
ygrek has quit [Ping timeout: 244 seconds]
go111111111 has joined #ipfs
NeoTeo has quit [Quit: ZZZzzz…]
NeoTeo has joined #ipfs
chriscool has joined #ipfs
jamie_k_ has joined #ipfs
jamescarlyle has quit [Remote host closed the connection]
flounders has quit [Ping timeout: 272 seconds]
jamescarlyle has joined #ipfs
rendar has joined #ipfs
jamescarlyle has quit [Ping timeout: 264 seconds]
Whispery has joined #ipfs
Zuardi has quit [Remote host closed the connection]
Zuardi has joined #ipfs
bedeho has quit [Ping timeout: 255 seconds]
pfraze has joined #ipfs
TheWhisper has quit [Ping timeout: 246 seconds]
Encrypt has joined #ipfs
anticore has quit [Ping timeout: 272 seconds]
Guest73396 has quit [Ping timeout: 272 seconds]
jamie_k___ has joined #ipfs
jamie_k_ has quit [Ping timeout: 265 seconds]
pfraze has quit [Ping timeout: 240 seconds]
pfraze has joined #ipfs
jamie_k___ has quit [Ping timeout: 244 seconds]
amade has joined #ipfs
martinkl_ has joined #ipfs
harlan_ has quit [Quit: Connection closed for inactivity]
tinybike has quit [Quit: Leaving]
cemerick has quit [Ping timeout: 260 seconds]
chriscool has quit [Ping timeout: 240 seconds]
<victorbjelkholm> running 30 ipfs daemons locally really makes my computer behave... Interesting
<davidar> victorbjelkholm (IRC): lol, I'm not even going to ask why
<victorbjelkholm> davidar, testing the performance of my little network
<rendar> victorbjelkholm: what you mean with behave? :)
<victorbjelkholm> rendar, well, it has troubles drawing things on the screen, like pictures and Sketch
<victorbjelkholm> ah, like behave interesting. Not behave. Interesting :p
<davidar> victorbjelkholm (IRC): is it swapping?
<victorbjelkholm> davidar, probably, haven't investigated yet, just got my wonderful bash script to start the daemons but started out with a too high number
<davidar> Anything greater than 1 is probably too high a number :p
<victorbjelkholm> puts more load on the cpu than memory though
<davidar> Huh
<victorbjelkholm> s/than/than running out of
<multivac> victorbjelkholm meant to say: puts more load on the cpu than running out of memory though
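(Roughly what such a start-many-daemons script looks like, as a sketch: each daemon gets its own IPFS_PATH and non-clashing ports. Paths and port offsets are made up, not victorbjelkholm's actual script.)

    #!/bin/sh
    for i in $(seq 0 29); do
      export IPFS_PATH="/tmp/ipfs-node-$i"
      ipfs init > /dev/null
      ipfs config Addresses.API "/ip4/127.0.0.1/tcp/$((5001 + i))"
      ipfs config Addresses.Gateway "/ip4/127.0.0.1/tcp/$((8081 + i))"
      ipfs config --json Addresses.Swarm "[\"/ip4/0.0.0.0/tcp/$((4001 + i))\"]"
      ipfs daemon &
    done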
anticore has joined #ipfs
dignifiedquire- has joined #ipfs
<dignifiedquire-> victorbjelkholm, did you see my commit/comment on github?
<mungojelly> so the reason i found out about ipfs was actually that clojars (clojure's main source server) went down for a few hours, and that seemed ridiculous for a whole language to depend on a domain name instead of anything stable
<victorbjelkholm> dignifiedquire-, yeah, I did, I'm not sure what's going on. Spent the whole evening yesterday basically, trying to figure out what's going on
<victorbjelkholm> seems to be a collision with the API method + the config + default config
<dignifiedquire-> hmm
<victorbjelkholm> I'm thinking that fixing this: https://github.com/ipfs/node-ipfs-api/issues/71 could help fix this config issue
<dignifiedquire-> what I pushed should at least fix the issue with the config as it's not shared between instances anymore
<dignifiedquire-> why do you think you need a timeout?
<victorbjelkholm> dignifiedquire-, no, I'm thinking my suggestion of moving the host/port to a config object instead, that we can pass around. Instead of having the variables in each module
<victorbjelkholm> could help figuring out what's going on
<victorbjelkholm> yeah, I agree that I would think it would fix it
<victorbjelkholm> I'm gonna take a look again later
<davidar> mungojelly (IRC): yeah, I suspect a lot of us have had a similar experience shortly before becoming involved with ipfs
<davidar> mungojelly (IRC): you're lucky yours was just a temporary problem :p
<dignifiedquire-> right, that's what I did effectively in my commit, it only uses the config module for the initial defaults and clones it to ensure no sharing is happening
<mungojelly> davidar: it's not so much a practical concern, i just couldn't believe the attitude people have, i was like, well we should think about moving towards hashes then eh and people just don't care, there's no vision of how horribly wrong things could go :(
<mungojelly> and then as soon as i started thinking about it i realized, oh, that system is actually HUGELY restricting what we depend on, HUGELY. nothing depends on anything with ANY HEAVY DATA in it at all! :o
<mungojelly> you can "for free" (btw could someone donate, we're having trouble paying for the server) depend upon SHORT PIECES OF TEXT only!
<victorbjelkholm> dignifiedquire-, thanks for the test though, tried creating one yesterday but my brain burned out before I was able to finish it
<mungojelly> so i realized all of a sudden that that's why our programs are mostly ugly and boring.
compleatang has quit [Quit: Leaving.]
<dignifiedquire-> victorbjelkholm, np, let me know if you find anything else, will take another stab at it later today
<mungojelly> you come as a new programmer to programming and it's like, welcome, here's the toolbench! we've got tools for doing these sorts of math, and for manipulating data formats, oh and even some string routines! have fun! there's nothing there that DOES anything, and the reason for that is that "anything" always involves (massive) interesting data about the world
zz_r04r is now known as r04r
dignifiedquire- has quit [Remote host closed the connection]
<mungojelly> ugh, actually getting leiningen to recognize hashes seems difficult-- it's convinced nothing's safe or real unless it comes from a repo. lein, this is a HASH, lein, it is safe i promise. :/
<mungojelly> maybe give it the hash of a maven repo that promises the other hash is real? ugh
pfraze has quit [Remote host closed the connection]
anticore has quit [Ping timeout: 250 seconds]
amade has quit [Remote host closed the connection]
<daviddias> victorbjelkholm: trying to replicate the bug you found, but being unable to
<daviddias> we do have an object that saves our config and is used to move it around https://github.com/ipfs/node-ipfs-api/blob/master/src/config.js
<daviddias> so far, all our tests have been made with disposable nodes, which bind to different ports each time
<daviddias> oh wait!
<victorbjelkholm> daviddias, the code I pasted in the issue outputs two different ids?
<daviddias> got it
<daviddias> damn, good catch!
anticore has joined #ipfs
<victorbjelkholm> daviddias, ah, I made some console logs also and everything looks correct until the second self.id is run
amade has joined #ipfs
<daviddias> I think I get it
<daviddias> since we are using ./config
<daviddias> and Node.js caches modules
<daviddias> and both instances require the same ./config
<daviddias> when you change it for the second instance
<daviddias> it changes also the first
<victorbjelkholm> daviddias, I'm not sure, tried inlining the config but made no difference
dignifiedquire- has joined #ipfs
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vW2bi
<ipfsbot> node-ipfs-api/solid-tests f97c0b3 David Dias: fix issue #84
<dignifiedquire-> daviddias, see my comment for a possible solution
<dignifiedquire-> it clones the config object
<daviddias> dignifiedquire-: you are right :) pushed at the same time f97c0b3 with a config per ipfsAPI instance
<daviddias> verified and it seems to be working well
<daviddias> was completely missing this one, and I'm even surprised how I didn't get zombie daemons (because I was trying to stop 3 different daemons but was actually always stopping the same one)
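(The gist of the bug and of the f97c0b3 fix, as a JavaScript sketch; file and variable names are illustrative, not the actual node-ipfs-api source.)

    // config.js -- require() caches this module, so every ipfsAPI()
    // instance that mutates it is mutating one shared object
    module.exports = { host: 'localhost', port: '5001' }

    // index.js -- the fix: clone the defaults per instance
    var defaults = require('./config')

    function ipfsAPI (host, port) {
      var config = Object.assign({}, defaults)  // copy, don't share
      if (host) config.host = host
      if (port) config.port = port
      return { config: config /* ...request helpers built on config... */ }
    }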
<dignifiedquire-> daviddias, should I rebase my superagent work onto the solid-test branch?
<daviddias> still going through that one. We shouldn't break vinyl fs
<dignifiedquire-> I can add it back, but for what do we actually need that?
<daviddias> unless we have a better way to send multipart requests (aka directories)
<daviddias> getting that to work was a big effort by jbenet and mappum, so that we could finally ipfs add -r a dir
<dignifiedquire-> right so the important part of it is sending directories?
<daviddias> using HTTP multipart message
<daviddias> dignifiedquire-: yep
amade has quit [Remote host closed the connection]
<dignifiedquire-> have not tested directories yet, we are still
<daviddias> go-ipfs API expects a HTTP multipart message to add a dir
<dignifiedquire-> using multipart messaging just not doing it ourselves
<victorbjelkholm> daviddias, awesomeness! Quick to fix for you apparently
<daviddias> you did all the hard work of chasing the rabbit :)
<victorbjelkholm> I'll try out that branch in my project and see if it works
<daviddias> thank you!
amade has joined #ipfs
fiatjaf has left #ipfs ["undefined"]
<dignifiedquire-> daviddias, is there a directory add test somewhere?
<daviddias> was just adding one now :)
<dignifiedquire-> daviddias, reading my
<dignifiedquire-> mind huh ;)
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vW2pf
<ipfsbot> node-ipfs-api/solid-tests 6675e5b David Dias: add folder (recursively) test
<daviddias> there :)
<dignifiedquire-> thanks will rebase the superagent branch on that later and make sure to pass all the tests
Encrypt has quit [Quit: Quitte]
<dignifiedquire-> so it would be okay to drop vinyl support when then recursive add still works?
<mungojelly> often if i go to check if my site is available through https://ipfs.io/ipns/QmSy5XjpgqhD4t436czqP1sgcGSxx7n4p7JC8hVEDsnDs7 it's not, but then if i restart my daemon it becomes available again. any way i could make that bug report more useful?
<mungojelly> fedora 21
<mungojelly> ipfs version 0.3.8 is that the one i should be using :)
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vW2hB
<ipfsbot> node-ipfs-api/solid-tests 88d24e6 David Dias: config.show test
<daviddias> mungojelly: before restarting your Node, could you check do $ ipfs swarm peers
<daviddias> and see how many peers you were connected
<daviddias> dignifiedquire-: if multipart works without vinyl, we will throw a huge party
<mungojelly> daviddias: ok will do, i think when i tried that before it spat out a list of peers but i didn't pay attention to the size/character of the list
<dignifiedquire-> it already works for single files and a list of files
<mungojelly> what's "vinyl"?
<daviddias> because getting that to work was like chasing all the rabbits down all the holes at once
<daviddias> mungojelly: virtual file format https://www.npmjs.com/package/vinyl
<dignifiedquire-> I know the code looks freaking scary
<daviddias> mungojelly: something that enables us to send folders from Node.js and the browser as multipart messages to the HTTP api
<mungojelly> hmm ok thanks for the pointer daviddias now i can read more about it, the common word "vinyl" just wasn't quite enough clues :D
<daviddias> mungojelly: no worries :)
dignifiedquire- has quit [Remote host closed the connection]
reit has joined #ipfs
pfraze has joined #ipfs
<ipfsbot> [node-ipfs-api] diasdavid pushed 2 new commits to solid-tests: http://git.io/vWaf6
<ipfsbot> node-ipfs-api/solid-tests 1f611e9 David Dias: .version and .commands
<ipfsbot> node-ipfs-api/solid-tests 6ae3948 David Dias: diag net
pfraze has quit [Ping timeout: 250 seconds]
anticore has quit [Remote host closed the connection]
<ipfsbot> [node-ipfs-api] diasdavid pushed 2 new commits to solid-tests: http://git.io/vWaTG
<ipfsbot> node-ipfs-api/solid-tests f30ddea David Dias: object.stat test
<ipfsbot> node-ipfs-api/solid-tests fdc26a4 David Dias: object.links
nicolagreco has joined #ipfs
chriscool has joined #ipfs
<mungojelly> daviddias: my node seems to have stopped working again, yay! and it shows plenty of peers: http://ipfsbin.xyz/#QmY1JoCSm46AfJ2BskMvoEZsBJA4oZCoSA2zFFxRvy1Gtt
<daviddias> before you reboot it
<daviddias> what is the hash you are trying to resolve?
<daviddias> and what is your ipfs version?
<mungojelly> i was trying to get to my node through ipfs.io at https://ipfs.io/ipns/QmSy5XjpgqhD4t436czqP1sgcGSxx7n4p7JC8hVEDsnDs7
<daviddias> 'get to your node'
<mungojelly> also it seems to hang to say ipfs cat QmY1JoCSm46AfJ2BskMvoEZsBJA4oZCoSA2zFFxRvy1Gtt
<daviddias> was this IPNS linked to an hash?
<mungojelly> ipfs version 0.3.8
<mungojelly> sure yeah? i published a new hash to it yesterday i think, or maybe today
voxelot has joined #ipfs
voxelot has joined #ipfs
<daviddias> my guess is that the 'reproviding' stopped at a given point and the records stopped being valid
<daviddias> and that is why when you reboot it (and it reprovides again) it starts working
anticore has joined #ipfs
<mungojelly> ok sure anything i can do to test that guess?
<daviddias> run ipfs dht findprovs <hash> with your IPNS hash and see if it finds it
<daviddias> would be good to add IPNS reprovider workers to https://www.npmjs.com/package/bsdash
<mungojelly> seems to hang
dignifiedquire_ has joined #ipfs
<cryptix> hello0w
voxelot has quit [Ping timeout: 240 seconds]
<mungojelly> it'll respond to "ipfs swarm peers" but anything else i say to it it just never answers
<mungojelly> oh it'll tail the log, it does have a bunch of things in it saying "error" is that normal, i'll paste some of that i guess
xelra has quit [Ping timeout: 268 seconds]
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWaLH
<ipfsbot> node-ipfs-api/solid-tests a4655dd David Dias: .swarm.peers test
<mungojelly> daviddias: here's ipfs log tail: http://ipfsbin.xyz/#QmX4iVYPMWtMVXLMC4EmG3g8GQt1Kb2hR7hbWewa88i4CK
<mungojelly> lots of error with the ip 0.0.0.0
<daviddias> this seems to be the kind of thing whyrusleeping eats for breakfast! ':D
NeoTeo has quit [Quit: ZZZzzz…]
xelra has joined #ipfs
<daviddias> mungojelly: add all of this to an issue in go-ipfs, seems to be a reuseport problem (probably maxing out the number of file descriptors, which are used for sockets and therefore hanging all the connections)
<mungojelly> oh, huh, it did finally respond to ipfs dht findprovs and said "error: routing: not found" so there's that
<daviddias> cryptix: hi o/ :)
<ipfsbot> [node-ipfs-api] Dignifiedquire closed pull request #83: [discuss] Switch to superagent instead of raw http.request (master...superagent) http://git.io/vWuQO
jo_mo has joined #ipfs
felixn has quit [Ping timeout: 240 seconds]
felixn has joined #ipfs
kanzure has quit [Ping timeout: 240 seconds]
kanzure has joined #ipfs
cemerick has joined #ipfs
<mungojelly> okie dokie here's an issue daviddias https://github.com/ipfs/go-ipfs/issues/1896
<mungojelly> now back to playing with xdotool :)
<dignifiedquire_> daviddias: what should happen if you try adding a path to a directory but do not add the recursive flag?
cemerick has quit [Ping timeout: 240 seconds]
Encrypt has joined #ipfs
<daviddias> » ipfs add examples ◉ ◼◼◼◼◼◼◼◼◼◼
<daviddias> Error: 'examples' is a directory, use the '-r' flag to specify directories
<daviddias> Use 'ipfs add --help' for information about this command
<daviddias> same as the CLI tool, in this case, it throws an error
<dignifiedquire_> okay :
<dignifiedquire_> :)
cemerick has joined #ipfs
<nicolagreco> who owns ipfsbin?
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWant
<ipfsbot> node-ipfs-api/solid-tests ec0336d David Dias: gulpifying things
Mark- has joined #ipfs
Mark- has left #ipfs [#ipfs]
<dignifiedquire_> daviddias: do you have a moment to explain sth to me?
chriscool has quit [Ping timeout: 265 seconds]
chriscool has joined #ipfs
<Bat`O> hummm, looks like build is broken in windows
<Bat`O> ..\..\Godeps\_workspace\src\github.com\shirou\gopsutil\disk\disk_windows.go:10:2: cannot find package "github.com/StackExchange/wmi" in any of: ...
hoboprimate has joined #ipfs
Whispery is now known as TheWhisper
Guest73396 has joined #ipfs
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
<victorbjelkholm> any word on which browser vendor is thinking about implementing ipfs?
acidhax has joined #ipfs
cemerick has quit [Ping timeout: 250 seconds]
anticore has quit [Ping timeout: 260 seconds]
<ipfsbot> [node-ipfs-api] Dignifiedquire opened pull request #85: [discuss] Switch to superagent instead of raw http.request (solid-tests...superagent) http://git.io/vWa0f
anticore has joined #ipfs
chriscool has quit [Quit: Leaving.]
nicolagreco has quit [Ping timeout: 240 seconds]
chriscool has joined #ipfs
<dignifiedquire_> daviddias: these multipart uploads with directories are really nasty :(
sonatagreen has joined #ipfs
chriscool has quit [Client Quit]
acidhax has quit [Remote host closed the connection]
chriscool has joined #ipfs
nicolagreco has joined #ipfs
chriscool has quit [Read error: No route to host]
chriscool has joined #ipfs
chriscool has quit [Client Quit]
chriscool has joined #ipfs
chriscool has quit [Client Quit]
<ion> victorbjelkholm: If you find out, I'm also curious.
chriscool has joined #ipfs
<M-rschulman2> VictorBjelkholm: I'm almost positive that none of them have given it a moment's thought yet.
<ion> M-rschulman2: Except for jbenet’s cryptic mention in his Stanford talk.
<M-rschulman2> Oh, I haven't watched that one yet, he implied that one of the major browser makers was working on an implementation?
<ion> Apparently
<M-rschulman2> That is fascinating. I'd be surprised if it's true, but I've been wrong before!
<ion> The IPFS community on Facebook. https://www.facebook.com/IPFS-464779886928446/
<achin> :D
jamescarlyle has joined #ipfs
captain_morgan has joined #ipfs
flounders has joined #ipfs
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
chriscool has quit [Client Quit]
chriscool has joined #ipfs
pfraze has joined #ipfs
<victorbjelkholm> ion, yeah, that's what I'm thinking about...
captain_morgan has quit [Ping timeout: 246 seconds]
Guest73396 has quit [Ping timeout: 244 seconds]
forthqc has joined #ipfs
captain_morgan has joined #ipfs
NeoTeo has joined #ipfs
edcryptickiller has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<sonatagreen> I really really want an option where if I 'add -r' this directory: /ipfs/QmcxantnnLbtd58P3f6NUHTVo9vQ7LoKPak4xghrJ3rr4o then it generates this object: /ipfs/QmNraLJc7XRwrCrBnUgGjbHvVZ3eHruA2UJhAFzSKhrTrp
edcryptickiller has left #ipfs [#ipfs]
<sonatagreen> (or something to broadly similar effect)
<sonatagreen> i.e., parse symlinks to /ipfs/QmFoo as ipfs links
<sonatagreen> that would be /very helpful/
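(To make the request concrete, a hypothetical sketch of the behaviour sonatagreen is asking for; go-ipfs does not do this today, it would just add the symlink as a symlink.)

    mkdir -p mydir
    # a symlink whose target is an /ipfs/ path...
    ln -s /ipfs/QmcxantnnLbtd58P3f6NUHTVo9vQ7LoKPak4xghrJ3rr4o mydir/snapshot
    # ...would, under the proposal, become a merkledag link to that hash
    ipfs add -r mydir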
border0464 has quit [Ping timeout: 252 seconds]
border0464 has joined #ipfs
acidhax has joined #ipfs
anticore has quit [Read error: Connection reset by peer]
<kpcyrd> yup, I'd use that too
acidhax_ has joined #ipfs
anticore has joined #ipfs
acidhax has quit [Ping timeout: 240 seconds]
dignifiedquire_ has quit [Quit: dignifiedquire_]
rand_me has joined #ipfs
<hoboprimate> files I aff to
jamescarlyle has quit []
<hoboprimate> Can anyone check out /ipfs/QmW8X82wvoC7V4ZxZVVGBbwuGHJKBqBKDDaNxrWV65ZthP to see how long it takes to get a file from my node to you? I already tested at http://ipfs.io/ipfs/ and took a while.
<hoboprimate> it's 34 k image
<hoboprimate> kB
<achin> it only took a few seconds, but i probably ended up fetching it from the gateway nodes
<whyrusleeping> pretty quickly
<ion> It opened in less than a second. It probably came to me from one of the gateway nodes.
<whyrusleeping> yeah, once youve cached it on the gateway, everyone is going to get it quite fast
<hoboprimate> ok, then I'll try adding another and not go to the gateway, just a sec.
<sonatagreen> yeah, fast for me too
<hoboprimate> ok, what about this one, /ipfs/QmPwLFyZwJ5tuGMj4qPZRNdSqMwDtvzPRdp1Bqfa1EWBHh
<hoboprimate> 15.1 kB
<hoboprimate> of our furry friend
<achin> taking a while
<ipfsbot> [go-ipfs] whyrusleeping created fix/sig-handle (+1 new commit): http://git.io/vWaQU
<ipfsbot> go-ipfs/fix/sig-handle bed164c Jeromy: use os.Interrupt for cross platform happiness...
<hoboprimate> I'm connected to 62 peers
<ion> hoboprimate: Has taken over a minute so far.
<hoboprimate> I'm from Portugal by the way
<hoboprimate> _in_
<ion> hoboprimate: Are you behind a NAT without UPnP?
acidhax_ has quit [Remote host closed the connection]
<hoboprimate> I have UPnP turned on in my router
<hoboprimate> but.... it never seems to work
chriscool has quit [Read error: No route to host]
<ion> It finally loaded.
<achin> what is your peerID?
<hoboprimate> just a sec
<achin> huh, finally loaded at nearly the same time as it did for ion
<hoboprimate> QmR6hQXoDiJdUjm3FAEziWXorsSFiETGZPeyWDXwTeoS8V
<ion> achin: That is not surprising.
<hoboprimate> peerid ^
<achin> i'm not connected to you. i wonder from where i got that hash
<ion> I am not connected to S8V, i got it from someone else.
<hoboprimate> hm...
<hoboprimate> so this image was in ipfs network already?
<sonatagreen> /ipfs/QmPwLFyZwJ5tuGMj4qPZRNdSqMwDtvzPRdp1Bqfa1EWBHh loaded pretty much instantly for me
<ion> hoboprimate: Try forwarding port 4001 to your IPFS node on your router manually.
<sonatagreen> QmR6hQXoDiJdUjm3FAEziWXorsSFiETGZPeyWDXwTeoS8V is still loading
<hoboprimate> ion: ok, will do
nicolagreco has quit [Quit: nicolagreco]
<cryptix> sonatagreen: S8V is a peerID
<ion> sonatagreen: S8V is his peer ID, not a document.
<sonatagreen> oh
dignifiedquire_ has joined #ipfs
nicolagreco has joined #ipfs
<cryptix> (due to some special handling you cant cat keyhashes like you would expect)
<achin> maybe sonatagreen is connected to hoboprimate and we are all connected to sonatagreen
chriscool has joined #ipfs
<cryptix> (ie it should return the pub key)
<ion> hoboprimate: I still find it pretty great that it worked eventually despite your connectivity problems.
<hoboprimate> ion: I should probably also open 4001 on my firewall too! /me slaps forehead
<achin> ion: maybe it wouldn't have worked, though, if sonatagreen never tried to load it. i don't think ipfs does relaying at the moment
AlexPGP has joined #ipfs
<whyrusleeping> i'll probably change that soon so catting a peer id returns a pubkey
<cryptix> hey ion, re writable gw: did i read your comment right that we might as well implement object patching on the POST handler, since that's more RFC-conformant?
<cryptix> ohi whyrusleeping :)
<whyrusleeping> cryptix: hello!
<ion> cryptix: Yeah, that seems way more conformant than using PUT in a way that does not mutate the resource at the given location.
<ion> cryptix: But feel free to read the relevant RFC text yourself to confirm that i’m not talking out of my ass.
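(For reference, a minimal Python sketch of talking to a writable gateway. It assumes the daemon runs with Gateway.Writable=true on localhost:8080 and that a POST of a raw file body to /ipfs/ answers with the new object's hash in an "Ipfs-Hash" header; both assumptions should be checked against the gateway code being discussed here.)

    # Hedged sketch: add a file through the writable HTTP gateway with a POST.
    # Assumes Gateway.Writable=true on localhost:8080 and that the response
    # carries the new hash in an "Ipfs-Hash" header -- verify both against the
    # writable-gateway code/PR under discussion.
    import requests  # third-party: pip install requests

    with open('hello.txt', 'rb') as f:
        resp = requests.post('http://localhost:8080/ipfs/', data=f)

    resp.raise_for_status()
    print('added as: %s' % resp.headers.get('Ipfs-Hash'))
    print('location: %s' % resp.headers.get('Location'))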
<hoboprimate> ok, I opened 4001 port on router and firewall, try: /ipfs/QmfDPzcV3dwUWx4Qis5VUKvvWjH3NNFi1Rf5SXBDeB2we7
<hoboprimate> 23.9 kB image
captain_morgan has quit [Ping timeout: 240 seconds]
<cryptix> hehe sure :) i will also try to dig up the initial pr discussion (i _WISH_ those threads were linked in the commits)
<hoboprimate> last try, and I won't bother you more!
<hoboprimate> :)
<achin> still taking a while, hoboprimate
AlexPGP has quit [Remote host closed the connection]
<ion> For me, too, and i’m still not connected to S8V.
<hoboprimate> :/
<cryptix> ion: ipfs dht findpeer QmR6hQXoDiJdUjm3FAEziWXorsSFiETGZPeyWDXwTeoS8V ?
<cryptix> i also got an 85.243. addr
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1899: use os.Interrupt for cross platform happiness (master...fix/sig-handle) http://git.io/vWa5I
<cryptix> it has a random port so i guess reuseport failed
<ion> Oh, interesting, the only port listed for the 85. address is not 4001.
<whyrusleeping> hoboprimate: what os?
<ion> So opening port 4001 will not help, nobody will dial 4001.
<hoboprimate> whyrusleeping: fedora linux
<cryptix> ion: right
<achin> how do i manually connect to someone?
<ion> Does “/ip4/192.168.1.128/tcp/4001, /ip4/192.168.124.1/tcp/4001” mean two NATs?
<cryptix> achin: ipfs swarm (dis)connect
<achin> i thought i could use one of the addresses returned by "findpeer" but that doesn't work
<whyrusleeping> hoboprimate: could you run ipfs diag sys and gist me the output?
<hoboprimate> sure
AlexPGP has joined #ipfs
<ion> I’m assuming it’s not a single /16 network.
<cryptix> you have to append /ipfs/$peerid
<achin> cryptix: thanks!
<hoboprimate> whyrusleeping: what is gist?
<cryptix> ion: i guess it's some vpn of sorts?
<hoboprimate> whyrusleeping: gist:8b7bd363d698246bd08b
<achin> (normally the recommendation is to dump data you want to share into ipfs, but when ipfs is giving you problems, that doesn't always work :)
<hoboprimate> yeah
<achin> what's the full gist url?
<whyrusleeping> yep, don't want to make you use ipfs to debug ipfs not working
<whyrusleeping> that is counterproductive
<hoboprimate> I'll go check again if I have upnp activated in the router
<achin> cryptix: it would be sweet if you could just run binaries straight out of /ipfs/ and skip the cp and chmod steps!
<ion> whyrusleeping: An “ipfs diag sys --publish” option that adds the output as an IPFS object and pushes it to the gateway over the writable HTTP endpoint might be nice.
<haadcode> speaking of NATs, which punchthrough mechanisms does ipfs implement atm? are there any relaying mechanisms? whyrusleeping?
<whyrusleeping> ion: yeah, i'm leery of writeable gateways though
<cryptix> achin: ln -s /ipfs/QmSKVENxxkS3QGkFJDofWgcPySdCHZ3sbrU5onXAzhWZdz /bin/ipfs-paste ? :)
<whyrusleeping> haadcode: no relay methods yet, but we use upnp, nat pmp, and tcp reuseport
<whyrusleeping> hoboprimate: yeah, it looks like you might have some weird networking going on...
<cryptix> whyrusleeping: why? the gateways are writable anyways in a way
<whyrusleeping> not sure
<hoboprimate> yep, UPnP is on on the router. Is it possible it's somehow being blocked on the client side, since I have a firewall?
<whyrusleeping> cryptix: yeah... idk. feels weird
<achin> cryptix: but you still don't get +x
<hoboprimate> whyrusleeping: yeah, well, next year I'm switching cable companies anyway, to fiber; perhaps that will improve this
pfraze has quit [Ping timeout: 240 seconds]
<hoboprimate> I only have 20 Mbit ADSL
<cryptix> achin: i know.. :) there is an open issue for acls
captain_morgan has joined #ipfs
pfraze has joined #ipfs
rand_me has quit [Ping timeout: 265 seconds]
domanic has joined #ipfs
<achin> yeah, i created one of them :)
<haadcode> whyrusleeping: stun/turn/others planned? been using ipfs behind strict nats and it hasn't quite punched all the holes yet :)
<ion> haadcode: I remember reading somewhere that an equivalent of STUN/TURN is planned but it will use IPFS itself to negotiate the assistance.
<haadcode> that would make sense
<cryptix> such a thing would be mighty handy for webrtc - the p2p connections also need some kind of brokering iirc
<whyrusleeping> haadcode: yeah, relay and connection requests are planned
<haadcode> very cool
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
<AlexPGP> Has anyone time for a newbie question?
<achin> sure thing
<AlexPGP> I'm following the steps at ipfs.io, and got to the line that reads
<AlexPGP> go get -u githum.com...
<AlexPGP> When I run the line, the computer pauses for a bit, then returns to the prompt
<AlexPGP> if I then type 'ipfs' I'm told no such program exists
<codehero> you meant github, right?
<achin> look in $GOPATH
<AlexPGP> (um, yes... sorry)
<codehero> okay
<codehero> yeah. do printenv GOPATH
<achin> there should be something in $GOPATH/bin/ipfs
<codehero> i think it's more likely that he didn't even define $GOPATH
<achin> i thought `go get` requires $GOPATH to be set
<codehero> oh. right
pfraze has quit [Ping timeout: 246 seconds]
<AlexPGP> hmmm. nothing showed up when I did 'printenv GOPATH', but then I exited the terminal and opened one up again, and now the command returns /~/go
<AlexPGP> um... ~/go
<codehero> ouch
<codehero> oh
<codehero> k
<codehero> well. then try again to start ipfs
<AlexPGP> no such command found. try the 'go get' line again?
<codehero> yes
<achin> what if you type "~/go/bin/ipfs" ?
pfraze has joined #ipfs
arpu has quit [Quit: Ex-Chat]
<AlexPGP> achin: ipfs is there; I get a screen of usage information
<achin> ok, great
dignifiedquire_ has quit [Quit: dignifiedquire_]
<achin> so it was installed correctly
dignifiedquire has quit [Quit: dignifiedquire]
<codehero> oh
<codehero> right
<codehero> i'm an idiot
<codehero> wait
<codehero> add ~/go/bin to your path >_>
<AlexPGP> I see the problem.
<codehero> or $GOPATH/bin
<achin> if you don't want to type out the full path each time, you can either add "~/go/bin" to your $PATH, or copy "~/go/bin/ipfs" to a folder already in your path
<AlexPGP> Looked at PATH and I have ~/go/hin sitting there. I'm an idiot.
<codehero> yupp. i totally forgot that GOPATH doesn't affect the path :P
<codehero> AlexPGP: lol
<sonatagreen> /ipfs/QmfDPzcV3dwUWx4Qis5VUKvvWjH3NNFi1Rf5SXBDeB2we7 loaded in a second or two for me
wopi has quit [Read error: Connection reset by peer]
<codehero> pretty fast, indeed
wopi has joined #ipfs
<AlexPGP> BTW, what's a good way to 'remember' the peer identity for later use in command lines?
<achin> "ipfs id" will always show your peerID
<AlexPGP> aha. thanks. apparently, I need to rtfm a bit more.
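(If you want it programmatically rather than from memory, the local API exposes the same information; a minimal example against the default API port, assuming the daemon is running.)

    # Read your own peer ID from the local HTTP API (default port 5001)
    # instead of copy/pasting the output of "ipfs id".
    import json
    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    info = json.loads(urlopen('http://localhost:5001/api/v0/id').read().decode('utf-8'))
    print(info['ID'])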
pfraze has quit [Ping timeout: 264 seconds]
chriscool has quit [Read error: No route to host]
<cryptix> AlexPGP: there isnt that much to read, please ask if something is unclear
<AlexPGP> will do, thanks.
<cryptix> there is --help on most 'ipfs ... ' commands
<achin> the examples section on the homepage is also a great place to start
cemerick has joined #ipfs
chriscool has joined #ipfs
fingertoe has joined #ipfs
domanic has quit [Ping timeout: 250 seconds]
rand_me has joined #ipfs
bedeho has joined #ipfs
rand_me has quit [Ping timeout: 252 seconds]
<hoboprimate> whyrusleeping and all: I deleted ~/.ipfs and re-inited the node, and now getting things from the gateway is faaast!
<AlexPGP> am trying to run 'ipfs daemon' which seems to run fine, but doesn't return to the command prompt. Should the line end in '&' perhaps?
<codehero> yes
<codehero> you should use &
<hoboprimate> it isn't fast after all... Maybe the image I added just now was already cached at the gateway
<codehero> that's what it's for
<whyrusleeping> no, i rarely recommend using the &
<mungojelly> perhaps give it its own terminal in the background, it spits errors sometimes
<whyrusleeping> just leave it running in a terminal
<AlexPGP> aha
<AlexPGP> ok
<_jkh_> whyrusleeping: hey there
<whyrusleeping> _jkh_: hello!
cemerick has quit [Ping timeout: 244 seconds]
<_jkh_> whyrusleeping: so, it’s not urgent, but at some point it would be interesting / useful to have a little electronic fireside chat or something where we bat around some ideas for how to expose ipfs in FreeNAS in a way that makes it easy to import/identify/name things in ipfs since the ipfs datastore itself is “opaque” in terms of names, and /ipfs is a read-only namespace that I’m not even truly sure how you’d use in a production scenario
<_jkh_> (full disclosure - I haven’t even played with ipns yet)
pfraze has joined #ipfs
<_jkh_> whyrusleeping: in 10-BETA we were hoping to have at least the rudiments of doing more than simply enabling / configuring the ipfs service itself (which is done in ALPHA) and “doing more” is kind of an open question
chriscool has quit [Quit: Leaving.]
<whyrusleeping> _jkh_: i'd love to!
<_jkh_> whyrusleeping: however, on the plus side, that also means we’re totally open to creating the UI from scratch since there’s nothing there now. :)
<whyrusleeping> that sounds like fun
chriscool has joined #ipfs
<_jkh_> whyrusleeping: to use bittorrent as an analogy, we’re still in the day when everything was done from the CLI
<whyrusleeping> i have time tomorrow afternoon, most of tuesday, or all day wednesday
<_jkh_> whyrusleeping: and the first bittorrent clients which made it really easy to manage your torrents, drag and drop files, blah blah blah were still to come and would really open it up to the masses
<whyrusleeping> yeah, agreed. we're getting to that point now
<_jkh_> whyrusleeping: I’d like FreeNAS to be one of the ways in which ipfs opened up to the masses
<whyrusleeping> sounds mutually beneficial :)
<_jkh_> so I think to start with, we need to be able to click on an existing share, directory or file within the file browser (which we have yet to bring in, but we have multiple candidates) and say “Share this!” or “Give me the hash for this already shared thing” (which is marked in the UI as shared)
<_jkh_> and if I understood ipns, probably also “name this thing FOO"
<_jkh_> then at least the FreeNAS folks can start pasting in the hashes / names in #freenas and say “Hey check out this content"
<whyrusleeping> so, ipns can be treated like a global 'home dir'
<_jkh_> sure, or just a namespace in which to cite “URLs”
<whyrusleeping> yeah
<_jkh_> without having to worry about opening up your NAS to allow access
<whyrusleeping> exactly
<_jkh_> that’s currently the biggest barrier to ad-hoc sharing between *NAS users
<_jkh_> everyone’s behind a NAT or a firewall (or both) and even for those with public IPs, they’d rather not have to create a guest account and figure out some out-of-band method for getting the credentials to the sharer
<_jkh_> so I figure we solve that problem first
<_jkh_> then move into the more interesting content delivery methods (as your roadmap docs say - ipfs can be used as a substrate to create higher level things)
<_jkh_> but first and foremost, we have to make it Easy
<_jkh_> because users of FreeNAS expect a pointy-clicky interface
<_jkh_> If they didn’t want that, they’d just run FreeBSD ;)
<_jkh_> the other question I had was the HTTP gateway
<whyrusleeping> haha, yeah. easy is a big thing
<whyrusleeping> http gateway questions?
<_jkh_> do you recommend that most hosts also support the gateway method directly, or simply use one of the known gateways out on the interweb somewhere
<_jkh_> currently we don’t have an option for that since we don’t know how important it is
<_jkh_> e.g. if you share a file via ipfs from your FreeNAS box foobar.mydomain.com
<whyrusleeping> hosting a gateway is generally only recommended if you want to donate your bandwidth and have a public server
<_jkh_> do you also expect / hope http://foobar.mydomain.com/ipfs to DTRT
<_jkh_> ah, OK
<_jkh_> so that sounds like a nice-to-have at most
<whyrusleeping> in the future we will add an option to the gateway to 'only serve *my* content'
<achin> i would guess that some people would like to host a gateway on their internal network (so it would be accessible to any number of internal hosts, but not the public)
<_jkh_> we could run one at freenas.org just as a public service and that would probably be enough
<whyrusleeping> as opposed to right now it being able to request any object through it
chriscool has quit [Read error: Connection reset by peer]
<_jkh_> whyrusleeping: ah, yes, well that’s another even bigger chat for the future - being able to create sharing groups with transparent crypto
dignifiedquire has joined #ipfs
<_jkh_> whyrusleeping: that one will probably involve us working together in the repo and some number of paid engineers if the TrueNAS customers express any serious interest
<_jkh_> whyrusleeping: but that’s chapter 5 in the book, at least, and we’re still on chapter one. :)
<_jkh_> I’ve also got a storage team spinning up in India (who I just got back from visiting a few days ago) and they’re looking for Interesting Projects so I mentioned you guys
<_jkh_> we’ll see where that goes, if anywhere
<_jkh_> 1.3 billion people, there’s got to be a need for ad-hoc file sharing. ;)
<whyrusleeping> :D
<whyrusleeping> yes!
<_jkh_> whyrusleeping: OK, so I’ll come looking for you tomorrow or the next day and we can brainstorm a bit
chriscool has joined #ipfs
<_jkh_> the UI should also be in better shape over the next few days - a lot of refactoring going on at the moment and, of course, everything is all broken. :-|
jfntn has joined #ipfs
* achin is now looking into freeNAS with great interest
<whyrusleeping> _jkh_: i look forward to it!
fingertoe has quit [Ping timeout: 246 seconds]
chriscool has quit [Quit: Leaving.]
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
<_jkh_> whyrusleeping: I also hear that our own @scottk has been talking to you guys about sticking NFS ganesha or something on top of ipfs to allow easy NFS importing of data
<_jkh_> I think this is a cool idea, though I have lots of questions about how the names would be preserved for either that or an SMB ingest shim
<_jkh_> still, that would be a KILLER evolution in taking this stuff truly mainstream
<whyrusleeping> _jkh_: huh, i actually havent heard about nfs ganesha before
<_jkh_> it’s pretty slick - a lot of folks are using it to do clustered NFS implementations
<_jkh_> at SNIA this year, there was even a Red Hat presentation from someone who had taken nfs-ganesha, samba and glusterfs underneath to create a single, hybrid view of a clustered filesystem (gluster) supporting both SMB and NFS access with a single, unified locking and permissions model
<_jkh_> that’s, like, the holy grail of file sharing
wopi has quit [Read error: Connection reset by peer]
<_jkh_> so even if it doesn’t quite work all the way yet, it’s a big milestone
chriscool1 has joined #ipfs
<_jkh_> I’m watching their work with great interest
<whyrusleeping> oooOooo! that sounds a lot like the company i used to work for
wopi has joined #ipfs
<whyrusleeping> except, open source, and stuff
<_jkh_> :D
<ion> nfs-ganesha, huh? Cool.
<victorbjelkholm> anyone have any good file database to recommend me?
<ion> hth
<victorbjelkholm> hah, thanks a lot! Very useful :) /s
acorbi has joined #ipfs
<ion> whyrusleeping: How about making --chunker=rabin the default in master so it will get more testing? If any issues emerge, it can be reverted before the next release.
<whyrusleeping> ion: i like that idea
<whyrusleeping> i've been meaning to do so for a little while now
<whyrusleeping> _jkh_: i appreciate that all your server names are space themed :)
<whyrusleeping> (for trueNAS)
wtbrk has joined #ipfs
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWVON
<ipfsbot> node-ipfs-api/solid-tests 2cd11f0 David Dias: gulpify start and stop of daemons
<ReactorScram> Hey didn't there used to be an IPFS windows binary?
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWV3P
<ipfsbot> node-ipfs-api/solid-tests 06f6195 David Dias: add browser to the test suit
chriscool1 has quit [Ping timeout: 250 seconds]
<mungojelly> /join #ganesha
<mungojelly> hrm :)
<mungojelly> i like the god ganesha so maybe i'll also like the filesystem!?
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWVsd
<ipfsbot> node-ipfs-api/solid-tests 25f9077 David Dias: enable browser tests
<whyrusleeping> ReactorScram: yeah, its broken in 0.3.8
anticore has quit [Quit: bye]
<whyrusleeping> and we havent been able to merge any code for a while, so the fix i have is just hanging out
<ReactorScram> whyrusleeping: ok just wondered. Someone on Voat was confused about how IPFS works so I was explaining to them a bit
<whyrusleeping> ah, nice. point them to the 0.3.7 binary for windows, for now
NeoTeo has quit [Quit: ZZZzzz…]
acidhax has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
domanic has joined #ipfs
Matoro has joined #ipfs
domanic has quit [Ping timeout: 252 seconds]
forthqc has quit [Quit: Lämnar]
dignifiedquire has joined #ipfs
dignifiedquire_ has joined #ipfs
<dignifiedquire_> daviddias: what do you mean by “sc”?
dignifiedquire has quit [Client Quit]
dignifiedquire_ is now known as dignifiedquire
<daviddias> I got this error:
<daviddias> 25 10 2015 19:24:56.616:ERROR [launcher.sauce]: Can not start chrome (OS X 10.11)
<daviddias> Failed to start Sauce Connect:
<daviddias> 25 Oct 19:24:55 - Error shutting down overlapping tunnels.
<daviddias> 25 Oct 19:24:55 - Sauce Connect could not establish a connection.
<daviddias> 25 Oct 19:24:55 - Please check your firewall and proxy settings.
<daviddias> 25 Oct 19:24:55 - You can also use sc --doctor to launch Sauce Connect in diagnostic mode.
<daviddias> 25 Oct 19:24:55 - Cleaning up.
<dignifiedquire> do you have valid credentials?
<daviddias> I've my username and token set up
wopi has quit [Read error: Connection reset by peer]
<dignifiedquire> hmm
<dignifiedquire> what versions are you running?
<dignifiedquire> (karma and karma-sauce-launcher)
wopi has joined #ipfs
Encrypt has quit [Quit: Quitte]
anticore has joined #ipfs
<dignifiedquire> daviddias: I’m trying to debug this multipart stuff and for that I need to inspect the requests I’m sending but I don’t know how to properly use netcat, if I do “nc -l 5001” and run “ipfs daemon” nothing is printed out when connecting via the node-api
<dignifiedquire> any clues to what I’m doing wrong?
<dignifiedquire> / cc whyrusleeping jbenet and everyone who knows sth about networking and unix tools
<whyrusleeping> nc -l is for listening
<whyrusleeping> run ipfs daemon
<whyrusleeping> er
<whyrusleeping> what are you trying to do?
<whyrusleeping> i recommend wireshark
<daviddias> dignifiedquire: "karma-browserify": "^4.4.0",
<daviddias> "karma-mocha": "^0.2.0",
<daviddias> "karma-sauce-launcher": "^0.3.0",
* daviddias multidebugging mode
<dignifiedquire> whyrusleeping: I want to see the raw http request that is sent by node-ipfs-api to the daemon
<ion> nc won’t help you there.
<ion> As whyrusleeping said, wireshark will.
<whyrusleeping> dignifiedquire: you could do 'nc -l 5556 | tee file | nc localhost 5001'
<whyrusleeping> and then point your node-ipfs-api at port 5556
<ion> But that’s one way only.
<ion> Responses from localhost:5001 will end up on stdout.
<dignifiedquire> that’s fine for now
<dignifiedquire> trying that now
<whyrusleeping> yeah, he just wanted to see the request, not the response
<daviddias> might not work, because if there is some 'handshaking' when the ipfsAPI connects to the go-ipfs HTTP API, it will think there is no server there
<daviddias> but worth a shot
<whyrusleeping> could use a named pipe and some more magic to make it bidirectional
<whyrusleeping> mkfifo a && nc -l 5556 < a | tee file | nc localhost 5001 | tee responsefile > a
<whyrusleeping> probably
* whyrusleeping flies away on a buffalo
<dignifiedquire> whyrusleeping: thanks will try that as well, my wireshark install is broken because of some issues with x11 on osx, so happy to not use it
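(A Python alternative to the mkfifo/nc pipeline above, for anyone in the same boat without a working wireshark: a tiny tee-ing TCP proxy that listens on 5556, forwards to the API on 5001, and logs each direction to a file. This is only a debugging sketch; the port numbers are just the ones used in the discussion.)

    # Bidirectional "tee" proxy: point node-ipfs-api at 127.0.0.1:5556 and it
    # forwards to the real API on 5001, logging requests to request.log and
    # responses to response.log. One thread per direction per connection;
    # good enough for inspecting traffic, not production code.
    import socket
    import threading

    LISTEN_PORT, TARGET = 5556, ('127.0.0.1', 5001)

    def pump(src, dst, logfile):
        with open(logfile, 'ab') as log:
            while True:
                chunk = src.recv(4096)
                if not chunk:
                    break
                log.write(chunk)
                log.flush()
                dst.sendall(chunk)
        try:
            dst.shutdown(socket.SHUT_WR)  # pass EOF along without killing the other direction
        except socket.error:
            pass

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', LISTEN_PORT))
    server.listen(5)

    while True:
        client, _ = server.accept()
        upstream = socket.create_connection(TARGET)
        threading.Thread(target=pump, args=(client, upstream, 'request.log')).start()
        threading.Thread(target=pump, args=(upstream, client, 'response.log')).start()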
simpbrain has quit [Ping timeout: 244 seconds]
<dignifiedquire> daviddias: that’s strange, it’s all installing fine for me
<daviddias> it says on the connect.log that I'm not authorised
zeroish has joined #ipfs
<daviddias> ok, fixed that one (apparently username !== email)
<dignifiedquire> gotta go now though
<dignifiedquire> daviddias: yeah need to be careful with the login stuff
<daviddias> now I'm getting an 'Error shutting down overlapping tunnels'
<daviddias> ok, thanks, catch up with you later :) we are so close to getting this all done! :D
<ipfsbot> [go-ipfs] whyrusleeping created fix/swarm-con-err (+1 new commit): http://git.io/vWVuv
<ipfsbot> go-ipfs/fix/swarm-con-err 9094345 Jeromy: make swarm connect return an error when it fails...
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1900: make swarm connect return an error when it fails (master...fix/swarm-con-err) http://git.io/vWVus
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
<hoboprimate> does ipfs clash in some way with Selinux?
<hoboprimate> trying to think if that's the reason content added on my node is distributed so slowly on the ipfs network...
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWVzo
<ipfsbot> node-ipfs-api/solid-tests bc72e36 David Dias: add run sequence
<richardlitt> Afternoon'
<richardlitt> I feel like I'm missing something: How am I supposed to remember where things are in IPFS? Hashes are not easily remembered by humans.
<ion> Bookmark it.
<ion> URLs in general are not easily remembered by humans, that’s why we have bookmarks.
<richardlitt> Bookmark using my browser?
<ion> For example. It will be better when browsers support IPFS natively, but there’s nothing wrong with bookmarking an HTTP gateway URL.
<richardlitt> In my normal file system, I have a folder called "Pictures". Is there something equivalent I can use locally?
<richardlitt> btw, thanks ion
<ion> If you have mounted /ipfs in the filesystem, you can make a symbolic link to an IPFS object.
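(Concretely, assuming /ipfs is mounted via "ipfs mount", something like the two lines below gives a stable local name for an immutable directory; the hash is a placeholder.)

    # Give an immutable IPFS directory a friendly local name via a symlink.
    # Assumes /ipfs is FUSE-mounted ("ipfs mount"); the hash is a placeholder.
    import os

    os.symlink('/ipfs/QmSomeDirectoryHashHere', os.path.expanduser('~/Pictures-archive'))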
mvanveen has joined #ipfs
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWVgD
<ipfsbot> node-ipfs-api/solid-tests e427e91 David Dias: fix swarm.connect and swarm.peers tests
<rendar> ion: but if you make a symbolic link to an ipfs object, you don't have a real link to a real file
simpbrain has joined #ipfs
<rendar> ion: that's because ipfs breaks the file into smaller chunks
<richardlitt> (Figuring out how to mount)
<ion> rendar: The /ipfs mount shows unixfs objects as directories and files.
<ion> s|shows|has|
<rendar> ion: i see, does it use the FUSE thing?
<ion> yes
<rendar> ok then
anticore has quit [Remote host closed the connection]
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
atrapado has joined #ipfs
rht has joined #ipfs
martinkl_ has quit [Ping timeout: 250 seconds]
<victorbjelkholm> richardlitt, what I usually do when I find something interesting is to download it and store it under my own node. It'll be the same hash, but I can find it under my ID
<richardlitt> victorbjelkholm: What do you mean by 'find it under my ID'?
<victorbjelkholm> richardlitt, when you use IPNS, your ID (from IPFS) is your name, that resolves to any hash you point it to. So I point it to a folder where I keep things I want to access from my own computer and elsewhere
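(A rough sketch of that workflow against the local HTTP API: pin the directory you found, then publish your IPNS name to point at it. The daemon is assumed to be on the default API port 5001, and the response field names may differ slightly between go-ipfs versions.)

    # Pin a directory and point your IPNS name (your peer ID) at it, via the
    # local HTTP API on port 5001. The hash is a placeholder.
    import json
    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    API = 'http://localhost:5001/api/v0'
    DIR_HASH = 'QmSomeDirectoryHashHere'

    def api(path):
        return json.loads(urlopen(API + path).read().decode('utf-8'))

    api('/pin/add?arg=' + DIR_HASH)                       # keep a local copy
    result = api('/name/publish?arg=/ipfs/' + DIR_HASH)   # update /ipns/<your ID>
    print('published %s -> %s' % (result.get('Name'), result.get('Value')))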
martinkl_ has joined #ipfs
<ion> It seems IPNS could potentially be used for streaming already. I’m publishing a new pointer every few seconds and the public gateway seems to see the updates rather well when polling <https://ipfs.io/ipns/QmUp7mL2tvPBBCRqaYoTeiUhicvMpuS75QmDrwNmcRpgHp>.
<ipfsbot> [node-ipfs-api] diasdavid pushed 2 new commits to solid-tests: http://git.io/vWVrT
<ipfsbot> node-ipfs-api/solid-tests 5becd24 David Dias: fix ls
<ipfsbot> node-ipfs-api/solid-tests faf9495 David Dias: fix refs
NeoTeo has joined #ipfs
<victorbjelkholm> ion, I'm not wrong, one of the examples involves streaming?
<victorbjelkholm> s/I'm/If I'm
<multivac> victorbjelkholm meant to say: ion, If I'm not wrong, one of the examples involves streaming?
acorbi has quit [Ping timeout: 244 seconds]
jwheh has joined #ipfs
<jwheh> messing around with the ipfs protocol... this is the second time i've run into this with protobuf... still cannot figure it out. http://pastie.org/private/tkjkbblqt43o4bp1ktekkq wtf is the data at the end there? it's between almost all network frames and even between the header and payload sometimes...
<jwheh> and it's not valid for any proto schemas
<dignifiedquire> daviddias: so I’m seeing lots of failing things in the browser
<dignifiedquire> looks like you broke everything again ;)
<dignifiedquire> why did you remove isNode and such?
<dignifiedquire> daviddias: also seems like cors are not enabled now: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:9876' is therefore not allowed access.
<daviddias> I haven't been able to run the tests in the browser https://github.com/ipfs/node-ipfs-api/pull/81#issuecomment-150965704
<daviddias> the isNode was not used anymore once the start and stop daemons passed to gulp
<dignifiedquire> ah okay right
<ipfsbot> [node-ipfs-api] Dignifiedquire pushed 1 new commit to solid-tests: http://git.io/vWVoH
<ipfsbot> node-ipfs-api/solid-tests 57056ae dignifiedquire: Add debug mode
<dignifiedquire> I’ve just pushed a debug mode
<dignifiedquire> run it like this: DEBUG=true gulp test:browser
<dignifiedquire> that will run the tests in chrome
<dignifiedquire> so you can see issues and debug locally
<dignifiedquire> if you comment out singleRun: true it stays open and you can click on “DEBUG” in the karma browser window, open up devtools and dig into the failures
atrapado has quit [Quit: Leaving]
rendar has quit [Ping timeout: 240 seconds]
chriscool has joined #ipfs
chriscool has quit [Client Quit]
rendar has joined #ipfs
<daviddias> thanks dignifiedquire , setting up now HTTPHeaders to enable CORS
chriscool has joined #ipfs
xelra has quit [Ping timeout: 256 seconds]
wtbrk has quit [Ping timeout: 250 seconds]
xelra has joined #ipfs
kreek has quit [Quit: Connection closed for inactivity]
lazyUL has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
<nicolagreco> can someone tell me if I am wrong?
<nicolagreco> Is there anything on the web that builds a dht over the pages I visit?
<nicolagreco> so let's say I am on http://google.com and I cache it
<ipfsbot> [node-ipfs-api] diasdavid pushed 2 new commits to solid-tests: http://git.io/vWVX5
<ipfsbot> node-ipfs-api/solid-tests 1da5999 David Dias: add option to change CORS
<ipfsbot> node-ipfs-api/solid-tests 5b90cd6 David Dias: Merge branch 'solid-tests' of github.com:ipfs/node-ipfs-api into solid-tests
<nicolagreco> other users that go to google and google is down, can still get the page from me
<nicolagreco> a distributed archive.org
<nicolagreco> I think one can do this with ipfs for sharing the actual content
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<ion> nicolagreco: That needs some kind of a trust model, either you trust a specific central archiving authority or have a web of trust kind of thing with your friends. I don’t know whether anyone has written a thing like that yet.
<nicolagreco> unless google signs the page and you have the signature
<ion> The site might as well use IPFS natively then. :-)
<nicolagreco> yes exactly
<nicolagreco> but that is active
<nicolagreco> while I want something passing
<nicolagreco> passive
<nicolagreco> so not the providers but the users
<nicolagreco> do the distribution
<nicolagreco> so to give you an example
<nicolagreco> I am now looking for a pdf online
<nicolagreco> If I knew the hash of the pdf, I could have found it on ipfs/torrent
<nicolagreco> so I went on archive.org and the link is broken
<nicolagreco> in other words, if I had a cached mapping from links to ipfs hashes, that would have been a win
<nicolagreco> since I knew it was there like 10 days ago
<nicolagreco> I can just ask the network if someone knows what is the hash of a link
<nicolagreco> and then look for that hash
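(Nothing like this exists as a tool as far as anyone here has said; the following is just a toy Python sketch of that cached link-to-hash mapping: fetch a URL, add the bytes with the ipfs CLI, and remember url -> hash in a local JSON file. The index format and file names are made up for this example.)

    # Toy "cached mapping of links to hashes": fetch a URL, add the bytes to
    # ipfs, and record url -> hash in ~/.url-to-ipfs.json so the content can
    # be found again even if the original link dies.
    import json
    import os
    import subprocess
    import sys
    import tempfile
    try:
        from urllib.request import urlopen   # Python 3
    except ImportError:
        from urllib2 import urlopen          # Python 2

    INDEX = os.path.expanduser('~/.url-to-ipfs.json')

    def remember(url):
        body = urlopen(url).read()
        with tempfile.NamedTemporaryFile(delete=False) as tmp:
            tmp.write(body)
            tmp_path = tmp.name
        try:
            out = subprocess.check_output(['ipfs', 'add', '-q', tmp_path])
        finally:
            os.unlink(tmp_path)
        hash_ = out.decode('utf-8').strip().splitlines()[-1]
        index = json.load(open(INDEX)) if os.path.exists(INDEX) else {}
        index[url] = hash_
        with open(INDEX, 'w') as f:
            json.dump(index, f, indent=2)
        return hash_

    if __name__ == '__main__':
        print(remember(sys.argv[1]))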
<victorbjelkholm> Hah, found "Integrity Property Financial Services"... IPFS :)
<victorbjelkholm> cooler logo
acorbi has joined #ipfs
chriscool has quit [Ping timeout: 256 seconds]
<mungojelly> one liner to print a random word of gutenberg's shakespeare: python -c "import random; print random.choice(open(\"/ipfs/QmdKaHunC5t9eSbGFBRR1jgNJDCdWhjkKBZqsvyQ5phP1T\").read().split())"
<codehero> invalid syntax
<codehero> hrm
<codehero> ah
<codehero> python2
<codehero> there we go
chriscool has joined #ipfs
<mungojelly> how does python3 change that syntax? should i learn python3?
<codehero> would be a good idea
<mungojelly> anyway that's just the first idea that popped into my head of a program that depends on ipfs data
<mungojelly> it says "thou" sometimes so i assume that means it's working
<mungojelly> ooh this time i got "discretion," with the comma, i like that one
<ion> It would be more efficient to split the file into chunks (which is already done, although not at line boundaries), annotate each chunk pointer with its line count and then traverse to the chunk that contains the line you chose randomly.
<ion> s/line/word/g
<multivac> ion meant to say: It would be more efficient to split the file into chunks (which is already done, although not at word boundaries), annotate each chunk pointer with its word count and then traverse to the chunk that contains the word you chose randomly.
<ion> You could have small chunks in a tree with a higher depth, too.
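(In plain Python the lookup ion describes would look something like the sketch below: chunks annotated with word counts, a uniformly random word index, and a walk to the one chunk that contains it. Over a real unixfs DAG the counts would hang off the links, so only that chunk would need fetching; this is just an in-memory illustration.)

    # In-memory illustration of the weighted lookup: pick a random word index
    # over the whole text, then walk the per-chunk word counts to find the one
    # chunk that contains it.
    import random

    def build_index(chunks):
        index, total = [], 0
        for chunk in chunks:
            total += len(chunk.split())
            index.append((total, chunk))     # cumulative word count per chunk
        return index, total

    def word_at(index, n):
        prev = 0
        for cumulative, chunk in index:
            if n < cumulative:
                return chunk.split()[n - prev]
            prev = cumulative

    chunks = ['to be or not to be', 'that is the question']
    index, total = build_index(chunks)
    print(word_at(index, random.randrange(total)))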
<mungojelly> efficient in terms of computer time when executing it you mean? i guess a high speed shakespeare random worder might be useful. caching the shakespeare would be the first step!
<mungojelly> but the point is it's just ridiculously easy to require a specific piece of data into something
<mungojelly> my next idea is a glitcher that refers to a few images and screws with them
<mungojelly> putting urls into things has sorta theoretically always been similar, but in practice you can FEEL that with urls you have no real clue what data you're referring to; you just hope it holds together for a little while, and nothing interesting ever does
acidhax_ has joined #ipfs
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
AlexPGP has quit [Remote host closed the connection]
acidhax has quit [Ping timeout: 272 seconds]
gamemanj has quit [Ping timeout: 240 seconds]
sonatagreen has quit [Ping timeout: 264 seconds]
sonatagreen has joined #ipfs
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
go111111111 has quit [Quit: Leaving]
reit has quit [Quit: Leaving]
reit has joined #ipfs
Matoro has quit [Ping timeout: 264 seconds]
dignifiedquire has quit [Quit: dignifiedquire]
border0464 has quit [Ping timeout: 252 seconds]
border0464 has joined #ipfs
lazyUL has joined #ipfs
amade has quit [Quit: leaving]
anticore has joined #ipfs
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
Matoro has joined #ipfs
chriscool has quit [Ping timeout: 255 seconds]
acorbi has quit [Ping timeout: 250 seconds]
<mungojelly> this makes a smudged version of a purportedly public domain image of some old books: convert /ipfs/QmcP4roJWETbR7x8PpyypM3WoMkwMZEPu84SFr5RmFc6mX -interpolate nearest -virtual-pixel mirror -spread 70 transformed-old-books.jpg
NeoTeo has quit [Quit: ZZZzzz…]
<mungojelly> it feels trippy to make a script that does something to a file, then delete the file, and the script still works
chriscool has joined #ipfs
Encrypt has joined #ipfs
Matoro has quit [Ping timeout: 255 seconds]
pfraze has quit [Remote host closed the connection]
chriscool has quit [Quit: Leaving.]
chriscool has joined #ipfs
captain_morgan has quit [Ping timeout: 240 seconds]
<kpcyrd> mungojelly: use ( ) for the print
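(For the earlier python3 question: the same one-liner with print as a function, using the Shakespeare hash mungojelly posted above, runs on Python 3. As before, it only works if /ipfs is mounted.)

    python3 -c "import random; print(random.choice(open('/ipfs/QmdKaHunC5t9eSbGFBRR1jgNJDCdWhjkKBZqsvyQ5phP1T').read().split()))"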
<nicolagreco> is there a list of ipfs websites?
<sonatagreen> not that i'm aware of, we should try to get a wiki/directory thing going
anticore has quit [Ping timeout: 250 seconds]
jfntn has quit [Ping timeout: 240 seconds]
lazyUL has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<mungojelly> my first thought is to call it The Immutable Wiki or The Permanent Wiki and to just start it as links to git repos, and then if something isn't based off the most recent version we can just use git to merge it