whyrusleeping changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- code of conduct at https://github.com/ipfs/community/blob/master/code-of-conduct.md -- sprints + work org at https://github.com/ipfs/pm/ -- community info at https://github.com/ipfs/community/
magneto1 has joined #ipfs
devbug has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
ScoreUnder has joined #ipfs
Score_Under has quit [Ping timeout: 268 seconds]
voxelot has quit [Ping timeout: 256 seconds]
Score_Under has joined #ipfs
ScoreUnder has quit [Ping timeout: 268 seconds]
tsenior`` has joined #ipfs
Spinnaker has quit [Ping timeout: 256 seconds]
<ipfsbot> [go-ipfs] whyrusleeping created godep-update (+2 new commits): http://git.io/vZbsR
<ipfsbot> go-ipfs/godep-update 973c5fa Jeromy: update go-peerstream to newest version...
<ipfsbot> go-ipfs/godep-update 35a5ca0 Jeromy: update go-datastore to latest...
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1709: Godep update (master...godep-update) http://git.io/vZbsg
Vylgryph has joined #ipfs
pfraze_ has joined #ipfs
shea256 has joined #ipfs
wasabiiii1 has joined #ipfs
drathir87 has joined #ipfs
gwollon has joined #ipfs
wasabiiii has quit [Ping timeout: 244 seconds]
Vyl has quit [Read error: Connection reset by peer]
<giodamelio> I just got my new blog up and running on ipfs(https://gateway.ipfs.io/ipns/QmWGb7PZmLb1TwsMkE1b8jVK4LGceMYMsWaSmviSucWPGG, with versioned history at https://gateway.ipfs.io/ipns/QmWGb7PZmLb1TwsMkE1b8jVK4LGceMYMsWaSmviSucWPGG/history/), and I was wondering if there is any way to point my domain to the gateway?
prosody has quit [Ping timeout: 268 seconds]
ehd has quit [Ping timeout: 268 seconds]
gwollon has joined #ipfs
M-Kodo has quit [Ping timeout: 268 seconds]
d6e has quit [Ping timeout: 268 seconds]
gwillen has quit [Ping timeout: 268 seconds]
gwollon has quit [Changing host]
pfraze has quit [Read error: Connection reset by peer]
akhavr has quit [Remote host closed the connection]
ehd_ has joined #ipfs
Davididid has quit [Ping timeout: 264 seconds]
M-trashrabbit has quit [Ping timeout: 264 seconds]
ehd_ has joined #ipfs
drathir has quit [Ping timeout: 264 seconds]
ehd_ has quit [Changing host]
gwollon is now known as gwillen
drathir87 is now known as drathir
tsenior`` has quit [Ping timeout: 240 seconds]
ehd_ is now known as ehd
ehd is now known as Guest62677
akhavr has joined #ipfs
prosody_ has joined #ipfs
M-Kodo has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
prosody_ is now known as prosody
M-trashrabbit has joined #ipfs
devbug has quit [Read error: Connection reset by peer]
shea256 has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<lgierth> giodamelio: yes there is! https://github.com/ipfs/notes/issues/39
shea256 has joined #ipfs
<lgierth> use option 2 for now
<lgierth> also, use the /ipfs/<path>, as ipns isn't complete yet
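The setup lgierth links to relies on the dnslink convention: a TXT record on the domain tells the gateway what to serve, and the name itself must also resolve to a gateway host. A hedged sketch, with made-up names (`blog.example.com` and the hash are placeholders, not giodamelio's real setup):

```
; hypothetical zone fragment: the gateway reads this TXT record to know
; which IPFS path to serve for this host; the host must separately
; resolve to a gateway (A or CNAME record, as your DNS setup allows)
blog.example.com.  IN  TXT  "dnslink=/ipfs/QmYourSiteHashHere"
```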
shea256 has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
devbug has joined #ipfs
elavoie_ has joined #ipfs
Luzifer_ has joined #ipfs
feross_ has joined #ipfs
null_rad- has joined #ipfs
pfraze has joined #ipfs
pfraze_ has quit [Ping timeout: 250 seconds]
kumavis_ has joined #ipfs
necro666_ has joined #ipfs
AtnNn_ has joined #ipfs
Skaag_ has joined #ipfs
xenkey1 has joined #ipfs
daviddias_ has joined #ipfs
cleichner_ has joined #ipfs
svetter_ has joined #ipfs
M-jbenet1 has joined #ipfs
<blame> What is the ideal likelihood of a hash bit being 1 in a bloom filter?
akhavr has quit [*.net *.split]
Guest62677 has quit [*.net *.split]
reit has quit [*.net *.split]
AtnNn has quit [*.net *.split]
FunkyELF has quit [*.net *.split]
daviddias has quit [*.net *.split]
cleichner has quit [*.net *.split]
null_radix has quit [*.net *.split]
oed has quit [*.net *.split]
nicknikolov has quit [*.net *.split]
xenkey has quit [*.net *.split]
Skaag has quit [*.net *.split]
svetter has quit [*.net *.split]
giodamelio has quit [*.net *.split]
feross has quit [*.net *.split]
elavoie has quit [*.net *.split]
M-jbenet has quit [*.net *.split]
davidar has quit [*.net *.split]
kumavis has quit [*.net *.split]
ogd has quit [*.net *.split]
Luzifer has quit [*.net *.split]
necro666 has quit [*.net *.split]
elavoie_ is now known as elavoie
svetter_ is now known as svetter
Luzifer_ is now known as Luzifer
daviddias_ is now known as daviddias
cleichner_ is now known as cleichner
oed has joined #ipfs
feross_ is now known as feross
<lgierth> jbenet: could you add dnslink-deploy.git to the website team, and also add richardlitt to it? i think we locked ourselves out of that repo by transferring it over
M-davidar has joined #ipfs
<jbenet> giodamelio: that's great!! be wary that ipns is still being worked on and not robust yet.
JohnClare has joined #ipfs
FunkyELF has joined #ipfs
kumavis_ is now known as kumavis
ogd has joined #ipfs
akhavr has joined #ipfs
Guest62677 has joined #ipfs
reit has joined #ipfs
Guest62677 has quit [Changing host]
Guest62677 has joined #ipfs
giodamelio has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
JohnClare has quit [Ping timeout: 252 seconds]
d6e has joined #ipfs
<blame> (experimental) search and indexing are now a thing at http://cachewarmer.blamestross.com/
<jbenet> lgierth: done
<lgierth> jbenet: thanks
<blame> right now the only indexed page is the ipfs homepage!
<lgierth> i just noticed the protected branches feature which prevents force-pushes to certain branches
<achin> i'm looking for a description for the on-disk block format (i.e. how to read files in ~/.ipfs/blocks/). where's the best place to look up this info?
ianopolous2 has joined #ipfs
<ipfsbot> [go-ipfs] jbenet closed pull request #1709: Godep update (master...godep-update) http://git.io/vZbsg
ianopolous has quit [Ping timeout: 246 seconds]
* blame quietly views launch errors
<blame> Foo, Blame, fdas..
<jbenet> lgierth: yeah its nice. could protect master in go-ipfs.
<lgierth> s/could/should/
<lgierth> :)
<blame> It is kinda weird watching people try it in the log.
<lgierth> heh
<blame> big problem is that bloom filtering all the "word shaped things" in a webpage tends to saturate the filter
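blame's earlier question has a standard answer: a Bloom filter minimizes its false-positive rate when about half its bits are set, which is also why saturating the filter with every "word shaped thing" hurts so much. A small sketch of the arithmetic (the parameters are illustrative, not from any real index):

```go
package main

import (
	"fmt"
	"math"
)

// bloomStats returns, for a filter of m bits holding n items, the
// optimal hash-function count k, the expected fraction of bits set,
// and the resulting false-positive rate. At the optimal k = (m/n)·ln2
// the expected fill is ~1/2: a well-dimensioned Bloom filter is about
// half ones, and pushing the fill past that inflates false positives.
func bloomStats(m, n float64) (k, fill, fpr float64) {
	k = math.Round(m / n * math.Ln2)
	fill = 1 - math.Exp(-k*n/m) // expected fraction of 1-bits
	fpr = math.Pow(fill, k)     // chance an absent item looks present
	return
}

func main() {
	k, fill, fpr := bloomStats(9600, 1000) // ~9.6 bits per item
	fmt.Printf("k=%.0f fill=%.3f fpr=%.4f\n", k, fill, fpr)
}
```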
<lgierth> OH on another irc today: the only thing worse than "your computer is insecure" is "i heard your computer is insecure"
<jbenet> the spec is a bit outdated, so code is best
devbug has quit [Ping timeout: 240 seconds]
<achin> that first link is broken :) ok i'll poke at the code, thanks
simonv3 has joined #ipfs
<achin> excellent, thanks jbenet
fazo has joined #ipfs
jfis has joined #ipfs
ScoreUnder has joined #ipfs
Score_Under has quit [Ping timeout: 246 seconds]
<jbenet> Afk for a while
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
voxelot has joined #ipfs
voxelot has joined #ipfs
Guest73396 has joined #ipfs
tsenior`` has joined #ipfs
<mafintosh> daviddias: i think this pattern doesn't work if i pass in a peer from a different version of ipfs-peer than you are requiring inside that mode, https://github.com/diasdavid/node-ipfs-mdns/blob/master/src/index.js#L22
<ipfsbot> [go-ipfs] whyrusleeping force-pushed feat/rm-worker from 3d2f516 to 63f72a5: http://git.io/vZyYa
<ipfsbot> go-ipfs/feat/rm-worker 47b85c7 Jeromy: dont need blockservice workers anymore...
<ipfsbot> go-ipfs/feat/rm-worker 63f72a5 Jeromy: remove context from HasBlock, use bitswap process instead...
<mafintosh> module not mode :)
tsenior`` has quit [Ping timeout: 268 seconds]
Quiark has joined #ipfs
shea256 has quit [Ping timeout: 255 seconds]
wasabiiii has joined #ipfs
wasabiiii1 has quit [Ping timeout: 260 seconds]
magneto1 has quit [Ping timeout: 260 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Guest73396 has quit [Read error: Connection reset by peer]
samiswellcool has quit [Quit: Connection closed for inactivity]
<fazo> wow, ipfs is really nice
<fazo> except managing pinned files with go-ipfs
<fazo> too bad I can't understand go :(
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> fazo: yeah, we want to find a nicer way to manage pinned files
<whyrusleeping> always interested to hear ideas on UX for that
<fazo> I tried to look how pinned files are stored, but no luck yet
<fazo> I mean how the references are stored
<fazo> I think a solution could be to associate a local name to the pinned hash
<fazo> because either you run "ipfs add", in which case you can get the name from the actual file names
<fazo> or you manually pin stuff
<fazo> so having the possibility to label a pinned hash could be just enough
<fazo> just locally
<fazo> and maybe storing the disk space that a file/folder takes up
<fazo> so when you "ipfs pin ls"
<fazo> you see the hashes, the labels (if they were ever set) and the size
<fazo> way easier to manage
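As a sketch of what fazo is proposing: a purely local table keyed by hash, holding a label and the pinned size, could back an annotated `ipfs pin ls`. Everything below is hypothetical (none of these names exist in go-ipfs):

```go
package main

import "fmt"

// PinInfo is a made-up record for fazo's idea: a local, optional label
// plus the disk space the pinned DAG occupies.
type PinInfo struct {
	Label string
	Size  int64 // bytes the pinned object takes up locally
}

// LabelStore maps pinned hashes to their local metadata.
type LabelStore map[string]PinInfo

func (s LabelStore) SetLabel(hash, label string, size int64) {
	s[hash] = PinInfo{Label: label, Size: size}
}

// Describe renders what a labelled `ipfs pin ls` line might look like;
// unlabelled hashes fall back to the bare hash, as today.
func (s LabelStore) Describe(hash string) string {
	info, ok := s[hash]
	if !ok {
		return hash
	}
	return fmt.Sprintf("%s  %s  %d bytes", hash, info.Label, info.Size)
}

func main() {
	s := LabelStore{}
	s.SetLabel("QmExampleHash", "my-blog", 2048)
	fmt.Println(s.Describe("QmExampleHash"))
}
```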
qqueue has joined #ipfs
shea256 has joined #ipfs
<whyrusleeping> yeah
<whyrusleeping> i like it
<whyrusleeping> something like 'ipfs pin add <hash> -tag=blah'
<whyrusleeping> or something
<fazo> then you could "ipfs label <hash> <name>" to set a label and "ipfs label <hash>" to get it
<fazo> it also serves as a way to remember what a hash actually is
<fazo> what it contains*
<fazo> @whyrusleeping: what do you think?
<fazo> yeah of course
<spikebike> whyrusleeping: and use tags for all metadata?
<spikebike> whyrusleeping: like say parent dir
<fazo> or if you pipe data to 'ipfs add' you could do something like '<data> | ipfs add --label=somefile'
<whyrusleeping> mmm, ipfs label... maayybeee
<fazo> whyrusleeping: well, it needs to be ironed out of course
<spikebike> maybe autopopulate some tags, like say creation-date
<fazo> but I think we really need a way to label stuff locally or else when we have hundreds of pinned files and want to remove that 2 gb folder...
<whyrusleeping> i think i like the direction this is going
<whyrusleeping> i was mulling over an ipfs alias for peer IDs
<whyrusleeping> but i think having a generic 'label' makes more sense
<fazo> yes, labeling peers too is a great idea
<spikebike> tags seem like a natural complement to a content addressable store.
<whyrusleeping> especially one who pulls heavily from git
<whyrusleeping> ;)
<fazo> uhm, I think label is the better word
<fazo> because you can have multiple tags on an object, but the label is unique
<fazo> the point is to recognize objects, categorizing them doesn't look as useful to me
<spikebike> ipfs stat would show all of an objects tags
<spikebike> seems like there should only be one unique label per object, the sha256 hash
<fazo> spikebike: by tags, you mean user definable tags or just metadata like the date it was pinned?
<spikebike> fazo: ideally some defaults like the standard unix metadata (size, owner, creation date, ...)
<fazo> spikebike: yes, of course, but I was thinking about a local, optional and user definable label used to recognize pinned objects
<spikebike> and user definable tags
<spikebike> fazo: an image app might for instance expose image metadata to the IPFS tags
<spikebike> (size, resolution, x/y, ISO, long/lat, etc.
<spikebike> )
<spikebike> IMO one of the biggest problems with current traditional filesystems is the lack of user defined metadata
<spikebike> thus each app has a different and incompatible way to add that metadata, like apple's resource forks, or custom giant binary blobs for databases, etc.
<fazo> spikebike: yeah, you would need to include metadata in the actual data to do that I think
<fazo> spikebike: I agree with local metadata and a user-definable label, maybe even tags even though I don't really see the point, but honestly I think this data needs to be local
<fazo> spikebike: I guess you're right. Though including metadata like that would probably destroy the protocol
<fazo> spikebike: say I have an mp3 song which is byte-for-byte identical to the one you have, but mine has different metadata
<spikebike> dunno. put and get could be similar, but there'd need to be a tag/untag that's similar
<fazo> spikebike: is the hash different? then it's not the same file?
<spikebike> tags shouldn't change the hash.
<spikebike> (IMO)
<spikebike> so when you upload a movie, picture, or audio file the sha256 of the data should never change, and you can always read it out to a normal filesystem unchanged
shea256_ has joined #ipfs
shea256 has quit [Read error: Connection reset by peer]
<spikebike> similarly changing metadata on a unix filesystem doesn't change the checksum of the underlying files
<spikebike> (unless you count the directory itself as a file)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<fazo> spikebike: yes, you're right. But there is no way to share them via IPFS if they can't change the hash
<fazo> spikebike: including file metadata in the protocol requires it to be rethought almost entirely
<fazo> spikebike: I don't understand, if metadata doesn't affect the hash of the data, where is the metadata stored?
<spikebike> hrm, well does the ipfs protocol include a directory object?
<spikebike> could the tags be hidden in the directory object, with each object holding a reference to the sha256 of its subobjects?
<spikebike> that way tag updates change the directory object, but not any of the referenced objects.
<fazo> spikebike: hey, it's not bad.
<fazo> spikebike: it's actually a nice idea
<spikebike> seems like IPNS isn't done anyways, and that's where you need the metadata mostly anyways
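spikebike's wrapper idea can be illustrated in miniature: keep tags in a directory-like object that references content by hash, so re-tagging rehashes only the wrapper while the referenced content keeps its hash. This is a toy model, not the real IPFS merkledag format:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// DirObject is an illustrative directory-like wrapper: it links to
// children by content hash and carries user-defined tags alongside.
type DirObject struct {
	Links map[string]string `json:"links"` // name -> child content hash
	Tags  map[string]string `json:"tags"`  // user-defined metadata
}

// hashOf hashes the canonical JSON encoding of a value. json.Marshal
// sorts map keys, so the encoding is deterministic.
func hashOf(v interface{}) string {
	b, _ := json.Marshal(v)
	return fmt.Sprintf("%x", sha256.Sum256(b))
}

func main() {
	song := []byte("identical mp3 bytes")
	songHash := fmt.Sprintf("%x", sha256.Sum256(song))

	dir := DirObject{
		Links: map[string]string{"song.mp3": songHash},
		Tags:  map[string]string{"genre": "ambient"},
	}
	before := hashOf(dir)

	dir.Tags["genre"] = "idm" // re-tag: only the wrapper's hash changes
	after := hashOf(dir)

	fmt.Println(songHash == dir.Links["song.mp3"], before != after)
}
```

So two users holding byte-identical songs still share the content hash; only their wrapper objects (and wrapper hashes) differ.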
Spinnaker has joined #ipfs
<fazo> spikebike: the way I'm using ipfs, I don't really need ipns
<fazo> spikebike: but I need the ability to label pinned objects :(
<spikebike> IMO enabling anything local only sounds like a bad idea
<fazo> spikebike: why is that?
<spikebike> It's not I(except for special local access)PFS
<spikebike> and if someone has a directory of valuable tagged data seems silly to not be able to share the tags
<spikebike> I'm all for userA making a new view of UserB's data with their own personal tags
<whyrusleeping> ^ +1
<spikebike> but userA should be able to share that view, not make it local only
voxelot has quit [Ping timeout: 250 seconds]
<spikebike> so the filesystem should be a global namespace.
<fazo> spikebike: makes sense
voxelot has joined #ipfs
<spikebike> there is a new tagging standard for files system, basically a generation newer than EXIF
<spikebike> kinda cool, I can now tag photos locally on my desktop, and then upload them to a web gallery that will make different views on the same data based on their tags
<fazo> spikebike: what about a way to recognize what a hash represents in the pinned hash list
<fazo> spikebike: I'm talking about go-ipfs
<fazo> spikebike: do you think being able to set a local name to recognize hashes and peers is a good idea?
<spikebike> well the directory object, let's think of it like a SQL database for the moment
<spikebike> could have a map between tags and sha256sums (referring to an object)
qqueue has quit [Ping timeout: 240 seconds]
<spikebike> of course you could query it by asking what tags you have for a specific checksum
<spikebike> which would describe the object
qqueue has joined #ipfs
<spikebike> and some tags would have special meaning, like say that they are pinned
<spikebike> so you could do a query to show all pinned files and their filenames.
<fazo> spikebike: wow, you found a way better solution than mine.
<spikebike> I'm biking home shortly, I'll ponder. a Sha256 for the directory object plays well with CAS and CoW
<spikebike> it would still be painful to iterate across all dirs though
magneto1 has joined #ipfs
akhavr1 has joined #ipfs
pfraze has quit [Remote host closed the connection]
wasabiiii has quit [Ping timeout: 250 seconds]
wasabiiii1 has joined #ipfs
compleatang has quit [Quit: Leaving.]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
compleatang has joined #ipfs
Score_Under has joined #ipfs
thomasreggi has joined #ipfs
<spikebike> hrmpf
<spikebike> trying to find the new tagging standard that digikam and piwigo support
ScoreUnder has quit [*.net *.split]
fazo has quit [*.net *.split]
giodamelio has quit [*.net *.split]
qqueue has quit [*.net *.split]
akhavr has quit [*.net *.split]
Guest62677 has quit [*.net *.split]
reit has quit [*.net *.split]
akhavr1 is now known as akhavr
magneto1 has quit [Ping timeout: 256 seconds]
jager is now known as stormbringer
stormbringer is now known as jager
giodamelio has joined #ipfs
Guest62677 has joined #ipfs
qqueue has joined #ipfs
Guest62677 has quit [Changing host]
Guest62677 has joined #ipfs
pfraze has joined #ipfs
HostFat has quit [Read error: Connection reset by peer]
nessence has quit [Ping timeout: 240 seconds]
<voxelot> is there a way to add a CORS exception to the daemon for the api when not running the api on a server but in the browser?
voxelot_ has joined #ipfs
devbug has joined #ipfs
voxelot has quit [Ping timeout: 246 seconds]
<voxelot_> nvm, as long as you accept local:8080 and run it on your gateway
sseagull has quit [Quit: leaving]
wasabiiii has joined #ipfs
wasabiiii1 has quit [Ping timeout: 265 seconds]
gatesvp has joined #ipfs
<gatesvp> @jbenet @whyrusleeping: trying to build Master and it's failing for me. The `blockstore` is referencing go-log-v1.0.0 (https://github.com/ipfs/go-ipfs/blob/master/blocks/blockstore/blockstore.go#L15)
<gatesvp> But that's not a file, it's a symlink: https://github.com/ipfs/go-ipfs/blob/master/vendor/go-log-v1.0.0
<whyrusleeping> gatesvp: what platform?
<gatesvp> which seems to break windows
<whyrusleeping> gatesvp: ughhhh, thats not what i wanted to hear :(
<gatesvp> hmmm... might have some tips here: http://stackoverflow.com/questions/5917249/git-symlinks-in-windows
<whyrusleeping> interesting...
step21 is now known as step21_
reit has joined #ipfs
simonv3 has quit [Quit: Connection closed for inactivity]
<gatesvp> @whyrusleeping: tried a couple of the options including junction and mklink and they're not working for me... it's funny because I can `ls` into the directory without issue
<gatesvp> but when I run `go build` it seems to lose its brain
<whyrusleeping> wtf
<whyrusleeping> thats annoying...
<whyrusleeping> we might just have to use makefiles for windows
<whyrusleeping> and have it replace all symlinks with windows 'mklink' or something
<whyrusleeping> this is really interesting because even cross compiling for windows wouldnt have caught the issue
<gatesvp> Symlinks are _very rarely_ used in Windows. The `mklink` command requires that you be an admin to run, the `junction` command is from the sysinternals tool I downloaded that is probably not on half of the Dev machines at my day job.
<gatesvp> i.e.: it's a power tool among power tools :)
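For reference, the Windows link commands being discussed: `mklink` ships with cmd.exe (directory symlinks need the `/D` flag and, by default, an elevated prompt), while `junction` is a separate Sysinternals download. The target path below is a placeholder, not the repo's actual layout:

```
:: run from an elevated cmd.exe; target path is hypothetical
mklink /D go-log-v1.0.0 ..\..\path\to\go-log
```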
<whyrusleeping> lol
<whyrusleeping> greatttt
<whyrusleeping> i totally used links in windows when i was gaming to trick steam into using my other hard drive
voxelot has joined #ipfs
voxelot has joined #ipfs
Leer10 has joined #ipfs
Tv` has quit [Ping timeout: 255 seconds]
voxelot_ has quit [Ping timeout: 246 seconds]
<whyrusleeping> gatesvp: what we could do is have a make script for windows (does windows even have make? crap) that replaces the symlink imports with the real folders they reference
<whyrusleeping> which kinda sucks, but it should work
<spikebike> windows doesn't have make afaik
<spikebike> cygwin can add make of course
Tv` has joined #ipfs
<M-davidar> whyrusleeping, Re pinning ux, I'd also like an option where you could specify a list of hashes, and it would pin a random subset of blocks without many seeders, up to a storage limit specified by the user
<M-davidar> Would be really helpful for archives
<spikebike> M-davidar: maybe something like ipfs pin --least-popular=10 or something
<spikebike> ?
<spikebike> where 10 = 10%
wasabiiii1 has joined #ipfs
<M-davidar> Preferably something like limit=10GB
<M-davidar> since 10% might be a huge amount of data
<spikebike> seems like that would be better in ~/.ipfs/config or similar
wasabiiii has quit [Ping timeout: 272 seconds]
<M-davidar> Sure, my point being that the user can control it somehow
M-davidar is now known as davidar
<spikebike> sure, just seems like pinning is an interactive decision
<spikebike> and often changed
<spikebike> total disk used not so much
devbug has quit [Ping timeout: 250 seconds]
<davidar> Sorry, I didn't mean total disk used, I meant how much space the user is willing to donate to hosting part of that specific dataset
<gatesvp> @whyrusleeping: had an IRL thing to do... so back in Windows-land, `symlink` seems unreliable within the `go build` and `make` doesn't work on Windows at all.
<davidar> As in, here's a 10TB dataset, but I only want to host 50GB of it
<davidar> And let a bunch of other people host the rest
<gatesvp> @whyrusleeping: even trying to replace symlink with the windows `mklink` version seems to fail so I'm investigating further options
<davidar> seems whyrusleeping is sleeping again...
<davidar> We need a bot that tells us when people's sleeping hours are
<davidar> Maybe just: timebot, what time is it for davidar?
<gatesvp> So the good news is that I can manually replace the symlink with the appropriate code and everything builds...
<gatesvp> so it's really just this one spot
<davidar> multivac, how would you feel if I scooped out your brain and replaced it with sopel.chat?
<multivac> I don't know
<davidar> Good answer
shea256 has joined #ipfs
shea256_ has quit [Read error: Connection reset by peer]
shea256 has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
voxelot has quit [Ping timeout: 246 seconds]
voxelot has joined #ipfs
devbug has joined #ipfs
Tv` has quit [Quit: Connection closed for inactivity]
wasabiiii has joined #ipfs
twistedline has quit [Ping timeout: 246 seconds]
wasabiiii1 has quit [Ping timeout: 240 seconds]
Quiark has quit [Ping timeout: 256 seconds]
Quiark has joined #ipfs
<borgtu> lgierth: I already wrote them after it happened, told them what it was and gave them a reference for ipfs.io. I don't think they will fix anything tho' :)
<whyrusleeping> gatesvp: sorry, was out for a little bit
<whyrusleeping> davidar: re: pinning least popular, thats an interesting idea. i like it
mildred has joined #ipfs
<whyrusleeping> although it shouldnt be part of the pin command
<whyrusleeping> it should just be its own thing, probably even its own tool that just uses the api
<ipfsbot> [go-ipfs] jbenet pushed 1 new commit to master: http://git.io/vZNqu
<ipfsbot> go-ipfs/master c6166e5 Juan Benet: Merge pull request #1622 from ipfs/feat/rm-worker...
pfraze has quit [Remote host closed the connection]
Leer10 has quit [Ping timeout: 240 seconds]
JohnClare has joined #ipfs
Leer10 has joined #ipfs
JohnClare has quit [Ping timeout: 252 seconds]
mildred has quit [Ping timeout: 255 seconds]
voxelot has quit [Ping timeout: 265 seconds]
voxelot has joined #ipfs
Vendan has quit [K-Lined]
Vendan has joined #ipfs
mildred has joined #ipfs
wasabiiii1 has joined #ipfs
wasabiiii has quit [Ping timeout: 240 seconds]
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
mildred has quit [Ping timeout: 264 seconds]
JohnClare has joined #ipfs
<cryptix> good mooorning
JohnClare has quit [Ping timeout: 250 seconds]
qqueue has quit [Ping timeout: 264 seconds]
<whyrusleeping> cryptix: g'mornin! i'm just about off to bed :)
<whyrusleeping> goodnight!
qqueue has joined #ipfs
<davidar> jbenet, awake?
<cryptix> whyrusleeping: cya :))
dignifiedquire has joined #ipfs
voxelot has quit [Ping timeout: 240 seconds]
rendar has joined #ipfs
iHedgehog has joined #ipfs
Guest62677 has quit [*.net *.split]
dignifiedquire has quit [*.net *.split]
giodamelio has quit [*.net *.split]
<Vylgryph> Is there any way to view a list of objects that ipfs has stored locally?
Vylgryph is now known as Vyl
<Animazing> yeah I wondered that myself yesterday; I kinda expected ipfs ls to do that if you didn't give it any arguments
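If memory serves, go-ipfs has a command close to what's being asked for, though it lists raw block hashes rather than named files:

```shell
# hedged: lists the hashes of every block stored in the local repo
ipfs refs local
```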
iHedgehog has left #ipfs [#ipfs]
Qwertie has joined #ipfs
<Qwertie> Hi o/
wasabiiii1 has quit [Ping timeout: 246 seconds]
samiswellcool has joined #ipfs
wasabiiii has joined #ipfs
<Qwertie> If you put a page on ipfs is it possible to edit it later or is it stuck there forever?
dignifiedquire has joined #ipfs
giodamelio has joined #ipfs
Guest62677 has joined #ipfs
<cryptix> Qwertie: since everything is addressed by its content, you need to add the data again and use the new hash to access the updated version
<cryptix> you can use ipns on top though to keep the link the same and update the hash it points to
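cryptix's update flow, as a command sketch (assumes a running daemon; `mysite/` is a placeholder directory):

```shell
# re-add the edited site, then repoint your ipns name at the new root hash
NEW_HASH=$(ipfs add -r -q mysite/ | tail -n1)
ipfs name publish "$NEW_HASH"
# readers keep using /ipns/<your-peer-id>, which now resolves to NEW_HASH
```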
iHedgehog has joined #ipfs
iHedgehog has left #ipfs [#ipfs]
<Qwertie> Is there any page explaining how ipns works?
dignifiedquire has quit [Quit: dignifiedquire]
<Animazing> I think the paper addresses it Qwertie
<Qwertie> I will check that out now then :D
ygrek has joined #ipfs
gatesvp has quit [Ping timeout: 246 seconds]
mildred has joined #ipfs
dignifiedquire has joined #ipfs
twistedline has joined #ipfs
thomasreggi has quit []
JohnClare has joined #ipfs
<davidar> We really need a thing on ipfs.io saying "this page is hosted on ipfs, and here's how it works"
<davidar> In fact the whole landing page desperately needs an overhaul
<davidar> I think that would resolve 90% of people's confusion
<M-amstocker> I agree ^ The videos are great but I think some people with only a passing interest won't watch
j0hnsmith has joined #ipfs
<davidar> amstocker, yay, chat.ipfs.io? :)
<M-amstocker> yeah im on it right now
<davidar> Thought so
<M-amstocker> is this your project?
<davidar> I put it onto ipfs, but most of the credit goes to vector.im and matrix.org
atomotic has joined #ipfs
<Animazing> Does ipfs cache (popular) random blocks on disk or does it only try to retrieve blocks that somebody specifically requested
<cryptix> Animazing: nope - not yet
<Qwertie> What stops people from filling ipfs with TBs of useless data?
<M-amstocker> David: it's cool!
<M-amstocker> reading up on matrix now
<Animazing> cryptix: that probably means that there is no plausible deniability for files?
<spikebike> I kinda like that nothing in on my IPFS node except for content I want
<cryptix> Qwertie: the public gateway garbage collects from time to time
<Animazing> If I host the ttip files if they leak will it be a DMCA risk to host them?
<spikebike> I would however like a cache/proxy/browser plugin that would cache content I browse so that anyone on the planet could get to it even if the original server was down
<davidar> amstocker, also, if you have any ideas for how to improve ipfs.io for new users, I'd love some help putting something together
amstocker has joined #ipfs
<cryptix> Animazing: the public GWs have a dmca blacklist iirc but be aware that ipfs is not anonymous
<Qwertie> spikebike, That would probably reveal a list of everything you have opened
<spikebike> Qwertie: ya.
<spikebike> everything not over SSL anyways
JohnClare has quit [Remote host closed the connection]
<Animazing> cryptix: yeah I see nodes can be resolved to IP. It's just that if you were to have a cache of popular blocks locally, you could be hosting files without knowing it; making it less of a risk than hosting them knowingly
<davidar> Animazing, ipfs isn't really good for hosting anything you wouldn't be willing to host on a normal web server (by design I suspect)
<davidar> Also, ipfs never hosts anything without your consent
<M-amstocker> David, I would definitely be willing to help
<cryptix> spikebike: like the idea too - maybe a instapaper-on-ipfs for starters, i share Qwertie's privacy concerns
<M-amstocker> not that i'm some ux expert
<Animazing> yeah looks like it; and I'm ok with that. Just trying to figure out where to put ipfs on the spectrum :)
<Qwertie> Cant you request a file from someone else even if they dont have it? Or is that just the public nodes?
<davidar> amstocker, me either, mostly interested in the content rather than making it look pretty
wasabiiii1 has joined #ipfs
<spikebike> cryptix: ya, I'd want a whitelist/blacklist for what's in the proxy. Nothing with my bank, taxes, insurance, healthcare, etc. But happy to share anything over http from youtube, slashdot, hacker news, weather sites, etc.
Beer has joined #ipfs
<Beer> Morning
wasabiiii has quit [Ping timeout: 244 seconds]
<cryptix> spikebike: yup! id also like to throw in youtube-dl so that you dont just cache the youtube page but the video as well
<spikebike> well I'd hope youtube would just connect to IPFS, seems like it would be good to them as long as the ads are embedded.
<cryptix> Qwertie: if some node on the swarm has added it, and you know the hash, you can get it
<spikebike> IPFS could make for a very cool CDN
<cryptix> spikebike: wont happen :)
<cryptix> well.. would be nice
<Qwertie> Just wondering if someone could make your node start hosting copyrighted files even if you never requested them
<cryptix> Qwertie: no - your node just stores data that you requested
ianopolous2 has quit [Ping timeout: 265 seconds]
<cryptix> at some point there could be nodes that look for frequently requested blocks but thats not there yet afaik
<spikebike> seems like a waste and dangerous
<spikebike> if even 1 in 1000 clients were ipfs enabled any popular content would get in directly
<amstocker> davidar, where should I put some ideas for the ipfs.io landing page? submit an issue on github?
<cryptix> amstocker: yea gh/ipfs/website
<spikebike> instead of trying to guess what other clients are asking for
<davidar> cryptix, I've discussed the CDN idea with jbenet before
chiui has joined #ipfs
<davidar> er, spikebike
<cryptix> spikebike: id be cautious with guessing techniques too - a shared proxy would grow a cache much more naturally, i guess. just saying that nothing stops you from building a node that does just that :)
<davidar> amstocker, yeah, what cryptix said, cc me and I'll add some thoughts too
<spikebike> populating a CDN by proxy efficiently would require #1) an accurate and unspoofable world view of traffic #2) an accurate and unspoofable world view of how many IPFS nodes have a copy
<davidar> +1 for proxy-to-ipfs
<spikebike> and would give you much the same as a stupid/naive proxy/cache
<davidar> spikebike (IRC): filecoin.io is supposed to solve 2
<davidar> I've also been thinking about the first one
<spikebike> yeah, I'm quite dubious of that approach
<spikebike> when I hear block chain I think "slow as shit"
<davidar> spikebike (IRC): the data itself isn't stored on the block chain
<davidar> And bitcoin just happens to have a rather slow block chain
<spikebike> davidar: anything I can imagine storing in the blockchain would still be hugely expensive
<spikebike> davidar: ya, but there's like 0.0001% as many bitcoin transactions as cdn operations
<spikebike> or cache operations
<spikebike> or bandwidth, whatever.
<spikebike> not sure why something much more simple can't work just as well
<spikebike> like say bittorrents tit-for-tat
<davidar> That's bitswap
<spikebike> why can't my IPFS node and its 30-40 peers trade storage/bandwidth/cpu as needed among themselves
<davidar> It doesn't solve the problem of proving nodes have a copy of data efficiently though
<spikebike> instead of trying to make a global ledger to try to keep the same metrics fair.
<davidar> That's what the block chain's for
<davidar> That's also planned
<davidar> "IPFS cluster"
<spikebike> say there's 1M IPFS nodes, with conservatively 1B files, that talk to 10 clients each every day
<spikebike> what parts do you think would be reasonable to keep in that ledger?
<spikebike> and how would it be bad if a node did the equivalent of a double spend in bitcoin?
<davidar> I don't know, ask jbenet ;)
<spikebike> A single raspberry pi using traditional methods could process the planet's worth of bitcoin transactions
<spikebike> namecoin seems elegant and attractive... if you ignore performance
<spikebike> which is a biggy, especially for IPFS which is otherwise quite performant
<davidar> The point I'm trying to make is that these are all issues that people have been thinking about, so try searching the irc logs or github if you want to know more
<spikebike> I'm not an expert, but I have been reading the docs and watching the irc channel for many months
amstocker has quit [Ping timeout: 246 seconds]
<davidar> Yes, I agree that block chains aren't ideal, but there needs to be some way for everyone to agree on certain things
<spikebike> ipfs as a cdn seems reasonable, file sharing seems reasonable, proxy/browser plugin seems reasonable
<Qwertie> I'm getting "ERR_CONNECTION_REFUSED" when connecting to the webui
<Qwertie> Do I need to start it after running init?
necro666_ is now known as necro666
<spikebike> davidar: not sure on the namecoin, especially for content addressable storage, if the checksum is right you know you got the right value
<spikebike> Qwertie: did you start the daemon?
<Qwertie> Just ran ipfs init
<spikebike> run ipfs daemon
<spikebike> give it 30-60 seconds, then try the web ui again
<Qwertie> Seems to be working
<davidar> spikebike, I was talking specifically about how to prove a node is storing some data without having to download it all and verify
<davidar> So that you don't have malicious nodes pretending to
<spikebike> davidar: my favorite method for that is to ask for a small fraction of it, big random ranges, and ask for a checksum of the range
<davidar> I think that's what file coin does...
<spikebike> say some node is hosting 1M files, once a day pick 0.1% and 2 random numbers and ask for the checksum of that range of that object
<spikebike> davidar: so every ipfs add requires adding the name and checksum to the global blockchain?
<spikebike> only thing I can think of that could be reasonable for namecoin is the equivalent of a domain registration
<davidar> No, just the checksum proofs you mentioned
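spikebike's spot-check idea above can be sketched as a toy challenge/response (helper names are made up for illustration; the actual filecoin proof-of-storage scheme is more involved than this):

```python
import hashlib
import random

def make_challenge(file_size, rng, fraction=0.001):
    """Verifier picks a random byte range covering ~fraction of the file."""
    length = max(1, int(file_size * fraction))
    start = rng.randrange(0, file_size - length + 1)
    return start, length

def respond(data, start, length):
    """Prover hashes exactly the requested range of its stored copy."""
    return hashlib.sha256(data[start:start + length]).hexdigest()
```

The verifier compares the response against a digest computed from its own copy (or precomputed before handing the file off); a node that discarded the data can't answer arbitrary ranges on demand.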
<Beer> anyone have an IPFS hash I can try connecting to?
<davidar> Beer, as in, something hosted on ipfs?
<Beer> #ipfs: yep
<Beer> uh, davidar: yes
<Qwertie> Is it possible to use ipfs without port forwarding?
<Beer> ID QmYumn1eZZv3AXPNibQoiBapmb7XS3CXUbrkCSGQNQHfiA up 1342 seconds connected to 0:
<davidar> multivac: archives <reply> https://github.com/ipfs/archives
<multivac> Okay, davidar.
<Beer> :(
<davidar> Beer ^
<davidar> Beer, oh, you mean peers?
<Beer> Yes, sorry
<Beer> Peer ID
<Beer> But that ID is also a hash, right ? ;)
<davidar> that should be happening automatically :/
<Qwertie> I'm connected to 0 peers
<Beer> ipfs version 0.3.8-dev
<davidar> yeah, but calling it a hash is a little confusing
<davidar> hrm, that's no good
<davidar> do you have aggressive firewall/nat?
<Beer> Yes.
<Beer> + proxy
qqueue has quit [Ping timeout: 240 seconds]
<davidar> hmm, a few people have been having issues with that
<davidar> either wait for whyrusleeping to arrive, or submit an issue and cc him i guess
<spikebike> the issues seem to get attention, no cc: necessary
<davidar> at the very least ipfs needs to make it more obvious what the problem is and how to fix it
qqueue has joined #ipfs
<Qwertie> davidar, What ports should I open to use ipfs?
<davidar> qwertie: 4001, iirc
<davidar> it should tell you the ports its listening on when you start the daemon
magneto1 has joined #ipfs
<Qwertie> I opened up 4001 but still no connections :L
<spikebike> ipfs cat Qmb8zoHBRxxpqmNk6k4hAimCxoCJ5BZ2KP7GhrBh8dG561 | mplayer -
<spikebike> Qwertie: do a ipfs id
<spikebike> is your connection ipv4 only? IPv6 as well?
<Qwertie> Should be both
<spikebike> is IPv6 open incoming?
<Qwertie> Yes
<spikebike> try
<spikebike> ipfs swarm connect /ip6/2607:f810:c20:1d:ca60:ff:fec8:76df/tcp/4001/ipfs/QmNoDNpuYFWnVFN4hHuGojHcewF4bamd7c7BTp3g41uei2
<spikebike> ipfs warm connect /ip6/2601:200:c001:5028:76d0:2bff:fe90:8b90/tcp/4001/ipfs/QmfJegzj2e6R38KP2MYFX2UmCfbvA8h7EavBagqnxgYc3H
<Qwertie> Error: invalid peer address: failed to parse ipfs: QmNoDNpuYFWnVFN4hHuGojHcewF4bamd7c7BTp3g41uei failed to parse ipfs addr: QmNoDNpuYFWnVFN4hHuGojHcewF4bamd7c7BTp3g41uei multihash length inconsistent: &{80 0 [30 8 255 63 63 84 223 24 1 69 110 29 163 169 20 112 200 214 184 109 52 104 6 20 220 11 8 218 17 78 107]}
<spikebike> oops, swarm not warm
devbug has quit [Ping timeout: 252 seconds]
<spikebike> did the first work?
<Qwertie> That was the first one
<spikebike> on a single line?
<Qwertie> The second one doesnt seem to have finished
<spikebike> $ ipfs swarm connect /ip6/2607:f810:c20:1d:ca60:ff:fec8:76df/tcp/4001/ipfs/QmNoDNpuYFWnVFN4hHuGojHcewF4bamd7c7BTp3g41uei2
<spikebike> connect QmNoDNpuYFWnVFN4hHuGojHcewF4bamd7c7BTp3g41uei2 success
<spikebike> that's what I was expecting
<Qwertie> Should I have the daemon running?
<spikebike> yes
<spikebike> daemon is required for pretty much anything
<Qwertie> Hm, i ran that last line and its not doing anything
<Qwertie> Its still running
<spikebike> ping6 -c 3 2607:f810:c20:1d:ca60:ff:fec8:76df
<spikebike> try that
<Qwertie> 3 packets transmitted, 3 received, 0% packet loss, time 2000ms
<spikebike> that's good
<spikebike> $ ipfs swarm connect /ip6/2601:200:c001:5028:76d0:2bff:fe90:8b90/tcp/4001/ipfs/QmfJegzj2e6R38KP2MYFX2UmCfbvA8h7EavBagqnxgYc3H
<spikebike> connect QmfJegzj2e6R38KP2MYFX2UmCfbvA8h7EavBagqnxgYc3H success
<spikebike> weird, works both ways for me
<spikebike> $ telnet 2607:f810:c20:1d:ca60:ff:fec8:76df 4001
<spikebike> try that
<Qwertie> Connected to 2607:f810:c20:1d:ca60:ff:fec8:76df.
<spikebike> hrm, so no ipv6 firewall blocking 4001
d11e9 has joined #ipfs
<Qwertie> I need to run ipfs from ~/go/bin does that matter?
<spikebike> $ ipfs id | grep AgentVersion
<spikebike> "AgentVersion": "go-ipfs/0.3.8-dev",
akhavr has quit [Read error: Connection reset by peer]
<spikebike> $ which ipfs
<spikebike> /home/bill/pkg/go/bin/ipfs
<spikebike> ipfs id works?
<Qwertie> grep: AgentVersion:: No such file or directory
<Qwertie> grep: go-ipfs/0.3.8-dev,: No such file or directory
<spikebike> $ ipfs id | grep AgentVersion
<spikebike> "AgentVersion": "go-ipfs/0.3.8-dev",
<spikebike> $ ipfs id | grep ip6
<spikebike> "/ip6/2607:f810:c20:1d:ca60:ff:fec8:76df/tcp/4001/ipfs/QmNoDNpuYFWnVFN4hHuGojHcewF4bamd7c7BTp3g41uei2"
Spinnaker has quit [Ping timeout: 256 seconds]
<Qwertie> ipfs wont run. it just gets stuck
akhavr has joined #ipfs
<spikebike> yeah if ipfs id doesn't work I don't think anything will
<Qwertie> Wait it worked
<Qwertie> "AgentVersion": "go-ipfs/0.3.8-dev",
<spikebike> $ ipfs id | grep ip6
<Qwertie> Just needed to restart the daemon
<Qwertie> I just did ipfs swarm connect /ip6/2607:f810:c20:1d:ca60:ff:fec8:76df/tcp/4001/ipfs/QmNoDNpuYFWnVFN4hHuGojHcewF4bamd7c7BTp3g41uei2
<Qwertie> and it worked
<Qwertie> webui is still showing 0 connected peers though
<spikebike> try the other one as well
<spikebike> $ ipfs swarm peers | wc -l
<spikebike> that as well
<Qwertie> ./ipfs swarm peers | wc -l
<Qwertie> 46
<spikebike> excellent
<spikebike> now localhost:5001/webui
<spikebike> (in a browser)
danslo has quit [Remote host closed the connection]
<Qwertie> Still showing 0 peers
<Qwertie> Is there a link I can try to see if I can load anything
<spikebike> ipfs cat Qmb8zoHBRxxpqmNk6k4hAimCxoCJ5BZ2KP7GhrBh8dG561 > foo.avi
<spikebike> did you type in that localhost url?
<spikebike> it might have cached the old broken one
<Qwertie> I think so
<spikebike> click on that
<Qwertie> Yeah, 0 peers
<spikebike> that shouldn't show peers
<spikebike> that should just show node info and addresses
<Qwertie> On the connections tab
<Qwertie> I did get foo.avi though
<d11e9> hey all i know windows is not officially supported but was wondering if anyone has found a way around not being able to run ipfs a second time? I've been having to re init each time.
wopi has quit [Read error: Connection reset by peer]
wasabiiii1 has quit [Ping timeout: 250 seconds]
wopi has joined #ipfs
<Qwertie> What am I looking for?
<spikebike> Qwertie: try connections under that url
mildred has quit [Ping timeout: 272 seconds]
wasabiiii has joined #ipfs
<Qwertie> It shows 0 but ipfs swarm peers | wc -l says 76
<Qwertie> I also managed to download the video
<spikebike> Qwertie: I don't trust or use the webui much
<spikebike> Qwertie: maybe you are running two copies of ipfs?
<spikebike> d11e9: not off hand, I don't do windows, but I'd check the go-ipfs issues for whatever is current on the windows client
<d11e9> does the console in your browser show 403 errors?
<Qwertie> Well it seems to be working. Just showing the wrong number of peers
<spikebike> foo.avi should play in mplayer/vlc btw
<Qwertie> It did
<spikebike> cool
<spikebike> Qwertie: check for two ipfs's running
<spikebike> or just ignore the webui
d11e9 has quit [Ping timeout: 246 seconds]
solarisfire has joined #ipfs
slothbag has joined #ipfs
brandonemrich has joined #ipfs
wasabiiii1 has joined #ipfs
wasabiiii has quit [Ping timeout: 272 seconds]
JohnClare has joined #ipfs
step21_ is now known as step21
ei-slackbot-ipfs has quit [Remote host closed the connection]
ei-slackbot-ipfs has joined #ipfs
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
JohnClare has quit [Ping timeout: 255 seconds]
nicknikolov has joined #ipfs
atgnag has quit [Read error: Connection reset by peer]
devbug has joined #ipfs
chiui has quit [Ping timeout: 246 seconds]
nomoremoney is now known as VictorBjelkholm
atgnag has joined #ipfs
Quiark has quit [Ping timeout: 272 seconds]
d11e9 has joined #ipfs
hellertime has joined #ipfs
<samiswellcool> does ipfs implicitly pin anything you add yourself?
bedeho has quit [Ping timeout: 250 seconds]
<Beer> samiswellcool: Don't think so
<ipfsbot> [node-ipfs-api] diasdavid created add-circle-ci (+1 new commit): http://git.io/vZA4m
<ipfsbot> node-ipfs-api/add-circle-ci c67a48b David Dias: Add circle CI badge...
<ipfsbot> [node-ipfs-api] diasdavid opened pull request #59: Add circle CI badge (master...add-circle-ci) http://git.io/vZA4Y
<spikebike> samiswellcool: good question, seems like add would be similar to pin
<achin> yes, "ipfs add" implicitly pins
<spikebike> wouldn't make sense otherwise
<spikebike> both pin and add should be immune to garbage collection
<achin> indeed, the point of pinning is to mark blocks to not be garbage collected
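The pin/GC relationship achin describes can be modeled as a mark-and-sweep over a block store (a toy model for intuition, not go-ipfs internals):

```python
class BlockStore:
    """Toy model of pinning: gc() removes every block that isn't reachable
    from a pinned root. 'ipfs add' pins the root it creates, which is why
    added content survives garbage collection."""

    def __init__(self):
        self.blocks = {}   # hash -> (data, tuple of child hashes)
        self.pins = set()

    def add(self, h, data, children=(), pin=True):
        self.blocks[h] = (data, tuple(children))
        if pin:
            self.pins.add(h)

    def gc(self):
        live, stack = set(), list(self.pins)
        while stack:  # mark everything reachable from the pin set
            h = stack.pop()
            if h in live or h not in self.blocks:
                continue
            live.add(h)
            stack.extend(self.blocks[h][1])
        # sweep: drop unreachable blocks
        self.blocks = {h: v for h, v in self.blocks.items() if h in live}
```

Unpinned children of a pinned root survive too, since marking follows the merkledag links; only fully unreferenced blocks are swept.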
<davidar> !pin QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK
<pinbot> now pinning /ipfs/QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK
<samiswellcool> makes sense, I thought it would be the case, just wanted to check. Cheers!
<pinbot> [host 7] failed to pin /ipfs/QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK: unknown ipfs-shell error encoding: text/html - "<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.9.3</center>\r\n</body>\r\n</html>\r\n"
wasabiiii1 has quit [Ping timeout: 246 seconds]
<pinbot> [host 0] failed to pin /ipfs/QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK: unknown ipfs-shell error encoding: text/html - "<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.9.3</center>\r\n</body>\r\n</html>\r\n"
<pinbot> [host 5] failed to pin /ipfs/QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK: unknown ipfs-shell error encoding: text/html - "<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.9.3</center>\r\n</body>\r\n</html>\r\n"
<pinbot> [host 6] failed to pin /ipfs/QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK: unknown ipfs-shell error encoding: text/html - "<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.9.3</center>\r\n</body>\r\n</html>\r\n"
wasabiiii has joined #ipfs
<pinbot> [host 1] failed to pin /ipfs/QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK: unknown ipfs-shell error encoding: text/html - "<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.9.3</center>\r\n</body>\r\n</html>\r\n"
<pinbot> [host 3] failed to pin /ipfs/QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK: unknown ipfs-shell error encoding: text/html - "<html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.9.3</center>\r\n</body>\r\n</html>\r\n"
<pinbot> [host 2] failed to pin /ipfs/QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK: unknown ipfs-shell error encoding: text/html - "<html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.9.3</center>\r\n</body>\r\n</html>\r\n"
<davidar> dammit, forgot it was 25 past
<solarisfire> 25 past?
<davidar> 25 past and 55 past the hour are when the gateway daemons restart
<solarisfire> Whoops.
<solarisfire> Back now.
<solarisfire> Pinning on my node.
<ipfsbot> [node-ipfs] diasdavid pushed 1 new commit to master: http://git.io/vZARs
<ipfsbot> node-ipfs/master 8693a2c David Dias: Merge pull request #28 from ipfs/feature/cleanup-roadmap-add-issue-links...
<davidar> solarisfire: thanks
<solarisfire> So can anyone use the pin command in here?
<solarisfire> !pin QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK
<solarisfire> Guess not...
pinbot has quit [Ping timeout: 244 seconds]
<davidar> solarisfire: you have to beg whyrusleeping :p
<solarisfire> Meh, not too bothered...
chriscool has joined #ipfs
<solarisfire> I try and keep my node up 24/7.
<davidar> and run the interplanetary gauntlet...
pinbot has joined #ipfs
<davidar> we should totally make that
<davidar> like that competition dropbox (used to) run, where you had to solve dropbox-related puzzles for free space
<davidar> dropquest!
<davidar> jbenet: we should make an interplanetary quest like dropquest! :D
<solarisfire> Would be cool if I could subscribe to pinbot so when someone runs it it runs on my node too...
chriscool has quit [Ping timeout: 272 seconds]
<davidar> solarisfire: yeah, i think that's been suggested before
<davidar> solarisfire: we're also planning to setup a service where people can donate space/bandwidth towards helping ipfs/archives :)
<solarisfire> Nice.
<solarisfire> wish ipfs pin -r didn't just hang for me :(
<solarisfire> *pin add -r
<davidar> solarisfire: yeah, usually better to run `ipfs refs -r` first to warm the cache
<davidar> whyrusleeping has said improving the pinning ux is on his agenda
<solarisfire> That just hangs too...
<solarisfire> Spat out the first hash, now just sitting there.
<solarisfire> Causing the whole linux server to hang tbh, other commands in other terminals are slow.
<solarisfire> High iowait :(
dignifiedquire has quit [Quit: dignifiedquire]
<solarisfire> And the ipfs daemon actually died...
<solarisfire> solarisfire@ipfs:~$ ipfs refs -r QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK
<solarisfire> QmfMKwvjRzwQM36GXtfbJ32A78iqh3Yr89zbVhnFkBYoi4
<solarisfire> ERRO[12:43:32:000] unexpected EOF module=commands/http
<solarisfire> [1]+ Killed ipfs daemon
kerozene has quit [Max SendQ exceeded]
dignifiedquire has joined #ipfs
chiui has joined #ipfs
kerozene has joined #ipfs
<solarisfire> ipfs is trying to do 3746.73 wrqm/s??? That's nuts...
atomotic has joined #ipfs
<spikebike> I have been pretty impressed with IPFs's performance
<solarisfire> Yeah, makes me wonder if I have something set up wrong...
<solarisfire> How long does it take you to run: ipfs refs -r QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK
<spikebike> I run 2 ipfs servers so I can test by adding to one and then reading it from the other
<solarisfire> What's the recommended version of golang to use?
<spikebike> go-ipfs/0.3.8-dev
<solarisfire> no, actual golang itself...
<spikebike> oh, sorry, umm, I think newer = better, I think they are currently testing against 1.4.X
<davidar> solarisfire: yeah, mine's still running, I suspect it's currently being seeded from a slow connection
<spikebike> i'm running 1.3.1 without problem
<VictorBjelkholm> solarisfire, "ipfs refs -r QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK" => "Error: merkledag: not found"
<VictorBjelkholm> my fault, didn't have the daemon running (facepalm)
<achin> btw, i was the one who initially added QmNdozNckkKUAC2YfTLDMyASRsF8bKpyYgZrBR1Syiv2XK
kerozene has quit [Max SendQ exceeded]
<reit> is it currently possible to manually specify segmentation?
<reit> like for example, my web site has a footer that is identical across the whole site
<reit> is it possible to split that off into its own object?
<achin> (after i added it, i had to forcefully kill the daemon to release some memory, but i don't think that had any negative impact on that ref)
fazo has joined #ipfs
* achin -> afk for a 2 hour meeting
kerozene has joined #ipfs
twistedline has quit [Ping timeout: 246 seconds]
twistedline has joined #ipfs
Roxamis has joined #ipfs
underpantz_ has joined #ipfs
kerozene has quit [Max SendQ exceeded]
<davidar> reit: rabin is *supposed* to detect stuff like that automatically
<davidar> whether it actually does is another issue
<davidar> btw: ipfs add -s rabin
kerozene has joined #ipfs
<davidar> achin: ah, thanks :)
chiui has quit [Ping timeout: 240 seconds]
mildred has joined #ipfs
wasabiiii1 has joined #ipfs
wasabiiii has quit [Ping timeout: 240 seconds]
<achin> i was a noob and didnt bother to check for a tar.gz download of everything
<reit> davidar: i see, that's pretty cool!
chiui has joined #ipfs
devbug has quit [Remote host closed the connection]
<reit> still, it'd be nice to be able to manually stitch things together in cases where you have something more complicated
<reit> in cases where the user knows /exactly/ where sensible places to split would be, which data will never change
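The idea behind `ipfs add -s rabin` is content-defined chunking: chunk boundaries depend on the bytes themselves, so identical content (like a shared footer) tends to chunk identically on every page. A toy illustration with a simplified rolling hash (not the actual Rabin fingerprint go-ipfs uses):

```python
def chunk_points(data, mask=0x3FF, min_size=64):
    """Split data wherever a toy rolling hash hits a boundary pattern.
    Boundaries move with the content, not with fixed offsets."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF  # toy hash, not real Rabin
        if i - start >= min_size and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
            h = 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

With fixed-size chunking, inserting one byte shifts every later boundary and changes every later chunk's hash; with content-defined chunking only the chunk containing the edit changes.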
Spinnaker has joined #ipfs
<d11e9> do ipfs hashes have a checksum? to verify a string is a valid (not that it exists) hash
ygrek has quit [Ping timeout: 244 seconds]
<solarisfire> Going back to our convo just before 1pm, I'm running go1.5.1, is that too new?
<reit> d11e9: i don't think so, try altering some character in an ipfs hash and the client will go off and try and fetch it anyway
<Bat`O> the lenght is probably checked
<reit> i remember ethereum had the same problem
<reit> that was more dangerous though, fatfingering something could send your money off into the void
Roxamis has left #ipfs ["Leaving"]
<reit> i think $8000 from various individuals has been lost like that
<Bat`O> also, base58 already reduces the risk of typo
d11e9 has quit [Ping timeout: 246 seconds]
samiswellcool has quit [Ping timeout: 246 seconds]
samiswellcool has joined #ipfs
svetter has quit [Ping timeout: 246 seconds]
edrex has quit [Ping timeout: 246 seconds]
silotis has quit [Ping timeout: 246 seconds]
<reit> yeah it was ethereum
<reit> bitcoin however has a safety measure to stop exactly that sort of thing from happening https://www.reddit.com/r/Bitcoin/comments/26s1i5/can_you_lose_a_fortune_by_mistyping_a_bitcoin/
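The safety measure reit links to is Base58Check: a Bitcoin address carries a 4-byte checksum (the first bytes of a double SHA-256 of the payload), so a fatfingered character fails verification instead of silently sending money into the void. A minimal sketch of that checksum layer (multihashes don't currently carry one; this is the Bitcoin scheme, not an IPFS feature):

```python
import hashlib

def check_encode_payload(payload: bytes) -> bytes:
    """Append a 4-byte checksum (first bytes of double-SHA256), Bitcoin-style."""
    chk = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return payload + chk

def check_verify(data: bytes) -> bool:
    payload, chk = data[:-4], data[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == chk
```

The payload+checksum bytes are what then get base58-encoded; any single-character typo almost certainly breaks the checksum.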
<reit> i wonder if an extension to multiaddr for this is practical
edrex has joined #ipfs
silotis has joined #ipfs
svetter has joined #ipfs
slothbag has quit [Quit: Leaving.]
Beer has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<reit> er, *multihash
voxelot has joined #ipfs
herch has joined #ipfs
edrex has quit [Ping timeout: 246 seconds]
edrex has joined #ipfs
pfraze has joined #ipfs
JohnClare has joined #ipfs
herch has quit [Ping timeout: 246 seconds]
AtnNn_ is now known as AtnNn
qqueue has quit [Ping timeout: 265 seconds]
qqueue has joined #ipfs
Beer has joined #ipfs
JohnClare has quit [Ping timeout: 272 seconds]
danslo has joined #ipfs
wasabiiii has joined #ipfs
wasabiiii1 has quit [Ping timeout: 246 seconds]
qqueue has quit [Ping timeout: 252 seconds]
qqueue has joined #ipfs
Poefke has joined #ipfs
<davidar> reit: multichecksum? :)
danslo has quit [Remote host closed the connection]
JohnClare has joined #ipfs
<reit> or multimulti and encapsulate everything ;)
herch has joined #ipfs
qqueue has quit [Ping timeout: 252 seconds]
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
qqueue has joined #ipfs
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
<herch> I am interested to know what people are working on with IPFS. Not the actual IPFS development itself, but rather uses of IPFS to replace existing web sites.
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
JohnClare has quit [Ping timeout: 246 seconds]
apophis has joined #ipfs
apophis has quit [Remote host closed the connection]
<solarisfire> herch: http://ipfs.pics/
<solarisfire> Like that?
<herch> Cool. So no more imgur I guess? :)
<solarisfire> *fingers crossed*
<herch> I think twitter clone should be possible. what do you say?
<herch> I don't know how the search is going to work, but just a series of short messages sorted by time should be possible.
<Erkan_Yilmaz> does ipfs notice when I change a file?
<solarisfire> No, you'd have to re-add that file.
<Erkan_Yilmaz> it seems not when I open the hash in the browser
j0hnsmith has quit [Read error: Connection reset by peer]
<Erkan_Yilmaz> but when I readd the file, I get a new hash
<solarisfire> Yup
j0hnsmith has joined #ipfs
<Erkan_Yilmaz> which is OK, but imagine I give the hash to someone, I'd need to give them a new hash
Beer has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<reit> Erkan_Yilmaz, in that scenario you'd want to use IPNS
<reit> ipfs name publish [hash]
<Erkan_Yilmaz> ok, let me read into this. thx
<solarisfire> Read that ^^ Erkan_Yilmaz
<Erkan_Yilmaz> will do, thx to both of you
Dizzy has joined #ipfs
Dizzy has quit [Client Quit]
Leer10 has quit [Read error: Connection reset by peer]
Beer has joined #ipfs
apophis has joined #ipfs
<VictorBjelkholm> herch, I'm currently building a twitter clone actually!
tsenior`` has joined #ipfs
<herch> VictorBjelkholm: Wow! if you are writing it on github I will like to note it down. I am collecting all the interesting stuff that people are doing using IPFS.
<VictorBjelkholm> I'm not completely sure about what I'm doing, but that's what I'm trying to build. It's kind of shitty though since I basically hit random peers to see if they are using my twitter clone, and also since I need to poll all the people you "follow" for updated messages. But it's ticking along :)
ygrek has joined #ipfs
<VictorBjelkholm> herch, right now, no. I'll write here once I have something to share
apophis has quit [Read error: Connection reset by peer]
<herch> VictorBjelkholm: Sure, no problem.
apophis has joined #ipfs
qqueue has quit [Ping timeout: 246 seconds]
qqueue has joined #ipfs
<fazo> VictorBjelkholm: you built a twitter clone using ipfs? :O
<solarisfire> Yeah, is that not a little too dynamic for ipfs?
<VictorBjelkholm> fazo, building! But yeah, I am!
pfraze has quit [Remote host closed the connection]
<VictorBjelkholm> solarisfire, nah, I think it's gonna work out in the end. But, it's merely a fun experiment, to see if it's possible :)
<fazo> VictorBjelkholm: I think I have an idea on how to make it work
<VictorBjelkholm> version 0.1 went fine, had some syncing issues so doing a refactor now, putting pieces where they are supposed to be, so I can open source it
<VictorBjelkholm> fazo, if you know a way that doesn't involve polling, PLEASE tell me!
pfraze has joined #ipfs
<fazo> VictorBjelkholm: what if you save to localstorage the ipfs address of a json file with the ipfs id of your followers/followed and an array of refs of all the tweets
<fazo> your tweets, not the others
<fazo> then you make a IPNS name associated to an object
<VictorBjelkholm> fazo, I'm kind of doing something like that, but in FS instead
<VictorBjelkholm> and in a backend server that runs in a container
<fazo> which is like a db for the user
<fazo> yeah that would work of course
<fazo> but I think there's a way to make it not rely on external services
<fazo> each user is identified by his ipfs id
<fazo> the public key
<fazo> then, you get the hash associated to the "twitter-app-db" name of that user
<fazo> which contains his profile and his tweets
<fazo> I think it makes sense
apophis has quit [Quit: This computer has gone to sleep]
<fazo> the list of followers and followed you save it to the browser's local storage and create some way to export it in the app
<fazo> this way you can instantly see new tweets in the app
<fazo> and it only relies on IPFS and IPNS
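fazo's scheme boils down to one JSON record per user, published under their IPNS name (the field names and placeholder hashes here are made up for illustration):

```python
import json

# Hypothetical record a user would `ipfs add` and then `ipfs name publish`.
profile = {
    "peer_id": "QmExamplePeerId",      # the user's IPFS node ID (public key hash)
    "following": ["QmPeerBob", "QmPeerCarol"],
    "tweets": [
        {"ts": 1442955600, "ref": "QmHashOfTweetObject"},
    ],
}

def serialize(p):
    # sort_keys keeps the bytes (and thus the IPFS hash) deterministic
    return json.dumps(p, sort_keys=True).encode()
```

Followers resolve each followed peer's IPNS name, fetch the record, and diff the `tweets` list against what they saw last; the follow list itself can live in browser localStorage as fazo suggests.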
qqueue has quit [Ping timeout: 246 seconds]
qqueue has joined #ipfs
<solarisfire> Currently running an experiment for another possible example... hope it works :-)
<davidar> multivac, aggregation
<davidar> fazo VictorBjelkholm solarisfire herch: ^
apophis has joined #ipfs
<VictorBjelkholm> fazo, don't worry, I've already figured that part out :) The only thing I'm waiting for now is to have aggregation and/or pub/sub
<VictorBjelkholm> fazo, you exactly described how I do it, but I'm doing it in a backend, not local storage
<solarisfire> Wonder if ipdb could be on the cards, or is that too much? XD
<fazo> VictorBjelkholm: cool :)
<fazo> VictorBjelkholm: do you have a github repo?
<VictorBjelkholm> fazo, as I said before, not completely ready to publish the code, I'll write here once it's done, should be now during the week or weekend
<solarisfire> hugo
<fazo> VictorBjelkholm: too bad! Really wanted to have a look
<fazo> VictorBjelkholm: may I ask how you implemented the backend?
shea256 has quit [Remote host closed the connection]
shea256 has joined #ipfs
<davidar> if you guys have any more ideas about how to implement dynamic sites on ipfs (especially dealing with user-submitted content), please add them to that issue
<VictorBjelkholm> fazo, give me your username and I'll invite you to the prototype
<VictorBjelkholm> github username
<fazo> VictorBjelkholm: it's fazo96 :)
<voxelot> i'm working on an angular site hosted on ipfs, uses the node api to make calls to the gateway that you're viewing the hash on
<fazo> davidar: I read the issue and the one about discoverability, I think you guys already figured it out
<VictorBjelkholm> should have access now. Beware of ugly code, it's just a prototype and the RC will be open sourced soon :)
<ipfsbot> [node-ipfs] RichardLitt deleted feature/cleanup-roadmap-add-issue-links at edc5886: http://git.io/vZxZE
<VictorBjelkholm> fazo, index.js is the only file you care about
<fazo> wow, there's a node ipfs api already?
<fazo> is it a wrapper around the daemon api?
<VictorBjelkholm> fazo, yup!
<VictorBjelkholm> it's pretty painless to use. I wished it used promises but that's for the future :)
<fazo> cool! I can't wait to build decentralized services on ipfs
<fazo> but I think to get it to work really well we need a js implementation of ipfs
ygrek has quit [Ping timeout: 264 seconds]
<fazo> then you can serve a static "bootstrap page" that loads ipfs and then the entire app via ipfs
cdslayer has joined #ipfs
apophis has quit [Read error: Connection reset by peer]
<fazo> you could have entirely dynamic applications hosted on GitHub Pages.
<fazo> with no infrastructure cost
cdslayer is now known as apophis
M-erikj has quit [Quit: node-irc says goodbye]
<VictorBjelkholm> fazo, look at the nods-ipfs-api, it works in the browser too!
voxelot has quit [Ping timeout: 264 seconds]
M-erikj has joined #ipfs
<fazo> VictorBjelkholm: yeah, I saw that earlier but it doesn't look like it works yet
<VictorBjelkholm> I haven't tried it myself, but it should! https://github.com/ipfs/node-ipfs-api
<VictorBjelkholm> oh
<lgierth> fazo: you can host everything entirely in ipfs
<lgierth> ipfs.io for example is a static page hosted out of ipfs
<VictorBjelkholm> fazo, if you tried it and you couldn't get it to work, care to open up an issue in the repo?
<fazo> VictorBjelkholm: I didn't try it, I was just under the impression that the js implementation doesn't work
<fazo> VictorBjelkholm: I'm happy I was wrong then!
<fazo> VictorBjelkholm: anyway, later I'm going to build a file manager that lets you save bookmarks in the browser local storage
<fazo> VictorBjelkholm: so you can bookmark data you see and like
<VictorBjelkholm> fazo, ooh, that's a good idea! Love to see it
<fazo> VictorBjelkholm: yeah, another thing I wanted to add is the ability to choose which ipfs hosted app to use to open media files
<fazo> VictorBjelkholm: for example if a file is a video it could open it with that video player hosted on ipfs
<fazo> VictorBjelkholm: the problem is figuring out what the file actually is, I don't think there's an easy way to do that
<herch> Is clone of Pinterest possible?
<VictorBjelkholm> herch, why not?
<VictorBjelkholm> fazo, there are tools that try to figure out the filetype from the data itself. I know the standard library in golang has something like that; probably many solutions exist for it
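Content sniffing of the kind VictorBjelkholm mentions (Go's `http.DetectContentType` is one example) mostly comes down to magic-number prefixes; a minimal sketch covering a few common formats:

```python
# A few well-known magic numbers; real sniffers check many more patterns
# and also look past leading whitespace for text formats.
MAGIC = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"\xff\xd8\xff",      "image/jpeg"),
    (b"GIF87a",            "image/gif"),
    (b"GIF89a",            "image/gif"),
    (b"%PDF-",             "application/pdf"),
    (b"\x1a\x45\xdf\xa3",  "video/webm"),  # EBML header (webm/mkv)
]

def sniff(data: bytes) -> str:
    """Guess a MIME type from the first bytes of a blob."""
    for prefix, mime in MAGIC:
        if data.startswith(prefix):
            return mime
    return "application/octet-stream"
```

Since an IPFS hash says nothing about file type, a browser app would fetch the first block and sniff it like this before picking a viewer.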
<samiswellcool> I'm kicking around a little cli app for storing the files you publish, generating a html page detailing all of them and their versions and then setting your ipns to that page. I plan to use it to keep track of front end projects and their ipns previous versions and links
<herch> No, I am just trying to figure out what's possible and what's not. e.g. I haven't yet figured out how you would implement a nosql db on IPFS and run queries on it
<VictorBjelkholm> and also there was some talk about getting the IPFS.io gateway to get better at it, possibly write a new one
<fazo> samiswellcool: that's awesome
<herch> something that competes with Amazon dynamodb
<VictorBjelkholm> herch, use IPNS and something like Sqlite and it's done :)
<fazo> herch: like VictorBjelkholm said, you update the ipns name to the hash of the new db every time the db changes. Then it's always available via ipfs at the same name
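The update cycle fazo describes: every change yields a new immutable snapshot hash, and the IPNS name is repointed at it. A toy in-memory model of that flow (not the real IPFS API; sha256 stands in for the content hash):

```python
import hashlib
import json

class IpnsLikeDb:
    """Toy model of a 'mutable db on immutable storage': every write creates
    a new content-addressed snapshot; one mutable pointer (like an IPNS name)
    tracks the latest hash."""

    def __init__(self):
        self.store = {}   # content hash -> snapshot bytes (immutable side)
        self.name = None  # the mutable IPNS-like pointer

    def publish(self, db: dict) -> str:
        blob = json.dumps(db, sort_keys=True).encode()
        h = hashlib.sha256(blob).hexdigest()
        self.store[h] = blob
        self.name = h     # analogous to `ipfs name publish <hash>`
        return h

    def resolve(self) -> dict:
        return json.loads(self.store[self.name])
```

A nice side effect of the model: old snapshots stay addressable by hash, so readers get versioned history for free.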
besenwesen has quit [Ping timeout: 255 seconds]
besenwesen_ has joined #ipfs
<herch> VictorBjelkholm, fazo : ok
<fazo> but what happens when you $.ajax a directory?
<herch> but it should be simpler, like someone should be just able to createdb = new db() and it should work
<fazo> herch: libraries can be built to do something like that, we're still in the early days
<herch> fazo : I know :)
<herch> fazo: just throwing things out here, thinking out loud
wasabiiii1 has joined #ipfs
pfraze has quit [Remote host closed the connection]
wasabiiii has quit [Ping timeout: 252 seconds]
vijayee_ has joined #ipfs
<herch> we definitely need good apps on IPFS, but we should focus more on libs. This infra will allow many more people to build apps that we cannot even imagine.
shea256 has quit [Remote host closed the connection]
atomotic has quit [Quit: Textual IRC Client: www.textualapp.com]
Beer has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
screensaver has joined #ipfs
<screensaver> SqLite? I believe NeDB even runs in a browser
<achin> om nom nom. the ipfs daemon is currently using about 13G memory (as i am adding about 300MB of data)
<blame> You would want a database method that knows about the merkledag, so you can minimize the costs of updates. But honestly, a flatfile with ipns pointer does the trick.
apophis has quit [Ping timeout: 244 seconds]
apophis has joined #ipfs
<VictorBjelkholm> herch, I'm not sure I agree. The need for libs comes from the need of apps. Without building apps, you don't know what the lib needs
sseagull has joined #ipfs
apophis is now known as cdslayer
Tv` has joined #ipfs
<Bat`O> VictorBjelkholm: it's never too early to open-source something
<Bat`O> VictorBjelkholm: push the button !
<VictorBjelkholm> well, the thing has to be working properly before :)
shea256 has joined #ipfs
screensaver has quit [Ping timeout: 246 seconds]
voxelot has joined #ipfs
mildred has quit [Ping timeout: 250 seconds]
shea256_ has joined #ipfs
shea256 has quit [Read error: Connection reset by peer]
shea256_ has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
besenwesen_ is now known as besenwesen
besenwesen has quit [Changing host]
besenwesen has joined #ipfs
devbug has joined #ipfs
nessence has joined #ipfs
screensaver has joined #ipfs
wasabiiii has joined #ipfs
<screensaver> Q: are there plans for a Decentralized GitHub? In May I read this piece http://blog.printf.net/ but I'm not sure whether it duplicates IPFS?
wasabiiii1 has quit [Ping timeout: 260 seconds]
devbug has quit [Ping timeout: 250 seconds]
<fazo> screensaver: a decentralized github is relatively easy to build on ipfs, provided that ipns works
<Bat`O> VictorBjelkholm: that's too much teasing :p
qqueue has quit [Ping timeout: 240 seconds]
<VictorBjelkholm> Bat`O, hah, there is never too much teasing! No, but I'll open it up tonight, once I stop work on day-stuff
<screensaver> fazo: sounds good
<Bat`O> screensaver: see also https://github.com/cryptix/git-remote-ipfs
<screensaver> Bat`O: thx
qqueue has joined #ipfs
captain_morgan has quit [Ping timeout: 246 seconds]
simonv3 has joined #ipfs
<jbenet> achin after you add it, see if it drops down. if not, harvest a stack trace for us? (ctrl+\ drops stack trace to stderr)
<ion> Is there a standard way to limit the bandwidth used by ipfs daemon? trickle doesn’t seem to affect it.
<jbenet> ion, not yet but coming soon: https://github.com/ipfs/go-ipfs/issues/1482
<ion> Alright
qqueue has quit [Ping timeout: 240 seconds]
cdslayer has quit [Quit: This computer has gone to sleep]
<achin> jbenet: virt is still at 13g (res is at about 5g)
qqueue has joined #ipfs
<jbenet> that's terrible.
<achin> oh man. the stack trace is huge, and blew past the 1800 line tmux buffer i have. will only the last 1800 lines be useful at all?
<voxelot> how do I add a CORS exception to the api on my gateway? locally this works export API_ORIGIN="http://localhost:8080"
<voxelot> my gateway is on /ip4/158.69.62.218/tcp/80 but export API_ORIGIN="http://158.69.62.218:80" is 403
<voxelot> not sure if this is even supposed to work: a twitter wall made in angular, all client side with a node api, hashed and then run on a gateway to serve the api requests
<VictorBjelkholm> voxelot API_ORIGIN should be the domain you're making the requests from, not the destination
<voxelot> crap
<voxelot> the request is coming from random browsers
<VictorBjelkholm> also, useful while developing might be to set API_ORIGIN to *
<VictorBjelkholm> voxelot, but the browsers accessing an address, no?
<VictorBjelkholm> I mean, if you have your frontend on my_frontend.com and your daemon on whatever, you'll need to set API_ORIGIN to my_frontend.com
<voxelot> 'export API_ORIGIN *' ?
M-alien has left #ipfs ["User left"]
<lgierth> we might eventually just set it to *
<VictorBjelkholm> lgierth, if you're allowing localhost and such, why not just set * ???
<VictorBjelkholm> yeah, I think that's easier, especially when you want anyone, anywhere to be able to access it
<voxelot> thanks guys, let me try it
<lgierth> VictorBjelkholm: because * includes everything, and we're not 100% sure yet that's the best idea
<VictorBjelkholm> voxelot, the export command above should be 'export API_ORIGIN='*'' FYI
<voxelot> yup got it haha ty
<VictorBjelkholm> lgierth, well, you already have localhost which basically can be anything. So my feeling is that you don't really want to limit it, hence * makes sense
<VictorBjelkholm> sucks to have the nickname So btw
<lgierth> no, localhost is localhost
<lgierth> not sure what you mean with localhost being potentially anything
<lgierth> yeah as i said we'll probably allow * eventually
<lgierth> or soon
<lgierth> who knows :)
<VictorBjelkholm> well, I was thinking slightly wrong but my thinking was that if I wanted to access `API` directly from the browser, from Trello, I could just proxy Trello through localhost
<VictorBjelkholm> but anyways, slightly wrong thinking so ignore me politely...
shea256 has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
<fazo> VictorBjelkholm: I just got the barebones interface to add and remove bookmarks to work. It also saves them to localstorage
<jbenet> !pin /ipfs/QmY1sJAqkxfJ6YbMaFBvs15hsmmbeBTKPe6jyd8GSZMtpP
<pinbot> now pinning /ipfs/QmY1sJAqkxfJ6YbMaFBvs15hsmmbeBTKPe6jyd8GSZMtpP
qqueue has quit [Ping timeout: 240 seconds]
<jbenet> i meant
<jbenet> !pin /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ
<pinbot> now pinning /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ
qqueue has joined #ipfs
<jbenet> yeah that's more like it.
xenkey1 is now known as xenkey
thomasreggi has joined #ipfs
<fazo> jbenet: is that... every rfc?
<jbenet> yep
dignifiedquire has quit [Quit: dignifiedquire]
<fazo> jbenet: and now, they're on the permanent web.
<jbenet> Boom.
<achin> so in total, those RFCs on disk take up about 300MB, but if you sum up the link sizes of that root block, it's about 500MB
<jbenet> yay dedup
<jbenet> interesting... i would've expected text to not dedup that much
<jbenet> maybe the pdfs?
<lgierth> or MUST and SHOULD
<lgierth> :)
<achin> someone pinned a previous version of that collection which has the .txt, but not the PDFs
<ion> I ^C’d an “ipfs pin add -r QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ” (due to bandwidth issues) and even restarted the daemon, but the daemon still keeps downloading the blocks. What to do?
<achin> so when you pinned this new one, it should have only needed to download the .pdf and .ps files
<lgierth> ion: ipfs pin rm
<whyrusleeping> ion: if you restarted the daemon, it shouldnt be downloading more blocks
<fazo> ion: that's why you dont pin data from strangers lol
<whyrusleeping> you can see what blocks your daemon is trying to fetch by running 'ipfs bitswap wantlist'
<lgierth> whyrusleeping: not even for pins?!
<achin> (also, /me is eminence on github)
<jbenet> achin: ah nice! do you have a twitter handle?
<jbenet> i just tweeted out about it and want to thank you
<whyrusleeping> lgierth: nope, a block wont be pinned until it is fully fetched. so if you kill a daemon in the middle of the process, you wont have a pin
<lgierth> oh ok
pinbot has quit [Ping timeout: 244 seconds]
<achin> i do, but since i never use it, it's as if i don't exist on twitter
<jbenet> ion: ^C attempts graceful teardown. do it again for full shutdown
<jbenet> ok
<fazo> if I download some file using go-ipfs
<fazo> does it use multiple sources to speed up the download yet?
<fazo> or is that not yet implemented
apophis has joined #ipfs
<drathir> fazo: i'm guessing it uses only the closest location atm...
<ion> The wantlist is empty. When i run ipfs repo gc repeatedly it finds blocks to remove every time. When i look at the blocks being added in ls -lrt ~/.ipfs/blocks/*/* they are all RFC stuff. I have verified that no ipfs processes are running and restarted the daemon.
<achin> RIP pinbot
<jbenet> hahaha yeah
<jbenet> lgierth: pinbot died. should i reboot it?
<lgierth> jbenet: umm, yes please! ssh root@deimos docker restart pinbot
<lgierth> it's probably still running
pinbot has joined #ipfs
<jbenet> !pin /ipfs/QmVYvR2hspELVVJZTGa57NhPFBsczyArH7FqdsKRRozWi6
<pinbot> now pinning /ipfs/QmVYvR2hspELVVJZTGa57NhPFBsczyArH7FqdsKRRozWi6
apophis has quit [Ping timeout: 244 seconds]
apophis has joined #ipfs
<pinbot> [host 3] failed to grab refs for /ipfs/QmVYvR2hspELVVJZTGa57NhPFBsczyArH7FqdsKRRozWi6: unknown ipfs-shell error encoding: text/html - "<html>\r\n<head><title>504 Gateway Time-out</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>504 Gateway Time-out</h1></center>\r\n<hr><center>nginx/1.9.3</center>\r\n</body>\r\n</html>\r\n"
<jbenet> !pin /ipfs/QmVYvR2hspELVVJZTGa57NhPFBsczyArH7FqdsKRRozWi6
<pinbot> now pinning /ipfs/QmVYvR2hspELVVJZTGa57NhPFBsczyArH7FqdsKRRozWi6
<jbenet> !botsnack.
<jbenet> !botsnack
<pinbot> om nom nom
<achin> is it nomming more tasty memories?
<lgierth> hrrrrmmm
<lgierth> these timeouts :/
<lgierth> i told nginx not to
<jbenet> achin: do you have the archive behind a nat? is your daemon running?
<jbenet> run https://github.com/jbenet/node-bsdash to see blocks moving
<achin> my daemon is running, and it's behind a NAT with TCP/UDP 4001 forwarded to the daemon
<achin> `ipfs swarm peers` reports that i'm connected to 128 peers so i think i'm well connected
<jbenet> yeah sounds good
chiui has quit [Read error: Connection reset by peer]
qqueue has quit [Ping timeout: 246 seconds]
<solarisfire> So you can host sites built with hugo pretty easily: https://ipfs.io/ipfs/QmbJw94bsQa2rUG78seMquWYUkRSn3z5knKbJLStuSEtUR/
<solarisfire> Hmmmm or not, makes too many absolute links, need to work on that...
<solarisfire> I'm pretty well connected :D
<solarisfire> solarisfire@ipfs:~/hugo$ ipfs swarm peers | wc -l
<solarisfire> 166
<xenkey> needs work
<VictorBjelkholm> solarisfire, regarding hugo, add -r the directory and use that instead and it will probably work
qqueue has joined #ipfs
<VictorBjelkholm> "Path Resolve error: no link named "about.html" under QmeQT76CZLBhSxsLfSMyQjKn74UZski4iQSq9bW7YZy6x8" would be fixed by that I think, for example
<solarisfire> Problem is you have to know the directory before you run hugo...
<solarisfire> But you add the files after...
<achin> i am starting to see other providers for some of these .pdf files, so it looks like pinbot did his job
<solarisfire> how do you check that?
<jbenet> solarisfire: i ran into this; hugo had no way of doing properly relative links back then. not sure if it's fixed, i posted an issue
<whyrusleeping> solarisfire: 'ipfs dht findprovs <hash>'
<solarisfire> Cool...
<solarisfire> Gives me a few then goes "error: routing: not found"
<jbenet> solarisfire: can probably write a script that post processes.
<jbenet> and changes the links
<solarisfire> yeah I think I might do that jbenet...
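A post-processing script like the one being discussed could look roughly like this. It is a sketch using naive string replacement — a real one would walk hugo's output directory and use an HTML parser; `relativize` and its depth handling are assumptions for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// relativize rewrites root-absolute hrefs/srcs into relative ones,
// given how deep the current page sits in the site tree. Root-absolute
// links ("/about.html") break under an ipfs gateway path prefix;
// relative ones ("../about.html") keep working.
func relativize(html string, depth int) string {
	prefix := strings.Repeat("../", depth)
	html = strings.ReplaceAll(html, `href="/`, `href="`+prefix)
	html = strings.ReplaceAll(html, `src="/`, `src="`+prefix)
	return html
}

func main() {
	page := `<a href="/about.html">about</a>`
	// pretend this page lives one directory below the site root
	fmt.Println(relativize(page, 1))
}
```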
<jbenet> whyrusleeping: i'm OOMing hard. something's leaking.
<solarisfire> Or I could rewrite their themes to be ipfs friendly...
<whyrusleeping> jbenet: uh oh. repro?
<jbenet> that would be a better solution. but prob easier to start with the script (80% solution), then move to that
<drathir> jbenet: any info about default ports collision with iperf? ideas on which one to change?
<jbenet> whyrusleeping just calling ipfs refs -r QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ on a machine with 512MB ram
<jbenet> drathir yeah we'll change them, not sure yet.
<solarisfire> if I do ipfs refs -r on a big directory it just kills my box...
<whyrusleeping> jbenet: its because the networks grown
<jbenet> we thought of 4737 for swarm protocol
<jbenet> (ipfs in dial pad)
<solarisfire> I see a ton of iowait, my queues grow to insane sizes, and my console freezes...
<jbenet> whyrusleeping :/ udp time.
wopi has quit [Read error: Connection reset by peer]
<whyrusleeping> yeah.
<drathir> jbenet: let me know on which one port to change i will try to make PR...
<whyrusleeping> tcp was fun...
wopi has joined #ipfs
qqueue has quit [Ping timeout: 240 seconds]
<jbenet> drathir: need to settle on all ports first, i'll post on go-ipfs
wasabiiii has quit [Ping timeout: 264 seconds]
wasabiiii1 has joined #ipfs
qqueue has joined #ipfs
jeffreyhulten has joined #ipfs
<ion> When initializing an ipfs repo, why isn’t just /ipfs/QmVtU7ths96fMgZ8YSZAbKghyieq7AjxNdcqyVzxTt3qVe pinned recursively, rather than also pinning all the files under it?
jeffreyhulten has quit [Quit: WeeChat 1.3]
jhulten has joined #ipfs
<giodamelio> solarisfire: you can make hugo use relative links if you set baseUrl="" and relativeUrls=true. See https://gateway.ipfs.io/ipns/QmWGb7PZmLb1TwsMkE1b8jVK4LGceMYMsWaSmviSucWPGG/
<drathir> jbenet: ok thank a lot, and sorry for extra task adding...
wasabiiii1 has quit [Ping timeout: 252 seconds]
apophis has quit [Ping timeout: 255 seconds]
<jbenet> drathir: no thank you this needs to get done
apophis has joined #ipfs
<jbenet> ion: i think it's because of the code. it should be just pinned recursively
wasabiiii has joined #ipfs
<samiswellcool> Think after I've finished the first bit of my ipfs project tracker, I'll be putting together a list of ideas in case I never get around to making any of them. I'm trying to write stuff down as I think of it
tsenior`` has quit [Ping timeout: 255 seconds]
dignifiedquire has joined #ipfs
danslo has joined #ipfs
<solarisfire> giodamelio: CRITICAL: 2015/09/16 unknown flag: --relativeUrls
<solarisfire> giodamelio: and baseURL is ignored by some themes so depends which one you use...
<giodamelio> solarisfire: relativeUrls is a config option (only in the newest version, I think). As for the baseUrl thing, I guess you will have to try theme by theme. I am using hyde-x and it works fine.
<solarisfire> giodamelio: yeah hyde is one of the ones it works in... I used "go get" so I'm pretty sure I'm on the latest version??
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<solarisfire> solarisfire@ipfs:~/hugo$ hugo version
<solarisfire> Hugo Static Site Generator v0.15-DEV BuildDate: 2015-09-16T15:06:45+01:00
shea256 has quit [Remote host closed the connection]
<giodamelio> Ya, then as long as the template is good, it should work.
tsenior`` has joined #ipfs
j0hnsmith has quit [Quit: j0hnsmith]
dignifiedquire has quit [Quit: dignifiedquire]
jhulten has quit [Quit: WeeChat 1.3]
<fazo> It works
<fazo> I wrote an ipfs bookmark manager
<fazo> UX horrible atm though
<whyrusleeping> fazo: ipfs bookmark manager??
<fazo> if anyone wants to try it: QmSyURHE4qBKWCNYoX51B6VRMMfFS4HBhbjKY6vSnJSByN
* whyrusleeping has wanted that for a while...
<fazo> yes it's a small web app
<fazo> very simple actually
<fazo> hacked together in 1 hour with questionable skills forgotten during the last months
<fazo> but it works
<whyrusleeping> ah neat
<fazo> I'm hosting it on a shitty connection though, it will take a while to load
<whyrusleeping> already got it, no worries
<whyrusleeping> (the network appears to be a little less congested this morning)
qqueue has quit [Ping timeout: 264 seconds]
<fazo> damn that was fast, it took 50 seconds to upload it to my server
<giodamelio> It only took a second for me
<fazo> it means the network is working very well
<fazo> or you are all using the default gateway which has it cached
<fazo> by the way you can see the github page if you click on the title
herch has quit [Ping timeout: 246 seconds]
<Vyl> Never got an answer to my question... is there any way to view a list of what's stored on my node?
<fazo> Vyl: yes, ipfs ls
<fazo> no wait
<fazo> that's wrong
<Vyl> Tried that.
<Vyl> I'm giving some thought to indexing ideas.
<whyrusleeping> Vyl: 'ipfs refs local'
<fazo> Vyl: if you want to see pinned stuff (stuff that wont be deleted when you run ipfs gc)
<fazo> Vyl: just run ipfs pin ls --type all
atrapado has joined #ipfs
jhulten_ has joined #ipfs
<Vyl> ipfs refs local... thanks.
jhulten has joined #ipfs
jhulten_ has quit [Client Quit]
<whyrusleeping> Vyl: yeah, that shows every block that is on your disk (except for a few that dont show up due to a bug :/ )
<Vyl> I'm going to write a quick perl script that will try to make sense of it.
wopi has quit [Read error: Connection reset by peer]
<Vyl> Help tell you a bit about what you have.
<jbenet> Vyl: awesome! that would be very useful.
wopi has joined #ipfs
<jbenet> there's a "find roots" script that whyrusleeping wrote-- let me find it for you
shea256 has joined #ipfs
<Vyl> That might save me some time.
<whyrusleeping> jbenet: maybe ipfs refs local should take a --roots arg that essentially runs that script?
<jbenet> whyrusleeping +1
<jbenet> whyrusleeping i'm looking for it in your gists-- is it public?
<whyrusleeping> although, if you make a node that links to everything else, then only that one will show up, lol
<whyrusleeping> uhm... maybe?
<whyrusleeping> one sec...
<achin> whyrusleeping: jbenet: i wonder if it would be possible to have keybase.io record/prove your ipfs peerID
<whyrusleeping> apparently not public: https://gist.github.com/whyrusleeping/9f2c83f3feed340e21ac
<whyrusleeping> achin: i have wanted that ever since i started using keybase
<jbenet> achin yes, but please dont get too tied to ipfs ids yet, we may have to change the storage format of keys which would change the output hash
<jbenet> whyrusleeping: ah thanks
<jbenet> Vyl o/ o/ o/ --- what it's doing is looking for refs that aren't linked to by anything else, hence "find roots
<whyrusleeping> i need to update that code, it wont build
<whyrusleeping> gimme a sec
<jbenet> whyrusleeping also change the roots[u.Key(lnk.Hash)] = false to a delete(roots, u.Key(lnk.Hash))
<jbenet> else you're going to OOm
M-atrap has joined #ipfs
<achin> you could sign a message using your peer private key and post it at /ipns/peerid/proof.txt which keybase could then verify
<achin> er, not your peer private key, but your keybase private key
shea256 has quit [Ping timeout: 256 seconds]
<whyrusleeping> there, updated the gist
thomasreggi has quit [Read error: Connection reset by peer]
<jbenet> achin: nice!
nausea is now known as neofreak
<jbenet> whyrusleeping: delete?
thomasreggi has joined #ipfs
<whyrusleeping> changing now...
<jbenet> oh i see, wont work
<whyrusleeping> ?
<whyrusleeping> why not?
<jbenet> because it may be added back later
<whyrusleeping> ah, yeah.
<whyrusleeping> i thought there was a reason i did it that way
<fazo> is running multiple nodes with the same keypair supported?
<whyrusleeping> (good thing i checked irc again before pushing that commit)
<whyrusleeping> fazo: no
<fazo> whyrusleeping: yeah, I figured. Just wanted to be sure :)
<ion> I initialized a fresh ipfs repo, ran the daemon, downloaded /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ/rfc11.pdf and ran watch ipfs bitswap stat. After the download had finished (and the wantlist was empty), “blocks received” and “dup blocks received” kept growing for quite a while, meanwhile iftop showed the daemon was downloading from *.i.ipfs.io at 100 to 500 kB/s. It stopped around 120
<ion> duplicate blocks the first time. I ran a GC and downloaded the same file again. The same thing happened and it’s downloading duplicate blocks from 162.243.248.213 and 104.236.32.22, 400 dup blocks and counting.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> ion: yeah.... bitswap is really dumb right now
<Vyl> Hmm... going to have to reimplement that script, I do not know the language. But I get the gist of it.
<whyrusleeping> we made it dumb so that file transfers would work while we built out the rest of the system
<rendar> what is bitswap used for?
apophis has quit [Quit: This computer has gone to sleep]
Score_Under has left #ipfs [#ipfs]
wasabiiii has quit [Ping timeout: 260 seconds]
jfis has quit [Ping timeout: 264 seconds]
wasabiiii has joined #ipfs
<achin> for moving bits around between nodes
<whyrusleeping> ^
<whyrusleeping> lol
<achin> (see the paper for more details)
<whyrusleeping> bitswap is the agent in ipfs that decides who to request blocks from, and who to send blocks to
<whyrusleeping> right now its answers to that are 'everyone' and 'everyone'
<achin> bits for everyone \o/
<whyrusleeping> yay!
<whyrusleeping> jbenet: it seems like my endless daemon shutdown fixes have regressed again
<whyrusleeping> its not as snappy to shut down as it once was
<ion> So this explains why i’ve been managing to transfer data at tens of kB/s while the daemon has kept my ADSL connection saturated at hundreds of kB/s. So it’s just a matter of time until a better bitswap implementation exists and these problems are no more?
apophis has joined #ipfs
<whyrusleeping> yeap!
ianopolous has joined #ipfs
<whyrusleeping> i thought we would get to doing it by the time it became a problem, but things happened faster than expected
<ion> Alright, thanks for the explanation.
<ion> And thanks for the great work so far.
<whyrusleeping> ion: you are very welcome :)
shea256 has joined #ipfs
jfis has joined #ipfs
<achin> if you get ion interested enough, he might even write you an ipfs implementation in haskell!
<ion> Sure, that seems like easy one-afternoon project, right?
<achin> sure :)
<whyrusleeping> jbenet: i dont know if you read backlog, but we broke builds on windows (i broke builds on windows)
<whyrusleeping> windows cant handle symlinks apparently
shea256 has quit [Ping timeout: 256 seconds]
tsenior`` has quit [Ping timeout: 260 seconds]
<Vyl> Is there a way to find out if an address is a directory?
<Vyl> An easy way, I mean.
<achin> ipfs ls <hash> should do it, i think
<achin> if you see stuff, it's a directory. else, it's not
<achin> i suppose this isn't perfect, because the hash could have named links, but not be a directory. but i don't think there is anything that would create those nodes in today's ipfs?
<jbenet> whyrusleeping great... i havent read backlog because i havent caught up with everything else.
<jbenet> Vyl: ipfs object get <object> --enc=json shows you the object.
<jbenet> warning: format will change a bit
<whyrusleeping> jbenet: yeah... was discussing it with gatesvp last night, i'm hoping he can help us come up with a solution. otherwise i have to revive my windows machine
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<ipfsbot> [go-ipfs] whyrusleeping created mfs-redo (+1 new commit): http://git.io/vZpSE
<ipfsbot> go-ipfs/mfs-redo 04594dd Jeromy: WIP...
<rendar> achin: which paper were you refering about?
<whyrusleeping> its really scary. pretty sure it has windows 8 on it, and chrome didnt work last time i tried
<achin> rendar: the IPFS paper has lots of good info in it. it's on the ipfs.io homepage
<whyrusleeping> its sole purpose in life was to open steam and run dark souls 2
tsenior`` has joined #ipfs
M-atrap has left #ipfs ["User left"]
shea256 has joined #ipfs
shea256 has quit [Remote host closed the connection]
shea256 has joined #ipfs
dignifiedquire has joined #ipfs
shea256 has quit [Read error: Connection reset by peer]
shea256 has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping opened pull request #1715: mfs redo (dev0.4.0...mfs-redo) http://git.io/vZpHn
wasabiiii has quit [Quit: Leaving.]
<ipfsbot> [go-ipfs] whyrusleeping force-pushed mfs-redo from 04594dd to b8912bc: http://git.io/vZpH2
<ipfsbot> go-ipfs/mfs-redo b8912bc Jeromy: Refactor ipnsfs into a more generic and well tested mfs...
apophis has quit [Quit: This computer has gone to sleep]
mildred has joined #ipfs
devbug has joined #ipfs
<rendar> achin: do you mean filecoin.pdf?
mildred has quit [Ping timeout: 272 seconds]
ELFrederich has quit [Quit: Leaving]
<achin> no, i mean the IPFS paper (since i think you were asking about ipfs). https://github.com/ipfs/papers/raw/master/ipfs-cap2pfs/ipfs-p2p-file-system.pdf
<jbenet> whyrusleeping: what happened? why are we OOMing on refs? i used to pin huge stuff without problem. nodes seem stable when not refs-ing.
<jbenet> whyrusleeping: maybe the 1.5 runtime is larger?
<whyrusleeping> jbenet: sorry, context? this is getting OOM from runnings ipfs refs?
<jbenet> yeah, on a large thing
<whyrusleeping> i'm pretty sure its just that the number of connections each node has is that much larger
<achin> (if that large thing is the RFC repo, my daemon went to 15GB just by adding it. might be related?)
<whyrusleeping> each active connection takes like, 1MB or so of memory if i'm right
pfraze has joined #ipfs
apophis has joined #ipfs
qqueue has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
tsenior`` has quit [Remote host closed the connection]
ianopolous2 has joined #ipfs
tsenior`` has joined #ipfs
<Vyl> Halfway done.
mildred has joined #ipfs
ianopolous has quit [Ping timeout: 250 seconds]
devbug has quit [Ping timeout: 240 seconds]
<jbenet> need to drop all this resource usage
ianopolous2 has quit [Remote host closed the connection]
ianopolous2 has joined #ipfs
<whyrusleeping> jbenet: yeah...
<whyrusleeping> could you guys run with IPFS_PROF=true ?
<whyrusleeping> and copy the ipfs.memprof file when the memory gets huge
<whyrusleeping> and then send me that along with your ipfs binary
<atrapado> https://gateway.ipfs.io/ipns/... says: internalWebError: context deadline exceeded
<Vyl> There.
<Vyl> It's ugly, but it should work.
<atrapado> it works now
<Vyl> QmU2DpieMuxxU8wSv98ZqY3ao7rkbdUcYEfZierDftcPY8 if anyone wants to test it.
<Vyl> It's an ugly load of perl only, but it'll show you what you have stored, and dump it to files.
bedeho has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
qqueue has quit [Ping timeout: 255 seconds]
qqueue has joined #ipfs
<Vyl> Mostly for satisfying curiosity.
<Vyl> Basically just takes 'ipfs refs local', strips out all the directories, and strips out anything that's referenced by another block. So what's left is just the root blocks for files.
<jbenet> Vyl nice stuff! maybe make a post about it in https://github.com/ipfs/notes or even https://github.com/ipfs/ipfs ?
<Vyl> I'm not the most social type. Never actually used github.
<Vyl> Feel free to post it there though.
shea256 has quit [Remote host closed the connection]
<Bat`O> does someone have the hash of a big file available somewhere so i can do some test ?
tsenior`` has quit [Remote host closed the connection]
shea256 has joined #ipfs
tsenior`` has joined #ipfs
<Vyl> It's an ugly script, probably stop working in future, but it's ok for showing a concept.
<ipfsbot> [go-ipfs] whyrusleeping force-pushed mfs-redo from b8912bc to 72c0282: http://git.io/vZpH2
<ipfsbot> go-ipfs/mfs-redo 72c0282 Jeromy: Refactor ipnsfs into a more generic and well tested mfs...
Pyrotherm has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<Pyrotherm> So, if the reference implementation is written in go, how does it interact with FUSE again?
<whyrusleeping> Pyrotherm: through fuse bindings
<whyrusleeping> these bindings: https://github.com/bazil/fuse
<Pyrotherm> hehe, this is awesome ;-)
<Tv`> "bindings" is a funny word
<Tv`> it's just a library
<whyrusleeping> well, yeah. not bindings i guess
shea256 has quit [Remote host closed the connection]
<whyrusleeping> its a native implementation of the protocol, right?
<Tv`> yes
shea256 has joined #ipfs
<whyrusleeping> i should probably stop calling it a binding then...
<Pyrotherm> i've been mostly python guy for years, but your project is going to make me learn go...
qqueue has quit [Ping timeout: 252 seconds]
<whyrusleeping> Pyrotherm: i've heard that lots of python people switch to go really easily ;)
<Pyrotherm> yeah, the syntax looks really familiar
<Pyrotherm> I haven't written any async/parallel code before, is there a lot of that in here?
<whyrusleeping> theres quite a bit of it, lol
<whyrusleeping> we have thousands of asynchronous routines running at any given moment
<Pyrotherm> well, looks like it's time to hit the books again
<whyrusleeping> (go makes this really cheap)
qqueue has joined #ipfs
<edrex> And the concurrency primitives are pretty easy to reason about
<whyrusleeping> ^ +1
j0hnsmith has joined #ipfs
<Pyrotherm> any good sites/books that you would recommend for learning go?
<edrex> Pyrotherm: that's how I learned Go: got really interested in another open source storage system, Camlistore
<pjz> yeah, ipfs and camlistore guys have to get together sometime and figure out how to interop ;)
<edrex> Pyrotherm: the tour is a good start, you can chew it up in a couple of hours https://tour.golang.org/welcome/1
<whyrusleeping> Pyrotherm: tour.golang.org is really good. but dont worry about the details too much
<edrex> (at the end I think you make a concurrent webserver
<pjz> it would be cool if you could use camlistore as a blockstore
rendar has quit [Ping timeout: 255 seconds]
<Pyrotherm> I've been trying to build a digital media library system for years in fits and starts, but there's always some better tech on the horizon, but THIS... well, ipfs is game-changing
<edrex> pjz: agreed. For now I've shelved Camlistore, but I think the metadata model is really well designed and interesting.
ianopolous2 has quit [Ping timeout: 260 seconds]
<edrex> I used it to store my document archive for 3 years
<pjz> Pyrotherm: I wish ipfs didn't require pulling everything into its own blockstore, though, esp. since it's not amazingly stable.
<Pyrotherm> my first concern is blockchain size...
voxelot has quit [Ping timeout: 272 seconds]
<edrex> but avoided using any of the metadata stuff because I knew it wasn't portable
<pjz> Pyrotherm: there's no blockchain /
<Pyrotherm> no?
<pjz> Pyrotherm: there's a content-based merkle-dag, but it's rooted at every node and only refers to that node's contents
qqueue has quit [Ping timeout: 240 seconds]
<Pyrotherm> each node being a machine on the network?
<edrex> pjz: you could definitely use camlistore as a blockstore
wasabiiii has joined #ipfs
<edrex> the trouble is that they have very similar but incompatible data structures
<Pyrotherm> pjz: what do you mean by "blockstore"?
<edrex> ie the hash format, sha1-xxxxxx vs 11xxxxx
vijayee_ has joined #ipfs
<edrex> Pyrotherm: the low-level format treats everything as a content-addressed "block" (ipfs and camlistore both)
rendar has joined #ipfs
<edrex> a block store is a very simple interface: PUT some data, get back a content address
<Pyrotherm> so just a KV store then
<edrex> GET a content address, retrieve the original data
<edrex> the difference is that the store computes the keys for you
<edrex> (content addressable)
<Pyrotherm> right, the central db guaranteed global ID uniqueness
wopi has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
ianopolous has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<edrex> well, because the keys are computed, rather than assigned, you don't need a central authority to "guarantee" (hash collisions are possible) global uniqueness
<edrex> so you can put the same object in the store on jupiter and inside a black hole, and get the same key
<Pyrotherm> so, there's a possibility my data will collide with other data?
<whyrusleeping> Pyrotherm: yeah, a 1/(2^256) chance
<edrex> oops, change "object" to "block" - objects refer to higher level entities in IPFS (and also in Camlistore)
<Pyrotherm> I assumed that it was a tiny chance, but what happens in this case?
<Pyrotherm> whoops your file has been trumped by another?
<achin> you could probably be semi-famous for finding a SHA256 collision
<whyrusleeping> Pyrotherm: not sure.
<whyrusleeping> if you can find a collision, please let me know
<edrex> Pyrotherm: things break? depends on the implementation
<rschulman> lol @achin
<edrex> if you are using a weak hash function, generating collisions is relatively easy
<whyrusleeping> if there was a collision, you could get the wrong data when requesting a hash
<rschulman> edrex: IPFS tries not to use those functinos.
<Pyrotherm> so basically, you could end up where the routing system has a closer node with a different set of data with the same address
<edrex> but generally people coding for CAS interfaces treat that as an impossibility (which is ok, as long as you are using good hash functions)
<achin> has there been one? ever?
<Pyrotherm> so, the hash function uses seeds that are unique to my machine to some extent, which makes the probability of a collisions that much less?
<edrex> (that's just cute)
shea256_ has joined #ipfs
shea256_ has quit [Remote host closed the connection]
<achin> getting a few bits in a hash to match is a far cry from a full collision
shea256_ has joined #ipfs
<whyrusleeping> achin: no, there has never been one for sha256
<edrex> Pyrotherm: no, the whole point is the key generation is globally consistent
<edrex> this allows global deduplication
<whyrusleeping> the only reason we are slightly worried about sha256, is in the case someone proves it is not a perfect hash
<whyrusleeping> which *is* much more probable than finding a genuine collision
<Pyrotherm> ok, so if you have two different sets of data with the same address and you try to replicate that data across the network...
<edrex> (coupled with content based chunking, which breaks large blocks into small blocks in a deterministic way)
<achin> is a possible attack on SHA256 likely to impact SHA512 as well? i don't know how similar they are in construction
shea256 has quit [Ping timeout: 240 seconds]
<edrex> Pyrotherm: they won't have the same address, unless god exists and likes to fuck with cryptographers
<Pyrotherm> am I just worried about something that will never happen? do hard drives or any other kind of supposedly-reliable storage medium use a hash function like this?
<edrex> Pyrotherm: yes, content addressable storage is a robust and widely-deployed technology in the storage industry
<Pyrotherm> or maybe your CPU becomes quantum-entangled with mine and produces same hash? /jk
<Pyrotherm> ok, so in more of a deduplicated-SAN type system that uses block storage...
<Pyrotherm> I am starting to see how this fits
<M-hash> achin: sha256 and sha512 are the same algorithm internally; the sha2 family. so, yes, attacks on one will almost certainly be relevant to the other.
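A minimal sketch of the property edrex and whyrusleeping describe, using plain SHA-256 in place of IPFS's full multihash format (an assumption for brevity): the key is computed from the content itself, so the same block yields the same key anywhere, and two different blocks collide only with probability about 1/(2^256).

```python
import hashlib

def content_key(data: bytes) -> str:
    # The address is derived from the bytes themselves, so no central
    # authority is needed: any machine computes the same key for the same block.
    return hashlib.sha256(data).hexdigest()

jupiter = content_key(b"the same block")
black_hole = content_key(b"the same block")
assert jupiter == black_hole  # identical content, identical address

# Odds that two *different* blocks share an address: ~1 in 1.16e77
print(f"1 in 2^256 = 1 in {2**256:.2e}")
```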
<edrex> Pyrotherm: yup
<vijayee_> whyrusleeping: I was following along on the "roll your own" example and I keep getting an error at the IDB58Decode step for the client connect "multihash too short. must be > 3 bytes"
<vijayee_> is the string output for the host step the right format to pass in?
wasabiiii1 has joined #ipfs
<Pyrotherm> if I wanted to help this project in any way, what would be a good starting point?
<giodamelio> I was wondering the same thing
wasabiiii has quit [Ping timeout: 250 seconds]
<Pyrotherm> I'll be spinning up some VMs with decent bandwidth tonight for fun, and to add some capacity to the network
shea256_ has quit [Read error: Connection reset by peer]
<achin> M-hash: it seems fitting that someone named M-hash can answer a question about hash functions :)
j0hnsmith has quit [Quit: j0hnsmith]
<whyrusleeping> vijayee_: oooh, yeah. its printing out <peer abcde> right?
<whyrusleeping> you need to pass in the whole hash...
<M-hash> heh. it only took some of my closest friends a calendar year to realize maybe my moniker choice represents something of a religious obsession. ;) it's awful in this channel though. so many pings haha.
<whyrusleeping> M-hash: lol, hash hash hash!
<vijayee_> yeah that is what it prints
<vijayee_> so if I pass it the full hash do I even need to encode it?
simonv3 has quit [Quit: Connection closed for inactivity]
<whyrusleeping> vijayee_: yeah, i need to change the print to be id.B58String() or something
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<achin> we all should shorten multihash to M-hash :D
shea256 has joined #ipfs
pfraze has quit [Remote host closed the connection]
<edrex> pjz: If you're interested, I've thought about camlistore interop a lot.
<whyrusleeping> jbenet: anacrolix's utp lib gets a little over 100Mbit on the localhost
<vijayee_> whyrusleeping: thanks, it works now. I changed it to output node.Identity.Pretty()
<whyrusleeping> i think i can probably improve that a bit, but i think it might be good enough to move forward with
<whyrusleeping> vijayee_: awesome :)
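For reference, the B58String form whyrusleeping mentions is base58btc over the raw multihash bytes — IDB58Decode fails on the truncated debug-print form because it expects the full encoding. A minimal sketch of that encoding (helper name is hypothetical; the alphabet is the standard Bitcoin one):

```python
# Base58btc alphabet: no 0, O, I, or l, to avoid visual ambiguity.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(raw: bytes) -> str:
    # Treat the bytes as one big-endian integer and emit base-58 digits.
    n = int.from_bytes(raw, "big")
    digits = ""
    while n > 0:
        n, rem = divmod(n, 58)
        digits = ALPHABET[rem] + digits
    # Leading zero bytes are preserved as leading '1' characters.
    pad = len(raw) - len(raw.lstrip(b"\x00"))
    return ALPHABET[0] * pad + digits

print(b58encode(b"hello world"))  # StV1DL6CwTryKyV
```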
dignifiedquire has quit [Quit: dignifiedquire]
<jbenet> whyrusleeping: so the report is incorrect?
<jbenet> whyrusleeping have a script to try it?
<edrex> pjz: One thing I think would really help in that regard would be a human-readable, "plain old filesystem" storage/export format
<edrex> which keeps all the richness of the data model (bidirectional) while exposing a static version of it in a way that's useful to a human browsing it.
<M-hash> whyrusleeping: evul D:<<
<edrex> (not the low-level data model, the high level one)
tsenior`` has quit [Ping timeout: 260 seconds]
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
vijayee_ has joined #ipfs
<jbenet> whyrusleeping: bandwidth = 4367629.673922 b/s
<jbenet> that reads as 4.3Mb/s to me.
<jbenet> (well, sort of, need the 1024 division)
voxelot has joined #ipfs
<jbenet> vijayee_ still working on the tour?
<jbenet> giodamelio offered to help
<whyrusleeping> jbenet: bandwidth = 14344813.081986 b/s
<whyrusleeping> on my machine
<jbenet> so 14 Mb/s
<whyrusleeping> and those are bytes, not bits
kerozene has quit [Max SendQ exceeded]
<jbenet> so 14MB/s
<whyrusleeping> == > 100Mbit
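The unit confusion above is worth spelling out: the benchmark reports bytes per second, so the figure converts like this (numbers taken from whyrusleeping's run):

```python
reported = 14_344_813.081986  # bytes/s, as printed by the benchmark

megabits = reported * 8 / 1_000_000   # decimal Mbit/s
mebibytes = reported / (1024 ** 2)    # binary MiB/s

print(f"{megabits:.1f} Mbit/s ({mebibytes:.2f} MiB/s)")
# ~114.8 Mbit/s -- "a little over 100Mbit", as claimed
```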
<vijayee_> jbenet: I haven't touched it in a while, I could kick back into gear if its needed. I wanted to rewrite the testing portion of it to be independent of the go test framework
<jbenet> yeah idk. it's not great.
<jbenet> vijayee_ it would be really helpful to have this soon
<whyrusleeping> its not great, but its pure go, theres room for improvement, it works, and it would help our resource consumption a lot
<vijayee_> I'm not sure what the test cases should be for the different chapters
<jbenet> whyrusleeping: we're not going to be go-gettable in the long run.
<jbenet> i think we should just wrap libutp
dignifiedquire has joined #ipfs
<whyrusleeping> yeah, we could do that too
<jbenet> we have prebuilt bins, what's the major issue with that?
kerozene has joined #ipfs
<whyrusleeping> nothing against doing that, but we have a pure go utp lib, and we dont have a utp lib wrapper
<jbenet> whyrusleeping there's also libquic
<whyrusleeping> libquic would be cool too
<vijayee_> jbenet: I can probably finish all but what the literal test cases are by monday
<achin> (forgive the uneducated interjection, but is UDT useful at all here? http://udt.sourceforge.net/ )
<whyrusleeping> achin: udt is one of our options, we have it wrapped already
<jbenet> Oh interesting
captain_morgan has joined #ipfs
<jbenet> whyrusleeping: aparently i started on this a while back https://github.com/jbenet/go-libutp
<whyrusleeping> lol
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet> this was way before https://github.com/jbenet/go-udtwrapper
therealplato has joined #ipfs
<jbenet> so like it probably doesnt do anything
ianopolous has quit [Ping timeout: 240 seconds]
<jbenet> yeah it's not even net pkg friendly. i'll write these-- here try out the udt wrapper for now
<jbenet> and see how it goes
<whyrusleeping> your udtwrapper?
leeola has joined #ipfs
<jbenet> giodamelio: pls note IPNS is still wonky. i suggest not updating that page until tomorrow and the HN traffic passes.
<jbenet> whyrusleeping yeah:
patcon has joined #ipfs
<giodamelio> jbenet: ok, ill hold off
ygrek has joined #ipfs
<jbenet> i'm proud that our gateways handle the HN blasts so well. ipfs.io is so snappy.
<jbenet> gj lgierth whyrusleeping and everyone else making ipfs
<achin> if i wanted to help host the HN site, i could easily pin it. but i might not want to pin it forever. it'd be neat to have a way to say "pin this content for the next 72 hours only"
apophis has quit [Quit: This computer has gone to sleep]
<jbenet> achin: indeed, i think there's a ton to do with pins, could have a whole set of interesting pin programs, that check other conditions before pinning or unpinning
<jbenet> (thankfully, it's all shell scriptable ;) )
<achin> yeah, there are a lot of neat usecases to support in the future
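The "pin programs" jbenet sketches could be as small as a timer. A toy version of achin's 72-hour pin, as pure logic (no real `ipfs pin` calls; an actual script would shell out to the CLI):

```python
def should_unpin(pinned_at: float, ttl_hours: float, now: float) -> bool:
    # Decide whether a time-limited pin has expired; a real pin program
    # would run `ipfs pin rm <hash>` when this returns True.
    return now - pinned_at > ttl_hours * 3600.0

start = 0.0
assert not should_unpin(start, 72, now=71 * 3600)  # still inside the window
assert should_unpin(start, 72, now=73 * 3600)      # 72 hours are up
```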
<jbenet> whyrusleeping: another option is to add both anacrolix and udt, open both ports, and see how they both go.
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
vijayee_ has joined #ipfs
<jbenet> btw, "https://ipfs.io/ipns/ipfs.io" is one of my favorite urls. it's just protocols.
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
dokee has joined #ipfs
<dokee> hello
<achin> hi
<dokee> I posted a message in the google group but maybe the discussion will be easier here. It's about cyclic hyperlinks. For example I'm wondering how we can set up this very small static website with ipfs : http://bucketaatest.s3-website-us-east-1.amazonaws.com/static1.html
<vijayee_> jbenet: for testing each command should I have the user pipe in the output from their local ipfs to the tour then verify their current chapter?
<vijayee_> I'm not sure I know what you are imagining from the current content
tsenior`` has joined #ipfs
<jbenet> Dokee: options: (1) relative links (href="./static2.html"), (2) if truly cannot do relative (i suspect you can), use IPNS
shea256 has quit [Remote host closed the connection]
<dokee> jbenet: I can do relative ... but won't the link change when the static2.html file is hashed ?
<jbenet> vijayee_ not sure. whatever is convenient for you. ideally user could just run "<cmd> test" and it should do all the checking
<jbenet> Dokee: add a directory
<jbenet> gimme a sec i'll show you
<dokee> jbenet: thanks
<fazo> Dokee: if you "ipfs add -r <directory>" then you will have a hash for every file and directory. If the directory hash is for example "somehash", and the directory contains "index.html" and "style.css" you can /ipfs/somehash/index.html and /ipfs/somehash/style.css
j0hnsmith has joined #ipfs
<dokee> fazo: ok, that's the answer I was looking for then: the final file name doesn't change, we still use the full name, not the hash
<fazo> Dokee: so relative paths will work unless you try to access the parent directory of the directory you added
<fazo> Dokee: yes you can use the original name, but you need to use the hash for the first folder in the path
<fazo> Dokee: so after /ipfs/ you have an hash, but then you can have a normal relative path
<dokee> fazo: ok. if the directory hash in relation with its content ? ie if the content changes does the directory hash change ?
<dokee> is*
<fazo> Dokee: yes
<fazo> Dokee: the same file will always give the same hash and different files will always give different hashes
<dokee> fazo: hummmm... I'll wait for jbenet and his example then ... still don't get it
<fazo> Dokee: so if we both add the bootstrap.min.css from getboostrap.com, if the file is identical the hash is the same, so to ipfs, it's the same file
<giodamelio> Dokee: I just put up a little tutorial that shows almost exactly what you want. https://gateway.ipfs.io/ipns/QmWGb7PZmLb1TwsMkE1b8jVK4LGceMYMsWaSmviSucWPGG/2015/09/15/hosting-a-website-on-ipfs/
<fazo> Dokee: don't worry :)
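What fazo describes can be sketched concretely: a directory object is just a list of (name, child-hash) links, so its own hash changes whenever any file changes, while file names stay usable as path segments after the root hash — which also answers dokee's cyclic-link worry, since the pages link to each other by relative name, not by hash. Plain SHA-256 stands in for the real merkledag encoding here:

```python
import hashlib

def block_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def dir_hash(entries: dict) -> str:
    # A directory is its sorted list of (name, child-hash) links, hashed.
    listing = "\n".join(f"{name} {h}" for name, h in sorted(entries.items()))
    return block_hash(listing.encode())

site = {
    "static1.html": block_hash(b'<a href="./static2.html">next</a>'),
    "static2.html": block_hash(b'<a href="./static1.html">back</a>'),
}
root = dir_hash(site)
# Relative links keep working as /ipfs/<root>/static1.html etc.,
# and the cycle between the two pages is no problem at all.

edited = dict(site, **{"static2.html": block_hash(b"something else")})
assert dir_hash(edited) != root  # content changed, so the root hash changed
```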
wasabiiii has joined #ipfs
wasabiiii1 has quit [Ping timeout: 250 seconds]
<dokee> giodamelio: thanks but not really. The problem is not really the link but the cyclic link. There you don't have a cycle in your links. your css is not referencing anything
<giodamelio> Ah, sorry I missed that part
<spikebike> giodamelio: very nice
<giodamelio> spikebike: thanks
<jbenet> notice my asciinemas o/ they're hosted on ipfs
<dokee> @jbenet: ok saw your example. Understood you can use the full name if you're using relative paths ! now I suppose ipfs can still distribute it by using the directory hash... great ! So next question: it's not in the same directory or same server.
<dokee> @jbenet: thanks for taking care of creating this example by the way :)
<fazo> Dokee: it doesn't matter which server has it, as long as at least 1 server who has the file is online, the file will arrive
qqueue has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet> Dokee: the point is you can always create a "directory" that points to the two other things, and put them on "the same place"
<fazo> Dokee: that's the point of IPFS: you don't care where the file is
<dokee> even if I don't own the 2 files ?
<fazo> Dokee: to access a file you just need the hash
<jbenet> so i can make a file that always links to: <a href="./something_else.html">surprise!</a> or even ../something_else.html, and others can "add it to their own dirs".
<fazo> Dokee: you can create an IPFS folder with files you don't own right now: a folder is just a list of links to its files or other folders
<fazo> Dokee: your machine will download the files you need from some other machine
<lgierth> jbenet: do we have yet another HN post?
<dokee> no copyright issue copying the file ?
<lgierth> just asking cause that was plural up there ^ :)
mildred has quit [Ping timeout: 264 seconds]
<giodamelio> lgierth: ya, that was me
<giodamelio> sorry if it's causing any trouble
<fazo> giodamelio: looks like my node has trouble downloading your blog :(
apophis has joined #ipfs
<fazo> giodamelio: can only see it from gateway.ipfs.io
<Bat`O> is there a reason for ipfs pin add to hang until everything is local and to add the pin only at the end ?
<giodamelio> fazo: really? I can see it fine from my local gateway. How would I go about troubleshooting my connection to the network?
<spikebike> Bat`O: if pin doesn't block, how can you tell if it worked?
<fazo> giodamelio: uhm I don't know
<fazo> giodamelio: we could see if our nodes are connected.
<jbenet> lgierth: no it's giodamelio on HN today
<vijayee_> jbenet: should the chapter contents be the same as the help text for the corresponding command for that chapter
<fazo> that's me: QmdATh2XUvfy5jys78RoeFSKLyBjeMDv9cjby2TRwjCd3X
apophis has quit [Client Quit]
<jbenet> vijayee_ not sure, it doesnt matter. i originally had some plan, but i think we can try many things and see what works
<Bat`O> spikebike: you could store "i want this hash" and download in the background
<spikebike> Bat`O: then future commands that need that hash may or may not work
nessence has quit [Read error: Connection reset by peer]
apophis has joined #ipfs
nessence has joined #ipfs
<fazo> spikebike: Bat`O: maybe an option to download in the background
<fazo> that places the pin when it finishes downloading
<fazo> or well, just add & to the command
<Bat`O> ok, say otherwise, is there a way to trigger the download of a hash without hanging ?
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<Bat`O> fazo: i'm use the http API to build an app, so not doable
<Bat`O> using*
apophis has quit [Client Quit]
<fazo> Bat`O: oh, makes sense
vijayee_ has joined #ipfs
<fazo> is there a way to add a file or directory but get it wrapped in a directory with a custom name?
mildred has joined #ipfs
<fazo> for example "ipfs add -r dir/ --name hello" would create a directory with a directory named "hello" with the content of "dir/"
<fazo> it could be useful to then publish the hash
<fazo> oh I'm an idiot,
<fazo> it's in ipfs name publish already
<Bat`O> fazo: ipfs add -w
gordonb has joined #ipfs
<fazo> Bat`O: uhm, but what about the directory name. It takes it from the object i add?
<Bat`O> no idea :)
<fazo> I have another question then
<fazo> is there a way to manually build a directory object?
<fazo> like I give it the hashes and the name to associate to the hashes
<fazo> and it creates the directory and gives me the hash
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<j0hnsmith> I’m going to Pycon UK tomorrow, I’m going to put myself up for a talk about IPFS. I can cover the basics (I only found out about IPFS at container.camp on Friday) but I’m looking for any python specific topics, any ideas?
<Bat`O> fazo: maybe try ipfs object {new, patch}
<fazo> j0hnsmith: I think there's an api wrapper written in python
<fazo> Bat`O: I'll see if it fits
<Bat`O> fazo: not sure what it does
vijayee_ has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]
<fazo> Bat`O: ipfs object put does what I need :)
<fazo> Bat`O: try ipfs object put --help if you're curious
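For reference, the document fazo feeds to `ipfs object put` pairs names with hashes. A sketch of building one — the field names ("Data", "Links", "Name", "Hash", "Size") follow the go-ipfs object format of this era, but treat the exact shape, and the child hashes below, as assumptions:

```python
import json

def make_dir_node(entries: dict) -> str:
    # Assemble the {"Data", "Links"} JSON that `ipfs object put` consumes;
    # each link carries a Name, a Hash, and a Size (assumed schema).
    return json.dumps({
        "Data": "",
        "Links": [
            {"Name": name, "Hash": h, "Size": 0}
            for name, h in sorted(entries.items())
        ],
    }, indent=2)

doc = make_dir_node({
    "index.html": "QmHashOfIndexGoesHere",  # hypothetical child hashes
    "style.css": "QmHashOfStyleGoesHere",
})
print(doc)
```

Piping that document into `ipfs object put` should then return the hash of the new directory object, which can be published.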
pfraze has joined #ipfs
captain_morgan has quit [Ping timeout: 246 seconds]
<j0hnsmith> fazo: yes, I saw that
<achin> ion: i saw. that is some good rsyncfoo
qqueue has quit [Ping timeout: 260 seconds]
screensaver has quit [Ping timeout: 246 seconds]
qqueue has joined #ipfs
Spinnaker has quit [Ping timeout: 256 seconds]
apophis has joined #ipfs
<fazo> Why can't multiple ipns names for a node be implemented by representing the list of names as a folder?
<fazo> Every node would have 1 IPNS hash which points to a folder with many other files/folders, each one is a different ipns "name" originating from that ipns node
Spinnaker has joined #ipfs
<fazo> When a name is updated to a different hash, just recreate the folder and set it as the new ipns value
<ion> jbenet, whyrusleeping: It would be cool to have an “ipfs diff” which recursively gets/accesses only the blocks whose checksums differ to pass to “diff”. And it will be even more efficient when files get chunked by a rolling hash.
tsenior`` has quit [Ping timeout: 240 seconds]
<whyrusleeping> ion: what do you mean?
<whyrusleeping> what would the usecase be?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> j0hnsmith: if you have an outline or some content written down for your talk, i can definitely take a look and give some feedback
<ion> whyrusleeping: For instance, i just looked at diff -u <(ipfs ls /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ) <(ipfs ls /ipfs/Qmew9eTZSvPPJVZcv9WLdH7VJ9nKe4QVxxsi4iSLXNhkrq) to see what changed between the RFC dumps and then looked at the difference between /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ/rfc-index.txt and /ipfs/Qmew9eTZSvPPJVZcv9WLdH7VJ9nKe4QVxxsi4iSLXNhkrq/rfc-index.txt. It would
<ion> be nice to have “ipfs diff /ipfs/QmeZnaoktRvs3Ek8pQNyj8BbrkKuJ5Ez6JHi32dazL7cEJ /ipfs/Qmew9eTZSvPPJVZcv9WLdH7VJ9nKe4QVxxsi4iSLXNhkrq” which determines which files in the directories have changed, and why not also which blocks within files have changed, to provide a diff for human consumption.
<whyrusleeping> ooooh, i totally wrote code for that
mildred has quit [Ping timeout: 240 seconds]
<ion> Thanks
<whyrusleeping> its not exposed through the API at all, but its there
<ion> nice
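ion's idea works precisely because of content addressing: equal subtrees have equal hashes, so a diff only needs to recurse where hashes differ. A toy model (directories as dicts, file "hashes" as plain strings):

```python
def diff_trees(a, b, path=""):
    # a, b: either a "hash" string (file) or {name: subtree} (directory).
    changes = []
    if a == b:
        return changes  # identical hash: nothing underneath can differ
    if isinstance(a, dict) and isinstance(b, dict):
        for name in sorted(set(a) | set(b)):
            sub = f"{path}/{name}"
            if name not in a:
                changes.append(("added", sub))
            elif name not in b:
                changes.append(("removed", sub))
            else:
                changes.extend(diff_trees(a[name], b[name], sub))
    else:
        changes.append(("modified", path))
    return changes

old = {"rfc-index.txt": "aaa", "rfc1.txt": "bbb"}
new = {"rfc-index.txt": "ccc", "rfc1.txt": "bbb", "rfc2.txt": "ddd"}
print(diff_trees(old, new))
# [('modified', '/rfc-index.txt'), ('added', '/rfc2.txt')]
```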
G-Ray has joined #ipfs
<G-Ray> Hi
<j0hnsmith> whyrusleeping: I don’t have anything yet, I was thinking I’ll just do a demo of the basic commands etc etc
<G-Ray> Is anybody working on implementing i2p routing in IPFS ?
<whyrusleeping> G-Ray: afaik, nobody is working on i2p stuff, there has been some work on tor though
<whyrusleeping> j0hnsmith: okay, well let me know if you want any feedback, i'd love to help make your talk awesome
<j0hnsmith> whyrusleeping: thx
<G-Ray> IPFS + i2p would be amazing right ?
<G-Ray> as long as speed is correct
<whyrusleeping> G-Ray: it would be pretty awesome to have, yeah
j0hnsmith has quit [Quit: j0hnsmith]
<G-Ray> whyrusleeping: Do you know how hard it would be ?
<whyrusleeping> G-Ray: right now, i'm not sure. I dont think it would be an absurd amount of work, for someone who is familiar with i2p
qqueue has quit [Ping timeout: 252 seconds]
<whyrusleeping> and depending on if you have to implement it manually or if you can use an existing library
<G-Ray> whyrusleeping: I may take a look at it soon. It would be a great project to do :)
<whyrusleeping> as far as changes to ipfs, you would have to implement an i2p net interface
<whyrusleeping> in the multiaddr-net package
<whyrusleeping> and build in support for i2p addresses in multiaddr
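The multiaddr change whyrusleeping describes amounts to teaching the address format a new protocol tag. A toy parser makes the shape clear — the `i2p` entry here is the hypothetical addition under discussion, and real multiaddrs use registered binary codes rather than a string set:

```python
# Protocols the parser accepts; adding a transport starts with registering it.
PROTOCOLS = {"ip4", "ip6", "tcp", "udp", "utp", "i2p"}

def parse_multiaddr(addr: str) -> list:
    # "/ip4/127.0.0.1/tcp/4001" -> [("ip4", "127.0.0.1"), ("tcp", "4001")]
    parts = addr.strip("/").split("/")
    pairs = list(zip(parts[::2], parts[1::2]))
    for proto, _ in pairs:
        if proto not in PROTOCOLS:
            raise ValueError(f"unknown protocol: {proto}")
    return pairs

print(parse_multiaddr("/ip4/127.0.0.1/tcp/4001"))
# [('ip4', '127.0.0.1'), ('tcp', '4001')]
```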
wasabiiii1 has joined #ipfs
qqueue has joined #ipfs
wasabiiii has quit [Ping timeout: 240 seconds]
jhulten has quit [Ping timeout: 240 seconds]
dokee has quit [Quit: Page closed]
<G-Ray> whyrusleeping: Thanks ! I can't imagine how great it would be if ipfs+i2p was widely used
<whyrusleeping> G-Ray: i'm all about ipfs being used all over the place :)
<G-Ray> whyrusleeping: Are you and jbenet working on IPFS at full time ?
<whyrusleeping> G-Ray: we are, yes
<G-Ray> whyrusleeping: Are you paid ?
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<whyrusleeping> G-Ray: yeap, i'm an employee of Protocol Labs Inc
j0hnsmith has joined #ipfs
<achin> is Protocol Labs Inc also known as Interplanetary Networks?
<whyrusleeping> achin: yeah, we changed the name
<elavoie> Hi all, I am a newbie here. I am keenly interested in using IPFS for a distributed computing platform for the Web for my PhD project. (I saw daviddias new repo with similar goals ;-))
<whyrusleeping> we werent super sold on the earlier name
<whyrusleeping> elavoie: welcome to the channel!
<daviddias> elavoie: Welcome! :D
<G-Ray> whyrusleeping: So if it's your company, from where does your found come from ?
<daviddias> did you see webrtc-explorer or the browserCloud.js research?
<G-Ray> funds
<elavoie> quick question: suppose I add content on IPFS using 'ipfs add ...' while my daemon is running and I subsequently shut it down, will IPFS currently maintain a copy of the files?
<daviddias> that's very cool btw :)
<daviddias> elavoie: the content will be discoverable through IPFS as long as at least one node serves it
<elavoie> as far as I understand no, right? I need to keep republishing for it to stay alive?
<daviddias> if your node was the only one serving it, then the answer is no
<whyrusleeping> G-Ray: from investors in the company right now
<elavoie> I see, so atm someone else needs to explicitly pin it down for it to stay permanent
<daviddias> but for example, if you had it shared with another person and they downloaded it through IPFS, then they will have a copy that they can serve
<fazo> whyrusleeping: I made a partial archive of xkcd so that we can test it. http://gateway.ipfs.io/ipfs/QmSeYATNaa2fSR3eMqRD8uXwujVLT2JU9wQvSjCd1Rf8pZ
<fazo> whyrusleeping: I included the license and randall's "about" html page
<fazo> whyrusleeping: and the script I wrote to download the data
<elavoie> it that a cached copy that would eventually be purged?
<elavoie> is*
<achin> yes, unless someone else pinned it
<Vyl> Eventually, yes, as I understand it, unless one of two conditions are met: Either someone sets it to pin, or requests for it continue to sustain it. I think that's right.
<fazo> elavoie: when you do ipfs add, the copy is never purged from your local machine
<fazo> elavoie: because it's pinned
<elavoie> I see
<whyrusleeping> fazo: looking good!
<fazo> elavoie: as long as someone is connected to a server which has the file then the file will be available to that someone
<elavoie> so in other words, the storage backend for IPFS and the economic incentives to make people hold the data are still in the works
<fazo> elavoie: so if you want to host something and be sure it will be available you need to have at least 1 machine which is on the net 24/7 and has it pinned
<fazo> elavoie: well the storage backend already works fine
<elavoie> understood, so I should think of IPFS as a combined GIT+BitTorrent for the moment
<fazo> elavoie: ipfs doesn't make it so you don't need to host data anymore
<ekaron> is there a plan for intelligent pinning in the protocol, or is that up to end users or end user's application logic? Is there a network incentive to pin?
<fazo> elavoie: yes, the point of ipfs is that you don't care where the data is
<ekaron> or is it just there to limit the periodic downtime of one central location?
<Vyl> You can regard it as a 'magic store.' Stuff goes in, stuff comes out queried by hash.
<daviddias> elavoie: have you looked at http://filecoin.io/ ?
<fazo> ekaron: there's no incentive yet, except that you care about the data
<elavoie> yes
<Vyl> How it handles internally only matters if you're actully designing it.
<daviddias> ok :)
<jbenet> daviddias elavoie: we should probably make diagrams out of all these questions.
<elavoie> so that is sufficient for my first use case: A lab shares their public data on IPFS, runs computation on them and use IPFS as an efficient discovery and distribution platform and stores back the results locally.
<achin> need more spacecats
<fazo> elavoie: you could use ipfs to efficiently distribute the data, especially if multiple servers end up needing the same data
<daviddias> jbenet: you make always such awesome gifs
<jbenet> daviddias: thanks! need to be "playables so you can pause them and advance them yourself"
<jbenet> (does webm allow this?)
<daviddias> sorry, never tinkered with it
<achin> you can indeed pause webms, but to advance you generally have a scrollbar widget. so you can't easily advance by exactly 1 frame, for example
<elavoie> Do you guys have plans to provide distributed storage where the network would proactively and automatically manage data to make sure it stays available regardless of nodes joining in or leaving? (Similar to what MaidSafe is aiming for?)
<fazo> jbenet: how do you do such awesome gifs?
<fazo> elavoie: I don't think such system is in the scope for IPFS, but you could use IPFS to build it
<jbenet> fazo: keynote -> export pngs -> splice
<spikebike> elavoie: not currently.
thomasreggi has quit [Remote host closed the connection]
qqueue has quit [Ping timeout: 256 seconds]
<jbenet> oh also, add more cats. always.
<jbenet> a gif cannot have too many cats
<spikebike> jbenet: any libp2p news?
<Vyl> There have been similar projects before, but with more specialised ambitions.
<fazo> jbenet: I'm going to steal that
<jbenet> fazo go for it!
<daviddias> spikebike: I guess I have the answer for you
<elavoie> thanks, the project is actually complementary then.
thomasreggi has joined #ipfs
thomasreggi has quit [Remote host closed the connection]
<Vyl> Freenet is based on a similar concept, but optimised for extreme privacy and censorship-resistance, for example. Which comes with serious performance penalties due to the amount of redirection required - it's very, very slow.
<fazo> jbenet: you mean the ipfs-cat-gifs or the automatically managed distributed data storage? because there's a slight difficulty difference
simonv3 has joined #ipfs
<spikebike> elavoie: two hardest issues are who pays for the extra storage and how can you protect people from hosting illegal content
<spikebike> daviddias: oh?
<daviddias> in short, yes, we are very close as I did the first iteration (means it works) of the Distributed Record Store using the DHT (and therefore protocol muxing, stream muxing, discovery, all of that)
thomasreggi has joined #ipfs
<jbenet> fazo: i meant the cats
<daviddias> right now I'm refactoring swarm to make it more modular and enable more transports
<Vyl> And decentralised stores have been commonplace in the piracy community ever since the first gnutella and ed2k protocols. Which gave them something of a poor reputation, as their primary use was for copyright infringement.
<jbenet> elavolie and fazo: but you should see filecoin.io -- which is just that
<fazo> elavoie: such a distributed storage system as you described could also be built just by using a different ipfs implementation which is a lot smarter about handling cached data
qqueue has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping pushed 1 new commit to private-blocks: http://git.io/vZjUl
<ipfsbot> go-ipfs/private-blocks 54bae3e Jeromy: fixup naming...
<spikebike> daviddias: sounds excellent. I'm eyeing building a p2p backup widget on top of IPFS.
<Vyl> Anyway, sleeptime for me.
<daviddias> then there is a thing we have to review and make the final call on IPRS https://github.com/ipfs/specs/issues/24#issuecomment-140136140
<fazo> jbenet: of course, but I'm going to use my own house cats
<achin> mrow
<whyrusleeping> fazo: thats the best kind. cats that havent been on the internet yet
<whyrusleeping> googles cats are overused
<jbenet> daviddias: on that-- i think our last two comments work fine.
<daviddias> and then, the thing other than extensive testing and coverage, is creating the WebCrypto module that takes care of using the WebCryptoAPI or the Node.js crypto module, so that we can use it on the browser as well
<daviddias> jbenet: I believe it too :) but well, I still want to check notary stuff and make sure I'm not missing anything
<jbenet> achin: dammit, how did you guess my password to everything
<fazo> whyrusleeping: what do you mean never been on the internet? mittens runs gentoo
<elavoie> My second use case would be to eventually send computations where the data is on the network to avoid moving it around and have everything happen in parallel.
<G-Ray> any progress on filecoin ?
<G-Ray> that you can share
<jbenet> G-Ray i was going to say yes, but :)
<achin> jbenet: fun fact: my wifi SSID is "mrow3" (mrow and mrow2 have been retired)
<jbenet> G-Ray sorry for the delay, but we want to make things right, and pushing out IPFS first is important.
<jbenet> achin: nice
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
atrapado has quit [Quit: Leaving]
<elavoie> daviddias: I am planning to go spend March-April in Lisbon to work with Miguel Correia at IST. Maybe I'll see you there ;-)
ianopolous has joined #ipfs
<daviddias> that sounds great! Who is Miguel Correia?
<fazo> wait, is there a connection between filecoin and ipfs?
<achin> both are developed by http://ipn.io/
<elavoie> http://www.gsd.inesc-id.pt/~mpc/ He has done a lot of work in Byzantine algorithms
<fazo> achin: thanks
<jbenet> haha, nice user name
<jbenet> "mpc"
<jbenet> (multiparty computation)
<elavoie> ;-)
<daviddias> aaaha 'Pupo', now I know him :)
<elavoie> well I have not stayed long enough last time to know him by his nickname ;-) I'll make sure I use it next time ;-)
<spikebike> daviddias: "used considerably"? (from the first line of that ticket).
<daviddias> that is really cool
<daviddias> Well, he is Miguel Pupo Correia, and since Pupo is more unusual, everyone knows him by Miguel Pupo, that is why :)
<daviddias> spikebike, like a lot of times? (that is what I meant)
<G-Ray> IPFS now encrypts data transfered between nodes ?
<jbenet> G-Ray depends on what you mean
<jbenet> we have an encrypted transport, which is not known to be secure yet, so we don't trust it yet. (we will before any 1.0)
<spikebike> daviddias: usually with crypto it's more along the lines of use it once and no more, thus the use of nonces.
<jbenet> and you can encrypt the data too, (this will be native eventually. it's in our roadmap)
<G-Ray> yes, I mean encrypted transport
<fazo> do you guys have any useful resources to learn golang? I have experience with python, js/node, java, a little c and a little haskell
<elavoie> I see ;-). I still have a few things to take care of, but I eventually want to take a closer look at the node implementation and see what is missing to have it be used in a web browser. I could totally use it and test drive it, and possibly contribute to the implementation. We'll see.
<fazo> I figured I'd learn go since a lot of people are using it
<daviddias> elavoie: what is the subject of your PhD?
<daviddias> Have you spoken with Luís Veiga too? (he was my MSc thesis prof)
<elavoie> "Pando: Scalable Dynamic Array-based Computing on the Web"
<daviddias> http://www.gsd.inesc-id.pt/~lveiga/ for reference
<elavoie> yes, we had lunch and a few political discussions
<jbenet> fazo: do the tour, then just build small things.
<daviddias> are you going to use any P2P stuff from the Web Platform?
<elavoie> that is the goal
<fazo> jbenet: thanks :)
<elavoie> the vision is to have 100,000 computing nodes available on the web for scientists, initially for embarrassingly parallel computation
<elavoie> we will see for the rest after
<daviddias> nice!
qqueue has quit [Ping timeout: 240 seconds]
<daviddias> so, not knowing if you found it already, but having your feedback on browserCloudjs would be nice
<daviddias> I did some ray tracing just for tests, but it could have been used for anything
<elavoie> I have found it, I can review it in more details
ygrek has quit [Ping timeout: 240 seconds]
<daviddias> I was following the principle of moving the computation to the data and not the data to the computation
<elavoie> yes I saw your talk at OPOJS
<daviddias> nice :D
Spinnaker has quit [Quit: sinked]
<daviddias> and thank you :)
ygrek has joined #ipfs
<elavoie> and the demo where you ran a distributed computation on an array of numbers that were randomly sent to computation nodes
<elavoie> you totally beat me to it ;-)
<daviddias> One of the major problems I found was not even in building the parallel platform, but rather how to properly load thousands of browser nodes
<daviddias> I've got some notes on where I want to take webrtc-explorer next (especially now with all the knowledge acquired from libp2p): make it run in Node.js and the browser, so I can get something like a PlanetLab of hybrid nodes for proper testing
<elavoie> you mean, your problem was finding enough work for them to be busy ;-)
<whyrusleeping> jbenet: those udt wrappers are ungodly slow
<whyrusleeping> either that or it hangs
<elavoie> yes that is exactly it, that is awesome
<whyrusleeping> ah wait, maybe user error
wopi has quit [Read error: Connection reset by peer]
<daviddias> there is a small set of problems that has to be solved for parallel computing
<whyrusleeping> okay, good. its faster
<whyrusleeping> was worried for a sec
<daviddias> like distribution of data, orchestration, messaging, coordination and aggregation of data
wopi has joined #ipfs
qqueue has joined #ipfs
<daviddias> that can be succinctly described and tackled individually, but the majority of the solutions just mix them all up
<daviddias> very much like the network stack
<daviddias> and so, very much like libp2p, that is where my thinking on browserCloudjs R&D is going
ygrek has quit [Remote host closed the connection]
<daviddias> if you are going to work on those problems during a PhD, I will be very interested to follow that development :)
<elavoie> I see. I'll read your master thesis, properly cite it in my area proposal exam written document and oral presentation and give you some feedback ;-)
<jbenet> whyrusleeping maybe do a benchmark of all? tcp, raw udp, anacrolix/utp, udtwrapper
<daviddias> elavoie: that is awesome :) thank you!
<spikebike> daviddias: ah, that's not the normal nonce issue, sounds like the openssl folks think it will be fixed
hellertime has quit [Ping timeout: 250 seconds]
ygrek has joined #ipfs
nessence has quit [Remote host closed the connection]
<spikebike> daviddias: not sure IPFs should really worry about IPFS nodes getting attacked from the same node via complex timing attacks
<whyrusleeping> jbenet: i'm just gonna run with this for now, swapping it for another udp protocol will be really easy later if we need
<daviddias> spikebike: it is a security best practice to ensure key rotation and freshness
<jbenet> elavoie daviddias: just catching up on your convo, but, getting a worldwide planetlab on consumer devices is something i really really want
nessence has joined #ipfs
<jbenet> whyrusleeping: prob takes a few min to adapt your utp bench script to do the others, good to have, good to know.
<elavoie> jbenet: me too ;-)
<whyrusleeping> jbenet: already adapted it to the udt stuff, 64MB/s
<jbenet> elavoie: maybe you should join daviddias and i on a call one of these weeks to discuss some plans we're hatching for that
<whyrusleeping> == roughly 512mbit
<jbenet> yay
<elavoie> jbenet: sure
<jbenet> thats loopback-- who knows how all these fail when you add latencies and bw limitations
<jbenet> transports are surprisingly annoying to get right
<daviddias> jbenet: I so wanted that :) I started by putting browsers in Docker containers, but I had so many GPU errors that I had no idea what I was looking at and had to postpone. Fortunately, and thanks to Jessie, I got them to work very close to the delivery, but still haven't had the chance to make some Terraform magic + piri-piri (my p2p browser orchestration tool)
<daviddias> s/wanted/want/ :)
<whyrusleeping> tcp is 1.6GB/s, so roughly 12.8gbit on the localhost ;)
<jbenet> but hopefully not "will want"
<jbenet> wow.
<daviddias> elavoie: that is the first part, the project proposal. let me point to the actual thesis
<whyrusleeping> tcp is pretty good at being tcp, lol
<jbenet> yeahhhhhh you'd hope.
<jbenet> but again, latencies, packet loss.
<jbenet> UDT is supposed to outperform TCP on fast backhaul.
<elavoie> daviddias: great thanks
nessence has quit [Ping timeout: 252 seconds]
<daviddias> :D
<whyrusleeping> jbenet: we are using bindings to a c++ lib
<jbenet> yeah>
<whyrusleeping> theres no way in hell we are going to touch a native tcp impl
<whyrusleeping> not even remotely a possibility
<whyrusleeping> every call into C incurs a noticeable delay
<elavoie> Wait, you submitted in May? I was in Lisbon for almost three weeks in March!
<jbenet> hmm not so sure about that. im sure QUIC has hacks around this-- there's ways to get NIC notifs directly i think
ygrek has quit [Ping timeout: 264 seconds]
<whyrusleeping> jbenet: nope
<daviddias> elavoie: not sure if we crossed each other, sad that we didn't meet before. I typically was at the Tagus Park Campus and not the Alameda one (that is where Telecom Eng is taught)
<elavoie> oh that is why
<jbenet> whyrusleeping: yes, there are. there are userland implementations of network stacks that outperform kernel stacks.
<whyrusleeping> thats not what i was saying at all
<elavoie> but your advisor office is at Alameda right?
<whyrusleeping> i'm saying making cgo calls will never outperform a native impl
<whyrusleeping> a good implementation in go can be just as good as an implementation in C
<daviddias> elavoie: he has one in both (I had one in both too, but Tagus was closer to my home back then)
<whyrusleeping> but calling over the boundaries is expensive (this is a limitation of go)
wasabiiii has joined #ipfs
<daviddias> elavoie: I am really excited to know you are going to work on this stuff :) I hope I can help by shipping libp2p that you can use
wasabiiii1 has quit [Ping timeout: 256 seconds]
<jbenet> whyrusleeping: ahhhhh i see. interesting, that's a bummer. why?
<whyrusleeping> theres a lot of stack accounting that it has to do
<whyrusleeping> to ensure that things remain safe
<jbenet> mmmmm.
<jbenet> by the way, i've been thinking of adding "external" transports to ipfs
<elavoie> daviddias: That would be great, my background is in compilers and vm implementations and not distributed systems. The more I can use libraries developed by actual experts the better I will be ;-). (I am keen on learning and contributing though!)
<jbenet> like: "ipfs swarm transport add <multiaddr> <unix-domain-socket>" or something
<fazo> jbenet: you mean a way to let instances talk with a custom protocol?
j0hnsmith has quit [Quit: j0hnsmith]
<jbenet> what i mean is, have a different program run the protocol you want to have a transport on, and speak over some pipe to it
<jbenet> so we can use some of the transports available in other programs natively.
<jbenet> it's annoying to have to do that, but would make it way easier to add support for a bunch of other things, like proxies, bluetooth, audio, etc.
<jbenet> fazo: no i mean a transport like udt/utp/ble/etc.
<jbenet> whyrusleeping: oh interesting: https://github.com/bioothod/unetstack we _could_ do TCP/UDP
<whyrusleeping> jbenet: lol, but we dont actually want tcp
<elavoie> daviddias: I'm putting the final revisions to my area proposal exam. I would also like your feedback on it so you tell me what to expect in terms of technical complexity for the Distributed System part and whether my time estimates for the Pando project are reasonable!
pfraze has quit [Remote host closed the connection]
<jbenet> oh i know, but a year ago i was thinking of that as a hack.
<daviddias> elavoie: sounds good to me :)
<elavoie> daviddias: awesome, the project is getting more and more exciting!
<daviddias> Make sure to account for the testing infrastructure
bbfc has joined #ipfs
<elavoie> ok!
<daviddias> I didn't realise it was going to be so big and time consuming and then got into huge stress by running against the clock to make something good enough to test
gordonb has quit [Quit: gordonb]
<daviddias> current browser testing frameworks don't account for P2P applications
<elavoie> right
<whyrusleeping> jbenet: oh! yeah, lol
<daviddias> but yeah, send it to me when it is ready, I'll be happy to review it :)
<elavoie> hopefully by the end of the day tomorrow
<daviddias> sgtm :)
<elavoie> ok, I'll get that done and will show back up later!
<jbenet> hey, who here wants to help us answer lots of questions? i'll voice people. i'll do it for daviddias, davidar, cryptix, mappum, anyone else want?
<whyrusleeping> daviddias: are you registered with freenode?
<daviddias> nice, going to have my '+' again, given that IRCCloud keeps losing it for me :D
captain_morgan has joined #ipfs
<achin> are answers from people with hat supposed to be more trustworthy?
<daviddias> yes
<whyrusleeping> daviddias: hrm... chanserv should be autovoicing you
gordonb has joined #ipfs
<achin> i'm happy to help answer questions, but i don't need +v for that
<jbenet> whyrusleeping: he had +v, not +V i think
<ion> Wat. I clicked on a search result and it took me to http://phim-tv.com/index.php/xem-phim-online/github.com/ipfs/specs/pull/19
<whyrusleeping> oh, that could be
<ion> What is phim-tv.com and why are they mirroring GitHub? :-P
<jbenet> achin: thanks! yeah lots of people help, i just want to make sure newcomers can try tagging the voiced people for answers
<Xe> hi jbenet
<daviddias> whyrusleeping: Oh, I see, something weird happened and irccloud didn't have my creds anymore
<daviddias> now it is all good :)
<whyrusleeping> daviddias: ah! good
<jbenet> Xe hello
<whyrusleeping> i was worried i didnt know how to use irc anymore
<daviddias> thanks :)
<whyrusleeping> ion: that looks super phishy
<whyrusleeping> 'sign in'
<ion> yeah
<whyrusleeping> it looks like its maybe a vietnamese site?
<ion> It seems to be proxying everything. http://phim-tv.com/index.php/xem-phim-online/ipfs.io
<jbenet> ion report it to github
<jbenet> ohhh
<jbenet> it may just be a proxy
<achin> a bit strange indeed
<ion> Is the libp2p on that page the same thing as this? https://github.com/ethereum/wiki/wiki/libp2p-Whitepaper
<jbenet> hmmm someone should make libs in go and node for http servers on top of utp/udt/sctp. could be good.
<jbenet> ion yeah that page was written after a conversation i had with gav
<lgierth> giodamelio: hey, had guests all evening -- nice posting about static pages :)
<ion> jbenet: Alright
<achin> libp2p (aka ÐΞVp2p)
<ianopolous> I keep discovering new corners of the ipfs http api by looking at different implementations/ the command listing :-S
<achin> what's wrong with ascii!
<giodamelio> lgierth: thanks
captain_morgan has quit [Ping timeout: 240 seconds]
<jbenet> ianopolous yeahh..... `ipfs commands` should list them all.
tsenior`` has joined #ipfs
<ipfsbot> [webui] jbenet pushed 4 new commits to master: http://git.io/vZjCV
<ipfsbot> webui/master 551b891 Sébastien Chopin: Fixed Dag Path on UI...
<ipfsbot> webui/master f21218f Sébastien Chopin: Merge pull request #1 from Atinux/Atinux-patch-1...
<ipfsbot> webui/master 7a31b7f Sébastien Chopin: Fix DAG path on search
HostFat has joined #ipfs
jcorgan has joined #ipfs
Qwertie has quit [Ping timeout: 256 seconds]
<ipfsbot> [node-ipfs-api] jbenet pushed 1 new commit to master: http://git.io/vZjWC
<ipfsbot> node-ipfs-api/master db05b84 Juan Benet: Merge pull request #59 from ipfs/add-circle-ci...
<voxelot> jbenet: daviddias: node-ipfs-api: is missing something like 'self.name = {publish: argCommand('publish')' for ipns api correct?
<voxelot> built a twitter wall all hosted in ipfs, only thing is mutating the wall data, could do with ipns api
pfraze has joined #ipfs
<jbenet> voxelot yeah i think so
<voxelot> cool just making sure, i'll try to work on it
<daviddias> jbenet: isn't this it https://github.com/ipfs/node-ipfs-api/pull/41/files ?
<jbenet> voxelot that's great, would love to see it. we need to get ipns fixed up for you.
<voxelot> sure
<voxelot> ohh that looks like it
<daviddias>
<jbenet> whyrusleeping: how do you feel about proving me wrong about all the holdup with records and adding a trivial int to the dht records for now, to get a bit of a boost on ipns?
<jbenet> oh sure
<jbenet> i havent kept up
<whyrusleeping> jbenet: lol, where you wanna put that on my priority list?
<whyrusleeping> top of the list right now is the udp stuff
<whyrusleeping> behind that and finishing up PRs, i was going to put the chunking algos in the unixfs roots
<fazo> whyrusleeping: may I ask what problem you're having with udp?
wopi has quit [Read error: Connection reset by peer]
<whyrusleeping> fazo: no problems, just have to integrate it
notduncansmith has joined #ipfs
<fazo> whyrusleeping: so it uses tcp now?
notduncansmith has quit [Read error: Connection reset by peer]
wopi has joined #ipfs
<whyrusleeping> yeap, ipfs uses tcp
<whyrusleeping> and we are catching fire because of it
<whyrusleeping> apparently tcp uses up a lot of file descriptors
<fazo> whyrusleeping: oh that problem
<whyrusleeping> yeah, with udp we will be able to have just one file descriptor
<whyrusleeping> which will be sweet
<fazo> whyrusleeping: but don't you need to write a new part of the protocol if you switch to udp?
<whyrusleeping> fazo: no, we're going to be using a reliable ordered udp protocol
<whyrusleeping> (UDT)
<whyrusleeping> so it should work just the same
<fazo> whyrusleeping: oh so of course it's not just plain udp
<fazo> whyrusleeping: that would be crazy
<whyrusleeping> nope, that would be hard
<rendar> whyrusleeping: what is the hard part there?
kerozene has quit [Max SendQ exceeded]
apophis has quit [Quit: This computer has gone to sleep]
<whyrusleeping> rendar: not much difficulty to it, just work
<whyrusleeping> gotta touch a lot of things
<whyrusleeping> test a bunch of other things
<whyrusleeping> and make sure it all works as expected
<rendar> yeah, i agree
<rendar> who backs ipn Inc. financially?
qqueue has quit [Ping timeout: 250 seconds]
<bbfc> Hello, anyone know how can i debug/diagnose 'context deadline exceeded' errors from gateway.ipfs.io ? I'm the only node with that IPFS Hash, maybe it's firewall/slow upload net ?
<whyrusleeping> rendar: thats a question for jbenet
<achin> what's the hash bbfc ?
<whyrusleeping> i just write code :)
<rendar> whyrusleeping: lol, ok :)
<whyrusleeping> we did change the name from ipn to protocol labs though
<fazo> bbfc: saw that a few times, it went away after I anxiously refreshed the page
kerozene has joined #ipfs
<rendar> whyrusleeping: cool
<whyrusleeping> bbfc: thats either your node unable to connect to the network for some reason, or just that discovery process being slow
<whyrusleeping> which is known to happen
qqueue has joined #ipfs
<bbfc> its /ipfs/QmQnnvQ5SN2MSfNvAZoohRRyiejiCJpvQUC1YsFnxPzzwW/index.html
<achin> according to my node, no one has a copy of QmQnnvQ5SN2MSfNvAZoohRRyiejiCJpvQUC1YsFnxPzzwW
<fazo> achin: same
<fazo> maybe bbfc is not correctly connected to the network
<achin> bbfc: what does "ipfs swarm peers" say on your node?
<bbfc> achin: 71 nodes, first one is /ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ
<spikebike> hrm
<spikebike> https://ipfs.io/docs/commands/ looks broken
<spikebike> all the URLs should have "docs/commands" in them, but don't, so they go to the front page
<fazo> bbfc: do you have QmdATh2XUvfy5jys78RoeFSKLyBjeMDv9cjby2TRwjCd3X which is me?
<whyrusleeping> mars can see your stuff
<fazo> bbfc: you can try ipfs swarm peers | grep QmdATh2XUvfy5jys78RoeFSKLyBjeMDv9cjby2TRwjCd3X too see if I'm there
<whyrusleeping> (thats the QmaCpd node)
<whyrusleeping> spikebike: mind filing an issue for that on the ipfs/website repo?
<fazo> bbfc: my node just downloaded your page :)
<whyrusleeping> we're still getting the process of mirroring our page on ipfs right
<fazo> whyrusleeping: yeah my node got that hash too now
<edrex> Terminology question: I'm using "root" to refer to the base node of an IPFS traversal path like `/ipfs/<root node>/<some path>`, but it's an overloaded term. Better term?
<achin> i like how IPN has machines named afer planets :)
<whyrusleeping> edrex: uhm... path-root is normally what i say
<whyrusleeping> or hashroot
<edrex> thx whyrusleeping, that's better
<whyrusleeping> its definitely still a root of some kind
<fazo> achin: mine and my friend's machines are named after Don't starve characters
<fazo> achin: actually it's a mix of don't starve and life is strange characters now because don't starve has "Maxwell" and "Maxine" was too good to be thrown away
<bbfc> fazo: no, i don't have you.
<bbfc> mars
<fazo> bbfc: too bad, my node managed to download your data though
<achin> fazo: yeah, but you didn't name your company "Interplanetary Networks" :P
wasabiiii has quit [Ping timeout: 265 seconds]
<fazo> achin: I don't have a company :(
Guest62677 is now known as ehd
wasabiiii has joined #ipfs
ehd is now known as Guest79487
<bbfc> fazo/mars: thanks! Since you guys added my HASH, it's working now on the gateway. Any idea why? Because i'm a new node? Can i get a 'higher' rank on the nodelist? I'm new to IPFS, just trying to understand
<fazo> bbfc: I don't think there is a rank
<fazo> bbfc: I think it's just that your node has a few connection problems (maybe slow bandwidth, aggressive firewall, strong NAT?)
domanic has joined #ipfs
<bbfc> yeah, strong nat, slow upload speed [third world countries]
<achin> how long ago did you add that content?
<fazo> bbfc: I'm sorry. I have 7 megabits download and 0.5 megabits upload.
<bbfc> achin: 1 hour
<whyrusleeping> this slowness and bandwidth usage is going to be much improved
<bbfc> achin: Strange, i have the same, ~5mb download, ~0.3 upload
<fazo> whyrusleeping: it's already working well for small files
<whyrusleeping> right now bitswap has a perfectly naive implementation, which makes development easy, but wastes bandwidth and results in redundantly sent blocks
<bbfc> fazo: I will fix the nat/upnp shit on the router
<fazo> whyrusleeping: the problem is huge files because the downloads are not distributed
<bbfc> fazo: thxs very much
<fazo> bbfc: I think go-ipfs supports upnp
<whyrusleeping> fazo: yeah, you end up getting lots of duplicate blocks because everyone is willing to send everyone everything
<fazo> bbfc: yep, it's using upnp fine on my system
<jbenet> whyrusleeping: your priorities sounds good. i think the udp stuff is pretty high prio.
<fazo> whyrusleeping: maybe you could publish that you need the file and get a block list as an answer
<fazo> whyrusleeping: then you can ask for the single blocks?
<fazo> whyrusleeping: maybe not even a block list, just a block count (if the blocks have all the same size)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<fazo> maybe I should look at what a block is exactly before suggesting
<achin> always a good idea!
Guest79487 is now known as ehd
rendar has quit []
<ehd> cryptix: yo, i just saw you don't need a paid dev account anymore for deploying to your iOS device: https://developer.apple.com/xcode/
brandonemrich has quit []
<jbenet> rendar: i'm happy to answer questions about protocol labs, but i really want to keep that sort of discussion mostly out of this channel, because i always hated when $bigcorps run pseudo-open open source projects, at which you pretty much have to be part of the company to get a say. i haaaaated that. so you don't hear us talk a ton about protocol labs here.
<jbenet> this is the #ipfs community, which overlaps, but is not protocol labs.
tsenior`` has quit [Ping timeout: 240 seconds]
qqueue has quit [Ping timeout: 264 seconds]
captain_morgan has joined #ipfs
<edrex> giodamelio: how are you finding hugo?
domanic has quit [Ping timeout: 240 seconds]
<jbenet> rendar: in short, we're funded by a number of investors (inc YC), who care about various things, from the future of the web, to CDNs, to devops, to bitcoin, to blockchains, to crypto tech. filecoin.io is going to be a source of revenue for us, to ensure we can continue upgrading the network stack.
qqueue has joined #ipfs
<edrex> sorry, OT
<jbenet> edrex: no worries. this is a pretty organic channel :)
jcorgan has left #ipfs [#ipfs]
<giodamelio> edrex: Pretty good, I just started with it yesterday, but I have been able to do pretty much everything I want with it. It's put together pretty well and I think it has all the features I will need. Plus it's under active development and the devs are super nice.
tsenior`` has joined #ipfs