jbenet changed the topic of #ipfs to: IPFS - InterPlanetary File System - https://github.com/ipfs/ipfs -- channel logged at https://botbot.me/freenode/ipfs/ -- Code of Conduct: https://github.com/ipfs/community/blob/master/code-of-conduct.md -- Sprints: https://github.com/ipfs/pm/ -- Community Info: https://github.com/ipfs/community/ -- FAQ: https://github.com/ipfs/faq -- Support: https://github.com/ipfs/support
<kyledrake> jbenet yep
<kyledrake> ping me when ready
<whyrusleeping> ion: if you're going to add larger files that will generally be accessed in a serial manner (not lots of random seeks), i would recommend using --trickle
<ion> whyrusleeping: Yeah, i always forget about that. Also --chunker=rabin
<whyrusleeping> yeap! i'm actually going to PR rabin as the default soon
<ion> I readded the Stanford talk with --trickle --chunker=rabin and looked at the dag structure. Looks neat.
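For reference, a minimal sketch of the invocation being discussed (the filename is a placeholder; --trickle picks the trickle DAG layout suited to serially-read files and --chunker=rabin enables Rabin content-defined chunking):

    # add a large, mostly serially-read file with trickle layout and rabin chunking
    ipfs add --trickle --chunker=rabin stanford-talk.mp4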
devbug has joined #ipfs
gordonb has quit [Quit: gordonb]
<hoboprimate> Hi! what port should we open in our computer's firewall to let our node send stuff to other peers?
fingertoe has quit [Ping timeout: 246 seconds]
<ion> hoboprimate: Run “ipfs id” to see what addresses the node advertises.
gordonb has joined #ipfs
<hoboprimate> ion: thank you, it's 4001. I was wondering why it wasn't sending anything now; I had forgotten to set the port open permanently
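A quick sketch of the check ion suggests; the "Addresses" field of the output lists the multiaddrs the node advertises (tcp/4001 by default), which is the port to open:

    # show this node's peer ID and advertised addresses
    ipfs id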
acidhax has joined #ipfs
barnacs has quit [Ping timeout: 255 seconds]
devbug has quit [Ping timeout: 260 seconds]
gordonb has quit [Client Quit]
acidhax has quit [Remote host closed the connection]
therealplato1 has joined #ipfs
therealplato has quit [Ping timeout: 265 seconds]
<ipfs_intern> any idea how ipfs.pics is working ?
<demize> ipfs_intern: As in if it works well or how it works programmatically?
<ipfs_intern> programmatically, how the backend works
<ipfs_intern> ion : i have seen this, can you explain how the server works
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<ion> What part of its operation are you confused about?
<ipfs_intern> ion : the project is hosted somewhere, so how does the server use the node API to add or retrieve pics? It must need to run the ipfs daemon
<ion> Yes, that’s right.
nessence has joined #ipfs
<ipfs_intern> ion: and how does ipfs.pics also act as a gateway? can we implement an ipfs gateway over any domain like abc.com/hash or abc.com/ipfs/hash
<ansuz> codehero and I looked at it the other day
<codehero> huh?
<ansuz> ipfs.pics
<ansuz> wasn't that you?
<ansuz> I'm no good with faces
<codehero> i don't know. depends on what you mean by looked
* ansuz shrugs
<codehero> i mean. i tried to break it
<codehero> maybe you mean that?
<ansuz> ^
<codehero> k
<ansuz> ipfs_intern: the code is on github
<ipfs_intern> ansuz : yes i have seen the code, i have 2 confusions: 1) the project is hosted somewhere, so how does the server use the node API to add or retrieve pics? It must need to run the ipfs daemon. 2) how does ipfs.pics also act as a gateway? can we implement an ipfs gateway over any domain like abc.com/hash or abc.com/ipfs/hash
<achin> there is a normal web server running on ipfs.pics
cemerick has joined #ipfs
<ansuz> it calls curl to get stuff from the ipfs daemon
<ansuz> I think
<ipfs_intern> ansuz: i know how it uploads and retrieves pics but i don't get how the ipfs daemon runs on a normal web server
<ipfs_intern> ipfs daemon needs to run for uploading and retrieving
<ipfs_intern> according to the the ipfs.pics code
<ansuz> right
<ansuz> it's not clear to me whether you understand that a webserver refers to both a process, and a system running that process
<ansuz> in this case, the system is running both a webserver application and the ipfs daemon
<ansuz> it's no different than running a webserver and a database daemon like mariadb or mongodb
<ion> ipfs_intern: You can run a public gateway on any host, but ipfs.pics doesn't actually run one, they serve /ipfs paths with their own code.
<ipfs_intern> ansuz: yes this could be a possibility
<ipfs_intern> thnx :)
captain_morgan has joined #ipfs
<ansuz> https://github.com/ipfspics/server/blob/master/server.js#L82-L87 <- php server talks to a nodejs server which runs exec
<ion> I wonder if ipfs.pics replicates the images to multiple nodes? Or is everything not manually pinned elsewhere gone if the service dies?
<ansuz> I see nothing about pinning
<ansuz> `ipfs {add -q, object stat, cat}`
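A rough sketch of that kind of backend flow, assuming only what was said above (this is not ipfs.pics' actual code and the filename is a placeholder): the web server and the ipfs daemon share a host, and uploads are handed to the daemon via the commands ansuz lists:

    # store an uploaded image on the local daemon and capture its hash
    HASH=$(ipfs add -q uploaded.png | tail -n1)
    # the site can then serve it back under /ipfs/$HASH (its own handler or any gateway)
    echo "/ipfs/$HASH"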
<ion> Would be quick to test (upload an image, watch findpeers) but I'm in bed without convenient access to an ipfs daemon at the moment.
<ion> They could have another program replicating the pins to other servers periodically.
<ansuz> indeed
<ansuz> golang, shell scripts, nodejs, php, css, ...
<ansuz> might as well throw another lang into the mix at that point
captain_morgan has quit [Ping timeout: 246 seconds]
simonv3 has quit [Quit: Connection closed for inactivity]
<ion> Heh
<ansuz> oh, there it is
<ansuz> java
<ansuz> <3
<ion> :-D
<ipfs_intern> yes, i uploaded a pic and then checked the providers with ipfs dht findprovs, and there was only one node
<ion> Button btnNewButton_5 = new Button(shell, SWT.NONE); btnNewButton_5.setBounds(798, 722, 477, 29); btnNewButton_5.setText("Copy To Clipboard");
<ion> Yes, those look like absolute pixel coordinates.
<ansuz> #yolo
<jbenet> hey, it's shipped. and works :)
<ansuz> maybe #yolo has a ruder connotation in the USA
pfraze has joined #ipfs
<ansuz> ain't nobody got time for not hardcoding java pixel values
<_jkh_> ansuz: rude, I dunno, but it definitely conjures to mind someone who’s about to leave the gene pool with “Hey, bubba - hold my beer and watch me do this!”
<_jkh_> there’s probably a programming equivalent
<ansuz> hardcoding pixel values, I think
<_jkh_> “Hey, bubba, hold my beer while I implement a microkernel in Rust!”
<ansuz> heh
<_jkh_> hardcoding pixel values is just called “rapid prototyping"
<_jkh_> ain’t nobody got time to define constants
<ansuz> 'hold my beer and watch me do this' sounds like rapid prototyping to me
<ansuz> meh
<ansuz> cloutier's my canadian bro
* ansuz not hatin'
<ansuz> I've hardcoded a pixel or two in my life
devbug has joined #ipfs
r04r is now known as zz_r04r
Guest73396 has joined #ipfs
kandinski has joined #ipfs
Guest73396 has quit []
hoboprimate has quit [Quit: hoboprimate]
Guest73396 has joined #ipfs
voxelot has joined #ipfs
acidhax has joined #ipfs
Guest73396 has quit [Ping timeout: 240 seconds]
<davidar> jbenet (IRC): o/
border0464 has quit [Ping timeout: 252 seconds]
devbug has quit [Ping timeout: 250 seconds]
border0464 has joined #ipfs
mvanveen has joined #ipfs
<mvanveen> Hi! My friend and I are both new to ipfs, but we're quite interested in hacking on alternative language implementations. I'm particularly excited about hacking on the Python effort that seems to be starting up.
<mvanveen> I noticed jbenet mentions that there's some momentum behind a test suite for the ipfs spec.
<mvanveen> Wondering what the best way to get in touch/coordinate with those people would be?
<mvanveen> Seems like the written protocol specification and the aforementioned test suite are important a priori requirements for a strong polyglot ecosystem.
wjiang_laptop has joined #ipfs
therealplato1 has quit [Ping timeout: 256 seconds]
acidhax has quit [Remote host closed the connection]
sseagull has quit [Quit: leaving]
ipfs_intern has quit [Quit: Page closed]
<codehero> so, i looked through the backlog and saw a mention of --trickle. what does it do?
devbug has joined #ipfs
qgnox has quit [Ping timeout: 244 seconds]
cemerick has quit [Ping timeout: 264 seconds]
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
fingertoe has joined #ipfs
qgnox has joined #ipfs
nessence has quit [Ping timeout: 268 seconds]
voxelot has quit [Ping timeout: 240 seconds]
devbug has quit [Ping timeout: 265 seconds]
<achin> mvanveen: hi! the only thing i have to share is this: https://github.com/ipfs/py-ipfs
acidhax has joined #ipfs
<achin> looks like there is not much there yet, but that will hopefully help you figure out who to get in touch with
qgnox has quit [Ping timeout: 250 seconds]
nessence has joined #ipfs
Guest73396 has joined #ipfs
qgnox has joined #ipfs
dd0 has quit [Ping timeout: 256 seconds]
tilgovi has quit [Ping timeout: 250 seconds]
neuromast has joined #ipfs
acidhax has quit [Remote host closed the connection]
devbug has joined #ipfs
tilgovi has joined #ipfs
Guest73396_j has joined #ipfs
Guest73396 has quit [Ping timeout: 264 seconds]
sonatagreen has joined #ipfs
hellertime1 has quit [Quit: Leaving.]
acidhax has joined #ipfs
acidhax has quit [Remote host closed the connection]
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
mquandalle has quit [Quit: Connection closed for inactivity]
devbug has quit [Ping timeout: 264 seconds]
<sonatagreen> https://github.com/ipfs/go-ipfs/issues/1688 is giving me trouble again.
nicolagreco has joined #ipfs
<whyrusleeping> sonatagreen: how are you getting a repro?
voxelot has joined #ipfs
voxelot has quit [Changing host]
voxelot has joined #ipfs
<sonatagreen> I've narrowed it down to a particular glyph, let me see if I can upload/link it somewhere
<whyrusleeping> sweet
<whyrusleeping> having a minimal repro will be great
<sonatagreen> I did post a repro file to the thread earlier
<sonatagreen> and that file's as minimal as I could get it
voxelot has quit [Ping timeout: 268 seconds]
acidhax has joined #ipfs
devbug has joined #ipfs
<sonatagreen> for obvious reasons I can't actually upload the repro to ipfs, but you can get it with:
<sonatagreen> gunzip /ipfs/QmXhHWLZcT4BbVrihUZK6bm9Detty9vAztmiMR2M9kP58g/crashfile.txt.gz -c > crashfile.txt
<sonatagreen> hopefully this helps?
<sonatagreen> whyrusleeping?
<whyrusleeping> sonatagreen: sorry, here now
* whyrusleeping tries it out
<whyrusleeping> cool, that breaks for me
<whyrusleeping> !pin /ipfs/QmXhHWLZcT4BbVrihUZK6bm9Detty9vAztmiMR2M9kP58g
<pinbot> now pinning /ipfs/QmXhHWLZcT4BbVrihUZK6bm9Detty9vAztmiMR2M9kP58g
M-prosodyContext has quit [Quit: node-irc says goodbye]
neuromast has quit [Ping timeout: 255 seconds]
<sonatagreen> My earlier post in the github issue thread talks about some of the things that I found to cause the error to appear or disappear
<sonatagreen> on the basis of which I suspect the file is haunted
qgnox has quit [Ping timeout: 240 seconds]
acidhax has quit [Remote host closed the connection]
ygrek has quit [Ping timeout: 268 seconds]
devbug has quit [Ping timeout: 268 seconds]
<ipfsbot> [node-ipfs-api] diasdavid pushed 2 new commits to solid-tests: http://git.io/vWXq0
<ipfsbot> node-ipfs-api/solid-tests 3d5f781 David Dias: properly disable mdns
<ipfsbot> node-ipfs-api/solid-tests b2e3076 David Dias: .config.replace test
qgnox has joined #ipfs
closefistedness has joined #ipfs
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWXmB
<ipfsbot> node-ipfs-api/solid-tests e0bb392 David Dias: ping test
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWXYJ
<ipfsbot> node-ipfs-api/solid-tests 5fd3c6d David Dias: dht findprovs test
captain_morgan has joined #ipfs
<ipfsbot> [node-ipfs-api] diasdavid pushed 2 new commits to solid-tests: http://git.io/vWXY9
<ipfsbot> node-ipfs-api/solid-tests e83863e David Dias: dht findprovs test
<ipfsbot> node-ipfs-api/solid-tests d174fd7 David Dias: dht findprovs test
nicolagreco has quit [Quit: nicolagreco]
Tv` has quit [Quit: Connection closed for inactivity]
Tv` has joined #ipfs
<davidar> sonatagreen (IRC): cat /dev/ghosts > test.txt
<sonatagreen> I am 98% sure that is not how I generated my file
tilgovi has quit [Ping timeout: 240 seconds]
<davidar> echo "
* davidar wonders if that would break ipfs
<sonatagreen> Nope, that one adds fine.
<sonatagreen> /ipfs/QmfWs6rZrkK7t816RHoBWTwM6rMiZmaFeSqCWNrxuNzr3i
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWXGZ
<ipfsbot> node-ipfs-api/solid-tests 0986ce7 David Dias: pin add, list and remove tests
Guest73396_j has quit [Ping timeout: 265 seconds]
<davidar> !pin QmfWs6rZrkK7t816RHoBWTwM6rMiZmaFeSqCWNrxuNzr3i
<pinbot> now pinning /ipfs/QmfWs6rZrkK7t816RHoBWTwM6rMiZmaFeSqCWNrxuNzr3i
fingertoe has quit [Ping timeout: 246 seconds]
niekie has quit [Ping timeout: 265 seconds]
<ipfsbot> [node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWXZD
<ipfsbot> node-ipfs-api/solid-tests 58d2d83 David Dias: remove gateway calls as that does not exist anymore
<jbenet> davidar \o
ygrek has joined #ipfs
sonatagreen has quit [Ping timeout: 264 seconds]
<davidar> jbenet: hey, it's been a while :p
<davidar> get my pm?
niekie has joined #ipfs
multivac has quit [Remote host closed the connection]
barnacs has joined #ipfs
<ipfsbot> [go-ipfs] jbenet deleted fix/streaming-output at 5dc2c7e: http://git.io/vWXWW
<whyrusleeping> huh, i'm trying out symlinks stuff in windows
<whyrusleeping> and apparently 'ln -s' makes a hard link?
fingertoe has joined #ipfs
devbug has joined #ipfs
devbug has quit [Ping timeout: 264 seconds]
<whyrusleeping> jbenet: for the other PRs, just give me a thumbs up and i'll rebase and merge so we dont superbubble
sharky has quit [Ping timeout: 240 seconds]
sharky has joined #ipfs
Guest73396 has joined #ipfs
<davidar> whyrusleeping (IRC): duh, you have to run 'ln /Synergize'
closefistedness has quit [Ping timeout: 244 seconds]
ilyaigpetrov has joined #ipfs
fingertoe has quit [Ping timeout: 246 seconds]
nicknikolov has quit [Remote host closed the connection]
nicknikolov has joined #ipfs
<whyrusleeping> lol
<alu> I'm painting virtual reality now with neural networks
harlan_ has joined #ipfs
<whyrusleeping> alu: you can just use ipfs.io/ipfs/... now
mvr_ has quit [Quit: Connection closed for inactivity]
<alu> oh lol
<alu> I think I'm gunna get a Tango Tablet dev kit
mildred has joined #ipfs
jhulten has quit [Ping timeout: 240 seconds]
Tv` has quit [Quit: Connection closed for inactivity]
devbug has joined #ipfs
dignifiedquire has joined #ipfs
r1k0 has quit [Remote host closed the connection]
devbug has quit [Ping timeout: 240 seconds]
chriscool has joined #ipfs
<spikebike> alu: I'm holding off till a decent 3d sensor gets cheap
<spikebike> cheaper anyways
jhulten has joined #ipfs
rendar has joined #ipfs
zz_r04r is now known as r04r
<dignifiedquire> daviddias: please ping me when you have a moment, re browser tests and api inconsistencies
dignifiedquire has quit [Quit: dignifiedquire]
s_kunk has quit [Ping timeout: 272 seconds]
chriscool has quit [Ping timeout: 246 seconds]
chriscool has joined #ipfs
pfraze has quit [Remote host closed the connection]
nonmoose has joined #ipfs
dignifiedquire has joined #ipfs
<alu> arrrgh spikebike its like $400 tho
<dignifiedquire> whyrusleeping: I found where there is streaming json in 0.3.7
<dignifiedquire> whyrusleeping: it only happens in the browser though for some reason
<dignifiedquire> whyrusleeping: that is the response when calling ping from the browser
<dignifiedquire> cc daviddias jbenet ^^
cemerick has joined #ipfs
NeoTeo has joined #ipfs
Guest73396 has quit [Ping timeout: 260 seconds]
<ipfsbot> [go-ipfs] jbenet deleted test-commands-flags at 8a3bf95: http://git.io/vWXAe
gaboose has quit [Remote host closed the connection]
Guest73396 has joined #ipfs
<ipfsbot> [go-ipfs] jbenet pushed 3 new commits to master: http://git.io/vWXxJ
<ipfsbot> go-ipfs/master cac6b37 Jeromy: allow ipfs id to work on self...
<ipfsbot> go-ipfs/master 7128816 Jeromy: add small test to ensure ipfs id <self> works...
<ipfsbot> go-ipfs/master fff95d6 Juan Benet: Merge pull request #1856 from ipfs/fix/id-self...
martinkl_ has joined #ipfs
jhulten has quit [Ping timeout: 240 seconds]
harlan_ has quit [Quit: Connection closed for inactivity]
s_kunk has joined #ipfs
s_kunk has joined #ipfs
bedeho has quit [Ping timeout: 264 seconds]
Guest61445 has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
martinkl_ has joined #ipfs
gamemanj has joined #ipfs
ygrek has quit [Ping timeout: 268 seconds]
martinkl_ has quit [Ping timeout: 240 seconds]
Encrypt has joined #ipfs
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
edsilv has joined #ipfs
edsilv has quit [Client Quit]
devbug has joined #ipfs
Guest61445 has quit [Quit: Leaving]
<ipfsbot> [node-ipfs-api] Dignifiedquire pushed 1 new commit to solid-tests: http://git.io/vW1Oc
<ipfsbot> node-ipfs-api/solid-tests 3f8a9a1 dignifiedquire: Improve gulp setup and run browser tests
martinkl_ has joined #ipfs
devbug has quit [Ping timeout: 246 seconds]
<dignifiedquire> daviddias: we nearly done with the browser tests :)
<dignifiedquire> *are
jhulten has joined #ipfs
<ipfsbot> [go-ipfs] jbenet closed pull request #1890: bitswap: clean log printf and humanize dup data count (master...fix/bitswapLogging) http://git.io/vW8C3
Guest73396 has quit [Ping timeout: 244 seconds]
jhulten has quit [Ping timeout: 240 seconds]
<cryptix> ohai
chriscool has quit [Ping timeout: 250 seconds]
chriscool has joined #ipfs
<daviddias> dignifiedquire: here :)
<daviddias> nice!
<rendar> hmm, what is bitswap used for?
<dignifiedquire> daviddias: did you get some deserved sleep? ;)
<daviddias> managed to get some, yes, thank you :) It was in chunks of 4 hours
<cryptix> rendar: its the protocol to move DAGs from peer to peer
<daviddias> however*
<dignifiedquire> daviddias: sounds like you are streaming everything these days :P
<daviddias> ahahah
<dignifiedquire> so there is a bug in node-ipfsd-ctl as I’ve discovered
<dignifiedquire> just requiring it generates some process connections that result in gulp not exiting on its own
<daviddias> great that you managed to isolate that to node-ipfsd-ctl
<daviddias> I was confused that one of the things suggested in gulp-mocha was actually to do process.exit and wondered if that was normal
r1k0 has joined #ipfs
<dignifiedquire> it’s really bad to call process.exit yourself as you can’t chain tasks anymore, but there is a longstanding bug in gulp which results in it not exiting when there are any open connections to other processes or databases, that’s why one needs to be very careful of cleaning up
<ipfsbot> [go-ipfs] jbenet pushed 1 new commit to master: http://git.io/vW188
<ipfsbot> go-ipfs/master 4022d8b Juan Benet: Merge pull request #1885 from ipfs/fix/ndjson...
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
TheWhisper has quit [Read error: Connection reset by peer]
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
TheWhisper has joined #ipfs
<dignifiedquire> daviddias: also any ideas about why go-ipfs responds differently from the browser than node? https://github.com/ipfs/node-ipfs-api/issues/86
<daviddias> dignifiedquire: that is a good question
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
chriscool has quit [Ping timeout: 260 seconds]
<gamemanj> hmm... IPNS is fast when your network consists of two nodes, but slow when it's actually connected to the real IPFS network. But it doesn't work at all with just one node.
wjiang_laptop has quit [Quit: WeeChat 1.2]
martinkl_ has joined #ipfs
compleatang has joined #ipfs
border0464 has quit [Ping timeout: 256 seconds]
nessence has quit [Remote host closed the connection]
border0464 has joined #ipfs
nessence has joined #ipfs
devbug has joined #ipfs
<davidar> cryptix: i got a cert earlier today, but not serving a gateway on it :p
<davidar> cryptix (IRC): looks like I got there before you ;)
<davidar> oops
<davidar> not brave enough to switch over the root domain yet :p
devbug has quit [Ping timeout: 260 seconds]
<cryptix> my dns provider seems to be very stupid about that... somehow i cant set an A on the root :-/
<cryptix> davidar: btw: can i use tex.js for my own tex files or is it too bleeding edge? :P
<davidar> cryptix: lol
<davidar> you're welcome to try it out, but I can't guarantee it won't break :p
arpu has joined #ipfs
hellertime has joined #ipfs
<davidar> you'll need to run it through LaTeXML first, and then manually edit the output a little bit
<cryptix> that's bordering on "ugh.. " but i still might give it a try - just too awesome ;)
<davidar> hehe, yeah, I aim to make it easier once it matures ;)
mildred has quit [Ping timeout: 264 seconds]
Encrypt has quit [Quit: Quitte]
anticore has joined #ipfs
shrimpishness has joined #ipfs
JasonWoof has quit [Ping timeout: 255 seconds]
mildred has joined #ipfs
JasonWoof has joined #ipfs
JasonWoof has quit [Changing host]
JasonWoof has joined #ipfs
jhulten has joined #ipfs
<dignifiedquire> daviddias: did you try reinstall node_modules?
<daviddias> many times
<dignifiedquire> :(
<daviddias> it is the same problem as yesterday, i still don't get why
<daviddias> running on Sauce is fine though
<dignifiedquire> that is even stranger
<dignifiedquire> what chrome are you running?
<dignifiedquire> you could try running in firefox
<dignifiedquire> though that doesn’t make a difference I guess as it looks like the error occurs when browserify is trying to generate the bundle
jhulten has quit [Ping timeout: 250 seconds]
<dignifiedquire> I just looked at the stack trace you pasted, and one thing that seems really odd to me is that it references usr/local/lib/node_modules/
<dignifiedquire> could you try running DEBUG=true node_modules/.bin/karma start —single-run=true karma.conf.js
anticore has quit [Ping timeout: 244 seconds]
devbug has joined #ipfs
cemerick has quit [Ping timeout: 268 seconds]
anticore has joined #ipfs
edcryptickiller has joined #ipfs
edcryptickiller has left #ipfs [#ipfs]
devbug has quit [Ping timeout: 240 seconds]
<daviddias> oh, so I have an old karma globally
<daviddias> » DEBUG=true node_modules/.bin/karma start —single-run=true karma.conf.js
<daviddias> 27 10 2015 12:02:47.982:ERROR [config]: File /Users/david/Documents/code/ipfs/node-ipfs-api/—single-run=true does not exist!
<daviddias> I meant 'gulp'
<daviddias> reinstalled gulp globally
<daviddias> and so it does its thing, woot! :)
<dignifiedquire> lol
<dignifiedquire> well I don’t care as long as it works finally
<daviddias> sweeeeet!
<ipfsbot> [node-ipfs-api] diasdavid pushed 3 new commits to solid-tests: http://git.io/vW17z
<ipfsbot> node-ipfs-api/solid-tests 540a263 David Dias: name.publish and name.resolve tests
<ipfsbot> node-ipfs-api/solid-tests adc062c David Dias: Merge branch 'solid-tests' of github.com:ipfs/node-ipfs-api into solid-tests
<ipfsbot> node-ipfs-api/solid-tests ebc6830 David Dias: add debug mode as a npm script
<dignifiedquire> daviddias: and another project converted successfully to karma :)
<daviddias> oh yeaaah!
<daviddias> that was your goal all this time, wasn't it? :P
<daviddias> preach Karma to people
<dignifiedquire> of course ;)
Taek42 is now known as Taek
nicolagreco has joined #ipfs
therealplato has joined #ipfs
vanila has joined #ipfs
<vanila> hello
<vanila> I'm trying to compute the IPFS hash of a string in C, but I ran into some trouble
<vanila> I'm taking the base58 version of: 0x12 then the sha256 sum
<Stskeeps> a/g w00t
<vanila> but then my outputs starts with 6, I wanted it to start with Qm
<vanila> and of course the rest of the hash doesn't match up either
<vanila> oh I forgot the length, works now
<victorbjelkholm> vanila, do you have some example code? Interested in this too but my expertise isn't with crypto hashes... Looking at the code would help
<locusf> where are the blocks from the given file generated in go-ipfs source code?
<vanila> Qm at start but the hashes don't match-
<vanila> let me make a makefile quickly
<gamemanj> the hashes won't match if you're uploading a file and trying to get the hash of it
<gamemanj> the hash of a file is the hash of its first block,
<gamemanj> there's layering in-between
<gamemanj> like the merkledag on top of the block,
<gamemanj> the unixfs file (including different ways of chunking!) on top of the merkledag
<vanila> gamemanj, OHhhh damn I think that's whats bit me
<gamemanj> just use the "dry run add"
nicolagreco has quit [Quit: nicolagreco]
<vanila> so I can't just hash the file, I have to do a lot more work
<gamemanj> or interface with go-ipfs
<gamemanj> or, if you really want to have complete control of the process,
<gamemanj> and, mind, you do have to be somewhat crazy to want to do this,
<gamemanj> you could perform all the operations yourself then upload the file as blocks
<gamemanj> so that it's guaranteed that what you're sending is what your hash calculations were based upon
<gamemanj> But seriously, for the love of the universe and kittens, don't
<gamemanj> just. dry run. add.
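A sketch of the "dry run add" being recommended, assuming a go-ipfs binary on PATH (the filename is a placeholder):

    # compute the IPFS hash without writing anything to the repo,
    # letting ipfs do the chunking/merkledag/unixfs wrapping itself
    ipfs add -n -q somefile.bin   # -n/--only-hash, -q prints just the hash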
<victorbjelkholm> I've heard that a panda gets killed every time someone tries to compute a IPFS hash themselves
sharky has quit [Ping timeout: 252 seconds]
<achin> lies
<gamemanj> Not the pandas!
<gamemanj> NOT THE CUTE PANDAS!
<davidar> vanila (IRC): yeah, it's kind of annoying that the ipfs hash didn't match the raw hash, but it's for good reason
<davidar> *doesn't
<vanila> yeah, means i gotta redesign this significantly
<dignifiedquire> victorbjelkholm: not a panda, a jellyfish
<victorbjelkholm> pff, no one cares about the jellyfish! There are too many of them!
<victorbjelkholm> Pandas... They barely exists anymore
<gamemanj> what about starfish!
anticore has quit [Remote host closed the connection]
dignifiedquire has quit [Quit: dignifiedquire]
nicolagreco has joined #ipfs
SingularE has joined #ipfs
devbug has joined #ipfs
devbug has quit [Ping timeout: 240 seconds]
<SingularE> It's kind of weird that the uppermost parent directory name is removed in the hashing process.
<SingularE> This means to seed something with a name I need to do /name_of_thing/name_of_thing/subdirs so when it hashes it will show up as a dir with the name that refers to the content.
Guest89897 has joined #ipfs
jhulten has joined #ipfs
jhulten has quit [Ping timeout: 250 seconds]
<oed> nice!
<locusf> wot
<locusf> nice
dignifiedquire has joined #ipfs
anticore has joined #ipfs
acidhax has joined #ipfs
acidhax_ has joined #ipfs
dysbulic has joined #ipfs
<cryptix> very nice indeed
acidhax has quit [Ping timeout: 255 seconds]
border0464 has quit [Ping timeout: 252 seconds]
border0464 has joined #ipfs
therealplato1 has joined #ipfs
therealplato has quit [Ping timeout: 255 seconds]
Guest89897 has left #ipfs [#ipfs]
sonatagreen has joined #ipfs
cemerick has joined #ipfs
dysbulic has quit [Ping timeout: 250 seconds]
dysbulic has joined #ipfs
devbug has joined #ipfs
mungojelly_ has quit [Remote host closed the connection]
r1k0 has quit [Remote host closed the connection]
dysbulic has quit [Ping timeout: 260 seconds]
ashark has joined #ipfs
<ike_> yo yo yo i am back in the saddle
<sonatagreen> hi
<cryptix> hey ike_
<cryptix> ike_: ike.io is yours, right? you might want to update that dnslink ;)
<ike_> yeah, a few more changes before it's stable again
shrimpishness has quit [Ping timeout: 240 seconds]
jhulten has joined #ipfs
dlight has joined #ipfs
jhulten has quit [Ping timeout: 264 seconds]
anticore has quit [Ping timeout: 240 seconds]
<achin> SingularE: the -w (--wrap-with-directory) option is useful in that case
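A sketch of the flag achin points to (the directory name is a placeholder):

    # wrap the added directory in an extra parent so its name survives hashing;
    # the returned wrapper hash contains a single entry named "name_of_thing"
    ipfs add -r -w name_of_thing/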
mvr_ has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
dignifiedquire has joined #ipfs
dignifiedquire_ has joined #ipfs
dignifiedquire_ has quit [Client Quit]
bsm1175321 has quit [Remote host closed the connection]
ashark has quit [Ping timeout: 256 seconds]
chriscool has joined #ipfs
bsm1175321 has joined #ipfs
nicolagreco has quit [Quit: nicolagreco]
pguth2 has quit [Ping timeout: 240 seconds]
step21 has quit [Ping timeout: 246 seconds]
step21_ has joined #ipfs
step21_ is now known as step21
chriscool has quit [Ping timeout: 244 seconds]
nicolagreco has joined #ipfs
acidhax_ has quit [Remote host closed the connection]
sharky has joined #ipfs
acidhax has joined #ipfs
acidhax_ has joined #ipfs
voxelot has joined #ipfs
voxelot has joined #ipfs
acidhax has quit [Ping timeout: 250 seconds]
jhulten has joined #ipfs
nicolagreco has quit [Quit: nicolagreco]
ashark has joined #ipfs
arpu has quit [Ping timeout: 256 seconds]
Tv` has joined #ipfs
amade has joined #ipfs
vakla has joined #ipfs
pfraze has joined #ipfs
anticore has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/windows-builds from a0c74dc to a838b6b: http://git.io/vWq8k
<ipfsbot> go-ipfs/fix/windows-builds 5e54dcb Jeromy: force godeps to save windows import...
<ipfsbot> go-ipfs/fix/windows-builds aa09fa2 Jeromy: fix path creation so it works on windows...
<ipfsbot> go-ipfs/fix/windows-builds 987c198 Jeromy: skip cli parse tests on windows due to no stdin...
cemerick has quit [Ping timeout: 240 seconds]
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
anticore has quit [Remote host closed the connection]
qgnox has quit [Quit: WeeChat 1.3]
hoboprimate has joined #ipfs
mildred has quit [Ping timeout: 272 seconds]
martinkl_ has joined #ipfs
warner has quit [Remote host closed the connection]
warner has joined #ipfs
acidhax_ has quit [Remote host closed the connection]
nephroparalysis has joined #ipfs
bedeho has joined #ipfs
arpu has joined #ipfs
nicolagreco has joined #ipfs
nicolagreco has quit [Client Quit]
nessence has quit [Ping timeout: 240 seconds]
simonv3 has joined #ipfs
Matoro has quit [Ping timeout: 272 seconds]
Matoro has joined #ipfs
victor_ has joined #ipfs
victor_ is now known as victorbjelkholm_
victorbjelkholm_ has quit [Client Quit]
<ipfsbot> [go-ipfs] chriscool closed pull request #1876: Split long short options (master...split-long-short-options) http://git.io/vWq9x
acidhax has joined #ipfs
acidhax has quit [Remote host closed the connection]
acidhax has joined #ipfs
hoboprimate has quit [Quit: hoboprimate]
hoboprimate has joined #ipfs
acidhax has quit [Remote host closed the connection]
acidhax has joined #ipfs
nephroparalysis has quit [Ping timeout: 246 seconds]
s_kunk has quit [Ping timeout: 244 seconds]
<achin> this is offtopic, but: does anyone in here have anything to say about the freenas builds by iXsystems?
ygrek has joined #ipfs
Matoro has quit [Ping timeout: 260 seconds]
<whyrusleeping> achin: i think theyre pretty cool
<whyrusleeping> i dont run it myself, but a few friends i share disks with do, and its a very nice interface
<achin> the freenas interface?
<whyrusleeping> (they also happen to have ipfs on them, starting in version 10)
<whyrusleeping> yeah, the freenas web interface
<achin> freenas itself seems pretty awesome. i think i want it
<whyrusleeping> were you talking about something different?
<achin> but i'm wondering about getting the hardware from ixsystems, or from somewhere else
<achin> (i guess a little related to ipfs, because there's a lot of neat content outthere to hold, and i don't really have disk space at the moment)
<whyrusleeping> ooooh, yeah. i've never bought a prebuilt machine, i always build my boxes from scratch
<achin> the "certified by freeNAS" part is the part that interests me
mvr_ has quit [Quit: Connection closed for inactivity]
<whyrusleeping> eh, thats normally just marketing jibjab (having worked at a company that did similar things)
<whyrusleeping> it likely means they test their software on that hardware
<achin> yeah. and the freenas project itself seems to have pretty good docs on the types of hardware they recommend
<achin> i'd love to be able to dedicate a few TB of space to hosting ipfs-related stuffs
atrapado has joined #ipfs
chriscool has joined #ipfs
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/log-writer from 7fe28c6 to 9a11cf7: http://git.io/vWgPu
<ipfsbot> go-ipfs/fix/log-writer 248e748 Jeromy: vendor in logging lib changes...
<ipfsbot> go-ipfs/fix/log-writer 9a11cf7 Jeromy: update code to use new logging changes...
Lana has joined #ipfs
devbug has quit [Remote host closed the connection]
devbug has joined #ipfs
<achin> bummer. the "hardware requirements" page is 404'ing
arpu has quit [Read error: Connection reset by peer]
nessence has joined #ipfs
Matoro has joined #ipfs
<whyrusleeping> achin: link?
<ipfsbot> [go-ipfs] whyrusleeping force-pushed fix/log-writer from 9a11cf7 to c023d18: http://git.io/vWgPu
<ipfsbot> go-ipfs/fix/log-writer c023d18 Jeromy: update code to use new logging changes...
chriscool has quit [Ping timeout: 240 seconds]
<whyrusleeping> _jkh_: where can we read about hardware requirements?
<gamemanj> have you noticed something about these release dates: http://doc.freenas.org/
<gamemanj> oh, they are continuing development...
bedeho has quit [Ping timeout: 246 seconds]
Encrypt has joined #ipfs
fingertoe has joined #ipfs
<whyrusleeping> gamemanj: yeah, thats for freenas 9
<whyrusleeping> freenas 10 is in alpha right now
s_kunk has joined #ipfs
s_kunk has joined #ipfs
ygrek has quit [Remote host closed the connection]
ygrek has joined #ipfs
stoopkid has joined #ipfs
therealplato1 has quit [Ping timeout: 240 seconds]
<stoopkid> do i need to have the daemon running to use the get command?
captain_morgan has quit [Ping timeout: 240 seconds]
therealplato has joined #ipfs
<stoopkid> i have a file on IPFS at /ipfs/QmYWbstJZ6roQuvTk9owTahNcjAvhHeGv2QfkhXuL27jZk and i can access it through the gateway so i know it's available
<stoopkid> but i have a second comp with IPFS, and i'm trying to run the command:
<stoopkid> ~/gopath/bin/ipfs get /ipfs/QmYWbstJZ6roQuvTk9owTahNcjAvhHeGv2QfkhXuL27jZk
pfraze has quit [Remote host closed the connection]
<stoopkid> but it just hangs
<stoopkid> i don't have the daemon running on my second comp though, so i'm wondering if that could be the problem?
<sonatagreen> I don't think so, I was able to ipfs get it with my daemon shut down just now
<stoopkid> "get /ipfs/QmYWbstJZ6roQuvTk9owTahNcjAvhHeGv2QfkhXuL27jZk" is correct?
<sonatagreen> are you sure the ~/gopath/bin/ipfs path is correct? do other commands work with it?
<sonatagreen> $ ipfs get /ipfs/QmYWbstJZ6roQuvTk9owTahNcjAvhHeGv2QfkhXuL27jZk
<sonatagreen> yes
<stoopkid> yea, it's the correct path, other things work with it
<sonatagreen> I dunno
<stoopkid> it's running on a friend's VPS, maybe that has something to do with it
Matoro has quit [Ping timeout: 260 seconds]
<achin> does your second machine actually have that file in its local cache?
<stoopkid> achin: not sure, how do i check that?
<achin> if it doesn't, then you will for sure need the daemon running to download it from a peer who has it
<achin> you can run "ipfs refs local" to see what your local machine has. this will run even without the daemon
<stoopkid> err, wait apparently the daemon is running (my friend must have turned it on earlier)
nicolagreco has joined #ipfs
<achin> anything that would involve transfering files to/from the network would require the daemon to be running
<stoopkid> i've got the daemon running on both my local comp and on the VPS
<stoopkid> my local comp as the ref, the VPS does not
<stoopkid> has*
<achin> ok, so then run an ipfs daemon on the VPS, and then "ipfs get <hash>" on the VPS
<stoopkid> still just hangs hmm
<achin> you might have to give it a few moments. the daemons will need to find each other and then connect to each other
<stoopkid> should i use "get /ipfs/<hash>" or just "get <hash>"?
Vaul has joined #ipfs
<achin> both should be fine
<achin> from the VPS, run "ipfs dht findprovs <hash>". if all is well, you should see a few nodes that have this content
<stoopkid> indeed it returns 4 nodes, then hits "error: routing: not found" and then back to cmd-line
<achin> it always lists "not found" for reasons that are not clear to me. but the important thing is that your node knows of 4 other nodes that have that hash
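Pulling the troubleshooting steps together, roughly what was run on the VPS (the hash is the one from the conversation):

    ipfs daemon &   # a running daemon is needed to fetch from peers
    ipfs dht findprovs QmYWbstJZ6roQuvTk9owTahNcjAvhHeGv2QfkhXuL27jZk   # who provides this hash?
    ipfs get QmYWbstJZ6roQuvTk9owTahNcjAvhHeGv2QfkhXuL27jZk             # fetch the file itself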
Matoro has joined #ipfs
nicolagreco has quit [Remote host closed the connection]
dinamic has joined #ipfs
<dinamic> evning folks.
<dinamic> i just found this project and have a few questions..
devbug has quit [Ping timeout: 268 seconds]
<dinamic> first: How can i control how much data my node stores?
voxelot has quit [Ping timeout: 264 seconds]
gaboose has joined #ipfs
devbug has joined #ipfs
<stoopkid> reading this now: https://github.com/ipfs/examples/tree/master/examples/ipns , what is the current state of IPNS? i'll need to have mutable files with permanent addresses
chriscool has joined #ipfs
<stoopkid> i can potentially make do with the capabilities described on that github page, but that's probably not preferable for what i'm trying to do
<stoopkid> specifically: "Now, there are a few things to note; first, right now, you can only publish a single entry per ipfs node."
<stoopkid> and i guess the "Second..." is an issue as well, but not quite as serious (for my purposes)
simonv3 has quit [Quit: Connection closed for inactivity]
NeoTeo has quit [Quit: ZZZzzz…]
ygrek has quit [Remote host closed the connection]
<sonatagreen> That 'single entry' can be a directory structure with arbitrarily many things under it.
ygrek has joined #ipfs
<stoopkid> hrmmm
<stoopkid> that might work
<stoopkid> now, doesn't this force other network nodes to have to request it from my node? if they're accessing '/ipfs/<my-peer-id>' ?
<sonatagreen> /ipns/<your-peer-id> rather, but... sort of?
<sonatagreen> like
<sonatagreen> first the static snapshot of the content is bundled up into an /ipfs/ link, and then that link is gettable from your node
captain_morgan has joined #ipfs
<sonatagreen> IPNS is not so much mutable files as mutable links to immutable files
<sonatagreen> and when you want to do an update, you just change the link to point to the new version
chriscool has quit [Ping timeout: 250 seconds]
<deltab> the way git works, if you know that
dignifiedquire has quit [Quit: dignifiedquire]
<stoopkid> so then, other host-nodes would then 'get' the updated file from my node (if/when it changes), and then user-nodes can request from the address "/ipns/<my-peer-id>", but this address would resolve to an IPFS address which can actually be retrieved from any of the participating host-nodes?
<stoopkid> heh i can reword that if necessary
NeoTeo has joined #ipfs
fingertoe has quit [Ping timeout: 246 seconds]
<sonatagreen> Yeah, that's right.
<stoopkid> ok cool, i think that would actually work pretty nicely, thank you very much! :)
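A minimal sketch of the publish/update cycle just described, assuming one IPNS entry per node pointed at an immutable /ipfs snapshot (the directory name is a placeholder):

    NEW_ROOT=$(ipfs add -r -q mysite/ | tail -n1)   # snapshot the current content
    ipfs name publish "/ipfs/$NEW_ROOT"             # repoint /ipns/<your-peer-id> at it
    ipfs name resolve                               # check what your peer ID resolves to now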
<sonatagreen> And -- I /think/ -- if you shut down your node, the last IPFS address that your /ipns/<your-peer-id> resolved to, might stay around in other people's caches for about 24 hours.
<sonatagreen> so the /ipns/ link won't /necessarily/ break /right away/.
<sonatagreen> er.
<sonatagreen> the /ipns/<your-peer-id> won't _necessarily_ break _right away_.
gamemanj has quit [Remote host closed the connection]
gamemanj has joined #ipfs
<stoopkid> ah hm... hadn't considered this yet..
<stoopkid> i can partially mitigate the issue with something like an IPFS-analog of 'mirror sites', i.e. like other host-nodes setting their /ipns/<peer-id> addresses to resolve to the same /ipfs address
tilgovi has joined #ipfs
<sonatagreen> Interesting idea. You could do that, yeah.
<stoopkid> (i have a very forgiving user-community haha)
<sonatagreen> (You can also just tell people the /ipfs/ address, which has the advantage that it won't break, but the disadvantage that it won't tell them when there's an update available.)
border0464 has quit [Ping timeout: 252 seconds]
border0464 has joined #ipfs
hoboprimate has quit [Quit: hoboprimate]
hoboprimate has joined #ipfs
rendar has quit [Ping timeout: 256 seconds]
hellertime has quit [Quit: Leaving.]
hoboprimate has quit [Client Quit]
hoboprimate has joined #ipfs
voxelot has joined #ipfs
voxelot has joined #ipfs
<bret> sometimes after I `go get -u github.com/ipfs/go-ipfs/cmd/ipfs` the command ipfs version doesn't reflect the latest tag
rendar has joined #ipfs
hoboprimate has quit [Client Quit]
<bret> oh wait nm
<bret> my bad
hoboprimate has joined #ipfs
fingertoe has joined #ipfs
pfraze has joined #ipfs
<achin> dinamic: did you get an answer rto your question from a while ago?
Encrypt has quit [Quit: Quitte]
<codehero> come on imgur
<codehero> just use ipfs, people! D:
fingertoe has quit [Ping timeout: 246 seconds]
<victorbjelkholm> codehero, scream at the big guys instead of imgur :)
<victorbjelkholm> even better, scream at the browsers
<codehero> yeah
<vanila> what are imgur doing?
<codehero> well. i know a guy who works for mozilla, but i doubt that he can get ipfs implemented :P
<codehero> especially not in its current state
<codehero> vanila: it's overloaded
<vanila> ahh well
<victorbjelkholm> jbenet still hasn't clarified the note about one browser thinking about implementing ipfs in the stanford talk he did
<vanila> if everyone was uploading to ipfs.pics
<vanila> and ipfs.pics got overloaded, we'd be in the same trouble - wouldn't we?
<vanila> there needs to be lots of nodes that also pin the same content
<sonatagreen> I'd like to reiterate my feature request for pinning IPNS links. The node should only be needed for /updating/ the link, not for preventing the existing value from rotting.
<vanila> so the ipfs pics sites have to talk to each other
<vanila> to distribute the hosting
<achin> an impl for ipfs-cluster will be helpful for people who want to make a large corpus of data available over ipfs
<victorbjelkholm> vanila, achin starting point for ipfs-cluster https://github.com/victorbjelkholm/openipfs
<achin> ipfs is coming along nicely, but there is no need to rush everyone into it. there are a lot of open questions to be answered and implemented
<vanila> I haven't studied how openipfs works yet but I did have one idea I'd like criticism on,
<vanila> what do you think of this?
<codehero> vanila: well. yeah. that's why people would either need to do "ipfs cat/get" or via browser implementation
<vanila> each ipfs.pics site would have its own ipns 'name', and a list of names of the other nodes
<codehero> pinning wouldn't be incredibly important
<codehero> because as more people access the image, more people cache and share it. i think
<codehero> that's how it works, right?
<vanila> each time a picture is uploaded they can add that under their name to broadcast that the other nodes should pin it too
<vanila> and periodically, check up on each of the nodes you know about and pin the things they inform you of
voxelot has quit [Ping timeout: 268 seconds]
<vanila> codehero, that is actually really clever.. so if the main site was down people would still be able to piece it together from many users' caches
<victorbjelkholm> vanila, yeah, I've got a pull model in the backlog. Right now I'm working on pushing pinning to the nodes. But in the future, the nodes would pull a list of hashes to pin, with some filter
<achin> victorbjelkholm: how does a node that has joined openipfs figure out what hashes to pin?
voxelot has joined #ipfs
voxelot has joined #ipfs
<codehero> vanila: well. is that not how it works?
<vanila> victorbjelkholm, I haven't really thought of the tradeoffs between pushing and pulling, I guess using IPNS in this way would naturally be a pull
<victorbjelkholm> achin, push model = api call to pin the hash recently added. Pull model = periodically check the list of hashes to pin, see if there is anything new and pin that if it passes the checks
<victorbjelkholm> vanila, yeah, with IPNS it'll be easy to do pull mode
<vanila> codehero, yeah, I just hadn't realized that!
<codehero> i thought getting something from ipfs would cache it for some time and "pin" it while it's in the cache
<victorbjelkholm> I don't know, liked the real time feedback of having a push model, so went that road
<codehero> whyrusleeping: is that how it works?
<victorbjelkholm> but for general stability and performance, pull would be better
<victorbjelkholm> offering both would be nicest :)
Oatmeal has joined #ipfs
<achin> victorbjelkholm: openipfs will partition the total list of hashes into N smaller lists where {num_nodes} <= N
<victorbjelkholm> achin, right now all the content is distributed evenly across all nodes, for maximal redundancy. But for all means, open an issue in the repo and share your points there, I'm no pro with these things, just a happy amateur :)
<victorbjelkholm> btw, if someone has a better name than OpenIPFS, please suggest it!
<codehero> openipfs.xyz doesn't work?
<codehero> is that supposed to be like that
<victorbjelkholm> codehero, openipfs is implying that ipfs isn't already open
<victorbjelkholm> oh, the domain. Nah, haven't pointed the domain yet
<victorbjelkholm> still have some testing to do, so people can't bring down the cluster by doing something stupid
<codehero> yeah. the domain
<codehero> ah. okay
<codehero> just wanted to check
<victorbjelkholm> but be sure that I'll tell everyone once it's live, so we can stresstest it together :)
<sonatagreen> IpfsCluster?
<codehero> ^
<codehero> now. how are you going to deal with people uploading illegal stuff?
<sonatagreen> IPFSwarm?
<victorbjelkholm> actually, take a look here: 46.101.230.197
<victorbjelkholm> sonatagreen, that's a good name!
<victorbjelkholm> codehero, dmca is gonna be built into ipfs
<codehero> ah, okay
<codehero> that works
<victorbjelkholm> every node decides by themselves what they want to host
<victorbjelkholm> if you're ok with breaking dmca, which people in africa don't care about, you'll turn off the option
<codehero> yeah
<achin> i'm not sure "dmca built into ipfs" is quite the right characterization
<codehero> well, no. people can still choose
<codehero> can't find the github issue right now, though
<sonatagreen> I think it needs to be flexible enough to handle different things being banned/undesirable for different people
<achin> as i understand it, "built in to ipfs" will be the ability of a node/gateway operator to provide whitelists and blacklists
<codehero> victorbjelkholm: i suppose when ipfs gets encryption, you don't even have to take responsibility for files distributed by the cluster
<codehero> sort of like the new mega thingy
<sonatagreen> i assume you can just, like, run ipfs through tor?
<codehero> well, yeah. but that makes it super slow
<victorbjelkholm> codehero, not sure. I'm sure though that openipfs is neither gonna host the content itself or decide which content gets hosted on peoples nodes, there will be rules that the nodes can set themselves
<codehero> oh. okay
<codehero> i thought the openipfs clusters actually hosted the stuff
<victorbjelkholm> codehero, openipfs is just the middleman between people who want to have content pinned and people who want to help pin content
<codehero> cool cool
NeoTeo has quit [Quit: ZZZzzz…]
<dinamic> achin, nope
fingertoe has joined #ipfs
<achin> dinamic: the short answer is: your node never automatically downloads content. content is downloaded (and stored) on your node in 3 cases:
<achin> 1) you add something with "ipfs add". 2) you pin something with "ipfs pin" 3) you download something with "ipfs get"
dignifiedquire has joined #ipfs
<dinamic> achin, ah like bt ?
<achin> in cases (1) and (2), the data is pinned, and will never be automatically removed. in these cases, you have to manually "unpin" the data
<achin> in case (3) the data gets cleaned during a garbage collection step, which at the moment only happens when you manually run "ipfs repo gc"
<achin> yes, it's like bittorrent in the sense that your bittorrent client will never download anything if you don't give it a torrent/magnet
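To make achin's three cases concrete, a sketch with placeholder hashes:

    ipfs add file.txt    # (1) adds the content and pins it locally
    ipfs pin add <hash>  # (2) fetches it if needed and pins it
    ipfs get <hash>      # (3) fetches it into the local cache, unpinned
    ipfs pin ls          # list what is pinned
    ipfs repo gc         # remove cached blocks that are not pinned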
<dinamic> achin, so the fs is more like sharing than storage? e.g. my stored files are available by hash to anyone?
<achin> yes, that's right. an important thing to remember is: if you add something to your node (ipfs add), initially *no other node will have your data*
<achin> other nodes will only get a copy of your data if they download it (Using one of the 3 things i mentioned above)
<dinamic> mm ok
<dinamic> so "popular" content will be more redundant than private isolated data then..
<achin> exactly right
<dinamic> no block distribution
<dinamic> hmm
<dinamic> achin, thanks for the clarification
<achin> in the future, something like filecoin.io could be used to get your data automatically replicated by others on the network
<achin> but nothing like that exists within ipfs at the moment
<fingertoe> I am assuming the security is "Don't put anything on IPFS you don't want public"? Asking for a hash from the network kinda reveals that a file is likely there, correct?
<dinamic> thats what i understand now ^
<achin> if you want something to stay secret, either encrypt it before adding it to ipfs, or don't add it at all
<dinamic> so if i want to be private i need an encryption layer on stored objects..
Rodya has quit [Ping timeout: 272 seconds]
<achin> fingertoe: i suppose that asking the network for a hash might suggest that someone thinks the hash exists, but it might have been a typo, for example
Rodya has joined #ipfs
<spikebike> speaking of which Qm = sha256, and the rest is the sha256 checksum, but how is it encoded?
<spikebike> added QmYNmQKp6SuaVrpgWRsPTgCQCnpxUYGq76YEKBXuj2N4H6 foo
<spikebike> (after the "Qm" part)
<achin> there are some bytes for the hash length, and then the full hash, all encoded in base58
<codehero> achin: do files in the cache also get seeded?
<codehero> like, when i do ipfs get. will the file be shared until it's deleted by the gc?
<achin> yep. anything in your local cache can be shared with other peers
<spikebike> ah, base58
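A sketch of what that encoding looks like in practice, using the hash spikebike added above; "base58" here stands for any base58btc decoder on PATH, and xxd being installed is also an assumption:

    H=QmYNmQKp6SuaVrpgWRsPTgCQCnpxUYGq76YEKBXuj2N4H6
    echo -n "$H" | base58 -d | xxd | head -n 1   # bytes should start with 12 20 (sha2-256, 32-byte digest)
    ipfs block get "$H" | sha256sum              # hash of the raw block should match the decoded digest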
<codehero> okay. that's cool
<codehero> because having only pinned files be shared would be a major disadvantage
<codehero> when compared to bittorrent at least
a3nm has left #ipfs ["WeeChat 1.0.1"]
<fingertoe> Editing peers would still make it useful for internal use etc...
<achin> or maybe a peer whitelist
<dinamic> achin, can i control the bootstrap ?
<gamemanj> the bootstrap can be modified in the config
<achin> ~/.ipfs/config
<gamemanj> though you may want to start the daemon with the "dht" routing alg
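A sketch of managing the bootstrap list from the CLI instead of editing ~/.ipfs/config by hand (the multiaddr below is a made-up example):

    ipfs bootstrap list                                        # show current bootstrap peers
    ipfs bootstrap rm --all                                    # empty the list, e.g. for a private network
    ipfs bootstrap add /ip4/10.0.0.2/tcp/4001/ipfs/<peer-id>   # add your own bootstrap node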
ipfs_intern has joined #ipfs
<gamemanj> "ipfs_intern"?
mvr_ has joined #ipfs
atrapado has quit [Quit: Leaving]
<achin> just a normal person interested in ipfs, i think
<gamemanj> Will there eventually be a version of the API which allows showing a "do you want to give permission for this to happen" box wrapping any given API call? Ofc, it may be silly to ask permission for calls like "id" and "object/get", but in other cases, it's better than the current all-or-nothing permissions (especially when they default to "nothing" - not exactly convenient for applications)
<gamemanj> (and "all" as a default is asking for trouble...)
<vakla> Hi guys, I'm pretty new to ipfs but I'm interested in building a high-I/O capability cluster. Would any sort of dynamic IPFS CDN-like behavior be applicable to this sort of configuration?
nicolagreco has joined #ipfs
<nicolagreco> jbenet: are you around?
Matoro has quit [Ping timeout: 255 seconds]
<nicolagreco> I need to talk links for a couple of seconds
<nicolagreco> maybe someone else could help as well
<achin> ask away, we'll see what we can do
<nicolagreco> I was wondering whether we could add some RDFa in the gateway
<nicolagreco> so that I could use existing linked data apps with it
<nicolagreco> adding linked data markup to the html would be really the simplest fix
<nicolagreco> this is just for folders (sorry I should have said that)
fingertoe has quit [Ping timeout: 246 seconds]
<nicolagreco> or if we really could, serve it in json if the Accept-headers ask for json
<nicolagreco> (please do tell me if I am missing the point)
<achin> like the output of "ipfs object get" ?
<nicolagreco> yes
<gamemanj> but if you were talking about just folders
<gamemanj> then why would it modify the output of ipfs object get
<achin> if you only need link data, i guess my preference would be the JSON route. that sounds like a nice idea
<gamemanj> oh, I see
<gamemanj> TBH, it would be nice if the gateway allowed getting arbitrary JSON DAG nodes
krl has quit [Ping timeout: 272 seconds]
<achin> (i myself have a few use cases that would need the JSON data, so i like it too)
<nicolagreco> so in my particular need, I want to query URIs; if the uri is a folder I want a parsable piece of data - possibly in a linked-data format
krl has joined #ipfs
<nicolagreco> even JSON-LD (which is basically JSON) would be great
<nicolagreco> or otherwise, I will have to write my own gateway
acidhax_ has joined #ipfs
<gamemanj> if the IPFS gateway allowed getting arbitrary JSON DAG nodes, then reading stuff built on the DAG directly wouldn't need API usage, and getting the links of data would also be easy
<nicolagreco> but I think just making the default one interoperable with the linked data world would be great
<nicolagreco> gamemanj: my problem is from a different perspective, which is, imagine I am on the web
<nicolagreco> and in my app I say where my data is (so a valid http/s uri)
<gamemanj> and you want IPFS folders to carry link information?
<nicolagreco> I want this app to read the folder (in my language "container")
<nicolagreco> to list the files (in my language "resources") in a linked data fancy way
acidhax has quit [Ping timeout: 256 seconds]
acidhax_ has quit [Ping timeout: 250 seconds]
Matoro has joined #ipfs
ygrek has quit [Ping timeout: 260 seconds]
nicola52 has joined #ipfs
nicola52 has quit [Remote host closed the connection]
reit has quit [Quit: Leaving]
NeoTeo has joined #ipfs
nicolagreco has quit [Quit: nicolagreco]
nicolagreco has joined #ipfs
devbug has quit [Ping timeout: 250 seconds]
pfraze has quit [Remote host closed the connection]
gamemanj has quit [Ping timeout: 256 seconds]
reit has joined #ipfs
ipfs_intern has quit [Ping timeout: 246 seconds]
pfraze has joined #ipfs
devbug has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
sseagull has joined #ipfs
jhulten has quit [Ping timeout: 246 seconds]
nekomune has quit [Read error: Connection reset by peer]
devbug has quit [Ping timeout: 250 seconds]
jhulten has joined #ipfs
ashark has quit [Ping timeout: 246 seconds]
Vaul has quit [Quit: Lost terminal]
ygrek has joined #ipfs
pfraze has quit [Remote host closed the connection]
pfraze has joined #ipfs
Matoro has quit [Ping timeout: 255 seconds]
nekomune has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
fingertoe has joined #ipfs
border0464 has quit [Ping timeout: 256 seconds]
nicolagreco has quit [Remote host closed the connection]
nicolagreco has joined #ipfs
border0464 has joined #ipfs
pfraze has quit [Remote host closed the connection]
<victorbjelkholm> achin, regarding keeping things secret. Let's say I want to place something secret in ipfs and the receiver to be able to decrypt it with a password. What would be a safe way to do it?
<ion> victorbjelkholm: At the moment, i’d use PGP. IPFS will have built-in stuff for that in the future.
<victorbjelkholm> ah, ok. I'll look into that then
<achin> ion: do you think IPFS will have the ability to encrypt with a peer's ipfs pubkey?
<ion> I don’t know what they have planned exactly, but i don’t see why not.
<achin> until told otherwise, my knowledge of crypto is generally "yes you can do that, but you probably shouldn't" :P
<victorbjelkholm> if you're aching to try openipfs, openipfs.xyz is live now. I take no responsibility for the access you give openipfs, it's very early and small amount of checks are in place
<victorbjelkholm> but let me know if you find something that can be better
<jbenet> nicolagreco the gist _right now_ is this: if using JSON-LD, just wrap the JSON-LD in an ipfs mdag object. if using RDFa, put it in html and add it with unixfs (over mdag). we are soon moving to something we're calling IPLD, which gives a "json-based" data model on which you can do JSON-LD trivially
<nicolagreco> jbenet that is perfect, I actually had a look at that
<jbenet> for ipld things, see the https://github.com/ipfs/go-ipld repo. but this is not yet merged, will land in 0.4.0 or soon after. for now have to wrap it with a regular protobuf object.
<nicolagreco> jbenet what I was suggesting was to just add to the current ui gateway some markup, so that I could actually parse it
<nicolagreco> so - although I am interested in the ipld, mine was just a temporary adjustment for the gateway & fam tools
<jbenet> nicolagreco: oh you mean to the directory listing?
<nicolagreco> yes exactly jbenet
<jbenet> nicolagreco: we've had some requests to allow different viewers on it. it shouldn't be too hard btw to make a javascript page that produces the desired html from a given ipfs root.
<jbenet> nicolagreco: so you would to a url like: https://ipfs.io/ipfs/<root-of-viewer>/viewer#/ipfs/<root-of-data-to-show>
<nicolagreco> jbenet yes I know, I wrote the js library folder-to-rdf :P
<jbenet> and the viewer would pull out /ipfs/<root-of-data-to-show> through the readonly API and render what you want
<jbenet> which js library? (haven't seen it)
<achin> do any of these ideas allow the ability to get the raw PBNode data (and list of PBLinks) from the public gateway?
<jbenet> nicolagreco: i usually have to spell things out pretty clearly because we get people at all levels here, and what we're doing is different enough, etc.
<jbenet> nicolagreco: assuming a level of knowledge ends up making people feel like they're not smart enough which is not what we want in an open source, welcoming community. so we tend to be exhaustive, instead of assuming.
<nicolagreco> jbenet having a viewer is a great idea, but is outside the use case I guess
vanila has quit [Quit: Leaving]
<jbenet> nicolagreco: nice! glad to see more rdf / LD stuff on npm. need more easy to use tooling for ld
<nicolagreco> jbenet that is the way to go - however I am not sure why you are referring to this
<nicolagreco> so let me just recap, when I curl a gateway URL, I want to get the file, but when this url is a directory listing, I want it to have semantic tags in the html
<nicolagreco> since if I use a viewer, it will be a javascript app, that curl can't really parse
<jbenet> nicolagreco understood. you could submit a PR against go-ipfs to add the RDFa tags (we can probably merge them, i dont see why not) OR you can do it yourself now with a viewer.
<jbenet> right the curl won't work well.
<nicolagreco> well I could do my own gateway
<jbenet> own gateway works too. also, where do you need RDFa? i suspect you can curl the api
<ion> Choosing whether you want the index.html (if any) or a directory listing could be done with the Accept header.
therealplato1 has joined #ipfs
<jbenet> and get the raw json and convert that to RDFa in your tool
<jbenet> but its not super clean as that specifies the types (dir/folder) in the protobuf, which is annoying to get out (hence our move to IPLD)
<nicolagreco> but what I am suggesting is just some reference to http://www.w3.org/ns/posix/stat#Directory or http://www.w3.org/ns/posix/stat#File
<achin> ooh, neat, you can get the raw data from the gateway!
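(For reference, the sort of calls this allows against the gateway's readonly API; QmSomeHash is a placeholder.)
    curl "https://ipfs.io/api/v0/object/get?arg=QmSomeHash"   # raw node as json: Data plus the list of links
    curl "https://ipfs.io/api/v0/ls?arg=QmSomeHash"           # directory-style listing of the links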
<nicolagreco> jbenet that's right
therealplato has quit [Ping timeout: 250 seconds]
<jbenet> the reference can be added: in the src gateway, by another gateway, or by the script that pulls it out. it's really a matter of what's most convenient to you
<jbenet> changing the src gateway is the best solution in the long term but the slowest for you; your own gateway will take some management; if you can do it in the client script, that's the easiest for you
<nicolagreco> I guess if we make it the default in the gateway (or even in the API, especially because you really just need to add a Link in the HTTP headers), the gateway becomes linked-data compatible
<nicolagreco> but if that is an issue right now, I will understand and make my own gateway
<nicolagreco> jbenet ^
<nicolagreco> btw, thanks for your comments
<jbenet> nicolagreco: how about this: modify go-ipfs to do what you want, we can merge the PR soon, and you can also run your own gateway (which is the same binary, just faster before we ship it out to everyone)
<nicolagreco> jbenet that is perfect
<jbenet> i believe what you want to edit is this https://github.com/ipfs/dir-index-html
<nicolagreco> jbenet perfect! can you point me out to the repo that serves the api?
<nicolagreco> I might send a PR there too
devbug has joined #ipfs
<jbenet> the API is served by go-ipfs, the dir listing is imported as a module (it's a bit tricky and annoying, how that's done, but it's good for modularity)
<nicolagreco> btw, jbenet, since last time we spoke I went to the redecentralize conference and I met the person that introduced ipfs. I thought he gave a really great presentation, I can't recall his name, though..
<jbenet> just modifying the html in the repo i linked, and updating the module in go-ipfs should do the trick. module gets used by https://github.com/ipfs/go-ipfs/blob/master/core/corehttp/gateway_indexPage.go
<jbenet> nicolagreco that was ianopolous -- he hangs out here too
<nicolagreco> jbenet at the same time I am getting MIT peeps to share their files through ipfs, instead of "upload it to xyz"
<jbenet> :D yay
<jbenet> btw, i'll be good to go visit sometime in mid/late nov, i believe.
Matoro has joined #ipfs
<nicolagreco> and I got to playing/reading and realized that most of the questions I had were the wrong questions
<jbenet> alu: damn. we really didnt work hard to stop this one at all. sigh.
<nicolagreco> jbenet and also, I somehow assumed that ipfs would keep a chain of objects as they are updated, so that one could actually make multiple forks of a file and still trace it back
ipfs_intern has joined #ipfs
<nicolagreco> @jbenet, you know, if you come here I can actually arrange for you to give a talk at the berkman center at harvard
<jbenet> nicolagreco we will, it's just not in yet, as we're still discussing best ways.
<nicolagreco> jbenet I want to follow these conversations if they are online
<jbenet> nicolagreco one way is the basic commit object way -- https://github.com/ipfs/notes/issues/23 -- another is the CRDT route https://github.com/ipfs/notes/issues/40
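(A very rough sketch of the commit-object idea from the first issue, not an implemented API; the hashes and names are placeholders. A commit is just another dag object whose links point at the data root and at the previous commit.)
    printf '{"Data": "commit: second draft", "Links": [
      {"Name": "tree",   "Hash": "QmDataRootHash",       "Size": 0},
      {"Name": "parent", "Hash": "QmPreviousCommitHash", "Size": 0}
    ]}' | ipfs object put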
<jbenet> nicolagreco that would be very useful.
<jbenet> thanks
<ipfs_intern> any solution for this error?
<jbenet> tilgovi: do you have time to chat about annotations sometime this week?
<ipfs_intern> too many open files module=commands/http
<tilgovi> yes
<jbenet> ipfs_intern ipfs version ?
<ipfs_intern> @jbenet: 0.3.8-dev
<jbenet> ipfs_intern try the new 0.3.8 or 0.3.9-dev
<ipfs_intern> @jbenet: Ok thnx :)
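(Besides upgrading, a common workaround for "too many open files" is to raise the descriptor limit in the shell that starts the daemon; 2048 is just an example value.)
    ulimit -n 2048
    ipfs daemon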
<ipfs_intern> i'm confused: i pinned a file, and when i checked with dht findprovs i was a provider, fine. But then i unpinned the file and ran repo gc, and i am still listed as a provider
<jbenet> ipfs_intern the provider record may still be out there
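(The sequence being described, roughly; <hash> is a placeholder. Even after the gc, the provider record already announced to the DHT can linger until it expires.)
    ipfs pin rm <hash>
    ipfs repo gc
    ipfs dht findprovs <hash>   # may still list this node for a while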
<spikebike> cool ethereum widget: http://www.ethereum-alarm-clock.com/
<nicolagreco> actually jbenet I really look forward to meeting you / having another call some time soon, since I have a lot of extreme/mad/edge questions
domanic has joined #ipfs
nessence has quit [Remote host closed the connection]
nessence has joined #ipfs
<voxelot> spikebike: nice.. i was just today talking about creating a blockchain timer on eth lol
<nicolagreco> btw for any of the bitcoin/ethereum devs, I hope someone comes up with BlockchainBnB
<voxelot> blockchain bed and breakfast?
<nicolagreco> airbnb made on the blockchain
<jbenet> nicolagreco: sounds good!
kpcyrd has quit [Quit: leaving]
<_jkh_> hey guys really stupid question time
<_jkh_> but where is the “ipfs rm” command to accompany “ipfs add” ?
<voxelot> you can crash at my place for 300 blocks :)
<achin> _jkh_: unpin then gc
<_jkh_> I was showing it to someone and the question came up and I said “oh well of course there is a way to remove something…”
* spikebike digs around for his legos
<_jkh_> and then I was embarrassed by not being able to figure out how. ;)
<_jkh_> achin: ah, so even if something is not pinned you unpin it?
<_jkh_> achin: that seems… counterintuitive.
<achin> if it's not pinned, i believe the unpin operation will be a noop (and can be skipped)
<nicolagreco> once something is on ipfs, it is there forever, until no one has it
<jbenet> _jkh_ add pins by default.
<jbenet> this should be clearer from the help
<_jkh_> achin: wouldn’t it tend to follow the principle of least astonishment to essentially have a “delete” or “rm” or something to accompany add which just marks something gc’able?
<_jkh_> I get that it wouldn’t “remove” it per-se
<jbenet> and maybe having a way to set --pin=false (actually maybe that's there now)
<_jkh_> but then it would nicely pair the operations
<_jkh_> add/remove pin/unpin
<jbenet> ah not yet.
<achin> well, it is called "pin rm", but yes i do agree with you in principle _jkh_
<_jkh_> where removing an unpinned thing could even perhaps be a hint to gc it sooner rather than later, if there’s any sort of gc priority in the algorithm
<achin> though perhaps it would be a little surprising to run "ipfs delete" but still have the file on your drive, so perhaps it should unpin and then immediately GC that node
<jbenet> _jkh_ yes, we could make rm unpin + gc a single object.
<_jkh_> that would work too
<_jkh_> probably better if the object was especially large
<achin> _jkh_: can i pester you about a freenas question?
<jbenet> i would want rm to write through (i.e. gc at least that object before completing). feel free to PR it up to go-ipfs
<_jkh_> you add an 8GB movie and share it with someone then they say they’re done and you rm it again to reclaim the space as soon as possible
<jbenet> the help should be very, very clear that rm only removes from local node, as it obviously cannot remove it from other nodes
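(What a hypothetical "ipfs rm" would boil down to today, per achin's unpin-then-gc answer; it only frees space on the local node, nothing is removed from peers. QmSomeHash is a placeholder.)
    ipfs pin rm -r QmSomeHash   # errors harmlessly if it wasn't pinned
    ipfs repo gc                # reclaims space used by unreferenced blocks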
<_jkh_> achin: always!
hellertime has joined #ipfs
<achin> _jkh_: if i go to the homepage http://www.freenas.org/ then mouseover 'About FreeNAS' and then click 'Hardware Requirements', i get a "Page not found" : http://www.freenas.org/hardware-requirements/
<_jkh_> !
<spikebike> normal users aren't likely to fully understand what gc is, and what that means to their disk space. How about ipfs flush (which garbage collects), and ipfs rm/del says "marked for delete, flush to recover disk space"
<jbenet> spikebike: gc will be automatic in the future.
<jbenet> thresholds.
<spikebike> flush would still be nice if you need disk right away
<_jkh_> achin: thanks! Bug filed...
<spikebike> like say with unbound's command line tool, or maybe clean (like yum or apt-get)
<_jkh_> our web site maintainers are...
<_jkh_> rare and special flowers
<achin> _jkh_: thanks! i will eagerly mash F5
<_jkh_> “does ipfs have a pidfile” <- random freenas developer
<_jkh_> they’re referring to the daemon
<_jkh_> I’ll bet he’s trying to implement “service ipfs stop”
<achin> ~/.ipfs/repo.lock i think ?
<_jkh_> that’s just a file
<_jkh_> no pid in it
<achin> except there is no pid in it
<ion> pid files are evil.
<_jkh_> ion: they totally are
<_jkh_> however, we haven’t incorporated launchd yet
<_jkh_> so we’re all ghetto
<ion> #!/bin/sh
      set -eu
      printf '%s\n' "$$" >/path/to/pidfile
      exec ipfs daemon
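(And the matching stop step an rc script could use, assuming the wrapper above wrote the pidfile; since it execs the daemon, the pid in the file is the daemon's own.)
    kill "$(cat /path/to/pidfile)"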
lgierth has quit [Quit: WeeChat 1.0.1]
voxelot has quit [Ping timeout: 260 seconds]