<mungojelly>
i thought it could even be a wiki-like piece of software or better yet various softwares that work on the same data, so it gives you a nice interface and you make your changes and then it produces a hash of the permanent wiki with your changes that other people can work off of
<sonatagreen>
there is an ipfs wiki project going, have you seen it?
<mungojelly>
no?
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<fingertoe>
Is there a way to only open it to certain IP's or the like?
captain_morgan has quit [Ping timeout: 240 seconds]
<achin>
i think so? you could also put something like nginx in front of ipfs, and use that for access control
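A minimal sketch of that setup, assuming the default gateway address; nginx (or a firewall) in front would carry the actual allow/deny rules for the permitted IPs:
    # keep the gateway bound to localhost so only the local reverse proxy can reach it
    ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8080
    # restart the daemon, then point nginx at 127.0.0.1:8080 with allow/deny rules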
<ploopkazoo>
has there been a history of illegal usage of ipfs?
<ploopkazoo>
like, if I were to make an i2p service that forwards to my ipfs gateway on a vps that I pay for with a credit card, would bad things happen to me?
<sonatagreen>
not that i know of, but if it becomes at all popular I imagine it's just a matter of time
peacekeep3r has left #ipfs [#ipfs]
<sonatagreen>
(@history)
<sonatagreen>
no idea about i2p
wowaname has quit [Quit: <3]
wowaname has joined #ipfs
<achin>
(IANAL)
<achin>
there have been several examples where website operators have been in trouble for the content others have posted, or made available
<achin>
the specific country you and your vps are in are likely important factors, too
edrex_ has joined #ipfs
edrex has quit [Ping timeout: 246 seconds]
sonatagreen has quit [Ping timeout: 260 seconds]
<fingertoe>
Everything is illegal if you look hard enough... ;-)
<mungojelly>
and of course by taking any of that too seriously you're angering infinite basilisks
edrex_ has quit [Ping timeout: 240 seconds]
lazyUL has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
edrex has joined #ipfs
border0464 has quit [Ping timeout: 256 seconds]
border0464 has joined #ipfs
jfntn has joined #ipfs
voxelot has joined #ipfs
voxelot has joined #ipfs
ygrek has joined #ipfs
jfntn has quit [Ping timeout: 246 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/ndjson from 08ec178 to 30424be: http://git.io/vWgKZ
<ipfsbot>
go-ipfs/fix/ndjson 30424be Jeromy: add test for ndjson output...
acidhax has joined #ipfs
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/ndjson from 30424be to 196f4ee: http://git.io/vWgKZ
<ipfsbot>
go-ipfs/fix/ndjson 196f4ee Jeromy: add test for ndjson output...
acidhax has quit [Read error: No route to host]
acidhax has joined #ipfs
go1111111 has joined #ipfs
SuzieQueue has quit [Ping timeout: 246 seconds]
acidhax has quit [Remote host closed the connection]
wjiang_laptop has joined #ipfs
voxelot has quit [Ping timeout: 272 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to feat/ipns-cache: http://git.io/vWwdK
pfraze has quit [Remote host closed the connection]
fingertoe has quit [Ping timeout: 246 seconds]
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/log-writer from 5f2056b to bdd5ca0: http://git.io/vWgPu
<ipfsbot>
go-ipfs/fix/log-writer bdd5ca0 Jeromy: fix bug in mirrorwriter removal and move to go-logging...
mildred has joined #ipfs
kdlv has left #ipfs [#ipfs]
chriscool has quit [Read error: Connection reset by peer]
chriscool has joined #ipfs
chriscool has quit [Client Quit]
chriscool has joined #ipfs
chriscool has quit [Ping timeout: 264 seconds]
screensaver has joined #ipfs
<daviddias>
whyrusleeping: did you have a chance to look at the ipfs config thing?
s_kunk has quit [Ping timeout: 268 seconds]
anticore has joined #ipfs
norn has quit [Ping timeout: 250 seconds]
anticore has quit [Remote host closed the connection]
nicolagreco has quit [Quit: nicolagreco]
zz_r04r is now known as r04r
nicolagreco has joined #ipfs
nicolagreco has quit [Client Quit]
NeoTeo has joined #ipfs
wowaname is now known as wowaname_
wowaname_ is now known as wowaname
jfntn has joined #ipfs
jfntn has quit [Ping timeout: 264 seconds]
acorbi has quit [Quit: acorbi]
ygrek has quit [Ping timeout: 246 seconds]
intertrochanteri has quit [Ping timeout: 256 seconds]
bedeho has quit [Ping timeout: 240 seconds]
Guest73396 has quit []
harlan_ has joined #ipfs
fingertoe has joined #ipfs
cemerick has joined #ipfs
Guest73396 has joined #ipfs
rendar has joined #ipfs
acorbi has joined #ipfs
fingertoe has quit [Ping timeout: 246 seconds]
s_kunk has joined #ipfs
s_kunk has joined #ipfs
acorbi has left #ipfs [#ipfs]
s_kunk_ has joined #ipfs
s_kunk has quit [Disconnected by services]
s_kunk_ is now known as s_kunk
s_kunk has quit [Changing host]
s_kunk has joined #ipfs
jfntn has joined #ipfs
jfntn has quit [Ping timeout: 240 seconds]
dignifiedquire has joined #ipfs
jo_mo has joined #ipfs
jo_mo has quit [Client Quit]
dignifiedquire_ has joined #ipfs
dignifiedquire has quit [Quit: dignifiedquire]
dignifiedquire_ is now known as dignifiedquire
<victorbjelkholm>
any size limit on what the gateway can load? I've added an 800MB file and want to try viewing it through the gateway...
<cryptix>
gmorning
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<davidar>
VictorBjelkholm (IRC): yeah, it times out if the download takes too long
jfntn has joined #ipfs
<victorbjelkholm>
davidar, ah, makes sense. Thanks
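For files that large, a sketch of fetching through the CLI instead of the gateway, which sidesteps the gateway timeout; <hash> stands in for the 800MB file's hash:
    ipfs get <hash> -o bigfile      # writes the file to ./bigfile
    ipfs cat <hash> > bigfile       # or stream it into a file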
infinity0 has quit [Ping timeout: 268 seconds]
rollick has quit [Ping timeout: 265 seconds]
<victorbjelkholm>
Another question, re-read the paper yesterday, and nodes in the network are incentivized to stay up longer. In what way are they incentivized? And if I run it locally as soon as I start my computer, will I lose that as soon as the daemon stops, or is it based on my peer id?
edcryptickiller has joined #ipfs
jfntn has quit [Ping timeout: 256 seconds]
edcryptickiller has left #ipfs [#ipfs]
acous has quit [Ping timeout: 268 seconds]
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
rollick has joined #ipfs
Guest73396 has quit [Ping timeout: 252 seconds]
martinkl_ has joined #ipfs
border0464 has quit [Ping timeout: 252 seconds]
harlan_ has quit [Quit: Connection closed for inactivity]
border0464 has joined #ipfs
jfntn has joined #ipfs
gamemanj has joined #ipfs
jfntn has quit [Ping timeout: 252 seconds]
therealplato has quit [Ping timeout: 265 seconds]
therealplato has joined #ipfs
hellertime has joined #ipfs
cemerick has quit [Ping timeout: 255 seconds]
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
reit has quit [Quit: Leaving]
reit has joined #ipfs
ilyaigpetrov has joined #ipfs
martink__ has joined #ipfs
martink__ has quit [Read error: Connection reset by peer]
martinkl_ has quit [Read error: Connection reset by peer]
jfntn has joined #ipfs
jfntn has quit [Ping timeout: 260 seconds]
koo7 has quit [Quit: Ragequit]
koo7 has joined #ipfs
edcryptickiller has joined #ipfs
edcryptickiller has left #ipfs [#ipfs]
Paqster has joined #ipfs
mquandalle has joined #ipfs
cemerick has joined #ipfs
martinkl_ has joined #ipfs
<gamemanj>
I think... it worked! Though it can't read what's written yet, and it needs API access right now (DAG manipulation... maybe the writable gateway could provide a JSON-encoded DAG endpoint? Oh well. It works.) /ipns/QmarsUDwQ5ak34KU6xJbr2iMuatB7e69WfuSjEBG3v7zvu
<gamemanj>
(Specifically, in the DAG viewer, look at the "mewler" link for the result)
martinkl_ has quit [Ping timeout: 255 seconds]
martinkl_ has joined #ipfs
<victorbjelkholm>
gamemanj, getting "There was an error performing an operation." upon load
jfntn has joined #ipfs
Paqster has quit [Quit: Page closed]
<gamemanj>
victorbjelkholm: Upon load, probably means it couldn't access the API - as I said, needs API access. Upon load it tries to get your peer ID... you'll have to use it from the API port - posting a message to IPNS without damaging any content you have on IPNS requires some DAG manipulation, so --writable wouldn't help...
jfntn has quit [Ping timeout: 244 seconds]
<gamemanj>
(it gets the object you have on IPNS, adds/replaces a link called "mewler", then republishes - the idea being that someone could add your peerID to a theoretical reader, and the result would be something like Twitter)
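A rough shell sketch of that flow, assuming a hypothetical <mew-hash> for the new mew object; mewler itself does the same thing over the HTTP API rather than the CLI:
    ROOT=$(ipfs name resolve | sed 's|^/ipfs/||')              # object currently published under your IPNS name
    NEW=$(ipfs object patch $ROOT add-link mewler <mew-hash>)  # add the "mewler" link (rm-link first to replace an existing one)
    ipfs name publish $NEW                                     # republish the updated root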
<gamemanj>
(except probably a lot more limited ^.^;)
abdoudjanifresh has joined #ipfs
hellertime has quit [Quit: Leaving.]
hellertime has joined #ipfs
<cryptix>
hrm gamemanj your peer seems to be NATed
<gamemanj>
I'm pretty sure there's NAT punch-through, but I have miniupnpc to force open the ports if needed
<cryptix>
okay, resolved :)
<gamemanj>
Just netcatted to my external IP, and it came up with some madness, so IPFS has probably already done that for me
<gamemanj>
yep, port redirections in upnpc -l calling themselves "http"
<gamemanj>
to the ports that the daemon said were for the swarm
<cryptix>
gamemanj: how did you create Qmc6E4UXRXy6AnQRmjz6HPJ9yBbyoowaST6KkmxXwqf2Gm ? 'ipfs ls ' complains a little - gateway serves it.. bit strange
<gamemanj>
cryptix: well, mewler adds its data via patching in a link to the mews
<gamemanj>
in a link ofc called "mewler"
<gamemanj>
that way a mewler stream can be embedded in a site
<gamemanj>
it's a lot easier to work with raw DAG nodes than unixfs
<gamemanj>
but there are probably side-effects
<cryptix>
still not sure why it won't 'ipfs ls' it - the top object looks like a valid unixfs dir
<gamemanj>
oh, it is
r1k0 has quit [Remote host closed the connection]
<gamemanj>
mewler added an extra link into it
<cryptix>
maybe it's the zero size of mewler.. not sure
<gamemanj>
well, in this case the deploy script did - the deploy script automatically patches the mewler node back in so I don't lose it when updating
Pharyngeal has quit [Ping timeout: 272 seconds]
cemerick has quit [Ping timeout: 272 seconds]
<gamemanj>
but the difference is... none, really - mewler itself just does the same thing a patch does, but by manipulating the received JSON DAG object instead of via the API
<gamemanj>
if unixfs doesn't like an extra unmentioned link, then it probably needs some additions, because metadata is a thing people tend to like
<cryptix>
gamemanj: dont think that's it. see 'ipfs object get'
<gamemanj>
in which case ipfs ls probably does a get on the links, too, to get data about them
<cryptix>
there is no such thing as 'unmentioned link' in
<gamemanj>
which is absolutely fine until you need to add metadata that doesn't work in the unixfs way
<cryptix>
also wanted to test the actual publishing but can't get api access to work
* cryptix
hides embarrassed
acidhax has joined #ipfs
<gamemanj>
yeah, it was acting weird for me too
<gamemanj>
but it worked in the end
<cryptix>
gamemanj: you should bring this up during sprint today
Pharyngeal has joined #ipfs
<cryptix>
i wonder how confused people will be about time (DST switched on in .de at least)
<gamemanj>
my computer was recently acting odd, and I was up late on the day that the timezone change happened
<gamemanj>
that was fun... "The time's changing now? How on earth..."
<achin>
i still think it would be useful if a merkledag could give hints about the type of data it contains
cemerick has joined #ipfs
Matoro has quit [Ping timeout: 272 seconds]
ashark has joined #ipfs
mungojelly has quit [Remote host closed the connection]
<gamemanj>
An idea might be a unixfs "custom" type, encoded as a single 0x0A at the start. If it has to be two bytes, 0x0D 0x0A.
<gamemanj>
That way it would look to any DAG-based programs as a newline, and unixfs programs could take note that was a DAG-based node and just give anyone wanting it the JSON representation of the DAG object.
<achin>
since the data could literally be anything, we can't expect everything to be tagged. but maybe things that are known to ipfs (like unixfs) deserve special treatment
Encrypt has joined #ipfs
Guest73396 has joined #ipfs
<gamemanj>
Unixfs already gets special treatment, the trouble is that once the root of a tree is unixfs, all direct subchildren of that tree need to be unixfs or things like this happen... meaning things have to be cloaked as unixfs to avoid breaking unixfs tools. And in the case of recursive unixfs tools, the subnodes have to be cloaked too. This can't really end well.
Matoro has joined #ipfs
<achin>
i guess i mean "specialer treatment" :)
<gamemanj>
As it is, I'm now considering deleting the "first mew" in order to change protocol so that all mews start with the \u0008\u0001 sequence, which is probably enough cloaking to keep things from crashing.
<achin>
i guess i'm proposing the exact thing you just did a few minutes ago: a way for ipfs (or other unixfs tools) to know for sure "this merkletree was created in a way that i expected, i should feel comfortable trying to decode it"
r1k02 has joined #ipfs
<gamemanj>
Since applications embedding metadata need a namespace *anyway*, maybe DAG nodes could have an extra data layer containing the data type? Or maybe some neutral numeric registry, where anyone can ask for a guaranteed unique number in a 128-bit namespace.
<gamemanj>
The tool could then check the DAG node against their Very Special Numbers (not tm) and see what it is, or pass the user to the DAG viewer for unknown stuff.
akkad has quit [Excess Flood]
<achin>
could we do this by adding a new optional field on PBNode? i *think* that'll be backward compatible, thanks to protobuf magic
<gamemanj>
I don't know how to use protobuf, I just manipulate the JSON - easier to understand and probably more update-resistant...
akkad has joined #ipfs
<achin>
you're probably right. it's quite useful to just use JSON, but that requires a node to be running
<gamemanj>
There's a lot that requires a node to be running right now.
<achin>
for sure. but adding/reading content isn't one of them (if you read/write the blocks off-disk)
<gamemanj>
Ah, but how do you do that without a disk interface, like in the case of almost every IPFS-based app that will ever exist? :)
<gamemanj>
Besides, doing that means you get to suffer if IPFS so much as changes data format slightly. Not exactly the best idea.
<gamemanj>
(And I don't mean at the protobuf level... there's all sorts of areas in the on-disk format that probably are subject to change on the basis that only IPFS should ever be looking at them.)
r1k02 has quit [Quit: Leaving]
r1k0 has joined #ipfs
<gamemanj>
But back to PBNode... Having metadata on the DAG nodes to stop applications from stepping over each other's metadata is probably a good idea. The only trouble is that you still have the namespace... Mewler finds its link by checking for a link called "mewler". Now, say someone operated a blog, indexed by post name (as they are these days), and they named a post about Mewler "mewler". What happens?
<gamemanj>
When they next send a mew, the post about mewler gets embedded into that person's Mewler stream, and Mewler would either crash (if it didn't check for the DAG type) or theoretically consider it an attachment (if it did). The only good news is that under that situation, their next mew would probably be about their blog post :)
<gamemanj>
Either that, or, if some future me decided that it was worth checking the last Mewler post in case it's not actually a Mewler post, it fails.
<gamemanj>
Aaand we're back to the namespace problem.
jfntn has joined #ipfs
martinkl_ has quit [Read error: Connection reset by peer]
sonatagreen has joined #ipfs
sseagull has joined #ipfs
<zignig>
q!
jfntn has quit [Ping timeout: 250 seconds]
* zignig
switches to the correct window.
martinkl_ has joined #ipfs
bedeho has joined #ipfs
martinkl_ has quit [Max SendQ exceeded]
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
<achin>
sorry, i stepped away for a bit
<achin>
i'm not saying you *should* bypass ipfs and go directly to disks, but you *can* :) and in fact i would hope that ipfs continues to support this (or at least considers it)
<gamemanj>
I'm making some revisions to Mewler's DAG format while I can...
<achin>
btw, what's mewler?
<achin>
i tried searching google, and i get cat pics (which i love, but i don't think they are ipfs related cat pics)
<gamemanj>
Well, it's not exactly a publicly announced project...
<achin>
is it something you can talk about?
<gamemanj>
ofc :)
<achin>
if not, just send me catpics and i'll be happy either way :)
<gamemanj>
What would happen if you used an IPNS entry as the root node of a chain of messages, like tweets?
<gamemanj>
But ofc rebranded "mews"?
<achin>
you'd have a nice list of date-sorted mews
<gamemanj>
and the purpose of such a thing would be...?
martinkl_ has joined #ipfs
<achin>
a twitter clone that can store data in a distributed manner, with decentralized publishing?
<gamemanj>
Something like that, but yes.
cjb has joined #ipfs
<achin>
am i missing a bigger picture?
<gamemanj>
Though the current restriction on only publishing to your own keypair means that there's no way of doing hashtags & such :)
<achin>
how so?
<gamemanj>
Each hashtag would have a keypair, and there'd be a chain of links to the associated mews.
* achin
tries to reconcile that idea with what he currently understands hashtags to be
<gamemanj>
So if someone wrote a mew with the hashtag "#ipfs", it would get the keypair from some repository (maybe in-built into the client, maybe the user configs it...), and adds it to that keypair's chain, so anyone searching for #ipfs hashtags would see their mew.
fingertoe has joined #ipfs
Matoro_ has joined #ipfs
<achin>
got it. it would need the private key, i believe
bsm1175321 has quit [Remote host closed the connection]
<sonatagreen>
I think that would weaken/remove the association between 'published to hashtag #ipfs' and 'the text of the mew contains the substring "#ipfs"'
acidhax_ has joined #ipfs
<sonatagreen>
wouldn't it be better to just treat hashtags as search terms?
<gamemanj>
Ah, but the other clients could reject it if that wasn't the case.
<gamemanj>
Also, search though what?
<sonatagreen>
through mews you already downloaded by reason of following people
<gamemanj>
That's an option, too.
<achin>
is there anything preventing abuse?
norn has joined #ipfs
martinkl_ has quit [Read error: Connection reset by peer]
Matoro has quit [Ping timeout: 252 seconds]
<sonatagreen>
(It might be worth looking at existing federated social networking to see their design decisions)
<gamemanj>
Well, the whole hashtag thing is theoretical, but clients could "correct" the chain if something was obviously wrong
<gamemanj>
like someone trying to revert/destroy the hashtag chain
martinkl_ has joined #ipfs
<sonatagreen>
(e.g. Diaspora*, pump.io)
<gamemanj>
or someone faking a post, since if someone just posted a mew, then that mew would have to be on their IPNS entry. If it's not, then somebody is faking it, so off it goes
<achin>
that makes sense. seems expensive, but i'm not sure
<gamemanj>
Well, it's expensive if you're checking that hashtag all the time.
<gamemanj>
Or if that hashtag was excessively popular.
acidhax has quit [Ping timeout: 240 seconds]
<achin>
anyway, these are very nice ideas. a great example, i think, of an app that can be built on top of ipfs merkledags (but isn't a simple unixfs filestore)
Matoro_ has quit [Ping timeout: 250 seconds]
martinkl_ has quit [Client Quit]
bsm1175321 has joined #ipfs
voxelot has joined #ipfs
Guest73396 has quit [Ping timeout: 255 seconds]
<gamemanj>
hmm, apparently my IPNS publishes aren't working
fingertoe has quit [Ping timeout: 246 seconds]
<gamemanj>
oh, I forgot to change a part of the deploy script, ofc...
voxelot has quit [Ping timeout: 268 seconds]
<gamemanj>
hehehe, let's see if my UnixFS cloaking fools ipfs ls into not-crashing...
voxelot has joined #ipfs
<gamemanj>
Apparently not.
<gamemanj>
Seems protobuf is sensitive to the length of the data, too...
fingertoe has joined #ipfs
<achin>
there might be security concern here somewhere
<gamemanj>
no, just Error: proto: can't skip unknown wire type 7 for unixfs_pb.Data
<gamemanj>
though admittedly that does threaten my mental security...
<achin>
hmm, i'm not sure exactly what i'm thinking
<gamemanj>
mental denial of service via length-based data interpretation transform
<gamemanj>
which is the complicated way of saying that changing behavior based on length could give someone a headache
<achin>
some deserialization libraries give warnings like "don't pass untrusted data to the decoder". i don't know if protobuf has this warning or not (a quick google says 'no')
domanic has joined #ipfs
<gamemanj>
as it is, I'll have to try encoding an empty UnixFS file, and see if that fills up the data field so it won't try interpreting my actual data that way
<gamemanj>
because using ipfs add means I'd have to do the link-patch separately, and that's a great way to waste even more time on response callbacks...
norn has quit [Ping timeout: 265 seconds]
mungojelly_ has joined #ipfs
<achin>
"echo -n |ipfs add" gives you a zero-len unixfs file: QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH
<gamemanj>
actually, I just did that, more or less - made an intermediate file first though :)
<achin>
`mkdir empty && ipfs add -r empty` is an empty unixfs-dir. the same as `ipfs object new unixfs-dir`. i wonder why unixfs-dir got a template, and unixfs-file didn't
<gamemanj>
maybe that's because making a unixfs-dir doesn't have a dedicated command AFAIK
<gamemanj>
You could add a directory, but where's that going to come from
<achin>
you make it :) but point taken
norn has joined #ipfs
martinkl_ has joined #ipfs
<gamemanj>
for some reason IPNS lookups turned from "slow" to "really slow" within a few minutes... I suspect I'll be inlining the SVG and the JS soon enough...
<achin>
i was seeing the same a few days ago
<gamemanj>
It took a whole minute to get an IPNS entry and its contents.
<gamemanj>
And I have a timing diagram here.
<gamemanj>
wait, these are almost EXACTLY on minutes
<gamemanj>
something is going on here
<gamemanj>
something weird and spooky
<gamemanj>
loading the initial page, 2 minutes and 11 ms. Precisely 2 minutes to within human perception. Next request, the javascript. A full minute and 5ms. ID API request, 0ms. GET API request, 1 minute and 6ms.
<gamemanj>
Are you seeing a pattern here?
<gamemanj>
puts are really fast (88 and 72 ms for both puts involved in Mewler publishing), and the publish took 2 seconds and 544ms (aka: 2.544s).
<sonatagreen>
I agree, that is weird as hell
lazyUL has joined #ipfs
<gamemanj>
But the gets from IPNS take almost exactly whole minutes, again to within a 20ms tolerance positive (and none negative).
<gamemanj>
Somewhere, I suspect a timer had an extra 0 on it...
<gamemanj>
Restarting daemon, seeing if that fixes it...
<gamemanj>
21 seconds. Much more realistic, if slow.
<gamemanj>
Except the next download is probably going to be... yep!
<gamemanj>
A full minute.
<gamemanj>
+21ms.
<gamemanj>
Something is seriously wrong here.
<gamemanj>
The application won't work, but I can still try loading it off of ipfs.io, see what the timings are there.
<ipfsbot>
[go-ipfs] mildred opened pull request #1902: Remove usage of merkledag.Link.Node pointer outside of merkledag (master...ipld) http://git.io/vWKrM
<gamemanj>
11 seconds... followed by 15 seconds, followed by 7 seconds.
<gamemanj>
So apparently looking up your own IPNS entry is a crime punishable by even longer than normal waits.
<gamemanj>
But if you do it through a different node, it's relatively ok.
<gamemanj>
I suspect madness.
captain_morgan has joined #ipfs
<gamemanj>
Also, last time I checked, I couldn't access an IPNS entry when offline.
<gamemanj>
Despite this IPNS entry being for the node running on my system.
<gamemanj>
I suspect there may be a link here.
lazyUL has quit [Read error: Connection reset by peer]
<gamemanj>
Now, in less sane news, I'll be inlining everything I can...
<gamemanj>
And maybe a few things I can't.
<mildred>
gamemanj: You can grep for timeouts in the code base
<mildred>
There is one minute timeout in path/resolver.go around line 125-130
<mildred>
There are probably more timeouts elsewhere
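A quick way to check whether that one-minute timeout is what's being hit is to time the resolve from the shell; <peer-id> is a placeholder for the node's ID from `ipfs id`:
    time ipfs name resolve <peer-id>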
<dignifiedquire>
daviddias: so any more progress on the api tests?
<daviddias>
need to be able to set the config for the API headers
<daviddias>
if you can set that to something like Access-Control-Allow-Origin=*
<daviddias>
without it throwing an error
<daviddias>
it would be top :)
abdoudjanifresh has joined #ipfs
<daviddias>
but it seems that something is weird on that part of the go code, cause it never works
<daviddias>
however, changing non nested values on a config file works like a charm
* gamemanj
goes looking about the ipns publishing code
<gamemanj>
it looks as if routing is the actual backend of ipns...
<gamemanj>
also, technically, the only reason records have an EOL is because by default the records say they do. I suppose that's a good way to ensure things don't get out of control.
jwheh has quit [Quit: Page closed]
abdoudjanifresh has quit [Client Quit]
<gamemanj>
And now I know where all the empty directory IPNS keys came from...
<gamemanj>
publisher.go:InitializeKeyspace
<victorbjelkholm>
daviddias, you're right, it does not work
<victorbjelkholm>
hm
pfraze has quit [Ping timeout: 240 seconds]
pfraze has joined #ipfs
* gamemanj
runs "ipfs log level all debug"
* gamemanj
is promptly spammed
<gamemanj>
wow, I assumed the network was a lot quieter
<gamemanj>
but there's activity going on all the time
acidhax_ has joined #ipfs
<gamemanj>
lots of activity
acidhax_ has quit [Remote host closed the connection]
acidhax has quit [Read error: Connection reset by peer]
acidhax has joined #ipfs
<gamemanj>
Hmm? INFO[16:37:25:000] cant connect to peer <peer.ID >: dial attempt failed: peer has no addresses module=bitswap
<gamemanj>
"<peer.ID >"
<victorbjelkholm>
daviddias, got it!
<victorbjelkholm>
ipfs config API --json '{"HTTPHeaders": {"Access-Control-Allow-Origin": ["*"]}}'
<victorbjelkholm>
was a bit tricky but there it is
<victorbjelkholm>
now get those tests running! :D
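A small follow-up sketch for double-checking that the header actually landed in the config; note the API only reads the config at startup, so the daemon needs a restart afterwards:
    ipfs config show | grep -A 3 HTTPHeaders   # confirm the Access-Control-Allow-Origin entry is there
    # then restart `ipfs daemon`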
<gamemanj>
hmm, "Received invalid record! (discarded)" needs a bit more information to be of much use
<gamemanj>
like "why" and "who"
qgnox has joined #ipfs
<daviddias>
victorbjelkholm: sweeeet!
<victorbjelkholm>
even though I don't know much Go, it's very readable for a beginner like me. That's sweeeet! :)
<sonatagreen>
If I publish something to my ipns and then shut down my node, how long does it take for the ipns link to rot?
<daviddias>
(shame on me, just realised I was trying to put the --json flag after the args all this time, and forgot that go cli tools don't understand that)
<victorbjelkholm>
daviddias, that doesn't seem to matter
<victorbjelkholm>
placed it in the end, still works
<victorbjelkholm>
the `ipfs config --help` doesn't show the order of the options though
<gamemanj>
sonatagreen: well, from what I've been looking at, 24 hours is the theoretical maximum, since apparently the records have a built-in self-destruct...
<gamemanj>
as for minimum, go ask someone who knows how it propagates
<sonatagreen>
Is there a way to disable the self-destruct? It would be nice for the permanent web to have links that don't rot...
<gamemanj>
sonatagreen: still not sure how propagation works, so not sure if there's any point... also, there might be a reason for giving the records an EOL
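Roughly how that looks from the CLI, going by the 24-hour EOL quoted above: the record is refreshed by publishing again while the node is up; <hash> is a placeholder for the content being published:
    ipfs name publish /ipfs/<hash>   # publish (or re-publish to refresh) the record for your peer ID
    ipfs name resolve                # check what your own name currently points to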
<sonatagreen>
it's just, I really want something that works like Freenet's USKs
<gamemanj>
looking at the publishing code actually seems to have helped me discover something
<gamemanj>
publishing does a r.GetValue, so if publishes are fast but resolves are slow,
<gamemanj>
then it might not be in r.GetValue that's slowing it down
<gamemanj>
entirely guesswork
vanila has joined #ipfs
<vanila>
hi
<vanila>
how do i compute the hash ipfs will give a file using C?
<vanila>
should I just go find sha256 code in C somewhere and base58
<ipfsbot>
[node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWKAo
<ipfsbot>
node-ipfs-api/solid-tests d65f6f1 David Dias: fix set config value, thanks to Victor!
border0464 has quit [Ping timeout: 252 seconds]
<M-hash>
vanila: the clever bits of IPFS protocol like block-finding to enable sub-file dedup probably require a little more code than that to replicate... you may be best off looking into using go-ipfs built as a library and imported to your C code.
<gamemanj>
If you're looking for the hash of a file, then chances are you're going to ipfs add it at some time, right?
<vanila>
i want the web server to save the file to disk with the ipfs hash as its filename before I ipfs add it
border0464 has joined #ipfs
hellertime1 has joined #ipfs
hellertime has quit [Ping timeout: 260 seconds]
<sonatagreen>
vanila, you may be interested in ipfs add -n
<ipfsbot>
[node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWKhB
<ipfsbot>
node-ipfs-api/solid-tests 929a073 David Dias: remove some tests from browser context through isNode
<sonatagreen>
a sort of 'dry run' option, computes the hash without actually writing anything to disk
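A small sketch of that dry-run flow for the web-server case described above; -q prints only the hash, and the filename itself doesn't affect the hash of an unwrapped file:
    HASH=$(ipfs add -q -n uploaded-file)   # compute the hash without adding anything
    mv uploaded-file "$HASH"               # save under the future IPFS hash
    ipfs add "$HASH"                       # the later real add yields the same hash (same chunker settings assumed)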
<ipfsbot>
[node-ipfs-api] diasdavid pushed 1 new commit to solid-tests: http://git.io/vWKjQ
<ipfsbot>
node-ipfs-api/solid-tests 29a302e David Dias: remove unused stop daemon test
acidhax_ has joined #ipfs
<vanila>
hmm i can't really decide the best way
<vanila>
another option could be save the files with random names
<vanila>
and then make symlinks to them with the correct name after ipfs adding
<whyrusleeping>
acidhax_: i mean, as much as my time is my own and not controlled by the government and changed by an hour whenever they feel like it >.>
<dignifiedquire>
daviddias: so it’s actually the refs test that times out, but an error appears in the dht.get code
<dignifiedquire>
whyrusleeping: so sprint in an hour?
<daviddias>
Oh makes sense
<daviddias>
Refs is Reffing on the nested dir
<lgierth>
it's utc+1 as planned
<dignifiedquire>
ok
<lgierth>
s/utc+1/5 pm utc/
<multivac>
lgierth meant to say: it's 5 pm utc as planned
<daviddias>
Which is not added on browser tests
<daviddias>
Add the if (!node) clause to that one
<lgierth>
dignifiedquire: or is there anything in the way?
<lgierth>
(sorry my english is the hammer tonight)
<daviddias>
Does it pass everything else ?
<dignifiedquire>
lgierth: no it’s fine, just wanted to know
<whyrusleeping>
so is everyone in agreement that sprint starts now?
<whyrusleeping>
i'm totally okay waiting an hour
<dignifiedquire>
okay now, better in an hour for me
<whyrusleeping>
(might even be more convenient, the others might still be asleep)
<lgierth>
i'm okay to start now
<daviddias>
whyrusleeping: you know what we need, Vita's coffee
<vanila>
I'll take code from bitcoin
<whyrusleeping>
daviddias: mmm, yeah...
<whyrusleeping>
alright, lets wait an hour
<whyrusleeping>
jbenet might be up by then
<lgierth>
ok
<dignifiedquire>
daviddias: Executed 25 of 40 (2 FAILED) (skipped 15) (0.486 secs / NaN secs)
<dignifiedquire>
daviddias: and those two fail because of the error in the screenshot
<daviddias>
error in the screenshot?
<dignifiedquire>
ipfs link above
<daviddias>
ah, that screenshot
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed fix/log-writer from bdd5ca0 to 7fe28c6: http://git.io/vWgPu
<ipfsbot>
go-ipfs/fix/log-writer 7fe28c6 Jeromy: fix bug in mirrorwriter removal and move to go-logging...
<daviddias>
I still get the same 'can't find module vinyl'
<gamemanj>
press "load feed for peerID" once the script has loaded, don't edit the value in the box, since that's the peerID you're going to read - no point reading a peer with no mews, is there
wtbrk has joined #ipfs
wopi has quit [Read error: Connection reset by peer]
<victorbjelkholm>
dignifiedquire, commented out everything saucelabs. process hangs here: "[19:11:40] Finished 'test:browser' after 19 s"
<dignifiedquire>
victorbjelkholm: fixed DEBUG=true now skips the sauce credentials test
acidhax has joined #ipfs
<ipfsbot>
[node-ipfs-api] Dignifiedquire pushed 2 new commits to solid-tests: http://git.io/vW6EX
<ipfsbot>
node-ipfs-api/solid-tests f94c628 dignifiedquire: Do not require credentials in debug mode
<ipfsbot>
node-ipfs-api/solid-tests 7ab13cd dignifiedquire: Do not run refs tests in the browser
<kyledrake>
Sorry, just got off the phone with a journalist, I'm here now
<dignifiedquire>
victorbjelkholm: that’s fine, that’s just a bug in the gulpfile
<gamemanj>
A journalist?
<dignifiedquire>
so did the test succeed?
wopi has joined #ipfs
<dignifiedquire>
just kill it with ctrl+c
* lgierth
here
<kyledrake>
I was showing someone at Vice Motherboard how to make a decentralized web site
acidhax_ has quit [Read error: Connection reset by peer]
<ion>
kyledrake: Cool
<dignifiedquire>
everybody please check in: jbenet daviddias mappum victorbjelkholm whyrusleeping richardlitt lgierth kyledrake and all those I forgot
<jbenet>
thanks dignifiedquire -- i'm here.
* lgierth
hereby checking in
<victorbjelkholm>
I'm here, for sure. I think. Maybe. Yeah, I am
<kyledrake>
Here
<gamemanj>
Meow
<dignifiedquire>
victorbjelkholm: pull with my changes and try again please, but it looks promising, daviddias looks like you are the only one with vinyl errors
* daviddias
here :)
<jbenet>
ok, i'll just take over running the sprint and hopefully daviddias and whyrusleeping come back before it's over
<jbenet>
lgierth: thanks yep-- still working through massive github backlog
<jbenet>
will handle that today
<victorbjelkholm>
dignifiedquire, same error with your changes, one less test fails (refs), two still failing (dht)
<dignifiedquire>
victorbjelkholm: okay, lets continue after sprint
<kyledrake>
lgierth I'll look into it
Oatmeal has joined #ipfs
Oatmeal has quit [Max SendQ exceeded]
<jbenet>
lgierth: all that sounds good
Matoro has quit [Ping timeout: 260 seconds]
<jbenet>
lgierth: what's the fc00 invitation? (is that discovery?)
<kyledrake>
My guess is Hetzner has a similar policy. They all try to not allow people to turn servers into warez dumps
<kyledrake>
That are shooting full cap bandwidth all the time
<jbenet>
anything else lgierth?
<lgierth>
jbenet: i wanted to kick off go-cjdns by outlining what needs work, and inviting everybody who mentioned their interest so far
<lgierth>
jbenet: that's it
<jbenet>
lgierth: ahhh that makes sense. yep good idea
<lgierth>
kyledrake: probably you're right -- it just stuck out to me and wanted to double-check
<jbenet>
ok thanks. next up victorbjelkholm
<victorbjelkholm>
well, I don't really have a list of items, just contributing for fun and was looking forward to participating in the hangout calls later
<victorbjelkholm>
my "focus" would be to help node-ipfs-api and node-ipfs + building some ecosystem of apps around IPFS. But nothing official, just stuff I do during my freetime
<jbenet>
victorbjelkholm indeed :) -- though i've seen a lot of things you've contributed. this sync is to give people a chance to list out all the things together, so that other people can read one digest and get a sense of progress across all the subprojects, and to get a sense of what changes, etc.
<jbenet>
(i.e. don't need to be part of a sprint from the beginning to be able to drop an update or whatever-- this is all very organic)
<victorbjelkholm>
jbenet, right. Jump to the next person and after that I'll have a list ready of what I'm focusing on
<jbenet>
kyledrake ?
<gamemanj>
victorbjelkholm: In which case, here, have a thing! (needs API access to work, though, because it reads DAG nodes directly. Just press Load feed for peerID once it loads) /ipns/QmarsUDwQ5ak34KU6xJbr2iMuatB7e69WfuSjEBG3v7zvu/
<ion>
whyrusleeping: Perhaps the future human hive mind will be built on top of IPFS.
<jbenet>
dignifiedquire: btw, daviddias and i kept getting errors trying to run station. like, sometimes it would work and sometimes not. not sure if we figured out what was going on
<jbenet>
dignifiedquire: also yeah, dropping sketches here + in sprint issue would be sweet :)
<dignifiedquire>
jbenet: pretty sure that’s related to the econreset in the node-ipfs-api
<victorbjelkholm>
dignifiedquire, I see. Ping me once you have something ready, love to give some feedback
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<victorbjelkholm>
would be nice to have a collection of information people are interested in having available in the ui as well, so the design can be around that
acidhax_ has quit [Ping timeout: 268 seconds]
<whyrusleeping>
victorbjelkholm: graphs from ipfs stats bw
<dignifiedquire>
victorbjelkholm: you are taking the words out of my mouth :P
<jbenet>
dignifiedquire: can you make the map a setting? i really dont want to completely give up the earth sphere. a _lot_ of people love to play with it. (i.e. we can make it a toggle of some sort)
<whyrusleeping>
i can go if daviddias isnt ready
<jbenet>
ok daviddias seems to be afk. skip and return to him after others.
* daviddias
hjere here :)
<jbenet>
richardlitt want to go? (whyrusleeping going in order of saying "here")
<richardlitt>
jetlag :)
<jbenet>
ah ok go for it daviddias
<richardlitt>
Sure
* daviddias
in low energy mode, but here :)
<richardlitt>
I'll wait.
<dignifiedquire>
jbenet: we can make it a setting so people can change it to their most loved projection, but the current version is just nuts in terms of performance; it makes the fans of my laptop go crazy as soon as I open it
<dignifiedquire>
and it is actually hard to use in any useful way besides playing around
<daviddias>
incoming:
<daviddias>
### SPRINT UPDATE
<daviddias>
- [~] node-ipfs-api solid tests, a lot of work going on, thanks to @dignifiedquire and @victorbjelkholm for chipping in on this one https://github.com/ipfs/node-ipfs-api/pull/81
<daviddias>
- [x] San Francisco and Palo Alto IPFS events stravaganza
<daviddias>
- [~] webrtc-explorer as one of the transports of libp2p (ongoing)
<daviddias>
- [x] Lots of flying
<daviddias>
- [ ] Bring Jeromy up to speed on how libp2p is modularised
<daviddias>
- [x] Set up analytics things in Sum All
<daviddias>
- [x] get ipfsd-ctl to fire daemons without connecting to the central network
<jbenet>
dignifiedquire: hmm odd. wonder if there's something weird platform specific. my macbook air doesnt complain at all.
<daviddias>
- [x] complement ipscend README with dnslink
<daviddias>
- [x] add coverflow to ipscend preview (still needs a proper designer hands though)
<daviddias>
- [x] logistics for San Francisco IPFS meetup (food, bannerman, etc)
<daviddias>
- [ ] decide with @jbenet the articles roadmap for libp2p
<daviddias>
Was blocked on getting npm's registry on IPFS due to lack of a machine with enough HDD and a stable internet link (from moving from place to place constantly).
<jbenet>
ccccc-combobreaker
<padz>
dignifiedquire: might be useful if you ever decided to visualize satellites
<jbenet>
padz: indeed.
<dignifiedquire>
padz: yeah, and of course when adding mars
<jbenet>
it's actually not that far off, i've been looking at cube sat pricing. it's not insanely expensive.
<victorbjelkholm>
is the purpose of the webui to provide a map of clients though? Feels like there is a lot of better things to focus on when it comes to the ui
<whyrusleeping>
yeah, the globe thing makes my laptop unhappy too
<jbenet>
victorbjelkholm agreed that's just one part.
<victorbjelkholm>
the performance of the globe is probably because of browsers... jbenet uses safari on osx?
<dignifiedquire>
it’s just the first screen I started with because I had some rough ideas
<jbenet>
whyrusleeping: really? is it a webgl problem? it really shouldn't.
<jbenet>
victorbjelkholm nope chrome
<whyrusleeping>
yeah, i use chrome
<whyrusleeping>
fans start to spin up if i leave it running
<dignifiedquire>
yeah chrome here too
<jbenet>
daviddias: all that sounds good. let's sync up about libp2p roadmap with whyrusleeping today
<daviddias>
awesome :)
<jbenet>
daviddias: did you get the recordings of the meetup? would be nice to post the demos up somewhere
<daviddias>
still have to go through them and edit them
<daviddias>
but yep, got it on my camera :)
<jbenet>
right sounds good
<jbenet>
ok, next up-- richardlitt?
<richardlitt>
Aight
<richardlitt>
[ ] IPFS API documentation: Trying to figure out how to get swagger integrated into go-ipfs. Not entirely clear on this, any help would be great. Readme.io gave us access, but it is really unclear how this helps, as their swagger integration is basically just an input field. CF their FAQ:
<richardlitt>
Would be great to chat about this during one of the talks -- infrastructure? go-ipfs?
<jbenet>
btw, dignifiedquire: the flags + agent looks great to me. lots of good ideas to draw from torrent clients
<richardlitt>
Basically, I think I am just going to have to end up building readme.io docs by hand
<richardlitt>
Which... is OK? But really suboptimal
gaboose has joined #ipfs
<victorbjelkholm>
richardlitt, swagger has a editor linked on their website, helped me get a grasp on the concept. http://editor.swagger.io/#/edit
<lgierth>
richardlitt: let's check it out, i'm sure we can automate most of it
<dignifiedquire>
richardlitt: happy to join talks about docs, had my fair share of building those in the past
acidhax__ has quit [Remote host closed the connection]
<victorbjelkholm>
readme.io integration with swagger is limited in beta, basically you store your swagger elsewhere (github or whatever), then you copy-paste it to readme.io
<jbenet>
richardlitt: i think we're supposed to make swagger src files and we can render those with swagger or with readme.io
<victorbjelkholm>
jbenet, exactly!
<richardlitt>
jbenet victorbjelkholm yeah. So, I guess I'm just going to make swagger src files in another repo
<richardlitt>
but I'm not sure exactly where to look for code atm. just going through go-ipfs src and bin files
<victorbjelkholm>
richardlitt, this is for the REST API of the daemon, yeah?
acidhax_ has joined #ipfs
<richardlitt>
victorbjelkholm: Yes, that's one of the things. We should also have better docs just for the IPFS cli though, right, Juan?
devbug has joined #ipfs
<richardlitt>
Another option going forward is for me to help out more with node-ipfs-api doc building by victorbjelkholm, and then see if I can mirror changes to the readme in swagger files
acidhax_ has quit [Remote host closed the connection]
<victorbjelkholm>
richardlitt, should be able to generate the docs for go-ipfs cli, if it is built into the --help flag. Would help in maintaining stuff
<jbenet>
richardlitt: yeah we should have better docs but we should also have a language independent spec. (the swagger docs could be that spec)
<jbenet>
and yeah parts of the go-ipfs cli help could be generated from the swagger docs, not sure.
<richardlitt>
Ok, cool. I'll work on language-independent swagger docs in ipfs/api for all of our RESTful services, then.
<richardlitt>
Looking at go-ipfs, node-ipfs-api, and java-ipfs-api
<jbenet>
sounds good. (and btw, im not sure swagger is the best, it's just one that i found, feel free to find others)
<richardlitt>
That puts me on much better footing. Was kind of confused as to how to approach this.
<richardlitt>
i'm not sure, either. I think just manually doing readme.io might ultimately be the prettiest
<jbenet>
(a bunch of people like swagger and i like the 3 column layout readme.io gives)
<richardlitt>
as their swagger integration is actually not good right now
<jbenet>
the problem with manual readme.io is we're stuck there
<victorbjelkholm>
jbenet, swagger is one of the better when it comes to api documentation
acidhax has quit [Ping timeout: 264 seconds]
<jbenet>
i'd prefer having some source we can feed into readme.io
<jbenet>
unless they also have an open spec?
<jbenet>
if they have an open spec that's equally good, i dont care. we can always translate it with code, too
<victorbjelkholm>
yeah, going full readme.io is not to recommend. We're there right now with Typeform and we have to manually change things without version control which is... Painful
<richardlitt>
yep
<richardlitt>
They don't really have a good open spec that I remember
<richardlitt>
I think swagger + Readme.io should be the best of both worlds
<richardlitt>
and ideally those worlds should be bridged soon
<jbenet>
ok sounds good
<jbenet>
thanks richardlitt -- ok whyrusleeping ?
<whyrusleeping>
yeap!
<whyrusleeping>
Got a lot of PRs opened up this week, lots of bugfixes
<whyrusleeping>
huh, is there a bug report for this somewhere?
<jbenet>
yeah that's what i filed o/
<jbenet>
but its an api bug, not a webui bug, i think
<whyrusleeping>
ah, could you provide any more information on it failing? like a repro or something?
<jbenet>
i thought daviddias had found the problem, not sure
wopi has quit [Read error: Connection reset by peer]
<daviddias>
it was the move to ndjson
<jbenet>
not sure, let's try to address it later today or whatever. what we should do is have one repo that uses go-ipfs and then runs all known ipfs-api things on top
wopi has joined #ipfs
<jbenet>
and tests all of them, to make sure we dont break any
<jbenet>
or if we do, that we warn + have a fix in before the release hits
<whyrusleeping>
we havent changed to ndjson yet
<daviddias>
isn't it on 0.3.8?
<whyrusleeping>
the actual newline part hasnt merged yet
<whyrusleeping>
the rest hasnt changed for months
<whyrusleeping>
although, the issue might be a change of defaults
captain_morgan has quit [Read error: Connection reset by peer]
kerozene has quit [Ping timeout: 268 seconds]
<daviddias>
is the ndjson on master then?
<whyrusleeping>
daviddias: nope, it hasnt merged yet
<daviddias>
so how did I get that
<dignifiedquire>
I was seeing ndjson in some api calls as well
<dignifiedquire>
and I only use the ipfs version published through npm
<whyrusleeping>
ooh, right.
<whyrusleeping>
we do a stream of json objects right now, but it doesnt have newlines delimiting them
<richardlitt>
jbenet: done. Should I open #41?
<whyrusleeping>
so i was referring to the actual addition of those newlines
<dignifiedquire>
that’s even worse Oo
<daviddias>
oh
<daviddias>
so, that makes it really hard/annoying to parse
<dignifiedquire>
well, let me rephrase that: the node api chokes and dies on that
<daviddias>
I thought you just had missed the newlines and it was intended to have new lines and now the new lines were there
<whyrusleeping>
the newlines will be there soon
<whyrusleeping>
which api calls are failing for you?
acidhax has joined #ipfs
acidhax_ has joined #ipfs
<jbenet>
richardlitt: yeah go for it--
<jbenet>
ok thanks whyrusleeping -- keep discussing with others re api. i'll move on
<victorbjelkholm>
can I paste my list now?
<jbenet>
yep go for it
<victorbjelkholm>
As I said, I'm a volunteer during my free time, which is not very much. But I do what I can and if you have any recommendations for other things to do, please let me know.
<victorbjelkholm>
IPFS in JS
<victorbjelkholm>
- [~] working on node-ipfs-api to pass the tests in browser + node env
<victorbjelkholm>
- [ ] find and help out with things to contribute to node-ipfs (will need help from daviddias with this)
<victorbjelkholm>
- [~] Finish writing docs and API docs
<victorbjelkholm>
go-ipfs
<victorbjelkholm>
- [ ] Start contributing and improve the API for setting config params, especially nested ones
<victorbjelkholm>
OpenIPFS
<victorbjelkholm>
- [~] having openipfs ready for public consumption, mainly to scratch my own need for ipfsbin
<victorbjelkholm>
IPFSBin
<victorbjelkholm>
- [~] new ui for ipfsbin to be able to select language amongst other things
<victorbjelkholm>
- [ ] start using openipfs for ipfsbin, to make content more available
<victorbjelkholm>
Community
<victorbjelkholm>
- [ ] preparing talk for local meetups here in Barcelona
<victorbjelkholm>
- [ ] collecting things for awesome-ipfs showcase
<victorbjelkholm>
eof
<kyledrake>
victorbjelkholm community hosted content is a great use case for IPFS, so it's awesome you're exploring that more
Matoro has quit [Ping timeout: 255 seconds]
<victorbjelkholm>
kyledrake, yeah, felt like a deal breaker for some, since permanent web is not really possible today, without a network of nodes willing to rehost that content for the community
acidhax has quit [Ping timeout: 250 seconds]
<victorbjelkholm>
so this is a start in that direction
<jbenet>
victorbjelkholm i love the the ipfsbin you made, and openipfs too
<jbenet>
victorbjelkholm openipfs could be a good use case to motivate making ipfs-cluster
<richardlitt>
big fan of collecting things for awesome-ipfs
<victorbjelkholm>
jbenet, thank you! ipfsbin is kind of crappy though, and I realized I needed openipfs before ipfsbin can be "done"
<kyledrake>
victorbjelkholm I have a similar use case for it, so I'll be taking a look at it this way. Ideally there will be a standard bearer for IPFS nodes sharing data in this context.
kerozene has joined #ipfs
<jbenet>
victorbjelkholm we can probably provide some temporary hosting for ipfsbin
<victorbjelkholm>
jbenet, ipfs-cluster? Have some shellscripts for launching a cluster of ipfs daemons right now. Maybe I should look into ipfs-cluster and contribute there
<victorbjelkholm>
kyledrake, yeah, I just want to get the ball rolling :)
<dignifiedquire>
whyrusleeping: can’t reproduce at the moment, not sure why though. daviddias did you change anything in that regards in the solid-tests branch?
<kyledrake>
The stanford talk was awesome and super accessible.
<kyledrake>
I PR'd to put it on the front page
<richardlitt>
agreed about accessibility
<richardlitt>
Would be good on front page.
norn has quit [Quit: Leaving]
Matoro has joined #ipfs
<jbenet>
dignifiedquire: oh no worries at all, my mistake for running this so late. i'll join in a min.
<dignifiedquire>
jbenet: well someone has to be last ;)
cemerick has joined #ipfs
<ion>
victorbjelkholm: It would be cool to have a locally running program that lists user-submitted IPFS links and has buttons to pin+upvote, upvote or downvote them (not unlike a subreddit).
<jbenet>
:D
<dignifiedquire>
daviddias: are you around to join?
<ion>
(I might not have space for something but i might still want to upvote it to make it more prominent for others.)
<daviddias>
I need 5 mins. Making sure I don't burn my dinner (trying my new batch of homemade boozy hot sauce)
<victorbjelkholm>
ion, hah, always something we can build on top of openipfs later :) I'll provide api endpoints for everything in openipfs so should be simple
rht has quit [Ping timeout: 252 seconds]
<vanila>
ippfit
<daviddias>
today's schedule with daylight saving changes is all weird and since typically we start the hangouts 2h after checkin, I was expecting for it to start in 20 mins
<vanila>
yeah this is a nice idea for a site
<dignifiedquire>
daylight saving is screwing us all
* lgierth
just updated the CEST times in ipfs/pm/readme.md to CET, as daylight savings changed yesterday here
<lgierth>
daviddias dignifiedquire cryptix krl ^
<hoboprimate>
lurker here: can I give a stab at redesigning a new logo for ipfs network, and send it to you with no guarantees it would be picked, or is everyone happy with the current one?
<hoboprimate>
just that a cube doesn't tell much.
<sonatagreen>
i don't think anyone would be /offended/ at suggestions
<hoboprimate>
ok cool
<ion>
We welcome suggestions as long as they are a cyan cube.
<hoboprimate>
ahahah
<sonatagreen>
snrk
<hoboprimate>
ok, I'll give it a shot, and pop here to show it later this week
<sonatagreen>
what do you have in mind?
<hoboprimate>
I still don't have anything in mind, thinking of going through the white paper and first get a better understanding of the innards of ipfs, see if it can suggest something
<gamemanj>
ion: So they could be a perspective cyan cube, an orthographic cyan cube, isometric, or an inverse-perspective cube (the further things go away from you, the bigger they are!)...
<gamemanj>
The cube could be rounded, carved with nice designs, manipulated in all sorts of ways...
<gamemanj>
And if "cyan cube with white borders" is official, all sorts of madness is possible...
<multivac>
[WIKIPEDIA] No Logo | "No Logo: Taking Aim at the Brand Bullies is a book by the Canadian author Naomi Klein. First published by Knopf Canada and Picador in December 1999, shortly after the 1999 WTO Ministerial Conference protests in Seattle had generated media attention around such issues, it became one of the most influential..."
<lgierth>
yeah my flatmate has that one here
<lgierth>
i have her "shock therapy"
acidhax has joined #ipfs
NeoTeo has joined #ipfs
acidhax has quit [Read error: Connection reset by peer]
<M-matthew1>
not sure what naomi would have to say about that...
* M-matthew1
drinks
NeoTeo has quit [Quit: Teo is off. Bye :)]
NeoTeo has joined #ipfs
<gamemanj>
the question is if they run a brewery, or if they just rebrand stuff so they don't advertise anyone's brand.
cemerick has quit [Ping timeout: 252 seconds]
atrapado has joined #ipfs
rendar has quit [Ping timeout: 240 seconds]
<ansuz>
lol
<ansuz>
when you affiliate with adbusters, you can expect to have your brand busted
besenwesen has quit [Ping timeout: 260 seconds]
besenwesen has joined #ipfs
besenwesen has quit [Changing host]
besenwesen has joined #ipfs
rendar has joined #ipfs
qgnox has quit [Quit: WeeChat 1.3]
<jbenet>
ok that took a million years. sorry. lgierth whyrusleeping if you're around and want to do infra chat, let's. else can move to node-ipfs with daviddias
<victorbjelkholm>
sorry guys, gonna have to leave. Would love to participate in the next call about infrastructure, regarding openipfs, but gonna have to disappear for 30-60 minutes...
<kyledrake>
jbenet I'd like to listen in on all the hangouts today
<lgierth>
jbenet: i'm here
mildred has joined #ipfs
<lgierth>
let me just switch to screen on the laptop
<jbenet>
kyledrake: always welcome, usually just ping the most interested people, but otherwise whatever works :)
<victorbjelkholm>
thanks for the opportunity to talk with you guys, getting inspiration from you and letting me show and talk about openipfs. It's awesome!
MrChico has joined #ipfs
<MrChico>
hey
<MrChico>
I am having some difficulties with this
<achin>
what's up?
<MrChico>
I run ipfs daemon
<MrChico>
where do I go from there
<achin>
leave ipfs daemon running in your terminal, and then open a new terminal
<achin>
and in the new terminal, you can do things like "ipfs add" or "ipfs get"
<MrChico>
new terminal window doesn't understand ipfs commands anymore
<MrChico>
zsh: command not found: ipfs
<achin>
ipfs was installed into a directory that's not in your default $PATH
<achin>
to start, try running $GOPATH/bin/ipfs
acidhax has joined #ipfs
<MrChico>
hmm, doesn't work
<MrChico>
crap, where did I install it then
<achin>
i should back up, perhaps. $GOPATH would have been manually set if you installed ipfs via "go get"
<achin>
but if you didn't (like if you just downloaded a binary), then ipfs lives whereever you put it
<MrChico>
I installed via go get
<stick`>
i have this in my ~/.bashrc
<MrChico>
but maybe I screwed up configuring my gopath
<stick`>
export GOPATH=$HOME/.go
<stick`>
export GOBIN=$GOPATH/bin
<stick`>
export PATH=$PATH:$GOBIN
wtbrk has quit [Quit: Leaving]
<stick`>
if this (or sth similar) is applied before "go get" then it should work
<achin>
stick`: yes, but MrChico might be totally different than that :)
<achin>
MrChico: are you running on linux?
<MrChico>
echo $GOPATH doesn't return anything
<MrChico>
yea
<stick`>
achin: not really
<achin>
MrChico: use ctrl+c to stop your IPFS daemon, and then "echo $GOPATH" to see where you installed it to
<achin>
stick`: ok, well your envvars are nothing at all like my envvars
<MrChico>
in my home folder
<MrChico>
ah, but now I remember this problem
<achin>
you mean that $GOPATH is equal to $HOME ?
<MrChico>
yea
<achin>
ok, so then $HOME/ipfs should work in your second terminal
<MrChico>
but whenever I set the gopath variable it is only for that terminal window
<jbenet>
kyledrake: would be good to sync up about blog posts and so on-- i kinda need a break from the screen to get some food, but if your timing is better now can talk now instead of later
<achin>
oops, sorry, i mean $HOME/bin/ipfs
<MrChico>
I am not able to set it globally
<jbenet>
thanks everyone who participated in the hangouts
<MrChico>
echo $GOPATH returns $HOME in one window, nothing in the other
<kyledrake>
jbenet im pretty time flexible this week, so we can do this tonight or tomorrow
<achin>
that's right. you can add something to your ~/.zshrc or ~/bash.rc or whatever you have, so that every new shell gets started with your custon ENV vars
<jbenet>
kyledrake: would 7:30 or 8pm work for you?
<jbenet>
errr that's EST
<kyledrake>
jbenet sounds good
<achin>
MrChico: when you run "export FOO=bar", the envvar $FOO is only set in that shell
<MrChico>
So, if I add something like export $GOPATH = /home/
<MrChico>
I should be better
<MrChico>
off
<MrChico>
to ./zshrc
acidhax has quit [Remote host closed the connection]
<achin>
that's the normal way of environment variables. to get them to be automatically set, 1 common way is to add them to .zshrc, .bashrc, .tcshrc (whatever your particular shell uses)
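For MrChico's setup above, where GOPATH is the home directory, the zsh version would look something like this:
    cat >> ~/.zshrc <<'EOF'
    export GOPATH=$HOME
    export PATH=$PATH:$GOPATH/bin
    EOF
    # open a new terminal (or run `source ~/.zshrc`) and `ipfs` should be found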
<achin>
i'm heading home from work now, and will be back soon. MrChico hang around if you still have questions, others will be happy to help
<MrChico>
thanks a lot achin!
<MrChico>
Think I fixed it
hoboprimate has quit [Ping timeout: 272 seconds]
hoboprimate has joined #ipfs
MrChico has quit [Quit: Leaving.]
MrChico has joined #ipfs
edrex has quit [Read error: Connection reset by peer]
edrex has joined #ipfs
devbug has quit [Ping timeout: 244 seconds]
acidhax has joined #ipfs
acidhax_ has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
acidhax has quit [Ping timeout: 268 seconds]
martinkl_ has joined #ipfs
acidhax_ has quit [Remote host closed the connection]
<mungojelly_>
ooh could we make really short unicode hashes with teddy bears in them
<MrChico>
still there for me
<achin>
ah, the output of "ipfs swarm peers" are peerIDs
<achin>
which are indistinguishable from hashes that have content
<MrChico>
oh
<achin>
so in this case, you cannot use "ipfs ls" on this hash
<achin>
this isn't very obvious, and this trips up just about everyone who tries ipfs for the first time
<MrChico>
so they are just bouncing the other files around?
<achin>
what do you mean?
<achin>
(here's a hash you can try to use with "ipfs cat" : QmS4QJtQ9ZkfxTXUMvVusAGzMADQTJcrxZSwk2yLvjDqt8 )
<MrChico>
the peers that don't have any content in them are just relays that help share the data of peers with content?
<jbenet>
Hey everyone -- can you all take a moment to voice strong opposition to CISA? It's a big deal -- http://www.decidethefuture.org/ -- Tweets, fb posts, in particular Calls to Congress (sp Senators) are important.
cemerick has joined #ipfs
martinkl_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<achin>
MrChico: peers do indeed have content. but you generally do not know what pieces of content a given peer has
<achin>
so you can't ask a peer "what files do you have". you are supposed to know the hash of something that's interesting, and then ask the network "which peer has this hash", and then ask that peer "give me content for hash QmABC"
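A short illustration of that distinction, using the example hash achin posted above; <directory-hash> is a placeholder for a hash you know is a directory:
    ipfs swarm peers      # connected peers; the trailing component is a peer ID, not content
    ipfs cat QmS4QJtQ9ZkfxTXUMvVusAGzMADQTJcrxZSwk2yLvjDqt8   # a content hash: fetch its data
    ipfs ls <directory-hash>   # only meaningful for hashes that have links, e.g. directories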
<MrChico>
haha, nice hello MrChico
<achin>
:)
<MrChico>
so I might have been trying to ask a lot of pictures and text files what folders they contain?
<achin>
sorry, i didn't quite understand the question. if you have a hash of a directory, you can do "ipfs ls hash" to see what files it contains
<MrChico>
Yeah. I think I get my problem. I was just doing ipfs ls on about everything, having no idea what was directories and what not
metaf5 has quit [Quit: WeeChat 1.3]
<victorbjelkholm>
mungojelly_, like QmWSUHWEhmXb8WmCXEiQjhBen5iuy5aBq6QkKTSwWMMFy9 ?
acidhax has quit [Remote host closed the connection]
<achin>
"ipfs ls" actually just shows you what objects are linked to from a given hash
<achin>
a directory object has links to other files and directories
<achin>
but a big file might be chunked into several blocks, and so it would have several links, too
acidhax has joined #ipfs
acidhax_ has joined #ipfs
<achin>
(here is an example of such a file: QmQA3zU4HPtnESMBwzinPQfnFJRCov3ioEd6zeheCjcNe1 )
cemerick has quit [Ping timeout: 260 seconds]
<padz>
what determines when and how a file is chunked
ipfs_intern has joined #ipfs
MrChico has left #ipfs [#ipfs]
<achin>
at the moment, "ipfs add" will chunk any file lager than 250kb into 250kb chunks
acidhax has quit [Ping timeout: 265 seconds]
<ion>
padz: ipfs add --chunker lets you select an algorithm, there is a default one that splits files into about 250 kB chunks and groups them into about 50 MB objects IIRC.
<ion>
I mean, objects linking to 50-ish MB worth of chunks
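A sketch tying those two together; the size value shown is the roughly 256 KiB default mentioned above, assuming the size-<bytes> chunker syntax:
    ipfs add --chunker size-262144 bigfile   # explicit fixed-size chunking
    ipfs ls QmQA3zU4HPtnESMBwzinPQfnFJRCov3ioEd6zeheCjcNe1   # the chunked-file example above: each link is one chunk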
<padz>
what about really big files (hundred gb)?
<ion>
I don’t know whether they get a deeper tree, i haven’t tried or read the code.
<padz>
i imagine objects would be able to point to more objects