<achin>
(i'm having a hard time getting pluto to load a thing)
<lgierth>
achin: meh -- can i try?
kahiru has quit [Remote host closed the connection]
<Kubuxu>
lgierth: have you figured out how to make v4 and v3 on the same gateway? Neither Nginx nor HAProxy can do it AFAIK
<The_8472>
streaming http parser of your choice -> rewrite to HEAD request -> dispatch to both -> replay original header to the first one that responds, pipe through the rest?
<multivac>
[WIKIPEDIA] Ralph Merkle#Awards | "Ralph C. Merkle (born February 2, 1952) is a computer scientist. He is one of the inventors of public key cryptography, the inventor of cryptographic hashing, and more recently a researcher and speaker on molecular nanotechnology and cryonics...."
<Kubuxu>
I have now over 400 plain pinned objects.
<lgierth>
2012 National Cyber Security Hall of Fame inductee
<ansuz>
'cyber'
<ansuz>
heh
<ansuz>
only govs ever call anything 'cyber', it seems
<ansuz>
unless ofc that hall of fame isn't related to any gov
<ansuz>
in which case I'll shut up
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
<multivac>
[WIKIPEDIA] File format#Magic number | "A file format is a standard way that information is encoded for storage in a computer file. It specifies how bits are used to encode information in a digital storage medium. File formats may be either proprietary or free and may be either unpublished or open.Some file formats are designed for very particular..."
voxelot has quit [Ping timeout: 240 seconds]
<Shibe>
Ape: what about size
<Ape>
Use 'ipfs file ls --enc json <file hash>' and look for "Size"
<Ape>
If it's a big file it should only get one block and know the size for the whole file from there
<Shibe>
what unit is the size in
<Ape>
Bytes, I think
<Shibe>
ok
<Ape>
If you want to test this with real data, try this:
<Ape>
ipfs file ls --enc json QmYgXEfjsLbPvVKrrD4Hf6QvXYRPRjH5XFGajDqtxBnD4W
<Ape>
That hash is for the Sintel MP4 video (open source movie)
<Ape>
Do you get the file size faster than downloading the whole file would take?
<Ape>
Also, you may try the file type detection
hoony has quit [Remote host closed the connection]
<Ape>
I tested this a bit, and at least 'file' command on mounted ipfs is working nicely
<Ape>
It returns quickly and says that it's an mp4
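Ape's size trick above boils down to reading a "Size" field out of the JSON reply instead of fetching the file body. A minimal sketch of that parsing step, where the exact JSON layout ("Objects" keyed by hash, "Size" in bytes) is an assumption based on this discussion rather than a documented schema, and "QmExampleHash" is a placeholder:

```python
import json

# Hypothetical reply in the shape 'ipfs file ls --enc json <hash>' was
# described to return above; the layout is an assumption, not a spec.
sample = """
{
  "Objects": {
    "QmExampleHash": {"Hash": "QmExampleHash", "Size": 129241752,
                      "Type": "File", "Links": []}
  }
}
"""

def file_size(ls_json):
    """Return the byte size of the first object in an 'ipfs file ls' reply."""
    data = json.loads(ls_json)
    obj = next(iter(data["Objects"].values()))
    return obj["Size"]

print(file_size(sample))  # size in bytes, without downloading the file
```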
pfraze has quit [Remote host closed the connection]
compleatang has quit [Quit: Leaving.]
fsl has joined #ipfs
jatb has quit [Read error: Connection reset by peer]
compleatang has joined #ipfs
Encrypt has joined #ipfs
hoony has joined #ipfs
VegemiteToast_ has quit [Quit: Leaving]
fsl has left #ipfs [#ipfs]
zorglub27 has joined #ipfs
rendar has joined #ipfs
mildred has joined #ipfs
pfraze has joined #ipfs
M-fil has quit [Quit: node-irc says goodbye]
M-fil has joined #ipfs
pfraze has quit [Remote host closed the connection]
<daviddias>
ansuz: pewpew
dignifiedquire has joined #ipfs
hartor has quit [Quit: hartor]
pfraze has joined #ipfs
<Codebird>
The weird porn gallery site turns out to be pretty useful for assessing IPFS's performance, because it loads a ton of images. I really ought to make something like it, but a bit less... that.
<Codebird>
Maybe a bit of javascript that loads ten images and times how long they took to fetch.
pfraze has quit [Ping timeout: 260 seconds]
sivachandran has joined #ipfs
Kubuxu has quit [Ping timeout: 245 seconds]
<daviddias>
ansuz: around? Me and Jeromy are on the train to Paris St Lazare :)
devbug has joined #ipfs
zorglub27 has quit [Quit: zorglub27]
hoony has quit [Remote host closed the connection]
devbug_ has joined #ipfs
devbug has quit [Ping timeout: 240 seconds]
baselab has joined #ipfs
<computerfreak>
anyone combined ipfs with onename already?
e-lima has joined #ipfs
anticore has joined #ipfs
<patagonicus>
Looks a bit like Keybase but with BitCoin instead of PGP. Or is it something different?
<chriscool>
daviddias: about coffee shops, did you try Costa Coffee?
ianopolous has joined #ipfs
<whyrusleeping>
chriscool: we didnt go there yet, do they have spaces to sit and hack?
jaboja has quit [Ping timeout: 256 seconds]
<whyrusleeping>
jbenet: every time i have to do networking stuff i end up shaking my head and asking "why is this not exported?" https://golang.org/src/net/net.go#L100
jaboja has joined #ipfs
<dignifiedquire>
whyrusleeping: did you see my message about dist.json being empty?
<jbenet>
whyrusleeping: im mostly always sad about unexporteds in go
<chriscool>
whyrusleeping: I haven't tried Costa Coffee yet but someone told me it is the best place to hack and there is one 63 boulevard Haussmann which is near the Saint Lazare station
<whyrusleeping>
yeah, thats quite close
<whyrusleeping>
dignifiedquire: huh, okay
<whyrusleeping>
gimme a sec
<dignifiedquire>
whyrusleeping: by the way finally ordered the router you recommended :) looking forward to having all my connections finally go through a vpn tomorrow
<whyrusleeping>
dignifiedquire: that wndr4300?
<dignifiedquire>
yep
<whyrusleeping>
sweet
<whyrusleeping>
it worked great for me, but i was never able to ask it to ever go faster than 16mbit
<whyrusleeping>
because my isp didnt ever give me more than that
<whyrusleeping>
lol
<dignifiedquire>
well my isp gives me 100mbit so lets see if it can hold up
<whyrusleeping>
i'm rooting for it!
baselab has quit [Quit: Leaving]
cryptix has joined #ipfs
cryptix has quit [Client Quit]
zz_r04r is now known as r04r
<sivachandran>
Using 'ipfs refs local' I can see locally cached object hashes. But is there a way to find out their filenames?
prim1 has quit [Ping timeout: 272 seconds]
prim1 has joined #ipfs
reit has quit [Read error: Connection reset by peer]
Oatmeal has quit [Ping timeout: 255 seconds]
cow_2001 has joined #ipfs
devbug has quit [Ping timeout: 250 seconds]
<cow_2001>
i am trying to read the code in /cmd/ipfs/
<cow_2001>
is that where i should begin?
<cow_2001>
i wanna do this one easy labelled issue
<sivachandran>
whyrusleeping: can you point me to the page about the metadata?
<sivachandran>
whyrusleeping: or what is the ipfs command to attach metadata?
reit has joined #ipfs
<dignifiedquire>
whyrusleeping: also when you meet Jerome, talk to him about embedded IPFS :)
sivachandran has quit [Remote host closed the connection]
Oatmeal has joined #ipfs
Kubuxu has joined #ipfs
Tuned has joined #ipfs
<Tuned>
Hi people
<Tuned>
I am making myself familiar with the Py client
<Codebird>
I am unimpressed with the time IPFS can take to retrieve some things. If I view a webpage, it is awkward having to wait over a minute for images to load.
<Codebird>
I'm sure performance can be tweaked somehow though.
grahamperrin has joined #ipfs
<cow_2001>
deltab: cool! thanks!
<Tuned>
Anybody working on the Python stack?
corvinux has joined #ipfs
<jbenet>
Codebird: that's pretty bad-- usually way, way faster. more info on your setup/what you're doing will be useful. (and yes lots of perf improvements coming)
grahamperrin has quit [Ping timeout: 260 seconds]
Tv` has joined #ipfs
grahamperrin has joined #ipfs
grahamperrin has left #ipfs [#ipfs]
Kubuxu has quit [Quit: WeeChat 1.3]
Kubuxu has joined #ipfs
jaboja has quit [Ping timeout: 240 seconds]
grahamperrin has joined #ipfs
<daviddias>
chriscool: we (+ whyrusleeping and ansuz ) are hacking at Jérôme (gorhgorh) place/workshop, wanna join too? :)
<yangwao>
nice
<yangwao>
daviddias: send my regards to gorhgorh :D
<chriscool>
daviddias: ok I may join. What is the address?
yrjo has quit [Quit: reboot]
<daviddias>
yangwao: regards sent, he sents you a hug back :)
<yangwao>
daviddias: send him hug.gif, yay :)
fsl has joined #ipfs
pfraze has joined #ipfs
fsl is now known as fslfsl
fslfsl is now known as fsl
corvinux has quit [Read error: Connection reset by peer]
corvinux has joined #ipfs
<whyrusleeping>
richardlitt: where is the weekly thing posted?
corvinux has quit [Client Quit]
corvinux has joined #ipfs
e-lima has quit [Ping timeout: 265 seconds]
zorglub27 has joined #ipfs
<whyrusleeping>
dignifiedquire: alright, pushed more distributions stuff
grahamperrin has left #ipfs ["Leaving"]
janosch2 has joined #ipfs
janosch2 is now known as jgraef
anticore has quit [Quit: bye]
jgraef has quit [Quit: Leaving]
jgraef has joined #ipfs
rombou has joined #ipfs
<jgraef>
Are *-ipfs-api supposed to only expose the RPC interface to a language, or could we include higher-level stuff?
<jgraef>
I'm thinking about rewriting python-ipfs-api, fixing it, but also adding high-level abstractions.
<jgraef>
I already have written a nice wrapper for the merkledag :)
<The_8472>
those abstractions should probably be handled by a consumer of the api
<jgraef>
Yeah, I thought so. It's cleaner. But python-ipfs-api needs fixing anyway.
<patagonicus>
Fixing it? Are there any serious bugs right now? (I'm going to need to write some scripts and I wanted to use python-ipfs-api for that)
<jgraef>
Many commands don't work or don't exist. At least for the object API
jaboja has joined #ipfs
<jgraef>
E.g. object_new didn't exist. object_patch has no meaningful way of using it.
<patagonicus>
Hmm. That's obviously not good.
<jgraef>
Particulary working with objects that have binary content is very hard.
<jgraef>
Also I think cat gives the whole blob, whereas it would be nicer to return a file-like object. Stuff like this
fsl has left #ipfs [#ipfs]
<ipfsbot>
[go-ipfs] whyrusleeping pushed 1 new commit to ipfs-volume: http://git.io/vuAVu
<ipfsbot>
go-ipfs/ipfs-volume 648fe54 Jeromy: force use of ipv4 in test...
fsl has joined #ipfs
<jgraef>
patagonicus, what scripts do you want to write? I worked around most stuff for objects.
anticore has joined #ipfs
<jgraef>
The developers of python-ipfs-api also don't seem to work on it anymore. Last commit is from October. I guess the RPC api changed since then.
<patagonicus>
jgraef: I haven't thought too much about it, but basically I have an IPFS unixfs dir and a dir on disk and I want to create a new unixfs dir that corresponds to the disk. And I'd like to do that without hashing files that are unchanged (so, same name, same size => just copy the hash).
<patagonicus>
I'm currently using the ipfs files stuff, a few lines of bash and (too much) manual work, but it's very slow.
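The "same name, same size => copy the hash" shortcut patagonicus describes could be sketched like this. The manifest dict stands in for a previously published unixfs dir; everything here (names, hashes) is hypothetical, and note the heuristic can miss edits that leave a file's size unchanged:

```python
import os
import tempfile

def plan_sync(manifest, local_dir):
    """Split local files into (hashes we can reuse, names needing 'ipfs add')."""
    reused, to_add = {}, []
    for name in sorted(os.listdir(local_dir)):
        size = os.path.getsize(os.path.join(local_dir, name))
        prev = manifest.get(name)
        if prev and prev[0] == size:
            reused[name] = prev[1]   # same name + size: copy the old hash
        else:
            to_add.append(name)      # new or changed: must be re-added
    return reused, to_add

with tempfile.TemporaryDirectory() as d:
    for name, body in [("a.txt", b"unchanged"), ("b.txt", b"grew bigger")]:
        with open(os.path.join(d, name), "wb") as f:
            f.write(body)
    # Hypothetical manifest from the last published dir: name -> (size, hash)
    manifest = {"a.txt": (9, "QmOldHashA"), "b.txt": (4, "QmOldHashB")}
    reused, to_add = plan_sync(manifest, d)
    print(reused, to_add)
```

Only the `to_add` names would then be piped through `ipfs add`, which is the expensive step being avoided.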
<patagonicus>
But that might also be the disk and the filesystem, I don't know.
<patagonicus>
Also RAM, as the machine is doing more than it should and doesn't have a lot of free space for caches. :)
<jgraef>
Are you re-adding all files?
<patagonicus>
No, but I'm spawning ipfs files stat for every file. But right now I'm still in the process of adding the files for the first time.
<jgraef>
That could work with the python api. But there should be a solution in go-ipfs for that.
<jgraef>
I mean, for having a mutable directory mirrored in IPFS
<patagonicus>
I've taken a quick look at go-ipfs-api, seems to have all I need (basically List, Add, AddLink and NewObject). Would be a good reason to learn some go.
<patagonicus>
Yeah, that would be nice.
<patagonicus>
There's fuse mount + /ipns/local/, but fuse isn't that fast to begin with. And I don't want to publish a half-updated directory.
<jgraef>
Where is the bottleneck with fuse?
<jgraef>
I think just adding new objects to ipfs is a little slow?
<patagonicus>
Don't know, haven't done any profiling.
<Tuned>
jgraef: patagonicus: running the unittest of py-client for the first time. Hope to give more feedback later
rombou has quit [Quit: Leaving.]
cemerick has joined #ipfs
rombou has joined #ipfs
<Tuned>
Maybe this is a superficial guess for now... but porting everything to py35 natively could be a nice start. Async support will be vital in the near future
<jgraef>
They only have tests for 4 commands
<ipfsbot>
[go-ipfs] whyrusleeping force-pushed ipfs-volume from 648fe54 to ec1a886: http://git.io/v0cE1
<ipfsbot>
go-ipfs/ipfs-volume ec1a886 Jeromy: force use of ipv4 in test...
<jgraef>
Tuned, what do you exactly mean by that? That python-ipfs-api would be better py3-only?
<Tuned>
Ok, noted. Trying to get the whole pic
<jgraef>
I take that about tests back. I don't even understand what's going on there. Nothing is testing Client's methods.
<jgraef>
I think the tests are out-dated
<Tuned>
Yep, for the client I think so. The project is so future-oriented that giving support to Py2 is only a complication i believe. For server-side stuff the thing is much different obviously.
<Tuned>
jgraef: there are fakes in /functional
<jgraef>
Tuned, What the heck is going on there^^
<Tuned>
... only for .add ()
<jgraef>
Okay, I see
<jgraef>
I don't like the way they implemented add, cat, etc anyway. Those functions take file-names. When I do add, I want to give it a file-like object.
<Tuned>
I try to do some characterization test for the legacy
<jgraef>
And py3-only is a good choice I guess. But then I can just write my own api, instead of working on the fork.
<Tuned>
They accept both as far as I have read... am i wrong?
emery has left #ipfs ["Leaving"]
<jgraef>
I think cat doesn't for output
<jgraef>
But for add you're right.
<Tuned>
Ok, noted
<jgraef>
Well I was wrong. cat returns data. Nevermind
<Tuned>
If you want to rebase the client with py35 i am on it. Already started a local vm for that
<jgraef>
Tuned, So your point for using python3 is doing stuff asynchronously? Can't I do that with python2 too?
<Tuned>
A lot of stuff is already done btw... we could just refactor
<jgraef>
To be clear, you are talking about the API? Not the full node.
<Tuned>
Only the client
<jgraef>
And you already started an implementation? Can I have a look at the code?
<Tuned>
There is plenty of new stuff and more coming for Py3. And keeping it unused because of a back-porting approach brings more pain than advantages i think
<jgraef>
You are right. I also like writing code for py3 only anyway.
<Tuned>
I have just given the first sights now, told you.
<Tuned>
They are just first impressions
Peer3Peer has joined #ipfs
<Tuned>
Ok. I start with TDD and easy refactoring. See you later.
<jgraef>
Tuned, okay. Tell me when you have something I can look at.
Peer3Peer has quit [Client Quit]
neurosis12 has quit [Remote host closed the connection]
<Tuned>
jgraef ook
pfraze has quit [Remote host closed the connection]
cemerick has quit [Ping timeout: 260 seconds]
cemerick has joined #ipfs
cemerick has quit [Ping timeout: 250 seconds]
<lgierth>
M-davidar: meh -- which base data?
voxelot has joined #ipfs
chriscool has quit [Quit: Leaving.]
Not_ has joined #ipfs
zorglub27 has quit [Read error: Connection reset by peer]
maxlath has joined #ipfs
maxlath is now known as zorglub27
voxelot has quit [Ping timeout: 250 seconds]
corv has joined #ipfs
corvinux has quit [Ping timeout: 260 seconds]
<Tuned>
jgraef is there any documentation of the RPC API running on the local server at port 5001?
<jgraef>
Tuned, Not really, but the commands are the same as for the ipfs shell command. There is some pattern...
<jgraef>
Subcommands build a path, e.g. ipfs object get translates to /api/v0/object/get
<jgraef>
arguments are passed as ?arg=<arg1>&arg=<arg2> ... and options (e.g. --encoding bla) is &encoding=bla
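The URL scheme jgraef lays out (subcommands become path segments, positional args repeat as `arg=`, options become plain query parameters) can be sketched as a small builder. The default host/port and the exact parameter handling are assumptions for illustration:

```python
from urllib.parse import urlencode

def api_url(subcommands, args=(), host="http://127.0.0.1:5001", **options):
    """Build a go-ipfs RPC URL per the pattern described above (a sketch)."""
    path = "/api/v0/" + "/".join(subcommands)
    # positional arguments repeat as 'arg'; options are ordinary key=value
    query = [("arg", a) for a in args] + sorted(options.items())
    return host + path + ("?" + urlencode(query) if query else "")

# e.g. 'ipfs object get --encoding json <hash>' becomes:
url = api_url(["object", "get"], ["QmSomeHash"], encoding="json")
print(url)
```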
chriscool has joined #ipfs
<jgraef>
And look here: https://ipfs.io/docs/api/ But I didn't find this very useful as it doesn't go into detail about specific commands.
<jgraef>
For the object patch commands (or maybe other complex commands), this issue was helpful https://github.com/ipfs/go-ipfs/issues/2070. Didn't get any of them to work though.
<jgraef>
BTW, there is no working protobuf2 for python3. So we would run into problems there. E.g. when fiddling with unixfs. But also object get/put expects either json, xml or protobuf2. And with json and xml I can't pass binary data.
<jgraef>
I'm working on a *very* minimal encoder/decoder for protobuf2 in python3.
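A taste of what that minimal protobuf2 codec has to handle: the wire format's base-128 varint, which underlies every field header and length prefix. This is a sketch of the varint piece only, not jgraef's actual code; a real codec also needs field tags and wire types:

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a varint from the start of 'data'."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            break
        shift += 7
    return result

print(encode_varint(300).hex())           # 300 encodes as ac 02
print(decode_varint(encode_varint(300)))  # round-trips back to 300
```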
<lgierth>
daviddias: dignifiedquire: let me know when registry-mirror and stackexchange are finished, i need to move the /homes to the raid
<lgierth>
not sure how i missed that during provisioning
<lgierth>
it's not super-urgent, just let me know
libman has joined #ipfs
<dignifiedquire>
lgierth: jbenet: is there a node we could start using to archive wikipedia (5TB - 40TB raw data)? I think it would be really good if we could get that running, as the current mirror state for wikipedia is pretty bad
<lgierth>
any more specific size? :)
<lgierth>
5TB is okay, 40TB needs more hardware
<The_8472>
lgierth, just symlink ~/.ipfs to your storage?
<dignifiedquire>
well ideally the upper limit but we could start with the 5TB package
<The_8472>
of course having /home on raid is always nice
<achin>
as it so happens, i'm working as we speak on some tools/experiments now to try to mirror wikipedia into ipfs
<lgierth>
The_8472: i don't want users to fill up the small rootfs, that's all
<whyrusleeping>
dignifiedquire: wait a month, i'll build a giant storage node
<lgierth>
dignifiedquire: also note that you need to double that number in the worst case
<dignifiedquire>
achin cool
<lgierth>
raw storage + ipfs storage
<dignifiedquire>
what part do you plan to mirror?
<achin>
yesterday i wrote a tool to extract the wikipedia ZIM archives into a directory. now i'm trying to export the files directly into $IPFS_PATH/blocks
<lgierth>
ideally we could do it through FUSE but that's not good enough at the moment i think, and lacks metadata which is required for rsync-like mirroring
<lgierth>
achin: interesting, bypassing ipfs add?
<achin>
yeah
<achin>
i'm hoping it'll be much faster
<dignifiedquire>
achin is that still needed on latest 0.4?
<achin>
i'm not sure yet
<achin>
there is also a problem that i'm not sure how to solve, in that the wikipedia hierarchies are very flat (if you extract from the zim archives)
<achin>
you'll end up with a massive PBNode containing links to every article
<achin>
i've mirrored all of the wikispecies site (which is small enough that i don't have to wait days to do stuff with it)
<achin>
the blockfile that contains the primary PBNode with every link is itself about 36M
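One generic workaround for the flat-hierarchy problem achin describes is to bucket article names into sub-directories keyed by a short prefix of each name's hash, so no single PBNode holds a link per article. A sketch of just the bucketing step, with made-up names:

```python
import hashlib
from collections import defaultdict

def shard(names, width=2):
    """Group names into buckets by a hex prefix of their sha256 (a sketch)."""
    buckets = defaultdict(list)
    for name in names:
        # two hex chars => at most 256 buckets, each a small sub-directory
        prefix = hashlib.sha256(name.encode()).hexdigest()[:width]
        buckets[prefix].append(name)
    return dict(buckets)

buckets = shard(["Lion", "Tiger", "Wolf", "Lynx"])
print({k: sorted(v) for k, v in sorted(buckets.items())})
```

Each bucket would become its own intermediate PBNode, keeping every individual node small at the cost of one extra path hop per lookup.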
Matoro has quit [Remote host closed the connection]
Matoro has joined #ipfs
Matoro has quit [Max SendQ exceeded]
<whyrusleeping>
dignifiedquire: i can make a 60TB box pretty easily
<lgierth>
with gbit connectivity?
<lgierth>
++
<M-mubot>
man i'm glad ipfs isnt written in c has 2 points
<whyrusleeping>
lgierth: yes indeed :D
Matoro has joined #ipfs
ygrek has joined #ipfs
<lgierth>
thank you mubot
<jgraef>
Tuned, that's why mixing JSON/XML with binary data doesn't work. It's not a good idea anyway.
devbug has joined #ipfs
<achin>
wobot
<The_8472>
what is an "ASCII compatible binary encoding"?
<achin>
dumping wikispecies to disk only took like 7 minutes. dumping to a PBNode is taking way longer. i wonder if this is just the protocol buf stuff taking a while?
Matoro_ has joined #ipfs
devbug has quit [Ping timeout: 240 seconds]
<whyrusleeping>
achin: what do you mean by 'dumping to a PBNode' ?
Matoro has quit [Ping timeout: 256 seconds]
<lgierth>
whyrusleeping: he's bypassing ipfs add and instead writing blocks directly in the blockstore directory
<whyrusleeping>
huh
cjd has joined #ipfs
<cjd>
Hi, can I use IPFS to host files anonymously because the govenrment wants to take away my freedom
<achin>
also, maybe just slow disks, or excessive syncing. i haven't profiled yet
cjd was banned on #ipfs by whyrusleeping [*!~user@2c0f:f930:2:1::]
pinbot has quit [Remote host closed the connection]
pinbot has joined #ipfs
<achin>
cjd: there is not enough anonymity in IPFS
pinbot has quit [Remote host closed the connection]
<whyrusleeping>
lol
pinbot has joined #ipfs
<cjd>
oh maybe it should be over TOR because government hates our freedom
<achin>
(CPU usage isn't anywhere near 100% which is why i suspect disk IO as the bottleneck)
<lgierth>
achin: you can check where the cpu spends most of the time
<lgierth>
if it's in iowait, there's your io bottleneck
<lgierth>
brb
<Kubuxu>
When I download things with wget onto hdd bottleneck is the hdd.
<Kubuxu>
(on dedi)
<Kubuxu>
I think in case of IPFS it might be even worse.
<Tuned>
jgraef what does this imply for pickle?
<achin>
also, i am a dingus for not building a progress bar into this utility
<jgraef>
Tuned, I'm not sure if I understand your question. Pickle encodes strings as utf-8, I think
<jgraef>
But the important thing is, pickle can encode strings and bytes objects. Whereas JSON has no way of encoding bytes objects.
<jgraef>
Why is that important anyway?
<Tuned>
Because if we can't put bin into json or xml we should find an alternative to put data into an HTTP request i think. What alternatives do we have?
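jgraef's point in miniature: the stdlib JSON encoder has no bytes type at all, so raw binary node data can't ride through an `object put/get` round trip as plain JSON without an explicit wrapper. A small demonstration (the payload is arbitrary made-up bytes):

```python
import base64
import json

payload = b"\x00\x89PNG-ish binary data"  # arbitrary non-text bytes

try:
    json.dumps({"Data": payload})
    json_ok = True
except TypeError:
    json_ok = False  # the stdlib encoder refuses bytes outright

# One alternative: an explicit base64 wrapper that both ends agree on.
# protobuf carries bytes natively and avoids the extra step entirely.
encoded = json.dumps({"Data": base64.b64encode(payload).decode("ascii")})
roundtrip = base64.b64decode(json.loads(encoded)["Data"])
print(json_ok, roundtrip == payload)
```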
e-lima has joined #ipfs
libman has quit [Read error: Connection reset by peer]
libman has joined #ipfs
<jgraef>
Tuned, depends what the HTTP endpoint expects. E.g. for object put you can choose input encoding between json, xml or protobuf2. protobuf2 seems to be the best choice.
<dignifiedquire>
achin: how did you download the dumps? I heard the official mirror is pretty slow, What speeds did you see?
<jgraef>
Tuned, also python-ipfs-api has FileCommands, which send data as multipart to the endpoint
<Tuned>
jgraef ok taking a look now at how the envelope is wrapped
<jgraef>
Tuned, as I mentioned earlier. "ipfs object put" also accepts protobuf2. The python-ipfs-api accepts any blob of data, like you would just pipe it into "ipfs object put". Would be better if it takes a dict of links and data and sends it to the endpoint encoded as protobuf2.
<jgraef>
ipfs object get always tries to get the object as json and then decodes it. This doesn't work with binary data in the node.
<jgraef>
*node = object
<Tuned>
jgraef ok so the trip is: client 》encoding 》request 》server 》decoding 》nix , correct?
<Tuned>
...and py3 doesnt have protobuf2. I understood rightly?
<jgraef>
Tuned, nix? But it really depends on the command
<jgraef>
Yes, there is no working protobuf for py3
akkad has quit [Excess Flood]
akkad has joined #ipfs
<Tuned>
Cool thanks. Now i have a clearer look on the repo. I see they are defining an HTTP API, prob I need to wait for the design to be completed.
<dignifiedquire>
jbenet: have you talked to anyone at wikimedia before about running a full mirror on ipfs?
Gaboose has joined #ipfs
m0ns00n has joined #ipfs
<lgierth>
dignifiedquire: not before i move the /homes to the raid
<ion>
sherlock /holmes
rombou has quit [Ping timeout: 245 seconds]
<dignifiedquire>
achin: I can't seem to find any info about how frequently kiwix is updated from the original wikipedia, do you know anything about that?
<dignifiedquire>
lgierth: ok
<dignifiedquire>
lgierth: also i can stop the pinning and resume it if you want to do the move
<richardlitt>
whyrusleeping: it is posted on ipfs/ipfs
<richardlitt>
whyrusleeping, dignifiedquire, daviddias, lgierth, and jbenet: please prepare your endeavors for tomorrow. https://github.com/ipfs/pm/issues/79
<whyrusleeping>
richardlitt: but its like, thursday?
<dignifiedquire>
did you get your days of the week mixed up by all that travelling?
<richardlitt>
That's what happens when we push the week through
<richardlitt>
No, France is just behind in everything
<dignifiedquire>
looool
<achin>
dignifiedquire: i don't either, but i note that the timestamps on most of these archives are from october/november of last year, though the large english wikipedias are from the middle of 2015
<achin>
so not nearly as often as the XML dumps from wikimedia, but much more often than the static HTML dumps from wikimedia (which i don't think have been updated in years)
reit has joined #ipfs
<richardlitt>
anyone need me today?
<C-Keen>
Is there a way to disable the fuse functionality when building ipfs?
<achin>
we all neeeeeed you richardlitt
<richardlitt>
haha
<richardlitt>
ok. Going back to dealing with my crying nephews.
<richardlitt>
See you all tomorrow
ed_t has joined #ipfs
chriscool has quit [Quit: Leaving.]
computerfreak has quit [Quit: Leaving.]
rombou has joined #ipfs
cjd has quit [Ping timeout: 240 seconds]
mildred has quit [Ping timeout: 245 seconds]
reit has quit [Quit: Leaving]
rombou has quit [Read error: Connection reset by peer]
<ipfsbot>
[go-ipfs] RichardLitt created feature/shutdown (+1 new commit): http://git.io/vupUL
<ipfsbot>
go-ipfs/feature/shutdown 8e2c77c Richard Littauer: Edited following @chriscool feedback...
<ipfsbot>
[go-ipfs] RichardLitt opened pull request #2180: Edited following @chriscool feedback (master...feature/shutdown) http://git.io/vupUq
rombou has joined #ipfs
rombou has quit [Read error: Connection reset by peer]
rombou has joined #ipfs
computerfreak has joined #ipfs
joshbuddy has joined #ipfs
mildred has joined #ipfs
rombou has left #ipfs [#ipfs]
anticore has quit [Quit: bye]
corvinux has quit [Remote host closed the connection]
corvinux has joined #ipfs
corvinux has quit [Ping timeout: 240 seconds]
Tuned has quit [Ping timeout: 252 seconds]
corvinux has joined #ipfs
<dignifiedquire>
whyrusleeping: around?
<jgraef>
Tuned, protobuf writer is also working :)
simonv3 has quit [Quit: Connection closed for inactivity]
corvinux has quit [Ping timeout: 246 seconds]
corvinux has joined #ipfs
zorglub27 has joined #ipfs
rombou has joined #ipfs
autoxeny has quit [Ping timeout: 276 seconds]
cemerick has quit [Ping timeout: 272 seconds]
<dignifiedquire>
whyrusleeping: just pushed distributions, run "gulp serve" after you built everything to get a preview, pretty ready now :)
<Kubuxu>
Ape: thank you, I have configured IPv6 only in my containers (where I test v0.4) and previously I was getting only 1 peer, with 5 I feel much better.
<Ape>
I'm currently testing and so spawning new temporary peers to live few minutes each
<Kubuxu>
Ah
<Ape>
Might be that was all 5 :)
<Ape>
But I only have 2 alive at once
<Igel>
ah u change anonymize to Accept
<whyrusleeping>
dignifiedquire: lol, looks good though
<whyrusleeping>
dignifiedquire: the ipfs-update section doesnt have any links
<dignifiedquire>
whyrusleeping: because there are no platforms in the dist.json
<whyrusleeping>
why not?
<whyrusleeping>
#worksOnMyMachine
<Kubuxu>
#worksForME
voxelot has quit [Ping timeout: 260 seconds]
joshbuddy has joined #ipfs
m0ns00n has quit [Quit: undefined]
rendar has quit [Ping timeout: 265 seconds]
leer10 has quit [Ping timeout: 276 seconds]
mildred has quit [Ping timeout: 260 seconds]
pskosinski_ is now known as pskosinski
rendar has joined #ipfs
radiophosp has joined #ipfs
simonv3 has joined #ipfs
corvinux has quit [Remote host closed the connection]
corvinux has joined #ipfs
<Confiks_>
Ape: I saw your issue #146. What are you trying to do there?
<Confiks_>
It uses findprovs on a 'service identifier' block, and then directly communicates with the peers it finds to get a more total view of the network. (and who wants to communicate with who)
ilyaigpetrov has quit [Quit: Connection closed for inactivity]
<dignifiedquire>
whyrusleeping: did you see my note about the script not cleaning up behind itself?
<whyrusleeping>
dignifiedquire: yeah yeah yeah
<dignifiedquire>
:D
<dignifiedquire>
whyrusleeping: but hey we are pretty close to shipping this thing I'd say, besides these small issues just the copy needs some cleanup and then go go go
<whyrusleeping>
:D
<whyrusleeping>
i just need to focus enough to get it done
<whyrusleeping>
and to stop trolling cjdns
<dignifiedquire>
so we will never ship, unless cjdns goes offline for a couple of days
voxelot has joined #ipfs
<cjd>
It's actually just the way we talk in Hyperboria, you think it's trolling but actually it's the local dialect
<dignifiedquire>
voxelot: finally awake? ;)
<whyrusleeping>
free dole
<whyrusleeping>
youre dumb
<daviddias>
#works-in-my-machine
<whyrusleeping>
it works on davids computer
<whyrusleeping>
dignifiedquire: ^
<whyrusleeping>
cjd: lol, i'm pretty sure ircerr took a minute or two to catch on
<cjd>
he must have figured it out pretty quick when I opped you
<whyrusleeping>
hahaha, i didnt even notice
<dignifiedquire>
daviddias: what version of jq do you have?
<whyrusleeping>
he just installed it from brew
pfraze has joined #ipfs
<dignifiedquire>
hmm
<daviddias>
Latest
<whyrusleeping>
cjd: am i allowed to kick racists from the channel?
<daviddias>
But you could change that dep to json tool from Npm
<daviddias>
Should get you going with all you need and no non npm dependencies
<cjd>
There are no rules to moderation, but that goes for you and us too :)
<dignifiedquire>
I installed it from brew as well
<whyrusleeping>
lol
<whyrusleeping>
dignifiedquire: hrm...
<cjd>
In general we try to be very very tolerant until someone crosses the line, then they're out forever.
<whyrusleeping>
try rm -rf releases/ipfs-update
<whyrusleeping>
and make ipfs-update
<whyrusleeping>
cjd: mmkay, sounds good to me
<dignifiedquire>
whyrusleeping: guess what I'm running right this moment