<whyrusleeping>
lgierth: on a scale of 1 to 10, how upset would you be if the peer count metrics in p2p/net/swarm/swarm.go got commented out until i can figure out a nicer way to use prometheus?
<multivac>
daviddias: I'll pass that on when dignifiedquire is around.
<whyrusleeping>
its importing something like 10k lines of code just for that one metric...
<daviddias>
whyrusleeping: what, why?
<daviddias>
dependencies?
<whyrusleeping>
yeah
<lgierth>
whyrusleeping: oooh now i got your remark about deps
<whyrusleeping>
the dependencies for it are not nice
<lgierth>
yeah!
<lgierth>
of course, go ahead
<whyrusleeping>
and like, i can do it and import it
<whyrusleeping>
its just really ugly
<lgierth>
it might actually be better to have a separate little tool which queries the api
<whyrusleeping>
thats a good idea
ygrek_ has joined #ipfs
<whyrusleeping>
aand with that, i think i'm done vendoring libp2p
ygrek has quit [Ping timeout: 250 seconds]
<whyrusleeping>
maybe
<lgierth>
whyrusleeping: feel free to just rip it out instead of commenting out
<whyrusleeping>
i did, lol
<lgierth>
good :)
<lgierth>
coding with backspace only
* zignig
likes the new mfs interface
<whyrusleeping>
zignig: good :)
Guest23423 has quit [Ping timeout: 250 seconds]
<zignig>
although the cp command won't let you add a local file 'ipfs files cp ./TEST.md /stuff/TEST.md'
<zignig>
Error: paths must start with a leading slash
drwasho has left #ipfs [#ipfs]
border0464 has joined #ipfs
Whispery is now known as TheWhisper
<surajravi>
can i give an arbitrary public key to `ipfs name publish pubkey ipfshash`
<surajravi>
or am i going about this wrong?
hoony has joined #ipfs
border0464 has quit [Ping timeout: 265 seconds]
<zignig>
surajravi: at the moment the publishing is only on your node's key.
<surajravi>
zignig: thanks!
<surajravi>
zignig: so basically that is just the "ID" part of `ipfs id` correct?
infinity0 has quit [Remote host closed the connection]
<surajravi>
zignig: also, while i have your ear, i did a gc on my repo via `ipfs repo gc` but `ipfs refs local` still shows me old dirs in there
<surajravi>
zignig: any way to remove stuff that i added via either `ipfs add -r` or `ipfs add` but currently does not exist on my actual harddrive
<zignig>
yep , so /ipns/<your id>/ is what you are publishing.
<surajravi>
yes
<surajravi>
but `ipfs refs local` shows older stuff too
<zignig>
when you do an 'ipfs add -r ' it pins the files to your repo , a gc will not clean it up.
<surajravi>
yes i am reading that now
<zignig>
do an 'ipfs pin ls'
<surajravi>
i get quite a few recursive hashes
<zignig>
those files will not be gc'd
<surajravi>
i see
<surajravi>
then when my dir changes
<surajravi>
and i redo `ipfs add ./`
<surajravi>
the old version still remains
reit has quit [Read error: Connection reset by peer]
<zignig>
yes, it's a feature ;)
<surajravi>
zignig: sorry, took me a while to get to that :-P
<zignig>
ipfs pin rm --help
<zignig>
remove the pins and then gc will clean those files out.
<zignig>
be careful though, you don't want to remove the pin of files that you want to keep.
<lgierth>
whyrusleeping: i should rework bootstrap-cjdns for libp2p right?
<zignig>
on a recent version you can ' ipfs add -r --pin=false <folder> '
<zignig>
and the files will _not_ get pinned.
<lgierth>
that means i can fat-finger the shiny new code! :)
<surajravi>
zignig: NOT????
<zignig>
they will be in the repo , but a gc will clean them out.
<surajravi>
zignig: so if I had done `ipfs add -r --pin=false dir_path` and then `ipfs name publish hash_from_prevcmd` then my stuff will still be accessible via /ipns/my_peer_id even if I gc my repo?
<zignig>
surajravi: no the pin is a negative, ipfs add -r dir , will pin it.
<surajravi>
zignig: hmm i get that negative part
<surajravi>
zignig: ok let me ask a similar question
<surajravi>
zignig: say I go offline (kill my internet connection) will my published stuff still be accessible? (sorry for the noob questions in advance)
<zignig>
surajravi: it will be available only if someone else has downloaded it.
<zignig>
an ipns record will expire after 24 hours if not refreshed.
<M-david>
kandinski: which parts of what?
<surajravi>
zignig: ok thanks, that clears that
<zignig>
one trick I (and others) are using is to have two nodes; one checks the other's name entry and pins it.
<zignig>
if one node goes down all the data is still there.
<lgierth>
whyrusleeping: cool that you split libp2p out with subtree
<lgierth>
i hardly ever meet someone who uses it
hoony has quit [Remote host closed the connection]
guest234234 has joined #ipfs
guest234234 has quit [Read error: Connection reset by peer]
kode54 has quit [Quit: DOH!]
domanic_ has joined #ipfs
kvda has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
domanic has quit [Ping timeout: 260 seconds]
<lgierth>
whyrusleeping: can i get go-libp2p to run its tests at the moment?
<kandinski>
M-david: parts of the IPFS project
infinity0 has joined #ipfs
surajravi has quit [Quit: Leaving...]
reit has joined #ipfs
ygrek_ has quit [Ping timeout: 272 seconds]
<zignig>
whyrusleeping: mfs is missing two commands (imho)
kvda has joined #ipfs
<zignig>
ipfs files sethash <path> hash - stick an existing hash to the mfs
gjp4__ has left #ipfs ["Leaving"]
<zignig>
and ipfs files gethash <path> - files stat covers it but gives you other info
fleeky has quit [Remote host closed the connection]
fleeky has joined #ipfs
kbtombul has joined #ipfs
<whyrusleeping>
cryptix: :D
kbtombul has quit [Ping timeout: 240 seconds]
kbtombul has joined #ipfs
timgws has quit [Quit: My MacBook Pro has gone to sleep. ZZZzzz…]
domanic_ has joined #ipfs
rombou has joined #ipfs
jabberwocky has quit [Remote host closed the connection]
Stskeepz is now known as Stskeeps
Stskeeps has quit [Quit: Reconnecting]
Stskeeps has joined #ipfs
zz_r04r is now known as r04r
rombou has quit [Quit: Leaving.]
s_kunk has quit [Ping timeout: 240 seconds]
kvda has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
domanic_ has quit [Ping timeout: 265 seconds]
s_kunk has joined #ipfs
rendar has joined #ipfs
guest234234 has joined #ipfs
Qwertie has quit [Quit: Cya o/]
Qwertie has joined #ipfs
jokoon has joined #ipfs
compleatang has joined #ipfs
<jokoon>
hello
<jokoon>
How many online nodes are there at the moment ?
<jokoon>
on average
guest234234 has quit [Read error: Connection reset by peer]
jabberwocky has joined #ipfs
jhulten has quit [Ping timeout: 250 seconds]
<haadcode>
morning
<jokoon>
hi
chriscool has quit [Ping timeout: 272 seconds]
chriscool has joined #ipfs
hoony has joined #ipfs
nicolagreco has quit [Quit: nicolagreco]
dignifiedquire_ has joined #ipfs
<dignifiedquire_>
good morning everyone
Qwertie has quit [Quit: Cya o/]
Qwertie has joined #ipfs
<jokoon>
I was also wondering, does IPFS allow redundancy ? For example, let's say I want to build a p2p platform of file hosting, a little like bitsync, except with only one directory
<multivac>
[REDDIT] Is it possible to build a p2p file hosting service that runs on IPFS nodes? Meaning it would require redundancy and be highly fault tolerant. What do you think ? (self.ipfs) | 1 points (100.0%) | 0 comments | Posted by jokoon | Created at 2015-11-17 - 09:56:52
<daviddias>
ah, that feels like some kind of weird corrupted version hit some special chars that break the toString
<dignifiedquire>
yeah :/
<daviddias>
could you see if that module passed the shasum?
<dignifiedquire>
doesn’t look like it was checked
<dignifiedquire>
ipfs-chrome-station looks cool
<dignifiedquire>
but so much code duplication :/
<daviddias>
I feel it was a "get it to work" run :)
<dignifiedquire>
probably
<dignifiedquire>
it’s cool that it works
<dignifiedquire>
:)
<daviddias>
let's see if toString bug happens again or if it is just that module
<dignifiedquire>
I saw it already twice today
<daviddias>
I had no idea there were so many corrupt tarballs inside npm
<dignifiedquire>
well it’s bound to happen at that scale
<daviddias>
oh, I see
jhulten has joined #ipfs
<The_8472>
the ipfs specs are not supposed to be in a usable state yet, are they?
<rendar>
daviddias: sorry, the other day i had to quit while we were speaking about the chunking algorithm. the 256k algorithm is just linearly dividing a file into 256k chunks, right?
<davidar>
The_8472 (IRC): they haven't been finalised yet, no
<daviddias>
rendar that is correct
<rendar>
davidar: then how does the other algorithm improve on this?
<daviddias>
The_8472: You should be able to read it and get a lot of how everything works or how everything has to work
<davidar>
rendar (IRC): tries to position the chunk boundaries intelligently
<davidar>
For better dedup
<The_8472>
well, i get a rough idea, but it doesn't seem to be enough to reimplement it
<rendar>
daviddias, my guess is that it doesn't linearly break the file in a naive way, but tries to search for patterns, or similar chunks, in the whole file...?
<daviddias>
rendar: when you have compression, smart chunking can make a big difference
<rendar>
hmm, right
<daviddias>
in our case, we reuse DAG nodes if they hash to the same thing
<daviddias>
so, if you grab an image
<daviddias>
and if you slice it up good and you get more chunks that are the same
<daviddias>
hashes will be the same
jhulten has quit [Ping timeout: 255 seconds]
<daviddias>
and space will be saved
<rendar>
daviddias: that's right, but my point is that you don't know that without seeing the entire data of ALL files..
<daviddias>
that is why rabin fingerprinting is magical
<rendar>
lol, magical? what is its key feature?
grahamperrin has joined #ipfs
<daviddias>
I defer that question to someone (like whyrusleeping) who has implemented it and did all the reading, to transmit the idea in a concise manner
<multivac>
[WIKIPEDIA] Rolling hash | "A rolling hash is a hash function where the input is hashed in a window that moves through the input.A few hash functions allow a rolling hash to be computed very quickly—the new hash value is rapidly calculated given only the old hash value, the old value removed from the window, and the new value..."
<rendar>
i know the concept of rolling hash, basically if you have ABCDE you hash ABCDE BCDE, CDE, DE, E, right?
<The_8472>
it's data-dependent slicing
<The_8472>
instead of relying on offsets
<rendar>
The_8472: data-dependent? what do you mean?
<ion>
Pick a window size, roll the window along the data, compute a hash of your choice for every window (certain algorithms make this very efficient). Whenever the hash matches a rule of your choice such as “the ten MSBs are all ones”, split a new chunk at that byte.
<rendar>
hmm i see
<rendar>
ion: so in the example i did before, basically i used a window size of 1
<ion>
No, 5
<rendar>
hmm, ok i see
rombou has joined #ipfs
<rendar>
ion: btw, what are those algorithms that make this very efficient? are they hash functions optimized to do this?
<The_8472>
it's mentioned in the WP article
<rendar>
The_8472: ok thanks
<ion>
At the very simplest, xor all the bytes within the window together. Given a checksum c computed from bytes 1000 through 1499, c xor data[1000] xor data[1500] is the checksum for bytes 1001 through 1500.
<ion>
If you make a shared constant random table mapping bytes to 32-bit numbers, compute (c shiftLeft 1) xor table(data[n]) xor table(data[n+windowSize]) and make sure the window size is a multiple of 32 (to make the shifts align), you basically have BuzHash.
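ion's O(1) update trick is easy to verify mechanically. Below is a minimal Go sketch of the xor rolling checksum described above; the window size, sample data, and function names are made up for the example, and this is not the hash go-ipfs actually uses:

```go
package main

import "fmt"

// xorChecksum computes the xor of all bytes in data[start : start+size]
// from scratch, as a reference for the rolling update.
func xorChecksum(data []byte, start, size int) byte {
	var c byte
	for _, b := range data[start : start+size] {
		c ^= b
	}
	return c
}

func main() {
	data := []byte("the quick brown fox jumps over the lazy dog")
	const window = 8

	// Checksum of the first window, then roll one byte at a time:
	// drop the byte leaving the window, add the byte entering it.
	c := xorChecksum(data, 0, window)
	for i := 1; i+window <= len(data); i++ {
		c = c ^ data[i-1] ^ data[i+window-1]
		// The rolled value matches a from-scratch recomputation.
		if c != xorChecksum(data, i, window) {
			panic("rolling update diverged from recomputation")
		}
	}
	fmt.Println("rolling checksum matches full recomputation at every offset")
}
```

The point is that each step costs two xors instead of re-reading the whole window, which is what makes rolling a window across a large file cheap.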
doublec has quit [Ping timeout: 260 seconds]
<kandinski>
The_8472: there are some reimplementation efforts going on, the JS one is well advanced, but the Python one is only starting. Join us!
doublec has joined #ipfs
<rendar>
ion: hmm i see, but what i can't get is that choosing of arbitrary numbers like 1001, 1500, etc
<ion>
Choose 32·n
<The_8472>
kandinski, I'm somewhat interested, but having to dig through other people's sources is unpleasant. a working spec would be nicer ;)
<rendar>
ion: where n is?
<ion>
A positive integer
<rendar>
ion: ok, it seems btw even gzip uses this technique
<kandinski>
The_8472: I'm learning golang by going through the go-ipfs sources, and the JS implementator is working on a libp2p website
<kandinski>
the Python implementation plans to start at libp2p
<ion>
The first window consists of bytes 0…0+32·n–1, the next window consists of bytes 1…1+32·n–1 etc.
<rendar>
ion: ok
<rendar>
ion: window size is fixed, right?
<ion>
Yes
<ion>
You also want to impose a minimum and a maximum chunk size to avoid pathological cases.
<rendar>
ion: my point is: let's consider we have a gazillion of .bmp files, which represent all-black figures, except for some thin (1-2 px) colored lines, so you have each horizontal line of the bmp file (the scanline) being 000000034 000000098 000000021 and so on. now, if you can "see" all the .bmp images together, you can notice that the *smallest* chunk of all zeros 00000 has N bytes, so if you
<rendar>
hash that, you can save *A LOT* of space, because if you hash chunks of N+1 you'll take some more data, ending up with a lot more chunks. imho, you can't do that even with a rolling hash
yaraju has joined #ipfs
<ion>
You'll want compression for those files.
rombou has quit [Ping timeout: 260 seconds]
<rendar>
let me explain with an example: file A: "0000AA0000BB0000CC" file B: "000000EE000000FF" -- if i chose to hash with N=4, i will have these chunks: hash(0000), hash(AA), hash(BB), hash(CC), hash(00EE), hash(00FF) -- but you'll know that only if you see file A and B together, do you see my point?
<ion>
A rolling hash only helps when parts are removed from or added to the middle of a file.
<ion>
Or to the beginning. That is, not at the end.
<rendar>
ion: if you compress those files, you will increase data entropy, making the data more random. and if we have totally random data, a rolling hash is pretty useless, because *each* chunk will have a different hash than every other chunk, so in that case linear 256k chunking is the way to go, right?
<rendar>
e.g. what is the point of using a rolling hash on a very large jpeg, where each chunk is different from every other?
infinity0 has quit [Remote host closed the connection]
<ion>
Chunking to a constant size isn't really any better than chunking with a rolling hash with the same average size, so might as well use a rolling hash for everything even though only some of the files will benefit from it.
infinity0 has joined #ipfs
<rendar>
ion: but chunking at a constant size will be much faster, computationally, i guess
<The_8472>
you'll end up calculating hashes for chunks anyway, no?
<The_8472>
cryptographic hashes are way more expensive
<rendar>
The_8472: yeah, do you think that it will be the bottleneck?
infinity0 has quit [Remote host closed the connection]
<The_8472>
once the data is already in the caches, probably
<rendar>
i see
<ion>
I wouldn't be surprised if reading a large file and computing a rolling hash turns out to have the IO as the bottleneck, not the computation. The algorithm is *really* efficient.
<rendar>
ok
<rendar>
ion: what is the algorithm used by ipfs?
<ion>
go-ipfs implements the Rabin one.
<ion>
BuzHash requires two table lookups (to a 1 KiB table, not too bad to cache) and a few in-register operations per byte.
jokoon has quit [Read error: Connection reset by peer]
NightRa has quit [Quit: Connection closed for inactivity]
<rendar>
ion: i see
hellertime has joined #ipfs
JasonWoof has quit [Read error: Connection reset by peer]
JasonWoof has joined #ipfs
guest234234 has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Ping timeout: 260 seconds]
<rendar>
question: if i have a very big file and i modify only a very small part of it, only the hashes of the small chunks i modified get recomputed, not the whole file's hash from scratch. but, since ipfs has a merkledag, updating the hash of even 1 or 2 small chunks means the hash of the whole file will change, and of its parent, and so on up to the merkle root, right??
Qwertie has quit [Ping timeout: 246 seconds]
bedeho has quit [Ping timeout: 250 seconds]
merlijn_ has joined #ipfs
<ion>
Yes
<davidar>
rendar (IRC): yep, that's the whole point ;)
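A one-level sketch in Go of the propagation rendar describes: leaf hashes for the chunks, and a parent hash over the concatenated child hashes. (Real IPFS DAG nodes are richer objects with links and data, not a bare hash list; this only shows why the root must change.)

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// merkleRoot hashes each chunk, then hashes the concatenation of the
// child hashes to get the parent. One level is enough to show how a
// leaf change propagates upward.
func merkleRoot(chunks [][]byte) [32]byte {
	var children []byte
	for _, c := range chunks {
		h := sha256.Sum256(c)
		children = append(children, h[:]...)
	}
	return sha256.Sum256(children)
}

func main() {
	a := [][]byte{[]byte("chunk-1"), []byte("chunk-2"), []byte("chunk-3")}
	b := [][]byte{[]byte("chunk-1"), []byte("chunk-X"), []byte("chunk-3")}

	// Editing one chunk changes its leaf hash, and the change propagates
	// to the root; the untouched leaves keep their old hashes, which is
	// what makes dedup of unchanged chunks possible.
	if merkleRoot(a) == merkleRoot(b) {
		panic("root should change when any chunk changes")
	}
	fmt.Println("one modified chunk changed the merkle root")
}
```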
merlijn_ has quit [Ping timeout: 260 seconds]
reit has joined #ipfs
merlijn_ has joined #ipfs
NightRa has joined #ipfs
<lgierth>
get your 32c3 tickets while they're hot!
jhulten has joined #ipfs
jhulten has quit [Ping timeout: 252 seconds]
slothbag has quit [Read error: Connection reset by peer]
<multivac>
dignifiedquire: I'll pass that on when jbenet is around.
<lgierth>
dignifiedquire: still, get your ticket
<lgierth>
:)
<dignifiedquire>
lgierth: I have one :)
<dignifiedquire>
not missing it this time
<cryptix>
my payment was already processed - never saw a same day bank transfer before
<cryptix>
i guess it was before 12oo
ipfs_intern has joined #ipfs
<lgierth>
that, or different branches of the same company
<ipfs_intern>
daemon is running on microsoft azure and it still connects with other nodes, but the gateway or any other node can't fetch things, and when i try to ping it gives me this - Ping error: dial attempt failed: failed to dial <peer.ID eZ12Dp>
<ipfs_intern>
@whyrusleeping : any solution for this ?
<ipfs_intern>
@achin : i can run the command ipfs swarm peers, which means the daemon is working fine, so why can't other nodes fetch things? is it because of azure's firewall?
<achin>
do you have an example hash of something that others can't fetch?
<ipfs_intern>
@achin: i have added this file right now can you fetch it ? QmP8KaHbXC4qAKCnys1hADRuoBaXhT4yGGYs5fizbTHQcq
<ipfs_intern>
try this
<achin>
i am able to download that file
<ipfs_intern>
@achin: sorry try this one : QmeeJ9fooWyirKKGQ6pqexx36GZAtLz9BG71buaVXW6HSw
nicolagreco has joined #ipfs
<ipfs_intern>
@achin: what about this hash ?
<achin>
yes, i am able to download that as well
nicolagreco has quit [Client Quit]
nicolagreco has joined #ipfs
<ipfs_intern>
@achin: it took around 5 minutes to fetch the file, i was also trying to fetch it
jhulten has joined #ipfs
<ipfs_intern>
@achin: sometime it works fine, and sometime it takes time to fetch
<achin>
does the node you are fetching from have a direct connection to your azure node?
jhulten has quit [Ping timeout: 240 seconds]
<ipfs_intern>
file is on azure's node, and i am trying to fetch it from my system, now i am getting this error - Ping error: dial backoff
jabberwocky has quit [Remote host closed the connection]
jabberwocky has joined #ipfs
reblonk is now known as cblgh
<ipfs_intern>
@achin: should i shift from azure ?
jabberwocky has quit [Remote host closed the connection]
nicolagreco has quit [Read error: Connection reset by peer]
nicolagreco has joined #ipfs
reit has joined #ipfs
e-lima has joined #ipfs
nicolagreco has quit [Client Quit]
mildred has quit [Ping timeout: 244 seconds]
elima_ has quit [Ping timeout: 252 seconds]
amade has joined #ipfs
hoony has quit [Quit: hoony]
border0464 has joined #ipfs
guest234234 has quit [Ping timeout: 252 seconds]
mildred has joined #ipfs
<achin>
this doesn't sound like an azure problem
<achin>
more like a generic firewall configuration issue
<achin>
perhaps both your azuare machine and your other node are firewalled
<ipfs_intern>
@achin: the problem only comes with the azure's node, my system is working fine, i'll try on amazon......btw thanks :)
<dignifiedquire>
daviddias: maybe we can organize some “intro + hacking times” where someone is around to explain and get people set up hacking on it
<ipfs_intern>
@achin: answer of which question ?
<achin>
perhaps both your azuare machine and your other node are firewalled. have you checked this?
<ipfs_intern>
@achin: let me check
mildred1 has joined #ipfs
<achin>
amazon EC2 machines are all firewalled by default, so if you wanted to run an ipfs node on one, you have to create the appropriate firewall rules (called a "security group" in AWS lingo). i suspect azure has something similar
mildred has quit [Ping timeout: 265 seconds]
pfraze has joined #ipfs
voxelot has joined #ipfs
__konrad__ has quit [Quit: Leaving]
<daviddias>
dignifiedquire: we submitted a few talks
<daviddias>
7 to be exact, talks+workshops
<dignifiedquire>
daviddias: oh okay, that’s a lot
<daviddias>
Ahah not expecting for all to get accepted
<dignifiedquire>
sure, still a lot
<daviddias>
But we should definitely have a "get your node running" hour
<daviddias>
I'm up to all kinds of ideas
<dignifiedquire>
maybe you can post a list of the submitted talks to the issue and update when you know what got accepted
fazo has joined #ipfs
fazo has quit [Changing host]
fazo has joined #ipfs
fazo has joined #ipfs
__konrad_ has joined #ipfs
go1111111 has quit [Ping timeout: 250 seconds]
grahamperrin has quit [Remote host closed the connection]
pfraze has quit [Remote host closed the connection]
<daviddias>
Will do :)
<ipfs_intern>
@achin: yes it is firewalled by default and i can't configure it; i am not sure if there is a way to change it
<achin>
what about your non-azure node? is that also firewalled?
<ipfs_intern>
@achin: no it is not firewalled, that node is working fine
<achin>
then your azure node should be able to connect to it
<ipfs_intern>
@achin: nope getting this error when i tried to ping : Ping error: dial attempt failed: failed to dial <peer.ID eZ12Dp>
<achin>
can you try "ipfs swarm connect" first?
<ipfs_intern>
@achin: got this - connect QmeZ12DpjxMxggknyxzfz3Lw1wTyGBvPFAdnmA5ZXMnyUk failure: dial attempt failed: failed to dial <peer.ID eZ12Dp>
jabberwocky has joined #ipfs
<achin>
can you give me the full address and peerID of your non-azure node?
<ipfs_intern>
public ip: 103.255.4.41, ipfs public key: CAASpgIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDHnlwdNavydPZD2l+5qplR7cwoKdp2RS1GgSRUEwRHfhywP33C17VMtoFwnAGQu/pSzAKxmVD33niLDRtR/BKcaEFPxxN2cUOnUblArl4yUIZCIs4HrJ+/yyOqaKgPWVEubyFOJfhnssrTavamvQ2nYrPlg/cFPluMmOZ3vx12LN5iuBfe/4YNpSJjmaUTDhKBnLy5x50ljDhG56lyYxApM0rVERLrVf7RXjR51lAKFJTy36B0ljOGipqZDq/xYhfcNZwwZ48hloK8JeGUgcvhN1urOs82fZ//dEvGkyOP0bYccxEzLM/YBF0sH0gQOEotgzoXyNg2LuVJ906dtLedAg
jhulten has quit [Ping timeout: 272 seconds]
Rylee has quit [Quit: WeeChat 1.4-dev]
Rylee has joined #ipfs
merlijn_ has quit [Ping timeout: 240 seconds]
__konrad_ has quit [Quit: Leaving]
__konrad__ has joined #ipfs
__konrad__ has quit [Remote host closed the connection]
__konrad_ has joined #ipfs
__konrad_ has quit [Remote host closed the connection]
__konrad_ has joined #ipfs
<ipfs_intern>
@achin: can you connect to my azure node ? /ip4/10.142.146.9/tcp/4001/ipfs/QmQXox3enLdPPgffkCiwTbYMZ3yA8MnQR9EjMt59nhFEP9
<ipfs_intern>
@achin: you were talking about my ipfs public key right ?
tsenior`` has quit [Ping timeout: 240 seconds]
<achin>
there seems to be a firewall on 103.255.4.41:4001
<achin>
are you really really sure it's not firewalled or filtered or anything like that? because i believe it is
__konrad_ has quit [Remote host closed the connection]
__konrad_ has joined #ipfs
<dignifiedquire>
daviddias: do we want this? “Selecting yes or maybe in this field will add your assembly to the list of self-organized-session / workshop locations. This will allow you and other people to add and schedule sessions that will show up within the session calendar.”
<daviddias>
dignifiedquire: just to make sure I answer knowing how 32c3 works: at the CCC camp there were "Villages", and a Village had to take care of everything. they only provided power and wifi; tents, tables and so on were up to the Village
<daviddias>
if we become an Assembly, what kind of logistics responsibilities will we have to take care of?
<dignifiedquire>
I’ll set sessions to maybe so we can decide that later
<dignifiedquire>
daviddias: “Select what kind of assembly you are: Are you either a group that just want a place to gather and hack in the hackcenter or do you want to show your projects and want to get in contact with other people?” probably just hack and gather for now?
<daviddias>
well, we can show stuff
<daviddias>
lgierth: has a bunch of NUC that we can have there running with IPFS
<dignifiedquire>
okay :)
<daviddias>
but we won't be a "vendor" like with rollups and flags, if that is what they are thinking of showing
<multivac>
[WIKIPEDIA] Dynamic HTML | "Dynamic HTML, or DHTML, is an umbrella term for a collection of technologies used together to create interactive and animated web sites by using a combination of a static markup language (such as HTML), a client-side scripting language (such as JavaScript), a presentation definition language (such as..."
Matoro has joined #ipfs
<achin>
ipfs_intern: what about your NAT?
<daviddias>
whyrusleeping: are you available?
<ipfs_intern>
@achin: internal IP addres(10.142.146.9) is mapped to 40.78.144.176
jabberwocky has joined #ipfs
merlijn_ has quit [Read error: No route to host]
jabberwocky has quit [Ping timeout: 246 seconds]
merlijn_ has joined #ipfs
<whyrusleeping>
daviddias: yeah, probably
merlijn_ has quit [Ping timeout: 250 seconds]
<daviddias>
need to ask some stuff about go-libp2p
<daviddias>
starting with, how do I use this thing? And how to set up a libp2pNode with secio off
* lgierth
seconds that question
<lgierth>
(the first part of it)
<lgierth>
i thought i'd just work within go-ipfs.git/p2p for now, and subtree-merge the commits over to go-libp2p
ipfs_intern has quit [Quit: Page closed]
edrex has quit [Ping timeout: 240 seconds]
locusf has joined #ipfs
chriscool has quit [Ping timeout: 240 seconds]
baselab has joined #ipfs
true_droid has quit [Ping timeout: 246 seconds]
<whyrusleeping>
lol, right
<whyrusleeping>
so once you clone it (go get) to the correct location (its go, its picky about where it lives)
<whyrusleeping>
set GO15VENDOREXPERIMENT to 1
<whyrusleeping>
and then you can 'go test ./p2p/...'
s_kunk has quit [Ping timeout: 240 seconds]
nskelsey has quit [Ping timeout: 240 seconds]
<whyrusleeping>
using it in another application requires either gx, or copying the 'vendor' directory in go-libp2p to your project
<whyrusleeping>
i was working on an 'example' last night, but got distracted by new found knowledge of packet routing magic
<whyrusleeping>
i may have set up a series of VMs and manually routed ping packets through them >.>
jabberwocky has joined #ipfs
zmanian_ has quit [Ping timeout: 250 seconds]
jabberwocky has quit [Read error: Connection reset by peer]
<daviddias>
» go get github.com/ipfs/go-libp2p
<daviddias>
package github.com/ipfs/go-libp2p: no buildable Go source files in /Users/david/Documents/Code/go-projects/src/github.com/ipfs/go-libp2p
jabberwocky has joined #ipfs
<whyrusleeping>
thats okay
<daviddias>
» GO15VENDOREXPERIMENT=1 go test ./p2p/crypto/key_test.go ◉ ◼◼◼◼◼◼◼◼◼◼
<daviddias>
and then bundle it all up before publish?
<daviddias>
that sounds like npm
<daviddias>
do eet :D
baselab has quit [Quit: Leaving]
M-eternaleye is now known as eternaleye
eternaleye has quit [Changing host]
eternaleye has joined #ipfs
merlijn_ has joined #ipfs
Matoro has quit [Ping timeout: 272 seconds]
ygrek_ has joined #ipfs
<lgierth>
whyrusleeping: something's up with go-ipfs' deps: github.com/ipfs/go-ipfs/vendor/QmQg1J6vikuXF9oDvm4wpdeAUvvkVEKW1EYDw9HhTMnP2b/go-log
<lgierth>
e.g.: package . imports github.com/ipfs/go-ipfs/vendor/QmQg1J6vikuXF9oDvm4wpdeAUvvkVEKW1EYDw9HhTMnP2b/go-log: use of vendored package not allowed
<lgierth>
internal error: duplicate loads of github.com/ipfs/go-ipfs/vendor/QmQg1J6vikuXF9oDvm4wpdeAUvvkVEKW1EYDw9HhTMnP2b/go-log
<lgierth>
imports github.com/ipfs/go-ipfs/vendor/QmQg1J6vikuXF9oDvm4wpdeAUvvkVEKW1EYDw9HhTMnP2b/go-log: must be imported as QmQg1J6vikuXF9oDvm4wpdeAUvvkVEKW1EYDw9HhTMnP2b/go-log
<lgierth>
gotta run
<ipfsbot>
[go-ipfs] lgierth created discovery-cjdns (+1 new commit): http://git.io/v4g0r
<ipfsbot>
go-ipfs/discovery-cjdns 245a1a3 Lars Gierth: WIP...
<lgierth>
there's my code ^
Matoro has joined #ipfs
ikeafurn1ture has quit [Ping timeout: 272 seconds]
<richardlitt>
@jbenet @kyledrake @VictorBjelkholm @dignifiedquire: Please add your to dos to the sprint here: https://github.com/ipfs/pm/issues/55
reit has quit [Read error: Connection reset by peer]
Encrypt_ is now known as Encrypt
<whyrusleeping>
lgierth, you can't have the go15vendor var set when working on go-ipfs yet
jhulten has quit [Ping timeout: 240 seconds]
anticore has joined #ipfs
Encrypt has quit [Quit: Quitte]
Matoro has quit [Ping timeout: 244 seconds]
anticore has quit [Client Quit]
pfraze has joined #ipfs
chriscool has joined #ipfs
forth has quit [Quit: Lämnar]
xicombd has joined #ipfs
Matoro has joined #ipfs
merlijn_ has quit [Ping timeout: 250 seconds]
<NeoTeo>
is the current master = 0.4.0 dev?
chriscool has quit [Ping timeout: 260 seconds]
tsenior`` has joined #ipfs
<richardlitt>
dignifiedquire: do you cat your Readmes?
jhulten has joined #ipfs
jhulten has quit [Ping timeout: 264 seconds]
tsenior`` has quit [Ping timeout: 250 seconds]
jhulten has joined #ipfs
tsenior`` has joined #ipfs
domanic_ has joined #ipfs
Senji has joined #ipfs
rendar has quit [Ping timeout: 246 seconds]
fingertoe has joined #ipfs
rendar has joined #ipfs
CarlWeathers has joined #ipfs
elsehow has joined #ipfs
domanic_ has quit [Ping timeout: 246 seconds]
Obamatron has quit [Ping timeout: 276 seconds]
mildred has joined #ipfs
<whyrusleeping>
NeoTeo: not yet
go1111111 has joined #ipfs
domanic_ has joined #ipfs
<NeoTeo>
whyrusleeping k, thx.
dignifiedquire_ has joined #ipfs
<dignifiedquire_>
richardlitt: sorry, I’m not sure what you mean by “cat your readmes”
<richardlitt>
No worries! Do you read them in the terminal?
<dignifiedquire_>
I use emacs in a terminal yes
<richardlitt>
I thought everyone read GitHub Readmes on Github, where line endings don't matter. Was curious!
<dignifiedquire_>
well I do read the readmes on github, but I edit them in emacs and read them there sometimes, so I’m a bit picky about line length everywhere, and in markdown as well
<richardlitt>
Like, for ipfs/awesome-ipfs, I just wouldn't read it in a terminal. Interesting use case I hadn't prepared for
<ipfsbot>
[ipfs] RichardLitt opened pull request #124: Added a mini-project directory and link to FAQ (master...feature/add-mini-directory) http://git.io/v42AW
tsenior`` has quit [Ping timeout: 252 seconds]
dignifiedquire_ has joined #ipfs
<haadcode>
victorbjelkholm: just pushed the UI source to the repo
s_kunk has joined #ipfs
<lgierth>
whyrusleeping: oh right yeah i have that set in .profile, thanks
<whyrusleeping>
lgierth: yeah, i'm fixing that in 0.4.0 so that the go15vendor var will always be required
<whyrusleeping>
(until they ship go1.6 with it as the default or something)
hellertime has quit [Quit: Leaving.]
<lgierth>
victorbjelkholm: no fs-repo-migration yet
<victorbjelkholm>
ooh, I see. Ok
<victorbjelkholm>
another note, I get "23:00:22.459 ERROR swarm2: swarm listener accept error: proto: spipe_pb.Propose: illegal tag 0 (wire type 0) swarm_listen.go:129" every ~10 seconds when running the ipfs daemon. Ok or not?
<whyrusleeping>
lgierth: actually, just pushed that
<whyrusleeping>
there is an fs-repo-migration that works for 0.3.* to 0.4.0 now
<lgierth>
excellent!
<victorbjelkholm>
whyrusleeping, ipfs/fs-repo-migrations last updated april 27
<lgierth>
i'll go and do some pin maintenance on the gateways tomorrow, and bring back pluto's repo
<victorbjelkholm>
ah, 2-to-3 branch...
<lgierth>
like, make sure that all gateways have the same pinset, maybe also make a catalogue of what all that actually is
<whyrusleeping>
lgierth: yeah, make sure to take backups first before trying it on anything really critical
devbug has joined #ipfs
<whyrusleeping>
i've tested it on all my machines
<whyrusleeping>
and because of the way we treat indirect pins differently in 0.4.0, the outputs are a little different for indirect pin listings
<whyrusleeping>
but the recursive pins get transferred over no sweat
<whyrusleeping>
once a few more people try it out i'll feel more confident
<lgierth>
what's the difference between recursive and indirect?
<whyrusleeping>
recursive pins are the roots of the tree
<whyrusleeping>
indirect is everything below
devbug has quit [Ping timeout: 272 seconds]
<lgierth>
ack
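whyrusleeping's distinction (recursive pins are the roots, indirect pins are everything below them) can be sketched in a few lines of Python. This is an illustration of the concept only, with hypothetical hashes, not go-ipfs code:

```python
# Sketch of the pin-type distinction: recursive pins are roots you
# pinned explicitly; indirect pins are every block reachable below.

def indirect_pins(dag, recursive_roots):
    """Collect all descendants of the recursively pinned roots."""
    seen = set()
    stack = list(recursive_roots)
    while stack:
        node = stack.pop()
        for child in dag.get(node, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# Hypothetical DAG: hash -> child hashes
dag = {
    "QmRoot": ["QmA", "QmB"],
    "QmA": ["QmC"],
    "QmB": [],
    "QmC": [],
}

print(sorted(indirect_pins(dag, {"QmRoot"})))  # every block below the root
```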
pfraze has joined #ipfs
roguism1 has joined #ipfs
mildred has quit [Ping timeout: 265 seconds]
roguism has quit [Ping timeout: 272 seconds]
<kandinski>
hi
<kandinski>
any py-ipfs peeps on irc right now?
pfraze has quit [Remote host closed the connection]
<daviddias>
dignifiedquire: can we make it an arg when instantiating the ipfsAPI obj?
doublec has quit [Ping timeout: 240 seconds]
Matoro has joined #ipfs
<daviddias>
"force" sounds too pushy :)
<dignifiedquire>
daviddias: we can make it an arg in general yes, but I would like to ship this like it is to bring the browser and node on the same level
<dignifiedquire>
we are doing the same thing in node, by hardcoding `http` in the api url
<daviddias>
victorbjelkholm: had a good point that the API is available through HTTP only, but that is for now
<daviddias>
making it an option and adding a note on the README, just as we have one for CORS, will help people understand that decision
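The "hardcoding `http` in the api url" that dignifiedquire mentions amounts to something like the sketch below. It is written in Python for illustration (js-ipfs-api itself is JavaScript), the `protocol` parameter name is hypothetical, and only the `/api/v0/` path reflects the real go-ipfs HTTP API layout:

```python
# Illustrative sketch of hardcoding `http` in the API URL versus
# exposing the scheme as a constructor argument (the change discussed
# above). The `protocol` parameter is a hypothetical name.

def api_url(host="127.0.0.1", port=5001, command="id", protocol="http"):
    # Today the scheme is effectively fixed to http; making it an
    # argument lets browser apps served over https opt in.
    return f"{protocol}://{host}:{port}/api/v0/{command}"

print(api_url())                  # http://127.0.0.1:5001/api/v0/id
print(api_url(protocol="https"))  # https://127.0.0.1:5001/api/v0/id
```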
<victorbjelkholm>
daviddias, not sure if that is actually a good point or not though. If you care about having security, you would encrypt the content and decrypt on the client. Otherwise, http is good enough. Right?
<daviddias>
and even learn that browsers are bossy when it comes to https :)
<daviddias>
victorbjelkholm: for any case that consists of controlling a node that is not local
<dignifiedquire>
daviddias: can you please give me access to webui?
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
ianopolous2 is now known as ianopolous
<davidar>
.seen amstocker_
<multivac>
davidar: I last saw amstocker_ at 2015-10-07 - 05:51:06 in here, saying davidar: is there money in it? :)
<davidar>
Hmm, amstocker has been afk for a while...
<daviddias>
880K for a JS library?
<daviddias>
woa
<daviddias>
isn't that, incredibly big?
<ansuz>
> transpiled gimp
<ipfsbot>
[js-ipfs-api] diasdavid pushed 2 new commits to master: http://git.io/v4a82
<ipfsbot>
js-ipfs-api/master 383a061 David Dias: build
<ipfsbot>
js-ipfs-api/master 7589234 David Dias: Release v2.7.6.
<dignifiedquire>
daviddias: yes, but not sure we can get it much smaller with all the vinyl add and request functionality. we can look at making it smaller for sure, just don’t think there is much we can do. will also try webpack and see if it helps, as it sometimes creates smaller bundles
<dignifiedquire>
but not today, going to bed now
<ianopolous>
has anyone had to implement a merkle-b-tree in ipfs yet? If not, I've just done one which we're going to use in Peergos when we switch to IPFS ( https://github.com/ianopolous/merkle-btree ). Thought someone might find it useful.
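The property ianopolous's merkle-btree builds on is the same one behind all merkle structures: a node's address is the hash of its own payload plus its children's addresses, so changing any descendant changes every ancestor's hash. A minimal Python sketch of that property follows; it is an illustration only, not the Peergos or merkle-btree Java code:

```python
import hashlib

# Minimal merkle-node sketch: hash payload plus child hashes, so a
# change anywhere below a node is visible in that node's hash.

def node_hash(payload: bytes, child_hashes) -> str:
    h = hashlib.sha256()
    h.update(payload)
    for c in sorted(child_hashes):  # order-independent for this sketch
        h.update(c.encode())
    return h.hexdigest()

leaf_a = node_hash(b"a", [])
leaf_b = node_hash(b"b", [])
root = node_hash(b"root", [leaf_a, leaf_b])

# Changing a leaf's payload changes the root hash:
assert node_hash(b"root", [node_hash(b"changed", []), leaf_b]) != root
```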
<daviddias>
we might, with time, have the js-ipfs-api modularized, so that things don't need to be required if not used
<dignifiedquire>
good night everyone
domanic_ has quit [Ping timeout: 264 seconds]
<daviddias>
night' dignifiedquire :)
domanic has joined #ipfs
ygrek_ has joined #ipfs
ricmoo has joined #ipfs
<ricmoo>
Heya all! I've recently become obsessed with IPFS, and am trying to implement a very simple system on it... I haven't been able to find any documentation on the wire protocol though... Basically all I'm looking to do at this point is figure out how to talk (I believe on port 4001) to the bootstrap nodes, get a list of other nodes to connect to and then send get block requests...
<victorbjelkholm>
sweet baby jesus! FYI, don't try to load random hashes that people are accessing across IPFS with a girlfriend next to you
dignifiedquire_ has quit [Quit: dignifiedquire_]
dignifiedquire_ has joined #ipfs
<ricmoo>
for example, "https://en.bitcoin.it/wiki/Protocol_documentation" was all I really needed to get me started writing my python bitcoin full node implementation (pycoind.org)... A document like that would be wonderful if such a thing exists... :)
elima has joined #ipfs
elima has quit [Excess Flood]
elima has joined #ipfs
jabberwocky has joined #ipfs
<fazo_>
ricmoo: uhm, are you trying to implement some kind of application on top of ipfs or building an implementation?
<ricmoo>
First the former, then the latter. :)
<fazo_>
wow that's a big project
<ricmoo>
The important thing for me now is to be able to talk over the wire to arbitrary nodes via its wire protocol. But I would probably expand the scope in the future.
<fazo_>
are you sure you want to do that?
<fazo_>
is that your end goal, or do you need it to accomplish something else?
<ricmoo>
Which part is really complicated about it?
<ricmoo>
I've already read through as much of the go implementation as I could figure out... It's a crazy language :)
<ricmoo>
I've already implemented the API protocol, but I can't really talk to nodes using it because they block it...
<fazo_>
well, I don't really understand the internal protocol ipfs uses to talk to other nodes yet
jabberwocky has quit [Ping timeout: 240 seconds]
<fazo_>
I'm more interested in building applications on top of it, which I am doing
fazo_ is now known as fazo
domanic has quit [Ping timeout: 240 seconds]
fazo has quit [Changing host]
fazo has joined #ipfs
<ricmoo>
How are you building an application without talking to the nodes over their internal protocol?
<fazo>
ricmoo: you don't need to do that to build applications
<fazo>
IPFS uses content-based addressing, so you can't really have data that changes, right?
doesntgolf has quit [Ping timeout: 246 seconds]
<ricmoo>
I certainly don't mind the simplest solution, but I haven't been able to talk via API calls, since they are restricted.
<fazo>
but there is also IPNS which is an extension (already implemented in all nodes) that allows you to have a pointer that changes over time
<ricmoo>
And running (or at least exposing through nat, etc) gateways is optional.
<fazo>
so you can have an address that tracks static content, but it can change, so you can use it to update content
<ricmoo>
Oh, I will certainly be using IPNS. :)
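fazo's point, that content-addressed data is immutable and IPNS adds a mutable pointer on top, can be modeled in a few lines. This is a toy illustration of the idea, not go-ipfs's IPNS implementation:

```python
import hashlib

# Toy model: content hashes are immutable addresses; a mutable name
# table (the IPNS-like layer) maps a stable id to the current hash.

store = {}   # content hash -> bytes (immutable once written)
names = {}   # peer id -> latest content hash (the mutable record)

def add(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

def publish(peer_id: str, content_hash: str):
    names[peer_id] = content_hash      # like `ipfs name publish`

def resolve(peer_id: str) -> bytes:    # like resolving the name, then cat
    return store[names[peer_id]]

v1 = add(b"profile v1")
publish("QmPeer", v1)                  # hypothetical peer id
v2 = add(b"profile v2")
publish("QmPeer", v2)                  # same name, new content
```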
<ricmoo>
But how are you communicating to nodes?
<fazo>
there's a js implementation of IPFS being developed, but it's not working yet
<fazo>
when it's done it'll work in any browser with no backend
<ricmoo>
I've seen the API js version... But it involves running your own instance of ipfs, no?
<fazo>
ricmoo: I don't need to. each user publishes his profile using IPNS. Then I use a few tricks to find users, and you just download the object pointed to by their IPNS publication to get their profile. From there you download their content
<fazo>
the application is just a static web app, it talks to go-ipfs via http
<ricmoo>
It looks like you are using the IPFS-API Javascript library?
<fazo>
ricmoo: yes, there's no other easy way around it
<fazo>
also that makes it more distributed, which is nice.
<fazo>
yes, I'm using js-ipfs-api to communicate to go-ipfs from a browser
<ricmoo>
So, you are required to run an IPFS node with the API exposed to your app... How will you eliminate that in the future? (apologies... My chat room skills are quite rusty... I feel my sentences are coming across attack-y... They aren't meant to. :))
<fazo>
don't worry :) your sentences look fine! also criticism is always welcome.
<fazo>
the solution is js-ipfs. It's a full implementation of IPFS, not just an API wrapper! It's not ready yet, but it's being worked on
NightRa has quit [Quit: Connection closed for inactivity]
<ricmoo>
My application is native, so I can open binary sockets to arbitrary hosts on port 4001... So, I should be able to just connect, and send a request for bootstrap and get, I would think... I haven't figured out where in the go code they do that though...
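ricmoo's first step, knowing whom to dial on port 4001, starts from the daemon's bootstrap list, whose entries are multiaddrs like `/ip4/.../tcp/4001/ipfs/Qm...`. A minimal parser sketch follows; real multiaddr supports many more protocols, and the peer ID below is a hypothetical placeholder:

```python
# Minimal multiaddr parsing sketch: a multiaddr is a slash-delimited
# sequence of (protocol, value) pairs. Illustration only.

def parse_multiaddr(addr: str) -> dict:
    parts = addr.strip("/").split("/")
    # pair up: protocol name followed by its value
    return dict(zip(parts[0::2], parts[1::2]))

ma = parse_multiaddr("/ip4/104.131.131.82/tcp/4001/ipfs/QmExamplePeerID")
print(ma["ip4"], ma["tcp"], ma["ipfs"])
```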
<ricmoo>
But js-ipfs will be a node application, right?
devbug has joined #ipfs
<ricmoo>
LOL! Yes, I've read the entire repo. All 3 files. :)
<daviddias>
pretty visualization, what am I looking at?
<daviddias>
and curious why it is node-ipfs-api
<daviddias>
it is the name of the folder you have your clone
<dignifiedquire_>
daviddias: this is the breakdown of all bundled dependencies of js-ipfs-api
<dignifiedquire_>
so you can go and analyze if there is something we can drop
<dignifiedquire_>
looks like a big chunk is coming from node-shims
<dignifiedquire_>
yes it’s the folder name on my local disk
<dignifiedquire_>
I will try and see if webpack creates smaller builds as there it’s much easier to control these shims
<dignifiedquire_>
but that’s for tomorrow, just wanted to share this, now I’m really going to sleep :D
<fazo>
ricmoo: the source is split in other repos lol it's not empty
<fazo>
ricmoo: anyway, js-ipfs will be a node application but also work in the browser.
dignifiedquire_ has quit [Quit: dignifiedquire_]
<daviddias>
dignifiedquire_: I've recently heard that request is really bloated and that a lighter, more performant http client is wreck