ZaZ has quit [Read error: Connection reset by peer]
warner has quit [Quit: ERC (IRC client for Emacs 25.1.1)]
jaboja has quit [Remote host closed the connection]
rcat has quit [Remote host closed the connection]
infinity0 has quit [Ping timeout: 240 seconds]
infinity0_ has joined #ipfs
infinity0_ has quit [Changing host]
infinity0 has joined #ipfs
zrc has joined #ipfs
zrc has quit [Client Quit]
clemo has quit [Ping timeout: 240 seconds]
leebyron has quit [Remote host closed the connection]
randomstrangerb has quit [Ping timeout: 240 seconds]
randomstrangerb has joined #ipfs
todder has quit [Ping timeout: 256 seconds]
todder has joined #ipfs
leebyron has joined #ipfs
infinity0 has quit [Ping timeout: 240 seconds]
leebyron has quit [Ping timeout: 240 seconds]
gran has joined #ipfs
<gran>
Hello everyone. Very happy IPFS is a thing! =)
leebyron has joined #ipfs
infinity0 has joined #ipfs
bomb-on has quit [Quit: SO LONG, SUCKERS!]
ps5 has joined #ipfs
robattila256 has quit [Quit: WeeChat 2.0.1]
bomb-on has joined #ipfs
ckwaldon has joined #ipfs
<rodarmor>
I was reading https://github.com/ipfs/go-ipfs/issues/1396, which discusses the IPFS DHT. Unless things have changed, putting every block in the DHT seems like a bad idea. If a node is sharing a 10 GiB file, broken into 128 KiB blocks, it will insert itself into the DHT 81920 times, once for each block. Is that correct?
<rodarmor>
These blocks will also increase the size of want lists and have lists.
<deltab>
rodarmor: no, blocks aren't stored in the DHT; it's used to find nodes that can provide the blocks
<rodarmor>
Right, sorry, I should have been more clear. Having every node register itself in the DHT for every block it wants to share seems like a bad idea.
<deltab>
ah, yeah
<rodarmor>
deltab: Even if the block itself isn't inserted, it seems like a lot of load on the DHT
<deltab>
yeah
<deltab>
it's faster if you turn off announcements while adding
<rodarmor>
I think this might be a long term problem. Small block sizes are good, since it means you can receive and verify smaller blocks of data, so if we have large nodes that are sharing TiBs or PiBs of data, they'll insert themselves into the DHT millions or billions of times.
robattila256 has joined #ipfs
<rodarmor>
1 TiB at a 128 KiB block size is 8.4 million entries.
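The block-count arithmetic above can be sanity-checked in a few lines (a quick sketch; 32-byte hashes assumed):

```python
KiB, GiB, TiB = 2**10, 2**30, 2**40
BLOCK = 128 * KiB  # block size discussed above

# A 10 GiB file -> one provider record per block
blocks_10gib = (10 * GiB) // BLOCK
print(blocks_10gib)  # 81920

# 1 TiB at the same block size
blocks_1tib = TiB // BLOCK
print(blocks_1tib)   # 8388608, i.e. ~8.4 million DHT entries
```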
<deltab>
there's a few things that could be done about that, such as only announcing key blocks, e.g. directories
<deltab>
and blocks that are known to be available elsewhere
<MikeFair>
I've also been envisioning some kind of "route entry balancing algorithm"
<MikeFair>
Where peers keep exchanging route entries with each other so they all have very close to the same number of entries
<rodarmor>
I think that it would be a good idea to have a transfer mode optimized for large files with many small blocks. I would suggest using a tree hash with a small, fixed piece size, say 16 KiB. Then, large blocks can be hashed with this tree hash, and nodes will optimize their behavior for this hash: They can register themselves and look for nodes in the DHT using the root hash, and when they are sharing a block that has
<rodarmor>
this tree hash with another node they can use a single bitfield for their have and want lists, so a single bit per 16KiB leaf piece.
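rodarmor's proposal — announce under the root hash only, then trade a single bitfield with one bit per 16 KiB leaf — could be sketched like this (hypothetical helper names, not an existing IPFS API):

```python
LEAF = 16 * 1024  # fixed leaf piece size from the proposal above

def empty_bitfield(file_size: int) -> bytearray:
    """One bit per 16 KiB leaf piece, rounded up to whole bytes."""
    pieces = -(-file_size // LEAF)        # ceil division
    return bytearray(-(-pieces // 8))

def mark_have(bits: bytearray, piece: int) -> None:
    bits[piece // 8] |= 1 << (piece % 8)

def has_piece(bits: bytearray, piece: int) -> bool:
    return bool(bits[piece // 8] & (1 << (piece % 8)))

# A have/want message becomes (root_hash, bits) rather than one CID per
# piece: a 10 GiB file is 655360 pieces, so the whole bitfield is 80 KiB.
bits = empty_bitfield(10 * 2**30)
mark_have(bits, 12345)
```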
<MikeFair>
rodarmor, You mean like streaming video files?
<rodarmor>
Basically a BitTorrent v2-like mode for this specific scenario.
<rodarmor>
This would save a ton on DHT entries and traffic, as well as a great deal of P2P traffic in the form of concise have and want lists
<rodarmor>
MikeFair: I think this is applicable for any large files
<MikeFair>
rodarmor, I agree, I'm just saying the protocol may already be there and built in
<rodarmor>
MikeFair: When you do `ipfs add -Qr FILE`, does it treat that file as a single block?
<rodarmor>
Ah, I see, it generates a subdirectory of the blocks.
<MikeFair>
rodarmor, no, it .. yeah
<rodarmor>
Yeah, so it's not clear that that's going to be optimal. If it's a very large file it will be broken into many blocks for streaming. Each node will generate a huge amount of traffic to the DHT, and trade huge want/have lists with other nodes.
<rodarmor>
Also, the size of blocks that are best for streaming will not necessarily be the size of the blocks that are best for P2P transfer, so it's best not to force them to be the same.
<MikeFair>
rodarmor, I think what you might be saying is some kind of tuple block system perhaps
<rodarmor>
MikeFair: Tuple block system?
<MikeFair>
rodarmor, I want block (CID_OF_LARGE_FILE, CID_OF_SMALLER_BLOCK_ENTRY)
<MikeFair>
rodarmor, that way only CID_OF_LARGE_FILE floats around the DHT
<rodarmor>
MikeFair: Yes, something like that. Where CID_OF_LARGE_FILE is a hash and CID_OF_SMALLER_BLOCK_ENTRY is an integer.
<MikeFair>
rodarmor, but any host that actually "provides" said file, also has to keep tabs on the smaller blocks
<rodarmor>
But in fact, I think that something like what bittorrent does is optimal, where you want (CID_OF_LARGE_FILE, BITFIELD_OF_PIECES_I_HAVE)
<MikeFair>
rodarmor, I could see it being optional, I'm not sure it helps with block reuse though
<MikeFair>
There are far more reused blocks on the planet than custom blocks
<MikeFair>
imho
<rodarmor>
This is true, but local resources (ram, disk) are far cheaper than entries in the DHT, or data that must be sent/received across the network
<rodarmor>
This is true, but there are many large files that people are only interested in as a whole
<rodarmor>
For example, if there is a long video, it will likely not share sub-blocks with any other file.
<MikeFair>
rodarmor, But there are many caches that have to purge out smaller portions of large files
randomstrangerb has quit [Ping timeout: 256 seconds]
leebyron has quit [Remote host closed the connection]
randomstrangerb has joined #ipfs
leebyron has joined #ipfs
<rodarmor>
I don't think there will be many caches that store partial ISOs, partial videos, or partial tarballs
<MikeFair>
rodarmor, I do, because they once held the full thing, then had to kick blocks as more content was requested
<rodarmor>
Hmm, I suppose that's true, if there are caches that don't care about the content itself, that just have a fixed amount of disk space and try to store whatever blocks are most wanted.
<MikeFair>
which fragments the file, and the likelihood of "remnants of large files" sticking around seems pretty high
witten has quit [Ping timeout: 276 seconds]
<MikeFair>
Which seems to make sense if I was a "non-pinning" cache operator
<MikeFair>
(the cache might be large, but definitely "finite")
<rodarmor>
But, like I said before, we're talking a huge amount of wasted resources, 80k blocks in the DHT, and in have/want lists per 10 GiB of data.
<MikeFair>
That's what, 128 bytes per entry?
<MikeFair>
it's a CID and PeerID right?
leeb has joined #ipfs
<MikeFair>
maybe some "structure markers"
<MikeFair>
It's not the number of entries I'm concerned about, so much as their cumulative size
<rodarmor>
That's an additional 10 MiB stored in the DHT, times the number of unique entries
leebyron has quit [Ping timeout: 252 seconds]
<rodarmor>
I would be worried about the number of entries too. A node starts sharing a file and must make 80k inserts into the DHT
<MikeFair>
But that's where I was seeing the "DHT entry balancing" algorithm would come into play
<rodarmor>
How so?
<MikeFair>
It's not a fully baked idea yet, but the idea is within my peergroup I would "trade route entries" with my neighbors to ensure I had as close to the same number of entries as they do
<MikeFair>
We would use XOR closeness to determine which of us should have which entries
<MikeFair>
So in essence, my peers might be advertising my "provides" for me and me for them
leeb has quit [Ping timeout: 252 seconds]
<MikeFair>
but we'd have a balanced number of DHT entries
<MikeFair>
because of the way the peer groups overlap this would ultimately even out the entries across the entire swarm
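MikeFair's balancing scheme — neighbours trading provider records until each holds about the same number, with XOR closeness deciding who naturally owns which entry — might look roughly like this (a sketch; all names hypothetical):

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style closeness: smaller XOR means closer."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def rebalance(record_keys: list[bytes], peer_ids: list[bytes]) -> dict[bytes, list[bytes]]:
    """Hand each record to the XOR-closest peer in the group; with
    uniformly distributed keys and peer IDs this also evens out the
    per-peer entry counts, which is the balancing goal above."""
    table: dict[bytes, list[bytes]] = {p: [] for p in peer_ids}
    for key in record_keys:
        owner = min(peer_ids, key=lambda p: xor_distance(key, p))
        table[owner].append(key)
    return table
```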
<rodarmor>
I think that's reasonable, but still a lot of extra load and complexity.
<MikeFair>
and XOR routing would still enable us to find the right route entries
<rodarmor>
And there's a simple solution which saves orders of magnitudes on all these fronts.
<MikeFair>
Using a block bitmap seems like "more data" to me
<rodarmor>
Less data, right? If hashes are 256 bits, and each sub block needs to go out in have and want lists, the bitmap will be far more compact.
<MikeFair>
no, more, because I have to track the root CID and the bit count for every block instead of just its own CID
<rodarmor>
Ah, but this is for large blocks, so there would be fewer of them.
<MikeFair>
You're talking just the net comms
leebyron has joined #ipfs
<rodarmor>
DHT entries, DHT queries, and p2p messages.
<MikeFair>
Saying I provide "ROOT_CID, BITMASK_OF_BLOCKS" would be smaller data over the wire (potentially more data required in my local index)
<MikeFair>
I guess I could somehow use the filesystem itself to eliminate that extra local index RAM
<MikeFair>
(like every bit block gets a file, but not all of them have data in it)
jared4dataroads has quit [Ping timeout: 240 seconds]
<rodarmor>
A 10 GiB file split into 128 KiB blocks, which have 256-bit hashes, would mean 2.5 MiB just for the hashes, but the bitmask would be 10 KiB
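Making that comparison concrete (a quick sketch; exact figures are 2.5 MiB of hashes against a 10 KiB mask):

```python
KiB, GiB = 2**10, 2**30
blocks = (10 * GiB) // (128 * KiB)     # 81920 blocks in the 10 GiB file

hash_list_bytes = blocks * (256 // 8)  # one 256-bit hash per block
bitmask_bytes = -(-blocks // 8)        # one bit per block, rounded up

print(hash_list_bytes)  # 2621440 bytes = 2.5 MiB
print(bitmask_bytes)    # 10240 bytes = 10 KiB, 256x smaller
```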
leebyron has quit [Ping timeout: 252 seconds]
<MikeFair>
It's a little orthogonal, but you can also split up the hashes into routing segments; this is another idea I've been toying with
<MikeFair>
The CIDs essentially have a near random distribution, so you aggregate every 24-bits into a zone
<rodarmor>
They better have a fully random distribution, otherwise our hash function is fucked :)
<MikeFair>
when you issue a "provides" or "want" you then list them in an array according to their zones (call it one byte at a time if you really want to get granular)
<MikeFair>
I've experimented with this using a different random distribution namespace and it sorted out very well
<MikeFair>
So everything between a000000000 and b00000000 is one zone handled by one set of nodes
<MikeFair>
you can't XOR directly on the hash
<MikeFair>
you xor on the zones, then elongate the has as you get closer
<MikeFair>
you xor on the zones, then elongate the hash as you get closer
* MikeFair
keeps forgetting you can't "edit" on IRC. ;)
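The zone idea — XOR-routing on a short prefix of the hash and elongating it as you close in on the target — could be sketched as (hypothetical; 24-bit zones as suggested):

```python
def zone_of(cid_hash: bytes, bits: int = 24) -> int:
    """Take the top `bits` of the (near-uniform) hash as its zone,
    e.g. everything starting 0xa0... lands in one set of zones."""
    return int.from_bytes(cid_hash[:4], "big") >> (32 - bits)

def zone_distance(a: bytes, b: bytes, bits: int = 24) -> int:
    """XOR on the zones rather than the full hash; routing can
    elongate `bits` step by step as it gets closer to the target."""
    return zone_of(a, bits) ^ zone_of(b, bits)
```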
gran has quit [Ping timeout: 260 seconds]
<MikeFair>
I really like the reusability of blocks, especially on large files
<MikeFair>
because what many people want are reliable backups
<MikeFair>
That storage can be optimized to maximize block reuse
<rodarmor>
Hmm, I think that reusability is useful, but that there is a very common case of large files with no shared blocks
<MikeFair>
but like you were saying, perhaps as a different algorithm
<MikeFair>
rodarmor, Except for the authors wanting to back those files up
<MikeFair>
editing that large movie had several iterations that all share a lot of data
zxk has joined #ipfs
ygrek has quit [Ping timeout: 268 seconds]
<rodarmor>
Actually, in the example of a movie, you would likely edit losslessly, so you don't make changes to the source files while editing, so they don't change, and after you're done editing you convert the source files + the edit information into a final version, which doesn't share any blocks with the source files (since it's a reencode)
<MikeFair>
right, but the originals are far bigger than the reencode
<rodarmor>
Right right, what I mean is the reencode, which might still be X0 GiB, is created and shared without any changes, and without sharing blocks with anything else
<MikeFair>
right, but relative to all the intermediate versions of the lossless copies, I see a net "loss" on total storage space consumed
<MikeFair>
I agree, the final product is "unique"
<rodarmor>
There are no intermediate versions of the lossless copies, they don't change, the editing software just stores a list of edits made
<rodarmor>
But that's just for this specific application, I agree that there might be other applications where block-reuse of large files could be super useful
<rodarmor>
I'll be interested to see how all this plays out
<rodarmor>
To be honest, I have a lot of concerns when it comes to scaling IPFS to 100s of millions of nodes, and EiB of files
<rodarmor>
I hope I'm wrong though!
<MikeFair>
rodarmor, The other "smoking gun" for insanely large files are system backups
<MikeFair>
and those definitely have a lot of reuse
<rodarmor>
Yeah, definitely.
ericxtang has joined #ipfs
<rodarmor>
Especially if the backup systems are aware of how to ensure that there's a lot of reuse, like using variable chunking using a rolling hash
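Content-defined chunking with a rolling hash, as mentioned: cut wherever a hash of the last few bytes matches a pattern, so cut points survive insertions that shift every byte offset. A toy sketch (real chunkers use Rabin fingerprints or buzhash; go-ipfs exposes this as `ipfs add --chunker=rabin`):

```python
def chunk_boundaries(data: bytes, window: int = 16, mask: int = 0x3F) -> list[int]:
    """Cut wherever a hash of the preceding `window` bytes has its low
    bits all set (average chunk ~ mask+1 bytes). Because each decision
    depends only on a small local window, inserting bytes earlier in
    the file shifts the later cut points but does not change them,
    which is what keeps backup blocks reusable."""
    cuts = []
    for i in range(window, len(data)):
        h = 0
        for b in data[i - window:i]:      # toy hash of the window only
            h = (h * 31 + b) & 0xFFFFFFFF
        if h & mask == mask:
            cuts.append(i)
    cuts.append(len(data))
    return cuts

# Prepending bytes shifts every later boundary by the same amount:
original = bytes(range(256)) * 4
edited = b"12345" + original
```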
<MikeFair>
I've been planning for the various use cases of having a node participate in multiple DHTs simultaneously
ckwaldon has quit [Quit: ckwaldon]
gran has joined #ipfs
<MikeFair>
rodarmor, We've even had some people wanting to custom build their own files because of the reuse (headers/footers) they have in their files
<rodarmor>
Do multiple smaller DHTs have desirable properties vs a single large DHT?
<MikeFair>
It's similar to NAT
<gran>
so as it stands, if i want to have a truly IPFS driven experience cross-platform, users must get my app (bundled with IPFS) for their platform (W,Linux,M) and then run it. Is it possible to get a FULL / complete IPFS experience via web browser?
<MikeFair>
You can say stuff like "CID AAA through DDD is provided by PEERID"
<MikeFair>
gran, depends on what you mean by full and complete
<MikeFair>
will provide anything available on the IPFS network
<MikeFair>
So you can "provide" from your local machine, and they can "pull" from that website
<gran>
Right on. Thank you.
<MikeFair>
You can also use the "js-ipfs" library to launch web pages that start their own local ipfs node
<MikeFair>
You can serve these web pages from IPFS
<gran>
Aha. That's very cool.
<MikeFair>
(they are static pages; no "server side processing" allowed)
<MikeFair>
there's a lot you can do with JSON
<MikeFair>
There's also the "ipfs get" command line tool
<gran>
Well, what we are building has a blockchain, so the application either has to be cpp or Golang... and since IPFS already plays nicely in go... we're thinking Go is the way. Now i'm looking at these options and seeing that a native platform app is good, it'd be interesting to make a fully functional in-browser alternative.
<MikeFair>
You're aware of the BTC, ETH, blockchain browsers based on IPLD?
<MikeFair>
(IPFS can directly read those blockchains)
<gran>
I am not. That is good. But the blockchains themselves are propagated by BTC protocol clients. We need to make clients that propagate our chain as well as viewers. So that's exactly where my thinking led... make a lightweight "read only" browser version...
<gran>
what does IPLD stand for, my friend?
<MikeFair>
InterPlanetary Linked Data, iirc
<gran>
How cool and progressive
<gran>
~~ *dances with the times* ~~
<MikeFair>
I dumb it down as "Structured data" (like JSON) with "Pointers"
<MikeFair>
but you can write "translators" for any binary formats
leebyron has joined #ipfs
thiagodelgado111 has quit [Quit: Lost terminal]
<MikeFair>
Only rule atm is the data path has to resolve to something immutable (or it messes with the caching algos)
<MikeFair>
data paths look like URLs or file paths and return "values" stored at those locations
<MikeFair>
The paths will automatically traverse links if required
<MikeFair>
rodarmor, So you can shrink the DHT entries by consolidating a range of entries onto another DHT (which looks like a different peer id)
<MikeFair>
It doesn't reduce the absolute number of entries like your bitmask suggestion does; merely the number of copies of those entries
<gran>
@MikeFair interesting. I'm wondering if it's enough to make a clientside go program and have users connect to a localhost instance that is receptive to the network o.o
<MikeFair>
That's how I do it atm, but I'm not distributing the app far and wide
<MikeFair>
I'm really interested in the libp2p daemon service developing; I think that has potential to get installed on lots of nodes
<MikeFair>
and the libp2p-consensus-protocol
Neomex has quit [Read error: Connection reset by peer]
leebyron has quit [Remote host closed the connection]
leebyron has joined #ipfs
ericxtang has quit [Remote host closed the connection]
leebyron has quit [Ping timeout: 240 seconds]
ericxtang has joined #ipfs
ericxtang has quit [Remote host closed the connection]
ericxtang has joined #ipfs
muvlon has quit [Quit: Leaving]
ericxtang has quit [Ping timeout: 240 seconds]
athan has quit [Read error: Connection reset by peer]
leebyron has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
witten has joined #ipfs
<gran>
Cool. I'll read up on those. So it's fully possible to make a clientside UI that uses IPFS for networking
<gran>
because file redundancy (of user uploads) is important
<gran>
otherwise it could potentially be fully web it seems
randomstrangerb has quit [Ping timeout: 240 seconds]
randomstrangerb has joined #ipfs
leebyron has joined #ipfs
leebyron has quit [Ping timeout: 256 seconds]
gran has quit [Ping timeout: 260 seconds]
colatkinson has joined #ipfs
dimitarvp has quit [Quit: Bye]
Alpha64_ has quit [Quit: Alpha64_]
appa_ has quit [Ping timeout: 246 seconds]
Gytha has joined #ipfs
leebyron has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
colatkinson has quit [Quit: colatkinson]
colatkinson has joined #ipfs
ulrichard has joined #ipfs
randomstrangerb has quit [Ping timeout: 256 seconds]
randomstrangerb has joined #ipfs
appa_ has joined #ipfs
leebyron has joined #ipfs
randomstrangerb has quit [Ping timeout: 252 seconds]
randomstrangerb has joined #ipfs
newhouse has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
colatkinson has quit [Quit: colatkinson]
]BFG[ has joined #ipfs
colatkinson has joined #ipfs
cshorvath has joined #ipfs
MikeFair has quit [Quit: Leaving]
cshorvath has quit [Quit: Konversation terminated!]
yuhl has joined #ipfs
yuhl has left #ipfs [#ipfs]
captain_morgan has quit [Remote host closed the connection]
captain_morgan has joined #ipfs
captain_morgan has quit [Remote host closed the connection]
captain_morgan has joined #ipfs
leebyron has joined #ipfs
leebyron has quit [Ping timeout: 256 seconds]
kaotisk has joined #ipfs
inetic has joined #ipfs
Reinhilde is now known as Eamon
leebyron has joined #ipfs
randomstrangerb has quit [Ping timeout: 240 seconds]
randomstrangerb has joined #ipfs
espadrine has quit [Ping timeout: 240 seconds]
rendar has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
Eamon is now known as Reinhilde
newhouse has quit [Read error: Connection reset by peer]
colatkinson has quit [Quit: colatkinson]
skywavesurfer has joined #ipfs
skywavesurfer has quit [Excess Flood]
ylp has joined #ipfs
leebyron has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
raynold has quit [Quit: Connection closed for inactivity]
leebyron has joined #ipfs
Fess has quit [Ping timeout: 240 seconds]
david__ has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
rcat has joined #ipfs
}ls{ has joined #ipfs
plexigras has joined #ipfs
konubinix has quit [Remote host closed the connection]
konubinix has joined #ipfs
leebyron has joined #ipfs
leebyron has quit [Ping timeout: 246 seconds]
bomb-on has quit [Quit: zzz]
jungly has quit [Remote host closed the connection]
jungly has joined #ipfs
jaboja has joined #ipfs
bomb-on has joined #ipfs
vmx has joined #ipfs
Caterpillar has joined #ipfs
random has joined #ipfs
<random>
Hello everyone, I am getting started with IPFS and have a (probably stupid) question.
<random>
Is it possible for me to run an IPFS gateway that other users can use over HTTP without having to store the keys?
<random>
What I mean by that is the keys remain on the client side, and perhaps they send the file over, or sign it locally (I am not sure exactly how the private keys are used to prove ownership, I assume you sign something)
<random>
Basically I'd like to have an IPFS gateway that does not manage keys for the clients, and simply "proxies" (for lack of a better word) the requests, while the keys and signing stay on the client side.
<random>
Hope that made sense, thanks!
justhangingaroun has joined #ipfs
justhangingaroun has quit [Quit: WeeChat 2.0.1]
<r0kk3rz>
sure, if you just want to transfer signed data then of course you can do that
<r0kk3rz>
there is also a neat trick if you want to share a private key by putting it in the URL
<random>
So I can sign the data client side, then send the whole thing to the gateway, and it will add it to IPFS on my behalf?
<random>
I'd really like to avoid the private key leaving the client
<r0kk3rz>
yeah you can do something like that if you want
<r0kk3rz>
IPFS isn't any different to a normal webserver in this regard
<random>
Okay, so I'll look into signing objects client side and sending them to the gateway so it can add them to IPFS; ideally all private key related stuff can happen on the client, but the client won't have to run an ipfs node.
<random>
Thanks r0kk3rz.
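One shape for that flow, with signing kept strictly client-side (a sketch: HMAC stands in for a real public-key signature scheme such as Ed25519, which isn't in the Python stdlib, and `gateway_add` is a hypothetical server handler, not a real go-ipfs API):

```python
import hashlib
import hmac

def sign_locally(payload: bytes, secret_key: bytes) -> bytes:
    """Runs on the client; `secret_key` never travels anywhere."""
    return hmac.new(secret_key, payload, hashlib.sha256).digest()

def gateway_add(payload: bytes, signature: bytes) -> str:
    """What the key-less gateway receives: content plus an opaque
    signature blob. It can push `payload` into IPFS (e.g. via its own
    node's add endpoint) and store the signature alongside, without
    ever holding a client key. Returns a stand-in content hash."""
    return hashlib.sha256(payload).hexdigest()

doc = b"my file"
sig = sign_locally(doc, b"client-only-secret")
fake_cid = gateway_add(doc, sig)
```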
david__ has quit [Ping timeout: 240 seconds]
ccii1 has quit [Quit: Leaving.]
ccii has joined #ipfs
jkilpatr has quit [Ping timeout: 240 seconds]
xnbya has quit [Ping timeout: 240 seconds]
xnbya has joined #ipfs
mtodor has joined #ipfs
l2d has quit [Remote host closed the connection]
l2d has joined #ipfs
jkilpatr has joined #ipfs
random has quit [Quit: Page closed]
ONI_Ghost has joined #ipfs
mtodor has quit [Remote host closed the connection]
<AphelionZ>
r0kk3rz: is that private key url feature documented?
<AphelionZ>
We might need something like that
<victorbjelkholm>
AphelionZ: no, it's more of a userland feature :) Check out peerpad for an example
<victorbjelkholm>
think cryptpad uses that approach as well
<AphelionZ>
Ohhh ok
<AphelionZ>
I get it
<AphelionZ>
It's something that needs to be programmed at the application layer
<victorbjelkholm>
yeah, it would be complicated to include in the implementations themselves
<AphelionZ>
Copy that
<r0kk3rz>
yeah it's more an HTTP gateway thing than anything built into ipfs as such
ericxtang has joined #ipfs
sim590 has quit [Ping timeout: 260 seconds]
ONI_Ghost has quit [Read error: Connection reset by peer]
ericxtang has quit [Ping timeout: 268 seconds]
ONI_Ghost has joined #ipfs
sim590 has joined #ipfs
sunflowersociety has joined #ipfs
ONI_Ghost has quit [Ping timeout: 256 seconds]
lord| has quit [Quit: WeeChat 2.0.1]
<AphelionZ>
well it's just a way of passing the key around, right?
<AphelionZ>
the HTTP gateway doesn't really *do* anything with the key, does it?
<r0kk3rz>
no
<AphelionZ>
thank you
<r0kk3rz>
the gateway won't even see it, which is kinda the point really
<AphelionZ>
totally
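The reason the gateway can't see it: the URL fragment (everything after `#`) is never included in the HTTP request, so a key placed there is readable only by client-side JavaScript. A quick demonstration (hypothetical URL):

```python
from urllib.parse import urlsplit, urlunsplit

url = "https://gateway.example/ipfs/QmSomeHash/#key=SECRET"
parts = urlsplit(url)

# What actually goes over the wire is the URL minus its fragment:
request_target = urlunsplit(parts._replace(fragment=""))

assert request_target == "https://gateway.example/ipfs/QmSomeHash/"
assert "SECRET" not in request_target   # the gateway never sees the key
assert parts.fragment == "key=SECRET"   # only the page's JS reads this
```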
<AphelionZ>
if i "ipfs get" a folder, and then 'ipfs pin' the root, does it pin every hash in the folder?
leebyron has joined #ipfs
mtodor has joined #ipfs
erictapen has quit [Remote host closed the connection]
leebyron has quit [Ping timeout: 240 seconds]
erictapen has joined #ipfs
<EnricoFasoli[m]>
AphelionZ: yes by default. You can pass --recursive=false to disable it when you run ipfs pin add
<EnricoFasoli[m]>
if you leave it on it just recursively pins everything inside the folder and all subfolders
<AphelionZ>
Awesome thank you
trqx has quit [Read error: Connection reset by peer]
kaotisk has joined #ipfs
<Michcioperz[m]>
related question: can an ipns hash be pinned?
lemmi has quit [Ping timeout: 256 seconds]
<voker57>
no
<voker57>
ipns keys are distributed separately from bitswap
<r0kk3rz>
there was talk of such a feature
<voker57>
if you mean pinning content linked by ipns hash, yes, there is a PR in works for that
<r0kk3rz>
and updating the pin when the ipns updates
matt-h has quit [Quit: Leaving]
sunflowersociety has quit [Ping timeout: 256 seconds]
leebyron has joined #ipfs
infinisil has quit [Quit: Configuring ZNC, sorry for the join/quits!]
infinisil has joined #ipfs
trqx has joined #ipfs
leebyron has quit [Ping timeout: 240 seconds]
mtodor has quit [Read error: Connection reset by peer]
mtodor has joined #ipfs
<Michcioperz[m]>
a cron job then? alright
<Michcioperz[m]>
thanks for answering
trqx has quit [Remote host closed the connection]
sunflowersociety has joined #ipfs
trqx has joined #ipfs
trqx has quit [Remote host closed the connection]
trqx has joined #ipfs
dimitarvp has joined #ipfs
plexigras has quit [Quit: WeeChat 2.0.1]
arpu has quit [Remote host closed the connection]
tbenett has joined #ipfs
ericxtang has joined #ipfs
sunflowersociety has quit [Read error: Connection reset by peer]
leebyron has joined #ipfs
letmutx has joined #ipfs
letmutx has quit [Client Quit]
ONI_Ghost has joined #ipfs
rodolf0 has joined #ipfs
shizukesa has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
newhouse has joined #ipfs
ONI_Ghost has quit [Read error: No route to host]
ONI_Ghost has joined #ipfs
sunflowersociety has joined #ipfs
ONI_Ghost has quit [Ping timeout: 268 seconds]
ulrichard has quit [Remote host closed the connection]
leebyron has joined #ipfs
randomstrangerb has quit [Ping timeout: 260 seconds]
randomstrangerb has joined #ipfs
shizukesa has quit [Ping timeout: 256 seconds]
leebyron has quit [Ping timeout: 265 seconds]
randomstrangerb has quit [Ping timeout: 256 seconds]
randomstrangerb has joined #ipfs
gran has joined #ipfs
<gran>
:D
CmndrSp0ck has joined #ipfs
gran has quit [Ping timeout: 260 seconds]
lemmi has joined #ipfs
CmndrSp0ck has left #ipfs [#ipfs]
ralphthe1inja has joined #ipfs
ralphthe1inja is now known as ralphtheninja2
clemo has joined #ipfs
guts1 has joined #ipfs
ylp has left #ipfs [#ipfs]
clemo has quit [Quit: clemo]
lemmi has quit [Quit: WeeChat 2.0.1]
lemmi has joined #ipfs
Jesin has joined #ipfs
<guts1>
hello, I'm trying to build go-ipfs from source, and "make build" gets to "gx install --global" which fails with "GOPATH/src/github.com/ipfs/go-ipfs/bin/gx: No such file or directory"
rngkll has quit [Remote host closed the connection]
leebyron has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
shizukesa has joined #ipfs
tsglove has quit [Quit: Leaving]
trqx has quit [Remote host closed the connection]
trqx has joined #ipfs
rngkll has joined #ipfs
weez17 has quit [Remote host closed the connection]
mtodor has quit [Quit: Leaving...]
bomb-on has quit [Quit: zzz]
coyotespike has joined #ipfs
weez17 has joined #ipfs
whenisnever has joined #ipfs
l2d has quit [Read error: Connection reset by peer]
l2d has joined #ipfs
jared4dataroads has joined #ipfs
yuhl has joined #ipfs
inetic has left #ipfs [#ipfs]
jared4dataroads has quit [Remote host closed the connection]
ralphtheninja2 has quit [Quit: leaving]
lounge-user36 has joined #ipfs
<victorbjelkholm>
guts1: try just running `make`
<victorbjelkholm>
guts1: sorry, `make bin/gx && make bin/gx-go`
<cipres>
QmNpkayLxAEvu62HRFcrAcMPTQxjg7D2rPpeqWMHm1igfs is the latest
<AphelionZ>
whoa ChrisMatthieu
<AphelionZ>
what... is that?
<AphelionZ>
and why is it mrh.a?
<AphelionZ>
I havent pinned anything lol
<AphelionZ>
cipres: how did I manage to pin this on any machines
<cipres>
i wonder
<AphelionZ>
lol
<AphelionZ>
maybe people are indexing or something
<cipres>
when did you add it
<AphelionZ>
either way, nobody can ipfs get it, try it
<AphelionZ>
I added it this morning
<gran>
@AphelionZ what happens when you ` ipfs ls /ipfs/QmXa2j6q52SX45Nn7MvrXh5tpXwYNKbjecWxnmB4K1imG3/2018-01-24-pushing-limits-ipfs-orbitdb/
<AphelionZ>
and then requested it through the gateway
Neomex has quit [Ping timeout: 246 seconds]
<AphelionZ>
that works gran
<AphelionZ>
i see index.html
<gran>
excellent
<AphelionZ>
yeah, just the get
<gran>
but GET doesn't function?
<AphelionZ>
correct
<ChrisMatthieu>
mrh.a
<ChrisMatthieu>
it's an Arpadyne TLD :)
<ChrisMatthieu>
:)
<cipres>
works here also, but listing on / blocks
<gran>
Do you have write privs to the directory you are in
<gran>
Also, when you do GET you cannot leave a trailing / on the hash, or else it will try to recursively copy a directory, which may silently fail on PowerShell.
<AphelionZ>
yes
<AphelionZ>
arpadyne eh?
<AphelionZ>
is that ARPA + Cyberdyne?
<ChrisMatthieu>
indeed :)
Poeticode is now known as ImoutoCodes
<gran>
Do you already have a copy of a directory called `2018-01-24-pushing-limits-ipfs-orbitdb` in the active working dir?
<gran>
Ideally this thang don't fail silently when things go awry
ImoutoCodes is now known as Poeticode
leebyron has quit [Remote host closed the connection]
<gran>
what does `ipfs version` report, AphelionZ?
<cipres>
AphelionZ: i like your article "pushing the limits of IPFS and OrbitDB"
<AphelionZ>
thanks
<AphelionZ>
0.4.13
<ChrisMatthieu>
Please repost the blog link so I can tweet it
<gran>
okay same :D not tryna babysit just curious what the issue be on your machine
<cipres>
AphelionZ: did you add the website from the windows machine ?
<AphelionZ>
negative
<cipres>
i just can't ipfs ls on this, never seen this
<AphelionZ>
ChrisMatthieu: mrh.io, first one
rodolf0 has quit [Ping timeout: 252 seconds]
Poeticode is now known as solidcode
<cipres>
what kind of files are in the root of the website ?
Ecran has joined #ipfs
<ChrisMatthieu>
duh :)
<cipres>
listing any subdirectory works
solidcode is now known as Poetifox
rodolf0 has joined #ipfs
Neomex has joined #ipfs
PyHedgehog has quit [Quit: Connection closed for inactivity]
<raynold>
ahh it's a wonderful day
<AphelionZ>
cipres: nothing special!
<cipres>
AphelionZ: strange then .. if you update the website i'd be interested if you could give me the hash
tbenett has quit [Quit: WeeChat 2.0.1]
gran has quit [Ping timeout: 260 seconds]
<cipres>
the findprovs now returns 0 peers
<cipres>
something's up :)
xelra has quit [Remote host closed the connection]
<AphelionZ>
cipres: sure
<cipres>
AphelionZ: what does "ipfs dht get QmXa2j6q52SX45Nn7MvrXh5tpXwYNKbjecWxnmB4K1imG3" return on your laptop ?
<AphelionZ>
cipres: it hangs
<cipres>
this is a cursed hash
leebyron has joined #ipfs
<ehmry>
a hash of unspeakable evil?
Mateon3 has joined #ipfs
whenisnever has joined #ipfs
Mateon1 has quit [Ping timeout: 240 seconds]
Mateon3 is now known as Mateon1
<AphelionZ>
lol
Ronsor is now known as \uffff
\uffff is now known as Ronsor
leebyron has quit [Ping timeout: 246 seconds]
cehteh has quit [Excess Flood]
<cipres>
AphelionZ: would be interesting to add a file to the website's directory and ipfs add -r again, and see if the problem is the same on the new hash
cehteh has joined #ipfs
leebyron has joined #ipfs
<ChrisMatthieu>
whyrusleeping: Good question at BPASE18
rngkll has quit [Remote host closed the connection]
rngkll has joined #ipfs
xelra has joined #ipfs
retpoline has joined #ipfs
jkilpatr has quit [Remote host closed the connection]
leebyron has quit [Remote host closed the connection]
leebyron has joined #ipfs
rodolf0 has quit [Quit: Leaving]
rngkll has quit [Remote host closed the connection]
leebyron has quit [Ping timeout: 252 seconds]
reit has joined #ipfs
rngkll has joined #ipfs
rngkll has quit [Remote host closed the connection]
colatkinson has joined #ipfs
Encrypt has joined #ipfs
rngkll has joined #ipfs
colatkinson has quit [Client Quit]
rngkll has quit [Read error: Connection reset by peer]
rngkll has joined #ipfs
jesse22 has joined #ipfs
leebyron has joined #ipfs
leebyron has quit [Ping timeout: 252 seconds]
ccii has quit [Ping timeout: 252 seconds]
pvh has joined #ipfs
Neomex has quit [Ping timeout: 256 seconds]
Jesin has quit [Ping timeout: 240 seconds]
Neomex has joined #ipfs
leebyron has joined #ipfs
colatkinson has joined #ipfs
matthiasbgg[m] has left #ipfs ["User left"]
ccii has joined #ipfs
leebyron has quit [Remote host closed the connection]
leebyron has joined #ipfs
leebyron has quit [Remote host closed the connection]
leebyron has joined #ipfs
Gytha has quit [Ping timeout: 240 seconds]
matt-h has joined #ipfs
Encrypt has quit [Quit: Quit]
newhouse has quit [Read error: Connection reset by peer]
colatkinson has quit [Quit: colatkinson]
Jesin has joined #ipfs
Jesin has quit [Client Quit]
newhouse has joined #ipfs
Taoki has quit [Ping timeout: 268 seconds]
retpoline has quit [Quit: bye]
lord| has joined #ipfs
retpoline has joined #ipfs
Taoki has joined #ipfs
coyotespike has quit [Ping timeout: 256 seconds]
rngkll has quit [Read error: Connection reset by peer]
rngkll has joined #ipfs
ralphthe1inja has joined #ipfs
ralphthe1inja is now known as ralphtheninja2
angreifer has quit [Quit: ZNC @ 3K]
chris613 has quit [Ping timeout: 256 seconds]
whenisnever has quit [Ping timeout: 256 seconds]
James_Epp has joined #ipfs
Bat`O_ has quit [Ping timeout: 240 seconds]
bomb-on has quit [Quit: zzz]
han_shot_first has joined #ipfs
detran has quit [Remote host closed the connection]
Ecran has quit [Quit: Going offline, see ya! (www.adiirc.com)]
Bat`O has joined #ipfs
samae has quit [Remote host closed the connection]