stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.22 and js-ipfs 0.35 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of
clemo has quit [Ping timeout: 250 seconds]
D__ is now known as D_
xcm has quit [Ping timeout: 245 seconds]
zoobab has quit [Ping timeout: 265 seconds]
Adbray_ has quit [Quit: Ah! By Brain!]
Adbray has joined #ipfs
xcm has joined #ipfs
Junyou has joined #ipfs
jcea has joined #ipfs
CCR5-D32 has joined #ipfs
^andrea^75006683 has quit [Ping timeout: 276 seconds]
carldd has quit [Ping timeout: 276 seconds]
^andrea^75006683 has joined #ipfs
cxl000__ has quit [Ping timeout: 240 seconds]
cxl000 has quit [Ping timeout: 240 seconds]
jadedctrl has quit [Read error: Connection reset by peer]
cxl000__ has joined #ipfs
ygrek__ has joined #ipfs
carldd has joined #ipfs
cxl000__ has quit [Ping timeout: 246 seconds]
Encrypt has quit [Ping timeout: 246 seconds]
Encrypt has joined #ipfs
_whitelogger has joined #ipfs
llorllale has quit [Quit: WeeChat 1.9.1]
llorllale has joined #ipfs
mowcat has quit [Remote host closed the connection]
xcm has quit [Killed (wolfe.freenode.net (Nickname regained by services))]
xcm has joined #ipfs
cxl000 has joined #ipfs
jadedctrl has joined #ipfs
RockSteadyTRTLDi is now known as RockSteadyTRTLD4
RockSteadyTRTLD4 is now known as RockSteadyTRTLD6
RockSteadyTRTLD6 is now known as RockSteadyTRTLD8
RockSteadyTRTLD8 is now known as RockSteadyTRTL10
RockSteadyTRTL10 is now known as RockSteadyTRTL12
dopplerg- has quit [Ping timeout: 240 seconds]
cxl000 has quit [Remote host closed the connection]
verin0x2 has joined #ipfs
cxl000 has joined #ipfs
verin0x has quit [Ping timeout: 240 seconds]
verin0x2 is now known as verin0x
dopplergange has joined #ipfs
jcea has quit [Quit: jcea]
RamRanRa has quit [Read error: Connection reset by peer]
fleeky has quit [Ping timeout: 250 seconds]
fleeky has joined #ipfs
zeden has quit [Quit: WeeChat 2.6]
zoobab has joined #ipfs
user_51 has quit [Ping timeout: 268 seconds]
user_51 has joined #ipfs
hurikhan77 has joined #ipfs
Clarth has quit [Remote host closed the connection]
kakra has quit [Ping timeout: 240 seconds]
Belkaar_ has joined #ipfs
Belkaar has quit [Ping timeout: 276 seconds]
whyrusleeping has quit [Ping timeout: 246 seconds]
whyrusleeping has joined #ipfs
stoopkid has quit [Quit: Connection closed for inactivity]
baojg has quit [Remote host closed the connection]
baojg has joined #ipfs
nelson98228[m] has joined #ipfs
Fessus has quit [Remote host closed the connection]
Junyou has quit [Quit: Ping timeout (120 seconds)]
fleeky has quit [Ping timeout: 250 seconds]
fleeky has joined #ipfs
fleeky has quit [Ping timeout: 250 seconds]
kapil_ has joined #ipfs
fleeky has joined #ipfs
The_8472 has quit [Ping timeout: 252 seconds]
The_8472 has joined #ipfs
ipfs-stackbot has quit [Remote host closed the connection]
ipfs-stackbot has joined #ipfs
airstorm has joined #ipfs
rardiol has quit [Ping timeout: 268 seconds]
riemann has quit [Ping timeout: 268 seconds]
riemann has joined #ipfs
eyefuls has quit [Remote host closed the connection]
rendar has joined #ipfs
granularitys has joined #ipfs
xcm is now known as Guest22553
Guest22553 has quit [Killed (cherryh.freenode.net (Nickname regained by services))]
xcm has joined #ipfs
Lochnair has quit [Ping timeout: 240 seconds]
yosafbridge has quit [Ping timeout: 240 seconds]
misuto5 has joined #ipfs
misuto5 has quit [Client Quit]
sion|2 has quit [Ping timeout: 240 seconds]
hurikhan77 has quit [Ping timeout: 240 seconds]
yosafbridge has joined #ipfs
gpestana has quit [Ping timeout: 240 seconds]
misuto has quit [Ping timeout: 240 seconds]
Lochnair has joined #ipfs
gpestana has joined #ipfs
hurikhan77 has joined #ipfs
sion|2 has joined #ipfs
misuto has joined #ipfs
ylp has joined #ipfs
bengates has joined #ipfs
The_8472 has quit [Ping timeout: 248 seconds]
The_8472 has joined #ipfs
satyanweshi has joined #ipfs
ygrek__ has quit [Ping timeout: 240 seconds]
gmoro has joined #ipfs
Caterpillar has joined #ipfs
teleton has joined #ipfs
probono has quit [Ping timeout: 240 seconds]
probono has joined #ipfs
Monokles has quit [Ping timeout: 240 seconds]
faenil has quit [Ping timeout: 240 seconds]
Monokles has joined #ipfs
faenil has joined #ipfs
lord| has joined #ipfs
The_8472 has quit [Ping timeout: 252 seconds]
The_8472 has joined #ipfs
RamRanRa has joined #ipfs
satyanweshi has quit [Quit: Connection closed for inactivity]
bengates has quit [Remote host closed the connection]
bengates has joined #ipfs
kakra has joined #ipfs
hurikhan77 has quit [Ping timeout: 240 seconds]
ZaZ has joined #ipfs
bengates has quit [Remote host closed the connection]
bengates has joined #ipfs
rardiol has joined #ipfs
gmoro has quit [Remote host closed the connection]
gmoro has joined #ipfs
gmoro has quit [Read error: Connection reset by peer]
gmoro_ has joined #ipfs
gmoro__ has joined #ipfs
gmoro_ has quit [Ping timeout: 268 seconds]
gmoro has joined #ipfs
gmoro__ has quit [Ping timeout: 276 seconds]
gmoro_ has joined #ipfs
gmoro has quit [Read error: Connection reset by peer]
bengates has quit [Read error: Connection reset by peer]
bengates has joined #ipfs
gmoro has joined #ipfs
hurikhan77 has joined #ipfs
gmoro_ has quit [Ping timeout: 265 seconds]
kakra has quit [Ping timeout: 240 seconds]
gmoro has quit [Remote host closed the connection]
gmoro has joined #ipfs
__jrjsmrtn__ has quit [Ping timeout: 240 seconds]
__jrjsmrtn__ has joined #ipfs
oryxshrineDiscor is now known as shrineoryx6555[m
shrineoryx6555[m is now known as shrineoryxDiscor
zeden has joined #ipfs
Acacia has quit [Ping timeout: 245 seconds]
lufi has quit [Write error: Connection reset by peer]
zeden has quit [Client Quit]
zeden has joined #ipfs
ZaZ has quit [Read error: Connection reset by peer]
lufi has joined #ipfs
lufi has joined #ipfs
lufi has quit [Changing host]
KempfCreative has joined #ipfs
Acacia has joined #ipfs
zeden has quit [Quit: WeeChat 2.6]
jcea has joined #ipfs
zeden has joined #ipfs
zeden has quit [Quit: WeeChat 2.6]
mowcat has joined #ipfs
zeden has joined #ipfs
KempfCreative has quit [Ping timeout: 250 seconds]
KempfCreative has joined #ipfs
airstorm has quit [Quit: airstorm]
mowcat has quit [Remote host closed the connection]
halifoxDiscord[m is now known as halifox4409[m]
terry_hello[m] has joined #ipfs
eleitlDiscord[m] has joined #ipfs
<eleitlDiscord[m]> Assuming I want to store 30 TB (~2.5 million documents) on public IPFS servers, by adding new servers with the documents pinned, how bad a time am I going to have?
teleton has quit [Quit: Leaving]
ShadowJonathanDi has joined #ipfs
<ShadowJonathanDi> eleitl i'd recommend #filecoin or #ipfs-cluster
<eleitlDiscord[m]> Thanks, but ipfs-cluster assumes I'm controlling all the servers, whereas this is a bunch of individuals joining the public IPFS grid with their own nodes, where the content is pinned.
<ShadowJonathanDi> again, filecoin
<ShadowJonathanDi> and if its ppl pinning stuff themselves
<ShadowJonathanDi> the only thing you're probably ever gonna get a problem with is bandwidth
<ShadowJonathanDi> i think
<ShadowJonathanDi> :conniehmm:
<eleitlDiscord[m]> Why do people need to deal with filecoin, if they just want to join the public ipfs grid?
<eleitlDiscord[m]> Bandwidth is not the problem, just the basic availability of content that only has one or a couple of pins.
<ZerXes> you want to make sure no data is lost if one of the individuals just leaves?
wzwldn[m] has left #ipfs ["User left"]
<eleitlDiscord[m]> Not necessarily. Missing data can be added from the original dataset later on.
<ShadowJonathanDi> how?
<eleitlDiscord[m]> So people coming and going is tolerable, somewhat.
<ShadowJonathanDi> :)
amatuni[m] has left #ipfs ["User left"]
<ZerXes> so what's the purpose of putting the data in IPFS?
<ShadowJonathanDi> what you're describing is the general *idea* of IPFS
<ShadowJonathanDi> what exactly do you want to solve?
<ZerXes> yeah
<eleitlDiscord[m]> I'm trying to push 2.5 million documents, about 30 TB total. The purpose is that not everybody has to deal with a 30 TB chunk.
<eleitlDiscord[m]> Would the current network have problems with that?
<ZerXes> no
<eleitlDiscord[m]> Good. Just wanted to check before trying it. Thanks.
<ZerXes> just put your node with all the 30TB data pinned online, and have the other users come online and start pinning the chunks they want.
<ZerXes> preferably have them connect directly to your master node
<ZerXes> to find the data faster
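A minimal CLI sketch of the workflow ZerXes describes; the path, address, peer ID and CIDs below are placeholders:
```
# on the master node: import the dataset (ipfs add pins recursively by default)
ipfs add -r /data/documents

# on a contributor's node: connect straight to the master node so its content
# is found without waiting on DHT lookups (placeholder address and peer ID)
ipfs swarm connect /ip4/203.0.113.10/tcp/4001/ipfs/QmMasterPeerID

# pin only the subset of documents this contributor wants to help host
ipfs pin add QmSomeDocumentCid
```
Pinned blocks survive garbage collection; merely cached blocks do not.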
<ShadowJonathanDi> 30 TB of data is a lot to copy, though
<ShadowJonathanDi> :coffeesip:
<ZerXes> yeah it will still not be *fast*
<ZerXes> but your question was basically "will the IPFS network die?" and no, it wont
<eleitlDiscord[m]> Would it work if let's say 100 people published the documents on their own ipfs nodes? Would the document URL be deterministic, or unique for each document, even if it's the same one?
<ZerXes> the document URL will always be the same for the same content
<eleitlDiscord[m]> Excellent.
<ZerXes> IPFS will hash the document content and use that hash as the address, basically
<ShadowJonathanDi> there are no URLs in IPFS
<ShadowJonathanDi> only CIDs
<ZerXes> so if 2 different people publish the same document it will have the same address and both of them will just become "seeders" from the start
<ShadowJonathanDi> Content IDs
<ShadowJonathanDi> CIDs are based off of the hash of the block it is referencing
<ShadowJonathanDi> blocks are 256kb max
<ShadowJonathanDi> blocks can reference other CIDs
<ShadowJonathanDi> this is how IPFS builds files, by bundling multiple blocks together
<ShadowJonathanDi> when a client wants a block, it'll put out that particular block on its "want list" (the specific implementation of this is bitswap)
<ShadowJonathanDi> other clients can then share the data, or not
<ShadowJonathanDi> also, the client can look for what clients actually can provide the data
<ShadowJonathanDi> and try to connect to the closest nodes, ideally ones that have the whole block recursively pinned
<ShadowJonathanDi> there are no URLs in IPFS
<ShadowJonathanDi> what you're thinking about probably is `/ipfs/Qm...`
<ShadowJonathanDi> those are ways for public HTTP gateways to differentiate between IPFS and IPNS
<ShadowJonathanDi> the latter of which is a way to mutably define a "pointer"
<ShadowJonathanDi> to a block of data
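A small sketch of the content-addressing point; identical CIDs assume the nodes add with the same settings (default chunker, CID version and raw-leaves), and the CID below is a placeholder:
```
# the same bytes produce the same CID on any node
echo "hello ipfs" | ipfs add -q

# a larger file is split into blocks (256 KiB max by default) linked from a
# root block; the links can be inspected with:
ipfs object links QmSomeRootCid
```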
<aschmahmann[m]> eleitl (Discord): a couple things you might run into.
<aschmahmann[m]> 1) Importing data into your local go-IPFS node might take a while (https://github.com/ipfs/go-datastore/issues/137, and https://github.com/ipfs/go-ipfs/issues/6523 are some issues about this). If you use the Badger Datastore with sync writes disabled it should help with that.
<aschmahmann[m]> 2) You may run into bandwidth issues advertising to the network that you have all of that content, so you may want to change the reprovider strategy (https://github.com/ipfs/go-ipfs/blob/master/docs/config.md) to pinned or roots
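Roughly, those two suggestions translate to the following (sketch, assuming a fresh go-ipfs repo; the Badger sync-writes toggle itself sits in the datastore spec of the config file):
```
# 1) create the repo with the Badger datastore profile
ipfs init --profile badgerds

# 2) only advertise pinned content (or just the roots of pins) to the DHT,
#    instead of every block in the blockstore
ipfs config Reprovider.Strategy pinned
# or:
ipfs config Reprovider.Strategy roots
```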
<ShadowJonathanDi> what `/ipfs/` is, is a *wrapper* for the classic internet to be able to reference IPFS file trees as URLs
<ShadowJonathanDi> i *really* recommend reading the whitepaper in general eleitl
<ShadowJonathanDi> it sounds like you're missing some core concepts in general about IPFS, and the ambition of hosting a 30TB dataset is nice, but please do it *right*, as importing 30TB of storage also copies it to the block store
<ShadowJonathanDi> and i personally dont even know if a block system can handle 30 Billion multihashes
<ShadowJonathanDi> or something of that number
<eleitlDiscord[m]> The documents are typically a few hundred kilobytes to a few megabytes. Some outliers go up to 100 GB, I think.
<eleitlDiscord[m]> So it would fit with the 256 kb blocks rather nicely.
TechWeasel has joined #ipfs
<ZerXes> what kind of documents are typically a few 100kb but up to 100GB? 0_o
<ShadowJonathanDi> that's not the point, im talking about how to think about structuring and defining that data
<ShadowJonathanDi> you can just as well gzip it all and put THAT huge chunk onto ipfs
<ShadowJonathanDi> it is possible
<ShadowJonathanDi> but is it useful?
<ShadowJonathanDi> is it structured?
<ShadowJonathanDi> no, of course not
<ShadowJonathanDi> is it efficient?
<ShadowJonathanDi> yes?
<ShadowJonathanDi> there are many possible ways you're gonna do this, but please keep in mind how the underlying system is actually going to *work* when you're done with it
<ShadowJonathanDi> since it is immutable
<ShadowJonathanDi> you cant change it once you define it
<ShadowJonathanDi> only scrap it and then import it again
<eleitlDiscord[m]> The documents are currently self-addressed by their own MD5 hash. So it is precisely the immutability here that is a good fit.
<ShadowJonathanDi> and you dont wanna have that "oh shit" moment when you already started spreading the data around
<ShadowJonathanDi> ...why md5?
<eleitlDiscord[m]> It's an old project, so tradition.
<ShadowJonathanDi> 1 sec
<ShadowJonathanDi> oh yeah
<ShadowJonathanDi> no
<ShadowJonathanDi> even if its gonna define the data with the md5 algorithm
<ShadowJonathanDi> you wont get the same hash
<ShadowJonathanDi> since its a hash of the block that has many more CIDs in it that refer to other blocks of data
<ShadowJonathanDi> so not the actual array of bytes the document is going to be
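A quick local check of that point (sketch; the filename is a placeholder). Since the CID is unrelated to the MD5, a lookup table mapping the existing MD5 names to CIDs would be needed:
```
# existing catalogue identifier
md5sum document.pdf

# CID the document would get in IPFS, computed without copying it into the
# block store (--only-hash / -n)
ipfs add --only-hash -q document.pdf
```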
<ShadowJonathanDi> im sorry im so confrontational, but i kinda panicked
<eleitlDiscord[m]> No problem, the network must be protected 😉
<ShadowJonathanDi> no not that
<ShadowJonathanDi> i just wanna prevent problems on your end
<ShadowJonathanDi> that can easily be prevented
<eleitlDiscord[m]> To recap, I can get a unique URI specifying the document (which is a hash of CIDs, etc.) regardless of which IPFS node the document has been published on?
<ShadowJonathanDi> CID
<ShadowJonathanDi> not URL or URI
<ShadowJonathanDi> you can make a URL with a CID via a public gateway
<eleitlDiscord[m]> If it's deterministic, then it would work. If it depends on where it has been published, it's going to have to be centralized, which is obviously problematic.
<ShadowJonathanDi> it doesnt depend on where it's published
Caterpillar has quit [Quit: You were not made to live as brutes, but to follow virtue and knowledge.]
<eleitlDiscord[m]> I'm assuming I'm working with a public gateway, since the application front end would serve URIs instead of paths into the local filesystem like it currently does.
<ShadowJonathanDi> though the physical (bc latency) and network distance from requesting nodes to your own (number of hops between a client finding you or your data) is going to play a factor
<ShadowJonathanDi> yeah
<ShadowJonathanDi> then i recommend using `window.ipfs` first of all
<ShadowJonathanDi> and maybe think about using js-ipfs
<ShadowJonathanDi> so the system does not abuse public gateways
<ShadowJonathanDi> but instead actually replicates the data
<ShadowJonathanDi> and distributes it
<ShadowJonathanDi> while users see it
<ShadowJonathanDi> more redundancy that way
<ShadowJonathanDi> if you link everything to a centralized gateway, that gateway is either probably going to blacklist your reference domain, or its gonna throttle it
<ShadowJonathanDi> public gateways are not to be abused for this, use in-browser (or local) ipfs
<ShadowJonathanDi> that's my recommendation
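For comparison, a fetch through a node's own local gateway versus a public one (sketch; the CID and path are placeholders):
```
# local gateway of a running ipfs daemon (default port 8080); the node caches
# and re-provides the blocks it fetches
curl http://127.0.0.1:8080/ipfs/QmSomeRootCid/document.pdf

# the same request against a public gateway; fine for light use, but heavy
# automated traffic tends to get throttled
curl https://ipfs.io/ipfs/QmSomeRootCid/document.pdf
```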
<eleitlDiscord[m]> I'm assuming the added ipfs nodes would each run their own public gateway, and publish the document batches with automation.
<ShadowJonathanDi> :facepalm:
<ShadowJonathanDi> no, that's not what a public gateway means...
<eleitlDiscord[m]> You need a gateway in order to serve the application. Whether it's public, or not, it doesn't matter, as long as the application sees it.
<eleitlDiscord[m]> Would you recommend using the go IPFS implementation for this? Dockerize it, perhaps?
<ShadowJonathanDi> what're you talking about right now?
<ShadowJonathanDi> the viewing software that uses an ipfs client to pull the data,
<ShadowJonathanDi> or a client that pins the data?
<eleitlDiscord[m]> The viewing software is nginx/PHP talking to a MariaDB and pulls filenames, which in this case would be references to a URI that any public IPFS gateway could resolve.
<ShadowJonathanDi> :whatcat:
<eleitlDiscord[m]> In this case, there would be a local IPFS gateway talking to the application.
<ShadowJonathanDi> nginx and php are server applications
<ShadowJonathanDi> you wish to basically make the viewing software centralized?
<eleitlDiscord[m]> You could run your own copy of it, but it assumes that the document hashes resolve globally, so the backend is then distributed.
<ShadowJonathanDi> oh wait, the viewing software downloads files?
<ShadowJonathanDi> or at least
<ShadowJonathanDi> makes requests to download the files
<ShadowJonathanDi> the actual files
<eleitlDiscord[m]> Sure, it's a bog normal web server with search functionality, serving up documents. Only in this case you're pulling them from the network.
<ShadowJonathanDi> ah yeah
<ShadowJonathanDi> hmm
<ShadowJonathanDi> in that case you can use a public gateway (and not make clients run ipfs locally)
<ShadowJonathanDi> but at the very least know that i think they wont be extremely happy about it
<swedneck> <eleitlDiscord[m] "Sure, it's a bog normal web serv"> why not just reverse proxy go-ipfs?
<eleitlDiscord[m]> Yeah, sure you could probably bundle it into the nginx.
<swedneck> bundle it?
<eleitlDiscord[m]> Bundle the reverse proxy for go-ipfs (presumably, running on localhost or some other private space) along with the PHP application.
<swedneck> what's your goal?
<ShadowJonathanDi> bundle the...
<ShadowJonathanDi> breathe in
<ShadowJonathanDi> :facepalm:
<ShadowJonathanDi> first of all
<ShadowJonathanDi> please explain to me exactly how that is any more efficient or any more useful than just running an electron app
<ShadowJonathanDi> or making it a web app
<ShadowJonathanDi> website
<eleitlDiscord[m]> Thank you, you have been very helpful. I think I'll try to set up go-ipfs instance, and play with it.
<ShadowJonathanDi> ipfs file that is a website
<swedneck> you don't need anything else to view stuff on the ipfs network, you can literally just reverse proxy localhost:8080
<eleitlDiscord[m]> It's a web app, probably in a docker container.
<ShadowJonathanDi> or anything other than an application that runs on local clients, runs nginx to proxy something, and then uses php to render it?
<ShadowJonathanDi> and where exactly does that docker container live
<ShadowJonathanDi> if i may ask
<swedneck> <eleitlDiscord[m] "It's a web app, probably in a do"> but why
<eleitlDiscord[m]> If I try to publish a 30 TB corpus decentrally, and tell each contributor to run a Docker container with a subset of the data set, I can package go-ipfs into the Docker container along with the application, and script it to publish in chunks of, say, 100 GB.
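One possible shape for that contributor setup, as a sketch only; it follows the official go-ipfs Docker image conventions, but the volume paths and names are placeholders:
```
# run go-ipfs from the official image, mounting the assigned chunk of
# documents at /export and keeping the repo on a named volume
docker run -d --name ipfs-node \
  -v /srv/doc-subset:/export \
  -v ipfs-repo:/data/ipfs \
  -p 4001:4001 \
  ipfs/go-ipfs:latest

# import and pin the mounted subset (note: this copies it into the block store)
docker exec ipfs-node ipfs add -r /export
```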
<ShadowJonathanDi> ....sure
<swedneck> hmm
<eleitlDiscord[m]> Any show stoppers I'm not seeing?
<swedneck> that sounds okay, but i don't think you need php or anything like that
<ShadowJonathanDi> i think your show only consists of them
<ShadowJonathanDi> tbh
<ShadowJonathanDi> im not gonna mince words, i personally think you have not thought this out in the slightest, or even know how to future proof or how to properly think about all of this
<ShadowJonathanDi> in a true structured way
<ShadowJonathanDi> and first of all
<ShadowJonathanDi> know exactly how ipfs works
<eleitlDiscord[m]> I obviously don't know how ipfs works, which is why I came here. And you have been very helpful, thank you very much.
<eleitlDiscord[m]> As for future-proofing, the application logic has been working for a decade without any changes, so I don't expect it to change.
<ShadowJonathanDi> :pinkbored:
<ShadowJonathanDi> me neither
<eleitlDiscord[m]> If it doesn't work, then I've wasted some of your time, and plenty of mine.
<ShadowJonathanDi> hmmm
Trieste_ is now known as Trieste
<ShadowJonathanDi> im gonna be honest with you here, i think your idea is amazing, but you probably need to ask a few more fellas around here
<ShadowJonathanDi> im also just new, so i can be wrong, but please talk to them properly about this before attempting to do this
<eleitlDiscord[m]> Sure, I'll fool around and will be probably back asking questions. Thanks again.
rardiol has quit [Ping timeout: 250 seconds]
<ShadowJonathanDi> no problem :stevonieeee:
<aschmahmann[m]> eleitl (Discord): since it sounds like you have a number of contributors you're expecting to host the large dataset it might be worth taking a look at IPFS cluster
<eleitlDiscord[m]> It would probably make the publishing more efficient.
<eleitlDiscord[m]> But organizing volunteers is probably harder.
rardiol has joined #ipfs
<aschmahmann[m]> seems reasonable, the organic ipfs growth might be easier to deal with. I'd still look into cluster more though (perhaps ask on the #freenode_#ipfs-cluster:matrix.org channel), since it may make it both easier to locate data and more likely for uncommon portions of the dataset to be replicated.
<eleitlDiscord[m]> Thanks, will look into it.
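If cluster does turn out to fit, the pinning side looks roughly like this (sketch, assuming ipfs-cluster runs alongside each contributor's go-ipfs node; the CID is a placeholder):
```
# pin through the cluster with a replication target, so less popular parts of
# the dataset still end up on several peers
ipfs-cluster-ctl pin add QmSomeRootCid --replication-min 3 --replication-max 10

# check which cluster peers currently hold the pin
ipfs-cluster-ctl status QmSomeRootCid
```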
TechWeasel has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<M2dope[m]> <eleitlDiscord[m] "But organizing volunteers is pro"> eleitl (Discord):
jonnycrunch has joined #ipfs
The_8472 has quit [Ping timeout: 248 seconds]
ylp has quit [Quit: Leaving.]
The_8472 has joined #ipfs
jesse22 has joined #ipfs
a_0x07[m] has joined #ipfs
bengates has quit [Remote host closed the connection]
kapil_ has quit [Quit: Connection closed for inactivity]
jcea has quit [Remote host closed the connection]
The_8472 has quit [Ping timeout: 252 seconds]
nast has joined #ipfs
barrygDiscord[m] has joined #ipfs
<barrygDiscord[m]> Hey, I'm having an issue with js-ipfs. Apparently using `ipfs.add` does not wrap the content in a folder and just returns the first hash, while the same call with the same array of content through http-client does return the hash of the folder containing all the content. Where can I troubleshoot this further?
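For reference, the CLI analogue of this behaviour is the -w/--wrap-with-directory flag; js-ipfs exposes a corresponding add option (`wrapWithDirectory`, if memory serves), which is worth checking before digging further. A sketch:
```
# without -w, each file gets its own hash and no containing folder is created
ipfs add file1.txt file2.txt

# with -w / --wrap-with-directory, the files are wrapped in a folder and the
# folder's hash is printed as the final entry
ipfs add -w file1.txt file2.txt
```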
The_8472 has joined #ipfs
alexgr has quit [Remote host closed the connection]
mapetik has joined #ipfs
AgenttiX has joined #ipfs
Ai9zO5AP has joined #ipfs
xelra has quit [Remote host closed the connection]
i9zO5AP has quit [Ping timeout: 276 seconds]
mowcat has joined #ipfs
xelra has joined #ipfs
ipfs-stackbot has quit [Remote host closed the connection]
nast_ has joined #ipfs
ipfs-stackbot has joined #ipfs
nast_ has quit [Quit: leaving]
Adbray has quit [Remote host closed the connection]
nast_ has joined #ipfs
Adbray has joined #ipfs
nast_ has quit [Quit: leaving]
Adbray has quit [Ping timeout: 250 seconds]
xelra has quit [Remote host closed the connection]
xelra has joined #ipfs
jonnycrunch has quit [Quit: Textual IRC Client: www.textualapp.com]
nast has quit [Quit: Leaving]
jcea has joined #ipfs
bsm1175321 has joined #ipfs
CCR5-D32 has quit [Quit: CCR5-D32]
Belkaar_ has quit [Quit: bye]
Belkaar has joined #ipfs
Belkaar has joined #ipfs
Belkaar_ has joined #ipfs
ottpeter has joined #ipfs
Belkaar has quit [Ping timeout: 268 seconds]
Jybz has joined #ipfs
<ottpeter> hi! After I add a folder with ipfs add -r, I can't access it through https://ipfs.io/ipfs/GIVEN_HASH
<ZerXes> ottpeter: it will take some time for the data to find its way to the ipfs.io nodes
<ZerXes> keep trying the hash and after a few minutes it should appear
<ZerXes> depending on how large the folder you added is
<ottpeter> @ZerXes ok, thank you!
<ottpeter> yes, it's there
<ottpeter> thanks!
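For anyone following along, a sketch of that flow (the hash is a placeholder); the adding node has to stay online, since the gateway fetches the blocks from it on the first request:
```
# add the folder; the last line printed is the directory's root hash
ipfs add -r mydir

# optionally announce the root to the DHT to speed up discovery
ipfs dht provide QmGivenRootHash

# then request it from the public gateway (the first fetch can take a while)
curl https://ipfs.io/ipfs/QmGivenRootHash/
```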
maksimvrs[m] has joined #ipfs
Clarth has joined #ipfs
mloki_ has quit [Ping timeout: 252 seconds]
Soo_Slow has joined #ipfs
rendar has quit []
bsm1175321 has quit [Ping timeout: 240 seconds]
xfzDiscord[m] has joined #ipfs
shizy has joined #ipfs
shizy has quit [Client Quit]
teleton has joined #ipfs
The_8472 has quit [Ping timeout: 248 seconds]
teleton has quit [Remote host closed the connection]
The_8472 has joined #ipfs
rardiol has quit [Ping timeout: 268 seconds]
The_8472 has quit [Ping timeout: 252 seconds]
The_8472 has joined #ipfs
rardiol has joined #ipfs
Jybz has quit [Quit: Konversation terminated!]
xelra has quit [Remote host closed the connection]
xelra has joined #ipfs
Newami has joined #ipfs
Newami has quit [Quit: Leaving]
mapetik has quit [Remote host closed the connection]
mapetik has joined #ipfs
FrozenFire[alt] has joined #ipfs
<FrozenFire[alt]> I'm doing some research for a project idea. My hope is to build an EFI application which implements the absolute smallest possible IPFS client, and can retrieve another EFI application from an address on IPFS, verify it, and load it.
<FrozenFire[alt]> Is there a good example of a "smallest implementation" of an IPFS client?
Clarth has quit [Remote host closed the connection]
Clarth has joined #ipfs
mloki has joined #ipfs
dwilliams has joined #ipfs
mapetik_ has joined #ipfs
mapetik has quit [Ping timeout: 252 seconds]
schmegeggy has joined #ipfs
xelra has quit [Remote host closed the connection]
xelra has joined #ipfs
schmegeggy has quit [Ping timeout: 265 seconds]
ecloud has quit [Ping timeout: 245 seconds]
cxl000 has quit [Quit: Leaving]
mapetik_ has quit [Remote host closed the connection]
KempfCreative has quit [Ping timeout: 240 seconds]
Soo_Slow has quit [Quit: Soo_Slow]
Fessus has joined #ipfs
ottpeter has quit [Quit: Leaving]
jakobvarmose has quit [Quit: ZNC 1.7.2 - https://znc.in]
Fessus has quit [Remote host closed the connection]
jakobvarmose has joined #ipfs
nast_ has joined #ipfs
ecloud has joined #ipfs
cris has quit []
nast_ has quit [Client Quit]
cris has joined #ipfs
darkout has joined #ipfs
antonio[m]2 has joined #ipfs
The_8472 has quit [Ping timeout: 248 seconds]
schmegeggy has joined #ipfs
schmegeggy has quit [Client Quit]
The_8472 has joined #ipfs
darkout has quit [Quit: WeeChat 2.6]
jcea has quit [Remote host closed the connection]
jesse22 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
nimaje has quit [Read error: Connection reset by peer]
nimaje has joined #ipfs
darkout has joined #ipfs
darkout has quit [Client Quit]
darkout has joined #ipfs
linuxdaemon has left #ipfs [#ipfs]
darkout has quit [Client Quit]
darkout has joined #ipfs
rardiol has quit [Ping timeout: 250 seconds]
darkout has quit [Client Quit]
darkout has joined #ipfs
darkout has quit [Client Quit]
darkout has joined #ipfs
VoiceOfReason has joined #ipfs
darkout has quit [Client Quit]
FrozenFire[alt] has quit [Remote host closed the connection]