<xrogaan>
odd thing: if I set up another daemon on my local network, it takes quite a while to discover it
<xrogaan>
thanks
<xrogaan>
shouldn't ipfs be aware of the local network and discover whether other nodes exist within it?
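go-ipfs discovers peers on the local network via mDNS, which is controlled by the Discovery section of the node config. A minimal check, assuming a go-ipfs node with the default config layout:

    # is local (mDNS) peer discovery enabled?
    ipfs config Discovery.MDNS.Enabled
    # the mDNS announce interval
    ipfs config Discovery.MDNS.Interval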
G-Ray has joined #ipfs
maxlath has quit [Quit: maxlath]
funrep has joined #ipfs
s_kunk has quit [Ping timeout: 260 seconds]
<xrogaan>
mmh, is it possible to host a bbs over ipfs?
ianopolous has joined #ipfs
PseudoNoob has joined #ipfs
<funrep>
if ipfs were to replace http, would it work like it does today, except that if someone has enabled sharing of a website i visit with other peers (as i understand it, nodes aren't forced to share their current content), i would receive a copy from that node instead? (assuming it's closer in the network / somehow measured as faster)
bastianilso__ has joined #ipfs
ralphtheninja has quit [Ping timeout: 252 seconds]
ralphtheninja has joined #ipfs
bertschneider has joined #ipfs
bertschneider has quit [Client Quit]
bertschneider has joined #ipfs
Kane` has quit [Remote host closed the connection]
bertschneider has quit [Quit: Bye bye]
s_kunk has joined #ipfs
<funrep>
also, if the network is supposed to be permanent, where is the version control data stored? will the network grow slower and slower as the history grows bigger?
<xrogaan>
there is probably a cap on that history
<xrogaan>
or cache
<xrogaan>
I mean, people's hard drives aren't infinite
<xrogaan>
as for versioning, you'll always need a master.
Kazuhiro has joined #ipfs
bertschneider has joined #ipfs
<xrogaan>
it would be great if urls were easier to remember
<hsanjuan>
daviddias: do you know why jsipfs daemon would fail to start with "Error: listen EADDRINUSE :::4001" when I'm 100% positive that it is not in use?
<hsanjuan>
daviddias: thanks, interesting that google didn't even show this issue
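A quick way to see what is actually holding port 4001, using ordinary system tools rather than anything ipfs-specific (either command works):

    # list the process listening on TCP 4001
    sudo lsof -iTCP:4001 -sTCP:LISTEN
    # or, with iproute2:
    ss -tlnp | grep 4001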
mildred has joined #ipfs
<xrogaan>
so `ipfs files`, as of now, is independent of all the rest?
<xrogaan>
which is why ls returns an empty directory
<Mateon1>
xrogaan: `ipfs files` is a simplified mutable filesystem API
<Mateon1>
It is separate from all /ipfs hashes; to see the hashes on your node, run `ipfs refs local`
<xrogaan>
ipfs files is empty here, doesn't do anything
<Mateon1>
Yes, it starts empty
<xrogaan>
I added some files and a directory through `ipfs add`, but `ipfs files` remains empty
<Mateon1>
In order to add something to the MFS filesystem, use `cat file | ipfs files write --create /path/to/thing`, or, if you have added it to IPFS already, `ipfs files cp /ipfs/QmHASH... /path/to/thing`
<xrogaan>
so, I don't really understand its purpose. If I add something to files, it doesn't show up in the files listed in the webui.
<Mateon1>
xrogaan: The add command only adds the file to the network, not to the mutable filesystem.
domanic has joined #ipfs
<Mateon1>
If you add a file, let's say some image via `ipfs add ./image.png` you will get a hash on the last line of the output.
<Mateon1>
That hash is the ID of that image on the entire network
<xrogaan>
yeah
<Mateon1>
In order to add that image to MFS, you need to do `ipfs files cp /ipfs/<Hash you got from ipfs add> /image.png`
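Putting the commands from this exchange together, a minimal sketch of the add-then-copy flow (image.png and the paths are only illustrative):

    # add the file to ipfs; the hash on the last line identifies it on the network
    ipfs add ./image.png
    # copy the already-added content into the mutable filesystem (MFS)
    ipfs files cp /ipfs/<hash from ipfs add> /image.png
    # it now shows up when listing MFS
    ipfs files ls /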
<xrogaan>
ipfs should definitely add stuff to the mutable filesystem automatically when the user does something, and use mfs as a simplified user interface.
<xrogaan>
yeah, but mfs is useless to me
<Mateon1>
xrogaan: I can see the confusion, but ipfs doesn't actually aim to create a mutable filesystem. It mainly aims to be a protocol for linking data and transferring it over a peer to peer/distributed network.
<xrogaan>
would be easier to manipulate local files though
<Mateon1>
I use MFS, and I find it useful for keeping a set of archives: instead of having a directory with all versions of the archive on disk, it's on MFS, mostly deduplicated. I only add a new version every once in a while
dominic_ has joined #ipfs
domanic has quit [Ping timeout: 256 seconds]
<Mateon1>
xrogaan: As an example, here's an archive of the xkcd comic: https://ipfs.io/ipfs/QmRcWfPTVNrDD7sUFwBsg1YcScp9WnPdo9rd4vaLhhf5ov The sum of the filesizes is pretty insane, about 3 gigabytes. If I kept a folder on disk with every single one of these subfolders, that would be the size. MFS creates a virtual filesystem with deduplication built in due to the way IPFS works.
<Mateon1>
For now, it's not possible to mount mfs via FUSE, but as far as I know, that's planned
<xrogaan>
oh, is it that the mfs doesn't share its content with the network?
<xrogaan>
anyhow, is there a way to remove something from the local cache? You know, if somebody put some bad content in there.
<Mateon1>
xrogaan: `ipfs repo gc` cleans all content that wasn't pinned to your node. That's why `ipfs pin add/rm` exists.
<xrogaan>
even content in the mfs?
<Mateon1>
All files added via `ipfs add` are pinned by default, and everything you request is cached and can be sent to other peers, but not pinned
<xrogaan>
good to know
<Mateon1>
xrogaan: Content in mfs is not pinned as far as I know. That is to support datasets larger than the node can store, for example scientific datasets terabytes in size.
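A short sketch of the pin and garbage-collection commands mentioned above (the hash is a placeholder):

    # keep a hash around even after garbage collection
    ipfs pin add <hash>
    # remove that pin again
    ipfs pin rm <hash>
    # list what is currently pinned
    ipfs pin ls
    # drop everything cached locally that is not pinned
    ipfs repo gc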
<xrogaan>
I read about blocklists on the bts; I believe those should also be local, to give individuals the liberty to ban content.
<xrogaan>
alright
<Mateon1>
We do have blocklists, you can add a list of hashes you do not want to share with others. This is useful if you run a public gateway and don't want to share illegal content
Oatmeal has quit [Remote host closed the connection]
<Kubuxu>
Mateon1, xrogaan: mfs is not pinned but applies something we call "best effort pin"
<Kubuxu>
which means that content that is on the node and is linked from mfs won't be GCed
Oatmeal has joined #ipfs
domanic has joined #ipfs
maxlath has joined #ipfs
dominic_ has quit [Ping timeout: 276 seconds]
<xrogaan>
Kubuxu, got it
<xrogaan>
There will be undesirable content pushed onto the network, as with all systems that permit free sharing of data. Some form of censorship/moderation will be needed.
<xrogaan>
not related to dmca
<Mateon1>
xrogaan: All blacklists are opt-in
<Mateon1>
The DMCA blacklist is just a specific list
<xrogaan>
consider this: somebody called manA wants to really annoy a third party, manB. manA proceeds to put private information about manB on ipfs to dox him.
dominic_ has joined #ipfs
domanic has quit [Ping timeout: 252 seconds]
<Mateon1>
xrogaan: Blacklists will support 'double-hashes', in order to not reveal the actual hash of the content. At least that's the plan (refs-denylist-dmca issue #2)
<xrogaan>
the people commenting on the issue do not seem very convinced :P
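As an illustration only of the 'double-hash' idea (the exact scheme in the denylist proposal may differ): a list entry can store the hash of the content hash, so the list itself never reveals which hash is being blocked.

    # hash the content hash itself; only this digest would appear in the denylist
    echo -n "QmSomeContentHash" | sha256sum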
Akaibu has quit []
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
dignifiedquire has quit [Quit: Connection closed for inactivity]
domanic has joined #ipfs
dominic_ has quit [Ping timeout: 256 seconds]
neurrowcat has joined #ipfs
cemerick has joined #ipfs
rgrinberg has joined #ipfs
neurrowcat has quit [Client Quit]
neurrowcat has joined #ipfs
dominic_ has joined #ipfs
domanic has quit [Ping timeout: 260 seconds]
<A124>
Mateon1 Reverse proxy, a few lines of code, done.
<Mateon1>
Actually, how efficient will IPLD be for tons of small objects with a lot of links? How far along are IPLD selectors (to get a whole object at once instead of 'crawling' links from a root object)? How much overhead will that imply? Just the size of the hash for each link?
captain_morgan has joined #ipfs
<haad>
richardlitt: yup, orbit-db would be great for that and more specifically the document store
<richardlitt>
haad: Thought so. Want to direct him there? Or should I?
<richardlitt>
how's Suomi?
<Polychrome[m]>
Where does ipfs store its data on a Windows system?
<Mateon1>
Polychrome[m]: By default, in %USERPROFILE%/.ipfs (on Win 7/8/10 that's C:\Users\<username>\.ipfs)
anewuser has joined #ipfs
<Polychrome[m]>
Thanks.
<Polychrome[m]>
Think I'll try to link that dir to my secondary drive that is not a tiny little SSD.
<Polychrome[m]>
There is no feature for configuring any of this on IPFS itself, right? I have to use linking via the file system itself.
<haad>
richardlitt: I'm trying to stay afk as much as possible this week, so would you mind directing him to it?
<richardlitt>
Already done
<haad>
richardlitt: Suomi is great, got the first snow the other day, so chilling inside in front of the fireplace :)
<haad>
richardlitt: looking forward to getting back to work already :)
<Mateon1>
Polychrome[m]: Instead of linking via NTFS junction, set the IPFS_PATH environment variable.
<Mateon1>
Polychrome[m]: It's documented in `ipfs deamon --help`, and possibly `ipfs init --help`
<richardlitt>
haad: Awesome. :)
<Mateon1>
daemon*
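A sketch of the IPFS_PATH approach on Windows (D:\ipfs is just an example location):

    :: set the repo location for the current session
    set IPFS_PATH=D:\ipfs
    :: or persist it for future sessions
    setx IPFS_PATH "D:\ipfs"
    :: then initialise and run as usual
    ipfs init
    ipfs daemon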
cemerick has quit [Ping timeout: 250 seconds]
lulis has joined #ipfs
bertschneider has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<Polychrome[m]>
Mateon1, well now :) I've completely missed that.
flyingzumwalt has joined #ipfs
jedahan has joined #ipfs
<flyingzumwalt>
haad are you around?
captain_morgan has quit [Ping timeout: 260 seconds]
<haad>
flyingzumwalt: idling. what's up?
<xrogaan>
Mateon1, aren't all files broken into small pieces anyhow?
<flyingzumwalt>
haad I'm going to set up labels-and-milestones for js-ipfs. I see that your config file for orbit stuff is in haadcode/labels-and-milestones
<flyingzumwalt>
so do I need to fork labels-and-milestones for each of the projects I'm using it with? (js-ipfs, libp2p, etc)
<xrogaan>
Mateon1, I mean, at best it should just be a list of blocks and some metadata
<flyingzumwalt>
haad or is there a way for me to commit just a labels-and-milestones.yml to each repo and then run the tool against that yml file?
cemerick has joined #ipfs
<haad>
flyingzumwalt: hmm... might be better to fork it once and set up a config file per project. the "npm run sync" command is what I run, and they seem to have all of that in the package.json file in the "scripts" section (https://github.com/haadcode/labels-and-milestones/blob/master/package.json). so perhaps you can edit that to fit your needs?
<Mateon1>
xrogaan: Currently, IPFS chunks files into 256kB blocks (plus slight overhead from protobuf encoding, but it's being worked on).
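A quick way to see the chunking in practice, assuming a file larger than 256kB (bigfile.bin is only an example):

    ipfs add ./bigfile.bin
    # one link per chunk of the file
    ipfs object links <hash from ipfs add>
    # number of links, block size and cumulative size
    ipfs object stat <hash from ipfs add>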
<haad>
flyingzumwalt: yeah that would be better, agreed.
<haad>
flyingzumwalt: ah, no, I was wrong! there's a "gslm" command in that package.json: "gslm": "github-sync-labels-milestones -v -c config.json -t $GITHUB_TOKEN"
<haad>
flyingzumwalt: so I guess you can do what you propose ^
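A rough sketch of the fork-once, config-per-project flow being discussed, built around the command quoted from package.json (the repository and file names are placeholders):

    git clone https://github.com/<your-fork>/labels-and-milestones
    cd labels-and-milestones
    npm install
    # the "gslm" script above boils down to this invocation
    ./node_modules/.bin/github-sync-labels-milestones -v -c config.json -t $GITHUB_TOKEN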
<xrogaan>
Giant bittorrent files take a lot of time to process; my guess is it will be the same for ipfs
<xrogaan>
(or ipld, whatever that is)
<flyingzumwalt>
thanks haad
<flyingzumwalt>
haad do you have it set up to run via CI or do you just do it manually?
<flyingzumwalt>
haad the process for running this tool is a bit strange.
<haad>
flyingzumwalt: I do it manually atm but I want to have it in CI. perhaps, if this becomes a lot of manual work for you, you can pioneer that path and set it up on CI for the projects you use it for? (if not, that's fine, I can also take a look someday(tm))
<haad>
flyingzumwalt: they have a CircleCI config file setup that should work out of the box according to their docs. maybe you can use that directly?
<flyingzumwalt>
Maybe I'm reading their docs wrong but it looks like it assumes you'll maintain a complete repo dedicated to this sync tool and rely on the CI for that repo to do all your synchronizing.
<haad>
flyingzumwalt: seems so, yeah
<flyingzumwalt>
I would want it to trigger when someone pushes to, say, js-ipfs, check if the labels-and-milestones.yml has changed, and re-run the `lint` and `sync` tasks when necessary
<flyingzumwalt>
Not an extremely complicated rewrite, but still would require some changes in the sync tool.
<flyingzumwalt>
For now I'll just do it manually.
ylp2 has left #ipfs [#ipfs]
AtnNn_ is now known as AtnNn
<haad>
flyingzumwalt: hmmm. would it make sense to put that labels-and-milestones repo under a directory in ipfs/pm, have all the project config files for it in the same repo, and have CI run it from ipfs/pm on changes (CircleCI supports triggering from branches, tags or by author, so we could limit it to not trigger on every commit to master)? that way all the milestones would be in one place, one source of truth.
PseudoNoob has joined #ipfs
<flyingzumwalt>
haad if we did it that way, people like diasdavid and whyrusleeping would have to edit a file in ipfs/pm in order to add labels or milestones and sync them across their repositories. Is that right? I guess that's not a horrible requirement to place on them...
mildred has quit [Ping timeout: 276 seconds]
anaptrix has quit [Ping timeout: 250 seconds]
jedahan_ has joined #ipfs
anaptrix has joined #ipfs
jedahan has quit [Ping timeout: 256 seconds]
xelra has quit [Remote host closed the connection]
bertschneider has joined #ipfs
Geertiebear has joined #ipfs
<flyingzumwalt>
haad I've got to step away from my computer for a sec, but here's the sample config I made for js-ipfs. I started to put a copy of all the labels-and-milestones code in that dir but then wondered about getting upstream changes from other versions of that code...
<flyingzumwalt>
sorry haad I forgot you're on vacation today. Thanks for helping me anyway!
<plaguedentity>
I am trying to use go-libp2p; I followed the installation instructions and it's not working.
anewuser has joined #ipfs
f4 has joined #ipfs
f4 has quit [Max SendQ exceeded]
Boomerang has quit [Quit: Leaving]
f4 has joined #ipfs
f4 has quit [Max SendQ exceeded]
matoro has quit [Ping timeout: 265 seconds]
f4 has joined #ipfs
f4 has quit [Max SendQ exceeded]
f4 has joined #ipfs
f4 has quit [Max SendQ exceeded]
f4 has joined #ipfs
f4 has quit [Max SendQ exceeded]
PseudoNoob has quit [Remote host closed the connection]
<plaguedentity>
I even tried using gx but that's not working: tcp.go:196: d.rd.DialContext undefined (type reuseport.Dialer has no field or method DialContext). So is there an issue with the struct?
f4 has joined #ipfs
f4 has quit [Max SendQ exceeded]
<plaguedentity>
So the package is broken; where do I get the older package?
lindybrits has joined #ipfs
gmcquillan__ has joined #ipfs
gmcquillan__ is now known as gmcquillan
maxlath has joined #ipfs
<lindybrits>
daviddias: Hi! :) I have successfully added a file from the browser (I suspect). It returned a hash value and threw no error. Using this same hash with ipfs.files.cat, however, threw the following: 'Unhandled rejection: Error: The dag node is a directory'. Thoughts?
gmcquillan has quit [Write error: Broken pipe]
gmcquillan has joined #ipfs
Encrypt_ has joined #ipfs
<Kubuxu>
plaguedentity: if you do gx install and then gx-go rewrite
<Kubuxu>
it will work
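A minimal sketch of that gx dance inside a go-libp2p checkout:

    # fetch the dependencies pinned in package.json
    gx install
    # rewrite import paths in the source tree to point at the gx packages
    gx-go rewrite
    # build as usual
    go build ./...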
<lindybrits>
daviddias: my bad. I forgot that adding files larger than 256kB will result in a directory-like structure :)
dignifiedquire has joined #ipfs
lindybrits has quit [Quit: Page closed]
maxlath has quit [Ping timeout: 265 seconds]
Aranjedeath has joined #ipfs
bastianilso__ has joined #ipfs
Qwertie has quit [Ping timeout: 256 seconds]
Qwertie has joined #ipfs
s_kunk has joined #ipfs
chax has joined #ipfs
wmoh has quit [Remote host closed the connection]
bertschneider has joined #ipfs
mmuller_ has quit [Ping timeout: 250 seconds]
mmuller has joined #ipfs
chax has quit [Ping timeout: 252 seconds]
funrep has quit [Ping timeout: 250 seconds]
Mizzu has joined #ipfs
funrep has joined #ipfs
locusf has quit [Read error: Connection reset by peer]
locusf has joined #ipfs
fiatjaf has quit [Ping timeout: 276 seconds]
dominic_ has quit [Ping timeout: 252 seconds]
chax has joined #ipfs
chax has quit [Remote host closed the connection]
thufir has quit [Remote host closed the connection]
dlight has quit [Quit: Leaving]
funrep has quit [Ping timeout: 256 seconds]
zeroish has joined #ipfs
ulrichard has joined #ipfs
funrep has joined #ipfs
Aranjedeath has quit [Quit: Three sheets to the wind]
funrep_ has joined #ipfs
funrep_ has quit [Ping timeout: 250 seconds]
chax has joined #ipfs
votz has joined #ipfs
Mizzu has quit [Quit: WeeChat 1.5]
bertschneider has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
funrep_ has joined #ipfs
bastianilso__ has quit [Remote host closed the connection]
bastianilso__ has joined #ipfs
lulis has quit [Ping timeout: 260 seconds]
funrep_ has quit [Ping timeout: 252 seconds]
maxlath has joined #ipfs
chax has quit [Remote host closed the connection]
chax has joined #ipfs
rgrinberg has joined #ipfs
chax has quit [Remote host closed the connection]
chax has joined #ipfs
funrep_ has joined #ipfs
funrep_ has quit [Ping timeout: 260 seconds]
matoro has joined #ipfs
taaem has quit [Ping timeout: 260 seconds]
funrep_ has joined #ipfs
lapinot has joined #ipfs
jedahan_ has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
funrep has quit [Remote host closed the connection]
ligi has joined #ipfs
funrep_ has quit [Ping timeout: 252 seconds]
chax has quit [Remote host closed the connection]
chax has joined #ipfs
anewuser has quit [Quit: anewuser]
jedahan has joined #ipfs
jedahan has quit [Changing host]
jedahan has joined #ipfs
chax has quit [Ping timeout: 260 seconds]
rgrinberg has quit [Ping timeout: 244 seconds]
funrep has joined #ipfs
ralphtheninja has quit [Ping timeout: 265 seconds]
ralphtheninja has joined #ipfs
matoro has quit [Ping timeout: 256 seconds]
maxlath has quit [Ping timeout: 250 seconds]
krs93 has joined #ipfs
rgrinberg has joined #ipfs
lulis has joined #ipfs
PseudoNoob has joined #ipfs
<funrep>
could someone share a hash to test?
chax has joined #ipfs
ralphtheninja has quit [Ping timeout: 252 seconds]
chax has quit [Remote host closed the connection]
maxlath has joined #ipfs
chax has joined #ipfs
krs93_ has joined #ipfs
<whyrusleeping>
Just had the bumpiest plane ride of my life
Geertiebear has quit [Quit: Leaving]
Aranjedeath has joined #ipfs
krs93 has quit [Ping timeout: 256 seconds]
lulis has quit [Quit: Leaving]
herzmeister has quit [Quit: Leaving]
mildred has joined #ipfs
Encrypt_ has quit [Quit: Sleeping time!]
herzmeister has joined #ipfs
m3s has quit [Read error: Connection reset by peer]
patcon has joined #ipfs
amgadpasha has joined #ipfs
cemerick has quit [Ping timeout: 260 seconds]
lulis has joined #ipfs
chax has quit [Read error: Connection reset by peer]
m3s has joined #ipfs
m3s has joined #ipfs
m3s has quit [Changing host]
chax has joined #ipfs
ralphtheninja has joined #ipfs
chax has quit [Remote host closed the connection]
chax has joined #ipfs
bastianilso__ has quit [Quit: bastianilso__]
shizy has quit [Quit: WeeChat 1.6]
bastianilso__ has joined #ipfs
chris613 has joined #ipfs
<lgierth>
:)
chax has quit [Remote host closed the connection]
<lgierth>
whyrusleeping: pollux is funny -- everything looks like it restarted (goroutines, peers, mem), but uptime doesn't show a restart. any idea?
<lgierth>
(not mem actually, but the rest)
chax has joined #ipfs
patcon has quit [Ping timeout: 260 seconds]
espadrine has joined #ipfs
chax has quit [Ping timeout: 256 seconds]
matoro has joined #ipfs
Chris[m]1 has joined #ipfs
shizy has joined #ipfs
PseudoNoob has quit [Read error: Connection reset by peer]
mildred has quit [Quit: Leaving.]
espadrine has quit [Ping timeout: 260 seconds]
<whyrusleeping>
>.>
<whyrusleeping>
that's odd
<lgierth>
and no cpu and gc burst that you usually see right after startup
chax has joined #ipfs
warner has quit [Remote host closed the connection]
warner has joined #ipfs
gmcquillan has quit [Write error: Broken pipe]
gmcquillan has joined #ipfs
cjb has quit [Quit: Ping timeout (120 seconds)]
funrep has quit [Quit: Lost terminal]
<whyrusleeping>
huh
<whyrusleeping>
goro dump?
chax has quit [Read error: Connection reset by peer]
rgrinberg has quit [Remote host closed the connection]
<slothbag>
hey guys, where can I read more details on the blockchain support in IPFS? I saw a recent commit with identifiers for different blockchains.. looks interesting
plaguedentity has quit [Ping timeout: 256 seconds]
maxlath has quit [Quit: maxlath]
<whyrusleeping>
slothbag: heh, you can look on my computer :P
<whyrusleeping>
it's supposed to be a secret!
anonymuse has quit [Remote host closed the connection]
patcon has joined #ipfs
<whyrusleeping>
(not secret as in closed source or anything, just secret as in "i wanted to surprise people")
<slothbag>
haha.. oh sorry, that's two secrets i've spoiled now in as many days.. first the upcoming IPLD release and now this.. lol
<whyrusleeping>
lol, the ipld release isn't much of a secret
<whyrusleeping>
at least, as far as i know :P
<whyrusleeping>
slothbag: but basically, the plan is to be able to ingest blocks and transactions from various cryptocurrencies into ipfs directly
<whyrusleeping>
and be able to walk ipfs paths through them
<whyrusleeping>
and link to them from ipfs objects
patcon has quit [Read error: Connection reset by peer]
<slothbag>
it sounds great, I remember Juan mentioned something along those lines at DevCon.. but was excited when I saw the commit
chax has quit [Read error: Connection reset by peer]
jedahan has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<whyrusleeping>
yeap!
<whyrusleeping>
i've been hacking on it this past week
<whyrusleeping>
while hanging out with various blockchain peoples
<whyrusleeping>
i've got bitcoin and zcash parsing working moderately well
<whyrusleeping>
when i get to some more solid internet i'm going to test my crap code better and look at integrating it
<whyrusleeping>
then, i'm going to set up some machines that will sit and ingest zcash blocks and transactions into ipfs as they are mined
rgrinberg has quit [Ping timeout: 256 seconds]
<whyrusleeping>
haven't gotten to ethereum yet because their blockchain is absolutely massive, and has a harder-to-parse format
<slothbag>
I guess the ultimate endgame would be for zcash, bitcoin, etc. to use IPFS as their backend directly
chax has joined #ipfs
chax has quit [Read error: Connection reset by peer]
<whyrusleeping>
yeap :)
<whyrusleeping>
and!
<whyrusleeping>
for us all to decide on a common format for describing merklelinks (CIDs ftw)
Kane` has quit [Remote host closed the connection]
<whyrusleeping>
so that the various blockchains and merkletree systems can all interop easily
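A hypothetical sketch of the kind of path traversal being described, once a block has been ingested; the command and the path segment names here are assumptions, not a confirmed interface:

    # walk from a bitcoin block to its parent, or into one of its transactions
    ipfs dag get <block-cid>/parent
    ipfs dag get <block-cid>/tx/0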
chax has joined #ipfs
patcon has joined #ipfs
<Kubuxu>
whyrusleeping: currently go-multicodec (which is not used anywhere in go-ipfs) is using paths like `/json`, but the spec says `/json/`. Should I update them all?
lulis has quit [Quit: Leaving]
<whyrusleeping>
uhmmm
<whyrusleeping>
sure?
<whyrusleeping>
catching another flight... bbl
<Kubuxu>
have a nice one
<Kubuxu>
(less bumpy).
shizy has quit [Ping timeout: 256 seconds]
patcon has quit [Read error: Connection reset by peer]
chax has quit [Remote host closed the connection]
chax has joined #ipfs
chax has quit [Ping timeout: 260 seconds]
matoro has quit [Ping timeout: 250 seconds]
fleeky__ has joined #ipfs
rgrinberg has joined #ipfs
fleeky_ has quit [Ping timeout: 250 seconds]
rgrinberg has quit [Ping timeout: 252 seconds]
patcon has joined #ipfs
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
chax has joined #ipfs
patcon has quit [Ping timeout: 260 seconds]
chax has quit [Read error: Connection reset by peer]