<achin>
are symbolic links something that fuse has control over? it would be neat if you could create symlinks in /ipfs to give friendly names to things
<whyrusleeping>
achin: you can!
<whyrusleeping>
and we make symlinks in /ipns
<whyrusleeping>
once we have a sort of 'alias' or 'label' system, we can do that
<achin>
oh yeah! that's right. ok cool, so this should be possible
<fazo>
I was thinking about building a media sharing app
<fazo>
it needs a way to get around the fact that there can only be one ipns name per node
voxelot has quit [Ping timeout: 264 seconds]
<achin>
why's that, fazo ?
<fazo>
achin: I wanted to make it so that each user has an object at /ipns/<userid>/sharedmedia
<lgierth>
jbenet: i responded to a few HN comments but i'm going to bed now -- hillary and devan are here and i wanna have breakfast with them tomorrow before they leave for portland
<giodamelio>
I was wondering about that myself. What if I want to host more than one website (which I do), do I need to set up multiple nodes?
<fazo>
achin: which stores links to all shared media so that you can view it in a static app
<fazo>
giodamelio: no, you can manually create a folder object with a folder for every website
<fazo>
giodamelio: but it's a pretty ugly workaround
<fazo>
giodamelio: so you have this folder object with a folder inside for every website you want to serve, then you assign this object to your ipns name
<giodamelio>
But then your sites have a folder in their url?
<fazo>
giodamelio: yeah they would be like /ipns/<giodamelio_id>/blog/index.html
G-Ray has quit [Quit: Konversation terminated!]
<fazo>
or /ipns/<giodamelio_id>/someapp/index.html
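A minimal sketch of the workaround fazo describes (directory names and the hash are placeholders):

    # one parent directory containing a folder per site
    mkdir -p sites/blog sites/someapp
    cp -r ~/blog/*    sites/blog/
    cp -r ~/someapp/* sites/someapp/

    # add the whole tree; the last hash printed is the parent directory's
    ipfs add -r sites

    # point the node's single ipns name at that parent hash
    ipfs name publish <parent-hash>
    # sites are then reachable at /ipns/<peer-id>/blog/ and /ipns/<peer-id>/someapp/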
<jbenet>
lgierth: thanks for letting me know.
<jbenet>
(re HN)
<jbenet>
i'll pay attention
<fazo>
If you need to give friendly names to things, I built this: /ipfs/QmSyURHE4qBKWCNYoX51B6VRMMfFS4HBhbjKY6vSnJSByN it's very barebones atm but it works
<fazo>
I plan to make it so you can share your bookmarks with a ipfs link and other people can import them
<whyrusleeping>
HN?
<jbenet>
actually, no i won't. ill be afk for a couple hours. someone else? else i'll pick up later. (whyrusleeping you have a lot to do, exempt from hn and irc)
<whyrusleeping>
oh crap. didnt notice that
<jbenet>
lol
<whyrusleeping>
i've been attempting to be productive
<whyrusleeping>
(and failing at it)
<whyrusleeping>
so i havent been on media
<giodamelio>
fazo: That's cool, but it seems like a bit of a hack. Is there any reason one node can't have multiple ipns names?
<fazo>
giodamelio: I think it's just that it's missing the implementation
<fazo>
giodamelio: in other words, it's not done yet
<giodamelio>
Ah
<whyrusleeping>
giodamelio: 'coming soon!'
qqueue has quit [Ping timeout: 240 seconds]
<giodamelio>
Cool
<whyrusleeping>
because of your article i may have to push that to be much sooner, lol
<whyrusleeping>
i'm going to put together a better 'temp' implementation while the final impl is solidified in specs
<whyrusleeping>
('that' being ipns consistency, not multiple keys, scatterbrained ATM)
<giodamelio>
Awesome. I don't know enough about the codebase to help much yet, but I will help test it.
<whyrusleeping>
giodamelio: much appreciated :)
simonv3 has quit [Quit: Connection closed for inactivity]
<jbenet>
whyrusleeping maybe wanna push that out and UDP as 0.3.8 in a couple days?
<jbenet>
would help a lot
* whyrusleeping
needs more coffee
qqueue has joined #ipfs
<jbenet>
whyrusleeping: need less irc :)
gordonb has quit [Quit: gordonb]
<whyrusleeping>
dude, tell me about it
<jbenet>
I'll bb in a couple hours.
vijayee_ has joined #ipfs
<jbenet>
everyone: it would help the core dev team a ton if other people can help answer questions more, or direct people to the https://github.com/ipfs/faq or the irc logs.
<achin>
we will hold down the fort
<whyrusleeping>
achin: <3
<giodamelio>
I'll answer what I can (though I think I'll have more questions than answers for a while yet)
<voxelot>
but can't node create a function like ipfs.resolve(hash, function(err, res) {...})?
<achin>
maybe open source projects are backed by a company that employs developers
<achin>
s/maybe/many/
<lithp>
that's the answer I'm expecting but I'd rather not assume it :) It just leads to more questions though, like how is the company funded?
<achin>
this was answered a few hours ago, let me see if i can figure out how to link to a log
<achin>
jbenet sez: "in short, we're funded by a number of investors (inc YC), who care about various things, from the future of the web, to CDNs, to devops, to bitcoin, to blockchains, to crypto tech. filecoin.io is going to be a source of revenue for us, to ensure we can continue upgrading the network stack."
<achin>
^ that's it. though weirdly it doesn't scroll to the message in question
<deltab>
does for me; the message isn't at the top though, so you can see context
<achin>
it just brings me to the top of the whole thing
<achin>
oh, durrr. this is because i have an extension blocking 3rd party JS
<achin>
my fault
amstocker has joined #ipfs
amstocker_ has joined #ipfs
amstocker has quit [Ping timeout: 240 seconds]
thomasreggi has quit [Remote host closed the connection]
<Pyrotherm>
achin: can you talk about, or are there plans to charge for the use of this service?
thomasreggi has joined #ipfs
<achin>
i don't speak for Protocol Labs (i was just quoting up there)
wasabiiii has quit [Quit: Leaving.]
<achin>
i'm just a random guy, so i can't even guess about how Protocol Labs is planning on making a profit (other than the brief mention about filecoin)
<fazo>
achin: my bet is on investors and filecoin
<fazo>
I think they'll use filecoin just like everyone else, but by being the developers they'll be the first so if filecoin takes off they'll make money
<achin>
mine too. the go-ipfs impl is quite sizable and has a very liberal license. it will almost certainly not be relicensed, and it's a distributed application. i don't see how anyone can charge anyone else to run IPFS
<fazo>
I don't see other options
<voxelot>
Captain Jean-Luc Picard: The economics of the future are somewhat different. You see, money doesn't exist in the 24th century.
thomasreggi has quit [Remote host closed the connection]
thomasreggi has joined #ipfs
<davidar>
jbenet has said he has no plans of making ipfs any less open, as it would be counter to the entire purpose of the project
<Pyrotherm>
awesome, my work on getting a server online continues...
<davidar>
The business model is to offer services on top of ipfs, iirc
qqueue has quit [Ping timeout: 240 seconds]
amstocker__ has joined #ipfs
qqueue has joined #ipfs
amstocker_ has quit [Ping timeout: 265 seconds]
qqueue has quit [Ping timeout: 260 seconds]
qqueue has joined #ipfs
fazo has quit [Remote host closed the connection]
HostFat has quit [Read error: Connection reset by peer]
apophis has joined #ipfs
sseagull has quit [Quit: leaving]
<amstocker__>
not sure if I'm missing something, but how would i publish an ipfs object to a custom path on ipns/
<achin>
that's not something you can do right now, except for doing some DNS trickery (adding a special TXT record)
<amstocker__>
ok so I can only publish one object at a time to ipns?
<achin>
yes, but that object can be a directory
<voxelot>
anyone worked with the api and ipns?
<amstocker__>
ok so you can manage paths just by managing a directory object
<amstocker__>
make sense
<amstocker__>
makes*
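The "DNS trickery" achin mentions is dnslink: a TXT record on a domain you control that maps the domain to an IPFS path, giving a friendly /ipns/<domain> name. A sketch, with the domain and hash as placeholders:

    ; TXT record in the example.com zone
    example.com.  IN  TXT  "dnslink=/ipfs/<directory-hash>"
    ; gateways then resolve /ipns/example.com to that directory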
lithp has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
apophis has quit [Quit: This computer has gone to sleep]
<pinbot>
[host 7] failed to grab refs for /ipfs/QmWGb7PZmLb1TwsMkE1b8jVK4LGceMYMsWaSmviSucWPGG: Post http://[fcdf:a296:afe3:7118:4135:cc0b:ff92:4585]:5001/api/v0/refs?arg=/ipfs/QmWGb7PZmLb1TwsMkE1b8jVK4LGceMYMsWaSmviSucWPGG&encoding=json&stream-channels=true&r=true&: dial tcp [fcdf:a296:afe3:7118:4135:cc0b:ff92:4585]:5001: connection timed out
<kyledrake>
I'm getting a timeout on pinning that on my ipfs node too
<kyledrake>
well it's not timing out but it's taking a long time
<kyledrake>
And I imagine there's some sort of proxy in front of the pinbot that's timing out because it's using an HTTP request to the API, and a separate thing is timing out
<davidar>
kyledrake (IRC): I've been having disputes with pinbot too...
<davidar>
multivac, failed to pin
<multivac>
bad pinbot!
<kyledrake>
Perhaps it's down.
<whyrusleeping>
kyledrake: wheres that content from?
<whyrusleeping>
owen1_: yeah, you will need to run 'ipfs init' and also 'ipfs daemon' as part of his 'step 1'
<owen1_>
whyrusleeping: so far i didn't need the daemon command. not done yet (:
<whyrusleeping>
owen1_: if you run the 'publish' command without the daemon on, you wont be able to resolve your webpage
<whyrusleeping>
running the publish command in 'offline mode' wont broadcast your entry to other nodes in the network
<owen1_>
Path Resolve error: could not resolve name.
<owen1_>
ok. let me run the daemon
<whyrusleeping>
you will have to run the publish again once the daemon is online
<owen1_>
ok
<owen1_>
boom! works
<whyrusleeping>
wooo!
<whyrusleeping>
just a note though: ipns is still 'not done yet', some things may fail, take a long time, or randomly disappear until we finalize the implementation
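Pulling owen1_'s session together, the workflow is roughly the following (the name subcommand is spelled as in current go-ipfs; the hash is a placeholder):

    ipfs init                  # one-time repo setup
    ipfs daemon &              # must be online or the entry isn't broadcast
    ipfs add -r website        # note the top-level hash it prints
    ipfs name publish <hash>   # bind your peer id to that hash
    # then browse /ipns/<peer-id>/index.html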
<owen1_>
i have no idea what ipfs is...but it's cool (: so it's a way to publish website for free?
<owen1_>
and it's using blockchain?
<spikebike>
no
<whyrusleeping>
owen1_: haha, i encourage you to read the whitepaper.
<spikebike>
not yet
<whyrusleeping>
its not using a blockchain
<owen1_>
NO BLOCKCHAIN? NOT COOL
<owen1_>
j/k
<whyrusleeping>
you could say that its a way to 'publish' website for free, but you still have to be concerned about the hosting
<whyrusleeping>
adding content to ipfs doesnt mean that it gets stored anywhere else on the internet
<whyrusleeping>
content in ipfs is only stored by nodes who request it
bedeho has joined #ipfs
<owen1_>
my site is hosted somewhere no?
<whyrusleeping>
well, currently its just on your computer
<whyrusleeping>
and temporarily on the nodes of anyone who views it
<spikebike>
i.e. "open /home/oren/.ipfs/blocks/12200f17/put-181977319:"
<zignig>
spikebike: same thing
<whyrusleeping>
owen1_: is your daemon running?
<spikebike>
well opening tcp connections to 40 some hosts doesn't seem like that big a deal
<spikebike>
is there a file handle leak?
<owen1_>
whyrusleeping: now it's downloading a file. probably the video
<whyrusleeping>
spikebike: 250 some, and we open up to ten connections concurrently while dialing peers to speed up that process
<owen1_>
i thought it's going to open it in the browser. stream it. that's what the demo showed.
<whyrusleeping>
and, on top of that, we have an epoll file descriptor for each tcp connection
<whyrusleeping>
owen1_: it will if your browser understands the video type
<owen1_>
whyrusleeping: how can i 'teach' my chromium?
<owen1_>
streaming is nice. i don't see the download even progressing
<whyrusleeping>
owen1_: i'm not sure, it has something to do with mime types and headers and stuff
<spikebike>
whyrusleeping: I ask because I suspect some of the (to me) surprising performance of IPFS is because of tcp
<spikebike>
tcp seems to work well under a wide variety of situations that can be tricky to match with udp
<whyrusleeping>
spikebike: yeah, tcp is really good at being tcp
<whyrusleeping>
and we arent going to get rid of it
<whyrusleeping>
but going with a udp based transport will give us a lot of good benefits, easier nat traversal, no head of line blocking, less resource consumption
<zignig>
whyrusleeping: and extra awesome sauce. ;)
<whyrusleeping>
anyways, i need some sleep. got a lot on my plate tomorrow
<whyrusleeping>
i'll leave zignig to talk about how cool udp is for me
<zignig>
gnite ! , sleep tight , don't let the nil pointers byte.
<whyrusleeping>
0.0
* whyrusleeping
gonna have nil pointer nightmares
bedeho has quit [Ping timeout: 250 seconds]
<zignig>
NOOOO!
<owen1_>
when is 'publishing' needed?
<zignig>
only when you want to attach the data to a name on your node.
<owen1_>
zignig: i don't understand what you just said (: i just added a picture to my website. i tried publishing again but i don't see my change.
<owen1_>
do i need to add -r again?
patcon has joined #ipfs
<zignig>
so you add data onto ipfs , it tells other nodes on the network that you have it.
<zignig>
_if_ ( and only if ) other users ask for it your node will hand it out.
bedeho has joined #ipfs
<zignig>
after that if anyone else asks for it, it will come from some or all of the nodes that have it.
<owen1_>
so i just add <img src="./lupus.jpg" alt="pic" /> to my index.html
<owen1_>
and add lupus.jpg ?
<zignig>
with 'publishing' ( which sort of works / is kind of broken at the moment ) you tell your node that it points to a different data hash.
<zignig>
with some signing trickery and timeouts and stuff, if anyone asks for the name (ipns) version of your node id, it will attempt to get the latest version of the data you have published.
<zignig>
then you need to add the entire site again to get the new hash.
<zignig>
for adding lupus.jpg
shea256 has joined #ipfs
<owen1_>
i modified my html, re-added with -r and republished. i also restarted the daemon, just in case. but i still see the old content.
<owen1_>
i noticed that my hash is the same as the old one.
<owen1_>
maybe i am doing something wrong
<zignig>
if it is the same hash , it has not been changed.
<owen1_>
ipfs add -r .
<owen1_>
ipfs publish <hash>
<owen1_>
and i see the same hash
<zignig>
you don't need the publish yet
shea256 has quit [Ping timeout: 244 seconds]
<zignig>
go up a directory ; ipfs add -r website
<zignig>
it will give you a hash back.
<owen1_>
ok. i see it now. also what is the difference between ipfs and ipns?
<owen1_>
this is sooo confusing ):
<owen1_>
btw, the globe is cool. i just don't know what to do with the web interface..
<spikebike>
owen1_: the web ui is a subset of the command line, so you can ignore it if you so choose
<zignig>
ipfs is always fixed , ipns is changeable.
<spikebike>
indeed, an ipfs checksum Qm.... ALWAYS refers to the same data: same content always gives the same checksum, same checksum always gives the same content
<spikebike>
IPNS allows human friendly name -> checksum.
<spikebike>
so you can update IPNS every time you change your content
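spikebike's point in two commands (the hash is a placeholder; byte-identical input always yields the identical hash):

    $ echo hello | ipfs add -q
    <hash>
    $ echo hello | ipfs add -q
    <hash>    # same content, same checksum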
<Vyl>
google with site:gateway.ipfs.io does a pretty good job of indexing things :>
<spikebike>
heh, cool, found a real time qrcode renderer
<owen1_>
spikebike: so i can move my entire blog/website to ipfs
<spikebike>
yes, but you'd be depending on gateway.ipfs.io giving you free bandwidth, unless you run your own gateway
<spikebike>
gateway seems more like a demo, not a bandwidth charity
<Vyl>
I use my own gateway now.
<Vyl>
Though my concerns regarding IPFS's lack of concern for privacy remain.
<Vyl>
This is going to be a problem, some day. Not today, but some day.
<spikebike>
yeah, today it's best for world readable static content
<owen1_>
Vyl: what does 'gateway' mean? is it free? is it the same as buying a domain
<owen1_>
?
<spikebike>
the ipfs folks pay for bandwidth to gateway.ipfs.io, which you use if you host your site on IPFS and then send some 3rd party an http:// url to view
<owen1_>
and will gateway.ipfs.io ever go down?
<spikebike>
yes
<owen1_>
OH NOES
<spikebike>
if you want control over your site and its uptime you should run your own gateway
<Vyl>
owen1_: You can run your own node on the IPFS network.
<spikebike>
at least until people start running IPFS aware browsers
<Vyl>
You don't need to do so for viewing alone, though it does reduce load on gateway.ipfs.io. But it is required for publishing anything.
rendar has joined #ipfs
lithp has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
rongladney has quit [Quit: Connection closed for inactivity]
fleeky has quit [Ping timeout: 265 seconds]
fleeky has joined #ipfs
magneto1 has joined #ipfs
phobiai has joined #ipfs
bedeho has quit [Ping timeout: 244 seconds]
magneto1 has quit [Ping timeout: 250 seconds]
Encrypt has joined #ipfs
_jkh_ has joined #ipfs
* _jkh_
looks around
magneto1 has joined #ipfs
atomotic has joined #ipfs
<solarisfire>
Getting a lot of these this morning: ERRO[09:13:40:000] Path Resolve error: context deadline exceeded module=core/server
Encrypt has quit [Quit: Quitte]
<_jkh_>
Does anyone know, offhand, how to get ipfs to use a different location for its .ipfs directory than $HOME (or even / for some reason?)
<_jkh_>
oddly enough, if ipfs init is run at boot time it goes in /.ipfs, but if you run ipfs add -r later (also) as root, it looks in /root/.ipfs
<_jkh_>
however, I want it to go to /someplace/with/lots/of/space/.ipfs :)
<prosody>
I wonder how IPFS would be designed by the engineer who made At Ease.
elima has joined #ipfs
dignifiedquire has joined #ipfs
<solarisfire>
Can always use symlinks _jkh_ but not sure how to change it directly...
<_jkh_>
solarisfire: yeah, that’s eventually what I ended up doing, it just seemed like there should be a better way since following symlinks is generally a security hole and I figured they wouldn’t actually allow that, but they did. ;)
<_jkh_>
I sent Jeromy an email asking if there was a better way...
<solarisfire>
Whatever node is hosting the file I'm pulling, their upspeed is terrible XD
<solarisfire>
Damn, think that node went offline... :(
<prosody>
I've wondered if graphs for loading will help. Like torrents.
<spikebike>
_jkh_: check the Datastore section in ~/.ipfs/config
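whyrusleeping confirms the environment-variable route later in the log; a sketch:

    export IPFS_PATH=/someplace/with/lots/of/space/.ipfs
    ipfs init      # the repo is created under $IPFS_PATH
    ipfs daemon    # subsequent ipfs commands honor the variable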
<solarisfire>
owen1_: Is your node offline??
voxelot has quit [Ping timeout: 246 seconds]
rht___ has joined #ipfs
<rht___>
(how to calculate the reference count (redundancy) of a hash over ipfs?)
ipfstudents has joined #ipfs
<ipfstudents>
hi
<ipfstudents>
can anybody guide us towards the database implementations running over ipfs
<solarisfire>
Running on top of ipfs? Not sure I've heard of one...
<solarisfire>
whyrusleeping: ^^??
<ipfstudents>
any search engine for it to search the data
domanic has joined #ipfs
<solarisfire>
No search engine, it's static storage... You need a hash to be able to find something.
<ipfstudents>
are the hashes tagged in ipfs
<solarisfire>
tagged?
<lgierth>
good morning
* multivac
waves to lgierth
<lgierth>
!nobotsnack
<solarisfire>
hey lgierth
<ipfstudents>
let us assume I want to find a photo or a video in ipfs. what should be the mechanism for it, other than a static hash collection?
<lgierth>
you'd need to build some kind of index
<lgierth>
somebody wrote down a couple of notes about that, Blame i think
<ipfstudents>
can we implement a tagging mechanism with the hash so that it can become searchable
elima has quit [Ping timeout: 265 seconds]
<phobiai>
How does ipfs work with frequently changing content?
<phobiai>
As it is aiming to replace http, how does it handle things like twitter and facebook
Zajo has joined #ipfs
rschulman has quit [Ping timeout: 246 seconds]
orzo has quit [Ping timeout: 265 seconds]
ygrek has joined #ipfs
ygrek has quit [Read error: Connection timed out]
orzo has joined #ipfs
ygrek has joined #ipfs
ipfstudents has quit [Ping timeout: 246 seconds]
<solarisfire>
phobiai: It's aiming to replace http for static content, it doesn't handle any server side script atm so no PHP or Rails or Python or APIs... So it could host all of the javascript and html for something like Facebook or Twitter, but not the back end.
<lgierth>
well... you'd design your application differently
<lgierth>
i.e. distributed
<lgierth>
in a way that doesn't rely on central servers
<phobiai>
But the actual content would be transferred using some other protocol? Like using REST api over http using client-side javascript?
elima has joined #ipfs
<Zajo>
Hello. How do i compute a hash of a file to try to fetch it from ipfs network?
<lgierth>
Zajo: if you already have the file, why fetch it?
<Zajo>
wel :D
<Zajo>
lgierth: well :D
<Zajo>
lgierth: Good question. I just wanted to try to compute the hash ...
<lgierth>
Zajo: then try ipfs add -n
<lgierth>
it hashes without actually adding
<Zajo>
lgierth: thanks a lot
<lgierth>
phobiai: you'll be able to do everything through ipfs, if you design your application right
therealplato has quit [Ping timeout: 252 seconds]
therealplato has joined #ipfs
<Zajo>
lgierth: I know why. I added the file a long time ago and i don't remember the hash. That's exactly what happened to me.
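For reference, the -n (--only-hash) flag lgierth points to computes hashes without writing any blocks:

    ipfs add -n somefile.bin       # prints the hash, stores nothing
    ipfs add -n -r somedirectory   # same, recursively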
elwisp has left #ipfs [#ipfs]
CarlWeathers has quit [Remote host closed the connection]
ainmu has joined #ipfs
<ainmu>
hello
herch has joined #ipfs
ainmu has quit [Client Quit]
CarlWeathers has joined #ipfs
zugz has quit [Ping timeout: 252 seconds]
Guest73396 has joined #ipfs
Quiark has quit [Ping timeout: 244 seconds]
<VictorBjelkholm>
Oh, I just remembered that CDNjs is open source ( https://github.com/cdnjs/cdnjs ), we should definitely try to help them support storing the scripts in IPFS
<VictorBjelkholm>
are there any thoughts about setting limits in the daemon? It's eating all disk space and bandwidth currently, so I cannot run it since I have other services on the machine as well... (+ have to think about costs)
iHedgehog has joined #ipfs
iHedgehog has left #ipfs [#ipfs]
ygrek has quit [Ping timeout: 246 seconds]
<spikebike>
VictorBjelkholm: it only uses disk and bandwidth that you specifically ask for
<spikebike>
it's not caching/downloading content you didn't specifically ask for
<spikebike>
well there is some bandwidth and very little storage for maintaining the dht, but that's pretty minimal
ygrek has joined #ipfs
<lgierth>
VictorBjelkholm: yes there are plans for resource limits
<solarisfire>
bitswap isn't having a good time this morning, been after the same 183 blocks for hours XD
<VictorBjelkholm>
spikebike, sure, and I am using it sometimes so it gets some content. I just want to be able to limit it so that when the storage hits ~5gb, it runs the gc or something. Also, when I ask for content, I don't want it to use all the network resources
<VictorBjelkholm>
lgierth, ah, good to know. Thanks!
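Until such limits exist, the manual levers are pinning plus garbage collection; a sketch with a placeholder hash:

    ipfs pin add <hash>   # protect content you care about from collection
    ipfs repo gc          # delete unpinned cached blocks to reclaim disk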
<lgierth>
the only semi-qualified thing i can say about giving my private keys to keybase is, nope thanks rather not
<lgierth>
and that's got nothing to do with keybase specifically
<davidar>
yeah, it seems like a really bad strategy if they're trying to promote better security
<lgierth>
it also screams "hey $badplayer, pleeeaaase make us a first class target"
<davidar>
The browser crypto situation is really sucky
<davidar>
Well, lots of things about browsers also suck ;)
tsenior`` has joined #ipfs
<xeon-enouf>
try using one that doesn't allow javascript and animated GIFs and ... much worse fuglies :-)
wasabiiii has joined #ipfs
shea256 has joined #ipfs
<achin>
neat, i finally ran into the FD limit issue. i'm surprised it took me this long
<ansuz>
FD as in the address space?
<achin>
file descriptor limit
<ansuz>
gotcha
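A common stopgap for the file-descriptor limit is raising it at the shell level before starting the daemon (the number here is arbitrary):

    ulimit -n 4096   # per-process fd limit for this shell
    ipfs daemon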
captain_morgan has quit [Ping timeout: 240 seconds]
ygrek has quit [Ping timeout: 246 seconds]
pfraze has joined #ipfs
shea256 has quit [Remote host closed the connection]
tsenior`` has quit [Ping timeout: 240 seconds]
doublec_ has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Remote host closed the connection]
wasabiiii1 has joined #ipfs
wasabiiii has quit [Ping timeout: 250 seconds]
voxelot has joined #ipfs
voxelot has joined #ipfs
voxelot has quit [Changing host]
doublec has quit [Ping timeout: 240 seconds]
tsenior`` has joined #ipfs
danslo has quit [Ping timeout: 246 seconds]
pfraze has quit [Remote host closed the connection]
Zajo has quit [Quit: Leaving.]
brab has quit [Ping timeout: 256 seconds]
Zajo has joined #ipfs
jhulten has joined #ipfs
od1n has quit [Ping timeout: 246 seconds]
Zajo has quit [Quit: Leaving.]
<j0hnsmith>
So I just got an iOS update notification, 1GB * (number of iphones & ipads). Even at $0.005 per GB that bandwidth is costing a lot, obviously apple can afford it. Apple aren’t going to start pushing updates via IPFS tomorrow, but maybe one day they will.
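Back-of-the-envelope, with an invented device count: 500 million devices × 1 GB × $0.005/GB ≈ $2.5 million for a single update cycle, before any volume discounts.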
G-Ray has quit [Ping timeout: 250 seconds]
<blame>
lgierth: In my experience, the vast majority of people who claim to be experts on cryptography/security in general are lying. Myself included. I consider people who are very clear about what they know and what they don't the most likely to be competent.
Zajo has joined #ipfs
* blame
taught kerberos to a bunch of students yesterday and is extra jaded today.
shea256 has joined #ipfs
Zajo has quit [Client Quit]
shea256 has quit [Ping timeout: 244 seconds]
<daviddias>
jbenet: are you around? need to braintrust a thing :)
warner has quit [Quit: ERC (IRC client for Emacs 24.5.1)]
apophis has joined #ipfs
apophis has quit [Client Quit]
wasabiiii has joined #ipfs
fazo has joined #ipfs
wasabiiii1 has quit [Ping timeout: 250 seconds]
apophis has joined #ipfs
vijayee_ has joined #ipfs
root1 has joined #ipfs
root1 is now known as od1n
Guest73396 has quit [Ping timeout: 252 seconds]
simonv3 has joined #ipfs
__uguu__ has left #ipfs ["WeeChat 1.3"]
j0hnsmith has quit [Ping timeout: 260 seconds]
necro666 has quit [Quit: cruel world]
tsenior`` has quit [Ping timeout: 240 seconds]
lithp has joined #ipfs
elima has quit [Ping timeout: 240 seconds]
<_jkh_>
good morning...
* multivac
waves to _jkh_
<whyrusleeping>
_jkh_: hey there!
* _jkh_
waves back
<whyrusleeping>
working on a response to your email :)
* _jkh_
continues publishing all of the FreeNAS downloads via ipfs
<whyrusleeping>
_jkh_: $IPFS_PATH can be set to make ipfs look where you want it for the .ipfs dir
<_jkh_>
whyrusleeping: yeah, I’ve bumped into a few more .ipfs related weirdnesses I was interested in discussing. Basically, my impression of ipfs is that it was designed with the notion of multiple users having their own independent “collections” in mind
<_jkh_>
whyrusleeping: which is not a bad usage metaphor, but on a NAS it’s more the case that all of the “collections”, whether they’re system-wide or per-user, may need to live under a common root because the user $HOME dirs may not even be trivially accessible
<_jkh_>
whyrusleeping: cool!
<_jkh_>
whyrusleeping: I’m thinking of AD / LDAP users where the user is authenticating to the NAS with those credentials but may not necessarily be dragging their home directories along with them (it’s unusual for a NAS to mount foreign filesystems into its own namespace)
<_jkh_>
whyrusleeping: I haven’t even gotten to the topic of re-exporting the FUSE mount via SMB, AFP or NFS; that’s a whole ‘nother can of worms. ;)
lithp has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<whyrusleeping>
haha, yeah. fuse is.... fun
<whyrusleeping>
we've also looked at users 'sharing' an ipfs installation across multiple computers
<jbenet>
whyrusleeping i'm here
<whyrusleeping>
jbenet: _jkh_ is in charge of the freeNAS project
<_jkh_>
whyrusleeping: but that raises another interesting question I also had (man, you’re going to regret me adding this to FreeNAS I can already tell… :) )
<whyrusleeping>
and hes looking at integrating with ipfs
<jbenet>
hello _jkh_! o/
<jbenet>
sweet! we'd love to help anyway we can
<_jkh_>
whyrusleeping: it seems that ipfs add <somefile> makes a copy, yes? At least, just based on disk space usage
<_jkh_>
whyrusleeping: it doesn’t just create a reference
<jbenet>
_jkh_ yeah copies for now.
<_jkh_>
whyrusleeping: which raises an interesting chicken-and-egg problem for large media collections
<whyrusleeping>
that issue has been brought up a lot
<ion>
jkh: It adds the contents as object(s) within ~/.ipfs/blocks
<jbenet>
_jkh_ we have plans to not copy disk space at all, but it's not trivial to do right
<whyrusleeping>
we are going to work on a way to just make a 'reference'
<_jkh_>
OK, I figured it was easier for you to just import the bits since you have your own block store
<whyrusleeping>
yeah, the issue is that if we just 'link' to an existing file, and the user changes it
<whyrusleeping>
that breaks a lot of things
<_jkh_>
not grinding any axes over that, though obviously coming up with a reference model would allow users to trivially add their existing 4TB movie/audio/blah stores
<_jkh_>
but let’s just say they have to copy and then treat the ipfs store as the new authoritative source
<_jkh_>
they would use the FUSE mount to see the previous topology?
<ion>
It would be neat to have a directory for each ipns key you’re publishing where the daemon would recursively watch for changes, generate and publish directory objects and serve the contents directly out of the directory.
<_jkh_>
there are a lot of FreeNAS users who, for example, keep their entire iTunes libraries on a dataset and then AFP or NFS export it to the Mac
<whyrusleeping>
_jkh_: yeah, they can view it through the fuse export, but it can also be viewed over http
<whyrusleeping>
and it would be really cool to have some sort of native ipfs->nfs functionality
<_jkh_>
OK, I will try this. We just got the ipfs service knobs into today’s build
<_jkh_>
so I can look at the sharing UI later on today
<_jkh_>
NFS is a bit trickier because it’s an in-kernel service, but I believe our FUSE implementation has some hooks for exporting even FUSE volumes over NFS
<ion>
When I make changes under the FUSE-mounted /ipns/local, it seems ipfs pins each added file separately instead of just pinning the current top-level directory object (and hopefully unpinning the old version).
<_jkh_>
SMB and AFP are trivial because they’re entirely in userland
<_jkh_>
ion: the problem with “watching for events” is that with kqueue (BSD) or the Linux equivalent, you have to explicitly register each file or directory you want to watch for changes in
<whyrusleeping>
_jkh_: oh right, the last place i worked had nfs in userland in freebsd, but that was 'their thing'
<_jkh_>
ion: we need something like the OS X fsevents model. :(
<_jkh_>
that would let us see everything that happened under a given volume
<_jkh_>
whyrusleeping: well, there are also NFS-in-userland implementations we could always use and stick on another service port
<_jkh_>
whyrusleeping: but I think I can make the in-kernel NFS work as well
<whyrusleeping>
_jkh_: i have faith :)
<fazo>
whyrusleeping: I have a question about IPNS: is the plan to implement "ipns publish <name> <address>" by making it so that when you launch the command, it creates an ipfs folder with all the ipns names that are published and sets the <name> link to <address> ?
<ion>
jkh: Yes, but a bunch of programs already succeed at doing exactly that, despite it being a bit nasty to have to add a huge number of inotify watches.
<fazo>
whyrusleeping: replace <address> with <hash>, sorry
<_jkh_>
so we just need to have the ipfs “datastore” set up early in the install process, then re-export it via FUSE and you can then populate your data directly into ipfs and keep just one copy
<jbenet>
_jkh_ as a heads up, as you push ipfs, you _will_ hit bottlenecks that we haven't gotten to address yet. we have an upgrade roadmap for sure, so just let us know what you hit any pain with and we'll get on it
<whyrusleeping>
fazo: nope, the '<name>' part will be the short name of the key you want to publish to
<_jkh_>
jbenet: Oh, I’ve already hit some. :) The ipfs recursive add operation, for example, is very very slow
wasabiiii has quit [Ping timeout: 240 seconds]
wasabiiii1 has joined #ipfs
<owen1_>
solarisfire: not sure
<_jkh_>
jbenet: which is not a big deal when your data does not change a lot or you don’t need to import large datasets over 40GbE
<jbenet>
_jkh_ oh yeah that's pretty bad atm.
<jbenet>
indeed
<fazo>
whyrusleeping: so if I want to publish let's say 5 files, I need to manually create a folder with the links to those 5 files and then publish that folder
<_jkh_>
jbenet: but when either of those things aren’t true, we’re going to need the references again because all large data shops are going to need to have a different workflow
<jbenet>
yep, of course.
<_jkh_>
whyrusleeping: So, this whole idea of referencing existing files...
<_jkh_>
whyrusleeping: You need some help with that? :)
<whyrusleeping>
_jkh_: would love some!
<_jkh_>
OK, I need to spin up one of my guys on Go
<_jkh_>
I guess it could be worse, you could have picked Rust
<jbenet>
hahahhaha
<_jkh_>
whyrusleeping: thanks!
<fazo>
what's so bad about rust?
<jbenet>
_jkh_ please have them come by here, and we can give pretty detailed rundown of the codebase (we can record it too, for other contributors)
<whyrusleeping>
_jkh_: if you know C and Python, Go will be easy
<_jkh_>
fazo: absolutely nothing
<jbenet>
_jkh_ we will be making it much more modularized and manageable shortly, but time
<_jkh_>
fazo: it’s just that the more obscure the language gets, the harder it is to find people who already know it. :)
<_jkh_>
I still hold out a fond but pointless hope that someday I’ll be able to write in scheme for a living
<_jkh_>
but I learned lisp 25 years ago and nobody’s asked for it yet
<_jkh_>
oh well, go is fine. rust is fine. erlang, even, is fine.
<_jkh_>
javascript is not fine.
<_jkh_>
but it’s like the drunken party guest who will never leave
<_jkh_>
aaaand I digress!
<jbenet>
hahahahha
<_jkh_>
forget I said anything.
<fazo>
_jkh_: yeah javascript is like the devil. Everyone hates it but it won't leave because it's the only native language in the browser
<Vyl>
We need javascript, because it lets us get rid of the abomination, flash.
<whyrusleeping>
javascript is like that guy that nobody wants to hang out with, but he also happens to host the coolest parties
<jbenet>
I'm going afk for a bit. but _jkh_ thanks for coming by! we'd love to help however we can
<whyrusleeping>
so people go hang out with him anyways
<Vyl>
It is a bit of an odd language though.
<_jkh_>
jbenet: thanks!
<Vyl>
Very lambda calculusy.
<fazo>
I also think node is a great platform, except for the fact that it runs on js
<jbenet>
for the record, i love js. i think it's a great programming system and more could learn from its clean approaches to modularity. :)
<_jkh_>
I could happily embrace JS’s lambda calculus notions (again, see scheme) if it had totally screwed up all of its basic datatype handling and had an overall syntax that only a bath salts addict could love
<jbenet>
(but i also love c, and go, and haskell, and ....)
<_jkh_>
^ had NOT
<jbenet>
(i just dont love c++ ... the kLOC to madness is really low. )
<Vyl>
Yes... untyped variables just feel wrong.
<jbenet>
(js has an even lower one, but modules.)
<whyrusleeping>
i like C and Go.
<jbenet>
--------------------------------------- languages contest eof (for the interest of being welcoming to everyone)
<whyrusleeping>
and lua
<whyrusleeping>
>.>
<_jkh_>
sorry, sorry… My bad I started it.
<_jkh_>
so, emacs. Who else prefers it to other screen editors?
<jbenet>
hahahahahaha
<ion>
Them’s fighting words! GRUB 2 is obviously better than Emacs.
<Vyl>
I used to use emacs, many years ago. only ever used the basic functions though.
<Vyl>
I never learned the finger-fu.
<whyrusleeping>
_jkh_: its a great operating system
<fazo>
when the distributed download part of the implementation is done, we should mirror linux distros isos
<whyrusleeping>
fazo: yes!
<Vyl>
Could, yes. Not sure it's the most useful thing... they already have a distribution infrastructure in place. Http, various mirrors, and all major distros have torrents too.
<fazo>
maybe even the package repositories.
<_jkh_>
well, again, this is why *references* are going to be so important
<Vyl>
Now, package repos? *that* could be useful! But you'd have to also write software to allow apt-get, yum and the like to utilise ipfs.
nessence has joined #ipfs
nessence has quit [Remote host closed the connection]
<_jkh_>
if you can present yourselves as a side-store that doesn’t end up eating lots of space, then it becomes an alternative publishing method
<whyrusleeping>
Vyl: thats one of my 'secret' plans?
<fazo>
Vyl: it would download the packages via ipfs on your local node
<od1n>
I was actually just about to plan out how I would go about serving the distro ISOs from a website without forcing a client to download all the ISOs to their system
<_jkh_>
even the linux distro sites have done kind of a split distribution thing where they offer the links via the usual array of mirrors and also a torrent file for the ISOs
<_jkh_>
so ipfs is basically just an easier-to-use torrent file
<od1n>
I'm thinking of hosting the ISO files and website separately.
<fazo>
_jkh_: that's the point :)
<_jkh_>
but with a namespace
<od1n>
that's how i see it
<whyrusleeping>
_jkh_: with a bunch of other features :)
<fazo>
_jkh_: also it would be the same protocol used for http downloads and distributed downloads
<Vyl>
That's the idea. When completed, it'll be like torrent... but better.
<Vyl>
Better integratable with other software, like browsers, so you can host entire websites on it.
<_jkh_>
OK, well, I have a guy who can come do the reference stuff for you guys
<Vyl>
Or use it to distribute automatic software updates.
<_jkh_>
but he’s got to ship the FreeNAS 10 BETA first
<_jkh_>
and if I show him this that will never happen because this problem is far more FASCINATING
<whyrusleeping>
_jkh_: 'ipfs add' can be made to use rabin fingerprinting for chunking the input, this *should* give you a LOT of space savings for lots of isos
<_jkh_>
so, another month and I’ll show him this channel. :)
<_jkh_>
whyrusleeping: indeed!
<_jkh_>
ipfs deduplication alone is worth the price of admission
<whyrusleeping>
and! i recently wrote a tar importer 'ipfs tar add' that will pseudo unpack tar files to provide better dedupe for the files inside
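A sketch of that tar importer (subcommand names as whyrusleeping gives them; treat the feature as experimental):

    ipfs tar add backups.tar             # unpacks internally so files inside dedupe
    ipfs tar cat <hash> > restored.tar   # reassembles the original tarball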
<_jkh_>
ZFS deduplication becomes pathologically slow once your dedup table spills out of RAM so we don’t recommend that
<whyrusleeping>
good thing my zfs box has 64GB ram...
<_jkh_>
whyrusleeping: oh, if you really start to use dedup, that will not suffice at all. :)
<_jkh_>
we do sell some deduping boxes with all SSDs but they have 512G of RAM
<whyrusleeping>
o.o
<_jkh_>
and you have to carefully manage the ratio of RAM to storage pool size
<_jkh_>
because once you spill out of the in-memory table, it turns into an I/O amplifier
<whyrusleeping>
thats an interesting problem...
<_jkh_>
every write turns into a read and 2 writes
<_jkh_>
“Is it in the on-disk dedup table? No. OK, write the block and write the dedup table entry.”
<_jkh_>
rinse, lather, repeat
* whyrusleeping
cringes
<_jkh_>
honestly, the ZFS folks should have just constrained the table to their available RAM limit (which btw is not all of RAM of course since that would totally hose your ARC)
<_jkh_>
after which they could have just immediately said “No!” after every failed cache hit
<_jkh_>
buuuuut they did not. and that is why ZFS dedup is basically bad.
<whyrusleeping>
how long does a $LARGE dedupe take to run?
jasongreen has joined #ipfs
<jasongreen>
Hello.
<whyrusleeping>
jasongreen: hello!
<jasongreen>
I have a question on DNS and IPNS.
<whyrusleeping>
jasongreen: fire away
tsenior`` has joined #ipfs
<jasongreen>
if I add a DNS TXT record on my domain of "/ipns/long-hash-for-my-node" and my IPNS node is on a separate box, how do I get that txt record to resolve to the correct IP?
<jasongreen>
sorry dnslink
<ion>
jkh: Apparently the current bitswap implementation will need to become smarter for file distribution to become efficient. At the moment ipfs seems to transfer an order of magnitude more data than needed to get the data from A to B. But it’s just a matter of time until that work is done.
<fazo>
ion: ooooohhh
<lgierth>
you can do the same locally: localhost:8080/ipns/ipfs.io
rschulman has joined #ipfs
<rschulman>
Am I right that gateway.ipfs.io going to https is going to make the chrome and firefox extensions inoperable?
<ion>
I’m getting a lot of “Error: read tcp 127.0.0.1:55899->127.0.0.1:5001: read: connection reset by peer” when running ipfs subcommands.
<rschulman>
or has someone pointed this out already?
<lgierth>
rschulman: how so?
<lgierth>
https is optional there
<lgierth>
no redirect happening
magneto1 has quit [Ping timeout: 264 seconds]
<fazo>
I noticed the extensions don't perform the redirects when it's over https.
<achin>
ion: any errors in the ipfs daemon log?
<jasongreen>
Thanks
<ion>
achin: nope
HostFat has joined #ipfs
<achin>
what if you do "ipfs log level all debug" ?
<rschulman>
lgierth: I'm just wondering if the browsers have a policy of not letting plugins mess with secure pages.
<cryptix>
fazo rschulman: the firefox addon does the redir also from https
jasongreen has quit []
<rschulman>
lgierth: If I load up a gateway page on https, the chrome plugin won't redirect to localhost
<rschulman>
if I take out the "s" it does
<rschulman>
cryptix: That's interesting to know.
<lgierth>
rschulman: i figure there's not much we can do about that?
<rschulman>
so it does. Interesting that firefox allows that and chrome doesn't.
<rschulman>
lgierth: Nope, probably not, just mentioning it.
<lgierth>
i guess browsers prevent the downgrade
<lgierth>
ah ok
<cryptix>
i meant to open an issue at dPow's chrome ext but i guess i forgot...
<ion>
Huh, this is weird. When i “ipfs add -r” my local RFCs directory, i’m getting different hash every time. Turns out rfc5248.txt is missing and rfc5247.txt contains both 5247 and 5248, separated by a MIME-style separator with a random boundary which causes the changing hash. See the diff between https://ipfs.io/ipfs/QmW39u8XvpaF6ZWqbiDfZCe3bnsykqRu7VH5AJhhyT9m4H/rfcs/rfc5247.txt and
<cryptix>
dpc: hrm have you verified connectivity between the nodes? (ipfs ping $nodeId)
<dpc>
I don't think I'm even running the daemon yet.
<lgierth>
ion: oh, are you directly feeding that into ipfs? cause i remember there was a bug report about content-transfer-encoding the other day. let me see
<dpc>
That fixed it. And now, how long should "name publish" take?
<cryptix>
dpc: to transfer static content, you dont need to publish the hash on ipns
<dpc>
cryptix: I want to have "static website"
pfraze has joined #ipfs
<cryptix>
dpc: okay, just saying - check if you can add & cat between nodes before you mix in ipns. you can also use the public gateway to see if you can get data out of a node
<dpc>
cryptix: That did not seem to work last time I tried.
<cryptix>
dpc: daemon mode also helps a lot :) (required if you want to use ipns reliably)
<dpc>
I do have daemon running now.
<dpc>
And had before.
<cryptix>
okay cool
<achin>
ion: i'm confused where that content-disposition came from
<dpc>
cryptix: But it looks like networking does not work.
<dpc>
I have tons of peers, yet transferring anything does not seem to work.
<cryptix>
most nodes run on tcp 4001, if that one is restricted, its a bit rough right now... :/
<cryptix>
huh
<whyrusleeping>
dpc: what are you trying to transfer?
<dpc>
Tiny files.
<cryptix>
if 'ipfs ping' works, bitswap should be able to transfer dags (and thus files)
<fazo>
whyrusleeping: but for now, every time I need to do that I have to manually recreate a folder containing all the stuff I want to serve over ipns
<ion>
I seem to have QmbuG3dYjX5KjfAMaFQEPrRmTRkJupNUGRn1DXCgKK5ogD in my wantlist perpetually.
<whyrusleeping>
fazo: if you can use fuse, you can just mount the fuse filesystem and make your changes in /ipns/local
<ion>
I’m getting a ping back from that node though.
magneto1 has joined #ipfs
<dpc>
whyrusleeping: Does it mean I'm OK?
<fazo>
whyrusleeping: ooh, so it's writeable and automatically updates ipns? Can I place symlinks from outside the fuse filesystem?
<whyrusleeping>
fazo: writeable and auto-updates == yes
<whyrusleeping>
symlinks....
<whyrusleeping>
uhm
<whyrusleeping>
no
<whyrusleeping>
that would be hard, lol
<whyrusleeping>
(but it may be possible!)
<ion>
No, it would be symbolic.
<whyrusleeping>
i wish you could see the look on my face right now
elima has joined #ipfs
<cryptix>
whyrusleeping: how hard would it be to add 'ipfs mount /ipfs/$hash $home/work2' with the current code?
<whyrusleeping>
cryptix: not too hard. probably around 10 hours of work for me
<achin>
ion: yeah, i'm running. ogD is my peerID
<fazo>
whyrusleeping: also, if I drop a 1 GB file in the fuse filesystem will it put it in the datastore and create another copy to show in the fuse filesystem or does it only copy it to the datastore?
<whyrusleeping>
it only copies to the datastore
<fazo>
whyrusleeping: thanks, that's awesome
<whyrusleeping>
yeap!
<whyrusleeping>
be wary, the fuse code can be finicky
<whyrusleeping>
it has broken for us under various workloads
<cryptix>
whyrusleeping: nice! i think its a nice tool if you just want access to a subtree of some dag - also saves you io and space
<whyrusleeping>
cryptix: yeah! jbenet and I have discussed it before, its on the todo list
<whyrusleeping>
along with the ability to mount other protocols too
<cryptix>
<3
zugz has joined #ipfs
phorse has quit [Ping timeout: 250 seconds]
<reit>
the rabin chunker seems neat, is it (or something like it) planned to eventually be the default?
<whyrusleeping>
reit: yep!
<reit>
ah cool
<whyrusleeping>
it may well land as the default in 0.4.0
<whyrusleeping>
you can help that happen by using the chunker and providing feedback
<whyrusleeping>
the more confident we are in it, the more likely we are to do that
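Trying the chunker is a one-flag change on add (flag spelling as in go-ipfs):

    ipfs add big.iso                   # default fixed-size chunking
    ipfs add --chunker=rabin big.iso   # content-defined (rabin) chunking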
<cryptix>
whats the eta on 0.4 btw?
<ion>
I haven’t read how the Rabin hash works yet but i’m familiar with Buzhash. Do they have a major difference?
<whyrusleeping>
cryptix: your guess is as good as mine, it depends on a lot of factors
<whyrusleeping>
ion: rabin actually lets you specify what hash you want to use
<whyrusleeping>
we use fnv right now
magneto1 has quit [Ping timeout: 240 seconds]
nikogonzo has joined #ipfs
<whyrusleeping>
cryptix: for 0.4.0, its going to be a network breaking change, so we want to make sure that 'breaking the network' only happens once
<cryptix>
iirc there is still some double wrap of msgio right?
<ion>
whyrusleeping: ok
<whyrusleeping>
cryptix: in current master, yes
<whyrusleeping>
in 0.4.0, thats dead
<cryptix>
the 'pins as dags' also gives me goosebumps
<ion>
Is smarter bitswap planned for 0.4.0 or after it?
phorse has joined #ipfs
<whyrusleeping>
smarter bitswap will be after 0.4.0, sadly
<whyrusleeping>
but 0.4.0 will have nice effects on transfer speeds anyways
<ion>
ok
<fazo>
so what goes in 0.4? the ipns fixes I assume
<whyrusleeping>
fazo some ipns fixes might land before
<whyrusleeping>
and the rest will likely come after
simonv3 has quit [Quit: Connection closed for inactivity]
<whyrusleeping>
which lays the groundwork for having multiple ipns keys
<whyrusleeping>
(and encryption!)
Eudaimonstro has quit [Ping timeout: 252 seconds]
patcon has joined #ipfs
<achin>
such much stuff \o/
<cryptix>
so much awesome stuff..! :))
<fazo>
uuuh, encrypted cache sounds very nice
<fazo>
when fuse gets reliable and encryption is in, I think I'll move all my data to ipfs
<cryptix>
fazo: :)) living on the bleeding edge
<fazo>
too often I need to access it from some computer which doesn't have Syncthing, or maybe I only need a few files out of 10 gb of data
<cryptix>
but yea - i also cant wait for keystore
<cryptix>
ie transparent encryption between nodes
<fazo>
cryptix: I run NixOS unstable on all my computers and servers except a server which has Arch. I don't think I can be more bleeding edge than this
<whyrusleeping>
i run arch on everything i can
<whyrusleeping>
the AUR is the best thing ever
<noffle>
arch <3
<cryptix>
hehe
<noffle>
well, arch <3 until I realize I haven't done a pacman -Syu for 2 months
<ion>
Inspecting the communication between ipfs add and the daemon, the https://ipfs.io/ipfs/QmTwYb2tCduRpp1HVacCxjuER661dVMBJ4cmch21aryxxv stuff in the middle of a file added with “ipfs add -r” is indeed part of the MIME-multipart HTTP API call that leaked into the content. I fail to see anything wrong with what ipfs add is saying to the daemon in the trace.
<cryptix>
noffle: :)) been there a lot
<fazo>
whyrusleeping: use to run arch everywhere too, then I tried NixOS
<whyrusleeping>
fazo: nixos doesnt have all the packages i want though
<fazo>
whyrusleeping: the only problem I have with distros is that if my computer gets nuked I can't rebuild my environment easily
<whyrusleeping>
(its probably better since i tried it last)
<fazo>
yeah that's still not perfect
<ion>
The leak happens between rfc5247.txt and rfc5248.txt consistently.
<fazo>
it has an ipfs package though.
<noffle>
fazo: nixos has that property? (letting you rebuild full state)
<fazo>
noffle: yes, that's its selling point. You have one single configuration file and when you build the system using it, the outcome is always the same
<fazo>
noffle: but it's also written using a full programming language, so you can have dynamic things
<whyrusleeping>
ion whoa wait what
<whyrusleeping>
have you filed an issue for that yet??
<fazo>
noffle: and it's very easy to use
<noffle>
that's attractive..
<ion>
whyrusleeping: I’m going to but i’m still investigating and trying to have something more useful to say than “it dun werk”.
<fazo>
yes, and it works extremely well! My systems are all built using those configuration files + a backup of their /home
<whyrusleeping>
ion: awesome. thank you!
<cryptix>
fazo: now that sounds interesting
<fazo>
a few weeks ago I moved my install across disks
<fazo>
I created the new file system, copied /home inside it, then ran nixos-install --root /new/fs/path
<fazo>
rebooted to the new disk and I had my system exactly copied to the new disk, it took like 5 minutes
<fazo>
if you don't have a NixOS install to use to create the first system, you can use the ISO they give you
<fazo>
it also has: source-based packages with a binary cache to speed things up, sandboxed packages by default, nix-shell which is AMAZING, atomic updates, the ability to roll back to older configurations and pre-update states, built-in tools to share the package cache, etc
<fazo>
nix-shell allows you to start a shell with a custom environment inside (like with custom packages, older versions of them, etc.)
<fazo>
you can "nix-shell -p python-2.5 --command <>" and run the command on the current system but using an older python version
<fazo>
without compromising your current environment by actually downgrading the system Python to 2.5
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<Xe>
I should add that to my portfolio site or something
fazo has joined #ipfs
fazo has quit [Client Quit]
<pjz>
nixos, like go, has the shared-lib problem, though.
fazo has joined #ipfs
wasabiiii1 has joined #ipfs
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
wasabiiii has quit [Ping timeout: 244 seconds]
sbruce has quit [Read error: Connection reset by peer]
sbruce has joined #ipfs
<fazo>
whyrusleeping: I can't get ipfs mount to work
<fazo>
it keeps saying "Error: fuse failed to access mountpoint /ipfs"
<fazo>
I tried changing the mountpoint. Not sure if it's go-ipfs causing the issue or my environment though
<ion>
Xe: That's almost as cool as having your private key on IPFS.
<Rylee>
Xe: for extra lols, sign the declaration of its location with your private key
<Rylee>
s.t. you must have the pubkey which it is pointing to to verify it in the first place
<fazo>
I don't think it's mathematically possible to have an ipfs object which contains its own hash
<deltab>
fazo: does /ipfs exist?
<deltab>
as an empty directory
<Vyl>
Fazo: It's mathematically possible, but computationally impractical.
<fazo>
deltab: yep, it's owned by the same user as the one running the daemon
<fazo>
Vyl: TIL
<Rylee>
fazo: well, I meant like, Xe said she'd put the GPG pubkey IPFS location on her website, and that declaration ("My GPG pubkey can be found at /ipfs/(...)") would presumably be signed with GPG
<Rylee>
:-P
<deltab>
fazo: running as root?
<fazo>
deltab: no, it's running as a regular account
<fazo>
deltab: but fuse is supposed to work as a regular account.. right?
<deltab>
I think mounting always needs root
G-Ray has joined #ipfs
<achin>
it can work as a regular account
<achin>
but the mount points need to be already created and owned by your regular account
<achin>
(both /ipfs and /ipns)
<fazo>
achin: I did that. they are owned by fazo:users
<fazo>
achin: wait, I have something else to try out
<fazo>
there, that's the problem
<achin>
what was it?
<fazo>
my systemd services run in a sandbox and while they should be able to access the filesystem, somehow they can't access /ipfs
<fazo>
it can access ~/.ipfs of course, but I tried /home/fazo/ipfs and it didn't mount either
phorse has quit [Ping timeout: 256 seconds]
<fazo>
uuh, it works
<achin>
you lost me at systemd and sandbox (since i'm not sure how those are related to ipfs), so i'm just glad you got it working :)
<fazo>
achin: it wasn't an ipfs issue :)
<fazo>
my fault
<achin>
\o/
<fazo>
but now I need to figure out why it doesn't work in the sandbox :(
<VictorBjelkholm>
currently broken due to me refactoring things, but enjoy. Read index.js for some horror and ideas on how it works
<Vyl>
Twitter clone? Isn't the IPFS concept intrinsically unsuited to real-time communication?
<fazo>
Vyl: at the moment the implementation is a little unsuited, but the spec is not
<Xe>
i've been thinking it would be a great replacement for mogilefs or the like
<fazo>
Vyl: it is planned to use ipfs to store databases, handle almost-instant messaging etc
<VictorBjelkholm>
Vyl, well, yeah, relies on polling right now, which is kind of shitty. Waiting for aggregation and/or pub/sub
notduncansmith has joined #ipfs
patcon has quit [Ping timeout: 260 seconds]
<VictorBjelkholm>
see it as a PoC
notduncansmith has quit [Read error: Connection reset by peer]
<fazo>
Vyl: for this to work we need a complete, reliable ipns implementation though
<VictorBjelkholm>
fazo, and at this point I'm not sure we're there
<Xe>
it makes for a great pastebin clone!
shea256 has quit []
apophis has quit [Read error: Connection reset by peer]
<fazo>
Xe: I plan to fork hastebin to make it work entirely over ipfs
<VictorBjelkholm>
having some issues sometimes with ipfs name publish taking too long to propagate
bedeho has quit [Ping timeout: 260 seconds]
<Xe>
fazo: i've been thinking about making some hack in shell or go or something
apophis has joined #ipfs
<fazo>
Xe: I was thinking about forking hastebin, dropping the server component, and making it load the text from an IPFS hash
<Xe>
hmm
<fazo>
Xe: then I would host it on IPFS. So you could have /ipfs/<hastebin_hash>#/ipfs/<text_data_hash>
<fazo>
open that in the browser and you have your hastebin with the text data provided
G-Ray has quit [Ping timeout: 240 seconds]
G-Ray has joined #ipfs
sseagull has joined #ipfs
chriscool has joined #ipfs
<Xe>
hmm
jamescarlyle has joined #ipfs
patcon has joined #ipfs
tsenior`` has quit [Read error: Connection reset by peer]
voxelot has quit [Ping timeout: 252 seconds]
<Xe>
does IPFS have a concept of "private files"?
<fazo>
Xe: it's not implemented yet but planned
<achin>
what would be the use case for private files?
chriscool has quit [Ping timeout: 250 seconds]
rendar has quit [Ping timeout: 240 seconds]
<Xe>
backups
<whyrusleeping>
achin: so you and a group of friends can share content over the network privately
<whyrusleeping>
we are going to have the ability to encrypt your content with keys that you can share
notduncansmith has joined #ipfs
<Xe>
well I guess you can do it now with gpg
notduncansmith has quit [Read error: Connection reset by peer]
<achin>
neato
<Xe>
it'd be nice if it was transparent
<fazo>
whyrusleeping: when that's done I plan to create a small program that has a list of ipns friends, and keeps a directory synced by making sure everyone's ipns points to the same hash
<fazo>
whyrusleeping: so when I add some file, everyone has it automatically
<achin>
i assume some tie-in with bitswap so that a peer who isn't supposed to get my private data won't waste bandwidth trying to transfer something it can never read
<fazo>
achin: maybe bitswap could require proof that the receiver has the ability to read the data before sending it
<fazo>
achin: the receiver could send something encrypted with the same key used for the data
<Xe>
i wonder if it'd be worth the time making things to interface to ipfs in haskell
<fazo>
achin: but people having copies of my data they can't read is not all bad: it makes delivering said data faster. Of course, whether it's allowed must be configurable
lithp has joined #ipfs
<achin>
as a node with limited disk and bandwidth, i might not want to shuffle around data that is useless to me
<achin>
even if it helps others
<Xe>
so far it looks like i'm going to settle for a structure where all the user generated content is served by IPFS
<Xe>
https://nats.io looks like it might also do well for microservice communication
rendar has joined #ipfs
wasabiiii1 has quit [Quit: Leaving.]
wasabiiii has joined #ipfs
ipfsstudents has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
<fazo>
Xe: to interface with haskell you could use the http api, or mount ipfs, or pipe data to ipfs add and parse the output. They aren't good solutions but they should all work
<Xe>
yeah
od1n has quit [Ping timeout: 250 seconds]
<fazo>
Xe: the best would be to write a wrapper around the go-ipfs http api.
ipfsstudents has quit [Quit: Page closed]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
wasabiiii1 has joined #ipfs
wasabiiii has quit [Ping timeout: 246 seconds]
Zajo has joined #ipfs
Zajo has quit [Quit: Leaving.]
Skaag_ is now known as Skaag
mildred has joined #ipfs
voxelot has joined #ipfs
ryepdx has quit [Ping timeout: 240 seconds]
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
Zajo has joined #ipfs
<ion>
Can anyone replicate this? Have ipfs daemon running and execute: mkdir test && perl -we 'print "a" x 0x2f3ff' >test/a && printf 'hello\n' >test/b && ipfs add -r test && rm -fr test
<ion>
Were both test/a and test/b added, or was just test/a added with the contents of both?
ryepdx has joined #ipfs
pfraze has quit [Remote host closed the connection]
<achin>
ion: looks like test/a and test/b were both added as QmScAx2rnmhM9Xg1Rt6ECWVwqVixgho2wSUebkgYXFSdmq and QmZULkCELmmk5XNfCgTnCyFgAVxBRBXyDHGGMVoLFLiXEN
<spikebike>
added QmRTKh6QTta2YbKxx6PSdDDtwwss9x4Zq54cmvRtpFWC4R test
<spikebike>
it didn't mention adding test/b
<spikebike>
-rw-rw-r-- 1 bill bill 193535 Sep 17 13:16 a
<spikebike>
-rw-rw-r-- 1 bill bill 6 Sep 17 13:16 b
<spikebike>
add
<ion>
spikebike: Thanks. So i’m not the only one with the bug.
<ion>
spikebike: Are you using a binary from gobuilder.me?
legobanana has joined #ipfs
<spikebike>
I think so, but I'm also using go-1.3.1
<spikebike>
I think ipfs wants 1.4
<achin>
ion: i confirm that i also reproduce using ipfs_master_linux-amd64.zip
<ion>
thanks
<achin>
this looks gnarly
<spikebike>
open an issue and post a link
<spikebike>
then we can fill in config details and that it's been replicated
Zajo has quit [Quit: Leaving.]
<ion>
spikebike: I’m about to open an issue after I’ve reached a minimal command to trigger the bug.
dignifiedquire has quit [Quit: dignifiedquire]
<spikebike>
that's plenty minimal (IMO)
<ion>
For instance, 0x5ff seems to also trigger it. In fact, 0x5ff + n*8000 for any n ≥ 0.
dignifiedquire has joined #ipfs
<ion>
Whoops, my mistake.
<achin>
note that using the ipfs client from the pre-built binary with ipfs daemon from git did not repro, so maybe the issue is in the server, not client
<spikebike>
I'm using both from the prebuilt binaries
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
doublec_ is now known as doublec
* whyrusleeping
reads
<whyrusleeping>
ion: does the same issue happen if you run all of those commands separately?
<whyrusleeping>
achin: not that i know of, but it should be 'latest master'
ygrek has joined #ipfs
<achin>
so then the weird part is that i can't reproduce this bug when i use "go get" to install ipfs. but it's reproducible using that pre-built binary, which should in theory be the same code
Zajo has joined #ipfs
<achin>
maybe the deps are different? the amount i know about the go build devtools == zero
<spikebike>
note how test/a gets 2 different object IDs
<ion>
spikebike: That’s because of the random MIME multipart boundary.
Zajo has quit [Quit: Leaving.]
<spikebike>
ah, k, now I can write a test
<ion>
For example, in my paste in the bug report: --688cb596c381f436dae2e51d05362bf8e27f08225f70cb0712bfc1b7e0b8
G-Ray has quit [Quit: Konversation terminated!]
wasabiiii1 has quit [Ping timeout: 252 seconds]
wasabiiii has joined #ipfs
<ion>
achin: If you get and unzip QmfYQRPWworETuBudtayhkmCAMxmz8LV3YXm3KXwytyrfM and ipfs add -r the resulting directory, is the bug triggered with the binary you built? https://github.com/ipfs/go-ipfs/issues/1688
Zajo has joined #ipfs
<spikebike>
whyrusleeping: got a test script for ya
<fazo>
Xe: then it will be readable via http and writeable via fuse
<achin>
ion: bug not triggered. all 13 files were reported during the output of `ipfs add -r`
Zajo has quit [Client Quit]
<fazo>
whyrusleeping: I'm working on a tool you can use to tell your daemon to change its ipns published hash when one of a list of given nodes changes it
<ion>
Xe: No, hash-of-a-public-key-based names. You can optionally replace /ipns/<hash> with /ipns/<name> where <name> has a DNS TXT record pointing to <hash>.
hellertime has quit [Quit: Leaving.]
<Xe>
hmm
<fazo>
whyrusleeping: so that you can have synchronized folders and files
<fazo>
whyrusleeping: it could be implemented in go-ipfs, but I'm not sure it belongs
<whyrusleeping>
fazo: neato. that should be useful
<whyrusleeping>
it should be something on top that consumes the API
<fazo>
whyrusleeping: yeah I'm using the http api and building the tool as a node cli daemon
<ion>
achin: Want to add a comment to #1688? Or should I?
<fazo>
whyrusleeping: it should be totally less than 500 lines when done
<Xe>
whyrusleeping: i can't seem to get that working
<whyrusleeping>
Xe: i havent actually tried it myself, ping cryptix
<whyrusleeping>
i would also file an issue
<edrex>
interesting reading the backlog with _jhk_; 3 of my top (unvoiced, trying to lurk but the bait...) questions came up: 1) support for a system-wide node rather than per-user, 2) support for reference-based adding (I know it would be messy and breaks dedupe - making the FUSE read support really solid seems like the best alternative) 3) siphoning off bittorrent users
<Xe>
cryptix: ping
<achin>
ion: sorry i'm afk for a little bit
<edrex>
thinking about a BT seeder that reads from IPFS - but the chunks wouldn't line up unfortunately
<edrex>
also pretty neat to hear that freenas are interested in it / working on several integration points
wasabiiii has quit [Quit: Leaving.]
<ion>
achin: No problem, i went ahead and commented.
<spikebike>
edrex: torrents support multiple chunksizes I thought
<ion>
(either 16 kB or 16 KiB) · 2^n IIRC.
<ion>
And a chunk boundary is likely not a file boundary.
<edrex>
spikebike: you mean a requester can request an arbitrary range as a chunk? or a provider can advertise/serve an arbitrary range? Or just that the creator can break the files up how they want at create time?
mildred has quit [Ping timeout: 250 seconds]
<Xe>
is there something where I can pipe a tar file to ipfs?
<Xe>
wait shit that won't work
<edrex>
The workflow I was thinking of is this: 1) fetch the contents of the torrent 2) move the files around arbitrarily, maybe edit some 3) still seed the chunks you have
<spikebike>
edrex: torrents are a list of checksums for a blocksize decided on at torrent creation time
<edrex>
The only way that would work though, is if the original BT chunks were stored in IPFS, which would work (the chunker can chunk however it wants to) but would break dedupe
<edrex>
spikebike: yeah, that's what I thought.
<whyrusleeping>
Xe: 'ipfs tar add something.tar'
rendar has quit []
<spikebike>
heh, ipfs is basically the opposite of dedupe
<whyrusleeping>
spikebike: whatcha mean by that?
<Xe>
whyrusleeping: how about adding dotfiles?
<spikebike>
10 people reading a file ends up with identical files being stored on 10 nodes.
<Xe>
ah
<Xe>
--hidden
<edrex>
So you could feed torrent blocks into IPFS, creating the IPFS schema file structure at the same time, and then seed out of it, but those blocks wouldn't align with regular IPFS-chunked blocks (or the same content in other torrents) and so dedupe wouldn't work at all.
<ion>
Deduplication is somewhat orthogonal to redundancy. Both can be useful simultaneously.
<spikebike>
sure
<edrex>
This yet again makes me think about the importance of standardized chunking algorithms and parameters.
<edrex>
SO important.
<spikebike>
dunno
<spikebike>
helping torrents doesn't seem like a big use case
patcon has quit [Ping timeout: 252 seconds]
<spikebike>
kinda like designing an advanced webserver to be compatible with ftp
<spikebike>
edrex: what use case do you see for byte addressable blocks?
<edrex>
Specifically, I was thinking through the technical viability of representing the blocks in a torrent using the IPFS schema, as a secondary index into normal IPFS-chunked representation of the directory served by the torrent
<dts>
So I discovered ipfs recently, and it's pretty cool. I'm having some trouble with setting up a DNS record pointing to ipns, and I'm not sure if it's my own stupidity or just not implemented yet
<edrex>
With camlistore, the schema supports it (you can use offset and size together to point to windows inside each chunk), but maybe not with IPFS
<dts>
I can point to an ipfs address fine
<edrex>
This secondary index would allow continuing to seed a torrent, even after the original file system structure has been broken apart/renamed etc
<ion>
dts: What’s the DNS name for which you’re adding a TXT record?
<dts>
I'd share it but it has my real name, so I'd rather not if possible
<dts>
host -t TXT {hostname} gives: descriptive text "dnslink=/ipns/QmTTXJoGU1cxTtCTeTNVn6BihCd4Th9BCsiLAdigcptp1B .."
<dts>
which is just a hello world
<whyrusleeping>
dts: ipns is not reliable yet, so it may just be that ipns is failing on you
<dts>
whyrusleeping: ah, thanks. ipfs works fine so it must be an ipns problem. Maybe I'll get a $1 domain name and use it as a demo case for the bug tracker
<ion>
achin: “After 24 hours, the entry will drop from the network unless you run publish again.” I didn’t know that. Do you have a cron job for the IPNS pointer to your RFC mirror?
pfraze has quit [Remote host closed the connection]
<whyrusleeping>
ion: i'm not aware of that mirror, but thats how you would currently have to do it
ygrek has quit [Ping timeout: 265 seconds]
<whyrusleeping>
ion: although i *just* commented on that issue, so take another look
<whyrusleeping>
ion: ah, yeah. It will have to be republished once a day or so
<whyrusleeping>
brb, IRL things
rawtaz has quit [Ping timeout: 250 seconds]
jamescarlyle has quit [Remote host closed the connection]
rawtaz has joined #ipfs
jedahan has quit [Read error: Connection reset by peer]
jedahan_ has joined #ipfs
<giodamelio>
jbenet, whyrusleeping: Does anyone have access to full text logs of this channel (not the online version on botbot.me)?
<fazo>
giodamelio: we should have a bot archive them on ipfs, in a folder bound to its ipns name
<whyrusleeping>
giodamelio: i believe i have such logs
<giodamelio>
Are they online somewhere?
<whyrusleeping>
giodamelio: theyre in my .weechat/logs folder on my vps
<whyrusleeping>
lol
<jbenet>
whyrusleeping: net splits
<jbenet>
giodamelio: what's wrong with botbot logs?
<jbenet>
i'd love to get a general solution for botbot logs
<whyrusleeping>
jbenet: yeah, they wont be guaranteed to have everything. but it will be better than botbots logs
<jbenet>
so that we can archive all of them
<whyrusleeping>
(i have more uptime than botbot! eat it [0__0] )
<giodamelio>
I am writing a quick irc stats program to practice go (like http://sss.dutnie.nl/). I have been keeping my own with weechat, but it would be cool to have the backlog to run it over.
<jbenet>
edrex: what do you mean? of course you can point to specific byte ranges in a file-- you can seek on a file on top of ipfs just fine...
<jbenet>
edrex: also, i dont think you want to represent an ipfs graph as a torrent, that's going backwards. the ipfs graphs are strictly a superior format, that's why they exist. (based on git instead)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<jbenet>
(you might want to do this for compatibility's sake, but really, we worked really hard to move us out of that quagmire.)
<edrex>
jbenet: can whatever part of the schema is responsible for representing a file's bytes as a tree of chunk blocks point to ranges within the chunk rather than the entire chunk?
<jbenet>
you could do that if you wanted to, yes, and that's what stackstream (or whatever language we use) might do. but so far we haven't needed to do this. chunks are small.
<jbenet>
oftentimes it's actually cheaper to just make a new chunk.
<edrex>
jbenet: what I was thinking is something that will take a torrent file and the resulting directory, add both to IPFS, and then build a secondary index in IPFS corresponding to the torrent's blocks
<jbenet>
than handle dedup with offsets.
<jbenet>
yeah so representing a torrent file within ipfs-- (yep this is one of two compatibility layers)
<giodamelio>
whyrusleeping: If you could send me the old logs, I can merge them with mine and keep my own going forward. When the stats program is done, I will make it post to ipfs daily along with a full copy of the logs.
<whyrusleeping>
giodamelio: cool, i should be home within an hour
<ion>
Given a directory with a large number of files (which might also receive updates), could ipfs perhaps represent the directory itself as a tree instead of a flat list?
<jbenet>
there's some work towards making git, torrents, and bitcoin first class citizens as they are. we're not sold on it (instead of just wrapping the objects), but it is valuable to come up with "the canonical way to do it" so we can ingest all {git repos, torrents, bitcoin, etc}
<whyrusleeping>
errands and such..
<giodamelio>
whyrusleeping: Thanks
<edrex>
jbenet: the idea being to seed the torrent efficiently directly from IPFS, as a compatibility layer
<edrex>
the only qualitative benefit VS seeding from FUSE mounted IPFS would be that you could unpin the original directory (assuming some mutable IPNS fuse endpoint, maybe the user renames/mutates some of the files)
<edrex>
and even maybe allow the blocks you don't explicitly need to go away, while still seeding the ones you've kept.
<jbenet>
yep, it is definitely useful. take a look at how whyrusleeping implemented tar support, this may look similar.
<edrex>
but for the use case of serving Linux ISOs and the like, that isn't an advantage
<ianopolous>
Quick question: I've noticed that object.put returns the hash under "Key" whereas everywhere else in the http API it is returned under "Hash". Is this deliberate?
<edrex>
Except... different versions of the artifact would be auto-deduped. So totally useful!
captain_morgan has quit [Ping timeout: 246 seconds]
<edrex>
will ipfs add -r eventually automatically give tars a special treatment?
<fazo>
quick question: if I "ipfs name publish <hash>" but I don't have <hash> locally, will it work?
<fazo>
wait, why shouldn't it work
<edrex>
jbenet RE: first class foreign content addressable objects VS compatibility layers, and having a canonical way to do it, totally agreed
<fazo>
I spent 5 minutes trying to figure it out and only got it when I wrote it down
<edrex>
related to establishing a standard chunking algorithm/parameters across CAS systems..
wopi has quit [Read error: Connection reset by peer]
<edrex>
Also there's been a ton of thinking around that in the Camlistore project
wopi has joined #ipfs
<spikebike>
what's the other similar distributed filesystem with encryption and dedupe?
dignifiedquire has quit [Quit: dignifiedquire]
giodamelio has left #ipfs ["WeeChat 1.3"]
<spikebike>
oh LAFS
<jbenet>
edrex: we're thinking about how to do that correctly (giving tars special treatment in a regular add); it's a very tricky thing to get right.
<jbenet>
edrex: yeah exactly
<ion>
For instance, “ipfs object get --encoding=protobuf /ipns/QmbuG3dYjX5KjfAMaFQEPrRmTRkJupNUGRn1DXCgKK5ogD/archives/RFCs | wc -c” indicates almost half a megabyte of data for this directory listing which is also updated weekly. Adding a layer of indirection by chunking the list of links itself might be useful.
<jbenet>
edrex: yep, happy to support whatever work from there is useful to us!
giodamelio has joined #ipfs
<edrex>
i'm interested in hacking on the webui (pretty handy with reactjs). seems like a lot of issues get reported to the go-ipfs repo
giodamelio has left #ipfs [#ipfs]
giodamelio has joined #ipfs
<edrex>
so it seems like the IPFS schema for composing multiple chunks into a file doesn't have an equivalent of Camlistore's offset param (see link above for that schema)
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<spikebike>
building files out of chunks+offsets seems kinda gross, complex, and slow.
<edrex>
spikebike: how is it slower than without the offsets?
giodamelio has quit [Quit: WeeChat 1.3]
<spikebike>
more code and more seeks, more verbose protocols, and often somewhat better storage efficiency, but at a substantial cost in ram, cpu, and IOPs
wasabiiii has joined #ipfs
giodamelio has joined #ipfs
<edrex>
Doesn't seem like it would cause more seeks for standard chunking (which wouldn't use the offsets). What it would do is support various secondary indices allowing compatibility layers with other on-disk formats (like bittorrent, git, camlistore)
<dts>
Alright, took long enough to propagate, but here we go...
<dts>
ipns.elcheapodomain.xyz has an ipns TXT record
<dts>
ipfs.elcheapodomain.xyz has an ipfs TXT record
<dts>
i can ipfs dns ipfs.elcheapodomain.xyz but not ipfs dns ipns.elcheapodomain.xyz
<dts>
(recursion limit exceeded)
<edrex>
(of course it would also be possible to use a different schema supporting offsets for the secondary indices)
captain_morgan has joined #ipfs
<dts>
(aside: the new TLDs only being $1/yr is nice)
<spikebike>
heh, nice = spammers love it = often blocked by mail servers
<dts>
ah, interesting. looks like I actually *can* do this: ipfs cat /ipns/ipns.elcheapodomain.xyz
<dts>
but ipfs dns ipns.elcheapodomain.xyz doesn't work
Eudaimonstro has quit [Ping timeout: 240 seconds]
<dts>
well, mail servers even block my .com domain so *shrug*
<giodamelio>
dts: wait, where can I buy a domain for a dollar a year?
<dts>
i got that one from namecheap, but i'm sure just about any domain seller will do if you're getting the crappy TLDs
<ion>
dts: ipfs resolve -r /ipns/ipns.elcheapodomain.xyz also works.
<dts>
ion: ah, hmm, so maybe it's my own misunderstanding. Is ipfs dns not supposed to work on an ipns-pointing dns entry?
<ion>
dts: I don’t know why ipfs dns doesn’t work here, it may be a bug.
wasabiiii has quit [Ping timeout: 255 seconds]
<dts>
brb, irl calls
<voxelot>
anyone else had problems with ipns hanging when trying to publish?
<spikebike>
yes
<voxelot>
any work arounds?
<spikebike>
patience is the only one I'm aware of
wasabiiii has joined #ipfs
notduncansmith has joined #ipfs
<spikebike>
IPNS seems not quite there yet
notduncansmith has quit [Read error: Connection reset by peer]
<spikebike>
but opening a ticket (if there isn't one) would be a good idea
<spikebike>
include OS, version of go, version of ipfs, and an example
<voxelot>
restarting the daemon seemed to help
elima has quit [Ping timeout: 240 seconds]
dts has quit [Ping timeout: 252 seconds]
doei has joined #ipfs
apophis has quit [Quit: This computer has gone to sleep]
apophis has joined #ipfs
<achin>
ion: i wasn't aware that IPNS publishes are not durable. i'll schedule a recurring publish. thanks for the heads up, where was that documented?
<jbenet>
sorry, please PR to document it more in the command. IPNS needs a lot of work, my fault for delaying it
<achin>
jbenet: no problem. that's why there is a community to help keep everyone up-to-date
<lithp>
Are there any plans to eventually make IPNS entries permanent? Or will they always require that you, or one of your agents, publish every day?
<jbenet>
thanks achin :)
<jbenet>
lithp: they are supposed to be permanent, however when routed on top of a DHT, the nodes "responsible for the record" have to republish to the dht.
<jbenet>
so basically, "IPNS records go into a DRS (distributed record store)", permanently. the current global DRS is implemented on top of a vanilla Kademlia DHT, which requires periodic republishing of all DRS values. (other DHTs or other systems may be different altogether)
<lithp>
Ah, so everything on the DHT goes away after 24 hours, IPNS entries aren't an exception
<jbenet>
right. this is what prevents vanilla Kad-DHTs from getting clogged with crap.
<jbenet>
(note the "Kad" part, because other DHTs have different mechanisms)
<spikebike>
would that line make all href's relative to that URL?
<fazo>
wouldn't it be enough to keep one IPNS name per node in the DRS, with each name having a creation date and a signature from the publisher?
<fazo>
when a more recent ipns name from a publisher is found it just replaces the old one, then nodes can decide to drop old names to not fill the DRS
<fazo>
maybe have nodes keep a "last requested" date locally so that they can keep popular ipns names even if they're old
<fazo>
I think it would be permanent and not full of garbage
<fazo>
ipns value updates could propagate through the network, and anti-flood could be implemented by limiting the time between updates
jhulten has quit [Ping timeout: 264 seconds]
<spikebike>
well ideally an IPFS node could host an arbitrary number of websites, not just one
<fazo>
spikebike: yes, but that's a problem of the implementation in my opinion: you could just host a single site that is a folder, with each object inside the folder being a website
<spikebike>
thus the need for an arbitrary number of IPNS records
<fazo>
this way the only IPNS record of each node is just a folder with an object inside for every website or file or folder it wants to expose
<fazo>
I don't get why multiple websites/files would be implemented in any other way
<spikebike>
but yeah, DNS-like TTLs and related make sense
<spikebike>
I don't follow
<spikebike>
say I have an IPFS server and want to host foo.com and bar.com
<spikebike>
I need IPNS entries for each to point that at the correct directory
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]
<fazo>
spikebike: you're right, I totally forgot about readable names
<spikebike>
that is one of the top priority goals for IPNS (afaik)
Eudaimonstro has joined #ipfs
<fazo>
spikebike: yes, it makes a lot of sense. The cheap fix would be to use multiple identities on the same daemon
<fazo>
spikebike: another way could be hosting a single ipns name, tied to multiple DNS names, then have it point to a folder with one folder/file for each dns txt record you use
<fazo>
this way it can still be light on the DRS while allowing the clients to figure out which folder to pick based on the actual address
<spikebike>
interesting
<fazo>
and if there is no file or folder matching the DNS address (for example "foo.com") the root ipfs folder is returned
<fazo>
the one directly served via ipns
<spikebike>
sure, one IPNS record per domain could all point to the same dir, then in that dir each domain would be a subdir
<spikebike>
so foo.com and bar.com would point into a dir that had a foo.com and bar.com subdir
<fazo>
exactly
devbug has joined #ipfs
<spikebike>
given the names of IPFS though that seems rather inefficient.
<fazo>
then the client returns the hash of foo.com or the one of bar.com or the one of the root folder depending on the address requested
<fazo>
why is that? the root folder would need to be downloaded anyways
<spikebike>
that object would have to change if a single bit of either website changed
<spikebike>
and then you'd have more TTL-type issues with out-of-date directories cached, not to mention out-of-date DNS
<fazo>
spikebike: oh I didn't think about that, you're right
<fazo>
looks like there's no way around needing multiple ipns records
<spikebike>
seems best to keep single domain -> single DNS record -> single IPNS entry -> single IPFS object
devbug has quit [Remote host closed the connection]
<spikebike>
a DHT full of these things is pretty cheap, it's just that the code hasn't been written for the >1 case (afaik)
<spikebike>
then people can set the TTL of that DNS entry per domain
HostFat_ has joined #ipfs
domanic has joined #ipfs
patcon has quit [Ping timeout: 246 seconds]
HostFat has quit [Ping timeout: 244 seconds]
HostFat_ is now known as HostFat
notduncansmith has joined #ipfs
notduncansmith has quit [Read error: Connection reset by peer]