<lgierth>
nycoliver: what's your ipfs version --commit?
<nycoliver>
ipfs version 0.4.0-dev-
<nycoliver>
happens when i add a specific file with the js api
<nycoliver>
i'll try to isolate the issue
<lgierth>
sounds scary
<lgierth>
could you file an issue?
Pharyngeal has joined #ipfs
<lgierth>
we just merged dev0.4.0 into master today and we need to fix these kinds of issues
<nycoliver>
ah ok
<nycoliver>
will let you know if i find anything
<ipfsbot>
[webui] greenkeeperio-bot opened pull request #197: Update lodash to version 4.0.0
chris6131 has joined #ipfs
Guest39845 has quit [Remote host closed the connection]
moreati has left #ipfs [#ipfs]
otherbrian has quit [Quit: otherbrian]
otherbrian has joined #ipfs
overdangle has joined #ipfs
Matoro has joined #ipfs
Matoro has quit [Read error: Connection reset by peer]
voxelot has quit [Ping timeout: 265 seconds]
<guruvan>
so - if I'm going to run a website from IPFS, and have regular updates - is there any general housekeeping I should be doing?
Matoro has joined #ipfs
nycoliver has quit [Ping timeout: 246 seconds]
<brimstone>
guruvan: i think just unpin the previous directory, add the new one, then repo gc
<brimstone>
oh, and then publish the new directory hash
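A minimal sketch of the sequence brimstone describes, assuming the site lives in ./site and the previous root hash is known (both are placeholders):
```sh
# unpin the old site root so gc can reclaim it
ipfs pin rm -r <old-root-hash>

# add the updated directory; the last hash printed is the new root
NEW_HASH=$(ipfs add -r -q ./site | tail -n1)

# reclaim unpinned blocks, then point the IPNS name at the new root
ipfs repo gc
ipfs name publish "$NEW_HASH"
```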
padz_ is now known as padz
<guruvan>
ok - so in general not much more than I figured - thanks brimstone
hoony has joined #ipfs
ed_t has quit [Quit: Leaving]
* jbenet
idea: "recursive bubble pins" -- recursive pins that automatically disappear when another recursive (or bubble recursive) pin is added as an ancestor. (so that in a version history tree, the older stuff is gc-ed out. --- we still need regular recursive pins, because sometimes we want to ensure the survival of something, even if other things/ancestors are
* jbenet
pinned/unpinned
<brimstone>
or maybe an option to name publish to unpin the previous hash
otherbrian has quit [Quit: otherbrian]
nycoliver has joined #ipfs
r04r is now known as zz_r04r
Senji has joined #ipfs
<jbenet>
brimstone: that may be taken care of later by "pinning a name and its target" (so re-targeting the name automatically orphans the other stuff and gc can free it)
<jbenet>
achin: btw -- hack workaround to deal with the nasty "objects too big" problem: `ipfs add $repo/blocks/<file-of-root-obj>` and `ipfs cat $hash >$repo/blocks/<file-of-root-obj>` elsewhere
<achin>
i'm not sure i follow
<achin>
am i adding to ipfs a unixfs file that lists all the hashes?
<jbenet>
the file-of-root-obj is the single file representing the (big) root object, stored in the repo's directory hierarchy. the dir hierarchy is keyed by the hash, but in hex, not base58.
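A rough sketch of that workaround, assuming the default repo location and that you have already located the root object's block file on disk (the exact path depends on the hex-encoded hash and the blockstore layout of this era; `<file-of-root-obj>` stays a placeholder):
```sh
REPO=${IPFS_PATH:-~/.ipfs}

# sending side: ship the raw block file through ipfs as an ordinary unixfs file
WRAPPED=$(ipfs add -q "$REPO/blocks/<file-of-root-obj>" | tail -n1)

# receiving side: fetch it and write it back into the local blockstore at the same relative path
ipfs cat "$WRAPPED" > "$REPO/blocks/<file-of-root-obj>"
```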
simonv3 has quit [Quit: Connection closed for inactivity]
fpibb has quit [Quit: WeeChat 1.3]
vijayee has joined #ipfs
rombou has quit [Ping timeout: 240 seconds]
Senji has quit [Ping timeout: 260 seconds]
jamie_k_ has quit [Ping timeout: 256 seconds]
jamie_k___ has joined #ipfs
fuzzybear3965 has joined #ipfs
dignifiedquire has quit [Quit: Connection closed for inactivity]
otherbrian has joined #ipfs
jimmythewhale has joined #ipfs
otherbrian has quit [Read error: Connection reset by peer]
<jimmythewhale>
hey champs
<jimmythewhale>
how does ipns work?
<jimmythewhale>
(also big fan, y'all are killer, jbenet is king)
otherbrian has joined #ipfs
<jimmythewhale>
also holy shit, # of users in this channel has gone nuts since August
otherbrian has quit [Client Quit]
<chris6131>
Network just got a new node! Seems I've added content: /ipfs/QmbzwY51UoEuAcfeqkCdTzZGsY7Coh4vzaDRSKE2KkWCwY
jamie_k___ has quit [Ping timeout: 255 seconds]
<chris6131>
Seem to be having trouble making that dir object the value of my ipns name, though. I get "Published to QmcbE6epS1SSAodddwZqXuk1TpQ4BMTLmsZu3manCuVMcE: QmbzwY51UoEuAcfeqkCdTzZGsY7Coh4vzaDRSKE2KkWCwY" but when I look it up via the gateway (ipfs.io OR my local gateway) it just hangs forever
<jimmythewhale>
wait i thought there was a big problem with dynamic services
<jimmythewhale>
on ipfs
<noffle>
afaik yes, there are some issues (propagation can be slow, unreliable, no pubsub) -- it's still being worked on
ygrek has quit [Ping timeout: 250 seconds]
<jimmythewhale>
;_;
prf has joined #ipfs
<jimmythewhale>
it's the one thing i don't get... i've seen every jbenet lecture, blah blah... and ipns is always glossed over
<jimmythewhale>
the rest is explained well for us code-illiterates; ipns, or the git-layer or w/e, is always the "wave hands a bit" section
<jimmythewhale>
like, has anyone figured out how to do backlinks yet?
nycoliver has quit [Ping timeout: 245 seconds]
prf has quit [Remote host closed the connection]
<noffle>
jimmythewhale: have you read the whitepaper's section on ipns? it's explained pretty well, but yeah, getting high performance and pubsub is still an area of ongoing R&D
<jimmythewhale>
i don't mean to be retarded or politically incorrect @noffle, but i don't see the whitepaper at that link... my heat is out and i've been hitting a bottle of whiskey for a while and i may just be blind now
fiatjaf has quit [Remote host closed the connection]
<noffle>
jimmythewhale: fyi "retarded" is probably a poor choice of words
<jimmythewhale>
yes, i followed the debate about whether IPFS would use "allow/deny" list vs "white/black" list. i did approve of jbenet's rapid interjection that allow/deny was a far better choice.
<jimmythewhale>
ty for the link
<noffle>
np
reit has quit [Quit: Leaving]
prf has joined #ipfs
VegemiteToast has joined #ipfs
Not_ has quit [Ping timeout: 272 seconds]
<VegemiteToast>
what is the difference between builds dev0.4.0 and v0.4.0-dev ?
<lgierth>
nevermind either of them, that branch has been merged into master
<jimmythewhale>
ok: so my q is... if publishing a mutable path relies upon the routing system -- "(1) publish the object as a regular immutable IPFS object, (2) publish its hash on the Routing system as a metadata value" -- why are dynamic services hard? Why are backlinks a serious problem?
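Those two whitepaper steps map onto the CLI roughly like this (a sketch; page.html and the peer ID are placeholders):
```sh
# (1) publish the content as a regular immutable object
HASH=$(ipfs add -q page.html | tail -n1)

# (2) publish its hash into the routing system, signed with this node's key
ipfs name publish "$HASH"

# anyone can then resolve the mutable name back to the current hash
ipfs name resolve <your-peer-id>
```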
<jimmythewhale>
is it because IPNS is somehow inefficient?
<jimmythewhale>
and once again, my apologies, but i am mostly illiterate.
<jimmythewhale>
i've been trying to grasp why the difficulty exists
<VegemiteToast>
lgierth: ty
<jimmythewhale>
i mean, object permanence would be such a coup for the social web... but as far as i can tell ipfs has run into a real roadblock whereby even developing a simple distributed web forum would be a huge issue
simonv3 has joined #ipfs
Not_ has joined #ipfs
computerfreak has quit [Quit: Leaving.]
patcon has joined #ipfs
O47m341 has quit [Ping timeout: 246 seconds]
grahamperrin has joined #ipfs
grahamperrin has left #ipfs [#ipfs]
<jimmythewhale>
OK a dear friend has tried to explain the recent push on Records to me... maybe someone can integrate that into the answer?
kvda has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<gaffa>
Hi! Does ipfs use other ports than the configured one for the node (e.g. 4001)? I mean, is it possible to control the ports that are used for sending to other ipfs nodes?
<gaffa>
My idea is to create a semi-open IPFS network, limited to a country. I'm doing that with iptables, but with that solution I have to know the outgoing ports as well.
<brimstone>
gaffa: i know you can change the listening ports in the config file, but i don't think you can control the outbound ports
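For the listening side, the swarm addresses live in the config and can be changed, e.g. (a sketch; the multiaddrs are just examples, and the daemon needs a restart afterwards):
```sh
# show the current swarm listen addresses
ipfs config Addresses.Swarm

# pin them to specific ports
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001", "/ip6/::/tcp/4001"]'
```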
jhulten has joined #ipfs
jaboja has quit [Ping timeout: 240 seconds]
<gaffa>
Yea, that's what I thought. I assume that the node will be receiving new nodes from peers, so it's not enough to limit the incoming port. I guess I'll have to create a pre-loaded library that hooks into sendto, recvfrom etc. and do the filtering.
elima has quit [Ping timeout: 245 seconds]
<gaffa>
I'll take a look at the code and see how ports are assigned.
zorglub27 has joined #ipfs
jhulten has quit [Ping timeout: 265 seconds]
<Ape>
gaffa: Is there really any reason to limit the peer to a certain country (while still being semi-open)?
<gaffa>
Ape; The only way to avoid that would be if all the nodes were limited, so that you don't get any outside peers from them.
mildred has quit [Ping timeout: 255 seconds]
<Ape>
What are you trying to create exactly? Why do you not want to connect to other countries?
<gaffa>
I want a node that does not send or receive to nodes outside my country.
<Kubuxu>
gaffa: but it will be possible to relay data it sends.
<gaffa>
Kubuxu; That doesn't matter. That's what I meant by semi-open.
<gaffa>
Basically I want a swarm that's limited to my country. If a node wants to relay, that's okay.
<Ape>
You could also run ipfs in a container and use iptables there
<Ape>
Or use user specific iptable rules
<gaffa>
Ape; Yes, a contained environment is another solution.
<Kubuxu>
Or use 'owner' module to filter by PID
hellertime has joined #ipfs
<Kubuxu>
Or create bridge and network interface just for IPFS
<Kubuxu>
then just filter on forward chain.
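Of those alternatives, the user-specific one is probably the simplest to sketch: run the daemon as a dedicated user and match its traffic with iptables' owner module (the `ipfs` user name and the CIDR below are placeholders for the in-country allowlist):
```sh
# allow the ipfs user's traffic only toward in-country ranges, drop the rest
iptables -A OUTPUT -m owner --uid-owner ipfs -d 203.0.113.0/24 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner ipfs -j DROP
```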
elima has joined #ipfs
jholden_ has joined #ipfs
<gaffa>
I will have to read up on that, but sounds like it could work :)
Encrypt has quit [Quit: Quitte]
O47m341 has quit [Ping timeout: 260 seconds]
<Ape>
Peer whitelist might be a useful feature for ipfs anyway
<Ape>
Perhaps usually to limit to a specific set of peers, not to a whole country, but still
<The_8472>
<gaffa> My idea is to create a semi-open IPFS network, limited to a country. I'm doing that with iptables, but with that solution I have to know the outgoing ports as well. <- you don't have to know that
<The_8472>
just set the iptables rules in a separate network namespace
<Kubuxu>
Is the WebUI included out of the box or do I have to install it? If I have to install it, is there a possibility of doing it w/o the whole npm setup and so on?
<The_8472>
so you can restrict all traffic, no matter the port
<gaffa>
The_8472; Yes, I'm looking into doing something like that right now, thank you :)
<The_8472>
containers provide isolated virtual network devices out of the box
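A rough sketch of the namespace approach (Linux iproute2; all names and addresses below are made up):
```sh
# put the daemon in its own network namespace with a veth pair to the host
ip netns add ipfsns
ip link add veth-host type veth peer name veth-ipfs
ip link set veth-ipfs netns ipfsns
# ...assign addresses, bring the links up, NAT/forward on the host side...

# filter inside the namespace, so every port the daemon uses is covered
ip netns exec ipfsns iptables -A OUTPUT -d 203.0.113.0/24 -j ACCEPT
ip netns exec ipfsns iptables -A OUTPUT -j DROP

# run the daemon inside that namespace
ip netns exec ipfsns ipfs daemon
```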
<richardlitt>
"go-ipfs is the main implementation of IPFS. It is the base distribution,"
<dignifiedquire>
ape Oo no
<Ape>
chromium 47.0.2526.106
<dignifiedquire>
Ape do you have javascripts enabled?
<Ape>
I think so, yes
<dignifiedquire>
hmm
<dignifiedquire>
any errors in the dev console?
<dignifiedquire>
richardlitt: no idea, that copy is from jbenet
<richardlitt>
Cool. No worries then.
<Ape>
dignifiedquire: "The key "shrink-to-fit" is not recognized and ignored."
<Ape>
Nothing else
<dignifiedquire>
very odd
<dignifiedquire>
any failed requests in the network panel? (open dev tools, go to network panel, reload)
jaboja64 has joined #ipfs
<Ape>
dignifiedquire: No errors
<dignifiedquire>
Ape, I don't get it :/
cemerick has quit [Ping timeout: 260 seconds]
ashark has joined #ipfs
<dignifiedquire>
works fine in Chrome and FF for me
<achin>
when i click the 'ipfs-update' link on the left sidebar, it correctly scrolls to the ipfs-update section, but the left sidebar highlights go-ipfs (instead of ipfs-update)
jaboja has quit [Ping timeout: 272 seconds]
<Ape>
dignifiedquire: Might be a font issue. If I disable body.font-family css it looks better
<ipfsbot>
go-ipfs/feature/env-var-docs 6f22271 Richard Littauer: Added env var info to init, dameon help...
<ipfsbot>
[go-ipfs] RichardLitt opened pull request #2195: Added env var info to init, dameon help (master...feature/env-var-docs) https://github.com/ipfs/go-ipfs/pull/2195
<Ape>
And actually it looks buggy exactly the same way on firefox, too
<Ape>
On a new profile
<Ape>
But FF seems to have some font errors on the log
<dvn>
ansuz, you should make all of your irc traffic go over cjdns then
Codebird has quit [Read error: Connection reset by peer]
Codebird has joined #ipfs
nycoliver has quit [Ping timeout: 256 seconds]
<Kubuxu>
dvn: (not an.suz) When I use my notebook it does.
<Kubuxu>
weechat on server and glowing-bear in browser connected via cjdns.
<Ape>
whyrusleeping: Well, usually pull requests are made against master. Maybe the solution is to not have a 0.4.0 branch, but instead a 0.3.x branch, with the 0.4.0 stuff being in master
<Ape>
I mean it would be possible to never make a branch for future releases, but instead make stable-branches if needed
<Kubuxu>
0.4 is already master; it is just a pain because 0.4 was previously a different branch and now PRs have to be recreated against master.
jaboja64 has joined #ipfs
NightRa has joined #ipfs
<Ape>
Yes, I know. But for the future, let's not make a 0.5.0 branch ever
<Ape>
Instead just develop experimental stuff on master and use some 0.4.0-stable branch for making 0.4.x releases
<Kubuxu>
People will clone from master by accident. The other solution is to implement gitflow.
<Kubuxu>
Where master is always stable and you develop on other branch.
<Ape>
I really think master can be the bleeding edge
lidel has quit [Remote host closed the connection]
<Ape>
It should of course always build successfully
<Kubuxu>
master is fetched by 'go get' so ...
lidel has joined #ipfs
<Ape>
Releases can be tags, and people can just use them for stability
<Kubuxu>
The problem with master being bleeding edge, for projects that people build themselves, is that they will build master and complain that it does not work.
<Kubuxu>
cjdns accounts for just that: we have master and crashey, which hardly ever lives up to its name. The workflow is: PRs and new code go to crashey, which is then tested (some people use crashey all the time). For a new version release: merge crashey to master, make final adjustments if necessary, and tag the release.
<Kubuxu>
It is almost git-flow.
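Kubuxu's crashey/master flow, expressed as plain git (branch and tag names are illustrative):
```sh
# day-to-day: PRs and new code land on the testing branch
git checkout crashey
# ...merge PRs, let people run it for a while...

# release: promote the tested branch to master and tag it
git checkout master
git merge --no-ff crashey
git tag -a v0.4.0 -m "release 0.4.0"
```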
<lgierth>
M-davidar: pong -- will be back in ~3h
kahiru has joined #ipfs
tilgovi has quit [Remote host closed the connection]
<whyrusleeping>
yellowsir: what is the domain name?
<whyrusleeping>
or just try 'ipfs ls /ipns/yourdomain.name'
m0ns00n has quit [Quit: undefined]
<yellowsir>
thx i'm trying
m0ns00n has joined #ipfs
<yellowsir>
ls worked
joshbuddy has quit [Quit: joshbuddy]
m0ns00n has quit [Client Quit]
<yellowsir>
thx!!
ELFrederich is now known as Guest86137
m0ns00n has joined #ipfs
Matoro has quit [Ping timeout: 256 seconds]
fuzzybear3965 has joined #ipfs
<whyrusleeping>
lgierth: status on the gateway multiplexing 0.3.* and 0.4.0 requests?
grahamperrin has joined #ipfs
<whyrusleeping>
yellowsir: thats interesting... could you please file an issue about ipfs dns not working?
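For reference, the two commands in question here, roughly (the domain is a placeholder):
```sh
# resolve the domain's dnslink TXT record to an /ipfs path
ipfs dns yourdomain.name

# list the directory behind the name via IPNS/DNS resolution (the workaround that did work)
ipfs ls /ipns/yourdomain.name
```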
grahamperrin has left #ipfs [#ipfs]
<voxelot>
whyrusleeping: publish and resolve are slow today
<whyrusleeping>
voxelot: on 0.3.* ?
<whyrusleeping>
or master(0.4.0) ?
<achin>
the ipfs gateway nodes need more coffee
<voxelot>
feel like we should have a site like a traffic report that will be like.. it's raining, ipfs resolve will be slow today
<voxelot>
so random
<voxelot>
no on 0.4-dev
<whyrusleeping>
okay
<whyrusleeping>
could be that more nodes are joining 0.4.0 network
<whyrusleeping>
it merged into master
<voxelot>
i do see more peers than normal
<whyrusleeping>
eh, i'd just wait it out
<whyrusleeping>
maybe do some 'ipfs dht' queries to see which parts are slow
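A few hedged examples of the kind of queries meant here (hashes and peer IDs are placeholders):
```sh
# how long does a DHT walk toward a peer take?
time ipfs dht query <peer-id>

# how long to find providers for a known block?
time ipfs dht findprovs <some-hash>

# and the full IPNS resolution path
time ipfs name resolve <peer-id>
```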
<achin>
a dashboard showing the number of 0.3x peers versus 0.4x peers would be neat. could be used to figure out for how much longer i should run both nodes
<whyrusleeping>
its likely that more nodes are joining with bad NAT situations
<whyrusleeping>
achin: that would be a pretty cool idea
<whyrusleeping>
lets ping lgierth about it
<whyrusleeping>
i think he has all that info anyways
<voxelot>
get those dirty peers off my network! ;)
<achin>
seems like a quick job for a small script and maybe rrdtool or something
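One possible shape for that script, counting connected peers by agent version (a sketch; it assumes `ipfs id --format` with the `<aver>` placeholder, and that the peer ID is the last segment of each swarm address):
```sh
# count currently-connected peers by their reported agent version
ipfs swarm peers | while read -r addr; do
  peerid=${addr##*/}                                  # peer ID is the last path segment
  echo "$(ipfs id -f '<aver>' "$peerid" 2>/dev/null)"
done | sort | uniq -c | sort -rn
```
The per-version counts could then be fed into rrdtool (or similar) for a 0.3.x-vs-0.4.x dashboard.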
<whyrusleeping>
achin: do it :D
<achin>
i'll put it on my list!
e-lima has quit [Ping timeout: 260 seconds]
<achin>
also whyrusleeping yayy for merging dev040 into master!
Matoro has joined #ipfs
pjz has quit [Quit: leaving]
<whyrusleeping>
achin: :D
<whyrusleeping>
its happening!
<achin>
i noticed that dev040 branch still exists, though.
pjz has joined #ipfs
<voxelot>
open the flood gates! so how's 0.4.1-dev going? ready to add permissions to publish?
<Kubuxu>
Performance tests of global network aren't that bad of an idea.
<whyrusleeping>
dignifiedquire: distributions? :D
<Kubuxu>
You might be able to correlate deployment of new versions with performance regressions. It will be hard, but it would be better than finding out after a few months that something has made the whole network slower.
<whyrusleeping>
my next step is finalizing the extraction of libp2p
joshbuddy has joined #ipfs
<whyrusleeping>
once we get that done, we can start others on developing libp2p code independent of ipfs, and then pulling the new changes in as we need
joshbuddy has quit [Client Quit]
ianopolous has joined #ipfs
ianopolous2 has joined #ipfs
ianopolous3 has joined #ipfs
ianopolous has quit [Ping timeout: 264 seconds]
e-lima has joined #ipfs
ianopolous3 has quit [Read error: Connection reset by peer]
ianopolous2 has quit [Ping timeout: 260 seconds]
ianopolous has joined #ipfs
<voxelot>
ipfs update is disabled :(
<voxelot>
or no maybe just slow atm as well
ianopolous2 has joined #ipfs
ianopolous has quit [Read error: Connection reset by peer]
<voxelot>
ERROR: Failed to query versions: context deadline exceeded
simonv3 has quit [Quit: Connection closed for inactivity]
ianopolous2 has quit [Read error: Connection reset by peer]
rendar has quit [Ping timeout: 272 seconds]
joshbuddy has joined #ipfs
nycoliver has joined #ipfs
rendar has joined #ipfs
computerfreak has quit [Remote host closed the connection]
Encrypt has quit [Quit: Quitte]
nycoliver has quit [Ping timeout: 264 seconds]
<Ape>
whyrusleeping: Is there a command for counting all peers in the network?
<achin>
Ape: i do this:
<achin>
ipfs diag net --timeout=120s |grep "connected to"
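To turn that listing into an actual count, the same one-liner can be piped through a counter, e.g.:
```sh
ipfs diag net --timeout=120s | grep -c "connected to"
```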
<richardlitt>
achin: Yeah. I'm putting that there because last week we got some feedback on that weekly in the main issue
<richardlitt>
which I think we want to minimize?
<richardlitt>
Stuff like "Broken link" should be put elsewhere.
<richardlitt>
although awesome_bot should help with that.
<achin>
just, makes sense
<achin>
i'm having a hard time reading this diff. did you just reverse the order of the last two things in the Updates section?
<achin>
oh wait, no you added a thing too
<richardlitt>
Oh, yeah
<achin>
those edits LGTM
<richardlitt>
I wanted project repos to be more prominent
<achin>
github and show diffs in the rendered markdown. awesome
<achin>
github is besthub
fiatjaf has joined #ipfs
<achin>
s/and/can/
<richardlitt>
heh
nycoliver has quit [Ping timeout: 250 seconds]
<richardlitt>
achin: let's change draft's name
<richardlitt>
what should it be?
<richardlitt>
weeklys? roundups?
<richardlitt>
draft sounds like they're incomplete. They're not.
<achin>
yeah, i noticed that
<richardlitt>
We should also change the naming schema to include version numbers
<achin>
version numbers?
<richardlitt>
`archive`?
<richardlitt>
published?
<richardlitt>
I like published.
<richardlitt>
Sounds final.
<achin>
published, excellent
<richardlitt>
By version numbers, I mean each file should have the version number
<achin>
version of what, though?
<richardlitt>
So, this just-merged one should be Weekly #2 $DATE.md
<achin>
"issue number", maybe?
<achin>
also, 2016-01-12.md documents the "January 5th" sprint. i wonder if we can rename the .md files so they more closely match the date of the sprint they are covering
<richardlitt>
Hmm. That's hard.
<richardlitt>
I think date of writing might be better
<richardlitt>
But you may be right, that would make sense more from a human perspective
<richardlitt>
2-2016-Jan-5th-sprint.md
<richardlitt>
Doesn't sound good.
<Ape>
Maybe the sprints should be renamed to just incrementing numbers. E.g. Sprint 2
<richardlitt>
We had it that way once
<achin>
if you fuzz the dates, we're already pretty close to having a file named 2016-01-05 contain something other than the january 5th sprint :)
<achin>
next week i think we'll have this problem, since last week's meeting was delayed by 1 day i think
<achin>
i think i'm leaning towards naming the .md files after the date of the sprint they are covering (we'd have to rename the current two files)
<richardlitt>
Well
<richardlitt>
I could also rename the sprints
<richardlitt>
Sprint #34
ashark has quit [Ping timeout: 250 seconds]
<Ape>
Any Weekly 34? Or Roundup 34?
<Ape>
*And
<richardlitt>
Yeah
<achin>
true. i don't have much opinion on that, since i don't participate in them. but that certainly would work a bit better
<richardlitt>
That would work better.
<richardlitt>
Let's open an issue!
<richardlitt>
Everyone, into the bikeshed!
<achin>
green! i want it green!
<Ape>
Bikeshedding happens because it's often just fun :D
<achin>
you're wrong, "fun" is a terrible color, let's paint it greeeeeen
<richardlitt>
Oh
<achin>
wait no blue
<richardlitt>
That's why I changed it from numbers.
<Kubuxu>
lgierth: it might not be easy (I think it would require an additional IP) but it would be awesome to have a host that would do a 304 to ipfs.io/ipns/$HOSTNAME as a transition mechanic from normal hosting to the gateway. Just something to think about.
<richardlitt>
Because Sprint #41 is issue #79
<richardlitt>
Which got really confusing.
<achin>
oooh
<achin>
i can see that yeah
<richardlitt>
Let's not go back to doing that.
<richardlitt>
I remember that was confusing as hell.
<richardlitt>
So: Weekly-2-Jan-5th.md
<richardlitt>
Weekly-1-2015-Dec.md
<richardlitt>
*Weekly-2-2015-Jan-5.md
<richardlitt>
That seems good to me.
<Kubuxu>
I think number and year+month are good enough
<achin>
not to be a total pain-in-the-ass, but is there a way we can name them so that when sorted alphanumerically they are also sorted by date?
<Kubuxu>
you have 4 weeklies in a month, you don't need a day
<richardlitt>
They should be sorted by number, not date.
<richardlitt>
1-2015-dec.md
<richardlitt>
2-2015-jan.md
<richardlitt>
3-2015-jan.md
<Kubuxu>
001
<richardlitt>
5-2015.feb.md
<richardlitt>
001-2015-dec.md
<richardlitt>
That looks fine to me.
<achin>
yeah, i like the 0-padded version
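For what it's worth, the zero-padding is what makes plain lexicographic sorting line up with issue-number order (filenames below are made up):
```sh
$ printf '%s\n' 2-2016-jan.md 10-2016-mar.md | sort
10-2016-mar.md
2-2016-jan.md          # unpadded: 10 sorts before 2

$ printf '%s\n' 002-2016-jan.md 010-2016-mar.md | sort
002-2016-jan.md
010-2016-mar.md        # padded: order matches issue number
```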
<richardlitt>
I think 002-2015-jan-5.md works best
<richardlitt>
because the date should match the sprint
<richardlitt>
Which doesn't change
<Kubuxu>
yeah it can be there
<richardlitt>
So the next will be 003-2015-jan-12.md
<richardlitt>
Alright. I am content with that.
<richardlitt>
Are ya'll content with that?
<achin>
i think i'm happy too!
<Ape>
It's green, right?
<richardlitt>
Yes.
<achin>
:D
<Kubuxu>
This gives us 20 years worth of weeklies, will we fit? :D
<dignifiedquire>
luigiplr: having fun with the linter? ;)
* luigiplr
contemplates suicide
<luigiplr>
That moment when you're happy the number of errors is in the 100 range
chriscool has quit [Quit: Leaving.]
<luigiplr>
dignifiedquire: i was thinking of some logic for the store
<luigiplr>
and adding some caching / throttling for requests
<luigiplr>
to prevent possible flooding in the future
<luigiplr>
perhaps a small abstraction layer?
<luigiplr>
something simple with async queue or a similar logic
<Codebird>
Question: When finding sources, does the software consider where it got related objects?
<dignifiedquire>
sounds like it would make sense yes
<luigiplr>
alrighty, after i finish this lint hell i'll start that PR
<dignifiedquire>
especially for expensive operations
<luigiplr>
indeed
<Codebird>
ie, if it has fetched object blahblah from a certain node, and blahblah contains a link to stuffstuff, then it's likely that same node has stuffstuff too.
<Codebird>
Querying that would be a lot faster than doing a full DHT lookup.
<Codebird>
I ask this because after fetching a .html file from a low-popularity site, it takes bloody ages for the images to load.
Akaibu has quit [Quit: Connection closed for inactivity]
<dignifiedquire>
Codebird: I think this is not implemented yet, but I know there was some talk about doing this //cc whyrusleeping
<ralphtheninja>
jbenet: what's the status of filecoin?
<luigiplr>
dignifiedquire: was the lint happy on the master branch?
<dignifiedquire>
luigiplr: yes
vijayee has joined #ipfs
<luigiplr>
mmm
<luigiplr>
I call shenanigans.
<luigiplr>
only.. 97 more to go
vijayee has quit [Client Quit]
<jbenet>
ralphtheninja: it's coming but we're a bit slammed with ipfs. we'll be collecting potential beta testers soon enough though, if you're interested
<jbenet>
ralphtheninja: oh didnt see you in the hangouts!
<ralphtheninja>
jbenet: I definitely am
<ralphtheninja>
jbenet: I had to go get my dog :)
patcon has quit [Ping timeout: 240 seconds]
zorglub27 has joined #ipfs
<jbenet>
ralphtheninja: ahhh cool :)
ashark has quit [Ping timeout: 240 seconds]
<jbenet>
(pictures!)
<ralphtheninja>
jbenet: I really like the idea of decentralised data and creating incentives for that
jgraef has joined #ipfs
drwasho has joined #ipfs
<drwasho>
hey folks
<jbenet>
drwasho hey-- how's it going
<dignifiedquire>
hey jbenet :) any chance we can talk later this week?
<jbenet>
ralphtheninja: indeed same.
<dignifiedquire>
and ralphtheninja yeeees for filecoin :)
m0ns00n has joined #ipfs
<jbenet>
dignifiedquire: yep! i emailed you a few days ago about setting up a time-- didnt get?
<jbenet>
fffff sorry, mail client didn't send it -- it's in drafts.
<dignifiedquire>
for static props
<luigiplr>
Ohhh crap good catch dignifiedquire :D
<dignifiedquire>
jbenet: well I haven't hacked your mail account yet, so I'm afraid I won't be able to read those ;)
<luigiplr>
_yet_
<ralphtheninja>
if I want to run the latest "stable" ipfs, which version should I run?
<ralphtheninja>
I'd really like to run as bleeding edge as possible
<jbenet>
dignifiedquire: appreciate the kindness <3
<jbenet>
ralphtheninja: run whatever's on master
<jbenet>
it's dev0.4.0
<ralphtheninja>
kk
<jbenet>
err 0.4.0-dev (my silly naming choice at the beginning)
<ralphtheninja>
hehe
<ralphtheninja>
is the versioning semver?
zorglub27 has quit [Quit: zorglub27]
<jbenet>
ralphtheninja: not yet... it will be vanity-semver "<vanity>.<major>.<minor>.<patch>"
<jbenet>
so you can treat the 0.4.0 as 4.0.0
<jbenet>
4.0.0 semver*
<ralphtheninja>
oh okay
<dignifiedquire>
jbenet: answer sent, hopefully not to drafts
<jbenet>
semver causes a marketing problem with end-users. it's good for dev, but makes it annoying to be at "version 42" of a product. hard to make a proper "2.0" product.
<ralphtheninja>
lol yep
<ralphtheninja>
so when would the vanity be bumped?
<dignifiedquire>
jbenet: something is wrong with your email client, I just got a partial copy of the email you sent before, again Oo
<jbenet>
dignifiedquire: that was my mistake, sorry, sent it twice.
<jbenet>
ralphtheninja: when you want to signify a huge, fundamental product change. think movie/game sequel. or
<dignifiedquire>
jbenet: no prob, just looked like it was starting to get its own life
<jbenet>
hahahah mail correspondence is an awesome training set for AIs
<ralphtheninja>
jbenet: or maybe when you have launched filecoin? ;)
<ralphtheninja>
1.4.0
m0ns00n has quit [Quit: undefined]
O47m341 has quit [Read error: Connection reset by peer]
fuzzybear3965 has joined #ipfs
<luigiplr>
dignifiedquire: All done/
<dignifiedquire>
luigiplr: looking
<luigiplr>
:)
patcon has joined #ipfs
kvda has joined #ipfs
prf has quit [Remote host closed the connection]
jfis has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
drwasho has quit [Quit: Leaving.]
Encrypt has quit [Quit: Sleeping time!]
yellowsir has quit [Quit: Leaving.]
<luigiplr>
dignifiedquire: just to clarify
<luigiplr>
when you say more es7ish
<luigiplr>
are you referring to me defining the state in the constructor rather than as a property of the class?
<dignifiedquire>
I wrote an example
<ralphtheninja>
are there any instructions for how to build go-ipfs from source without using 'go get ..'?
<ralphtheninja>
I assume 'go get ..' fetches the source from github?
<luigiplr>
Oh perf okay
<dignifiedquire>
it makes it so you don't need to write a constructor
<ralphtheninja>
<-- go newb
<dignifiedquire>
luigiplr: going to sleep now..will check it out tomorrow, thanks for all the work :) I bet you know your way around this codebase pretty good now :P
<luigiplr>
Haha yah
<luigiplr>
lol
<luigiplr>
nighto
<luigiplr>
hopefully it's mergeable come morning ;)
<dignifiedquire>
ralphtheninja: yes it does
<dignifiedquire>
into your gopath
<dignifiedquire>
it will then be in gopath/src/github.com/ipfs or sth like that
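A sketch of doing it without `go get`, following the GOPATH layout dignifiedquire describes (the paths and the make target are assumptions about the repo's tooling):
```sh
export GOPATH="$HOME/go"
mkdir -p "$GOPATH/src/github.com/ipfs"
cd "$GOPATH/src/github.com/ipfs"

# clone instead of letting `go get` fetch it
git clone https://github.com/ipfs/go-ipfs
cd go-ipfs

# build and install the ipfs binary (roughly `go install ./cmd/ipfs` under the hood)
make install
```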