devyn changed the topic of #elliottcable to: yorickpeterse is undergroin
<alexgordon> risky click
<alexgordon> liveleak link from a russian? it would only be riskier if it were on 4chan
<whitequark> it's sfw
yorick has quit [Remote host closed the connection]
<Sgeo_> Anyone know where I can buy some rainscreen?
<Sgeo_> And... it turns out to be a real thing in a different context, dammit
<Sgeo_> The Onion has a weather thing that is saying "Slather on the rainscreen"
<purr\Paws> [Issues] ELLIOTTCABLE comment on issue #6: further alternative, although not an excellent one: block the creator of a new execution by default, and immediately/synchronously execute the new one up until it unstages itself.... https://github.com/Paws/Issues/issues/6#issuecomment-31101611
<ELLIOTTCABLE__> cuttle, alexgordon, whitequark: back to the train of thought where I make “holes” first-class,
<ELLIOTTCABLE__> and replace all libside references to "executions", with references to specific holes.
<ELLIOTTCABLE__> I.e. Anytime you're referencing/storing/holding an execution, it's annotated with a target-breaking-point. Or home.
<ELLIOTTCABLE__> hole*
<ELLIOTTCABLE__> If you stage it with a resumption-value, that staging is held as invalid, and won't result in resumption of the code, until the code reaches a point *matching* the one it was targeted at.
<ELLIOTTCABLE__> Basically, long story short, this would allow you to target *future* periods of execution with resumption, instead of exclusively the point it was paused at.
<ELLIOTTCABLE__> Solves specifically the situation where you're providing the first parameter to an execution that has to do other (unstaging-involving) work *before* it can take its first external parameter
<ELLIOTTCABLE__> making parameter-coconsumption explicit instead of implicit
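(A minimal sketch of the hole-targeting idea above, modelling an execution as a TypeScript generator that yields the name of the hole it is waiting at; every name here is a hypothetical illustration, not Paws API.)

    // Toy model of hole-targeted staging (hypothetical names, not Paws API).
    // An execution yields the label of the hole it is waiting at; resuming it
    // with exec.next(value) "fills" that hole.
    type Hole = string;
    type Execution = Generator<Hole, void, unknown>;

    interface Staging { value: unknown; target: Hole }

    // A staging aimed at a hole the execution has not reached yet is simply
    // held until the execution is actually waiting at that hole.
    function run(exec: Execution, stagings: Staging[]): void {
      let state = exec.next();                        // run until the first hole
      while (!state.done) {
        const hole = state.value;
        const i = stagings.findIndex(s => s.target === hole);
        if (i === -1) return;                         // nothing targets this hole yet
        const [s] = stagings.splice(i, 1);
        state = exec.next(s.value);                   // fill the hole and resume
      }
    }

    // The first staging targets a *future* hole ("second"), so it is not
    // consumed until the execution reaches that point.
    function* worker(): Execution {
      const a = yield "first";
      const b = yield "second";
      console.log(a, b);                              // "right now" "for later"
    }
    run(worker(), [
      { value: "for later", target: "second" },
      { value: "right now", target: "first"  },
    ]);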
<whitequark> ELLIOTTCABLE__: what is hole?
<whitequark> also if I understood anything from this musing, then it's that you want any computation to return a promise
<whitequark> basically
<whitequark> but honestly, I don't understand most of the words you've said. you should probably write a page somewhere with all the terms you've invented
alexgordon has quit [Quit: My iMac has gone to sleep. ZZZzzz…]
<ELLIOTTCABLE__> lol whitequark
<purr> lol
<ELLIOTTCABLE__> hole is Micah's term
<ELLIOTTCABLE__> Think "yield" when using explicit continuations or coroutines,
<ELLIOTTCABLE__> or any call-site when using implicit continuations / CPS, like Paws
<ELLIOTTCABLE__> when a procedure is stopped/paused/frozen, 'unstaged,' then Micah describes it as "having a hole, waiting to be filled."
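(For the yield analogy, a tiny illustration with plain TypeScript generators, not Paws itself.)

    // A generator suspended at a yield "has a hole waiting to be filled";
    // next(value) fills it and the procedure continues.
    function* greet(): Generator<string, void, string> {
      const name = yield "waiting for a name";        // <- the hole
      console.log(`hello, ${name}`);
    }
    const g = greet();
    g.next();                                         // runs up to the hole
    g.next("micah");                                  // fills it; prints "hello, micah"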
<ELLIOTTCABLE__> as for the rest of the terms: yes. I know. That's not news. Everybody knows I can't communicate. /=
<ELLIOTTCABLE__> Micah generally translates for me, to people who speak TrainedComputerScience, instead of speaking ElliottCable
<ELLIOTTCABLE__> at least, those who haven't already had Paws' concepts and terminology explained to them.
<ELLIOTTCABLE__> For some reason, I thought I'd already walked you through Paws, whitequark. Sorry, if not. Didn't mean to highlight you if you wouldn't understand. )'=
<whitequark> ELLIOTTCABLE__: I've asked you to walk me through like
<whitequark> ten times
* whitequark shrugs
<purr> ¯\(º_o)/¯
<ELLIOTTCABLE__> Sorry. I really am. Just really busy.
eligrey has quit [Read error: Connection reset by peer]
<ELLIOTTCABLE__> I'm up in Alaska right now, until the 1st. Maybe after that I'll spend some time online?
eligrey has joined #elliottcable
<ELLIOTTCABLE__> You owe me some embedded-dev tutorial *anyway*. Y'know, 'cause you love me and all that.
<whitequark> yeah
* whitequark has just watched Django Unchained
<whitequark> poor german dude :/
<cuttle> hi
<purr> cuttle: hi!
<cuttle> what should I watch that's on netflix
<cuttle> ELLIOTTCABLE__: quite honestly I think that we're getting bogged down
<cuttle> ELLIOTTCABLE__: when worrying about issues like that
<cuttle> ELLIOTTCABLE__: we're formulating the whole continuations/generators/yielding at *way* too low a level
<cuttle> and it's not giving us any extra utility or flexibility a la dynamic assembly language
<cuttle> it's just making us have to make hard dumb decisions we don't need to be making
<ELLIOTTCABLE__> Don't understand
<ELLIOTTCABLE__> idk, I'm pretty done with the whole thing. It's more low-level, nitty-gritty specifics that sometimes end up tangled, and need some massaging … that's mostly what's going on, there
* cuttle shrugs
<purr> ¯\(º_o)/¯
<ELLIOTTCABLE__> yeah. Same page. Don't care much.
<ELLIOTTCABLE__> Hey. I brought an iPhone for you up to alaska with me :D
<ELLIOTTCABLE__> Text me your address again? I'll mail it out tomorrow :D :D :D
<cuttle> :D :D :D texting
<cuttle> ELLIOTTCABLE__: just to try to climb a mountain near Paws-land, I want to try to make a simple language that formulates things much more closely to how the everyday Paws programmer will conceive of them
<cuttle> because we've been trudging through the swamps
<ELLIOTTCABLE__> Rephrase
<cuttle> I want to get a better more high-level view of what we're trying to do
<cuttle> and get a fun language with some Pawsness in it running
<cuttle> and then maybe design a VM and macro system that can underlie it
<ELLIOTTCABLE__> That's a great idea.
<ELLIOTTCABLE__> Paws is designed backwards, in some ways. low-to-high instead of high-to-low
<ELLIOTTCABLE__> Dunno if I'd try to be *too* paws-like, though.
<ELLIOTTCABLE__> Maybe take what you can from my error-handling observations, as well as trying to preserve the fall-forward semantics as perceived by the user
<cuttle> yeah, it won't be super paws-like
<cuttle> plan:
<cuttle> atomo/slate-esque multimethod prototypal
<cuttle> everything is generators
<ELLIOTTCABLE__> Slate? Unfamiliar
<ELLIOTTCABLE__> Everything is generators?
<cuttle> yeah, basically every name denotes a series of values
<cuttle> so the whole
<cuttle> print(list each)
<ELLIOTTCABLE__> Ah k
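(A loose sketch of the "every name denotes a series of values" flavour, using TypeScript generators; the helper names are made up for illustration.)

    // A "name" is just a lazy sequence of values, so `print(list each)`
    // reads roughly as "print each value the name `list` produces".
    function* each<T>(xs: Iterable<T>): Generator<T> {
      yield* xs;
    }
    function print<T>(values: Iterable<T>): void {
      for (const v of values) console.log(v);
    }
    const list = [1, 2, 3];
    print(each(list));                                // 1, 2, 3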
<cuttle> and slate and atomo are like
<cuttle> io fucking smalltalk
<ELLIOTTCABLE__> Kkk
<cuttle> actually throw in like
<cuttle> multimethods
<cuttle> is the main thing
<whitequark> ELLIOTTCABLE__: "low-to-high" isn't backwards lol
<purr> lol
<whitequark> otherwise you get, well, haskell
<ELLIOTTCABLE__> lolwat no
<cuttle> whitequark: ...which is a beautiful language
<cuttle> whitequark: you don't design ruby from the perspective of asm
<ELLIOTTCABLE__> If it's not *experience*-driven, it doesn't matter.
<whitequark> cuttle: right, you design ruby from the perspective of ruby object model.
<cuttle> right
<ELLIOTTCABLE__> Only the absolutely lowest level of abstractions (assemblers, C-level shit) need to be lower-end informed
<cuttle> which is high-level and close to the programmer's brain
<whitequark> not from, say, how you would like rails to look in it
<ELLIOTTCABLE__> Everything else needs to be highest-end informed
<ELLIOTTCABLE__> Disagreez
<ELLIOTTCABLE__> Should absolutely design Ruby based on how you want a rails to feel.
<whitequark> ELLIOTTCABLE__: I disagree that execution model, for example, is anyhow low-level
<cuttle> whitequark: I think you'd agree that our current specs for paws are very low-level
<whitequark> it's irrelevant to the actual execution process. the runtime can and will change thoroughly
<cuttle> compared to how programmers ideally will eventually think in paws
<whitequark> cuttle: naw
<whitequark> building a language involves composing abstractions
<whitequark> you *have* to do it low-to-high, if you want the result to be coherent
<whitequark> granted, you'd want to keep the intended high-level result in mind, but you still build it low-to-high
<cuttle> whitequark: python as a language was not designed in terms of stacks
<cuttle> whitequark: even though the original vm was stack-based
<cuttle> whitequark: that's what we're talking about here
<whitequark> most of the things you want to have on the highest level, at the end of the day, are either flat-out impossible or have a cost you're not willing to pay
<whitequark> cuttle: I don't think so. python's spec doesn't include stacks.
<whitequark> paws's spec does include executions and whatnot
<whitequark> or am I missing something and you're discussing the implementation, and only implementation, here?
<cuttle> whitequark: what we are saying is that
<cuttle> whitequark: currently paws's spec includes executions
<ELLIOTTCABLE__> Paws is unique, and definitely low-to-high, yes.
<cuttle> whitequark: ideally paws's spec should not include executions
<cuttle> whitequark: as python's spec does not include stacks
<whitequark> hmpf
<cuttle> and i mean maybe ideally paws should still have executions in the spec, idek, but that's not the point here
<cuttle> we are describing something *wrong* about paws's spec
<cuttle> the way in which it differs from python's spec is a *problem*
<ELLIOTTCABLE__> LOL wow gettin' kinda heavy here about something that's not even real
<purr> LOL
<ELLIOTTCABLE__> So. How 'bout dem Red Sox?
<cuttle> haha
<ELLIOTTCABLE__> Hey. So. I'm not writing a programming language right now.
<whitequark> huh?
<ELLIOTTCABLE__> I'm writing a terminal-pretty-printing library. Like a useless boss.
<ELLIOTTCABLE__> I suck at programming.
<cuttle> haha
<cuttle> ELLIOTTCABLE__: i thought you already wrote one
<ELLIOTTCABLE__> I wrote three. They all suck.
<whitequark> that's because you're using nodejs
<ELLIOTTCABLE__> no, it's just because I suck
<ELLIOTTCABLE__> :P
<whitequark> one does not exclude another
<whitequark> and I'm pretty fucking solid on that nodejs part
<cuttle> yeah, javascript is awful
<cuttle> and node.js is like
<cuttle> more awful on top of it
<cuttle> both like maximize cognitive load in different ways
<trolling> javascript is alright
<trolling> it's much easier to use directly than x86 assembly which I consider more or less equivalent
<whitequark> I don't think you can successfully pull it off with that nick
<whitequark> yeah, that's irrelevant
<trolling> it's the nick talking, I swear
<whitequark> I'm not even sure what you're trying to say with that last one
<trolling> dynamic typing is bad
<cuttle> trolling: haha, good comic
<whitequark> the only thing x86 assembly and js have in common is how they're widely deployed and hard to change
<whitequark> there's no further conclusion, no point to be taken
<trolling> they're both turing complete
<trolling> they can both be used to their full extent using only the ASCII character set
<trolling> they both come without a free set of steak knives
<cuttle> trolling: haha
<cuttle> whitequark: those two things are pretty important similarities
<cuttle> whitequark: and there are indeed further conclusions to be taken
<whitequark> I think that most people comparing js to x86 are jealous of the (incorrect) mental image of someone writing assembly they have in their head
<cuttle> whitequark: since they are the only way to make use of a widespread architecture
<trolling> they're both common languages for other languages to compile *to*
<cuttle> whitequark: in no way does anyone have that motivation
<trolling> which is actually where I was going with that
<whitequark> and the rest make the point without considering it further, thus spreading the confusion
<cuttle> nobody has asm envy
<cuttle> there are tons of languages that compile to js
<cuttle> just like there are tons of languages that compile to asm
<cuttle> (not nearly as many of course)
<cuttle> and it's for the same reason
<trolling> well, not quite the same reason, but a similar one
trolling is now known as prophile
<whitequark> the problem with that comparison is that, depending on your view on it, it's either factually incorrect or useless
prophile has quit [Changing host]
prophile has joined #elliottcable
<whitequark> factually incorrect when you try to attribute something to js which it doesn't have
<whitequark> useless, when you could say "widely deployed platform" instead of "the new x86"
<prophile> the similarities are useful beyond throwing them around as marketing terms
<cuttle> whitequark: ...
<whitequark> ("hard to change" and "tons of languages that compile to it" are direct consequences.)
<cuttle> widely deployed marketing platform is not a useful analogy
<cuttle> asm *is* in a lot of ways
<prophile> the ability to reuse code in backends between machine code targets and JS is something that quite a few people are interested in
<whitequark> cuttle: and by platform here I mean language execution platform, if it's not clear
<whitequark> there's nothing specific to either x86 or assembly that is important to the comparison
<whitequark> rather, s,assembly,machine code,.
<whitequark> prophile: elaborate?
<prophile> whitequark: have you any familiarity with, for instance, emscripten?
<whitequark> prophile: sure
<prophile> so the emscripten lot currently operate using what you might charitably call hackery
<prophile> and there's a movement in that project toward redoing most of it as an LLVM backend
<whitequark> cuttle: what useful conclusion can you draw from an analogy with x86 which you cannot draw from a simple statement that js is a widely deployed execution platform?
<whitequark> which is my point in a nutshell.
<prophile> which is a bit of a problem because LLVM backends are more oriented towards dumping out blocks of machine code than structured programs
<whitequark> prophile: yes, I know
<prophile> so there's been quite a bit of movement on the front of representations of machine-like instruction sequences in structured programming languages
<prophile> a lot of it on how to rotate basic blocks into something that can be represented with the normal set of control flow operations
<whitequark> I think emscripten's relooper took care of that already?
<prophile> to a rather reasonable extent yes, but it's not actually a general solution
<prophile> and there's still tons and tons of scope for optimisation
<whitequark> I know it can't represent indirectbr, but that sounds like an implementation deficiency
<prophile> I don't think emscripten makes much/any use of break and continue for instance, which can bugger up the control flow quite a bit
<whitequark> I've implemented my own relooper in furnace-avm2 so I know a bit on the topic
<prophile> ah, shiny
<prophile> it also turns out that with reconstructing control flow the ideal set of optimisations can change a bit
<whitequark> you could always fall back to a giant switch statement, you see
<prophile> I... think so?
<prophile> I wonder about how phi gets handled in that case, but yeah
<whitequark> as usual, with temporary variables
<prophile> of course
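(A sketch of the "giant switch" fallback being described: any CFG can be emitted as a label variable dispatched in a loop, with phi nodes lowered to ordinary temporaries assigned on each incoming edge. The CFG below is invented for illustration.)

    // Structured emission of an arbitrary CFG: a block label plus a
    // dispatch loop, instead of reconstructed if/while control flow.
    function collatzSteps(n: number): number {
      let steps = 0;
      let phi_n = n;               // "phi" for n: assigned on every incoming edge
      let block = "check";         // current basic block
      while (true) {
        switch (block) {
          case "check":
            if (phi_n === 1) return steps;
            block = phi_n % 2 === 0 ? "even" : "odd";
            break;
          case "even":
            phi_n = phi_n / 2;     // edge even -> check
            steps++;
            block = "check";
            break;
          case "odd":
            phi_n = 3 * phi_n + 1; // edge odd -> check
            steps++;
            block = "check";
            break;
        }
      }
    }
    console.log(collatzSteps(6)); // 8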
<prophile> most of this I hear from a combination of a friend of mine who's doing a phd in model checking and the usual flurry of emails on the LLVM mailing list
<prophile> so it's already second-hand from me
<prophile> caveat emptor
<prophile> but it sounded quite interesting
<whitequark> sooo
<whitequark> <+prophile> the similarities are useful beyond throwing them around as marketing terms
<whitequark> but I still don't see how.
<whitequark> emscripten can reuse LLVM's target-independent optimizer specifically because it is not tailored for any specific target, machine-code or not
<whitequark> and it's utterly incompatible with all target-dependent machinery LLVM has
<prophile> I don't know that that's true inre the second point
<prophile> particularly since there are a number of things in LLVM which are canonicalisations only in the hope that they're handled in the target-dependent code
<prophile> and that kind of stuff is interesting to be able to reuse for targeting a structured language rather than machine code
<whitequark> the transformation still only depends on the C abstract machine, though
<prophile> not sure I follow?
<whitequark> those canonicalizations aren't target-dependent
<whitequark> either that's the passes specifically written for targets, or
<prophile> there is middle ground between specific to an ISA and completely target independent
<whitequark> they're passes which optimize code based on data layout
<prophile> SelectionDAG recombination things don't fall into either of those categories but still can be shared, at least in parts, between backends
<whitequark> SelectionDAG is only useful for machine code targets, though
<prophile> which comes back to the idea of hooking up emscripten as an ordinary LLVM backend
<whitequark> LLVM had a C backend, it didn't use any of the target-dependent machinery. I don't see how/why JS one would be fundamentally different
<whitequark> I don't really see a reason to pull it into LLVM either, though
<whitequark> but maybe there's one.
<prophile> the usual reason for pulling things in-tree - when things are changed it's kept up to date
<prophile> the C backend sucked
<whitequark> (kept up to date) that would still be a sole responsibility of the emscripten devs
<whitequark> llvm guidelines basically say that a prerequisite for getting your backend in-tree is to have at least 1-2 devs working on keeping it up to date, full-time
<prophile> no, it wouldn't
<prophile> well, yes
<prophile> hang on
<prophile> no
<prophile> if someone breaks the build in the emscripten backend they'll have to fix it
<whitequark> for trivial changes, yes
<whitequark> if something is being significantly modified, the emscripten team would either have to fix it themselves, or the backend goes out-of-tree again
<prophile> what I'm thinking of here is the incremental but large scale structural changes which are usually the cause of flailing on people with out-of-tree backends
<whitequark> sure, it would be cool to just drop maintenance to LLVM core team, but it ain't gonna happen :p
<whitequark> it's not linux. whoever is interested in having backend in-tree is responsible for fixing it.
<prophile> that hasn't been what I've seen (or at least, used to see when I paid attention) on the minor backends
<prophile> we're drifting off the point though
<whitequark> it may have changed lately. you could see that most, if not all, backends have a guy assigned to maintain it full-time
<prophile> which was that there are interesting things to be looked at which structured languages have in common with machine code when used as target languages for compiling
<whitequark> yeah
<prophile> I maintain that there are
<whitequark> which?
<prophile> shared optimisations
<whitequark> which?
<prophile> those will probably do for a start
* whitequark sighs
<whitequark> I'm an llvm committer, I know how llvm works
<prophile> cool
<whitequark> you don't need to point me to the basic documentation
<prophile> I was doing so to make a point, not to cast aspersions on your knowledge
<whitequark> I'm asking specifically because I don't see from which optimizations, apart from C abstract machine optimizations, emscripten could benefit
<prophile> are C abstract machine optimisations not enough?
<whitequark> but they're not related to the similarities, or lack thereof, between x86 and js. they are defined exclusively in terms of the C abstract machine
<whitequark> of which LLVM IR is a very close translation
<whitequark> and incidentally, emscripten doesn't need to be a backend in order to use any of the passes on the page you linked.
<prophile> I'm aware
<prophile> okay, let me slightly modify my position here
<whitequark> I was thinking that *perhaps* being a backend would allow it to use the results of LLVM's analyses
<whitequark> like AA, or... well, actually, probably just AA
<prophile> it's interesting enough that something like LLVM can be used to compile to both structured languages and machine code
<prophile> of which JS and x86 are probably the most common respective examples
<whitequark> but I still don't see any way you could use that in a JS backend
<whitequark> prophile: let me rephrase that a bit
<prophile> not in any way I can think of which you couldn't do in LLVM
<whitequark> "you can translate the bytecode corresponding to C abstract machine to both real machine code and JS"
<whitequark> which is, well, obvious, since all three are turing-complete
<prophile> granted, creating efficient real machine code and efficient JS is what's more interesting
<whitequark> two of the three reasons https://github.com/kripken/emscripten/wiki/LLVM-Backend lists are speed and ease of use
<whitequark> and I suspect they just want a clean-room rewrite to get rid of legacy crap emscripten accumulated over the years
<prophile> I can empathise
<whitequark> prophile: (efficient) sure, but I don't see which similarities could allow you to do that
<prophile> mostly the trivial things like jumps being costly
<whitequark> are they?
<prophile> okay, not costly
<prophile> non-free
<prophile> as a general rule of thumb it's going to be cheaper not to branch than to branch
<prophile> put it that way
<whitequark> I'm not sure I understand what exactly you're saying
<whitequark> neither LLVM IR nor JS really has jump instructions that are either taken or not taken
<prophile> rrghr
<prophile> right
<whitequark> (and in x86, the cheapest is not "not to branch", it's "always do same thing")
<prophile> there are things which are slow on either target
<prophile> or fast on either target
<prophile> where the decisions in optimisation are the same
<prophile> deleting dead code is an even more trivial example
<whitequark> sure, but that is not a result of similarity between x86 and js. that is the property of, in this case, the C abstract machine
<whitequark> *anything* that executes code would work faster if there were, say, no recalculation of constants
<prophile> of course it's a similarity between the two, the C abstract machine has no optimisation because it has no notion of efficiency of actually being executed
<prophile> because it's an abstract machine
<prophile> things that are optimised in LLVM IR aren't to make the LLVM IR itself somehow more efficient, it's to produce more efficient output from the backends
<whitequark> it's just as much a similarity between the x86 target and the js target as it is between the js target and a ruby target
<prophile> with the possible exception of code size shrinking which may make LLVM itself use less memory
<prophile> whitequark: I don't disagree
<whitequark> as I've stated above, the similarity, when stated, is either incorrect or useless. in this case, it's useless
<prophile> well it's not useless, is it, because you can use LLVM and get efficient code for both x86 and JS
<whitequark> "let's execute less instructions" really doesn't need anything except a concept of a sequential machine executing the code
<prophile> it's useful because the same optimisation can produce a performance increase in both
<prophile> the fact that it's a trivial optimisation doesn't change that
<whitequark> actually, that may not even be true
<whitequark> well, forget the last statement, it's too specific a case and not interesting
<whitequark> let me rephrase again: what would you gain from writing your algorithm while keeping in mind this similarity between x86 and js, as opposed to keeping in mind just the C abstract machine, which, indeed, is a sequential execution machine?
<prophile> sigh
<prophile> okay, here's another optimisation
<prophile> unsigned divide by two can be turned into shift right by 1
<prophile> with the knowledge that, on practical implementations, it is likely to be cheaper
<whitequark> yeah, this is useless on js
<prophile> even though it is the same number of instructions
<prophile> that's not necessarily true
<prophile> JS, from what I remember, is all doubles
<prophile> and in an implementation that's going to be true until whatever implementation's optimiser figures out it's dealing with ints
<whitequark> mhm
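(A concrete version of the divide-by-two example and the doubles caveat; illustrative only, runnable in any JS engine.)

    // In C, x / 2u on an unsigned int can be strength-reduced to x >> 1.
    // JS numbers are doubles, so the "same" rewrite changes semantics unless
    // the engine already knows it is dealing with integers: >>> coerces its
    // operand to a 32-bit unsigned integer first.
    const x = 7;
    console.log(x / 2);      // 3.5  (double division)
    console.log(x >>> 1);    // 3    (uint32 shift; floor(x / 2) for integer 0 <= x < 2^32)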
<prophile> this discussion is getting very uninteresting very quickly
<prophile> I'm sure you see my point
<whitequark> yes, that is right. but this neither requires you to write a backend nor makes the comparison to x86 useful in general
<prophile> I don't really know what you mean by useful if that doesn't qualify
<whitequark> for example, you could note that MIPS and ARM are similar: both are RISC architectures, have lots of registers, no memory-to-memory transfer instructions, require constant islands, and so on
<whitequark> those are rather concrete similarities which allow you to share code between backends, roughly predict performance and code size for one backend based on another, maybe something else
<whitequark> you could draw a list of similarities between x86 and x86_64, or a smaller one for x86 and ARM, with much the same results
<whitequark> the similarities between js and x86 are largely philosophical. yeah, they're both sequential machines. it's like saying that both english and mandarin chinese are spoken with lips and tongues
<cuttle> whitequark: you can look at what has happened, industry-wise, regarding asm
<cuttle> whitequark: and make predictions and have a better understanding of js accordingly
<whitequark> sure, you could draw it and be technically correct, but it's more useful for throwing marketing terms around than any actual work
* cuttle sighs
<cuttle> you're being absolutely ridiculous here
<whitequark> cuttle: so, what predictions are you drawing?
<cuttle> none
<cuttle> nothing useful
<cuttle> at all
<cuttle> completely useless
<cuttle> no predictions
<cuttle> you're right
<cuttle> man you're smart
<whitequark> I, for example, look at how the number of ARM devices surpassed x86 by a huge margin
<whitequark> what does that mean for JS?
<whitequark> I'm serious by the way, does it mean that JS will be eclipsed by a more power-efficient competitor with a liberal licensing scheme?
* cuttle sighs
<cuttle> power isn't really relevant
<cuttle> since we're talking about browsers vs hardware
<whitequark> exactly my fucking point
<cuttle> and people don't choose browser for their power efficiency
<cuttle> whitequark: so you choose a way in which they're *not* comparable
<cuttle> good job
<cuttle> you've proven that browsers and hardware are NOT THE SAME THING!
<cuttle> someone get this fucking guy published
<cuttle> spread the word
<cuttle> whitequark is a genius who has figured out that browsers are not hardware!
<cuttle> jesus fucking christ
* cuttle wonders if whitequark has seen an analogy in his life
<whitequark> yeah, would you be so kind as to channel half of your capability for sarcasm to listing the, you know, useful predictions?
<whitequark> apart from the usual "we got a billion people to use this thing, now it's hard to change, who could've thought?!"
<cuttle> whitequark: it doesn't take anyone any effort to call js the asm of the web
<cuttle> whitequark: and it doesn't cause any incorrect thinking
<cuttle> whitequark: and it's not caused by asm envy
<cuttle> whitequark: it's just a similarity that has been noticed
<cuttle> whitequark: you've talked more about how they're not exactly the same fucking thing than anyone ever has talked about them being similar
<cuttle> a fucking idle analogy like that doesn't have to have a long track record of excellent predictions about an industry to justify its meager existence
<cuttle> you talk about it being useful
<cuttle> but, even if it never led to a single fucking technical breakthrough
<cuttle> it would be more useful than you complaining about it
<cuttle> anyway I have to go
<whitequark> cuttle: to paraphrase you, I'm fucking tired of hearing about this shit
<whitequark> and that's it.
eligrey has quit [Quit: Leaving]
purr has quit [Ping timeout: 272 seconds]
cuttle has quit [Ping timeout: 272 seconds]
yorickpeterse has quit [Ping timeout: 272 seconds]
cuttle has joined #elliottcable
purr has joined #elliottcable
yorickpeterse has joined #elliottcable
purr has quit [Ping timeout: 240 seconds]
purr has joined #elliottcable
yorickpeterse has quit [Changing host]
yorickpeterse has joined #elliottcable
yorickpeterse has quit [Quit: The NSA took my baby]
yorickpeterse has joined #elliottcable
yorickpeterse has quit [Client Quit]
yorickpeterse has joined #elliottcable
yorickpeterse has quit [Changing host]
yorickpeterse has joined #elliottcable
yorickpeterse has quit [Client Quit]
yorickpeterse has joined #elliottcable
purr has quit [Ping timeout: 240 seconds]
purr has joined #elliottcable
yorick has joined #elliottcable
Sgeo_ has quit [Read error: Connection reset by peer]
<whitequark> can anyone here recognize the music from http://www.youtube.com/watch?v=6fuy8go5-dc ?
<whitequark> (I advise you not to watch the video)
<whitequark> hm
<whitequark> Symphony No. 9 in D Minor, Op. 125 "Choral": II. Molto vivace
<whitequark> Chicago Symphony Orchestra
<whitequark> apparently
<whitequark> yep, it is.
alexgordon has joined #elliottcable
<whitequark> aaaaaargh
<whitequark> fuck
<whitequark> fucking fuck
<whitequark> this guy just named a variable "delay" and put a timestamp in it
<whitequark> I
<whitequark> I'm fucking out of here
<whitequark> I'm not even sure what's worse, that I myself thought we should hire him or that he's more competent than most
<yorickpeterse> whitequark: oh, that's it?
<yorickpeterse> let me show you the code of my people
<yorickpeterse> a = s.scan(/"encodings":(\[.*?\])/).map{|st| JSON.parse(st[0]).select{|i| i['rcoRole'] == 'photo'}.max_by{|x|x['width'].to_i}}.compact.map{|x| @phost + x['url']}
<yorickpeterse> this was outsourced, go figure
<alexgordon> yorickpeterse: jesus birthday boy christ
<yorickpeterse> that's just the tip of it
<joelteon> hey guys
<joelteon> guys
<joelteon> guys
<joelteon> i hate VPNs
<joelteon> and everyone that uses them
<yorickpeterse> why, because you work for the NSA?
<joelteon> naw
<joelteon> it's because i can't connect outside of work
<joelteon> and that's what a vpn is FOR
<audy> joelteon ssh proxy over port 443
<joelteon> i totally would
<joelteon> if they would let me
<joelteon> but i'm 2,204 miles away from the office
<audy> call up the security guard late at night. Pretend to have a report due to Mr Kawasaki.
<audy> say that he'll commit hari kari
<audy> Get him to read you the number on the model
<audy> modem*
<joelteon> isn't it hara kiri
<audy> I have no idea
<joelteon> it is
<joelteon> it annoys me that people pronounce it hari kari
<yorickpeterse> haki kahi
alexgord_ has joined #elliottcable
yorick_ has joined #elliottcable
yorick_ has quit [Client Quit]
alexgordon has quit [*.net *.split]
yorick has quit [*.net *.split]
eligrey has joined #elliottcable
yorick has joined #elliottcable
<devyn> lolol hari kari
<purr> lolol
<devyn> god I love LTE
niggler has joined #elliottcable
niggler has quit [Max SendQ exceeded]
niggler has joined #elliottcable
niggler has quit [Max SendQ exceeded]
niggler has joined #elliottcable
sharkbot has quit [Remote host closed the connection]
sharkbot has joined #elliottcable