ELLIOTTCABLE changed the topic of #elliottcable to: Embrace, extend, extinguish.
<ELLIOTTCABLE> trying to read up on RubySpec, see what sorts of lessons I can learn about how they tackled this stuff
<ELLIOTTCABLE> more and more convinced I need a specialized spec-language for this task, instead of just using existing built-ins.
<devyn> I feel like what Paws.js is doing is wrong
<devyn> and I'm not really sure why it's doing it
<ELLIOTTCABLE> reading
<ELLIOTTCABLE> completely possible, Paws.js is terrible :P
<ELLIOTTCABLE> what's wrong with it?
<devyn> well what Paws.js is doing is what you'd naively expect to happen, but the thing is, by the time [infrastructure execution branch[] []] has been evaluated, `implementation void` has already been evaluated... and impl void only accepts one caller (obviously)
<devyn> but it ends up getting both
<devyn> and (should) discard one
<devyn> which is why that's happening in Paws.rs
<ELLIOTTCABLE> errrr hol on readin' the code again
<ELLIOTTCABLE> "only accepts one caller"
<ELLIOTTCABLE> re-phrase that, or elaborate?
<ELLIOTTCABLE> reduce it to a single example for now
<ELLIOTTCABLE> (single branch, not two)
<devyn> infrastructure execution branch[] [] stages original and clone with each other
<devyn> then both go to implementation void
<devyn> but it's the same implementation void
<devyn> not two separate instances
<devyn> so it only uses one as the caller
<ELLIOTTCABLE> AH
<ELLIOTTCABLE> why is it the same implementation void?
<ELLIOTTCABLE> the semantics of <execution> <message> is call-pattern, i.e. it involves a clone *at combination-time*
<ELLIOTTCABLE> so,
<devyn> well why wouldn't it be? `implementation void` is evaluated first, which gets a single instance, that goes on the stack...
<devyn> there's a clone involved?
<ELLIOTTCABLE> yes, and that single instance is cloned twice, and the *original* instance, the result of `implementation void` (generated by your own generative-natives-thingamajig) is actually never resumed!
<ELLIOTTCABLE> that's the only reason the **non** generative-version, the naive implementation, works at all
<ELLIOTTCABLE> yes. Paws itself, if you don't call stage() explicitly, will never stage an actual execution by combination.
<devyn> oh boy, well then, I think there's a lot wrong
<devyn> in Paws.rs
<devyn> lol
<purr> lol
<devyn> :/
<ELLIOTTCABLE> a lot? D:
<ELLIOTTCABLE> that shouldn't be too big a change, no?
<devyn> shouldn't, but I did some things that I think kinda relied on that assumption
<ELLIOTTCABLE> it always clones first. the semantics of `foo [...]`, when foo results in a procedure and [] produces multiple values because the current execution is resumed multiple times, is to A) clone `foo`, B) stage the clone, and C) unstage the current-execution
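(A minimal sketch of those call-pattern semantics, with invented names — `Machine`, `stage`, `combine` — standing in for whatever the implementations actually call them; this is not the real Paws.js or Paws.rs API.)

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical shapes only.
#[derive(Clone)]
struct Execution { /* instructions, locals, ... */ }
#[derive(Clone)]
struct Object;

struct Machine {
    queue: Vec<(Rc<RefCell<Execution>>, Object)>,
}

impl Machine {
    /// Put an (execution, message) pair in the reactor queue.
    fn stage(&mut self, target: Rc<RefCell<Execution>>, message: Object) {
        self.queue.push((target, message));
    }

    /// `current` has just combined `foo` with `message`:
    /// A) clone `foo`, B) stage the clone, C) leave `current` unstaged.
    fn combine(&mut self, _current: &Rc<RefCell<Execution>>, foo: &Execution, message: Object) {
        let clone = Rc::new(RefCell::new(foo.clone()));
        self.stage(clone, message);
        // `current` is *not* re-queued; it only resumes if something
        // downstream stages it again with a result.
    }
}
```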
<ELLIOTTCABLE> hrm. any specifics?
<ELLIOTTCABLE> this is what happens when your specification is terrible. |=
<devyn> a) execution-style receivers are supposed to loop indefinitely in my impl in order to handle more than one invocation,
<devyn> b) the whole namespace thing I talked about, where aliens get cloned
<devyn> I think that's it
<ELLIOTTCABLE> hi sorry grandpa called
<ELLIOTTCABLE> namespace thing?
<ELLIOTTCABLE> that should still totally work; that's basically *according* to spec.
<devyn> namespace thing: impl and infra and etc. have a custom receiver that clones any aliens before delivering them
<ELLIOTTCABLE> the only way that might deviate a little is if somebody actually wanted to grab an infrastructure procedure, and advance it once, and have that advancement apply across all callers
<ELLIOTTCABLE> which is really weird and not useful and probably dangerous
<ELLIOTTCABLE> in the usual case of using combination to invoke them, they'll *already* be cloned, so it won't make any difference that the original thing being cloned is *already* a generated clone
<ELLIOTTCABLE> make sense?
<ELLIOTTCABLE> as for A), not sure I grasp
<devyn> okay, for A): receivers are supposed to take an object [, caller, subject, message]... my impl just assumes they can do this infinitely; it doesn't try to clone them
<devyn> should it?
<devyn> as for the namespace thing, well, now that I know that combination involves a clone, it just seems like an unnecessary clone
<devyn> if someone wants to stage[] a bare alien that they got, say, implementation void, without combining it first, then they should know they (probably) want to branch it first
<ELLIOTTCABLE> "assumes they can do this indefinately"
<ELLIOTTCABLE> lemme think for a second here
<ELLIOTTCABLE> hrm
<ELLIOTTCABLE> hrmmmmmm
<ELLIOTTCABLE> if they're native, that's not dangerous.
<ELLIOTTCABLE> the only thing that precludes,
<ELLIOTTCABLE> is making a native receiver able to do Other Work, by being manually staged in another way, *after* having been manually staged (not as a receiver) with receiver-data
<ELLIOTTCABLE> which is a very very very strange situation o_O
<ELLIOTTCABLE> and I don't believe I can see any situation in which precluding that will cause any real harm to abstraction-builders.
<ELLIOTTCABLE> oh w-
<ELLIOTTCABLE> no i'm a dumb
<ELLIOTTCABLE> fcukfgekawrl
<ELLIOTTCABLE> okay. so.
<ELLIOTTCABLE> receivers *have* to be cloned, because if you write a libside receiver, it has to be able to receive further combinations, *so that it can call things*.
<ELLIOTTCABLE> yah. so, you can only assume infinite-recallability with native receivers. (remember, native == previously-called-alien, ughterminology)
<ELLIOTTCABLE> does that destroy that optimization?
<devyn> what I'll do in that case, then, is if the receiver is a stageable ObjectReceiver, it gets branched
<vigs> ELLIOTTCABLE:
<devyn> if it's a NativeReceiver, it's just a plain function fn (&Machine, Params) -> Reaction anyway, so there's nothing to clone
<devyn> and I think that still works, because NativeReceivers often don't require calling other things
mcc has joined #elliottcable
<devyn> if they do, they can either be Alien ObjectReceivers instead,
<devyn> or the NativeReceiver can create an Alien or whatever
<devyn> so yeah I don't think this breaks that optimization, ELLIOTTCABLE
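(Roughly, the split devyn is describing might look like this — a sketch under guessed names; the Params fields follow the object/caller/subject/message list above, and none of this is the literal Paws.rs code.)

```rust
// Guessed shapes, for illustration only.
struct Machine;
#[derive(Clone)]
struct ObjectRef;
struct Params { caller: ObjectRef, subject: ObjectRef, message: ObjectRef }
enum Reaction { React(ObjectRef, ObjectRef), Yield }

/// A native receiver is a bare function: nothing to clone.
type NativeReceiver = fn(&Machine, Params) -> Reaction;

enum Receiver {
    Native(NativeReceiver),
    /// A stageable, execution-backed receiver; must be branched per use.
    Object(ObjectRef),
}

impl Machine {
    fn branch(&self, o: &ObjectRef) -> ObjectRef { o.clone() }
    fn stage(&self, _target: ObjectRef, _message: ObjectRef) { /* enqueue */ }

    /// Deliver a combination to a receiver, branching only when necessary.
    fn deliver(&self, receiver: &Receiver, params: Params) {
        match receiver {
            // Plain function: safe to call any number of times.
            Receiver::Native(f) => { let _reaction = f(self, params); }
            // Execution-style: branch it so it can receive further
            // combinations, then stage the branch with the message.
            Receiver::Object(exe) => {
                let branch = self.branch(exe);
                self.stage(branch, params.message);
            }
        }
    }
}
```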
<devyn> anyway, I'm going to be gone for a few hours
<ELLIOTTCABLE> here's a thought:
<ELLIOTTCABLE> optimization-wise, don't allow *native* receivers to call other things. *at all*.
<ELLIOTTCABLE> perhaps native receivers can be synchronous in all cases.
<ELLIOTTCABLE> then if you need asynchronous operations, require that they be **composed libside**.
<ELLIOTTCABLE> like, write the native parts of those receivers as normal native executions, (not NativeReceivers), and then treat them as normal executions.
<ELLIOTTCABLE> right? that cool?
<devyn> no, no, NativeReceivers can call *exactly one thing* (that's what the Reaction) type is, so if they want to create their own Aliens to continue and React with that, that works
<devyn> er, misplaced end paren, but lol
<purr> lol
<devyn> they get the caller as part of Params, so they can do whatever they want with that
<devyn> usually they just React(params.caller, whatever)
<devyn> but they don't necessarily have to
<ELLIOTTCABLE> “ exactly one thing* (that's what the Reaction) type” wat
<ELLIOTTCABLE> oh
<ELLIOTTCABLE> gotcha I think probably maybe
<ELLIOTTCABLE> so. all should be well, afaict?
<devyn> yes
<devyn> think so
<ELLIOTTCABLE> "executions" as receivers, actual executions, whether written in Paws or as native Executions, can do asynchronous operations,
<ELLIOTTCABLE> because they just get, well, staged. put in the queue, like anything else.
<ELLIOTTCABLE> and can spin off other tasks and blahblahblah and eventually resume the caller with the result of the combo-being-received
<devyn> right right
<devyn> NativeReceivers are slightly different though; they can't be staged
<ELLIOTTCABLE> yeah, exactly
<ELLIOTTCABLE> hrm, uhhhhh,
<devyn> if you infrastructure receiver[] them, it actually wraps them up into an alien
<ELLIOTTCABLE> *can* we skip the queue for any receiver, like that?
<ELLIOTTCABLE> hmrrmbmrbm
<devyn> I believe so. if it's unsafe, that's up to the NativeReceiver function to machine.enqueue() instead of React-ing
<ELLIOTTCABLE> hmmmmm.
<ELLIOTTCABLE> oh, you said it *can* queue, instead of performing a reaction itself?
<devyn> yeah
<ELLIOTTCABLE> (if I understand correctly, React is your event-loop body, right?)
<devyn> well React() just means do-this-immediately
<devyn> or you can Yield, which says go-back-to-the-queue
<ELLIOTTCABLE> so NativeReceivers can electively, themselves, perform a “shorted” loop, on the stack?
<devyn> yes, so can Aliens
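(A hedged sketch of the React/Yield split as devyn describes it; the variant payloads and the enqueue() signature are guesses, not the real Paws.rs types.)

```rust
// Illustrative only; the actual Paws.rs definitions almost certainly differ.
#[derive(Clone)]
struct ObjectRef;
struct Params { caller: ObjectRef, subject: ObjectRef, message: ObjectRef }

struct Machine;
impl Machine {
    /// The safe path: hand the work back to the reactor's queue.
    fn enqueue(&self, _target: ObjectRef, _message: ObjectRef) {}
}

/// What a receiver (or alien) hands back to the reactor.
enum Reaction {
    /// "Do this immediately": a shorted loop, skipping the queue.
    React(ObjectRef, ObjectRef),
    /// "Go back to the queue": nothing to do right now.
    Yield,
}

/// A native receiver that responds to its caller synchronously.
fn echo_receiver(_machine: &Machine, params: Params) -> Reaction {
    // Usually: resume the caller immediately with some result.
    Reaction::React(params.caller, params.message)
}

/// A native receiver that decides queue-jumping would be unsafe here,
/// so it enqueues and yields instead of React-ing.
fn careful_receiver(machine: &Machine, params: Params) -> Reaction {
    machine.enqueue(params.caller, params.message);
    Reaction::Yield
}
```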
<ELLIOTTCABLE> hrm
<ELLIOTTCABLE> hrmrmgmhmrm
<devyn> we talked about this before :p
<devyn> anyway I'mma be back later
<vigs> ELLIOTTCABLE: on a scale from MLK to Churchill, how drunk are you right now?
<ELLIOTTCABLE> Uh, what.
<prophile> ||<----------> || by my reckoning
<ELLIOTTCABLE> I'm completely sober at ten past five in the afternoon, getting ready to hop in the shower and then head out hunting down a trail-head up at Hatcher's Pass
<ELLIOTTCABLE> Fuckin' Alaska™
<ELLIOTTCABLE> devyn: although we did discuss situations where the *implementation* may need to be able to (or already can) jump the queue,
<ELLIOTTCABLE> A) that requires a design change, and definitely needs some more discussion,
<vigs> idk man your 7 tweets in a row…
<vigs> brb getting 50% packet loss
<vigs> gonna reset the router
<ELLIOTTCABLE> but more importantly B) I *really* don't think that power should be handed over to an external API.
<ELLIOTTCABLE> hell, the *act of jumping the queue* as has been discussed, is still actually working within the bounds of the design of the queue … we're just changing the semantics for the ordering of the queue itself.
<ELLIOTTCABLE> I can't imagine where it'd be a good idea to hand an API consumer the central reactor and say “Hey, throw some shit in here to be run, but remember <all of the semantics of Paws' ownership and safety>, and make sure not to violate any of them at all, 'kaysies?”
<ELLIOTTCABLE> really think it's best to only hand them the "put this in the queue" (i.e. reactor.stage() or .push() or whatever), and then do any possible queue-jumping / immediate-evaluation in an encapsulated methodology
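(Read as an API shape, that argument might look something like this sketch — invented names, not either implementation's actual surface: consumers only get the queue-push, and any immediate-evaluation path stays private to the reactor.)

```rust
// Sketch of an "encapsulated" reactor surface, per the discussion above.
use std::collections::VecDeque;

pub struct Task;
pub struct Reactor { queue: VecDeque<Task> }

impl Reactor {
    /// The only thing handed to API consumers: "put this in the queue".
    pub fn stage(&mut self, task: Task) {
        self.queue.push_back(task);
    }

    /// Queue-jumping / immediate evaluation stays internal, so the reactor
    /// alone decides when it is safe to short-circuit the queue.
    fn react_immediately(&mut self, task: Task) {
        // ... run `task` on the current stack, respecting ownership rules ...
        let _ = task;
    }

    /// Reactor body: drain the queue; internal code may elect to call
    /// `react_immediately` for work it knows is safe to short.
    pub fn run(&mut self) {
        while let Some(task) = self.queue.pop_front() {
            self.react_immediately(task);
        }
    }
}
```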
<ELLIOTTCABLE> vigs: I'm elliott, I'm still elliott when sober, I'm just *more* elliott when drunk
<vigs> oh
<vigs> hay
<vigs> ahh, the feeling of not-51.4% packet loss
<devyn> ELLIOTTCABLE: well, Paws.rs is meant to be a fast impl which kinda means I have to let API consumers do potentially unsafe things
<devyn> that's kinda the goal, really
<devyn> you can be safe and always enqueue() or you can figure out what you need to do yourself
Guest45674 has joined #elliottcable
prophile has quit [Quit: The Game]
Guest45674 has quit [Ping timeout: 255 seconds]
alexgordon has quit [Quit: Textual IRC Client: www.textualapp.com]
thsutton has joined #elliottcable
Guest45674 has joined #elliottcable
eligrey has quit [Read error: Connection reset by peer]
<ELLIOTTCABLE> hmmmmm
<ELLIOTTCABLE> ugh suppose that makes sense
<ELLIOTTCABLE> wonder if there's any *safer* way to expose that sort of power …
<ELLIOTTCABLE> or rather, I suppose, a middle ground.
* ELLIOTTCABLE shrugs
<purr> ¯\(º_o)/¯
<ELLIOTTCABLE> your impl!
<ELLIOTTCABLE> hi!
<devyn> ELLIOTTCABLE: well, anyway, I actually feel like *almost always* jumping the queue is safe, and it's just data-graph related operations that have to be ordered, right?
<devyn> so any charge[]-type operation, really
<devyn> or implicit uncharge
<devyn> (i.e. removing a member that was charged)
<devyn> er, owned
<devyn> sorry, I mean own[] or charge[], really
<devyn> both
<devyn> those need to be ordered, for sure
<devyn> ELLIOTTCABLE: at the moment, my impl almost always jumps the queue even now, and it's able to run your examples no problem
<devyn> so...
<devyn> ELLIOTTCABLE: anyway, I think I'll probably just do it this way, and if your Paws runnable-spec finds problems with the way this works, I can correct them
* vigs enthusiastically pets purr
* purr rrrrrr
<devyn> niice
thsutton has quit [Ping timeout: 260 seconds]
<Cheery> I hoped I could have made yesterday's blog post better, but oh well.
mcc has quit [Quit: This computer has gone to sleep]
<devyn> ELLIOTTCABLE: hah, I think I already found a problem with my queue jumping :p
<devyn> ELLIOTTCABLE: yeahhhh stage_receiver (i.e. exe/alien default receiver) jumping the queue is definitely a bad idea
<devyn> ELLIOTTCABLE: I still don't like the idea of touching the queue so often... hoping I can think of a better optimization
<ELLIOTTCABLE> talk me through it
<devyn> well basically I've fixed test-branch.paws
<devyn> not cloning was an issue, but the other issue was that I made stage_receiver skip the queue
<devyn> which led to a bit of an ordering mess
<devyn> once I made it enqueue() instead, that was fixed
<devyn> anyway, I have no idea how to optimize things so there isn't so much touching the queue,
<devyn> so.
<devyn> yeah.
<devyn> there's still quite a bit of queue-skipping, but it seems to be a much safer sort
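(A sketch of the fix being described, and recorded in the commit further down: the default execution/alien receiver clones its subject and enqueue()s the clone instead of React()-ing on the spot. The names are placeholders, not the actual Paws.rs definitions.)

```rust
// Hypothetical shapes, mirroring the description above.
#[derive(Clone)]
struct ObjectRef;
struct Params { caller: ObjectRef, subject: ObjectRef, message: ObjectRef }
enum Reaction { React(ObjectRef, ObjectRef), Yield }

struct Machine;
impl Machine {
    fn enqueue(&self, _target: ObjectRef, _message: ObjectRef) {}
}

/// Default receiver for executions/aliens ("stage_receiver"):
/// clone the subject, queue the clone, and yield -- instead of
/// React()-ing immediately, which broke ordering around branch.
fn stage_receiver(machine: &Machine, params: Params) -> Reaction {
    let clone = params.subject.clone(); // call-pattern clones at combination
    machine.enqueue(clone, params.message);
    Reaction::Yield
}
```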
<ELLIOTTCABLE> which part was skipping, and why was it unsafe?
<ELLIOTTCABLE> seems to me, anything after branch (with regards to which of the two branches runs first) is inherently unordered.
<devyn> haha, you're going to laugh
<ELLIOTTCABLE> that's not a problem, that's a feature, if anything.
<ELLIOTTCABLE> if you want to order them, create a mutex, use responsibility.
<devyn> I was skipping the queue every time you jux
<devyn> like
<devyn> Execution and Alien default receivers
<devyn> React() immediately rather than enqueue()ing
<ELLIOTTCABLE> hm
<ELLIOTTCABLE> hm
<devyn> but that was causing problems with branch
<ELLIOTTCABLE> yeah I getcha
<ELLIOTTCABLE> as in, your conception of the *semantics* of "call-pattern" was completely off
<ELLIOTTCABLE> not only in that they're supposed to clone,
<ELLIOTTCABLE> but in that they're supposed to *queue*, not just invoke
<ELLIOTTCABLE> right?
<ELLIOTTCABLE> eek :P
<devyn> well, there's no 'just invoking' in v. 10, so I don't really know which queueing is necessary and which isn't
<devyn> I have discovered that the stage_receiver's queueing is necessary.
<devyn> :p
<ELLIOTTCABLE> "discovered"
<ELLIOTTCABLE> I lol'd
<purr> lol
<ELLIOTTCABLE> gotchaaaaahhghawirhetau9srhdruligr
<devyn> I still haven't tried this out with multiple parallel reactors
<devyn> I really want to
<ELLIOTTCABLE> have you already got infrastructure for *actual locking*?
<ELLIOTTCABLE> not responsibility shit, but, data-level locking within the implementation?
<devyn> yeah, absolutely, I did that pretty early on
<ELLIOTTCABLE> thought so, just checking
<devyn> I have tests that test it in parallel
<devyn> just
<devyn> haven't actually *tried* it on real code
<ELLIOTTCABLE> what's blocking you from actually "embarrassingly parallelizing" a Paws program?
<ELLIOTTCABLE> ah gotcha
<devyn> basically contention around locking on the queue. I need to either find a queue structure that works for this that doesn't lock so much,
<devyn> or
<devyn> I guess some kind of work-stealing system
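(For the work-stealing idea, a minimal sketch of what a per-reactor scheduler could look like using the crossbeam-deque crate — the crate choice is a suggestion for illustration, not something Paws.rs actually uses: each reactor pops from its own local deque and only falls back to a shared injector or to stealing from peers, so there's no single hot lock to contend on.)

```rust
// Sketch only: crossbeam-deque is a suggested crate, not what Paws.rs uses.
use crossbeam_deque::{Injector, Steal, Stealer, Worker};

struct Task;

/// One reactor thread's view of the scheduler.
fn find_task(
    local: &Worker<Task>,
    global: &Injector<Task>,
    stealers: &[Stealer<Task>],
) -> Option<Task> {
    // 1. Prefer our own queue: no contention at all.
    if let Some(task) = local.pop() {
        return Some(task);
    }
    // 2. Otherwise pull a batch from the global injector into our deque.
    loop {
        match global.steal_batch_and_pop(local) {
            Steal::Success(task) => return Some(task),
            Steal::Empty => break,
            Steal::Retry => continue,
        }
    }
    // 3. Finally, try to steal from the other reactors.
    for s in stealers {
        loop {
            match s.steal() {
                Steal::Success(task) => return Some(task),
                Steal::Empty => break,
                Steal::Retry => continue,
            }
        }
    }
    None
}
```

(A reactor thread would then loop on something like `while let Some(task) = find_task(&local, &global, &stealers) { /* react */ }`.)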
<ELLIOTTCABLE> so, back to pipelining:
<ELLIOTTCABLE> how can we arrange multiple ops in a way that can be executed without cognizance of the greater environment?
<ELLIOTTCABLE> okay bbl finishing a movie
<devyn> basically I think it's up to the individual receivers and aliens to know that. for example, the Thing default receiver should *definitely* be safe
<devyn> (and Paws.rs does currently short-circuit that)
<purr\Paws> [Paws.rs] devyn pushed 1 new commit to master: https://github.com/devyn/Paws.rs/commit/45a7b47dc3cf626a15f5acc6f5cb4fd78d6c464a
<purr\Paws> Paws.rs/master 45a7b47 Devyn Cairns: Fix ordering, stage_receiver semantics: SR now clones and enqueue()s instead of simply React()ing
<devyn> but I'd like to find a way for the reactor to *cheaply* know whether the Execution/Alien default receiver can safely be short-circuited
<devyn> and whether it even needs to clone in the first place
<devyn> because that would be huge, I think
<devyn> if it can get away just cloning once for a multi-arg call-pattern,
<devyn> that would be... fantastic
<devyn> and if it doesn't have to touch the queue, that would be more fantastic
<devyn> the locks themselves are quite cheap, it's just contention I'm worried about,
<devyn> but I think avoiding cloning would be a bigger benefit
<devyn> since allocation is obviously more expensive
<devyn> than atomically flipping a bit on a lock
<ELLIOTTCABLE> “12:35 AM <+devyn> but I'd like to find a way for the reactor to *cheaply* know whether the Execution/Alien default receiver can safely be short-circuited”
<ELLIOTTCABLE> would give money for IRC read-receipts
<ELLIOTTCABLE> “12:36 AM <+devyn> if it can get away just cloning once for a multi-arg call-pattern,”
thsutton has joined #elliottcable
vil has quit [Ping timeout: 255 seconds]
Guest45674 has quit [Ping timeout: 248 seconds]
vil2 has joined #elliottcable
vil2 is now known as vil
<Cheery> heh, somebody messed with the demo I threw into glsl a while ago
<Cheery> cool stuff.
Sgeo_ has quit [Read error: Connection reset by peer]
yorick has joined #elliottcable
_whitelogger_ has joined #elliottcable
sammcd has quit [Ping timeout: 245 seconds]
nuck has quit [Ping timeout: 245 seconds]
othiym23 has quit [Ping timeout: 245 seconds]
Cheery has quit [Ping timeout: 245 seconds]
ag_dubs has joined #elliottcable
ag_dubs has quit [Changing host]
othiym23 has joined #elliottcable
nuck has joined #elliottcable
Cheery_ is now known as Cheery
oldskirt has joined #elliottcable
oldskirt has quit [Remote host closed the connection]
Guest45674 has joined #elliottcable
prophile has joined #elliottcable
eligrey has joined #elliottcable
sammcd_ is now known as sammcd
<Cheery> also
<Cheery> wrote myself a constraints language, planning to play with the concept a bit.
gozala has joined #elliottcable
<katlogic> Oh please pretty please, make prolog human readable.
<Cheery> well not that kind of constraints.. linear constraints in layouts are the thing I'm studying.
<Cheery> but.. this layout stuff is complex
<Cheery> might very well end up in prolog
<Cheery> to battle the complexity
<katlogic> Layout, as in HTML/DTP/CSS?
<Cheery> yep.
<Cheery> except that I'm not trying it with html
<katlogic> It's not as evil as prolog, but pretty close.
<katlogic> Sounds just like a bunch of linear algebra indeed.
<Cheery> I suspect it's more than that
<Cheery> but.. I'm first trying with cassowary.js, although its source seems horrible.
<katlogic> algebra to find optimal solutions to optimal sizing problems
<Cheery> accepting that because I realised the solver itself is simple.
<katlogic> but binary decisions must be done too
<Cheery> the implementation and paper are just crap.
<katlogic> like um, media queries in CSS
<Cheery> yup. they had something like that set up.
<Cheery> in gridstylesheets.org
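(As a toy illustration of what a linear layout constraint amounts to — an example of my own, nothing here comes from cassowary.js or gridstylesheets.org: two equal-width panels plus a fixed gutter filling a container reduce to one linear equation.)

```rust
// Constraints: left.width == right.width, and
// left.width + gutter + right.width == container.width.
// Substituting: 2*w + gutter == container, so w == (container - gutter) / 2.
fn equal_split(container_width: f64, gutter: f64) -> (f64, f64) {
    let w = (container_width - gutter) / 2.0;
    (w, w)
}

fn main() {
    // e.g. an 800px-wide container with a 20px gutter
    let (left, right) = equal_split(800.0, 20.0);
    assert_eq!((left, right), (390.0, 390.0));
    println!("left = {left}, right = {right}");
}
```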
<katlogic> trying to write a browser now? :)
<Cheery> ah no! just writing a small app, which I can use to test a theory
<Cheery> I want to see what kind of properties constraint layouts have
<Cheery> so I'm coding an app, where one can study it
<Cheery> but if I figure out useful things, it might lead to a browser redesign proposal.
<katlogic> Brave of you. The closest I've come to dealing with that was some UI widgets, and I was pretty miserable already.
<Cheery> it's always miserable if you don't deal with it. heh.
<Cheery> I don't have much of a bias here.. it might very well be that the layout stuff in constraints isn't helpful.
<katlogic> Btw, the most efficient implementations of partitioning-problem/graph-minimization etc. solvers are probably physics sandbox engines.
<katlogic> The problem domain is all the same: cram something into the smallest space while satisfying all constraints. NP-hard.
<Cheery> um no.. the problem isn't to cram it into a small space
<Cheery> the problem is to get it to look optically nice.
<katlogic> Into given space.
<Cheery> yup
<katlogic> as for optically nice, that's why those pesky constraints are there :)
<katlogic> anyhow, there is some cutting edge research on this wrt mobile UIs
<katlogic> as in, how to write css for wildly varying display sizes; this is largely unsolved now
<Cheery> yeah. apparently the theory goes that designers want things to line up with each other.
<katlogic> separate css and sizing cutoffs in media queries are not really a solution
<katlogic> Cheery: So ideally one would end up with something incredibly complex giving a lot of freedom in how one could set constraints.
<Cheery> or incredibly simple. I don't think complexity belongs here.
<katlogic> One interesting approach: UI designers draw elements, bounding/grouping divs, etc. for several display sizes.
<katlogic> "Prototypes"
<katlogic> Then some very smart algo derives constraints from this.
<katlogic> Machine learning, but would make the job much easier for a lot of UX designers.
<Cheery> I don't think UX designers should have an easy time
<katlogic> :>
<Cheery> but the tools they use should match how they think.
<katlogic> Ok, programmers who are forced to do UX design
<katlogic> and hate it with passion and must hire some CSS guy :)
<Cheery> well we'll find out soon enough how constraints feel and look.
<Cheery> and whether there's point in them.
<katlogic> Without constraints there's chaos. People tried that one already with tucking all elements one after another in floating divs.
<katlogic> The result is horrible. Thousands of broken thumbs trying to tackle clunky UIs on touch screens.
eligrey has quit [Ping timeout: 264 seconds]
eligrey has joined #elliottcable
sharkbot has quit [Remote host closed the connection]
sharkbot has joined #elliottcable
oldskirt has joined #elliottcable
thsutton has quit [Quit: Goodbye]
Guest45674 has quit [Ping timeout: 252 seconds]
Sgeo has joined #elliottcable
<devyn> Cheery: pretty!
<devyn> Cheery: the GLSL with smoothing I mean
<devyn> Jul 01 18:49:15 DevHost1 postgres[668]: FATAL: the database system is starting up
<devyn> that doesn't sound like a fatal error to me
<joelteon> you're fucked mans
<joelteon> man
<devyn> you betta' run, da database system be startin'
thsutton has joined #elliottcable
thsutton has quit [Ping timeout: 240 seconds]
thsutton has joined #elliottcable
sammcd has quit [Excess Flood]
sammcd has joined #elliottcable
sammcd has quit [Excess Flood]
sammcd has joined #elliottcable
Guest45674 has joined #elliottcable