Streetwalrus has quit [Quit: ZNC 1.7.3 - https://znc.in]
Streetwalrus has joined #m-labs
Stormwind_mobile has joined #m-labs
Stormwind_mobile has quit [Remote host closed the connection]
Stormwind_mobile has joined #m-labs
Streetwalrus has quit [Quit: ZNC 1.7.4 - https://znc.in]
Streetwalrus has joined #m-labs
sb000 has joined #m-labs
sb000 has quit [Quit: Leaving]
_whitelogger has joined #m-labs
ohama has quit [Ping timeout: 265 seconds]
ohama has joined #m-labs
iwxzr has quit [Ping timeout: 245 seconds]
iwxzr has joined #m-labs
proteus-guy has quit [Ping timeout: 268 seconds]
<whitequark>
ZirconiumX: re duplication/bloat: that's fundamentally the issue here, and the duplication can't really be dealt with
<whitequark>
ironically, in omigen, duplication was implicit, so few people would see it and complain
<whitequark>
re memories: that's basically a yosys issue, since there's no way i can emit RTLIL that translates to $readmemh
<Dar1us>
ZirconiumX: initial begin $readmemh(..); end works in Xilinx tools (although it is veeery fussy about the size; too big or too small and it just deletes it with only a minor warning...)
<ZirconiumX>
Dar1us: Sure, but you can't emit a `$readmemh` in nMigen
<ZirconiumX>
Dar1us: and I've been fighting with Quartus for various reasons for long enough that my brain has fried
<whitequark>
it should be a fairly easy fix in yosys, i might do that later
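For context, a minimal sketch of how memory contents are supplied in nMigen today: the init data is plain Python, so it ends up as per-word init constants in the emitted RTLIL/Verilog rather than a $readmemh call. (Hypothetical Rom module; assumes the nMigen Memory API of this period.)

    from nmigen import Elaboratable, Module, Signal, Memory

    class Rom(Elaboratable):
        def __init__(self, contents):
            # `contents` is a plain Python list of integers
            self.mem  = Memory(width=32, depth=len(contents), init=contents)
            self.addr = Signal(8)
            self.data = Signal(32)

        def elaborate(self, platform):
            m = Module()
            m.submodules.rdport = rdport = self.mem.read_port()
            m.d.comb += [
                rdport.addr.eq(self.addr),
                self.data.eq(rdport.data),
            ]
            return m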
cr1901_modern has quit [Quit: Leaving.]
<Dar1us>
ZirconiumX: edit the source before compilation ;)
<whitequark>
that's a lot of effort
<whitequark>
especially given how huge nmigen verilog files are
<ZirconiumX>
whitequark: can you put a decorator on __init__?
<ZirconiumX>
If so, couldn't you have a decorator that says "I promise that nothing external modifies this, so if the parameters to __init__ are identical, so too are the output modules"?
<ZirconiumX>
That could unconditionally apply to something that takes no parameters other than self
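A rough sketch of the decorator idea being floated here; `pure_module` is hypothetical, not nMigen API, and it only shows the caching mechanics while silently assuming the very invariants discussed below.

    import functools

    def pure_module(cls):
        # Hypothetical: reuse one instance per distinct set of constructor
        # arguments, on the unchecked promise that equal arguments imply
        # equal output modules. Note the arguments must be hashable (so no
        # lists), and nothing may mutate the instance afterwards.
        @functools.lru_cache(maxsize=None)
        def make(*args, **kwargs):
            return cls(*args, **kwargs)
        return make

Used as a class decorator, `Counter(width=8) is Counter(width=8)` would then hold; the hard part, as discussed below, is that nothing enforces the promise.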
cr1901_modern has joined #m-labs
Stormwind_mobile has quit [Ping timeout: 240 seconds]
Stormwind_mobile has joined #m-labs
Stormwind_mobile has quit [Ping timeout: 240 seconds]
<whitequark>
ZirconiumX: nmigen doesn't currently place any importance on, or even remember, the hierarchy of user modules
<whitequark>
the only hierarchy it cares about is the hierarchy of Fragments that results from elaborating the complete design
<whitequark>
by that point, no two instantiations look the same
Stormwind_mobile has joined #m-labs
<whitequark>
you *could* rearchitect it to work a different way, but (a) you'd have to rewrite most of its internals, (b) you still have to support nmigen.compat, which now becomes significantly harder
<whitequark>
you're welcome to try, but I don't find that the benefit / cost ratio is high enough for me to work on it
<whitequark>
now, on the other hand, if we added some way to *enforce* the use of modules only through well-defined external interfaces (which you have also requested in the past), that task becomes much easier, since now nmigen can use something it already has to care about (these interfaces)
<ZirconiumX>
So then would deduplication (in whatever form) work by checking that these interfaces match?
<whitequark>
not quite
<whitequark>
a major issue with deduplication is that you don't have to just compare the *internals* of the module
<whitequark>
you also have to consider that another module outside can just twiddle with any signal
<whitequark>
that means that some instantiations will have inputs as inputs in the netlist, some instantiations will have them hard assigned to default values, and yet some others may have them overridden by an external signal
<whitequark>
which right now results in flattening
<whitequark>
I think we need something like "PureElaboratable" that defines the complete interface of any module and promises that every instantiation uses nothing more than the arguments passed to the constructor
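A hypothetical sketch of what such a declaration might look like; `PureElaboratable` and the `interface` attribute do not exist in nMigen, this is just the shape of the idea.

    # Hypothetical, not nMigen API: a placeholder base class standing in for
    # the proposed "PureElaboratable".
    class PureElaboratable:
        pass

    class FifoBuffer(PureElaboratable):
        # the complete external interface, declared up front
        interface = [
            ("w_data", 8), ("w_en", 1),
            ("r_data", 8), ("r_en", 1),
        ]

        def __init__(self, depth):
            self.depth = depth   # the only thing elaborate() may depend on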
<ZirconiumX>
My idea with the hierarchy is that nMigen can skip elaboration of a module entirely, which is useful when nMigen is elaborating each module of my code 15 times more often than necessary.
<whitequark>
that's also difficult
<whitequark>
the reason is that right now, a single Signal object can be used anywhere in the hierarchy
<ZirconiumX>
A PureElaboratable sounds like a good idea, although I'm not smart enough to figure out how you'd declare the interface of a module prior to the class being defined.
<whitequark>
therefore, if you instantiate even a PureElaboratable 15 times, you'd have 15 times the Signals, which allows the module that includes (e.g.) all of them to disambiguate between ports of different instances
<whitequark>
but inside the single deduplicated instantiation, you only have one Signal, since, well, it was deduplicated
<whitequark>
that means you have to build a map from the Signal used by the outside world to the Signal used inside the module
<whitequark>
this is the real reason why we have to pre-declare interfaces like that
<whitequark>
as for how it'd work, it would use Python type annotations, I think
<ZirconiumX>
whitequark: this is the real reason why we have to pre-declare interfaces like that <--- couldn't you use a Record or something for the declaration, actually?
<whitequark>
why would that be better?
<ZirconiumX>
A Record is a "struct" of packed Signals, right?
<whitequark>
correct
<ZirconiumX>
So if you use a Record as the interface for the module, and pass that to the deduplication function, would that work?
<whitequark>
that doesn't change any of the hard problems here
<whitequark>
all you did is pick a specific way to list the signals that are part of the interface. essentially, you picked the syntax
<ZirconiumX>
Evidently I don't understand the issues here, then.
<whitequark>
it's not a particularly great syntax (Record is trying to do too much in nMigen and that's indicative of some design flaw IMO)
<whitequark>
but more importantly it doesn't change that a Record is just a bag of Signals internally
<whitequark>
creating a Record is exactly the same as creating its inner Signals one by one and then using Cat to assemble them together, except it adds a bit of fluff on top
<ZirconiumX>
I'm trying to internally sketch what this looks like, but it seems I'm running up against limitations of both my Python knowledge and my knowledge of nMigen internals.
<ZirconiumX>
Because to me when you say "use Python type annotations", I know what they are, but I have no idea how to apply them to this problem
<whitequark>
you know what record layouts look like, right? [("name", shape), ...]
<ZirconiumX>
nMigen is cleverly designed, so evidently I am not smart enough to work on it, so I'll shut up.
<whitequark>
you could write a record layout with Python type annotations using something like `name : shape`
<whitequark>
there's no real difference between my suggestion and your suggestion in terms of syntax, except that mine is a bit more in line with what Python upstream is doing, because fundamentally a module interface in nmigen is a kind of type declaration
<whitequark>
it's not more clever or anything
<whitequark>
it's just a different way to write the same thing.
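A small illustration of the point: the same interface written as a record layout and as class-level annotations. Both are just a list of name/shape pairs; `StreamInterface` is hypothetical.

    # Existing Record layout syntax:
    stream_layout = [("data", 8), ("valid", 1), ("ready", 1)]

    # Hypothetical annotation-based spelling of the same information:
    class StreamInterface:
        data:  8
        valid: 1
        ready: 1

    # The annotations are ordinary runtime data:
    assert StreamInterface.__annotations__ == {"data": 8, "valid": 1, "ready": 1}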
Stormwind_mobile has quit [Ping timeout: 276 seconds]
<whitequark>
in theory you could also integrate with mypy, but it's unclear if mypy is actually flexible enough for that to work in any useful way, so I don't count on it now
<ZirconiumX>
I thought types here are fixed, i.e. you can say it's an int or a string, but if a shape takes parameters then you would need to... create a type at runtime to declare the type of something else?
<whitequark>
that's correct, and python already has to cope with it
<whitequark>
imagine you're writing a type annotation for a Python list
<ZirconiumX>
This feels like I'm reaching "Haskell is a dynamically typed, interpreted language" levels.
<whitequark>
that's a type parameterized with another type. a parametric type, if you will
<ZirconiumX>
Yeah.
<whitequark>
that's also correct!
<whitequark>
nmigen requires type-level integers for signal widths, as well as type-level arithmetic
<whitequark>
this is just one step before true dependent types
<ZirconiumX>
I stole that line from Typing the Technical Interview. It's a good read.
<whitequark>
i mean, it technically already counts as dependent types (it's a type that depends on a value), but not in any interesting way so i avoid stating it vacuously like that
<whitequark>
and yes, i've read that post, it's nice
<whitequark>
it's genuinely relevant here because adding types to nmigen means using Python as a part of the value level sublanguage (the x.eq(y) part) as well as a part of the type level language (the Signal(unsigned(2 ** other_signal.width)) part)
<whitequark>
if you were writing a typechecker, this is where you realize that it's going to be undecidable to say if the program is well-typed or not, which is one reason why people avoid dependent types
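A tiny example of the two sublanguages coexisting; assumes standard nMigen, with `other_signal` standing in for any existing signal.

    from nmigen import Module, Signal, unsigned

    def add_counter(m, other_signal):
        # "type level": the shape is computed by ordinary Python arithmetic
        # on a property of another signal
        counter = Signal(unsigned(2 ** other_signal.width))
        # "value level": the x.eq(y) sublanguage describes the hardware
        m.d.sync += counter.eq(counter + 1)
        return counter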
<ZirconiumX>
I will admit, I saw that wizardry and assumed it worked by `isinstance` stuff.
<whitequark>
there's no wizardry there, it works exactly how it looks
<ZirconiumX>
Maybe my brain has been warped by Rust too much
<whitequark>
so most type systems are very declarative
<ZirconiumX>
But my brain goes `struct Signal<T>(T) where T: ValidSignalType` and then assumes that you're checking that at runtime.
<whitequark>
ocaml's, for example, lets you build a tree* of polytypes with monotypes as leaves. like `(int, string) map` or something
<whitequark>
in ocaml's type system you cannot** express something like "a container with one element" and then "a function from a container with n elements that returns a container with n+1 elements"
<whitequark>
* a graph with the -rectypes option, but that's very obscure
<whitequark>
** you can by using peano numerals but you very quickly run into their limits
<ZirconiumX>
I think I first stumbled across your code while I was working with OCaml.
<whitequark>
what you want for something like nmigen is to be able to express arbitrary computations in type signatures
<whitequark>
if you have something like haskell's undecidable instances, you can treat your type system as basically a prolog interpreter, which it actually is
<ZirconiumX>
Which you do at runtime while my brain is trying to express this at compile time, I suppose.
<whitequark>
so if you can contrive your problem into looking like prolog, you can express it in type-level haskell
<whitequark>
the problem is that i have absolutely zero interest in contriving mundane gateware related tasks into looking like prolog, and everyone else has a highly negative amount of interest in doing that
<ZirconiumX>
Right.
<whitequark>
the observation is that since you need to have your typechecking undecidable *anyway* to do anything interesting (rust's already is btw) you might as well make it use a real programming language
<whitequark>
well, ok, that's unfair to prolog
<whitequark>
what i mean is that you can take the language you use to write types in, and make it very much like the language you write values in
<whitequark>
if you want to express 1 + 1, you don't have to define type level peano numbers first. you can literally just say "We have type level numbers now"
<whitequark>
Rust did... something really weird in between those
<whitequark>
it allows more or less arbitrary expressions involving constant generics, but it uses pattern matching on them when determining type relationships
<whitequark>
so for example if you have something like uh
<whitequark>
fn<T, U> foo(x: U) where T: usize, U: [u32; 1 + T] // i don't actually know the const generic syntax
<whitequark>
and then you have fn<T, U> bar(x: U) where T: usize, U: [u32; 2 + T] { foo(x) }
<whitequark>
then it doesn't typecheck
<whitequark>
but if you rewrite the condition for bar to look like `U: [u32; 1 + 1 + T]` then it does
<ZirconiumX>
I think the problem here is that we have inherently different views of how a module works. If I'm understanding you correctly here, to you a module is a type?
<ZirconiumX>
[nMigen modules]
<whitequark>
it's not a type
<whitequark>
let's go with an analogy to a different language
<ZirconiumX>
(to me it's a function, which is probably why I'm screwing up)
<whitequark>
I would say that right now, nmigen modules as written by the user (i.e. classes deriving from Elaboratable) are generic functions from signals to signals
<whitequark>
the problem is twofold: they are generic in a way that is parameterized by the entire world (the rest of your program) since they can just decide what to do based on global data
<whitequark>
and also, their set of parameters and results is determined by the entire world (any module higher in the hierarchy that includes this module as a part)
<whitequark>
what you're proposing is combining together any two functions that have the same parameters, arguments, and returns
<whitequark>
that's not inherently unreasonable but there's no way to actually determine if that's the case for any two nmigen modules
<whitequark>
well, not upfront
<ZirconiumX>
I should research the Python introspection interface
<whitequark>
not really
<whitequark>
introspection isn't helpful here for two reasons
<whitequark>
first, while you *can* observe what the elaborate method does and make a decision on whether it's actually pure or not, it would be very slow and nonportable
<ZirconiumX>
whitequark: what you're proposing is combining together any two functions that have the same parameters, arguments, and returns <--- here I had an implicit assumption of "parameters, arguments, returns, module name and module filename", but I suppose that would have to be an absolute path to be unique.
<whitequark>
no, module names don't even matter
<whitequark>
when I say "parameters" I don't mean "parameters to __init__"
<whitequark>
the elaborate function of a module can use absolutely any global data to make decisions. this is still true even for the most well-behaving module implementations
<whitequark>
this is literally as simple and well behaved as it gets, right?
<ZirconiumX>
My underlying assumption here, is that if you are told that a duck quacks, waddles and swims, and that all ducks behave like this, then you don't need to ask again if somebody asks you how a duck behaves.
<ZirconiumX>
And yep.
<ZirconiumX>
You still need the mapping of signals, which I realise
<whitequark>
we can even disregard that for a moment, imagining that it's already solved for us
<whitequark>
consider what happens if you pass a list as `inputs`
<whitequark>
lists are mutable. that means that, until `elaborate` actually runs, the question "what determines what SomeModule will elaborate to" can only be answered with "any code that has access to the list in `self.inputs`", which for practical purposes means "the entire rest of the program"
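A minimal sketch of the situation being described; `SomeModule` is hypothetical, the rest is ordinary nMigen.

    from nmigen import Elaboratable, Module, Signal, Cat

    class SomeModule(Elaboratable):
        # about as simple and well-behaved as a module gets
        def __init__(self, inputs):
            self.inputs = inputs          # a plain Python list of Signals
            self.output = Signal()

        def elaborate(self, platform):
            m = Module()
            # ORs together whatever happens to be in self.inputs *right now*
            m.d.comb += self.output.eq(Cat(*self.inputs).any())
            return m

    inputs = [Signal(name="a"), Signal(name="b")]
    mod = SomeModule(inputs)
    # any code that still holds the list can change what elaborate() will
    # produce, long after the constructor ran:
    inputs.append(Signal(name="c"))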
<ZirconiumX>
... So this reduces to - I want to call it escape analysis, but it's probably the wrong term.
<whitequark>
I think what you're looking for is "referential transparency", which is a property that, among other things, mutable objects do not possess
<whitequark>
if escape analysis tells you an object doesn't escape a scope, and it's written to exactly once, you can transform it to an immutable one
<whitequark>
unfortunately escape analysis isn't practical in most contexts
<whitequark>
it's not that you can't apply it to e.g. python code like that
<ZirconiumX>
Mhm. So, effectively a pure module must also be referentially transparent?
<whitequark>
you can
<whitequark>
the problem is more that you will surely regret it if you build your language around it
<ZirconiumX>
While I realise the problem in general is difficult, if you have a decorator wrapper... somewhere... then one solution is to deep copy the arguments to __init__. However, that's not intuitive in Python, which passes objects by reference, and it probably violates the principle of least surprise.
<ZirconiumX>
I'm throwing ideas at the wall, and even if they're all full of shit, I'd like to say I'm learning.
<whitequark>
a decorator wrapper doesn't change that you might legitimately want to depend on global data
<whitequark>
for example, oMigen had a feature where you could read CSRs from a CSV dump
<whitequark>
in a case like this you'd pass a filename and then the constructor or the elaborate function would read the file
<ZirconiumX>
Sure, but that would be by definition impure, no?
<whitequark>
depends. let's say you instantiate a four-core minerva system like that, all with the exact same CSRs
<whitequark>
you probably want that deduplicated, right?
<ZirconiumX>
Yes, true.
<whitequark>
and your *intent* is that all four cores have the exact same CSRs
<whitequark>
the general problem here is that what you want is to know the intent of the programmer, but all that you have is the code they wrote.
<ZirconiumX>
But couldn't you then restructure it to pass the CSV as a string? Then it would be pure.
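The two spellings under discussion, sketched in plain Python (the `CsrBank*` classes and the file name are hypothetical):

    import csv

    # Impure flavour: the constructor takes a filename, so what gets
    # elaborated depends on whatever happens to be on disk at that moment.
    class CsrBankFromFile:
        def __init__(self, csv_path):
            with open(csv_path) as f:
                self.csrs = [tuple(row) for row in csv.reader(f)]

    # "Pure" flavour: the file is read once, up front, and the constructor
    # only ever sees immutable data; four cores built from the same tuple
    # really are built from the same thing.
    class CsrBank:
        def __init__(self, csrs):
            self.csrs = csrs

    with open("csrs.csv") as f:                       # hypothetical path
        csr_data = tuple(tuple(row) for row in csv.reader(f))
    cores = [CsrBank(csr_data) for _ in range(4)]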
<whitequark>
you can *try* guessing the intent from the code. i've been working on languages that do this since 2012. it works in simple cases and then breaks horribly in more complex ones. in effect, you have to have a complete understanding of the compiler to write reliable code for that compiler
<ZirconiumX>
LuaJIT comes to mind there...
<whitequark>
in related examples, C does this. if you do something like `x->y = 10; if (x == NULL) ...` it guesses the intent as "the null pointer comparison probably results from a macro or an inlined function or something like that, and should be eliminated as dead code" and then does it
<whitequark>
there's a gcc flag that changes how gcc interprets this specific pattern. that flag essentially says "for this compilation unit, treat the intent of null pointer checks as a primitive comparison, and not as what the spec implies it should be"
<whitequark>
but that's a bit like trying to fix a shipwreck with duct tape
<whitequark>
the root cause is that instead of communicating with the compiler through explicit declaration of intent, you communicate with the compiler by implicitly declaring intent by following a set of rules, usually one that grows very complex over time
<whitequark>
let's consider some more examples
<whitequark>
in Python, if you assign some fields in a constructor, then most Python JITs will interpret this as an intent to write only to these fields. so such a JIT would optimize the code assuming that that's what will happen
<whitequark>
if that is not in fact the case, then the JIT would have to perform a (usually very expensive) deoptimization operation
<whitequark>
if you use the __slots__ field, then you declare that intent, and it becomes binding, i.e. you can't declare it and then change your mind, since any assignment to a non-slot will fail
<whitequark>
as a consequence, such a deoptimization will never happen, making your code run more predictably
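The __slots__ behaviour just described, in plain Python:

    class Point:
        __slots__ = ("x", "y")        # declared intent: only these attributes

        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1, 2)
    p.x = 3                           # fine: "x" is a declared slot
    try:
        p.label = "origin"            # not a slot: the declaration is binding
    except AttributeError as exc:
        print(exc)                    # 'Point' object has no attribute 'label'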
<whitequark>
if you "only" lose performance as a consequence of misdeclared intent, that's usually just fine. but if you silently lose correctness as a result, that's awful, especially for a HDL where the consequence may well be "you tape out a brick"
<ZirconiumX>
Right, okay
<whitequark>
right now, no matter how you write your nmigen modules, they behave exactly according to how the elaborate function instructs them to, modulo any bugs in nmigen. if you deduplicate them based on declared intent *without actually checking if that holds*, you may silently make them behave incorrectly
<whitequark>
in view of many C programmers, when a compiler removes a null pointer check like the above, it "broke their code". that's both false and true. it's false because those C programmers don't actually know C, and the code was broken in the first place. it's true because the language they *think* they write would be vastly more useful than C, so in a way, every ISO C compiler breaks all code it compiles
<whitequark>
now, here comes the really tricky part
<whitequark>
in something like Rust, I have tools to enforce invariants in my code. something outside of a module simply has no way to poke at the module internals; you can't write that even with unsafe code because the struct field order is randomized for example
cedric has quit [Ping timeout: 268 seconds]
<whitequark>
in Python, I do not. it's very easy to break nmigen modules by poking at their internals. e.g. you can instantiate a Memory, get a read port, then do mem.width *= 2 and the port would be completely wrong
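The Memory example spelled out (assuming the nMigen Memory API of this period); nothing in Python prevents the mutation on the last line:

    from nmigen import Memory

    mem  = Memory(width=8, depth=16)
    port = mem.read_port()    # the port's data signal is built 8 bits wide
    mem.width *= 2            # silently makes `port` inconsistent with `mem`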
<whitequark>
now one could conclude that Python was a mistake, and it was, but we're using it anyway, so we have to make the best of it
cedric has joined #m-labs
cedric has quit [Changing host]
cedric has joined #m-labs
<ZirconiumX>
Yeah, okay. So deduplication is just not feasible with the level of monkey patching Python has?
<whitequark>
we've established that nothing I can do will *prevent* or *detect* violation of the invariants that allow module deduplication, so the next best thing would be to make these invariants *easy to keep*
<whitequark>
i.e. we have a Python programmer accustomed to doing many things that could lead to breakage, and we need to somehow communicate exactly which of them are OK and which are not, and do it in some way that is easy to understand and to discern from the codebase
<whitequark>
that's where escape analysis fails
<whitequark>
escape analysis necessarily relies on a machine's view of your code, traversing an object graph according to some rules you invented. these rules can't be just "any reachable object" because any live object is reachable from any code in a Python interpreter
<whitequark>
as a language designer, you can say "here are the rules, they are what they are, and if you violate them, I wash my hands", and that's how you get crap like C
<whitequark>
(or crap like 'x in synthesizable Verilog)
cr1901_modern has quit [Quit: Leaving.]
<whitequark>
(synthesizers need a "freeze" operation like LLVM has, and HDLs with 'x need to expose it, but that's a whole other story)
* ZirconiumX
is googling LLVM freeze right now
<whitequark>
if you want your language to be helpful in writing reliable code, you have to somehow communicate intent (both from the programmer to the compiler, and between two programmers) to make sure these vital invariants are unlikely to be violated
<whitequark>
that, by the way, is the reason for having a typechecker (mypy) in a dynlang at all
<whitequark>
yes, you'll always be able to just ignore it, and (sigh) yes, it's unsound, but neither of those makes it useless. what it means is you can't use a typechecker to prove things, like you can in rust or ocaml, but there are other reasons to use one, too
<whitequark>
it's really more of an advanced linter in that aspect
<ZirconiumX>
Okay, I think it makes sense now
<ZirconiumX>
The most boring way of checking the invariants is to elaborate the modules and then check whether they are "equal", for some definition of equal.
<ZirconiumX>
Obviously this wouldn't be great.
<ZirconiumX>
I think expressing these invariants would be a good start if this is going to be done in any serious capacity
<whitequark>
finding exact subtrees is easy
<whitequark>
finding subtrees modulo equivalence is, I think, NP-hard
<ZirconiumX>
Thus the "for some definition of equal". Exact subtrees would guarantee equivalence though, no?
<whitequark>
there are no exact subtrees in nmigen code because that's useless
<whitequark>
you can just remove one instance without any change in behavior
<whitequark>
hm
<whitequark>
actually, we could reduce this problem to tree isomorphism (which is linear time) plus tree hashing to reject obviously wrong candidates
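A toy sketch of that last idea, using nested tuples as stand-ins for elaborated fragments: hash every subtree so that only hash-equal subtrees need the expensive equivalence check.

    from collections import defaultdict

    def subtree_hash(tree, buckets):
        # Recursively hash each subtree and bucket subtrees by hash;
        # only members of the same bucket are deduplication candidates.
        if isinstance(tree, tuple):
            h = hash(tuple(subtree_hash(child, buckets) for child in tree))
        else:
            h = hash(tree)
        buckets[h].append(tree)
        return h

    design = ("top",
              ("adder", ("port", "a"), ("port", "b")),
              ("adder", ("port", "a"), ("port", "b")))
    buckets = defaultdict(list)
    subtree_hash(design, buckets)
    candidates = [grp for grp in buckets.values() if len(grp) > 1]
    # a real implementation would still have to verify true equivalence
    # (e.g. via tree isomorphism) within each candidate group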
<ZirconiumX>
Given what you said about trying to infer intent, I'm presuming you'd want some form of explicitly expressing "deduplicate this please", rather than a best-effort attempt?
<whitequark>
yes
<ZirconiumX>
And, then this would error if it couldn't confirm the invariants or whatever?
<whitequark>
yes
<ZirconiumX>
Luke Gorrie in his RaptorJIT stuff talked about avoiding "high impact, medium generality" optimisations, and I'm wondering if this would count
<whitequark>
I mean, deduplication by itself isn't actually interesting
<whitequark>
the first thing any synthesizer will do is... expand it back
<ZirconiumX>
But I suppose it's really the same argument you are making
<whitequark>
literally the only thing it gives you is more compact Verilog intermediates
<daveshah>
Not necessarily - synthesis doesn't have to involve flattening
<ZirconiumX>
(I don't think synth_xilinx flattens by default, does it?)
<daveshah>
No, it doesn't
<daveshah>
I've used this for big PnR benchmark designs - only flatten after synthesis
<whitequark>
hm, you're right
<ZirconiumX>
So since it's spending less time on the duplicated modules, doesn't this potentially give a synthesis speedup?
<whitequark>
yes, that's true
<ZirconiumX>
Plus if nMigen can spend less time elaborating modules, that's a transpilation speedup, right?
<whitequark>
there's no such thing as a "transpiler". there are compilers and there are compilers named by people who are wrong
<whitequark>
but that aside
<ZirconiumX>
I was going to use "translation" originally, but
<whitequark>
(nmigen doesn't even technically qualify as a "transpiler" because it outputs RTLIL, which only coincidentally uses a human readable form and frankly would be better off outputting binary RTLIL if it existed)
<whitequark>
(that's like calling gcc a transpiler because it outputs assembly as text)
<whitequark>
anyway, so let's consider *why* you might want deduplication
<whitequark>
if you want it strictly as an *optimization* that may at the discretion of the compiler be not run on any particular input code, then "best effort" deduplication is fine provided that it runs in better than quadratic time, which it can (something I didn't realize before today)
<whitequark>
in this scenario, an issue saying that "modules such and such don't get deduplicated" can be closed with "not a bug, won't fix"
<ZirconiumX>
Based on the entirely unscientific experiment of my laptop running on battery power, nMigen takes ~2 seconds to elaborate a single pipeline, but ~41 seconds to elaborate 16 of them
<whitequark>
this doesn't save you any elaboration time at all
<whitequark>
best-effort deduplication can only run on fragments
<ZirconiumX>
Right
<whitequark>
that means you build the entire hierarchy of modules and then throw most of them away before you output verilog
<whitequark>
all it wins you is smaller verilog and (with some synthesizers, not yosys right now) faster synthesis
<ZirconiumX>
Definitely Quartus though
<ZirconiumX>
But that's because Quartus sucks ^.^
<whitequark>
now one could say "I want to annotate my module so that it can be cached and only elaborated once" and my response is "the whole point of Migen was to make a HDL hard to use incorrectly, adding annotations that may silently break correctness to make things slightly faster goes completely against that"
<whitequark>
(okay, not the whole point, maybe half of it)
Stormwind_mobile has joined #m-labs
<whitequark>
also
<whitequark>
based on 13:47 < ZirconiumX> Based on the entirely unscientific experiment of my laptop running on battery power, nMigen takes ~2 seconds to elaborate a single pipeline, but ~41 seconds to elaborate 16 of them
<whitequark>
it sounds like this discussion, yet again, revolves around an XY problem
<whitequark>
the problem you have is that elaboration is slow, but your suggestion is to "add deduplication" for some reason, instead of "make elaboration faster"
<whitequark>
the current fragment transform code is stupidly slow for no real reason other than me making a design mistake when writing it
<ZirconiumX>
My suggestion is "the fastest elaboration is no elaboration", but I'm just tired of all of this
<whitequark>
sure, but you can't avoid elaboration and also never incorrectly cache things, in Python at least
<whitequark>
if you could, you wouldn't need any annotations
Stormwind_mobile has quit [Ping timeout: 240 seconds]
<whitequark>
on the other hand, the fragment transform code currently always copies fragments, even when it works on a copy it just created
<whitequark>
it has to copy at least once, since if you e.g. make `Cat(ResetSignal())` or something like that and pass it to multiple modules in different clock domains, mutating that object would break the code (referential transparency again)
<whitequark>
(assuming the transforms mutate the AST, anyway)
<whitequark>
but it never has to copy more than once, and right now that's a major waste of time
<whitequark>
ZirconiumX: can you share the design that takes 41 seconds to elaborate so I can use it as a benchmark?
<ZirconiumX>
It's also possible (probable) that I'm doing this inefficiently.
<whitequark>
that makes it a better benchmark
<whitequark>
I mean, bad news if you want your code to work quickly, good news if you're improving the compiler; definitely a matter of perspective :p
<whitequark>
really, the more pathological the input is, the happier it can make compiler devs, at least if it exposes some previously unseen issues with it
<whitequark>
hence the efficiency of fuzzers like csmith
Stormwind_mobile has quit [Ping timeout: 245 seconds]
Stormwind_mobile has joined #m-labs
Stormwind_mobile has quit [Ping timeout: 250 seconds]
zkms has quit [Quit: zkms]
zkms has joined #m-labs
<Astro-_>
@pathfinder49 thanks for your branch again, I fixed shdn and iset_width in master now.
<Astro-_>
btw, was the "thermostat" name final? my proposal would be "tecpak" :)
<mtrbot-ml>
[mattermost] <pathfinder49> @astro Not final. Aside: pins PK1 and PQ4 are also missing. Probably worth rechecking that all pins are configured.
Stormwind_mobile has joined #m-labs
Stormwind_mobile has quit [Ping timeout: 246 seconds]
ohama has quit [Ping timeout: 240 seconds]
ohama has joined #m-labs
X-Scale` has joined #m-labs
X-Scale has quit [Ping timeout: 246 seconds]
Getorix_ has joined #m-labs
X-Scale` is now known as X-Scale
Getorix has quit [Ping timeout: 265 seconds]
Stormwind_mobile has joined #m-labs
Stormwind_mobile has quit [Ping timeout: 250 seconds]
<mtrbot-ml>
[mattermost] <hartytp> @astro the name thermostat is very final
<mtrbot-ml>
[mattermost] <hartytp> Too much time has been wasted arguing over hardware project names and I’m happy with that one
<mtrbot-ml>
[mattermost] <hartytp> I also don’t really get this whole “pak” thing and am not really interested in adopting it
rohitksingh has quit [Ping timeout: 250 seconds]
Stormwind_mobile has joined #m-labs
Stormwind_mobile has quit [Ping timeout: 265 seconds]
rohitksingh has joined #m-labs
Stormwind_mobile has joined #m-labs
Stormwind_mobile has quit [Ping timeout: 240 seconds]
X-Scale has quit [Ping timeout: 240 seconds]
X-Scale` has joined #m-labs
X-Scale` is now known as X-Scale
rohitksingh has quit [Ping timeout: 265 seconds]
<Astro-_>
@pathfinder49 right, I don't know what to do with these pins. I'd guess PWM at some frequency? configurable?
<mtrbot-ml>
[mattermost] <pathfinder49> The pin needs to be high or low to toggle 0.5 vs 1 MHz switching. Have a look at the data sheet.
kaaliakahn has quit [Remote host closed the connection]