<wpwrak>
i didn't add any red LEDs around the buttons because the green LEDs would make them hard to see. best case, it would just be a sea of yellow light
<wolfspraul>
can the leds increase radiation emissions?
<wpwrak>
you mean EMI ? if you run the PWM very fast ....
sh4rm4 [sh4rm4!~sh4rm@gateway/tor-sasl/sh4rm4] has joined #milkymist
<wpwrak>
i switched them at 1 kHz. not sure how fast the edges would be.
<wpwrak>
worst case, we could add some caps to the matrix, e.g., after the current-limiting Rs
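The RC idea can be sanity-checked with a quick calculation. The component values below are made-up placeholders for illustration, not values from the actual board:

```python
import math

def rc_cutoff_hz(r_ohm, c_farad):
    # -3 dB corner of a first-order RC low-pass: f_c = 1 / (2*pi*R*C)
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

# hypothetical values: 220 ohm current-limiting resistor, 100 nF cap after it
fc = rc_cutoff_hz(220, 100e-9)
```

A corner in the single-digit-kHz range leaves the 1 kHz PWM fundamental intact while slowing down the fast edges whose higher harmonics would show up as EMI.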
<wolfspraul>
looking at collage-front - ok
<wolfspraul>
no comment other than I don't like more than 1 color
<wolfspraul>
looks like a cheap xmas tree
<wpwrak>
at least one green LED will relocate anyway :)
<wolfspraul>
do you want multiple colors?
<wpwrak>
oh, it can become red. no problem with that.
<wpwrak>
ah, interesting. they replaced the laser pointer in the rii keyboard with a universal IR remote. smart move, though cat lovers may be disappointed.
<wolfspraul>
so that's good, I think we will zoom in on a vendor that is willing to work with a tiny customer like us, and sell us keyboards at a good price and with the specs we want (though it is in our own interest to pick an existing high-volume model)
<wolfspraul>
do we need infrared as well?
<wolfspraul>
I thought if we have keyboard + mouse that's perfect (with USB dongle)
<wolfspraul>
also a backlight, which I think is very important
<wpwrak>
the thing just seems to come with an extra. either laser pointer (old) or new IR remote.
<lekernel>
most signal names are autogenerated with the new python introspection hacks
<wpwrak>
looks nice and tidy :) some constructs are non-intuitive, though. e.g., the structure of things in wishbone.InterconnectShared, or what verilog.convert does.
<wpwrak>
but i guess a lot of that stuff is ultimately a documentation issue
<wpwrak>
and maybe some of the procedural stuff for code generation can be hidden later
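As a rough illustration of the auto-naming idea in plain Python (this is a toy, not actual migen code; the real tracer works differently):

```python
class Signal:
    """toy HDL signal; the name is filled in later by introspection"""
    def __init__(self, width=1):
        self.width = width
        self.name = None

def autoname(namespace):
    """derive signal names from the Python variables they are bound to,
    similar in spirit to migen's auto-generated signal names"""
    for key, value in namespace.items():
        if isinstance(value, Signal) and value.name is None:
            value.name = key

class Uart:
    """hypothetical core: signals get named after their attributes"""
    def __init__(self):
        self.tx_busy = Signal()
        self.rx_data = Signal(8)
        autoname(vars(self))

u = Uart()
# u.tx_busy.name is now "tx_busy", u.rx_data.name is "rx_data"
```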
wolfspra1l [wolfspra1l!~wolfsprau@p5B0ABF11.dip.t-dialin.net] has joined #milkymist
<kristianpaul>
cheap xmas tree indeed ;)
<lekernel>
kristianpaul: you should start using migen for your gps stuff. no more manual modifications of wb_conbus ...
<kristianpaul>
lekernel: yeah sure, i'll update your migen repo and look for an example
<kristianpaul>
milkymist-ng**
<lekernel>
atm it runs the beginning of libhpdmc
<lekernel>
ie you get a banner
<lekernel>
then there's no UART RX (maybe you could help? simply port the existing code to migen...) and no sdram
<kristianpaul>
yeah i noticed that. why don't you use the old uart anyway?
<lekernel>
because we have a new generic system for generating CSR banks and interrupt/event controllers, rather than hand-coding it in the UART and every other core
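A toy model of what such a CSR bank generator does: given named registers, it hands out consecutive addresses, splitting wide registers across several bus locations. Register names and widths here are invented; the real migen logic differs:

```python
def build_csr_bank(registers, base=0x0, bus_width=8):
    """assign consecutive CSR addresses to named registers; a register
    wider than the bus occupies several consecutive locations"""
    addr_map = {}
    addr = base
    for name, width in registers:
        locations = max(1, -(-width // bus_width))  # ceiling division
        addr_map[name] = (addr, locations)
        addr += locations
    return addr_map

# hypothetical uart register file: data, 16-bit divisor, status bits
bank = build_csr_bank([("rxtx", 8), ("divisor", 16), ("stat", 2)])
```

The point of generating the map instead of hand-coding it is that every core gets the same decode logic for free, and the address map can be regenerated whenever a core is added or changed.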
<kristianpaul>
hmm yup
<lekernel>
but yes, the lazy-lazy approach is to use instance encapsulation (like I did for lm32)
<wpwrak>
aka backward-compatibility ? :)
<kristianpaul>
yay ;)
<wpwrak>
i'd consider that a feature. allows for incremental deployment, etc.
<lekernel>
it'll make sense for more complex code, but uart is 100 lines
<wpwrak>
ok :)
<lekernel>
and by the way, there's no backward compatibility for the -ng soc, except for the lm32 instruction set
<kristianpaul>
avoiding writing/forking a new cpu? ;)
<lekernel>
why? lm32 is fine
<kristianpaul>
sure ;)
<kristianpaul>
so you plan to rewrite the sdram controller too, and the tmu?
<lekernel>
and everyone's writing (often shitty) softcore CPUs anyway ...
<lekernel>
yes, sdram controller and TMU will be fundamentally redesigned
<kristianpaul>
btw the fast memory link will be redesigned too, i guess?
<lekernel>
yes, new OOO bus
* kristianpaul
looks for its python book
<kristianpaul>
btw is icarus happy with the verilog code generated by migen?
<lekernel>
yes
<kristianpaul>
phew
<lekernel>
why shouldn't it?
<wpwrak>
(no compatibility) so yet another round of register address, interrupts, etc., reassignments ?
<lekernel>
yes
<wpwrak>
phew
<lekernel>
the CSR address space is saturated anyway
<kristianpaul>
oh yes :)
<wpwrak>
so you extended it ?
<lekernel>
yes
<kristianpaul>
finally 1 bit? was it?
<wpwrak>
good then. we'll need room for all the LED controls ;-)))
<kristianpaul>
dunno, well, icarus still couldn't handle lm32 last time i tried..
<lekernel>
and we'll have level-sensitive interrupts everywhere (you asked for it for linux, now you'll get it)
<wpwrak>
i don't mind having level-sensitive interrupts :)
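The behavioural difference between the two, sketched as a plain-Python model (not the actual -ng event controller):

```python
class EdgeIRQ:
    """edge-triggered: a rising edge latches a pending flag
    that software must explicitly clear"""
    def __init__(self):
        self.prev = 0
        self.pending = 0
    def tick(self, line):
        if line and not self.prev:
            self.pending = 1
        self.prev = line
    def clear(self):
        self.pending = 0

class LevelIRQ:
    """level-sensitive: pending simply mirrors the line, so a
    still-asserted source keeps the interrupt raised -- the behaviour
    linux drivers generally expect"""
    def __init__(self):
        self.pending = 0
    def tick(self, line):
        self.pending = 1 if line else 0
```

With the edge model, clearing the flag while the line is still high loses the interrupt; with the level model it immediately re-asserts.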
<kristianpaul>
how could an mmu fit in -ng? would it be just another core, or could migen provide some aid in writing it?
<lekernel>
the MMU needs to be integrated into the LM32 pipeline, which is pretty much independent of migen
<kristianpaul>
ah, so this forks lm32 anyway. interesting
<kristianpaul>
i guess you discussed this before, but doesn't putting stuff inside that pipeline make things slower?
<wpwrak>
i don't think an MMU would have to "fork" lm32. but it's surely an invasive internal change (mainly to the cache, of course)
<kristianpaul>
invasive indeed
<kristianpaul>
ah ok
<kristianpaul>
but the cache is part of the pipeline? now i'm confused :)
<lekernel>
yes, the cache is part of the pipeline
* wpwrak
wonders if there's a good primer on physically tagged virtual caches. i have a boot that explains caches rather nicely but that's not really suitable as a reference here
<wpwrak>
s/boot/book/ # grmbl
<kristianpaul>
boot the book !
<wpwrak>
"unix systems for modern architectures" by curt schimmel. from 1994. YMMV. (your "modern" may vary)
<lekernel>
now there's the interesting possibility of using dataflow to build a CPU, but that would need support for speculation
<wpwrak>
of course, there probably haven't been too many changes to the state of the art since then
<GitHub30>
[migen/master] Logo - Sebastien Bourdeauducq
DJTachyon [DJTachyon!~Tachyon@ool-43529b4a.dyn.optonline.net] has joined #milkymist
cjdavis1 [cjdavis1!~cjdavis@cpe-71-67-99-208.cinci.res.rr.com] has joined #milkymist
elldekaa [elldekaa!~hyviquel@abo-168-129-68.bdx.modulonet.fr] has joined #milkymist
elldekaa [elldekaa!~hyviquel@abo-168-129-68.bdx.modulonet.fr] has joined #milkymist
lekernel_ [lekernel_!~lekernel@g225044059.adsl.alicedsl.de] has joined #milkymist
<sh4rm4>
!addquote * kristianpaul looks for its python book
<lekernel>
!quote
kilae [kilae!~chatzilla@catv-161-018.tbwil.ch] has joined #milkymist
<larsc>
hm, he always quits when i'm about to ask something
lekernel [lekernel!~lekernel@g225044059.adsl.alicedsl.de] has joined #milkymist
<larsc>
lekernel: i'm currently trying to understand the migen flow stuff
<larsc>
so an actor does something. it has zero to many inputs and zero to many outputs. an input has a data-in and a data-accept signal, and an output has a data-out and a data-ready signal
<larsc>
the data-out signal is normally a register
<lekernel>
no, the data-out can have combinatorial logic too
<lekernel>
it just depends how the control signals are driven by the actor
<larsc>
but you have to have a register somewhere in the actor
<lekernel>
btw - the control signaling is the same as the TMU which is explained in my thesis, except that stb-to-ack combinatorial feedback at an actor input is allowed
<lekernel>
no, you don't
<lekernel>
in a very simple case you can simply pass through the control signal and have some combinatorial logic in the token path
<lekernel>
the adder actor (and the other basic logic functions) do this, for example
<larsc>
so it only acks the input once the output has been successfully delivered?
<lekernel>
of course, if you stack too many of those actors in series, you can have timing closure problems later
<lekernel>
that's why there's a generic "buffer" actor which simply inserts a register
<larsc>
and you have to insert the buffer manually?
<lekernel>
at the moment yes
<lekernel>
but it shouldn't be too hard to implement some algorithm that examines the graph and inserts buffer actors based on some heuristics
<lekernel>
(btw, the buffer actor adapts to any data type automatically)
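A plain-Python model of that token flow (a behavioural sketch, not migen code): each buffer holds one token, and a token advances only when the next stage is free, which is exactly the stb/ack handshake seen from above.

```python
class Buffer:
    """one-deep register stage: the generic 'buffer' actor"""
    def __init__(self):
        self.reg = None              # the held token, or None when empty
    def can_accept(self):
        return self.reg is None
    def push(self, token):
        self.reg = token
    def pop(self):
        token, self.reg = self.reg, None
        return token

def step(stages, source, sink):
    """one clock cycle: evaluate downstream-first so a token moves at
    most one stage per cycle, like registers in hardware"""
    if stages[-1].reg is not None:
        sink.append(stages[-1].pop())          # sink always accepts here
    for i in range(len(stages) - 2, -1, -1):
        if stages[i].reg is not None and stages[i + 1].can_accept():
            stages[i + 1].push(stages[i].pop())
    if source and stages[0].can_accept():
        stages[0].push(source.pop(0))

stages = [Buffer(), Buffer()]
source, sink = [1, 2, 3], []
for _ in range(6):
    step(stages, source, sink)
```

After enough cycles all three tokens arrive at the sink in order; inserting more Buffer stages adds latency but breaks long combinatorial paths, which is the timing-closure point made above.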
<larsc>
is there any plan that an actor can adapt its number of stages depending on the graph?
<larsc>
e.g. if you have a multiplier in parallel with some other stuff
<lekernel>
you mean, for pipeline compensation?
<larsc>
and the other stuff takes a fixed number of cycles
<larsc>
i don't know what pipeline compensation is
<lekernel>
having the same number of registers in all data paths so all associated data elements come out at the same time
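i.e. something along these lines (a sketch of the bookkeeping only, not of any planned migen API):

```python
def compensation(depths):
    """given the register depth of each parallel data path, return how
    many extra buffer stages each path needs so that associated tokens
    come out of all paths at the same time"""
    deepest = max(depths)
    return [deepest - d for d in depths]

# hypothetical graph: a 3-stage multiplier in parallel with a 1-stage
# adder and a plain wire
extra = compensation([3, 1, 0])
```

The shallower paths get padded with buffer actors until every path matches the deepest one.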
<larsc>
ok. no, i meant if we know from the graph that there is an actor in parallel to our actor which needs a fixed number of cycles, and our actor can be implemented with different cycle times, that the cycle time would be chosen so that resource usage is minimized
<lekernel>
ha
<lekernel>
yes, but I'm rather thinking about implementing it in a more heuristic way
<lekernel>
i.e. you run a simulation of your whole system on some datasets
<lekernel>
and it tries to find which actors are uselessly fast/resource intensive, and tries to switch them to more sequential implementations
<larsc>
hm
<lekernel>
in the same vein there can be actor sharing, too (i.e. if the same actor appears twice in the graph, it can be only implemented once, and some glue logic multiplexes the tokens in and out)
<lekernel>
supporting this sort of stuff only in the "fixed number of cycles" case is too limiting to be really useful imo
<lekernel>
in the real world, you have DMAs to system memory and algorithms with a processing time depending on the data
<lekernel>
this whole thing would only appear way later though
<lekernel>
I want a migen version with great "manual" dataflow to start with :)
<larsc>
hm
<lekernel>
another "blue skies research" area is speculative execution. then you could imagine to build efficient pipelined CPUs quite easily using dataflow if done right
<lekernel>
but I'm not really thinking about this yet... too much to do already
<larsc>
ah, ok. the combinatorial scheduling model is just not mentioned in the README
<larsc>
so that's basically a passthrough of the control signals and some combinatorial logic applied to the data
<lekernel>
yes, but it can be a bit more complicated than a passthrough
<lekernel>
ie if you have a combinatorial actor with two inputs and one output
<larsc>
hm
<lekernel>
it must only ack those two inputs at the same time, and when both have a token to send that can be accepted downstream
<lekernel>
this needs a few AND gates
<lekernel>
passthrough is only in the simple 1 input 1 output case
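Written out as boolean control equations (an illustrative sketch of the rule, not migen's actual generated logic):

```python
def two_input_ctl(stb_a, stb_b, ack_down):
    """control for a combinatorial two-input, one-output actor: offer a
    result only when both inputs carry a token, and ack both inputs
    together, only when downstream also accepts -- a few AND gates"""
    stb_out = stb_a & stb_b          # a result exists iff both tokens exist
    ack_a = stb_out & ack_down       # consume a and b simultaneously,
    ack_b = stb_out & ack_down       # and only on a completed transfer
    return stb_out, ack_a, ack_b
```

If either input lacks a token, nothing is offered and nothing is acked, so no token is ever consumed without its partner.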
<larsc>
yes
<larsc>
hm, i guess two outputs, one input is even more complicated
<larsc>
hm, copy & paste error in ala.py for And()?
<GitHub83>
[migen/master] flow/ala: fix typo for And (thanks Lars) - Sebastien Bourdeauducq
<larsc>
somehow the distinction between different scheduling models reminds me of 'reg' and 'wire'
<wpwrak>
;-)
<lekernel>
the point of scheduling models is (1) to have generic control logic in the base Actor class (2) to be able to implement algorithms that do things like remove the control logic and signals when a static schedule is found, insert buffer actors, equalize pipelines, etc.
<lekernel>
so I think you have spoken too fast *g*
<larsc>
yes, but for example combinatorial is just a special case of pipelined, with the length of the pipeline being zero
<larsc>
also it is a special case of sequential with the ratio being one
<larsc>
also, which scheduling model does an actor get when it is a combined actor made out of a pipelined and a sequential actor?
<wpwrak>
so the scheduling model is basically a hint to the code generator ? i.e., in theory, a perfect code generator could figure it out on its own. but it may be hard to implement.
<larsc>
imo a better model is to let an actor have both properties
<larsc>
the number of pipeline steps and the number of sequential steps
lekernel [lekernel!~lekernel@g225034075.adsl.alicedsl.de] has joined #milkymist