<sb0>
rjo, your last emails say "PATCH 1..3/7" - are patches 4..7 missing?
<GitHub92>
[migen] sbourdeauducq pushed 3 new commits to master: http://git.io/qGiigQ
<GitHub92>
migen/master 7c19e43 Robert Jordens: vivado: mode batch to prevent vivado from opening tcl shell on error
<GitHub92>
migen/master 6836432 Robert Jordens: cordic: vivado is bad at inferring compact adder/subtractor logic
<GitHub92>
migen/master 4328122 Robert Jordens: vivado: add more reporting
fengling has joined #m-labs
_florent_ has joined #m-labs
<sb0>
rjo, you can get close - if you always access the page buffers (i.e. only make accesses that are within the same row in each bank) then the only dead times are due to refreshes, roughly 1%
<sb0>
otherwise, there are delays due to the precharge/activate when you cross a row boundary in a bank. on a sequential access, they are small.
<sb0>
switching between read and write also causes turnaround and write recovery delays
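The refresh overhead sb0 quotes ("roughly 1%") can be sanity-checked with back-of-the-envelope arithmetic. The timing values below are assumptions chosen as typical DDR3 datasheet figures (tREFI and tRFC vary by density and speed grade; check the actual part), not numbers from the chat:

```python
# Rough estimate of SDRAM dead time due to refresh.
# tREFI/tRFC values are assumptions (typical DDR3 figures), not from the log.

tREFI_ns = 7800.0  # assumed average refresh interval: 7.8 us
tRFC_ns = 110.0    # assumed refresh cycle time: ~110 ns (1 Gb-class part)

refresh_overhead = tRFC_ns / tREFI_ns
print(f"refresh dead time: {refresh_overhead:.1%}")  # ~1.4% with these numbers
```

With a larger part (bigger tRFC) the figure grows, but it stays in the low single-digit percent range, consistent with the "roughly 1%" claim.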
fengling has quit [Ping timeout: 268 seconds]
<rjo>
sb0: re patches. yeah. they are not ready yet ;)
<rjo>
sb0: i see. so a lasmi dma master would then run at -- let's say -- 125 MHz with 512 bit wide data?
<sb0>
yes
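For reference, the figures rjo quotes work out to 8 GB/s of peak bandwidth; the arithmetic is just clock rate times bus width, before any refresh/turnaround losses:

```python
# Peak throughput of the 512-bit @ 125 MHz LASMI DMA master discussed above.
clk_hz = 125e6
width_bits = 512

peak_bits_per_s = clk_hz * width_bits
peak_bytes_per_s = peak_bits_per_s / 8
print(peak_bytes_per_s / 1e9)  # 8.0 (GB/s, peak, ignoring dead cycles)
```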
<sb0>
I'm not sure if we're going to do any DMA...
<rjo>
sb0: ack.
fengling has joined #m-labs
<sb0>
we're probably going to put the network packets in BRAM
<sb0>
it's a small amount (a few kilobytes) and it's much easier. especially if we want to do some processing in gateware.
<rjo>
sb0: re DQ write leveling. are the measured read delays not a good estimate for the write delays?
<sb0>
they are related, yes
<rjo>
sb0: i was thinking about fast and long streams of rtio pulses.
<rjo>
sb0: but are they not a better estimate than the initial write leveling you do?
<sb0>
the write leveling aligns DQS with CK (for each chip)
<sb0>
of course, the more skew one chip has, the more delay you need to put on DQS in order to align it
<sb0>
and the later it outputs data after a read command
<rjo>
but you put the same delay on DQ (minus the specced S/H).
<rjo>
there is no special leveling of DQ wrt DQS.
<sb0>
when writing, DQ has setup/hold requirements wrt DQS. but no, there is no special calibration mode for this timing, all you can do is try to write and read back
<sb0>
I don't think you can use the read DQ timing to determine the write timing, as the SDRAM has a nonzero clock-to-data (and clock-to-DQS) spec
<sb0>
(when reading)
<sb0>
write timing and read timing are related, but not to the level of precision we need
<rjo>
sb0: i see. and the DQ DQS length match is probably much better than what could be inferred from the read leveling.
<sb0>
yes. and we're already making a length matching assumption to be able to send the command/address to the SDRAM (though the bit times are 2x slower there)...
<rjo>
vivado is taking an unholy long time to compile this thing...
<sb0>
oh yes
<sb0>
that's why I've been mostly using ISE
<rjo>
isn't it supposed to be faster?
<rjo>
and why does it take 90k luts just for the lm32_load_store_unit?
<sb0>
rjo, do you really want all lines <= 79 chars?
<rjo>
sb0: pretty please ;) is that a problem?
<sb0>
it results in more lines/scrolling, and modern monitors (unlike 80col VGA consoles from ages ago) have no problems with long lines
<sb0>
but well, I'm not going to argue about that...
<rjo>
vivado is doing something wrong. both lm32 and mor1kx need ~90k luts just for the lsu
<sb0>
that may help explain the long compile times
<sb0>
did it fail to infer block RAM?
<rjo>
sb0: monitor real estate is not the reason for short lines. the fact that short lines are more readable has been known since the first books were printed ;)
<rjo>
sb0: doesn't look like it. it uses 241 "luts as memory"
<rjo>
sb0: they are all used as logic.
<rjo>
multipliers are there as well...
<sb0>
do you have a xilinx webcase account?
<rjo>
i believe i received that honor by randomly clicking around at some point in the past. but i never made use of it.
xiangfu has quit [Remote host closed the connection]
<stekern>
vivado swaps the dcache memories for luts to improve timing
<stekern>
so, it's not a bug, but a feature
<stekern>
I bet you can turn that off somehow
<rjo>
stekern: what message am I looking for to confirm that?
<stekern>
wait, I can look it up
<rjo>
stekern: it claims it only uses 240 luts as distributed ram
<sb0>
rjo, if that was years ago, they probably closed it (they closed a bunch of them and redirected users to the community forums where legitimate xst bug reports turn into FSM coding styles discussions that have nothing to do with the bug)
<rjo>
stekern: but 79k slice registers and 90k luts as logic...
<sb0>
otherwise, that's the sort of problem they used to fix
<rjo>
sb0: no i just checked. i can still log in with my old ETHZ account and just for good measure i am also getting a webcase account for here now...
<rjo>
sb0: ack. i might try to submit that then...
<rjo>
if there wasn't an "Oracle Access Manager Operation Error"...
<rjo>
anyways. gotta go now. stekern: if you can give me a hint what i should be looking for, i'll scroll through the log tomorrow.
<stekern>
rjo: hmm, doesn't seem to be in the reports, so it was probably somewhere in the stdout output I saw it