hansfbaier has quit [Read error: Connection reset by peer]
roboknight has joined #litex
<roboknight>
So, I've been trying to add a Verilog component to my VexRiscv processor. I've finally got it compiling. How do I go about simulating everything so that I can see what happens to the signals when I access the component? I can get a handle on that if I'm setting up a single component, but what about monitoring the wishbone bus signals coming into my target component?
<_florent_>
roboknight: if it's some verilog code, you can use litex_sim as a starting point for your simulation, you'll be able to simulate vexriscv and your component with verilator
<_florent_>
you can also just generate the verilog from LiteX and do the simulation manually with verilator, icarus, etc...
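A typical `litex_sim` invocation looks something like the following (a sketch; the exact flag set depends on your LiteX version, so check `litex_sim --help`):

```shell
# Simulate a VexRiscv-based SoC with Verilator; --trace dumps a
# waveform file so the bus signals can be inspected in GTKWave.
litex_sim --cpu-type=vexriscv --trace
```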
<roboknight>
I'll take a look at that... I was trying to dig into the betrusted-io stuff for some simulation help, but a lot of the stuff seems wrapped up in some of their own tools for generating the proper simulation files. I'll also take a look at verilator and icarus to see how that goes... those might be significantly easier... not sure.
<somlo>
_florent_: I think I managed to hook up litex_sim, litex_server, and litescope_cli, and to trigger a capture based on the rising edge of a signal.
<somlo>
Remaining issue is that dump.vcd looks "blank" when I bring it up in gtkwave...
<somlo>
hoping the "nothing showing in gtkwave" issue is a common well-known n00b mistake with an easy answer :)
<_florent_>
somlo: I just saw your mails, it seems good. For GTKWave, you have to select the Signals and click on Append
<somlo>
_florent_: so I was right, then: well-known n00b mistake with an easy answer :D
<somlo>
thanks much, looks like I'm on my way!
<roboknight>
Can the migen.fhdl.verilog convert function generate the full SoC? Or is it mainly used for generating parts of an SoC for testing? When attempting to use it like this: soc = BaseSoC(...) and then convert(soc).write("output.v"), it says "attempted to use a reset synchronizer, but platform does not support them"... So I'm guessing that's not quite right.
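For a full SoC, the usual path is LiteX's Builder rather than migen's convert(), which is aimed at standalone modules. A minimal sketch, assuming BaseSoC is your own target's SoC class (the Builder arguments shown are illustrative and may differ between LiteX versions):

```python
# Sketch: drive the full LiteX build flow for a target SoC.
# BaseSoC here stands in for your own platform-specific target class.
from litex.soc.integration.builder import Builder

soc = BaseSoC()  # your SoC, constructed against a real platform

# The Builder generates the Verilog, CSR map, and software headers;
# compile_gateware=False stops before invoking the FPGA toolchain,
# leaving the generated Verilog under the build output directory.
builder = Builder(soc, compile_gateware=False)
builder.build()
```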
<joseng>
I have a strange behavior of the LiteDRAMDMAReader. When I read with it the first time, it returns the correct RAM content. When I change the data in the RAM and read again, I get the same content from the DMA Reader as before.
<joseng>
Do I need to set/reset something between transactions? Or might I have the timing/handshaking wrong for the addresses?
<joseng>
Just used a base address of 0 to read from the beginning of the RAM. When I read/write with the BIOS commands to the RAM, I see the correct values also changing in the RAM
<joseng>
Or now the DMA Reader reads only FF for all data I try to read...
roboknight has quit [Quit: Connection closed]
<_florent_>
joseng: it's possible you are writing/reading at a different location.
<_florent_>
using the BIOS's mem_read/mem_write functions can be useful to diagnose this: verify that you read the same value with mem_read and the LiteDRAMDMAReader, then update the data with mem_write and be sure you also get the updated data with the LiteDRAMDMAReader.
<_florent_>
you can also try to replicate this in simulation with litex_sim and look at the .vcd traces or use Display as a printf equivalent to understand/debug the gateware
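The BIOS side of that check might look like the following console session (a sketch; the exact command syntax can vary between LiteX versions):

```shell
litex> mem_read 0x40000000 32            # dump 32 bytes from the start of main RAM
litex> mem_write 0x40000000 0x12345678   # overwrite it with a known pattern
litex> mem_read 0x40000000 32            # confirm the pattern landed
```

If the DMA reader still returns the old data after the pattern is visible via mem_read, the mismatch is on the DMA side rather than in the RAM contents.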
<joseng>
Thanks for the hints about the wrong address. I thought that when I provide address 0 to the DMA Reader, it means address 0x40000000, where the RAM starts
<joseng>
But my empirical tests now showed that a write in the BIOS console, "mem_write 0x40000000 0xAAAABBBB 0x810", shows exactly 16 bytes of AAAABBBB returned by the DMA reader
<joseng>
After the 16 bytes of correct data I get FF again. When I write longer data from the BIOS command, I get more data returned by the DMA reader
<joseng>
Where does it come from that the "start address" of the DMA reader is shifted up by 0x800 * 4 bytes?
<joseng>
_florent_ Does the VexRiscv I instantiate shift the address up by the memory sizes I provide there?
<joseng>
Or what do these lines exactly do? "integrated_rom_size=0x10000" and "integrated_main_ram_size=0x0000" (just copied this from the NeTV2 code when I first started)
<joseng>
The only thing I can find with a size of 0x2000 (0x800 * 4) is the "integrated_sram_size = 0x2000" in the SoCCore class, but the csr.csv export said that the "sram" region is at 0x01000000?! So what's the offset for the DMA Reader then?
Bertl is now known as Bertl_oO
<_florent_>
joseng: not sure that's your issue, but the addressing of the DMA reader is in controller words, not bytes - so in 256-bit / 32-byte units
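In other words, a CPU byte address has to be divided by the controller word size before being handed to the DMA reader. A small illustration, assuming the 256-bit controller data path mentioned above:

```python
# Convert a byte offset from the start of DRAM into a DMA-reader word
# address, assuming a 256-bit (32-byte) controller data path.
CONTROLLER_DATA_WIDTH = 256              # bits
WORD_BYTES = CONTROLLER_DATA_WIDTH // 8  # 32 bytes per controller word

def dma_word_address(byte_offset):
    """Byte offset from the start of DRAM -> DMA reader word address."""
    assert byte_offset % WORD_BYTES == 0, "offset must be word-aligned"
    return byte_offset // WORD_BYTES

# A byte offset of 0x2000 (8 KiB) is only 0x100 controller words.
print(dma_word_address(0x2000))  # -> 256
```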
oter has joined #litex
oter_ has joined #litex
oter_ has quit [Client Quit]
oter has quit [Client Quit]
Defferix has joined #litex
Defferix has quit [Client Quit]
Defferix has joined #litex
<joseng>
_florent_ Did some more testing. You are right, it is not an offset or address mismatch. It seems to be some kind of caching in/in front of the DMA Reader. When I write more than 8kB of data to the RAM (either by a BIOS command or by a Wishbone RemoteClient in Python), the data the DMA Reader outputs changes immediately. When I write smaller chunks of data, the output of the DMA Reader does not change. (Strangely, the output of the mem_read BIOS command reflects all RAM writes from the Python client immediately, and vice versa)
<joseng>
I see that there is the l2 cache when calling "add_sdram", and the default is 8kB
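The numbers line up with that default: 0x800 32-bit words is exactly 8 KiB, so writes smaller than the L2 cache can be absorbed by it and may not reach DRAM (where the DMA reader looks) until something forces a write-back. A quick check of the arithmetic:

```python
# The default L2 cache added by add_sdram is 8 KiB; the mystery
# threshold observed from the BIOS side was 0x800 32-bit words.
# Check that these are the same amount of data.
L2_CACHE_BYTES = 8192   # 8 KiB default
WORDS_32BIT = 0x800     # observed write size needed before the DMA
                        # reader saw the new data

print(WORDS_32BIT * 4)        # -> 8192, exactly the L2 cache size
print(L2_CACHE_BYTES // 4)    # -> 2048 == 0x800 32-bit words
```

This is consistent with the behavior described above (mem_read going through the cache sees updates immediately, while the DMA reader does not), though the log itself does not confirm the write-back mechanism in detail.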
<tcal>
kgugala -- https://github.com/enjoy-digital/litex/pull/774 caught my interest -- I had just been looking at the `add_debug(..)` routine in soc/cores/cpu/vexriscv/core.py. And I've used the debug interface before. What does the call to `soc.bus.add_slave(..)` add? Is it for information logging, to put more info in csr.csv?
<tcal>
Oops that should have been kgugala:
Defferix has quit [Quit: Connection closed]
mithro has quit [Write error: Connection reset by peer]
sorear has quit [Read error: Connection reset by peer]
sorear has joined #litex
david-sawatzke[m has quit [Ping timeout: 268 seconds]
sajattack[m] has quit [Ping timeout: 268 seconds]
jevinskie[m] has quit [Ping timeout: 268 seconds]
DerFetzer[m] has quit [Ping timeout: 260 seconds]
xobs has quit [Ping timeout: 260 seconds]
lambda has quit [Ping timeout: 260 seconds]
<joseng>
_florent_ Tested a design with l2_size=128 now, which sets the cache size down, and I need to write less data. Not quite only 128 32-bit words, but 140 words is enough to get the DMA reader to output 128 changed words (16 * 256 bit)
<joseng>
So something seems to be a bit odd with the caching?