_florent_ changed the topic of #litex to: LiteX FPGA SoC builder and Cores / Github : https://github.com/enjoy-digital, https://github.com/litex-hub / Logs: https://freenode.irclog.whitequark.org/litex
tpb has quit [Remote host closed the connection]
tpb has joined #litex
HoloIRCUser has joined #litex
HoloIRCUser2 has quit [Read error: Connection reset by peer]
HoloIRCUser1 has joined #litex
HoloIRCUser has quit [Ping timeout: 246 seconds]
Skip has quit [Remote host closed the connection]
Degi has quit [Ping timeout: 260 seconds]
Degi has joined #litex
<_florent_> dkozel: ok good, thanks for the feedback. For the speed, you should be able to go up to 13Gbps by changing the buffering configuration.
<_florent_> dkozel: for wishbone examples, you can look at:
<tpb> Title: liteeth/etherbone.py at master · enjoy-digital/liteeth · GitHub (at github.com)
<tpb> Title: litex/wishbone2csr.py at master · enjoy-digital/litex · GitHub (at github.com)
<_florent_> i was also planning to create a similar example for the wiki, i could speed this up
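As a quick orientation while that wiki example is pending, here is a minimal sketch of a hand-written Wishbone slave in LiteX/Migen; the module name and the scratch register are invented for illustration and are not taken from the files linked above:

    from migen import Module, Signal, If
    from litex.soc.interconnect import wishbone

    class WishboneScratch(Module):
        # Single 32-bit scratch register reachable over the Wishbone bus.
        def __init__(self):
            self.bus = bus = wishbone.Interface()
            storage = Signal(32)
            self.sync += [
                bus.ack.eq(0),
                # Classic single-cycle handshake: ack every access, latch writes.
                If(bus.cyc & bus.stb & ~bus.ack,
                    bus.ack.eq(1),
                    If(bus.we, storage.eq(bus.dat_w)),
                ),
            ]
            self.comb += bus.dat_r.eq(storage)

Such a slave could then be mapped into an SoC with add_wb_slave()/add_memory_region(), or reached from a host over Etherbone as in the first link.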
tcal has quit [Ping timeout: 240 seconds]
_tcal has quit [Ping timeout: 260 seconds]
HoloIRCUser has joined #litex
HoloIRCUser1 has quit [Read error: Connection reset by peer]
<_florent_> shuffle2: there is indeed some duplication between Migen/MiSoC and LiteX, since the projects have a common base (we were collaborating together) but took different directions. I put some effort in early on to keep MiSoC/LiteX from diverging too much, but it was complicated due to some disagreements (technical and human).
<scanakci> _florent_: I updated my LiteX to the recent version. When I use the --with-sdram option, neither VexRiscv nor BP starts executing the binary that I specify with --sdram-init. Both just come to the BIOS terminal and that's all. Do I need to change anything else?
<scanakci> The whole command is (./litex_sim.py --with-sdram --sdram-module=MT48LC16M16 --sdram-data-w --sdram-init=boot.bin.uart.simu --output-dir build/trial --cpu-type vexriscv --cpu-variant standard)
<_florent_> scanakci: indeed, in this case it's still using the BIOS, you can add this: https://github.com/enjoy-digital/litex/blob/master/litex/tools/litex_sim.py#L356
<tpb> Title: litex/litex_sim.py at master · enjoy-digital/litex · GitHub (at github.com)
<scanakci> I have that line in my litex_sim. I am using the most recent LiteX commit.
tcal has joined #litex
HoloIRCUser1 has joined #litex
tcal_ has joined #litex
<_florent_> but you will probably need to modify args.ram_init to args.sdram_init; i will have to check why --ram-init and --sdram-init behave differently.
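For anyone following along, a heavily hedged sketch of the kind of change being discussed; the soc_kwargs dict and the exact variable and keyword names inside litex_sim.py are assumptions and may differ:

    # Hypothetical sketch only, not the actual litex_sim.py code.
    # get_mem_data() is LiteX's helper to turn a binary into a memory init list;
    # its import path has moved between LiteX versions.
    from litex.soc.integration.common import get_mem_data

    if args.sdram_init is not None:
        # Preload the simulated SDRAM from --sdram-init (instead of args.ram_init),
        # so the CPU boots the binary directly rather than stopping at the BIOS prompt.
        soc_kwargs["sdram_init"] = get_mem_data(args.sdram_init, endianness="little")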
HoloIRCUser has quit [Ping timeout: 244 seconds]
<_florent_> scanakci: is it better with this?
<scanakci> VexRiscv attempted to boot the program rather than the BIOS.
<scanakci> BP is worse (not even printing the LiteX logo :) ). It's probably something on the BP side, though; I do not think it is related to LiteX
<scanakci> Thanks for the help.
HoloIRCUser has joined #litex
HoloIRCUser1 has quit [Ping timeout: 240 seconds]
HoloIRCUser1 has joined #litex
HoloIRCUser has quit [Ping timeout: 244 seconds]
<scanakci> Setting the sdram width to 16 actually got BP to print the logo and CPU info. It did not reach Liftoff, though.
<_florent_> For information, i just merged https://github.com/enjoy-digital/litex/pull/399, so LiteX will now use Python modules instead of git submodules. This will simplify installing external dependencies in the future (and will reduce installation size if no CPU or only some are used), but this also means that if you want to update LiteX, you will have to reinstall it following
<_florent_> https://github.com/enjoy-digital/litex/wiki/Installation, sorry for the inconvenience.
<tpb> Title: Home · enjoy-digital/litex Wiki · GitHub (at github.com)
<_florent_> scanakci: ok, maybe it's the same issue you have when testing on hardware. I would probably need to see the code to be able to help more.
HoloIRCUser has joined #litex
<scanakci> Actually, after waiting long enough, it started working. I got an assertion failure.
<scanakci> Something to debug more tomorrow
HoloIRCUser1 has quit [Ping timeout: 260 seconds]
<dkozel> _florent_: thanks for the pointers. Yes, I'd like to increase the speed. Are you talking about buffering on host, the FPGA, or both?
<dkozel> I'd like to help with the documentation for the example you make for the wiki. Looking at wishbone2csr I can see how short it is and pretend that I understand what it's doing, but there's so much assumed knowledge that I don't really understand it.
<dkozel> Not a problem! This is a new domain for me, I expect to have to do research and self-learn, but I'd like to leave some documentation behind as I do.
futarisIRCcloud has quit [Quit: Connection closed for inactivity]
<dkozel> Just been looking at the wiki! The outline looks excellent.
<_florent_> dkozel: any help is welcome for the wiki :)
<tpb> Title: aller: increase max_pending_request and buffering to increase DMA speed. · enjoy-digital/litepcie_aller_test@51b055b · GitHub (at github.com)
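For context on what the commit above touches, a hedged sketch of the LitePCIe buffering knobs involved, as they would appear inside a target's __init__; the parameter names and values here are best-effort and should be checked against litepcie itself:

    from litepcie.core import LitePCIeEndpoint
    from litepcie.frontend.dma import LitePCIeDMA

    # More outstanding read requests lets the DMA keep the PCIe link busy.
    self.submodules.pcie_endpoint = LitePCIeEndpoint(self.pcie_phy,
        max_pending_requests=8)
    # Deeper buffering between the DMA and the endpoint helps sustain throughput.
    self.submodules.pcie_dma = LitePCIeDMA(self.pcie_phy, self.pcie_endpoint,
        with_buffering=True, buffering_depth=1024)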
rohitksingh has quit [Quit: No Ping reply in 180 seconds.]
rohitksingh has joined #litex
<daveshah> _florent_: I'm going to play with the Alveo DDR4 this afternoon (I got a basic no-DRAM SoC working with Vivado)
<daveshah> What is the best way to create a SoC with main RAM and SDRAM, so I can serialboot new init code?
<_florent_> daveshah: cool, the best way is to add another ram and execute the code from there, i already have this and can prepare a skeleton for you
<zyp> which cpu would be the best bet if I wanted to try porting over some microcontroller code? (from arm cortex-m) is vexriscv the best supported one?
<zyp> targetting ecp5, so space shouldn't be an issue
<daveshah> yeah, vexriscv in its default config is probably roughly Cortex-M3 equivalent
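For reference, a hedged sketch of how that CPU choice looks when building a LiteX SoC; the platform object and clock are placeholders, not tied to any particular board:

    from litex.soc.integration.soc_core import SoCCore

    # "platform" stands in for any litex-boards ECP5 platform object.
    soc = SoCCore(platform, int(75e6),
        cpu_type="vexriscv", cpu_variant="standard",  # roughly Cortex-M3 class per the discussion
        integrated_rom_size=0x8000,
        integrated_sram_size=0x4000)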
<daveshah> _florent_: thanks, that would be great
<_florent_> daveshah: i can do it in ~1h
<daveshah> No worries, I've got some other issues to deal with first (like a DQS pinout discrepancy...)
Skip has joined #litex
<_florent_> daveshah: https://github.com/enjoy-digital/litedram_init_test, it's for the Arty but the only change you have to do in your target is adding: self.add_ram("firmware_ram", 0x20000000, 0x8000)
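A hedged sketch of where that add_ram() call would sit in a litex-boards style target; the surrounding class follows the usual pattern and is not copied from the repo above:

    from litex.soc.integration.soc_core import SoCCore
    from litex_boards.platforms import arty

    class BaseSoC(SoCCore):
        def __init__(self, sys_clk_freq=int(100e6), **kwargs):
            platform = arty.Platform()
            SoCCore.__init__(self, platform, sys_clk_freq, **kwargs)
            # Extra block RAM holding the serialboot-loaded init/test code, so it
            # runs from a known-good memory while the DRAM is still being brought up.
            self.add_ram("firmware_ram", origin=0x20000000, size=0x8000)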
<daveshah> _florent_: great, thanks
<daveshah> Just managed to get MPR reads and writes working on the command line so seems the hardware side is alive now
gregdavill has quit [Ping timeout: 240 seconds]
<daveshah> I'm not properly setting up the RDIMM register though, and memtest isn't working yet, so that is next
<tpb> Title: Snippet | IRCCloud (at www.irccloud.com)
<_florent_> ok, the write leveling is clearly not centered but the read leveling should work on at least some of the modules
<_florent_> what are you using for cmd_latency? from the last tests i did, i would recommend using cmd_latency=1
<_florent_> but it could be interesting to test with both 0 and 1.
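For readers wondering where cmd_latency goes, a hedged sketch using the UltraScale+ DDR4 PHY from LiteDRAM; the other constructor arguments are illustrative and may differ for the Alveo U250 port:

    from litedram.phy import usddrphy

    # cmd_latency is a PHY parameter; per the recommendation above, 1 is a good
    # starting point, but comparing 0 and 1 on real hardware is worthwhile.
    self.submodules.ddrphy = usddrphy.USPDDRPHY(platform.request("ddram"),
        memtype="DDR4",
        sys_clk_freq=sys_clk_freq,
        iodelay_clk_freq=500e6,
        cmd_latency=1)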
<daveshah> I was using 0, I will try 1 once the current build has finished. I guess the register might change the latency situation too?
bunnie has joined #litex
<daveshah> sorry, what was posted was with latency 1, latency 0 gave less useful results
<daveshah> I am now experimenting with register configuration, I think that is the problem now
<daveshah> Yay, latency=1 and some register init gets half the bytes working
<tpb> Title: Snippet | IRCCloud (at www.irccloud.com)
<daveshah> I think the failing half are related to the address/ba inversion that the register chip does
darren099 has joined #litex
<_florent_> daveshah: great that the first modules are working; the write leveling does not look good for the first modules though, and this could also be the reason
<daveshah> looks like it isn't the address inversion actually, that is the half that is working fine
<daveshah> if I set the register to PLL bypass mode, then write levelling passes for the first 3 bytes but all read levelling fails
<tpb> Title: Snippet | IRCCloud (at www.irccloud.com)
HoloIRCUser1 has joined #litex
HoloIRCUser has quit [Ping timeout: 246 seconds]
<dkozel> _florent_: I'll do what I can on the wiki. Reviewing existing text for clarity and adding references/crosslinks is probably the most useful thing I can do at the moment
<dkozel> Most/all of the ToDo pages are ones that I'm a prime candidate consumer for
<dkozel> I just rebuilt the Aller image and it didn't enumerate on PCIe this boot. Almost certainly some error in my setup. I'll debug
tcal__ has joined #litex
<daveshah> Hmm, by playing with a combination of cmd_latency (either 1 or 2) and the register PLL and latency settings, I'm able to get either the first 4 bytes or the last 4 bytes to pass read/write training but never both at the same time
<daveshah> Anyway, enough for me today, if anyone is curious the changes are at https://github.com/daveshah1/litedram/tree/alveo_u250, https://github.com/daveshah1/litex/tree/alveo_u250 and https://github.com/daveshah1/litex-boards/tree/alveo_u250 (the RDIMM related changes need to be made conditional still, and the RCD init code isn't included here)
<tpb> Title: GitHub - daveshah1/litex at alveo_u250 (at github.com)
CarlFK has quit [Quit: Leaving.]
rw1nkler has joined #litex
<zyp> I'm trying to instance a vexriscv with debug support, but I can't get it working properly, wishbone-tool gives me: ERROR [wishbone_tool] invalid configuration: GDB specified but no vexriscv address present in csv file
<zyp> the log from building the soc looks like this: https://paste.jvnv.net/view/bS7Oo and the csr.csv looks like this: https://paste.jvnv.net/view/d4QEc
<zyp> what am I missing?
<tpb> Title: JVnV Pastebin View paste – Untitled (at paste.jvnv.net)
<zyp> figured out I was missing register_mem("vexriscv_debug", …), so I'm getting closer :)
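For anyone hitting the same error, a hedged sketch of the missing piece; the debug bus attribute name and the base address are assumptions, so check them against your SoC and memory map:

    # Expose the VexRiscv debug bus so csr.csv gets a "vexriscv_debug" entry
    # that wishbone-tool's GDB bridge can find.
    self.register_mem(
        "vexriscv_debug",     # name that ends up in csr.csv
        0xf00f0000,           # example base address; pick a free spot in the memory map
        self.cpu.debug_bus,   # Wishbone slave exposed by the VexRiscv debug variant
        0x100)                # window size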
gregdavill has joined #litex
<dkozel> Ah yes. I did not recompile the kernel module after rebuilding the gateware, so the CSRs likely did not match or something similar.
<dkozel> The buffer changes gave an 11% increase in speed, to 9.29 Gbps.
<dkozel> I'll look into what might be the limiting factor on the host and test-code side.
CarlFK has joined #litex
rw1nkler has quit [Remote host closed the connection]
futarisIRCcloud has joined #litex
tcal__ has quit [Remote host closed the connection]