tpb has joined #litex
freemint has quit [Ping timeout: 245 seconds]
freemint has joined #litex
_whitelogger has joined #litex
freemint has quit [Ping timeout: 245 seconds]
freemint has joined #litex
freemint has quit [Ping timeout: 276 seconds]
rohitksingh has quit [Ping timeout: 250 seconds]
rohitksingh has joined #litex
rohitksingh has quit [Ping timeout: 245 seconds]
freemint has joined #litex
freemint has quit [Remote host closed the connection]
freemint has joined #litex
freemint has quit [Ping timeout: 245 seconds]
freemint has joined #litex
rohitksingh has joined #litex
rohitksingh has quit [Ping timeout: 240 seconds]
<somlo> the default Rocket Linux variant is set up with (nSets=4, nWays=1, nTLBs=4) for both the L1D and L1I caches, and per my earlier experiments connecting LiteDRAM directly to the cached-RAM AXI port, we get a 7% performance boost going from 64-bit to 128-bit data width, and a 1% boost going 128->256
<somlo> it was nagging at me that some of that is muddied by the presence and size of the cache, so I generated low-cache Rocket variants with (nSets=4, nWays=1, nTLBs=4)
rohitksingh has joined #litex
<somlo> MUCH worse performance overall, but going from 64 to 128-bit mem_axi data width got me 18% better performance, and going from 128->256 data width got me a 3% improvement
<somlo> so, more improvement from mem_axi width doubling in the absence of a large L1 cache
<somlo> _florent_, daveshah: ^^
<somlo> just wanted to make sure it's actually worth adding "wide" Rocket variants as a first choice, and keeping FSM-based data-width conversion as a backup plan only
ambro718 has joined #litex
<somlo> but with small caches, performance was really BAD in absolute terms, so it took a few days to get all the results :)
keesj has quit [Ping timeout: 268 seconds]
freemint has quit [Ping timeout: 245 seconds]
<scanakci> _florent_: I am a bit confused about --rom-init in terms of the expected file format. My understanding is that the mem.init file in the generated gateware folder contains the ROM contents. If I put my program into mem.init (each line a 32-bit number in hex format), I can simulate the program successfully. If I use --rom-init with the same file, it does not work.
<scanakci> For some reason, mem.init does not end up containing what I want to have in the ROM.
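(As described above, a mem.init file is just one 32-bit hex word per line. A minimal Python sketch of a parser for that layout; the blank-line handling and lack of an "0x" prefix are assumptions based on the description in the chat, not confirmed LiteX behavior:)

```python
def parse_mem_init(text):
    """Parse mem.init-style text: one bare 32-bit hex word per line."""
    words = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines (assumption)
        words.append(int(line, 16))  # each line is a bare hex number
    return words

example = "deadbeef\n00000013\n"
print(parse_mem_init(example))  # -> [0xdeadbeef, 0x00000013]
```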
freemint has joined #litex
<_florent_> somlo: thanks for the results, so do you think it's useful to use 128-bit data-width with the L1 cache?
<_florent_> scanakci: --rom-init should point to a binary file; the mem_x.init files are the memory initialization files for the generated Verilog
<_florent_> scanakci: but if you pass a binary file to --rom-init, you should get a mem_x.init file with the same content (just in hex format)
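(The binary-in, hex-out relationship _florent_ describes can be sketched as a small conversion. The 32-bit word size and little-endian byte order here are assumptions for illustration, not confirmed details of LiteX's converter:)

```python
import struct

def bin_to_mem_init(data: bytes) -> str:
    """Convert raw binary ROM contents to mem.init-style lines,
    one 32-bit hex word per line (little-endian words assumed)."""
    # pad to a whole number of 32-bit words
    data = data + b"\x00" * (-len(data) % 4)
    lines = []
    for i in range(0, len(data), 4):
        (word,) = struct.unpack("<I", data[i:i + 4])
        lines.append("{:08x}".format(word))
    return "\n".join(lines)

# e.g. a RISC-V `nop` (0x00000013) followed by 0xdeadbeef:
print(bin_to_mem_init(b"\x13\x00\x00\x00\xef\xbe\xad\xde"))
```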
<scanakci> thanks _florent_. I will try it soon.
<scanakci> I was able to generate a bitstream of LiteX+BlackParrot. As far as I can see, there is no --rom-init option in the genesys2.py file.
<scanakci> How am I supposed to load the LiteX ROM? I thought it is again read from a file during synthesis with readmemh.
CarlFK has quit [Ping timeout: 264 seconds]
<_florent_> indeed, it's not available from the command line, but I could add that
rohitksingh has quit [Ping timeout: 240 seconds]
rohitksingh has joined #litex
<scanakci> okay, using a binary file with --rom-init still did not give me the expected mem.init file. For now, I will manually update mem.init in the gateware folder for both simulation and FPGA.
ambro718 has quit [Quit: Konversation terminated!]
rohitksingh has quit [Ping timeout: 240 seconds]
CarlFK has joined #litex
somlo has quit [Ping timeout: 268 seconds]
somlo has joined #litex
tpb has quit [Remote host closed the connection]