<gregdavill>
_florent_: Weird that I'd never seen this startup behaviour before.
<gregdavill>
I've got a test setup here: if the memtest passes, it reboots; if it fails, it spin-waits.
<gregdavill>
Following the advice from the Lattice notes, I've altered the init clock domain (and POR) to be sourced from the internal OSCG. Results look good so far.
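(A minimal Migen sketch of that kind of change, with assumed names rather than gregdavill's actual code: the POR counter is clocked from the ECP5's internal OSCG oscillator, which is free-running right after configuration.)
```python
from migen import *

class _CRG(Module):
    def __init__(self):
        self.clock_domains.cd_por = ClockDomain(reset_less=True)

        # ECP5 internal oscillator: ~310 MHz / DIV; DIV=10 gives ~31 MHz.
        osc = Signal()
        self.specials += Instance("OSCG", p_DIV=10, o_OSC=osc)
        self.comb += self.cd_por.clk.eq(osc)

        # Power-on-reset counter, now independent of any external clock.
        por_count = Signal(16, reset=2**16 - 1)
        self.por_done = Signal()
        self.comb += self.por_done.eq(por_count == 0)
        self.sync.por += If(~self.por_done, por_count.eq(por_count - 1))
```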
<gregdavill>
I've run through ~100 test/reboot cycles without failing. But my tree is dirty. Later I'll need to sync everything up and validate again.
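(One way to count such test/reboot cycles from the host side; the serial device and BIOS console messages below are assumptions, not the actual setup:)
```python
import serial  # pyserial

# Assumed port and memtest messages; adjust to the actual setup.
port = serial.Serial("/dev/ttyUSB0", 115200, timeout=60)
passes = 0
while True:
    line = port.readline().decode(errors="replace")
    if "Memtest OK" in line:            # assumed pass message
        passes += 1
        print(f"pass #{passes}")        # board reboots itself on pass
    elif "Memtest KO" in line:          # assumed fail message
        print(f"FAILED after {passes} passes")
        break
```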
<benh>
somlo: I wouldn't expect it to be *that* slow unless it's going through the MMC layer for every bit...
<benh>
I am not that familiar with the MMC layer in Linux mind you
<benh>
perf is your friend to measure what's going on
<benh>
the linux SPI MMC driver is ... a bit dumb
<benh>
I would rather write a dedicated MMC driver for litesdcard :)
<benh>
esp if you want to do dual or quad
<gregdavill>
Turns out I'd only changed the POR counter to run from the OSCG (31 MHz). That seems to be enough to fix it. Weird indeed.
<_florent_>
gregdavill: thanks for looking at this, i also don't remember having this behavior before. I will test your fix and will try to understand
<gregdavill>
If I run the init clock domain from the OSCG too, it is not fixed.
<_florent_>
which Lattice note were you following? (just to also have a look)
<gregdavill>
TN-02035: ECP5 High-Speed I/O Interface. See Table 9.3 for the GDDR_SYNC soft IP (LiteDRAM implements a similar function with a timeline).
<gregdavill>
`SYNC_CLK: Startup clock. This cannot be the RX_CLK or divided version. It can be other low speed continuously running clock. For example, oscillator clock`
st-gourichon-fid has joined #litex
kgugala_ has joined #litex
kgugala has quit [Ping timeout: 264 seconds]
kgugala has joined #litex
kgugala_ has quit [Ping timeout: 256 seconds]
kgugala_ has joined #litex
kgugala has quit [Ping timeout: 264 seconds]
gregdavill has quit [Ping timeout: 240 seconds]
kgugala_ has quit [Read error: Connection reset by peer]
kgugala has joined #litex
st-gourichon-fid has quit [Ping timeout: 256 seconds]
gregdavill has joined #litex
<_florent_>
gregdavill: the issue seems related to the recent changes i did to the PHY (on the read/write control path + dqs), i'm looking at this
<gregdavill>
Maybe timing related? I've been able to run the same JSON netlist through nextpnr with different seeds: one resulting bitstream works, another fails. (Or maybe I've hit a different bug :/ )
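(A sketch of that kind of seed sweep; the file names and device flags are assumptions, and each placed-and-routed result is then packed and tested on hardware:)
```python
import subprocess

# Same JSON netlist every run; only the placer seed changes.
for seed in range(1, 11):
    subprocess.run([
        "nextpnr-ecp5",
        "--json",    "top.json",
        "--lpf",     "top.lpf",
        "--textcfg", f"top_seed{seed}.config",
        "--45k", "--package", "CABGA381",
        "--seed",    str(seed),
    ], check=True)
    # pack each top_seedN.config with ecppack, then test the bitstream
```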
st-gourichon-fid has joined #litex
scanakci has quit [Quit: Connection closed for inactivity]
gregdavill has quit [Ping timeout: 260 seconds]
jordigw has joined #litex
Skip has joined #litex
leons has quit [Quit: killed]
CarlFK[m] has quit [Quit: killed]
david-sawatzke[m has quit [Quit: killed]
john_k[m] has quit [Quit: killed]
xobs has quit [Quit: killed]
bunnie has quit [Quit: killed]
sajattack[m] has quit [Quit: killed]
nrossi has quit [Quit: killed]
disasm[m] has quit [Quit: killed]
sajattack[m] has joined #litex
Skip has quit [Remote host closed the connection]
xobs has joined #litex
disasm[m] has joined #litex
CarlFK[m] has joined #litex
nrossi has joined #litex
john_k[m] has joined #litex
leons has joined #litex
david-sawatzke[m has joined #litex
bunnie has joined #litex
<_florent_>
gregdavill: this should now be working. The issue is that the calibration can require a different bitslip value between initializations, and the changes i did on the Bitslip module were preventing that. I did a couple of other fixes on things i wanted to investigate and also updated the boards in litex-boards.
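(For illustration, a minimal bitslip module in Migen; this is a generic sketch, not necessarily LiteDRAM's actual Bitslip implementation:)
```python
from migen import *

class BitSlip(Module):
    def __init__(self, dw):
        self.i     = Signal(dw)      # input data word
        self.o     = Signal(dw)      # bit-slipped output word
        self.value = Signal(max=dw)  # current slip amount

        # Keep the previous word next to the current one, then select a
        # dw-bit window whose offset is the slip value.
        r = Signal(2*dw)
        self.sync += r.eq(Cat(r[dw:], self.i))
        self.comb += Case(self.value,
            {i: self.o.eq(r[i:dw+i]) for i in range(dw)})
```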
<_florent_>
next time i look at this i'll try to understand why we get different Bitslip results between initializations (we should be able to have consistent results), but enough of this for today :)
satnav has joined #litex
<satnav>
Can someone please advise me about the CSR bus and the Wishbone bus, for example in the context of connecting a UART core to a SoC? I mean, what's the difference between making the UART core a Wishbone slave whose code adds CSRs behind a wishbone2csr bridge, vs. just adding the UART as a CSR peripheral on the SoC's CSR bus?
bunnie has left #litex ["Kicked by @appservice-irc:matrix.org : Idle for 30+ days"]
<somlo>
satnav: the CSR bus is LiteX's way to auto-allocate MMIO addresses and auto-generate accessor methods for LiteX's own devices (part of the ecosystem, written in Migen, hosted in one of the lite* projects on GitHub under either enjoy-digital or litex-hub, etc.)
<somlo>
If you have your own UART (with a wishbone interface already exposed), then probably hooking it directly into the wishbone bus would make more sense
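(A sketch of that direct hookup; the base address, size, and names here are hypothetical:)
```python
from migen import *
from litex.soc.interconnect import wishbone

class MyWbUART(Module):
    def __init__(self):
        # The core already exposes a Wishbone slave port.
        self.bus = wishbone.Interface()
        # ... decode self.bus.adr and map reads/writes to the UART ...

# in the SoC's __init__ (address and size are assumptions):
#   self.submodules.myuart = MyWbUART()
#   self.add_wb_slave(0x90000000, self.myuart.bus)
#   self.add_memory_region("myuart", 0x90000000, 0x1000)
```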
<satnav>
How will the CPU interact with the UART core's registers? Should I use CSR?
<satnav>
Bottom line, I'm trying to understand how I should expose registers from a Wishbone slave core
<somlo>
satnav: if you are using the UART included with LiteX, it's already being generated to use CSR out of the box. If for some reason you have your own non-LiteX UART IP block, then you'll have to write some glue logic in migen to "connect" it to LiteX. In *that* case, it depends on what your IP block exposes in terms of an interface
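(A minimal sketch of that kind of glue logic; the external core's ports are hypothetical, while CSRStorage/CSRStatus and AutoCSR are the standard LiteX mechanism:)
```python
from migen import *
from litex.soc.interconnect.csr import AutoCSR, CSRStorage, CSRStatus

class MyUARTWrapper(Module, AutoCSR):
    def __init__(self, pads):
        # CSRs: addresses are auto-allocated, accessors auto-generated.
        self._tx   = CSRStorage(8)  # byte to transmit
        self._rx   = CSRStatus(8)   # last received byte
        self._busy = CSRStatus()    # transmitter busy flag

        # Glue to a hypothetical external (non-LiteX) UART core.
        self.specials += Instance("ext_uart",
            i_clk     = ClockSignal(),
            i_wr_data = self._tx.storage,
            i_wr_stb  = self._tx.re,   # strobes on CSR write
            o_rd_data = self._rx.status,
            o_busy    = self._busy.status,
            o_tx      = pads.tx,
            i_rx      = pads.rx,
        )

# in the SoC: self.submodules.uart2 = MyUARTWrapper(pads); self.add_csr("uart2")
```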
<satnav>
thanks a lot for the informative answers somlo
<satnav>
Just to make sure that I understood correctly: according to the example here - https://github.com/enjoy-digital/litex/blob/master/litex/boards/targets/simple.py - the LiteEthPHY is added as a slave to the Wishbone bus and also added to the CSR bus? Are they separate buses? Also, can I make a SoC without a Wishbone bus at all, just the CSR bus?
<somlo>
satnav: LiteETH has a more complex interface -- there's the PHY (exposed as CSR MMIO registers), there's the ethmac region (basically memory for TX/RX buffers), and an interrupt. But yeah, the configuration registers are set up as CSRs, as in that example you pointed out
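(Roughly the pattern from that simple.py example, simplified and with some arguments omitted: PHY control registers go on the CSR bus, the MAC's buffer SRAM is mapped as a Wishbone slave region, and the MAC gets an interrupt.)
```python
from liteeth.phy import LiteEthPHY
from liteeth.mac import LiteEthMAC

# inside the SoC's __init__ (as in the example; simplified):
#   self.submodules.ethphy = LiteEthPHY(platform.request("eth_clocks"),
#                                       platform.request("eth"))
#   self.add_csr("ethphy")
#   self.submodules.ethmac = LiteEthMAC(phy=self.ethphy, dw=32,
#                                       interface="wishbone")
#   self.add_wb_slave(self.mem_map["ethmac"], self.ethmac.bus)
#   self.add_memory_region("ethmac", self.mem_map["ethmac"], 0x2000)
#   self.add_csr("ethmac")
#   self.add_interrupt("ethmac")
```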
<satnav>
thanks somlo!
Skip has joined #litex
satnav has quit [Remote host closed the connection]
FFY00 has quit [Remote host closed the connection]