<vup>
@EmilJ you have to make it use the zynq soc platform using "-s Zynq"
<vup>
so in total something like `python3 src/experiments/hdmi_test.py -d Zybo -s Zynq -b` should work
<d1b2>
<EmilJ> shouldn't a "platform" be bound to the SoC?
<vup>
well we have platforms and soc's
<d1b2>
<EmilJ> I mean, you are defining resources on a platform with how they are connected to pins on the SoC, in the SoC's naming scheme
<vup>
the idea is that SoCs here define how CSRs can be read out
<vup>
it is possible that one might not want to use the Zynq SoC on a Zynq fpga but instead use jtag to read out the CSRs
<vup>
so we don't want to tightly couple a platform to a SoC type
<d1b2>
<EmilJ> I'm getting lost in this abstraction - the assumption here being that JTAG can access arbitrary addresses in processor system memory maps?
<vup>
no
<vup>
the gateware can define registers that can be read out externally
<vup>
how exactly those registers can be read out is defined by the chosen soc
<vup>
if you choose the zynq soc you can read them out using axi
<vup>
if you choose jtag, you can read them out over jtag (using some basic protocol we have defined for that)
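(A rough sketch of that split, with hypothetical names rather than the project's actual API: the gateware only defines what a register contains, and the readout transport is supplied by whichever SoC/CSR backend wraps the platform.)
```python
from nmigen import Elaboratable, Module, Signal

class FrameCounter(Elaboratable):
    """Sketch: gateware exposing one readable register.

    This module only defines the register contents; whether it is read
    back over AXI (Zynq SoC) or over a small JTAG protocol is decided by
    the SoC/CSR backend wrapping the platform, not here.
    """
    def __init__(self):
        self.csr_value = Signal(32)  # hypothetical CSR payload

    def elaborate(self, platform):
        m = Module()
        counter = Signal(32)
        m.d.sync += counter.eq(counter + 1)
        m.d.comb += self.csr_value.eq(counter)
        return m
```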
<vup>
now as you have found out, this is not very cleanly implemented right now and some things specifically depend on being used on a zynq, mainly because we haven't got to writing serdes and clocking / pll abstractions yet
<d1b2>
<EmilJ> so, platform selection decides not only what resources are on-board, but also what synthesis happens to create interfaces to your CSRs
<d1b2>
<EmilJ> doesn't that still mean that selecting a platform asserts platform pinout namespace?
<vup>
kind of, you select the interface to your CSRs using a separate switch in the cli (the -s switch), but internally this is represented by wrapping the "hardware platform" with a wrapper that implements the stuff used for the selected CSR interface
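(A minimal sketch of that wrapping idea, using made-up names rather than the actual classes in the repo: the wrapper delegates everything pin- and toolchain-related to the board platform and only contributes the CSR backend.)
```python
from nmigen import Module

class SocPlatformWrapper:
    """Hypothetical sketch of an "-s"-selected wrapper around a board platform.

    The board platform keeps defining pins and resources; this wrapper only
    adds whatever is needed to read the design's CSRs (AXI on a Zynq, or a
    small JTAG protocol).
    """
    def __init__(self, hw_platform, csr_bridge):
        self._hw_platform = hw_platform
        self._csr_bridge = csr_bridge  # e.g. an Elaboratable implementing the readout

    def __getattr__(self, name):
        # everything not overridden here (resources, toolchain, ...) is
        # answered by the wrapped board platform
        return getattr(self._hw_platform, name)

    def build(self, elaboratable, **kwargs):
        # combine the user design with the CSR bridge, then hand the result
        # to the real platform's build flow
        m = Module()
        m.submodules.design = elaboratable
        m.submodules.csr_bridge = self._csr_bridge
        return self._hw_platform.build(m, **kwargs)
```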
<vup>
what do you mean by that?
<d1b2>
<EmilJ> if I look at, say, MicroZedZ020Platform which inherits from Xilinx7SeriesPlatform, I see it has a Connector, which translates pins such as "W18". The pinout naming of different SoCs is not going to be typically the same, so selecting a platform is already bound to a SoC
<vup>
well the idea is that the pinout naming for compatible things should be compatible, so for example the ZyboPlatform has a resource named `hdmi` that is compatible with the resource generated by the `hdmi_plugin_connect` call in the hdmi_test.py file
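(For concreteness, a board-level hdmi resource in nmigen looks roughly like this; the pin locations below are invented, not the Zybo's real pinout.)
```python
from nmigen.build import Resource, Subsignal, DiffPairs, Attrs

# invented pin locations; the real ZyboPlatform uses the board's actual pins
hdmi = Resource("hdmi", 0,
    Subsignal("clk",  DiffPairs("H16", "H17", dir="o")),
    Subsignal("data", DiffPairs("D19 C20 B19", "D20 B20 A20", dir="o")),
    Attrs(IOSTANDARD="TMDS_33"),
)
```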
<vup>
also a specific fpga doesn't really have only one possible way of interfacing with the CSRs
<vup>
one might want to use jtag to read them out on a zynq for example (maybe because one has nothing actually running on the arm cores)
<d1b2>
<EmilJ> okay, that makes sense
<vup>
but yeah, as noted, this isn't all super clean right now and you actually have to use the Zynq CSR interface for the hdmi_test gateware, as the thing that provides access to the ps7 is not yet decoupled from the thing that implements the CSR interface
<d1b2>
<EmilJ> Hmm... so if I look into hdmi_plugin_connect, I see references to things named "lvds0_p" - in what namespace do those exist? Does this function expect an, uh, virtual Connector to rewire the hdmi module to real pins on the FPGA?
<vup>
if you take a look at the beta and the micro_r2 platform you can see calls to `gen_plugin_connector` which generates those
<vup>
basically it's just a header that different things can be connected to
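(In nmigen terms such a header is just a Connector; the pin names below are placeholders, not what `gen_plugin_connector` actually emits.)
```python
from nmigen.build import Connector, Resource, Subsignal, Pins

# placeholder FPGA pin names standing in for the real plugin header mapping
plugin_connector = Connector("plugin", 0, "A1 A2 A3 A4 A5 A6 A7 A8")

# resources can then name header pins instead of FPGA pins, which is where
# "lvds0_p"-style signals would be routed through the connector
plugin_resource = Resource("plugin_module", 0,
    Subsignal("lvds0_p", Pins("1", conn=("plugin", 0), dir="o")),
    Subsignal("lvds0_n", Pins("2", conn=("plugin", 0), dir="o")),
)
```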
<d1b2>
<EmilJ> Should that layer exist even on devices that don't have physical modules?
<d1b2>
<EmilJ> ooooh, so in beta_platform.py, the pin namespace is "enumerated pins on expansion connectors"
<vup>
I don't think that layer should exist on devices that have no physical modules
<vup>
So for example the Zybo has an hdmi connector and just has a resource that directly represents that
<d1b2>
<EmilJ> Oh, okay
<d1b2>
<EmilJ> I'll take another look at it tomorrow. It's almost 3am here
<vup>
well same :)
<lkcl>
erm... *puzzled*. how do you write simulations that involve 2 different clock domains?
* lkcl
never had to consider this before
<lkcl>
i have a JTAG module to write, and it's self-clocked
<lkcl>
however it connects to a different piece of logic that has a standard sync domain
<lkcl>
but the "join" will obviously be via an AsyncFFSynchroniser
<lkcl>
... so how the heck... ahhh, "add_clock" can take a domain argument, which defaults to "sync"
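(A minimal two-domain simulation sketch along those lines, as a generic example rather than lkcl's JTAG module: a second clock domain is declared in the design, an FFSynchronizer carries a bit into sync, and add_clock is called once per domain.)
```python
from nmigen import Elaboratable, Module, Signal, ClockDomain
from nmigen.lib.cdc import FFSynchronizer
from nmigen.sim import Simulator   # nmigen.back.pysim on older versions

class TwoDomains(Elaboratable):
    def __init__(self):
        self.jtag_bit = Signal()   # driven from the "jtag" side
        self.sync_bit = Signal()   # observed from the "sync" side

    def elaborate(self, platform):
        m = Module()
        m.domains += ClockDomain("jtag")   # second, independently clocked domain
        # two-FF synchronizer carrying the bit into the sync domain
        m.submodules += FFSynchronizer(self.jtag_bit, self.sync_bit, o_domain="sync")
        return m

dut = TwoDomains()
sim = Simulator(dut)
sim.add_clock(1e-6, domain="sync")   # add_clock defaults to "sync"...
sim.add_clock(3e-7, domain="jtag")   # ...but can target any declared domain

def drive():
    yield dut.jtag_bit.eq(1)
    for _ in range(8):
        yield                        # advance one "jtag" clock per yield

sim.add_sync_process(drive, domain="jtag")
with sim.write_vcd("two_domains.vcd"):
    sim.run()
```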