sf-slack has quit [Read error: Connection reset by peer]
OmniMancer has joined #symbiflow
<bunnie[m]>
<Xiretza "mithro: I was asking less about "> I'm not officially affiliated with prjxray but if Xilinx came after me for the work I did documenting the bitstream encryption format and fuses, I'd view it as an opportunity to have an adult conversation with their lawyers about why they are wrong, and if they still disagree I would treat it as a solid opportunity to set a favorable legal precedent for future work like this;
<bunnie[m]>
and if that doesn't pan out, at least it will make a great awareness campaign for the need for future legal reforms to protect our rights and freedoms to learn and innovate.
futarisIRCcloud has quit [Quit: Connection closed for inactivity]
<Xiretza>
daveshah: I think xilinx/pack_clocking_xc7.cc:39,40 should be {MMCME2,PLLE2}_BASE instead of _BASIC, no? with that changed it seems to work correctly, anyway :)
<Xiretza>
also, I finally got to the point where my SoC gets through ghdl and yosys, but I end up with 10 global clocks - 6 of them are real, the others have cryptic clkbufmap.cc names, gotta try to find those.
<ZirconiumX>
Yosys' naming is really not the best
<daveshah>
Note that (with router2 anyway) both global buffer inputs and outputs will be shown as global clocks during routing - as both should be using dedicated resources
<daveshah>
I'll have a look at the PLL naming issue later
Bertl is now known as Bertl_oO
<daveshah>
6 clock domains still seems like quite a few for the current state of nextpnr-xilinx, but it would make a useful benchmark for future development anyway
<Xiretza>
daveshah: the Arty input clock, a core/bus clock, two graphics clocks (pixel+TMDS), two UART clocks (18.432 MHz from PLL which then gets divided down to RCLK according to UART registers)
<daveshah>
That doesn't seem too bad - but I'd clock all the UART stuff off a single clock with enable rather than two UART clocks
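A minimal sketch of the single-clock-plus-enable approach daveshah suggests, written in VHDL since the SoC goes through ghdl; the entity, port names, and divisor value are assumptions for illustration, not taken from the actual design:
```vhdl
-- Hypothetical sketch: generate a one-cycle enable strobe at roughly the 16x
-- baud rate from the system clock, instead of deriving a second UART clock
-- from the PLL. The UART logic then advances only when ce_16x = '1'.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity baud_ce_gen is
    generic (
        DIVISOR : positive := 54  -- assumed: ~100 MHz system clock / 54 ~= 1.85 MHz
    );
    port (
        clk    : in  std_logic;   -- single system clock
        ce_16x : out std_logic    -- one-cycle-wide enable strobe
    );
end entity;

architecture rtl of baud_ce_gen is
    signal cnt : unsigned(7 downto 0) := (others => '0');
begin
    process (clk)
    begin
        if rising_edge(clk) then
            ce_16x <= '0';
            if cnt = DIVISOR - 1 then
                cnt    <= (others => '0');
                ce_16x <= '1';    -- strobe for exactly one clk cycle
            else
                cnt <= cnt + 1;
            end if;
        end if;
    end process;
end architecture;
```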
<daveshah>
I have a LiteX design with input, DDR3 memory, system, and ethernet clocks that works alright so it's not outside the realms of what may work
<Xiretza>
daveshah: yeah, I need to rewrite the UART anyway, it was supposed to be a reproduction of the 16550, but just became a mess
<daveshah>
It may well be that once you sort out the UART it will route fine
<daveshah>
Xiretza: btw, what is the arc that is failing to route?
<Xiretza>
daveshah: 'Failed to route arc 82 of net 'io.clk_pixel', from SITEWIRE/BUFGCTRL_X0Y10/O to SITEWIRE/SLICE_X6Y92/CLKINV_OUT.'
<daveshah>
Interesting, can you provide a JSON/XDC? I think what happens is that the clock placer chooses global buffer sites that conflict sometimes and it would be good to fix this
<daveshah>
Pushed the PLL name fix; looking at the BUFG routing now. I also note that TMDS_33 IOs aren't supported yet, but they might be possible if they don't require any extra config bits
<Xiretza>
daveshah: ah, those are actually usually routed through OBUFDS, but I commented those out since they're not supported yet
<daveshah>
I think they should be supported
<daveshah>
The problem is specifically the TMDS_33 IO type
<Xiretza>
oh okay, I think I remember nextpnr saying it doesn't have any OBUFDS sites
<daveshah>
I might be able to patch that though
<Xiretza>
daveshah: FYI, there's some more mentions of _BASIC in pack_clocking_xcup.cc and README.md.
<daveshah>
Thanks - btw I think the clocking issue is SRL related. You might be able to get further with -nosrl (generally SRL support is still experimental)
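For reference, -nosrl is an option of Yosys' synth_xilinx pass; a minimal sketch of invoking it (file and top-module names are placeholders, and any other flow options are elided):
```sh
# Assumed file/module names; -nosrl disables SRL inference, which is still experimental.
yosys -p "synth_xilinx -flatten -nosrl -top top; write_json top.json" top.v
```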
<daveshah>
Xiretza: clock routing issue is now fixed in router2 upstream
futarisIRCcloud has quit [Quit: Connection closed for inactivity]
<Xiretza>
daveshah: awesome, thanks! I'll try that in a bit.
tiwEllien has joined #symbiflow
<Xiretza>
daveshah: good news: the whole flow is working now! however, it still thinks some of my timings aren't met - is there a way to specify target frequencies per clock net? `create_clock` in the XDC isn't recognized (yet).
<daveshah>
No, not currently
<daveshah>
The timing data is still a bit ropey so I wouldn't read too much into it anyway
<Xiretza>
alright, I'll just ignore it then
tiwEllien has quit [Ping timeout: 265 seconds]
tiwEllien has joined #symbiflow
OmniMancer has quit [Quit: Leaving.]
<mithro>
Xiretza: FYI - The vpr flow supports timing + sdc but that is only Series 7
<daveshah>
This is Series 7 (Artix-7)
<daveshah>
and yes, the VPR flow is probably the better choice if you care about timing atm
tiwEllien has quit [Ping timeout: 258 seconds]
<Xiretza>
mithro: VPR = Verilog to Routing? I guess that would work; I can use yosys to convert VHDL to Verilog
<mithro>
Xiretza: are you working on microwatt or some other VHDL?
<Xiretza>
mithro: nope, my own, school project
<mithro>
Xiretza: be warned that VPR is as heavy as Vivado rather than nice and light like nextpnr at the moment
<daveshah>
You still use Yosys for synthesis
<Xiretza>
mithro: alright - maybe I'll have a look at it in the future, but I don't really need accurate timing; I'm mostly focused on getting stuff to work at the moment.
<mithro>
tmichalak: In those diagrams you shared around placement - is there a way to "color" the resource usage in a way?
freemint has quit [Ping timeout: 245 seconds]
freemint has joined #symbiflow
<daveshah>
Xiretza: in any case, nextpnr-xilinx now has basic xdc create_clock support
<Xiretza>
daveshah: wow, thanks!
<daveshah>
btw, it's the post-route timing numbers that matter - the pre-route numbers often underestimate Fmax by a factor of 2 or so; this is a known issue
<daveshah>
propagation of constraints across BUFGs and PLLs isn't implemented yet either, so you'll probably want to constrain with get_nets rather than get_ports
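A minimal XDC sketch along those lines; the periods are placeholder values, the clk_sys net name is assumed, and only basic create_clock syntax is assumed to be parsed:
```tcl
# Constrain the clock nets directly (get_nets) rather than the input ports,
# since constraints aren't propagated across BUFGs/PLLs yet.
# Periods (in ns) are placeholders, not the design's real targets.
create_clock -period 10.0 [get_nets clk_sys]
create_clock -period 13.5 [get_nets io.clk_pixel]
```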
<Xiretza>
yeah, I've noticed that the pre-route Fmax numbers are way lower - just makes post-route feel so much better ;)
<sf-slack1>
<tmichalak> mithro: do you mean coloring it in a way similar to what you did in the google doc I shared today?
<mithro>
tmichalak: Yeah, wondering if there is a way to color the litedram / vexriscv / etc parts and see if my random guesses are correct
<mithro>
tmichalak: I'm only like 30% confident
<sf-slack1>
<tmichalak> mithro: I will look into that
tiwEllien has joined #symbiflow
citypw has quit [Ping timeout: 260 seconds]
<hackerfoo>
litghost: I tried `--router_high_fanout_threshold 64`, and routing picosoc on the full 50T wasn't any slower. Should we enable this?
<litghost>
Are there even nets with a fanout of 64?
<hackerfoo>
Yes. The highest fanout is 993, according to route_profiling output.
<ZirconiumX>
...Are you sure that's not just, say, the global clock?
<daveshah>
There's also probably a reset
<hackerfoo>
Maybe, but I also have fanouts of 64, 65, 68, 77, 82, 83, 85-87, 153, 160, 397 & 543
<litghost>
hackerfoo: If the runtime is the same or less, and the critical path is the same or better, sure
<hackerfoo>
Okay. I'll make a PR to see how it performs on CI.
<litghost>
hackerfoo: Yep
tiwEllien has quit [Ping timeout: 268 seconds]
tiwEllien has joined #symbiflow
tiwEllien has quit [Ping timeout: 268 seconds]
tiwEllien has joined #symbiflow
proteus-guy has quit [Ping timeout: 268 seconds]
<hackerfoo>
Seems like VPR fails to route picosoc on ice40 without the flag. I'm rerunning the test locally to see if I can figure out why, but it might have to be set for that architecture.
<litghost>
hackerfoo: I'm fine with setting that on a per arch basis
<hackerfoo>
`make -j picosoc_bin` (ice40) fails locally. VPR spent ~120 s each on the 731- and 3248-fanout nets.
<litghost>
With `--router_high_fanout_threshold`?
<hackerfoo>
With the flag removed. The default is 64.
<litghost>
Ya
tiwEllien has quit [Ping timeout: 268 seconds]
freeemint has joined #symbiflow
freemint has quit [Ping timeout: 248 seconds]
freeemint has quit [Remote host closed the connection]
freeemint has joined #symbiflow
freeemint has quit [Remote host closed the connection]
freeemint has joined #symbiflow
freeemint has quit [Remote host closed the connection]
freemint has joined #symbiflow
<hackerfoo>
Only a few seconds with it set back to `-1`, and it routes. So it will need to remain set to `-1` for ice40.
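A sketch of what that per-architecture setting could look like on the VPR command line; the architecture and circuit file names are placeholders and the rest of the flow's options are elided:
```sh
# ice40: keep high-fanout routing special-casing disabled (-1), since picosoc
# fails to route with the default threshold of 64.
vpr ice40_arch.xml picosoc.eblif --route --router_high_fanout_threshold -1
```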
freemint has quit [Remote host closed the connection]