<tpb>
Title: WIP: Add Base Litex minitest capable of running Linux on Vexriscv by tmichalak · Pull Request #1320 · SymbiFlow/symbiflow-arch-defs · GitHub (at github.com)
<sf-slack>
<acomodi> @litghost Exactly, so, using the same clock for CLKB and CLK in the ISERDESes seems to have solved the problem. That said, both Vivado and VPR report timing violations, even though the design works on HW
<litghost>
acomodi: How large is the violation? A ~0.5 ns setup violation is probably fine, but a 2-4 ns setup violation or a 1-2 ns hold violation likely eats away most of the timing model's margin
<sf-slack>
<acomodi> litghost: regarding VPR, there is only a setup violation of -3.776 ns
<litghost>
acomodi: Yikes, that is pretty substantial. Vivado will likely report an even larger setup violation, I'd guess ~4 ns. With a setup violation of that magnitude against a target CPD of 10 ns, I'd expect that bitstream to fail on a measurable number of parts
<sf-slack>
<acomodi> litghost: and a worst setup violation of -8.081 ns
<litghost>
acomodi: For now I recommend we leave the SDC at 50 MHz, and create an issue to track getting timing closure at 100 MHz for future work
<sf-slack>
<acomodi> litghost: the interesting part is that the implementation behaves as it should on HW
<sf-slack>
<acomodi> litghost: sure, I can revert to 50 MHz
<litghost>
acomodi: Of course. The timing model is typically very conservative so that it works across the full distribution of parts that come out of manufacturing. However, if you were to deploy that same bitstream over 100 to 1000 parts, I'd expect a non-trivial number of them to fail
<litghost>
acomodi: I don't have a good intuition on the histogram of part quality.
<sf-slack>
<acomodi> litghost: makes sense. Well, once we get everything stable and merged, we can focus on further improving the timing model, as suggested
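For context on the revert above: a worst setup slack of -8.081 ns against the 10 ns (100 MHz) target implies a modeled critical path of roughly 18.1 ns, i.e. a maximum clock of about 55 MHz, so a 20 ns (50 MHz) period closes timing with margin. In standard SDC that is a one-line constraint; the clock port name `clk` below is hypothetical:

```
# 50 MHz clock: the 20 ns period comfortably covers the ~18.1 ns
# critical path implied by the -8.081 ns worst slack at 100 MHz.
create_clock -period 20.0 [get_ports clk]
```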
space_zealot has quit [Remote host closed the connection]
space_zealot has joined #symbiflow
space_zealot has quit [Remote host closed the connection]
space_zealot has joined #symbiflow
adjtm has joined #symbiflow
<hackerfoo>
There are far too few assertions in VPR.
<litghost>
hackerfoo: VPR provides a way to have varying levels of assertions, and the upstream maintainers will likely accept PRs to that effect
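For reference, VTR's `vtr_assert.h` provides leveled assertion macros (`VTR_ASSERT`, `VTR_ASSERT_SAFE`, etc., gated by `VTR_ASSERT_LEVEL`); a minimal sketch of guarding an invariant this way, where the function and the checked container are illustrative rather than actual VPR code:

```
#include <algorithm>
#include <vector>

#include "vtr_assert.h"  // VTR's leveled assertion macros

// Illustrative invariant check, not actual VPR code.
void check_ptcs_sorted(const std::vector<int>& ptcs) {
    // Cheap check: active at the default VTR_ASSERT_LEVEL.
    VTR_ASSERT(!ptcs.empty());
    // Expensive whole-container scan: only compiled in when
    // VTR_ASSERT_LEVEL is raised high enough to enable VTR_ASSERT_SAFE.
    VTR_ASSERT_SAFE(std::is_sorted(ptcs.begin(), ptcs.end()));
}
```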
<hackerfoo>
I found at least one bug in the serializer that requires PTCs to be ordered.
<litghost>
hackerfoo: Ya that code is terrible, and has been terrible since forever
<hackerfoo>
But segment order also seems to be assumed, so I have to fix that.
<litghost>
hackerfoo: PTCs are poorly documented, and aren't used often either :/
<litghost>
hackerfoo: Segment order should be solely determined by the id attribute?
<litghost>
hackerfoo: When generated it is implicit, but the serializer should always be keyed on "id"
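A sketch of the keying litghost describes: rather than assuming segments arrive in file order, the deserializer can index them by the "id" attribute explicitly. All names here are hypothetical, not VPR's actual serializer API:

```
#include <map>
#include <string>
#include <vector>

// Hypothetical segment record as parsed from the rr graph file.
struct Segment {
    int id;  // the "id" attribute of the <segment> element
    std::string name;
};

// Key segments on their id attribute instead of file order, so a
// permuted input still produces the same ordered table.
std::vector<Segment> index_by_id(const std::vector<Segment>& parsed) {
    std::map<int, Segment> by_id;
    for (const auto& seg : parsed) {
        by_id[seg.id] = seg;
    }
    std::vector<Segment> ordered;
    ordered.reserve(by_id.size());
    for (const auto& entry : by_id) {
        ordered.push_back(entry.second);
    }
    return ordered;
}
```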
<hackerfoo>
Something's breaking the segment_map_ in connection_box_lookahead, at least.
<litghost>
hackerfoo: An inconsistent lookahead binary could have different segment definitions from the rrgraph segments. But the rrgraph segments should be fixed once the virtual rr graph is generated
<litghost>
hackerfoo: And the crude caching system we have should detect a change in the input patched rrgraph, including segment definitions, and recompute the lookahead
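The cache-invalidation idea reduces to storing a digest of the patched rr graph next to the computed lookahead and recomputing on mismatch; a minimal sketch, with all names hypothetical:

```
#include <fstream>
#include <functional>
#include <sstream>
#include <string>

// Hypothetical validity check: hash the patched rr graph file and
// compare it with the digest recorded when the lookahead was built.
// Any change to the file, including segment definitions, misses the
// cache and forces the lookahead to be recomputed.
bool lookahead_cache_valid(const std::string& rrgraph_path,
                           size_t cached_digest) {
    std::ifstream in(rrgraph_path, std::ios::binary);
    std::stringstream contents;
    contents << in.rdbuf();
    return std::hash<std::string>{}(contents.str()) == cached_digest;
}
```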
<hackerfoo>
Thanks. I'll look into that.
phire has quit [*.net *.split]
phire has joined #symbiflow
<hackerfoo>
Recomputing the segment map takes 0.01 seconds. I'm removing it from the lookahead bin file.
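Given the ~0.01 s rebuild cost, reconstructing the map at load time instead of storing it is reasonable; a hypothetical sketch of that load path, with types as stand-ins for VPR's actual ones:

```
#include <map>
#include <vector>

// Hypothetical: rebuild the segment map from the rr graph's segment
// ids at load time rather than deserializing it from the bin file.
std::map<int, int> rebuild_segment_map(const std::vector<int>& segment_ids) {
    std::map<int, int> segment_map;
    for (size_t i = 0; i < segment_ids.size(); ++i) {
        segment_map[segment_ids[i]] = static_cast<int>(i);
    }
    return segment_map;
}
```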
<litghost>
ok
benelson has joined #symbiflow
az0re has quit [Ping timeout: 240 seconds]
<_whitenotifier-3>
[yosys-symbiflow-plugins] litghost opened issue #8: Assertion violation in XDC plugin - https://git.io/JvBR0