sf-slack has quit [Remote host closed the connection]
adjtm_ has quit [Remote host closed the connection]
adjtm has joined #symbiflow
futarisIRCcloud has joined #symbiflow
Bertl_oO is now known as Bertl_zZ
nrossi has joined #symbiflow
futarisIRCcloud has quit [Quit: Connection closed for inactivity]
futarisIRCcloud has joined #symbiflow
OmniMancer has joined #symbiflow
GuzTech has joined #symbiflow
futarisIRCcloud has quit [Quit: Connection closed for inactivity]
proteusguy has quit [Ping timeout: 250 seconds]
Bertl_zZ is now known as Bertl
<sf-slack2>
<acomodi> litghost, mithro: when you have some time, could you look at this https://github.com/SymbiFlow/vtr-verilog-to-routing/pull/36 on tile equivalence? Before proceeding with the other placer changes it would be good to check whether I am going in the right direction
<sf-slack2>
<mkurc> @litghost I created a tool to traverse the rr graph which makes it possible to verify whether a connection between two nodes exists. I verified that a connection is possible between the SOURCE in the BLK_SY-GND and a CLB tile, even though VPR throws an error. I checked using the node indices included in the error message.
<sf-slack2>
<mkurc> The tool only cares about node indices and edge-to-node indices. It does not check channel locations and directions, nor IPIN and OPIN directions.
<sf-slack2>
<mkurc> BTW the graph traversal tool is still a WIP. I will be able to push it tomorrow.
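A minimal sketch of the kind of reachability check described above (an assumption about how such a tool could look, not the actual WIP tool), assuming the standard VPR rr_graph XML format where edges carry src_node and sink_node indices:

    import sys
    import xml.etree.ElementTree as ET
    from collections import defaultdict, deque

    def load_edges(rr_graph_xml):
        # Build an adjacency list keyed by node index, ignoring channel
        # locations and pin directions, as described above.
        adj = defaultdict(list)
        for edge in ET.parse(rr_graph_xml).getroot().iter("edge"):
            adj[int(edge.get("src_node"))].append(int(edge.get("sink_node")))
        return adj

    def reachable(adj, src, sink):
        # Breadth-first search from src; True if sink can be reached.
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == sink:
                return True
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    if __name__ == "__main__":
        # Usage: check_reachable.py rr_graph.xml <src_node> <sink_node>
        graph, src, sink = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])
        print(reachable(load_edges(graph), src, sink))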
<litghost>
mkurc: Okay, that matches what happened when I first connected the constant network
<tpb>
Title: Routing failure on global channel structure · Issue #520 · verilog-to-routing/vtr-verilog-to-routing · GitHub (at github.com)
<litghost>
mkurc: kem_ did submit a fix for the constant routing that was required, but it's unclear if that is what is going on
<litghost>
mkurc: All you did was shift the entire grid, yes?
<litghost>
Weird :|
<sf-slack2>
<mkurc> @litghost Yes, what I did was shift the grid 2 tiles to the right.
<litghost>
mkurc: It's very confusing why that would break things, we must be missing something
<litghost>
mkurc: The detailed routing output might provide a hint
<litghost>
mkurc: I might finish up the FF CE/SR changes today, so I can dig into what is going on with your PR. Any new commits on the PR that haven't been pushed?
<sf-slack2>
<mkurc> Though I am not 100% sure whether the grid shift implementation is correct. Let's say I am 99.9% sure that it is correct.
<sf-slack2>
<mkurc> I will try kem's suggestion
<sf-slack2>
<mkurc> Actually, what is weird is that when I shift the grid to the east I get an error, but when I shift it to the north I get no error...
<sf-slack2>
<mkurc> @litghost I didn't push anything new
<litghost>
mkurc: Definitely take a look at the detailed routing output, something must be different :(
GuzTech has quit [Remote host closed the connection]
<litghost>
hackerfoo: And if you are correct that there isn't a reason to use it, then there is no need to add back RAM32X1D synthesis to Yosys
<litghost>
hackerfoo: However there is no reason not to support the primitive, in case someone uses one in their design
<hackerfoo>
I'm guessing that it makes routing easier by not connecting those two address pins.
<litghost>
hackerfoo: Another option to consider is to tech map the RAM32X1D to the RAM64X1D and let yosys further techmap down to our VPR primitives
<litghost>
hackerfoo: Sure, but yosys would've tied the pins to VCC anyway, and VCC ties on those lines are "free" and the default state of the hardware
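A small illustrative model of the mapping idea above (the class names are hypothetical and this is not the project's techmap): a RAM32X1D behaves like a RAM64X1D whose top address bit is tied to a constant, so the tie-high costs nothing:

    class Ram64x1d:
        # Behavioural model of a 64x1 dual-port LUT RAM: one read/write
        # port plus one read-only port (ports collapsed here for brevity).
        def __init__(self):
            self.mem = [0] * 64

        def write(self, a, d):
            self.mem[a & 0x3F] = d & 1

        def read(self, a):
            return self.mem[a & 0x3F]

    class Ram32x1dOnRam64(Ram64x1d):
        # RAM32X1D emulated on the RAM64X1D by tying the top address bit
        # (A5) high -- the "free" VCC tie mentioned above.
        A5 = 1

        def write(self, a, d):
            super().write((self.A5 << 5) | (a & 0x1F), d)

        def read(self, a):
            return super().read((self.A5 << 5) | (a & 0x1F))

    if __name__ == "__main__":
        ram = Ram32x1dOnRam64()
        ram.write(7, 1)
        assert ram.read(7) == 1
        assert ram.read(6) == 0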
<hackerfoo>
That must be interpreted incorrectly, or BI is being enabled implicitly?
<litghost>
Hold on a minute
<litghost>
That FASM output only has 2 RAM instances
<litghost>
Is the RAM32X1D located in just one LUT?
<litghost>
With O5 as DPO/SPO and O6 as the other?
<litghost>
That is a significant difference
<litghost>
And not what I modelled
<hackerfoo>
That seems to contradict the datasheet.
<hackerfoo>
But there's not much information on 32X1D. It just says it takes 2 LUTs.
<litghost>
hackerfoo: Okay. Check the routing resources display on the Vivado output and see what the actual placement is
<litghost>
hackerfoo: E.g. where are the SPO and DPO lines?
<elms>
mithro: I discovered those tests when I started looking at reorganizing the code. Currently I think CI only runs on the top-level utils/
<mithro>
elms: Yeah
<mithro>
hackerfoo: So I started thinking about the idea of having a "library" of flipflop objects in symbiflow-arch-defs which we map the Xilinx primitives to
<hackerfoo>
Both RAM32X1Ds seem to map to those same two LUTs.
<litghost>
hackerfoo: Which lines are the SPO and DPO?
<litghost>
hackerfoo: I think what is happening is that RAM + SMALL makes two RAM32 instances per LUT
<litghost>
hackerfoo: You were asking why RAM32X1D exists; well, you can pack 2 RAM32X1Ds per LUT pair
<litghost>
hackerfoo: Assuming I'm reading what is happening correctly
<litghost>
hackerfoo: The key is thinking about a LUT6 as really just two LUT5s with a mux at the output
<hackerfoo>
Ah, so 1 takes 2 LUTs, but you can fit a second one in for free?
<litghost>
hackerfoo: That's the theory
<litghost>
hackerfoo: If that is correct, then the two outputs on the top LUT should be inst1.SPO and inst2.SPO
<litghost>
hackerfoo: Check out LUT6_2 in the 7-series library
<litghost>
hackerfoo: It has a diagram that is pretty useful
<litghost>
hackerfoo: If I5 is tied high, then the two LUT5 instances can be considered independent
<hackerfoo>
SPOs are O5/6 from DLUT, and DPOs are from CLUT.
<litghost>
hackerfoo: Yep, that matches the theory
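A hedged sketch of the fractured-LUT picture discussed above (illustrative only, not the project's model): a LUT6 evaluated as two LUT5 halves sharing I0..I4, with I5 selecting which half drives O6, so with I5 tied high O5 and O6 expose two independent LUT5 functions:

    def lut5(init32, addr5):
        # Evaluate a 32-bit LUT5 INIT mask at a 5-bit address.
        return (init32 >> (addr5 & 0x1F)) & 1

    def lut6(init64, addr6):
        # A LUT6 viewed as two LUT5 halves plus an I5-controlled mux:
        # O5 is always the low half, O6 is the half selected by I5.
        low = init64 & 0xFFFFFFFF
        high = (init64 >> 32) & 0xFFFFFFFF
        addr5 = addr6 & 0x1F
        i5 = (addr6 >> 5) & 1
        o5 = lut5(low, addr5)
        o6 = lut5(high, addr5) if i5 else lut5(low, addr5)
        return o5, o6

    if __name__ == "__main__":
        # With I5 tied high, O6 reads the high half and O5 the low half,
        # i.e. two independent LUT5 outputs from one physical LUT6.
        init = (0xDEADBEEF << 32) | 0x12345678
        for addr in range(32):
            o5, o6 = lut6(init, (1 << 5) | addr)
            assert o5 == lut5(0x12345678, addr)
            assert o6 == lut5(0xDEADBEEF, addr)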
tpb has quit [Remote host closed the connection]
tpb has joined #symbiflow
<hackerfoo>
None of this explains why 32X1D doesn't work in VPR, right? I think I need to look at the routing.
<litghost>
hackerfoo: True, VPR should be able to pack 1 of them (instead of both)
<litghost>
hackerfoo: Check which address lines were being used
<litghost>
hackerfoo: The high bit should be unused
<litghost>
mkurc: I've got top_bram_n2 simulating in Vivado, and I'm starting to debug it. I can see that fsm_pulse_cnt isn't incrementing, but debugging is going pretty slowly