_whitelogger has joined #symbiflow
citypw has joined #symbiflow
citypw has quit [Remote host closed the connection]
citypw has joined #symbiflow
citypw has quit [Excess Flood]
citypw has joined #symbiflow
citypw has quit [Ping timeout: 245 seconds]
citypw has joined #symbiflow
OmniMancer has joined #symbiflow
_whitelogger has joined #symbiflow
citypw has quit [Ping timeout: 272 seconds]
_whitelogger has joined #symbiflow
OmniMancer has quit [Quit: Leaving.]
<sf-slack> <acomodi> Hi, I have been making progress on the slicem issue and made a PR (https://github.com/SymbiFlow/symbiflow-arch-defs/pull/402) that should be able to solve the problem. Further explanations of my findings can be seen in the PR itself
<tpb> Title: d_dram.pb_type: update xml definition to solve slicem issue by acomodi · Pull Request #402 · SymbiFlow/symbiflow-arch-defs · GitHub (at github.com)
proteusguy has joined #symbiflow
<sf-slack> <mkurc> Hi, I've started working on converting the top-level CLBs to slices so that VPR will operate on a slice basis. I'll try to modify the architecture XML file and the routing graph instead of the tools which build them.
<sf-slack> <tmichalak> Hi Guys, I am still struggling with the placer issue for the carry chains (https://github.com/SymbiFlow/vtr-verilog-to-routing/issues/8). The architecture XMLs seem to be in order, hence the bug must be somewhere in the clustering part...
<tpb> Title: Placer cannot handle slice_l and slice_r carry chains in the same CLB · Issue #8 · SymbiFlow/vtr-verilog-to-routing · GitHub (at github.com)
<sf-slack> <mkurc> I've been thinking: is the routing graph correct? Assuming that VPR uses the routing graph to determine how to place/connect carry chains (right?), the problem might be there somewhere.
<sf-slack> <kgugala> @mkurc yes, it tries to pack a molecule and then it checks if the packing is routable
<sf-slack> <kgugala> I think mithro described that somewhere above
<mithro> kgugala: So, looking at the architecture definition the multiple carry chains is going to cause issues
<sf-slack> <kgugala> yep
<sf-slack> <kgugala> mithro: and I think I found the place where we're hit by the multiple-chain issue
<tpb> Title: vtr-verilog-to-routing/cluster.cpp at master+wip · SymbiFlow/vtr-verilog-to-routing · GitHub (at github.com)
<sf-slack> <kgugala> sorry, I meant line 1471
<litghost> mkurc: There are 3 parts to splitting up the CLBs, making new tile pb_types, making a new grid, and modifying the graph import. Which aspect are you planning on doing?
<sf-slack> <kgugala> it checks if the molecule we're about to add is a chain, and if it is, whether its root block type is the same as the root block type of the logic cell into which it is going to be packed
<sf-slack> <kgugala> in our case the root type is e.g. BLK_TI-CLBLL_L
<sf-slack> <kgugala> and that check is true for both SLICE0 and SLICE1
<sf-slack> <kgugala> of this CLB
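For context, the check kgugala is describing reduces to comparing the root block type of a chain molecule against the root block type of the candidate cluster. A minimal sketch of that logic, with hypothetical names rather than VTR's actual cluster.cpp data structures:

```
#include <iostream>
#include <string>

// Hypothetical simplification of the chain-feasibility check discussed
// above (around cluster.cpp line 1471); names are illustrative only.
struct Molecule {
    bool is_chain;
    std::string chain_root_type;  // e.g. "BLK_TI-CLBLL_L"
};

struct Cluster {
    std::string root_type;  // e.g. "BLK_TI-CLBLL_L"
};

// Because SLICE0 and SLICE1 share the same root type, a test this coarse
// also accepts a second chain into a CLB that already holds one, which is
// the problem being described.
bool chain_feasible(const Molecule& mol, const Cluster& cluster) {
    if (!mol.is_chain) return true;  // only chain molecules are restricted here
    return mol.chain_root_type == cluster.root_type;
}

int main() {
    Molecule second_chain{true, "BLK_TI-CLBLL_L"};
    Cluster clb_with_one_chain{"BLK_TI-CLBLL_L"};
    std::cout << chain_feasible(second_chain, clb_with_one_chain) << "\n";  // prints 1
}
```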
<litghost> kgugala: E.g. if we had 1 slice per CLB, that check would work correctly?
<sf-slack> <kgugala> when we use first-fit prepacking we in fact have one slice with CARRY per CLB and it works
<litghost> kgugala: Right
<sf-slack> <kgugala> I'll try to hack the packer so it will use only SLICE0 from CLBLL_L and CLBLL_R
<tpb> Title: Adding DRAM README.md file by mithro · Pull Request #403 · SymbiFlow/symbiflow-arch-defs · GitHub (at github.com)
<litghost> mithro: Done
<mithro> kgugala: Can't we just disable the carry-chain in everything apart from SLICE0 in CLBLL_L ?
<litghost> mithro: No, picosoc won't fit in the ROI if you do that
<sf-slack> <kgugala> exactly
<mithro> Disable carry-chain generation full stop then?
<sf-slack> <kgugala> we have this now (when using first fit)
<sf-slack> <kgugala> mithro, litghost: the hack works (at least for the chain_pack test)
<sf-slack> <kgugala> the chains are packed into BLK_TI-CLBLL_L.BLK_IG-SLICEL0.CARRYCHAIN and BLK_TI-CLBLL_R.BLK_IG-SLICEL0.CARRYCHAIN
<sf-slack> <kgugala> this confirms that having two chains in one CLB is causing the problems
<sf-slack> <kgugala> in my test I have two 8-bit counters, each packed into its own CARRY_CHAIN
<litghost> Cool
<sf-slack> <kgugala> but I use only SLICE0 from the CLBs
<sf-slack> <kgugala> and the hack is really ugly ;)
<litghost> kgugala: We should be able to do the equivalent of the hack by simply modifying the pb_types
<sf-slack> <kgugala> sure
<sf-slack> <kgugala> because the hack is:
<sf-slack> <kgugala> ```
<sf-slack> <kgugala> + /* XXX: hack */
<sf-slack> <kgugala> + if (pattern == 5 || pattern == 7 || pattern == 8 || (pattern >= 17 && pattern <= 19)) feasible_patterns[pattern] = false;
<sf-slack> <kgugala> ```
<litghost> kgugala: In the long term splitting the CLBs into two is the right solution, but in the short term disabling the second chain on each CLBLL and disabling the M chain in the CLBLM will get 1/2 of the carry chains working, rather than 1/8
<litghost> At 1/2 carry chains working, picosoc will likely fit
<litghost> Once the CLB split is done, we'll have 3/4 of the chains working, and would just need to solve the SLICE_M issue
<sf-slack> <kgugala> I agree
<sf-slack> <kgugala> but we should modify the CC specializer script, rather than use the above hack
<mithro> I will be chatting with the vtr devs today about the idea of equivalent blocks
acomodi has quit [Quit: Connection closed for inactivity]
<sf-slack> <kgugala> mithro: great, please share the outcome of the talk
<litghost> kgugala: I agree, that is probably a good place to do it. Do you have time to try that? Or do you want me to submit a PR?
<sf-slack> <kgugala> I have to catch a flight in about 11 hours, so I may not be able to do that today
<litghost> kgugala: Ok, I'll take a crack at it
<elms> litghost: Is there a reason to require a pb_type prefix? It looks like it stops the grid_loc prefix from being emitted, but isn't actually used: https://github.com/SymbiFlow/vtr-verilog-to-routing/blob/master%2Bwip/utils/fasm/src/fasm.cpp#L62-L67
<tpb> Title: vtr-verilog-to-routing/fasm.cpp at master+wip · SymbiFlow/vtr-verilog-to-routing · GitHub (at github.com)
<elms> I forgot I had added the prefix to get it to emit, but it doesn't seem correct to me. I'm cleaning up all my changes.
<litghost> elms: No, that code looks broken. Does anything break when those lines are moved?
<elms> I'll test it out.
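The shape of the problem elms is describing: emission of the grid_loc prefix is gated on a pb_type prefix that is never actually used afterwards. A hypothetical reconstruction under that assumption (the real code is at the fasm.cpp lines linked above; names and structure here are guesses):

```
#include <iostream>
#include <string>

// Suspect form (hypothetical): the grid_loc prefix is only emitted when a
// pb_type prefix exists, although the pb_type prefix itself goes unused.
std::string prefix_suspect(const std::string& pb_type_prefix,
                           const std::string& grid_loc_prefix) {
    if (pb_type_prefix.empty()) {
        return "";  // grid_loc prefix silently dropped
    }
    return grid_loc_prefix;
}

// Form matching the suggestion above: emit the grid_loc prefix
// unconditionally, since nothing depends on the pb_type prefix.
std::string prefix_fixed(const std::string& /*pb_type_prefix*/,
                         const std::string& grid_loc_prefix) {
    return grid_loc_prefix;
}

int main() {
    std::cout << prefix_suspect("", "CLBLL_L_X12Y100.") << "\n";  // empty
    std::cout << prefix_fixed("", "CLBLL_L_X12Y100.") << "\n";    // prefix emitted
}
```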
<mithro> litghost: Do you have a fasm output from vpr around somewhere?
<litghost> mithro: Sure, why?
<litghost> mithro: Want a paste?
<mithro> litghost: Sure
<mithro> litghost: Actually, it would be good if it was just a wire example
<litghost> mithro: You mean pip?
<mithro> litghost: I mean a simple design which only has a single trace between IO or something
<sf-slack> <mkurc> @litghost Regarding the CLB split - I know how to do the new pb_types and modify the grid. In fact I've almost completed the former. Once this is done I'll have to modify the routing graph accordingly. I am doing it with a Python script which modifies the final architecture description XML.
<litghost> mkurc: Not good enough, you'll need to completely rewrite the channels
<litghost> mkurc: The new pb_types are the easiest part. What's your plan for the grid?
<tpb> Title: top.fasm · GitHub (at gist.github.com)
<sf-slack> <mkurc> @litghost: Well, I'll expand the grid horizontally and split each CLB into two adjacent elements with different X coordinates. I haven't looked at the grid in detail yet; it contains not only CLBs, but I think that I can do it with a script.
<sf-slack> <mkurc> What do you mean by "rewrite channels" ?
<litghost> mkurc: VPR requires all routing wires in the grid to be in "channels" that run in the x or y direction
<litghost> mkurc: prjxray_form_channels.py created channels using a grid with a 1:1 relationship
<litghost> mkurc: If you modify the grid afterwards, the channel start/end are wrong, and you haven't formed the channels required to connect the slice furthest from the interconnect tile
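To make the coupling concrete: routing-resource nodes carry spans in grid coordinates, so inserting columns to split CLBs invalidates every existing span, and the new channels between the two halves of each former CLB have no counterpart in the old graph. A sketch of the remapping alone, with illustrative types rather than VPR's real rr-graph structures:

```
#include <vector>

// Illustrative stand-in for a routing-resource node; VPR's rr-graph keeps
// CHANX/CHANY wire spans in grid coordinates.
struct RRNode {
    int xlow, xhigh;  // horizontal span, in grid columns
    int ylow, yhigh;  // vertical span, in grid rows
};

// Assume every CLB column x is split into 2*x (SLICE0) and 2*x + 1
// (SLICE1). Shifting the spans is the easy part; the channels that must
// run between the two new columns simply do not exist in the old graph,
// which is why patching the final XML after the fact falls short.
void remap_for_column_split(std::vector<RRNode>& nodes) {
    for (RRNode& n : nodes) {
        n.xlow = n.xlow * 2;
        n.xhigh = n.xhigh * 2 + 1;  // an old column now covers both new ones
    }
}

int main() {
    std::vector<RRNode> nodes = {{2, 5, 3, 3}};  // a CHANX spanning columns 2..5
    remap_for_column_split(nodes);               // now spans columns 4..11
}
```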
<sf-slack> <mkurc> yeah, but it is all stored in the routing graph, right?
<litghost> mkurc: Sure, but what channels are you planning on using?
<litghost> mkurc: Modifying the grid in the arch means that the planned channels are wrong
<sf-slack> <mkurc> yes, I know that
<sf-slack> <mkurc> but the graph contains nodes and connections
<sf-slack> <mkurc> it does not care about the grid or any structure
<litghost> mkurc: Yes it does
<sf-slack> <mkurc> so if I can split a CLB into two pb_types I can do the same with the graph
<sf-slack> <mkurc> I haven't figured out how to do it yet, but this is my idea
<litghost> mkurc: Not really, the way that nodes are assigned to tiles won't work; they are tightly coupled
<litghost> mkurc: Basically you are hand-waving all the routing graph import work, which is where the rubber meets the road, so to speak
<sf-slack> <mkurc> hmmm
<sf-slack> <mkurc> ok, so I'll look at it tomorrow. It seems that I am missing important details.
<sf-slack> <mkurc> I wanted to avoid modifying the whole flow (the prjxray Python scripts) since the CLB-based architecture is assumed everywhere in it.
<litghost> mkurc: I don't think that is the right answer, nor the fastest route
<litghost> mkurc: Modifying the flow (e.g. python in xc7/utils/) is the correct method, and will allow for future split site instances to be supported
<sf-slack> <mkurc> @litghost: I know that it is the correct method, but it is much more complicated. I thought that by applying a "patch" to the final architecture definitions I could check whether splitting CLBs will work.
<litghost> mkurc: I doubt your "patch" will be faster than just modifying the flow
<sf-slack> <mkurc> @litghost: Ok, I'll rethink that tomorrow. Maybe I need a deeper look into the flow architecture.
nonlinear has quit [Ping timeout: 244 seconds]
nonlinear has joined #symbiflow
tpb has quit [Remote host closed the connection]
tpb has joined #symbiflow