<litghost>
Confirmed, at a minimum 16-bit BRAMs pass the simple RAM tester! The last step is to add support for initial BRAM contents, and then 16-bit BRAMs are ready.
<mithro>
FYI - My house is literally getting packed into boxes right now, so probably won't be around much :-P
<mithro>
litghost: Should I kick off a rebuild now? Has everything needed been merged?
<litghost>
Go ahead and kick off a rebuild
<mithro>
litghost: Did you get a chance to chat with kgugala and acomodi about the packing issues they were having?
<litghost>
mithro: Ya, your comment above is relevant. Thinking about splitting CLBLL into two SLICE_Ls. There are some specific assumptions made by the placer that may be satisfied by that route. In parallel, they are going to continue to hammer at VPR to see if we can solve the issue directly.
<mithro>
litghost: I think we should probably have a sync up around packer stuff once I'm back in the US
<mithro>
litghost: Are there things actually shared between slices in a tile?
<litghost>
For a CLB, no
<litghost>
For a BRAM, yes
<litghost>
For a FIFO, unclear, etc, etc
<mithro>
litghost: I was pondering whether we should convert slices in CLBs into VPR tiles?
<mithro>
the description given to VPR is a /representation/ of the hardware - it doesn't need to (and probably shouldn't) match the Xilinx representation, nor how the actual hardware exactly is....
citypw has joined #symbiflow
<litghost>
You mean a slice?
<mithro>
litghost: Yes, split a CLB into two tiles - one for each slice
<litghost>
mithro: That's what I suggested to kgugala / acomodi
<mithro>
litghost: Ha, great minds think alike - so do terrible ones, guess it's hard to know which is which :-P
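A minimal sketch of the per-slice-tile idea in VPR architecture XML, assuming hypothetical block and pin names (not the real symbiflow-arch-defs definitions):
```
<!-- Hypothetical sketch: promote each slice to its own top-level VPR block
     instead of one CLB pb_type containing two SLICE_L children.  The name
     SLICEL_TILE and the pin counts are illustrative only. -->
<complexblocklist>
  <pb_type name="SLICEL_TILE">
    <input  name="IN"  num_pins="48"/>
    <clock  name="CLK" num_pins="1"/>
    <output name="OUT" num_pins="12"/>
    <!-- The existing SLICE_L internals (LUTs, FFs, carry chain) would be
         instantiated here unchanged; only the cluster boundary moves from
         the CLB to the slice, so the packer reasons about one slice at a
         time. -->
  </pb_type>
</complexblocklist>
```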
<mithro>
Gah, I forgot to update before doing the rebuild...
jevinskie has joined #symbiflow
<litghost>
'
_whitelogger has joined #symbiflow
<mithro>
litghost: '
<mithro>
unbalanced quotes make me uneasy :-P
jevinski_ has joined #symbiflow
jevinskie has quit [Ping timeout: 250 seconds]
jevinski_ has quit [Read error: Connection reset by peer]
jevinskie has joined #symbiflow
OmniMancer has joined #symbiflow
symbiflow-slack has joined #symbiflow
tgielda has joined #symbiflow
pgielda_ has joined #symbiflow
<pgielda_>
slack gateway should be working now
<symbiflow-slack>
<pgielda> symbiflow.slack.com
<symbiflow-slack>
<pgielda> there is a channel there called symbiflow, that is synced both ways with the irc
<symbiflow-slack>
<pgielda> symbiflow-slack is a proxy user that forwards messages both ways
<mithro>
pgielda_: Great
<symbiflow-slack>
<me1> Testing?
<symbiflow-slack>
<kgugala> Looks fine
<mithro>
pgielda_: Any idea why it just said I was <me1> ? :-P
<mithro>
Maybe we should shorten the nick to something like sf-slack?
<mkurc>
test
<symbiflow-slack>
<mkurc> test
<symbiflow-slack>
<mkurc> :thumbsup:
<symbiflow-slack>
<pgielda> I am pretty sure we can tweak this all
<symbiflow-slack>
<pgielda> @mithro me1 is probably because on slack "me" was taken (or maybe too short?) and your email is me@domain
<symbiflow-slack>
<pgielda> I am guessing here of course
<symbiflow-slack>
<pgielda> so your real username on slack is me1 apparently ;)
<symbiflow-slack>
<pgielda> This can definitely be fixed though as this is something our bridge adds
symbiflow-slack has quit [Remote host closed the connection]
sf-slack has joined #symbiflow
<sf-slack>
<pgielda> here we go, sf-slack it is
citypw has quit [Ping timeout: 258 seconds]
<sf-slack>
<mkurc> Do we support RAM128X1D in VPR? I'm trying to pack (just pack) the PicoSoC test and it fails saying "Message: Can not find any logic block that can implement molecule. Pattern DRAM128_DP soc.memory.mem.28.1.0.f7a_mux". When I remove the 128-bit RAMs and use only the 32- and 64-bit-wide ones, the pack succeeds. I can see that the techmap converts them to SPRAM128+DPRAM128+DRAM_2_OUTPUT_STUB. Haven't checked the arch XML file yet.
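For context, that packer error generally means a molecule was formed from a pack pattern declared in the arch XML, but no pb_type in the architecture can absorb the whole chain. A rough sketch of how such a pattern is declared, with hypothetical port and pb_type names (the real ones live under xc7/primitives/slicem/):
```
<!-- Hypothetical sketch of a pack_pattern declaration.  Atoms connected
     along interconnect edges tagged with the same pack_pattern name are
     grouped into one molecule; if no pb_type offers a placement for the
     whole molecule, the packer reports "Can not find any logic block that
     can implement molecule". -->
<interconnect>
  <direct name="dpram_to_f7amux" input="DPRAM128.O" output="F7AMUX.I0">
    <pack_pattern name="DRAM128_DP" in_port="DPRAM128.O" out_port="F7AMUX.I0"/>
  </direct>
</interconnect>
```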
<sf-slack>
<acomodi> I am dealing with something similar for the `SLICEMs` issue. I have found that by modifying the DRAM xml definitions in `xc7/primitives/slicem/` the SLICEM issue is solved, but when trying to route the `xc7/tests/dram/128x1d` test it fails with the following message
<sf-slack>
<acomodi> `No possible routing path from cluster-external source (LB_SOURCE) to cluster-internal sink (LB_SINK accessible via architecture pins: BEL_BB-DRAM_2_OUTPUT_STUB[0].DPO[0]): needed for net ram0.DPO_FORCE' from net pin 'ram0.f7a_mux.O' to net pin 'ram0.stub.DPO'`
<sf-slack>
<acomodi> I would assume there is some bug in the dram arch definition
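That error says the intra-cluster router found no edge from the F7A mux output to the stub's DPO input. If the arch definition really is missing such an edge, the fix would be on the order of a single <direct> entry in the SLICEM interconnect; a sketch with names inferred from the error message (they may not match slicem.pb_type.xml exactly):
```
<!-- Hypothetical sketch of the missing intra-cluster edge: a direct
     connection from the F7A mux output to the DPO input of the
     DRAM_2_OUTPUT_STUB.  Pin names are taken from the error message. -->
<interconnect>
  <direct name="F7AMUX_O_to_STUB_DPO"
          input="F7AMUX.O"
          output="DRAM_2_OUTPUT_STUB.DPO"/>
</interconnect>
```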
<sf-slack>
<mkurc> ok, I'll look into that
OmniMancer has quit [Quit: Leaving.]
mkurc has quit [Quit: WeeChat 1.9.1]
<sf-slack>
<acomodi> `slicem` issue update: I have been checking the xml definition of the slicem and I noticed that the d_drams are not produced from the templates, as is done for dram_a/b/c.
<sf-slack>
<acomodi> in `vpr` I got the following error message: `Differing modes for block. Got LUTs previously and DRAMs for interconnect DO6.` It was related to pin DO6, which most probably has to do with dram_d. I was suspicious that only pin DO6 produced the `mode` error, so I decided to make dram_d uniform with the other ones (a, b, c) by changing the `slicem.pb_type.xml` in `xc7/primitives/slicem`
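For reference, the "Differing modes" error comes from VPR's <mode> mechanism: a pb_type may define several modes, but each packed instance must commit to exactly one, so an interconnect edge reachable only in a DRAM mode cannot be mixed with children already packed in a LUT mode. A minimal sketch with hypothetical names:
```
<!-- Hypothetical sketch of two modes on the D position of a slice.  A net
     cannot use edges from both modes of the same pb instance at once, which
     is roughly what "Differing modes for block ... for interconnect DO6"
     reports. -->
<pb_type name="D_PART" num_pb="1">
  <input  name="IN"  num_pins="6"/>
  <output name="DO6" num_pins="1"/>

  <mode name="LUTS">
    <pb_type name="DLUT" blif_model=".names" num_pb="1" class="lut">
      <input  name="in"  num_pins="6" port_class="lut_in"/>
      <output name="out" num_pins="1" port_class="lut_out"/>
    </pb_type>
    <interconnect>
      <direct name="d_in"  input="D_PART.IN" output="DLUT.in"/>
      <direct name="d_out" input="DLUT.out"  output="D_PART.DO6"/>
    </interconnect>
  </mode>

  <mode name="DRAMS">
    <!-- DRAM leaf pb_type and its own interconnect would go here. -->
  </mode>
</pb_type>
```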
<sf-slack>
<acomodi> By modifying the xml definition, the `chain_packing` test with 5 counters passed (including `slicems`), and we got a `top.bit` with all the LEDs blinking (including the one using the `slicem`)
<sf-slack>
<kgugala> +1
<sf-slack>
<acomodi> I am still testing whether this was actually the issue; I have run all the DRAM tests in the `xc7/tests/dram` directory and got the following results
<sf-slack>
<acomodi> (before the slicem.pb_type.xml changes)
<sf-slack>
<acomodi>
```
1_256x1s: not passing
1_128x1d: not passing
2_32x1d:  not passing
2_64x1d:  passing
2_128x1s: passing
4_32x1s:  passing
4_32x2s:  passing
4_64x1s:  passing
```
<sf-slack>
<acomodi> I am currently re-running all of the dram tests with the change I made to get the slicems working in the `chain_packing` test. I believe there are some pins which are not well defined in the xml definitions of the slicems
<litghost>
128x1d used to pass
<litghost>
mkurc: 128x1d should be supported; however, the DLUT DRAM isn't templated because it is a special snowflake
<tpb>
Title: [WIP] round robin packing by kgugala · Pull Request #9 · SymbiFlow/vtr-verilog-to-routing · GitHub (at github.com)
<litghost>
acomodi: making DO6 like the others will not work
<sf-slack>
<acomodi> I am not sure if this is the cause
<sf-slack>
<mkurc> @litghost Yes, the 128x1 is supported but it failed to pack for some reason when I tried the picosoc with my techmap.
<litghost>
acomodi: If you make DO6 like the other DRAMs it will no longer work in hardware
<sf-slack>
<acomodi> Actually I have produced a bitstream for the basys3 and all 5 LEDs were working correctly (where the 5th LED is the one related to the slicem)
<sf-slack>
<kgugala> but that bitstream does not use brams
citypw has joined #symbiflow
<sf-slack>
<acomodi> Yeah, probably that is why it works for the `chain_packing` test, but maybe it will fail for the `drams` one
<litghost>
The DLUT structure is not templated because it must be different from the other LUTs
test_user has joined #symbiflow
<litghost>
acomodi: what change did you make that helped with the chain issue but broke the DRAM pack test?
<sf-slack>
<acomodi> So, without the changes I obtained the results I previously posted. Anyway, the changes consist of also using the template for the d_dram. Basically I changed the `CMakeLists` in `xc7/primitives/slicem/Ndram/` to include d_dram among the ones generated from the templates
<sf-slack>
<acomodi> and then adapted the `slicem.pb_type.xml` to deal with the d_dram now being generated from the template
test_user has quit [Quit: Lost terminal]
<sf-slack>
<acomodi> I am running the `dram` tests once again with the master branch of `arch-defs` to see which fail and which don't. BTW all of the previous tests failed during the `cluster_routing` step when trying to `pack`
<sf-slack>
<acomodi> `dram` tests update: all the tests have passed using the current `arch-defs` master as well as the conda `vpr`
<litghost>
That is pack only
<litghost>
PnR only
<litghost>
You need to test on hardware
<litghost>
It won't work
<sf-slack>
<acomodi> Ok, I'll try them on HW, but note they are built without my modifications to the slicem.pb_type.xml. I'll let you know in a bit
jevinskie has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
jevinskie has joined #symbiflow
<sf-slack>
<mkurc> Can you tell me what the most complex design is that we have managed to implement using Yosys+VPR on 7-series? I am concerned that if we move from a 4-bit counter to the PicoSoC in one step we might fail miserably...
<tpb>
Title: Need BRAM simulation model · Issue #360 · SymbiFlow/symbiflow-arch-defs · GitHub (at github.com)
pgielda_ has joined #symbiflow
<sf-slack>
<acomodi> Indeed the dram tests do not work on HW (my input switches do not seem to produce the expected effects)
<litghost>
The DLUT was set up that way for a reason
<sf-slack>
<acomodi> To be more precise, though, I tested master with no changes to the slicem definitions nor to the DLUT
jevinskie has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<litghost>
Did you regenerate the harness?
<sf-slack>
<acomodi> Right, I forgot about that
<litghost>
mkurc: Can you open an issue for your DRAM128X1D problem? We have a PnR test that appears to be passing, so I'm surprised by the failure
<litghost>
mkurc: I re-tested master, and DRAM128X1D appears to be packing on master with master+wip VPR. In the issue, include which VPR you are using
<sf-slack>
<mkurc> @litghost I'd rather not open an issue for DRAM128X1D yet, because the tests are passing and the DRAM packs. I've encountered a problem when trying to pack the whole PicoSoC, which contains DRAM128X1Ds. I've probably stumbled upon the issue with incorrect packing of carry chains. I used VPR from the master branch. I will check it against @kgugala's fix with round-robin packing tomorrow.