<tpb>
Title: GitHub - SymbiFlow/symbiflow-examples: Examples designs for showing different ways to use SymbiFlow toolchains. (at github.com)
kgugala has joined #litex
kgugala_ has quit [Ping timeout: 258 seconds]
HoloIRCUser has joined #litex
HoloIRCUser1 has quit [Ping timeout: 246 seconds]
HoloIRCUser1 has joined #litex
HoloIRCUser has quit [Ping timeout: 246 seconds]
kgugala_ has joined #litex
kgugala has quit [Ping timeout: 246 seconds]
<futarisIRCcloud>
Has anyone here tried Linux on LiteX on the Arty board in the last few months, and can you point me at working revisions, etc.?
Skip has joined #litex
<futarisIRCcloud>
keesj & daveshah: I hit that bug in openocd (from the distro) on Ubuntu 18.04 LTS today too (on a fresh install). It's why the LiteX installation instructions recommend installing openocd from source.
HoloIRCUser has joined #litex
HoloIRCUser1 has quit [Ping timeout: 246 seconds]
tucanae47 has quit [Read error: Connection reset by peer]
flammit has quit [Read error: Connection reset by peer]
Claude has quit [Read error: Connection reset by peer]
tucanae47 has joined #litex
flammit has joined #litex
Claude has joined #litex
kgugala has joined #litex
kgugala has quit [Read error: Connection reset by peer]
kgugala has joined #litex
kgugala_ has quit [Ping timeout: 260 seconds]
<_florent_>
keesj: if you want to avoid lib conflict issues, instead of manually sourcing the Vivado settings before building your target, you can do: export LITEX_ENV_VIVADO=/opt/Xilinx/Vivado/20XY.X
<futarisIRCcloud>
Building latest linux-on-litex-vexriscv with Vivado 2020.1
<_florent_>
keesj: the scripts will do the sourcing just before running Vivado, and it will allow you to use OpenOCD with --load after the build
<_florent_>
keesj: I switched the default programmer to OpenOCD on 7-Series, since it's a lot faster than Vivado for loading and flashing bitstreams
<_florent_>
but you can still use the Vivado programmer:
<_florent_>
from litex.build.xilinx.programmer import VivadoProgrammer
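A minimal sketch of driving that programmer from Python follows; the bitstream path below is hypothetical and should point at your own build output:

    # Load a bitstream with Vivado instead of the default OpenOCD programmer.
    # The path is hypothetical; adjust it to your target's build directory.
    from litex.build.xilinx.programmer import VivadoProgrammer

    prog = VivadoProgrammer()
    prog.load_bitstream("build/arty/gateware/top.bit")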
<_florent_>
futarisIRCcloud: this is the bitstream I tested, but only serial+ethernet are enabled
<somlo>
adding printf statements is tricky, as the routines appear to be called with some weird interleaving pattern, and I get word salad on stdout :)
<somlo>
so I'm not quite sure when that function is called and from where, and why it returns STA_NOINIT when it *should* return 0
<futarisIRCcloud>
Is there an sdcard PMOD that I can buy that you are using with linux-on-litex-vexriscv?
<_florent_>
somlo: do you still have the manual init in boot.c?
<somlo>
_florent_: no, this is with strictly upstream code (plus/minus the hardcoded `return 0` in disk_status())
<somlo>
I tried adding printf statements to disk_status() and disk_initialize(), but they get interleaved in weird ways and I couldn't figure out what the actual sequence is, or what happens during disk_initialize()
<somlo>
_florent_: nope, still getting a file read error
<somlo>
I'm really ambivalent about the new FAT code, btw. Between the weird redefined uint return types and the non-obvious call tree, it's not much fun to troubleshoot :(
<futarisIRCcloud>
Ok. My fresh build of the latest HEAD of linux-on-litex-vexriscv seems to be working on the Arty. Doing a serial upload of firmware now, at around 86 KB/s.
<_florent_>
somlo: FatFs is used on a wide variety of embedded systems; it's also used in Barebox
<futarisIRCcloud>
The LED blink pattern is very pretty.
<futarisIRCcloud>
Ok. Single core linux-on-litex-vexriscv HEAD running 'dhrystone 1000000' at 100 MHz gives:
<futarisIRCcloud>
Dhrystones per Second: 40192.9
<futarisIRCcloud>
40192.9 / 1757 = 22.8759 DMIPS @ 100 MHz, or about 0.23 DMIPS/MHz ... Seems a little slower than it should be.
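That conversion uses the usual 1757 Dhrystones/s = 1 DMIPS reference (the VAX 11/780 baseline); a quick sketch of the arithmetic:

    # Dhrystone score -> DMIPS, using the VAX 11/780 baseline of 1757 Dhrystones/s.
    dhrystones_per_sec = 40192.9
    dmips = dhrystones_per_sec / 1757   # ~22.9 DMIPS
    print(dmips / 100)                  # ~0.23 DMIPS/MHz at 100 MHz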
<daveshah>
Could be compiler optimisation related?
HoloIRCUser has joined #litex
HoloIRCUser1 has quit [Ping timeout: 240 seconds]
<keesj>
_florent_: thanks for the info! I will update my script
<futarisIRCcloud>
And a single core on litex_vexriscv_smp HEAD running 'dhrystone 1000000' at 100 MHz gives:
<futarisIRCcloud>
Dhrystones per Second: 71839.1
<futarisIRCcloud>
About 41 DMIPS @ 100 MHz, or 0.41 DMIPS/MHz.
<futarisIRCcloud>
Four instances on a 4c gives around 64-65k Dhrystones/s per core.
<futarisIRCcloud>
150 DMIPS total (roughly)
<futarisIRCcloud>
The biggest difference between the two seems to be memspeed: reads at 458 Mbps on SMP vs 327 Mbps on single.
<_florent_>
futarisIRCcloud: in the SMP repo, each CPU can have 2 dedicated LiteDRAM native ports, whereas in the single repo the CPU has 2 Wishbone interfaces connected to the main Wishbone bus
<_florent_>
futarisIRCcloud: in the SMP repository, are you testing the 4c variant or mp4c?
<_florent_>
futarisIRCcloud: 4c has 2 LiteDRAM ports for the Cluster, while mp4c has 2 LiteDRAM ports per CPU
<_florent_>
futarisIRCcloud: but I'm not sure mp4c fits on the Arty
<_florent_>
we still have to make it more resource efficient
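Those dedicated ports come from LiteDRAM's crossbar; a hedged sketch of how a SoC might request them (the data_width here is chosen arbitrarily):

    # Sketch, inside a LiteX SoC's __init__ (assumes self.sdram already exists):
    # each get_port() call returns a dedicated LiteDRAM native port that
    # bypasses the shared main Wishbone bus.
    ibus_port = self.sdram.crossbar.get_port(mode="read", data_width=64)
    dbus_port = self.sdram.crossbar.get_port(mode="both", data_width=64)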
<felix_>
wasn't the only restriction on the AXI bus that you aren't allowed to make a processor core that implements the ARM ISA, attaches via AXI, and still call the bus AXI? IIRC the workaround was to call the bus something else in that one case
<Finde>
I think it was something like that too felix_
proteusguy has quit [Ping timeout: 256 seconds]
proteusguy has joined #litex
<awordnot>
does anybody know why a .vcd waveform produced by migen's run_simulation would contain 40-bit-wide `state` and `next_state` variables with seemingly random values in them?
<awordnot>
I'm just using the standard FSM module with 5 states, and I can see the states are numbered sequentially in the output Verilog
<awordnot>
Running the simulation with nmigen instead (using the compatibility layer) produces a waveform with valid states, so I'm gonna assume that's a bug in migen
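A minimal reproducer along those lines, with hypothetical names, might look like:

    from migen import *

    # 5-state FSM cycling S0 -> S1 -> ... -> S4 -> S0; run_simulation dumps a
    # VCD in which the FSM's state/next_state signals can be inspected.
    class DUT(Module):
        def __init__(self):
            self.submodules.fsm = fsm = FSM(reset_state="S0")
            for i in range(5):
                fsm.act("S%d" % i, NextState("S%d" % ((i + 1) % 5)))

    def stimulus():
        for _ in range(16):
            yield  # just advance the clock

    run_simulation(DUT(), stimulus(), vcd_name="fsm.vcd")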
CarlFK has joined #litex
captain_morgan48 has joined #litex
HoloIRCUser1 has joined #litex
HoloIRCUser has quit [Ping timeout: 246 seconds]
captain_morgan48 has quit [Remote host closed the connection]