Vincenttl has quit [Quit: Connection closed for inactivity]
citypw has joined #symbiflow
efg has quit [Ping timeout: 256 seconds]
citypw has quit [Ping timeout: 250 seconds]
citypw has joined #symbiflow
_whitelogger has joined #symbiflow
photon70 has joined #symbiflow
photon70 has quit [Ping timeout: 256 seconds]
acomodi has joined #symbiflow
tmichalak has joined #symbiflow
<digshadow> nats`: good night!
<digshadow> oh ha that was last night
<sxpert> from last night, it seems like vivado is written by monkeys
citypw has quit [Ping timeout: 250 seconds]
<nats`> sxpert, you had doubt about that ? :D
<nats`> I mean until vivado 2016 I could crash it by clicking too fast on popups
<nats`> it happened to me at least a million times
<nats`> anyway, since we can use the tcl interface, I think we can hopefully solve almost all the fuzzing problems using my patch
<nats`> and we can have a sort of parallelism
<sxpert> nats`: all I have ever used so far is ise because vivado doesn't support my https://www.scarabhardware.com/minispartan6/
<tpb> Title: miniSpartan6+ | Scarab Hardware (at www.scarabhardware.com)
<nats`> uhhmm tbh vivado is way above ise; if you want arguments we could discuss that in private
<sxpert> so I'd like to be able to use said minispartan6 with something else at some point ;)
<nats`> I don't know if there is a known effort on series 6, maybe ask digshadow
<digshadow> sxpert: theres definitely interest in 6 series, but nobody working on it
<digshadow> is that something you might be interested to add support for?
<sxpert> digshadow: unfortunately, I'm rather unqualified
<digshadow> sxpert: if you are interested but aren't sure how to start, we can help with that
<nats`> the idea is not stupid because series 6, despite being really expensive compared to series 7, keeps some advantages like static power consumption
<sxpert> (and lack time)
<digshadow> that's a bigger issue :)
<sxpert> nats`: the 6 series is in a large number of blackmagic devices
<digshadow> sxpert: have you used them? how was your experience?
<sxpert> which are pretty cheap, and could gain from opensource firmware
<nats`> blackmagic the video manufacturer ? (if the discussion continues we may have to switch to private or another channel to not pollute the chan)
<sxpert> nats`: yeah
<digshadow> I think he means the 1bitsquared product
<digshadow> ah
<digshadow> nats`: but yeah thanks for keeping OT
<nats`> is it a good idea to have a linked chan where we could put related but not totally on-topic discussions, in case they interest more people in the chan ?
<nats`> something like symbichat
<nats`> (I'm not a fan of having tens of channels but it may be useful here)
<digshadow> well there's ##openfpga
<nats`> ah sure :)
<digshadow> which tends to be looser
<nats`> get it :)
<sxpert> noted
<nats`> I'll add it to my chan list again, if you want to keep going on the discussion sxpert ping me there :)
<nats`> digshadow, Bug #501
<nats`> are you talking about the output of get_pips ?
<nats`> if yes, I tried to do that because 072 relies on it, but a simple puts seems not enough: each item of pips is stored as a string and it seems that's not good
<digshadow> nats`: I think someone is working on that
<digshadow> already
<digshadow> marked as assigned
<nats`> oky so I'll wait to integrate it in 072 because I didn't manage to do that well
<digshadow> unless kgugala wants to hand off to you?
<digshadow> ah? it's used there as well?
<nats`> in 072 yes
<nats`> and I call it a lot of times now
<nats`> in each job
<nats`> but beware, get_pips with no -of_objects returns something like 2e6 results on the 50T
<digshadow> gotcha. So making it common would speed that up quite a bit
<nats`> yep
<nats`> that's why I tried
<nats`> but failed :D
<digshadow> nats`: just talked to him
<digshadow> sounds like you should be good to take that
<digshadow> hes mostly working on 052
<nats`> ohhh if he wants to do that no problem !
<nats`> I don't want to take it from him
<digshadow> no, I think you should go for it
<nats`> oky
<digshadow> hes focusing on the core logic of 052, not list pips
<nats`> so the idea is to find a way to store that
<nats`> it could be binary or text ?
<digshadow> it should be the same file format as it is now right? just only generated once
<nats`> are we talking about the same things, the output of get_pips command ?
<tpb> Title: prjxray/piplist.tcl at master · SymbiFlow/prjxray · GitHub (at github.com)
<digshadow> specifically running that file
<nats`> maybe we have a bigger rework to do in fact
<nats`> foreach pip [lsort [get_pips -filter {IS_DIRECTIONAL} -of_objects [get_tiles $tile]]] {
<nats`>     set src [get_wires -uphill -of_objects $pip]
<nats`>     set dst [get_wires -downhill -of_objects $pip]
<nats`>     if {[llength [get_nodes -uphill -of_objects [get_nodes -of_objects $dst]]] != 1} {
<nats`>         puts $fp "$tile_type.[regsub {.*/} $dst ""].[regsub {.*/} $src ""]"
<nats`>     }
<nats`> }
<nats`> that part is really close to what the 072 does
<nats`> the main difference is that the fuzzer starts at a specified object
<nats`> whereas in 072 we simply list all wires everywhere, if I get it correctly
<nats`> maybe in fact a lot of 05x could be called after the 072 ?
<nats`> (the real question: is the optimisation worth the effort ?)
<digshadow> nats`: no. 07x series takes too long
<digshadow> and its not related to bitstream stuff
<digshadow> I don't want any of the fuzzers blocked on 07x that don't have to
<nats`> did you try with the new version ?
<nats`> on my computer it takes 30 minutes now and it could go faster with more cores and a better share between RAM/CPU
<nats`> what do you consider too long ?
<nats`> real question
<digshadow> nats`: from fresh checkout on a not very powerful computer, I can start running 05x series after only maybe 10 minutes?
<digshadow> the longest is tilegrid
<nats`> sure, ok, get it; I think on a middle-range computer you'll have something like one hour on 07x
<digshadow> it really just seems like an unnecessary dependency
<nats`> yep
<nats`> you're right
<nats`> I'll finish the 07x problem and look into sharing piplist for 05x fuzzer if kgugala is making something else
<digshadow> sounds good!
Vincenttl has joined #symbiflow
acomodi has quit [Quit: Page closed]
tmichalak has quit [Ping timeout: 256 seconds]
celadon has quit [Ping timeout: 250 seconds]
Rahix has joined #symbiflow
celadon has joined #symbiflow
Vincenttl has quit [Quit: Connection closed for inactivity]
citypw has joined #symbiflow
<nats`> litghost : should I handle the possible Exception of subprocess.check_call ?
<nats`> in that case how should I do that ?
<nats`> I mean what would be the behavior of the python script
<nats`> I could force exit but what will happen to the other threads in that case ?
<nats`> try:
<nats`>     subprocess.check_call("${XRAY_VIVADO} -mode batch -source $FUZDIR/job.tcl -tclargs " + str(blockID) + " " + str(start) + " " + str(stop), shell=True)
<nats`> except subprocess.CalledProcessError:
<nats`>     print("Problem happened in jobs !")
<nats`>     # dirty way to kill all the processes
<nats`>     os.kill(os.getpid(), signal.SIGINT)
<nats`> need to be tested
<nats`> it reacts well with a Ctrl+C input
<nats`> so let's say it'll work in case the vivado process fails to start
<nats`> do I have to do something to signal that I modified the PR ?
<litghost> Do not handle the exception
<litghost> If cleanup needs to happen, make sure to put it in a context handler
<litghost> I would just use multiprocessing.Queue rather than managing it by hand
<litghost> Revise that, use multiprocessing.Pool and imap_unordered
<litghost> If you get an exception, then you can use multiprocessing.Pool.terminate to stop work
<litghost> map_async also works if you need to preserve ordering and want to handle errors without exceptions
<tpb> Title: 16.6. multiprocessing — Process-based “threading” interface Python 2.7.15 documentation (at docs.python.org)
<nats`> uhhmm oky i'll take a look
<nats`> isn't it a little overkill for a "run once" script ?
<nats`> I'm looking at it, I'm not sure it's useful to go into such complexity
<mithro> If you want to chat about alternative firmware for Blackmagic products I recommend joining #timvideos channel
<nats`> sure but I was more discussing making an opensource board :)
<litghost> If you want to avoid complexity, then avoid introducing threading to begin with. If you want parallel execution, multiprocessing.Pool is the safe and recommended way to do it
citypw has quit [Remote host closed the connection]
<nats`> litghost what's your opinion ? I think I should modify the way I call thread and calculate the index
<nats`> oky I'll rewrite it
<nats`> but from what I saw the current version is working well at catching signals
<litghost> The code will actually get simpler using Pool, because it will handle the nbParBlock logic for you
<nats`> I have to read about that, I really don't know those modules; it's literally been 10 years since I wrote real python code :)
<litghost> It should just be #1 create pool, #2 start workers, #3 add work, #4 get results (or handle error), #5 close pool
<litghost> Oops, #2 start workers isn't required, creating the pool starts the workers
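The lifecycle litghost lists can be sketched as a minimal, self-contained Python example. `run_job` and its `(blockID, start, stop)` tuples are hypothetical stand-ins for the real `start_vivado` wrapper and its Vivado invocation; the failure branch simulates a crashed job.

```python
import multiprocessing

def run_job(args):
    # Hypothetical stand-in for start_vivado: the real worker would call
    # subprocess.check_call on a Vivado batch job and let errors propagate.
    block_id, start, stop = args
    if start > stop:  # simulated job failure
        raise RuntimeError("block %d failed" % block_id)
    return (block_id, stop - start)

def run_all(jobs, workers=4):
    # 1. create the pool (this also starts the workers)
    with multiprocessing.Pool(processes=workers) as pool:
        try:
            # 2. add all the work at once; 3. collect results as they finish
            return dict(pool.imap_unordered(run_job, jobs))
        except RuntimeError:
            # 4. a worker exception re-raises here; stop the remaining jobs
            pool.terminate()
            raise
    # 5. leaving the with-block closes the pool

if __name__ == "__main__":
    print(run_all([(0, 0, 250), (1, 250, 500), (2, 500, 750)]))
```

Note that `imap_unordered` re-raises a worker's exception in the consuming loop, and the pool's context manager already calls `terminate` on exit, which is what makes the `os.kill(os.getpid(), signal.SIGINT)` workaround unnecessary.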
<somlo> after building the latest yosys, prjtrellis, and nextpnr, I successfully managed to compile one of the examples included with prjtrellis (soc_versa5g); But after connecting the LFE5UM5G-45F-VERSA-EVN board via usb, and running "make prog", I get an error from openocd (https://pastebin.com/udAHnXFQ). The board stops blinking and spinning its segment-LED thingie, so *something* happens, but not quite what *should* happen. Any clue as to what I'm
<somlo> screwing up (I'm a n00b at openocd and jtag) much much appreciated!
<tpb> Title: [soc_versa5g]$ make prog openocd -f /usr/share/prjtrellis/misc/openocd/ecp5-ver - Pastebin.com (at pastebin.com)
<daveshah> somlo: you need to set J50 to bypass the ispclock
<daveshah> short (1, 2) and (3, 5)
* somlo digs around for magnifying glass...
<daveshah> for some reason openocd can't cope with the ispclock in the JTAG chain, even if you give it the IR length and other details
<somlo> 3,5 --that would be the two upper-left-horizontal ones and the far-right vertical two (in a ":::" pattern) -- right ?
<somlo> I had (1,2) (3,4) and (5,6) from the factory
<daveshah> yes
<somlo> I get the same behavior
<daveshah> same error too?
<daveshah> would you be able to post a photo of your jumpers?
<somlo> daveshah: same error, same numbers (except at the bottom, where I get different READ values each time)
<somlo> daveshah: I'll upload a picture in a few minutes -- and thanks a ton for your help !!!
<daveshah> one more thing to try would be to put the CFG0, CFG1 and CFG2 jumpers all up (off)
<nats`> litghost, I still have to make a loop to create a pool of 4 threads, no ?
<somlo> there's four dip switches (4,3,2,1) with (3,2,1) labeled cfg0, 1, and 2, respectively
<somlo> and they're down,down,up,down
<daveshah> somlo: yeah, try all four up
<daveshah> that forces it to JTAG mode, which shouldn't be necessary but might be for some odd reason
<nats`> ahh no, if I understand correctly you tell the pool to create 4 workers and feed it the data, and it'll do whatever is needed to process all the data while using only 4 parallel threads
<nats`> ?
<somlo> now I don't get the blinky default behavior on power-up, and still the same error
<daveshah> somlo: just to double check, this is how J50 should be https://usercontent.irccloud-cdn.com/file/8CeHwSxq/j50.jpg
<daveshah> somlo: actually I think your versa board might be from the bad batch
<litghost> nats: No, create the pool of 4 once. Then add all of the work at once (either via map/map_async/imap/imap_unordered). Once all the work is added, it will be batched to each worker.
<daveshah> what is the text in the top right corner, and what is the text on the FPGA?
<litghost> nats: ya
<tpb> Title: Imgur: The magic of the Internet (at imgur.com)
<daveshah> somlo: yes, you have a bad versa
<daveshah> Lattice accidentally built a batch of Versa 5Gs with non 5G FPGAs
<daveshah> This is the second I've seen now
<daveshah> somlo: replace the nextpnr command in the Makefile with `nextpnr-ecp5 --json attosoc.json --lpf versa.lpf --basecfg ../../misc/basecfgs/empty_lfe5um-45f.config --textcfg $@ --um-45k --freq 50 --speed 8` and try again
<somlo> so basically s/um5g/um/g :)
<sorear> isn't the non-5G architecturally the same whenever both meet timing?
<sorear> I guess, they have different IDCODEs in OTP and that confuses openocd?
<somlo> hehe, if anyone anywhere ever makes a bad batch of anything, I will probably end up with it :D
<daveshah> sorear: yes, the idcode is in the bitstream too
<daveshah> Timing is complicated here, because the 5G version runs at higher Vcc which accounts for the faster fabric
<somlo> wonder if I can get them to send me a replacement...
<daveshah> I guess this board runs the non 5G version at the higher Vcc, so timing is anyone's guess, although probably closer to the 5G
<daveshah> somlo: yes, I'm sure you can
<daveshah> However, there's a risk of getting another from the same batch if you go through the distributor
<sorear> general semiconductor process question: how correlated are process variations on nearby transistors? on a "slow" chip, are all of the transistors slow, or just a few of them?
<sorear> I guess there must be some correlated variation since I've seen in-circuit measurement devices that rely on being "near" the operational circuit, but a spatial power spectrum would be very interesting here
<somlo> daveshah: thanks again, make prog now doesn't error out anymore, and I get 8 flashing LEDs next to the usb connector, four red and four yellow
<daveshah> Connect to the second uart (ttyUSB1 with no other devices) at 9600 and you should see hello world
<somlo> whoo hoo, it works! thanks again for the handholding!
<daveshah> \o/
<nats`> litghost, I wrote that but I'm not sure it's good enough to handle a crash in one of the threads
<tpb> Title: [Python] 072 split job python pool - Pastebin.com (at pastebin.com)
<litghost> Ya, that looks good
<nats`> can you explain to me how it'll be handled ?
<nats`> because in case of crash of one it should stop all other
<litghost> the pool context manager should call terminate
<nats`> oky
<nats`> I need to fix the fact that map can't take multiple arguments
<somlo> while I'm on the topic of ECP5 dev boards, is there anything out there with e.g. a LFE5UM5G-85F and at least 128MB of DDR RAM (i.e. something comparable to the nexys4-ddr or Arty)?
<daveshah> Not at the moment
<daveshah> The only board with significant DDR is the Versa
<somlo> If only they made an 85 kCell version...
<sorear> any guesses why they didn't?
<litghost> just send a list of arguments
<nats`> you'll certainly laugh but I don't know how to construct it right :p
<nats`> blockId = range(0, nbBlocks)
<nats`> startI = range(0, pipscount, blocksize)
<nats`> stopI = range(blocksize, pipscount + 1, blocksize)
<nats`> argList = [ blockId, startI, stopI ]
<nats`> with Pool(processes=nbParBlock) as pool:
<nats`> pool.map(start_vivado, argList)
<nats`> this is not good
<daveshah> sorear: no idea, maybe supply related in the early days
<daveshah> They are pin compatible so they could make one easily enough (from memory there's no shortage of capacity on the power supplies)
<litghost> argList = zip(blockId, startI, stopI )
<nats`> ahh sure zip.....
<nats`> I always forget about that one
<nats`> thx
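Spelled out, the zip suggestion pairs the three ranges element-wise into one `(blockId, start, stop)` tuple per job. The counts below are made-up illustration values (the real pip count comes from `get_pips`), and `start_vivado` is reduced to a stub:

```python
pipscount = 1000   # illustrative only
blocksize = 250
nbBlocks = pipscount // blocksize

blockId = range(0, nbBlocks)
startI = range(0, pipscount, blocksize)
stopI = range(blocksize, pipscount + 1, blocksize)

# zip pairs the ranges element-wise: one argument tuple per job
argList = list(zip(blockId, startI, stopI))
print(argList)  # [(0, 0, 250), (1, 250, 500), (2, 500, 750), (3, 750, 1000)]

def start_vivado(args):
    # Stub: the real worker launches a Vivado batch job with these values.
    blockID, start, stop = args
    return "block %d handles pips %d..%d" % (blockID, start, stop)

# Under pool.map each tuple arrives as the single `args` parameter;
# Pool.starmap would instead unpack it into three positional arguments.
```

This is why `pool.map(start_vivado, argList)` works once the worker takes one tuple argument and unpacks it itself.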
<nats`> litghost, testing the new code, I hope it'll be good this time :)
<nats`> will let it run to verify sha1
<nats`> litghost, should I close my PR and reopen one ?
<nats`> because a lot of commits are useless now
<nats`> I must admit I don't know how to do that in git...
<litghost> git rebase -i master
<litghost> but before doing that, I would make a backup branch or write down the commit hash
<litghost> if you are new at git
<nats`> uhhh can I avoid that ?
<litghost> The rebase?
<nats`> yep
<nats`> I'm pretty sure I'll break something
<litghost> Feel free to provide the PR as you feel you can. rebase is the fastest way, but if you have another way to make a cleaner PR, feel free
<nats`> if I understand correctly, if I rebase on a commit it squashes all the history and everything back to that commit ?
<litghost> You choose. default is no change
<litghost> squash or s will cause the commit to be merged with its predecessor
<nats`> litghost, should I close the PR anyway and reopen one rebased
<nats`> ?
<nats`> because the current one will keep all discussion etc
<litghost> after rebasing, git push <remote name> <branch name> --force
<litghost> and that will update the PR but not force a new PR to be created
<litghost> assuming you have the same branch name
<litghost> PR only cares about the branch name
<litghost> that's it
<litghost> In general do not use --force with push unless you are specifically intending to rewrite history (like a rebase for cleanliness purposes)
<nats`> I'll make a clone of my local repo to test
<litghost> sure
<litghost> That's overkill, but it will work
<nats`> for safety in fact
<nats`> because I'm not sure I understand how rebase works
<nats`> basically I want to take my current state back a few commits and squash all those into one
<litghost> ya, that's what rebase does
<nats`> but if I understand, it only works on a branch
<nats`> I made all my modifications directly in master
<litghost> that's fine actually
<litghost> just make a branch at the place you are at (git checkout -b <new branch name>)
<litghost> then move master back to the upstream location (git checkout master then git reset --hard upstream/master or origin/master)
<litghost> then rebase, and you are good to go
<nats`> I replace origin/master by the commit number I want to base on ?
<nats`> or the one after the last commit I want to keep ?
<litghost> git rebase -i <oldest commit to leave alone>
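The whole branch / reset / rebase sequence can be rehearsed in a throwaway repo before touching the real one. Everything below is a self-contained demo with placeholder names: `upstream-master` stands in for `origin/master`, and a sed one-liner replaces the interactive editor so the squash runs unattended.

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b master .
git config user.email demo@example.com
git config user.name demo

echo base > file.txt && git add file.txt && git commit -qm "base"
git branch upstream-master                  # stand-in for origin/master
echo wip1 >> file.txt && git commit -qam "wip 1"
echo wip2 >> file.txt && git commit -qam "wip 2"

git checkout -qb cleanup-pr                 # 1. keep the work on a new branch
git checkout -q master
git reset -q --hard upstream-master         # 2. move master back to upstream
git checkout -q cleanup-pr

# 3. squash: sed marks every todo line after the first as "squash",
#    and GIT_EDITOR=true accepts the combined commit message as-is
GIT_SEQUENCE_EDITOR="sed -i -e '2,\$ s/^pick/squash/'" GIT_EDITOR=true \
    git rebase -i upstream-master

git rev-list --count upstream-master..HEAD  # prints 1: two wip commits became one
# A real PR update would then be: git push <remote> cleanup-pr --force
```

The `--force` push updates the existing PR in place because GitHub only tracks the branch name, exactly as litghost describes.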
<nats`> oky I'll try that, the new version is working
<nats`> last question should I commit my last modification before rebase ?
<litghost> yes
<litghost> rebase manipulates commits
<litghost> staged and unstaged files are not allowed
<nats`> ah oky
<nats`> I'm rebasing should I select drop ?
<nats`> uhhmm I have a conflict
<litghost> squash
<litghost> not drop
<litghost> git rebase --abort is your friend
<nats`> yep I just made that
<nats`> oky i'll try with squash
<nats`> oky it opens the comment window with all the previous comments; I guess I delete everything
<nats`> and just make a new one
<nats`> and I push force now
<nats`> duhhhhh it's more stressful than playing Resident Evil :)
<litghost> nats: So that rebased version still generates the same hash?
<litghost> If so I'll merge it
Foo__ has joined #symbiflow
Foo__ has quit [Client Quit]
<nats`> let me rerun once
<nats`> only 30 minutes left
<nats`> but yes
<litghost> okay, sounds good
<nats`> last time I tried it was ok
<nats`> but I prefer to retest :D
<litghost> good by me
<litghost> Good PR, thanks for the investigation!
<nats`> thanks :)
<nats`> I'll try to focus on 074 now
<nats`> and also thanks for the help refreshing my python and git knowledge ;)
powerbit has joined #symbiflow
<nats`> litghost, maybe you can close https://github.com/SymbiFlow/prjxray/issues/465
<tpb> Title: High RAM usage in fuzzer 072-ordered-wires · Issue #465 · SymbiFlow/prjxray · GitHub (at github.com)
<mithro> nats`: How much RAM does it use now?
<tpb> Title: All output products should go into "build" dir · Issue #171 · SymbiFlow/prjxray · GitHub (at github.com)
<nats`> depends on how many threads you run and the blocksize, but with default parameters it eats less than 8GB and runs in 30 minutes
<nats`> wait before closing the build dirs one, I need to test that for 074
<nats`> mithro I just checked it tops at 4x1.6GB
<litghost> mithro: Not only does the new code use less memory, it runs ~3x faster
<nats`> so we could maybe use a longer blocksize
<nats`> making an even faster build
<litghost> nats: I would focus on fixing 074 before tuning it further
<nats`> sure
<litghost> the current memory requirements are excessive and are the higher priority
<nats`> with tests we could infer an optimum and then automagically parametrize it
<nats`> yep will take a look
<nats`> can you run a full 07x Fuzzer build on a server ?
<nats`> to see if the whole PR is coherent ?
<nats`> in the meantime I'll see what I can do for 074
<nats`> litghost, 074 will take longer to fix, the tcl is way more complicated; I'll try to have that done on sunday (I'm not available saturday)
<nats`> (basically this evening and sunday I can work on it)
<litghost> No rush, this issue has been long standing.
<litghost> Thanks for the work
<nats`> np :)
temp_ has joined #symbiflow
temp_ has left #symbiflow [#symbiflow]
<nats`> litghost, reverified sha1sum is ok with the pushed version :)
<nats`> so now it's sure
<nats`> I mean 101% sure
<nats`> my first conclusion is the same: apparently it's get_nodes again which is leaking ram
<nats`> at almost the same rate
<nats`> do you know how long it takes to run the 074 initially ?
tpb has quit [Remote host closed the connection]
tpb has joined #symbiflow
<litghost> long time
<litghost> I agree that get_nodes is the likely culprit again
<litghost> there is more happening in that tcl though
<litghost> might be worth breaking up the sites from the tiles
<litghost> or something along those lines
<litghost> there are some straightforward split points
<nats`> yep, that's what I'm looking at :)