<cr1901_modern>
Does a logic analyzer streaming applet (like lafx2fw) exist yet?
<whitequark>
electronic_eel: not really, it's just fun
<lukego>
okay let's see if we can make this oscillator oscillate
<lukego>
going to check for shorts with 1V2 and perhaps desolder the nearby capacitors and check them with LCR meter
<lukego>
I did a continuity check across C8, C11, C12 but nothing there. also between 1V2 test point and each terminal of Y1.
<lukego>
nothing (while powered off)
<lukego>
power draw is 10mA today versus 18mA yesterday. that bothers me.
<lukego>
time to read the datasheet for Y1
<d1b2>
<OmniTechnoMancer> Is there continuity with suitable pins of U1?
<d1b2>
<OmniTechnoMancer> It is a crystal so will require excitation yes?
<tnt>
lukego: the two load caps of the crystal should be the same. They are obviously not, so at least one of them is wrong, no question there.
<tnt>
If you have enough spares ... save some time and replace both.
<lukego>
I removed C11 and the LCR doesn't give any reading from it
<lukego>
tnt: ok, C11 and C12 right?
<tnt>
Can't find your pic again ... so not sure.
<tnt>
the ones attached to the xtal.
<lukego>
yeah, from kicad I'd say it's those, "obviously" but with n00b random error probability
<lukego>
replacing
<lukego>
(oh how I long to move to a bigger desk where I don't have to keep making space for stuff like my little fishtank vacuum sucker all the time)
<tnt>
they should be in the pF range, 10-20 or so.
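tnt's sizing advice follows from how the two load caps appear to the crystal: in series with each other, plus whatever stray capacitance the board adds. A small sketch of that relation (the 5 pF stray value is a typical guess, not taken from the Glasgow schematic):

```python
# Effective load capacitance seen by a crystal with load caps C1 and C2
# to ground, plus stray board/pin capacitance. All values in pF.
# (Illustrative numbers only -- not from the Glasgow schematic.)
def crystal_load(c1_pf: float, c2_pf: float, stray_pf: float = 5.0) -> float:
    """Series combination of the two load caps, plus stray capacitance."""
    return (c1_pf * c2_pf) / (c1_pf + c2_pf) + stray_pf

# Two matched 18 pF caps with ~5 pF stray give about 14 pF of load:
print(round(crystal_load(18, 18), 1))  # prints 14.0
```

If the caps are mismatched (or one is open, as suspected here), the effective load shifts away from the crystal's specified C_L and the oscillator may run off-frequency or not start at all.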
<lukego>
broke out some fresh ones. they are white like the other one was. LCR meter reads 18pF from the new one. so maybe they were both mismatched and broken, or just the reading was thrown off by excess solder on the one I removed for test
<lukego>
okay replaced
<lukego>
(I should be keeping a log of all my faults... will try to collect notes before amnesia kicks in...)
<lukego>
powered up from bench. drawing 20mA now. checking with scope
<lukego>
okay not perfectly steady but the scope is showing a 24MHz wave now whereas it absolutely did not before. time to try USB into laptop..
<lukego>
it's nextpnr commit c8ecb8341ca766e1e7565cc2b652b63eaba67508, which is dated 2020.08.22
<d1b2>
<DX-MON> I'm going to be honest.. I would strongly suggest using the repo version of Glasgow and the latest Git versions of yosys and nextpnr
<d1b2>
<DX-MON> so much has been fixed and changed since that date, that it could be any number of reasons you're crashing
<lukego>
ok, I'll update nixpkgs to use those
<d1b2>
<DX-MON> when I say the repo version of Glasgow, latest Git master is probably the best bet
<lukego>
(or rather I'll tweak nixpkgs to pick up local git checkouts)
<d1b2>
<DX-MON> okie
<lukego>
hey I've been working from whitequark/glasgow#master instead of GlasgowEmbedded/glasgow#master all this time
<lukego>
revC1 rather
<electronic_eel>
I think the repo is forwarded, but the code in the revC1 branch is about a year old - you should definitely use the code in master instead
<lukego>
thanks good to know. building latest nextpnr now.
<whitequark>
lukego: hm that nextpnr crash might be handy to report
<whitequark>
with the "bad" version, can you run `glasgow build -t archive --rev C1 <whatever you were running>`?
<d1b2>
<daveshah> As for the nextpnr issue mentioned earlier, I don't have a particular memory of a segfault bug being fixed for ice40 in the last few months
<d1b2>
<daveshah> in the off-chance you still have the json and pcf floating around, it would be nice to check there isn't any latent defect
<lukego>
I'll check if I can reproduce the original issue with nextpnr crash
<d1b2>
<daveshah> In that backtrace it looks like build_top.sh is failing but I can't obviously see why (afaik UnusedElaboratable is just a warning and not the error)
<d1b2>
<daveshah> maybe there is still some nextpnr issue going on?
<lukego>
I didn't update nmigen, does that also need to stay latest?
<lukego>
I can reproduce that other error but I don't know how to find top.json and top.pcf, they seem to be cleaned up automatically somehow
<lukego>
I'll try fpga-toolchain but guessing it won't work on nixos. maybe I can spin up an ubuntu in docker as plan B.
<d1b2>
<OmniTechnoMancer> wq gave you a command to run for the previous nextpnr crash?
<d1b2>
<OmniTechnoMancer> > with the "bad" version, can you run glasgow build -t archive --rev C1 <whatever you were running>?
<lukego>
or is there any distro that has glasgow of suitable vintage already packaged? that I could pull from dockerhub?
<whitequark>
'suitable vintage'?
<whitequark>
in general, glasgow is expected to be installed as an editable package from the git repo and regularly updated
<whitequark>
(you don't *have* to--the interfaces it relies on are reasonably stable--but that's the happy path workflow)
<d1b2>
<daveshah> Hmm, nextpnr seems fine here
<lukego>
I'm just trying to test the hardware here, I'm not at the stage of actually using it yet
<d1b2>
<daveshah> I can't accurately reproduce the failure without the exact JSON though
<lukego>
so I'm looking for the simplest path with the fewest software problems
<whitequark>
ok well, if you can use pip, then yowasp is probably a good choice
<lukego>
ok, I'll give that a shot in an ubuntu container
<whitequark>
(does nixos really not let you use pip at all?..)
<lukego>
I'm not sure. Generally it has its own solution to that problem, and generally it leads to headaches trying to ignore that and do what other distros do.
<lukego>
NixOS is basically about making everything ten times harder to do, but only having to do them once...
<russss>
given that homebrew just forcibly upgraded my python installation while I wasn't looking, I kind of appreciate that approach
<russss>
you should be able to use a virtualenv though, but it might still be more faff than a container.
<hl>
pip will probably work if you install stuff at user level, however it is not really in the spirit of NixOS
<d1b2>
<DX-MON> as a software engineer and having had to deal with the "system full of old libraries and don't you dare change the core ones or it will mess up your day", I'd find that stifling and frustrating
<russss>
fun thing is that apparently most packages haven't released macos wheels for python 3.9 yet, so I tried to install scipy again and it gave me an inscrutable Fortran compile error.
<lukego>
yeah. also it only took me a few seconds to fire up an ubuntu userspace which is now happily installing stuff from pip, and which will leave no trace on my system to trip me up a month from now when I want to use pip again
<d1b2>
<DX-MON> esp when the approach involves reinventing the entire package ecosystem wheel for a language all over
<electronic_eel>
installing and running glasgow only at user level should be enough, no need for sudo or anything like that. only copy the udev-file over and all the necessary user level rights should be there
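The udev file electronic_eel mentions grants device access to non-root users. A rule of roughly this shape is what such files contain (the VID/PID below are placeholders — copy the actual rules file shipped in the Glasgow repo rather than this sketch):

```
# Illustrative udev rule: tag the device for user access (uaccess) so no
# sudo is needed. idVendor/idProduct values here are placeholders.
SUBSYSTEM=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="xxxx", MODE="0660", TAG+="uaccess"
```

After copying the file into /etc/udev/rules.d/ the rules are reloaded with `udevadm control --reload` and take effect on the next replug.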
<lukego>
whitequark: I'm doing the incantations for yowasp above but "glasgow: command not found"
<lukego>
I did sudo instead of the udev trick because such tricks practically never work on nixos and I reflexively work around them...
<lukego>
pip3 incantation to install glasgow also needed?
<electronic_eel>
nixos respects neither the udev uaccess tag nor the plugdev group? that would be wildly user hostile
<lukego>
"stifling and frustrating" -- I thought you said you weren't already familiar with nixos? ;-)
<lukego>
okay I'll try the udev thing if it matters. didn't seem worth the cycles because it takes 6 milliseconds to write "sudo" :)
<electronic_eel>
no, it matters. because when you do sudo, glasgow is run as root and the programs must be installed as root and so on. without sudo you run it as user and the programs are installed as user
<lukego>
but my immediate problem seems to be that the pip3 incantation didn't give me glasgow inside my container. I'll try checking it out and running it locally from in there.
<lukego>
ok
<d1b2>
<Attie> @lukego - please just try to ignore as much of nixos as possible... either chuck it, or get a blank ubuntu (e.g) container, and pass usb through to it correctly
<d1b2>
<Attie> get into bash, and use it like "normal" system
<lukego>
yes I am in an ubuntu container, and I have yowasp-yosys in there, the problem is I don't have glasgow there yet
<lukego>
but I'll grab that from source since there doesn't seem to be a pip3 install option (?)
<d1b2>
<Attie> once you're in a bash in a blank ubuntu container, follow the steps as documented in README.md
<d1b2>
<DX-MON> > "stifling and frustrating" -- I thought you said you weren't already familiar with nixos? ;- @lukego I was commenting on what you were saying and reflecting on my own usage and the typical user usage.. XD
<lukego>
electronic_eel: problem number two of course is that now I'm in a container and there doesn't *exist* any user except root :)
<d1b2>
<Attie> @lukego - in this case, sudo is not required
<lukego>
there's a certain irony here. On the one hand NixOS strictness makes it very frustrating to deal with larger interdependent software ecosystems like yosys/nextpnr/nmigen/glasgow. On the other hand NixOS reproducibility is the only thing that makes me willing to deal with those ecosystems in the first place :)
<d1b2>
<Attie> I suspect as you start working with hardware more, you'll become increasingly frustrated with nixos...
<d1b2>
<Attie> ....... let me put together a docker image that works for me
<lukego>
thanks, no, it's okay
<lukego>
I'll flip burgers before I try to do this stuff without nix
<lukego>
I'm installing glasgow from source in my ubuntu docker now and it's running privileged with usb device visible
<d1b2>
<DX-MON> I'll be honest.. because of the awesome work by people like wq, and the choice to use Python.. even on Arch which is a rolling release distro, the bar to entry in this ecosystem is super low using pip and that whole bit
<lukego>
(I don't mean to drag you all down the nix frustration hole with me, I can use docker setups for this stuff and keep my trials to myself on that score in future :))
<d1b2>
<DX-MON> and likewise the bar to entry in Arch because of PKGBUILD is also super low
<lukego>
good to know, maybe I'll use an arch container next time
<lukego>
Glasgow is a bit special since it's a tool of its own and won't be part of my build, should be fine to run it out of docker etc.
<d1b2>
<DX-MON> can you try pins-ext too (glasgow run selftest pins-ext)
<hl>
lukego: I just did nix-shell -I nixpkgs=$X -p glasgow, glasgow test selftest, with $X being nixpkgs master just pulled. Works fine, no segfault.
<d1b2>
<DX-MON> so that's telling you the board has shorts between those given pins
<lukego>
nice diagnostics :-)
<d1b2>
<DX-MON> well, somewhere between the package and the expansion connector
<d1b2>
<DX-MON> it could be on something like the pull resistors
<d1b2>
<DX-MON> pins-pull I think was the test for testing pullups
<lukego>
maybe also bridging on the connector? I did put a lot of solder on those and it could have gone through the holes and bridged under the plastic
<d1b2>
<DX-MON> it's possible.. a multimeter will help tell all
<lukego>
what's the test strategy with the multimeter?
<d1b2>
<DX-MON> that gives you a nice set of pin pairs to go check and rework
<d1b2>
<DX-MON> well, a multimeter will help you determine what kind of short you have
<tnt>
Aren't pins-ext and pins-pull the unreliable ones?
<d1b2>
<DX-MON> if you get hard continuity between the pair of pins it's talking about, then it found an actual short
<tnt>
Does pins-int pass ?
<d1b2>
<DX-MON> if you get hard continuity between the pair of pins it's talking about on the IO expander side of the pull resistors, then you've isolated the pins-pull faults
<d1b2>
<DX-MON> if not then you might just be seeing flux conduction
<d1b2>
<Attie> yeah, pins-ext isn't 100% reliable... I had a board with high resistance between pins (MOhm), failed pins-ext, but should be fine for general use
<d1b2>
<DX-MON> hence testing with a multimeter to figure out what exactly is going on
<d1b2>
<Attie> tip
<d1b2>
<Attie> *yip
<lukego>
thanks for all the help as always! I have to run now but I know what I have to do next
<whitequark>
lukego: selftest pins-ext is corrently just broken
<whitequark>
*currently
<whitequark>
i really should fix it. anyway, your board is almost certainly fine
<d1b2>
<Attie> @whitequark - any interest in a functional Dockerfile for glasgow?
<Lofty>
Can USB devices even be passed into Docker containers?
<d1b2>
<Attie> yeah they can
<d1b2>
<Attie> but it's not particularly pretty
<d1b2>
<Attie> ... and you need to pass sysfs through to get hotplug working
<sorear>
if you're trying to use docker "as intended" that's one thing but what if you're just going for sparkling chroot + bind mount?
<whitequark>
Attie: would that really have a lot of utility?
<whitequark>
it seems that whatever problem it is solving would be solved better in some other way
<whitequark>
e.g. i'm reasonably certain that virtualenvs still work on nixos, and you can still use pip in virtualenvs there
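The venv-plus-pip workflow whitequark describes needs nothing beyond the standard library to bootstrap; a minimal sketch (the yowasp package names in the comment are assumptions — check PyPI for the exact names):

```python
# Create a virtualenv with its own pip, the starting point for the
# "install glasgow/yowasp into a venv" workflow discussed above.
import pathlib
import subprocess
import tempfile
import venv

# Build the venv in a temp dir; with_pip=True bootstraps pip inside it.
env_dir = pathlib.Path(tempfile.mkdtemp()) / "env"
venv.EnvBuilder(with_pip=True).create(str(env_dir))

# The venv has its own interpreter and pip, isolated from the system:
pip = env_dir / "bin" / "pip"
subprocess.run([str(pip), "--version"], check=True)

# From here, something like `pip install yowasp-yosys yowasp-nextpnr-ice40`
# (assumed package names) would install the toolchain into env_dir only.
```

Because everything lands under `env_dir`, nothing touches system site-packages, which is exactly the property that makes this palatable on a locked-down distro.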
<russss>
I know there are some people who are very keen on running everything in docker. In theory it's a nice idea when you have something with tons of dependencies.
<russss>
but when you start dealing with non-file USB devices it does get annoying
<d1b2>
<Attie> it'd be a "grab from shelf and use" type solution... but i grant you it would potentially cause headaches when trying to get stuff in/out
<whitequark>
glasgow is going to extreme lengths to avoid having dependencies
<sorear>
call this a hunch but I don't think virtualenv will do anything about nextpnr segfaulting
<whitequark>
it's just nmigen, libusb, pyvcd, jinja2 (transitively), and bitarray but i'll ditch that
<whitequark>
sorear: sure it will, if you use yowasp
<whitequark>
being able to virtualenv nextpnr is the *whole goddamn point* of yowasp
<whitequark>
ok well maybe half of the point
<d1b2>
<Attie> i do like the "archivability" of docker containers though... and sometimes put toolchains into them for that exact reason
<d1b2>
<Attie> i'm not suggesting (or even remotely wanting) for the docker container to be the primary method of use
<whitequark>
but... you can do this with virtualenv
<d1b2>
<Attie> oh?
<whitequark>
either `pip freeze`, or even zip the entire thing
<d1b2>
<Attie> oh i see, yes
<d1b2>
<Attie> true
<whitequark>
virtualenv has the advantages of docker without the disadvantages of docker. why bother with docker?
<d1b2>
<Attie> fair enough 🙂
<d1b2>
<Attie> i'm not super fond of pip freeze for archive purposes, but I grant you "zip the entire thing" probably does the job
<whitequark>
Attie: yeah, it's fair, especially if you have editable/git dependencies
<whitequark>
Attie: hm, so virtualenvs are not relocatable. I guess what you want is achieved with a combination of pip freeze + pip download
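The `pip freeze` + `pip download` combination pins exact versions and archives the matching artifacts; a sketch of the freeze half, with the download/restore half noted in comments since it needs network access:

```python
# Snapshot the exact versions installed in the current interpreter's
# environment -- the "pip freeze" half of freeze + download.
import subprocess
import sys

frozen = subprocess.run(
    [sys.executable, "-m", "pip", "freeze"],
    capture_output=True, text=True, check=True,
).stdout
with open("requirements.txt", "w") as f:
    f.write(frozen)

# To make the snapshot self-contained, pair it with `pip download`,
# which fetches matching wheels/sdists into a local directory:
#   pip download -r requirements.txt -d wheelhouse
# and restore later without touching the index:
#   pip install --no-index --find-links wheelhouse -r requirements.txt
```

Unlike zipping the venv itself, the wheelhouse approach sidesteps the relocatability problem entirely: the environment is rebuilt in place rather than moved.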
<whitequark>
there is also `virtualenv --relocatable`, no idea how broken that is
<russss>
last time I used that, it was very cursed
<whitequark>
--relocatable?
<russss>
yeah
<d1b2>
<Attie> this is true - never relocate a venv (or at least put it back exactly where it was)
<russss>
I think the problems I had were related to compiled extensions though
<d1b2>
<Attie> seems to work well enough / I'll add a README
<whitequark>
tbh, there *is* one good reason to use Dockerfiles
<whitequark>
it is to ensure our installation instructions keep working
<d1b2>
<Attie> oh, @whitequark - i think i mentioned somewhere else and it got lost... do you have the 'glasgow' project on pypi / should it be grabbed?
<whitequark>
i don't, and i should grab it and put a placeholder there
<whitequark>
eventually i think every git commit that passes CI should be pushed to pypi
<d1b2>
<Attie> +1
<whitequark>
i'll set that up for nmigen-boards first
<whitequark>
then for glasgow
<d1b2>
<Attie> I also noticed an issue with running python setup.py develop... related to nmigen... iirc nmigen needed to be installed first / explicitly
<whitequark>
this will go away once nmigen 0.3 is released
<d1b2>
<Attie> great
<d1b2>
<DX-MON> so, I can confirm that virtualenv relocation has actually been fixed, so that does work nowadays
<d1b2>
<DX-MON> additionally, it's not actually that hard to edit the files in <env>/bin (they're mostly all shell scripts and Python scripts.. and those are the bits that matter, not the binaries) to use relative paths so they are also relocatable
<d1b2>
<DX-MON> so pip freeze'ing a made-relocatable venv is absolutely a thing
<d1b2>
<DX-MON> it must be a Python 3.x (>= 3.7) venv for it to work properly, but I doubt that's a problem any more
<whitequark>
hum
<electronic_eel>
so how would that relocatable virtualenv thing work for glasgow? we distribute a zipped up virtualenv thing with everything in it on github and the users should download, unpack and run it? or would the users create a virtualenv themselves on their machines, with the benefit that the virtualenv shields them from some dependency hassle on their machines?
<whitequark>
latter, at least if you ask me
<d1b2>
<DX-MON> technically the former would work.. most of the time.. however, I have to agree with whitequark here that because distros can have quite different available versions of shared objects and Python 3.. I'd err on the latter
<d1b2>
<DX-MON> it's the less tech support headachy way
<whitequark>
the thing is that the only extension we *should* depend on is libusb (and perhaps wasmtime) which are manylinux anyways
<whitequark>
we *do* depend on bitarray but once again, we should just ban bitarray from all of our code
<d1b2>
<DX-MON> the problem with virtual env is that it copies system python into <env>/bin and it's what that can depend on that would be the tech support headache