<GitHub175>
[smoltcp] podhrmic commented on issue #53: Closing for now, will reopen with more complete feature set. https://git.io/vdiij
<GitHub106>
[smoltcp] podhrmic opened issue #56: rustfmt.toml for smoltcp https://git.io/vdiX0
<GitHub111>
[smoltcp] whitequark commented on issue #56: Sure. Do you think you could play with rustfmt's options to get an output very close to the current one? Then I'll merge that and reformat the rest. https://git.io/vdiMV
<sb0>
rjo, I just built it locally last time (for the 3.0 release)
<sb0>
(it == misoc)
<GitHub19>
[migen] sbourdeauducq tagged 0.6.dev at fce4a41: https://git.io/vdiy8
<sb0>
_florent_, so for artix7 you are just setting some fixed bitslip values? how did you determine them? what about setup/hold timing?
<sb0>
I have the impression that your design is just working by chance, and the proper way of fixing this would be to shift the clock that goes to the SDRAM
<sb0>
then do write leveling first (of course this only works if each SDRAM chip has its own clock), then read leveling
<sb0>
is it possible to do glitch-free phase shifts on artix7 plls or did xilinx fuck that up, too?
rohitksingh_work has joined #m-labs
sb0 has quit [Quit: Leaving]
rohitksingh_work has quit [Ping timeout: 248 seconds]
rohitksingh_work has joined #m-labs
rohitksingh_work has quit [Ping timeout: 240 seconds]
<_florent_>
sb0: and I use values in the middle of the window
<_florent_>
sb0: so yes we are using static delay, i've been using that on 4 different boards (arty, nexys video + 2 custom boards) and it seems to be working fine
<rjo>
sb0: because the buildbot was broken already then?
rqou has quit [Ping timeout: 246 seconds]
rqou has joined #m-labs
<GitHub13>
[smoltcp] phil-opp commented on issue #49: > @phil-opp Do you think you can take the abovementioned outline and implement this yourself? I'm fairly busy right now.... https://git.io/vdPUR
<GitHub75>
[smoltcp] batonius commented on issue #49: Should we coordinate this effort with #55? It would be great if the new `Device` trait or the new `Interface` struct could be used as trait objects. https://git.io/vdPC5
sb0 has joined #m-labs
<sb0>
rjo, yes, it was broken
<GitHub55>
[smoltcp] whitequark commented on issue #49: @batonius I disagree with your approach on #55, but haven't had the time to write that down yet. In general I am quite opposed to trait objects in smoltcp; you can notice that in the existing code... https://git.io/vdPlS
<sb0>
_florent_, why not calibrate it in firmware?
<sb0>
like the BIOS does write leveling on kintex
rohitksingh_work has quit [Read error: Connection reset by peer]
<GitHub159>
[smoltcp] batonius commented on issue #49: @whitequark Sure, I just think we should keep the issues in sync. https://git.io/vdP2v
<rjo>
sb0: how was it fixed?
<sb0>
it was not. I just built it locally
<rjo>
sb0: doesn't that go against the idea of having the buildbot?
rohitksingh has joined #m-labs
<sb0>
as long as conda brims with bugs, it's either that or wasting half a day on simple operations like this
<whitequark>
once I get done with this DMA emergency I can spawn images on demand
<whitequark>
can update the buildbot to*
<whitequark>
that will take care of conda bullshit
<rjo>
emergency?
<rjo>
srtio is not even in master.
<rjo>
sb0: that's a bad choice. you are just hoping that others will take care of it.
<rjo>
sb0: and the complaining does not change one iota.
<sb0>
rjo, yes. we need money that is independent from the sayma mess.
<sb0>
rjo, and yes, someone else should take care of it. I've said many times that we need a yak-shaver. besides, I've already fixed a number of conda problems.
<whitequark>
the sort of person who can effectively track down contrived problems in conda is typically not employed as a conda yak-shaver. that's the whole tragedy of it.
<whitequark>
it should have just been designed decently instead
<rjo>
sb0: that's not a workable attitude. saying something doesn't make it happen, in the same way that whining doesn't change much. every one of us has fixed numerous conda issues and wrestled the buildbot. let's set the priorities straight: let's get the buildbot working and look at sayma again before we return to srtio/dma interactions.
<sb0>
rjo, disagreed.
<rjo>
rewriting conda may be a nice idea for next year.
<rjo>
but definitely not now.
<rjo>
employing a personal slave is also not something we can wait for.
<sb0>
just build misoc locally for now (it is the only package affected) and move forward
<rjo>
fwiw this smells more like a bad interaction of buildbot with conda or divergence of the buildbot slaves or hidden and broken assumptions between buildbot and conda. not even an intrinsic conda problem. just snapping and whining about conda if something goes wrong might be very premature.
<sb0>
well, artiq wasn't affected before. I built 3.0 and every other package on the buildbot
<sb0>
_after_ misoc broke
<rjo>
doesn't matter
<whitequark>
no, both slaves failed misoc builds
<whitequark>
and artiq-board builds
<sb0>
also, the only reason srtio isn't in master is because I want to avoid breaking and delaying further SAWG on Sayma
<whitequark>
not migen builds, so this really seems specific to conda workdir code
<sb0>
and the DMA issue
<rjo>
sb0: i appreciate that
<rjo>
whitequark: i remember a similar situation a while back. and you had a trick to unbreak it. but i can't find it.
<whitequark>
rjo: I have no memory of that event
<whitequark>
(that doesn't mean it didn't happen)
<whitequark>
sigh. let me try something
<sb0>
rjo, was it the one that can be worked around by triggering a build with --branch?
<rjo>
whitequark: forcing the branch maybe?
<rjo>
yes. rings a bell.
<whitequark>
bb-m-labs: force build misoc --branch master
<bb-m-labs>
build forced [ETA 3m23s]
<bb-m-labs>
I'll give a shout when the build finishes
<rjo>
could also be unrelated... but the symptoms seemed familiar
<sb0>
the workaround was to trigger a build with another branch, then restart the original build. idk if it works by simply adding --branch
<rjo>
looks good. thanks.
<rjo>
fwiw this is a conda-buildbot interaction issue.
<whitequark>
bb-m-labs: force build --branch master --props=package=artiq-kc705-nist_clock artiq-board
<bb-m-labs>
build forced [ETA 16m06s]
<bb-m-labs>
I'll give a shout when the build finishes
<rjo>
whitequark: while i have you on ansible... quick questions: (a) do you always write the recipes to be strictly idempotent? (b) how do you handle secrets (ssh keys etc) (c) do you develop the recipes with that idempotency and trial-and-error?
<whitequark>
(a) yes (b) - vars_files: [vars/private.yml]; echo vars/private_yml >>.gitignore (c) i don't understand the question (d) by using --check (dry runs)
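The secrets approach in (b) can be sketched as a minimal playbook; the `vars/private.yml` path comes from the answer above, while the variable name and the task using it are hypothetical illustrations:

```yaml
# site.yml -- secrets live in an untracked vars file.
# Keep it out of the repo: echo vars/private.yml >> .gitignore
- hosts: all
  vars_files:
    - vars/private.yml        # e.g. defines deploy_ssh_private_key
  tasks:
    - name: install deploy key (variable and path are illustrative)
      copy:
        content: "{{ deploy_ssh_private_key }}"
        dest: /root/.ssh/id_ed25519
        mode: "0600"
```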
<whitequark>
i wanted to reorganize my setup to use roles for a while so i'll publish some of what i consider high-quality roles soon
<whitequark>
3rd party roles are mostly crap
<whitequark>
i don't even know why people share them
<whitequark>
oh and anything targeting ansible prior to 2.4 is probably also crap because ansible only grew a functional dependency/abstraction system at 2.4
<rjo>
crap because they are not generic enough?
<whitequark>
the opposite
<whitequark>
they're too generic in a way that creates tons of edge cases which they do not handle
<whitequark>
i typically write a recipe that's only parameterized by names and such. rarely, one parameter, or some loops
<rjo>
yes. not handling self-inflicted edge cases is what i meant.
<whitequark>
so there isn't any combinatorial explosion
<rjo>
then you can throw things like distribution packages overboard.
<rjo>
on (c): if you e.g. tweak your config for something, do you iterate over the edit/ansible-execute/test cycle?
<whitequark>
(continuing) if there *is* combinatorics then i handle that with separate roles and dependencies
<rjo>
or whatever the ansible command is to do something
<whitequark>
much more robust ime
<whitequark>
ansible-playbook since the ansible command doesn't handle roles
<whitequark>
with a trivial playbook
<whitequark>
oh and
<whitequark>
i do two more things.
<whitequark>
the first is my recipes are not just idempotent, they do set_facts: ... cacheable=yes for things like "checking if nginx is installed"
<whitequark>
so that i never wait centuries on a high-latency link when i just want one config file updated
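The cacheable-fact pattern described here might look like the following sketch (the task names and the nginx probe are illustrative; note that cacheable facts also need fact caching enabled in `ansible.cfg`, e.g. a `jsonfile` backend):

```yaml
# Probe once, cache the answer, skip the probe on later runs.
- name: check whether nginx is installed
  command: dpkg -s nginx
  register: nginx_check
  changed_when: false
  failed_when: false
  when: nginx_installed is not defined

- name: cache the result across playbook runs
  set_fact:
    nginx_installed: "{{ nginx_check.rc == 0 }}"
    cacheable: yes
  when: nginx_installed is not defined
```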
<whitequark>
the second is the tagging system
<whitequark>
tags are really a weak point, so i work around that by manually adding a combinatorial... well it's not a detonation, let's call it a combinatorial conflagration of tags
<whitequark>
e.g. nginx (only nginx), nginx-site (all nginx sites), whitequark.org (everything that backs up whitequark.org), nginx-site:whitequark.org (intersection of nginx-site and whitequark.org) etc
<whitequark>
this lets me do very finely grained config updates and a rapid modify-execute-apply cycle
<whitequark>
erm, modify-execute-test
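The tag scheme above might be sketched like this (the site name is taken from the example; how colons in tag names behave may depend on the Ansible version, so treat this purely as illustrative):

```yaml
- name: deploy nginx site config for whitequark.org
  template:
    src: nginx-site.conf.j2
    dest: /etc/nginx/sites-available/whitequark.org
  tags:
    - nginx                        # only nginx
    - nginx-site                   # all nginx sites
    - whitequark.org               # everything backing whitequark.org
    - nginx-site:whitequark.org    # the intersection
# then e.g.: ansible-playbook site.yml --tags nginx-site:whitequark.org
```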
<rjo>
ok. with the cacheable thing you have shortcuts in the playbooks to make the iteration quicker. and similarly with tags?
<whitequark>
yes.
<whitequark>
otherwise i found it unbearable. just getting distracted all the time with 10s of seconds of wait time per cycle
<whitequark>
of course you could just check that out on the lab.m-labs.hk but i don't use vim
<rjo>
i see.
<rjo>
but if i e.g. have a working "manual" nginx setup. how do i re-implement that in ansible? step by step replacement? shut everything down for a day and then rebuild?
<whitequark>
first, go through it manually and refactor it
<whitequark>
so that all parts that can be reasonably shared, are
<whitequark>
then do a step by step replacement until ansible reports no (significant?) changes
<whitequark>
then run it
<whitequark>
then run it on a fresh new VM to find where you had bad assumptions and missing dependency edges
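A minimal sketch of that replacement loop, assuming a hand-managed nginx config being ported; all paths and host names are hypothetical:

```yaml
# nginx.yml -- iterate until check mode reports no changes:
#   ansible-playbook nginx.yml --check --diff
#   ansible-playbook nginx.yml
#   ansible-playbook -i fresh-vm, nginx.yml   # surface missing deps
- hosts: web
  tasks:
    - name: mirror the existing hand-written config
      copy:
        src: files/nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: reload nginx
  handlers:
    - name: reload nginx
      service:
        name: nginx
        state: reloaded
```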
<rjo>
ok. apart from your code, any recommendations where to steal from?
<whitequark>
i did that while migrating whitequark.org and it took me a total of something like three days. dovecot, postfix, roundcube, nsd, tarsnap
<whitequark>
hm
<whitequark>
i have not found any ansible playbooks or roles that i found even minimally acceptable
<whitequark>
the only thing i stole were workarounds for ansible language deficiencies and even that was mostly the bugtracker
<whitequark>
i would suggest thinking about it as any other programming language and applying the same techniques. and read the module documentation.
<rjo>
do you use it for your laptop as well?
<whitequark>
not currently but I should do this before the US visit
<rjo>
hahaha.
<whitequark>
so that I can wipe and reimage at will
<whitequark>
one of the last things blocking that, really...
<whitequark>
rjo: being able to set up a config on your laptop identical to the config on your server is *very* valuable
<whitequark>
far more valuable than it first seems
<whitequark>
insta-staging
<whitequark>
also write a task for synchronizing the database/files from production, worth every second doing it
<whitequark>
speaking of "wipe and reimage" that'll also be what i'll do if anyone hacks into whitequark.org or i lose the ssh key i have here
<whitequark>
no other way to access it except via that key
<whitequark>
also a good idea to have a task for restoring from backup fwiw
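Such sync/restore tasks could be as small as the following sketch; the host, paths, and archive name are hypothetical examples, and tarsnap is only mentioned because it appears earlier in the discussion:

```yaml
- name: pull site files from production (host and paths illustrative)
  command: rsync -a production.example.org:/var/www/ /var/www/

- name: restore from backup (archive name is illustrative)
  command: tarsnap -x -f latest -C /
```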
<rjo>
how far down the setup do you go? what's the initial state? just a default/empty debian installation? do you set the hostname/gen the ssh key?
<whitequark>
empty debian installation, which i strip even further down (i don't want systemd or acpi or exim4 or tty2-6 getty's on my server)
<whitequark>
hostname and key is set by digitalocean
<whitequark>
there's a task for creating a digitalocean machine too
<whitequark>
i also remove systemd, install sysvinit and reboot, in an automated way
<whitequark>
if digitalocean didn't set hostname and authorized_keys for me i'd do that with ansible, too
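The strip-down steps described above might look like this on Debian (swapping init systems on a live machine is delicate, so treat this purely as a sketch; the `reboot` module also only exists in newer Ansible versions, so older ones would need a shell task plus `wait_for_connection`):

```yaml
- name: switch init to sysvinit (displaces systemd-sysv)
  apt:
    name: sysvinit-core
    state: present

- name: purge the remaining systemd packages
  apt:
    name: systemd
    state: absent
    purge: yes

- name: reboot into the new init
  reboot:
```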
<rjo>
if you're worried about the gettys, do you build and distribute your own kernel?
<whitequark>
of course not
<whitequark>
i just think they clutter up `top`
<whitequark>
(actually could kill that on tty1 too, looking as how I don't have passwords anyway)
<rjo>
and then you run unattended-upgrades, i guess (by the way, that is not really working on lab.m-labs.hk), and if that collides with your playbooks, what then?
<rjo>
also isn't upgrading the debian release a mess? how do you handle that?
<whitequark>
unattended-upgrades: yes, it doesn't work for some reason. doesn't work on whitequark.org either. i must be screwing up the config in the same way, but i never actually figured out how.
<whitequark>
it doesn't screw up my servers because they're all on stable+stable-backports
<rjo>
"it" being release upgrades?
<whitequark>
no, unattended-upgrades
<whitequark>
i do release upgrades by hand because i literally need it once per two years
<whitequark>
went through the last one on whitequark.org and it was almost completely painless
<rjo>
so u-a does not work on whitequark.org but it is stable+?
<rjo>
but the procedure for a release upgrade is that you do the upgrade and then re-run your playbooks and fix them?
<whitequark>
i removed a few default_release options, removed specific versions from mysql and postgresql packages, manually migrated the postgresql cluster, updated php to 7.0, and handled a breaking change in dovecot-sieve
<whitequark>
that was all
<whitequark>
took me all of like 30 minutes, including updating playbooks
<whitequark>
and not including the DB migration, with corresponding downtime of several hrs
<whitequark>
rjo: u-a updates packages as well as distros. i configured it to only update packages. it doesn't do anything.
<whitequark>
release upgrade procedure: yes.
<whitequark>
removing references to jessie-backports was trivial, php/mysql/postgres manifested as apt errors, sieve change manifested as getting several hundred emails in my inbox and a lot of headscratching
<whitequark>
i blame that on dovecot people
<rjo>
if you were to migrate e.g. a postgresql setup from one host to another, do you write a playbook to do that or do you just do the setup with the playbook and the data migration manually?
<whitequark>
the latter. it is very likely to go wrong in ways that aren't worth automating around
<whitequark>
it did, in fact, go wrong, and i had to restart it a few times.
<rjo>
bb-m-labs: force build --branch=rtio-sed artiq
<bb-m-labs>
build forced [ETA 8h32m20s]
<bb-m-labs>
I'll give a shout when the build finishes
<rjo>
bb-m-labs: force build --branch=master artiq
<bb-m-labs>
The build has been queued, I'll give a shout when it starts
<rjo>
whitequark: ok. i think i got enough. thanks!
<GitHub141>
[smoltcp] podhrmic commented on issue #56: I'll give it a try. I think the main difference is the alignment - i.e. this is what is in smoltcp now:... https://git.io/vdPb8
<GitHub78>
[smoltcp] whitequark commented on issue #56: To be honest I find that it helps readability a lot. https://git.io/vdPb1
kristianpaul has quit [Quit: Lost terminal]
rohitksingh has quit [Ping timeout: 240 seconds]
rohitksingh has joined #m-labs
rohitksingh has quit [Read error: Connection reset by peer]
rohitksingh has joined #m-labs
rohitksingh has quit [Quit: Leaving.]
kristianpaul has joined #m-labs
<whitequark>
sb0: I have an idea
<whitequark>
about DMA
<GitHub177>
[smoltcp] podhrmic commented on issue #56: After poking around it looks like there is no setting for rustfmt that would keep the LHS => RHS alignment as you have it. Apparently it is a discouraged style for Rust. ... https://git.io/vdX32
<whitequark>
verifying currently
<GitHub177>
[smoltcp] whitequark commented on issue #56: Let's keep it, but just so that your work doesn't go to waste, please post your rustfmt config here. If we ever switch to using rustfmt I'll make use of it. https://git.io/vdX3P
<GitHub87>
[smoltcp] podhrmic commented on issue #56: OK. Place `rustfmt.toml` in the root dir of the project.... https://git.io/vdXs3
<whitequark>
nevermind, a fluke.
<whitequark>
ok, I do have some leads for tomorrow